Volatility: it's not just for sublimation any more

Multithreaded programming was a hot topic at RubyConf this year, and a common theme in many talks was the use of functional languages to prevent contention between threads. This totally makes sense to me, to the limited extent that I can wrap my head around truly functional programming, and I’m sure it’s an excellent approach. However, imagine a case in which we can’t just drop in a new language, so we need to write some multithreaded code in Ruby. I’m sure you won’t have to think too long or hard.

Now, one point several speakers at RubyConf made that I would like to reiterate is this: multithreaded programming is difficult. Gosh darn difficult. People who write software tend to thrive on determinism and linearity. After all, computers always do what we tell them to, right? They don’t make mistakes or change their minds; not like those silly, silly humans. But now big, bad concurrent programming comes along and suddenly computers can come up with different answers depending on, oh, the alignment of the planets. Chaos.

So, functional programming languages aside for the moment, what tools do we have to rein in the inevitable entropy that will, more than likely, eventually bring the planets into alignment against us?

Several years ago Andrei Alexandrescu wrote this excellent article on how to use the C++ type system (stick with me, it’s all Ruby after this paragraph) to automatically prevent race conditions at compile time. I recommend you read it; it’s quite short. Now, however you feel about static typing, you must admit that the approach he describes is a beautiful use of the expressiveness of the C++ type system. The question is, can we do something analogous in Ruby?

First off, let's start with some thread-unsafe code, similar to what Jim Weirich used in his talk on threads at RubyConf (the Account class here is a minimal stand-in for his example):

class Account
  attr_reader :balance

  def initialize
    @balance = 0
  end

  def credit(amount)
    balance = @balance           # read
    Thread.pass                  # widen the window for a context switch
    @balance = balance + amount  # write
  end
end

account = Account.new
threads = []

1000.times do |i|
  threads << Thread.new do
    account.credit(1)
  end
end

threads.each { |thread| thread.join }
puts account.balance

This code results in an account balance somewhere between 1 and 1,000. The exact value depends, of course, on the planets.

Now, to make this thread-safe. I came up with a few attempts using #alias_method and #undef_method, but didn’t find anything satisfying. After that, I figured I’d try a simple proxy to approximate the effect. Here’s a first cut:
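For concreteness, here is one shape such an attempt might take (my own reconstruction, not the actual code I tried): wrap every public method of the shared class in a mutex via #alias_method. The dissatisfying part is visible immediately: it rewrites Account itself, so every instance everywhere pays for synchronization, shared or not.

```ruby
require 'thread'

class Account
  attr_reader :balance
  def initialize; @balance = 0; end
  def credit(amount); @balance += amount; end
end

# Reopen the shared class and wrap each public method in a lock.
class Account
  LOCK = Mutex.new

  public_instance_methods(false).each do |name|
    # Keep the original implementation under an unsafe_ prefix...
    alias_method "unsafe_#{name}", name
    # ...and replace the public method with a locked wrapper.
    define_method(name) do |*args|
      LOCK.synchronize { send("unsafe_#{name}", *args) }
    end
  end
end

account = Account.new
account.credit(5)
```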

class VolatileProxy
  attr_reader :obj

  def initialize(obj)
    @obj = obj
  end

  def method_missing(method, *args)
    raise "Unsynchronized message '#{method}' sent to volatile object"
  end
end

def volatile(obj)
  VolatileProxy.new(obj)
end

def locked_scope(volatile, mutex)
  mutex.synchronize do
    yield volatile.obj
  end
end
Now, calls to methods on objects you declare as volatile (shared across threads) will fail messily unless you use them within a locked_scope. To make this work the example code now becomes:

mutex = Mutex.new
account = volatile(Account.new)
threads = []

1000.times do |i|
  threads << Thread.new do
    locked_scope(account, mutex) { |safe_account| safe_account.credit(1) }
  end
end

threads.each { |thread| thread.join }
puts account.balance

This example has some issues, namely:

  • The final call to #balance will actually fail, since it’s not in a locked_scope; it would be nice to be able to declare individual functions as volatile without too verbose a syntax.
  • It’s tempting to name the locked account object the same name as the unlocked account object, but doing so will cause them to overwrite one another (fixed in Ruby 1.9, of course, but until then…)
  • The unlocked object is easily available. Given the opportunity to circumvent the lock, someone will do something horrible.
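To make the first issue concrete, here is a self-contained sketch showing the workaround for reads: under the proxy, even fetching the balance has to go through a locked_scope.

```ruby
require 'thread'

class Account
  attr_reader :balance
  def initialize; @balance = 0; end
  def credit(amount); @balance += amount; end
end

class VolatileProxy
  attr_reader :obj
  def initialize(obj); @obj = obj; end
  def method_missing(method, *args)
    raise "Unsynchronized message '#{method}' sent to volatile object"
  end
end

def volatile(obj); VolatileProxy.new(obj); end

def locked_scope(volatile, mutex)
  mutex.synchronize { yield volatile.obj }
end

mutex = Mutex.new
account = volatile(Account.new)

locked_scope(account, mutex) { |safe| safe.credit(100) }

# account.balance would raise here, because the proxy intercepts it;
# the read has to happen inside a locked_scope as well:
balance = nil
locked_scope(account, mutex) { |safe| balance = safe.balance }
```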

I really like the idea of using blocks to scope behavior like this. This particular example doesn’t feel particularly clean to me yet, but hopefully it will give some people something to think about. If you have a better approach, please don’t be afraid to shout it out.

  • My solution, slightly more hackish:

  • Adam Milligan


    I really like the declarative #synchronized syntax for individual methods. But, a couple comments:

    1) You include the #mutex method as an instance method. This means each instance of the Synchronized class will have its own mutex, and they won’t block one another.

    2) I like the idea of not modifying the shared class, but decorating it only when it will be potentially shared. Now, I realize this is Ruby and we can modify a class at any time, so we could do this:

    a1 = Account.new
    a2 = Account.new

    class << a1
      include Synchronized
    end

    a1.credit(1) # => 1 # => synchronized
    a2.credit(1) # => 1 # => not synchronized

    But the proxy syntax seems much more pleasant (if less powerful):

    a1 = Account.new
    a2 = volatile(Account.new)

    a1.credit(1) # => 1
    a2.credit(1) # => BOOM!

    I also like the idea of the caller locking the method call explicitly, as with the #locked_scope method, which takes the synchronization object as a parameter. In some cases multiple methods on multiple objects need to be locked within the same critical section. Allowing the caller to pass a (reentrant) mutex into the locking scopes gives more control over which mutex locks what and when; this can be important for avoiding deadlocks.
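A sketch pulling both of these comments together (the Synchronized module here is my own stand-in, not the commenter's posted code): the lock lives at the module level, so every including instance shares it, and it is a Monitor from Ruby's stdlib rather than a Mutex, so nested critical sections on the same thread are reentrant instead of deadlocking.

```ruby
require 'monitor'

module Synchronized
  LOCK = Monitor.new  # one reentrant lock shared by all instances

  def synchronized(&block)
    LOCK.synchronize(&block)
  end
end

class Account
  include Synchronized
  attr_reader :balance
  def initialize(balance = 0); @balance = balance; end
  def credit(amount); synchronized { @balance += amount }; end
  def debit(amount);  synchronized { @balance -= amount }; end
end

checking = Account.new(100)
savings  = Account.new(0)

# Because the Monitor is reentrant, this outer critical section can
# nest the synchronized #debit and #credit calls without deadlock,
# making the two-object transfer atomic.
checking.synchronized do
  checking.debit(25)
  savings.credit(25)
end
```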

  • ste

    I don’t get it… why should two instances lock each other, if they’re not accessing the same piece of shared data?
    In Java, every object has its own lock, which is used when you declare a method with the “synchronized” keyword. In other words, every synchronized method of a class locks the instance it’s called on. My solution is a (very crude and probably buggy :-) implementation of the same mechanism in Ruby.
    Anyway, here’s another take, this time using an approach more similar to yours:

    (note that I’ve modified the example Account class to make it a bit more “pathologic”)
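For readers following along, the Java-style mechanism described above might look something like this in Ruby (my own illustrative stand-in, not ste's posted code): each instance lazily creates its own lock, and "synchronized" methods lock the receiving instance, so distinct instances never block one another.

```ruby
require 'thread'

module InstanceSynchronized
  def instance_lock
    # Note: this lazy initialization is itself racy, in the spirit
    # of the "very crude and probably buggy" caveat above.
    @instance_lock ||= Mutex.new
  end

  def synchronized(&block)
    instance_lock.synchronize(&block)
  end
end

class Account
  include InstanceSynchronized
  attr_reader :balance
  def initialize; @balance = 0; end
  def credit(amount); synchronized { @balance += amount }; end
end

a1 = Account.new
a2 = Account.new
a1.credit(3)
a2.credit(4)  # a different lock from a1's; the two never contend
```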

  • Adam Milligan

    I should have been more clear. I didn’t mean that two instances would lock each other, but that a caller could potentially need to lock two shared objects at once. Jim Weirich illustrated this in his talk by giving the example of moving money from one account to another. The code sends the #debit message to one account, then sends #credit to a second account with the same value. The combination of these two operations, on two different objects, must be atomic, so locking individual methods is insufficient.
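Under the proxy approach from the post, that transfer might be sketched as follows. The multi-object locked_scope variant here is hypothetical (and takes the mutex first, unlike the original): it puts the debit and the credit on two different shared objects inside one critical section.

```ruby
require 'thread'

class Account
  attr_reader :balance
  def initialize(balance = 0); @balance = balance; end
  def debit(amount);  @balance -= amount; end
  def credit(amount); @balance += amount; end
end

class VolatileProxy
  attr_reader :obj
  def initialize(obj); @obj = obj; end
  def method_missing(method, *args)
    raise "Unsynchronized message '#{method}' sent to volatile object"
  end
end

def volatile(obj); VolatileProxy.new(obj); end

# Hypothetical extension of locked_scope to any number of shared
# objects, so a multi-object operation forms one atomic unit.
def locked_scope(mutex, *volatiles)
  mutex.synchronize { yield(*volatiles.map { |v| v.obj }) }
end

mutex = Mutex.new
checking = volatile(Account.new(100))
savings  = volatile(Account.new(0))

locked_scope(mutex, checking, savings) do |from, to|
  from.debit(25)
  to.credit(25)
end
```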
