
Optimistic Concurrency Control

July 17, 2008

There are several important lessons that can be gleaned from Aloha and easily applied to other projects. One major lesson is the integration of robustness and performance builds into the continuous integration environment. This is, of course, the topic I am presenting on at Agile2008 along with Robbie and Fab. (As a side note, I am currently working on an extremely simple and lightweight robustness and performance framework, based on Aloha’s model, that can be integrated into any Java project with a minimum of fuss. I will be putting out what I have in open-source land, hopefully early next week. I also don’t currently have a name for this framework – I am calling it RPFramework for now, but would like a better, cooler name without the word framework in it. Any suggestions would be welcome!)

But what I want to focus on right now, as the title would suggest, is the optimistic concurrency control model implemented in Aloha. It isn’t the easiest or most intuitive mechanism to implement in a multi-threaded application. Most applications tend to go the pessimistic route with some locking mechanism, typically using semaphores or other language constructs such as the synchronized keyword. From my experience in trying to describe how the optimistic model works to others (typically colleagues joining the Aloha team), developers, even highly experienced and competent ones, have loads of trouble really grasping the concepts behind it. Their first few attempts at working with the implementation are invariably wrong as they struggle to cope with the intricacies of the model.
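To set the scene, here is roughly what that pessimistic route looks like in plain Java: a single lock guards the shared state, so readers and writers wait on one another even when no conflict would have occurred.

```java
// The pessimistic route: one lock (via the synchronized keyword) guards the
// shared counter, so every reader and writer must acquire it, even when no
// actual conflict would have occurred.
public class PessimisticCounter {
    private int value = 0;

    public synchronized int get() {
        return value;            // even a plain read takes the lock
    }

    public synchronized int increment() {
        return ++value;          // the lock is held for the whole update
    }
}
```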

One (un-scientific) way of gauging the effectiveness of the model is to look at other applications that use it. Among databases, Oracle is the only major player that uses such a model; all the others use a pessimistic locking mechanism. It is hard, however, to pinpoint this as the primary reason (or even one of the reasons) why Oracle outperforms most other RDBMSs. Java itself uses this model in its implementation of the incrementAndGet() method in AtomicInteger. And just last night, I read a very interesting article that delved into the depths of Transactional Memory (TM).
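To make the AtomicInteger point concrete, here is a minimal sketch of the compare-and-set loop that sits behind its increment: read without locking, compute the new value locally, and only commit if nobody else has written in the meantime. The OptimisticCounter wrapper class is my own illustration; get() and compareAndSet() are the real AtomicInteger API.

```java
import java.util.concurrent.atomic.AtomicInteger;

// A sketch of the optimistic loop behind AtomicInteger's increment:
// read the current value without locking, compute the new value locally,
// and commit only if no other thread has updated it in the meantime.
public class OptimisticCounter {
    private final AtomicInteger value = new AtomicInteger(0);

    public int incrementAndGet() {
        while (true) {
            int current = value.get();         // optimistic read, no lock held
            int next = current + 1;            // purely local computation
            if (value.compareAndSet(current, next)) {
                return next;                   // nobody interfered: commit wins
            }
            // Another thread committed first; loop and retry with fresh state.
        }
    }
}
```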

Transactional Memory attempts to make life easier for programmers who work on concurrent applications. It gives the developer two of the ACID properties that databases give you – atomicity and isolation. You therefore get to write code without ever worrying about concurrency and synchronization: TM lets you work as if you were writing a single-threaded application, with no locks to manage. Now, how does TM work under the covers? There are two main flavours – STM and HTM (software and hardware transactional memory). The hardware versions, as always, perform better but are much harder to implement, so the more complex functionality is currently implemented in software. So what does TM have to do with the subject of this post? Apparently, most TM systems are implemented using an optimistic model, with nearly all of the same mechanisms implemented in Aloha. There are two notable differences: transparency to the developer and the retry mechanism. Let me first describe how the optimistic model itself works and then go into a bit more detail about these differences.

Instead of going into the textbook details of an optimistic concurrency model, let me describe the optimistic model as implemented in Aloha and why we chose this path. Since there is no locking in an optimistic model, any thread looking to read shared state can do so instantaneously. Writing is more complicated and can lead to delays and retries – but if the majority of data access is for reading (which is true in most applications), you really do have reason to be optimistic. When you read shared state, what you actually get is a cloned copy of that state. Any mutations on this copy are local; they affect no one else. When you try to write the changes back into the state store, the following happens (a rough sketch of this flow appears after the list):

  • If no other thread has saved a newer version into the collection since the one you read, your changes are saved and no further action is required.
  • If one or more other threads have saved newer changes, all your changes are rolled back and you get to try your entire “transaction” again.
  • If a ConcurrentUpdateBlock (more on these below) has rolled back 10 times, it isn’t retried again, as it is extremely unlikely ever to succeed.
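To make that flow concrete, here is a rough, hypothetical sketch of a versioned store in that spirit. The names (VersionedStore, Snapshot, tryCommit) are mine for illustration, not Aloha’s actual API; the point is the clone-on-read and the version check on write.

```java
import java.util.function.UnaryOperator;

// Hypothetical sketch of a versioned state store in the spirit described
// above: reads hand out a cloned copy, and a write only succeeds if the
// version it was read at is still the current one.
public class VersionedStore<T> {

    // An immutable snapshot: the cloned state plus the version it was read at.
    public static final class Snapshot<S> {
        final S state;
        final long version;
        Snapshot(S state, long version) { this.state = state; this.version = version; }
        public S state() { return state; }
    }

    private T current;
    private long version = 0;
    private final UnaryOperator<T> cloner;   // how to produce a private copy

    public VersionedStore(T initial, UnaryOperator<T> cloner) {
        this.current = initial;
        this.cloner = cloner;
    }

    // Reading never waits on anyone's in-flight "transaction": we just copy.
    // The synchronized keyword here only guards the version bookkeeping.
    public synchronized Snapshot<T> read() {
        return new Snapshot<>(cloner.apply(current), version);
    }

    // Commit succeeds only if no newer version has been saved since the
    // snapshot was taken; otherwise the caller discards its local changes
    // and redoes the whole "transaction".
    public synchronized boolean tryCommit(Snapshot<T> readAt, T newState) {
        if (readAt.version != version) {
            return false;                    // someone else committed first
        }
        current = newState;
        version++;
        return true;
    }
}
```

All of the caller’s mutation happens on the cloned copy, outside any lock, which is what keeps readers from ever being blocked by a long-running update.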

The two main differences between optimistic concurrency control as implemented in Aloha and in a TM are:

  • In TM, the rollback and retry mechanisms are handled transparently. In Aloha, this is done explicitly: code that accesses and changes shared state is written in ConcurrentUpdateBlocks, and each ConcurrentUpdateBlock is then invoked through the ConcurrentUpdateManager, which handles the retry mechanism for you (a sketch of this arrangement follows the list). The developer therefore has to be very aware of shared-state accesses and has to understand how the model works while writing code.
  • The other big difference is the retry mechanism. Aloha retries each ConcurrentUpdateBlock up to 10 times. The TM system described in the article I read implemented a more complex scheme using priority levels that increase with each failed commit. This is clearly better for guaranteeing fairness, but we decided not to implement it until an issue becomes apparent.
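For illustration, here is a hypothetical sketch of the block-plus-manager arrangement; the interfaces are my own guesses at the general shape of ConcurrentUpdateBlock and ConcurrentUpdateManager, not Aloha’s actual signatures.

```java
// Hypothetical sketch of the block/manager split described above; the real
// Aloha classes almost certainly have different signatures. This only shows
// the capped-retry idea.
public class ConcurrentUpdateManager {

    // A unit of work that reads shared state, mutates a private copy, and
    // reports whether its commit attempt succeeded.
    public interface ConcurrentUpdateBlock {
        boolean execute();   // true = committed, false = lost the race
    }

    private static final int MAX_ATTEMPTS = 10;

    // Run the block, retrying on conflict up to 10 times; after that we give
    // up, since a block that keeps losing the race is unlikely to ever win.
    public boolean invoke(ConcurrentUpdateBlock block) {
        for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
            if (block.execute()) {
                return true;               // committed cleanly
            }
            // Conflict detected: all changes were made on a private copy, so
            // "rolling back" is simply discarding that copy and starting over.
        }
        return false;                      // gave up after 10 failed attempts
    }
}
```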

And finally, Aloha’s state and concurrency models are easily extensible such that applications built on top of Aloha can use them with next to no effort.
