Archive for the ‘Aloha’ Category


TestRR

July 30, 2008

As mentioned in my previous post, I have open-sourced a very generic and lightweight version of the robustness framework in Aloha. The name, very unimaginatively, is TestRR, and it is available on Google Code.

For the moment, the features and documentation included in the download are quite basic. The framework and basic functionality are there, and more will be added if there is any demand for it.

I hope people download it, try it and give me some feedback to improve it!


Optimistic Concurrency Control

July 17, 2008

There are several important lessons that can be gleaned from Aloha and easily applied to other projects. One major lesson is the integration of robustness and performance builds into the continuous integration environment. This is, of course, the topic I am presenting at Agile2008 along with Robbie and Fab. (As a side note, I am currently working on an extremely simple and lightweight robustness and performance framework based on Aloha’s model that can be integrated into any Java project with a minimum of fuss. I will be putting out what I have in open-source land, hopefully early next week. I also don’t currently have a name for this framework; I am calling it RPFramework for now, but would like a better, cooler name without the word framework in it. Any suggestions would be welcome!)

But what I want to focus on right now, as the title suggests, is the optimistic concurrency control model implemented in Aloha. It isn’t the easiest or most intuitive mechanism to implement in a multi-threaded application. Most applications go the pessimistic route with some locking mechanism, typically using semaphores or language constructs such as the synchronized keyword. From my experience of describing the optimistic model to others (typically colleagues joining the Aloha team), developers, even highly experienced and competent ones, have real trouble grasping the concepts behind it. Their first few attempts at working with the implementation are invariably wrong as they struggle with the intricacies of the model.

One (unscientific) way of gauging the effectiveness of the model is to look at other applications that use it. Among databases, Oracle is the only major player which uses such a model; all the others use pessimistic locking. It is, however, hard to pinpoint this as the primary reason (or even one of them) why Oracle outperforms most other RDBMSs. Java itself uses this model in the implementation of the incrementAndGet() method in AtomicInteger. And just last night, I read a very interesting article that delved into the depths of Transactional Memory (TM).
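The AtomicInteger case is a nice miniature of the optimistic idea. Here is a sketch of the compare-and-set loop behind incrementAndGet() — an illustration of the technique, not the JDK’s actual source:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasIncrement {
    // A sketch of the optimistic loop behind AtomicInteger.incrementAndGet():
    // read the current value, compute the new one locally, and commit only if
    // no other thread has changed the value in the meantime; otherwise retry.
    static int incrementAndGet(AtomicInteger value) {
        for (;;) {
            int current = value.get();      // optimistic read, no lock taken
            int next = current + 1;         // local mutation
            if (value.compareAndSet(current, next)) {
                return next;                // nobody interfered: commit succeeded
            }
            // another thread updated the value first; loop and retry
        }
    }

    public static void main(String[] args) {
        AtomicInteger counter = new AtomicInteger(41);
        System.out.println(incrementAndGet(counter)); // prints 42
    }
}
```

No thread ever blocks; a loser of the race simply re-reads and tries again, which is exactly the shape of the model described below.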

Transactional Memory attempts to make life easier for programmers who work on concurrent applications. It gives the developer two of the ACID properties that databases give you: atomicity and isolation. You get to write code without ever worrying about concurrency and synchronisation; TM allows you to work as if you were writing a single-threaded application, with no locks in sight. How does TM work under the covers? There are two main flavours, STM and HTM (software and hardware versions). The hardware versions, as always, perform better but are much harder to implement, so the more complex functionality is currently implemented in software. So what does TM have to do with the subject of this post? Apparently, most TM systems are implemented using an optimistic model, using nearly all of the same mechanisms implemented in Aloha. There are two notable differences: its transparency to the developer and the retry mechanism. Let me first describe how the optimistic model itself works and then go into a bit more detail about these differences.

Instead of going into the textbook details of an optimistic concurrency model, let me describe the optimistic model as implemented in Aloha and why we chose this path. Since there is no locking in an optimistic model, any thread looking to read shared state can do so instantaneously. Writing is more complicated and can lead to delays and retries, but if the majority of data access is for reading (which is true in most applications), you really do have reason to be optimistic. When you read shared state, what you actually get is a cloned copy of the state. Any mutations on this state are local; they affect no one else. When you try to write the changes back into the state store, the following happens:

  • If no other thread has saved a newer version into the collection since the one you read, your changes are saved and no further action is required.
  • If one or more other threads have saved newer changes, all your changes are rolled back and you get to try your entire “transaction” again.
  • If a ConcurrentUpdateBlock has rolled back 10 times, it isn’t retried again as it is extremely unlikely it will ever succeed.
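The three rules above can be sketched as a retry loop. The names ConcurrentUpdateBlock and ConcurrentUpdateManager come from Aloha, but the signatures and the AtomicReference-backed store below are my own illustrative assumptions, not the actual API:

```java
import java.util.concurrent.atomic.AtomicReference;

public class OptimisticSketch {
    // Thrown by a block when it detects that a newer version was saved
    // since it read its snapshot (hypothetical, for illustration only).
    static class ConcurrentUpdateException extends RuntimeException {}

    interface ConcurrentUpdateBlock {
        void execute(); // read snapshot, mutate locally, attempt to commit
    }

    static class ConcurrentUpdateManager {
        static final int MAX_ATTEMPTS = 10;

        void run(ConcurrentUpdateBlock block) {
            for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
                try {
                    block.execute();
                    return; // saved cleanly: no further action required
                } catch (ConcurrentUpdateException e) {
                    // all changes rolled back: re-run the whole "transaction"
                }
            }
            // after 10 rollbacks, give up rather than retry forever
            throw new IllegalStateException("abandoned after " + MAX_ATTEMPTS + " attempts");
        }
    }

    public static void main(String[] args) {
        AtomicReference<Integer> store = new AtomicReference<>(0); // the state store
        new ConcurrentUpdateManager().run(() -> {
            Integer snapshot = store.get();                // cloned read
            Integer mutated = snapshot + 1;                // local mutation
            if (!store.compareAndSet(snapshot, mutated)) { // write back only if unchanged
                throw new ConcurrentUpdateException();     // newer version exists
            }
        });
        System.out.println(store.get()); // prints 1
    }
}
```

The key point the sketch makes concrete: the block contains the entire read-mutate-write cycle, so re-running it after a rollback always starts from a fresh snapshot.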

The two main differences between optimistic concurrency control as implemented in Aloha and in a TM are:

  • In TM, the rollback and re-try mechanisms are done transparently. In Aloha, this is done explicitly by writing code which accesses and changes shared state in ConcurrentUpdateBlocks. Each ConcurrentUpdateBlock is then invoked through the ConcurrentUpdateManager which handles the retry mechanism for you. Therefore, the developer has to be very aware of shared state accesses and has to understand how the model works while writing code.
  • The other big difference is the retry mechanism. Aloha retries each ConcurrentUpdateBlock 10 times. The TM system described in the article implemented a more complex scheme using priority levels which increase with each failed commit. This is obviously better for guaranteeing fairness, but we didn’t see the need to implement it until a fairness issue actually becomes apparent.

And finally, Aloha’s state and concurrency models are easily extensible such that applications built on top of Aloha can use them with next to no effort.


Mock me not

May 20, 2008

This is a follow-up to an earlier post of mine, where I ranted about the (ab)use of mocks in unit tests. Last week, I got into a debate with one of my colleagues on the same issue. He was of the opinion that mocks were THE way to go in unit tests. You just don’t test with real stuff at unit test level. You mock out all your dependencies and only test your code.

I agree. In principle. But like all rules, there are some exceptions. Let’s take an over-simplified example. Suppose we are writing the get() method of a DAO class, in test-first fashion. The test might initially look like this:

 @Test
 public void testTest() throws Exception {
  // setup
  Statement statement = EasyMock.createMock(Statement.class);
  EasyMock.expect(
    statement.executeQuery(EasyMock.isA(String.class))).
    andReturn(EasyMock.createMock(ResultSet.class));
  EasyMock.replay(statement);
  RTest test = new RTest();
  test.setStatement(statement);
  
  // act
  Object result = test.get("id");
  
  // assert
  EasyMock.verify(statement);
  assertNotNull(result);
 }

The simplest code to pass the test:

public ResultSet get(String string) {
  try {
   return statement.executeQuery("");
  } catch (SQLException e) {
   e.printStackTrace();
   return null;
  }
}

Now, we enhance the test and add more stringent assertions:

 @Test
 public void testTest() throws Exception {
  // setup
  String expectedSql = "select * from table where id='id'";
  Statement statement = EasyMock.createMock(Statement.class);
  EasyMock.expect(
    statement.executeQuery(expectedSql)).
    andReturn(EasyMock.createMock(ResultSet.class));
  EasyMock.replay(statement);
  RTest test = new RTest();
  test.setStatement(statement);
  
  // act
  Object result = test.get("id");
  
  // assert
  EasyMock.verify(statement);
  assertNotNull(result);
 }

Now, this code fails the test:

public ResultSet get(String string) {
  try {
   return statement.executeQuery(String.format(
     "select * from table where id = '%s'", string));
  } catch (SQLException e) {
   e.printStackTrace();
   return null;
  }
}

Some of the problems with this test:

  • Since the SQL query is part of the code, your test should assert on the correctness of the query itself. By typing it twice (once in the test, once in the code), how exactly do you achieve that goal?
  • Each unit test should test a single unit of code (in this case, a method). Assert on the expected result (of course!), but the test shouldn’t define the implementation. The second version of the test above should not fail against a correct implementation.
  • An integration test or acceptance test might show any possible problems with the query. But then again, those tests may not cover all boundary cases. The unit test is the closest place to the execution of that piece of code and is the right place to carry out that real test.

How would you do that? Simple. Use an in-memory database, run the CREATE script as part of the test setup (as a bonus, this tests your CREATE script as well!), populate the data required for your test, execute your test method, and assert on the data. If you are testing get() functionality, assert on the returned data. If testing save() functionality, assert on the new data in the database. Your unit tests then actually test your code (and SQL) without defining how the code should be implemented.
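As a sketch of what this looks like with an in-memory HSQLDB (Hypersonic) database — assuming the HSQLDB jar is on the classpath; the table, columns, and data here are made up for illustration:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class UserDaoTest {
    public static String lookupName() throws Exception {
        // connect to a throwaway in-memory database
        try (Connection conn = DriverManager.getConnection("jdbc:hsqldb:mem:testdb", "sa", "")) {
            try (Statement st = conn.createStatement()) {
                // run the CREATE script as part of setup -- this tests the script too
                st.execute("CREATE TABLE users (id VARCHAR(10) PRIMARY KEY, name VARCHAR(50))");
                st.execute("INSERT INTO users VALUES ('id', 'Alice')"); // test data
            }
            // exercise the real SQL the DAO would run, and assert on the data
            try (Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery("SELECT name FROM users WHERE id = 'id'")) {
                rs.next();
                return rs.getString("name");
            }
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(lookupName());
    }
}
```

The test exercises the real query against a real (if tiny) database, so a typo in the SQL fails the test, while reformatting the query string does not.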

This rationale has been used extensively in Aloha’s unit tests. As you might expect, we use Hypersonic to unit-test the DAO classes. Similarly, we use SipUnit to test code which sends and receives real SIP. This provides several benefits over using mocks to test SIP:

  • SIP is an incredibly complex protocol. There is no easy way to validate that a newly created SIP message is valid by itself, or valid in the current message flow. If JAIN-SIP sends out the message with no exceptions and it is duly received by SipUnit, we can rest assured that the message is valid.
  • Mocking out SIP responses also results in unmanageable tests. Since the required data in a JAIN-SIP response object is several levels deep, constructing the mocks is quite complicated. If any code is refactored later, the changes to the tests are very time-consuming, without providing the necessary confidence that the changes to the code are accurate (a by-product of the test defining the implementation).
  • Because SIP is truly real-time, it is essential to test various race conditions and concurrency issues. While our robustness and performance builds go a long way towards achieving this goal, our unit tests provide our first level of sanity checking. With the current unit test setup, it is much easier to write failing unit tests when fixing a concurrency issue than it would be with mocks!

This is not to say that we don’t use mocks in Aloha. In fact, they are used extensively across the codebase, just not in ALL places.


What is Aloha?

May 9, 2008

Is it:

  1. Hawaiian, for hello and goodbye?
  2. The Web21C team shirt, represented here by Eastmad?
  3. The newly open-sourced SIP Application Server?

The problem with most SIP Application Servers is that they are built on the SIP Servlet specification. This usually leads to unwieldy implementations that are over-engineered and inflexible. Our goal with Aloha was to create something simple, easy to scale, and easy to embed into our different voice services. The API we expose to applications is purposefully quite coarse, hiding the horrible workings of SIP and media negotiation from them. (Of course, none of this is really “hidden”: you can always get your hands dirty now that it is open-sourced!)

Dependency Injection has become a very common pattern in writing testable code. Spring Beans lend themselves to this pattern naturally, whereby you can inject properties into beans via constructors and/or setters. We therefore decided to use singleton Spring beans for most of our core classes. As long as you are thread-safe while storing and accessing data, the beans are just an implementation of a finite state machine. You have the added bonus of being able to run the application server within any Java container, as long as you can load your Spring application context!
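As a sketch, the wiring might look like this in a Spring XML application context (the bean names and classes here are hypothetical, not Aloha’s actual configuration):

```xml
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans.xsd">

  <!-- singleton by default: one instance per application context -->
  <bean id="dialogBean" class="com.example.sip.DialogBeanImpl"/>

  <!-- setter injection: the lower-layer bean is handed in as a property -->
  <bean id="callLegBean" class="com.example.sip.CallLegBeanImpl">
    <property name="dialogBean" ref="dialogBean"/>
  </bean>
</beans>
```

Because each layer receives the layer beneath it as an injected property, a unit test can hand in a test double instead, and the whole server starts in any container that can load the context.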

The beans themselves sit at different layers of abstraction. The “dialog” layer is the lowest level of the stack, responsible for sending and receiving SIP messages. Under the covers, it uses the open-source JAIN-SIP stack to handle much of the internal workings of SIP (such as the resending of messages, handling of retransmissions, and working out of timeouts). The “callLeg” layer is a thin layer above dialog, meant to abstract most of the SIP-specific details from the higher levels. Unfortunately, there is still some leakiness present at this layer, as evidenced by the following declaration in the CallLegBean interface:

void reinviteCallLeg(String callLegId, MediaDescription offerMediaDescription, AutoTerminateAction autoTerminate, String applicationData);

It is a clear violation of the intended abstraction to expose details of media negotiation above the callLeg layer. Fixing this has been a story on our backlog for quite a while now, but it has consistently moved down the prioritisation list!

Moving on, the next layer of abstraction is the “call” layer. This is where the interesting stuff starts (in telephony terms, a call isn’t very interesting if you are on it by yourself; the most basic call connects two people). Most applications using Aloha will use the API exposed by CallBean. In addition to calls, there are “media” level beans (both for call legs and for calls), which allow the application to create calls using the media server. The media beans themselves are defined in the SimpleSipStack project; ConvediaMediaBeans is an implementation using Convedia’s media server (the media server talks MSML and MOML over SIP). ConferenceBean leverages the media server and allows applications to create conference calls, while MediaBeanImpl can be used for the playback of announcements, collecting DTMF, recording, etc.

All of the actions exposed via the API complete asynchronously. On completion or failure, applications are notified of events using the listener pattern. Listeners can be added either via the application context or directly from code. Listener interfaces are defined at each layer, and applications can choose the events they are interested in. In fact, beans defined at each layer in Aloha register themselves as listeners to the corresponding beans in the layer beneath them.
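A minimal sketch of that pattern (the listener and bean names here are invented for illustration; Aloha’s actual interfaces differ):

```java
import java.util.ArrayList;
import java.util.List;

public class ListenerSketch {
    // A hypothetical per-layer listener interface: applications implement
    // only the callbacks for the events they care about.
    interface CallListener {
        void callConnected(String callId);
    }

    static class CallBeanSketch {
        private final List<CallListener> listeners = new ArrayList<>();

        // listeners can be registered from code (or wired in via the context)
        void addListener(CallListener listener) {
            listeners.add(listener);
        }

        // when the asynchronous action completes, every listener is notified
        void fireCallConnected(String callId) {
            for (CallListener listener : listeners) {
                listener.callConnected(callId);
            }
        }
    }

    public static void main(String[] args) {
        CallBeanSketch bean = new CallBeanSketch();
        List<String> events = new ArrayList<>();
        bean.addListener(events::add); // record each connected call id
        bean.fireCallConnected("call-1");
        System.out.println(events); // prints [call-1]
    }
}
```

A bean in the layer above would register itself with addListener() at startup, which is how events bubble up the stack without the lower layers knowing anything about the higher ones.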

Having provided the basic structure, I will next tackle the testing strategy used in Aloha.


Aloha Baby!

May 2, 2008

I’d like to be, but I am not in fact in Hawaii!

Robbie, as usual, is ahead of me with the announcement, but we have finally open-sourced our SIP Application Server. It still has some rough edges in terms of the README and the like, so you might have trouble building it from source if you download it right away, but it is finally out there.

Over the course of the past year, I have considered writing about several aspects of the project but was always unsure of how much could be discussed in the public domain. Now that it is open-sourced, and because I consider it one of the cooler projects I have been involved in, I would like to take the chance to write a series of blog posts about various design decisions and patterns in the implementation.

I plan to cover the unit-testing and acceptance-testing strategies, the Spring-ified nature of the application, the message flows, the optimistic concurrency model, the state-storage options and whatever else I can think of. Again, as Robbie mentions, if our paper gets accepted, three of us will be presenting at Agile2008 this year in Toronto about our robustness and performance tests.

In short, stay tuned. Or more likely, stay away!