Sunday, August 31, 2008


Quite a bit of software development revolves around getting a piece of structured data from a store, modifying some part of it, and writing it back. If the store is local and contention is low - that is, the probability that two threads modify the same structure at the same time is small - a mutex or a critical section provides an easy solution.

However, what if the object is managed by a remote web service? What if two threads reading and writing the same object is a common occurrence? Locking an object remotely is not easy - one has to deal with the situation where a client goes away and leaves the lock hanging, and with other nasty corner cases.

Consider the following example. Let's say we have a structure that has title, description, and a checksum:

#define MAX_TITLE 128
#define MAX_DESCR 512
struct Record {
    char title[MAX_TITLE];
    char descr[MAX_DESCR];
    unsigned long crc;
};

The record lives inside a blob store. Depending on the implementation, the blob store may or may not know about the internal structure of the record. To make the exercise interesting, let's assume that it does not own the crc computation algorithm - this happens on the client.

Depending on the API, the client can either write parts of the structure, or all at once.

The catch with updating the parts of the structure is that there is a field (crc) that depends on all of the rest of the structure.

Consider the following scenario: thread A reads the structure, modifies the title, and computes the new crc. Thread B reads the structure, modifies the description, and computes the new crc. Then both of them write the result of their work as (title, crc) and (description, crc). The new data structure ends up with the new title and the new description, but the crc is wrong - depending on who wrote last, it would be computed based on either the new title and the old description, or the old title and the new description.

The consistency problem can be rectified by only allowing writes of the whole record at once - title, description, and crc together. This comes with its own set of problems: a thread that reads early but writes late can wipe out changes written by a different thread in between. In this scenario, thread A reads the structure and updates the title and crc. Thread B reads the structure and updates the description and crc. Thread B writes its data first; thread A then writes its changes out - wiping out thread B's modifications.

Short of a distributed lock service (or a DTC), which makes the job very complicated and potentially rather slow, there is no direct solution to this problem. There is, however, a way to redefine the problem that makes it much easier: allowing the store to fail conflicting writes.

The question then becomes much simpler - how do we detect the conflict?

One easy (and quite standard) way of doing it is to maintain a version of the structure.

#define MAX_TITLE 128
#define MAX_DESCR 512
struct Record {
    char title[MAX_TITLE];
    char descr[MAX_DESCR];
    unsigned long crc;
    unsigned long version;
};

The version is read by the client and handed back as part of the update request. It is incremented by the server every time the structure is modified. If the client supplies a version number other than what is currently in the store, the server assumes that the structure has already been modified by someone else and fails the write. The client can then re-read the structure, reapply its changes, and attempt the write again.

Some systems use a time stamp instead, which makes it somewhat harder for a malicious client to game the system. The disadvantage, of course, is that one needs a counter with high enough resolution to guarantee that a read and a write cannot complete within one "tick". Ever-growing system performance, coupled with the desire of OS vendors to preserve backwards compatibility (in this case, the resolution of a counter defined by a given API set), makes this approach brittle.

This approach is easy to understand, but it has a disadvantage - it exposes the internals of the concurrency management to the client and makes the client responsible for behaving in good faith.

How can we transfer the responsibility for the atomicity and correctness of the updates to the server? It turns out to be rather straightforward: for every read that results in a later update, we will let the CLIENT supply us with a hard-to-guess identifier (say, a GUID) to serve as a transaction ID. We will accumulate these IDs in the internal implementation of the object (never to be returned to a client). We will only allow writes that are accompanied by a GUID that is known to us, and we will clear the known GUIDs the moment a write succeeds, immediately making all other outstanding updates stale.

// Synchronized externally
class RecordInternal {
    Record record;
    std::set<GUID> reads;
public:
    void Read(GUID *id, Record *rec) {
        *rec = record;
        if (id)
            reads.insert(*id);
    }
    bool Write(const GUID &id, const Record &rec) {
        if (reads.count(id) == 0)
            return false;
        record = rec;
        reads.clear(); // all other outstanding updates are now stale
        return true;
    }
};

A potential improvement would be to keep a map from the GUID to the timestamp of the read request, instead of a set of all GUIDs - this would allow us to run down very old outstanding updates that will probably never complete because of a client failure.

On the client side, it would be convenient to wrap the messy business of generating the GUID in an "update" class, making the use of the API look as follows:

Update update;
Record record;
service.Read(update.Id(), &record);
...update the record...
service.Write(update.Id(), record);

Tuesday, August 26, 2008

Looking for great devs!

As Web 2.0 matures, a new paradigm is being born – massively distributed development, where a typical application runs on hundreds, thousands, sometimes tens of thousands computers at the same time. A new software stack that resembles familiar operating system concepts is being built to support this model.

Would you like to participate in the creation of the “Web 2.0 OS”?

Our team builds components that massively distributed applications live and die by - deployment and monitoring. We have it all - complex algorithmic tasks, the opportunity to write a lot of code, the potential to influence the entire industry, and, last but not least, millions of cores to run our stuff!


We’re looking for people with experience shipping software in native languages who are fluent in both computer architecture and algorithms. If your C/C++ is top notch and you can implement Dijkstra's algorithm in your sleep, send me your resume -!

Thursday, August 21, 2008

I am very confused...

According to this graphic in CNN's Money section, every child has at least a 24% probability of getting into the top 20%. How is this possible?

Tuesday, August 19, 2008

Is McCain another Bush?

There was some discussion recently (not nearly as much as needed) about whether McCain's policies are the same as those of George Bush. But is he as stupid? This article argues convincingly that yes: just like George Bush, he lacks intellectual curiosity, his academic record is awful, and his approach to everything from morality to world problems is simplistic.

"John McCain graduated 894th in a class of 899 at the Naval Academy at Annapolis. His father and grandfather were four star admirals in the Navy. Some have suggested that might have played a role in McCain being admitted. His academic record was awful. And it shows over and over again whenever McCain is called upon to think on his feet."

Monday, August 18, 2008

So say you were a president of a very small country...

...and you had this very big neighbor with a huge army, and you were at war with this neighbor because you've done something that royally pissed them off. And say their troops were already controlling half of your territory, and had 100000% capacity to control the rest.

Would you be saying this?

“Unfortunately, today we are looking evil directly in the eye. And today this evil is very strong, very nasty and very dangerous, for everybody, not only for us.”

Thursday, August 14, 2008

Quantum entanglement results in faster than light information transfer

This experiment separated two entangled photons by 18 (!) kilometers, then observed the effect of perturbing one photon's state on the other - instantly!

If the results can be independently verified, it would have the significance of the Michelson-Morley experiment, which gave birth to the special theory of relativity.

This is a BIG deal!

Wednesday, August 13, 2008

Too much of a good thing

"As people do better, they start voting like Republicans - unless they have too much education and vote Democratic, which proves there can be too much of a good thing." -- Karl Rove

Unsurprisingly, the Wall Street Journal editorial board agrees, once again, with the evil genius of the Bush Administration. In a recent opinion article, "For Most People, College Is a Waste of Time", they argue that the only thing people need is professional training, evaluated by trade exams - not education.

Training teaches skills, which can be used to work for Rupert Murdoch. Education teaches people how to think for themselves. Educated people, as Rove correctly observed, usually do not vote Republican...

Judging from the URL, this crap is in the "Most Popular" category, which is sad, but, again, not surprising. I assume that people who read WSJ Opinion page are the same people who get their information from Faux News.

Monday, August 11, 2008

This is beyond embarrassing...

Russia. Sexual harassment case. Judge rules: "If we had no sexual harassment we would have no children".

Office Space

A sad, sad, sad, sad story from the trenches:

This falls right in line with one of my all-time favorites:

The sad reality is that a lot of the classic vices of the white-collar office environment are seeping into an industry that was rather immune to them in the past.

The good news is that there are still plenty of places where this does not happen. Finding them is not that hard - pick the top couple of products in any software area, and the companies building them are probably very, very good. After all, it's hard to build a great product without attracting - and keeping - great people. And great people do not keep very well in terrible environments.

But how do you get a job in a good environment? The easiest way is to be really good at the aspects of the job that are usually tested on the interview.

Most good companies look for essentially the same qualifications in an employee. Joel Spolsky summarises these as "Smart and gets things done".

In reality, it is impossible to test these qualifications during the interview, so most interviewers look for indirect evidence, thus equating "smarts" with "great computer science skills", and "gets things done" with "demonstrable passion for technology".

Computer science skills are typically tested in an interview by having the interviewee solve an algorithmic problem, with possible segues into computer architecture (is your solution memory- or CPU-bound?).

Unlike true intelligence, the skill of solving algorithmic problems can be developed. Having a genuine interest in these matters helps a lot (if you don't normally care, a short refresher right before the interview produces only very limited results, but it is still better than nothing). There are also a couple of really good books.

First is the venerable "Introduction to Algorithms" by Cormen, Leiserson, Rivest, and Stein (AKA "the CLRS book"):

Second is "Computer Architecture: A Quantitative Approach" by Hennessy and Patterson

It is important to point out that buying these books just to put them on the shelf until you have more time will not help - I guarantee it! You have to actually read them, and even do a few exercises :-).

A quick test: if you can implement quicksort and merge sort, BFS and DFS walks, and a basic hash table, delete an element from a binary tree, and analyze the respective memory performance of heapsort vs. quicksort - all without consulting a book - you meet both Google's and Microsoft's definition of developer "intelligence". This means that you will be selecting the place where you work next, and not the other way around.

Of course, most modern workplaces do not naturally maintain algorithmic skills - people rely on Java/STL to do the right thing and do not go into the details of how exactly it does it. Whatever was learned at school eventually gets forgotten, and as a result, the probability of getting a hire vote at most places where I have worked actually diminishes with "experience".

What about the "passion for technology" part of the puzzle? A couple of things help here.

Of course, being really passionate about something technology-related helps - tracking progress in the area closely, reading about new developments, and being able to discuss it coherently and in depth. When a developer tells me that (s)he thinks that XML is important, I ask "Why?" and then sit back and listen. If the answer does not rise above the standard marketing blabber, it's not a good sign.

Having home projects beyond work and school is also important - it is the true sign of passion for technology. I often ask people when the last time was that they did something software-developmentish outside work or school.

The unfortunate part of this "advice" is that it's essentially the same as "lose weight by eating less" - it requires fundamental life changes. You get the beneficial effects, but at great cost, and the cost only grows as time goes by and you settle - deeper and deeper - into your comfort zone. But no comfort zone is forever - the manager changes and... see the link at the beginning of this post.

But on the positive note, if you have already made the life changes, or have always led the healthy life :-) I have a job for you! Check out what we're doing here: and send me your resume!

Friday, August 8, 2008

Bin Laden's driver is sentenced to 5.5 years in prison

This morning NPR called the sentence "surprisingly lenient".

Next to come - Al Zawahiri's ice cream seller...