Thursday, October 05, 2006

Concurrency

There has been a lot of talk recently about how the exponential growth in CPU power is tapering off. Almost everybody is saying that the future will be multicore processors and multiprocessor systems. A lot of consumer PCs are shipping with dual core processors. The Xbox 360 uses a custom, triple-core CPU. The PlayStation 3 will have 7 active cores. This isn't theory; it's quickly becoming fact.

Everybody who has ever tried knows that multithreaded programming is hard. The programmer is required to do a lot of mental juggling to try to imagine what the system will do. I'm not a fan of making a programmer do a lot of mental gymnastics if the computer can do a better job of it.

One of the biggest challenges in multiprocessing is dividing an application into multiple threads. To this day, most applications run in a single thread. Even when an application has multiple threads, it usually spawns them only to keep the rest of the application responsive during some lengthy operation, such as a file search; without them, the entire application would freeze up.

Interestingly, programmers have been dividing their programs into pieces for a long time. Instead of carrying out a process as 1 million discrete operations, we perform 10 high-level operations. These operations need to be carried out in order, because the output of one is the input to the next. Each of those high-level operations is implemented in terms of 15 mid-level operations, each of those in terms of 10 lower-level operations, and so on. These subdivisions are usually used to take what is essentially one long process and split its code into multiple bite-sized chunks that an individual programmer can understand.

Unfortunately, this doesn't help with the concurrency problem. However, a lot of programmers have started programming in an object-oriented way, and making classes is a very different process from making functions. The purpose of functions is to allow the programmer to focus on a small section of code; the code being worked on still sits inside some larger whole. The purpose of classes is to let the programmer work on a piece of the program in complete isolation. This is why a class contains both state and behavior. In fact, a class written for one program should be able to be dropped, without change, into a completely different program. A BlogPost class should be reusable, as long as it provides all of the behavior that everybody needs.
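To make that concrete, here's a sketch of such a class in Java. The BlogPost class and its methods are invented for illustration; the point is just that the state and the behavior travel together.

```java
import java.util.ArrayList;
import java.util.List;

// State and behavior bundled together, so this class could be
// dropped into a completely different program unchanged.
public class BlogPost {
    private final String title;
    private final List<String> comments = new ArrayList<>();

    public BlogPost(String title) {
        this.title = title;
    }

    // Behavior: the post knows how to accept a comment...
    public void addComment(String comment) {
        comments.add(comment);
    }

    // ...and how to render itself to an output sink.
    public void renderTo(StringBuilder out) {
        out.append(title).append('\n');
        for (String c : comments) {
            out.append("  ").append(c).append('\n');
        }
    }
}
```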

Good OO design is characterized by a principle that I recently learned: "Tell, don't ask". This has nothing to do with bullying your objects around. To "tell" your objects is to send messages to them; to "ask" your objects is to write them such that they cough up their internal state to whoever asks. So the principle really means "write your objects so that they expose behavior to the outside world, not so that they expose state". I'd love to go into more detail, but I don't have space right now.
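Here's the difference in miniature (a minimal sketch; the Account class, withdraw, and the balance-in-cents detail are all made up for illustration):

```java
// "Tell, don't ask" in miniature. The balance never leaves the
// object; callers send withdraw() instead of reading state.
public class Account {
    private long balanceCents;

    public Account(long openingCents) {
        this.balanceCents = openingCents;
    }

    // Ask style would be getBalance()/setBalance(), with every
    // caller re-implementing this check (and racing each other
    // to do it). Tell style keeps the decision in one place.
    public void withdraw(long cents) {
        if (cents <= balanceCents) {
            balanceCents -= cents;
        }
    }
}
```

Notice that the ask-style version is also a check-then-act race the moment two threads share the account. That's not a coincidence, and it's where this post is headed.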

If we take TDA to the extreme, we can make some radical moves. First of all, we can completely remove return values. Return values, I claim, are simply an artifact of the procedural days. If you need an object to tell you something, send a callback object to him: you TELL him to TELL you something. Crazy, no? Now, you might point out that this is a lot of work for something that used to be handled by something as simple as a return value. That's a valid concern. However, my gut tells me that the callback scheme would actually simplify a lot of code. Again, I would love to go into this topic more deeply, but I don't have space. Please just assume (as I am assuming) that callbacks are preferable to return values in a lot of cases.
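For a rough sketch of what that might look like, here's the hypothetical Account again, now answering through a callback instead of a return value (BalanceListener is my invention, not an established API):

```java
// The callback interface: instead of returning the balance,
// the account TELLS this listener what the balance is.
interface BalanceListener {
    void balanceIs(long cents);
}

class Account {
    private long balanceCents;

    Account(long openingCents) {
        this.balanceCents = openingCents;
    }

    // No return value: the caller hands the account an object
    // to tell, and the account tells it.
    void reportBalance(BalanceListener listener) {
        listener.balanceIs(balanceCents);
    }
}

class Demo {
    public static void main(String[] args) {
        Account account = new Account(5000);
        account.reportBalance(cents ->
            System.out.println("Balance: " + cents + " cents"));
    }
}
```

Note that nothing about reportBalance() promises the listener will be told immediately. That loophole is exactly what the next paragraphs exploit.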

So, with this mindset, we can really think of method calls on an object instead as sending messages to the object. Since the object can't send anything back (except by sending US a message), there's no guarantee that the target object will process the message right away. In fact, there's no guarantee that the target object will process the message at all. For example, when you're working on a networked application, you need to realize that nothing is ever guaranteed. You might have an object that represents a remote computer. You might send a SendPacket message to that object, but that object might not send a packet out right away. In fact, if the remote computer goes down, the packet might never get sent at all.
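In code, that might be as simple as a method that records the intent and returns immediately (RemoteComputer and its queue are hypothetical):

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Hypothetical proxy for a remote machine. Telling it to send a
// packet is not the same as the packet being sent.
public class RemoteComputer {
    private final Queue<byte[]> outbound = new ArrayDeque<>();

    // Returns nothing and promises nothing: the payload is only
    // queued. If the remote machine goes down, whatever is still
    // queued is simply never delivered.
    public void sendPacket(byte[] payload) {
        outbound.add(payload);
    }
}
```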

Most importantly, this shifts our thinking. In the procedural world, it is vital that all steps be carried out in the correct order, because each step updates data somewhere and returns information to the caller. In the TDA world, this can never happen. Of course, SOME ordering is still needed, but it can be captured differently: an object sends its next message only when it has what it needs, so only the significant ordering is enforced, and everything else is free to happen in any order.

A trend that I am starting to see in systems designed with TDA is that they seem to be "configured" more than "programmed". By that I mean that main() is pretty much responsible for instantiating a bunch of objects and wiring them up correctly. Then main() sends one message to one object and, like a row of dominoes, eventually something comes out of some other object. In the meantime, all of the objects involved were furiously chatting. The important thing is that they were all basically dormant until a message was sent to them; then they did a whole lot of work, sent a whole lot of messages to other objects, and went dormant again.
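A sketch of that shape, with an invented Fetcher → Parser → Logger pipeline standing in for the dominoes:

```java
// Hypothetical pipeline: each object knows only the next object to tell.
class Logger {
    void log(String line) { System.out.println(line); }
}

class Parser {
    private final Logger logger;
    Parser(Logger logger) { this.logger = logger; }
    void parse(String document) {
        logger.log("parsed " + document.length() + " chars");
    }
}

class Fetcher {
    private final Parser parser;
    Fetcher(Parser parser) { this.parser = parser; }
    void fetch(String url) {
        parser.parse("<fake document for " + url + ">");
    }
}

public class Main {
    public static void main(String[] args) {
        // "Configuration": instantiate and wire.
        Logger logger = new Logger();
        Parser parser = new Parser(logger);
        Fetcher fetcher = new Fetcher(parser);

        // One message starts the dominoes; each object goes dormant
        // again as soon as it has told the next one what to do.
        fetcher.fetch("http://example.com/feed.xml");
    }
}
```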

So, what does all of this have to do with concurrency? Suppose that each object in the system had its own thread and message queue. The queue is pumped by the thread, and each incoming message is dispatched to the appropriate message handler. Now, when an object needs to collaborate with other objects, it not only doesn't HAVE to wait until the other object is done, it simply DOESN'T wait. It is executing in a completely different thread that may be running on a different processor (or even a different machine!). Also, since the object doesn't have to wait, it can run through all of its code very quickly and return to a dormant state. While dormant, the object's thread would not get scheduled.
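Here is a minimal sketch of that scheme in Java, assuming messages are represented as Runnables and each object owns a blocking queue (this is essentially the actor model; every name here is mine):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// A minimal sketch of "one thread and one message queue per object".
abstract class ActiveObject {
    private final BlockingQueue<Runnable> mailbox = new LinkedBlockingQueue<>();

    ActiveObject() {
        Thread pump = new Thread(() -> {
            try {
                while (true) {
                    // take() blocks while the mailbox is empty, so a
                    // dormant object's thread never gets scheduled.
                    mailbox.take().run();
                }
            } catch (InterruptedException done) {
                // interrupted: fall through and let the thread die
            }
        });
        pump.setDaemon(true);
        pump.start();
    }

    // Senders never wait: enqueue the message and keep going.
    protected final void tell(Runnable message) {
        mailbox.add(message);
    }
}

// Example: a counter that can only be told things. Its state is
// touched only by its own pump thread, so it needs no locks.
class Counter extends ActiveObject {
    private int count;

    void increment() { tell(() -> count++); }
    void report()    { tell(() -> System.out.println("count = " + count)); }
}
```

One caveat: a full OS thread per object won't scale to millions of objects, which is why systems built this way, Erlang among them (see the comment below), multiplex many mailboxes over a small number of scheduler threads.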

What this effectively does is turn a processor's execution cores into a simple resource, like physical RAM. Just as a program does not need to decide where in physical RAM to store its data (it stores its data in virtual address space, which the processor maps to physical RAM), the program ALSO would not need to worry about how many threads it has or which cores they run on.

There are so many details that I have left out. This post could turn into a whole lot of smaller posts with a lot of detail about various small parts of this scheme. But this is a decent overview. I hope that people read this and have good ideas.

1 comment:

Dan said...

I've recently run into the Erlang programming language. While it's not an object-oriented language, you can get object-like constructs, which Erlang calls processes. A process is a lightweight, schedulable unit of work, and processes can send messages to each other.

You can imagine that processes are close to objects, and that messages are close to public method calls. Private data can simply be private to the process's "main function". And that's pretty much all you need. Inheritance... not so much. Then again, who needs it? You can get the conceptual benefits of inheritance in other ways.