Tech in the 603, The Granite State Hacker

Multiprocessing: How ’bout that Free Lunch?

I remember reading an article, a few years back…

The Free Lunch Is Over: A Fundamental Turn Toward Concurrency in Software, by Herb Sutter

Its tagline: “The biggest sea change in software development since the OO revolution is knocking at the door, and its name is Concurrency.”

Mr. Sutter’s article suggests that because CPUs are now forced to improve performance through multi-core architectures, applications will typically need to employ multi-threading to see performance improvements on newer hardware. He made a great argument. I remember getting excited enough to bring the idea up to my team at the time.

There are a number of reasons why the tagline and most of its supporting arguments appear not to have panned out, and, in retrospect, why that could have been predicted.

So in today’s age of multi-core processing, where application performance gains necessarily come from improved hardware throughput, why does it still feel like we’re getting a free lunch?

To some extent, Herb was right. I mean, really, a lot of applications, by themselves, are not getting as much out of their host hardware as they could.

Both before and since that article, I’ve written multi-threaded application code for several purposes. Each time, the threading was in UI code, and the most common reason for it was to monitor extra-process activities without blocking the UI message pump. Yes, that’s right… in my experience, the most common reason for multi-threading is, essentially, to allow the UI message pump to keep pumping while waiting for… something else.
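To make that concrete, here’s a minimal sketch of the pattern, in C# against the .NET 2.0-era Windows Forms API (BackgroundWorker). The PollExternalService method is a hypothetical stand-in for whatever extra-process activity is being watched; the point is simply that the wait happens off the UI thread, so the message pump keeps pumping.

```csharp
using System;
using System.ComponentModel;
using System.Threading;
using System.Windows.Forms;

public class StatusForm : Form
{
    private readonly BackgroundWorker worker = new BackgroundWorker();
    private readonly Label statusLabel = new Label();

    public StatusForm()
    {
        statusLabel.Dock = DockStyle.Fill;
        statusLabel.Text = "Waiting…";
        Controls.Add(statusLabel);

        // The long wait happens off the UI thread...
        worker.DoWork += delegate(object sender, DoWorkEventArgs e)
        {
            e.Result = PollExternalService();
        };

        // ...and the framework marshals the result back to the UI thread,
        // so the message pump never stops pumping.
        worker.RunWorkerCompleted += delegate(object sender, RunWorkerCompletedEventArgs e)
        {
            statusLabel.Text = (string)e.Result;
        };

        worker.RunWorkerAsync();
    }

    // Hypothetical stand-in for monitoring some extra-process activity:
    // a file drop, a service response, a long-running job, etc.
    private static string PollExternalService()
    {
        Thread.Sleep(2000); // simulate waiting on something outside the process
        return "External work finished.";
    }

    [STAThread]
    static void Main()
    {
        Application.Run(new StatusForm());
    }
}
```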

But many applications really have experienced significant performance improvements in multi-processor / multi-core systems, and no additional application code was written, changed, or even re-compiled to make that happen.

How?

  • Reduced inter-process contention for processor time
  • Client-server architectures (even when co-hosted, due to the above)
  • Multi-threaded software frameworks
  • Improved supporting hardware frameworks

Today’s computers are typically doing more, all the time. The OS itself has a lot of overhead, especially on Windows-based systems. New Vista systems rely heavily on multi-processing to get acceptable performance out of the glitzy new GUI features.

The key is multi-processing, though, rather than multi-threading. Given that CPU time is a resource that must be shared, having more CPUs means less scheduling collision, less single-CPU context switching.

Many architectures are already inherently multi-process. A client-server or n-tier system generally runs as a minimum of two separate processes. In a typical web architecture with an enterprise-grade DBMS, not only do you get built-in, “free” multi-processing, but you also get at least some built-in, “free” multi-threading.

Something else developers don’t seem to have noticed much is that some frameworks are inherently multi-threaded. For example, Microsoft’s Windows Presentation Foundation (WPF), a general-purpose GUI framework, does a lot of its rendering on a separate thread. Simply by building a GUI in WPF, your client application can start to take advantage of additional CPUs, and the program’s author might not even be aware of it. Learning a framework like WPF isn’t exactly free, but typically you’re not choosing that framework for its multi-threading features; multi-threading, in that case, is a nice “cheap” side benefit.

When it comes down to it, though, the biggest bottlenecks in hardware are not the processor, anyway. The front-side bus is the front line to the CPU, and it typically can’t keep even a single CPU’s working set fresh. Give it a team of CPUs to feed, and things get pretty hopeless pretty quickly. (HyperTransport and QuickPath will change this, but only to the extent of pushing the bottlenecks a little further away from the processors.)

So, to recap: to date, the reason we haven’t seen a sea change in application software development is that we’re already leveraging multiple processors in many ways other than multi-threading. Further, multi-threading options have been largely abstracted away from application developers via mechanisms like application hosting, database management, and frameworks.

With things like HyperTransport (AMD’s baby) and QuickPath (Intel’s), will application developers really have to start worrying about intra-process concurrency?

I throw this one back to the Great Commandment… risk management. The best way to manage the risk of intra-process concurrency (threading) is simply to avoid it as much as possible. By continuing to use the techniques mentioned above, we let the 800-lb gorillas do the heavy lifting, and we avoid struggling with race conditions and deadlocks.

When concurrent processing must be done, interestingly, the best way to branch off a thread is to treat it as if it were a separate process. Even the .NET Framework 2.0 has some nice threading mechanisms that make this easy. If communication needs are low, consider actually forking a new process rather than multi-threading.
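As a rough sketch of both options, again in C# against .NET 2.0-era APIs: the worker callback below gets a private copy of its input and hands back a single result (no shared mutable state), and the second half launches a hypothetical stand-alone worker.exe as a genuinely separate process whose only coupling to the parent is a command line going in and an exit code coming back.

```csharp
using System;
using System.Diagnostics;
using System.Threading;

class ConcurrencySketch
{
    static void Main()
    {
        // Option 1: branch off a thread, but treat it like a separate process --
        // hand it a private copy of its input and take back a single result,
        // rather than sharing mutable state. (.NET 2.0's ThreadPool makes this easy.)
        string inputCopy = "work item #1";                  // hypothetical payload
        ManualResetEvent done = new ManualResetEvent(false);
        string result = null;

        ThreadPool.QueueUserWorkItem(delegate(object state)
        {
            // Touches only its own copy of the data; no locks, no races.
            result = "processed: " + (string)state;
            done.Set();
        }, inputCopy);

        done.WaitOne();
        Console.WriteLine(result);

        // Option 2: if communication needs are low, actually fork a new process.
        // "worker.exe" is a hypothetical stand-alone job.
        using (Process worker = Process.Start("worker.exe", "--input data.xml"))
        {
            worker.WaitForExit();
            Console.WriteLine("worker.exe exited with code " + worker.ExitCode);
        }
    }
}
```

Either way, the spirit is the same: keep the units of work isolated, and let the OS scheduler spread them across cores.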

In conclusion, the lunch may not always be free, but a good engineer should look for it anyway. Concurrency is, and always will be, an issue, but multi-core processors were not the event that sparked that evolution.

7 thoughts on “Multiprocessing: How ’bout that Free Lunch?”

  1. Hi Jim, An interesting view. We posted a link to this blog entry on http://www.multicoreinfo.com, a website for a collection of multicore-related resources. I agree with your conclusion. There is some free lunch, and much more needs to be done (whether with concurrency or with some other programming models) to reap the performance benefits of multicore processors.

  2. So, you're saying we don't have to begin building multithreaded programs because other people are building multithreaded frameworks for us? Wouldn't the very fact that those people are building multithreaded frameworks indicate that Herb is right? Just because you aren't building multithreaded code directly doesn't mean you aren't utilizing tools which do, more frequently. Those tools were built by someone who cares about performance. If someone cares about performance and is tackling multithreaded techniques to eke that out, I think that pretty much falls in line with the entire argument of the "no more free lunch" article. In fact, you mention client/server stuff, but so did Herb in his original paper. He very specifically pointed out that we were already seeing the benefit there. This isn't a counter-argument; it's an example of what he was talking about. I don't really see how this argument is a counter-argument, as Wikipedia says it is, or how it fits your title, "How 'bout that Free Lunch?"

  3. Thanks for taking an interest in the subject, M2tM… You might have been too young to remember the OO revolution. The OO revolution touched every software developer out there, across all problem domains. My counterpoint is that this "revolution" was limited, at best. Much of the front-line professional software development community yawned, for a number of good reasons.

  4. Yes, I think you're right that this won't be as big a revolution for the average software developer. I do not believe that the idea of "no more free lunch" is invalid, however. It isn't free; it is mostly costing the experts. I think there's a tendency to go to extremes, and I agree there were some extraneous claims in that paper about a revolution which, I agree with you, won't be as big as the OO revolution. The experts tend to think only about their own expert work, and forget the whole 80/20 thing. But that doesn't mean that there is a "free lunch" as there once was. It used to be you would just wait and your programs would run faster unaltered. Now, even if you aren't writing the code directly, you do have to choose a toolset and environment which involves code written by someone who did take concurrency into account to reap any benefits. I'm a young programmer, only 24 right now. I've been programming for 11 years, professionally for 4. So you're correct, I wasn't there for the OOP revolution; I'm not exactly sure when that happened, it seems to have been a gradual thing. Even today some people don't really seem to get OOP, so I have at least a taste of that issue. Thank you for your reply! I think you've got some good points; mostly I was just pointing out that this isn't an argument that concurrency isn't important and that we can just keep on as we were. Just that most people will be relying on a smaller number of experts' implementations of the solution, so much of the difficulty will be hidden from the lay programmer. -Michael Hamilton

  5. "I think you've got some good points, mostly I was just pointing out that this isn't an argument that concurrency isn't important and we can just keep on as we were." I should mention I don't think you said this, but the way your article was mentioned on Wikipedia implied it. It was suggested your article was promoting programming in a serial fashion, and that wasn't what you said. You said, mostly, that you just had to pick your frameworks and you could avoid worrying about it… to distill (butcher?) a point. Let's just blame Wikipedia for that one.
