Amdahl's Law & Parallel Computing

We talked a lot about the various computing power laws in a previous blog post, and one of the themes was the ascent of parallelization in code. We argued that the industry’s shift toward power-to-performance ratios, as opposed to pure power, means that non-parallel code will produce diminishing returns against the potential of Moore’s Law. Put simply, we think that if your code isn’t parallel, you’ll be working much harder than your competitors and achieving far less.

WHAT IS PARALLEL CODE?

In its simplest definition, parallel code is code that scales linearly as you add cores. Obviously, there are practical limits; supercomputers, for example, almost always require extensive custom refactoring before they deliver anything close to their rated performance. What we’re really talking about is taking advantage of the shift to pooled resources, or cloud computing. Whereas previously you had to “rack ’n stack” your own equipment, the ability to spin servers up (or down) on command has changed the industry. Perhaps more importantly, because bigger, beefier machines are more expensive to manage, consuming lots of cheap computing becomes the winning move. Here’s the difference Amazon makes, in a nutshell.

Previously: If I had 1000 photos to process, and it would take me one hour per photo on my home computer, that’s it: I’m probably either going to modify the procedure and do something less intensive, or I’m going to forget the idea altogether, because I can’t wait 1000 hours.

Total Time: 1000 Hours

Now: If I have 1000 photos to process, and it would take me one hour per photo on my home computer, I rent 1000 Amazon instances the same size as my computer and run one photo on each of them for an hour.

Total Time: 1 Hour

It’s totally impractical for me to own and maintain 1000 servers at my house or even nearby, and yet, with Amazon, I can harness the power of those servers on-demand to solve problems that would’ve been too cumbersome to tackle before. Whether I rent 1 computer for 1000 hours or 1000 computers for 1 hour, the cost is the same.
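
To make the fan-out pattern concrete, here’s a minimal sketch in Python. The process_photo function and the file names are hypothetical stand-ins, and on AWS the “workers” would be separate instances rather than local processes, but the shape of the solution is the same: stop looping, start fanning out.

```python
# A toy version of the photo example: fan the work out instead of grinding through it serially.
# process_photo and the file names are hypothetical stand-ins; on AWS the "workers" would be
# separate instances rather than local processes, but the structure is the same.
from concurrent.futures import ProcessPoolExecutor

def process_photo(photo: str) -> str:
    # Pretend this takes an hour of CPU time per photo.
    return f"{photo} -> processed"

def main() -> None:
    photos = [f"photo_{i:04d}.jpg" for i in range(1000)]

    # Serial: total time is roughly 1000 x (time per photo).
    # results = [process_photo(p) for p in photos]

    # Parallel: total time is roughly (time per photo), once you have enough workers.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(process_photo, photos))
    print(f"processed {len(results)} photos")

if __name__ == "__main__":
    main()
```

The serial version and the parallel version do exactly the same work; the only difference is how many pieces of it are in flight at once.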

So that’s great! I can save time, you can save time, we can all work a good bit faster because of AWS. For something simple like photo processing this is pretty obvious, but unless you’re doing something truly trivial, chances are you’ll have parts of your application you can parallelize in this manner, and parts that you can’t. As luck would have it, this phenomenon is described by a famous principle called “Amdahl’s Law”.

AMDAHL’S LAW

The limit of parallelization is the non-parallel portion of the program. -Amdahl’s Law

Essentially, Amdahl’s Law says that an application is always limited by the portion of the work that can’t be parallelized. If you have an application that takes 3 minutes (180 seconds) to run, and 50% of the work can be parallelized while 50% can’t, then even if the entire parallel section runs simultaneously, the runtime can never drop below 90 seconds; the best possible speedup is 2x. In short, some parts of an application simply have to run sequentially, no matter how many machines you throw at them. The key to winning in the future is to give Amdahl’s Law as little as possible to limit, ergo, to write code that can be parallelized from top to bottom. Whatever is in your app that can’t be parallelized will become the bottleneck, because, inevitably, it’s the only part of the program that can’t take advantage of Moore’s Law.
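
The usual statement of the law is speedup = 1 / ((1 - p) + p / n), where p is the fraction of the work that can be parallelized and n is the number of cores. Here’s a small Python sketch of the 3-minute example above (just the arithmetic, not anything from our codebase):

```python
# Amdahl's Law: speedup = 1 / ((1 - p) + p / n), where p is the fraction of the
# work that can run in parallel and n is the number of cores working on it.

def amdahl_speedup(p: float, cores: int) -> float:
    """Best-case speedup for a program whose parallel fraction p scales across `cores`."""
    return 1.0 / ((1.0 - p) + p / cores)

total_seconds = 180   # the 3-minute application from the example
p = 0.5               # half the work is parallelizable, half is not

for cores in (1, 2, 4, 16, 1_000_000):
    s = amdahl_speedup(p, cores)
    print(f"{cores:>9} cores: {s:.3f}x speedup, runtime ~{total_seconds / s:.1f}s")

# Even with a million cores the runtime only creeps toward 90 seconds,
# because the serial half of the program never gets any faster.
```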

CONCLUSION

The limitation of parallel code is non-parallel code, and in practice every application contains both. It may well be impossible to write code that runs completely in parallel, but the closer you can get, the more horsepower your application will be able to consume over time. Parallelization is a generational leap in computing, only this time it’s not the hardware making the leap, it’s your code.

At 2600Hz we wrestle with this code constantly. Finding and eliminating points of serialization is of paramount importance, and we were recently able to parallelize a large number of API requests, which dramatically improved the experience of using our GUI and our APIs. Over time, these points of parallelization will only prove more valuable. At the same time, we think about delays just as much: “Why does this program have to wait for that program to respond?” or “Is there a way we can do this before that?” When you write parallel code, you have to think about eliminating gaps and eliminating interruptions (like global garbage collection pauses under load). Erlang, with its per-process garbage collection model, excels at managing individual processes even under extreme load. For us, operating a globally distributed cluster of telecom systems with a shared messaging bus would be impossible without parallel code, and in our offices that manifests as our love for Erlang.
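
To illustrate the API example, here’s a sketch of the general idea (not our actual code, and with made-up endpoint URLs): independent requests spend their time waiting on the network rather than the CPU, so issuing them concurrently collapses many round-trip delays into roughly one.

```python
# A sketch of parallelizing independent API requests. The endpoint URLs are
# placeholders; only requests that don't depend on each other's results can
# safely be issued concurrently like this.
from concurrent.futures import ThreadPoolExecutor
import urllib.request

endpoints = [
    "https://example.com/api/users",
    "https://example.com/api/devices",
    "https://example.com/api/callflows",
]

def fetch(url: str) -> bytes:
    with urllib.request.urlopen(url, timeout=5) as resp:
        return resp.read()

# Sequential: total time is roughly the sum of the individual round trips.
# responses = [fetch(url) for url in endpoints]

# Concurrent: total time is roughly the slowest single round trip.
with ThreadPoolExecutor(max_workers=len(endpoints)) as pool:
    responses = list(pool.map(fetch, endpoints))
```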

In summary, parallel code is the recipe for unlocking Moore’s Law. Amdahl’s Law says that the non-parallel portion of your app sets the ceiling on how much faster it can ever get. Figuring out how to make more things run at the same time is really important, and it will only grow more important over time. At 2600Hz, we want to be able to “spin ’n scale” and, for us, parallel code is the only way to achieve that at the pace of Moore’s Law.

Tagged: archives, business