HPC Matters is a joint blog in which contributors from the Tabor Communications team share their observations and insights into HPC matters.
June 05, 2008
Wile E. Coyote is doomed. Hanging in space, he is about to fall, and everyone knows it but him. We all saw it coming. Poor Coyote.
Yet strangely, he doesn't fall right away. According to the alternate-reality rules of cartoon physics, the Coyote must first look down and realize he is standing on thin air. He then has time to gather his thoughts and issue a final, desperate wave before -- poof! -- he plummets body first, his head lingering in the frame just long enough for viewers to witness a comical last-second grimace before it too disappears.
Know what else we saw coming? The crash in HPC application performance being brought about by the transition to multicore processors. We've been watching the race, as applications (Codus productivus) desperately chased processors (Waferii siliconium) up the performance mountain. Then suddenly multicore arrived and -- meep! meep! -- the CPUs put on a burst of speed and zoomed around a bend, leaving application software headed straight for a cliff. HPC users were doomed. Everyone knew it. Poor users.
What's this? Application performance hasn't dramatically suffered? Users are satisfied with the performance they're getting? How is this possible? The answer: cartoon physics.
According to our most recent research, the reason performance hasn't plummeted is that users haven't been forced to confront the problem yet. Rather than introducing a new level of parallelism within the socket, most users have responded by running a separate job on each core. Sure, they're buying a lot more memory to do it -- configured memory per core is holding relatively stable, so configured memory per socket is skyrocketing as core counts climb (keep 2 GB per core, say, and a quad-core socket now carries 8 GB where a single-core socket needed only 2) -- but at least throughput is scaling. For now.
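To make the distinction concrete, here is a minimal sketch of the two approaches. It assumes C with OpenMP purely for illustration -- the post doesn't endorse any particular programming model -- and the program and file names are hypothetical. The first approach changes no code at all; the second is what "parallelism within the socket" actually demands.

/* Approach 1: one independent serial job per core (no code changes).
 * From the shell, on a hypothetical quad-core socket:
 *
 *   for i in 0 1 2 3; do ./serial_solver input$i.dat & done
 *
 * Each copy carries its own working set, so configured memory must
 * grow with the core count even though the application is untouched.
 */

/* Approach 2: one job parallelized across the socket's cores. */
#include <stdio.h>
#include <omp.h>

#define N 1000000

int main(void)
{
    static double a[N], b[N];
    double sum = 0.0;

    for (int i = 0; i < N; i++) {
        a[i] = 1.0;
        b[i] = 2.0;
    }

    /* The loop iterations are divided among the cores; the arrays
     * are allocated once and shared rather than replicated per job. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++)
        sum += a[i] * b[i];

    printf("dot product = %.1f using up to %d threads\n",
           sum, omp_get_max_threads());
    return 0;
}

Compile with, e.g., gcc -fopenmp. The point of the sketch is only that the second route requires touching the source -- exactly the work users have so far deferred.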
We've gone off the cliff; we just don't know it yet. Because those cores aren't getting any faster, we will soon have to come to grips with the reality that new tools or programming models are needed to stay in the race. Look down, everybody. The ground isn't there. Now is the time to hold up a little "Oh, no!" sign and wave to the camera.
This is going to hurt, but fear not. The Coyote is resilient, and he always comes up with a new scheme. Soon he'll be back in the race and chasing right behind the Road Runner again.
The ISC conference in Dresden is coming up, and the new things I'll most want to see are tools for improving application performance yield in large-scale, multicore systems. Acme Application Optimizers, anyone?
Posted by Addison Snell - June 04, 2008 @ 9:00 PM, Pacific Daylight Time
Addison Snell is the CEO of Intersect360 Research and a veteran of the high performance computing industry. During his tenure, he has established Intersect360 Research as a premier source of market information, analysis and consulting.