October 04, 2010
Last week I spent a great deal of time in conversation with Maria Iordache, a former developer at Abacus and a member of IBM’s life sciences division, who worked on some of the initial phases of IBM’s Blue Gene project and on its early efforts to define the elements of on-demand computing resources, particularly as they might be used by researchers.
Our conversation began innocently enough; I wandered by the Brocade table at the R Systems-sponsored HPC 360 event and browsed through some of the literature on display. Maria approached and we began talking, but the moment I mentioned “HPC” and “cloud” in the same sentence, her eyes widened and she launched into a strong diatribe about the relationship (or lack thereof) between HPC and virtualized resources.
As you might imagine, this is not the first time I’ve heard comments like these, often from researchers themselves, who cannot imagine adding another layer of latency into the mix via virtualization. In Maria’s view, however, there is no “HPC” that happens in a virtualized environment, since the very definition at hand (high performance computing) falls apart the moment performance is compromised.
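To make the objection concrete: the overhead in question is usually demonstrated empirically, by running the same microbenchmark on bare metal and inside a virtual machine and comparing the results. Below is a minimal sketch of one such benchmark in C; this is my own illustration rather than anything Maria or IBM provided, and a copy loop like this mostly stresses memory, while in practice the larger virtualization penalties tend to show up in network latency and I/O.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define N    (1 << 26)   /* 64M doubles, roughly 512 MB per buffer */
#define REPS 10          /* repeat the copy to smooth out noise */

int main(void) {
    double *a = malloc(N * sizeof(double));
    double *b = malloc(N * sizeof(double));
    if (!a || !b) { perror("malloc"); return 1; }
    memset(a, 0, N * sizeof(double));   /* touch pages before timing */

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int r = 0; r < REPS; r++)
        memcpy(b, a, N * sizeof(double));
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    double gbytes = (double)REPS * 2 * N * sizeof(double) / 1e9;  /* read + write */
    /* reading b[0] keeps the compiler from discarding the copies */
    printf("effective bandwidth: %.2f GB/s (check: %g)\n", gbytes / secs, b[0]);

    free(a);
    free(b);
    return 0;
}
```

Compile with something like "gcc -O2 bench.c -o bench", run it natively and again inside the guest, and the gap between the two numbers is the performance sacrifice the researchers keep pointing to.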
It’s hard to take issue with this argument when it arises. Interestingly, the only time I get smirks or “them’s fightin’ words” looks when I mention HPC and cloud as a married couple is when I am surrounded by researchers, people in the academic space. For instance, when I went to the International Supercomputing Conference (ISC) in June, I had conversations just like this one with Maria at least a few times per day. Seriously. At an event like GTC, however, or in conversations with enterprise customers (many of whom have HPC resource requirements but don’t always call what they’re doing HPC at the application level), it rarely comes up. Why is this?
I wonder if Maria is right about the perception issue: that there is a great deal of misunderstanding about the all-important performance sacrifice that comes with a virtualized environment. The vendors certainly aren’t talking about this topic, so if it’s only the research and academic community that makes a stink about it, and enterprise users of HPC (again, whether they call it HPC or not) either aren’t aware of it, aren’t worried about it, or don’t understand that it really is a big deal, what can we make of the argument?
So I’ll just let Maria speak for all of you who roll your eyes whenever someone uses “HPC” and “cloud” in the selfsame sentence, because she does, after all, have a point. At least for now.
Posted by Nicole Hemsoth - October 04, 2010 @ 12:53 PM, Pacific Daylight Time
Nicole Hemsoth is the managing editor of HPC in the Cloud and discusses a range of overarching issues related to HPC-specific cloud topics in her posts.