July 01, 2010
We have case studies. We have examples of HPC in action in the cloud in one form or another. But for the still-vast majority of large supercomputing centers, the terra firma of traditional HPC, cloud discussions are but fluff. They revolve around business models, and for the lofty aims of the supercomputing elite, business models have nothing to do with the world they live in.
That resistance is rooted in solid arguments now, no question about it, but as the technologies that underlie the cloud evolve, including low-latency boosts to cloud performance, those arguments are bound to weaken, leaving only a question of culture. And this will probably coincide with the arrival of a new group of thinkers who were weaned on the cloud in its infancy via everything-imaginable-as-a-service. That means any of us under, what, say, 25?
To quote an unnamed director of a supercomputing site following an off-the-cuff discussion at the International Supercomputing Conference this year, “what everyone’s forgetting is that we [supercomputing centers and large research centers] have no incentive to have anything to do with clouds. They offer no real benefit outside of cost—and that’s not even assured. It’s just a business model. It has no value and we have no reason to consider it since what we have now performs well. Give me a reason to think it is going to revolutionize what we’re doing and I’ll be glad to take a look, but it has nothing to offer, at least not yet.”
When it comes to clouds and HPC, at least in the scientific computing arena, the key question almost invariably comes down to performance, which has led to the overarching question: “If cost isn’t the issue, why would I ever bother to experiment with the cloud when standard HPC has been providing the compute resources I needed to begin with?” This question most often stems from research and academia where, true enough, performance is predictable, there are no concerns about virtualization slowdowns and bottlenecks, and the security concerns are essentially the same ones that have been around since the cluster was first unleashed.
In short, it is hard to imagine a bright future for HPC in the cloud when the single greatest concern for HPC is rooted in performance, and the single most-discussed flaw with the cloud is performance. The two most important goals are completely at odds with one another. At least in the present.
It’s hard to disagree with these points in the context of the present: where is the incentive for big HPC users if the performance question still hasn’t been broadly answered? To refine the question a little further, where is the incentive for HPC users who have already invested in their own clusters, when the smaller outfits with ready-made HPC on demand via the cloud are often already on board simply because of cost?
Even more important, if the incentive does come, who is to say it will ever mean much, considering that so many jobs depend on the world staying just as it is, thank you very much.
These questions have substance, but underlying all of them is the issue of culture. Outside of performance, service-level worries, and the rest of the mess of cloud issues we are all well aware of, all the incentive in the world still might not be enough. This makes me wonder whether the fresh crop of graduating PhD students trickling forth from Berkeley, MIT, and other universities that are looking to clouds in varying ways will change the culture. Actually, I don’t really wonder…do you?
It was interesting that the unnamed source quoted at the beginning clarified his position by inserting the “not yet,” as it might mean there is a glimmer of interest, if, of course, the performance roadblocks are lifted. Or perhaps it was wishful thinking on my part that he was actually considering the possibilities somewhere in the back of his mind. But probably not.
Posted by Nicole Hemsoth - July 01, 2010 @ 9:02 AM, Pacific Daylight Time
Nicole Hemsoth is the managing editor of HPC in the Cloud and will discuss a range of overarching issues related to HPC-specific cloud topics in her posts.