August 10, 2009
Ian Foster penned an interesting blog post last week comparing the utility of a supercomputer to that of a public cloud for HPC applications. Foster pointed out that while the typical supercomputer might be much faster than a generic cloud environment, the turnaround time might actually be better in the cloud. He argues that "the relevant metric is not execution time but elapsed time from submission to the completion of execution."
Foster's premise is that users must wait a significant amount of time for their job to run on a typical supercomputer, while the wait time for a cloud is usually much shorter. He writes:
For example, let's say we want to run the LU benchmark, which (based on the numbers in Ed's paper) when run on 32 processors takes ~25 secs on the supercomputer and ~100 secs on EC2. Now let's add in queue and startup time:
On EC2, I am told that it may take ~5 minutes to start 32 nodes (depending on image size), so with high probability we will finish the LU benchmark within 100 + 300 = 400 secs.
On the supercomputer, we can use Rich Wolski's QBETS queue time estimation service to get a bound on the queue time. When I tried this in June, QBETS told me that if I wanted 32 nodes for 20 seconds, the probability of me getting those nodes within 400 secs was only 34%--not good odds.
So, based on the QBETS predictions, if I had to put money on which system my application would finish first, I would have to go for EC2.
This seems especially relevant in the case of shorter HPC jobs. In the example above, the time spent waiting for the system is the dominant component of turnaround time on both the supercomputer and the cloud. But since the cloud is built for accessibility, it has the advantage over an HPC machine, which is built for raw speed. It's probably safe to say that, in general, the faster the supercomputer, the greater its demand will be, and thus the longer the wait time. If true, the purpose-built supercomputer may only be competitive for longer-running HPC jobs.
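To make Foster's back-of-the-envelope comparison concrete, here is a minimal sketch in Python using the numbers from his example. The exponential queue-wait model is an assumption of mine, fitted so that the probability of getting the nodes within 400 seconds is 34%, as QBETS reported; the real queue-time distribution on any given machine will differ.

```python
# Rough sketch of the turnaround-time comparison from Foster's LU benchmark
# example. The exponential queue-wait distribution is an assumption, fitted
# to the single QBETS data point P(wait <= 400 s) = 0.34.
import math

LU_SUPER_EXEC = 25    # seconds, LU benchmark on 32 supercomputer nodes
LU_EC2_EXEC   = 100   # seconds, LU benchmark on 32 EC2 instances
EC2_STARTUP   = 300   # seconds, ~5 minutes to start 32 EC2 nodes

# Fit an exponential wait-time distribution to the QBETS estimate:
# P(wait <= 400 s) = 0.34  =>  mean = -400 / ln(1 - 0.34)  (~963 s)
QUEUE_MEAN = -400.0 / math.log(1 - 0.34)

def p_supercomputer_done(deadline):
    """Probability the supercomputer job finishes within `deadline` seconds."""
    wait_budget = deadline - LU_SUPER_EXEC
    if wait_budget <= 0:
        return 0.0
    return 1 - math.exp(-wait_budget / QUEUE_MEAN)

def p_ec2_done(deadline):
    """Probability the EC2 job finishes within `deadline` (startup modeled as fixed)."""
    return 1.0 if deadline >= EC2_STARTUP + LU_EC2_EXEC else 0.0

for deadline in (400, 800, 1600, 3200):
    print(f"{deadline:>5} s   supercomputer: {p_supercomputer_done(deadline):.2f}"
          f"   EC2: {p_ec2_done(deadline):.2f}")
```

Under these assumptions the cloud wins handily at the 400-second mark, while the supercomputer only catches up once the deadline is long enough to absorb the queue wait, which is the crossover behavior Foster's argument hinges on.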
From the user's point of view, the key is to be able to estimate service times prior to selecting a platform. Ideally, you would like to be able to negotiate a service level agreement (SLA) dynamically with all potential providers, including your neighborhood supercomputer, at various price points. Right now, of course, this is science fiction. There are currently no standards in place to auto-negotiate SLAs across providers. But it's not too hard to imagine why such a feature would be a great thing for the cloud computing biz.
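What might such auto-negotiation look like? The following is a purely hypothetical sketch: the Provider class, its quote interface, and the sample prices are all invented for illustration, since no such standard exists today.

```python
# Hypothetical sketch of automated SLA negotiation across providers.
# The Provider/Quote classes, the quote() interface, and all numbers
# are invented for illustration; no such standard or API exists.
from dataclasses import dataclass

@dataclass
class Quote:
    provider: str
    expected_turnaround: float  # seconds, queue/startup plus execution
    price: float                # dollars for the whole job

class Provider:
    def __init__(self, name, startup, exec_time, rate):
        self.name = name
        self.startup = startup      # expected queue or boot time (s)
        self.exec_time = exec_time  # expected execution time (s)
        self.rate = rate            # dollars per node-hour

    def quote(self, nodes):
        turnaround = self.startup + self.exec_time
        price = nodes * (self.exec_time / 3600.0) * self.rate
        return Quote(self.name, turnaround, price)

def negotiate(providers, nodes, deadline, budget):
    """Pick the cheapest quote that meets both the deadline and the budget."""
    quotes = [p.quote(nodes) for p in providers]
    feasible = [q for q in quotes
                if q.expected_turnaround <= deadline and q.price <= budget]
    return min(feasible, key=lambda q: q.price) if feasible else None

# Example run, with numbers loosely based on the LU benchmark scenario above.
providers = [
    Provider("public cloud", startup=300, exec_time=100, rate=0.80),
    Provider("campus supercomputer", startup=960, exec_time=25, rate=0.30),
]
print(negotiate(providers, nodes=32, deadline=600, budget=5.0))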
Posted by Michael Feldman - August 10, 2009 @ 4:10 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.