November 22, 2008
I spent a couple of days this week at the Supercomputing 08 conference in Austin, Texas, and I was excited to write this blog post about how cloud computing might be relevant for high-performance computing. Then I read this article on HPCwire, written by Thomas Sterling and Dylan Stark of LSU, which does the subject just a tad more justice than I can.
I still want to make a few extra points, though. The first is that I saw a presentation by John Storm, an executive director within Morgan Stanley's Institutional Securities division, who talked about how financial services firms are using HPC. Two disparate comments by Storm caught my attention: (1) that Monte Carlo simulations comprise the majority (up to 70 percent) of HPC computations; and (2) that the law of diminishing returns rears its ugly head most notably around power bills. Those two points connect: Monte Carlo jobs are embarrassingly parallel and tend to run in bursts, which makes them a natural fit for on-demand capacity. It's not unheard of for banks to use Amazon EC2 for Monte Carlo sims, so I wonder how many, after doing the energy math, actually are. How many are seriously considering it?
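For those who haven't run one, here's a minimal sketch of the kind of Monte Carlo job Storm was describing: pricing a European call option in Python. The inputs are made-up illustrations, not figures from Storm's talk.

    import math
    import random

    def price_european_call(spot, strike, rate, vol, maturity, n_paths):
        """Monte Carlo price of a European call under geometric Brownian motion."""
        payoff_sum = 0.0
        for _ in range(n_paths):
            # One normal draw simulates the terminal stock price for this path.
            z = random.gauss(0.0, 1.0)
            s_t = spot * math.exp((rate - 0.5 * vol ** 2) * maturity
                                  + vol * math.sqrt(maturity) * z)
            payoff_sum += max(s_t - strike, 0.0)
        # Discount the average payoff back to today.
        return math.exp(-rate * maturity) * payoff_sum / n_paths

    # Hypothetical inputs: $100 stock, $105 strike, 5% rate, 20% vol, 1 year.
    print(price_european_call(100.0, 105.0, 0.05, 0.20, 1.0, 200000))

Because every path is independent, you can split the paths across as many machines as you can get your hands on and average the results, which is exactly why this workload maps so cleanly onto rented EC2 capacity.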
Also on the power front, a Wednesday panel discussed the power struggles surrounding high-end supercomputers and large enterprise datacenters. There are about a dozen computers on the Top500 list drawing between 1.2 and 7 megawatts of power (the peak belonging to Cray's new Jaguar supercomputer), and commercial datacenters tend to draw between 36 and 100 megawatts (and now occupy up to 200,000 square feet of space). I'm not suggesting the types of apps running on Jaguar would work in a cloud environment, but small-time or infrequent HPC users certainly could see significant capital and operational savings by using an HPC-capable cloud like EC2 instead of buying their own system. Commercial users might note the cloud's increasing readiness for them, too.
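To put rough numbers on the power bill, the back-of-the-envelope math looks like this (the $0.10-per-kilowatt-hour rate is my own illustrative assumption, not a figure from the panel):

    # Rough annual electricity cost for a 7 MW machine like Jaguar.
    # The $0.10/kWh rate is an illustrative assumption; real rates vary widely.
    megawatts = 7.0
    hours_per_year = 24 * 365              # 8,760 hours
    rate_per_kwh = 0.10
    annual_kwh = megawatts * 1000 * hours_per_year
    print("annual power bill: $%.1fM" % (annual_kwh * rate_per_kwh / 1e6))
    # Prints roughly $6.1M, before cooling and facility overhead.

At those stakes, it's easy to see why someone who only needs a cluster a few weeks a year would rather make the power bill Amazon's problem.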
God knows there are plenty of HPC solutions already leveraging EC2. Univa UD’s UniCluster software can run on EC2, and a company called CycleComputing builds on-demand Condor pools for its customers with its CycleCloud service. Wolfram Research has enabled its Mathematica product to run on EC2, as well. Oh, and Amazon itself made life easier a few months back with its High-CPU instances. According to Amazon:
Instances of this family have proportionally more CPU resources than memory (RAM) and are well suited for compute-intensive applications.
EC2 Compute Unit (ECU) -- One EC2 Compute Unit (ECU) provides the equivalent CPU capacity of a 1.0-1.2 GHz 2007 Opteron or 2007 Xeon processor.
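If you're wondering what getting at those High-CPU instances actually looks like, here's a minimal sketch using the open-source boto library. The AMI ID and key pair name below are hypothetical placeholders, and it assumes your AWS credentials are already configured.

    import boto

    # Connect using AWS credentials from the environment or boto config.
    conn = boto.connect_ec2()

    # Launch four High-CPU Extra Large (c1.xlarge) instances.
    # 'ami-12345678' and 'my-keypair' are placeholders, not real values.
    reservation = conn.run_instances(
        'ami-12345678',
        min_count=4,
        max_count=4,
        key_name='my-keypair',
        instance_type='c1.xlarge',
    )
    for instance in reservation.instances:
        print(instance.id, instance.state)

From there, software like UniCluster or CycleCloud handles turning a pile of raw instances like that into something that looks like a cluster.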
More on HPC and cloud computing in general can be found here.
In case you missed it …
Be sure to check out these announcements, which could have big impacts:
Posted by Derrick Harris - November 22, 2008 @ 10:40 AM, Pacific Standard Time
Derrick Harris is the Editor of On-Demand Enterprise