June 01, 2012
High-performance computing (HPC) resources are seeing growing adoption outside their traditional market. While the technology has long been commonplace in research institutions, large enterprises are beginning to tap its potential. A number of advancements are responsible for this change; most notably, cloud services have made supercomputers far more accessible. TechWorld discussed the trend earlier this week.
Supercomputers have benefited from improvements in processor design, open source applications and storage. All of these changes help reduce the overall cost of a system, but owning a supercomputer still remains infeasible for many enterprise organizations.
Providing a suitable facility to house a cluster and acquiring the system hardware typically requires a large initial investment. Beyond the capital needed to procure a supercomputer, businesses must also factor in the operational cost of running it: the going rate for power is roughly $1 million per megawatt-year.
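To put that rule of thumb in perspective, here is a minimal back-of-the-envelope sketch; the 2 MW draw and five-year lifetime are hypothetical figures chosen for illustration, not numbers from the article:

```python
# Back-of-the-envelope operational power cost for an HPC cluster,
# using the article's going rate of ~$1 million per megawatt-year.

COST_PER_MEGAWATT_YEAR = 1_000_000  # USD; the article's figure

def annual_power_cost(draw_megawatts: float) -> float:
    """Estimated yearly power bill for a system with the given draw."""
    return draw_megawatts * COST_PER_MEGAWATT_YEAR

draw_mw = 2.0       # hypothetical mid-size cluster (assumption, not from the article)
lifetime_years = 5  # hypothetical service life (assumption)

yearly = annual_power_cost(draw_mw)
print(f"~${yearly:,.0f} per year in power")                             # ~$2,000,000
print(f"~${yearly * lifetime_years:,.0f} over {lifetime_years} years")  # ~$10,000,000
```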
Instead of purchasing their own system, an enterprise can outsource HPC resources to a third-party Infrastructure-as-a-Service (IaaS) provider using an on-demand model. This option is far more affordable by comparison, since customers pay only for the time they use.
The cloud model works for a number of scenarios, including graphics rendering, computational fluid dynamics (CFD) simulations and other non-continuous workloads. A prime example came from Cycle Computing last month: the company built a 50,000-core cluster on Amazon Web Services to assist in cancer drug discovery. Spanning datacenters on four continents, the virtual supercomputer ran for only three hours and cost Cycle just $4,900. The company's CEO noted that building a comparable private supercomputer would cost anywhere between $20 million and $30 million.
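A quick sketch of why that comparison is so lopsided, using only the figures quoted above and treating them as exact purely for the sake of arithmetic:

```python
# Rough comparison of the on-demand run against building a private
# cluster. All dollar figures come from the article; treating them as
# exact is an assumption made only for illustration.

RUN_COST = 4_900             # USD: the ~3-hour, 50,000-core AWS run
BUILD_COST_LOW = 20_000_000  # USD: low end of the CEO's estimate
BUILD_COST_HIGH = 30_000_000 # USD: high end of the CEO's estimate

runs_low = BUILD_COST_LOW / RUN_COST
runs_high = BUILD_COST_HIGH / RUN_COST
print(f"The capital cost alone buys roughly {runs_low:,.0f} to "
      f"{runs_high:,.0f} comparable on-demand runs")
# -> roughly 4,082 to 6,122 runs, before power and staffing are counted
```

Even granting that a private system would serve many workloads over its lifetime, the gap illustrates why intermittent jobs are such a natural fit for the on-demand model.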
For all the potential benefits cloud services provide, they exhibit a number of limitations as well. Virtualization, latency and security are typical areas of concern (although a number of adopters actually report better security after migrating to the cloud). Virtualization taxes system performance, but that can be countered by choosing a bare-metal IaaS provider. Latency, on the other hand, can be harder to overcome, since performance depends on the network connection between the client and the datacenter.
Clive Longbottom, noted analyst and service director at Quocirca, acknowledged that cloud services are not suitable for all HPC workloads. Companies that provide uninterrupted services or rely heavily on supercomputing resources are poor candidates; oil and gas, large pharmaceutical, and financial firms fall into this category.
All told, Longbottom views HPC cloud services as the natural evolution of supercomputing:
"We've gone from the supercomputers of old (the Crays and so on) to clusters, virtualization and then from grid computing to cloud."