April 30, 2012
Determining the most cost-effective HPC infrastructure can be a daunting task for users. While each case is different, a number of key factors can be considered in the decision to run HPC operations in house or in the cloud.
Last week, ADMIN magazine interviewed Bill Nitzberg, CTO of Altair's PBS Works Division. He offered insight into infrastructure decisions and discussed Altair's HPC management software as well.
A self-proclaimed cynic of cloud computing, Nitzberg pointed out past network-based computing models: "…in the '70s with distributed computing, in the '80s with network computing, in the '90s with network Sparc stations and cluster computing, and then in the 2000s with grid computing – and now we have cloud computing."
However, a skeptical attitude hasn't kept him from recognizing opportunities born from the platform. Enterprise email servers, for example, often run at roughly 20 percent utilization, leaving most of their capacity untapped. Nitzberg believes datacenters can consolidate these systems to reduce waste.
HPC is different, however: most systems are heavily used, and utilization above 70 percent is not uncommon. In this respect, cloud computing has to present a different set of benefits for HPC users.
One advantage is the layer of abstraction created by an on-demand infrastructure.
"…when you think of cloud on the business side, you don't really care what is behind the interface when you log in. It could be a whole bunch of people, or it could be a whole bunch of machines. You don't have to know. And that actually carries over from the data center market to the HPC market."
Beyond the interface, ROI can become the ultimate decider. While larger enterprises can save capital by building and operating their own clusters, the story isn't the same for all organizations.
"If you are a small player, and only one month out of the year – or, say, two months out of the year – you redesign some part, and you only need to use HPC computing for two months out of the year, then actually using HPC cloud computing is a huge advantage over trying to buy and manage your own," offers Nitzberg.
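Nitzberg's argument can be made concrete with a back-of-the-envelope comparison: amortize the purchase price and operating costs of an owned cluster over its lifetime, and weigh that against paying cloud rates only for the months the capacity is actually needed. The sketch below illustrates this; every figure in it (purchase price, hourly rate, node count, lifetime) is a hypothetical assumption for illustration, not a number from Altair or the article.

```python
# Hypothetical break-even sketch for the "two months a year" scenario
# Nitzberg describes. All dollar figures are illustrative assumptions.

def annual_on_prem_cost(capex, lifetime_years, annual_opex):
    """Amortized yearly cost of owning a cluster (hardware plus
    power, cooling, admin, and support)."""
    return capex / lifetime_years + annual_opex

def annual_cloud_cost(hourly_rate, nodes, hours_per_month, months_used):
    """Pay-as-you-go cost for only the months the capacity is needed."""
    return hourly_rate * nodes * hours_per_month * months_used

# Assumed numbers: a $500k cluster amortized over 4 years with $60k/yr
# in operating costs, versus renting 32 nodes at $2.50/node-hour,
# running around the clock for 2 months of the year.
on_prem = annual_on_prem_cost(capex=500_000, lifetime_years=4,
                              annual_opex=60_000)
cloud = annual_cloud_cost(hourly_rate=2.50, nodes=32,
                          hours_per_month=720, months_used=2)

print(f"on-prem: ${on_prem:,.0f}/yr")   # $185,000/yr
print(f"cloud:   ${cloud:,.0f}/yr")     # $115,200/yr
```

Under these particular assumptions the occasional user comes out well ahead in the cloud; as the months of use grow toward year-round operation, the comparison flips, which is why heavily utilized HPC shops still tend to buy.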
Altair has developed its tool suite with in-house and cloud infrastructures in mind. Applications like Compute Manager, PBS Professional and HyperWorks were all built on a cloud stack, allowing them to function in an on-demand environment as well.
Those features play to Nitzberg's advice for new HPC users. For an organization weighing whether to buy its own cluster, the CTO recommends starting with HyperWorks On-Demand and making the final decision based on that experience.