June 01, 2010
It might have been difficult a number of years ago to imagine offering HPC as a service, but we're now living in the era of everything-as-a-service, so why should high performance computing be exempt? As enterprise and research institutions face dramatic compute resource needs without wanting the burden of maintaining a cluster, the cloud is a hot topic—and for good reason. Even with all of the discussions about security (many of which seem nebulous and treat security as an overarching, non-specific concern), the move to the cloud becomes something of a no-brainer, particularly because budgets demand the scalability and cost flexibility that cloud provides.
If you were to do a search for HPC as a service, one of your first results would be the trademark owner of that idea—Penguin Computing. The company's on-demand offering, called POD, has gained significant traction in academia, manufacturing, and aerospace, and, like many other long-time players in the HPC space, the company had a booth at ISC. I sat down with Penguin Computing's manager of software development to discuss the concept of HPC as a service and the cloud in general, as well as to get some insights about POD and its uses and applications. In this case, the discussion is in the context of a biosciences firm, Life Technologies, which has made use of POD in the same way that other life sciences companies are utilizing similar offerings from other vendors, as will be addressed in later updates from the ISC floor.
While other companies offer roughly the same service that Penguin Computing does, it was worthwhile to hear about some of the end user experiences to get an idea of who is looking beyond an in-house solution in order to avoid the time and management burden of running an on-site cluster for occasional or underutilized workloads. Since HPC as a service is inherently scalable, it stands to reason that those making the most use of a service like POD would be research institutions and industry sectors that require big data processing, but in unexpected loads, bursts, or cycles that might otherwise be hard to predict or justify investing in for occasional use.
At ISC this year, Penguin Computing and other cloud vendors who've been willing to talk details about their end users consistently point to bioscience, R&D, and financial services—in that order. One reason the cloud appeals to these sectors is that provisioning for anticipated maximum need would be not only wasteful of compute resources but incredibly expensive.
Posted by Nicole Hemsoth - June 01, 2010 @ 6:09 AM, Pacific Daylight Time
Nicole Hemsoth is the managing editor of HPC in the Cloud and will discuss a range of overarching issues related to HPC-specific cloud topics in posts.