August 17, 2010
One of the most salient arguments for supercomputing in the cloud is difficult to disagree with: the cost of building and then maintaining what could qualify as a supercomputer. As Jeffrey Clark noted today in The Datacenter Journal, “as long as the expenses associated with using the cloud to perform supercomputing tasks do not quickly approach the cost associated with implementing an in-house supercomputer, the risks (financially, at least) are minimal.”
At this point, even factoring in the complex management software and the initial application and migration costs of moving once in-house operations to the cloud, the prospect of the cloud being out-priced by physical infrastructure is not a concern in the slightest. What is worrisome for cloud providers, even with the price advantage, is convincing supercomputer and HPC users to entrust their workloads to an “untrusted technology.”
While many will argue that cloud is, first of all, not a technology to begin with and that, furthermore, it’s not mistrusted, let’s set this aside for a moment and acknowledge that for many, especially in the enterprise space, the cloud is not fully tested and, as such, is not fully trusted. This is particularly the case when it comes to mission-critical tasks—and that paradigm doesn’t appear to be shifting in the cloud’s favor anytime soon due to security concerns.
It is only when the big nasty “S” word (security, of course) rears its ugly head that this risk-benefit argument loses its steam, but for many who require supercomputing capacity, this is of the utmost importance. Bio-IT companies, financial services, manufacturing firms that protect their designs with a fervor roughly comparable to what they’d lavish on their own children—all of these markets are forced to weigh this risk and, in some cases, are required to pay hefty fines for lapses in security or compliance obligations.
Jeffrey Clark stated that from a cost perspective, “Giving the cloud a test run or two may cost some money, but it may also offer significant returns if successful; that is, if cloud-based supercomputing could be of potential benefit for a particular company, that company has little to lose by trying.” Then again, there could be something big to lose—and no one wants to take that chance. This is why analyzing the experiences of early adopters is so important at this stage. While the cloud is helping small and mid-sized businesses sail without question, when we venture into the realm of HPC, the situation is completely different. What might be a mild concern for an SMB is magnified exponentially for large-scale computing users.
Full story at Datacenter Journal