May 11, 2011
Chances are, if you’ve been lurking around here for some time, you’re already quite familiar with cloud computing in the HPC context. However, it’s easy to get lost in the minutiae that constitute those clouds—the management layers, virtualization, latency, and beyond.
To put things into perspective, we’re posting a solid overview (along with a link for some free time on Azure, which is running in tandem with the free Amazon trials) from a researcher focused directly on the practical side of running HPC applications on remote resources.
Rob Gillen, a cloud computing researcher with Planet Technologies out of Knoxville, Tennessee, spent a few moments on video laying down some of the core concepts behind scientific uses for HPC clouds.
In the brief video below, he carves out the concept of cloud as it applies to the technical and research computing space and provides a few details about how clouds signal the democratization of large-scale computing.
Gillen’s host asks him what HPC encompasses generally, and he responds with a litany of examples. He notes, however, that cloud computing serves the “lower end of the HPC space,” working well for average researchers or academics who lack access to high-end machines.
Using Microsoft’s Windows Azure as a starting point, he offers the example of the genome sequence alignment tool BLAST, which runs from an Excel worksheet used to define the problem, fill in the details, and send the job off for remote processing. This, he notes, is where the democratization comes in: a professor can use actual BLAST in a class and, when it’s over, simply shut everything down and stop incurring charges.
Outside of the rapid-fire definition, did you happen to wonder whom you’d contract right this moment to build you a wall-to-wall dry-erase room like the one shown?