August 20, 2013
Cloud-based supercomputing is, in theory, a great idea, but the trend has not taken off as some in the HPC field believed it would. That isn't stopping the folks at Cycle Computing, who say their Amazon-based supercomputers are not only helping scientists and researchers get real work done, but also freeing their minds to ask the really big questions.
Scientific creativity is being hamstrung by the finite resources of traditional fixed-size supercomputing infrastructures, Cycle Computing CEO Jason Stowe said in a recent video. While all kinds of advances are being made in the HPC arena--particularly on the software side--all too often, scientists and researchers can't adequately explore their ideas or ask the big questions due to a sheer lack of HPC capacity.
"We end up with an innovation bottleneck with today's fixed-sized clusters," Stowe said in the video. "We get into a long-term habit [where] many researchers and engineers are essentially forced to subtly confine the questions they ask to the 256 cores of infrastructure that they were able to afford last year.
"And this is not what we want," Stowe continued. "We want researchers asking the big question, asking the one that will move their science forward, move their business forward, fundamentally push humanity forward, and fixed-size internal infrastructure is really bad at that."
Cloud-based HPC resources, such as the ones that Cycle Computing enables, are a much better approach to solving complex scientific and engineering problems, particularly for researchers at smaller institutions who don't have access to big supercomputers, Stowe said.
"We think that cloud will enable us to put supercomputers at researchers' fingertips," he said. "They'll be able to push buttons and build clusters of tens of thousands of cores that will fundamentally change the category of science that they're able to answer, the types of business insights that they'll get from analytics, and the types of simulations that they'll be able to run."
Part and parcel of this approach is movement toward thinking in "dollars per unit of science." Cycle Computing has several examples of how its customers were able to apply a large number of CPUs to address a particular scientific challenge for a particular cost.
For example, a large pharmaceutical company built a 10,600-instance cluster with Cycle Computing's utility HPC solution. Instead of acquiring 14,400 square feet of data center space and incurring an estimated $44 million cost to build the system in-house, Cycle Computing was able to provision it from Amazon's AWS cloud in about two hours. "They ran 40 years of science in 11 hours for $4,372," Stowe said.
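The "dollars per unit of science" framing above reduces to simple arithmetic on the figures Stowe quoted. As a rough sketch (the per-instance-hour rate is merely implied by the quoted totals, not a published AWS price):

```python
# Back-of-the-envelope "dollars per unit of science" math, using only
# the figures quoted in the article. Nothing here is Cycle Computing's
# actual pricing model; it is an illustration of the metric.

def dollars_per_unit_of_science(total_cost, science_years):
    """Cost per 'year of science' delivered by a run."""
    return total_cost / science_years

# Figures from the article:
instances = 10_600        # server instances in the AWS cluster
run_hours = 11            # wall-clock duration of the run
run_cost = 4_372.0        # USD for the whole run
science_years = 40        # "40 years of science in 11 hours"

instance_hours = instances * run_hours              # 116,600 instance-hours
implied_rate = run_cost / instance_hours            # implied $/instance-hour

print(f"${dollars_per_unit_of_science(run_cost, science_years):.2f} "
      f"per year of science")                       # $109.30 per year of science
print(f"${implied_rate:.4f} per instance-hour")     # implied spot-style rate
```

At roughly $109 per year of science, versus a $44 million up-front build, the appeal of the burst model to researchers with fixed budgets is easy to see.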