February 28, 2013
The HPC in the cloud space continues to evolve and one of the companies leading that charge is Cycle Computing. The utility supercomputing vendor recently reported a record-breaking 2012, punctuated by several impressive big science endeavors. One of Cycle's most significant projects was the creation of a 50,000-core utility supercomputer inside the Amazon Elastic Compute Cloud.
Built for pharmaceutical companies Schrödinger and Nimbus Discovery, the virtual mega-cluster was able to analyze 21 million drug compounds in just three hours, at a cost of less than $4,900 per hour. The accomplishment caught the attention of IDC analysts Chirag Dekate and Steve Conway, who elected to honor Cycle with their firm's HPC Innovation Excellence Award.
Chirag Dekate, Research Manager of IDC's High-Performance Systems group, explained that the award recognizes those who have best applied HPC to solve critical problems. More specifically, IDC looks for scientific achievement, return on investment, and ideally a combination of the two.
HPCwire spoke with Cycle CEO Jason Stowe shortly after the award was announced about the growth in HPC cloud and his company. Stowe really sees 2012 as the turning point – both for the space and for Cycle Computing. "We've basically hit the hockey stick growth period where there's more rapid adoption of the technology," he says. "Relative to utility supercomputing and HPC cloud in general we are definitely seeing a lot of interest in the space."
During the Amazon Web Services re:Invent show in November, some big-name customers, including Novartis, Johnson & Johnson, Life Technologies, along with Hartford Insurance Group and Pacific Life Insurance, came forward to discuss their use of Cycle's cluster-building software. The companies highlighted many of their biggest use cases and described how HPC cloud helps move the needle for Fortune 500 companies.
"Utility supercomputing applies to a large variety of companies regardless of their industry," says Stowe, "because it supports business analytics, it supports various forms of engineering simulations and helps get the science done."
Cycle's customer base is well-represented across disciplines. "The majority of the top 20 big pharma companies use our software; three of the five largest variable annuity businesses use our software internally and externally or in combination," says the CEO. The vendor also counts several leading life science companies among its customer base, including Schrödinger, which, in addition to its initial 50,000-core run, continues to use the Cycle-EC2 cluster for ongoing workloads. Manufacturing and energy companies are also plugging into the Cycle cloud.
There are still technical and cultural barriers to cloud adoption, however. Stowe concedes the point, but only half-jokingly he adds that Cycle has solved most of the technical challenges. At this juncture, he believes the lag is more on the cultural side, but there are signs of progress.
"We have these traditional companies like Johnson & Johnson and Hartford Life transitioning to a cloud model. That's a huge cultural indicator, and definitely a sea change from four or five years ago," he says.
The Business Model
What about the long-term profit potential for a business that relies on data-parallel workloads? The question is met with a three-part answer. First off, Stowe says that Cycle has always been profitable. As a bootstrapped company, they have no investors; they've built the business on a real cash-flow stream. Second, he insists that the vast majority of growth in computation is in the area of data-parallel applications.
He considers business analytics, the entirety of big data and a majority of even traditional simulation codes to be strong candidates for the cloud or utility supercomputing model.
"Sure, people still use MPI, they still use fast interconnect – but we have cases (and we hope to publish soon) where folks are running Monte Carlo simulations as a data-parallel problem. There's a small MPI cluster that's running the simulation, but the overall structure of the computation is parallel," says Stowe.
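The pattern Stowe describes — many independent simulation runs whose results are only combined at the end — is what makes a workload data-parallel rather than tightly coupled. As an illustrative sketch (not Cycle's software, and with a local process pool standing in for a fleet of cloud workers), a Monte Carlo estimate of π can be structured this way:

```python
import random
from multiprocessing import Pool

def run_simulation(seed, samples=100_000):
    """One independent Monte Carlo run: estimate pi by random sampling.

    In the scenario Stowe describes, each of these runs might itself be a
    small MPI job; the point is that the runs don't talk to each other.
    """
    rng = random.Random(seed)
    hits = sum(1 for _ in range(samples)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * hits / samples

if __name__ == "__main__":
    # Fan out independent runs, then do a single reduce at the end.
    # No inter-worker communication means no fast interconnect is needed.
    with Pool(8) as pool:
        estimates = pool.map(run_simulation, range(64))
    print(sum(estimates) / len(estimates))
```

Because the only coordination is the final aggregation step, adding more workers scales the throughput almost linearly — which is why this class of workload maps so naturally onto elastic cloud capacity.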
Stowe expects these kinds of data-parallel or high-throughput applications to make up the bulk of new commercial workloads. The activity is coming from a range of verticals: genomics, computational chemistry, even finite element analysis.
Stowe's final point in the context of MPI applications might be surprising to some. Cycle has seen at least two examples of real-world MPI applications that ran as much as 40 percent better on the Amazon EC2 cloud than on in-house clusters using QDR InfiniBand.
"The only real test of whether or not cloud is right for you is to actually bench it in comparison to the kit you are using in-house," he advises.
Stowe's team was not particularly surprised. "A lot of MPI applications under the hood are essentially doing low-interconnect, master-worker kind of workloads," he adds.
Stowe readily admits there are applications that require the fastest interconnects and highly-tuned systems – "like weather simulations, nuclear bomb testing, the stuff at Oak Ridge or Sandia" – but he contends that some of the newer applications, especially those written in-house or by a domain scientist as opposed to a computer scientist, often run faster on cloud.
"It's so cheap to do a bench, so why not just verify it? I'm an engineer at heart, so I'm very practical. We can talk about the theory, but it's hard to argue with results," he adds.
Another Tool in the Toolbox
So much of the discussion around HPC cloud focuses on the so-called I/O problem – the bandwidth and latency challenges associated with a general public cloud like Amazon. "What about performance?" critics will ask.
Stowe feels that questions like this assume cloud must necessarily replace large capability machines, but that's not how he sees it.
"I think of it as a radically different kind of capability machine," says Stowe. "The old kind of capability machine required millions of dollars and tons of planning and special environments to be created, heating/cooling/power, expert staff, and so on. These systems are used very heavily for a certain kind of application, and that's the right thing to do."
Stowe looks at utility supercomputing as another tool in the toolbox. It doesn't need to replace traditional capability machines, which will still be needed for certain kinds of applications. In fact, he says you can think of the Cycle-AWS cloud as another kind of capability machine with an attractive set of benefits (on-demand, pay for what you use, scalable, elastic, lower overhead).
It's a different branch of the same tree, he says.
IDC's Dekate takes pretty much the same position. He sees HPC in the cloud and dedicated HPC clusters as complementary.
"The HPC ecosystem is diverse and there's a class of applications that makes sense for utility supercomputing," says Dekate. "Solving the diverse needs of the user community requires different kinds of technological capabilities, including dedicated hardware infrastructure and HPC cloud frameworks. Our argument is that one does not have to replace the other. It's more important to find the right kind of matches for applications that work well in either or both of these cases."