April 11, 2013
For those who followed the news this week about HP’s Project Moonshot, the company’s super-compact server pitch for “hyperscale datacenters,” the idea of plugging it into some high performance computing context likely wasn’t first in the thought-queue. However, according to some at the edge of low power computing, including Calxeda’s Karl Freund, “the potential says something about the future.”
Without some background, that quote might sound rather vague until one considers all the speculation that has been pumped into ARM-based and low-power x86 architectures. The attention has only increased now that a 64-bit, double-precision-ready reality is around the bend within the next couple of years.
While Moonshot might be wooing the mobile and hosting camp, there is something more compelling here for HPC--at least in the coming years. This concept presents high-density, low-power servers with the ability to swap in accelerators, DSPs, GPUs, and FPGAs to create an efficient heterogeneous platform tailored around specific workloads.
Add to that an integrated cluster fabric and embedded low latency switches and this should strike the HPC crowd as a “blade on steroids” where storage and other workload-specific needs can (eventually) be snapped in. Again, it's still down the road for the needs of high performance computing, but we did explore the possibilities with a few folks this week.
During a conversation about how HP and others who take this type of swing at big datacenters might be able to strong-ARM their way through the gilded HPC gates, Freund cited some heavy-hitters taking an honest shine to lightweight approaches. He rattled off a long list of national labs, from Sandia to Los Alamos, Argonne, Oak Ridge, and others that are actively exploring the potential of ARM and how it might help them tackle exascale-class problems efficiently. He was referring to his own company’s ARM-based cartridges, which HP will offer as an option alongside the core Intel Atom S1200 “Centerton” and offerings from other ARM vendors (although Calxeda was a first choice during the initial phases of Moonshot).
While the labs might be turning a theoretical eye to the low power field, at this point it’s more on the level of playing with a few boxes to get a sense of scale and capability. So far, they (and some in the life sciences and oil and gas industries who aren’t concerned with striking double-precision gold) are pleased, but there is still a great deal of development to be done, including the (likely late) 2014 release of 64-bit ARM and then the critical tooling required to make it all function.
But it’s not all a power/efficiency play for the labs and those thinking about new server approaches, says Freund—it’s just as much a matter of flexibility and being able to build out boxes based on specific elements that are wrapped around specific workload needs. The labs and others at such scale have been “told by Intel that you get what everyone else gets” unless you’re willing to fork over a bunch of bucks to have them cobble together a specialized chip. This just isn’t the way systems are going to be built if we look ahead, Freund argues.
What HP showed off this week in the form of the very hopeful-sounding “Project Moonshot” is a glimpse into that application-centric future. Is any of it ready for HPC primetime? Of course not—in fact, in their current form sporting Intel Atom processors, they’re really only good for cloud datacenters munching pretty common tasks in an efficient but unimpressive (performance-wise) way. But there is a little twinkling there that’s bright enough to lull the forward-looking on the supercomputing side.
That sparkle is in flexibility, Freund argues--a glimmer that is hard for the labs to ignore. And when the light hits Moonshot just right, this crew is seeing the promise of stitching all sorts of pretties to the naked boxes: from GPUs on the same die, to FPGAs finding their way in, to 60 gigabit fabric switches, sprinklings of DSPs (à la Texas Instruments), and additional offload engines, at least from Freund’s vantage point.
As HP noted when they slung out Moonshot this week, “There is a solid return for investing in finding an optimal balance of density, costs and expenses for each workload class. Given the rapid rate of workload and application evolution, finding optimal performance points will be a continuous process for at least the next few years; it demands flexible hardware and software infrastructure.”
One could argue even further that Freund’s simple statement about possibility is far deeper than anything else he could have said in more words. And the synergies don’t end there; nor, for that matter, do they really begin with HP.
What HP has done is throw out a holistic view of how the future, when ideally imagined, could work for the big boys of HPC and the enterprise peasants alike. Even if for now the messaging is trumpeted to the cloud datacenters serving up vanilla apps, this unified vision resonates. They plan on “enabling a variety of partner silicon and component vendors to accelerate hyperscale workloads for customers. This includes the lowest power CPUs and adds to it APUs, GPUs, DSPs, and FPGAs at scales those vendors would not be able to access on their own.” This is music to vendor ears, but on the receiving end, they note that “HP’s customers will benefit via broader access to innovative accelerators at a faster pace than HP could achieve on its own.” They note (quite humbly despite the grandiosity of the project name) that their “success in bootstrapping and sustaining their Pathfinder Innovation Ecosystem will determine their future in the hyperscale infrastructure market.”
Freund imagines that at ISC in Leipzig and other shows with an HPC-heavy cast, HP will try to shine its Moonshot diamond in such a way that there’s a visible “big data” glint. The issue here is that HPC in this case isn’t really the same thing as big data as it fits in the Moonshot box. One of the biggest weaknesses of an architecture like the one they’ve crammed into the teensy space is that it rather sucks at floating point performance. And that’s kind of, well, you know, like really, really important. Still, for the I/O- and integer-intensive stuff, which also has a place in the hallowed halls of HPC, there is a story angle. Again, that tale will have to play out following some maturing after the 64-bit release.
Even after that release and the addition of double-precision capability, there’s something of a chicken-and-egg problem. Since no one is investing in the tools and software side until 64-bit double-precision arrives (no money, no develop-y), even the experiments at this early stage have their limits--and the development needed after the basics are in place will add even more time to the process. It will be slow going, but it could all be worth the wait if an efficient, flexible, and fine-tuned server approach emerges that pulls the exascale dream a little closer.