September 24, 2010
In a thought-provoking piece over at ZDNet, Numerical Algorithms Group's Andrew Jones takes a look at the supercomputing power consumption equation, examining whether its current trajectory is really as untenable as the community fears.
There are a range of estimates for the likely power consumption of the first exaflops supercomputers, which are expected at some point between 2018 and 2020. But probably the most accepted estimate is 120MW, as set out in the Darpa Exascale Study edited by Peter Kogge (PDF).
At this figure, the supercomputing community panics and says it is far too much -- we must get it down to between 20MW and 60MW, depending on who you ask -- and we worry even that is too much. But is it?
What follows is a comparison of today's largest supercomputers with their closest kin, major scientific research facilities.
In Jones' opinion:
[T]he largest supercomputers at any time, including the first exaflops, should not be thought of as computers. They are strategic scientific instruments that happen to be built from computer technology. Their usage patterns and scientific impact are closer to major research facilities such as Cern, Iter, or Hubble.
Thinking of the big supercomputers that way, their power consumption and other costs -- construction, operation, and so forth -- are comparable to other major research centers and not that outrageous, concludes Jones.
Jones also tackles the question of whether it makes sense to continually improve and replace systems every couple of years (as we currently do), or whether it would offer more value to society to collaborate on the construction of one mega-supercomputer every decade -- pooling ten years of resources into it, then relying on that single system for ten years. There are, of course, pros and cons to each path. Because supercomputing performance increases exponentially, the first option results in a greater number of exaflops per year; the second, however, saves the resources otherwise spent continually rewriting and validating code, and gives society a 2030-era system ten years ahead of schedule.
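The trade-off can be made concrete with a back-of-envelope model. The sketch below is purely illustrative: the growth rate, replacement cadence, and the assumption that a pooled decade of budget buys ten times the hardware at build-year technology are all assumptions of this example, not figures from the article.

```python
# Illustrative model of the two procurement strategies Jones contrasts.
# Every number here is an assumption for the sake of the sketch.

GROWTH = 100 ** (1 / 10)   # assume performance per dollar grows 100x per decade
YEARS = 10

def incremental(start_perf=1.0, replace_every=2):
    """Replace the system every few years at then-current performance."""
    total = 0.0
    for year in range(YEARS):
        build_year = year - year % replace_every    # year the current machine was built
        total += start_perf * GROWTH ** build_year  # one year of service at that level
    return total

def decade_machine(start_perf=1.0, budget_multiple=10):
    """Pool ten years of budget into one machine built with year-0 technology,
    assumed to buy `budget_multiple` times the hardware, then run it ten years."""
    return start_perf * budget_multiple * YEARS

print(f"incremental replacement: {incremental():.0f} exaflop-years")
print(f"one decade machine:      {decade_machine():.0f} exaflop-years")
```

Under these particular assumptions the incremental path delivers more cumulative exaflop-years, matching Jones' observation; but which side wins is sensitive to the assumed growth rate and to how much extra hardware a pooled budget actually buys, which is precisely why the choice is not obvious.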
Jones is not sold on either path, but wonders why we are so set on the first option without giving some consideration to the second. Check out the full article for more in-depth treatment of these ideas.
Full story at ZDNet