September 01, 2011
Thanks to global climate change and rising energy costs, there has been an unrelenting focus on minimizing power consumption across nearly every industry, including computing, and more recently supercomputing. The prospect of building exascale machines that are too expensive to plug in looms large.
According to a 2010 DOE Office of Science report on the challenges and opportunities of building and using exaflop supercomputers, "All of the technical reports on exascale systems identify the power consumption of the computers as the single largest hardware research challenge."
The report goes on to state the fundamental issue: money. At a million dollars or so per megawatt (MW) per year, the cost of running these machines is making the big government agencies more than a little nervous. Today the largest multiple-petaflops supers on the planet cost $5 to $10 million per year to power. The energy bill for an exaflop built with current technology would run over $2.5 billion a year, says the report.
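The report's rule of thumb of roughly $1 million per MW per year lets you translate those electricity bills directly into power draw. A quick back-of-the-envelope sketch (the $1M/MW/yr figure and the cost estimates are the ones quoted above):

```python
# Rough rule of thumb from the DOE report: ~$1 million per MW per year.
COST_PER_MW_YEAR = 1e6  # dollars

def implied_power_mw(annual_cost_dollars):
    """Power draw (MW) implied by an annual electricity bill."""
    return annual_cost_dollars / COST_PER_MW_YEAR

# Today's largest multi-petaflop systems: $5-10 million/year -> 5-10 MW.
print(implied_power_mw(5e6), implied_power_mw(10e6))  # 5.0 10.0

# The report's exaflop-with-current-technology bill: over $2.5 billion/year,
# which works out to a power draw on the order of a small city.
print(implied_power_mw(2.5e9))  # 2500.0 MW
```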
Not surprisingly, both the DOE and DARPA have zeroed in on energy efficiency in their exascale initiatives, targeting 20 MW as the ceiling for power consumption of a single exaflop machine. That's only about twice the consumption of today's K supercomputer, which, at 8 petaflops, is the most powerful computer in the world (Linpack-wise, at least). Since an exaflop represents more than 100 times the performance of that machine, a lot of energy-saving engineering will have to be developed over the next several years to hit that 20 MW target.
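Put in terms of flops per watt, the 20 MW target implies a steep jump in efficiency. A minimal sketch, assuming K draws roughly 10 MW (per the "about twice" comparison above):

```python
# Today's benchmark: the K computer, ~8 petaflops at roughly 10 MW
# (assumed from the 20 MW target being "about twice" K's draw).
k_pflops, k_mw = 8.0, 10.0

# The exascale target: 1000 petaflops inside a 20 MW envelope.
exa_pflops, exa_mw = 1000.0, 20.0

k_eff = k_pflops / k_mw        # ~0.8 PF/MW, i.e. ~0.8 gigaflops/watt
exa_eff = exa_pflops / exa_mw  # 50 PF/MW, i.e. 50 gigaflops/watt

print(f"required efficiency gain: {exa_eff / k_eff:.1f}x")  # 62.5x
```

In other words, the target demands machines more than 60 times as energy efficient as today's flagship, in under a decade.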
But is this line of thinking justified? This week's contributed feature by the Numerical Algorithms Group's Andrew Jones does a good job of exposing some of the problems with this aggressive focus on exascale power consumption. In his view, the concern about energy costs has to be weighed against what the machines can accomplish. He writes:
Are we really saying, with our concerns over power, that we simply don't have a good enough case for supercomputing -- the science case, business case, track record of innovation delivery, and so on? Surely if supercomputing is that essential, as we keep arguing, then the cost of the power is worth it.
Indeed. According to exascale's proponents, these supercomputers will enable significant advances in nuclear energy and fusion technology, climate modeling, aerospace engineering, battery design, and combustion. Ironically, advances in these areas could revolutionize -- or at least significantly evolve -- energy production, enabling a greater supply of the very power on which these machines depend.
There is a cultural imperative in play here too: each successive computer technology must be cheaper and more power efficient than the last, regardless of the end-user value it delivers. While this has largely come to pass across most of the computer industry, it has not at the upper echelons of supercomputing. Those machines still cost hundreds of millions of dollars, and their power consumption keeps rising.
In fact, as recently as two years ago the average power consumption of the top 5 supercomputers was 3.22 MW; today the top-five average is 4.97 MW. At that rate, the average top 5 machine in 2019 will draw around 27.96 MW, and one or more of those should be an exaflop machine. That's not too far off from 20 MW, but unless a concerted effort at energy efficiency bends this curve, we'll overshoot the power target by a fair margin.
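That extrapolation is just compound growth of the top-5 average. A sketch of the arithmetic, assuming the 2009 and 2011 figures quoted above and a constant two-year growth ratio:

```python
# Top-5 average power consumption (MW), two years apart.
avg_2009, avg_2011 = 3.22, 4.97

# Two-year growth ratio, held constant over the four
# remaining two-year periods between 2011 and 2019.
ratio = avg_2011 / avg_2009
avg_2019 = avg_2011 * ratio ** 4

print(f"{avg_2019:.1f} MW")  # prints "28.2 MW"
```

This simple compounding lands at roughly 28 MW, close to the figure quoted above and well past the 20 MW target.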
But that is only the first batch of such machines, the ones that will blaze the trail at the end of the decade. The greater value of exascale supercomputing will be delivered by less costly, less power-hungry, and, presumably, more numerous machines built and deployed in the 2020s and beyond -- analogous to the petascale systems of the current decade. Those supercomputers will be more practical in every way than the first custom-built exaflop systems of the late 2010s.
According to Jones, the biggest roadblock to delivering exascale computing is software. Even though there are several initiatives in the pipeline to get exascale-capable tools, algorithms, and libraries developed in advance, applications will be hard-pressed to take full advantage of the first exascale systems. Even today, only a handful of applications can achieve a sustained petaflop, three years after Roadrunner hit that milestone.
Unlike hardware advances, software innovation comes in fits and starts and requires a whole ecosystem of talent to move forward. Developing software has been the enduring challenge for computing of every stripe and certainly requires more sophistication than sending a check to the power company. As Jones puts it:
It certainly requires money, but it needs other scarce resources too, specifically time and skills. That involves a large pool of skilled parallel software engineers, scientists with computational expertise, numerical algorithms research and so on. Scarce resources like these are possibly even harder to create than money!
Posted by Michael Feldman - September 01, 2011 @ 8:45 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.