August 29, 2011
That title is probably controversial to most readers. If you asked members of the supercomputing community what the single biggest challenge for exascale computing is, the most common answer would likely be "power." It is widely reported, widely talked about, and in many places generally accepted that finding a few orders of magnitude improvement in power consumption is the biggest roadblock on the way to viable exascale computing. Otherwise, the first exascale computers will require 60MW, 120MW or 200MW -- pick your favorite horror figure. I'm not so convinced.
I'm not saying the power estimates for exascale computing are not a problem -- they are -- but they are not the problem, because, in the end, it is just a money problem. For most in the community, the objection is not so much to 60-plus MW supercomputers themselves as to their resulting operating costs. We simply don't want to pay $60 million each year for electricity (or, more precisely, we don't want to have to justify to someone else -- e.g., funding agencies -- that we need to pay that much). But why are we so concerned about large power costs?
Are we really saying, with our concerns over power, that we simply don't have a good enough case for supercomputing -- the science case, business case, track record of innovation delivery, and so on? Surely if supercomputing is that essential, as we keep arguing, then the cost of the power is worth it.
There are several large scientific facilities -- the LHC, ITER, NIF, and SNS, for example -- that have comparable power requirements, often with much narrower missions; remember that supercomputing can advance almost all scientific disciplines. And indeed, most of the science communities behind those facilities are also large users of supercomputing.
I occasionally say, glibly and deliberately provocatively, if the scientific community can justify billions of dollars, 100MW of power, and thousands of staff in order to fire tiny particles that most people have never heard of around a big ring of magnets for a fairly narrow science purpose that most people will never understand, then how come we can't make a case for a facility needing only half of those resources that can do wonders for a whole range of science problems and industrial applications?
[There is a partial answer to that, which I have addressed on my HPC Notes blog to avoid distraction here.]
But secondly, and more importantly, the power problem can be solved with enough money if we can make the case. Accepting huge increases in budgets would also go a long way toward solving several of the other challenges of exascale computing. For example, resiliency could be substantially helped if we could afford comprehensive redundancy and other advanced RAS features; data movement challenges could be helped if we could afford huge increases in memory bandwidth at all levels of the system; and so on.
Those technical challenges would not be totally solved but they would be substantially reduced by money. I don't mean to trivialize those technical challenges, but certainly they could be made much less scary if we weren't worried about the cost of solutions.
So, the biggest challenge for exascale computing might not be power (or your other favorite architectural roadblock) but rather our ability to justify enough budget to pay for the power, or more expensive hardware, etc. However, beyond even that, there is a class of challenges for which money alone is not enough.
Assume a huge budget meant that an exascale computer with good enough resiliency, plenty of memory bandwidth, and every other needed architectural attribute was delivered tomorrow, and never mind the power bills. Could we use it? No. Because of a series of challenges that need not only money, but also lots of time to solve, and in most cases need research because we just don't know the solutions.
I am thinking of the software related challenges.
Even if we have highly favorable architectures (expensive systems with lots of bandwidth, good resiliency, etc.) I think the community and most, if not all, of the applications are still years away from having algorithms and software implementations that can exploit that scale of computing efficiently.
There is a reasonable effort underway to identify the software problems that we might face in using exascale computing (e.g., IESP and EESI). However, in most cases we can only identify the problems; we still don't have much idea about the solutions. Even where we have a good idea of the way forward, sensible estimates of the effort required to implement software capable of using exascale computing -- OS, tools, applications, post-processing, etc. -- are measured in years with large teams.
It certainly requires money, but it needs other scarce resources too, specifically time and skills. That involves a large pool of skilled parallel software engineers, scientists with computational expertise, numerical algorithms research and so on. Scarce resources like these are possibly even harder to create than money!
Power is a problem for exascale computing, and with current budget expectations is probably the biggest technical challenge for the hardware. In terms of getting to exascale computing, demonstrating the value of increased investment in supercomputing to funders and the public/media is probably a more urgent challenge. But the top roadblock for achieving the hugely beneficial potential output from exascale computing is software. There are many challenges to do with the software ecosystem that will take years, lots of skilled workers, and sustained/predictable investment to solve.
That "sustained/predictable" is important. Ad-hoc research grants are not an efficient way to plan and conduct a many-year, many-person, community-wide software research and development agenda. Remember that agenda will consume a non-trivial portion of the careers of many of the individuals involved. And when the researchers start out on this necessary software journey, they need confidence that funding will be there all the way to production deployment and ongoing maintenance many years into the future.
About the Author
Andrew is Vice-President of HPC Services and Consulting at the Numerical Algorithms Group (NAG). He was originally a researcher using HPC and developing related software, later becoming involved in leadership of HPC services. He is also interested in exascale, manycore, skills development, broadening usage, and other future concerns of the HPC community.