November 26, 2012
Apparently, the US Department of Energy (DOE) is revising its timetable for deploying its first exaflops-capable supercomputers. According to William Harrod, Research Division Director of the DOE's Office of Science Advanced Scientific Computing Research (ASCR) program, the agency is now looking at 2020 to 2022 to get its first exascale machines up and running. That effectively means the US is delaying its plans for this next-generation technology by two to four years.
Harrod outlined the impact of the delay at the Supercomputing Conference (SC12) last week in Salt Lake City, Utah. In an article posted today in Computerworld, Harrod described the slippage thusly: "When we started this, [the timetable was] 2018; now it's become 2020 but really it is 2022."
The DOE is in the process of writing up a proposal, known as the Exascale Computing Initiative (ECI), which is expected to be presented to Congress in February of next year. Of course, there's no guarantee that the feds will actually act on the proposal in a way that meets the agency's needs.
According to the Computerworld report, the effort is expected to cost in the neighborhood of a billion dollars over the next several years. Given the failure of the Obama White House and Congress to come to terms on budgets over the previous four years, that doesn't bode well. At best, funding for the work won't be put in place until October 2013, as part of the fiscal 2014 budget.
Although the budget stalemate that has gripped Washington for the last four years has not helped, a more fundamental problem is that it's been difficult to make the case for exascale systems. Despite Obama's 2011 State of the Union address invoking the Russian Sputnik challenge as a model for lighting a fire under US R&D, there is little public outcry for more federal spending in technology. Scientists insist that exascale machines will enable advancements in an array of fields – biology, energy, physics, material science, national security, and climate research; but such talk has not captured the public imagination to the degree that would force policymakers to act.
Unfortunately, to develop such supercomputers by the end of the decade requires actions now. While the hardware may indeed become available by 2018 – Intel, Cray and others have stated their intentions to supply such hardware in that timeframe – the software models for exascale computing haven't been developed yet and will require a long lead time.
China is also working on these systems and intends to field an exaflop-capable machine around the same time – perhaps using domestically produced technology. Governments in Japan and Europe have plans to field exascale machines around the end of the decade as well. Those nations face the same daunting challenges as the US, but if the Americans dawdle, it's not inconceivable that the first exaflop machine will be in Europe or Asia.
In fact, if the TOP500 trends are to be believed, a supercomputer able to execute a Linpack exaflop will appear somewhere in the world by 2019. Whether that machine becomes a platform for exascale computing or just a container for a collection of petascale and terascale applications is another matter.
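The TOP500-based projection is simple back-of-envelope arithmetic: take the current list leader's Linpack number and extrapolate at the list's historical growth rate. A minimal sketch, assuming Titan's roughly 17.6 petaflops (the Nov 2012 #1 system) and an illustrative growth factor of about 1.8x per year (the exact historical rate varies by how it is measured):

```python
import math

# Back-of-envelope extrapolation of TOP500 #1 Linpack performance.
# Assumed inputs (illustrative, not authoritative):
#   - Titan's ~17.6 petaflops on the Nov 2012 list
#   - ~1.8x annual growth in the list leader's performance
current_pflops = 17.6    # Titan, Nov 2012 TOP500 leader
target_pflops = 1000.0   # 1 exaflop = 1000 petaflops
annual_growth = 1.8      # assumed yearly growth factor

# Solve current * growth**years = target for years
years_needed = math.log(target_pflops / current_pflops) / math.log(annual_growth)
print(f"~{years_needed:.1f} years, i.e. around {2012 + round(years_needed)}")
```

With those assumed numbers the arithmetic lands at roughly seven years out, or around 2019 – consistent with the trend line the article cites, though sensitive to the growth rate chosen.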