HPC Matters is a joint blog in which contributors from the Tabor Communications team share their observations and insights on HPC matters.
January 18, 2011
Last month, the President's Council of Advisors on Science and Technology (PCAST) -- 20 of the nation's leading scientists and engineers selected by the President -- released a report entitled "Designing a Digital Future: Federally Funded Research and Development in Networking and Information Technology." The council argues that networking and information technology is a key enabler of economic competitiveness, national security and quality of life, and therefore should be appropriately funded. Don't let the humdrum summary fool you: there are revolutionary ideas afoot in this report, and William Gropp, professor of computer science at the University of Illinois, provides a rundown of those relevant to high performance computing. Gropp knows the material well, since he was part of the team that authored the report.
One of the key claims in the report is that the TOP500 list by itself is not a sufficient indicator of HPC prowess.
While the HPC community has long known that no single benchmark adequately captures the usefulness of a system, the PCAST report explicitly calls for a greater focus on what I'll call sustained performance: the ability to compute effectively on a wide range of problems:
"But the goal of our investment in HPC should be to solve computational problems that address our current national priorities,"
Addressing this is becoming critical, because developing systems based solely to rank at the top of the Top500 list will not provide the computational tools needed for productive science and engineering research.
Gropp asserts that the business-as-usual approach to high-end computing will no longer be effective, and that for HPC to continue to advance, a dramatic revamping will be required in all parts of the ecosystem: the hardware, the software and the algorithms. If this overhaul fails to happen, Gropp opines, the end of Moore's Law and the relatively painless progress that goes with it may really be at hand.
To avoid this fate, the report calls for "substantial and sustained" investment in a broad range of basic research for HPC, specifically:
"To lay the groundwork for such systems, we will need to undertake a substantial and sustained program of fundamental research on hardware, architectures, algorithms and software with the potential for enabling game-changing advances in high-performance computing."
Gropp concludes his analysis with a sobering glimpse into the future of HPC:
Without a sustained investment in basic research into HPC, the historic increase in performance of HPC systems will slow down and eventually end. With such an investment, HPC will continue to provide scientists and engineers with the ability to solve the myriad of challenges that we face.
It's easy to dismiss Gropp's prediction as doom-and-gloom rhetoric understandably intended to galvanize resources, but in a way I think he's right. I don't think anyone wants to see HPC's demise, but the likely scenario is that we will carry on doing business as usual, making incremental changes and tradeoffs and avoiding the really hard challenges until absolutely forced to do otherwise. I don't think we'll see really big changes unless we hit the rock bottom of stalled performance, or unless HPC experiences a game-changing breakthrough that recasts the trajectory of its progress. These kinds of scientific leaps can't be predicted, but increased support at the federal level makes them more likely.
Posted by Tiffany Trader - January 18, 2011 @ 4:30 PM, Pacific Standard Time
Tiffany Trader is the editor of HPC in the Cloud. With a background in HPC publishing, she brings a wealth of knowledge and experience to bear on a range of topics relevant to the technical cloud computing space.