October 03, 2013
The TOP500 list of the world's fastest supercomputers debuted two decades ago, in June 1993, the brainchild of Berkeley Lab scientist Erich Strohmaier and Professor Hans Meuer. The much-celebrated list is compiled using the Linpack benchmark, developed by Jack Dongarra. Although the continued relevance of Linpack as a sole measure of big-iron performance has been called into question, the impact of this twice-yearly list as a widely recognized metric and a valuable historical record cannot be denied.
Recently, Dr. Strohmaier, who heads up the Future Technologies Group at Berkeley Lab, shared some of his insight on the evolution of the TOP500 and the future of HPC in a short Q&A for the US Department of Energy website.
At Berkeley Lab, Strohmaier's team explores the design and development of hardware and software systems that enable application scientists to extract the greatest performance gains. The TOP500 list cofounder got his start on the application side as well, studying physics, which provided a natural segue to computers. Strohmaier's PhD work involved using numerical methods in particle physics. The compute-intensive applications could only be run on the largest systems of the day, so the transition to HPC was "a natural progression," according to the scientist.
The TOP500 list was developed to provide the community with a simple but meaningful point of reference. In the 1980s, vector machines were clearly delineated as supercomputers; you could survey the upper bounds of computing with a quick count of such systems. But in the 1990s, the line between "regular computers" and supercomputers (i.e., vector computers at the time) began to blur, notes Strohmaier. Furthermore, the appearance of parallel-processor supercomputers meant that the "old method" of counting vector computers was no longer valid.
That was the context for the list, but the need runs deeper. "You really can't discuss or improve what you can't measure," remarks Strohmaier. "You need a definition for performance if you want to talk about supercomputing and how it's improving. Benchmark results are essential here as they provide a practical way to define and measure performance."
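Linpack makes that definition concrete: it times how fast a machine factors and solves a dense system of linear equations, then divides the nominal operation count, 2/3·n³ + 2n², by the elapsed time. As an illustrative sketch only (pure Python, not the real Linpack/HPL code, and orders of magnitude slower):

```python
import random
import time

def lu_solve_flops(n):
    """Solve a random dense n-by-n system Ax = b by Gaussian elimination
    with partial pivoting, and return the solution plus a Linpack-style
    flop rate: nominal operations (2/3*n^3 + 2*n^2) per second."""
    random.seed(0)
    a = [[random.random() for _ in range(n)] for _ in range(n)]
    b = [random.random() for _ in range(n)]

    start = time.perf_counter()
    # Forward elimination with partial pivoting
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(a[i][k]))
        a[k], a[p] = a[p], a[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            m = a[i][k] / a[k][k]
            for j in range(k, n):
                a[i][j] -= m * a[k][j]
            b[i] -= m * b[k]
    # Back substitution
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = b[i] - sum(a[i][j] * x[j] for j in range(i + 1, n))
        x[i] = s / a[i][i]
    elapsed = time.perf_counter() - start

    flops = (2.0 / 3.0) * n ** 3 + 2.0 * n ** 2
    return x, flops / elapsed

x, rate = lu_solve_flops(200)
print(f"{rate / 1e6:.1f} MFLOP/s")
```

The real benchmark applies the same idea at scale: the problem size n is chosen to fill the machine's memory, and the resulting rate (Rmax, in GFLOP/s or TFLOP/s) is what ranks a system on the TOP500.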
Strohmaier's next sentiments might be interpreted as a response to allegations that the list has outgrown its usefulness, at least as a sole measure of performance. Some academic leaders, like Blue Waters Project Director Bill Kramer, are calling for real-world sustained-performance applications to serve as benchmarks.
Strohmaier acknowledges that "there is no single metric or benchmark that can truly represent the huge variety of programs that we use."
"For different purposes, different users and different situations, you need to define different benchmarks to represent progress," he adds.
The father of the Linpack benchmark, Jack Dongarra, holds a similar perspective.
"The Linpack benchmark is an incredibly successful metric for the high-performance computing community," said Dongarra. "The trends it exposes, the focused optimization efforts it inspires, and the publicity it brings to our community are very important. Yet the relevance of the Linpack as a proxy for real application performance has become very low, creating a need for an alternative."
The "need for an alternative" has prompted Dongarra, a professor of computer science at the University of Tennessee, and colleague Michael Heroux from Sandia National Laboratories in Albuquerque, New Mexico, to develop a new benchmark. It is expected to debut this November in tandem with SC13.
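The article doesn't describe the new benchmark's workload, but the Dongarra–Heroux effort became HPCG, built around a preconditioned conjugate-gradient solve on a sparse system — a memory-bandwidth-bound computation far closer to real applications than Linpack's dense, cache-friendly arithmetic. As a rough illustration of that contrast (a minimal unpreconditioned CG on a 1-D Poisson system, not the HPCG code itself):

```python
def cg_tridiagonal(n, iters):
    """Run up to `iters` conjugate-gradient iterations on the 1-D Poisson
    (tridiagonal) system A x = b with b = ones. The sparse matrix-vector
    product does little arithmetic per memory access -- the opposite of
    Linpack's dense kernel. Returns the solution and the residual norm."""
    def matvec(v):
        # Tridiagonal stencil: (A v)[i] = 2*v[i] - v[i-1] - v[i+1]
        return [2.0 * v[i]
                - (v[i - 1] if i > 0 else 0.0)
                - (v[i + 1] if i < n - 1 else 0.0)
                for i in range(n)]

    def dot(u, v):
        return sum(ui * vi for ui, vi in zip(u, v))

    b = [1.0] * n
    x = [0.0] * n
    r = b[:]            # r = b - A x, with x = 0
    p = r[:]
    rs = dot(r, r)
    for _ in range(iters):
        if rs == 0.0:   # exact convergence; avoid a zero division
            break
        ap = matvec(p)
        alpha = rs / dot(p, ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, ap)]
        rs_new = dot(r, r)
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x, rs ** 0.5

x, res = cg_tridiagonal(1000, 25)
print(f"residual norm after 25 iterations: {res:.3e}")
```

Timing a kernel like this rewards balanced memory systems and fast interconnects rather than raw floating-point throughput, which is precisely the gap the alternative benchmark was meant to close.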
The Strohmaier interview covers other topics as well. The Berkeley Lab scientist discusses some of the historical trends embedded in two decades' worth of TOP500 data, the main roadblocks to achieving exaflop-class supercomputers, and the importance of hitting this next 1,000X goal.
For students who are considering a career in advanced computing, Strohmaier extols the benefits of a strong foundation. "You need to learn about the science discipline, but you also need to understand the computer science," he says. "And you will need to keep learning, changing and adapting to the rapidly changing hardware and software environments of HPC."