December 02, 2005
Last month on the campus of the University of California, Berkeley, Dr. Horst Simon presented a lecture entitled "Progress in Supercomputing: The Top Three Breakthroughs of the Last 20 Years and the Top Three Challenges for the Next 20 Years." The presentation was sponsored by the Center for Information Technology Research in the Interest of Society (CITRIS) as part of its Distinguished Speaker Series.
As one of the foremost authorities on supercomputing, Dr. Simon offers a unique perspective on the field. In the lecture, he gives his historical view of supercomputing over the previous two decades and his outlook for the next two. What follows is an abstract of the presentation as well as a link to the taped video, which is approximately one hour in length. The abstract and video were provided courtesy of CITRIS.
As a community we have almost forgotten what supercomputing was like twenty years ago, in 1985.
The state-of-the-art system then was a Cray-2, with a peak of 2 Gflop/s and, at the time, a phenomenal 2 GBytes of memory. It was the era of custom-built vector mainframes, where anything beyond 100 Mflop/s sustained was considered excellent performance. The software environment was Fortran with vectorizing compilers (at best), running on a proprietary operating system. Tuning was done entirely by hand: there were no tools, no visualization, only dumb terminals with remote batch. If one was lucky enough to have an account, remote access via 9600 baud was state-of-the-art. Usually, a single code developer designed and coded everything from scratch.
What a long way we have come in the last twenty years! Teraflops-level performance on inexpensive, highly parallel commodity clusters, open source software, community codes, grid access via 10 Gbit/s, powerful visualization systems, and a productive development environment on a desktop system that is more powerful than the Cray-2 from 20 years ago -- these are the characteristics of high performance computing in 2005.
Of course, a significant part of this progress is due to the continued increase of computing power following Moore's Law. But what I want to argue here is that the progress was not simply quantitative. We did not just get more of the same at a cheaper price. Several powerful ideas and concepts were shaped in the last 20 years that made supercomputing the vibrant field it is today. As an active researcher in the field for the last 25 years, I will offer my subjective opinion on the top breakthrough ideas that led to qualitative change and significant progress in our field.
Retrospection leads to extrapolation: in the last part of the lecture, I will envision what supercomputing will be like 20 years from now in the year 2025. Can we expect similar performance increases? How will supercomputing change qualitatively, and what are the top challenges that we will have to overcome to reach that vision of supercomputing in 2025?
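As a rough illustration of the kind of extrapolation the lecture invites (my own back-of-the-envelope sketch, not from the presentation; the 18-month doubling period is an assumed figure), one can project peak performance forward from the Cray-2's 2 Gflop/s under a Moore's-Law-style doubling:

```python
def project_flops(base_flops, years, doubling_months=18):
    """Project peak flop/s after `years`, assuming performance
    doubles every `doubling_months` months (Moore's-Law-style)."""
    return base_flops * 2 ** (years * 12 / doubling_months)

# Cray-2 peak in 1985, as cited in the text.
cray2_1985 = 2e9  # 2 Gflop/s

print(f"2005 projection: {project_flops(cray2_1985, 20):.2e} flop/s")
print(f"2025 projection: {project_flops(cray2_1985, 40):.2e} flop/s")
```

Under these assumptions the 2005 projection lands around 20 Tflop/s and the 2025 projection in the hundreds of Pflop/s, which gives a sense of the performance increases the question above is asking about.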
To view the video of the presentation, download mms://netshow01.eecs.berkeley.edu/Horst_Simon.