June 18, 2008
There wasn't much suspense about which machine would nab the top spot on the June TOP500 list, which was released earlier today. Last week, IBM and LANL had already let everyone know that Roadrunner had crossed the petaflop finish line first. Nonetheless, the new list portends some big changes ahead for supercomputing.
IBM continues to dominate the top systems, finishing 1, 2 and 3. LANL's Roadrunner system was No. 1 at 1.026 petaflops. LLNL's Blue Gene/L, which had held the top spot since 2004, drops down to No. 2 at 478.2 teraflops. Argonne National Lab's new Blue Gene/P follows close behind at 450.3 teraflops in the No. 3 spot. The new Sun-built Ranger supercluster at the Texas Advanced Computing Center (TACC) slides into the No. 4 spot at 326 teraflops, and ORNL's recent upgrade of the Cray-built Jaguar machine moves it from No. 7 on the November 2007 list to No. 5.
But because of everyone's fascination with petaflops, the Roadrunner was the star of the show. Besides pure performance, the machine also broke another important barrier. Roadrunner became the first hybrid supercomputer -- in this case, Opteron and Cell blades -- to grab the top spot. Because of the much lower performance per watt offered by commodity x86 processors compared to the Cell, it wouldn't have been feasible to field a petaflop machine built entirely from the current crop of x86 processors. Such a system would require at least 5 megawatts, not including cooling.
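A quick back-of-envelope check of that 5 megawatt figure. The ~0.2 gigaflops-per-watt efficiency assumed here for a 2008-era commodity x86 cluster is an illustrative number, not a measured one:

```python
# Rough power estimate for an all-x86 petaflop machine.
# The efficiency figure below is an assumption for illustration only.
x86_gflops_per_watt = 0.2   # assumed Linpack efficiency of a 2008 x86 cluster
target_gflops = 1.0e6       # one petaflop = 1,000,000 gigaflops

watts = target_gflops / x86_gflops_per_watt
print(f"{watts / 1e6:.1f} MW")  # → 5.0 MW, before cooling
```

At anything much below that efficiency, the power bill alone makes an all-commodity petaflop machine a hard sell.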
Besides Roadrunner, there is only one other hybrid machine on the TOP500 -- the TSUBAME machine at Tokyo Tech. It's a Sun Fire Opteron-based cluster sped up by ClearSpeed Advance boards and, because of recent upgrades to the system, holds the No. 24 spot at 67.7 teraflops. At some point, TSUBAME might add some NVIDIA GPUs into the mix. According to Satoshi Matsuoka, the tech lead on the project, they've been looking at accelerating some nodes with GeForce 8800 GTS boards as they build toward a petaflop machine in the 2010 timeframe.
NVIDIA expects to have its GPUs on a top system or systems on November's TOP500 list. At ISC this week, Bull was talking about a system in development that had 200 teraflops of GPU acceleration hooked up with 100 teraflops of x86 servers, although no deployment date was offered.
In Roadrunner, the Cell accelerators represent 97 percent of the raw compute power of the machine. Undoubtedly x86 chips will continue to shrink and grow extra cores, but accelerators will keep the edge in energy efficiency unless and until they are integrated with the CPU. Even if we don't see a wealth of petaflop machines in the next few years, accelerated hybrid systems, TOP500 or otherwise, should become much more common.
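That 97 percent figure can be sanity-checked from chip counts and peak ratings. The counts and per-chip numbers below are approximate public figures, used only for illustration:

```python
# Approximate split of Roadrunner's peak flops between the Cell and
# Opteron blades. All figures are rough public numbers, not exact specs.
cell_chips = 12_960
cell_peak_gf = 102.4        # PowerXCell 8i, double precision, per chip
opteron_chips = 6_480
opteron_peak_gf = 7.2       # dual-core 1.8 GHz Opteron, ~2 flops/cycle/core

cell_total = cell_chips * cell_peak_gf
opteron_total = opteron_chips * opteron_peak_gf
share = cell_total / (cell_total + opteron_total)
print(f"Cell share of peak: {share:.1%}")  # roughly 97 percent
```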
Somewhat surprisingly, Roadrunner is the first No. 1 system ever to employ InfiniBand as its interconnect; every previous top machine used a proprietary network. Overall, InfiniBand is the growth interconnect in supercomputing, and is used in 49 of the top 100 systems. While Gigabit Ethernet still claims more total systems (285) than InfiniBand (120), GbE's days are numbered in the TOP500, and 10GbE has yet to make an appearance.
Other fun facts about the top supers:
This is the first list that includes a power consumption metric for many of the systems. The number represents how much power the computer draws while running Linpack, which supposedly is fairly representative of a system under a typical HPC application workload. It doesn't take into account external cooling, disks or other environment-related power draws. The idea is to offer a metric that should be reproducible if the machine were relocated. A nice addition.
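The obvious metric to derive from the new power column is sustained Linpack flops per watt. As an example, dividing Roadrunner's Linpack result by its reported draw (the roughly 2.35 megawatt figure used here is an assumption) gives a number different sites could compare:

```python
# Efficiency metric enabled by the new TOP500 power column:
# sustained Linpack flops per watt. The power figure is approximate.
linpack_flops = 1.026e15    # Roadrunner's Linpack result
power_watts = 2.35e6        # roughly reported draw under Linpack (assumed)

mflops_per_watt = linpack_flops / power_watts / 1e6
print(f"{mflops_per_watt:.0f} Mflops/W")
```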
Using current projections, the first exaflop system is expected in 2019, and a zettaflop system in 2030. But by that time (if you believe Ray Kurzweil), mind uploading will be all the rage, so programming the zettaflop supers should be a snap.
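Those dates imply a remarkably steady growth rate. A quick calculation shows what the projection assumes: a factor of 1,000 every 11 years works out to performance doubling roughly every 13 months:

```python
import math

# Petaflop in 2008, exaflop in 2019, zettaflop in 2030:
# a steady 1000x gain every 11 years.
factor, years = 1000, 11
doubling_time = years * math.log(2) / math.log(factor)
print(f"implied doubling time: {doubling_time:.2f} years")
```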
Posted by Michael Feldman - June 17, 2008 @ 9:00 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.