November 29, 2012
Although Lawrence Livermore Lab's Sequoia supercomputer got knocked off its TOP500 perch a few weeks ago, the DOE machine, hosted by the National Nuclear Security Administration (NNSA), is proving its worth in the world of real applications.
According to the NNSA, Sequoia, the world's largest IBM Blue Gene/Q system, delivered nearly 14 petaflops on the recently developed Hardware/Hybrid Accelerated Cosmology Codes (HACC), a software framework that simulates the behavior of galaxies on a cosmological scale. Its purpose is to help scientists reveal the nature of dark matter and dark energy. While that might seem a little tangential to NNSA's primary mission of managing the nation's nuclear arsenal, it does demonstrate the power of the Blue Gene platform.
[Sequoia supercomputer. Photo credit: Bob Hirschfeld/LLNL]
In fact, 14 petaflops is just a couple of petaflops shy of Sequoia's Linpack mark, and just four petaflops off its peak performance number. According to the DOE press release: "The HACC framework is designed for extreme performance in the weak scaling limit (high levels of memory utilization) by integrating innovative algorithms, as well as programming paradigms, in a way that easily adapts to different computer architectures."
Applications that exhibit weak scaling (where the problem size grows in proportion to the number of processors) are good candidates to use the full capability of these petascale supers, since they rely on high levels of compute parallelism. This is especially true of the Blue Gene architecture, which uses large numbers of relatively slow CPUs (1.6 GHz, in this case) to achieve high aggregate performance. Sequoia, with more than 1.5 million PowerPC A2 cores, is perhaps the most extreme example of this.
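The weak-versus-strong scaling distinction can be made concrete with the standard textbook formulas (Amdahl's law for a fixed problem size, Gustafson's law for a problem that grows with processor count). The sketch below is illustrative only, with an assumed 1% serial fraction; it is not drawn from the HACC code itself.

```python
# Illustrative sketch: why weakly scaling codes can exploit machines
# with ~1.5 million cores. The serial fraction (1%) is an assumption
# chosen for illustration, not a measured property of HACC.

def strong_scaling_speedup(p: int, serial_fraction: float) -> float:
    """Amdahl's law: fixed problem size; the serial part caps speedup."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / p)

def weak_scaling_speedup(p: int, serial_fraction: float) -> float:
    """Gustafson's law: problem size grows with p; speedup grows ~linearly."""
    return serial_fraction + (1.0 - serial_fraction) * p

if __name__ == "__main__":
    s = 0.01  # assumed serial fraction of 1%
    for p in (16, 1024, 1_572_864):  # last value: Sequoia's core count
        print(f"p={p:>9}: strong speedup = {strong_scaling_speedup(p, s):8.1f}, "
              f"weak speedup = {weak_scaling_speedup(p, s):12.1f}")
```

Under Amdahl's law the speedup never exceeds 1/0.01 = 100 no matter how many cores are added, while under weak scaling the achievable work keeps growing nearly linearly, which is why memory-filling, weakly scaling codes like HACC are the natural fit for a million-core machine.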
Although these results were obtained in the NNSA's shop at LLNL, the team conducting the work came from Argonne National Laboratory (ANL), a DOE facility devoted to open science and engineering. They will be running this same application on the 10-petaflop Mira supercomputer, another Blue Gene/Q system, installed at ANL.
Blue Gene systems haven't cornered the market on petascale apps, though. Titan, the new Cray XK7 supercomputer at Oak Ridge, recently debuted with a 10-petaflop run of WL-LSMS, a materials science code that performs thermodynamic calculations. Titan relies on NVIDIA GPUs of the Kepler persuasion for 24 of its 27 peak petaflops, a much different architecture from that of the CPU-only Sequoia.
As multi-petaflops supercomputers start to fill in the TOP500 list, applications that can sustain this level of computing will start to proliferate as well. In three years, all of the top 500 supercomputers are expected to be a petaflop or better, offering a much wider array of machines for such computing. The real era of petascale supercomputing has just begun.