January 28, 2013
The 20 petaflop (peak), third-generation IBM BlueGene system, Sequoia, may be the number two supercomputer according to the latest TOP500 rankings, but when it comes to maximum core usage, Sequoia has apparently set a new record. A team of Stanford engineers harnessed one million of Sequoia's nearly 1.6 million CPU cores in parallel to solve a sophisticated fluid dynamics problem.
Sequoia, the crown jewel of Lawrence Livermore National Laboratory (LLNL), was the fastest supercomputer in the world from June 2012 until November 2012, when it was knocked from its perch by another DOE machine, Titan, the 27 petaflop (peak) Cray XK7 system installed at Oak Ridge National Lab. Sequoia's 96 racks house 98,304 compute nodes, nearly 1.6 million cores and 1.6 petabytes of memory, connected by a 5-dimensional torus interconnect.
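The defining feature of a torus interconnect is that each dimension wraps around, so every node has two nearest neighbors per dimension and no node sits on an "edge." The sketch below illustrates that property for a 5-dimensional torus; the dimension sizes are hypothetical and this is not LLNL's actual network code.

```python
# Illustrative neighbor lookup in a 5-dimensional torus.
# The dimension sizes below are hypothetical, NOT Sequoia's real geometry.
DIMS = (4, 4, 4, 8, 12)  # sizes of the five torus dimensions (example values)

def torus_neighbors(coord, dims=DIMS):
    """Return the 10 nearest neighbors of a node (+/-1 step in each of the
    5 dimensions), with wraparound at the edges -- the torus property."""
    neighbors = []
    for axis in range(len(dims)):
        for step in (-1, 1):
            n = list(coord)
            n[axis] = (n[axis] + step) % dims[axis]  # modular wraparound
            neighbors.append(tuple(n))
    return neighbors

# Every node in a 5D torus has exactly 2 * 5 = 10 neighbors.
print(len(torus_neighbors((0, 0, 0, 0, 0))))  # -> 10
```

Because of the wraparound, node (0, 0, 0, 0, 0) counts (3, 0, 0, 0, 0) among its neighbors in this example geometry, which keeps worst-case hop counts low across the machine.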
Researchers from Stanford Engineering's Center for Turbulence Research (CTR) used Sequoia to model the noise output of supersonic jet engines with the aim of designing quieter aircraft engines. Minimizing this dangerous acoustical hazard is important not only for the health and safety of the ground crew, but for the surrounding communities. In addition to the hearing damage that can result from sustained high-decibel exposure, there is a "noise nuisance" factor that affects property values.
Advanced computer models called predictive simulations enabled scientists to "look" inside the engine's harsh environment to examine processes that would otherwise be off-limits to physical experimental designs. The information attained from this data-intensive simulation helps researchers gain insight into the "physics of noise."
[Image: Jet noise simulation. A new design for an engine nozzle is shown in gray at left; exhaust temperatures are in red/orange; the sound field is in blue/cyan. Source: the Center for Turbulence Research, Stanford University]
"Computational fluid dynamics (CFD) simulations, like the one Nichols solved, are incredibly complex. Only recently, with the advent of massive supercomputers boasting hundreds of thousands of computing cores, have engineers been able to model jet engines and the noise they produce with accuracy and speed," said Parviz Moin, the Franklin M. and Caroline P. Johnson Professor in the School of Engineering and Director of CTR.
For Joseph Nichols, a research associate who worked on the project, and the rest of the team, there is a lot to celebrate: the successful full-scale implementation of Sequoia, breaking the million-core barrier, and the real-world benefits of this research.
"These runs represent at least an order-of-magnitude increase in computational power over the largest simulations performed at the Center for Turbulence Research previously," said Nichols. "The implications for predictive science are mind-boggling."
The project relied on a code called CharLES that was developed by former Stanford senior research associate Frank Ham. A high-fidelity unstructured compressible flow solver, CharLES is an ideal code for aeroacoustic applications characterized by high-speed flows and complex geometries.
CFD simulations are a good way to test an entire supercomputer because they stress all of its components: computation, memory, and communication. Ideally, systems with more cores should be able to handle more difficult problems in less time, but system complexity brings its own challenges, and million-way parallelism can create unexpected bottlenecks.
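One classic way to see why million-way parallelism is hard is Amdahl's law: any fraction of the work that cannot be parallelized caps the achievable speedup, no matter how many cores are added. The sketch below is a back-of-the-envelope illustration, not a model of the Sequoia runs; the serial fractions chosen are arbitrary examples.

```python
# Amdahl's law: speedup on N cores for a program whose serial
# (non-parallelizable) fraction is s.
def amdahl_speedup(serial_fraction, cores):
    """Ideal speedup = 1 / (s + (1 - s) / N)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# Even a tiny serial fraction throttles a million-core machine:
for s in (0.01, 0.001, 0.000001):
    speedup = amdahl_speedup(s, 1_000_000)
    print(f"serial fraction {s}: ~{speedup:,.0f}x speedup on 1,000,000 cores")
```

With a 1 percent serial fraction, a million cores deliver only about a 100x speedup; the serial fraction must be pushed down toward one part in a million before the machine's full parallelism pays off, which is why codes that scale to an entire system are newsworthy.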
As computers continue to hit 1,000-fold performance milestones, one of the most difficult tasks is developing real-world applications that can scale to make use of the entire machine. Sequoia is already making something of a name for itself in this regard. Last month, the system achieved nearly 14 petaflops on the Hardware/Hybrid Accelerated Cosmology Code (HACC), just a couple of petaflops shy of its 16.2 petaflop Linpack measurement (and nearly 70 percent of its peak flops).
This latest announcement from Stanford didn't discuss FLOPS, but we can gather that the jet engine simulation employed nearly two-thirds of Sequoia's total core count (one million out of a possible 1,572,864). In the ideal scenario, all available cores would be put to use, but that proposition gets more difficult with each generation of machines. Exascale computers, for example, will likely have billions of cores. What will it take to achieve billion-way parallelism?
"Every generation in computing increases the complexity of the system," noted Mark Seager, former assistant department head for advanced computing technology at LLNL's Integrated Computing and Communications Department, in a DOE Office of Science feature.
"Every factor of 10 improvement in computing-delivered performance brings an entirely new vista of problems that we can solve and physics that we can investigate, but to scale up by a factor of 10 in parallelism isn't easy," he added.