November 19, 2009
Second ORNL-led team also finalist for Gordon Bell Prize
Nov. 19 -- A team led by Oak Ridge National Laboratory's (ORNL's) Markus Eisenbach was named winner Thursday of the 2009 ACM Gordon Bell Prize, which honors the world's highest-performing scientific computing applications. Another team led by ORNL's Edo Aprà was also among nine finalists for the prize.
Results of the contest were announced in Portland, Ore., during the SC09 international supercomputing conference. The prize is supported by high-performance computing pioneer Gordon Bell and is administered by the Association for Computing Machinery.
Eisenbach and colleagues from ORNL, Florida State University, the Institute for Theoretical Physics, and the Swiss National Supercomputing Center achieved 1.84 thousand trillion calculations per second -- or 1.84 petaflops -- using an application that analyzes magnetic systems and, in particular, the effect of temperature on these systems. By accurately revealing the magnetic properties of specific materials -- even materials that have not yet been produced -- the project promises to boost the search for stronger, more stable magnets, thereby contributing to advances in such areas as magnetic storage and the development of lighter, stronger motors for electric vehicles.
The application -- known as WL-LSMS -- achieved this performance on ORNL's Cray XT5 Jaguar system, making use of more than 223,000 of Jaguar's 224,000-plus available processing cores. Jaguar, recently upgraded from four-core to six-core processors, has a peak performance of 2.33 petaflops, meaning the run reached nearly 80 percent of peak. Earlier in the week Jaguar was named number one on the TOP500 list of the world's fastest computers.
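The "nearly 80 percent of peak" figure follows directly from the numbers quoted above; a quick back-of-the-envelope check (variable names are illustrative, values are from the article):

```python
# Figures quoted in the article, in petaflops
sustained_pflops = 1.84   # WL-LSMS sustained performance
peak_pflops = 2.33        # Jaguar's theoretical peak after the six-core upgrade

efficiency = sustained_pflops / peak_pflops
print(f"{efficiency:.1%} of peak")  # roughly 79 percent, i.e. "nearly 80 percent"
```

Sustaining this fraction of peak across more than 223,000 cores is what distinguishes a Gordon Bell-class run from a benchmark result.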
WL-LSMS allows researchers to directly and accurately calculate the temperature above which a material loses its magnetism -- known as the Curie temperature. The team's approach differs from earlier efforts because it sets aside empirical models and their attendant approximations to tackle the system through first-principles calculations.
"What we can do is calculate the Curie temperature for materials with high accuracy without external parameters," Eisenbach explained. "These first-principles calculations are orders of magnitude more computationally demanding than previous models; it's only with a petascale system such as Jaguar that calculations like this become feasible."
WL-LSMS combines two methods to achieve its goal. The first -- known as locally self-consistent multiple scattering, or LSMS -- applies density functional theory to solve the Dirac equation, a relativistic wave equation for electron behavior. The code has a robust history: it was the first to run at a sustained trillion calculations per second, earning its developers the prestigious 1998 Gordon Bell Prize. This approach, though, describes a system in its ground state at a temperature of absolute zero, or nearly -460°F. By incorporating a Monte Carlo method known as Wang-Landau, which guides the LSMS application, Eisenbach and his colleagues are able to explore technologically relevant temperature ranges.
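The Wang-Landau idea -- a random walk over energies that builds up an estimate of the density of states g(E), flattening its visit histogram and shrinking a modification factor as it converges -- can be illustrated on a toy system far smaller than the team's first-principles calculation. The sketch below is a plain-Python illustration on an 8-spin Ising ring, not the WL-LSMS code; the function name and parameters are invented for this example:

```python
import math
import random

def wang_landau_ising1d(n_spins=8, flat_tol=0.8, ln_f_final=1e-4, seed=1):
    """Estimate ln g(E), the log density of states, for a 1-D Ising ring
    with energy E = -sum_i s_i * s_{i+1}, via Wang-Landau sampling."""
    random.seed(seed)
    spins = [random.choice((-1, 1)) for _ in range(n_spins)]

    def energy():
        return -sum(spins[i] * spins[(i + 1) % n_spins] for i in range(n_spins))

    ln_g = {}          # running estimate of ln g(E), up to an additive constant
    hist = {}          # visit histogram used for the flatness test
    e_cur = energy()
    ln_f = 1.0         # modification factor, starting at ln f = 1 (f = e)

    while ln_f > ln_f_final:
        for _ in range(10000):
            i = random.randrange(n_spins)
            spins[i] = -spins[i]                 # propose a single spin flip
            e_new = energy()
            # accept with probability min(1, g(E_cur) / g(E_new)),
            # which biases the walk toward rarely visited energies
            delta = ln_g.get(e_cur, 0.0) - ln_g.get(e_new, 0.0)
            if random.random() < math.exp(min(0.0, delta)):
                e_cur = e_new
            else:
                spins[i] = -spins[i]             # reject: undo the flip
            ln_g[e_cur] = ln_g.get(e_cur, 0.0) + ln_f
            hist[e_cur] = hist.get(e_cur, 0) + 1
        # once the histogram is roughly flat, refine the modification factor
        if min(hist.values()) > flat_tol * sum(hist.values()) / len(hist):
            ln_f /= 2.0
            hist = {}
    return ln_g
```

In WL-LSMS the role of `energy()` is played by a full LSMS density-functional calculation for each proposed magnetic configuration, which is why each Monte Carlo step is so computationally expensive and why a petascale machine is needed.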
The work improves on previous advances in magnetic materials, Eisenbach said. He noted that materials research has led in the past century to more than a 50-fold increase in the magnetic strength of materials per volume and in the last decade to more than a 100-fold increase in the density of magnetic data storage. Other efforts that may benefit from the research include the design of lighter, more resilient steel and the development of future refrigerators that use magnetic cooling.
Aprà's team -- the other finalist led by an ORNL researcher -- achieved 1.39 petaflops on Jaguar in a first-principles, quantum mechanical exploration of the energy contained in clusters of water molecules. The team, comprising members from ORNL, Australian National University, Pacific Northwest National Laboratory (PNNL), and Cray Inc., used a computational chemistry application known as NWChem, which was developed at PNNL.
The application used 223,200 processing cores to accurately study the electronic structure of water by means of a first-principles quantum chemistry technique known as coupled cluster. The team will make its results available to other researchers, who will be able to use this highly accurate data as inputs to their own simulations.
The unprecedented power of the Jaguar system is necessary for these calculations because the bond between water molecules is far more complex than that between other small molecules, and less demanding computational approaches fail to describe the system accurately. Aprà's simulation of a 24-molecule cluster is the first to explore these bonds from first principles using quantum mechanical forces as implemented in the coupled cluster method.
"With a single water molecule it's easy to see the structure," Aprà explained. "But the chemical bond formed by several water molecules clustered together is long range in nature. It's something that cheaper [less computationally demanding] and less accurate quantum mechanical methods don't describe accurately."
Source: Oak Ridge National Laboratory