July 28, 2006
For 90 years, physicists have tried to solve the equations that constitute Albert Einstein's theory of general relativity -- the concept that matter, space and time are intertwined. But some of Einstein's abstract equations have proven too complicated to reliably calculate using traditional computer software and hardware.
Until now, that is. Thanks to the ingenuity of NASA scientists and computer technology from Silicon Graphics, Inc., that list of incalculable problems is growing shorter.
Recently, physicists at NASA Goddard Space Flight Center successfully simulated the merger of two massive, orbiting black holes -- an achievement that had eluded physicists for decades. Relying on Columbia, NASA's record-setting supercomputer built from 20 SGI Altix systems, the Goddard team was able to simulate how colliding black holes throw off gravitational waves that ripple through the fabric of the universe.
Variations on 24 equations based on Einstein's relativity theory helped create the simulation of colliding black holes with equal mass -- an event whose effects can continue for years. The black hole calculation stands out as the largest astrophysical "single run" ever performed on a NASA computer -- the equivalent of 18 years of CPU time devoted to a single problem.
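To put that figure in perspective, here is a back-of-envelope calculation of what 18 years of CPU time means in wall-clock terms. The assumption that the full run was spread evenly over all 2,032 processors is illustrative only; the article does not say how the work was scheduled.

```python
# Back-of-envelope: wall-clock time for a run that consumed 18 years
# of CPU time, assuming (hypothetically) an even spread over the
# 2,032 processors mentioned later in the article.
HOURS_PER_YEAR = 365.25 * 24
cpu_hours = 18 * HOURS_PER_YEAR          # ~157,788 CPU hours
processors = 2032
wall_hours = cpu_hours / processors      # ~77.7 hours
print(f"{cpu_hours:,.0f} CPU hours ~= {wall_hours:.1f} hours "
      f"({wall_hours / 24:.1f} days) of wall-clock time")
```

Even under this idealized assumption, a single run ties up a fifth of the machine for more than three days.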
"These mergers are by far the most powerful events occurring in the universe, with each one generating more energy than all of the stars in the universe combined," said Joan Centrella, head of the Gravitational Astrophysics Laboratory at Goddard. "By combining our latest codes with the tremendous computing power of Columbia, we now have realistic simulations that will help guide gravitational wave detectors coming online."
To run the simulations on Columbia, Goddard physicists developed sophisticated software called Hahndol, an English representation of the Korean word for "one stone" -- or in German, "ein stein."
The Goddard team scaled its Hahndol code across up to 2,032 processors on Columbia -- roughly one-fifth of the system's total processor count. By linking four 512-processor Altix systems via the high-speed SGI NUMAlink interconnect, NASA enabled the scientists to access the memory of all the processors at once. The project, begun some 18 months ago, has required millions of CPU hours, and individual calculations involved hundreds of gigabytes of data.
According to John Baker, NASA astrophysicist and one of the project leaders at NASA Goddard, calculating some of Einstein's more involved equations had proven elusive because representing the three-dimensional fabric of the universe is enormously complex, and simulating its behavior grows increasingly complicated as the calculation proceeds. Previous calculations, relying on software less sophisticated than Hahndol, would before long produce results that were obviously inaccurate.
"You can picture the simulation taking place on a kind of 3D graph paper with hundreds of points, and we'll calculate 80 variables for each point," said Baker. "If the coordinates aren't accurate, things go awry very quickly."
NASA pursued the simulations because gravitational waves are notoriously difficult to detect and measure. By successfully simulating the waves, the Goddard researchers are assisting another NASA project: the Laser Interferometer Space Antenna (LISA). Made up of three spacecraft flying just over 3 million miles apart in an equilateral triangle, LISA will carry extraordinarily precise instruments that track one another and -- more importantly -- detect whether a gravitational wave passes between them. The sensitive instruments will register even the slightest force caused by a passing wave. For instance, if the laser that connects two LISA spacecraft is nudged by as little as the width of an atom, the system will detect it.
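The atom-width example translates into a dimensionless strain, the standard figure of merit for gravitational-wave detectors. This is a rough estimate using only the numbers in the article plus an assumed atomic width of about 1e-10 m; the exact LISA sensitivity target is not stated here.

```python
# Rough sensitivity estimate for a LISA-like detector.
# Arm length: "just over 3 million miles" between spacecraft (article).
# Displacement: ~width of an atom, taken as 1e-10 m (assumption).
arm_length_m = 3.1e6 * 1609.34           # ~5e9 m between spacecraft
displacement_m = 1e-10
strain = displacement_m / arm_length_m   # fractional change in arm length
print(f"strain ~ {strain:.1e}")          # on the order of 2e-20
```

Detecting a fractional length change of a few parts in 10^20 is what makes accurate waveform predictions from simulations like Goddard's so valuable: detectors need templates to pick such faint signals out of the noise.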
The long-term project should help NASA scientists learn more about how black holes merge and how dying stars are consumed by black holes.
In the simulation created jointly by NASA Goddard and scientists at NASA Ames Research Center, the black holes seen merging are each roughly 4 million times the mass of the sun. An animation of the simulation, created by Chris Henze, senior research scientist in NASA's Advanced Supercomputing Division, can be viewed at http://www.nasa.gov/centers/goddard/universe/gwave.html. The 29-second animation of circling black holes illustrates the final stage of a rapidly accelerating process. Though the entire merger unfolds over hundreds of millions of years, the last stage is over in only minutes.
"The work of the Goddard scientists is significant," said Henze, who rendered the simulation that was computed on Columbia across 10 nodes of one of NASA Ames' two HyperWall displays. "These are very difficult problems. People have been working on them for decades."
One of the world's most powerful computers, the Columbia supercomputer is built from 20 SGI Altix systems, each powered by 512 Intel Itanium 2 processors, and has revolutionized the rate of scientific discovery at NASA. On NASA's previous supercomputers, for instance, simulations showing five years' worth of changes in ocean temperatures and sea levels took a year to compute. Using a single SGI Altix system, scientists can now simulate decades of ocean circulation in just days, and in greater detail than ever before. And the time required to assess the flight characteristics of an aircraft design, which involves thousands of complex calculations, has dropped from years to a single day.
Recently, NASA added 600 TB of SGI InfiniteStorage 6700 storage capacity to the 10,240-processor Columbia system, and acquired a new 4 Gbit infrastructure to optimize data management with SGI InfiniteStorage Shared Filesystem CXFS. Originally outfitted with 440 TB of storage, NASA's Columbia supercomputer required additional storage capacity to accommodate the massive data management, access and retrieval demands of its broad user base.