September 25, 2012
The concepts of dark matter and dark energy have flummoxed physicists, the latter ever since it was discovered that the expansion of the universe is accelerating. To this day, dark matter and dark energy represent a significant challenge to physicists’ understanding of the universe.
With the help of Mira, an IBM Blue Gene/Q supercomputer at the Argonne National Laboratory, physicists hope to garner a greater theoretical understanding of dark matter to better guide future astronomical explorations.
In an interview with Dr. Salman Habib, senior physicist at the DOE lab, The Atlantic profiled the cosmology simulation work under his direction. Habib’s project is called Cosmic Structure Probes of the Dark Universe, and over four years it is scheduled to occupy 150 million core hours of the Mira supercomputer. In order to better study theoretical dark matter and energy, Habib hopes to model the entire universe from its inception.
“The discovery potential of almost all of these missions relies crucially on theoretical modeling of the large-scale structure of the Universe,” he said. “As observational error estimates for various cosmological statistics edge towards the one percent level, it is imperative that simulation capability be developed to a point that the entire enterprise is no longer theory-limited.”
The search for dark energy began unofficially when Einstein introduced a cosmological constant while developing his theory of general relativity. He initially discarded the idea, but cosmologists later revived it. Pinning down the value of that constant has proved remarkably difficult, however, largely because a sufficiently detailed model of the universe has yet to be built.
Habib hopes to develop that model with Mira and the capabilities its new Blue Gene architecture provides. “This is possible,” Habib said, “because the next-generation, 10-petaflop IBM Blue Gene system will provide, at last, the computational power to resolve galaxy-scale mass concentrations in a simulated volume as large as state-of-the-art sky surveys.”
Mira, the number three system on the TOP500, which also tied for the top spot with another Department of Energy Blue Gene/Q system in June’s Graph 500 benchmark, is capable of performing 10 quadrillion calculations per second. More to the point, it is able to execute a graph operation as expansive as mapping the universe. According to the Argonne Leadership Computing Facility (ALCF), the machine will make 786 million core-hours available to scientists in 2013, eventually increasing to 5 billion hours of scientific computing time per year.
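To put those allocation figures in perspective, a quick back-of-the-envelope calculation (a sketch, assuming Mira’s published configuration of 49,152 nodes with 16 cores each) shows how the core-hour numbers relate to the machine itself:

```python
# Core-hour arithmetic for Mira, assuming its published
# configuration of 49,152 Blue Gene/Q nodes x 16 cores per node.
CORES = 49_152 * 16            # 786,432 cores total
HOURS_PER_YEAR = 365 * 24      # 8,760 wall-clock hours in a year

# Theoretical annual ceiling if every core ran around the clock.
max_core_hours = CORES * HOURS_PER_YEAR
print(f"ceiling: {max_core_hours / 1e9:.2f} billion core-hours/year")

# The 5-billion-hour target implies sustained utilization of roughly:
utilization = 5e9 / max_core_hours
print(f"implied utilization: {utilization:.0%}")

# Habib's 150-million-core-hour allocation, run on the whole machine,
# would occupy it for about:
wallclock_hours = 150e6 / CORES
print(f"full-machine time: {wallclock_hours:.0f} hours "
      f"({wallclock_hours / 24:.1f} days)")
```

The arithmetic suggests the 786 million core-hours promised for 2013 correspond almost exactly to every core running for 1,000 hours, and that the 5-billion-hour annual target implies roughly 70–75 percent sustained utilization of the full machine.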
Observing dark matter is not the only interesting astronomical phenomenon this model aims to capture. Habib also hopes to get to the bottom of galactic and star cluster formation. Projects like this are the type the ALCF had in mind when it noted on its website that “Mira ushers in a new era of scientific supercomputing.” Other projects include planet-scale climate models and detailed numerical analysis of carbon-12 reactions.
Full story at The Atlantic