March 25, 2013
The European Space Agency’s massive Planck telescope has been hard at work digging through the universe’s most ancient light, the cosmic microwave background, in search of the original spark of the Big Bang.
This clue-yielding light has traveled 13.8 billion years to reach Planck’s detectors and is so faint that the telescope has to scan every point on the sky an average of 1,000 times to pick it out. The result is an enormously detailed map of the cosmos, not to mention some interesting spin-outs of the original research mission.
As one might imagine, this sky-mapping and light-combing process requires some serious HPC resources. "So far, Planck has made about a trillion observations of a billion points on the sky," said Julian Borrill of the Lawrence Berkeley National Laboratory, Berkeley, Calif. "Understanding this sheer volume of data requires a state-of-the-art supercomputer."
But the scientists behind the project point to another particularly difficult aspect of their research that necessitates a high-performance system.
To reach the light sources and build accurate models, researchers must plow through a great deal of noise from Planck's sensors, teasing the critical signals apart from the static they are wrapped in. Project scientists point to this noise as one of the fundamental challenges of the mission and have turned to a top 20 system to tackle the problem.
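The core idea behind that teasing apart is simple to sketch, even if the real pipeline is vastly more sophisticated: because each point on the sky is observed again and again, averaging the repeated, noisy samples beats the detector noise down and lets the faint underlying signal emerge. The toy Python example below illustrates the principle only; the pixel counts, noise levels, and binning scheme are illustrative assumptions, not Planck's actual analysis code.

    # Toy sketch (not Planck's pipeline): combine repeated noisy observations
    # of the same sky pixels into a map by simple averaging.
    import numpy as np

    rng = np.random.default_rng(0)

    n_pixels = 1000          # toy sky with 1,000 pixels (assumption)
    n_obs_per_pixel = 1000   # roughly the average number of scans per point
    true_sky = rng.normal(0.0, 1.0, n_pixels)   # the faint signal to recover
    noise_sigma = 10.0                          # per-observation noise, 10x the signal

    # Simulate the time-ordered data: each sample is signal plus detector noise.
    pixel_hits = np.repeat(np.arange(n_pixels), n_obs_per_pixel)
    samples = true_sky[pixel_hits] + rng.normal(0.0, noise_sigma, pixel_hits.size)

    # Bin the samples back onto the sky: sum per pixel, divide by the hit count.
    signal_sum = np.bincount(pixel_hits, weights=samples, minlength=n_pixels)
    hit_count = np.bincount(pixel_hits, minlength=n_pixels)
    recovered_map = signal_sum / hit_count

    # Averaging N observations reduces the noise by roughly sqrt(N) (~32x here).
    residual = np.std(recovered_map - true_sky)
    print(f"per-sample noise: {noise_sigma:.1f}, residual map noise: {residual:.2f}")

Scaled up to a trillion observations, that averaging and noise modeling is exactly the kind of work that demands a large system.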
At the heart of this signal search-and-filter process is the Opteron-powered “Hopper” Cray XE6 system at the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Lab.
According to NASA, the computations needed for Planck's current data release required “more than 10 million processor-hours on the Hopper computer. Fortunately, the Planck analysis codes run on tens of thousands of processors in the supercomputer at once, so this only took a few weeks.”
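That scale of parallelism follows a familiar pattern: the time-ordered data are split across many processors, each builds its own piece of the map, and the pieces are summed at the end. The sketch below shows that pattern with mpi4py; the library choice, array sizes, and noise figures are assumptions for illustration, not a description of the actual Planck codes.

    # Hedged sketch of the parallel pattern: each MPI rank bins its own slice
    # of time-ordered data, then the per-rank maps are summed with a reduction.
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    n_pixels = 1000
    rng = np.random.default_rng(rank)  # each rank holds a different chunk of data

    # Toy stand-in for this rank's share of the time-ordered data (assumption).
    pixel_hits = rng.integers(0, n_pixels, 100_000)
    samples = rng.normal(0.0, 10.0, pixel_hits.size)

    # Local accumulation on each rank.
    local_signal = np.bincount(pixel_hits, weights=samples, minlength=n_pixels)
    local_hits = np.bincount(pixel_hits, minlength=n_pixels).astype(float)

    # Global reduction: sum the per-rank accumulators, then normalize on rank 0.
    global_signal = np.zeros(n_pixels)
    global_hits = np.zeros(n_pixels)
    comm.Reduce(local_signal, global_signal, op=MPI.SUM, root=0)
    comm.Reduce(local_hits, global_hits, op=MPI.SUM, root=0)

    if rank == 0:
        sky_map = global_signal / np.maximum(global_hits, 1.0)
        print("combined map from", size, "ranks, shape", sky_map.shape)

Because each rank works independently until the final sum, the same approach scales from a laptop to the tens of thousands of cores described above.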
Hopper is NERSC’s first petascale system; it came in at number 19 on the most recent Top 500 list, with 217 TB of memory spread across 153,216 cores. The center is looking to continue the Cray tradition by tapping Cray's Cascade system, as announced around ISC last year.