December 15, 2010
This week, the Partnership for Advanced Computing in Europe (PRACE) announced its third petascale supercomputer for the organization's Tier 0 research infrastructure. The upcoming machine, known as SuperMUC, will be built by IBM and is estimated to deliver 3 peak petaflops when it is deployed in 2012 at the Leibniz Supercomputing Centre (LRZ) in Garching, Germany.
SuperMUC will follow the 1.0 petaflop "JUGENE" Blue Gene/P supercomputer, already in service at Forschungszentrum Juelich (FZJ), and the 1.25 petaflop Bull-built "Curie" system at the Commissariat a l'Energie Atomique (CEA) in France. Those machines currently hold the number 9 and number 6 spots, respectively, on the TOP500 list. When SuperMUC is installed at LRZ in the middle of 2012, it too will likely be a top 10 system, although by then all ten machines should be operating in the multi-petaflop range.
All Tier 0 machines will support PRACE's mission to provide a pan-European HPC research infrastructure for scientific computing. As of June 2010, four of the twenty member nations have anted up 100 million Euros apiece to fund supercomputer deployment and operation over the next five years. The goal is to field as many as six of these petascale systems for Europe during this time. With the tri-petaflop system from IBM, they're halfway there.
The SuperMUC system headed for LRZ will use IBM's System x iDataPlex platform, and will incorporate Intel's next-generation Xeon processors. Most likely that means SuperMUC will be sporting Sandy Bridge Xeons, given that these are next up on Intel's server processor roadmap.
The next-gen Xeons are scheduled to be released in Q3 2011 (Sandy Bridge EP) and Q4 2011 (Sandy Bridge EX), which should provide plenty of time for a mid-2012 system deployment. SuperMUC will incorporate more than 14,000 of these future chips, although the exact core count is still under wraps. Sandy Bridge Xeons will come in 4-core, 6-core, and 8-core flavors, so we can assume the system will have at least 56,000 x86 cores.
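The core-count arithmetic above can be checked with a quick back-of-the-envelope calculation. The 4/6/8-core options and the resulting per-core performance figures are extrapolations from the article's numbers, not confirmed specifications:

```python
# Rough core-count and per-core performance estimates for SuperMUC.
# Figures from the article: >14,000 chips, 3 peak petaflops.
# The per-chip core counts are Sandy Bridge assumptions, not confirmed specs.
chips = 14_000
peak_pflops = 3.0

for cores_per_chip in (4, 6, 8):
    total_cores = chips * cores_per_chip
    gflops_per_core = peak_pflops * 1e6 / total_cores  # 1 PFLOPS = 1e6 GFLOPS
    print(f"{cores_per_chip}-core chips: {total_cores:,} cores, "
          f"~{gflops_per_core:.1f} GFLOPS/core")
```

With 4-core parts this reproduces the article's 56,000-core floor; 8-core parts would push the machine past 100,000 cores.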
Storage-wise, SuperMUC will hook into a 10-petabyte file system based on IBM's GPFS. The GPFS storage system is spec'ed to deliver 200 GB/second of aggregate I/O bandwidth. A two-petabyte NAS storage system, with 10 GB/second of bandwidth, will also be available. Aggregate RAM is on the order of 384 terabytes.
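One way to read those storage figures together is a checkpoint/restart sizing check: how long would it take to drain the machine's entire memory to disk at full bandwidth? The numbers are from the article; the checkpoint scenario itself is illustrative:

```python
# Time to write all of SuperMUC's RAM to the GPFS file system at the
# quoted aggregate bandwidth -- a common checkpoint/restart sizing check.
# All figures come from the article; the scenario is illustrative.
ram_tb = 384           # aggregate memory, terabytes
gpfs_gb_per_s = 200    # aggregate GPFS I/O bandwidth, GB/s

seconds = ram_tb * 1000 / gpfs_gb_per_s  # 1 TB = 1000 GB
print(f"Full-memory checkpoint: ~{seconds:.0f} s (~{seconds / 60:.0f} minutes)")
```

Roughly half an hour for a full-memory dump, which suggests the 200 GB/second figure was sized with large-scale checkpointing in mind.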
Besides next-gen Xeons, SuperMUC will also employ a number of other new technologies. First, SuperMUC will use FDR (Fourteen Data Rate) InfiniBand as the cluster interconnect, a technology expected to be in the field by 2011. But the system's most significant innovation is its hot water cooling system, pioneered by IBM with its Aquasar supercomputer at ETH Zurich.
The advantages of water over air as a cooling medium are considerable. IBM says the system will consume 40 percent less energy than a comparable air-cooled machine. According to Klaus Gottschalk, IBM's lead HPC architect for the system, the processors and other components in the supercomputer will be cooled with water at temperatures of up to 60 degrees C (140 degrees F). The cooling system itself consists of micro-channel liquid coolers attached directly to the processors, where most of the heat is generated.
"With this chip-level cooling, the thermal resistance between the processor and the water is reduced to the extent that even cooling water temperatures of up to 60 degrees C ensure that the operating temperatures of the processors remain well below the maximally allowed 85 degrees C," explains Gottschalk. "The high input temperature of the coolant results in an even higher-grade heat at the output, which in this case is up to 65 degrees C."
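Gottschalk's figures imply a bound on the allowable chip-to-water thermal resistance. A rough sketch of that arithmetic, where the temperatures come from the article but the per-socket power draw is an assumed figure for illustration:

```python
# What chip-to-coolant thermal resistance keeps the processor under 85 C
# with 60 C inlet water? Temperatures are from the article; the 130 W
# per-socket power draw is an assumption for illustration only.
t_water_max = 60.0   # C, maximum coolant temperature
t_chip_max = 85.0    # C, maximum allowed processor temperature
power_watts = 130.0  # W, assumed per-socket dissipation (not in the article)

headroom = t_chip_max - t_water_max       # 25 C of thermal margin
r_max = headroom / power_watts            # simple steady-state bound, K/W
print(f"Thermal resistance must stay below ~{r_max:.2f} K/W")
```

A budget on the order of 0.2 K/W is tight for air-cooled heat sinks but achievable with micro-channel coolers bonded directly to the package, which is why the chip-level design matters here.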
SuperMUC also represents the first implementation of an energy-aware HPC software stack on x86, says Gottschalk. Application energy consumption will be monitored, stored, and reported to the user. When an application is ready to run, the scheduler will decide which processor frequency is optimal for it, based on administrative policies. System nodes not in use will be put in sleep mode or, if capacity expectations warrant, shut down entirely.
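A minimal sketch of the policy-driven behavior described above. The job classes, frequencies, and function names are hypothetical; LRZ's actual software stack has not been made public:

```python
# Illustrative energy-aware scheduling policy: choose a CPU frequency per
# job class from an administrative policy table, and decide node power
# states for idle capacity. All names and numbers here are hypothetical.

POLICY = {
    # job class -> CPU frequency in GHz (assumed values)
    "memory_bound": 2.0,    # little benefit from higher clocks
    "compute_bound": 2.7,   # run at full speed
    "default": 2.3,
}

def select_frequency(job_class: str) -> float:
    """Return the frequency the scheduler would set for a job's nodes."""
    return POLICY.get(job_class, POLICY["default"])

def node_state(idle: bool, demand_expected: bool) -> str:
    """Idle nodes sleep; with no expected demand, power off entirely."""
    if not idle:
        return "running"
    return "sleep" if demand_expected else "powered_off"

print(select_frequency("memory_bound"))               # 2.0
print(node_state(idle=True, demand_expected=False))   # powered_off
```

The key design point the article describes is that frequency selection happens at job launch, driven by site policy rather than by each application.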
The rationale, of course, is to reduce power consumption as much as possible. Although IBM and PRACE are not revealing SuperMUC's expected power draw, for a 3-petaflop supercomputer based on x86 CPUs, it's apt to be considerable. And in Europe, where energy costs tend to be even higher than in the US, power is going to be a driving consideration for these big PRACE systems.
The price tag for SuperMUC, which includes power and other operational costs for five to six years, is 83 million Euros. That doesn't include the additional 50 million Euros needed to expand LRZ's buildings to house the new system. That funding, as well as the aforementioned operational costs, will be provided by the State of Bavaria and the German federal government.