March 26, 2013
The upcoming incarnation of the Top 500 list will see another high-placing commercial or “civilian” super, one that will likely dramatically outpace the performance of the current top private-sector system, Germany’s Hermit.
While Hermit is buried within larger-scale healthcare and civil initiatives, a new system from SGI, tweaked from its ICE X HPC line, is set to meet the needs of a single company. French oil and gas giant Total will enjoy a top spot on June’s list with its new “Pangea” super, which will be harnessed for seismic processing and reservoir modeling.
The approximately $78 million ICE X-based system would clock in at about ninth place on the Top 500 list as it stands now, ringing in at around 2.3 petaflops in terms of theoretical peak performance. SGI expects it to take the title of top commercial system this year, which is probably not an unreasonable assumption given its predicted performance across its 110,592 Xeon E5-2670 cores and 442 TB of memory spread across this distributed-memory system.
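For the curious, that 2.3 petaflop figure squares with a quick back-of-the-envelope calculation from the core count alone. A minimal sketch, assuming (these numbers are not from the article) that the Xeon E5-2670 runs at a 2.6 GHz base clock and sustains 8 double-precision flops per core per cycle via AVX:

```python
# Rough theoretical peak for Pangea from published core count.
cores = 110_592           # core count, from the article
clock_hz = 2.6e9          # assumed E5-2670 base clock (2.6 GHz)
flops_per_cycle = 8       # assumed AVX double-precision flops/core/cycle

peak_flops = cores * clock_hz * flops_per_cycle
print(f"{peak_flops / 1e15:.2f} petaflops")  # prints "2.30 petaflops"
```

Note that this is theoretical peak (Rpeak); the measured Linpack number (Rmax) that determines Top 500 placement typically comes in somewhat lower.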
On the specs front, SGI points to the system’s data management capabilities, consisting of 7 PB of storage built on its native InfiniteStorage disk arrays (17,000 of them, to be exact) and its DMF tiered storage virtualization, backed by integrated Lustre.
The energy-seeking super will be housed at Total’s Jean Feger Scientific and Computing Centre in France, where it is expected to deliver a 10x boost over the company’s current seismic visualization capabilities. Total hopes to extract far more precise views of the models that point to oil and gas reserves by revealing greater detail about the sub-surface conditions that guide exploration.
Total has been an SGI shop for well over a decade. When we spoke with SGI 20-year veteran product marketing VP, Bill Mannel, this morning, he was able to rattle off a range of systems that SGI has churned out for Total over the last ten or more years, both on the oil and gas giant’s home turf in France and at its Houston center.
Before moving to the ICE X, Total ran an Altix Itanium platform, which was replaced with two earlier incarnations of the ICE line that have since evolved into the ICE X. Mannel says that what’s notable about the ICE X is that, unlike previous generations, it was the result of a rip-and-replace rethinking of the overall system design, a first-of-its-kind collaborative effort between the Rackable and SGI sides of the business.
According to Mannel, the new design that backs the InfiniBand-connected ICE X platform was “a reaction to the market need…customers were focusing on efficiency from the standpoint of cooling and its collateral effect on power to do that cooling.” To address these demands, he said they looked backward to find new solutions, returning to water rather than air as the cooling mechanism.
This was appealing for Total, whose CIO for Exploration and Production said that the efficiency of the ICE X system was “a key factor in [the] selection of the SGI ICE X for the Pangea system.” He noted that the choice “represents high computational power using a minimal amount of energy, which gives Total the smallest footprint and lowest TCO possible.”
Many of the innovations on the power and cooling side come from some of the folks who were ushered into the SGI fold when Silicon Graphics acquired Cray Research back when you were just a baby. The talent acquired at the time looked to liquid cooling alone to keep the still-developing supercomputers of that era from going up in smoke, a trend that continued until air cooling hit the circuit in the 2000s. The Cray Research and Rackable arms joined forces on the ICE X to bring water back into the power and cooling picture.
Using water directly on the blades is one of the system’s cooling innovations. In essence, they’ve dropped cold sinks onto the processors themselves to cool the hottest elements, then opted for air on memory, drives, network components and other less steamy pieces. This means SGI has been able to sidestep much of the infrastructure that usually comes with water cooling, such as condensers, cooling towers and the plumbing to carry the water away, without sacrificing cooling capability. The warm-water cooling, closed-loop airflow and unified cooling racks of the self-contained ICE X M-Cell, the configuration Total opted for with this installation, were key attractors, says Mannel.
Massive systems for the oil and gas industry are nothing new, but we’ve seen some key announcements over the last two years of these companies seeking to delve into petascale territory. For instance, just over a year ago Russian gas giant Gazprom put out feelers for its own petascale system. And it’s not unusual to stumble upon an oil and gas prospector at the SC or ISC shows.
But anyway, for the curious, here is a video overview of the system Total selected…