July 30, 2008
In a brief press release issued on Wednesday, the Barcelona Supercomputing Center (BSC) announced a prototype computer system that will be the basis of a future 10-petaflop supercomputer. The prototype is called MariCel, which means "sea and sky" in Catalan, but which I've interpreted to mean that the Spanish are smitten with the Cell processor.
Apparently the system is a hybrid architecture, based on Cell and Power6 processors. Although not mentioned in the press release, the 10-petaflop machine is probably "MareIncognito," the system IBM and BSC announced back in 2007 as an R&D collaboration. According to Wednesday's announcement, MariCel will define the hardware components and the software stack for the 10-petaflop machine, which is scheduled to be installed early in the next decade.
"MariCel is part of an initiative to create a common supercomputing structure for Europe. On this prototype, similar to the architecture of the American Roadrunner, we will test the latest software technologies, some of them developed at the BSC. We think that in Spain we will be able to install supercomputers 100 times more powerful than the current MareNostrum in 2011 or 2012," says Francesc Subirada, associate director of BSC.
No other details about the hardware were forthcoming, but if the general plan is to follow the Roadrunner design, that would mean the system will be constructed from Power6-based JS22 blades and Cell-based QS22 blades. Of course, if the hardware won't be installed for another three or four years, I imagine IBM will have refreshed its blade lineup in the interim. Also, since Power7 should be hitting its stride by 2011, the choice of Power6 seems a bit odd. On the other hand, the petaflop Roadrunner system didn't use the latest quad-core Opterons either. IBM opted for the older dual-core Opterons, letting the new Cell chips do most of the heavy lifting.
One set of applications already slated for the new Spanish super will be produced under the Kaleidoscope Project, a collaboration developing next-generation seismic imaging technology. BSC, Repsol, 3DGeo Development, and the Spanish Research Council are the project's partners. The software will employ Reverse Time Migration (RTM) to accelerate oil and gas discovery and make it possible to accurately locate reserves eight miles beneath the ocean surface. Since a single drilling test can cost over $150 million ($30 million more than Roadrunner's $120 million price tag), the energy companies are kind of finicky about choosing the right spots.
The catch is that the software needs a lot of FLOPS to be of practical use. According to the Kaleidoscope Web site, one iteration of an average seismic imaging production run takes four months on a 10-teraflop system, but just a day and a half on a petascale machine. In anticipation of the future Cell/Power6 multi-petaflop machine, BSC is working on porting and optimizing the software to execute on Cell processors. Today the Kaleidoscope code runs on BSC's MareNostrum supercomputer, a system that tops out at 94 teraflops.
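Those two figures are consistent with simple linear scaling by peak FLOPS: 100x the FLOPS cuts four months (roughly 120 days) down to about 1.2 days. A quick back-of-the-envelope sketch (assuming perfect linear scaling, which real seismic codes rarely achieve in practice):

```python
# Back-of-the-envelope check of the Kaleidoscope runtime claim,
# under the idealized assumption that runtime scales linearly with
# peak FLOPS. The helper name is mine, not from any Kaleidoscope code.

def scaled_runtime_days(baseline_days, baseline_tflops, target_tflops):
    """Estimate runtime on a faster machine, assuming linear scaling."""
    return baseline_days * baseline_tflops / target_tflops

# One iteration: ~4 months (~120 days) on a 10-teraflop system.
baseline_days = 120.0
baseline_tflops = 10.0

# A 1-petaflop (1000-teraflop) machine:
petaflop_days = scaled_runtime_days(baseline_days, baseline_tflops, 1000.0)
print(f"Estimated runtime at 1 petaflop: {petaflop_days:.1f} days")  # ~1.2

# MareNostrum's 94 teraflops, for comparison:
marenostrum_days = scaled_runtime_days(baseline_days, baseline_tflops, 94.0)
print(f"Estimated runtime at 94 teraflops: {marenostrum_days:.1f} days")
```

The 1.2-day estimate lines up with the "day and a half" quoted on the project site, which suggests the claim is just FLOPS ratio arithmetic rather than a measured result.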
Posted by Michael Feldman - July 29, 2008 @ 9:00 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.