June 23, 2006
Late last week, Cray announced it had signed a $200 million contract with the Department of Energy's Oak Ridge National Laboratory (ORNL) to provide the lab with a petaflops-speed supercomputer. Part of the contract will involve upgrading the facility's existing Cray XT3 supercomputer over the next two years. The deployment of the petaflops machine is planned for late 2008.
The contract represents the first purchase of a supercomputer that promises at least one petaflops of peak performance. ORNL, however, is focused on computing performance for its own big science applications, rather than on the petaflops metric itself.
"It is, indeed, the case that this system will have peak speeds of a petaflops," says Thomas Zacharia, Associate Laboratory Director Computing and Computational Sciences ORNL. "However, the system specifications for this machine includes important attributes such as memory, memory bandwidth, interconnect and I/O bandwidth, storage etc., in addition to performance expectations on real applications and benchmarks, which would make this a very balanced machine for petascale applications. Petaflops is an incomplete and inadequate description of the capability of this or any other system."
"The Leadership Computing Facility at the Oak Ridge National Laboratory is focused on enabling new discoveries in key science and engineering areas such as nanosciences, biosciences, environmental sciences and energy technologies," continues Zacharia. "We are working with the user communities in defining the key applications focused on delivering petascale science on day one as the machine comes online in late 2008."
So what does a petaflops-speed machine cost these days? Cray is not divulging that information. An unspecified portion of the $200 million will be used to upgrade ORNL's existing "Jaguar" XT3 machine from its current 25 teraflops to an eventual 250 teraflops. This work will be accomplished by replacing the existing single-core Opteron processors with dual-core (and eventually quad-core) versions, as well as by adding more processors to the system. The final upgrade is planned to be completed by the end of 2007.
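As a rough illustration of how that upgrade path multiplies out, here is a back-of-envelope sketch in Python. The socket counts, clock speeds, and flops-per-cycle figures are illustrative assumptions, not published Cray or ORNL specifications.

```python
# Back-of-envelope sketch of the Jaguar upgrade path described above.
# All per-socket figures are illustrative assumptions, not Cray's published specs.

def peak_teraflops(sockets, cores_per_socket, ghz, flops_per_cycle):
    """Peak performance in teraflops for a homogeneous Opteron system."""
    return sockets * cores_per_socket * ghz * flops_per_cycle / 1e3  # gigaflops -> teraflops

# Roughly 5,000+ single-core Opterons (assumed ~2.4 GHz, 2 floating-point
# results per clock) put the existing XT3 near its current 25 teraflops.
print(peak_teraflops(sockets=5_200, cores_per_socket=1, ghz=2.4, flops_per_cycle=2))  # ~25 TF

# Swapping in quad-core parts (assumed 4 flops/cycle per core) and adding
# more sockets is what carries the system toward the 250-teraflops target.
print(peak_teraflops(sockets=6_800, cores_per_socket=4, ghz=2.4, flops_per_cycle=4))  # ~261 TF
```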
A year after that, Cray's new 'Baker'-class petaflops machine will be installed at ORNL. That system will contain the most advanced Opteron processors available in the 2008 timeframe, presumably quad-core or better.
According to Jan Silverman, senior vice president of corporate strategy and business development at Cray, the company can't predict the exact level of Opteron technology two years in advance, so it doesn't know precisely how many sockets will be required to reach a petaflops. But he expects it will be between 20,000 and 25,000 sockets. The exact number will depend on the clock frequency and the overall capability of the Opteron technology in 2008, but the commitment to the petaflops metric will not change.
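The arithmetic behind that estimate is simple to reconstruct. The sketch below assumes quad-core Opterons delivering four floating-point results per core per clock; these are assumptions for illustration, not figures from Cray or AMD.

```python
# Illustrative arithmetic behind the 20,000-25,000 socket estimate.
# Per-socket performance figures are assumptions about 2008-era quad-core
# Opterons, not numbers from Cray or AMD.

PEAK_TARGET_GF = 1_000_000  # one petaflops expressed in gigaflops

def sockets_needed(cores_per_socket, ghz, flops_per_cycle):
    gf_per_socket = cores_per_socket * ghz * flops_per_cycle
    return PEAK_TARGET_GF / gf_per_socket

# A 2.5 GHz quad-core part (4 flops/cycle/core = 40 GF per socket) implies
# about 25,000 sockets...
print(round(sockets_needed(cores_per_socket=4, ghz=2.5, flops_per_cycle=4)))  # 25000

# ...while a 3.0 GHz part brings that down toward 21,000, which is why the
# final count hinges on the clock frequency AMD actually delivers in 2008.
print(round(sockets_needed(cores_per_socket=4, ghz=3.0, flops_per_cycle=4)))  # ~20833
```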
"That's in the contract," says Silverman. "It will be a petaflops machine."
In relation to its performance, the system will be relatively compact, although Cray is not reporting exactly how much floor space it will occupy. However, Silverman observes that it would be hard to imagine how it could be packed any more densely. A proprietary liquid cooling system will be used to keep the system at a reasonable operating temperature. It is said to be even more advanced than the cooling technology used for Cray's current big vector machines.
The new supercomputer will also incorporate Cray's next-generation interconnect. Because of the very large number of processors required for this machine, the network technology will be crucial for delivering performance on real-world supercomputing applications. According to Silverman, it will borrow elements from the three Cray interconnect technologies represented by the X1/X1E, the XT3, and the XD1 architectures. The three technologies represent different approaches that trade off communication bandwidth and latency against cost, depending upon the intended application of the machine. But, Silverman says, the new interconnect promises to be a lot faster than any of these, and with very low latency.
"Depending on the machine we're building, we architected the interconnect slightly differently," says Silverman. "After going down all three of the paths, we came to the conclusion that we could architect a single networking infrastructure that met the needs of all our users. And that's what is in the Baker machine."
The petaflops system also represents something of a milestone for Cray's product line. Instead of the three separate architectures now being built, the Baker machine marks the first step toward converging these systems under Cray's Adaptive Computing vision. Silverman says that with the Baker architecture, all the new systems will move together as one, as Cray phases in its new approach to supercomputing.
Adaptive Computing is Cray's grand vision that marries heterogeneous architectures with petascale computing, and it is the approach behind the Cascade design that the company submitted for the DARPA High Productivity Computing Systems (HPCS) competition. Between 2008 and 2010 (the planned date for Cascade's introduction), the Cray systems will be further refined and will include a number of other significant innovations. But the Baker architecture will provide the foundation for subsequent designs.
"In some sense, you can look at it as a preliminary test vehicle for the DARPA concepts, while providing the most sustained petaflops machine that you could possibly build," says Silverman.
The Baker systems would appear to compete head-to-head with the IBM Blue Gene/L machines, or, more precisely, with their successors. But the Cray design has some interesting differences from the IBM offering. The most obvious one is that the current Blue Gene/L systems are based on 700 MHz PowerPC 440 processors, a lower-performing chip than even today's Opterons. This allows the Baker machine to use proportionately fewer processors than a Blue Gene to reach the same level of performance.
Is this an advantage? Maybe. There has to be a balance between processor performance and the ability to scale software across so many individual cores. For example, if you just scaled up the current Blue Gene/L technology to one petaflops, it would require around 400,000 cores. It would be tough to scale applications to take advantage of all those CPUs.
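The 400,000-core figure follows from the same kind of arithmetic. The sketch below assumes each 700 MHz PowerPC 440 core delivers four floating-point results per clock through its double FPU; that per-core rate, and the Opteron figure used for comparison, are assumptions for illustration only.

```python
# Rough check of the "around 400,000 cores" figure for a petaflops built from
# Blue Gene/L-class parts, compared against an Opteron-based approach.
# The flops-per-cycle rates are assumptions used for illustration.

PEAK_TARGET_GF = 1_000_000  # one petaflops in gigaflops

bgl_gf_per_core = 0.7 * 4            # 700 MHz x 4 flops/cycle = 2.8 GF per core
opteron_gf_per_socket = 4 * 2.5 * 4  # assumed quad-core Opteron at 2.5 GHz

print(round(PEAK_TARGET_GF / bgl_gf_per_core))        # ~357,000 PowerPC cores
print(round(PEAK_TARGET_GF / opteron_gf_per_socket))  # ~25,000 Opteron sockets
```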
"Quite frankly, the challenge at the end of the day is: How do you write an application that goes across so many processors," asks Silverman? Delivering a lot more processors doesn't necessarily mean that the application software can take advantage of it. If your application can only use 5000 processors, you want them to use the fastest 5000 you can find."
Systems built using cluster architectures can achieve better price/performance than something like a Baker or a Blue Gene. But that comparison is usually made on the basis of theoretical peak performance. Once you start to evaluate the systems using sustained performance on challenging applications, the advantage of clusters diminishes.
"If they wanted this thing just to be a capacity machine, they could probably have gotten the same amount of flops by just building a vanilla cluster," says Silverman. "Because this is a capability machine, this is all about running very large applications and getting the maximum bandwidth between the applications. These applications don't run well on clusters because the network gets in the way. It's not a machine that was built just to be a petaflops machine so we could run a benchmark and say we have it. And consequently it's not the cheapest petaflops machine you could build."
Silverman admits he doesn't expect the Cray machine to be the only petaflops system by the time it's deployed in 2008. He realizes that all the big HPC players are busy developing their own versions. But Silverman does think the Cray machine will be superior to others in this category in its ability to deliver real application performance. Like ORNL and the other national laboratories, Cray is really focused on delivering sustained performance on challenging, big science applications.
"How much of the petaflops can you actually get in your application," asks Silverman? "That's where I think this machine will outshine anything else that's out there."