May 18, 2007
The focus of this year's Linux Cluster Institute (LCI) Conference was big clusters. Not necessarily raw performance per se, but every other factor required to acquire, host, provision, and maintain such systems and to achieve scalable performance from them as a whole.
The first two keynotes set the tone by describing the perils and pitfalls of installing huge systems and getting them to perform. Even after a few years, all of the pieces don't necessarily play together well enough to meet the original design objectives. Horst Simon began the first day with an excellent philosophical discussion about the current state of high performance computing (HPC), hardware architecture, and the political atmosphere surrounding the drive to assemble the world's first petaflop machines. He noted that even though we have started construction of a petaflop computer, there are presently only two general-purpose machines in the world capable of 100+ teraflops on the Linpack benchmark.
This was a perfect segue from the opening keynote Monday evening by Robert Ballance of Sandia National Laboratory (SNL) about the difficulties of assembling Red Storm and getting it to perform. Even though Sandia has years of experience building and maintaining some of the largest supercomputers in the world, Red Storm turned out to be a unique experience for them. Why? Because it was much bigger than anything they had previously built. So the old saw in computing, "if it's 10x bigger, it is something entirely new," still holds, and we should not expect a petaflop machine to come together quietly at this moment in HPC time.
One interesting observation Horst made in his talk is that programming a 100,000+ core machine using MPI is akin to programming each transistor individually by hand on the old Motorola 68000 processor, which of course had only 68,000 transistors. That wasn't so long ago to most of us, and his point is that we can't grow much more in complexity unless we have some new software methodology for dealing with large systems.
The discussion generated by his comments never explicitly addressed the fact that we are going to need new compiler technology sooner rather than later to handle this complexity. Neither MPI nor OpenMP is the answer by itself.
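To make the complexity argument concrete, here is a minimal hybrid MPI+OpenMP sketch in C -- my own illustration, not code from any talk. Even this toy example shows the explicit bookkeeping the programmer owes to every rank and every thread, which is exactly what becomes unmanageable at 100,000+ cores.

```c
/* Minimal hybrid MPI+OpenMP sketch (illustrative only).
 * Every rank and thread is managed explicitly by hand. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, rank, nranks;

    /* Request thread support so OpenMP regions can coexist with MPI. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    double local = 0.0;

    /* Each rank spawns threads; each thread computes its own slice. */
    #pragma omp parallel reduction(+:local)
    {
        int tid = omp_get_thread_num();
        /* Hand-partitioned work: one explicit piece per (rank, thread). */
        local += (double)(rank * omp_get_num_threads() + tid);
    }

    /* Explicit message passing to combine per-rank results. */
    double global = 0.0;
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum over %d ranks = %f\n", nranks, global);

    MPI_Finalize();
    return 0;
}
```

Two levels of explicit decomposition for one trivial sum; scale that discipline up to a full application on a six-figure core count and the case for new compiler and language technology makes itself.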
The rest of the talks on day one had a heavy emphasis on parallel I/O systems and the difficulties of getting them to scale on large cluster systems. The problem here is that some of the tests can take so long (Laros, SNL) that the production system would be unavailable for unacceptably long periods of time. So I/O system administrators are forced to simulate the I/O systems on smaller development configurations. Presently, it seems that scalable I/O systems are limited to about one KiloClient (my term) for single-process/single-file I/O scenarios. Forget about it if you're talking about shared-file I/O. I think this is still pretty darn good progress, but the performance variability of these I/O systems is large, and it appears that they are very sensitive to a huge number of environmental parameters. Repeatability seems to be somewhere over the HPC horizon.
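For readers who have not run into the distinction, the sketch below (my own, with illustrative file names and sizes) contrasts the two patterns: file-per-process output, the "single-process/single-file" case that currently scales to roughly a thousand clients, and shared-file output via MPI-IO, where the file system must coordinate every writer.

```c
/* File-per-process vs. shared-file I/O (illustrative sketch). */
#include <mpi.h>
#include <stdio.h>

#define NWORDS 1024

int main(int argc, char **argv)
{
    int rank;
    int buf[NWORDS];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    for (int i = 0; i < NWORDS; i++)
        buf[i] = rank;

    /* Pattern 1: file-per-process. Each client writes its own file,
     * so there is no coordination between writers. */
    char name[64];
    snprintf(name, sizeof(name), "out.%05d", rank);
    FILE *fp = fopen(name, "wb");
    fwrite(buf, sizeof(int), NWORDS, fp);
    fclose(fp);

    /* Pattern 2: shared file. All clients write disjoint ranges of one
     * file; the file system must serialize the metadata and lock
     * traffic, which is where scaling falls apart. */
    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "out.shared",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
    MPI_Offset off = (MPI_Offset)rank * NWORDS * sizeof(int);
    MPI_File_write_at_all(fh, off, buf, NWORDS, MPI_INT, MPI_STATUS_IGNORE);
    MPI_File_close(&fh);

    MPI_Finalize();
    return 0;
}
```

The code difference is a handful of lines; the behavioral difference at a thousand clients is, by the accounts given here, the difference between working and not.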
One more issue pertaining to large I/O systems: "operability" is not a synonym for "capability."
An interesting talk by Andrew Uselton and Brian Behlendorf from Lawrence Livermore National Laboratory discussed the difficulties they had with the I/O system delivered with Blue Gene/L. They "sweated bullets" (their term, not mine) for six months trying to get the I/O system to perform up to design specs. Internally, they referred to it as "the death march." The system, as delivered, "worked." However, the severely oversubscribed network design left them with initial throughput at only about half of the 30+ GB/sec design target. This seems to be akin to spending two hundred grand on a Ferrari and discovering that it won't get you to the market faster than your neighbor's Buick without considerable tuning. Not that I'm blaming IBM. This talk could have described systems from any other manufacturer, and there was no sensible way to build the I/O system without oversubscription at the time. It just points out that these complex systems that push the state of the art do not come out of the box ready for prime time.
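To see how oversubscription translates into that deficit, here is a back-of-the-envelope calculation assuming a 2:1 oversubscription ratio (my number, chosen only because it is consistent with the reported 50 percent shortfall):

\[
B_{\text{delivered}} \approx \frac{B_{\text{nominal}}}{k} = \frac{30\ \text{GB/sec}}{2} = 15\ \text{GB/sec}.
\]

Halving the fabric's effective bisection is the kind of design compromise that looks harmless on paper and takes six months of tuning to claw back in practice.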
Hardware and Software Sessions
The second day of the conference was a sandwich of hardware and software sessions. The morning keynote by Norman Miller (UC Berkeley) discussed the use of cluster-enabled climate modeling software to predict the impact of global warming on the snowpack of California's Sierra Nevada. It's not a pretty picture. This work has thrust him into state government politics. The message here is the success of the open-source WRF (Weather Research & Forecasting) project. Norman and his colleague Jin have added unique capabilities to the WRF code in order to perform these simulations and will contribute the improvements back to the WRF project for use by other climate researchers.
A short session on DARPA's HPCS program featured presentations from IBM on their PERCS project and from Cray on the Cascade offering. Both presentations were light on technical details, as might be expected. The important fact to take away from this program was highlighted by the IBM speaker (Rama Govindaraju). He pointed out that the last factor of 10x in performance took IBM five years, but the PERCS project has a target of 100x performance gain over the next five years.
The evening session was the HPC body-building session, where descriptions of several new big machines were paraded before us and muscles were flexed. The parade included Roadrunner (LANL), Abe (NCSA), Ranger (TACC), Jaguar (ORNL), and the Red Storm upgrade (SNL). The price prize went to Ranger, a Sun-built system designed to deliver 529 teraflops at an acquisition cost of $30 million. That works out to slightly less than six cents per megaflop! This is more than a factor of two below the typical price range for large clusters.
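The arithmetic, for anyone checking at home:

\[
\frac{\$30{,}000{,}000}{529\ \text{teraflops} \times 10^{6}\ \text{megaflops/teraflop}} \approx \$0.057\ \text{per megaflop}.
\]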
Finally, Brent Gorda (LLNL) announced the "Cluster Challenge" for Supercomputing '07 in November. The idea is for undergraduates to build a cluster that runs on a single 30-amp circuit and to run some applications on it, getting a feel for the difficulty of provisioning clusters. Brent came up with the idea after realizing that outside of the laboratories and HPC-centric universities there is not much knowledge of, or experience with, how to obtain and provision clusters. Application deadlines are approaching, so if you are interested in fielding a team for the challenge, contact Gorda at firstname.lastname@example.org.
Cray's Peter Ungaro Kicks Off Last Day of Conference
The morning began with the keynote presentation from Peter Ungaro, titled "From Beowulf to Cray-o-wulf -- Extending the Linux Clustering Paradigm to Supercomputing Scale." In this presentation, Peter unveiled Cray's view of cluster computing and how they are going to compete in the HPC marketplace with future generations of clusters containing ten thousand to one million cores. He predicted a one million core system within five years. For comparison, today's entire Top 500 list represents less than one million cores!
His argument was that commodity Linux clusters are too generalized to provide reliability, availability and scalability when scaled past about one thousand sockets. One example: a typical cluster rack has anywhere from 200 to 300 fans, so a mean-time-between-failures (MTBF) analysis of the cooling fans in a ten-rack system yields an average of one fan failure every 26 hours. On the XT4 system, Cray has reduced the fan count to one per rack -- a very big fan!
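The arithmetic behind that failure rate, under my own assumptions (roughly 250 fans per rack, and a per-fan MTBF of about 65,000 hours -- a figure back-derived from the quoted rate, but typical for small fans): with N independent fans each having MTBF m, the expected time between fan failures across the system is roughly m/N.

\[
\frac{m}{N} = \frac{65{,}000\ \text{hours}}{10 \times 250\ \text{fans}} = 26\ \text{hours}.
\]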
This complexity reduction also applies to software. A system provisioned with a lightweight OS begins to outscale a Linux-based cluster at as few as 64 processors (Ron Brightwell, SNL). Cray expects that future superclusters (my term) will require custom value-added simplifications in order to scale successfully, and vendors will need to provide these to the HPC marketplace. Clearly, the message coming from all of the speakers is that system complexity has become the driving factor in delivering the largest supercomputers to the HPC user community.
Unfortunately, I do not have time to report on the remainder of the last day of the meeting, which will feature more vendor talks (Cray, Dell, HP, IBM, Intel) followed by some interesting discussions focusing on I/O issues.
Gary Montry is an independent software consultant specializing in parallel applications development and optimization and in attached processor software. He can be reached at email@example.com.