November 24, 2006
On Tuesday, after a delay of almost five months, DARPA selected Cray and IBM as the Phase III winners for the High Productivity Computing Systems (HPCS) program -- the government's initiative to create high productivity petascale computing systems. Each vendor will receive approximately a quarter of a billion dollars in funding over the next four years to finish the design of their systems and build the first HPCS prototypes. The announcement was perhaps the most anticipated news of the year for the HPC community.
If you think the DARPA HPCS program is of interest only to capability-class supercomputing users -- think again. HPCS, in its most ambitious interpretation, is an attempt to drive a stake through the heart of cluster computing. And the government just anted up almost half a billion dollars to do just that.
The Beowulf cluster/MPI programming model, which has enjoyed ten years of dominance and has propelled HPC into the enterprise, may be approaching a wall. As the performance of multi-core processors outpaces the bandwidth of compute node interconnects, and as the number of these processors grows to thousands per system, the difficulties of scaling applications across a distributed memory architecture become more and more apparent. In addition, as the cost of developing HPC applications and powering the machines starts to dominate the price of supercomputing, the efficiency of the cluster model begins to look questionable.
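To make the scaling argument concrete, here is a minimal back-of-the-envelope sketch -- my own illustration, with made-up constants for interconnect latency and boundary exchange, not figures from DARPA or the vendors -- showing how parallel efficiency erodes when a fixed-size problem is spread across ever more nodes while the communication costs refuse to shrink along with the per-node work.

    # Toy strong-scaling model (illustrative assumptions, not measured data):
    # time(P) = serial_work / P + interconnect latency + boundary-exchange term.
    # The boundary term shrinks only as P^(-1/3) (a 3-D domain decomposition),
    # so it eventually dwarfs the per-node compute and efficiency collapses.

    def parallel_efficiency(p, work=1.0, latency=1e-3, exchange=1e-2):
        time_p = work / p + latency + exchange * p ** (-1.0 / 3.0)
        return work / (p * time_p)

    for p in (64, 512, 4096, 32768):
        print(f"{p:6d} nodes -> parallel efficiency {parallel_efficiency(p):.2f}")

Under these assumed constants the efficiency falls from over 80 percent at 64 nodes to a few percent at 32,768 -- exactly the kind of curve that makes the cluster model look questionable at petascale node counts.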
To its credit, the government had the foresight to begin planning for the next generation HPC model four years ago. Now halfway through the HPCS program, the end result, although not assured, is at least in sight.
"This is a journey that we've been on since 2002 and will culminate in late 2010 with the development of a petascale high productivity computing system capability," said Charles Holland, director of DARPA's Information Technology Processing Office. "This program is not in a race to produce a petaflop computer, with respect to the Top500 or any other arbitrary metric. This program is about developing a high productivity computing capability that achieves performance on real application codes as evidenced by the benchmark suites that we have chosen."
The HPCS program's emphasis on increasing user productivity versus just increasing FLOPS is an attempt to change the Top500 culture of supercomputing. It's based on the realization that Linpack performance is only distantly related to actual value for end users. This Top500 view of the supercomputing world will slowly fade away as more systems are built that decouple Linpack performance from real-world usefulness.
Even though the initial IBM and Cray systems will be strictly capability-class machines, DARPA's HPCS program is intended to develop technologies that will work their way into a broader range of HPC solutions. Exactly how this will happen remains to be seen, but since both vendors are investing considerable sums of their own money in the effort, there should be a strong motivation to produce technologies that can be applied to a range of commercial solutions.
To be sure, Cray has openly stated its plans to integrate the HPCS work with its "Baker" (2009) and "Granite" (post-2010) platforms. Moving these technologies into mere mortal machines is more problematic, since Cray tends to gravitate towards high-end solutions. On the other hand, the PGI compiler work that results from The Portland Group's partnership with Cray has a good chance to find broader applicability.
IBM has been more circumspect about its HPCS design, so it's more difficult to guess what technology could find its way into other HPC systems. Certainly the POWER7 processor, the General Parallel File System (GPFS) and IBM's Parallel Environment could be applied to more generalized computing solutions, but as of yet, IBM has not announced any specific plans. There have also been rumors floating about that the innovative TRIPS (Tera-op, Reliable, Intelligently adaptive Processing System) processor may find a home in Big Blue's HPCS effort, but the company has announced nothing publicly.
I'm certainly not the only one who sees the potential of HPCS to change the game. The High-End Crusader comments:
"In the HPCS downselect, the strongest and the second-strongest proposals survived, while the weakest proposal was eliminated. DARPA is (apparently) slow but it is not stupid. One great virtue of the HPCS program is to break the following vicious circle: dumb down our HPC applications to make them match our market-driven, already dumbed-down HPC architectures, which further coarsens the market, and repeat.
"But the work is not finished. Take Cray for example.
"Cray Inc. has made great strides in its Cascade design in exploiting heterogeneity of parallelism for broadly applicable sustained performance. It is to be commended for its stated goal of decoupling programming abstractions (above the compiler) from execution abstractions (below the compiler), thus making the programming of parallel machines more accessible to the average programmer. (Here, it is the heterogeneity of parallelism that is hidden: single thread, vector, multithreaded, whatever). Not a penny of Cray Inc.'s investment in software is wasted.
"But we need a more comprehensive view of heterogeneity. We need heterogeneity in all its diversity. There are three forms of heterogeneity: heterogeneity of parallelism (as understood above), heterogeneity of locality, and heterogeneity of programs and programming style, which -- amazingly -- goes beyond the other two.
"We need a visionary, with background in both architecture and programming languages, who can find a successful integration of these three major forms of heterogeneity (sounds like a full-length HEC article).
"Only fully-integrated heterogeneous processing can restore computing's vitality, at any scale, from desktop to petaflop, and we are not there yet. Moreover, we also desperately need to master the many-core dragon, who has the potential to eat us all."
For the true believers in heterogeneity, this is a good time to be in high performance computing. Along with HPCS and the growing interest in the Cell BE processor, FPGAs, GPUs and other accelerator technologies, the world of high performance computing seems to be transforming. With the drive towards alternative computing models, we may be witnessing a period when more progressive technology will begin to replace the traditional HPC systems that have held sway for the last ten years.
The end of the conservative era? It could happen!
As always, comments about HPCwire are welcomed and encouraged. Write to me, Michael Feldman, at firstname.lastname@example.org.
Posted by Michael Feldman - November 23, 2006 @ 9:00 PM, Pacific Standard Time
Michael Feldman is the editor of HPCwire.