July 09, 2008
In HPC, there has always been a tension between general-purpose and special-purpose architectures. That tension reflects two competing pulls in the market: the drive to bring HPC to more application domains and more users, and the drive to increase performance for the most demanding applications. With a sort of schizophrenic behavior, HPC exploits Moore's Law for all it's worth, and then, unsatisfied, tries to find a way to beat it.
In the early days of supercomputing, special-purpose silicon dominated the industry, exemplified by the custom-built vector machines from Cray and NEC. But by the 1990s, the "attack of the killer micros" ushered in the era of general-purpose CPUs, specifically the x86 franchise. Because of the favorable hardware and software economics offered by general-purpose hardware, many people thought this model would go on forever. Now there are some indications it won't.
By incorporating a few thousand souped-up game chips into its design, the IBM Roadrunner demonstrated how far and how fast an architecture can leap over its brethren. Using the latest Cell processors, IBM was able to reach a petaflop in Linpack performance before any of the competition, not to mention the company's own PowerPC-based Blue Gene. Today there are plans afoot to build other high-end supercomputers using the latest GPU chips from NVIDIA. And although both Cell and GPU processors are specialized, they are derived from chips used in commodity gaming systems. Thus they retain some of the volume production advantages of general-purpose CPUs, if not the software advantages.
But to squeeze even more application performance from silicon, one must resort to true custom designs. Perhaps the most extreme example of this approach is Japan's latest MDGRAPE supercomputer built by RIKEN. The system was not built for general-purpose computing. It was designed specifically to perform molecular dynamics simulations, especially for protein structure prediction and the development of new drugs. Using 4808 custom MDGRAPE-3 processors, that machine reportedly achieved a petaflop two years before the IBM Roadrunner did. But since it wasn't a Linpack petaflop, it didn't count in the TOP500 supercomputer tally.
At the end of 2008, the U.S.-based firm D. E. Shaw Research is scheduled to complete development of another custom-built supercomputer for molecular dynamics (MD). The project is headed by David E. Shaw, a computer scientist who made his fortune on Wall Street as a quantitative trader. The new MD machine, called "Anton" (after the legendary microbiologist Anton van Leeuwenhoek), incorporates 512 custom-built ASICs hooked together by a high-speed communication network. The system is designed to execute millisecond-scale MD simulations.
The millisecond scale is the important feature since it represents at least a thousand-fold increase in the timescale of MD simulations currently being carried out on supercomputers. It will allow researchers to get a much better sense of protein folding behavior and other biochemical interactions. Specifically, it should give scientists a much more powerful tool for understanding disease mechanisms and for developing new drugs.
But will a custom-built design be worth it, even for specific applications with a lot of science and, potentially, commercial value riding on the results? In an ACM article describing Anton, the researchers offer their rationale for the approach:
A natural question is whether a specialized machine for molecular simulation can gain a significant performance advantage over general-purpose hardware. After all, history is littered with the corpses of specialized machines, spanning a huge gamut from Lisp machines to database accelerators. Performance and transistor count gains predicted by Moore's law, together with the economies of scale behind the development of commodity processors, have driven a history of general-purpose microprocessors outpacing special-purpose solutions. Any plan to build specialized hardware must account for the expected exponential growth in the capabilities of general-purpose hardware.
We concluded that special-purpose hardware is warranted in this case because it leads to a much greater improvement in absolute performance than the expected speedup predicted by Moore's law over our development time period, and because we are currently at the cusp of simulating timescales of great biological significance. We expect Anton to run simulations over 1000 times faster than was possible when we began this project. Assuming that transistor densities continue to double every 18 months and that these increases translate into proportionally faster processors and communication links, one would expect approximately a tenfold improvement in commodity solutions over the five-year development time of our machine (from conceptualization to bring-up). We therefore expect that a specialized solution will be able to access biologically critical millisecond timescales significantly sooner than commodity hardware.
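The arithmetic behind that "tenfold" figure is easy to reproduce. As a rough back-of-the-envelope sketch (mine, not from the ACM article), doubling every 18 months over a 60-month development window works out to a factor of roughly ten:

    # Rough check of the Moore's law projection quoted above (illustrative only).
    # Assumes performance doubles every 18 months over a 5-year (60-month) window.
    months_per_doubling = 18
    development_months = 5 * 12

    projected_speedup = 2 ** (development_months / months_per_doubling)
    print(f"Projected commodity speedup: {projected_speedup:.1f}x")        # ~10.1x
    print(f"Anton's margin at bring-up:  {1000 / projected_speedup:.0f}x")  # ~99x

In other words, even granting commodity hardware its full Moore's law dividend over those five years, a machine that hits its 1000x target would still arrive at millisecond timescales roughly two orders of magnitude ahead of general-purpose systems.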
A custom-built approach is also being undertaken by researchers at Berkeley Lab, who are designing a multi-petaflop supercomputer for next-generation climate modeling. In this case, though, they're attempting to exploit commodity technology from the embedded computing space. Because of power and hardware limitations, the Berkeley guys believe it will not even be possible to construct practical general-purpose machines as computing approaches the exascale level. If true, special-purpose architectures will not just be an alternative approach; they will be the only way forward.
Posted by Michael Feldman - July 08, 2008 @ 9:00 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.