November 09, 2011
A recent report by analyst firm IDC, titled "Heterogeneous Computing: A New Paradigm for the Exascale Era," makes the case that heterogeneous computing is going mainstream and will be "indispensable for achieving exascale computing."
Setting aside for a moment that NVIDIA sponsored the report, those conclusions are well supported by the facts on the ground: heterogeneous computing in the form of GPGPUs has enjoyed a relatively fast adoption cycle in the normally staid HPC community, and essentially every major HPC chip and system vendor now has some sort of roadmap that includes heterogeneous components.
While the report cites some of the usual adoption barriers for this relatively new paradigm (i.e., programming challenges, communication bottlenecks, uncertainty about advantages of accelerators versus future CPUs), it notes that system cost, energy efficiency, and space limitations are all driving users to adopt the more compute-efficient GPUs that have made their way into the HPC landscape over the last five years. Those same issues, the report says, will make heterogeneous computing the basis of exascale systems by the end of the decade.
IDC backs this up with its own research. From the report:
IDC's 2008 worldwide study on HPC processors revealed that 9% of HPC sites were using some form of accelerator technology alongside CPUs in their installed systems. Fast-forward to the 2010 version of the same global study and the scene has changed considerably. Accelerator technology has gone forth and multiplied. By this time, 28% of the HPC sites were using accelerator technology — a threefold increase from two years earlier — and nearly all of these accelerators were GPUs. Although GPUs represent only about 5% of the processor counts in heterogeneous systems, their numbers are growing rapidly.
The report also notes that GPUs, and accelerator technology more generally (with a shout-out to the Intel MIC coprocessor), are moving from experimental use into more mainstream production work. Nowhere is this more apparent than in the top supercomputers, where three of the top ten machines in the world currently employ GPUs, a number that is expected to grow as more US supercomputers like Titan (ORNL) and Stampede (TACC) come online over the next 12 to 18 months.
IDC's only caveat is that x86 technology is not standing still, and it expects products based on that architecture to remain the revenue leader in HPC through 2015. The implication is that even in a world replete with exotic HPC accelerators, x86 is likely to survive as a complementary CPU technology, or in the case of Intel MIC, as its own accelerator.