October 06, 2006
Last Friday's announcement of ATI's intent to build a "stream processing ecosystem" was the last piece in the why-AMD-bought-ATI puzzle. Though AMD's initial plans for ATI's graphics processing units (GPUs) may be for the desktop/laptop segment, the company appears to view the GPU as a fundamental technology across all of its markets: desktop, mobile, enterprise server and high performance computing. For all of these platforms, the strategy is to use the GPU to do what the CPU cannot -- exploit data-level parallelism (DLP).
Sharing some of the characteristics of proprietary vector processors, graphics engines can process data arrays much more efficiently than a standard microprocessor. Using GPUs, DLP-friendly workloads can achieve performance boosts on the order of 10X to 50X compared to a CPU. Applications that can take advantage of this type of parallelism include seismic modeling, financial risk assessment, protein folding, climate modeling, "physics processing" and image/speech recognition -- virtually any high performance computing workload.
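To make "DLP-friendly" concrete, here's a minimal sketch -- in plain C rather than ATI's actual toolchain -- of the kind of kernel a stream processor eats for breakfast: an element-wise array operation in which no iteration depends on any other, so thousands of them can run at once.

/* A minimal, illustrative data-parallel kernel: the classic SAXPY
 * operation y[i] = a*x[i] + y[i].  Every iteration is independent,
 * which is exactly the property a GPU or vector engine exploits. */
#include <stdio.h>

#define N 8

static void saxpy(int n, float a, const float *x, float *y)
{
    for (int i = 0; i < n; i++)      /* no iteration depends on another */
        y[i] = a * x[i] + y[i];
}

int main(void)
{
    float x[N], y[N];
    for (int i = 0; i < N; i++) { x[i] = (float)i; y[i] = 1.0f; }

    saxpy(N, 2.0f, x, y);            /* y[i] becomes 2*i + 1 */

    for (int i = 0; i < N; i++)
        printf("%g ", y[i]);
    printf("\n");
    return 0;
}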
What has pushed GPUs into the limelight now? Many (certainly AMD) believe that the graphics processing unit has finally grown up. The hardware has become both more powerful and more general-purpose (supporting both MIMD and SIMD pipelines); IEEE floating-point support has been enhanced; and high-level languages that target GPUs, such as Cg (C for Graphics), are starting to emerge, allowing for easier programming. In addition, the widening scope of data-intensive applications has created an opening for more data-centric architectures.
In a 2004 paper titled "GPU Cluster for High Performance Computing," the authors state:
"Driven by the game industry, GPU performance has approximately doubled every 6 months since the mid-1990s, which is much faster than the growth rate of CPU performance that doubles every 18 months on average (Moore's law), and this trend is expected to continue. This is made possible by the explicit parallelism exposed in the graphics hardware. As the semiconductor fabrication technology advances, GPUs can use additional transistors much more efficiently for computation than CPUs by increasing the number of pipelines."
At last Friday's announcement, ATI CEO David Orton, without revealing a specific roadmap, suggested that the company's graphics engine architecture would be further enhanced to benefit both traditional graphics workloads and general stream processing. The current GPUs achieve about a third of a teraflop; the next generation is expected to reach half a teraflop.
High energy consumption is a drawback. The ATI X1900 XT, the device currently being used for some stream processing demos, tops out at over 100 watts. That's not a big problem for desktops or non-mobile game machines, but if you want to deploy hundreds or thousands of them in a supercomputer, that level of power usage is sure to be a concern.
In the near term, ATI plans to use coherent HyperTransport so that its chips can take advantage of AMD's native interconnect. In a couple of years (at 45nm process technology), ATI GPUs may end up on the same die as AMD CPUs, perhaps creating a Cell-processor-like device -- but with the advantage of a commodity software base. AMD has hinted that it might eventually make sense to transfer some silicon between the CPU and the GPU to optimize each unit's functionality; jettisoning the SIMD 3DNow! instructions on the AMD processors comes to mind.
If GPUs are destined to achieve parity with CPUs, it will be interesting to see what happens with Nvidia and Intel. Being late to the GPU party could have devastating effects for the procrastinators, since building a software base for your graphics engine will be critical in establishing product momentum. So far Intel has not made a move, but as I write this, rumors of Intel acquiring Nvidia are circulating around the Web. Stay tuned ...
Ten Petaflops or Bust
This week's feature article comes from Herbert Wenk, a new contributing author for HPCwire. During a recent scientific conference at NEC's research facility in Germany, Wenk was able to gather information on Japan's plans for a ten petaflop system. Dr. Mitsuyasu Hanamura, who heads the applications software group within the RIKEN Next-Generation Supercomputer R&D Center, took part in a press briefing organized by the NEC Europe Computing & Communication Research Lab in St. Augustin, Germany. According to Wenk, Dr. Hanamura believes a heterogeneous architecture can meet Japan's ten petaflop goal by the end of 2011.
The developing controversy over interconnect models -- RDMA versus Send/Receive -- is being played out here at HPCwire. The original RDMA critique from Patrick Geoffray at Myricom generated a rebuttal by Renato Recio, chief engineer at IBM eSystem Networks.
This week, Gilad Shainer at Mellanox Technologies weighs in with the view that you don't have to choose exclusively between RDMA and Socket Send/Receive; you can use either one depending on what's best for your application. See his "Why Compromise?" article in this week's issue.
Christopher Aycock at Oxford University counters that the main trouble with the RDMA communication model is its memory registration requirements, which drag down the performance of most applications on commodity networks like InfiniBand. Read the "Why Pretend?" counter-rebuttal to get his take on the problems with RDMA.
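For readers who haven't followed the debate, here is a rough sketch of what Aycock is getting at. The net_* helpers below are hypothetical stand-ins -- not the actual InfiniBand verbs API -- but they show the shape of the two paths: one-sided RDMA requires each application buffer to be registered (pinned and mapped for the NIC) before data can move, while a two-sided send can be serviced by the library out of its own pre-registered buffers.

/* Illustrative sketch only: the net_* functions are hypothetical
 * stand-ins, not calls from a real RDMA library. */
#include <stddef.h>
#include <string.h>

typedef struct { int id; } net_handle_t;

/* Hypothetical: pin and register a buffer with the NIC.  On real
 * hardware this locks pages and programs the adapter's translation
 * tables -- the per-buffer overhead at the heart of the critique. */
static net_handle_t net_register(void *buf, size_t len)
{
    (void)buf; (void)len;
    return (net_handle_t){ 0 };
}

/* Hypothetical one-sided write: needs registered handles on both ends. */
static void net_rdma_write(net_handle_t local, net_handle_t remote, size_t len)
{
    (void)local; (void)remote; (void)len;
}

/* Hypothetical two-sided send: the library copies through its own
 * pre-registered buffers, so the application skips registration. */
static void net_send(const void *buf, size_t len)
{
    (void)buf; (void)len;
}

int main(void)
{
    char app_buf[4096];
    memset(app_buf, 0, sizeof app_buf);

    /* RDMA path: register first, then transfer.  Cheap if the same
     * buffer is reused many times; costly if buffers change per call. */
    net_handle_t local  = net_register(app_buf, sizeof app_buf);
    net_handle_t remote = net_register(app_buf, sizeof app_buf); /* stands in for the peer's handle */
    net_rdma_write(local, remote, sizeof app_buf);

    /* Send/receive path: no registration visible to the application. */
    net_send(app_buf, sizeof app_buf);
    return 0;
}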
Gelato ICE Anyone?
Linux-on-Itanium enthusiasts got a dose of their favorite platform this past week in Singapore at the Gelato Itanium Conference and Expo. Before heading off to the equatorial event, we got the chance to interview a couple of the conference presenters: SGI's Steve Neuner and Intel's Cameron McNairy. In Neuner's interview, we ask how Linux was modified to enable a single system image to run on 1024 processors on an SGI Altix machine (in the lab, they're now up to 1742 processors). In our other interview, McNairy, a principal engineer and Intel architect for the Montecito program, talks about Itanium's role in high performance computing. The money quote from his interview:
"Hardware is certainly easier to change than software..."
Oh how conventional wisdom does change!
Whew! I think that's everything on my list this week. If I missed something, I'll just say I'm following John West's suggestion. In this week's High Performance Careers column, he actually makes a case for NOT doing everything on your list. Great advice for the over-burdened technology worker ... or editor.
As always, comments about HPCwire are welcomed and encouraged. Write to me, Michael Feldman, at firstname.lastname@example.org.
Posted by Michael Feldman - October 05, 2006 @ 9:00 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.