August 13, 2008
It's not enough that GPUs are doubling their capability every year or so. Performance demand is such that GPU vendors are increasingly turning to multi-GPU configurations. AMD's introduction of the dual-GPU ATI Radeon HD 4870 X2 and NVIDIA's announcement of dual- and quad-GPU Quadro Plex (D Series) devices suggest multi-GPU scaling is becoming more commonplace.
The debut of the ATI Radeon HD 4870 X2 has caused quite a splash in the graphics market. It means that gamers will soon be able to buy 2.4 teraflops of GPUness for a mere $549. AMD is touting its new offering as the "world's fastest graphics card." Technically, this is probably true. For HPC applications, though, NVIDIA's Tesla S1070, a four-GPU server board, tops out at 4 teraflops. At $7,995, the S1070 costs a lot more, but it's really not worth comparing to the less expensive Radeon offering, since the two products are geared for very different applications.
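Run the numbers and the contrast is stark. Here's a back-of-the-envelope sketch using the peak figures and list prices quoted above (street prices will drift, so treat these as rough inputs):

```python
# Back-of-the-envelope price/performance comparison, using the figures above.
radeon_tflops, radeon_price = 2.4, 549.0   # ATI Radeon HD 4870 X2
tesla_tflops, tesla_price = 4.0, 7995.0    # NVIDIA Tesla S1070

radeon_gflops_per_dollar = radeon_tflops * 1000 / radeon_price
tesla_gflops_per_dollar = tesla_tflops * 1000 / tesla_price

print(f"Radeon HD 4870 X2: {radeon_gflops_per_dollar:.2f} GFLOPS per dollar")
print(f"Tesla S1070:       {tesla_gflops_per_dollar:.2f} GFLOPS per dollar")
```

By that crude metric the Radeon delivers nearly nine times the flops per dollar, which is exactly why the two products target such different buyers.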
That said, AMD might be missing the boat by not letting users know that its stream computing SDK is compatible with the ATI Radeon GPUs. A high-end game machine might not be a production HPC platform, but it might help AMD build a grassroots following for its budding GPGPU FireStream business. A TechNewsWorld article quotes analyst Jon Peddie, president of Jon Peddie Research, who suggests that an enterprising geek with four PCI Express slots could build a supercomputer on the cheap with the new dual-GPU Radeon X2s.
"Each one of those chips is about 1.2 teraFLOPs in compute power. If you put eight of them in the board, you're talking about a 10 teraFLOP supercomputer. Do the math with the board coming up at about $500 -- if you put four of them in there you spend $2,000 on the board. And say you spend another $1,000 on your computer. So you spend $3,000 for a 10 teraFLOP computer. I think that's just astounding," Peddie stated.
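Peddie's figures hold up on the back of an envelope: four dual-GPU boards gives eight chips at roughly 1.2 teraflops apiece. A quick sketch (his round numbers, not actual quotes):

```python
# Checking Peddie's arithmetic: four dual-GPU Radeon X2 boards in one system.
boards = 4
chips_per_board = 2
tflops_per_chip = 1.2            # Peddie's rough per-chip figure
board_price = 500                # rough street price per X2 board
base_system_price = 1000         # rough cost of the host machine

total_tflops = boards * chips_per_board * tflops_per_chip  # 9.6, i.e. roughly 10
total_cost = boards * board_price + base_system_price      # $3,000 all in

print(f"{total_tflops:.1f} TFLOPS for ${total_cost}")
```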
Of course, with each card dissipating 270 watts under load, you might want to invest in a really big fan to keep your new TOP500 wanna-be from melting.
The dual-GPU move by AMD has at least temporarily out-flanked NVIDIA by using two mid-range Radeon GPUs to outperform its single-GPU competition, the GeForce GTX 280. Scaling out rather than up may turn out to be a good strategy for GPUs -- and not just for graphics apps. In the HPC space, it's often more efficient to spread highly parallel workloads across more threads, even if those threads are running more slowly than in a monolithic implementation. Even if clock speed is kept constant, it might make economic sense to populate the hardware with a number of lower performing GPUs rather than a single faster processor.
It's interesting that in the new Radeon offerings, AMD has chosen to connect the two GPUs over a high speed bus. Although the advantages of direct GPU-to-GPU communication remain to be seen, for HPC apps, I assume it's more important that the GPUs talk with their CPU host than with each other. But if the intent is to build a single "virtual GPU," internal communication will be required.
It wouldn't surprise me if AMD used a multi-GPU strategy to extend its HPC FireStream product, which is currently based on a single Radeon GPU. This might encourage NVIDIA to go to a dual-GPU configuration for its Tesla workstation offering, if the company is not already thinking of doing so. The real issue here is getting the software environment to work seamlessly in a multi-GPU environment. Both NVIDIA's CUDA and AMD's Brook+ seem to have some level of support for multiple GPUs, but my impression is that there's work to be done here.
The good news is that for general-purpose computing, multiple GPUs should actually scale pretty well, performance-wise. Since these devices are being used for data parallelism, as long as the ratio between GPUs and on-board memory is maintained, many applications should see near-linear acceleration. Just as in the CPU arena, offering a more scalable system architecture seems destined to become a lot more important than just having the biggest GPU on the block.
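To see why data-parallel codes tend to scale this way, consider a toy performance model (my own illustration with made-up numbers, not a benchmark): each GPU gets an equal slice of the data, so runtime shrinks almost in proportion to the GPU count, with only a fixed host-side overhead bending the curve away from linear.

```python
# Toy model of data-parallel scaling across GPUs (illustrative numbers only).
def runtime(work_units, num_gpus, units_per_sec=100.0, fixed_overhead=0.1):
    """Seconds to process a workload split evenly across num_gpus devices,
    plus a fixed host-side setup/transfer overhead."""
    return fixed_overhead + work_units / (num_gpus * units_per_sec)

base = runtime(10_000, 1)
for gpus in (1, 2, 4, 8):
    t = runtime(10_000, gpus)
    print(f"{gpus} GPU(s): {t:7.2f} s, speedup {base / t:.2f}x")
```

With these assumed numbers, eight GPUs come in just shy of an 8x speedup; the bigger the per-GPU slice of data relative to the fixed overhead, the closer to linear the curve gets.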
Posted by Michael Feldman - August 12, 2008 @ 9:00 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.