October 07, 2010
If you've been reading this publication for any length of time, I'm sure you've noticed how much ink has been spilled on NVIDIA's GPU computing business. The reason for that is simple: general-purpose GPU (GPGPU) computing has become a technology disrupter in HPC, and NVIDIA is the company driving it. And if you followed our recent coverage of the GPU Technology Conference (GTC) in September, you already have a pretty good idea of why and how this is happening.
But the technology, and especially the business, is still in its early stages. It was only in June of 2007 that NVIDIA announced its first Tesla GPU products for technical computing. Although AMD pushed its FireStream GPU products into the market that same year, it is NVIDIA that has set the pace. At GTC, I got a chance to talk with Andy Keane, who has headed NVIDIA's Tesla unit since its inception. During our conversation, he offered his perspective on how the company's GPU computing business has unfolded over the past three years.
The first question I asked him was whether the Tesla business was where he thought it would be when it launched three years ago. Although he's been at the center of the storm, so to speak, Keane said that even he is a bit surprised at how far the technology has come in such a short amount of time. "I felt we pushed the GPU faster than I had expected," he admitted.
He credits much of this to the enthusiasm of the developer and user community. The high-end features that coalesced in the current Fermi generation, like support for ECC memory and serious double-precision performance, were always on the roadmap, he said. They were just put in ahead of schedule because the community was asking for them.
The first-ever Tesla GPU-equipped cluster was shipped to the Max Planck Institute in 2008 to support Professor Holger Stark's work in understanding the 3D structure of "macromolecules." Stark had been using GeForce GPUs for a while, but he wanted to scale his work to a cluster to speed up the image processing. Later that year, the first deployment of the next-generation Teslas (the 10-series GPUs) was undertaken at Tokyo Tech. Those GPUs, packaged in 170 Tesla S1070 servers, were folded into the TSUBAME 1.2 system. That machine became the first GPU-equipped supercomputer on the TOP500 list.
More Tesla cluster deployments followed. According to Keane, these larger deployments suggested the world needed ECC support and a lot more double precision -- features required by large-scale scientific computing. Customers also needed more sophisticated CUDA driver software to optimize the CPU-GPU interface. "So the people you're selling to influence the type of features you put in the GPU and the software," Keane said.
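To make that last point concrete, here is a minimal sketch (mine, not Keane's) of the kind of CPU-GPU interface optimization the CUDA runtime supported even then: pinned host memory plus an asynchronous stream, so transfers and kernel launches queue up on the GPU while the CPU keeps working. The `scale` kernel and buffer sizes are illustrative only.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Trivial illustrative kernel: scale an array in place.
__global__ void scale(float *data, int n, float factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Pinned (page-locked) host memory lets the DMA engine copy
    // asynchronously instead of staging through pageable memory.
    float *h_buf;
    cudaMallocHost((void **)&h_buf, bytes);
    for (int i = 0; i < n; ++i) h_buf[i] = 1.0f;

    float *d_buf;
    cudaMalloc((void **)&d_buf, bytes);

    // A stream queues the copy and the kernel back to back on the
    // device; the calls below return immediately to the host.
    cudaStream_t stream;
    cudaStreamCreate(&stream);

    cudaMemcpyAsync(d_buf, h_buf, bytes, cudaMemcpyHostToDevice, stream);
    scale<<<(n + 255) / 256, 256, 0, stream>>>(d_buf, n, 2.0f);
    cudaMemcpyAsync(h_buf, d_buf, bytes, cudaMemcpyDeviceToHost, stream);

    // ... CPU work can overlap with the GPU here ...

    cudaStreamSynchronize(stream);
    printf("h_buf[0] = %f\n", h_buf[0]);  // expect 2.0

    cudaStreamDestroy(stream);
    cudaFree(d_buf);
    cudaFreeHost(h_buf);
    return 0;
}
```

The pinned allocation is the key detail: asynchronous transfer generally requires page-locked host memory, and with ordinary pageable memory the same cudaMemcpyAsync call may execute synchronously.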
In that sense, NVIDIA sees itself more as a catalyst for the community, rather than a market leader, per se. It's certainly conceivable that some company is going to make more money from products based on NVIDIA's GPGPUs than NVIDIA itself. Beyond straight HPC, GPU computing is now being employed in everything from computer vision to business intelligence. Like the CPU, the GPU is now in that territory where developers are adapting to the chip, rather than the other way around.
"We could not have written the list of applications that are here at GTC," Keane told me. "Some are obvious, like pattern recognition and graphics. But things like neuron research? We wouldn't have come up with that. So there are areas we're going into because of the creativity of the developer."
NVIDIA is counting on its next two generations of GPUs -- Kepler and Maxwell -- to keep the momentum going. Although new GPU computing features are in the offing for these architectures, there is going to be a concerted focus on energy efficiency. GPUs already have an enviable FLOPS/watt ratio, but system vendors can't accommodate devices any more power-hungry than the current crop of chips. Fermi Teslas are rated at 225 watts today, which is frankly more than most server makers are comfortable with. So like its CPU competition, NVIDIA will be compelled to bring out more powerful devices in the same (or a lower) thermal envelope.
For supercomputing, this is going to be a critical feature, especially for those counting on GPGPUs as a path to exascale. According to Keane (but not only him), delivering a 1,000-fold performance improvement over today's computers cannot be achieved with the old techniques -- certainly not with transistor and voltage scaling, and probably not with x86 manycore. The route to faster computers runs indirectly through lower power, which translates into room for more parallelism, said Keane.
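Some rough arithmetic shows the size of that challenge (my back-of-envelope numbers, not Keane's, assuming the roughly 20-megawatt power budget commonly cited as the practical ceiling for an exascale machine, and a Fermi Tesla's roughly 515 gigaflops of peak double precision at its 225-watt rating):

```latex
\underbrace{\frac{10^{18}\ \text{FLOPS}}{2\times10^{7}\ \text{W}}
  = 50\ \tfrac{\text{GFLOPS}}{\text{W}}}_{\text{exascale target}}
\qquad \text{vs.} \qquad
\underbrace{\frac{515\times10^{9}\ \text{FLOPS}}{225\ \text{W}}
  \approx 2.3\ \tfrac{\text{GFLOPS}}{\text{W}}}_{\text{Fermi Tesla}}
```

That is a gap of more than 20x in energy efficiency alone, which is why power, rather than peak FLOPS, dominates the exascale conversation.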
But achieving that level of parallelism on a conventional CPU is a lot trickier than doing it on a GPU. NVIDIA Chief Scientist Bill Dally is convinced the GPU architecture is inherently superior in delivering more FLOPS/watt than general-purpose CPUs and has even sketched a path to exascale based on extrapolations of GPU technology.
Technology aside, there's still the question of how NVIDIA is going to make the business model work for HPC. Keane admitted that his Tesla business wouldn't be viable as a stand-alone company. Given the cost of semiconductor design and the rest of the infrastructure needed to support processor development, you need a broad product base, he said. A $2,000 Tesla device would probably cost $10,000 if you factored in all the overhead costs. You only have to look at the now-defunct ClearSpeed to see the folly of such a business model.
The way NVIDIA makes this work is to amortize the R&D costs over a much larger product set, in this case the GeForce and Quadro offerings. (The Tegra products use a somewhat different set of technologies.) Tesla is designed as a higher-end product, with more cores, more floating-point performance, and ECC support. The consumer side doesn't need those things. But since all three units are able to share design and development, Keane can extract his HPC goodies. "AMD has that model, Intel has that model, now NVIDIA has that model," he said.
But that doesn't mean the company is content to see the Teslas remain a niche business. Far from it. Keane envisions a volume market for his high-end GPUs beyond strict high performance computing. For example, the computers running air traffic control, Internet traffic management, and telecom billing systems can all benefit from the data-parallel muscle of a GPU. Although mostly invisible, these "infrastructure" computers form the backbone of many IT businesses, not to mention the government. "The real volume market for a product like Tesla is in the computers you don't see," said Keane.
Posted by Michael Feldman - October 07, 2010 @ 5:35 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.