August 19, 2010
Alongside multicore CPUs and cloud computing, GPGPU is a technology that will continue to shake up the way computing is done for years to come. To the general public, the GPGPU phenomenon is probably the least visible of the three I mentioned, but it may end up having just as much of an impact.
The most noticeable effect from GPU computing will be the way it redefines what we think of as a general-purpose processor. Historically, specialized processors get swallowed by the CPU when their functions are no longer thought of as specialized. We saw this with floating point units and, more recently, with memory controllers. (Although some flexibility gets lost, the integrated model is much more economical in terms of power usage and space.) We're seeing this same general-purpose capability coming to fruition in GPUs. Today these devices can be used for traditional graphics, advanced visualization, and floating point/vector processing. The rise of general-purpose GPU computing will inexorably push graphics-flavored logic onto the CPU die.
We're already seeing CPU-GPU designs coming from the two big x86 chip vendors. AMD is blazing the trail with their Fusion APU (Accelerated Processing Unit) processors, the first of which are slated to show up in early 2011. These initial designs are targeted at the consumer market -- desktops and notebooks -- where video processing and CPU-centric applications are already well integrated. Intel's upcoming Sandy Bridge processors aimed at the consumer space will also incorporate a GPU on the same chip and, like AMD's parts, will be available in early 2011. For both chipmakers, this represents the first time CPUs and GPUs will share the same silicon real estate.
For GPGPU enthusiasts, though, these early heterogeneous designs really represent transition technologies. In most cases, the integrated GPU will be used for traditional graphics and visualization, with the CPU still handling most of the floating point and vector math. In that sense, there will be some redundant functionality on these early chips. The larger payoff will come when the CPU's floating point and SIMD logic is merged with the GPU's. It's probably wrong to think of that as an endpoint, since it's more likely to play out as a gradual evolution over multiple generations of processor architectures.
Before then, we should see CPU-GPU designs for server chips. AMD has hinted at such platforms, but hasn't committed to any specific products or roadmap. For this to make sense economically, the semiconductor process will have to be small enough to fit a high-end CPU and GPU on the same die. That probably won't be practical until chips can be manufactured below the 32nm node. Also, software that can take advantage of heterogeneous designs will have to be in place to support a broad market for these chips in the enterprise -- i.e., not just in high performance computing. Because of these constraints, I think the earliest we'll see CPU-GPU server chips will be 2012, and more likely 2013.
So where does this leave CPU-less NVIDIA? Right now, the company sits atop the GPGPU computing market, but has no public plans to integrate its high-end GPUs with a CPU. For the time being, at least, NVIDIA seems content to pursue the GPU computing market with discrete devices, like its Tesla products, attached to x86 host processors over PCI Express.
Ironically, though, the more success NVIDIA has in building a GPGPU business and bringing more applications into the fold, the greater the demand will be for CPU integration. And if both AMD and Intel start offering high-end CPU-GPU products, NVIDIA's discrete GPU business will suffer.
It's worth noting that NVIDIA actually does have a CPU-GPU platform in its current Tegra line of processors for mobile devices. The CPU in this case is an ARM processor, a compact little chip that is quite popular in low-power platforms like cell phones. It's not too far a stretch to think NVIDIA may be designing a chip that marries its CUDA-class GPUs with ARM CPUs. This week, startup Smooth-Stone revealed it will build servers based on ARM processors. If those servers are able to gain a foothold in datacenters, an NVIDIA Tesla-ARM server chip would look very interesting indeed.
Posted by Michael Feldman - August 19, 2010 @ 2:51 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.