Yes Indeed, NVIDIA Has x86 Ambitions


Although rumors of NVIDIA developing its own x86 products have been circulating for years, a comment this week by Michael Hara, the company's senior VP of investor relations, all but confirmed the GPU maker's intention to bring x86 silicon to market.

The x86 remarks came toward the end of an NVIDIA "fireside chat" at Morgan Stanley's Technology Conference on Tuesday in San Francisco. In response to a question about NVIDIA's plans to enter the general-purpose processor business, Hara floated the idea of duplicating the company's Tegra approach, an ARM-based system-on-a-chip (SoC) for mobile internet devices, but with an x86 core.

"I think some time down the road it makes sense to take the same level of integration that we've done with Tegra," said Hara. "Tegra is by any definition a complete computer on a chip, and the requirements of that market are such that you have to be very low power, very small, but highly efficient. So in that particular state it made a lot of sense to take that approach, and someday it's going to make sense to take the same approach in the x86 market as well."

He went on to say that it was not a matter of if the company would do this, but when, and gave a timeframe of two or three years for when we might expect to see the first NVIDIA x86 parts. At that point, SoC architectures will make sense even for larger platforms like small form factor PCs (netbooks and nettops), a market NVIDIA is currently going after with its ION platform. ION pairs a GeForce 9400 GPU with an Intel Atom CPU on a hand-sized board.

So how will this impact HPC? At this point, there was no talk of NVIDIA going after the x86 server market, a la Xeon or Opteron, so we're not likely to see NVIDIA x86-based servers anytime soon. For the foreseeable future, the company's Tesla-based products (along with CUDA) will be NVIDIA's main contribution to high performance computing.

But NVIDIA's long-term viability as a company may depend on having an x86 play. With the integration of GPUs and CPUs proceeding apace at Intel and AMD, NVIDIA would otherwise be left in the precarious position of selling only discrete GPU parts, integrated chipsets, and ARM-based ASICs. SoC is going to be where the action is for mobile and embedded devices, and x86-based parts will probably end up grabbing a large chunk of those markets.

Also, even though the volume SoC parts won't end up in HPC datacenters, by the time these chips get to 32nm, and then 22nm, a lot of these mobile devices will be powerful enough to run some high performance technical workloads, like image recognition and language translation. The advent of OpenCL promises to help pave the way for these types of applications on all sorts of handheld electronic devices.
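As a purely illustrative aside (nothing here comes from NVIDIA or the conference), the sketch below shows roughly what that portability looks like in practice: a trivial OpenCL kernel that scales a vector, written once, with the same source able to target whatever device the local platform exposes, whether a desktop GPU, a multicore CPU, or an embedded SoC. The kernel name and the scaling task are invented for the example, and error handling is omitted for brevity.

    /* Hypothetical sketch, not NVIDIA code: a trivial OpenCL kernel that scales
     * a vector. The same kernel source runs on whatever OpenCL device the local
     * platform exposes; error handling is omitted for brevity. */
    #include <stdio.h>
    #include <CL/cl.h>

    static const char *kSrc =
        "__kernel void scale(__global float *x, const float a) {"
        "    size_t i = get_global_id(0);"
        "    x[i] = a * x[i];"
        "}";

    int main(void) {
        float data[1024];
        for (int i = 0; i < 1024; ++i) data[i] = (float)i;

        /* CL_DEVICE_TYPE_DEFAULT takes whatever the platform offers: a discrete
         * GPU on a workstation, perhaps a CPU or an embedded GPU elsewhere. */
        cl_platform_id platform;
        cl_device_id device;
        clGetPlatformIDs(1, &platform, NULL);
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_DEFAULT, 1, &device, NULL);

        cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
        cl_command_queue q = clCreateCommandQueue(ctx, device, 0, NULL);

        /* Compile the kernel source for this particular device at run time. */
        cl_program prog = clCreateProgramWithSource(ctx, 1, &kSrc, NULL, NULL);
        clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
        cl_kernel k = clCreateKernel(prog, "scale", NULL);

        cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                    sizeof(data), data, NULL);
        float a = 2.0f;
        clSetKernelArg(k, 0, sizeof(cl_mem), &buf);
        clSetKernelArg(k, 1, sizeof(float), &a);

        size_t global = 1024;
        clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
        clEnqueueReadBuffer(q, buf, CL_TRUE, 0, sizeof(data), data, 0, NULL, NULL);

        printf("data[1] = %f\n", data[1]);  /* expect 2.0 */

        clReleaseMemObject(buf);
        clReleaseKernel(k);
        clReleaseProgram(prog);
        clReleaseCommandQueue(q);
        clReleaseContext(ctx);
        return 0;
    }

The only device-specific choice here is the CL_DEVICE_TYPE_DEFAULT flag; retargeting the example is a matter of changing that flag, not rewriting the kernel.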

How NVIDIA goes about getting a license to build x86 silicon is still an open question. Right now, Intel is not exactly on speaking terms with NVIDIA, having recently taken the GPU maker to court over a cross-licensing dispute regarding Nehalem chipsets. Even if the two chipmakers decide to kiss and make up, it's hard to imagine why Intel would grant NVIDIA an x86 license to compete in the same markets.

However, NVIDIA could gain access to such a license by buying VIA Technologies, the Taiwanese chipmaker that developed the x86-compatible Nano processor. Rumors of such an acquisition have been floating around for almost a year. To be sure, it's not clear whether VIA's x86 license would be transferable in the event of a buyout, so NVIDIA may have to seek another type of arrangement. But with NVIDIA's intention to enter the mobile x86 arena out in the open, an alliance of some sort with VIA now seems more likely than ever.

Not that NVIDIA has extra cash to throw around right now. The company's earnings have certainly taken a beating lately. Last month, it reported a quarterly loss of $147 million, reflecting a 60 percent drop in revenue from the same quarter of the previous year. NVIDIA is trying to right the ship by lowering operating expenses and focusing on the healthiest parts of the GPU business -- namely mobile graphics and cutting-edge GPUs.

Hara said he thinks the biggest upside surprises this year will come from Tegra at the low end and Tesla at the high end. He did note that the recession is holding back Tesla right now. "It's getting great traction as we speak, but it's also being somewhat contained by the economy," he admitted. Without offering specific numbers, he said that the number of people programming for Tesla and the number of applications ported to the platform continue to be "very high." Also, according to him, since Tesla carries a gross margin of about 50 percent, as opposed to a corporate average of 35 percent, an uptick in Tesla revenue could lift the business quite effectively.
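To see why that margin mix matters, here is a back-of-envelope sketch. Only the 50 percent and 35 percent gross-margin rates come from Hara's remarks; the revenue figures are invented for illustration.

    /* Hypothetical illustration of the gross-margin point above. Only the
     * 50% (Tesla) and 35% (corporate average) margin rates come from the
     * article; the revenue figures are made up. */
    #include <stdio.h>

    int main(void) {
        double base_rev  = 1000.0;  /* assumed baseline quarterly revenue, $M     */
        double tesla_rev =   50.0;  /* assumed incremental Tesla revenue, $M (5%) */

        double base_profit  = 0.35 * base_rev;   /* gross profit at corporate margin */
        double tesla_profit = 0.50 * tesla_rev;  /* gross profit at Tesla margin     */
        double blended = (base_profit + tesla_profit) / (base_rev + tesla_rev);

        /* A 5 percent revenue bump at Tesla margins lifts gross profit by
         * roughly 7 percent, which is why Tesla growth is disproportionately
         * helpful to the bottom line. */
        printf("blended gross margin: %.1f%%\n", blended * 100.0);                 /* 35.7% */
        printf("gross profit gain:    %.1f%%\n", 100.0 * tesla_profit / base_profit); /* 7.1% */
        return 0;
    }

The exact figures are beside the point; the takeaway is that revenue added at a higher margin rate raises gross profit faster than it raises revenue.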

Asked about the competition from Intel's upcoming Larrabee CPU-GPU hybrid processor for high-end graphics and visual computing applications, Hara said he thinks Intel will be behind the performance curve. Larrabee, unlike traditional GPUs, relies on software rather than dedicated hardware to provide much of the graphics smarts.

"Ultimately if they can't benchmark well in applications against a traditional hardwired GPU, then they have to do things like add cores, which will then make the chip bigger and add issues with power," he noted. "So I think the work they have to do to get up to the levels of the current architectures in the market [i.e., AMD and NVIDIA] is going to be very high. Obviously, we're not sitting still, so by the time they come out with their parts, we'll have raised the bar again."

What makes all of this so interesting is the prospect of a three-way competition in the general-purpose microprocessor business. If integrated CPU-GPU chips become the default architecture over the next several years -- and I think that's likely -- it would be a lot healthier for the industry if all the major players were involved. That's assuming, of course, they all survive the current economic calamity. Here's hoping.

Posted by Michael Feldman - March 05, 2009 @ 4:38 PM, Pacific Standard Time

Michael Feldman is the editor of HPCwire.
