NVIDIA, Supermicro Give Birth to CPU-GPU Server


Until now, the only practical way for customers to get GPU-accelerated clusters was to combine NVIDIA's own S1070 Tesla servers with x86 CPU servers from a traditional system vendor. Before May, the onus was on the users to configure the Tesla and x86 boxes themselves. But on May 4, NVIDIA launched its pre-configured cluster program, which brought in OEM partners to construct these mixed-processor clusters, allowing customers to purchase pre-built GPU-accelerated systems.

Now NVIDIA has taken its next step in GPU computing with the introduction of a new Tesla card, the M1060, which is designed to fit neatly inside CPU servers. With this new offering, NVIDIA hopes to expand the scope of GPU high performance computing by using a more traditional model for building large-scale HPC systems.

The M1060 module contains a single 1.3 GHz Tesla Series 10 GPU, the same device found in the C1060 for workstations. The GPU contains 240 stream processing cores, which provide 933 gigaflops of single precision floating point performance or 78 gigaflops of double precision. Four gigabytes of GDDR3 memory are included in the module, and can be accessed at up to 102 GB/second.
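
Those numbers line up with the part's published specs. As a rough sanity check (assuming the Tesla 10-series arrangement of three single-precision flops per core per clock, 30 double-precision units each doing one fused multiply-add per clock, and a 512-bit GDDR3 interface running at an effective 1.6 Gbps per pin):

240 cores × 3 flops/clock × 1.3 GHz ≈ 936 gigaflops single precision
30 DP units × 2 flops/clock × 1.3 GHz ≈ 78 gigaflops double precision
512 bits × 1.6 Gbps ÷ 8 bits/byte ≈ 102 GB/second memory bandwidth

The small gap between 936 and the quoted 933 gigaflops comes down to rounding of the actual shader clock.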

Supermicro will be the first vendor to bring an integrated CPU-GPU server to the HPC market. At Computex in Taiwan this week, the company announced its new SS6016T-GF, a 1U server that houses two Tesla GPU modules alongside two quad-core Nehalem (Xeon 5500) CPUs. The new server delivers two single-precision teraflops of computing power. According to Andy Walsh, who heads the NVIDIA Tesla business unit, the encapsulation of dual GPUs inside the Supermicro box will make it "the world's fastest 1U server." Although Supermicro is the only vendor that has announced a GPU-juiced server, Walsh says other vendors are being lined up and will offer CPU-GPU systems later this year.
Supermicro SS6016T-GF Server
Having a couple of teraflops in a 1U server provides the same compute density as when the CPU and GPU servers are purchased separately (an S1070 packs four GPUs into 1U, so pairing it with a 1U CPU server also works out to two GPUs per rack unit). But Walsh explains that having all the processor chips under one roof makes for much easier deployment and better manageability. Setup is simpler since there are no external cables to hook up between separate CPU and GPU servers; instead, each GPU module is connected internally via a PCIe 2.0 x16 interface. Also, when the GPUs inhabit the same host, the server's management software (which monitors and controls temperature, fans, voltage, etc.) can be applied to the GPU components as well.
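
From the software side, nothing special is needed to address the onboard GPUs; they show up as ordinary CUDA devices behind the PCIe links. A minimal sketch of hypothetical host code (not NVIDIA or Supermicro sample code) that enumerates them with the CUDA runtime API might look like this:

#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    // Ask the CUDA runtime how many devices this server exposes (two M1060s here).
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        fprintf(stderr, "cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        // clockRate is reported in kHz; totalGlobalMem in bytes.
        printf("Device %d: %s, %d MHz, %.1f GB of device memory\n",
               i, prop.name, prop.clockRate / 1000,
               prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
    }
    return 0;
}

An application, or each MPI rank on the node, would then pick one of the reported devices with cudaSetDevice() before launching kernels.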

Inside the SS6016T-GF Supermicro box, the two M1060 GPU modules sit on opposite sides of the server chassis in a mirror-image configuration, one facing up and the other facing down, which allows the heat to be distributed more evenly. The NVIDIA M1060 part uses a passive heat sink and is cooled in conjunction with the rest of the server, which contains a total of eight counter-rotating fans. Supermicro also builds a variant of this model that uses a Tesla C1060 card in place of the M1060. The C1060 has the same technical specs as the M1060, the principal difference being that the C1060 has an active fan heat sink of its own. In both instances, though, the servers require plenty of juice: Supermicro uses a 1,400 watt power supply to drive these CPU-GPU hybrids.

Pricing on the servers has not been released, although Boston Limited, a European distribution partner for Supermicro, is offering the C1060-based variant for £4,999 ($8,227) and claims it is ready to ship such systems today.

For its part, NVIDIA is positioning these integrated servers as a way to help push its GPUs into the largest supercomputing systems. As such, it represents the company's relentless climb up the HPC food chain, starting with GPU-accelerated workstations, moving to heterogeneous CPU/GPU clusters, and now to monolithic CPU-GPU servers. As GPUs reach parity with CPUs, it becomes more likely that these hybrid systems will start to vie for the top spots in supercomputing. And until AMD or Intel manages to come up with a compelling alternative, NVIDIA will continue to define how GPU-based supercomputing is done.
