May 04, 2010
The new "Fermi" Tesla 20-series products from NVIDIA are about to hit the streets and HPC vendors are lining up to get the latest GPU goodies into their machines. This week, HPC cluster maker Appro has launched two Fermi-based systems: an updated GPU-accelerated GreenBlade offering and a brand new 1U server that puts 2 CPUs and 4 GPUs in the same box.
Launched is maybe too strong a word. According to John Lee, Appro's vice president of advanced technology solutions, the new products won't be shipping until late May or early June, when the Fermi chips finally start rolling out of the TSMC fabs in volume. But Appro is already taking orders for the new systems and is expecting NVIDIA's third-generation CUDA hardware to light a fire under the GPU acceleration business.
As an HPC specialist, Appro has been following NVIDIA's GPU computing ascendance with much interest. Fermi is the first graphics processor to bring ECC memory, hardware support for C++, and more than half a teraflop of double precision to the GPU computing realm. With the vector-like processor about to debut, what was once a two-CPU rivalry between Intel and AMD is now a much more interesting three-way race. "I think it's a pretty huge milestone for high performance computing," says Lee.
Both new Appro offerings will make use of the M2050 Tesla modules from NVIDIA, which are integrated onto the system motherboards rather than attached as standalone cards plugged into a PCIe slot. As it turns out, the M-series devices are the only ones NVIDIA is going to certify for datacenter deployment. According to Lee, the GPU maker is not supporting C-series cards in rackmount form factors; those are intended only for workstations and deskside systems. The M2050 comes with 3 GB of GDDR5 memory and delivers about 515 double precision gigaflops per GPU, or just over a teraflop if your app can get by with single precision floating point.
Appro's Fermi option on the GreenBlade is based on a one-to-one pairing of CPUs and GPUs. The 5U enclosure consists of 5 dual-CPU blades hooked up to 5 dual-GPU expansion blades using a PCIe link. The CPUs may be either late model AMD Opterons or Intel Xeons, but most of the FLOPS are provided by the GPUs. A fully configured enclosure delivers more than 5 raw teraflops of double precision goodness.
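That enclosure-level figure multiplies out directly from the per-GPU rating quoted earlier. A back-of-the-envelope sketch (not an Appro benchmark):

```python
# Back-of-the-envelope peak double precision for a full GreenBlade enclosure,
# using the figures quoted in the article.
GPU_DP_GFLOPS = 515        # NVIDIA's peak DP rating for one M2050
EXPANSION_BLADES = 5       # dual-GPU expansion blades per 5U enclosure
GPUS_PER_BLADE = 2

enclosure_tflops = EXPANSION_BLADES * GPUS_PER_BLADE * GPU_DP_GFLOPS / 1000
print(f"Peak DP per enclosure: {enclosure_tflops:.2f} teraflops")  # → 5.15
```

That 5.15 theoretical teraflops is where the "more than 5 raw teraflops" claim comes from; sustained numbers on real codes will of course be lower.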
The Fermi-revved GreenBlade is aimed at small to mid-sized GPU cluster deployments for users who need a balance of CPU and GPU resources, or who may otherwise be constrained from denser GPU configurations for lack of available power. One advantage of the CPU-GPU blade separation is the ability to upgrade components individually. Given that CPUs and GPUs are on different refresh cycles -- and the cadence for GPU refreshes is generally somewhat faster -- it should be possible to snap in new blades whenever Intel, AMD, or NVIDIA releases the next generation of its silicon.
Appro's second product is a new 1U server that holds four M2050 Tesla GPUs plus two CPUs (either Xeon 5600 or Opteron 6100 processors). Called the Tetra -- 4 GPUs, get it? -- it is, Appro claims, the densest CPU-GPU combo in the industry. Each 1U enclosure delivers two double precision teraflops, plus change. For external storage, there's support for up to six 3 TB SATA drives.
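The "two teraflops, plus change" figure follows directly from four M2050s per box. A quick sketch of the arithmetic, using the per-GPU rating quoted earlier:

```python
# Peak double precision for one Tetra 1U server.
GPU_DP_GFLOPS = 515   # NVIDIA's peak DP rating for one M2050
GPUS_PER_TETRA = 4

tetra_tflops = GPUS_PER_TETRA * GPU_DP_GFLOPS / 1000
print(f"Peak DP per Tetra: {tetra_tflops:.2f} teraflops")  # → 2.06
```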
As you can imagine, it takes plenty of juice to run the Tetra. The server comes with a 1,400 watt power supply and a whopping 12 cooling fans.
According to Lee, the Tetra is aimed at two customer sets: 1) customers who might otherwise opt for NVIDIA's quad-GPU S-series servers and 2) those looking to deploy GPUs at scale and wanting to maximize floating point density in the datacenter.
NVIDIA's own 1U Tesla boxes -- the previous generation S1060 and the upcoming Fermi-based S2050 and S2070 -- offer 4 GPUs per server, but have to be connected to a host CPU box via a PCI Express cable. By integrating CPUs and GPUs in the same 1U enclosure, Appro believes Tetra can usurp a chunk of this market.
The other Tetra market is for really big systems where codes scale particularly well on the GPU -- oil and gas apps and all sorts of science codes that have an insatiable appetite for matrix math. "With this particular product, you can theoretically fit about 80 teraflops of double precision performance into a single rack," says Lee. "We're very close to getting to that magical 100 teraflops per rack."
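Lee's 80-teraflop rack figure is consistent with the per-server numbers if most of a standard 42U rack is populated with Tetra nodes. The node count below (39, leaving a few slots for switches) is a hypothetical illustration, not an Appro configuration:

```python
# Hypothetical rack-level arithmetic: a 42U rack with a few slots reserved
# for networking, the rest filled with 1U Tetra servers.
GPU_DP_GFLOPS = 515     # peak DP rating for one M2050
GPUS_PER_TETRA = 4
TETRAS_PER_RACK = 39    # assumed: 42U minus ~3U for switches (illustrative)

rack_tflops = TETRAS_PER_RACK * GPUS_PER_TETRA * GPU_DP_GFLOPS / 1000
print(f"Peak DP per rack: {rack_tflops:.2f} teraflops")  # → 80.34
```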
Although Appro is not releasing specific pricing on the Tetra, Lee says he believes the new platform will be a very cost-effective solution for users looking to maximize double precision FLOPS/dollar. He estimates an entry level Tetra server would cost approximately $11-12K, while a more richly configured system could run $15-16K.
The most important configuration choices for both new systems are CPU type and memory capacity. Those selections will mostly be a function of how much of your code is (or can be) ported to the GPU, since unported apps will be confined to running on the CPU host hardware.
Appro has conveniently provided intelligent power control for these systems so that when the GPU parts are idle, they can be shut off. Since each M2050 module draws 225 watts, the energy savings will add up fast when these systems are in CPU-only mode. Of course, once you've gone to the expense of buying all these Fermis, there's going to be a lot of incentive to migrate as many of your production codes to the GPU as possible, especially considering that performance-per-watt numbers can be an order of magnitude better on the GPU than on its CPU counterpart.
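The payoff of that idle shutoff is easy to estimate from the 225-watt figure. The idle hours per day below are a hypothetical duty cycle, not a measurement:

```python
# Rough energy savings from powering down idle GPUs in one Tetra server.
M2050_WATTS = 225        # per-module draw cited in the article
GPUS_PER_TETRA = 4
IDLE_HOURS_PER_DAY = 8   # assumed duty cycle (illustrative)

saved_watts = GPUS_PER_TETRA * M2050_WATTS            # 900 W while GPUs are off
saved_kwh_per_day = saved_watts * IDLE_HOURS_PER_DAY / 1000
print(f"{saved_watts} W shut off; ~{saved_kwh_per_day:.1f} kWh saved per day")
```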
Lee says the low-hanging fruit for GPU acceleration is the energy sector and big government labs, with biotech firms and financial institutions a close second. One of the first installations for Appro's Fermi gear will be at the Virginia Polytechnic Institute and State University. That system is scheduled for deployment in July. The company also has an order from an oil and gas company, which will remain anonymous.
Although Appro is one of the first cluster vendors out of the gate with new Fermi offerings (HPC ODM vendor AMAX previewed its Tesla 20-series offerings last month), Supermicro also announced its new Fermi gear this week. Expect more HPC system vendors large and small to roll out their latest Tesla-accelerated machinery over the coming weeks.