November 02, 2011
Sandia National Labs made a bit of HPC history this week when it announced it had installed the first HPC cluster outfitted with AMD's Accelerated Processing Units (APUs), based on the chipmaker's 'Fusion' processor design. The chip integrates x86 CPU cores and ATI GPU cores onto the same processor.
This is an experimental system only, to be used as a platform to evaluate the CPU-GPU heterogeneous processor model of supercomputing. Currently, AMD's Fusion chips are designed for personal computing platforms and are not intended for servers. But apparently Sandia's Scalable Computer Architectures group was eager enough to get their hands on an APU cluster to contract Penguin and AMD to build them a system.
According to the press release, the Penguin system, known as the Altus 2A00, was specifically designed with AMD APUs in mind. The QDR InfiniBand-based cluster comprises 104 servers, which house an unspecified number of AMD A8-3850 APUs.
Designed for desktop PC duty, the A8-3850 is a quad-core x86 design integrated with 400 Radeon (HD 6550D) cores. The CPU cores run at 2.9 GHz, while the GPU side runs at a more modest 600 MHz. TDP is a respectable 100 watts.
The whole cluster delivers 59.6 peak teraflops, but the idea here is not to break any performance records. The Sandia researchers will use the machine to explore programming models for integrated CPU-GPU platforms, relying primarily on OpenCL and MPI. The draw here is to be able to access the considerable vector capabilities of the GPU within the same memory space as the CPU.
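The 59.6 teraflops figure lines up with the published chip specs. A back-of-envelope check, assuming one APU per server, 2 single-precision flops per cycle per Radeon core, and 8 flops per cycle per x86 core (these rates are typical for the architecture but are not stated in the article):

```python
# Back-of-envelope check of the cluster's quoted 59.6 peak teraflops.
# Assumptions (not from the article): one A8-3850 per server,
# 2 SP flops/cycle per Radeon core, 8 SP flops/cycle per x86 core.
gpu_flops = 400 * 600e6 * 2        # GPU side: 480 GFLOPS per APU
cpu_flops = 4 * 2.9e9 * 8          # CPU side: 92.8 GFLOPS per APU
per_node = gpu_flops + cpu_flops   # ~572.8 GFLOPS per server
cluster = 104 * per_node           # 104 servers in the Altus 2A00
print(round(cluster / 1e12, 1))    # → 59.6 (teraflops)
```

Note that the GPU side supplies over 80 percent of that peak, which is exactly why the shared CPU-GPU memory space matters for programmability.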
The rationale for such research is that heterogeneous processors such as these APUs will be the basis for future exascale computers. In a roundtable discussion organized by AMD on Wednesday, AMD Technology Group CTO Chuck Moore noted that while their x86 server CPUs, like their new Interlagos Opterons, continue to make strides, those designs will likely not be the path to exaflop performance by the end of this decade.
"Instead what we need to do is to make really good use of heterogeneous computing," said Moore. "By combining the best of CPU and GPU technology, presumably on the same chip, we believe we can build computing nodes that are on the order of 10 teraflops in that timeframe."
According to him, AMD has plans in the works for an "HPC APU that would utilize even a larger GPU and fewer x86 cores" than the current desktop chips. He thinks the company can build HPC APUs that run at about 150 watts and would be capable of driving an exaflop computer that consumes no more than 20 MW. And, thanks to the integrated CPU-GPU memory space, these machines would be reasonably easy to program.
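Moore's numbers hang together arithmetically. Spelling out the exascale budget implied by 10-teraflop nodes, 150-watt APUs, and a 20 MW ceiling (the node count and per-node power allowance below are derived, not quoted):

```python
# The exascale arithmetic implied by Moore's figures:
# 1 exaflop from ~10-teraflop nodes inside a 20 MW power envelope.
nodes = 1e18 / 10e12            # 100,000 nodes needed for an exaflop
watts_per_node = 20e6 / nodes   # 200 W power budget per node
apu_tdp = 150                   # projected HPC APU power draw (watts)
print(int(nodes), watts_per_node)        # → 100000 200.0
print(apu_tdp <= watts_per_node)         # → True
```

A 150-watt APU thus leaves roughly 50 watts per node for memory, interconnect, and power-delivery overhead, which is the tight-but-plausible margin underlying the 20 MW claim.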