October 17, 2011
The Vikram Sarabhai Space Center (VSSC) is serving as a proving ground for GPUs and heterogeneous computing in India. According to an article examining the center's move to a hybrid CPU-GPU environment, the shift away from a CPU-only approach to supercomputing brought significant gains in power consumption, space, and performance.
According to Vishal Dhupar, who manages Nvidia's South Asian operations, VSSC had "equipment in a single room delivering 220 teraflops," but reaching the 200-plus-teraflop range needed to run PARAS, a homegrown x86-tailored CFD application, would have required 5,000 CPUs. Dhupar says that Nvidia "offered them the same architecture, [ability to] use the same room and offer a quantum jump in performance with a hybrid architecture of CPUs and GPUs." By adding 400 GPUs to the existing 400 CPUs, the center hit its 220-teraflop goal.
By comparison, another Indian supercomputing center, Tata CRL, built a 170-teraflop system from 3,600 CPUs at a cost of $30 million. VSSC achieved 220 teraflops with an investment of $3-3.5 million.
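The cost gap behind those figures is easy to quantify. A back-of-the-envelope sketch (the $3.25 million midpoint is an assumption for VSSC's reported $3-3.5 million range):

```python
# Rough cost-per-teraflop comparison from the figures reported above.
# The $3.25M VSSC figure is an assumed midpoint of the $3-3.5M range.
tata_cost_per_tf = 30_000_000 / 170   # Tata CRL: $30M for 170 teraflops
vssc_cost_per_tf = 3_250_000 / 220    # VSSC: ~$3.25M for 220 teraflops

print(f"Tata CRL: ${tata_cost_per_tf:,.0f} per teraflop")
print(f"VSSC:     ${vssc_cost_per_tf:,.0f} per teraflop")
print(f"Ratio:    {tata_cost_per_tf / vssc_cost_per_tf:.1f}x")
```

On these numbers, the hybrid system comes in at roughly one-twelfth the cost per teraflop of the CPU-only build.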
Dhupar says that “Only the code that was more parallelized had to be tweaked and this gave them a 40x performance boost on one account and a 60x boost on the other.”
As a further point of comparison, Prashant L. Rao notes a substantial energy-efficiency advantage from using GPUs: VSSC consumes 150 kW to deliver its 220 teraflops, while Tata CRL draws 2.5 MW for 170 teraflops.
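Those power figures imply a large gap in performance per watt. A quick sketch (assuming the reported numbers are sustained power draw, i.e. 150 kW and 2.5 MW):

```python
# Performance per watt implied by the reported figures.
vssc_gflops_per_watt = (220 * 1000) / 150_000    # 220 TF at 150 kW
tata_gflops_per_watt = (170 * 1000) / 2_500_000  # 170 TF at 2.5 MW

print(f"VSSC:      {vssc_gflops_per_watt:.2f} GFLOPS/W")
print(f"Tata CRL:  {tata_gflops_per_watt:.3f} GFLOPS/W")
print(f"Advantage: {vssc_gflops_per_watt / tata_gflops_per_watt:.0f}x")
```

By this estimate, the hybrid system delivers on the order of twenty times more compute per watt than the CPU-only cluster.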
Rao also pointed to other differences between CPU-only and heterogeneous systems, noting: "Cost being a perennial problem, Nvidia hopes to convince scientists that they should move their data centers onto GPUs. At the same time, it wants to boost the acceptance of CUDA. They have been looking at Message Passing Interface (MPI) for parallel computing. MPI is a subset of the CUDA framework. So, there's no relearning. The framework has SDKs, debuggers, libraries, compilers etc. Whether you use Fortran, C or C++, it's all supported."
Vishal Dhupar summed up the focus on GPUs in the rapidly growing Indian market (IDC estimates put the HPC market in India at $200 million, growing 10% annually), pointing to the price, performance, and efficiency changes that hybrid computing could bring: "With 2 teraflops available for $10,000, it changes the equation. We want every scientist or researcher to have this."
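Dhupar's $10,000-for-2-teraflops figure implies a striking per-teraflop price for workstation-class GPU computing, far below either cluster discussed above (a simple check of the arithmetic):

```python
# Per-teraflop price implied by Dhupar's workstation figure.
workstation_cost_per_tf = 10_000 / 2  # $10,000 for 2 teraflops
print(f"${workstation_cost_per_tf:,.0f} per teraflop")
```

At $5,000 per teraflop, a personal GPU machine undercuts even VSSC's hybrid cluster on price per teraflop, which is the point Nvidia is pressing with researchers.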
This statement makes clear that Nvidia has set its sights on the Indian academic sector. The company hopes to provide researchers with 2-8 teraflops on personal supercomputers and make it simple to mesh these machines together into clusters or grid computing environments.
Full story at Express Computer