December 14, 2009
NTU achieves breakthrough results in sub-atomic particle calculations at 1 percent the cost and 10 percent the power consumption of a BlueGene/L supercomputer
TAIPEI, Taiwan, Dec. 14 -- A research team at National Taiwan University (NTU) is achieving breakthrough results in learning about the early evolution of the universe by harnessing NVIDIA Tesla parallel processors -- which provide the computational horsepower of an IBM BlueGene/L supercomputer at just 1 percent the cost and 10 percent the power consumption.
The team, led by Ting-Wai Chiu, Professor of Physics and Associate Director of the Center for Quantum Science and Engineering (CQSE), is studying the interactions of sub-atomic particles to learn about the origins of the universe, work that requires enormous computational power.
NTU is carrying out this work on the first GPU-based supercomputer in Taiwan, the 128-GPU cluster at CQSE, which uses 16 NVIDIA Tesla S1070 1U systems and 64 Tesla C1060 processors. The system plays a key role in large-scale computations for quantum physics, ranging from the strong interaction at the subatomic scale, to strongly correlated electrons in condensed matter physics, to cosmology at the astronomical scale.
"We are excited to see our GPU-based cluster outperform many conventional supercomputers in both cost and energy use," said Chiu. "With our GPU-enabled supercomputer, we are delivering 15 teraflops at a price of US$200,000, 1 percent the cost of a conventional supercomputer like IBM BlueGene/L."
"It's deeply rewarding to see NVIDIA Tesla GPUs helping professionals and researchers achieve amazing breakthroughs in their work," said Andy Keane, general manager, Tesla business, NVIDIA. "The exceptional speed-up being seen by NTU has the ability to dramatically accelerate the research into one of life's biggest and most complex scientific challenges."
NVIDIA Tesla GPUs are based on CUDA, NVIDIA's computing architecture that enables its GPUs to be programmed using industry standard programming languages and APIs, opening up their massive parallel processing power to a broad range of applications beyond graphics. The CQSE has developed highly efficient CUDA-optimized codes for the computationally challenging problems in QCD, quantum spin systems, and astrophysics.
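To give a sense of the programming model, here is a minimal, illustrative CUDA kernel (a generic sketch, not code from the CQSE project): a single-precision a*x + y update of the kind that dominates the inner loops of iterative lattice-QCD solvers, with one GPU thread assigned to each array element.

#include <cstdio>
#include <cuda_runtime.h>

// Illustrative kernel: y = a*x + y, one thread per array element.
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 20;              // 1M elements
    size_t bytes = n * sizeof(float);

    // Host buffers
    float *hx = (float *)malloc(bytes);
    float *hy = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    // Device buffers
    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    saxpy<<<blocks, threads>>>(n, 2.0f, dx, dy);
    cudaDeviceSynchronize();

    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %f (expect 4.0)\n", hy[0]);

    cudaFree(dx); cudaFree(dy);
    free(hx); free(hy);
    return 0;
}

Launching hundreds of thousands of such lightweight threads at once is what lets a single Tesla GPU stand in for many conventional CPU cores on data-parallel workloads.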
In addition, the lattice QCD group (TWQCD) based at National Taiwan University is now the first group in the world to use a GPU cluster to perform large-scale simulations of lattice QCD with exact chiral symmetry.
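For context, the "exact chiral symmetry" referred to here is conventionally realized on the lattice through the Ginsparg-Wilson relation, which the lattice Dirac operator D must satisfy (standard lattice-QCD background, not a statement about TWQCD's particular formulation):

    \gamma_5 D + D \gamma_5 = a \, D \gamma_5 D

where a is the lattice spacing; as a -> 0 this reduces to the continuum anticommutation relation {\gamma_5, D} = 0 of chirally symmetric fermions. Operators satisfying this relation are far more expensive to simulate than standard lattice fermions, which is why GPU acceleration matters for this class of problem.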
For more information about NVIDIA Tesla GPUs, visit www.nvidia.com/tesla.
NVIDIA (Nasdaq: NVDA) awakened the world to the power of computer graphics when it invented the graphics processing unit (GPU) in 1999. Since then, it has consistently set new standards in visual computing with breathtaking, interactive graphics available on devices ranging from portable media players to notebooks to workstations. NVIDIA's expertise in programmable GPUs has led to breakthroughs in parallel processing which make supercomputing inexpensive and widely accessible. Fortune magazine has ranked NVIDIA #1 in innovation in the semiconductor industry for two years in a row. For more information, see www.nvidia.com.
Source: NVIDIA Corp.