November 29, 2011
'Nobel Prize of supercomputing' awarded for breakthrough performance in the search for new metal materials
SANTA CLARA, Calif., Nov. 29 -- NVIDIA today announced that the Tokyo Institute of Technology's Global Scientific Information and Computing Center (GSIC) has received the coveted Gordon Bell Prize, the supercomputing industry's highest honor, with its NVIDIA Tesla GPU-accelerated supercomputer.
The Gordon Bell Prize, awarded by the Association for Computing Machinery in conjunction with the Institute of Electrical and Electronics Engineers, recognizes achievements by researchers utilizing parallel computing to achieve scientific breakthroughs. Takayuki Aoki's research group at the GSIC won the Gordon Bell "Special Achievement in Scalability and Time-to-Solution" award for its work on the Tsubame 2.0 supercomputer.
The Tokyo Tech research team was recognized for achieving 2.0 petaflops of performance on a practical research application in single precision. The application, which is designed to simulate the behavior of metal alloy microstructures called dendrites, enables researchers to identify lighter, stronger metal materials necessary for the development of more fuel-efficient automobiles. Previous attempts to simulate these complex dendrite microstructures have been limited by the available performance of even the largest supercomputers.[1]
"This kind of breakthrough performance and research is precisely why we decided to accelerate Tsubame 2.0 with NVIDIA Tesla GPUs," said Takayuki Aoki, professor of the Tokyo Institute of Technology. "This is one of many research projects we are working on that take advantage of the performance and energy-efficiency of GPUs."
The Gordon Bell Prize carries a $10,000 award provided by Gordon Bell, a pioneer in high performance and parallel computing.
Tesla GPUs are massively parallel accelerators based on the NVIDIA CUDA parallel computing architecture. Application developers can accelerate their applications by programming in CUDA C, CUDA C++, or CUDA Fortran, or by using simple, easy-to-use directive-based compilers.
For more information about Tsubame 2.0, visit the Tokyo Institute of Technology, Global Scientific Information and Computing Center website. To learn more about Tesla GPUs, visit the Tesla website. To learn more about CUDA, visit the CUDA website.
[1] Peta-scale Phase-Field Simulation for Dendritic Solidification on the TSUBAME 2.0 Supercomputer
NVIDIA (NASDAQ: NVDA) awakened the world to computer graphics when it invented the GPU in 1999. Today, its processors power a broad range of products from smart phones to supercomputers. NVIDIA's mobile processors are used in cell phones, tablets and auto infotainment systems. PC gamers rely on GPUs to enjoy spectacularly immersive worlds. Professionals use them to create visual effects in movies and design everything from golf clubs to jumbo jets. And researchers utilize GPUs to advance the frontiers of science with high-performance computing. The company holds more than 2,100 patents worldwide, including ones covering ideas essential to modern computing. For more information, see www.nvidia.com.
Source: NVIDIA Corp.