January 10, 2011
In the future, 2010 may be remembered as the year of the GPU, or at least its big debut. China stole TOP500 glory using the massively parallel processing power of the graphics chip. And while the US can claim no GPU-based supercomputers among the top 10, GPGPU computing is having a big influence on US science and research.
In a piece over at Scientific Computing, Rob Farber examines the growing popularity of GPU computing. As a senior research scientist at Pacific Northwest National Laboratory, Farber has a good vantage point to see how the evolution of computing technology affects science on the ground. Farber argues that multi-threaded and GPGPU technology are changing the dynamics of scientific computing, delivering fresh opportunities into the realms of academia, product development and HPC research. In particular, GPGPU computing has made it possible to do more science with fewer or cheaper resources.
Graphics processors have matured into general-purpose computational devices at exactly the right time to be considered in this industry-wide retooling to utilize multi-threaded parallelism. To put this in very concrete terms, any teenager (or research effort) from Beijing, China, to New Delhi, India, can purchase a teraflop-capable graphics processor and start developing and testing massively parallel applications.
While it's no secret that multicore hardware requires applications that can harness its power, the fact is that hardware is way out in front, with software struggling to catch up. Lest that disconnect remain a major blight on scientific progress, Farber offers this cautionary advice:
Legacy applications and research efforts that do not invest in multi-threaded software will not benefit from modern multi-core processors, because single-threaded and poorly scaling software will not be able to utilize extra processor cores. As a result, computational performance will plateau at or near current levels, placing the projects that depend on these legacy applications at risk of both stagnation and loss of competitiveness.
Still, Farber predicts that HPC will experience tremendous progress as the next generation of software developers masters the challenges of massively parallel programming. Multicore-aware software is the key that will unlock the full potential of multicore hardware, and that hardware is already here. Farber notes that major HPC vendors have developed, or are in the process of developing, hybrid systems that can take advantage of the parallel nature of GPUs. Many, if not most, supercomputing centers are themselves evaluating hybrid CPU-GPU architectures, among them Tokyo Tech, Oak Ridge National Laboratory (ORNL), the National Energy Research Scientific Computing Center (NERSC) and PNNL.
Full story at Scientific Computing