October 25, 2013
When Piz Daint – the Cray supercomputer installed at the Swiss National Supercomputing Center (CSCS) – was first announced, the project leaders cited the benefits for COSMO, an atmospheric model used by the German Meteorological Service, MeteoSwiss and other institutions for their daily weather forecasts. The COSMO model is maintained by the Consortium for Small-scale Modeling (aka COSMO), a group of seven national weather services.
A recent article from hpc-ch, the Swiss HPC provider community, provides an in-depth look at developments with the COSMO application, including how it is being modified to take advantage of hardware accelerators such as GPGPUs.
Over the past three years, researchers from the Center for Climate Systems Modeling and MeteoSwiss have been revising and refining the COSMO model's code and the algorithms it employs as part of their work with the High Performance and High Productivity Computing (HP2C) initiative. The main goals of this project were to make the software more efficient and to adapt it to leverage the performance gains offered by hybrid GPU-based computing systems. The code was tested successfully on Piz Daint, a Cray system that derives its FLOPS from both CPUs and GPUs. In September the group reported that the simulations ran more efficiently and consumed less energy.
Because of these promising results, the Steering Committee of the COSMO Consortium has decided to fully support the new developments in the official version. This means that a GPU-friendly version will be distributed to all users of the COSMO model. Oliver Fuhrer, a senior scientist with MeteoSwiss who worked on the code changes, provides additional details about the benefits of GPU-based computing platforms and the significance of the changes.
Fuhrer notes that the integration project will prove a little challenging since the "official" model has also undergone some development work since the start of the HP2C projects – so the new version will need to incorporate both sets of changes. It's a "strict" process that will require some code refactoring and a lot of testing, according to Fuhrer.
Fuhrer also explains that the two HP2C projects illustrated three important points:
First, it is feasible to target GPU-based hardware while retaining a single source code for almost all of COSMO. Second, GPU hardware is very attractive for cutting both simulation time and the electric power consumed by the system running the simulation. Third, domain scientists themselves can develop and work with this new version of the COSMO model.
Even though the changes require substantial work, the efficiency gains and power consumption benefits make a compelling case, especially given the still-expanding popularity of GPUs in big science systems. The upgraded Piz Daint supercomputer, which is coming online this November at CSCS, will use an equal number of CPUs and GPUs. "Applications that can also be run on GPUs will have unprecedented compute power available [to them]," says Fuhrer.