HPCwire

Since 1986 - Covering the Fastest Computers
in the World and the People Who Run Them


Climate Modeling at Max Planck Institute


Earth system and climate science deals with complex phenomena in the atmosphere, the ocean, and on land surfaces, including the physical, chemical, and biological processes within these domains and the feedback loops between them. Modeling such phenomena numerically on extremely powerful computers is a crucial element of the Earth System Sciences program at the Max Planck Institute for Meteorology (MPI-M) in Hamburg, Germany. Such model simulations would not be possible without strong links to the IT industry and strong partnerships with other computing and data centers.

According to the international ranking of databases published by the Winter Corporation in September, the largest database in the world running under Linux was installed in Hamburg by the World Data Center for Climate (WDCC) and the German Climate Computing Center (DKRZ). NEC installed the database system at the DKRZ three years ago in conjunction with a 1.5-teraflop NEC SX-6 series vector supercomputer, one of the fastest supercomputers for climate research in Europe.

A new Cray XT3 supercomputer has recently been employed to run ECHAM, a global atmospheric circulation model developed at MPI-M. On this new resource, the ECHAM model executed substantially faster, and at higher resolution, than ever before: thousands of processors ran the application at a record speed of 1.4 trillion calculations per second.

Optimization and improvement of the scalability of the ECHAM model code have been accomplished in cooperation with Sun Microsystems using Grid technology. The Grid environment seems to have the potential to deal successfully with the vast amounts of data that MPI-M produces on a routine basis and stores in the WDCC.

The WDCC database at the DKRZ holds an almost inconceivable volume of nearly 220 terabytes and is about twice the size of the database of a well-known search engine. The Model and Data Group at the Max Planck Institute for Meteorology (M&D/MPI-M) and the German Climate Computing Center (DKRZ) operate the WDCC on behalf of the International Council for Science. The WDCC's database contains the latest research data on the state of the climate and anticipated climatic changes. Approximately 115 terabytes of storage, corresponding to around 24,500 DVDs, are dedicated exclusively to simulation data for the next report of the United Nations' Intergovernmental Panel on Climate Change (IPCC), due to be published in 2007.
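The DVD comparison above is easy to sanity-check. A minimal sketch, assuming nominal single-layer DVDs of about 4.7 GB each (the DVD capacity is an assumption; only the 115-terabyte figure comes from the article):

```python
# Sanity check of the storage comparison quoted above.
# Assumes single-layer DVDs holding ~4.7 GB (decimal units throughout).

ipcc_storage_tb = 115     # terabytes dedicated to IPCC simulation data
dvd_capacity_gb = 4.7     # nominal single-layer DVD capacity (assumption)

dvds = ipcc_storage_tb * 1000 / dvd_capacity_gb
print(round(dvds))        # close to the "around 24,500 DVDs" in the text
```

The result lands within a few dozen discs of the article's round figure, so the comparison holds up.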

MPI-M estimates that a Cray XT3 would make it possible to complete its next-generation IPCC assessment runs in about the same real time as today, despite requiring 120 times more computation. This advance promises to significantly improve the scale and scope of the analysis researchers will be able to submit for the next IPCC assessment report. The newest findings have recently been published.
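The estimate above implies a corresponding jump in sustained throughput: doing 120 times more work in the same wall-clock time requires roughly 120 times the delivered performance. A trivial sketch of that relationship (the factor of 120 and the "same real time" claim are from the article; nothing else is assumed):

```python
# Required speedup = (increase in computation) / (allowed increase in wall time).

compute_factor = 120     # article: 120x more computation for next-gen runs
wall_time_ratio = 1.0    # article: "about the same real time as today"

required_speedup = compute_factor / wall_time_ratio
print(required_speedup)  # sustained throughput must rise ~120x
```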

Grid technology is another area of research cooperation. MPI-M provided its ECHAM code to Sun Microsystems to improve optimization and scalability with the Solaris x64 Operating System. Using the Sun Studio Development Tools, Sun benchmarked the code on a Solaris x64-based cluster. Preliminary runs show nearly linear scaling on 8 and 16 Sun Fire dual-core Opteron nodes. In cooperation with MPI-M, Sun intends to continue optimizing the ECHAM code for much higher node counts. Equally important for this joint venture between industry and public research is the Grid optimization and adaptation of MPI-M's data-evaluation software environment, which is necessary for advanced Earth System modeling.
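"Nearly linear scaling" is conventionally quantified as parallel efficiency: the measured speedup divided by the ideal speedup. A minimal sketch of that calculation, assuming hypothetical timings (only the 8- and 16-node counts come from the article; the runtimes below are invented for illustration):

```python
# Parallel efficiency: measured speedup relative to ideal linear scaling.
# Node counts are from the text; all timings here are hypothetical.

def parallel_efficiency(t_base: float, n_base: int, t_n: float, n: int) -> float:
    """Efficiency of a run on n nodes versus a baseline run on n_base nodes."""
    speedup = t_base / t_n      # how much faster the larger run actually was
    ideal = n / n_base          # how much faster it would be with perfect scaling
    return speedup / ideal

# Hypothetical example: going from 8 to 16 nodes cuts runtime from 100 s to 52.6 s.
eff = parallel_efficiency(t_base=100.0, n_base=8, t_n=52.6, n=16)
print(f"{eff:.0%}")             # ~95% of ideal, i.e. "nearly linear" scaling
```

An efficiency close to 100% at each doubling is what justifies the article's characterization of the 8-to-16-node results.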
