May 23, 2013
The study of climate change is one of those scientific problems where it is almost essential to model the entire Earth to attain accurate results and make worthwhile predictions.
However, according to NASA, “It takes a small army of scientists and computer programmers a year or more to build a model. Then they need a supercomputer fast enough to run it. Like those models, the machines themselves can be unfriendly.”
In an attempt to make climate science more accessible to smaller research facilities, NASA introduced what they call ‘Climate in a Box,’ a system they note acts as a desktop supercomputer.
According to Project Manager Mike Seablom, the idea behind a desktop supercomputer for running global climate models is to carry out preliminary tests and simulations on small but high-performance systems before the applications are transplanted onto and incorporated into NASA’s larger supercomputers.
The ‘Climate in a Box’ systems all come standardized with a baseline software framework that provides a common environment in which researchers can program. The idea is that these systems could all be connected in something of a virtualized cloud environment. This also allows scientists working in different areas to combine their expertise in a simplified manner.
“The reason we put in a common software framework is exactly for the different disciplines to come in and use the same interface to be able to exchange data, models, and workflow,” said Tsendgar Lee, High End Computing Manager, NASA HQ.
These common climate modeling conditions are based on the NASA-generated GEOS-5 model, which according to the video below, “produces highly detailed, tightly calibrated output by facilitating heavyweight climate research on modest budgets.”
The system runs on Linux, according to NASA, but can also be used with Windows HPC. Its programming models are written in Fortran, meaning programmers are for the most part writing their applications in a familiar environment. Further, NASA says those programmers will be able to start their applications more quickly than they would on larger supercomputers, because the system is “able to ingest data and start crunching shortly after installation.”
Building a comprehensive model of global climate change is essential to understanding the planet’s underlying environmental problems. A deeper understanding is the first step in taking meaningful action to correct those problems and mitigate climate change.
It is NASA’s hope that facilitating high-performance access points for smaller research institutions will further the global research effort as a whole. “[NASA’s] plan would essentially declare a minimum standard for planet climate and weather research.”