March 28, 2013
The Yellowstone supercomputer is a 1.5-petaflop IBM iDataPlex system at peak performance. With 72,288 processor cores, the machine is powerful enough to rank No. 13 on the Top500 list. It was first tasked with 11 compute-intensive projects as part of the Accelerated Scientific Discovery (ASD) initiative.
Yellowstone is based on IBM's iDataPlex architecture and delivers 29 times the workload throughput of NCAR's Bluefire, which was decommissioned on January 31. It is capable of performing one and a half quadrillion calculations per second and stores eleven petabytes of information, roughly one thousand times the total print holdings of the Library of Congress.
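The headline figures above can be related to each other with simple arithmetic: 1.5 petaflops spread across 72,288 cores implies a per-core peak rate. The short sketch below derives that number; the per-core figure is computed here for illustration, not an official specification.

```python
# Relating Yellowstone's core count to its peak floating-point rate.
# Both inputs come from the article; the per-core rate is derived.

cores = 72_288          # processor cores
peak_flops = 1.5e15     # 1.5 petaflops = 1.5 quadrillion operations/second

per_core_gflops = peak_flops / cores / 1e9
print(f"~{per_core_gflops:.1f} GFLOPS per core at peak")  # ~20.8 GFLOPS per core
```

This back-of-the-envelope rate is consistent with the Intel Sandy Bridge-class processors of that era, which sustained on the order of tens of gigaflops per core at peak.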
The ASD initiative provides these large-scale computational resources to a small number of projects for a short time period. These projects help give the system a workout and allow for the pursuit of scientific objectives that otherwise would not be possible through normal allocation opportunities.
These projects, chosen at the National Center for Atmospheric Research (NCAR), were part of the system's original purpose. The supercomputer carried out intensive computation over a two-month period, investigating timely issues surrounding Earth and its atmosphere, such as creating better long-range weather forecasts and closing the spatial gap between model cloud dynamics and cloud microphysics.
Yellowstone has customized Geyser and Caldera clusters, which are specialized data analysis and visualization resources within Yellowstone's data-centric environment. These systems provide a 20-fold increase in the Computational and Information Systems Laboratory's (CISL) dedicated data analysis and visualization resources. Geyser, with 16 large-memory nodes and 1 TB of memory per node, is designed to facilitate large-scale data analysis and post-processing tasks, including 3D visualization. Caldera also has 16 nodes, each with two NVIDIA Tesla GPUs, to support parallel processing, visualization activities, and the development and testing of general-purpose GPU code.
Taken together, these components improve capabilities central to NCAR’s mission, such as supporting the development of climate models, weather forecasting, and other critical research.
One of the projects selected by NCAR involved predicting North American air quality through the year 2055. Gabriele Pfister of NCAR led the project, which was allocated 6.25 million core-hours on Yellowstone. The study ran simulations with the nested regional climate model with chemistry (NRCM-Chem) to examine possible changes in weather and air quality over North America between the present day and two future periods: 2020-2030 and 2045-2055. The results will provide insight into expected future changes in air quality and will also be used for dynamical downscaling (of meteorology and air quality) of global climate simulations performed at NCAR.
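Dynamical downscaling, mentioned above, nests a fine-resolution regional grid inside a coarser global simulation so that regional detail (terrain, local chemistry) can be resolved. A real nested model such as NRCM-Chem solves the full physics and chemistry on the nest; the sketch below only illustrates the grid-nesting idea with simple linear interpolation of a coarse 1-D field onto a finer grid. The `refine` helper and the sample values are hypothetical, for illustration only.

```python
# Illustrative sketch of grid nesting for downscaling (NOT a real
# climate model): interpolate a coarse "global" 1-D field onto a
# grid that is `factor` times finer, as a regional nest would use.

def refine(coarse, factor):
    """Linearly interpolate a 1-D coarse field onto a finer grid.
    Hypothetical helper; real models re-solve the dynamics on the nest."""
    fine = []
    for i in range(len(coarse) - 1):
        a, b = coarse[i], coarse[i + 1]
        for k in range(factor):
            # Evenly spaced points between each pair of coarse values
            fine.append(a + (b - a) * k / factor)
    fine.append(coarse[-1])  # keep the final coarse point
    return fine

# Coarse "global" temperatures (K) at 4 grid points, refined 3x.
global_field = [280.0, 284.0, 282.0, 279.0]
regional_field = refine(global_field, 3)
print(len(regional_field), "regional points from", len(global_field), "global points")
```

In practice the regional model is then integrated forward in time on the nest, with the coarse global simulation supplying its boundary conditions; the interpolation here stands in for that boundary forcing step.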