November 08, 2012
URBANA-CHAMPAIGN, IL, Nov. 8 – The full Blue Waters petascale computing system is now available in "friendly-user" mode to National Science Foundation-approved science and engineering teams. These groups from across the country will use Blue Waters for challenging research in weather and climate, astrophysics, biomolecular systems, and other fields.
Blue Waters is one of the largest computing systems in the world, consisting of 237 racks of Cray XE6 nodes, 32 racks of Cray XK7 nodes with NVIDIA GK110 Kepler GPUs, and over 25 petabytes of usable online storage. All the computational and online storage hardware is in place and has passed preliminary testing at scale. NCSA and Cray are conducting functionality, feature, performance, and reliability testing of the system at full scale. As these tests are completed, a representative production workload of science and engineering applications will run on the full Blue Waters system during an extensive availability test period.
Selected "friendly users" will have access to the entire system during this window, helping the Blue Waters team test and evaluate the full system and enabling the Petascale Computing Resource Allocation (PRAC) teams to use Blue Waters productively as soon as it reaches full production status.
Many of the PRAC teams used the Blue Waters Early Science System in the spring, achieving impressive results with just 15 percent of the full Blue Waters system configuration.
Blue Waters is designed for the most data-, memory- and compute-intensive computational science and engineering work and to provide sustained performance of 1 petaflop on a range of science and engineering applications. The benchmark codes that measure the performance of Blue Waters include DNS3D, VPIC, MILC/Chroma, NAMD, NWChem, GAMESS, Paratec, PPM, QMCPACK, SPECFEM3D, and WRF.
The Blue Waters project is supported by the National Science Foundation and the University of Illinois.
Source: The Blue Waters Project