August 01, 2012
Argonne National Laboratory is home to Mira, currently the third-fastest system on the TOP500 list. The 48-rack IBM Blue Gene/Q supercomputer runs on 786,432 cores and cranks out more than 8 Linpack petaflops. While Mira is not yet fully operational, applications are already being optimized to run on the machine. Today, InformationWeek detailed a number of workloads the system is expected to handle.
Under the umbrella of Argonne’s Early Science Program, Mira will be assisting research in earthquake modeling, quantum mechanics, the effect of clouds on the climate, and materials science. These applications, along with others in the Early Science Program, should help researchers judge the system’s capabilities.
Mike Papka, the deputy associate director of the lab’s computing, environment and life sciences directorate, explained how applications would be ramped up on Mira. "A new architecture with a new system software stack, and at a scale that is larger than anyone else has run previously, results in a system that will have issues never seen before,” he said. “These issues need to be exposed and addressed before we go into production, and it often requires real users running real code on the system."
Mira will be taking over for the Intrepid supercomputer, a Blue Gene/P machine. Back in 2008, the system ranked fourth on the TOP500 at 458 Linpack teraflops. Intrepid was used for an “immediate need” project during the summer of 2010, when researchers ran simulations of oil rising through water in response to the Deepwater Horizon oil spill disaster.
Intrepid will stay online until Mira becomes fully operational, at which point the system will most likely get decommissioned. The laboratory cannot support the operational costs of both systems, so Intrepid may get sold to a university or simply get stripped down for parts.
According to the article, 60 percent of Mira’s cycles will be allocated to the DOE’s Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program. The project allows researchers from industry, government and academia to submit proposals to a panel. The panel then reviews the proposals, selecting the applications that are most relevant to the program and backed by computationally ready software.
The Advanced Science Computing Research Challenge accounts for another 30 percent of Mira’s computing time. This program works on issues aligned with the DOE’s energy priorities. Cycles related to the challenge will be allocated in June 2013.
The leftover resources will be reserved for “immediate need” workloads like Intrepid’s oil spill simulations.
Full story at InformationWeek