November 10, 2011
Moab technology plays integral role in NOAA's plan to create an organization-wide grid
PROVO, Utah, Nov. 10 — Adaptive Computing, manager of the world's largest supercomputing workloads and an expert in HPC workload management, today announced that the National Oceanic and Atmospheric Administration (NOAA), in conjunction with Oak Ridge National Laboratory (ORNL) and Computer Sciences Corporation (CSC), has selected Moab HPC Suite as the intelligent grid resource management solution for existing and future NOAA HPC sites. During Supercomputing '11 in Seattle, Washington, NOAA will present its Moab deployment in a Birds of a Feather session on November 17 at 12:15 p.m. in room TCC 305. The Moab decision engine is the workload management software for Gaea, NOAA's new leadership-class supercomputer, and will serve as the standard for providing HPC grid functionality to all NOAA supercomputers. With Moab, NOAA gains a robust management infrastructure for compute jobs that unifies HPC resources across large geographic divides and maximizes job throughput and CPU utilization, supporting the project's overall goal of developing better models for predicting climate variability and change.
In choosing a workload manager, one of NOAA's primary considerations was location-aware scheduling. NOAA's Geophysical Fluid Dynamics Laboratory (GFDL), located in Princeton, New Jersey, supports its local researchers as well as other NOAA researchers across the country, while Gaea is physically located at ORNL in Tennessee. The disparate locations of users and systems, current and future, create challenges in networking, data transfer, and job submission. Moab solves the job submission problem by allowing a local instance of Moab to be installed in New Jersey, where users can interact with the system, manipulate their data sets, and analyze their results. That local instance then communicates with the instance of Moab running on Gaea in Tennessee, migrating workload jobs and data between GFDL and ORNL. This model can grow organically as new users and compute resources come online.
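As an illustrative sketch only (not NOAA's actual configuration), a batch job submitted through a local scheduler instance can declare data-staging requirements so that input files are copied to the remote compute site before execution and results are returned afterward. TORQUE supports this through its stagein/stageout job attributes; all hostnames, paths, resource counts, and the executable name below are hypothetical:

```shell
#!/bin/bash
# Hypothetical job script: submitted at the local site (e.g., GFDL),
# intended to execute on a remote system. TORQUE's stagein/stageout
# attributes copy the named files before the job starts and after it ends.
#PBS -N climate_model_run
#PBS -l nodes=64:ppn=16,walltime=08:00:00
#PBS -W stagein=input.nc@gfdl.example.gov:/home/user/data/input.nc
#PBS -W stageout=output.nc@gfdl.example.gov:/home/user/results/output.nc

cd "$PBS_O_WORKDIR"
./climate_model input.nc output.nc
```

Under a grid configuration, the local Moab instance can forward such a job to the peer Moab instance at the remote site based on resource availability and site policy, so users never need to log in to the remote machine directly.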
Moab is unique among workload managers in that it can run on top of multiple resource managers, a capability crucial to NOAA's goal of delivering a unified grid. On Gaea, NOAA plans to pair Moab with TORQUE Resource Manager, a PBS-based open-source resource manager that is maintained and supported by Adaptive Computing.
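Because Moab layers on top of TORQUE, users can keep familiar PBS-style job scripts while Moab applies grid-level policy before dispatching work. A minimal sketch of a submission session, with the resource request, queue name, and script name all hypothetical:

```shell
# Hypothetical session: the same PBS-style script can be handed either to
# TORQUE directly (qsub) or to Moab (msub). Submitting through msub lets
# Moab apply grid-wide scheduling policy before the job reaches TORQUE.
msub -l nodes=32:ppn=16,walltime=04:00:00 -q batch model_run.sh

# Query the job's state through Moab once it is queued
# (substitute the job ID returned by msub):
checkjob <jobid>
```

The design benefit is that site-local tooling stays the same: researchers submit jobs as they always have, and the decision engine above the resource manager determines where and when the work actually runs.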
"NOAA's mission is to understand and predict changes in the Earth's environment and we rely on supercomputing technologies like Moab to support the data-intensive research of our scientists," said Joseph Klimavicz, chief information officer and director of high performance computing and communications at NOAA. "We look forward to working with a well-established HPC software provider such as Adaptive Computing and are confident in the product's capabilities."
"We selected Adaptive Computing for NOAA's mission-critical deployment based on the company's proven Moab technology and its unique, location-aware functionality," said Steven Baxter, program manager at CSC.
NOAA is currently licensed to run Moab at three other HPC sites, including Boulder, Colorado, and the $27.6 million supercomputing center in Fairmont, West Virginia. NOAA's long-term plan is to link the sites under a single HPC grid with global job submission and a single point of reporting.
"We are honored to play a critical role in supporting NOAA's ground-breaking climate research," said Robert Clyde, CEO of Adaptive Computing. "As HPC systems grow more complex, flexibility is a key component for any resource management solution. The latest upgrades to the Moab and Viewpoint technology enable the type of flexibility required for next-generation supercomputers."
Funded through the American Recovery and Reinvestment Act of 2009, Gaea will serve as a dedicated high performance computing resource for NOAA and its extensive network of research partners. The system will enable scientists to leverage a significant increase in computing capacity to address some of the most pressing global climate change questions. Moab manages more TOP500 CPUs than any other solution and has a proven record of managing large numbers of users in complex research environments while simultaneously optimizing the utilization of petaflop-scale supercomputers.
About Adaptive Computing
Adaptive Computing manages the world's largest supercomputing environments with its self-optimizing dynamic cloud management solutions and HPC workload management systems driven by Moab, a patented multi-dimensional decision engine. Moab delivers policy-based governance, allowing customers to consolidate and virtualize resources, allocate and manage applications, optimize service levels, and reduce operational costs. Adaptive Computing is the preferred dynamic cloud and workload management solution for the leading global HPC and datacenter vendors. For more information, call 801-717-3700 or visit www.adaptivecomputing.com.
Source: Adaptive Computing