May 22, 2013
The ever-growing complexity of scientific and engineering problems continues to pose new computational challenges. While many of these problems are conveniently parallel, their collective complexity exceeds the computational time and throughput that an average user can obtain from a single computing center.
Thus, we present a novel federation model that enables end-users to aggregate heterogeneous resources and tackle large-scale problems. The feasibility of this federation model was demonstrated, in the context of the UberCloud HPC Experiment, by gathering the most comprehensive information to date on the effects of pillars on microfluidic channel flow.
This experiment has been performed by a joint team of researchers from the Rutgers Discovery Informatics Institute – RDI2 (Javier Diaz-Montes, Manish Parashar, Ivan Rodero, Jaroslaw Zola) and the Computational Physics and Mechanics Laboratory at Iowa State University (Baskar Ganapathysubramanian, Yu Xie).
The ability to control fluid streams at the microscale is of great importance in many domains, such as biological processing, guiding chemical reactions, and creating structured materials. Recently, it has been discovered that placing pillars of different dimensions, at different offsets, induces fluid transformations that can be used to "sculpt" fluid streams (see Figure 1). Because these transformations provide a deterministic mapping of fluid elements from upstream to downstream of a pillar, it is possible to arrange pillars sequentially to obtain complex fluid structures. To better understand this technique, the team from Iowa State University developed a parallel MPI-based Navier-Stokes equation solver, which can be used to simulate flows in a microchannel with an embedded pillar obstacle. The search space consists of tens of thousands of points, where an individual simulation may take hundreds of core-hours and between 64 and 512 GB of memory. In this experiment, the team determined that interrogating the parameter space at a satisfactory level of precision would require 12,400 simulations (tasks).
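Such a parameter sweep amounts to enumerating the Cartesian product of the four simulation variables named in Figure 1. The sketch below illustrates the idea; the grid resolutions are hypothetical placeholders (chosen only so the product matches the 12,400 tasks quoted above), not the sampling actually used in the experiment.

```python
from itertools import product

# Hypothetical grid resolutions for the four simulation variables;
# the actual sampling used in the experiment is not specified here.
channel_heights  = [0.5 + 0.05 * i for i in range(10)]   # 10 values
pillar_locations = [0.1 * i for i in range(10)]          # 10 values
pillar_diameters = [0.2, 0.3, 0.4, 0.5]                  #  4 values
reynolds_numbers = [1 + i for i in range(31)]            # 31 values

# Each point in the Cartesian product is one independent simulation (task).
tasks = list(product(channel_heights, pillar_locations,
                     pillar_diameters, reynolds_numbers))
print(len(tasks))  # 10 * 10 * 4 * 31 = 12400
```

Because every point is an independent solver run, the sweep is conveniently parallel: tasks can be handed out in any order to any available resource.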
|Figure 1: Example flow in a microchannel with a pillar. Four variables characterize the simulation: channel height, pillar location, pillar diameter, and Reynolds number.|
The computational requirements of the problem make solving it with standard computational resources practically infeasible. For example, the experiment would require approximately 1.5 million core-hours if executed on the Stampede cluster, one of the most powerful machines within XSEDE. However, the high utilization of the system and its typical queue waiting times make it virtually impossible to execute such an experiment within an acceptable timeframe. These constraints are not unique to one particular problem or system. Rather, they represent common obstacles that limit the scale of problems an ordinary researcher can consider on a single, even very powerful, system.
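A quick back-of-the-envelope check using only the figures quoted above (12,400 tasks, roughly 1.5 million core-hours on Stampede) shows why a single system is impractical. The 4,096-core allocation below is a hypothetical example, not a figure from the experiment.

```python
tasks = 12_400
total_core_hours = 1_500_000  # approximate Stampede estimate quoted above

# Average cost per task implied by the estimate.
core_hours_per_task = total_core_hours / tasks
print(round(core_hours_per_task))  # ~121 core-hours per simulation

# Even with a hypothetical steady allocation of 4,096 cores and zero
# queue waiting time, the sweep would occupy the machine for weeks.
wall_clock_days = total_core_hours / 4096 / 24
print(round(wall_clock_days, 1))  # ~15.3 days of uninterrupted compute
```

In practice, queue waits and allocation limits stretch that ideal wall-clock figure far further, which is the motivation for federating resources instead.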
To overcome these limitations, the team from Rutgers University developed a novel federation framework, based on CometCloud, aimed at empowering users with aggregated computational capabilities that are typically reserved for high-profile computational problems. The idea is to enable an average user to dynamically aggregate heterogeneous resources as services, much as volunteer computing assembles spare desktop cycles. The proposed federation model offers a unified view of resources and exposes them using cloud-like abstractions, as illustrated in Figure 2. At the same time, the model remains user-centered and can be used by any user without special privileges on the federated resources.
|Figure 2: Multi-layer design of the proposed federation model. Here, the federation overlay dynamically interconnects resources; the service layer offers services such as an associative object store and messaging; the programming layer offers abstractions and APIs to easily create user applications; and the autonomic manager is a cross-layer component that provisions appropriate resources based on user data and policies.|
In the UberCloud experiment, the MPI-based solver was integrated with the federation framework using the master/worker paradigm. In this scenario, the simulation software served as a computational engine, while CometCloud was responsible for orchestrating the execution of the workflow across the dynamically federated resources.
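The master/worker integration can be sketched in outline. The following is a minimal, single-machine illustration using Python's standard library, not CometCloud's actual API: CometCloud coordinates workers across federated sites through a distributed tuple space, for which the thread pool below is only a stand-in, and the solver invocation is a hypothetical placeholder.

```python
from concurrent.futures import ThreadPoolExecutor

def run_simulation(task):
    """Worker kernel: would launch the MPI-based solver for one
    parameter point. The actual solver invocation is a placeholder;
    in the real experiment the worker shells out to the MPI job, e.g.
        subprocess.run(["mpirun", "-np", "64", "./solver", ...])"""
    height, location, diameter, reynolds = task
    return (task, "done")  # in practice: a path to the simulation output

def master(tasks, n_workers=4):
    """Master: dispatch tasks to workers and gather results.
    In the experiment, CometCloud plays this role across dynamically
    federated sites, re-inserting tasks whose workers fail, which
    provides fault tolerance without changes to the solver itself."""
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return list(pool.map(run_simulation, tasks))

# Two illustrative parameter points (height, location, diameter, Re).
results = master([(0.5, 0.1, 0.3, 20), (0.6, 0.2, 0.4, 40)])
print(len(results))  # 2
```

The key design point is the separation of concerns: the simulation code needs no knowledge of the federation, and the framework needs no knowledge of the physics.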
As part of the experiment, a single user federated 10 different resources provided by six institutions from three countries. The execution of the experiment lasted 16 days, consumed 2,897,390 core-hours, and generated 398 GB of output data. The overall experiment is summarized in Figure 3. As seen in this figure, even though the resources were heterogeneous and their availability changed over time, the sustained computational throughput remained above 5 completed simulations per hour.
|Figure 3: Summary of the experiment. Top: Utilization of different computational resources. Line thickness is proportional to the number of tasks being executed at a given point in time. Gaps correspond to idle time, e.g., due to machine maintenance. Bottom: Dissection of throughput, measured as the number of tasks completed per hour. Different colors represent the component throughput of different machines.|
The success of this experiment clearly demonstrates the capability, feasibility, and advantages of such a user-centered computational federation. In the experiment, a regular user was able to solve a large-scale computational engineering problem in roughly two weeks. More importantly, this result was achieved in a few simple steps executed entirely in user space. The user only had to provide the kernels executed by the master and the workers, and in return gained access to a unified, fault-tolerant computational platform with cloud-like capabilities that sustained the computational throughput required to solve the problem. This result is of great relevance given the growing complexity of computational engineering problems, which very often outpaces the increase in performance of individual HPC resources. More information can be found at http://nsfcac.rutgers.edu/CometCloud/uff/. To join the UberCloud HPC Experiment, one can register at http://www.hpcexperiment.com.