Running Stochastic Models on HTCondor


Groundbreaking scientific research increasingly depends on computationally intensive HPC resources, and mid-level research organizations that lack the means to build an extensive HPC cluster are looking for cost-effective ways to contribute to these initiatives.

In an effort to evaluate creative ways of participating in those large scientific projects, research by Spencer Taylor at Brigham Young University examined the open source software HTCondor, which harnesses the computing power of idle computers on a local network to perform jobs. In this case it was applied to a water resource model, the Gridded Surface Subsurface Hydrologic Analysis (GSSHA) model, which requires computationally intensive stochastic simulations of the kind common to many scientific disciplines.

The resulting tests showed that HTCondor can be a workable alternative to acquiring additional HPC resources for mid-level research institutions. “We found that performing stochastic simulations with GSSHA using [the] HTCondor system significantly reduces overall computational time for simulations involving multiple model runs and improves modeling efficiency,” Taylor wrote.

The idea behind HTCondor, using idle computing resources to process large amounts of data and perform intensive computations, has notably been employed by researchers at Berkeley in the SETI@home project, where home computers are volunteered when idle to form a grid that analyzes extraterrestrial radio signals. The BYU work aims at something similar, letting mid-sized research institutions integrate their existing computing base with HPC resources both on site and in the cloud. As noted in the research, “the goal of this project is to demonstrate an alternative model of HPC for water resource stakeholders who would benefit from an autonomous pool of free and accessible computing resources.”

The architecture diagram below shows how the HTCondor software accesses and coordinates the various resources, including on-site ‘worker computers,’ local HPC clusters, and the wider HTCondor network, which, much like SETI@home, draws on volunteered computers across the country.
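HTCondor's usual mechanism for reaching beyond the local pool is flocking, in which a submit machine is allowed to send jobs to other pools' central managers when local workers are busy. A minimal configuration sketch, with hypothetical hostnames rather than anything from the BYU setup, might look like this:

    # condor_config fragment on a submit machine (hostnames are placeholders)
    CONDOR_HOST = cm.local-pool.example.edu   # central manager of the local pool
    # Let jobs overflow to a larger remote pool when local workers are busy
    FLOCK_TO = cm.remote-pool.example.org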

The specific instance set up by the BYU research used a model that simulated six precipitation events, drawing on hydrometeorological data spanning a two-week period. A single simulation required about 14 minutes on a desktop computer, and the test called for 150 such simulations, roughly 35 hours of work (150 × 14 minutes) on a single processor.

“Because of the nature of HTCondor,” Taylor explained in the research, “each stochastic simulation ran on a different number of processors ranging from about 80 to 140. As expected, with about 100 times the computational power of normal circumstances I was able to essentially reduce the runtime by [a] factor of 100.” In essence, by putting these formerly idle processors to work in parallel, the BYU implementation achieved performance consistent with other localized HPC installations.
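The write-up does not reproduce the project's job files, but the way HTCondor farms out a workload like this is through a submit description file that queues each model run as an independent job. A minimal, hypothetical sketch for the 150 runs, with placeholder script and file names rather than the project's actual ones, might look like this:

    # Hypothetical submit description: 150 independent GSSHA runs as one cluster
    universe    = vanilla
    executable  = run_gssha.sh                       # placeholder wrapper script
    arguments   = $(Process)                         # run index 0-149 selects the input set
    transfer_input_files = inputs_$(Process).tar.gz  # placeholder per-run inputs
    output      = run_$(Process).out
    error       = run_$(Process).err
    log         = gssha_batch.log
    queue 150

Each queued job is matched to whatever idle machine HTCondor finds, which is what lets the 150 runs proceed side by side rather than back to back.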

As seen in the figure above and noted in the research, “it is also possible to include commercial cloud resources as part of an HTCondor pool.” The software makes it possible to control which jobs are sent to the cloud based on price.

“For example,” the research noted, “if you were using Amazon’s Elastic Compute Cloud (EC2) you could set the ‘ec2_spot_price’ variable to ‘0.011’ so that HTCondor would send jobs to the cloud only if the cost per CPU hour was $0.011 or less.” Many research institutions already rely on cloud services for overflow storage and peak-time computation, so being able to incorporate those resources into the HTCondor system is an important consideration.
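In submit-file terms, that price cap sits alongside HTCondor's other EC2 grid-universe settings. A hedged sketch follows; the service URL, key-file paths, AMI, and instance type are placeholders rather than values from the research:

    # Hypothetical EC2 grid-universe submit description with a spot-price cap
    universe      = grid
    grid_resource = ec2 https://ec2.us-east-1.amazonaws.com/
    executable    = gssha_cloud_run          # used as a job label; the AMI does the work
    ec2_access_key_id     = /home/user/ec2_access_key   # placeholder path to key file
    ec2_secret_access_key = /home/user/ec2_secret_key   # placeholder path to key file
    ec2_ami_id        = ami-00000000                    # placeholder machine image
    ec2_instance_type = m1.small
    ec2_spot_price    = 0.011   # bid only while the hourly spot price is $0.011 or less
    log = ec2_gssha.log
    queue

With a bid like this in place, the job runs only while the spot market is at or below the cap, rather than paying a higher rate.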

Stochastic simulations, in which results depend on a number of randomized probabilistic variables, are commonplace across scientific disciplines, and Taylor is hopeful the approach can be applied across those disciplines: “Using the scripts developed in this project as a pattern, HTCondor could be used for many other applications besides GSSHA jobs.”
