January 08, 2007
Purdue University's Rosen Center for
Advanced Computing has become the largest provider of high-throughput
computing cycles on the National Science Foundation's TeraGrid.
Carol X. Song, senior research scientist in the Rosen Center and principal investigator for TeraGrid at Purdue, says that more than 4,300 computers of all sizes -- from desktop machines used by students to do homework and check e-mail, up to large, powerful research computers -- are linked together using the open source application Condor.
"By using Condor and making resources available over the TeraGrid, we are leveraging our national and international science resources," Song says. "We will continue to expand our Condor pool to include additional machines as well as machines at other campuses through regional grids."
By early 2007, Purdue officials expect the university's Condor pool to include more than 5,000 machines.
Miron Livny, professor of computer science at the University of Wisconsin, says that Purdue's Condor pool is the largest in the nation.
"Purdue is committed to a vision, and they are making that vision a reality. I am pleased to say that early on I worked closely with people at Purdue, and we shared this vision for research computing," Livny says. "I think it's wonderful that Purdue has taken the leadership on this on the TeraGrid. And I don't pass out these kinds of compliments often."
One researcher, Michael Deem, Rice University's John W. Cox Professor of Chemical Engineering, has used nearly one million hours of computer cycles to catalog the chemical structure of compounds called zeolites.
Deem aims to identify and categorize as many of these structures as possible so that chemical engineers can select the exact zeolite they need. This is just the kind of high-throughput job that works well on Purdue's distributed computing system.
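Work like Deem's, where many independent calculations run over a catalog of inputs, maps naturally onto Condor's submit-and-queue model. As a rough illustration only, a high-throughput sweep might be described to Condor with a submit file along these lines; the executable and file names here are hypothetical, not Deem's or Purdue's actual setup:

```
# Hypothetical HTCondor submit description for a high-throughput
# parameter sweep. All names are illustrative.
universe   = vanilla
executable = score_structure            # hypothetical analysis program
arguments  = structures/input_$(Process).dat
output     = logs/job_$(Process).out
error      = logs/job_$(Process).err
log        = sweep.log
should_transfer_files   = YES
when_to_transfer_output = ON_EXIT
queue 1000
```

The `queue 1000` line asks Condor to schedule 1,000 independent instances of the job, each with its own `$(Process)` number -- exactly the kind of embarrassingly parallel workload that thrives on opportunistically harvested cycles.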
"The throughput is much higher there than I can get locally because of the large size of the Condor pool at Purdue," Deem says. "Purdue is doing a great service to the scientific community by providing this resource."
The distributed computing resource is available over the TeraGrid, of which Purdue is one of nine resource provider sites. Charlie Catlett, director of the NSF's TeraGrid project, says that it is important to provide a variety of computing resources to researchers.
"High-throughput, or capacity, computing is extremely important to the TeraGrid user community," Catlett says. "Purdue and the Condor team have provided an excellent model for harnessing campus cyberinfrastructure in a way that benefits local users and also serves the national community."
The computers in the Condor pool at Purdue spend roughly 45 percent of their time on their intended purpose, 45 percent running Condor jobs, and are idle the remaining 10 percent of the time.
"This shows that our site can provide significant computing power to the nation without requiring dedicated resources," Song says.
Preston Smith, a systems research engineer for Purdue's Rosen Center, says that Purdue has refined its use of the software by running Condor as a secondary scheduling system on the computers. This puts machines to work whenever they are available, rather than only at set times, such as at night. The primary scheduler for computing jobs at the Rosen Center is the Portable Batch System, or PBS; Purdue uses PBS Pro.
"The thing we do that is unique is that we use Condor in tandem with PBS Pro," Smith says. PBS Pro was developed by Altair Engineering.
Condor and PBS Pro are connected so that they can "talk" to each other before a job is assigned, checking which computers are available. This allows Condor to send a job to a computer whenever it is not in use rather than only at set times, so many more unused computing cycles can be harvested, Smith says.
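The idea Smith describes can be modeled as a simple dispatch check: before handing out a job, the opportunistic scheduler skips any node the primary batch system has claimed. The following Python sketch is purely illustrative of that logic, not Purdue's actual Condor/PBS Pro integration; all names are hypothetical:

```python
# Illustrative model of opportunistic scheduling: secondary (Condor-style)
# jobs land only on nodes the primary scheduler (PBS in the article)
# has not claimed. All names are hypothetical.

def assign_jobs(nodes, jobs):
    """Place each pending job on the first idle node; jobs that
    cannot be placed stay queued for the next pass."""
    assignments = {}
    # Nodes marked "pbs" are busy with primary-scheduler work.
    free = [name for name, state in nodes.items() if state == "idle"]
    for job in jobs:
        if not free:
            break  # no idle nodes left this pass
        assignments[job] = free.pop(0)
    return assignments

# Example: two of four nodes are claimed by the primary scheduler.
nodes = {"n1": "pbs", "n2": "idle", "n3": "idle", "n4": "pbs"}
pending = ["zeolite_001", "zeolite_002", "zeolite_003"]
placed = assign_jobs(nodes, pending)
# Only the two idle nodes receive work; the third job waits in queue.
```

In the real systems, of course, Condor's matchmaking and PBS Pro's node state are far richer than this, but the core benefit is the same: any moment of idleness becomes usable capacity instead of waiting for a fixed nightly window.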
Livny says that he hopes Condor usage increases at other universities and that the now-wasted cycles can be put to good use.
"Other campuses should follow Purdue's leadership," Livny says. "I believe this is the right way for us to move forward, get organized and get resources together, and then go out on the national level and share resources with other institutions."
Purdue's Rosen Center for Advanced Computing publishes a daily graph showing Condor usage.
Source: Purdue University, Steve Tally