June 29, 2009
"We've come a long way," said John Towns, chair of TeraGrid's leadership group, the TeraGrid Forum, describing "The State of the TeraGrid" on Wednesday afternoon. Into its fifth year as a production resource, TeraGrid remains what it set out to be: deep, wide and open.
These three simple words have endured as shorthand for TeraGrid's enabling vision: deep — to provide powerful computational resources that enable research that can't otherwise be accomplished; wide — to grow the community of computational science and make the resources easily accessible; open — to connect with new resources and institutions.
Towns noted that these high-level objectives guide TeraGrid's annual planning, which includes extensive input from user communities, and review from TeraGrid's Science Advisory Board (SAB), which in its most recent report (April 2009) said, among other things: "TG has been instrumental in the discovery of new science that has required the most advanced hardware capabilities as well as the human expertise to utilize those capabilities effectively."
It's been helpful, said Towns, to think of TeraGrid as a "social experiment." An organization that brings together 11 computational research centers across the country as resource providers, and that serves researchers across the diverse spectrum of NSF-supported work, is, to say the least, unique, and its management structure has evolved as the organization has established its staying power. The SAB review commented on the effectiveness of TeraGrid management in gluing together these diverse entities.
By quantified measures, TeraGrid has grown significantly over the past year as new NSF-funded resources — notably the Ranger and Kraken systems — have come online. This is illustrated dramatically with the statistic that during the last quarter of 2008 TeraGrid delivered more computer cycles than during all of 2007. This sharp growth in usage is accompanied by continued growth in the number of new users, and is reflective of changes that have streamlined the allocations process.
With the expansion of computational capacity, data transfer stands out as a challenge, as Towns acknowledged during the question period. New technology plans include further work toward a wide-area global file system, with Lustre-WAN becoming the focus of TeraGrid's effort in this area, the goal being a single file system accessible from all TeraGrid resources.
by Michael Schneider, Pittsburgh Supercomputing Center
Posted by Debbie Walsh - June 29, 2009 @ 10:25 AM, Pacific Daylight Time