November 24, 2006
The Pittsburgh Supercomputing Center (PSC) has announced that it will more than double the capability of its 10-teraflop Cray XT3 system, BigBen, to 21.5 teraflops, an increase that improves the ability of U.S. scientists and engineers to address the most demanding large-scale computational science projects.
PSC will replace the current processors (AMD Opteron, 2.4 GHz) of the 2,090-processor BigBen system with AMD's top-end dual-core (2.6 GHz) Opteron chip, doubling the processor count to 4,180, with a corresponding boost in peak performance, and also doubling memory (from two to four terabytes). PSC officials say the upgrade will be complete by the end of 2006.
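The quoted 21.5-teraflop peak can be checked with back-of-the-envelope arithmetic. A minimal sketch, assuming each Opteron core of that generation retires two double-precision floating-point operations per clock cycle (a typical figure, not stated in the article):

```python
# Sanity check of the quoted 21.5-teraflop peak for the upgraded BigBen.
cores = 4180              # 2,090 sockets x 2 cores after the dual-core upgrade
clock_hz = 2.6e9          # 2.6 GHz Opteron clock
flops_per_cycle = 2       # assumed FP ops per core per cycle (not in the article)
peak_tflops = cores * clock_hz * flops_per_cycle / 1e12
print(f"Estimated peak: {peak_tflops:.1f} teraflops")  # ~21.7, consistent with 21.5
```

The small gap between the ~21.7-teraflop estimate and the quoted 21.5 likely reflects rounding in the published figure.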
BigBen was the first Cray XT3 system to become available worldwide and became a production resource of the National Science Foundation TeraGrid in October 2005. It is the only XT3 available to NSF-supported researchers and is currently the lead performer among "tightly-coupled" architectures on the TeraGrid. PSC expects this upgrade to significantly boost the TeraGrid's ability to support "capability" computing -- the most demanding, large-scale scientific applications.
"The Cray XT3 has proven itself as a massively parallel scientific platform of exceptional capability," said PSC scientific directors Michael Levine and Ralph Roskies in a joint statement. "In the course of a year since becoming a production resource on the TeraGrid, this new system has made possible a number of remarkable achievements. We look forward to new insights into important problems that scientists will produce as a result of this upgrade."
More than sheer processor speed, BigBen's primary technological advance has been its high inter-processor bandwidth, the speed at which processors share information. This is a large advantage for projects that demand hundreds or thousands of processors working together. Because of this capability, over the past year BigBen has demonstrated performance 10 times or more better than prior tightly-coupled systems on a number of applications. The same capability has made BigBen a champion at "scaling" -- the ability to use a large number of processors without seriously reducing per-processor performance.
Several research groups, including Klaus Schulten's group at the University of Illinois, Urbana-Champaign and Michael Klein's group at the University of Pennsylvania, have found that BigBen scales to twice as many processors as was possible before, an improvement that, along with faster processors, represents a big gain in capability and has led to many research successes.
Researchers with large-scale parallel projects quickly caught on to BigBen's advantages. Over its first year as a production resource, half of BigBen's usage has been for projects that use 1,024 processors or more, and at the last national allocation meeting, it was the TeraGrid's most "oversubscribed" resource.
Computation at this scale of performance, 20 teraflops, means that if every person on Earth -- about 6.5 billion people -- held a calculator and did one calculation per second, all of them together would still be 3,000 times slower than the upgraded BigBen.
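The calculator comparison can be verified directly from the figures in the article:

```python
# Check the article's claim: how much faster is a 20-teraflop machine than
# 6.5 billion people each doing one calculation per second?
peak_flops = 20e12        # upgraded BigBen, using the article's round 20-teraflop figure
population = 6.5e9        # world population cited in the article, one calc/sec each
ratio = peak_flops / population
print(f"BigBen is ~{ratio:,.0f} times faster")  # ~3,077, i.e. roughly 3,000x
```

The ratio works out to about 3,077, matching the article's rounded "3,000 times slower" claim.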