June 24, 2013
Thanks to PSC personnel — chiefly Joseph Lappa, Principal Network Design Engineer, and Kathy Benninger, Network Performance Engineer — the National Science Foundation’s network of supercomputing sites now has the “pipe capacity” it will need to keep pace with the Big Data era.
XSEDE, the National Science Foundation’s U.S.-wide network of high performance computing centers that includes PSC, has migrated its data network to Internet2, a vastly higher-capacity system than the previous carrier. The improved network will enable sites to achieve connection rates of up to 100 gigabits per second (100 GE) — ten times faster than previously possible. The architecture of the new system will also enable upgrades that speed the movement of data through it.
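To put the tenfold jump in perspective, here is a hypothetical back-of-envelope calculation (not from the article) of the ideal time to move one petabyte at the old 10 GE and new 100 GE line rates, ignoring protocol overhead and real-world throughput limits:

```python
def transfer_hours(bytes_to_move: float, gbps: float) -> float:
    """Ideal transfer time in hours at the given line rate."""
    bits = bytes_to_move * 8          # convert bytes to bits
    seconds = bits / (gbps * 1e9)     # line rate in bits per second
    return seconds / 3600

PETABYTE = 1e15  # bytes

print(f"1 PB at  10 Gbps: {transfer_hours(PETABYTE, 10):.1f} hours")
print(f"1 PB at 100 Gbps: {transfer_hours(PETABYTE, 100):.1f} hours")
# → 1 PB at  10 Gbps: 222.2 hours
# → 1 PB at 100 Gbps: 22.2 hours
```

A transfer that would tie up a 10 GE link for more than nine days completes in under a day at 100 GE, which is the difference the "speeds and feeds" discussion below is about.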
As part of the Internet2 migration, Lappa has taken on new responsibilities for the XSEDE network. Newly appointed as XSEDE’s operations networking manager, he will be XSEDE’s main contact with Internet2. In this role, he and his team will monitor the performance of the new network, oversee details of transitioning sites to 100 GE, assist with campus bridging and help Internet2’s programmers and service representatives optimize and tailor the network to XSEDE and its users’ needs.
The approaching bottleneck
In 2006, Senator Ted Stevens made the mistake of referring to the Internet as “a series of tubes.” He instantly became the butt of jokes about a guy who grew up in a time when people communicated via post, in cursive script, trying to make sense of an email world. But to be fair, it isn’t such a bad metaphor.
Information — data — is as critical to our economy and society as fresh drinking water is to our homes. Like the plumbing running through our houses, the Internet transports data through “pipes” that are limited both by their size and by the capacity their “faucets” can deliver.
Users at XSEDE sites employ some of the largest, fastest computers in the world to generate vast volumes of data. Moving those data between researchers, the supercomputers and storage sites is no small mission. To accomplish that job, XSEDE originally built what was then one of the highest-capacity, most reliable networks in the world.
“Advanced networking is critical … to support the researchers and educators who are making innovative use of our … resources,” says John Towns, XSEDE project director, noting that XSEDE supplies about 8,000 users with 17 supercomputers, data storage and management tools and networking resources.
In the Information Age, though, technology ages quickly. As the XSEDE network and its demands grew, it began to approach the limits of its infrastructure: in particular, a potential bottleneck between XSEDE sites in Denver and Chicago loomed large.
“As far as the technical reasons for migrating to Internet2, it was the ‘speeds and feeds’ problem,” Lappa says. A factory, for example, can perform an operation on a product quickly (speed). But if it can’t then move the next product up the line (feed) fast enough, that speed is wasted. Similarly, the blinding speed of XSEDE’s computing machines risked being undercut by the growing difficulty of getting data into and out of them.
Unclogging the pipes
Internet2’s 100 GE backbone proved to be the solution to the problem, Benninger says. “With 100 GE, there is a clearer path to allow us to operate.”
While not all the sites will initially have 100 GE connections to the new backbone, she adds, the system will have room to grow to meet the next three years’ needs. Currently, Indiana University and Purdue University share a 100 GE connection, with a number of other sites planning to upgrade over the next several years.
In addition to supplying the leadership for the migration process, PSC also served as one of the first sites on the new network, testing out and helping Internet2 improve and customize the system to serve XSEDE’s needs.
Internet2’s architecture offers a big plus in terms of managing data flow with what’s known as “dynamic provisioning capability.” If a particular network path between two sites is congested with large data flows, a network engineer can establish a virtual local area network (VLAN) to route additional data transfers over an alternate path.
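The article doesn’t describe the provisioning mechanics, but the basic idea of steering traffic over a VLAN on an alternate path can be sketched with standard Linux iproute2 commands. The interface names, VLAN ID, and addresses below are illustrative assumptions, not details from XSEDE’s network:

```shell
# Hypothetical sketch of VLAN-based rerouting with Linux iproute2.
# Interface names, VLAN ID, and addresses are illustrative only.

# Create VLAN 300 on physical interface eth1 (the alternate path)
ip link add link eth1 name eth1.300 type vlan id 300
ip addr add 10.30.0.1/24 dev eth1.300
ip link set eth1.300 up

# Steer traffic for the remote site's subnet over the VLAN,
# leaving the congested default path untouched for other flows
ip route add 192.0.2.0/24 via 10.30.0.2 dev eth1.300
```

In a production research network this kind of circuit would be set up end-to-end by the network engineers rather than on a single host, but the principle is the same: a tagged virtual path carries the bulk flows while everything else keeps its usual route.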
In addition to optimizing the network and helping sites connect with the backbone or upgrade to 100 GE, Benninger and Lappa will support efforts by a number of PSC and XSEDE staff to add new functions that take advantage of the higher bandwidth.
Pittsburgh Supercomputing Center (http://www.psc.edu) is a joint effort of Carnegie Mellon University and the University of Pittsburgh together with Westinghouse Electric Company. Established in 1986, PSC is supported by several federal agencies and private industry, and is a major partner in the National Science Foundation XSEDE program.