September 05, 2013
The supercomputing network that provides a big data pipeline between some of the most notable supercomputing centers in the U.S. is in the process of a throughput boost thanks to an Internet2 upgrade. The network, which serves XSEDE (the Extreme Science and Engineering Discovery Environment) and previously ran over a single 10 gigabit-per-second backbone, is now connected through Internet2's 100 gigabit-per-second national network.
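To put the tenfold backbone jump in concrete terms, a short illustrative calculation (not from the article itself) shows how long a bulk transfer takes at each line rate, ignoring protocol overhead and assuming the link is fully dedicated to the transfer:

```python
def transfer_time_seconds(payload_bytes: float, line_rate_gbps: float) -> float:
    """Idealized wire time for a transfer: bytes * 8 bits / (Gbps * 1e9 bits/s)."""
    return payload_bytes * 8 / (line_rate_gbps * 1e9)

# Moving 1 TB (10**12 bytes) of simulation output between sites:
old = transfer_time_seconds(1e12, 10)    # 800.0 s (~13 minutes) on the old backbone
new = transfer_time_seconds(1e12, 100)   # 80.0 s on the upgraded Internet2 link
print(f"10 Gbps: {old:.0f} s, 100 Gbps: {new:.0f} s")
```

Real transfers see lower effective throughput from TCP dynamics, disk I/O, and shared links, so these figures are an upper bound on what the raw line rate allows.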
The upgrade is aimed at keeping pace with discovery in the rising age of big data, said Victor Hazelwood, Deputy Director of Operations for XSEDE in a recent article. “Eliminating XSEDEnet’s previous, single backbone, we’ve improved our overall bandwidth between sites – and, now with Internet2, we’re part of a national infrastructure,” he said.
The XSEDE network, a project of the US National Science Foundation, is said to serve over 8,000 users, providing them with access to 17 supercomputers across the United States. Now, through $62.5 million in funding from the National Telecommunications and Information Administration for the U.S. Unified Community Anchor Network (U.S. UCAN) program, users on the supercomputing network will benefit from far more bountiful throughput.
Beyond the immensely increased backbone bandwidth, users on the XSEDE network will gain access to Internet2's software-defined networking (SDN)-ready infrastructure. "This will allow us to provide SDN as a service," Hazelwood told the Science World Report, adding that research and proposals for equipment at various test sites to take advantage of the network upgrade have already begun.
According to the report, the SDN-capable architecture and tools like OpenFlow will allow XSEDE network engineers to manage and schedule bandwidth on demand, including for wide-area network applications like XWFS and extreme point-to-point data transfers between sites. Additionally, XSEDE engineers are reportedly researching the provisioning of virtual network services, which would let engineers define network services between specific points on the national grid through the SDN tools.
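The idea of scheduling bandwidth on demand can be illustrated with a toy admission-control sketch. This is purely hypothetical Python, not XSEDE's or Internet2's actual tooling: a controller tracks reservations on a single 100 Gbps link and admits a new flow only if capacity holds for its entire time window.

```python
from dataclasses import dataclass, field

@dataclass
class Reservation:
    flow_id: str
    gbps: float
    start: float  # start time, in arbitrary schedule units
    end: float    # end time (exclusive)

@dataclass
class BandwidthScheduler:
    """Toy on-demand bandwidth scheduler for one fixed-capacity link."""
    capacity_gbps: float = 100.0
    reservations: list = field(default_factory=list)

    def _load_at(self, t: float) -> float:
        # Total bandwidth already committed at instant t.
        return sum(r.gbps for r in self.reservations if r.start <= t < r.end)

    def reserve(self, flow_id: str, gbps: float, start: float, end: float) -> bool:
        # Load only changes where some reservation begins, so checking the
        # request's start plus each overlapping start point is sufficient.
        points = [start] + [r.start for r in self.reservations if start < r.start < end]
        if any(self._load_at(t) + gbps > self.capacity_gbps for t in points):
            return False  # would oversubscribe the link somewhere in the window
        self.reservations.append(Reservation(flow_id, gbps, start, end))
        return True

sched = BandwidthScheduler(capacity_gbps=100.0)
print(sched.reserve("xwfs-sync", 60.0, 0, 100))    # True: link is empty
print(sched.reserve("bulk-move", 40.0, 0, 100))    # True: exactly fills the link
print(sched.reserve("extra", 10.0, 50, 60))        # False: window is saturated
print(sched.reserve("later", 100.0, 100, 200))     # True: prior flows have ended
```

A production SDN controller would push the admitted reservations down to switches as flow rules (e.g., via OpenFlow) rather than just recording them, but the admission logic above captures the scheduling decision the article describes.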
“We are now set up for future work with researchers and scientists who want to move extreme amounts of data,” Hazelwood said. “But it’s just a stepping stone, as we work on evaluating SDN, and getting all end users connected at 100 gigabits per second to the new Internet2 100 Gbps backbone.”