December 01, 2006
A team of experts from the University of Illinois at Chicago's National Center for Data Mining (NCDM), Northwestern University and Johns Hopkins University won the 7th annual Bandwidth Challenge held November 16th in Tampa, FL at SC06, the international conference for high performance computing, networking and storage.
They transported the 1.3 TB Sloan Digital Sky Survey (SDSS) data set from the University of Illinois at Chicago to the SC06 show floor in Tampa at a sustained data transfer rate of 8 Gb/s over a 10 Gb/s link, with a peak rate of 9.18 Gb/s.
This was a major new milestone, demonstrating that it is now practical for working scientists to transfer large data sets from disk to disk over long distances on 10 Gb/s networks.
Until recently, the easiest way to transport data sets of this size was by using Federal Express, but today's high speed networks and emerging network protocols can now be used to move these massive data sets efficiently.
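A back-of-the-envelope calculation shows why the network now wins out over shipping disks. The sketch below checks the figures quoted above, assuming decimal units (1 TB = 10^12 bytes, 1 Gb/s = 10^9 bits per second):

```python
# Transfer time for the 1.3 TB SDSS data set at the sustained 8 Gb/s rate.
# Assumes decimal units: 1 TB = 1e12 bytes, 1 Gb/s = 1e9 bits/s.
data_bits = 1.3e12 * 8          # 1.3 TB expressed in bits
rate = 8e9                      # sustained rate in bits per second
seconds = data_bits / rate
print(round(seconds / 60, 1))   # prints 21.7 (minutes)
```

At the sustained 8 Gb/s rate, the full 1.3 TB data set moves in roughly 22 minutes, versus the overnight-at-best turnaround of a courier.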
"Not too long ago it took days to move around such terabyte datasets. Moving data at such speeds opens up whole new ways of approaching scientific problems. Our collaboration has been a wonderful example of how computer scientists, network experts and astronomers work together to solve real-life problems that will impact our whole discipline," says Alexander Szalay, Alumni Centennial Professor of Physics and Astronomy at the Johns Hopkins University.
The data set was the BESTDR5 catalog from the Sloan Digital Sky Survey; compressed, it consisted of 60 files of about 23 GB each, totaling 1.3 TB.
The technology that made this possible was an open source high performance network transport protocol called UDT that the NCDM developed several years ago. Since then it has been downloaded over 8000 times and is being deployed in a variety of research and business settings.
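The core idea behind UDT is to layer reliability and rate control on top of UDP, which avoids TCP's poor throughput over high-bandwidth, high-latency links. The toy sketch below illustrates only the general concept of reliable transfer over UDP using a simple stop-and-wait scheme; it is not UDT's actual design, which uses selective acknowledgments, negative acknowledgments, and rate-based congestion control.

```python
# Conceptual illustration (assumption): reliability layered on UDP.
# This toy stop-and-wait protocol is NOT UDT's real algorithm; UDT uses
# selective ACKs, NAKs, and rate-based congestion control instead.
import socket
import threading

CHUNK = 1024  # payload bytes per datagram

def receive(sock, total_chunks, out):
    """Collect chunks in order, ACKing each sequence number seen."""
    expected = 0
    while expected < total_chunks:
        data, addr = sock.recvfrom(CHUNK + 8)
        seq = int.from_bytes(data[:8], "big")
        if seq == expected:
            out.append(data[8:])
            expected += 1
        sock.sendto(seq.to_bytes(8, "big"), addr)  # ACK (covers duplicates too)

def send(sock, dest, payload):
    """Send numbered chunks, retransmitting until each one is ACKed."""
    chunks = [payload[i:i + CHUNK] for i in range(0, len(payload), CHUNK)]
    sock.settimeout(0.5)
    for seq, chunk in enumerate(chunks):
        while True:
            sock.sendto(seq.to_bytes(8, "big") + chunk, dest)
            try:
                ack, _ = sock.recvfrom(8)
                if int.from_bytes(ack, "big") == seq:
                    break
            except socket.timeout:
                continue  # retransmit on presumed loss

payload = b"x" * 10_000
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))
out = []
n_chunks = (len(payload) + CHUNK - 1) // CHUNK
t = threading.Thread(target=receive, args=(recv_sock, n_chunks, out))
t.start()
send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send(send_sock, recv_sock.getsockname(), payload)
t.join()
```

Stop-and-wait stalls for a full round trip per chunk, which is exactly the throughput ceiling that UDT's windowed, rate-controlled design is built to avoid on long fat networks.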
The technology that made this easy was an open source peer-to-peer storage system called SECTOR that NCDM recently developed. SECTOR is built using UDT and is designed to distribute large e-science data sets such as the Sloan Digital Sky Survey.
"Winning this year's Bandwidth Challenge graphically demonstrates that it is now practical for the working scientist to access terabyte size data sets from anywhere in the world. All it takes are modern high performance networks and new network protocols, such as UDT," said Robert Grossman, Director of the National Center for Data Mining at the University of Illinois at Chicago and Managing Partner of Open Data Group.
The network that made this feasible was a 10 Gb/s network, PacketNet, provided by the National LambdaRail (NLR).
"By using the National LambdaRail and its member regional optical networks, scientists can access terabyte size data sets in minutes instead of days. This is a great example of what you can do with member-owned infrastructure. We are just beginning to see the implications of this," said Tom West, NLR's president and CEO.
In the past, UDT and other technologies could move data at high speeds from memory to memory, but moving data from disk to disk over long distances required additional protocols and services. With SECTOR, transporting large data sets disk-to-disk is now as straightforward as transporting them memory-to-memory.
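One reason disk-to-disk transfer is harder than memory-to-memory is keeping the network pipe full while disks seek. A common remedy is pipelining: a reader thread fills a bounded queue while a sender thread drains it, overlapping disk I/O with network transmission. The sketch below illustrates that general pattern only; it is an assumption for illustration, not SECTOR's actual implementation.

```python
# Conceptual pipeline: overlap "disk reads" with "network sends" via a
# bounded queue. Illustration only, not SECTOR's actual design.
import io
import queue
import threading

CHUNK = 64 * 1024  # bytes read/sent per step

def reader(src, q):
    """Read fixed-size chunks from the source and enqueue them."""
    while True:
        block = src.read(CHUNK)
        q.put(block)          # bounded queue applies backpressure
        if not block:
            break             # empty bytes marks end of stream

def sender(q, sink):
    """Dequeue chunks and write them out (stands in for a network send)."""
    while True:
        block = q.get()
        if not block:
            break
        sink.write(block)

data = b"sdss" * 100_000                 # stand-in for a catalog file on disk
src, sink = io.BytesIO(data), io.BytesIO()
q = queue.Queue(maxsize=8)               # cap the number of in-flight chunks
threads = [threading.Thread(target=reader, args=(src, q)),
           threading.Thread(target=sender, args=(q, sink))]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The bounded queue is the key design choice: it lets the disk read ahead of the network without unbounded buffering, so neither side sits idle waiting for the other.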
"This demonstration showcases new techniques for data analysis by closely integrating application processes with leading edge advanced communication technologies. This innovation is significant because it results in both high performance data transport and in high quality analytic results," says Joe Mambretti, Director of the International Center for Advanced Internet Research at Northwestern University.
The winning team consisted of Yunhong Gu, Robert Grossman, Michal Sabala, David Hanley and Shirley Connelly from the National Center for Data Mining at the University of Illinois at Chicago; Alex Szalay, Ani Thakar, Jan vandenBerg, and Alainna Wonders from Johns Hopkins University; and Joe Mambretti from Northwestern University.
For the Bandwidth Challenge, Force10 Networks loaned the NCDM an E600 switch to use on the show floor; Extreme Networks provided an 8810 switch to use in Chicago; and Data Direct Networks provided an S2A9550 RAID controller and 80 disks to use on the show floor in Tampa.
The technology was tested using the Teraflow Network, which is managed by the Consortium for Data Analysis Research (CDAR).
For more details, see the web site: sdss.ncdm.uic.edu.
Source: National Center for Data Mining, University of Illinois at Chicago