July 07, 2006
Summer has barely begun and some in the HPC community are already looking toward November's 2006 Supercomputing Conference (SC06), which takes place in Tampa, Florida, this year. In fact, the effort to develop the conference network, SCinet, actually began at the end of 2005 and will continue throughout 2006, right up to the beginning of the conference.
SCinet supports all of the conference's network requirements. And as one might imagine at a supercomputing event, it is not just a vanilla LAN. It has to provide a typical commodity network for attendees, as well as a high-speed, state-of-the-art communications test-bed for various HPC research exhibits, demonstrations and event programs.
Over 100 volunteers from a large partnership of academic institutions, national labs, supercomputing centers, network hardware vendors, and telecommunications carriers are working together to design and build SCinet. The hardware vendors and carriers donate much of the equipment and services needed to build the infrastructure. While the planning typically begins more than a year in advance of each conference, the actual installation is done in the week just prior to the event.
"In that week, basically everything is built," says Dennis Duke, chairman of SCinet for 2006. "All of the equipment is delivered, 60 to 70 miles of fiber are run, and all of the wide area network connections are made."
Although most of the installation is done just before the event, Duke admits that at last year's conference, the wide area network setup was actually started a month in advance because there were so many WAN connections. And it looks like they'll be doing the same thing this year. But the vast majority of SCinet is installed in just seven days. Duke says the only reason this is possible is that several dozen of the best network people in the world are there to build it -- along with the support of all the vendors.
The vendors give both their time and their equipment to SCinet. According to Duke, the value of the donated equipment for each of the last two years was around $25 million. They know this because all the hardware has to be insured.
SCinet is actually composed of three networks:
1. The commodity network. This is the conference's production network that is intended to be extremely stable and reliable and is similar to a network found at any research institution. It includes free wireless access for all the attendees and Gigabit Ethernet drops to all the booths and meeting rooms.
2. The high-performance network. This network is used to support high-performance demonstrations, the HPC Bandwidth Challenge, and other research exhibits. It will deliver multiple 10 Gigabit Ethernet links.
3. Xnet. This represents the conference's bleeding-edge network. It will showcase experimental next-generation technology from vendors with equipment that is not quite ready for prime time. Unlike the commodity and high-performance networks, this infrastructure is not intended to be stable.
The Xnet is always the wild card in SCinet. No one knows what it's going to look like until fairly close to the conference. Parts of it may be connected to the commodity and high-performance networks.
"The rule is that it can't do anything that would endanger the stability of the other networks," says Duke. "Apart from that, they can do anything they want. Last year, they set up an InfiniBand network that actually carried some wide area network traffic -- very dramatic and successful."
The HPC Bandwidth Challenge always seems to attract a lot of attention. This year they plan to do something a little different. Rather than focusing on pure speed, the emphasis in Tampa will be on production-level networking.
"So we're not trying to create a record for how much bandwidth is used as much as we're trying to create a record for how much real-world work gets done, that is, production network capability," explains Duke. "It's sort of like the difference between peak performance and sustained performance on real applications."
Duke says they are planning to have ten to twelve 10-Gig links into the Tampa Convention Center. In the past, each team typically had dedicated 10-Gig lambdas just for the Bandwidth Challenge. This year, each team will have a single 10 Gigabit link. So the teams are being encouraged to use their production networks at their own institutions, which means that in many cases upgrades will be needed. But it also means that once they are finished, they will have that high-performance backbone in place. Duke is hoping that the Bandwidth Challenge evolves into something that results in upgrading network capability all over the country.
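As a rough back-of-the-envelope illustration of those numbers (a sketch only; the variable names are mine, not SCinet's), ten to twelve 10-Gigabit links work out to roughly 100 to 120 Gbps of aggregate WAN capacity into the convention center:

```python
# Aggregate WAN capacity implied by the article's figures:
# ten to twelve 10-Gigabit Ethernet links into the Tampa Convention Center.

LINK_GBPS = 10          # capacity of each 10-Gig link
low_count, high_count = 10, 12  # planned range of link counts

low_total = low_count * LINK_GBPS    # 100 Gbps
high_total = high_count * LINK_GBPS  # 120 Gbps

print(f"Aggregate WAN capacity: {low_total}-{high_total} Gbps")
```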
The commodity network will also take on a new dimension this year. A high-performance wireless link will be added to accommodate the spillover of SC06 programs into buildings outside the Convention Center. For example, the Education Program will be hosted at the nearby Marriott Hotel, located about 200 yards from the Convention Center. SCinet will be linked to the hotel by a 2.6 Gigabit-per-second wireless beam from a company called GigaBeam. This state-of-the-art wireless technology represents a major leap in speed for SCinet. By comparison, last year's wireless link was a paltry 50 Megabits per second.
"That's a first for us and the conference," says Duke.
One of the biggest problems every year is upgrading the local infrastructure to be able to accommodate all the bandwidth required by the conference. Most cities do not have the type of network connectivity that is needed for something like SCinet.
"There's a geographical challenge every year," says Duke. "Wherever we go, they never have the local infrastructure to support what we need, because we're effectively building one of the most powerful networks on the face of the earth. A good bit of effort during the year is to repair that problem."
This year in Tampa it will be no different. The wide area network connection for the city comes in at two points, about 12 miles from the downtown area. Both Level 3 and Qwest will be spending a lot of their own resources to bring the fiber downtown to within six blocks of the Tampa Convention Center. Verizon has also joined the SCinet effort this year as a major partner, installing numerous fiber pairs from the downtown POP into the Convention Center. According to Duke, SCinet precipitates this kind of major upgrade every year.
Even within the convention centers, there are challenges. Most centers are geared for "normal" conferences, and their local networks are rather limited in performance. So for the past several years, the SCinet team has also had to rebuild the fiber infrastructure within the centers themselves. As SC has traveled around the country -- Dallas, Denver, Baltimore, Phoenix, Pittsburgh and Seattle -- it has left behind some very well-connected facilities. But the individuals who do all this work get something as well.
"The people who do it just love it," says Duke. "It's just such a challenge for them. That's why they volunteer."