Woven Launches New 10 GbE Switch


Woven Systems has expanded its Ethernet product lineup with a new 10 Gigabit Ethernet (GbE) top-of-rack switch. The 24-port TRX 200 offers 10 gigabits per second (Gbps) wirespeed performance on each port and InfiniBand-like latencies. The TRX 200 joins Woven's other two offerings, the 48-port Gigabit Ethernet TRX 100 top-of-rack switch and the 144-port 10 GbE "Fabric Switch" for the network core.

Like its Woven brethren, the TRX 200 is aimed at HPC and Web services -- two markets where bandwidth and latency requirements exceed those of the standard enterprise setup. Leading-edge interconnect performance has been the norm in HPC environments for some time, but with the advent of the Web services industry, a whole new market is developing for high-bandwidth, low-latency infrastructure. In this space, search engines and other applications that perform Web page indexing must often operate under soft real-time constraints, so node-to-node latencies must be kept to a minimum.

Bandwidth can always be overprovisioned with extra switches, but that doesn't help the latency picture. Woven has specifically designed its products for InfiniBand-like latencies. Instead of the store-and-forward switching used in standard Ethernet gear, Woven employs cut-through switching, which begins forwarding a frame as soon as its header has been read rather than waiting for the entire frame to arrive. The company claims latencies of 1.6µs for its flagship EFX 1000 switch.
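To see why cut-through matters, consider the serialization delay a store-and-forward switch pays at every hop: it must receive the whole frame before it can start transmitting it. The back-of-the-envelope sketch below uses illustrative numbers only -- the internal switch latency is an assumption, not a Woven specification.

```python
# Rough per-hop latency comparison: store-and-forward vs. cut-through.
# Illustrative numbers only -- not vendor specifications.

LINK_RATE_BPS = 10e9     # 10 GbE
FRAME_BYTES = 1500       # a full-size Ethernet frame
HEADER_BYTES = 14        # Ethernet header read before cut-through forwarding

serialization_us = FRAME_BYTES * 8 / LINK_RATE_BPS * 1e6  # ~1.2 us per hop
header_us = HEADER_BYTES * 8 / LINK_RATE_BPS * 1e6        # ~0.01 us per hop

# A store-and-forward switch pays the full serialization delay at each hop,
# plus its internal switching time; cut-through pays only the header time
# plus internal switching time.
switch_internal_us = 0.5  # assumed internal pipeline latency

print(f"store-and-forward per hop: {serialization_us + switch_internal_us:.2f} us")
print(f"cut-through per hop:       {header_us + switch_internal_us:.2f} us")
```

Across the several hops of a typical fabric, that roughly 1.2µs per-hop serialization penalty compounds quickly, which is why cut-through designs can approach InfiniBand-like latencies.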

Woven's big story with the new TRX 200 top-of-rack switch is its pricing. At less than $500 per wirespeed 10 Gbps port ($11,995 for a single unit), Woven is pushing back against Arastra, its closest competitor in high-performance 10 GbE switching. When Arastra launched its line of Ethernet gear last year, it quoted $400 per port, but it's not clear whether that pricing applies across its entire product line. Pitting the new TRX 200 against Arastra's 24-port 7124S would be the real apples-to-apples comparison, since both products claim to offer bidirectional wirespeed performance (480 Gbps aggregate) plus low latency.
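The per-port and aggregate figures quoted above fall out of simple arithmetic; here is a quick sanity check using the numbers in this article (the factor of two reflects full-duplex operation in both directions):

```python
# Sanity-check the quoted figures for a 24-port 10 GbE switch.

ports = 24
unit_price = 11_995      # single-unit list price quoted for the TRX 200
port_rate_gbps = 10

price_per_port = unit_price / ports          # just under $500
aggregate_gbps = ports * port_rate_gbps * 2  # x2: full duplex, both directions

print(f"price per port:      ${price_per_port:,.0f}")  # ~$500
print(f"aggregate bandwidth: {aggregate_gbps} Gbps")   # 480 Gbps
```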

The closest Cisco gear is probably the 4900M, which is a top-of-rack switch for users transitioning from GbE to 10 GbE. But at a maximum aggregate throughput of just 320 Gbps, and latencies in the 2.6µs range (according to eWeek testing), the Cisco switch is really not in the same performance ballpark as the Woven and Arastra offerings. Also, with a price that starts at $22,000, the 4900M is at least twice as expensive as its upstart competition.*

The roadblocks remaining for the Woven offerings, and for 10 GbE switches in general, are price (compared to standard Gigabit Ethernet) and performance (compared to InfiniBand). But if you are an Ethernet vendor, time may be on your side.

Many in the industry are predicting that by 2010, 10 GbE will move onto the server motherboard en masse, reducing the cost of a connection from about $300 or $400 down to around $22. (The real cost to the buyer is a bit less than that, since motherboard manufacturers will be replacing the older GbE interfaces.) In that same year, the total cost of a 10 GbE connection is expected to be just twice that of a GbE connection. In 2002, a 2x cost differential proved to be the inflection point for the transition from Fast Ethernet to GbE. "That will usher in a much bigger ramp for 10 GigE servers and thus the beginning of a large transformation of the datacenter," predicts Woven VP of marketing Joe Ammirato.

If history does repeat itself, one of the first places we're likely to see the GbE to 10 GbE transition is on the TOP500 list. Even today, 57 percent of the top "supercomputers" are based on GbE. It must be said, though, that in most of these cases the interconnect is not the bottleneck for system performance, or if it is, it's a tolerable one. For loosely coupled, embarrassingly parallel applications, node-to-node communication is needed only intermittently, so higher latencies and lower bandwidth are less of an issue.

For more tightly coupled HPC applications, DDR InfiniBand is now the interconnect of choice. When 10 GbE goes mainstream, the choice becomes more difficult. Ammirato says 10 GbE is catching up to InfiniBand on both performance and cost, even without the benefit of native 10 GbE on the motherboard. Once 10 GbE does land on the motherboard, the interconnect interface becomes essentially free for Ethernet fabrics, while InfiniBand will still require a $300 adapter.

DDR and QDR InfiniBand will still have the raw performance advantage, offering perhaps a half or a third the latency of the best Ethernet solutions and more than twice the bandwidth (QDR signals at 40 Gbps, but because the on-board PCIe interface limits how fast data can be moved, only about 25 Gbps is realized). Masum Mir, Woven's senior product manager, admits that InfiniBand will remain viable, but says affordable 10 GbE solutions will compete with it at the high end. Especially with larger clusters and more variable data traffic patterns, Mir sees Ethernet solutions like Woven's -- with dynamic congestion avoidance and lossless fabric support -- as the more flexible choice.
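The QDR numbers above can be roughly decomposed as follows. The 8b/10b line encoding is standard for InfiniBand of this era, but the exact PCIe host-interface ceiling is an assumption chosen to match the ~25 Gbps figure cited in this article:

```python
# Rough reconstruction of QDR InfiniBand's realized bandwidth.
# 8b/10b encoding overhead is standard for QDR IB; the PCIe
# efficiency figure is an illustrative assumption.

qdr_signal_gbps = 40                       # 4 lanes x 10 Gbps signaling rate
data_rate_gbps = qdr_signal_gbps * 8 / 10  # 8b/10b encoding -> 32 Gbps of data

# The host's PCIe interface (and its protocol overheads) caps what the
# adapter can actually move, landing near the ~25 Gbps cited above.
pcie_ceiling_gbps = 25                     # assumed effective host-side limit

realized_gbps = min(data_rate_gbps, pcie_ceiling_gbps)
print(f"link data rate: {data_rate_gbps:.0f} Gbps, realized: ~{realized_gbps:.0f} Gbps")
```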

Certainly for end users looking at a longer ROI horizon, Ethernet will look less risky. The battle cry of all Ethernet vendors continues to be that Ethernet will prevail. That argument may carry less weight with HPC users, who have come to view InfiniBand as a more mainstream technology with each passing year. And with much of the discussion about 10 GbE still in the future tense, companies like Woven Systems will have to push the technology uphill for the next couple of years.

*Update: A more accurate comparison may be with Cisco's new Nexus 5020, a 40-port 10 GbE switch that offers wirespeed performance and a switch latency of 3.2µs. The 5020 can be expanded to 52 10 GbE ports, yielding an aggregate throughput of 1 Tbps. At around $900 per port, it's roughly twice as expensive as the Woven or Arastra gear, but the Cisco box also comes with support for Fibre Channel over Ethernet and Cisco Data Center Ethernet.
