Battle of the Network Fabrics


Over the last few years, the penetration of InfiniBand into the HPC market has become well established. InfiniBand's high-bandwidth, low-latency connectivity has encouraged its use wherever performance or price/performance is the driving factor, such as in HPC clusters and supercomputers. In the commercial data center, where LAN/WAN connectivity over TCP/IP remains essential, Ethernet is still king. And for storage connectivity, Fibre Channel has become an established fabric. Most tier one OEMs that offer systems across a variety of markets -- including IBM, HP, Sun and Dell -- provide connectivity for all three fabrics.

With the OpenFabrics Alliance supporting both InfiniBand and iWARP (RDMA implemented over TCP, typically on 10 Gigabit Ethernet), open, standards-based software stacks now exist for HPC clusters, data centers and storage systems. But convergence still seems a long way off. InfiniBand and iWARP are based on fundamentally different architectures, representing two distinct approaches to high-performance connectivity.

That's not to say Ethernet and InfiniBand can't mix. With its recently announced ConnectX multi-protocol technology, Mellanox will support both InfiniBand and Ethernet fabrics with a single adapter. This will enable storage and server OEMs to develop systems that support both interconnects with a single piece of hardware. With this move, Mellanox appears to be conceding that 10 GbE will be the interconnect of choice for an important class of systems -- the medium-scale commercial cluster.

With ConnectX, each port can be configured as either a 4X InfiniBand port or a 1 or 10 Gigabit Ethernet port, based on OEM preferences. The InfiniBand port supports single, double and quad data rates (SDR, DDR and QDR; four lanes signaling at 2.5, 5 and 10 Gbps per lane), delivering 10, 20 and 40 Gbps of full duplex bandwidth. The application interfaces supported over either fabric include IP, sockets, MPI, SCSI and iSCSI, plus Fibre Channel over InfiniBand only.
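
As a back-of-the-envelope illustration of where those figures come from (and of the 120 Gbps number that comes up later), the short C sketch below simply multiplies link width by per-lane signaling rate. It is an illustrative calculation only, not vendor code, and the 12X width shown is not a ConnectX configuration.

    /* Illustrative only: InfiniBand link rates are conventionally quoted as
     * link width (lane count) times per-lane signaling rate. */
    #include <stdio.h>

    int main(void)
    {
        const char *rates[] = { "SDR", "DDR", "QDR" };
        const double per_lane_gbps[] = { 2.5, 5.0, 10.0 };  /* signaling rate per lane */
        const int widths[] = { 1, 4, 12 };                   /* 1X, 4X, 12X link widths */

        for (int w = 0; w < 3; w++)
            for (int r = 0; r < 3; r++)
                printf("%2dX %s: %5.1f Gbps\n",
                       widths[w], rates[r], widths[w] * per_lane_gbps[r]);
        return 0;
    }

Running the sketch reproduces the 10/20/40 Gbps figures for a 4X port, and shows that a 12X QDR link works out to the 120 Gbps cited as InfiniBand's longer-term roadmap.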

The first Mellanox ConnectX adapter products are scheduled for availability in Q1 of 2007. These will include both a multi-protocol and an InfiniBand-only (10 and 20 Gbps) offering. A 40 Gbps InfiniBand ConnectX adapter is also scheduled to be delivered when the corresponding switches become available -- probably sometime in 2008.

The multi-protocol architecture will allow compatibility with software based on either the legacy Ethernet networking and storage stacks or the OpenFabrics RDMA software stack. In addition, system software that has been ported to InfiniBand RDMA can now be extended to Ethernet environments, bringing some of the advantages of InfiniBand to Ethernet applications.
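
To make that concrete, the short C sketch below (an illustration only, not Mellanox or OpenFabrics sample code) enumerates RDMA-capable devices through the OpenFabrics verbs API (libibverbs). Applications written against this interface address the adapter the same way whichever fabric sits underneath, which is what allows existing InfiniBand RDMA software to be carried over to Ethernet.

    /* Minimal sketch: list RDMA-capable devices via the OpenFabrics
     * (libibverbs) verbs API. Code written to this interface does not need
     * to know whether the underlying fabric is InfiniBand or Ethernet.
     * Build (assumption): gcc list_devices.c -libverbs */
    #include <stdio.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num_devices = 0;
        struct ibv_device **dev_list = ibv_get_device_list(&num_devices);
        if (!dev_list) {
            perror("ibv_get_device_list");
            return 1;
        }

        for (int i = 0; i < num_devices; i++) {
            struct ibv_context *ctx = ibv_open_device(dev_list[i]);
            if (!ctx)
                continue;

            struct ibv_device_attr attr;
            if (!ibv_query_device(ctx, &attr))
                printf("%s: %d port(s), max MR size %llu\n",
                       ibv_get_device_name(dev_list[i]),
                       attr.phys_port_cnt,
                       (unsigned long long)attr.max_mr_size);

            ibv_close_device(ctx);
        }

        ibv_free_device_list(dev_list);
        return 0;
    }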

"All of the RDMA-capable software solutions that have been proven over InfiniBand can now run over an Ethernet fabric as well," said Thad Omura, vice president of product marketing for Mellanox Technologies. "This is not iWARP, which is implemented over legacy TCP/IP stacks.  What we're doing is leveraging existing InfiniBand RDMA stacks over Ethernet."

Mellanox's decision not to support iWARP was based on a couple of factors. Omura believes the multi-chip solution required for iWARP's TCP offload makes the design too complex and expensive to attract widespread support. In addition, the technology's scalability remains a question. Omura says iWARP silicon would need to be redesigned to reach 40 or 100 Gbps of bandwidth performance.

In contrast, InfiniBand is already architected to support 40 Gbps performance with 1 microsecond latency, and can do so on a single chip. Beyond that, there's a clear path to 120 Gbps InfiniBand within the next few years. But, according to Omura, in commercial data center environments, connectivity to Ethernet (from a LAN/WAN perspective) is often more important than performance or cost.

"Mellanox believes InfiniBand will always deliver the best price/performance for server and storage connectivity," says Omura. "At the same time we see that in enterprise solutions, 10 Gigabit Ethernet will emerge from medium- to low-scale types of application in a clustering environment. Our expertise in high performance connectivity will serve both markets."

While Mellanox passed on iWARP as an Ethernet solution, others are embracing it. So far, NetEffect is the only vendor supporting the full iWARP implementation with its adapter. But the appeal of a standardized RDMA-enabled Ethernet solution will likely draw other companies as well. As Rick Maule, CEO of NetEffect, likes to say, "iWARP has arrived."

Maule believes the world pretty much accepts that Ethernet is the de facto networking fabric. And for storage devices, there's no longer a big performance mismatch between Fibre Channel and Ethernet. According to him, in the storage sector, determining which fabric is preferable is more a matter of economics now.

"The thing that no one has been able to prove is that Ethernet can really do clustering fabrics on par with Myrinet or InfiniBand or whatever -- until now," says Maule. According to him, "Ethernet can now be a true clustering fabric without any apology."

InfiniBand had a head start in delivering 10 Gbps, low-latency performance. But now that 10 GbE iWARP has arrived, Maule believes it makes a compelling alternative. With RDMA technology, Ethernet has become competitive with InfiniBand and Myrinet in both bandwidth and latency.

Maule envisions that the adoption of iWARP as a cluster interconnect will drive broader use of 10 GbE in the data center. While clustering, networking and storage fabrics have evolved separately in the past, he believes that a high-performance Ethernet solution will start to converge them in 2007.

Adoption of iWARP in the storage area will trail clustering, but the requirement for 10 Gbps bandwidth will start to pressure Fibre Channel-based storage. Maule thinks that at some point soon the storage market will have to choose between adopting 10 GbE and moving to 8 Gbps Fibre Channel. For the networking segment, increased aggregate bandwidth requirements and server consolidation will encourage more servers to use 10 GbE. Maule thinks adding iWARP to the data center in any one of these three areas becomes a doorway to assessing the technology for broader adoption in the other two.

The stakes are high. Maule estimates that around 20 million Gigabit Ethernet ports ship each year. He sees each one as an opportunity to upgrade from GbE to 10 GbE. His prediction is that over the next three to five years those upgraded ports will become iWARP ports, not InfiniBand ports.

However, Maule admits that InfiniBand is technologically sound. He should know. NetEffect actually started out as Banderacom Inc., a company founded in 1999 to develop InfiniBand silicon. But Banderacom became disillusioned with the technology when InfiniBand failed to take hold as a new fabric standard. The company was restructured (and renamed) to develop chips based on the emerging iWARP Ethernet standard.

Like many people, Maule thinks that if the industry could have easily adopted InfiniBand, it already would have done so to a much greater degree. He believes that since IT managers already have a large investment in Ethernet technology (in personnel training, software and hardware), they will seek the path of least resistance to improve network performance. Because of this, he's betting that InfiniBand will not be the volume play in the interconnect market.

"We did InfiniBand in a previous part of our life," said Maule, referring to the Banderacom adventure. "The recognition that we got to is that it's not a technology problem; it's an ecosystem and economic problem. Basically the marketplace has been waiting on Ethernet to get its act together and go to the next level."
