Mellanox Rolls Out Next Iteration of ConnectX


This week Mellanox announced a refinement of its ConnectX line with the ConnectX-2 architecture. The latest evolution enhances the company's combination InfiniBand/Ethernet network adapter cards with new features, such as added support for the IEEE DCB standards and enhanced RDMA access, while maintaining the advantages of the previous line for those who need to support multiple protocols with limited server real estate.

The ConnectX-2 family of controller chips and adapter cards comes in a variety of flavors supporting Ethernet, InfiniBand, and (most interestingly) both. The ConnectX-2 EN/ENt cards support 10 Gigabit Ethernet (GbE) with options for CX4, SFP+, and 10GBASE-T connections, while the IB cards support 10/20/40 Gb/s InfiniBand with CX4 or QSFP connections. The favorite children in the family, however, are clearly the ConnectX-2 Virtual Protocol Interface (VPI) cards, which support both 10/20/40 Gb/s InfiniBand and 10 GbE on a single Converged Network Adapter (CNA). Each card sports two ports, one IB and one Ethernet, and comes in either a CX4 variant (both ports) or a QSFP/SFP+ version.

So, why might one want a single card that supports both interconnects? There is a lot of talk about something called convergence, most of which centers on whether everything will eventually end up running on Ethernet. Even if you aren't a datacenter networking person, you have probably heard of Fibre Channel over Ethernet (FCoE), and there are other examples as well. Proponents say that Ethernet is already deployed everywhere and that a single fabric will focus R&D efforts and streamline deployments. Opponents say that one size never really fits all, and that by the time you finish fixing the problems of Ethernet relative to purpose-built protocols (Ethernet is a best-effort protocol with no flow control), you've given back the advantages of convergence in lost performance and added system complexity.

Whichever side you come down on here (if indeed you have a side at all), there is a clear advantage for HPC and cluster builders with the ConnectX-2 family of adapters, and that's in server real estate and cabling. Although many applications will use the ConnectX-2 in either Ethernet or IB mode, the VPI card supports both simultaneously. In the latest TOP500 list, 30 percent of clusters have InfiniBand interconnects, and the VPI card will allow cluster designers to have an IB network for cluster communications and support access to Lustre storage over 10 GbE, or other permutations (an Ethernet control network and an IB network for data communications, and so on). In fact, Lawrence Livermore is using the VPI card in precisely this mode:

"This technology allows us to provide greater high-performance computing resources to researchers in our national security programs by simplifying the design, and lowering the cost and power requirements of our scalable units for scientific simulation clusters," said Mark Seager, assistant department head for advanced technology at Lawrence Livermore National Laboratory. "In addition, these new adapters enable higher Lustre file system performance with greater connection flexibility between the InfiniBand cluster interconnect and our 10 Gigabit Ethernet storage area network."

You could also imagine, for example, provisioning a cluster with two data communications networks, and tailoring the network to the workload.
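
To make the dual-fabric idea concrete, below is a minimal sketch of how a provisioning step might pin each port of a dual-port VPI adapter to a protocol on Linux through the mlx4 driver's per-port sysfs attributes. The PCI address and the exact attribute path are illustrative assumptions rather than Mellanox's documented tooling, so treat it as a sketch of the idea and check the driver documentation for the actual interface.

    /*
     * Hypothetical sketch: set the protocol of each port of a dual-port VPI
     * adapter via the mlx4 driver's sysfs attributes. The PCI address and
     * the attribute names are assumptions for illustration only.
     */
    #include <stdio.h>

    static int set_port_type(const char *pci_addr, int port, const char *type)
    {
        char path[256];

        /* e.g. /sys/bus/pci/devices/0000:06:00.0/mlx4_port1 */
        snprintf(path, sizeof(path), "/sys/bus/pci/devices/%s/mlx4_port%d",
                 pci_addr, port);

        FILE *f = fopen(path, "w");
        if (!f) {
            perror(path);
            return -1;
        }
        fprintf(f, "%s\n", type);  /* typically "ib", "eth", or "auto" */
        fclose(f);
        return 0;
    }

    int main(void)
    {
        const char *pci = "0000:06:00.0";  /* placeholder PCI address */

        /* Port 1 carries InfiniBand for cluster/MPI traffic; port 2 runs
         * 10 GbE toward the Lustre/storage network. */
        set_port_type(pci, 1, "ib");
        set_port_type(pci, 2, "eth");
        return 0;
    }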

Among the improvements in this version of the product family are support for IEEE's 802.1 Data Center Bridging (DCB) specifications and hardware offload support for improved FCoE performance. The new cards also use less power: 35 percent less on the 10 GbE side, and 15 percent less for IB. The InfiniBand port supports up to 40 Gb/s bandwidth with 1 microsecond latencies; on the Ethernet port the cards support 10 Gb/s bandwidth with 6 microsecond TCP latency or 3 microsecond RDMA latency. Kernel bypass is also available for Low Latency Ethernet environments. ConnectX-2 samples are available today, and the products are expected to be generally available in October.
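
On the software side, the kernel-bypass path is reached through the verbs programming interface regardless of which personality a port is running. As a rough sketch, assuming the libibverbs library from an OFED installation (link with -libverbs), the snippet below simply enumerates the RDMA-capable devices and reports the state of port 1; an application written to verbs works the same whether the port underneath is InfiniBand or Low Latency Ethernet.

    /*
     * Minimal sketch using libibverbs to list RDMA-capable devices and query
     * port 1 on each. Assumes the verbs stack (e.g., from OFED) is installed.
     */
    #include <stdio.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num;
        struct ibv_device **devs = ibv_get_device_list(&num);
        if (!devs || num == 0) {
            fprintf(stderr, "no RDMA devices found\n");
            return 1;
        }

        for (int i = 0; i < num; i++) {
            struct ibv_context *ctx = ibv_open_device(devs[i]);
            if (!ctx)
                continue;

            struct ibv_port_attr port;
            if (ibv_query_port(ctx, 1, &port) == 0)
                printf("%s: port 1 state=%d active_mtu=%d\n",
                       ibv_get_device_name(devs[i]), port.state, port.active_mtu);

            ibv_close_device(ctx);
        }

        ibv_free_device_list(devs);
        return 0;
    }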

There are other vendors offering converged networking solutions, but, in general, the available solutions today -- including Mellanox's offering -- are outstanding in only a few of the possible areas of interest. For example, Brocade offers a CNA that works well for storage and server networks with support for both FCoE and iSCSI. The Mellanox ConnectX-2 family seems to hold a lot of promise for combined storage and low latency server networking.

As Brian Sparks of Mellanox said when I talked with him about this announcement, "It really is hard for a single technology to be great at both LAN and high performance local interconnect." The analogy he used in our discussion was the displacement of magnetic disk drives by new technologies like optical and SSD. Each time, the new technologies have opened up new areas of application, and taken a little share from the magnetic incumbents, but at the end of the day, there was a place where each technology was clearly superior. If there is an ultimate convergence, it will be a long time out, but until then, Mellanox is well positioned to sell to all sides of the debate.
