June 17, 2013
LEIPZIG, Germany, June 17 -- Mellanox Technologies, Ltd., a leading supplier of high-performance, end-to-end interconnect solutions for data center servers and storage systems, today announced the next major advancement in GPU-to-GPU communications with the launch of its FDR InfiniBand solution with support for NVIDIA GPUDirect remote direct memory access (RDMA) technology.
The next generation of NVIDIA GPUDirect technology provides industry-leading application performance and efficiency for GPU-accelerator based high-performance computing (HPC) clusters. NVIDIA GPUDirect RDMA technology dramatically accelerates communications between GPUs by providing a direct peer-to-peer communication data path between Mellanox’s scalable HPC adapters and NVIDIA GPUs.
This capability significantly decreases GPU-to-GPU communication latency and completely offloads the CPU and system memory subsystem from GPU-to-GPU communications across the network. The latest performance results from Ohio State University demonstrated an MPI latency reduction of 69 percent, from 19.78µs to 6.12µs, when moving data between InfiniBand-connected GPUs; overall throughput for small messages increased by 3X, and bandwidth for larger messages increased by 26 percent.
The performance testing was done using MVAPICH2 software from The Ohio State University’s Department of Computer Science and Engineering, which delivers world-class performance, scalability and fault tolerance for high-end computing systems and servers using InfiniBand. MVAPICH2 software powers numerous supercomputers in the TOP500 list, including the 7th largest multi-Petaflop TACC Stampede system with 204,900 cores interconnected by Mellanox FDR 56Gb/s InfiniBand.
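The practical appeal of a CUDA-aware MPI library such as MVAPICH2 is that applications can pass GPU device pointers directly to MPI calls, letting the library (and, with GPUDirect RDMA, the InfiniBand adapter) move data without staging through host buffers. The following is a minimal sketch of that usage pattern, assuming a CUDA-enabled MVAPICH2 build; buffer size, ranks, and error handling are illustrative only.

```c
/* Minimal sketch of a CUDA-aware MPI exchange. Device pointers are
 * passed directly to MPI_Send/MPI_Recv; with GPUDirect RDMA the HCA
 * accesses GPU memory directly, with no host-side staging copies.
 * Error checking is omitted for brevity. */
#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char **argv)
{
    int rank;
    const int n = 1 << 20;              /* 1M floats per message (illustrative) */
    float *d_buf;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    cudaMalloc((void **)&d_buf, n * sizeof(float));

    if (rank == 0)                      /* send straight from GPU memory */
        MPI_Send(d_buf, n, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
    else if (rank == 1)                 /* receive straight into GPU memory */
        MPI_Recv(d_buf, n, MPI_FLOAT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);

    cudaFree(d_buf);
    MPI_Finalize();
    return 0;
}
```

Because the MPI interface is unchanged, existing GPU applications written this way can benefit from GPUDirect RDMA without source modifications.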
“The ability to transfer data directly to and from GPU memory dramatically speeds up system and application performance, enabling users to run computationally intensive code and get answers faster than ever before,” said Gilad Shainer, Vice President of Marketing at Mellanox Technologies. “Mellanox’s FDR InfiniBand solutions with NVIDIA GPUDirect RDMA ensure the highest level of application performance, scalability and efficiency for GPU-based clusters.”
“Application scaling on clusters is often limited by an increase in sent messages, at progressively smaller message sizes,” said Ian Buck, General Manager of GPU Computing Software at NVIDIA. “With MVAPICH2 and GPUDirect RDMA, we see substantial improvements in small message latency and bisection bandwidth between GPUs directly to Mellanox’s InfiniBand network fabric.”
GPU-based clusters are widely used for computationally intensive tasks such as seismic processing, computational fluid dynamics and molecular dynamics. Since the GPUs perform high-performance floating point operations over a very large number of cores, a high-speed interconnect is required to connect the platforms, delivering the bandwidth and latency the clustered GPUs need to operate efficiently and alleviating bottlenecks in the GPU-to-GPU communication path.
Mellanox ConnectX and Connect-IB based adapters are the world’s only InfiniBand solutions that provide the full offloading capabilities critical to avoiding CPU interrupts, data copies and system noise, while maintaining high efficiency for GPU-based clusters. Combined with NVIDIA GPUDirect RDMA technology, Mellanox InfiniBand solutions are driving HPC environments to new levels of performance and scalability.
Alpha code enabling GPUDirect RDMA functionality is available today, including an alpha version of the MVAPICH2-GDR release from OSU that supports existing MPI applications. General availability is expected in the fourth quarter of 2013. For more information, please email email@example.com.
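For readers evaluating the alpha release, a CUDA-aware MVAPICH2 job is typically launched with CUDA support enabled through environment variables. The fragment below is a hypothetical launch line following MVAPICH2 naming conventions; the exact variable set and binary name (`./gpu_app`, hosts `node01`/`node02`) are placeholders and may differ in the MVAPICH2-GDR alpha.

```shell
# Hypothetical MVAPICH2-GDR launch: enable CUDA-aware transfers and
# GPUDirect; host names and application binary are placeholders.
MV2_USE_CUDA=1 MV2_USE_GPUDIRECT=1 \
    mpirun_rsh -np 2 node01 node02 ./gpu_app
```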
Live demonstration during ISC’13 (June 17-19, 2013)
Visit Mellanox Technologies at ISC’13 (booth #326) during expo hours to see a live demonstration of Mellanox’s FDR InfiniBand solutions with NVIDIA GPUDirect RDMA, and the full suite of Mellanox’s end-to-end high-performance InfiniBand and Ethernet solutions. For more information on Mellanox’s event and speaking activities at ISC’13, please visit http://www.mellanox.com/isc13.
Mellanox Technologies is a leading supplier of end-to-end InfiniBand and Ethernet interconnect solutions and services for servers and storage. Mellanox interconnect solutions increase data center efficiency by providing the highest throughput and lowest latency, delivering data faster to applications and unlocking system performance capability. Mellanox offers a choice of fast interconnect products: adapters, switches, software and silicon that accelerate application runtime and maximize business results for a wide range of markets including high performance computing, enterprise data centers, Web 2.0, cloud, storage and financial services. More information is available at www.mellanox.com.