June 17, 2013
LEIPZIG, Germany, June 17 -- Mellanox Technologies, Ltd., a leading supplier of high-performance, end-to-end interconnect solutions for data center servers and storage systems, today announced that its FDR 56Gb/s InfiniBand solutions deliver unmatched application performance over competing interconnect solutions. Benchmarks performed on multiple applications, including OpenFOAM (Computational Fluid Dynamics), NAMD (Molecular Dynamics), RADIOSS (Structural Analysis), LAMMPS (Molecular Dynamics), WRF (Weather Research and Forecasting) and CP2K (Molecular Simulations), demonstrate 20 to 30 percent higher performance with sixteen compute nodes compared to QDR 40Gb/s InfiniBand, and 100 to 200 percent higher performance compared to 10 and 40 Gigabit Ethernet. Furthermore, Mellanox's Connect-IB FDR 56Gb/s InfiniBand adapter delivers a 3X higher message rate than competing solutions, achieving 137 million messages per second. The performance capabilities demonstrated by FDR 56Gb/s InfiniBand are critical to High-Performance Computing (HPC), Web 2.0, cloud, Big Data and financial applications, which require the highest bandwidth and lowest latency to provide a competitive advantage to their users.
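The applications cited above are MPI-based, and interconnect latency and message rate are typically characterized with small-message point-to-point tests. The following is a minimal, hypothetical sketch of such a ping-pong latency measurement in C with MPI; it is not Mellanox's benchmark code, and the iteration count and message size are illustrative assumptions only.

/*
 * Hypothetical MPI ping-pong sketch: measures average one-way latency for
 * small messages between two ranks, the kind of point-to-point test commonly
 * used when comparing interconnects such as FDR InfiniBand and 10/40GbE.
 * Not Mellanox's benchmark code; parameters are illustrative.
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int iters = 10000;        /* assumed iteration count */
    char buf[8] = {0};              /* assumed 8-byte message */
    MPI_Status status;

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();

    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(buf, sizeof(buf), MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, sizeof(buf), MPI_CHAR, 1, 0, MPI_COMM_WORLD, &status);
        } else if (rank == 1) {
            MPI_Recv(buf, sizeof(buf), MPI_CHAR, 0, 0, MPI_COMM_WORLD, &status);
            MPI_Send(buf, sizeof(buf), MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }

    double t1 = MPI_Wtime();
    if (rank == 0)
        printf("avg one-way latency: %.2f us\n",
               (t1 - t0) * 1e6 / (2.0 * iters));

    MPI_Finalize();
    return 0;
}

Such a sketch would typically be compiled with mpicc and launched across two nodes with mpirun -np 2, with the actual fabric selection handled by the MPI library's configuration.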
"These application benchmarks highlight the performance advantage of FDR 56Gb/s InfiniBand and the return-on-investment it provides to users," said Gilad Shainer, Vice President of Marketing at Mellanox Technologies. "Mellanox's FDR 56Gb/s InfiniBand solution is the most efficient interconnect solution for connecting servers and storage systems, delivering high throughput, low latency and world-leading application performance."
Available today, Mellanox's FDR 56Gb/s InfiniBand solution includes ConnectX-3 and Connect-IB adapter cards, SwitchX-2 based switches (from 12-port to 648-port), fiber and copper cables, and ScalableHPC accelerator and management software. Mellanox will demonstrate these performance advantages at the International Supercomputing Conference (ISC'13).
Visit Mellanox Technologies at ISC'13 (June 17-19, 2013)
Visit Mellanox Technologies at ISC'13 (booth #326) to see demonstrations and the full suite of Mellanox's end-to-end high-performance InfiniBand and Ethernet solutions. For more information on Mellanox's event and speaking activities at ISC'13, please visit http://www.mellanox.com/isc13.
Mellanox Technologies is a leading supplier of end-to-end InfiniBand and Ethernet interconnect solutions and services for servers and storage. Mellanox interconnect solutions increase data center efficiency by providing the highest throughput and lowest latency, delivering data faster to applications and unlocking system performance capability. Mellanox offers a choice of fast interconnect products: adapters, switches, software and silicon that accelerate application runtime and maximize business results for a wide range of markets including high performance computing, enterprise data centers, Web 2.0, cloud, storage and financial services. More information is available at www.mellanox.com.
Source: Mellanox Technologies