June 17, 2013
LEIPZIG, Germany, Jun 17, 2013 (BUSINESS WIRE) -- Mellanox Technologies, Ltd., a leading supplier of high-performance, end-to-end interconnect solutions for data center servers and storage systems, today announced it will showcase the world's first technology demonstration of ConnectX-3 FDR 56Gb/s InfiniBand solutions running on the SECO ARM development platform featuring NVIDIA Tegra processors. The SECO development platform is powered by an NVIDIA Tegra quad-core ARM processor. The RISC-based ARM architecture is advancing far beyond today's billions of client devices, and is rapidly moving towards the mainstream technical computing and high performance computing (HPC) market segments.
Together with NVIDIA, Mellanox will demo the ARM architecture with InfiniBand at the Mellanox exhibit booth #326 during the International Supercomputing Conference (ISC) in Leipzig, Germany from June 17-20.
"This technology demonstration represents a significant development milestone for the adoption of Mellanox's InfiniBand solutions on new CPU platforms such as NVIDIA Tegra-based ARM platforms," said Gilad Shainer, Vice President of Marketing at Mellanox Technologies. "As future generations of 64-bit ARM solutions come online, applications will continue to demand the ultra-low-latency communications and scalability that only Mellanox InfiniBand can provide."
"Mellanox and NVIDIA are working together to bring all the benefits of a modern HPC network to ARM-based platforms," said Ian Buck, General Manager of GPU Computing Software at NVIDIA. "This technology demo, coupled with support for ARM platforms in the latest release of the CUDA parallel programming toolkit, provides the foundation for developers to build out the ARM HPC application ecosystem."
"The R&D work performed by Mellanox, which enabled SECO to integrate InfiniBand technology into its NVIDIA Tegra-based development kit, allowed the ARM+GPU architecture to be implemented in real HPC clusters, such as BSC's Pedraforca, which was presented at ISC this year by E4 Computer Engineering," said Alessandro Santini, HPC Sales at SECO.
Live demonstration during ISC'13 (June 17-19, 2013).
Visit Mellanox Technologies at booth #326 to see the live demonstration of Mellanox's ConnectX-3 FDR InfiniBand adapters on NVIDIA's Tegra ARM platform. For more information on Mellanox's event and speaking activities at ISC'13, please visit http://www.mellanox.com/isc13.
Mellanox Technologies is a leading supplier of end-to-end InfiniBand and Ethernet interconnect solutions and services for servers and storage. Mellanox interconnect solutions increase data center efficiency by providing the highest throughput and lowest latency, delivering data faster to applications and unlocking system performance capability. Mellanox offers a choice of fast interconnect products: adapters, switches, software and silicon that accelerate application runtime and maximize business results for a wide range of markets including high performance computing, enterprise data centers, Web 2.0, cloud, storage and financial services. More information is available at www.mellanox.com.
Source: Mellanox Technologies