November 12, 2012
SALT LAKE CITY, Nov. 12 – Mellanox Technologies, Ltd., a leading supplier of high-performance, end-to-end interconnect solutions for data center servers and storage systems, today announced that the U.S. Department of Energy’s Brookhaven National Laboratory has deployed Mellanox FDR 56Gb/s InfiniBand with RDMA to build a cost-effective, scalable 100Gb/s network for compute and storage connectivity. Key research currently conducted at Brookhaven National Laboratory includes systems biology to advance the fundamental knowledge underlying biological approaches to producing biofuels and sequestering carbon in terrestrial ecosystems, advanced energy systems research, and nuclear and high-energy physics experiments that explore the most fundamental questions about the nature of the universe.
“Researchers at Brookhaven National Laboratory rely on data-intensive applications that require high-speed, high-throughput access to data storage systems,” said Dantong Yu, research engineer at Brookhaven National Laboratory. “Scientists often need to read and write data at aggregate speeds of 10Gb/s, 100Gb/s and beyond, which is equivalent to fetching a full-length HD movie in less than a second. The efficiency and scalability of Mellanox InfiniBand solutions with RDMA should help us eliminate bottlenecks in the interconnect between servers and storage, while also controlling processing cost and latency. Faster access to data enables us to move our research forward more quickly.”
One of ten national laboratories overseen and primarily funded by the Office of Science of the U.S. Department of Energy (DOE), Brookhaven National Laboratory conducts research in the physical, biomedical and environmental sciences, as well as in energy technologies and national security.
Brookhaven National Laboratory constructed a storage area network (SAN) testbed utilizing the iSCSI Extensions for RDMA (iSER) protocol over Mellanox InfiniBand-based storage interconnects with RDMA. This storage solution scales to allow a large number of cluster/cloud hosts to have unrestricted access to virtualized storage, and enables gateway hosts, such as FTP and web servers, to move data between client and storage at extremely high speed. Combined with its front-end network interface, the upgraded SAN will eliminate bottlenecks and deliver 100Gb/s end-to-end data transfer throughput to support applications that constantly need to move large amounts of data within and across Brookhaven’s data centers.
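As an illustration of the kind of setup such a testbed involves, a Linux host can attach to an iSER-backed iSCSI target over InfiniBand using the standard open-iscsi tools. This is a minimal sketch, not Brookhaven’s actual configuration: it assumes a Linux initiator with open-iscsi and an RDMA-capable adapter, and the portal address and target IQN shown are placeholders.

```shell
# Discover iSCSI targets exported by a storage node (portal address is a placeholder)
iscsiadm -m discovery -t sendtargets -p 192.168.10.10

# Define an initiator interface that uses the iSER (RDMA) transport
iscsiadm -m iface -I iser0 --op new
iscsiadm -m iface -I iser0 --op update -n iface.transport_name -v iser

# Log in to the target over the iSER interface (IQN is a placeholder)
iscsiadm -m node -T iqn.2012-11.example:storage.lun1 -p 192.168.10.10 -I iser0 --login

# The remote LUN then appears as a local block device
lsblk
```

Because iSER carries the iSCSI data path over RDMA, the storage traffic bypasses the kernel TCP stack and avoids extra data copies, which is where the throughput and CPU-efficiency gains described above come from.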
“National research labs, such as Brookhaven National Laboratory, require extremely fast data access for their applications in order to conduct their research more effectively,” said Gilad Shainer, vice president of market development at Mellanox. “Mellanox InfiniBand and RDMA solutions provide the most efficient and scalable interconnect infrastructure to enable Brookhaven National Laboratory to increase their application performance and achieve their research goals.”
Visit Mellanox Technologies & Brookhaven National Laboratory at SC12 (November 12-15, 2012)
Visit Mellanox Technologies at booth #1531 on Wednesday, November 14th at 11:45am, to see Brookhaven National Laboratory’s live demonstration of its high speed data transfer network.
Mellanox Technologies is a leading supplier of end-to-end InfiniBand and Ethernet interconnect solutions and services for servers and storage. Mellanox interconnect solutions increase data center efficiency by providing the highest throughput and lowest latency, delivering data faster to applications and unlocking system performance capability. Mellanox offers a choice of fast interconnect products: adapters, switches, software and silicon that accelerate application runtime and maximize business results for a wide range of markets including high performance computing, enterprise data centers, Web 2.0, cloud, storage and financial services.