April 02, 2007
SANTA CLARA, Calif., and YOKNEAM, Israel, March 26 -- Mellanox Technologies Ltd., a leading supplier of semiconductor-based high-performance interconnect products, today announced the availability of the industry’s only 10 and 20Gb/s InfiniBand I/O adapters that deliver ultra-low 1 microsecond (µs) application latencies. The ConnectX IB fourth-generation InfiniBand Host Channel Adapters (HCAs) provide unparalleled I/O connectivity performance for servers, storage, and embedded systems optimized for high-throughput and latency-sensitive clusters, grids and virtualized environments.
“Today’s servers integrate multiple dual- and quad-core processors with high-bandwidth memory subsystems, yet the I/O limitations of Gigabit Ethernet and Fibre Channel effectively degrade the system’s overall performance,” said Eyal Waldman, chairman, president and CEO of Mellanox Technologies. “ConnectX IB 10 and 20Gb/s InfiniBand adapters balance I/O performance with powerful multi-core processors responsible for executing mission-critical functions that range from applications which optimize Fortune 500 business operations to those that enable the discovery of new disease treatments through medical and drug research.”
Building on the success of the widely deployed Mellanox InfiniHost adapter products, ConnectX IB HCAs extend InfiniBand’s value with new performance levels and capabilities.
Leading OEM Support
“Our high-performance BladeSystem c-Class customer applications are increasingly relying on lower interconnect latency to improve performance and keep costs in check,” said Mark Potter, vice president of the BladeSystem Division at HP. “With the promise of even better application latency, HP's c-Class blades featuring the forthcoming Mellanox ConnectX IB HCAs will further enhance HP's industry-leading 4X DDR InfiniBand capability, bringing new dimensions to how Fortune 500 companies deploy clusters and improve ROI.”
“Clearly InfiniBand is reaching market maturity with this fourth-generation server host chip and adapter-level interface technology from Mellanox,” said Bill Erdman, marketing director of Cisco Systems’ Server Virtualization Business Unit. “As we bring these host interface cards to market over the next several calendar quarters, fully integrated with our scalable Server Fabric Switching product line, customers will see significant latency improvements and greater end-to-end delivery reliability, especially when scaling large computing clusters with thousands of high-end compute nodes.”
“Scaling high-performance applications and clusters without compromising performance is becoming a critical need, driven by ever-increasing computation needs,” said Andy Bechtolsheim, chief architect and senior vice president for Sun Microsystems. “ConnectX IB HCAs offer novel scalability features that complement our vision for delivering compelling solutions to our end users.”
“IT organizations in industries ranging from HPC to financial services are continually looking at ways to get the most out of their critical software applications,” said Patrick Guay, senior vice president of marketing at Voltaire. “The increased bandwidths and lower latencies delivered in Mellanox’s ConnectX InfiniBand adapters combined with Voltaire’s multi-service switching platforms will bring significantly greater application acceleration benefits to our customers.”
I/O as a Competitive Advantage
The performance and capabilities of ConnectX IB HCAs support the most demanding high-performance computing applications while at the same time reducing research and development budgets.
“Today’s science demands continue to outpace the number of available engineers and their associated budgets, driving the need for more productivity per scientist,” said Shawn Hansen, director of marketing, Windows Server Division at Microsoft Corp. “Technologies that improve I/O latencies and message rates, like ConnectX IB adapters, enhance the ability of Windows Compute Cluster Server to deliver high performance computing for the mainstream researcher and engineer.”
In addition, the volume of transactions and data transferred in Fortune 500 companies is increasing exponentially, jeopardizing profits and competitiveness for IT infrastructures that cannot scale to address the additional load.
“Extremely high volumes of concurrent users and increasingly complex transactions are making access to data one of the greatest bottlenecks to performance in grid computing,” said Geva Perry, chief marketing officer at GigaSpaces. “ConnectX IB InfiniBand HCAs offer leading latency, throughput and reliable performance that can help eliminate interconnect-related data latency degradations, and are therefore a perfect complement to GigaSpaces’ products for increasing overall application performance and scalability.”
Enhanced Virtual Infrastructure Performance and ROI
ConnectX IB InfiniBand HCAs offer Channel I/O Virtualization (CIOV), which creates virtualized services end-points for virtual machines and SOA deployments. CIOV enables virtualized provisioning of all I/O services including clustering, communications, storage and management. CIOV enables accelerated hardware-based I/O virtualization and is complementary to CPU and memory virtualization technologies from Intel and AMD.
“When used with the Xen virtualization technology inside of SUSE Linux Enterprise Real Time, ConnectX IB InfiniBand adapters can lower I/O costs and improve I/O utilization,” said Holger Dyroff, vice president of SUSE Linux Enterprise product management at Novell. “Service-oriented architectures demand native I/O performance from virtual machines and Mellanox’s I/O virtualization architecture perfectly complements Novell's technical leadership in delivering mission-critical operating systems to our customers.”
ConnectX IB InfiniBand HCAs deliver leading performance while maintaining compatibility with operating systems and networking software stacks. For high-performance remote direct memory access (RDMA) based operations, the adapters are fully backward compatible with the OpenFabrics (www.openfabrics.org) Enterprise Distribution (OFED) and Microsoft WHQL-certified Windows InfiniBand (WinIB) protocol stacks, requiring only a device driver upgrade. RDMA and InfiniBand hardware transport offload are proven to deliver software-transparent application performance improvements. For traditional TCP/IP-based applications, the adapters support standard operating system stacks, including stateless-offload and Intel QuickData technology enhancements.
“PCI Express and Intel QuickData technology provide a low disruption path to scaling I/O by respectively increasing bandwidth and efficiencies for I/O in Intel-based servers,” said Jim Pappas, director of technology initiatives for Intel’s Digital Enterprise Group. “With innovative implementation of these technologies by companies like Mellanox, I/O on Intel’s enterprise platforms continues to be accelerated for the demanding multi-core application needs of today and the future.”
Mellanox Technologies is a leading supplier of semiconductor-based, high-performance, InfiniBand interconnect products that facilitate data transmission between servers, communications infrastructure equipment, and storage systems. The company’s products are an integral part of a total solution focused on computing, storage and communication applications used in enterprise data centers, high-performance computing and embedded systems. In addition to supporting InfiniBand, Mellanox's next generation of products support the industry-standard Ethernet interconnect specification. Founded in 1999, Mellanox Technologies is headquartered in Santa Clara, California and Yokneam, Israel. For more information, visit Mellanox at www.mellanox.com.
Source: Mellanox Technologies