November 25, 2005
AMD has unveiled its Emerald cluster installation. The AMD Developer Center's Emerald is its newest and largest system, as well as its first publicly available cluster based entirely on dual-core AMD Opteron processors with Direct Connect Architecture. AMD is offering use of Emerald to industry partners, developers, customers and end users as a cutting-edge, multi-core test and development platform.
The AMD Developer Center has helped hundreds of innovators test and optimize their products, enterprise configurations and clusters for AMD64 technology. Located in Sunnyvale, Calif., the AMD Developer Center provides on-site technical support and global virtual access to the AMD64 environment -- enabling scheduled sessions onsite or remotely at https://devcenter.amd.com.
"Supercomputing is all about performance. The combination of AMD dual-core processors with AMD64's Direct Connect Architecture not only delivers leading-edge performance, but also excels in energy efficiency and performance per watt, key issues in datacenters today." said Joe Menard, corporate vice president, software strategy, AMD. "AMD is helping set the standard for commercial computing, so Emerald allows customers, partner companies and end users to test and optimize applications across hundreds of dual-core processors. To make Emerald happen, we worked with an outstanding group of industry associates that showcase their extraordinary products alongside our multi-core technology."
"As AMD continues to offer innovative solutions for multi-core computing, Rackable Systems' Dual-Core AMD Opteron processor-based servers help offer the scalability to achieve such impressive results," said Colette LaForce, vice president of marketing, Rackable Systems. "As AMD successfully responds to the need for multi-core testing and development in compute-intensive environments, Rackable Systems is fully aligned with and proud to be part of the AMD Emerald cluster."
The cluster comprises 144 Rackable Systems nodes, each containing two dual-core AMD Opteron processors model 275, for a total of 576 cores. On the HPC Challenge benchmark suite, using a 512-core Emerald configuration, AMD achieved the highest score on the Random Access (GUPs) benchmark and also measured 1.865 TFlop/s for an efficiency rate of 82.8 percent (http://icl.cs.utk.edu/hpcc/hpcc_results.cgi). The HPC Challenge benchmark is designed to measure a variety of factors influencing application performance, including sustainable memory bandwidth and latency. These benchmarks, which are sensitive to memory update performance and the speed of network communications, illustrate the advantages of AMD64's Direct Connect Architecture.
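As a rough sketch of where the 82.8 percent figure comes from, the efficiency is the measured Linpack result divided by theoretical peak; the calculation below assumes the Opteron 275's 2.2 GHz clock and two double-precision floating-point operations per core per cycle, neither of which is stated in the article.

```python
# Back-of-the-envelope check of the quoted 82.8 percent efficiency figure.
# Assumptions (not stated in the article): the dual-core Opteron 275 runs at
# 2.2 GHz and each core retires 2 double-precision FLOPs per cycle.

CORES = 512                 # Emerald configuration used for the HPCC run
CLOCK_GHZ = 2.2             # assumed Opteron 275 clock
FLOPS_PER_CYCLE = 2         # assumed DP FLOPs per core per cycle
MEASURED_TFLOPS = 1.865     # result quoted in the article

peak_tflops = CORES * CLOCK_GHZ * FLOPS_PER_CYCLE / 1000.0   # theoretical peak
efficiency = MEASURED_TFLOPS / peak_tflops

print(f"Theoretical peak: {peak_tflops:.3f} TFlop/s")   # ~2.253 TFlop/s
print(f"Efficiency: {efficiency:.1%}")                   # ~82.8%
```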
The Emerald cluster has 1,152 GB RAM, and includes a Cyclades TS3000 Console Server; a Force10 Networks E300 Series Switch; Iwill DK8-HTX motherboards; a Panasas ActiveScale Storage Cluster; PathScale InfiniPath HTX InfiniBand adapters; Rackable Systems servers leveraging DC Power technology; Samsung Electronics 2GB DDR400 memory modules based on single-rank 1Gbit technology; and a SilverStorm 9120 InfiniBand Switch.