June 18, 2009
Wondering how the new quad-core Intel Nehalem (Xeon 5500 series) and six-core AMD Istanbul (Opteron 2400 series) stack up against each other on HPC-style codes? The folks at Advanced Clustering Technologies, a company that builds customized HPC clusters from standard components, have been putting the latest high-end x86 silicon through its paces, and have generated some interesting results. Company engineers ran the High Performance Linpack (HPL) benchmark on comparable Nehalem- and Istanbul-based machines, and reported their findings on the firm's Web site.
Linpack, of course, is an artificial benchmark, but it is a decent measure of peak HPC performance on a given architecture and is the basis of the popular TOP500 list of supercomputers. Benchmarks in general are easy to misuse, though, so the HPC system buyer has to understand how they are applied. (Our friend, Andy Jones, vice-president of HPC at the Numerical Algorithms Group, spells out how to use benchmarking to good effect in Thursday's ZDNet article.) Serious HPC buyers tend to use a variety of benchmarks to make procurement decisions, but Linpack is often the starting point.
For their HPL tests, the engineers at Advanced Clustering Technologies took some pains to match up the systems so as to provide an apples-to-apples comparison of CPUs. According to the post, written by cluster engineer Shane Corder:
All of the testing showed we could achieve the highest performance when using both the Intel Compilers and Intel Math Library -- even on the AMD system -- so these were used ... as the base of our benchmarks. The benchmarks were run on an Opteron 2435 Istanbul system (6 core 2.6GHz processor with 16GB of 800MHz DDR2) and a X5550 Nehalem system (quad core 2.66GHz processor with 12GB of 1333MHz DDR3). An attempt was made to keep the systems identical in every other way.
They did adjust the HPL problem size to compensate for the larger memory capacity on the Nehalem platform, such that the code would approach 100 percent of memory usage on each system.
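The post doesn't give the exact formula the engineers used, but the standard rule of thumb for sizing HPL is to pick the largest matrix order N whose N x N double-precision array fits in the target fraction of memory. A minimal sketch, assuming that conventional rule (the function name and rounding step are illustrative, not from the article):

```python
import math

def hpl_problem_size(mem_gb, fraction=1.0, step=8):
    """Largest HPL matrix order N whose N*N array of 8-byte doubles fits
    in `fraction` of `mem_gb` gigabytes, rounded down to a multiple of
    `step` (HPL runs best when N is a multiple of the block size NB)."""
    mem_bytes = mem_gb * 2**30
    n = int(math.sqrt(fraction * mem_bytes / 8))
    return n - n % step

# The 16GB Istanbul box vs. the 12GB Nehalem box from the article
print(hpl_problem_size(16))  # 46336
print(hpl_problem_size(12))  # 40128
```

In practice HPL runs usually target somewhat less than 100 percent of memory (the OS needs room too), which is what the `fraction` parameter is for.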
In a nutshell, Istanbul beat out Nehalem, 99.38 gigaflops to 74.03 gigaflops. It might not be too surprising that the six-core beat out the quad-core, but since Intel supports two threads per core with its so-called "hyperthreading" technology, one might surmise that Intel has the overall advantage in parallel computation. In practice, though, any speed boost from hyperthreading is highly application dependent. The engineers at Advanced Clustering Technologies actually noticed a decrease in performance when using hyperthreading while running HPL. They told me that Linpack is one of the few codes that does not benefit from this kind of technology.
Nehalem did turn out to be more computationally efficient (HPL peak/theoretical peak), which they attributed to the higher memory bandwidth of DDR3 -- Istanbul uses DDR2 -- and reduced cache snooping. Users are not usually concerned with such metrics, but it does point to a better system balance in the Intel design.
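The efficiency numbers can be sanity-checked from the clock speeds and core counts quoted above. A rough sketch, assuming dual-socket configurations (the 2400-series Opteron and 5500-series Xeon are both two-socket parts) and 4 double-precision flops per core per cycle via SSE for both architectures -- neither assumption is stated explicitly in the post:

```python
def theoretical_peak_gflops(sockets, cores, ghz, flops_per_cycle=4):
    # Peak = sockets x cores x clock x DP flops retired per cycle
    return sockets * cores * ghz * flops_per_cycle

# Dual-socket systems as described in the article (assumed configuration)
istanbul_peak = theoretical_peak_gflops(2, 6, 2.6)    # 124.8 gigaflops
nehalem_peak  = theoretical_peak_gflops(2, 4, 2.66)   # 85.12 gigaflops

# Measured HPL numbers divided by theoretical peak
print(f"Istanbul efficiency: {99.38 / istanbul_peak:.1%}")  # ~79.6%
print(f"Nehalem efficiency:  {74.03 / nehalem_peak:.1%}")   # ~87.0%
```

Under those assumptions, Nehalem's roughly 87 percent efficiency versus Istanbul's roughly 80 percent is consistent with the article's claim that the Intel system is the better-balanced design.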
The more telling metric is price-performance, which the AMD platform won hands down: $35.21/gigaflop for the Istanbul-based system versus $52.33/gigaflop for the Nehalem system. When you're talking teraflops, that difference adds up quickly.
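Multiplying the quoted dollars-per-gigaflop figures back through the HPL results gives the implied system prices and shows how the gap scales. A quick back-of-the-envelope check using only the numbers reported in the article:

```python
istanbul_gflops, nehalem_gflops = 99.38, 74.03
istanbul_per_gf, nehalem_per_gf = 35.21, 52.33  # $/gigaflop, as quoted

# Implied system prices: $/gigaflop x measured gigaflops
print(round(istanbul_gflops * istanbul_per_gf))  # 3499
print(round(nehalem_gflops * nehalem_per_gf))    # 3874

# At these rates, one sustained HPL teraflop costs:
print(f"${istanbul_per_gf * 1000:,.0f} vs ${nehalem_per_gf * 1000:,.0f}")
```

So roughly $3,500 versus $3,900 per node, and a difference of about $17,000 per sustained teraflop -- which is the "adds up quickly" part.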
As I mentioned before, the results here are all based on Linpack, so they won't necessarily reflect real-world HPC codes. It's quite likely that a quad-core Nehalem will outperform the six-core Istanbul on many applications, especially ones that are memory-constrained or can benefit from Intel's hyperthreading architecture. Advanced Clustering Technologies says it hopes to run more HPC benchmarks in the future and intends to publish the results.
Posted by Michael Feldman - June 18, 2009 @ 3:09 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.