June 30, 2009
The engineering team at Advanced Clustering Technologies is at it again. A couple of weeks ago, they published the results of the High Performance Linpack (HPL) benchmark for comparable Intel Nehalem- and AMD Istanbul-based systems, which I discussed in a previous article. Those results had Istanbul edging out Nehalem for Linpack bragging rights.
Now the engineers at Advanced Clustering Technologies have pitted those same microprocessors against each other using the STREAM benchmark and have posted the results on their Web site. STREAM is part of the HPC Challenge suite and measures sustainable memory bandwidth -- one of the most important attributes of high performance computing systems today.
Memory bandwidth, or the lack thereof, has become increasingly significant for many applications: as core counts rise, computational power is racing ahead of memory performance. Like HPL, STREAM is a synthetic benchmark, but if an application is memory constrained, STREAM is generally a good indicator of relative performance.
The STREAM results for the Nehalem and Istanbul offered no surprises. If you've been following the x86 rivalry, you've probably guessed that Intel's Nehalem (Xeon 5500) processor, with its more advanced memory subsystem, bests AMD's Istanbul Opteron, which relies on the older DDR2 technology. According to Advanced Clustering Technologies engineer Shane Corder:
Even the slowest memory speed on a Xeon 5500 processor bests the fastest produced by the Opteron by as much as 20%; comparing the Opteron to the fastest Xeon, the Xeon outperforms by over 75%. The Xeon 5500 gets these much higher memory bandwidth results because of tri-channel instead of dual-channel memory, the increased clock speed of DDR3 (up to 1333MHz), and the fast point-to-point CPU interconnect provided by its Quick Path Interconnect.
One other noteworthy data point is that STREAM performance on the six-core Istanbul turned out to be slightly worse than on the quad-core Shanghai. The Advanced Clustering Technologies folks attribute this to the two extra Istanbul cores contending for bandwidth on the same two memory controllers present in the Shanghai chip. As with the Linpack results, the company also described the results in terms of price-performance:
When you add cost per machine into the mix, the results still show the Xeon 5500 series with a clear lead. The Xeon machine as configured has a price of approximately $3,800 while the Opteron is priced at $3,500. This gives the Xeon a rate of 9.8 megabytes per second per dollar vs. 5.9 megabytes per second per dollar for the Opteron: a 66% advantage for the Intel Xeon 5500 series.
As before, the caveat is that the synthetic benchmark results may not correspond to real-world apps. The recommendation from Advanced Clustering Technologies is that you use your own codes to figure out which processor and system configuration is going to give you the most bang for the buck.
Posted by Michael Feldman - June 30, 2009 @ 10:55 AM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.