April 10, 2013
One of the hallmarks of HPC is a speedy interconnect. Amazon's EC2 Cluster Compute instance runs on a 10 Gigabit Ethernet network, but is it fast enough for MPI applications?
HPC users wondering if Amazon's virtual cluster is right for them just got some additional data points to consider, thanks to a series of MPI benchmark tests undertaken by Glenn K. Lockwood. A user services consultant at the San Diego Supercomputer Center, Lockwood ran the OSU microbenchmark suite on both Amazon's EC2 Cluster Compute instances and a Myrinet 10GigE cluster.
The Point-to-Point MPI Benchmarks from Ohio State measure latency, bandwidth, and bidirectional bandwidth. Lockwood ran each test five times and averaged the results, as represented by this chart:
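For readers unfamiliar with how a point-to-point latency test works: one process sends a fixed-size message, the other echoes it back, and the one-way latency is taken as half the average round-trip time over many iterations. The sketch below illustrates that measurement loop using plain TCP sockets on localhost; it is not the OSU code and does not use MPI, and the message size and iteration count are illustrative, not Lockwood's settings.

```python
import socket
import time
from multiprocessing import Process, Queue

MSG_SIZE = 1024    # bytes per message (the OSU suite sweeps many sizes)
ITERATIONS = 500   # timed round trips

def recv_exact(sock, n):
    """Read exactly n bytes from a stream socket."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed early")
        buf += chunk
    return buf

def echo_server(port_queue):
    """Accept one connection and echo every message straight back."""
    with socket.socket() as srv:
        srv.bind(("127.0.0.1", 0))   # let the OS pick a free port
        srv.listen(1)
        port_queue.put(srv.getsockname()[1])
        conn, _ = srv.accept()
        with conn:
            for _ in range(ITERATIONS):
                conn.sendall(recv_exact(conn, MSG_SIZE))

def measure_latency():
    """Estimate one-way latency in microseconds via a ping-pong loop."""
    port_queue = Queue()
    server = Process(target=echo_server, args=(port_queue,))
    server.start()
    port = port_queue.get()
    payload = b"x" * MSG_SIZE
    with socket.socket() as cli:
        cli.connect(("127.0.0.1", port))
        start = time.perf_counter()
        for _ in range(ITERATIONS):
            cli.sendall(payload)       # ping
            recv_exact(cli, MSG_SIZE)  # pong
        elapsed = time.perf_counter() - start
    server.join()
    # Half the average round trip, converted to microseconds.
    return elapsed / ITERATIONS / 2 * 1e6

if __name__ == "__main__":
    print(f"~{measure_latency():.1f} us one-way at {MSG_SIZE} bytes")
```

On a real interconnect comparison, the same loop would run between two nodes over MPI send/receive calls, which is where a 10GigE cloud network and a tuned HPC fabric diverge most sharply.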
Discussing the results, Lockwood doesn't beat around the bush.
"The numbers speak for themselves," he writes. "EC2's interconnect performance is not great, and the disparity only worsens when comparing EC2 to InfiniBand." (He's referring to Adam DeConinck's blog, which compared Amazon's Cluster Compute instances to QDR InfiniBand.)
In another experiment, Lockwood ran a quantum chemistry application across four EC2 Cluster Compute instances and again on the Myrinet reference architecture. The setups were otherwise identical, each node with two Intel Xeon E5-2670 processors and 60 GB of RAM. EC2 came up short again, by about 30 percent.
In a bonus trial, Lockwood put the EC2 cluster up against the 2007-era Blue Gene/P torus interconnect as well as the newer Myricom adapter. The graphed results show EC2 and Blue Gene/P on an equivalent trajectory, with Myrinet the clear winner, especially at larger message sizes.
While Lockwood's report focuses primarily on point-to-point communications, he notes that the collective and one-sided benchmarks fared no better for EC2.