June 27, 2013
When an underdog team of undergrads from South Africa arrived in Leipzig, Germany for the 2013 Student Cluster Challenge last week, they had the odds stacked against them. But what the team lacked in experience it more than made up for in grit, not to mention a heavy dose of NVIDIA GPUs.
Things were not looking up for members of the Center for High Performance Computing (CHPC) team when they arrived at the ISC 2013 event. For starters, the group was the youngest of the eight teams. CHPC had no graduate students, as other teams did, and was representing the continent of Africa for the first time in an international supercomputing challenge.
What's more, CHPC couldn't even access its cluster until the day after it arrived in Leipzig. Other teams, from the US, China, Germany, the UK, and Costa Rica, had already been working with their systems for some time, according to Dan Olds, principal analyst with Gabriel Consulting and creator of the Student Cluster Competition website.
But after CHPC put its Dell PowerEdge cluster of Xeon CPUs and NVIDIA GPUs through its paces, it emerged as the clear winner, earning the "Overall Championship Award."
Every group in the competition used co-processors to give their cluster extra punch, but CHPC's use of eight NVIDIA K20 cards appeared to give it an edge, Olds says in a blog post. "While some teams used Intel Phi co-processors, team South Africa went with eight NVIDIA K20 cards, which seemed to do the trick," he says.
The 2013 Student Cluster Challenge tested how well student-built systems could run real-world applications. The apps included GROMACS, a molecular dynamics package; MILC, a quantum chromodynamics app; WRF, a weather research and forecasting application; and two mystery apps that were disclosed on the day of the competition, the AMG numerical analysis application and the CP2K molecular simulation package. These benchmarks accounted for 60 percent of a team's score, and the remaining 40 percent came from interviews with event judges.
Student systems had to consume less than 3,000 watts, which may have proved a challenge for some teams stuffing up to 16 GPUs into their clusters. According to Olds, CHPC's system was based on PowerEdge R320/R720 servers equipped with dual eight-core Xeon E5-2660 processors on each node, giving it a total of 128 cores. The CHPC system had 512 GB of memory and utilized a Mellanox FDR Infiniband interconnect.
NVIDIA-equipped clusters also dominated in the "Highest LINPACK" component of the Student Cluster Challenge. The four top teams all used NVIDIA GPUs, including: Team Huazhong from China, which won with a score of 8.455 teraflops; Team Edinburgh from the UK, which came in second with 8.321 teraflops; Team Tsinghua from China, which came in third place with 8.132 teraflops; and CHPC from South Africa, which came in fourth with 6.371 teraflops.
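Because the contest caps power rather than hardware, the LINPACK results can also be read as a power-efficiency ranking. The actual wattage each team drew was not reported, so the sketch below assumes every cluster ran at the full 3,000-watt limit, which yields a lower bound on each team's GFLOPS-per-watt:

```python
# Lower-bound power efficiency for the top LINPACK finishers,
# assuming (as a hedge) each system drew the full 3,000 W cap.
# Actual draw was likely lower, so real efficiency was higher.

POWER_CAP_W = 3000  # contest power limit

linpack_tflops = {
    "Huazhong": 8.455,
    "Edinburgh": 8.321,
    "Tsinghua": 8.132,
    "CHPC": 6.371,
}

for team, tflops in linpack_tflops.items():
    # Convert teraflops to gigaflops, then divide by watts.
    gflops_per_watt = tflops * 1000 / POWER_CAP_W
    print(f"{team}: at least {gflops_per_watt:.2f} GFLOPS/W")
```

Under that assumption, the winning Huazhong run works out to at least roughly 2.8 GFLOPS per watt.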
The next Student Cluster Challenge will occur in November at the SC 2013 conference in Denver, Colorado.