May 17, 2013
The Xeon Phi coprocessor might be the new kid on the high performance block, but of all the first-rate kickers of Intel's new tires, the Texas Advanced Computing Center (TACC) got the first real jab in with its top-ten Stampede system.
Even before the completion of the 6,400-node Dell-built hybrid, the team was able to work with the Knights Ferry early development platform, eventually plugging in Knights Corner parts last October. Since then, they've pushed their 6,880 Phi coprocessors to new programming and performance limits against the flanks of both their Sandy Bridge and NVIDIA GPU capabilities.
One of the practical challenges of experimenting with mixed architectures at a major research center is that there are many users with a broad range of applications. Many of the users are scientists, but not of the computing variety. When Stampede kicked up its first dust at the beginning of the year, it supported more than 600 scientific and engineering projects from over 1,000 researchers. This meant a switch to a new system, and a new learning curve, even if it was offset by some x86 comfort.
On the Phi front, porting so many users' code was relatively simple, which helped in getting up and running, but there is far more to the story than the pure port. According to TACC Director of Scientific Applications, Dr. Karl Schulz, getting code clicked over to Phi is the relatively easy part (unless the code relies on a large number of third-party libraries). It's getting the code optimized that's the real challenge.
To put this in perspective, each coprocessor's 61 cores support four hardware threads apiece, and with one core typically reserved for the operating system, users are looking at 240 threads that the kernel has to scale across. Schulz says what they have to keep reminding users is that if they have a program, say an OpenMP application, and it doesn't scale well on Sandy Bridge, it's not going to miraculously scale on Phi. It may sound simple, but optimization is where the hidden difficulty, and the ultimate value, lies. Just as with GPUs and other accelerators, the real performance can only be reached through the same optimization process that accelerator programming demands.
“You can port easily, but the things you do in CUDA to vectorize your code still have to be done for Phi,” he explained. “If you don't vectorize on MIC, you're not going to get the insane performance you were hoping for. You have to have well-vectorized code, you still have to think about affinity and processor placement, and you still have to have a kernel that supports high degrees of parallelism.”
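What that looks like in practice can be seen in a minimal sketch, assuming a trivial SAXPY kernel rather than any of TACC's actual codes: the loop has to be threaded for the card's roughly 240 hardware threads and kept simple enough for the compiler to vectorize, while thread placement is steered at run time. The compiler flags and environment settings below are era-typical assumptions, not TACC's documented recipe.

```c
/* Minimal sketch, not TACC's code. Era-typical build and run settings
 * (assumptions):
 *   icc -std=c99 -openmp -O3 saxpy.c -o saxpy
 *   export OMP_NUM_THREADS=240
 *   export KMP_AFFINITY=balanced
 */
#include <stdio.h>
#include <stdlib.h>

void saxpy(float a, const float *restrict x, float *restrict y, long n)
{
    /* The loop is spread across threads; the unit-stride body is what
     * lets the compiler vectorize it for the wide MIC vector units. */
    #pragma omp parallel for
    for (long i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}

int main(void)
{
    long n = 1L << 24;
    float *x = malloc(n * sizeof *x);
    float *y = malloc(n * sizeof *y);
    for (long i = 0; i < n; i++) { x[i] = 1.0f; y[i] = 2.0f; }

    saxpy(3.0f, x, y, n);
    printf("y[0] = %f\n", y[0]);   /* expect 5.0 */

    free(x);
    free(y);
    return 0;
}
```

The kernel itself is the easy part; as Schulz notes, whether it scales to hundreds of threads and vectorizes cleanly is what determines the payoff.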
Following the emphasis on optimization, Schulz and his team came to a simple but surprising finding. When users took the time to optimize thoroughly for Phi, they regularly found they were also getting far better performance out of the Sandy Bridge side, meaning there had been some floating point scrap left on the table that the MIC optimization effort sniffed out.
On the “easy” part of the port-to-Phi equation, Schulz notes that the hype around the ease of moving code to the coprocessor is hard to argue with. “You can just fundamentally compile Fortran code if you want… So in the case of full native offload, for instance, assuming you don't have a lot of third party library requirements, you take your code, compile the whole thing, and run it on Phi—even completely ignoring Sandy Bridge (in our case) or Ivy Bridge.” He points to a case where his team took a million-line Fortran code and demonstrated this. In short, he says that if code already scales reasonably well across threads, users can expect reasonably good performance on Phi.
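To make the "compile the whole thing and run it on Phi" path concrete, here is a hedged sketch; the `-mmic` cross-compile flag and the copy-and-ssh launch steps are assumptions typical of Knights Corner systems of that era, not TACC's exact workflow.

```c
/* Sketch of the native path described above; flags and launch steps are
 * era-typical assumptions, not TACC's exact recipe:
 *
 *   icc -O3 -openmp app.c -o app.host        # host (Sandy Bridge) binary
 *   icc -O3 -openmp -mmic app.c -o app.mic   # native Phi binary
 *   scp app.mic mic0:/tmp/ && ssh mic0 /tmp/app.mic
 */
#include <stdio.h>
#include <omp.h>

int main(void)
{
    /* Identical source for both targets; only the compile line differs. */
    #pragma omp parallel
    #pragma omp single
    printf("running with %d OpenMP threads\n", omp_get_num_threads());
    return 0;
}
```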
There are some other unique programming tales being spun on Phi as well. He says the CUDA folks at TACC who have solid experience with GPUs can't port their CUDA code to Phi directly, of course, but they have already gone through their code to target the parts that GPUs kill on, and those parts also tend to do well on Phi. For these users, all they need to do is take their CUDA code, write it back to C (even though chances are it's already in C anyway), and they're ready to roll with Phi pretty quickly with, again, what Schulz says is “reasonably good performance.”
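As an illustration of that CUDA-to-C path, consider a hypothetical stencil kernel; neither version below comes from TACC, and the mapping shown is simply the general pattern Schulz describes, with the GPU thread index turning back into a loop index.

```c
/* Illustration only: how a hypothetical data-parallel CUDA kernel maps
 * back to plain C for Phi. The CUDA original would look roughly like:
 *
 *   __global__ void stencil(const float *in, float *out, int n) {
 *       int i = blockIdx.x * blockDim.x + threadIdx.x;
 *       if (i > 0 && i < n - 1)
 *           out[i] = 0.25f * in[i-1] + 0.5f * in[i] + 0.25f * in[i+1];
 *   }
 *
 * The per-element body survives intact; the GPU thread index simply
 * becomes the index of an OpenMP-parallel loop.
 */
void stencil(const float *in, float *out, long n)
{
    #pragma omp parallel for
    for (long i = 1; i < n - 1; i++)
        out[i] = 0.25f * in[i - 1] + 0.5f * in[i] + 0.25f * in[i + 1];
}
```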
The most interesting element of programming for Phi that they're probing at TACC is a different model altogether: neither offload nor native. Members of his team are essentially working on running MPI between the host and the MICs. Normally, users assume every processor has equal capability, so they spend a lot of time decomposing their geometries into equal domains and farming those out to all the processors. But when someone wants to run part of a job on the Xeon and part on the Phi, the two obviously won't run at the same speed; the serial portions will run much slower on Phi, while the well-vectorized parts will be faster. The point is that applications flexible enough to decompose into domains of unequal size will have an easier time taking advantage of all the performance on offer.
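A hedged sketch of that host-plus-MIC MPI idea follows. The `__MIC__` macro is the Intel compiler's predefined symbol for code built for the coprocessor, while the relative weights and cell counts are invented placeholders for illustration, not measured numbers from TACC.

```c
/* Sketch only: host ranks and coprocessor ranks join the same MPI job,
 * but each rank is handed a domain sized to its assumed relative speed
 * rather than an equal slice. Weights below are placeholders. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, nranks;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

#ifdef __MIC__
    double weight = 1.0;   /* serial-heavy work runs slower on the card */
#else
    double weight = 3.0;   /* host ranks get proportionally more cells  */
#endif

    /* Size this rank's share of a global 1-D domain by relative weight. */
    double total_weight = 0.0;
    MPI_Allreduce(&weight, &total_weight, 1, MPI_DOUBLE, MPI_SUM,
                  MPI_COMM_WORLD);

    const long global_cells = 12000000L;
    long my_cells = (long)(global_cells * (weight / total_weight));

    printf("rank %d of %d owns %ld cells\n", rank, nranks, my_cells);

    MPI_Finalize();
    return 0;
}
```

The design choice being probed is exactly this: give up the assumption of equal domains so the Xeon and the Phi can each be kept busy at their own pace.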
With so much processing power just a port away, more experienced users have been able to run micro-benchmarks on their code. While it's too early to give a concrete reference for comparing approaches, Schulz says there are some apps that are a big win for Phi, some where the gain is modest (if there is one at all), and other cases with rather dramatic slowdowns, the same of which can be said for any accelerator.
On that note, Schulz says the need for hybrid programming is already great, but he expects it to become a necessity going forward, especially in an era he predicts will be filled with systems much like TACC's new machine, which require teaching some old code new tricks.
Schulz gave a rather remarkable presentation on the finer points of the TACC system, from the file system to unique cabling done with his wife's hairbands; it is worth a look.