December 12, 2011
In Part 1, I advocated that we should explore using ARM-architecture mobile processors in HPC for three reasons: innovation (the marketplace will dictate that future innovation focus on mobile systems), federation (the ARM architecture is ubiquitous and available from many vendors), and customization (the mobile market has a strong history of custom parts).
In addition, the cost of an ARM processor is an order of magnitude lower than that of current commodity processors, and ARM parts can be built to consume less than 10 Watts per multicore processor. It's worth noting that there is already significant movement in this direction. The Mont-Blanc project, coordinated by the Barcelona Supercomputing Center, is already building and experimenting with a prototype cluster using ARM processors to explore the challenges.
However, moving to any new processor architecture is not an easy decision. There are challenges and missing pieces that need to be addressed before we can make the leap, but there are opportunities as well. Here, we explore challenges and opportunities in three areas: processor and system architecture, software, and economics.
Architectural Challenges and Opportunities
If we compare the architectural features of the high-end ARM Cortex-A15 processor to the most common current HPC processors from Intel, AMD and IBM, we find many similar features. ARM processors support virtual memory (with small 4KB and large 64KB pages), a cache hierarchy, cache coherence across multiple cores, and a full set of integer and floating point registers and instructions. The high-end Cortex-A15 supports a modern superscalar, out-of-order execution pipeline. However, some instructions commonly used in performance-sensitive applications to better manage the cache when processing large datasets, such as cache prefetch or nontemporal (noncaching) loads and stores, are not available in the current ARM instruction set. Many embedded processors are used in applications where floating point is unnecessary, but for HPC we should only consider fully functional processors. The table below compares high-end IBM, Intel and AMD processors to the Cortex-A15.
There are question marks for the ARM Cortex-A15 because no implementations are available yet, and the numbers may depend on the vendor and fab technology. The most striking differences are the lower core count and smaller cache size for the ARM Cortex-A15. A manufacturer could produce a chip with multiple quad-core tiles, effectively increasing total core count and cache size, but the cores in different tiles would not be cache coherent. Also, the ARM NEON SIMD instructions do not currently support double-precision floating point.
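The cache-management gap mentioned above is easy to make concrete. Below is a small C sketch (my own illustration, not taken from any ARM document) of how performance-sensitive code issues software prefetch hints while streaming through a large array, using the GCC/Clang `__builtin_prefetch` builtin; on a target whose instruction set has no prefetch, the compiler simply drops the hint. The 64-element lookahead distance is an arbitrary illustrative choice.

```c
#include <stddef.h>

/* Sum a large array while hinting the cache hierarchy to fetch data
 * well ahead of use. __builtin_prefetch(addr, rw, locality) is a
 * GCC/Clang builtin: rw=0 means "for reading", locality=0 means
 * "low temporal locality" (don't keep it resident). */
double sum_with_prefetch(const double *a, size_t n)
{
    double sum = 0.0;
    for (size_t i = 0; i < n; i++) {
        if (i + 64 < n)
            __builtin_prefetch(&a[i + 64], 0, 0);  /* look 64 elements ahead */
        sum += a[i];
    }
    return sum;
}
```

Hints like these cost nothing semantically but can hide memory latency on large working sets, which is exactly why their absence matters for HPC kernels.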
Current ARM processors, including the Cortex-A15, are 32-bit processors. One of the reasons to build exascale machines is to process very large datasets, and this will benefit from, if not demand, a true 64-bit processor. ARM processors support large physical memory, but that's not the same as true 64-bit registers and instructions. There had been rumors of 64-bit ARM processors for the past year; last month, ARM disclosed details of the ARMv8 architecture, which supports both classical 32-bit ARM instructions, and a new 64-bit execution state with a true 64-bit instruction set, A64.
Importantly, the A64 NEON SIMD instructions support double precision, as well as full IEEE rounding modes, denormalized numbers, and NaNs. Products based on the 64-bit ARM architecture are still in the future, but Applied Micro Circuits Corp. demonstrated the first 64-bit ARM processor implemented on a Xilinx Virtex-6 FPGA. NVIDIA is also reportedly a lead partner for the 64-bit ARMv8 architecture.
ARM-based products are typically systems-on-chip, with variations in the ARM core used and in the selection of devices and interfaces included on the chip. This is both an opportunity and a challenge. One of the advantages of the ARM architecture is the wide selection of vendors supplying parts, so that's an opportunity. However, each vendor will have a slightly different feature set. Today, when choosing between Intel and AMD, a system vendor or customer may consider the slight differences in instruction sets, cost and performance, and perhaps the difference in motherboard design or processor interface (QuickPath vs. HyperTransport), but otherwise the features are essentially the same. Between ARM suppliers, the features may differ substantially, making the selection process much more interesting.
ARM+GPU or (more generally) ARM+accelerator is a likely configuration for products aimed at HPC. Accelerator-based systems are becoming increasingly prevalent, and there are several efforts addressing the programming challenges. Current accelerators are NVIDIA and AMD GPUs, and the future Intel MIC will compete directly with them. Now, Texas Instruments seems to be testing the HPC waters with a new multicore DSP. These all connect to the host over the PCI Express bus, which, although a relatively fast I/O bus, is very slow relative to memory speeds. AMD is integrating stream processors (formerly known as GPUs) on the same chip as the processor; right now these do not target the highest performance, but the plan seems to be to move in that direction.
We should see more advantages for accelerated computing with tighter integration. But no one other than AMD can integrate an accelerator on chip with AMD processors, and similarly for Intel. One could integrate an accelerator more closely with the processor over AMD's HyperTransport (which is open) or Intel's QuickPath (which is not), but we've seen little movement in that direction, in spite of AMD's short-lived Torrenza initiative. However, ARM vendors will have more opportunities for tighter accelerator integration. NVIDIA's Project Denver chips will have ARM cores integrated at some level with NVIDIA GPUs, for instance. Adapteva has announced multicore-architecture IP that could be produced as a standalone chip, or possibly included on chip with ARM cores or other devices.
It's hard to compete with Intel's silicon technology; arguably no one else has the resources to support advanced process technology at the same pace. While Intel is starting production of 22nm Ivy Bridge processors, targeting delivery in the first half of 2012, most other vendors are still producing microprocessors at 32nm and 45nm feature sizes, or a 0.9 shrink of those. However, ARM is aggressively exploring future technologies, and is working with TSMC on the design of the Cortex-A15 in a 20nm process.
Using mobile processors such as ARM opens the door to new levels of innovation. IBM is building some of the world's fastest computer systems out of relatively slow (1.6GHz) processors. The Blue Gene/Q design is a carefully managed balance of performance, power and cost, as were its predecessors. With a variety of ARM-architecture chip vendors, system architects will have even more opportunity (and challenge) to innovate and optimize system performance balanced with power, cost and features.
The software story for ARM cores is both good and bad. Various operating systems are available for ARM architecture now, including several distributions of Linux and various real-time and mobile OSes; Microsoft has announced that it will support the next Windows version on the ARM architecture as well. It's not clear what support is available for the variety of devices that we find in HPC, such as high-performance network interfaces or compute accelerators.
There are several good C and C++ compilers for ARM cores, including GCC and compilers from ARM Ltd.; however, the only Fortran implementations available on ARM cores are GNU Fortran and Fortran-to-C preprocessors. As near as I can tell, there isn't even an official Fortran ABI yet. MathWorks has some support for the ARM architecture already. The ARM instruction set has special support for just-in-time compiled languages, such as Java, Python, and Perl. Other tools will be needed as well; debuggers are available, and Allinea just announced that its tools will support ARM-based products as part of the Mont-Blanc project.
Other software needs in the HPC space include optimized math (BLAS, LAPACK, more) and communication (MPI) libraries. Unoptimized versions of these can probably be generated directly from open source. At this point, there is a distinct lack of support for the ARM architecture by any third-party library or application vendor, such as ANSYS, CD-Adapco, Gaussian, LSTC, and others.
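As a sketch of what "generated directly from open source" means in practice, here is roughly what a reference (netlib-style) dgemm kernel boils down to, written here for row-major n x n matrices. The function name and layout are my own simplification, not the actual BLAS interface; the point is that a straight compile of reference source gives you this loop nest, which vendor-tuned libraries restructure for cache and SIMD, often running an order of magnitude faster.

```c
#include <stddef.h>

/* Unoptimized reference-style matrix multiply:
 *   C = alpha * A * B + beta * C
 * for row-major n x n matrices. No blocking, no vectorization --
 * exactly the kind of baseline an ARM port would start from before
 * any architecture-specific tuning. */
void dgemm_ref(size_t n, double alpha, const double *A,
               const double *B, double beta, double *C)
{
    for (size_t i = 0; i < n; i++)
        for (size_t j = 0; j < n; j++) {
            double acc = 0.0;
            for (size_t k = 0; k < n; k++)
                acc += A[i*n + k] * B[k*n + j];
            C[i*n + j] = alpha * acc + beta * C[i*n + j];
        }
}
```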
This is a classical rock-paper-scissors problem. The software vendor won't invest in the port until there is sufficient demand, the demand won't be there until enough customers have these machines, and customers won't buy the machines until the libraries and applications are available. The minisupercomputer manufacturers of the 1980s all had exactly the same problem. Current HPC suppliers benefit by standardizing on just one or two instruction sets, hence creating sufficient aggregate demand to make the application vendors take notice. Solving this problem for the ARM ecosystem may require a large customer (read government lab) to take the lead.
However, a unique advantage for HPC is that much of the software is under continual development, and is regularly reconfigured, recompiled and rebuilt to improve the model or tune the performance. Many of these codes are community applications that are available in source form, and many more are developed in the same organization where they are used. As a result, the HPC space is not as dependent on binary compatibility or on migration of a large body of proprietary licensed applications. Unlike the general server market, many HPC users are ready to experiment and explore with just the right mix of operating systems and software development tools.
This brings us to the hard reality of the economics of ARM products, and customization in particular. For the most part, the mobile industry doesn't deal in standard parts; it thrives on mass customization, producing the right part for each specific market. If we move to adopt ARM-based processors in HPC, we really want a chip with all the parts and interfaces we will use, and without the ones we won't. Unfortunately, the volume required for really custom HPC parts just isn't there.
Apple announced that the new iPhone 4S sold more than four million units over the opening weekend, worldwide. If I add up all the cores of the Top500 computers from November 2011, the sum is about 9.2 million; if I add up the processor chips, the sum is about 1.7 million. To get a chip vendor interested in producing a custom part for your market, you've got to demonstrate that you have enough volume to support the cost, and that your part is more profitable than any other part the fab plants might produce instead. Just producing the mask set can cost upwards of a million dollars. If you can demonstrate a volume on the order of a million chips (per year), you can get the interest of any of a number of vendors. But even if we replaced every processor chip in every computer in the Top500 list in a single year, we are just getting to the volumes required.
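The amortization behind that argument is worth making explicit. A quick sketch, using the figures from the text (all dollar amounts are illustrative, not quotes from any foundry):

```c
/* Back-of-envelope economics for a custom part: a mask set on the
 * order of $1M, spread over a candidate market's annual chip volume. */
double mask_cost_per_chip(double mask_set_cost, double chips_per_year)
{
    return mask_set_cost / chips_per_year;
}

/* Annual volume needed before the mask set adds at most `budget`
 * dollars to each chip. */
double volume_for_budget(double mask_set_cost, double budget_per_chip)
{
    return mask_set_cost / budget_per_chip;
}
```

Replacing every Top500 processor chip yearly (about 1.7 million) amortizes a $1M mask set to well under a dollar per chip, but that scenario assumes the entire list turns over every year; a single weekend of iPhone 4S sales clears the same bar more than twice over.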
Given the interest in the server market in lower-power alternatives, there are likely to be several vendors supplying ARM-based parts tailored for enterprise servers, such as HP's Redstone system, designed with Calxeda's ARM-architecture SoCs. HPC may end up in much the same situation it is in with x86: having to choose between two (Intel and AMD) or more (all the ARM IP licensees) vendors delivering chips with the same instruction set, but different cost/power/performance profiles. We would give up on-chip customization, but still benefit from any cost and power advantages.
The benefits of using mobile processors for HPC are power and cost. The power load of mobile processors is much lower than the high-performance Intel or AMD chips in most of the Top500 systems, typically well under ten Watts, instead of 50-100 Watts or more. Moreover, at sufficient volume, the cost of the chips themselves can be significantly lower, tens of dollars instead of hundreds or thousands of dollars. Some of this advantage is reduced if it takes multiple chips to reach the same performance as a single Intel or AMD processor, but unless that multiple is an order of magnitude, mobile processors still come out ahead. If the lower processor cost and power load results in a lower purchase price and lower cost of operation, the HPC market itself could grow.
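That "order of magnitude" break-even is easy to make precise. A small sketch, with wattages drawn from the rough ranges above (illustrative figures, not measurements of any particular part):

```c
#include <stdbool.h>

/* How many mobile chips can stand in for one conventional processor
 * before the power advantage disappears? The break-even multiple is
 * simply the ratio of the two power loads. */
double break_even_multiple(double mobile_watts, double x86_watts)
{
    return x86_watts / mobile_watts;
}

/* True if k mobile chips together still draw less power than one
 * conventional processor. */
bool mobile_wins_on_power(double mobile_watts, double x86_watts, double k)
{
    return k * mobile_watts < x86_watts;
}
```

With ~10 W mobile parts against a 100 W x86 processor, the break-even multiple is exactly the order of magnitude the text describes: five mobile chips per x86 socket still halves the power budget, while twelve would lose it.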
It's time to explore alternatives to current standard processors for HPC, and the ARM architecture appears to be the best, and probably the only, viable candidate. However, there are challenges and opportunities if we choose to go this route. Even with only two x86 vendors, there are instruction set, performance and interface differences; with the ARM architecture, the number of suppliers is quite a bit larger, and the differences will be magnified. However, the opportunities for innovation and integration of accelerators are quite exciting.
Just as it took years to get all the software we needed for HPC on our large-scale Linux clusters, it will take time to port to the ARM architecture, and convince the third-party software vendors to port their software. To make this economically feasible, we need to settle on a small set of common features and operating systems.
Finally, the economics may not play fully in our favor. We benefit from commodity x86 parts because most of these are sold in personal computers or workstations or servers. If we find standard ARM-based parts that fit our needs, we can enjoy the same benefits. But standard parts don't allow for the customization that is another important potential benefit, and customization reduces the volume to a level that is no longer economically viable. However, the potential for lower purchase price and cost of operation is quite appealing, and may draw new customers to HPC. It may also force the mainstream vendors to focus more on lower cost and lower power parts, giving essentially the same benefits as a move to mobile processors. It will be an interesting next few years, as the HPC community explores alternatives on the way to Exascale.