July 06, 2007
"The slow one now will later be fast, and the present now will later be past, the order is rapidly fading. The first one now will later be last, for the times they are a-changin'..." sung by Bob Dylan, 1964.
Over 1200 participants from 44 countries attended the 22nd International Supercomputer Conference (ISC) from June 26-29, and 85 exhibitors took part in the associated exhibition in the city of Dresden.
This annual ISC event enables many Europeans to appraise new technology from Japanese and U.S. vendors, and to hear from our American colleagues where they stand on the issue of leadership in large-scale scientific and technical computing. The presentations at the conference were broad-based and some were at the cutting edge of developments. ISC2007 provided an opportunity for vendors to peddle their wares and share with us their plans for future products.
As usual, Professor Dr. Hans Meuer and his team from the University of Mannheim put on a fine vendor exhibition, a collection of stimulating presentations and a seamless conference in the beautiful historical city of Dresden. The main sponsor this year was Microsoft, a vendor with an aspiration to capture a big share of the parallel and HPC software market. The Microsoft-sponsored Saxon night at the Albrechtsberg Palace was exquisite.
Burton Smith from Microsoft has been quoted extensively from this conference, and the next two paragraphs highlight Burton's main ideas for the new parallel languages needed in the multicore world.
In parallel languages there are (at least) two promising approaches: functional programming and atomic memory transactions. Neither is completely satisfactory by itself. Functional programs don't allow mutable state, and transactional programs implement dependence awkwardly. Database applications show the synergy of the two ideas: SQL is a "mostly functional" language, while transactions allow updates with atomicity and isolation. Many people think functional languages are inefficient. Sisal and NESL are excellent counterexamples to that view, as both competed strongly with Fortran on Cray systems. Others believe the same is true of memory transactions, but this remains to be seen, as we have only begun to optimise.
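To make the contrast concrete, here is a toy Python sketch (my own illustration, not Burton's): a pure function free of mutable state sits alongside an atomic read-modify-write standing in for a transaction. A lock simulates atomicity here; a real transactional memory would validate reads and retry on conflict rather than block.

```python
import threading
from functools import reduce

# Functional style: no mutable state -- each step builds a new value,
# so independent calls can run in parallel without interfering.
def squared_sum(values):
    return reduce(lambda acc, v: acc + v * v, values, 0)

# Transactional style: a read-modify-write that must appear atomic.
# A plain dict stands in for shared memory; the lock plays the role
# a commit/validate step would play in a real transactional memory.
_lock = threading.Lock()

def atomic_transfer(accounts, src, dst, amount):
    with _lock:                  # the whole block commits, or not at all
        if accounts[src] >= amount:
            accounts[src] -= amount
            accounts[dst] += amount
            return True
        return False
```

The functional piece parallelises because it shares nothing; the transactional piece coordinates precisely where state must be shared, which is the synergy Burton points to.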
We need to support multiple programming styles, functional and transactional, data parallel and task parallel, message passing and shared memory, declarative and imperative, implicit and explicit. We may need several languages to accomplish this, similar to the use of multiple languages today with a helpful language interoperability bridge (e.g. .NET). It is essential that parallelism be exposed to the compiler so that the compiler can adapt it to the target system. It is also essential that locality be exposed to the compiler and for the same reason.
The only other thing I can say is that with the advent of multicore and future manycore chips, Microsoft is aware that producing parallel software has become core for their future business. This means that their entry is for real.
Several hardware vendors highlighted how they intend to deliver on the productivity promise and a sustained petaflops by 2010 and beyond. These included Cray with Cascade; IBM with the precursor Power6 and Blue Gene/P product lines, leading to upgraded petaflops versions at a later date; and other vendors such as NEC, Fujitsu, Bull and Sun Microsystems with its 32-thread Niagara chip. These companies all have roadmaps heading for the petaflops milestone.
The conference provided a broad range of talks. The "geeks" embraced the multicore revolution and relished the idea of having manycores and, as Thomas Sterling (LSU) expounded, myriad-cores. Sterling found ample support from John Shalf (LBNL) who betrayed his enthusiasm with an audacious presentation title: "Overturning the Conventional Wisdom for the Multicore Era: Everything you know is wrong." I am wondering whether John reflected on the semantics of such a sweeping assertion. I leave it to the reader to decide, but as for myself, some of what I know about multicore could probably be wrong, but not everything.
John went on to say that power efficiency motivates manycore design and made the case for using manycores with a simplified instruction set and shorter pipelines. As John observed: "In the old computer world, innovation trickles down from high end computing to the PC and consumer electronics. In the new world, innovation trickles up from the PC and consumer electronics to HPC."
If only the world were that simple. The real issue is not about old and new, but rather of "good" ideas being adopted and then transferred to a different application domain. For example, in the 1980s PC innovations related to human-machine interfaces were transferred to the high end computers. At that time, the emphasis of high end computers, such as the Cray-1, was on using its scarce resources for numerical calculations, neglecting the human-machine interface. As soon as PCs arrived with easy-to-use interfaces, the high end user community demanded, and soon got, a better deal. Another example is in storage devices, where the developments were powered by the music industry, and the technology was then taken up by the computer industry and HPC.
To be fair, John recognized that latency tolerance and the lack of software to exploit multicore are key limiting factors. For me this talk effervesced with enthusiasm (always a good thing) about manycores, but offered sparse practical solutions for overcoming the difficulties. The multicore era can become a reality, but the pain of this transition needs to be eased for the long-suffering application user. In the words of T.S. Eliot: "Between the idea and the reality... falls the Shadow."
At this point a reality check is in order. The increasing demand for higher performance can no longer be achieved through Moore's law processor improvements and a one-size-fits-all system mentality. HPC users are no longer getting the performance advances they need from microprocessors. Commercial response to Moore's law slowdown has been to provide multicore and promise manycore chips. These are general-purpose architectures, optimised for the most widely used applications. But, as it is widely recognized, when scientific computing migrated to commodity platforms, interconnect speed, both in terms of bandwidth and latency, became the limiting factor on application performance and remains a bottleneck to this day.
The new mantra is that although multicore commodity processors will deliver some improvement, exploiting parallelism through a variety of processor technologies using scalar, vector, multithreading and hardware accelerators, e.g., FPGAs, GPUs, etc., creates the greatest opportunity for application acceleration.
Near-future supercomputing systems will combine multiple processing architectures into a single scalable system. From the user's point of view, one has the application program, followed by a transparent interface comprising libraries, tools, compilers, scheduling, system management and a runtime system. The intention is to adapt the system to the application -- not the application to the system.
As readers of this publication are aware, there are many challenges to be overcome, not least in memory and network subsystem capabilities as well as in managing software complexity, on the way to the petaflops productivity promise. In current architectures, processors are separated from memory, from which they fetch operand data to feed the arithmetic functional units. This is accentuated by the network latency, when servicing the many thousands of processors required for a petaflops system. Thus, delays tend to accumulate.
In practice, scaling an SMP or cluster to the large numbers of processors required to achieve petaflops is very difficult. Efficiency degrades sharply because of cache coherence requirements and operating system jitter. The key task for system software in heterogeneous systems lies in scheduling strategies and other system functions that maximize the performance extracted from scarce system resources, notably the heterogeneous system's limited global bandwidth -- in other words, in how one minimises and hides latency.
Thomas Sterling gave two talks: one on the HPC achievements and impact since last year and the other on multicore -- the next Moore's Law. He used as an exemplar the IBM Blue Gene/L and its successor the Blue Gene/P, illustrating the emergence of multicore processors on one die used to stem the power consumption explosion. For new systems, the flops/watt metric is expected to become as important as the flops/dollar metric became in the 1990s.
Thomas pointed out that multicore exploits the extra real estate afforded by increased circuit density and increases the number of functional units per chip (spatial efficiency), which in turn limits energy consumption per operation. Multicore will improve on Moore's Law in respect of peak performance, while the number of pins grows much more slowly. An example is the IBM Cell processor, a 0.25 teraflops chip with 9 cores. To address the multicore challenge, one needs more than an SMP on a chip: one needs parcels for latency hiding, destination-locale split-phase transactions and message-driven computing. Latency hiding with parcels will deliver one to two orders of magnitude in performance benefits.
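Parcel-style, message-driven execution can be illustrated with a toy Python sketch (the names and structure are mine, not the actual ParalleX design): instead of fetching remote data and stalling on the reply, the work itself is sent as a parcel to the locale that owns the data and executes there.

```python
from collections import deque

# Toy "parcel" model: a parcel carries (data key, action) to the owning
# locale, so computation moves to the data rather than the reverse.
class Locale:
    def __init__(self, data):
        self.data = data          # values this locale owns
        self.inbox = deque()      # incoming parcels

    def send(self, parcel):
        self.inbox.append(parcel)

    def run(self):
        results = []
        while self.inbox:
            key, action = self.inbox.popleft()
            results.append(action(self.data[key]))  # executes at the data
        return results

locale = Locale({"x": 21})
locale.send(("x", lambda v: v * 2))   # move the work, not the data
print(locale.run())                    # [42]
```

The sender never waits on a round trip; the latency of the network is hidden behind whatever other parcels the locale has queued, which is the essence of the split-phase idea.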
Thomas then described work at LSU where his team is currently exploring key challenges of a new class of computer architecture to confront efficiency, scalability, power and reliability. This requires a paradigm shift of execution and programming models. There is a desperate need for intrinsic latency hiding mechanisms to be incorporated in the infrastructure of programming and runtime resource management.
He went on to say: "We are developing a new model for computing called "ParalleX," extending our earlier work in processor in memory (PIM), and combining these with new work in static dataflow to provide a new class of architecture that adaptively responds to variations in temporal locality. The short-term impact is that the execution model has a spin off of a programming methodology that can operate on conventional architecture. It should improve latency hiding and scalability."
Jose Duato, from the Technical University of Valencia, gave an excellent keynote presentation describing the pros and cons of systems based on commodity chips, current trends and synergies, feasible future system architectures and identified interconnect as the key subsystem.
He started by explaining that research in academia usually focuses on narrow topics, e.g. processor micro-architecture, memory hierarchy, cache coherence protocols, interconnection networks, and so on. Even when radically new solutions are proposed, e.g. a cost-effective fully adaptive routing algorithm, those solutions only improve a subset of the system and do not eliminate the inefficiencies that are a direct consequence of the system architecture, which may not be globally optimal. This means too many resources (or too much of a power budget) are devoted to improve a component that is not the system bottleneck. A global system view is required even when addressing problems in a particular subsystem.
When looking at computer systems from a global perspective, researchers start (or should start) by looking at application requirements, but there is a fundamental flaw in this approach: Existing applications were designed for existing computer systems and new computer systems are designed to run existing benchmarks faster. In this global optimisation process, practitioners neglect the opportunity to replace the existing programming model and style and may end up proposing techniques to recover parallelism that has been lost due to previous optimisations.
In some proposed solutions, applications are written in such a way that most parallelism is lost, and speculation techniques must be used to recover it; these techniques tend to increase power consumption. A more efficient approach is to redesign the inner program loops, transmitting each value after computing it by specifying this in the program, and letting an optimised implementation of MPI decide whether each value should be transmitted immediately or packed together with other values into a single message to reduce the communication start-up overhead. The correct solution is a truly global view.
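The aggregation Duato describes can be sketched as follows (a hypothetical illustration of the idea, not real MPI internals): the inner loop hands values over one at a time, and the communication layer packs them into a single message once a buffer fills, amortising the per-message start-up cost.

```python
# Hypothetical sketch of the aggregation an optimised MPI layer might do:
# the application sends each value as it is computed, but the layer only
# flushes a packed message when the buffer fills (or is drained explicitly).
class AggregatingSender:
    def __init__(self, transport, batch_size=4):
        self.transport = transport    # callable taking a list of values
        self.batch_size = batch_size
        self.buffer = []

    def send(self, value):
        self.buffer.append(value)
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        if self.buffer:
            self.transport(list(self.buffer))  # one message, many values
            self.buffer.clear()

messages = []
sender = AggregatingSender(messages.append, batch_size=3)
for v in range(7):
    sender.send(v)      # inner loop transmits each value as computed
sender.flush()          # drain the tail
print(messages)          # [[0, 1, 2], [3, 4, 5], [6]]
```

Seven logical sends become three physical messages, so the start-up overhead is paid three times instead of seven -- the decision is made by the communication layer, not the application.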
The heat dissipation wall forced microprocessor manufacturers to move to multicore chips, which need much less power for the same peak computing power. Manufacturers are increasing the number of cores per chip while running them at lower clock frequencies, though at least one core should be as fast as the fastest core in the previous generation chip. Many users do not know what to do with the additional cores (beyond running anti-virus and firewall). The current trend will soon face the memory bandwidth wall problem of how to feed the cores. This is further aggravated when running applications that do not share data (e.g. multiple virtual servers) and/or when including the graphics accelerator on the same chip.
Necessity is the mother of invention. Accelerators, which can execute repetitive compute-intensive functions much faster than host processors, are being utilised. Different flavours -- GPU-based accelerators, FPGA-based accelerators, DSP-based accelerators -- are available, but these are not good for code fragments with high memory bandwidth requirements unless the accelerator implements a large and fast local memory (e.g., graphics cards). They are nevertheless becoming popular due to the availability of compilers and programming tools.
With multicore chips, it is no longer possible to exploit parallelism in an automatic mode. Applications need to be multithreaded. It has been quite easy to convince desktop and laptop users that a second core is beneficial even for running single-threaded applications. Perhaps one can run anti-virus and firewall on the second core, but what does one do with four cores?
Simpler programming models are likely to become much more widespread than more sophisticated ones (e.g., shared memory versus message passing). Exposing architectural details to application developers may make a given architecture harder to accept; observe the Xbox 360 versus PlayStation 3 battle.
Looking at synergies, mass-market applications that require more computing power (e.g., video games) are forcing application developers toward parallel programming. The number of programmers able to develop multithreaded applications is likely to increase at a fast pace during the next few years. Most of these application developers will become familiar with shared-memory models, but not so much with message passing. This trend is likely to make parallel programming more popular, but shared-memory machines are likely to be preferred over current clusters.
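The shared-memory style these new developers will learn first can be sketched in Python (a toy of my own, not from the talk): each thread does its private work independently and only the update to shared state is serialised.

```python
import threading

# Minimal shared-memory sketch: four workers accumulate into one shared
# total. The private summation needs no coordination; only the shared
# update is protected by the lock.
total = 0
lock = threading.Lock()

def worker(values):
    global total
    partial = sum(values)     # private work, no coordination needed
    with lock:                # only the shared update is serialised
        total += partial

chunks = [range(i, i + 25) for i in range(0, 100, 25)]
threads = [threading.Thread(target=worker, args=(c,)) for c in chunks]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(total)   # 4950, the sum of 0..99
```

The appeal is obvious: the code reads almost like its sequential counterpart, which is exactly why Duato expects shared-memory machines to be preferred over message-passing clusters by this new generation of programmers.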
Multicore processors have become commodity components, while chip architecture and system architecture have become much more relevant. Many system characteristics need to be scrutinized: core count; chip interconnect type; core type (homogeneous versus heterogeneous); cache hierarchy and design; pin bandwidth; and memory organization (local versus shared, hardware coherence versus software coherence versus coherence domains versus non-coherent). There's also the issue of network interfaces and where we attach them. As for storage, we now need to choose from traditional hard disks, solid-state disks and non-volatile memory (i.e., FLASH).
As for memory subsystems, large-scale cache coherent NUMA architectures are based on the idea of using physically distributed, logically shared memory. Caches are mandatory to deliver good performance and keeping them coherent in large systems is a nightmare. Cache coherent NUMA architectures are very expensive and not very scalable. Non-coherent shared-memory architectures as well as shared-memory architectures with multiple coherence domains are feasible. Accelerators can play a vital role in increasing computing power and reducing power consumption. A feasible, scalable, flexible and cost-effective approach for future systems is a global address space, not necessarily coherent, where each page has configurable semantics (coherent, non-coherent, transactional).
Transactional memory is a concurrency control mechanism for regulating access to shared memory. A transaction is a piece of code that executes a series of reads and writes to shared memory, which logically occur at a single instant in time, and transactions are typically implemented in a lock-free way. This addresses the most difficult task in developing multithreaded applications: making sure that the program actually works (with locks, for example, deadlocks may occur when combining individually correct code fragments).
Transactional memory is optimistic: every thread completes its modifications to shared memory without regard for what other threads might be doing, recording every read and write, which are validated in the commit stage. Implementing part of the system memory as transactional memory could be the solution for storing shared data in parallel applications while simplifying programming.
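A minimal sketch of this optimistic scheme (purely illustrative, not a production STM): each cell carries a version number, a transaction records what it reads, and the commit stage validates that nothing it read has changed before applying the writes; on a conflict it would simply retry.

```python
# Toy optimistic transaction: versioned cells, a read log, and a
# validate-then-apply commit stage. Illustrative only -- a real STM
# would also make the commit itself atomic under true concurrency.
class Cell:
    def __init__(self, value):
        self.value = value
        self.version = 0

def run_transaction(body, cells):
    while True:
        # Record the version of everything we read (the read log).
        read_log = {name: c.version for name, c in cells.items()}
        writes = body({name: c.value for name, c in cells.items()})
        # Commit stage: validate every read, then apply the writes.
        if all(cells[name].version == v for name, v in read_log.items()):
            for name, value in writes.items():
                cells[name].value = value
                cells[name].version += 1
            return
        # Validation failed: another thread committed first -- retry.

cells = {"counter": Cell(0)}
run_transaction(lambda snap: {"counter": snap["counter"] + 1}, cells)
print(cells["counter"].value)   # 1
```

No thread ever blocks while doing its work; conflicts cost only a retry, which is cheap when contention is low -- the optimistic bet that makes the approach attractive for parallel applications.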
To recap. The use of commodity components has been the key to delivering tremendous affordable computing power. Architectures based on current commodity components have intrinsic limitations that prevent efficient exploitation of parallelism. Current multicore trends will force the rapid expansion of shared-memory parallel programming. The computer industry should use this unique opportunity to design scalable, cost-effective shared-memory architectures.
Low-latency, high-bandwidth interconnects are the key subsystem to enable the design of scalable shared-memory architectures. Several efficient solutions exist for different subsystems including interconnects. What remains to be done is finding the right combination of components that will enable those high performance architectures to be implemented at low cost.
As Martin Luther King said: "I have a dream." In my vision of the future, architectures will be composed of a series of standard modules, such as compute cores defining coarse system functions, and memories, including transactional memory, along with their respective interconnects and inter-chip networks. The compute cores will be heterogeneous, including blank pieces of field-programmable silicon, populated on demand by processor designs taken from a library and verified to optimally match the user application. The system will have globally addressable, but not necessarily globally coherent, memory. A new standard parallel programming paradigm, functional and transactional but at a higher level of abstraction, will be universally adopted. The age of the "soft computer" will then be upon us.
I leave you with an Albert Einstein maxim on simplification: "All should be as simple as possible, but not simpler."
Brands and names are the property of their respective owners. Copyright (c) Christopher Lazou, HiPerCom Consultants, Ltd., UK. July 2007.