HPCwire

Since 1986 - Covering the Fastest Computers
in the World and the People Who Run Them


IBM Has Its PERCS


Overview

The High Productivity Computing System (HPCS) program offers IBM an exciting opportunity to target stretch goals and consider bold ideas, all within the realistic constraints of delivering innovation in a commercially viable product. For over four and a half years, our team has focused on the applications of the high-end computing community and explored new technologies that will introduce fundamental and exciting changes in the way we program and use high-end systems. IBM envisions a quantum leap in performance and productivity that provides compelling differentiation from so-called commodity clusters for high-value systems, and makes programming parallel systems an attractive end-user experience.

The PERCS (Productive, Easy-to-Use, Reliable Computing System) team members come from IBM, Los Alamos National Laboratory and a dozen universities. During phases one and two of the HPCS program, the team collaborated to examine new ideas that span the entire computing stack, from basic technology to algorithms. The result is a concept system that will be based on IBM's mainstream POWER processor-based servers. We envision technologies resulting from the PERCS effort starting to appear in IBM systems as early as 2007 (in software), culminating in several peta-level systems by the middle of 2011 (assuming the IBM-HPCS relationship continues through phase three). The technology will also serve a broad range of configurations, especially small-scale deployments, so we expect to reach more than just those customers interested in the maximum configuration of the system.

This article summarizes our vision, our general approach, and the challenges of petascale computing. The competitive nature of the program and the impending down-selection for phase three of HPCS mean that we cannot reveal the details of our design or technologies, but we hope to provide a glimpse of what we are considering and to instill an appreciation of the issues involved in petascale systems.

A Vision for 2010

The following section is devoted to IBM's projections - a vision for 2010. IBM believes high-productivity systems in 2010 will feature a rich programming environment with many tools that help in programming new applications and maintaining existing ones. The programming environment will support existing programming models and languages, in addition to new emerging models designed for scalability to the peta-level. An application can be written using a mix of legacy and new programming languages, promoting code and asset reuse while giving programmers access to the advantages of newer technologies. Open source communities will own many of the tools and parts of the software stack. This will help ensure long-term viability and promote a common look-and-feel and portability across different architectures. Tools will automate many of the performance-tuning tasks and catch bugs statically, and newer programming models will be designed to prevent unnecessary bugs from happening in the first place (e.g., illegal memory references, deadlock, demonic access to shared variables, out-of-bounds array indexing).

IBM expects the HPCS systems will be managed via rich graphical interfaces that automate many of the monitoring and recovery tasks, enabling fewer system administrators to handle larger systems more effectively. Over the WAN, users will potentially be able to share their data files and programs between the HPCS system and other clusters in the enterprise in a seamless manner. Open source operating systems and hypervisors will provide HPC-oriented virtualization, security, resource management, affinity control, resource limits, checkpoint-restart and reliability features that will improve the robustness and availability of the system.

The systems likely will feature balanced and flexible architectures that adapt to application needs. Innovations in the memory subsystem and inter-process communications will mitigate the effects of data access latency, whether local or across the interconnect. The result will be better utilization of resources and an unprecedented performance leap along the four components of the HPC Challenge benchmark. Advanced packaging and water cooling will reduce system footprint, with computing densities double those of today's best systems.

Approach

The HPCS program calls for a departure from the traditional myopic focus on performance to consider instead the broader value that a system provides, i.e. productivity. Time-to-solution, performance, portability and robustness are the dimensions of the productivity space according to the HPCS program vision. These dimensions may sometimes lead to conflicting requirements. For instance, the desire for portability imposes restrictions that can limit the range of innovations that one may consider for performance. Similarly, high-level abstractions that reduce the difficulty of programming entail a performance and resource overhead. Throughout the course of the program our team discarded some interesting ideas that could improve one aspect of the productivity proposition at the expense of the others. To address these tradeoffs, we used "real application" analysis and held countless discussions with the user community.

During phase 2 of the HPCS program, we conducted an extensive analysis of a number of "large" applications that were provided to the three HPCS contestants. These applications were particularly useful because they represent "real" workloads that stress a wide range of system parameters (e.g., CPU, caches, memory, network, storage), unlike benchmarks that narrowly focus on one system aspect or another. We used the analysis of the applications as a guide to evaluate innovations we proposed and also to develop new innovations from the insight we gained. The investigation revealed a wide range of often conflicting user requirements. For example, some users prefer the shared memory approach while others are heavily invested in message passing. Our approach depends on a high degree of configurability to address the disparity in requirements.

Our proposed system centers on an innovative processor chip designed to leverage IBM's unique advantages in CMOS technology and the POWER processor server line. Advances in circuits and manufacturing processes can improve chip yield and reduce susceptibility to soft errors, whose rates (Soft Error Rates, SER) are likely to flare up in future generations of CMOS technology. We also plan to leverage our unique processes to reduce the latency of memory accesses by placing the processors close to large memory arrays. The chip itself can be configured to build different system flavors, each aimed at a particular kind of workload. Various interconnect options will be available, providing a tradeoff between cost and performance. Thus, IBM believes it will be possible to build systems optimized for commercial workloads, MPI applications, or HPC applications that depend on the shared memory abstraction, such as OpenMP or SHMEM. The wide range of configurations enables us to utilize the innovations in the mainstream product line and helps ensure the commercial viability of future HPCS systems. Furthermore, innovations in system packaging will allow us to provide computing densities that are orders of magnitude better than those of today's densest systems.

On the software side, our approach features a large set of tools integrated into a modern, user-friendly programming environment. The combined tool set and environment will support both legacy programming models and languages (MPI, OpenMP, C, C++, Fortran, etc.), and the emerging ones (PGAS, UPC, etc.). We have also worked with an experimental programming language, called X10, which we plan to subject to further research and prototyping in the first 18 months of phase three if our proposal is approved. After that period, a convergence of the research performed under the HPCS program should define a future, standard programming language and programming models that will benefit from the insights gained from X10 and other efforts.

X10 was designed for parallel processing from the ground up. It generally falls under the Partitioned Global Address Space (PGAS) category, and strikes a balance between providing a programmer-friendly, high-level abstraction and exposing the topology of the system to enable the programmer to control data placement and traffic. It features many innovations that will increase programmer productivity and system performance. Programmer productivity will benefit from high-level synchronization abstractions through the use of atomic sections, and the language helps ensure deadlock freedom under most circumstances. Proven productivity enhancers such as strong typing and automatic memory management relieve the programmer from many low-level details. X10 also will address performance issues through pervasive use of asynchronous interactions among the parallel threads, supported through the use of futures, and new concepts such as clocks. The language attempts to avoid the blocking synchronization style that affects system performance and limits program scalability (e.g., 2-way communication in MPI). X10 will require programmers to think differently about their programs, and early experiments have shown that the approach is promising and can improve time to solution compared to other alternatives.
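X10's concrete syntax was still evolving at the time of writing, so any code here would be speculative; purely as an illustration, the asynchronous-futures style the language favors can be mimicked in standard Python, where work is spawned without blocking the caller and synchronization is deferred until a result is actually needed:

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative sketch only, not X10: spawning work asynchronously and
# collecting results through futures, rather than blocking in a
# rendezvous-style two-way exchange. All names here are ours.

def partial_sum(chunk):
    return sum(chunk)

data = list(range(1_000_000))
chunks = [data[i::4] for i in range(4)]  # four disjoint slices of the data

with ThreadPoolExecutor(max_workers=4) as pool:
    # Each submit() returns immediately with a future; the caller keeps going.
    futures = [pool.submit(partial_sum, c) for c in chunks]
    # Synchronization happens only at the point the results are consumed.
    total = sum(f.result() for f in futures)

print(total)
```

The point of the style is that the spawning thread never stalls waiting for a partner to arrive at a matching receive, which is the blocking pattern the paragraph above contrasts with.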

There are many more aspects of the system design than we can cover here, but suffice it to say, our design addresses almost every aspect of the system with an integrated hardware-software approach. This aggressive vision entails a lot of risks, some of which we list to illustrate the difficulty of the task ahead.
 
Challenges Toward Petascale Computing

While today's high-end systems have issues concerning ease of use and programming, the sheer scale of the contemplated peta-level systems in 2010 brings new challenges that cannot be solved by following an evolutionary approach. Challenges include:

a.  Cost: Balanced peta-level systems that feature multi-petaops performance levels will require petabytes of memory and tens or even hundreds of petabytes of disk storage, commensurate with the expected performance and where components will be added not only for storage capacity but also for bandwidth. Projections of cost and power consumption for DRAM and disks translate to "interesting" procurement and operational costs. It is also important to note that these projections are a product of the industry and technology and are largely outside the control of the three HPCS vendors.

Cost reduction across all components is therefore an essential design goal, and we evaluate innovations not only for their technical effectiveness but also for their cost efficiency. Consequently, our design eliminates or provides alternatives to components with poor cost/performance contributions, and increases the utilization of the most expensive system components.
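The scale described in item (a) is easy to make concrete with a back-of-envelope calculation; the balance ratios below are common rules of thumb chosen for illustration, not PERCS design parameters:

```python
# Back-of-envelope sizing for a "balanced" petascale machine.
# The byte/flop and disk-to-memory ratios are illustrative assumptions,
# not actual PERCS figures.

peak_flops = 2e15            # assume 2 petaflops peak
bytes_per_flop = 0.5         # a commonly cited memory-balance target
disk_per_memory = 40         # disk capacity as a multiple of memory

memory_bytes = peak_flops * bytes_per_flop    # 1 PB of memory
disk_bytes = memory_bytes * disk_per_memory   # 40 PB of disk

print(f"memory: {memory_bytes / 1e15:.1f} PB")
print(f"disk:   {disk_bytes / 1e15:.1f} PB")
```

Even under these modest assumptions the machine needs a petabyte of DRAM and tens of petabytes of disk, which is where the "interesting" procurement and operational costs come from.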

b.  Quantifying Productivity: The HPCS vision seeks a tenfold improvement in productivity, an interesting challenge given that no one has a solid handle on quantifying productivity today. In PERCS, we introduced an economic model early on to express productivity, and worked throughout phase two of the program to develop metrics and measurement methodologies that quantify productivity in a pragmatic manner and reduce subjective assessment. We also developed various techniques, some of which promise to automate the evaluation of system and programmer productivity.
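The article does not disclose IBM's economic model; one simple way such a model can be framed, purely for illustration, is utility delivered per unit of total cost. Every number below is invented:

```python
# A toy economic framing of productivity: useful results delivered per
# total cost of producing them. This is an illustration, not IBM's model,
# and all figures are made up.

def productivity(utility, dev_cost, machine_cost, run_cost):
    return utility / (dev_cost + machine_cost + run_cost)

# A 10x productivity gain can come from any mix of higher utility
# (better results, sooner) and lower development or execution cost:
baseline = productivity(utility=100, dev_cost=60, machine_cost=30, run_cost=10)
improved = productivity(utility=400, dev_cost=20, machine_cost=15, run_cost=5)
print(improved / baseline)
```

The hard part, as the paragraph notes, is measuring the inputs (especially utility and development cost) without subjective assessment, which is what the metrics work in phase two targeted.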

c.  Programming model: The Message Passing Interface (MPI) is arguably today's prevailing programming model in high-end systems. MPI uses messages for both synchronization and transfer of data, with a prevailing synchronous style of communication based on the rendezvous abstraction. The community is aware of the inherent barriers that result from this synchronous nature of MPI, and various forms of asynchronous or one-sided communication have been proposed. These, however, are not in much use today, for a variety of reasons. As a result, there is a huge volume of legacy investment in software that will not scale easily to peta-level performance but must be protected, as it is unreasonable to expect the community to rewrite existing programs to enjoy the benefits of HPCS-class machines. Therefore, serious effort and resources must be directed toward improving the performance of this legacy base.

However, one must also recognize the need for new programming models that address the scalability problem at a fundamental level and meet the peta-level scalability challenge. Such new programming models must avoid unnecessary synchronization and enable new applications to be written without heroic efforts in programming. There are various risks with the undertaking of such an effort, including the technical challenge of realizing scalability and performance, users' resistance to change, and viability. These risks require a collaborative relationship among vendors and users with a substantial investment in design, implementation, and willingness to experiment with and adopt new models. This presents the HPCS program with an interesting dilemma in how to apportion the finite investment between improving legacy applications and introducing new programming models to enable new applications to scale up to the maximum potential of petascale systems.

Another interesting fact is that the widespread use of MPI has taught us that standardization of new programming models will be essential for success—the activity cannot be confined to just one or two "HPCS winners". In PERCS, we commit to supporting existing programming models and introduce many features in hardware and software to improve their performance. We also introduce a new programming model (X10) that extends the Partitioned Global Address Space (PGAS) programming model with emphasis on scalable asynchronous interactions, and new features that simplify concurrency control, enhance scalability and improve the productivity of parallel programming. The new model, for example, reduces or eliminates the possibility of deadlocks for most cases and provides high-level synchronization based on atomic sections.

d.  Programming languages: FORTRAN, C and C++ are the prevailing languages for production code in today's high-end systems. These languages lack proven productivity enhancers common in modern programming languages, such as type safety, automatic memory management and dynamic optimization. They are also decoupled from the programming model (e.g., MPI or OpenMP), presenting many problems that require laborious work on the part of the programmer. Modern programming languages, on the other hand, lack the deep support for high performance computing found in the rich libraries of mature languages. A new programming language that bridges this gap can be a great asset toward improving programmer productivity. Again, such an approach entails tremendous risk; there have been many failed attempts in the past.
 
In PERCS, we have taken a pragmatic approach by recognizing that a new programming language must co-exist with existing languages, even within a single application. This will enable programmers to write an application using several languages and to leverage the mature ecosystems of legacy languages. It will also enable extending existing applications with code written in the new language. Toward these goals, we are experimenting with a new programming language, X10, which embodies our programming-model ideas in a modern language platform with proven productivity enhancers. The new language is designed to call, and be called from, other languages using an innovative runtime system.

e.  Programming Environments and Tools: Programming high-end systems has unique requirements that are not currently satisfied by high-productivity programming environments successful in the commercial space. Furthermore, there is a dearth of tools to help programmers in performance tuning and other programming chores. Our preliminary experiments during phase two have shown that substantial improvement in programmer productivity can be obtained if a high-productivity programming environment and integrated tools are made available.

f.  Reliability: The Mean Time Between Failures (MTBF) of the system will be challenged by its sheer scale. Furthermore, projections indicate an increase in Soft Error Rates (SER) in the target silicon technology for building HPCS systems. In PERCS we innovate at the circuit and micro-architecture levels to harden the reliability of components, and we target a system-wide MTBF that equals or exceeds that of the largest systems today.
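The scale pressure on MTBF is easy to quantify. Assuming independent failures (an idealization) and invented component counts:

```python
# System MTBF under independent component failures falls roughly as 1/N.
# The per-part MTBF and component count below are illustrative, not
# PERCS figures.

component_mtbf_hours = 5 * 8760   # one part fails every 5 years on average
num_components = 50_000

system_mtbf_hours = component_mtbf_hours / num_components
print(f"system MTBF: {system_mtbf_hours * 60:.1f} minutes")
```

Under these assumptions the system as a whole fails about once an hour even though each part is individually reliable for years, which is why hardening must happen at the circuit and micro-architecture levels rather than being bolted on afterward.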

g.  Unprecedented Performance Leap: The HPCS performance targets require aggressive improvements in system parameters traditionally ignored by the "Linpack" benchmark. Realizing these performance levels will require adding unusual features that can improve the system performance under the most demanding benchmarks (e.g. GUPS), and it will be important to determine whether general applications will be written or modified to benefit from these features.
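GUPS (giga-updates per second), measured by the HPC Challenge RandomAccess benchmark, exercises exactly the parameter Linpack ignores: latency to random memory locations. A toy sketch of its update loop, shrunk enormously from real runs:

```python
import random

# Toy sketch of the RandomAccess (GUPS) kernel: tiny XOR updates at random
# table locations. Real runs use tables spanning most of system memory;
# the size here is shrunk to show only the access pattern, which defeats
# caches and is dominated by memory latency.

table_size = 1 << 16                 # 65,536 entries (real runs: billions)
table = list(range(table_size))

rng = random.Random(42)
for _ in range(4 * table_size):      # the benchmark performs ~4x table-size updates
    idx = rng.getrandbits(16)        # effectively random, cache-hostile index
    table[idx] ^= idx                # minimal work per touch; latency dominates

print(len(table))
```

Each update touches an unpredictable location and does almost no arithmetic, so performance is set almost entirely by memory and network latency rather than floating-point capability.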

The above list represents some, but not all, of the challenges that petascale systems will face. We did not address system management, storage, visualization, interactions with other systems, configuration, and so on. A comprehensive list is necessarily long, but we hope the magnitude of the challenge is evident.

The Path Ahead

IBM understands that the HPCS program represents a major commitment on the part of the U.S. government to energize high-end computing. It also represents an important step to encourage innovation and restore momentum to a segment of customers with needs not met by current vendor solutions.

The HPCS program requires that the participating vendors make a commitment to transition the technologies developed through HPCS into the mainstream product line. In other words, the program is not about creating a one-off-system. For vendors, the productization requirement represents a major commitment that responds positively to the government vision. IBM has committed to deliver on this challenge. Going forward, it will be important to develop a full relationship in phase three of the program. The user community must start developing applications now that will be able to harness the power of petascale systems, address grand challenge problems, and push the frontiers of science and technology. New algorithms that scale to these levels need to be developed, and the application expertise is the strength that the user community must bring to this program.

Indeed, no amount of ingenuity in the system architecture and design will rescue poorly written code or bad algorithms at the petascale! The HPC community must also approach new technology and innovation with an open mind, especially innovation that requires changes in the way we program or use the systems. These challenges can be met effectively in a relationship where vendors can better understand how to adjust their design to real application requirements, and where users get to better understand the competitive pressure and the technical and financial risks the vendors are undertaking.

In closing, this is the opportunity of a lifetime. At IBM, we are excited and ready to marshal our resources and second-to-none technical community to work in a partnership with the user community to realize the HPCS vision.

-----

(c) IBM Corporation 2006

This publication was developed for products and/or services offered in the United States. IBM may not offer the products, features, or services discussed in this publication in other countries. The information may be subject to change without notice. Consult your local IBM business contact for information on the products, features and services available in your area.

All statements regarding IBM's future directions and intent are subject to change or withdrawal without notice and represent goals and objectives only.

IBM and the IBM logo are trademarks or registered trademarks of International Business Machines Corporation in the United States or other countries or both. A full list of U.S. trademarks owned by IBM may be found at www.ibm.com/legal/copytrade.shtml.

Other company, product, and service names may be trademarks or service marks of others.

Information concerning non-IBM products was obtained from the suppliers of these products or other public sources. Questions on the capabilities of the non-IBM products should be addressed with the suppliers.
