March 17, 2006
As the cost of developing, deploying and maintaining high performance systems rises, it becomes increasingly important to predict system performance in advance. At Los Alamos National Laboratory (LANL), the Performance and Architecture Lab (PAL), led by Adolfy Hoisie, is developing advanced modeling techniques to assess current high performance systems as well as to understand how future computer architectures will perform. The PAL group is part of LANL's Computer & Computational Sciences (CCS) Division, led by Bill Feiereisen.
PAL researchers have developed a number of accurate models of applications that are important to LANL and its government sponsors -- NNSA, DARPA and the DOE Office of Science. They use these models to analyze, predict and calibrate performance for the systems of interest. As hardware becomes available, they validate their predictions with real-world tests. In this way, PAL's performance modeling can guide system development and procurement decisions.
PAL's application models are used to understand the interaction between applications, software environments and computer hardware. The combination of the application workload, the operating system (including the application scheduling environment) and the hardware architecture presents a complex set of criteria for performance modeling.
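To make this concrete, analytical models of this kind typically decompose runtime into computation and communication terms driven by a handful of application and hardware parameters. The Python sketch below is purely illustrative -- the function, its parameters and the numbers are assumptions for exposition, not PAL's actual models:

```python
# Illustrative sketch of an analytical performance model in the spirit of
# PAL's approach (hypothetical parameters; not the lab's actual models).
# Total runtime is decomposed into computation and communication terms.

def model_runtime(work_flops, procs, flop_rate, msgs_per_step,
                  msg_bytes, latency, bandwidth, steps):
    """Predict runtime (seconds) of a bulk-synchronous parallel code."""
    compute = work_flops / (procs * flop_rate)          # work divided evenly
    comm = steps * msgs_per_step * (latency + msg_bytes / bandwidth)
    return compute + comm

# Example: 10^15 flops of work on 1024 nodes at 5 GFLOP/s each,
# exchanging two 1 MB boundary messages per timestep over 2000 steps.
t = model_runtime(1e15, 1024, 5e9, 2, 1e6, 5e-6, 400e6, 2000)
print(f"predicted runtime: {t:.1f} s")
```

The value of even a toy model like this is that every input is an explicit, tunable knob, which is exactly what makes the what-if studies described later possible.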
"As the systems and applications we are using become more and more complex, understanding the interplay between various factors becomes very difficult," explains Adolfy Hoisie, team leader at PAL. "Benchmarking is not cutting it anymore."
In-house at LANL, PAL is able to evaluate a large variety of clusters with different types of processors, interconnects and other hardware components. Cluster systems, such as the Appro HyperBlade, are used for performance analysis of systems and applications, development of performance analysis methodology, validation and benchmarking, and system software analysis and development. The PAL team also has access to supercomputing systems throughout the United States and the rest of the world.
"We've applied our models to analyze performance of many of the very large scale machines in the last decade: ASCI Red, Blue Mountain, ASCI White, ASCI Q, Earth Simulator, ASCI Purple, Blue Gene, Cray X1, etc, and probably tens of clusters of various sizes using most of the microprocessors -- from Intel, AMD, IBM, etc. -- and most of the interconnects on the market -- Myrinet, Quadrics, Infiniband and others," said Hoisie. "We have access to large computing systems virtually anywhere in the world. We've used machines at Sandia, Livermore and NASA as well as European machines. So we are not confined to using machines just in our own backyard."
Hoisie says that their performance modeling work was used to determine that the ASCI Q supercomputer installed at LANL was running at half its potential capacity. By pinpointing the causes of the performance degradation, the team was able to correct the problem. The methodology has since been applied to optimize the performance of other large-scale systems, and their models are being used to ensure that some of the latest HPC systems, such as Red Storm, Cray X1 and Blue Gene, are performing up to their potential.
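PAL's published analysis of ASCI Q traced much of the missing performance to operating system "noise" on individual nodes: a collective operation finishes only when its slowest participant does, so rare per-node interruptions compound across the machine. The sketch below, with assumed probabilities, illustrates that compounding; it is a simplification for exposition, not the published model:

```python
# Minimal sketch (assumed parameters) of why per-node OS "noise" that is
# negligible on one node can dominate at scale: a collective operation
# finishes only when the slowest node does, so the chance that *some*
# node is interrupted approaches certainty as the node count grows.

def prob_collective_delayed(p_node, nodes):
    """Probability that at least one of `nodes` is interrupted,
    assuming independent per-node interruption probability `p_node`."""
    return 1.0 - (1.0 - p_node) ** nodes

for n in (1, 64, 1024, 8192):
    print(f"{n:5d} nodes: {prob_collective_delayed(0.001, n):.3f}")
# 0.1% per-node noise -> ~64% of collectives delayed at 1024 nodes,
# and essentially all of them at 8192 nodes.
```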
Once the application models are validated on real hardware, the PAL researchers can use them to predict performance for future systems. For example: what would the application performance on System X be if you doubled the processor speed, quadrupled the memory size, added 50 percent more nodes and increased the network bandwidth five times? What-if scenarios such as these allow engineers to predict how a system might be expanded or redesigned to achieve greater performance. In addition, researchers can modify the application code itself to explore algorithm performance dependencies.
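Under a model like the illustrative one sketched earlier, such a what-if study amounts to re-evaluating the model with perturbed parameters. The following sketch applies the scenario above to that toy model; all numbers remain assumptions, and the quadrupled memory would affect problem capacity rather than this simple runtime expression, so it is noted but not modeled:

```python
# Hedged sketch of a "what-if" study on the hypothetical toy model from
# earlier: perturb the parameters as in the scenario in the text and
# compare predicted runtimes. All numbers are illustrative assumptions.

def model_runtime(work_flops, procs, flop_rate, msgs_per_step,
                  msg_bytes, latency, bandwidth, steps):
    compute = work_flops / (procs * flop_rate)
    comm = steps * msgs_per_step * (latency + msg_bytes / bandwidth)
    return compute + comm

base = dict(work_flops=1e15, procs=1024, flop_rate=5e9, msgs_per_step=2,
            msg_bytes=1e6, latency=5e-6, bandwidth=400e6, steps=2000)

whatif = dict(base)
whatif["flop_rate"] *= 2                     # double the processor speed
whatif["procs"] = int(base["procs"] * 1.5)   # 50 percent more nodes
whatif["bandwidth"] *= 5                     # five times the bandwidth
# (quadrupled memory changes capacity, not this simple runtime model)

t0, t1 = model_runtime(**base), model_runtime(**whatif)
print(f"baseline {t0:.1f} s -> what-if {t1:.1f} s ({t0 / t1:.2f}x speedup)")
```

Because the model is analytical, thousands of such scenarios can be swept in seconds, long before any of the hardware exists.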
Hoisie explains that part of what makes their approach to performance modeling unique is the ability to capture the performance of full applications. LANL is particularly interested in applications characteristic of many scientific areas, such as global climate modeling, computational biology and astrophysics, among others. Some examples of the application workloads used at PAL are the SAIC Adaptive Grid Eulerian (SAGE) code, the Parallel Ocean Program (POP), the Monte Carlo N-Particle (MCNP) transport code, the HYCOM ocean model and the CTH shock dynamics code.
"These [models] are overall predictors of how the whole system performs -- hardware, software and algorithm characteristics," says Bill Feiereisen, division leader for CCS. "So it goes quite a ways beyond benchmarking."
One area of recent attention at LANL has been hardware accelerators -- special-purpose processors that can address specific types of application workloads. The computational performance of acceleration hardware, such as ClearSpeed coprocessors and graphics processing units (GPUs), is now being evaluated with PAL modeling technology.
"The term accelerator has had a negative aspect to it," says Feiereisen. "It's reflective of the fact that when you buy current systems, the only way accelerators can be attached is by plugging them into the I/O buses, as an add-on. We foresee that as being something that will change in the next few years when the manufacturers think about integrating heterogeneous processors into a much more tightly coupled fabric. I think this is just the first step on the road towards a machine that has multiple heterogeneous peered processors."
One of the first commercial examples of an integrated heterogeneous architecture is the IBM Cell Broadband Engine. This processor is a multi-core architecture that couples a general-purpose PowerPC CPU with eight special-purpose SIMD coprocessors. LANL researchers are beginning to evaluate application workloads on IBM Cell-based systems as hardware and software become available.
As the industry drives toward petascale systems, an important focus for the PAL team will be applying their performance models to future system designs. Hoisie believes the next generation of machines that scale to petaflops will likely continue to consist of clusters of processing nodes, with higher processor counts per node, linked by novel high-speed interconnect fabrics. A possible example of this type of architecture is the IBM PERCS system -- one of the finalists in DARPA's High Productivity Computing Systems (HPCS) program.
LANL is working closely with IBM on this project, and PAL modeling techniques are being used to evaluate PERCS' performance and to guide its system design. In addition, PAL has analyzed other future system designs, including the next-generation Blue Gene architecture, which is also predicted to operate in the petascale range.
In addition to trying to determine where future hardware is going, LANL researchers are also trying to figure out the optimal operating system environment for high performance computing. Currently they are mostly concerned with Linux. Due to its open source nature, LANL -- like many organizations -- has found that Linux provides a more convenient software platform than proprietary Unix implementations. Now almost a de facto standard for technical HPC, Linux is nevertheless first and foremost a general-purpose OS.
But according to Feiereisen and Hoisie, it also has some characteristics that are not well suited to HPC. Feiereisen notes that as computer systems scale up to many thousands of processors and Linux continues to evolve, the current operating system models may no longer be practical.
"We are beginning to think about what life after Linux in going to be like," says Feiereisen. "We're starting to throw around ideas about what kind of software we might see on these machines ten years from now. And we have our own ideas about how the operating systems of the future should be configured."
So researchers at LANL are beginning to gather the characteristics they would like to see in a future high performance computing OS, while realizing that Linux will be around for quite a while.
The work performed for the DOE and other funding agencies is likely to continue into the foreseeable future, even as LANL itself undergoes a management change. On June 1, 2006, when Los Alamos National Security LLC -- made up of Bechtel, the University of California, BWX Technologies and Washington Group International -- takes over the administration of LANL, the new management will oversee an organization that has supported government research and development for the past 63 years.
Although Feiereisen admits there's some trepidation in the Laboratory about the changeover, he believes LANL will adapt and become a better organization after the new management takes over.
"What they're looking for is some fresh ideas about how to administer the laboratory, streamline processes, and make it more efficient for us to get our science and engineering work done," explains Feiereisen. "The people that pay the bills still want the same work."