The Leading Source for Global News and Information Covering the Ecosystem of High Productivity Computing / November 15, 2006
The panel of the day at SC06 was the "High Productivity Computing and Usable Petascale Systems" discussion. Panelists representing Cray (Steve Scott), IBM (Rama Govindaraju), Sun Microsystems (Jim Mitchell) and USC (Bob Lucas) gave their perspective on the challenges of DARPA's HPCS program. Jeremy Kepner (MIT Lincoln Laboratory) organized and chaired the panel and also participated in the discussion. Prior to SC06, we had asked Kepner to share his own views on HPCS and petascale computing. Here's what he had to say.
HPCwire: Why do we need government funding for HPCS? Why can't the market produce productive petascale systems?
Kepner: First, we must remember that there are only two reasons for using a high performance computer: (1) to run programs faster; (2) to work on larger problems. Now we can ask the simpler question: are there important applications that are not accelerated by existing market-produced systems? I think many of your readers would agree that there are. In addition, for those applications that are accelerated by current market-based solutions, we can ask: are these systems too difficult to use for many users? Again, I think many of your readers would answer "yes." Given this situation, where there are important unmet needs, I think it is a fairly classic "win-win" role for the government to step in and help companies develop solutions that broaden the applicability of their products.
HPCwire: How will we measure productivity?
Kepner: The productivity of HPC users intrinsically involves some of the brightest people on the planet solving very complex problems on the most complex computers in the world. Anyone who truly wants insight into such a complex situation must be prepared to invest some time in the endeavor. For those who do care about this problem, I highly recommend the special issue of CTWatch, which features 18 articles on the topic written by people who have devoted themselves to these questions for the past few years. The articles approach productivity from a number of perspectives, and tools are provided for those who want to look at productivity from an organizational perspective, from an individual programmer's perspective, or from the perspective of a hardware or software innovator. From the technology innovator's perspective, I think we have provided two particularly useful tools. The HPC Challenge benchmarks, developed under HPCS, are a suite that lets hardware designers get credit for innovating beyond the design space defined by the Top500 benchmark. The HPC software development measurement system, also developed under HPCS, lets software designers conduct detailed experiments with programmers and determine precisely where users spend their time.
HPCwire: The industry is still struggling to make the current terascale systems productive. How are we going to make petascale systems productive? What are the major roadblocks?
Kepner: I think the ability to measure innovations and determine which ones contribute to productivity is critical to solving this problem. Without it, it is very difficult to know which technologies to pursue, or even to know when we have achieved success. That said, from a technology perspective I think the biggest roadblocks are steep memory hierarchies and the heterogeneous parallel programming approaches required to achieve performance. More specifically, with the development of multi-core systems we have yet another level in the memory hierarchy. The added difficulty this places on users cannot be overstated: users are the ones who have to figure out how to map their application onto cores, nodes, racks, and systems to achieve high performance. In addition, it would appear that the parallel programming approaches one might choose for a single multi-core processor are not the right approaches for programming multiple distributed-memory nodes.
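To make the mapping burden concrete, here is a minimal sketch (in plain Python, with hypothetical names and a simple two-level block decomposition that the interview does not specify) of the kind of explicit data placement Kepner describes, where the user must decide which node and which core own each piece of a global problem:

```python
# Hypothetical sketch: two-level block decomposition of a 1-D problem.
# In practice users must reason about this mapping themselves to get
# performance out of a core/node/rack memory hierarchy.

def block_owner(global_index, n_elements, n_nodes, cores_per_node):
    """Map a global array index to its (node, core) owner under a
    simple two-level block decomposition."""
    per_node = -(-n_elements // n_nodes)       # ceil division: elements per node
    node = global_index // per_node
    local = global_index - node * per_node     # index within the node's block
    per_core = -(-per_node // cores_per_node)  # elements per core on a node
    core = local // per_core
    return node, core

# Example: 1000 elements spread over 4 nodes of 8 cores each.
print(block_owner(0, 1000, 4, 8))    # (0, 0): first element on node 0, core 0
print(block_owner(999, 1000, 4, 8))  # (3, 7): last element on node 3, core 7
```

Each extra level in the hierarchy (another socket, another cache tier) adds another layer to this mapping, which is exactly the added user difficulty the answer above refers to.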
HPCwire: How should we balance the different demands of HPCS? For example, how would you prioritize, say, computing power, bandwidth, and productivity?
Kepner: I would say that the combination of technologies that produces the flattest memory hierarchy and lets users view the system in the simplest, most homogeneous way will be the most productive. Obviously this needs to be balanced against cost. Unfortunately, the current metric of (Top500 performance)/dollar will not drive us in that direction. I think the HPCS program has developed ways of evaluating systems that do a better job of taking the cost of users' time into account. We have applied these to our own systems at Lincoln Laboratory and have found them very useful.
HPCwire: If you could choose just one hardware technology and one software technology that will make the biggest impact on HPCS, what would they be?
Kepner: Hardware: a flatter memory hierarchy. Software: high-level, array-based programming environments that support PGAS (Partitioned Global Address Space).
HPCwire: What would an ideal HPC programming language look like? If you were king, what would your top three requirements for a new parallel programming language be?
Kepner: Let's narrow the question to focus on technical computing (as opposed to web services or other applications). Within the technical computing domain, I think we *know* that the following language/library features are useful: strong support for multi-dimensional array constructs, PGAS, good single-thread performance, and an integrated interactive development environment. I don't think any of these features are language-specific, and many can be implemented in existing languages and libraries. However, no existing language or library has them all.
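The PGAS idea named above can be sketched in a few lines. This is a hypothetical illustration in plain Python (not any real PGAS runtime such as UPC or Chapel): one global index space is partitioned so each "place" owns a contiguous block, yet any place can address any element through the global index:

```python
# Hypothetical sketch of a Partitioned Global Address Space array.
# A real PGAS runtime distributes the blocks across processes and turns
# non-local accesses into communication; here everything is in one process
# purely to show the addressing model.

class PGASArray:
    def __init__(self, n, n_places):
        self.n = n
        self.block = -(-n // n_places)  # ceil(n / n_places): elements per place
        # Each inner list stands in for the memory owned by one place.
        self.blocks = [[0] * max(0, min(self.block, n - p * self.block))
                       for p in range(n_places)]

    def owner(self, i):
        """Which place owns global index i."""
        return i // self.block

    def __getitem__(self, i):
        p = self.owner(i)                          # locate the owning place
        return self.blocks[p][i - p * self.block]  # remote read in real PGAS

    def __setitem__(self, i, v):
        p = self.owner(i)
        self.blocks[p][i - p * self.block] = v     # remote write in real PGAS

a = PGASArray(10, 4)     # 10 elements over 4 places (blocks of 3, 3, 3, 1)
a[7] = 42                # global index 7 lives on place 2
print(a.owner(7), a[7])  # -> 2 42
```

The appeal for productivity is that the user writes against the single global index space while the partitioning still exposes locality, which the runtime (or the programmer, when tuning) can exploit.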
HPCwire: What do you think will be required to achieve a reasonable level of adoption for a new programming language?
Kepner: I have always felt it was important that HPCS give language designers a clean slate for developing new features. Ideally, the developers of these languages will first implement and test the features they feel will provide the most benefit to users. At that point the community will be in a strong position to evaluate the best path forward for getting specific features into users' hands. Until we have the specific features and their concrete implementations, it is difficult to speculate on the right transition approach. Once we have them, I think it will be straightforward to determine which approach to adoption is best: a new language, extensions to an existing language, or a new library.
HPCwire: How will the HPCS work filter down to more mainstream users of high performance computing -- those using systems that cost less than a million dollars?
Kepner: I would expect that technology will follow standard lines. All users have certain things that they can be flexible on and certain things they need to hold fixed. On the software side, users with a huge legacy code base will be the slowest to adopt technologies that require changes to the code. Users that rewrite their code more regularly will be the first to benefit from software innovations (regardless of the scale of system they are running on). On the hardware side, those users who need to be on the absolute cutting edge of performance will be the first to adopt systems with new hardware innovations. To the extent that some hardware innovations are a function of scale, those that need the biggest systems will get these first.
For more information about high productivity computing systems, visit this month's issue of CTWatch Quarterly at http://www.ctwatch.org/.