"Make everything as simple as possible, but no simpler." Albert Einstein
Natural science can be understood as the process of developing models that predict the behavior of the natural world, and we celebrate as great science the creation of the simplest models that give accurate predictions. Computer architecture seems, over the past decade or two, to have moved in the opposite direction, glorifying complexity at the expense of understandability and predictability, and even of performance and usability. Highly speculative out-of-order superscalar microprocessors with north- and south-bridges, graphics adapters and RAID controllers have evolved out of what was once the modest domain of hobbyists.
In and of itself, there's nothing wrong with the fact that the hardware and software of modern PCs are complex; they have adapted very successfully to the needs of home and office users, to the point of becoming nearly indispensable for civilization as we know it. But that complexity does make it next to impossible to create accurate models of their performance, and hence to design software that performs efficiently. And when your application is running for days or weeks at a time on hundreds or thousands of computers, you care about its efficiency.
To make matters worse, many of the evolutionary pressures on desktop computers are contrary to the needs of scientific and technical users. Low prices and high clock rates are real benefits, but high thermal dissipation, slow memory, sluggish I/O, high communication latencies, and limited memory access bandwidth have severely restricted the algorithmic options for parallelizing HPC codes and capped the scalability of those codes in production.
There is a real chicken-and-egg problem here that will take multiple generations of simplicity to fully resolve. Since personal computer hardware is now overkill for most users, the only people who care what is going on inside the chip are the designers, who have to ensure that the circuitry is performing correctly. As a result, chips have lots of touch points for status information, but those touch points are not designed to reveal the behavior of software algorithms. Worse yet, the individual chips that make up a contemporary cluster node have different, often contradictory, performance monitoring facilities.
As a result, today's scientific computer users have few tools that enable them to understand what their codes are doing, and hence are unable to articulate what they want their next computer to do differently. One manifestation is the Sisyphean task of creating benchmarks that fully encapsulate performance behavior. No sooner does a new benchmark come out than it is disavowed by various users as "not representative of what we do." Until we have computers whose behavior is transparent, we will not have benchmarks that truly capture that behavior, and we won't have computer hardware that responds to that behavior because hardware designers will not have clear benchmark targets to shoot for.
Here are some of the steps that need to be taken.
It is critical to start building computers in which all the node logic is designed together, under a common performance monitoring architecture. Ideally, all the node circuitry would be on a single chip; if that is not possible, it should at least be made up of chips with consistent monitoring facilities. Then users will be motivated to harvest the performance data.
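Until that day, the closest approximation to a common monitoring architecture is a portability layer such as PAPI, which papers over the vendors' differing counter interfaces. A minimal sketch of harvesting counters through it might look like the following; the preset events and the toy kernel are illustrative, and not every chip supports every event:

```c
/* Sketch: reading hardware performance counters through PAPI.
 * Compile with something like: cc papi_sketch.c -lpapi */
#include <stdio.h>
#include <stdlib.h>
#include <papi.h>

int main(void)
{
    int events[2] = { PAPI_TOT_CYC, PAPI_L2_TCM };  /* cycles, L2 misses */
    long long counts[2];

    if (PAPI_library_init(PAPI_VER_CURRENT) != PAPI_VER_CURRENT) {
        fprintf(stderr, "PAPI init failed\n");
        return EXIT_FAILURE;
    }
    if (PAPI_start_counters(events, 2) != PAPI_OK) {
        fprintf(stderr, "these events are unavailable on this chip\n");
        return EXIT_FAILURE;
    }

    /* ... the kernel being measured goes here (a toy stand-in) ... */
    volatile double sum = 0.0;
    for (int i = 0; i < 10000000; i++) sum += i * 0.5;

    if (PAPI_stop_counters(counts, 2) != PAPI_OK)
        return EXIT_FAILURE;

    printf("cycles: %lld  L2 misses: %lld\n", counts[0], counts[1]);
    return EXIT_SUCCESS;
}
```

The point is not the particular counters but the uniform interface: the same few calls work whichever vendor's silicon sits underneath.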
And we need to think more broadly about what to do with the performance data we collect. Today the state of the art is to translate it into graphs and charts. But the data is likely to be full of patterns that do not readily reduce to charts. The biologists are showing us the value of techniques like neural networks for finding such patterns. We still have much to learn in the area of "performance analysis analysis."
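As a stand-in for that kind of pattern hunting, here is a deliberately humble sketch: k-means clustering (far simpler than a neural network, but in the same spirit) applied to hypothetical samples of cycles-per-instruction versus cache-miss rate. The data values and the choice of two clusters are invented for illustration:

```c
/* Sketch: unsupervised pattern-hunting in performance samples.
 * Toy k-means over (CPI, miss-rate) pairs; a real tool would use
 * richer features and a richer learner. */
#include <stdio.h>

#define N 8   /* samples (hypothetical measurements) */
#define K 2   /* clusters to look for */

int main(void)
{
    double x[N][2] = { {0.9,0.01},{1.0,0.02},{0.8,0.015},{1.1,0.02},
                       {3.2,0.20},{3.5,0.22},{3.0,0.18},{3.4,0.21} };
    double c[K][2] = { {1.0,0.0}, {3.0,0.0} };  /* initial guesses */
    int label[N];

    for (int iter = 0; iter < 20; iter++) {
        /* assign each sample to its nearest centroid */
        for (int i = 0; i < N; i++) {
            double best = 1e30;
            for (int k = 0; k < K; k++) {
                double dx = x[i][0] - c[k][0], dy = x[i][1] - c[k][1];
                double d = dx*dx + dy*dy;
                if (d < best) { best = d; label[i] = k; }
            }
        }
        /* move each centroid to the mean of its members */
        for (int k = 0; k < K; k++) {
            double sx = 0, sy = 0; int n = 0;
            for (int i = 0; i < N; i++)
                if (label[i] == k) { sx += x[i][0]; sy += x[i][1]; n++; }
            if (n) { c[k][0] = sx / n; c[k][1] = sy / n; }
        }
    }
    for (int i = 0; i < N; i++)
        printf("sample %d -> cluster %d\n", i, label[i]);
    return 0;
}
```

Even this crude grouping would separate the application's cache-friendly phases from its memory-bound ones without anyone drawing a chart.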
While we wait for HPC hardware that users can more directly understand and critique, there are several architectural simplifications that are sure to be fruitful.
We need to simplify communication performance. Too many algorithms jump through too many hoops to avoid communication between processors that are "distant," where distant means the hardware takes a long time to get a message back and forth. This complexity is perhaps most visible in applications with adaptive dynamic grids, which force the optimization to be attempted on the fly. Where the performance curve of the communications network is flat, simplicity rules.
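Users can measure that "distance" for themselves with the classic ping-pong test. A minimal MPI sketch follows; the repetition count and the choice of ranks 0 and 1 are arbitrary, and on a flat network the answer barely changes as you vary which pair you test:

```c
/* Sketch: measuring message latency between two ranks with an
 * MPI ping-pong.  Run with: mpirun -np 2 ./pingpong */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank;
    char byte = 0;
    const int reps = 1000;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < reps; i++) {
        if (rank == 0) {
            MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0)  /* one-way latency = round trip / 2 */
        printf("latency: %.2f us\n", (t1 - t0) / reps / 2 * 1e6);

    MPI_Finalize();
    return 0;
}
```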
A second valuable simplification comes from clock coherence. When all the processor clocks in a system are synchronized within a fraction of a microsecond and, more importantly, are locked to a common reference, timestamps are consistent and monotonic throughout the system. That coherence offers a straightforward path to eliminating major sources of the OS noise that saps the performance of so many clusters today.
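MPI already has a way for a system to advertise such a clock: the predefined MPI_WTIME_IS_GLOBAL attribute. A brief sketch that asks the library whether MPI_Wtime is synchronized across the machine; on most of today's clusters the answer is no:

```c
/* Sketch: testing whether the system offers a globally synchronized
 * clock, via MPI's MPI_WTIME_IS_GLOBAL attribute. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, flag;
    int *is_global;  /* the attribute value is delivered as a pointer */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Comm_get_attr(MPI_COMM_WORLD, MPI_WTIME_IS_GLOBAL,
                      &is_global, &flag);
    if (rank == 0)
        printf("global clock: %s\n",
               (flag && *is_global) ? "yes" : "no (per-node clocks)");

    MPI_Finalize();
    return 0;
}
```

When that answer becomes yes, whole categories of cross-node event ordering and noise diagnosis become trivial bookkeeping instead of research projects.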
A third area of simplification potentially comes in the way file systems are handled. We now have disk systems with thousands of spindles and parallel file systems that know how to use them. But these systems also have thousands of DIMMs that are often used in ad hoc ways to store global data. Why can't we reflect the power of a parallel file system back onto all of those DIMMs, so users can freely shift the location of their data without changing their program logic?
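The nearest thing in today's toolbox is MPI-2 one-sided communication, which lets any rank read or write memory that other nodes have exposed, without the owner's participation in each transfer. A minimal sketch of such a global store follows; the slice size and the neighbor-read access pattern are illustrative choices, not a prescription:

```c
/* Sketch: treating the cluster's DIMMs as a global data store with
 * MPI-2 one-sided communication.  Each rank exposes a window of
 * memory; any rank can read any other rank's slice with MPI_Get. */
#include <stdio.h>
#include <mpi.h>

#define SLICE 1024  /* doubles exposed per rank (illustrative) */

int main(int argc, char **argv)
{
    int rank, nranks;
    double local[SLICE], remote[SLICE];
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    for (int i = 0; i < SLICE; i++)
        local[i] = rank;  /* tag each rank's slice with its id */

    MPI_Win_create(local, SLICE * sizeof(double), sizeof(double),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    /* read the slice held by the next rank, wherever it lives */
    int target = (rank + 1) % nranks;
    MPI_Win_fence(0, win);
    MPI_Get(remote, SLICE, MPI_DOUBLE, target, 0, SLICE, MPI_DOUBLE, win);
    MPI_Win_fence(0, win);

    printf("rank %d read slice of rank %d: first value %.0f\n",
           rank, target, remote[0]);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```

Notice what the program does not contain: any knowledge of which node's DIMMs actually hold the data. That is the property a file-system-like view of cluster memory would give users everywhere, not just inside one library's windows.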
Parallel software development is neither easy nor simple, and until we come up with elegant new ways to think about concurrency, it will not be. To make progress in the meantime, we need machines that (a) run today's codes well, (b) are simple and transparent enough to permit successful debugging and tuning, and (c) provide sufficient compute, memory and communication resources to allow new algorithms to follow the natural expression of the programmer's intent.
Ultimately, parallel computing itself should simplify scientific programming since nature is inherently parallel. In order to establish the natural correspondence between parallel phenomena and parallel computation, however, we need to make sure that the computers we use are as simple as possible. But not simpler.
Jud Leonard is a founder and the CTO of SiCortex, Inc., a recent entrant in the HPC market. His career in high performance computing has run from the IBM 1620 and 360 through Digital's PDP, VAX, and Alpha systems. He knows how complicated it can be keeping things simple.