Alexander Reinefeld, head of the computer science department at Zuse Institute Berlin, discusses the "Reconfigurable Supercomputing" session he chaired at ISC 2006, as well as his rather strong views on the topic. Along with Reiner Hartenstein, who presented as part of this session, Reinefeld believes it is time to look for alternatives to the von Neumann architecture, and he believes FPGAs will play a big role in that process. Reinefeld also discusses his personal history in the fields of Grid computing and HPC, in general.
HPCwire: First, can you discuss the session you will be chairing at ISC? What is it about reconfigurable supercomputing that makes it such an important topic?

Reinefeld:
For me, reconfigurable supercomputing is one of the hottest topics today. Therefore, I am very excited that Hans Meuer invited me to chair this session. As a researcher, I try to stay one step ahead of the times, which is why we began exploring the potential of reconfigurable computing in our HPC center at ZIB two years ago. I am not the only one who believes it is time to look for alternatives to the classical von Neumann computer architecture. The old recipe of increasing the clock speed, the number of CPUs, the cores per chip and the threads per core simply does not deliver enough sustained computing performance anymore. It seems that the von Neumann architecture has come to a dead end in HPC.
Actually, reconfigurable computing is not new at all. Konrad Zuse, the great inventor of the modern programmable computer, described its basic principles as early as the late 1960s in his article "Rechnender Raum" ("the computing space"). Perhaps his thoughts came too early. Or, to put it the other way around, the von Neumann architecture worked far too well for so many decades that nobody thought about alternatives. But now, I believe, we are at the beginning of a new era, in which FPGAs will be used as application accelerators in HPC.

HPCwire: Why do you think FPGAs have gained such popularity within the HPC community?

Reinefeld:
FPGAs (field-programmable gate arrays) have been on the market for a long time, but until now they were mostly hidden in embedded systems. Programming them was cumbersome and required specialists in technical computer science using rather obscure hardware description languages like VHDL or Verilog. You might think of these languages as a kind of assembler language, but that's wrong. It's even worse: programming is done at the gate level, that is, at the very lowest level of information processing, with NAND and NOR gates. Just remember your first-semester course on technical computer science!
Things are now changing dramatically. FPGAs are being recognized as a viable alternative to power-hungry large-scale computers. They won't replace traditional CPUs, but they will be used as application accelerators. In a typical application scenario, the larger part of the program (I/O, pre- and post-processing) is executed on a standard host processor such as an Itanium or Opteron. Only the most time-consuming kernel is executed on an FPGA chip.
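The case for this division of labor comes down to simple throughput arithmetic: a device with a modest clock rate but many fully pipelined parallel units can outpace a much faster scalar processor on the kernel it accelerates. The following back-of-envelope sketch illustrates the argument; all figures (clock rates, unit counts, results per cycle) are illustrative assumptions, not measurements of any particular CPU or FPGA.

```python
# Back-of-envelope throughput model for the FPGA-as-accelerator argument.
# All numbers below are illustrative assumptions, not benchmarks.

def results_per_second(clock_hz, parallel_units, results_per_cycle=1.0):
    """Peak kernel results per second for a device whose parallel units
    each deliver `results_per_cycle` usable outputs per clock tick."""
    return clock_hz * parallel_units * results_per_cycle

# A 3 GHz host CPU that, after stalls and memory waits, completes
# roughly one kernel result every four cycles (assumed).
cpu = results_per_second(3.0e9, parallel_units=1, results_per_cycle=0.25)

# A 200 MHz FPGA design with 64 pipelined processing units, each
# producing one usable result per tick once the pipeline is filled (assumed).
fpga = results_per_second(200e6, parallel_units=64)

print(f"CPU : {cpu / 1e9:.2f} Gresults/s")   # 0.75 Gresults/s
print(f"FPGA: {fpga / 1e9:.2f} Gresults/s")  # 12.80 Gresults/s
```

Under these assumed numbers the FPGA delivers more than an order of magnitude higher kernel throughput despite a fifteen-times-slower clock, which is exactly the pipelining-plus-parallelism trick described here.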
FPGAs have a relatively low clock rate of a few hundred MHz, but they produce usable output with every single clock tick. Moreover, they are intrinsically parallel: a clever designer can implement several processing units that produce multiple output streams in parallel. That's the trick. And now imagine what you can do when hundreds or thousands of FPGAs are connected by a fast interconnect.

HPCwire: Will you be doing a presentation as part of the session?

Reinefeld:
I won't present any technical material myself, but as you can see from my answers, I have a very distinct opinion on this topic, and I expect a vivid discussion. At Zuse Institute Berlin, we bought a small Cray XD1 with six FPGAs in 2004, one of the first such systems delivered to Europe at that time. Later, we were the first to adopt Mitrion-C as a programming environment. We did so because we are not hardware experts and hence do not want to dive too deeply into hardware aspects. Rather, we are interested in using FPGAs for HPC applications. Ideally, we would like a software tool that allows researchers to port their kernels to FPGAs efficiently by themselves. But this is still wishful thinking, and it is clear that my team members will need to do quite a bit of consulting to help HPC users adapt their codes.

HPCwire: Thomas Steinke of Zuse Institute Berlin and Reiner Hartenstein of TU Kaiserslautern will be presenting. What will they be discussing? What makes them ideal presenters on this topic?

Reinefeld:
The two talks complement each other ideally. Thomas Steinke is from my group at Zuse Institute Berlin. He is an HPC software practitioner: he heads the bioinformatics group at ZIB and is responsible for HPC user consulting in computational chemistry. Consequently, his talk will focus on the programming aspects. As one of the first adopters of the Mitrion-C programming environment, he will tell us whether it has met his expectations for programming productivity and efficiency. Reiner Hartenstein is a professor at TU Kaiserslautern and a true expert in reconfigurable computing. With his deep understanding of the technology, he will provide an in-depth analysis of the past and future of this new technology in the light of HPC.

HPCwire: On another front, can you discuss your background in Grid computing? How has the Grid community changed since your days as a founding member of E-Grid and GGF (Global Grid Forum)?

Reinefeld:
I am pleased to see that Grid computing has gained maturity in a very short time. Compared to the old days of the first Globus retreats in the United States and our E-Grid meetings in Europe, Grid computing is no longer the hobby of a bunch of researchers; it is now of real economic value. When we first noticed this trend, we merged the American, European and Asia/Pacific forces to form what became the GGF. Much of our work went into launching Grid standards to enable interoperability and to allow widespread application. Interestingly, there is now a trend in GGF to establish regional interest groups -- just as in ACM or IEEE -- to better integrate the various communities and to draw on their local expertise.
At the European level, I was recently involved in a Next-Generation Grids expert group which coined the term "SOKU," standing for "service-oriented knowledge utilities." In a whitepaper, we presented the vision of interoperable, knowledge-enhanced Grid components that no longer build on traditional software layers. For details, see www.cordis.lu/ist/grids.

HPCwire: I understand you also are involved with the D-Grid project in Germany. How is that initiative progressing? How would you rate the job Wolfgang Gentzsch is doing as director?

Reinefeld:
D-Grid was born in March 2004, when the German minister of science and education, Mrs. Edelgard Bulmahn, announced at GGF10 in Berlin that she was going to spend 100 million Euro on German Grid projects over the next five years. It took more than a year until the first D-Grid projects started in 2005, but now things look very bright. Five so-called "community Grid projects" were started in astrophysics, climate research, high-energy physics, bioinformatics and medicine, and engineering. They are accompanied by one "integration project" that provides basic services, a common software repository and know-how (e.g., in AAA -- authentication, authorization and accounting).
Wolfgang Gentzsch heads all this. With his "integrative" management style, he is the best possible person for that job. Considering that D-Grid is not just another research project, but a large collaborative effort of hundreds of individuals, Wolfgang's job is certainly not easy. Compared to Unicore, which was focused more narrowly on the HPC community, D-Grid is much more ambitious in its goals: D-Grid shall create and deploy a sustainable Grid infrastructure for science and industry in Germany.

HPCwire: Finally, I'm wondering if you could discuss the work being done at Zuse Institute Berlin. How does the work being done by the computer science department there relate to current trends in both HPC and Grid computing?

Reinefeld:
Zuse Institute Berlin is a research institute for applied mathematics and computer science. I head the CS department, which covers a broad spectrum, ranging from supercomputing services to research in Grid and P2P computing, autonomic computing and related areas. While our spectrum might seem very broad, the topics actually go hand in hand and provide a lot of synergy.
ZIB has a long tradition in supercomputing. Currently, we operate -- jointly with RRZN in Hanover -- a 5-teraflop IBM p690, and we are about to start the procurement for our next supercomputer, HLRN-2. The investment cost (30 million Euro) will again be paid jointly by the HLRN consortium, namely the six states: Berlin, Bremen, Hamburg, Mecklenburg-Vorpommern, Niedersachsen and Schleswig-Holstein.
The experience gained in operating supercomputers provides valuable input for our research, and vice versa. As an example, one of our research topics is resource reservation and job planning. On the one hand, our research results are used to optimize the job throughput on the HLRN; on the other hand, day-to-day production gives us real sample data for our research. Similar synergies occur in our other fields of interest: Grid computing, network protocols, autonomic computing and, lately, FPGA technology.

HPCwire: Is there anything else you would like to add about your session at ISC, reconfigurable computing, or your work in HPC/Grid in general?

Reinefeld:
I regard it as a great privilege to have always had the chance to be at the forefront of HPC. Back in 1992, I operated the first 1,024-node transputer system in Paderborn. Thereafter I built large-scale clusters with the innovative SCI interconnect, and then I was among the first researchers working in a new field called metacomputing (i.e., Grid computing). Now we are in an era in which supercomputers are measured in acres and megawatts. I seriously hope that there is a more economical solution for HPC. Reconfigurable computing could be part of the solution, but, of course, I can't predict the future.
Let me end with my favorite quote, from Alan Turing: "We can only see a short distance ahead, but we can see plenty there that needs to be done."