The Leading Source for Global News and Information Covering the Ecosystem of High Productivity Computing / April 21, 2006
Fresh from last month's Intel Developer Forum in San Francisco, HPCwire got a chance to speak with Intel CTO Justin Rattner about a variety of topics. In part one of this two-part article, the CTO shares his views on the high performance computing industry and talks about the significance of the company's newly announced Intel Core microarchitecture.
In his role as Chief Technology Officer, Intel Senior Fellow, and Director of Intel's Corporate Technology Group, Justin Rattner is responsible for leading Intel's technical policy and standards efforts, microprocessor, communications, and systems technology labs, and Intel Research.
Rattner joined Intel in 1973, and was named the company's first Principal Engineer in 1979 and an Intel Fellow in 1988. He has received two Intel Achievement Awards, Intel's highest employee honor, for his work in high-performance computing and advanced cluster communication architecture.
In 1989, Rattner was named Scientist of the Year by R&D magazine for his leadership in parallel and distributed computer architecture. He was featured as Person of the Week by ABC World News in December 1996 for his visionary work on the Department of Energy ASCI Red System, the first computer to sustain 1 trillion operations per second, and the fastest computer in the world for four years. In 1997, Rattner was honored as one of the Computing 200, the 200 individuals who have had the greatest impact on the U.S. computer industry, and was subsequently profiled in the book Wizards and Their Wonders.
Good News, Bad News
For someone who's been involved in the HPC domain for decades, Rattner has a measured perspective on the current state of high performance computing. "It's kind of a good news, bad news story," he says. "The technology of high performance computing is certainly more accessible than it was in the early part of the last decade."
Ten years ago, entry prices for HPC systems were in the hundreds of thousands of dollars, but for the most part these were million-dollar-plus machines. Rattner says that much of what we see today, in what can legitimately be called high performance computing, is done on much lower cost platforms. The growth of clusters, high-density rack-mount systems, and blade servers has made the hardware more affordable. And the software environment is more readily available and much more complete than it was in the 1990s.
"When we talk to various customers and partners, I'm struck by how high performance computing has become just a routine part of their information technology," says Rattner. "That familiarity is really one of the good news stories. Increasingly you hear about HPC being in the loop. We've recently gotten involved in BMW and their Formula 1 activities. HPC is an integral part of Formula 1 and auto racing, in general. It's also become de rigueur for yacht racing. You just find it in so many different areas; it's not just confined to government research institutions. So, from my perspective, that's the good news."
And the bad news? Along with many in the HPC community, Rattner laments the lack of software advancements.
"A few years ago, when Intel asked me to look at what we should be doing in HPC, I was struck by how little progress had been made on the programming front," says Rattner. That's my big disappointment. The technologies that were popular a decade or more ago are still in widespread use today. We're still programming in MPI and still working on technologies like OpenMP. I had hoped and expected that after a decade or more we really would have made some fundamental advancements on the software side."
The HPC community has been struggling with software since supercomputers first appeared in the 1970s. The specialized nature of the market never supported a critical mass of developers that would have provided a comfortable software base. Government-sponsored initiatives, such as DARPA's High Productivity Computing Systems program, are attempting to address this issue, but Rattner believes that software innovation will probably come from outside of the high performance computing community.
"I think that HPC probably won't drive the fundamental advancements in parallel programming," says Rattner. "I think it had that opportunity, but that window of leadership is rapidly closing. The advent of multi-core processors in the high volume spaces is probably going to do more. It's certainly going to attract a lot more investment in creating powerful solutions to the programming problem -- largely out of necessity. If these new architectures are going to be successful, a lot of people are going to have to program them and they're not going to be satisfied with the kinds of tools available in HPC today."
He predicts that the demand for multi-core, multi-threaded solutions in non-HPC sectors, such as consumer electronics, will push industry and academia to develop new solutions to make these highly parallel architectures widely programmable. This would relieve some of the burden from the traditional HPC community.
"As the volume applications for parallelism begin to build -- and a lot of them will be on the consumer side, such as the entertainment spaces -- that portends some major advancements in the programming side, in operating systems and tools arena," says Rattner.
And maybe that's OK. Certainly if commodity software solutions followed commodity hardware into the HPC domain, it would be a boon for everyone.
Along these lines, Rattner sees Microsoft's entry in the HPC market, with their upcoming Windows Compute Cluster Server 2003 offering, as a positive development for the community. He believes Microsoft understands that the development of tools to make these systems more usable is fundamental to making high performance computing more accessible.
"I think Microsoft has a very informed view of this," says Rattner. "They seem highly committed to bring their formidable capabilities in the programming tools area to bear on programming these machines. So I think that's good news for high performance computing."
Getting to the Core
Intel certainly hopes that the software for parallel programming matures quickly since all of the company's future processors are destined to be multi-core. The move to this new architectural model reflects a general awareness that pushing single CPU performance is yielding diminishing returns. "The microprocessor industry has realized that there are constraints on our ability to increase single thread performance at the historical levels -- the Moore's law kind of rate," observes Rattner.
"A small aside about Moore's law," says Rattner. "There's always been a bit of confusion about it. Gordon [Moore] wasn't talking about performance; he was talking about transistors. In that sense, we see at least another decade of advancement in our ability to put a large number of transistors onto a single chip. But our ability to extract performance from a single execution thread is going to become more and more difficult. So we're just naturally forced to turn to other forms of parallelism, which leads us to multithreading, and then, to multi-core. That's what the physics is dictating. So that's the track we're going to be on for the foreseeable future."
Rattner says the problem then becomes how to program it.
"When you're just talking about a couple of cores or a couple of threads, that's reasonably tractable," he explains. "We can talk about foreground processing and background processing. We all know that the core count is going to rise, largely as a function of Moore's law. We'll be at tens of threads and then hundreds of threads and we'll be facing all the classic issues that the HPC community has faced for decades."
Intel's new Core microarchitecture, introduced at the IDF last month, is a turning point for the company. The new architecture, which will be incorporated into all x86 microprocessor platforms, shifts the emphasis from pure performance to energy-efficient performance.
"2006 is one of the most significant years in our history," declares Rattner. "Not since the introduction of the Pentium Pro -- and some would argue, not since the introduction of the Pentium -- have we really made such a profound change in the microarchitecture. And that change is not limited to one market segment. This is really huge!"
Rattner believes Intel's future very much depends on making this architectural transition successful. Right now the challenge for the company is coordinating all the technologies that are required to bring this off. Not only is it a brand new architecture, but it's going to be implemented using the new 65nm process on 300mm wafers. And all this needs to take place in a very short period of time. Most of the new products, such as the Woodcrest server platform, are scheduled to arrive in the May to July timeframe, although some of the introductions will not be until Q4.
"It's not that it's just this raft of new products," says Rattner. "For us, it represents such a fundamentally new direction microarchitecturally. And one that's very much focused on energy efficiency along with high performance -- what we call energy-efficient performance. That's more that just a marketing catchphrase."
Since nobody is really interested in reversing performance gains for the sake of using less power, the question for Intel became: how do you deliver high performance at energy-efficiency levels that people have previously associated with much lower performing processors?
"The products that will come to market this year started their design life about four years ago," says Rattner. "It takes about four years to go from product definition to validated production-worthy silicon. We didn't wake up six months ago or last year and say 'Wow, it's energy efficiency!' We knew we had to make this turn for some time.
"My predecessor, the previous CTO, Pat Gelsinger, in 2001 at the Solid State Circuit Conference talked about the power wall. We began to talk about the need to rethink what we were doing in terms of energy efficiency, because the future was really looking scary. The thermal dissipation levels were just soaring and we knew we couldn't continue on that trajectory. The real breakthrough was a result of our effort to build a unique microarchitecture for the mobile platform -- what became Pentium M. As a result of that effort, we came to understand how to achieve high performance with high levels of energy efficiency. That represents a major change in approach for us."
Rattner says that systems with the new Intel Core microarchitecture have been in the hands of key developers for some time and, according to him, the feedback has been uniformly positive and enthusiastic. But he also realizes that with an architectural change of this magnitude, Intel has to accept a certain amount of risk.
"It an enormous change -- probably one of the biggest product changes in Intel's history -- and any change implies risk," admits Rattner. "But we are so confident in the new microarchitecture that we have quite literally put the full faith of the company behind it."
Next week, in the second part of our conversation with Justin Rattner, he shares his views on open-source hardware and talks about the past, present and future of the Itanium processor.