August 03, 2007
There seems to be more talk these days about how parallel computing is changing the IT universe. Maybe this is just the result of people echoing one another, making it look like everyone came to the same conclusion simultaneously. I've encountered enough of that not to dismiss it as a possibility. In this case though, I think there is a genuine intersection of thought on the emergence of parallel computing that goes beyond people just hopping on the multicore bandwagon.
At least we appear to be entering the acceptance stage of parallel computing. The community has spent enough time in the denial, anger, bargaining, and depression phases. I guess this implies we were in the grieving process. But what was all the grieving about anyway? The loss of sequential processing? In retrospect, it seems like an unfortunate waste of time.
What is really encouraging is that virtually all of the big IT vendors are embracing parallel computing. There even appears to be a general consensus on where this revolution is taking us.
At Microsoft's Financial Analyst Meeting last week, Craig Mundie presented his vision of how computing is evolving away from the traditional desktop system. Mundie, Microsoft's Chief Research & Strategy Officer, admitted that the PCs almost everyone uses today are not all that productive and that this is inevitably going to change. Considering that Microsoft made its fortune on the desktop, this was a candid admission that the company is prepared to undertake some radical changes. It also puts the myriad computing vendors who live in the Microsoft ecosystem on notice that they must be prepared to do the same.
Mundie proposed that the rise of multicore processors and more powerful communication technologies will end applications as we know them and spawn a new computing paradigm. Essentially the traditional desktop will be pulled apart by client computing (mobile devices, thin client desktops, embedded computing devices) and cloud (utility) computing. Exactly where this split occurs is subject to some controversy, and Mundie offered little guidance on where the balance will be achieved.
In any case, because serial code cannot take advantage of multicore and multiprocessor architectures, single-threaded applications will no longer dominate the computing landscape. It's not that word processing and spreadsheet programs will disappear; they will just cease to be recognizable as such. Those kinds of functions will be incorporated into intelligent agents that act more like people. These adaptive, context-aware applications will require much more computing power than is available on a processor with just a handful of cores (a simple sketch after the quote below makes the serial-versus-parallel contrast concrete). Said Mundie:
[A]s the microprocessor has grown dramatically in capability, as has the whole system, the concept of the app hasn't fundamentally changed that much. And so the question that looms in my mind for Microsoft and ultimately for the industry is: What are those future applications and what might they look like? And in fact, can we move to use all of the power that's there -- not just to make them responsive to a new class of demand from you, but ultimately to do things for you that are more like what people who help you do for you?
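To make that serial-versus-parallel contrast concrete, here's a minimal sketch of my own (nothing from Mundie's talk, and written in today's C++ for brevity): the same summation coded the way most applications are written now, single-threaded, and again split across whatever cores the machine offers. Only the second version gets faster as core counts climb.

    // A serial loop uses one core no matter how many are available;
    // dividing the work across threads is the simplest way to let a
    // multicore chip contribute.
    #include <algorithm>
    #include <iostream>
    #include <numeric>
    #include <thread>
    #include <vector>

    int main() {
        const std::size_t n = 1000000;
        std::vector<double> data(n, 1.0);

        // Serial version: single-threaded, so extra cores sit idle.
        double serial_sum = std::accumulate(data.begin(), data.end(), 0.0);

        // Parallel version: one worker per hardware thread, each summing
        // its own slice; the partial results are combined at the end.
        unsigned workers = std::max(1u, std::thread::hardware_concurrency());
        std::vector<double> partial(workers, 0.0);
        std::vector<std::thread> pool;
        const std::size_t chunk = n / workers;
        for (unsigned w = 0; w < workers; ++w) {
            std::size_t begin = w * chunk;
            std::size_t end = (w + 1 == workers) ? n : begin + chunk;
            pool.emplace_back([&partial, &data, begin, end, w] {
                partial[w] = std::accumulate(data.begin() + begin,
                                             data.begin() + end, 0.0);
            });
        }
        for (auto& t : pool) t.join();
        double parallel_sum =
            std::accumulate(partial.begin(), partial.end(), 0.0);

        std::cout << serial_sum << " == " << parallel_sum << '\n';
    }

The point isn't the arithmetic; it's the bookkeeping. Partitioning the work, coordinating the workers, and combining the results is exactly the kind of complexity mainstream programmers have so far been able to ignore.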
The application profile Mundie describes sounds similar to Intel's recognition, mining, and synthesis (RMS) applications, which I've written about before. Like Microsoft, Intel predicts these new applications will start to take hold in earnest over the next five to ten years. Perhaps not coincidentally, that dovetails nicely with Intel's plans to deliver terascale processors and on-chip optical communications in the same general timeframe.
The world described by Mundie points to a future where HPC and traditional computing have pretty much merged. Heterogeneous, highly parallel processors will be standard-issue microprocessors, he says, with the heterogeneity there to allow fine-tuning for different types of computing workloads. If I didn't know better, I might think I was reading a Cray briefing on its Adaptive Computing strategy. Since Mundie himself came from a supercomputing background (he co-founded Alliant Computer Systems in 1982), he should be well aware of the implications of these new technologies. Explained Mundie:
[T]he world is going to move more and more away from one CPU that is multiplexed to do everything, to many CPUs, and perhaps specialty CPUs. This is not the world that the programmers target today. This kind of complexity was historically reserved only for the wizards who wrote the core operating system; or, in the world of supercomputing in science and engineering, people who had the ultimate requirement for computational performance built big machines like this and have used them to solve some of the world's tough computational problems. That was always a niche part of the industry.
As if to prove his point, at the same time Mundie was offering his vision of the future of computing, AMD was busy filling in some of the details. At AMD's 2007 Technology Analyst Day last week, execs provided a glimpse into the company's two-year technology roadmap.
AMD is on schedule to ship its first 65nm "Barcelona" quad-core processors this month and a 45nm version named "Shanghai" in mid-2008. By 2009, the company intends to deliver octal-core processors, code-named "Sandtiger." Also in 2009, AMD is planning to release its first heterogeneous processor, a CPU-GPU hybrid, called "Falcon." It will consist of five cores: four CPUs and a single GPU.
To help ease the transition to parallel computing, AMD will deliver hardware extensions for software parallelization, called xSP. Developed through an open specification process, xSP will give software developers access to the capabilities of multicore processors. The extensions will accelerate software transactional memory, cross-core communication, fast context switching for lightweight parallelism, and lightweight profiling. AMD also announced plans to deliver instruction set extensions that accelerate compute-intensive workloads, including high performance computing, multimedia applications, and security.
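For the curious, here's a rough sketch (mine, not AMD's, and again in today's C++) of the pattern that software transactional memory generalizes. An optimistic retry loop on a single atomic word does in miniature what STM does for whole data structures: read, compute, and commit only if nothing changed underneath you. Extensions like the ones AMD describes are aimed at making that read-compute-commit cycle cheap at larger scales.

    #include <atomic>
    #include <iostream>
    #include <thread>
    #include <vector>

    std::atomic<long> balance{0};

    // Optimistic update: no lock is ever held. If another thread commits
    // first, compare_exchange_weak fails, refreshes 'observed' with the
    // current value, and we simply retry the computation.
    void deposit(long amount) {
        long observed = balance.load();
        while (!balance.compare_exchange_weak(observed, observed + amount)) {
            // 'observed' now holds the latest value; loop and try again.
        }
    }

    int main() {
        std::vector<std::thread> pool;
        for (int i = 0; i < 8; ++i)
            pool.emplace_back([] {
                for (int j = 0; j < 100000; ++j) deposit(1);
            });
        for (auto& t : pool) t.join();
        std::cout << balance.load() << '\n';  // prints 800000, lock-free
    }

Scale that retry logic up to arbitrary sets of reads and writes and you have a transaction, which is why hardware help for detecting conflicts and profiling contention matters so much for lightweight parallelism.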
When Microsoft, Intel, AMD, and Cray all seem to be heading in the same direction, it means something. And, of course, they're not the only ones. IBM, Sun, HP, SGI, and many others are also pushing the parallel computing front forward. The hand-wringing and hyperventilation seem to be coming to a close. In ten years, we'll wonder what all the fuss was about.
As always, comments about HPCwire are welcomed and encouraged. Write to me, Michael Feldman, at firstname.lastname@example.org.
Posted by Michael Feldman - August 02, 2007 @ 9:00 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.