January 08, 2009
The hardware and software challenges of multicore/manycore CPUs have been flogged in this publication for a number of years. The assumption was that geek ingenuity would eventually power through the roadblocks. The memory wall problem would yield to innovative hardware architectures, and new software development approaches would make multithreaded computing practical enough for widespread use. But what if that doesn't happen?
There's a good article in the January/February 2009 issue of Technology Review that outlines multicore computing challenges and talks about some of the software strategies being pursued by Intel, Microsoft and others in the industry. But the most interesting part of the article is toward the end, where the author allows for the possibility that the whole multicore paradigm may just fall apart:
So what's the downside if multicore computing fails? What is the likely impact on our culture if we take a technical zig that should have been a zag and suddenly aren't capable of using all 64 processor cores in our future notebook computers?
For a positive spin on this outcome, the author quotes Apple Computer co-founder Steve Wozniak, who apparently believes the end of Moore's Law-driven microprocessor evolution would be a good thing:
"I can't wait!" says Steve Wozniak, the inventor of the Apple II. "The repeal of Moore's Law would create a renaissance for software development," he claims. "Only then will we finally be able to create software that will run on a stable and enduring platform."
Of course, the other way to create a stable platform is to build scalability into the software model so that the number of cores is transparent to the application. The idea is that jumping from 8 to 64 cores automatically gives an application better performance, without recoding or even recompilation. That's the thrust behind the work Intel, Microsoft and university researchers are doing today.
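The article doesn't show what core-count transparency looks like in practice, but the idea is simple enough to sketch. The following Go snippet (an illustrative example, not drawn from the Intel/Microsoft work) sizes its worker pool from the core count discovered at run time, so the same binary spreads its work across 8 or 64 cores with no recoding or recompilation:

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// sumSquares spreads the work across one goroutine per available
// core. Because the worker count comes from runtime.NumCPU(), the
// same binary scales up on a machine with more cores.
func sumSquares(n int) int {
	workers := runtime.NumCPU() // discovered at run time, not compile time
	partial := make([]int, workers)
	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func(w int) {
			defer wg.Done()
			// Strided partition of [0, n): worker w handles i = w, w+workers, ...
			for i := w; i < n; i += workers {
				partial[w] += i * i
			}
		}(w)
	}
	wg.Wait()
	total := 0
	for _, p := range partial {
		total += p
	}
	return total
}

func main() {
	fmt.Println(sumSquares(1000)) // 332833500 regardless of core count
}
```

The catch, of course, is that this only works when the problem decomposes this cleanly in the first place, which is exactly the limitation May raises below.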
Some industry luminaries, like Professor David May at Bristol University, think replicating cores using legacy architectures is the real problem, given that conventional CPUs like the x86 were never designed for parallel processing. He elaborated his position in October in an Electronics Weekly article on the pitfalls of multicore programming:
Current attempts to use multi-cores in the mainstream computing world, like the efforts made by Intel and Microsoft with a bunch of US universities, may be doomed. "I think they (Intel and Microsoft) are trying to solve a different problem," said May, "they're taking all the PC applications and putting them on multi-cores. That's a very different problem and, in my view, they won't be very successful. Taking sequential programmes and trying to make them run in parallel is virtually impossible."
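May's "virtually impossible" claim comes down to loop-carried dependences. A minimal illustration (my example, not May's): in a running-sum loop, every iteration consumes the previous iteration's result, so the iterations can't simply be handed out to different cores.

```go
package main

import "fmt"

// prefixSums has a loop-carried dependence: iteration i needs the
// sum produced by iteration i-1, so a compiler cannot naively
// distribute the iterations across cores.
func prefixSums(in []int) []int {
	out := make([]int, len(in))
	sum := 0
	for i, v := range in {
		sum += v // depends on every earlier iteration
		out[i] = sum
	}
	return out
}

func main() {
	fmt.Println(prefixSums([]int{3, 1, 4, 1, 5})) // [3 4 8 9 14]
}
```

Parallel prefix-sum (scan) algorithms do exist, but they require restructuring the algorithm itself, which is rather May's point: the parallelism has to be designed in, not retrofitted automatically.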
May is also the CTO of XMOS Semiconductor, a company that has developed a multicore architecture that uses "software defined silicon" to combine some of the best attributes of ASICs and FPGAs. The resulting processor is aimed at the consumer electronics market.
Perhaps along the same lines is Creative Technology's just-announced Zii processor, which also claims to use software defined silicon in its newly minted 10 gigaflops chip. Like the XMOS silicon, Zii is targeted for the consumer space, although the Web site video hyperventilates about building a petaflop supercomputer with a mere six racks of Zii processors. Maybe if they were IBM, they'd actually attempt it.
In any case, for most kinds of client-side computing, the x86 architecture may truly be a dead end. Since the Internet became the center of the computing universe, PCs have been morphing from general-purpose computing appliances to thin clients. This will continue as more and more computing is moved into the cloud. As clients get ever thinner, the main computing load is data transcoding, which generally can be accomplished with greater efficiency using more specialized silicon like GPUs, FPGAs, DSPs and maybe these new-fangled software defined silicon gadgets. In that sense, PCs are becoming more like handheld devices.
Where would that leave server-side computing, especially HPC? For throughput and capacity computing, CPU-based architectures still offer a reasonably natural fit. But for many HPC applications, and for capability supercomputing in particular, the inherently parallel architectures of GPUs, Cell processors and FPGAs offer a better match (although a CPU companion is still needed at this point). The high level of interest in GPGPUs, Cell processors and FPGAs is one indication that supercomputing might be turning away from conventional CPUs.
Economics will dictate that mainstream HPC will continue to rely on the same processor architectures used in consumer electronics. But one day, those chips may be something other than x86.
Posted by Michael Feldman - January 08, 2009 @ 4:49 PM, Pacific Standard Time
Michael Feldman is the editor of HPCwire.