February 16, 2007
Here's a collection of highlights, selected totally subjectively, from this week's HPC news stream as reported at insideHPC.com and HPCwire.
Quantum computing was much in the news this week as Canadian tech start-up D-Wave Systems unveiled Orion, a 16-qubit superconducting adiabatic quantum computer processor. The commercial version of this early prototype system will ultimately be targeted at solving NP-hard problems that conventional digital computers have a hard time with.
There are lots of questions about the technology, however. First are the fundamental questions raised by some experts about whether there is enough evidence to show that the calculations taking place are actually quantum, and not just an exotic analog computation happening at 4 millikelvin. Then there are questions about whether the technology can scale by the factor of 1,000 or more needed to address problems too hard for today's computers to solve.
Still, most everyone agrees that this is an important step in a very interesting direction, and to its credit D-Wave is very open about both the questions and the promise of its technology. If you'd like to do your own digging, I recommend Scientific American's online coverage as a good place to start, along with this article by Ashlee Vance at The Register.
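For a concrete feel for the kind of problem an adiabatic machine like Orion targets, here's a toy sketch of my own (an illustration, not D-Wave's code or method in any detail). The adiabatic approach maps a problem onto finding the lowest-energy configuration of a set of coupled spins, an Ising-model ground state. The little C program below brute-forces a random 16-spin instance; at this size the 2^16 configurations are trivial to enumerate, but the search space doubles with every spin added, which is exactly why the scaling question above matters.

    /* Toy illustration: the style of optimization an adiabatic quantum
     * annealer targets. Find the ground state (minimum-energy spin
     * configuration) of a random 16-spin Ising instance by brute force.
     * The fields and couplings are invented for illustration.
     */
    #include <stdio.h>
    #include <stdlib.h>

    #define N 16

    int main(void)
    {
        double h[N], J[N][N];
        srand(42);

        /* Random fields and pairwise couplings in [-1, 1]. */
        for (int i = 0; i < N; i++) {
            h[i] = 2.0 * rand() / RAND_MAX - 1.0;
            for (int j = i + 1; j < N; j++)
                J[i][j] = 2.0 * rand() / RAND_MAX - 1.0;
        }

        double best_e = 1e9;
        unsigned best_state = 0;

        /* Enumerate all 2^N spin configurations. */
        for (unsigned state = 0; state < (1u << N); state++) {
            double e = 0.0;
            for (int i = 0; i < N; i++) {
                int si = (state >> i) & 1 ? 1 : -1;
                e += h[i] * si;
                for (int j = i + 1; j < N; j++) {
                    int sj = (state >> j) & 1 ? 1 : -1;
                    e += J[i][j] * si * sj;
                }
            }
            if (e < best_e) { best_e = e; best_state = state; }
        }

        printf("ground-state energy %.4f, configuration 0x%04x\n",
               best_e, best_state);
        return 0;
    }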
>>The International Solid-State Circuits Conference
Much of the IT news reported this week came out of presentations at the ISSCC (International Solid-State Circuits Conference) in San Francisco. The major chip companies were all showcasing their technology futures.
Intel gave us more details on its 1 TFLOPS, 80-core experimental chip. Yes, the chip has only a 32-bit address space, and yes, its circuitry is dramatically simplified (about one-third the transistor count of conventional Intel chips). But Intel's advance is important in that it's spurring a whole new conversation about what operating systems and software might look like if they didn't have to spend so many millions of lines of code managing what used to be a scarce resource: the compute core.
AMD's discussions of its Barcelona quad-core offering focused on its claim that the part performs 40 percent better than Intel's quad-core line, and on its innovations in power and thermal management. Among other features, Barcelona chips power down memory logic when it isn't in use and employ clock gating to shut down idle areas of the chip.
IBM was talking about Power6, where its approach is to improve performance by cranking the clock up to nearly 5 GHz. This is clearly a contrarian approach. I understand that the move to hafnium-juiced chips will help stave off the fundamental physics problems IBM is going to encounter on this path, but the approach appears to have a much shorter lifespan than the one IBM's chip competitors are taking, and I wonder whether this isn't simply buying time while the company adjusts its path forward.
In a much more interesting move, IBM announced an evolution in computer memory technology that may enable it to put up to three times more memory on the same chip as the processor. IBM says it has been able to speed up DRAM to the point that it's nearly as fast as SRAM, enabling DRAM to replace SRAM as the choice for on-chip memory.
>>Electricity use by servers in the U.S. doubles
There was a lot of coverage in the IT press this week of a new Lawrence Berkeley study of server power consumption, commissioned by AMD. It's now estimated that servers account for 1.2 percent of all electricity use in the US (about the same as all the color TVs in the country), at a cost of about $2.7B. More troubling for the global warming crowd, the study shows that electricity use by servers doubled from 2000 to 2005. You can find the entire study in PDF form at http://enterprise.amd.com/Downloads/svrpwrusecompletefinal.pdf.
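For scale, that doubling implies an annual growth rate of roughly 15 percent; a quick back-of-envelope of my own (not a figure from the study):

    /* Back-of-envelope: if server electricity use doubled between 2000
     * and 2005, what compound annual growth rate does that imply?
     * growth = 2^(1/5) - 1, roughly 15 percent per year.
     */
    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        double years = 5.0;
        double rate = pow(2.0, 1.0 / years) - 1.0;
        printf("implied annual growth: %.1f%%\n", rate * 100.0); /* ~14.9% */
        return 0;
    }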
Stream Processors, Inc. started talking this week about its new stream processor for digital signal processing. The chip contains two MIPS cores (one for Linux-level tasks and I/O, the other for real-time DSP work) in addition to a "data parallel unit" that can offload hefty tasks using VLIW and SIMD techniques (tip of the hat to Chris Aycock for that one).
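SPI hasn't published a toolchain I can speak to here, but to make "data parallel unit" concrete: the core idea of SIMD is that one instruction operates on several data elements at once. Here's a generic illustration in C using x86 SSE intrinsics, standing in for (not representing) SPI's hardware:

    /* Generic SIMD illustration (x86 SSE intrinsics): apply a gain to a
     * buffer of samples four at a time, the style of data-parallel work
     * a stream processor's hardware does on a much larger scale.
     * This is not SPI's toolchain, just the general idea.
     */
    #include <stdio.h>
    #include <emmintrin.h>   /* SSE/SSE2 intrinsics */

    #define LEN 16

    int main(void)
    {
        float in[LEN], out[LEN];
        for (int i = 0; i < LEN; i++)
            in[i] = (float)i;

        __m128 gain = _mm_set1_ps(0.5f);   /* broadcast 0.5 to all four lanes */

        /* Scale four samples per instruction instead of one at a time. */
        for (int i = 0; i < LEN; i += 4) {
            __m128 v = _mm_loadu_ps(&in[i]);
            _mm_storeu_ps(&out[i], _mm_mul_ps(v, gain));
        }

        for (int i = 0; i < LEN; i++)
            printf("%.1f ", out[i]);
        printf("\n");
        return 0;
    }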
Some in the IT world started wondering where all the hafnium Intel and IBM are talking about will come from once it's being used to make the world's computer chips. It seems that only 50 tons are produced worldwide each year. Not to worry, says IBM Chief Technologist Bernard Meyerson in a piece carried by Reuters. The hafnium in one cubic centimeter could be spread across 10 football fields' worth of silicon wafers. "That assumes a 50-atom-high pile of it," said Meyerson, "which frankly would be an extraordinarily large amount for materials like this one." Whew: dodged that one.
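The broader point is easy to check with a little arithmetic of my own (hafnium's density, about 13.3 g/cm^3, is the only input): the entire annual world output comes to just a few cubic meters of metal, while the gate layers that use it are only nanometers thick.

    /* Back-of-envelope: what volume is the world's annual hafnium
     * output? Density of hafnium is about 13.3 g/cm^3, so 50 metric
     * tons comes to only a few cubic meters of metal.
     */
    #include <stdio.h>

    int main(void)
    {
        double tons = 50.0;            /* annual production, metric tons */
        double grams = tons * 1.0e6;
        double density = 13.3;         /* g/cm^3 */
        double cm3 = grams / density;
        printf("%.0f tons of hafnium = %.2e cm^3 (%.1f m^3)\n",
               tons, cm3, cm3 / 1.0e6);
        return 0;
    }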
Several new systems came online this week, including the largest shared-memory system in Canada. The 5 TFLOPS SGI Altix 4700 system will be used by the Réseau québécois de calcul de haute performance (RQCHP) at the University of Montreal for research in physics, chemistry, engineering, medicine, computer science, biochemistry, bioinformatics, and several other fields.
John West summarizes the headlines in HPC every day at insideHPC.com, and writes on leadership and career issues for technology professionals at InfoWorld and on his own blog at http://onlytraitofaleader.com/. You can contact him at firstname.lastname@example.org.