July 07, 2006
It's been an "early" summer here in San Diego this year. Our typical Pacific Ocean marine layer, which normally keeps the area naturally air-conditioned in May, June and most of July has failed us this year. Temperatures have soared into the nineties in recent weeks and ocean temperatures are already in the seventies. I was reminded of the unusual warm weather as I worked on my computer over the July 4th weekend, with the constant whir of the cooling fan in the background.
I'm not bringing this up to remind everyone about global warming. There's another "inconvenient truth" worth talking about -- the collision between rising energy costs and increasing energy consumption by ever more powerful computers. Recent articles in this publication and elsewhere have discussed the problems associated with powering and cooling high-end computers. In fact, it's hard to find an HPC story that doesn't mention the power crisis in supercomputing.
Last week's article on the limits of high performance computing, by John Gustafson of ClearSpeed, illustrates how energy costs have started to dominate overall computing expenditures. He points out that the cost of running Google's server farms is the company's biggest line-item expense. This probably explains why the company is constructing an enormous (and mysterious) computing facility on the banks of the Columbia River, where access to cheap hydroelectric power has apparently acted as a powerful incentive. Oak Ridge National Laboratory's buildup of its supercomputing infrastructure over the next few years, culminating in a petaflops system in 2008, will certainly benefit from its proximity to the energy resources of the Tennessee Valley Authority.
Gustafson's comparison of energy costs -- 5 cents per kilowatt-hour at the Pacific Northwest National Laboratory versus 23 cents per kilowatt-hour at the Maui High Performance Computing Center -- is a powerful example of how economic geography is affecting the price of computing. Unfortunately, Maui is doubly penalized. Not only are energy costs much higher there, but the constantly warm climate adds to the overall cooling load of the facility and the computers. The increasing importance of electricity costs points to a new reality that will influence how and where supercomputers are deployed in the future.
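To put those two rates in perspective, here's a back-of-the-envelope sketch in Python. The 1-megawatt system size is my own assumption for illustration, not a figure from Gustafson's article:

    # Rough annual electricity bill for a hypothetical 1 MW system,
    # compared at the two rates Gustafson cites. The machine size is
    # assumed for illustration only.
    system_kw = 1000                  # assumed sustained draw: 1 MW
    hours_per_year = 24 * 365         # 8,760 hours

    for site, rate in [("PNNL", 0.05), ("Maui", 0.23)]:
        annual_cost = system_kw * hours_per_year * rate
        print(f"{site}: ${annual_cost:,.0f} per year")

Run the numbers and the same hypothetical machine costs about $438,000 a year to power in Washington state and roughly $2 million a year on Maui -- a difference of more than $1.5 million, before cooling even enters the picture.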
In this week's feature interview with Steve Scott, CTO of Cray, he discusses the supercomputing power consumption problem as well. Scott talks about three ways to address the problem: multi-core architectures, specialized processors that use less silicon per computation (for example, streaming processors, vector processors and FPGAs), and intelligent chip-level power management. Certainly the industry is headed down all three paths, and each approach promises a significant impact on overall energy consumption.
The recent CTWatch Quarterly article, "Designing and Supporting High-end Computational Facilities," written by Ralph Roskies (Pittsburgh Supercomputing Center) and Thomas Zacharia (Oak Ridge National Laboratory), devotes a good deal of attention to energy issues. They write:
"Power consideration begins with the ability of the utility company to deliver adequate power to the site from its substations. Be prepared for a shocked reaction from your utility company the first time you call and make your request, especially if you have never done this before."
In some areas of the country, like parts of California, I'm guessing that the request for an extra megawatt or two from the local utility company may be problematic. Energy-poor countries, like Japan, may have even greater limitations. For example, the recently installed Tokyo Tech TSUBAME supercomputer had to meet very strict power consumption requirements.
Roskies and Zacharia continue:
"The power costs must not only take into account the power needs of the computer, but also the cost of the cooling. As a rule of thumb, multiply the power consumption of the system alone by 35-40 percent to estimate the additional power consumption of the required cooling. Today's rates for power vary substantially over the country, ranging from under 3 cents/kwh to over 10 cents/kwh."
Hmm... I guess they haven't been to Hawaii recently.
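Still, their rule of thumb is a handy one. Here's a minimal sketch of how it plays out, again assuming a hypothetical 1-megawatt machine (my number, not theirs):

    # Apply the Roskies/Zacharia rule of thumb: cooling adds roughly
    # 35-40 percent on top of the system's own power draw.
    system_kw = 1000                  # assumed machine draw: 1 MW
    for overhead in (0.35, 0.40):
        total_kw = system_kw * (1 + overhead)
        print(f"{overhead:.0%} cooling overhead -> {total_kw:,.0f} kW total")

In other words, a 1 MW computer really demands 1.35 to 1.4 MW from the utility. And at Maui's 23 cents per kilowatt-hour, the cooling overhead alone works out to roughly $700,000 a year.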
But this brings up a bigger question: what portion of our national energy budget should we expect to allocate to computing infrastructure as we evolve toward an Information Society? Some projections have cyberinfrastructure consuming as much as 50 percent of our total energy use within 20 years. But a 2002 RAND Corporation report, titled "Electricity Requirements for a Digital Society," predicts less than five percent by 2020.
The RAND study concludes that the real energy limitations have to do with power quality and reliability:
"Increasing use of the Internet and other information and communications technologies (ICTs) marks a U.S. transition toward a 'digital society' that may profoundly affect electricity supply, demand and delivery. RAND developed four 20-year scenarios of ICT evolution (2001-2021) for the U.S. Department of Energy and assessed their implications for future U.S. electricity requirements. Increased power consumption by ICT equipment is the most direct and visible effect, but not necessarily the most important. Over time, the effects that ICTs have on energy management, e-commerce, telework, and related trends will likely be much more consequential. Even large growth in the deployment and use of digital technologies will only modestly increase U.S. electricity use over the next two decades. The more pressing concern for an emerging digital society will be how to provide the higher-quality and more-reliable power that ICTs demand."
But this still leaves me wondering. As our ancestors evolved into humans -- "information beings," so to speak -- the share of energy our brains used, relative to the rest of our bodies, continued to rise. Today our brains consume about 20 percent of our energy at rest. It seems reasonable to me that an economy based on information processing and knowledge discovery would also demand a bigger cut of the energy pie for its intellectual hardware.
Meanwhile, it looks like another warm weekend for San Diego. Maybe this time I'll just shut down the computer and head to the beach. Even the Information Society has to take a break once in a while.
As always, comments about HPCwire are welcomed and encouraged. Write to me, Michael Feldman, at firstname.lastname@example.org.
Posted by Michael Feldman - July 06, 2006 @ 9:00 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.