August 18, 2009
In these times of energy resource and climate change worries, green computing continues to be on the minds of high performance computing practitioners and providers. Green computing is mostly about energy efficiency, that is, performance per watt, but also encompasses other aspects like reuse, biodegradability, and optimal resource use in general. But the more I hear about it, the more I realize it's really about the economics of computing rather than any environmental sensitivity.
As John Gustafson has noted, "HPC users are not tree huggers." Like many in the industry, he believes the goal is not to reduce energy use and other resource costs per se, but to maximize computing within a fixed budget. If that's the case, then this is basically just another element of the Total Cost of Ownership (TCO).
But from a marketing point of view, the term "green computing" has a lot more of a Mom-and-apple-pie sound to it than, say, "TCO-optimized computing." So it's not surprising that every chipmaker, storage provider, interconnect company, and system vendor is selling green these days. Of course, it's no guarantee of success. SiCortex, the cluster vendor that made green computing the centerpiece of its business, went belly up this year when it failed to attract the VC funding it needed to continue operations.
So with all this newfound love of all things green, what are the results? That depends on how you measure them. Certainly x86 chips are getting more efficient with each processor generation. Intel's Nehalem chips are advertised as delivering twice the performance per watt of the previous-generation Penryn processors, but at the system level that gain gets diluted significantly. For example, in the June 2008 Green500 list, the most energy-efficient Intel-based (presumably Penryn) clusters achieved 220 to 240 megaflops/watt, while in the June 2009 list, the top Nehalem-based clusters topped out at 250 to 270 megaflops/watt -- about a 10 percent increase.
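For the curious, the system-level gain can be sanity-checked from the Green500 figures quoted above. This is just back-of-the-envelope arithmetic on the published megaflops/watt ranges; the midpoint comparison is my own choice of yardstick, not anything the Green500 itself reports:

```python
# Back-of-the-envelope check of the Penryn-to-Nehalem system-level
# efficiency gain, using the Green500 ranges quoted above
# (all figures in megaflops per watt).
penryn_range = (220, 240)   # top Intel-based clusters, June 2008 list
nehalem_range = (250, 270)  # top Nehalem-based clusters, June 2009 list

def midpoint(r):
    """Midpoint of a (low, high) range."""
    return sum(r) / 2

# Relative gain comparing range midpoints (230 vs. 260 MF/W).
gain = midpoint(nehalem_range) / midpoint(penryn_range) - 1
print(f"System-level efficiency gain at the midpoints: {gain:.0%}")
```

Depending on which endpoints you compare, the gain works out to anywhere from roughly 4 percent (250 vs. 240) to over 20 percent (270 vs. 220), so "about 10 percent" is a fair rough characterization -- and in any case far short of the 2x improvement advertised at the chip level.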
In fact, the average efficiency for the whole Green500 also increased by 10 percent compared to last year. During that same period, the aggregate power of the list increased by 15 percent. The conclusion of the Green500 crowd is that "while the supercomputers on the Green500 are collectively consuming more power, they are using the power more efficiently than before." The other conclusion that could be drawn is that the gains realized in energy efficiency are not keeping up with computing demand.
Keep in mind that the Green500 measurements are based on either Linpack or peak performance numbers, not actual applications. Therefore, real-world energy efficiencies are potentially much higher, given that a lot of the power smarts built into these new chips and the servers constructed around them have to do with reducing power at idle or partial load -- something not likely to occur during a Linpack run.
Having said that, my instinct is that energy use in HPC and the broader industry will continue to grow, despite more efficient infrastructure. Computing demand seems insatiable right now and I don't see any end in sight. And since computing is a high value commodity relative to its energy inputs, the economic incentive will continue to be in favor of more computing.
That doesn't mean energy efficiency isn't worthwhile. For individual datacenters, minimizing energy use is a big motivator since there are practical limits to increasing power to a particular site. Also, energy and cooling costs are becoming (or in some cases have already become) the largest expense over the lifetime of a system. Dan Reed's recent blog about how the new focus on Power Usage Effectiveness (PUE) is changing the way these facilities are being designed points to the fact that the cost ratio of computing infrastructure to energy is inverting.
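For readers unfamiliar with the metric, PUE is simply total facility power divided by the power delivered to the IT equipment, so a PUE of 1.0 would mean every watt entering the building reaches the computing gear. A minimal sketch of the calculation, using made-up illustrative numbers (the kilowatt figures below are hypothetical, not from any real facility):

```python
# Power Usage Effectiveness (PUE) = total facility power / IT equipment power.
# A PUE of 1.0 is the ideal; datacenters of this era commonly ran at 2.0
# or worse. The figures below are purely illustrative.
total_facility_kw = 1500.0  # hypothetical: IT load plus cooling, power
                            # distribution losses, lighting, etc.
it_equipment_kw = 1000.0    # hypothetical compute/storage/network load

pue = total_facility_kw / it_equipment_kw
print(f"PUE = {pue:.2f}")  # prints "PUE = 1.50"
```

In this sketch, a third of the facility's power never reaches the computing equipment -- exactly the kind of overhead that the design changes Reed describes are meant to squeeze out.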
All of this is driven by the economic realities of maintaining these facilities as more and more computing capability is stuffed into them. If the industry needs to feel good about itself by calling it green computing, so be it. It's all TCO to me.
Posted by Michael Feldman - August 18, 2009 @ 6:13 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.