June 26, 2008
One of the final panel sessions at the International Supercomputing Conference (ISC) last week focused on "green" supercomputing, a term that encompasses both power efficiency and environmental responsibility. Barely an issue just a couple of years ago, today every IT vendor, HPC or otherwise, is selling green computing in one form or another. With overall IT power consumption expected to grow around 15 percent per year, and with pressure on datacenters to accommodate ever-larger systems, energy-conserving strategies have become a huge issue in HPC and in the IT industry in general.
Panel chair Horst Simon (LBNL) started the session by noting that even small energy cutbacks can yield large savings over a long period of time. He pointed out that the modest energy-saving measures instituted in the U.S. in the mid-'70s in response to the oil embargo netted $700 billion in savings over the ensuing 30 years. Although IT infrastructure consumes only about 0.8 percent of the energy used worldwide, that still cost $7.2 billion in 2005. Given the double-digit growth rate of IT power consumption, steps taken today could save billions of dollars over the next decade.
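The scale of the potential savings follows directly from the figures above. A minimal sketch, combining the $7.2 billion 2005 baseline with the roughly 15-percent annual growth rate cited earlier:

```python
# Rough projection implied by the article's figures: worldwide IT energy
# costs of $7.2 billion in 2005, growing ~15 percent per year.
cost = 7.2e9          # dollars per year, 2005 baseline
total = 0.0
for year in range(10):    # cumulative cost over the next decade
    total += cost
    cost *= 1.15          # ~15% annual growth in consumption
print(f"cumulative 10-year cost: ${total / 1e9:.0f}B")   # ≈ $146B
```

Against a cumulative bill on the order of $146 billion, even single-digit-percent efficiency gains are worth billions, which is Simon's point.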
Simon cited a Google report that determined that datacenter energy costs are starting to dominate lifecycle costs. According to this study, energy costs may eclipse acquisition costs for low-end servers after just two years of service. With that in mind, Google and Microsoft are building huge datacenters (tens of megawatts) along the Columbia River to take advantage of cheap hydroelectricity and to use the river water for cooling. Ten or fifteen lesser-known IT companies are building similar facilities elsewhere in anticipation of future demand for ultra-scale datacenters. "Clearly the industry is changing and something is going on with power and computing," Simon noted.
Virginia Tech's Wu-chun Feng was interested in green computing before it became fashionable. In 2002, Feng and his colleagues, then at Los Alamos National Laboratory, set out to develop an energy-efficient HPC system that required minimal cooling. The effort was born out of necessity: the datacenter available to them wasn't much more than a warehouse, with little access to cooling and mid-summer temperatures rising to 85-90F. Both power and space were limited. The goal was a highly reliable machine that could operate under these harsh conditions; performance was secondary.
In response, Feng's team developed a 240-node cluster, called Green Destiny, based on the highly energy-efficient Transmeta processor (1 GHz TM5800). The entire system used 3.2 kilowatts. The Transmeta chips weren't the fastest ever conceived; Green Destiny topped out at 101 gigaflops on Linpack, which even in 2002 would have placed it in the bottom half of the TOP500. Feng recalled they took some heat about the machine's low performance, prompting one colleague to joke that it "runs just as fast when it's unplugged." But the project was a success. In the two-year life of the system, there was no unscheduled downtime.
Other than interest in the exotic Transmeta hardware, Feng's work got little attention. In 2002, HPC was about performance at any cost. Oil was $25 a barrel and not many people were worried about power and cooling costs yet. The conventional wisdom was that Moore's Law would solve everything. "It's interesting to see in five and a half years how things have changed," said Feng.
Computing per Watt Has Been Solved
HPC veteran John Gustafson broke with conventional wisdom, declaring that the computing part of our machines is already highly energy efficient. He noted that the latest ClearSpeed gear delivers 4 gigaflops/watt, and Intel will soon achieve that in mainstream processors. According to Gustafson, the computational elements of a modern HPC system consume just a small fraction of the total power.
He illustrated this by pointing out that a typical Linpack run for a top 10 system uses the equivalent of 20 barrels of oil. The floating point calculations consume just 0.1 barrel of that; the rest goes to moving data from one point to another (although he admitted that includes on-chip data movement as well). With that in mind, Gustafson said the industry should now focus on the energy efficiency of data communication. He wants to replace flops with a new metric: "byps," or bytes per second. According to him, measuring byps per watt would give people a much better understanding of the energy efficiency of systems.
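Gustafson's proposed metric is easy to state in code. A minimal sketch, with all machine figures hypothetical and chosen purely to illustrate the two measures side by side:

```python
# Gustafson's proposed "byps" metric (bytes per second) normalized by
# power, alongside the familiar flops/watt. The machine figures below
# are hypothetical, used only to illustrate the comparison.

def flops_per_watt(peak_flops, power_watts):
    return peak_flops / power_watts

def byps_per_watt(bytes_per_sec, power_watts):
    return bytes_per_sec / power_watts

peak = 100e12        # 100 teraflops peak (hypothetical)
bandwidth = 50e12    # 50 TB/s aggregate data movement (hypothetical)
power = 1.2e6        # 1.2 MW total power (hypothetical)

print(f"{flops_per_watt(peak, power) / 1e6:.0f} megaflops/watt")
print(f"{byps_per_watt(bandwidth, power) / 1e6:.0f} MB/s per watt")
```

The point of the second metric is that two systems with identical flops/watt can differ enormously in how much energy they spend per byte moved, which is where Gustafson says most of the power actually goes.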
Wasteful power consumption in data communications is relatively easy to find. According to an IEEE Spectrum report, all the network interface cards (NICs) in the U.S. consumed an estimated 5.3 terawatt-hours of energy in 2005. Since all of IT consumes 200 terawatt-hours, the NIC devices alone represent about 2.6 percent of the power used by all the machines. Furthermore, since communication tends to be bursty, about 95 percent of this energy is wasted. Most of the time, the NIC is chewing up watt-hours waiting for the next data deluge.
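A back-of-the-envelope check of those NIC figures, using only the numbers cited above:

```python
# Back-of-the-envelope check of the NIC figures cited above.
nic_twh = 5.3        # TWh/year consumed by U.S. NICs (2005 estimate)
it_twh = 200.0       # TWh/year consumed by all of IT
share = nic_twh / it_twh          # fraction of IT energy going to NICs
wasted_twh = 0.95 * nic_twh       # ~95% spent idling between bursts
print(f"NIC share of IT energy: {share * 100:.1f}%")
print(f"energy wasted idling: {wasted_twh:.2f} TWh/year")
```

The share works out to the article's 2.6 percent, and roughly 5 TWh a year goes to interfaces that are mostly waiting for traffic.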
Gustafson maintains that computing is not going green to reduce energy use or shrink its carbon footprint, but to get more performance within a fixed power budget. "This is the inherent nature of HPC," said Gustafson. Improvements in performance per watt will go toward increasing performance, not reducing watts. After all, he said, "HPC users are not tree huggers."
Green Computing by Law
Because of Japan's limited domestic energy resources and an environmentally conscious populace, green computing is more or less mandated by law in the island nation. Under the Kyoto Protocol, the amount of carbon many government facilities and public universities can emit is regulated. With such stringent limits, datacenters have no choice but to aggressively pursue energy efficiency.
As the technical lead for the TSUBAME supercomputer at the Tokyo Institute of Technology (TiTech), Satoshi Matsuoka has had to deal with this reality for some time. The TSUBAME machine was built with ClearSpeed accelerators on top of conventional Opteron nodes to achieve high levels of performance at low power consumption. Currently at 100 teraflops, the system consumes a total of 1.2 megawatts for power and cooling.
Matsuoka explained that as part of TSUBAME's upgrade path over the next two years, they are tasked with delivering a one-petaflop system -- a 10-fold increase in performance over the current system. And they have to achieve that within the same power consumption as today's TSUBAME. That means they will have to exceed the energy efficiency of the IBM Roadrunner, the most energy-efficient supercomputer ever built. One of the technologies TiTech is looking at is GPGPU. The raw double precision performance per watt is not as good as that of the ClearSpeed boards, but GPUs are very well suited to bandwidth-intensive applications, like FFT codes. And even with today's technology, the energy efficiency of GPUs is about five times better than that of Blue Gene.
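The arithmetic behind that target is worth spelling out. A short sketch, using the system figures from the article and a rough contemporary number for Roadrunner's Linpack efficiency:

```python
# Rough arithmetic behind the TSUBAME upgrade target described above.
power_w = 1.2e6                        # power + cooling budget (fixed)
current_eff = 100e12 / power_w / 1e6   # today: 100 TF -> megaflops/watt
target_eff = 1e15 / power_w / 1e6      # goal: 1 PF at the same power
print(f"today: {current_eff:.0f} MF/W, target: {target_eff:.0f} MF/W")
# Roadrunner's Linpack efficiency was roughly 437 MF/W at the time,
# so the ~833 MF/W target would indeed have to exceed it.
```

Holding power fixed while multiplying performance by ten means multiplying efficiency by ten, from roughly 83 to roughly 833 megaflops per watt.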
On the national scale, Matsuoka said the Japanese government is starting a five-year project in ultra-low-power HPC. Researchers will look at multicore processors, accelerators, next-generation memory technology, advanced networks, better cooling technology, facility improvement, zero emission power sources, and low-power algorithms. The project's goal is to develop basic technologies that will enable a 1,000-fold increase in energy efficiency over the next decade.
Integrated Facilities Design
Dr. Franz-Josef Pfreundt, who heads IT at Fraunhofer-ITWM, thinks the real discussion of green computing needs to focus on energy costs. He noted that an environmentally-friendly solution could be provided using suitable biofuels or solar energy technology, but costs may make such a model impractical.
Pfreundt asserted that energy currently costs only a few percent of a supercomputer's initial acquisition cost per year, which works out to only about 10 to 15 percent of the machine's cost over its three-year lifetime. At ITWM they've achieved that ratio for their latest 2.1 million euro supercomputer, even at a rate of 0.10 euros per kilowatt-hour. He also argued that extending the useful life of the hardware is another cost-saving strategy.
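A sketch of the ratio Pfreundt describes. The acquisition cost and electricity price come from the article; the average power draw is an assumption chosen only to make the numbers concrete:

```python
# Lifetime-energy-cost ratio as Pfreundt describes it. Acquisition cost
# and electricity price are from the article; the average power draw is
# an assumed figure for illustration.
acquisition_eur = 2.1e6    # machine cost (article)
eur_per_kwh = 0.10         # electricity price (article)
avg_power_kw = 120         # assumed average draw, including cooling
years = 3
energy_eur = avg_power_kw * 24 * 365 * years * eur_per_kwh
ratio = energy_eur / acquisition_eur
print(f"3-year energy cost: {energy_eur:,.0f} EUR "
      f"({ratio * 100:.0f}% of acquisition)")
```

An average draw around 120 kW lands at the 15-percent upper end of the range the article cites; a smaller machine or cheaper power pushes the ratio toward 10 percent.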
Pfreundt believes that to optimize power use, people need to consider the efficiency of the entire computing infrastructure. Part of the problem, he said, is that the energy budget for the system is divorced from the acquisition cost. If they were wrapped together as part of the system procurement, buyers would naturally pay more attention to power consumption. At ITWM, they've managed to achieve a relatively cost-efficient setup by re-using some of the waste heat and selecting energy efficient hardware.
For example, by taking advantage of the temperate German climate, they use outside air for cooling -- something that would not be possible in summer over much of the U.S. and Asia. They also recycle the warmed 86F air to heat local greenhouses. Pfreundt thinks that if they could extract more heat from the computers directly, that is, water-cool them, the waste heat would have even more value, since the water could be sold for community heating.
ITWM recently purchased a 70-blade IBM QS22 BladeCenter cluster based on the new Cell processors (PowerXCell 8i), which are the same blades that went into the Roadrunner petaflop machine. At ITWM, they've demonstrated 488 megaflops per watt on Linpack and think they can achieve 600 megaflops per watt, which would earn it the top spot on the Green500 list. While the current Cell processors provide 1.6 gigaflops/watt, Pfreundt projected that within three years the industry will have chips that deliver 10 gigaflops/watt. With that level of efficiency, Pfreundt said he will be able to get a petaflop into his facility.
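The efficiencies quoted above translate directly into facility power budgets. A minimal sketch of how much power a Linpack petaflop would need at each of the figures mentioned:

```python
# Power needed for a Linpack petaflop at the efficiencies cited above.
def petaflop_power_kw(mflops_per_watt):
    # 1 petaflop = 1e9 megaflops; result in kilowatts
    return 1e9 / mflops_per_watt / 1e3

# demonstrated QS22 figure, near-term goal, and the 3-year projection
for eff in (488, 600, 10000):
    print(f"{eff} MF/W -> {petaflop_power_kw(eff):,.0f} kW per petaflop")
```

At 488 megaflops/watt a petaflop needs around 2 megawatts; at 10 gigaflops/watt it needs only about 100 kilowatts, which is why Pfreundt expects to fit one into his facility.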
Learning From the Embedded Space
Berkeley Lab's John Shalf observed that because of the industry's typical two to four year design cycle, not much has occurred that fundamentally addresses the power issues in computing, although he thinks we're starting to see the beginnings of some promising approaches. Accelerators like the Cell processor and GPUs have huge potential, especially with the use of properly tuned codes, but the downside of shuffling data back and forth between host CPU and the accelerator can limit performance on many apps. Shalf sees discrete accelerators as a stepping stone on the path to integrated manycore designs.
According to him, the goal of green computing is to minimize the power consumed for the amount of work performed. This has been the driving force behind embedded computing for some time and is the reason Shalf believes that the current power crisis is converging the embedded and high performance computing spaces. In the embedded world, you start with the application and design the system around it. According to Shalf, that kind of tight coupling between hardware and software is what enables exceptional power efficiency.
"That doesn't mean it's special-purpose and only works for one application target," he explained. "It means that you throw away everything that you don't need for a range of problems."
That translates into much less complex microprocessors than is typical of today's standard x86 or Power chips, or even the new Intel Atom. By simplifying the logic, you can design much smaller chips, with many more cores, shorter instruction pipelines, and less power leakage. For example, GPUs don't have TLBs since they're not swapping applications in and out of memory. "Most of what you have on these modern CPUs, you don't need for science," said Shalf.
At Berkeley, Shalf and others are currently working on "Green Flash," a research project to define a new class of supercomputers for modeling climate conditions and understanding climate change. They chose the application because it encompasses a wide range of algorithms that are applicable to many different science codes. The work is being done in collaboration with Tensilica, a company that tailors highly energy-efficient embedded processors for platforms like MP3 players and network routers. One implementation, the Xtensa microprocessor, draws just 0.09 watts at 600 MHz and achieves 100 times better floating point performance per watt than the Intel Core2 architecture.
Using Tensilica's design tools, a new chip can be developed in 18 months at a cost of $5 to $10 million. When you consider that a leadership class supercomputer is typically priced in the $100 million range and is the end result of a multi-year development cycle, a simple microprocessor design could easily fit into the scope of the project. Shalf believes this may be the commodity model that HPC will need to adopt if it hopes to achieve exascale computing.