HPC Matters is a joint blog in which contributors from the Tabor Communications team share their observations and insights on matters of HPC.
December 14, 2010
Exascale. Say it: "exascale" -- it even sounds fast; maybe it's the "x" with the multiplicative quality it denotes. Appropriate, since exascale computers will be 1,000 times faster than today's crop of petascale machines. Forget the fact that we're barely into petaflop territory; we're always on to the next big thing... and for those in the supercomputing/HPC space, there's one word that conjures up future machines with almost unimaginable capacity, and that word is exascale.
But as with all good things, there's a catch, right? Software? Well, that's one: getting software rewritten to take advantage of those manycore beasts. But with enough time and effort, software is doable. So is hardware: string enough manycore processors together, and voila. An even more pressing concern, however, is energy. Time is money, as the old saying goes, but energy is also money. That's especially true since the world still relies on fossil fuels that won't be around forever; at current rates of demand, oil and natural gas won't last another century. As these fuels become more limited, and therefore more precious, prices will only increase.
The Institution of Engineering and Technology elucidates the challenge of getting to exascale in a recent article, stating that it's quite possible to build an exascale supercomputer right now, but you'd need a dozen nuclear stations to power it. That is why big changes will be needed in the underlying hardware and software of the next generation of supercomputers, or they just won't be economically viable.
There's a certain irony in the fact that the same machines that will be used to help solve the world's energy and environmental problems themselves contribute to the problem.
Martin Curley, senior principal engineer and director of Intel Labs Europe, illustrates the extreme scale of these machines: "An exascale computer has the equivalent power of 50 million laptops. Stacked on top of each other, they would be 1,000 miles high and weigh more than 100,000 tonnes."
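Curley's comparison is easy to sanity-check with a little arithmetic. Here's a minimal Python sketch; the per-laptop thickness and weight are assumptions chosen purely for illustration, and only the 50-million figure comes from the quote itself.

# Back-of-the-envelope check of the laptop comparison. The per-laptop
# thickness and weight below are assumptions; the 50-million count is
# from the quote.
num_laptops = 50_000_000
thickness_m = 0.03            # assume roughly 3 cm per laptop
weight_kg = 2.0               # assume roughly 2 kg per laptop

height_miles = num_laptops * thickness_m / 1609.34
weight_tonnes = num_laptops * weight_kg / 1000.0

print(f"Stack height: {height_miles:,.0f} miles")    # roughly 930 miles
print(f"Total weight: {weight_tonnes:,.0f} tonnes")  # about 100,000 tonnes

The numbers land in the same ballpark as the quote, which is all a comparison like this is meant to do.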
Wilfried Verachtert, high-performance computing project manager at Belgian research institute IMEC, says that an exascale computer made from existing technology would require 14 nuclear reactors. "There are a few very hard problems we have to face in building an exascale computer. Energy is number one. Right now we need 7,000MW for exascale performance. We want to get that down to 50MW, and that is still higher than we want."
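To put those megawatts in perspective, here's a quick bit of arithmetic. The 1 exaflops, 7,000 MW, and 50 MW figures come from Verachtert's comments above; the rest is just unit conversion in a minimal Python sketch.

# Energy-efficiency gap between today's technology and the exascale target.
EXAFLOPS = 1e18           # floating-point operations per second
power_today_w = 7_000e6   # 7,000 MW with current technology
power_target_w = 50e6     # 50 MW target

print(f"Today:  {EXAFLOPS / power_today_w / 1e6:.0f} MFLOPS per watt")
print(f"Target: {EXAFLOPS / power_target_w / 1e9:.0f} GFLOPS per watt")
print(f"Needed efficiency gain: {power_today_w / power_target_w:.0f}x")

In other words, energy efficiency has to improve by roughly 140 times, from about 140 MFLOPS per watt today to about 20 GFLOPS per watt.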
Different companies are looking at different ways to reduce that power demand. One is shrinking the manufacturing process, which allows more processor cores to fit on each chip: by 2018, a 10nm fabrication process should be able to fit roughly 20 times more cores than today's chips can. Intel is working on these smaller designs.
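The 20x figure follows from simple geometric scaling: transistor density goes up roughly with the inverse square of the feature size. A minimal sketch, assuming a 45nm process as the baseline (the baseline is my assumption, not a figure from the article):

# Rough density scaling from a 45nm baseline (assumed) to a 10nm process.
baseline_nm = 45.0
target_nm = 10.0
density_gain = (baseline_nm / target_nm) ** 2
print(f"Approximate density gain: {density_gain:.0f}x")  # about 20x

This first-order estimate ignores wiring, power delivery, and yield, but it shows where a figure like "20 times more" comes from.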
Bill Dally, professor at Stanford University and chief scientist at graphics chipmaker NVIDIA, says that 11nm process technology will enable 5,000 cores on-chip.
SGI is looking at using low-power processors in supercomputers; specifically, it is experimenting with the Atom processors that Intel developed for handheld computers.
SGI is also working with field-programmable gate arrays (FPGAs), chips that can be reconfigured after manufacturing. Steve Teig, president and CTO of FPGA specialist Tabula, explains that FPGAs allow developers to change the way data moves around a computer: instead of moving the data to the processor, you can reconfigure the chip and compute in place. Despite these advantages, FPGAs are still quite power-hungry.
But with all those cores comes another challenge: being able to exploit all that parallelism. And then there are reliability concerns; the bigger the machine, the more parts there are to fail.
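The reliability point is easy to quantify. If component failures are independent and roughly exponential, the mean time between failures (MTBF) of the whole machine is approximately the component MTBF divided by the number of components. A minimal sketch with hypothetical numbers (both the node count and the per-node MTBF below are assumptions for illustration):

# System-level MTBF shrinks as the part count grows.
node_mtbf_hours = 5 * 365 * 24   # assume each node fails about once in 5 years
num_nodes = 100_000              # hypothetical exascale-class node count

system_mtbf_hours = node_mtbf_hours / num_nodes
print(f"System MTBF: {system_mtbf_hours * 60:.0f} minutes")  # about 26 minutes

With numbers like these, a machine of that size would see a failure every half hour or so, which is why resilience has to be designed in rather than bolted on.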
Despite these challenges, there's little doubt among learned professionals that exascale will be reached, and not at some far-off date; it will happen in just a few years. The rate at which supercomputing advances has been remarkably predictable, with each decade ushering in a thousand-fold increase in power. IMEC's Verachtert sums it up: "In 1997, we saw the first terascale machines. A few years ago, petascale appeared. We will hit exascale in around 2018."
Posted by Tiffany Trader - December 14, 2010 @ 3:30 PM, Pacific Standard Time
Tiffany Trader is the editor of HPC in the Cloud. With a background in HPC publishing, she brings a wealth of knowledge and experience to bear on a range of topics relevant to the technical cloud computing space.