August 04, 2011
Ioan Raicu, an assistant professor in the Department of Computer Science at the Illinois Institute of Technology (IIT) and guest research faculty in the Mathematics and Computer Science Division at Argonne National Laboratory, has a long-standing interest in the challenges of data-intensive computing and distributed systems.
As founder and director of the Data-Intensive Distributed Systems Laboratory (DataSys) at IIT, he has been tackling problems with common threads that run across cloud computing, exascale computing, and the new programming and efficiency challenges of manycore processors.
Raicu compared current supercomputing capacity, and the capacity that will fuel the coming age of exascale, with that of major cloud computing providers like Amazon.
In doing so, he claimed that Amazon's infrastructure in 2018 will look very similar to exascale supercomputers, with node counts in the many hundreds of thousands.
Currently, Amazon's data centers are spread across six locations, with an estimated 40,000 servers and 320,000 cores, consuming an estimated $12 million per year in energy. Raicu claims that this already parallels the systems at major institutions, and that by 2018, Amazon's revenue, now a mere $250 million per year, could grow anywhere from 100 to 1,000 times what it is now.
This growth comes at a cost, however. While Amazon spends an estimated $12 million per year on energy alone, by the time it reaches the exascale level Raicu predicts, its energy costs could soar to as much as $20 million per year.
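As a rough sanity check on figures of this scale, a back-of-envelope estimate shows how a fleet of that size translates into an annual energy bill. This is a minimal sketch, not from Raicu's talk: the per-server power draw and electricity price below are illustrative assumptions, and only the server count comes from the article.

```python
# Back-of-envelope estimate of annual data center energy cost.
SERVERS = 40_000          # estimated Amazon server count (from the article)
WATTS_PER_SERVER = 250    # assumption: average draw per server, incl. cooling overhead
PRICE_PER_KWH = 0.09      # assumption: bulk electricity price in USD

hours_per_year = 24 * 365
kwh_per_year = SERVERS * WATTS_PER_SERVER * hours_per_year / 1000
annual_cost = kwh_per_year * PRICE_PER_KWH

print(f"{kwh_per_year:,.0f} kWh/year -> ${annual_cost:,.0f}/year")
# ~87.6 million kWh/year -> roughly $7.9M/year, which lands in the same
# ballpark as the article's $12M figure once a higher cooling overhead
# (PUE) or power price is assumed.
```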
During his talk, Raicu pointed out a number of ways in which the challenges of exascale map directly onto the problems that major IaaS vendors like Amazon will face. Among the expected hurdles is, perhaps not surprisingly, the energy efficiency issue underlying his annual energy expenditure estimates. With talk that exascale systems will likely require their own dedicated power plants, what would a set of distributed data centers housing many hundreds of thousands of nodes require?
Raicu argues that we need to look to more power-efficient technologies that will not only aid the progress toward exascale computing, but can also be harnessed to power the growing mega-clouds. Even if the efficiency problem is solved, other bottlenecks remain, including the usual suspects in any major data center or supercomputer: memory and storage.
Even with the efficiency and hardware problems solved, there need to be applications that can take advantage of the vast numbers of cores available. For exascale, this is challenging enough—but when it comes to a distributed computing powerhouse like Amazon, operating at such scale, solving parallel programming challenges is going to be just as important, if not in some ways more complex.
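Amdahl's law makes the scaling problem concrete: the serial fraction of a program caps its speedup no matter how many cores are thrown at it. The sketch below is illustrative and not from the talk; the serial fractions are hypothetical, and only the 320,000-core count comes from the article's Amazon estimate.

```python
# Amdahl's law: speedup on n cores with serial fraction s is
#   speedup(n) = 1 / (s + (1 - s) / n)

def amdahl_speedup(serial_fraction: float, cores: int) -> float:
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

CORES = 320_000  # core count cited for Amazon in the article
for s in (0.01, 0.001, 0.0001):
    print(f"serial fraction {s:>7}: speedup {amdahl_speedup(s, CORES):>8,.0f}x")
# serial fraction    0.01: speedup     ~100x
# serial fraction   0.001: speedup     ~997x
# serial fraction  0.0001: speedup   ~9,700x
```

Even if only 0.01 percent of a program runs serially, 320,000 cores deliver under a 10,000x speedup, which is why programming at cloud or exascale demands algorithms with essentially no serial component.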
To better understand the context of some of his statements, check out the video of the talk presented below.