May 09, 2013
The Japanese government has revealed plans to surpass its K Computer effort with what it hopes will be the world's first exascale system.
Reports indicate that the country's Ministry of Education, Culture, Sports, Science and Technology (talk about a broad reach) is preparing a funding request to begin the first design phases of the eventual extreme-scale K replacement, one that could start paying dividends by next year if approved.
While this announcement marks Japan's formal entry into the exascale race, its timeline is no more aggressive than those of other nations with noteworthy funding behind the lofty goal: the Japanese government anticipates an exascale reality by the same 2020 target that other similarly motivated nations are aiming to reach.
According to one Japanese news outlet, these exascale ambitions come at a relatively reasonable price, about the same as the K development costs—110 billion yen, or around $1.1 billion. Compared to cost projections elsewhere on the planet, this is a bargain. Consider, for instance, India's recent exascale funding effort worth $2 billion, or the U.S. efforts, which anticipate a long-term investment of billions.
In the United States, these investments were kicked off with $125 million in 2012 to fund preliminary research, an amount that was not renewed for the extreme-scale pool in the most recent round of NSF funding, despite the consensus that exascale by 2020 will remain a target.
At this point, it appears the $1.1 billion investment will fund the conceptual design phase in Japan, although details about the partner institutions and companies behind the research are still pending.
Funding numbers for exascale are all relative, since costs span the many phases of an exascale effort: research and development (which this funding appears to support exclusively), then system design and acquisition, and finally the considerable resources required to power an exascale machine.