October 06, 2006
An undeclared race toward petaflop computing is in progress between the United States and Japan -- a race being closely watched by the global HPC community. Right now the scales tip toward the U.S., which leads with its latest IBM Blue Gene/L computer, a 280 teraflops (sustained) system. The IBM machine took the number one spot from Japan's Earth Simulator in 2004; the Earth Simulator had dominated the supercomputing charts since 2002.
Experts expect the first petaflop system within the next couple of years, and the bets are that it will be a follow-on to the IBM design. However, Japan is not to be discounted. As the first and only country to have designated supercomputers a "Key Technology of National Importance," Japan aims to become the world leader in simulation capabilities in areas spanning nano-science, life science, climate/geo-science, physical science and engineering. Unburdened by the responsibility of nuclear stockpile stewardship, it can focus its research and financing on providing a petaflop platform for real-world applications.
These efforts are coordinated by the RIKEN institute, which together with leading industries and universities has set up an organization targeting the development of a 10 petaflop system within the next six years. On September 19th, RIKEN issued a press release officially declaring these intentions, and on September 19th and 20th it held a seminar at which the announcement was made. Back in April 2006, a research collaboration was started in Japan to define the best possible architecture for such a system, based on a benchmark suite of 21 real-world applications. Using these benchmarks, two candidate architectures have now been selected for further design evaluation, put forward by Fujitsu Ltd. and by a team formed by NEC Corporation and Hitachi, Ltd. The results of this final evaluation will be available at the end of the fiscal year and will become the basis of the implementation.
Taking advantage of a visit to Bonn, Germany, to give a keynote lecture at a scientific conference, Dr. Mitsuyasu Hanamura, who heads the applications software group within the RIKEN Next-Generation Supercomputer R&D Center, took part in a press briefing organized by the NEC Europe Computing & Communication Research lab in St. Augustin, Germany. Dr. Hanamura gave a technical summary of the subject.
The Next-Generation Supercomputer Project, as it is called within Japan, is tasked with supporting six distinct goals:
To reach these goals, the new machine will enable access for researchers and industries through the cyber science infrastructure framework of the National Research Grid Initiative (NAREGI) project initiated by the National Institute of Informatics (NII).
According to Dr. Hanamura, the new class of supercomputers will require technology breakthroughs because of otherwise prohibitive power consumption. Based on reasonable projections through 2010 for per-CPU compute power, efficiency factors and power consumption, as well as the need to support existing codes, he gave the following estimates for a hypothetical one petaflop (sustained) system:
CPU Type         Peak Perf.   Efficiency   Est. Power   SW Support
---------------  -----------  ----------   ----------   ----------
Vector           63 GF/CPU    0.3          47 MW        good
Scalar           30 GF/CPU    0.1          40 MW        good
Special-purpose  n.a.         0.5          ~0.5 MW      poor
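To make the trade-off concrete, the table's per-CPU peak and efficiency figures can be turned into rough CPU counts for a 1 petaflop (sustained) machine; the per-CPU power figures below are merely implied by dividing the quoted system totals by those counts, not numbers from the briefing:

```python
# Rough sizing sketch for a hypothetical 1 petaflop (sustained) system,
# using the per-CPU peak performance and efficiency figures from the
# table above. Per-CPU wattage is derived from the quoted system totals.

TARGET_SUSTAINED_GF = 1_000_000  # 1 petaflop expressed in gigaflops

systems = {
    # name: (peak GF per CPU, efficiency, quoted total power in MW)
    "vector": (63.0, 0.3, 47.0),
    "scalar": (30.0, 0.1, 40.0),
}

for name, (peak_gf, eff, power_mw) in systems.items():
    sustained_per_cpu = peak_gf * eff              # GF/s actually delivered
    cpus = TARGET_SUSTAINED_GF / sustained_per_cpu # CPUs needed for 1 PF
    watts_per_cpu = power_mw * 1e6 / cpus          # implied power budget/CPU
    print(f"{name}: ~{cpus:,.0f} CPUs, ~{watts_per_cpu:,.0f} W per CPU")
```

The arithmetic shows why scalar designs dominate CPU counts (roughly 333,000 CPUs at 10 percent efficiency versus about 53,000 vector CPUs), while both land at tens of megawatts, which is the power wall Dr. Hanamura highlighted.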
This data clearly points toward a mixed hardware environment as the way to reach both high performance and support for existing application code. As an example of special-purpose hardware he pointed to RIKEN's MD-GRAPE3 machine, a special-purpose computer geared toward molecular dynamics and multi-body calculations; in May 2006, a system based on this chip already achieved a performance level of over one petaflop. Dr. Hanamura therefore foresees an architecture which combines scalar nodes, vector computers and special-purpose computers into a single system. As multi-scale simulations often need to consider both particle-based and domain-based effects, which lend themselves naturally to different computing models, this new architecture should be well suited to such workloads.
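The idea of matching each computing model to its natural node type can be illustrated with a toy dispatch rule. This is purely an illustration of the principle, not RIKEN's actual design; the kernel names and routing choices are assumptions:

```python
# Toy sketch (not the actual RIKEN architecture): routing the kinds of
# work found in a multi-scale simulation to the node type that suits
# each computing model best.

def assign_node_type(kernel: str) -> str:
    """Pick a partition of a hypothetical heterogeneous machine."""
    if kernel == "particle":         # e.g. molecular dynamics, N-body
        return "special-purpose"     # MD-GRAPE3-style accelerator
    if kernel == "structured-grid":  # e.g. climate/fluid domain solvers
        return "vector"              # long regular loops vectorize well
    return "scalar"                  # irregular or legacy code paths

for k in ("particle", "structured-grid", "adaptive-mesh"):
    print(k, "->", assign_node_type(k))
```

The design point is that the scheduler, not the programmer, decides where each phase of a coupled simulation runs, so particle and domain phases of one application can each land on their best-suited hardware.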
The tentative schedule of the project is: