July 21, 2009
When NEC and Hitachi withdrew from Japan's Next-Generation Supercomputing Project in May, Fujitsu was left as the only system and chip vendor remaining on the project. The original plan was to build a 10-petaflop (Linpack) machine using a combination of scalar CPUs from Fujitsu and vector processors from NEC. With NEC's chips off the table, the powers that be -- in this case MEXT, Japan's Ministry of Education, Culture, Sports, Science and Technology, and RIKEN, Japan's Institute of Physical and Chemical Research -- decided to go forward with Fujitsu hardware alone.
According to Fujitsu's announcement on July 17, the multi-petaflop system will now be powered by the company's new eight-core SPARC64 VIIIfx processor, codenamed "Venus." That chip was unveiled at about the same time NEC and Hitachi were bailing on the supercomputing project. Although not yet in production, Venus was advertised as the fastest CPU on the planet at 128 gigaflops.
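The 128-gigaflop figure is a theoretical peak, and it falls out of simple arithmetic. As a rough sketch -- the clock rate and per-core flop rate below are assumed values, not numbers from Fujitsu's announcement:

```python
# Back-of-the-envelope peak for an eight-core chip.
# Assumes a 2 GHz clock and 8 double-precision flops per core
# per cycle (e.g., two 4-wide fused multiply-add pipelines);
# neither figure appears in the article itself.
cores = 8
clock_ghz = 2.0
flops_per_core_per_cycle = 8

peak_gflops = cores * clock_ghz * flops_per_core_per_cycle
print(peak_gflops)  # 128.0
```

Any combination of clock and per-core width that multiplies out the same way would yield the same peak, which is why per-socket peak numbers alone say little about sustained application performance.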
It's doubtful Venus will hold that title when it is deployed in Japan's prototype machine late next year. By 2010 the eight-core Power7 chips should be in the field, and IBM is saying those processors will deliver over 256 gigaflops per CPU. The Power7 will be used in the multi-petaflop "Blue Waters" supercomputer for NCSA, which is scheduled to be running full tilt in 2011. Even Intel's Xeon chips should be well into triple-digit gigaflops when the Westmere 32nm Xeon processors hit the streets in 2010.
What may set Venus apart from its competition is its energy efficiency. Fujitsu is claiming the SPARC64 VIIIfx design allows it to operate at less than one-third the power of current Intel processors. The company didn't specify which Intel parts it was referring to, but since even the high-end Itanium CPUs top out at about 122 watts, the Venus chip should draw no more than 40 watts or so.
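The 40-watt estimate follows directly from Fujitsu's "less than one-third" claim applied to the ~122-watt Itanium figure cited above:

```python
# Rough power estimate implied by the one-third claim,
# using the high-end Itanium TDP mentioned in the article
# as the (assumed) comparison point.
baseline_watts = 122
venus_watts_ceiling = baseline_watts / 3
print(venus_watts_ceiling)  # ~40.7 watts
```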
Aside from Fujitsu silicon, the next-gen Japanese super will also feature a multidimensional mesh/torus network as well as custom system software to glue it all together. The fact that there will no longer be vector hardware to contend with will undoubtedly make this software simpler than it otherwise would have been.
But there will be some attempt to accommodate applications developed for NEC's SX vector machines. According to the press announcement: "Although the next-generation supercomputer will consist only of scalar units, through the use of application parallelization and tuning it will support applications that have run on previous supercomputers with vector units. Other ways to assist users of vector-based supercomputers are also being considered."
Despite the NEC/Hitachi withdrawal, the plan is to have a "partially operational system" by late 2010 and the complete production system ready by 2012.
Posted by Michael Feldman - July 21, 2009 @ 12:33 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.