November 03, 2011
Just three and a half years after IBM broke the petaflop barrier with its Roadrunner supercomputer, Fujitsu's "K computer" has passed the 10-petaflop mark. Fujitsu and RIKEN announced on Tuesday that they have completed the final build-out of the system and achieved 10.51 petaflops on Linpack, reaching a major milestone of Japan's Next-Generation Supercomputing Project.
In June of this year, Fujitsu and RIKEN captured the number one spot on the TOP500 with a Linpack result of 8.16 petaflops for the partially completed K system. It marked the first time a Japanese system was number one on the list since the Earth Simulator supercomputer held the title from 2002 through 2004.
The completed K system, housed at RIKEN's Advanced Institute for Computational Science in Kobe, is powered by more than 88,000 SPARC64 VIIIfx CPUs. The 8-core SPARC64 VIIIfx chip was purpose-built for HPC, delivering 128 peak gigaflops at 2.0 GHz while drawing a relatively modest 58 watts. Each CPU represents a single node, with four of the SPARC chips mounted on a single system board, 24 of which make up a rack. The whole system comprises 864 of these racks.
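The per-chip and system numbers quoted above hang together arithmetically. As a rough sketch (the 8-flops-per-core-per-cycle figure is an assumption implied by 128 gigaflops at 2.0 GHz, presumably via the chip's fused multiply-add units):

```python
# Rough arithmetic check on the chip and system figures quoted above.
# Assumed: 8 double-precision flops per core per cycle, which is what
# 128 gigaflops at 2.0 GHz implies for an 8-core chip.
cores, clock_ghz, flops_per_cycle = 8, 2.0, 8
chip_gflops = cores * clock_ghz * flops_per_cycle   # 128 gigaflops per CPU

# Working backward from the 11.28-petaflop system peak gives the CPU count:
implied_cpus = 11.28e6 / chip_gflops                # just over 88,000 CPUs
print(f"{chip_gflops:.0f} GF per chip, ~{implied_cpus:,.0f} CPUs")
```

Dividing the system peak by the per-chip peak lands just above 88,000, matching the announced CPU count.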
The peak performance of the final system is a whopping 11.28 petaflops, and thanks to Fujitsu's 6D Tofu interconnect, the system was able to squeeze better than 93 percent Linpack efficiency from the floating point hardware -- a rather remarkable feat. Total time for the Linpack run: 29 hours and 28 minutes.
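Those two figures, together with the run time, also let us estimate the size of the Linpack problem solved. The sketch below uses the standard approximation that Linpack performs about (2/3)·N³ floating point operations for an N×N matrix; the resulting N is an inference from the published numbers, not a reported figure:

```python
# Linpack efficiency from the quoted peak and sustained numbers.
peak_pflops, sustained_pflops = 11.28, 10.51
efficiency = sustained_pflops / peak_pflops
print(f"Linpack efficiency: {efficiency:.1%}")        # better than 93 percent

# Estimated problem size: Linpack does ~(2/3) * N^3 flops for an N x N
# matrix, so N ~ (1.5 * total_flops)^(1/3). This is an inference from the
# 29h28m run time, not a figure from the announcement.
run_seconds = 29 * 3600 + 28 * 60
total_flops = sustained_pflops * 1e15 * run_seconds
n = (1.5 * total_flops) ** (1 / 3)
print(f"Implied matrix order N: roughly {n / 1e6:.0f} million")
```

The implied matrix order comes out in the neighborhood of 12 million unknowns, which gives a sense of why the run took the better part of 30 hours.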
Of course, the real value of all these flops is not Linpack. The K is destined for all sorts of big science workloads, including nanotechnology simulations, drug discovery, materials design, climate prediction, industrial design, and cosmology, among others. The multi-petaflops capabilities of the machine should enable some of these applications to push the envelope of their respective domains.
Applications aside, Japanese supercomputing prestige is soaring with the K machine right now, and unless there's a surprise Chinese system waiting in the wings to overtake it, the system will retain its title as the most powerful computer on the planet. It looks like all the other double-digit-petaflop machines in the pipeline won't be up and running until next year.
If IBM hadn't parted ways with NCSA over the Blue Waters Project, the K system might already have had some serious competition from the US. Blue Waters, which was also supposed to be a 10-petaflop system, in this case based on Power7 technology, was originally slated to come online toward the end of this year. Obviously, that's not going to happen.
Another contender is the Jaguar supercomputer upgrade at Oak Ridge National Lab (ORNL), which will result in a 10- to 20-petaflop system. That machine, which will be renamed "Titan," will be outfitted with the next-generation "Kepler" GPUs from NVIDIA, but that work isn't expected to be completed until late 2012. The first phase of the upgrade, which involves plugging 960 Fermi-class GPUs into the machine, is already in motion and is expected to be completed this year. But it's rather unlikely those initial enhancements will yield anything approaching 10 petaflops.
Other leading-edge petascale machines include the two big IBM Blue Gene/Q systems headed for US DOE centers: "Mira," a 10-petaflop system destined for Argonne National Lab, and "Sequoia," a 20-petaflop machine that will be installed at Lawrence Livermore. But neither of these Blue Genes is expected to be operational until 2012.
Likewise for the 10-petaflop Dell-built cluster for TACC, named "Stampede." That machine will be relying on Intel's Many Integrated Core (MIC) coprocessor to provide most of the flops, and since the first production MIC ("Knights Corner") won't be available for at least a year, that system won't be up and running until late 2012.
Technically, the K computer is not quite ready for prime time either. The Linpack run was part of the machine's verification process. Over the next few months, the engineers will be developing and tuning the system software, work that should be completed by June 2012. Real production users are not expected to be able to log on until November 2012.
Beyond its 10-petaflop adventure, Fujitsu would like to start selling SPARC64 VIIIfx-based servers outside of Japan. It would certainly make sense for Fujitsu to try to cash in on its investment in the SPARC chip and K design. But as impressive as the technology is, the market has not exactly embraced custom-built HPC.
For political reasons, the US government supercomputing labs would be unlikely to import foreign HPC of any flavor. And considering the attractive price-performance of x86 HPC, smaller clusters of K would probably not have much of a market in the commercial HPC space. Fujitsu could perhaps export K-type supercomputers to Europe and elsewhere in Asia. But as we saw last week, China is interested in developing its own HPC industry, and the large European centers are more apt to stick with the supercomputer vendors they know best -- mainly IBM, Cray, and Bull.
For the time being though, Fujitsu and Japan can bask in the glow of their accomplishment and enjoy their newfound position at the top of the supercomputing heap. If history is any guide, these moments tend to be rather fleeting.