November 02, 2007
Last Thursday, NEC announced its sixth generation vector supercomputer, the SX-9, which the company is touting as the "world's fastest vector supercomputer." The company says the new machine will be twice as energy-efficient as the SX-8R generation. The SX-9 is based on a new 102.4 gigaflop vector processor, sixteen of which are placed in a node. In addition to the new vector processor, the SX-9 supports up to one terabyte of shared memory per node and an internode interconnect of up to 128 GB/second. At its maximum configuration of 512 nodes, the SX-9 would deliver a peak vector performance of 839 teraflops.
Before I go any further, I should point out that to the best of my knowledge, no such machine is being built -- or ever will be. According to Thomas Schoenemeyer, HPC Presales Manager, NEC GmbH, nothing near the size of an 839 teraflop system is in the pipeline. NEC has orders for two systems in Europe. One is headed to the German Weather Service (DWD); the other to Meteo France. The German system, which will deliver 39 teraflops and, coincidentally, cost 39 million euros (72 million dollars), is scheduled to be fully operational in 2010. The Meteo France system is also expected to be a sub-100 teraflop machine. This week, the company also announced an order from Japan's Tohoku University for a 26 teraflop system. NEC plans to ship bigger SX-9 systems down the road, but the company doesn't expect to be challenging petaflop supercomputers in the foreseeable future.
"We are not going to be on the top of the TOP500 list with this system," admits Schoenemeyer. "Our focus is the productivity of the customer."
An 839 teraflop SX-9 would probably cost in the neighborhood of a billion dollars. So despite what you might have read elsewhere, the top systems from Cray and IBM are unlikely to be challenged by a maxed out SX-9 machine anytime soon. The last NEC machine to achieve TOP500 notoriety was the 36 teraflop Earth Simulator, an SX-6 generation system that was ranked the most powerful machine in the world from 2002 to 2004, before IBM's Blue Gene/L overtook it.
Like its forebears, the SX-9 is targeted at weather forecasting service facilities, climate research centers, and other government science centers. NEC has sold over 1000 SX systems over the past two decades -- the vast majority in Japan and Europe, although there are some outliers in Australia, South Africa, and Brazil. There are virtually none in North America.
The way NEC is happily churning out vector supercomputers, one might get the impression that weather and climate modeling is a growth industry. While global warming is certainly a big topic these days, such research is unlikely to propel SX-9 production into the double-digit growth rates enjoyed by the overall HPC market.
But unlike in North America, Japan and Europe have a decent-sized installed base of vector machines and the vast majority of them are NEC supers. Although most of the 1000-plus NEC vector machines sold over the last two decades have been retired, a lot of Japanese and European Earth science centers still run on SX systems. NEC is hoping many of these organizations will upgrade to the SX-9 at some point and keep the legacy going.
SX-8 applications are upwardly compatible with the SX-9 (binary compatible), so the software upgrade path should be painless. NEC maintains its own compiler for the vector processors, as well as its Super-UX Unix OS, to enable applications to fully utilize the large flat memory architecture and powerful processors. Both OpenMP and MPI parallelism are supported. It's this kind of end-to-end support that has allowed NEC to maintain, and even grow, its customer base for more than two decades.
In the recent past, Cray has had some success with its X1 and X1E vector machines (Warsaw University, Spain's National Institute of Meteorology, Korea Meteorological Administration). But today the company is penetrating the European market with its Opteron-based XT4 systems. Cray's future strategy for its vector computing offerings will become more apparent next week.
Dedicated vector machines used to be all the rage in supercomputing, starting with the first commercial system in 1974, the CDC STAR-100. Cray soon followed with the Cray-1 in 1976. Later, NEC, Fujitsu and Hitachi each developed their own architectures. But vector supercomputing is a tough sell these days. The market share of these types of machines has been declining for some time, replaced by more general-purpose systems -- both tightly coupled supercomputers and computer clusters -- based on superscalar CPUs.
While HPC applications that make heavy use of matrix arithmetic, like computational fluid dynamics (CFD) codes, are well-suited to vector processors, in practice, multicore superscalar chips have proved to be a better overall technology. This is mainly because as HPC applications evolve, they become more complex, employing a greater variety of algorithms to get their job done. This complexity manifests itself in diverse computing requirements; some parts of the code require high levels of single-threaded performance, other parts require a lot of threads, and still others benefit from lots of data parallelism. Systems based on scalar processors tend to be very good at the first two, and pretty good at the third one. Vector-based machines are really only good at data parallelism (and actually only a subset of that). Even weather modeling applications, the vector machine's raison d'être, require scalar processing for optimal performance.
More commodity-based vector processing solutions already exist and more are on the way. Short-vector SIMD on CPUs, like PowerPC AltiVec and x86 SSE, is a step in the direction of integrated vector capabilities. Mixing vector and scalar engines on the same die, as has been done with the Cell BE processor, is another approach to making vector processing more mainstream. And as I wrote last week, coprocessor accelerators, like GPUs, FPGAs, and SIMD ASICs (ClearSpeed), are providing similar capabilities at a much more attractive price.
In the end, economics will decide how vector computing gets done. But the purveyors of proprietary solutions are on the wrong side of history. General-purpose commodity computing is not just here to stay, it's here to dominate.
As always, comments about HPCwire are welcomed and encouraged. Write to me, Michael Feldman, at email@example.com.
Posted by Michael Feldman - November 01, 2007 @ 9:00 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.