August 15, 2011
As we reported last week, IBM backed out of the deal with NCSA to build the 10-petaflop machine on the grounds it was no longer financially feasible. According to a report in the Champaign/Urbana News-Gazette, the National Center for Supercomputing Applications (NCSA) is already looking for a replacement for IBM's ill-fated machine. John Melchi, who heads the Administration Directorate at NCSA, said computer vendors have already been contacting the center to offer their solutions.
While Melchi wouldn't name names, he pointed out that there were originally four proposals submitted to NSF for the system back in 2007. No doubt some or all of those vendors are talking to NCSA again.
Presumably the $300 million price tag for Blue Waters would still apply. The National Science Foundation (NSF) had kicked in $208 million for the project, while the University of Illinois and the state government tacked on an additional $100 million. Given IBM's apparent failure to squeeze any more money out of the parties involved, the next vendor will probably have to work within the same financial constraints.
According to Berkeley Lab Deputy Director Horst Simon, who is quoted in the News-Gazette article, the NSF has historically low-balled supercomputer projects, with the expectation that the vendors, their partners, or other government entities would make up the difference. There is also a certain "macho aspect" to getting a top-ranked machine on the TOP500 list, he added.
But a lot has changed since 2007, when the Blue Waters deal was originally formulated. The 2008-2009 recession re-focused the attention of HPC vendors on the bottom line, while state governments are reeling from a loss of tax revenues. (Illinois, in fact, is one of the hardest hit states, suffering its worst deficit in history.) In such an environment, prestige takes a back seat to practicality.
The question remains whether NCSA can find a vendor able to deliver a 10-petaflop system for $300 million by the end of 2012. The cheapest way to get to 10 peak petaflops is with GPUs, but it's not clear if NCSA wants to go that route. The real goal of the project is to provide a system that can deliver a sustained petaflop across a range of science and engineering codes. And since GPUs don't have the same general-purpose breadth of computational capability as CPUs, NCSA might have to reformulate its approach.
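As a rough illustration of the peak-versus-sustained gap the project has to bridge, the sketch below works through the arithmetic with purely hypothetical node counts and efficiency fractions; none of these figures come from NCSA, NSF, or any vendor.

    # Hypothetical back-of-the-envelope comparison of peak vs. sustained petaflops.
    # The node counts and efficiency fractions below are invented for illustration;
    # they only show why 10 peak petaflops does not equal 1 sustained petaflop.

    def peak_pf(nodes, teraflops_per_node):
        # Theoretical peak in petaflops: node count times per-node peak (in TF).
        return nodes * teraflops_per_node / 1000.0

    def sustained_pf(peak, efficiency):
        # Sustained performance: the fraction of peak that real codes actually achieve.
        return peak * efficiency

    # A GPU-accelerated design reaches 10 PF peak with fewer, denser nodes,
    # but a broad mix of science and engineering codes may sustain less of it.
    gpu_peak = peak_pf(nodes=5000, teraflops_per_node=2.0)      # 10.0 PF peak
    print(sustained_pf(gpu_peak, efficiency=0.08))              # ~0.8 PF sustained

    # A CPU-only design needs far more nodes for the same peak, but general-purpose
    # cores tend to sustain a larger fraction across diverse codes.
    cpu_peak = peak_pf(nodes=25000, teraflops_per_node=0.4)     # 10.0 PF peak
    print(sustained_pf(cpu_peak, efficiency=0.12))              # ~1.2 PF sustained

The point of the sketch is simply that peak hardware numbers say little about delivered science throughput, which is the metric Blue Waters was funded to hit.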
Full story at Champaign/Urbana News-Gazette