July 16, 2009
The rumor flying around this week about SGI pulling out of the petaflop supercomputer deal with the National Science Foundation (NSF) and the Pittsburgh Supercomputing Center (PSC) sounds like bad news. But if true, it's just the opposite.
As first reported by VizWorld, and then picked up by the ever-alert John West at insideHPC, SGI is having second thoughts about plans to deliver a petaflop-capable supercomputer to PSC. The system would have been funded under Track 2 of the NSF program to help support petascale HPC for open science. The deal was presumably arranged under the old (pre-Rackable) SGI regime, but the NSF has yet to announce the award.
However, the plans were far enough along to warrant a slide deck (PPT) authored by PSC director Ralph Roskies in January of this year. The document included some fairly detailed information about the proposed machine (see particularly slide 11), including the fact that it was to be based on SGI's much-talked-about Project UltraViolet platform and would employ Intel's upcoming Nehalem-EX processors.
According to VizWorld, the deal was reputed to be worth $30 million, which, for anything approaching a petaflop machine, would be a true bargain. Note that even the half-petaflop Ranger supercomputer installed in February 2008 at the Texas Advanced Computing Center cost $30 million (plus $29 million for operational costs), and Sun Microsystems almost certainly took a loss on the deal. Not coincidentally, Ranger was also a Track 2 funded machine. Also perhaps not coincidentally, both SGI and Sun were bought out this year as they saw their bottom lines fall off a cliff.
For the sake of comparison, IBM's 1.375 petaflop (peak) Roadrunner super reportedly cost Los Alamos and the NNSA somewhere between $100 million and $133 million. The implication, of course, is that the $30 million NSF/PSC deal was deeply discounted. But now that SGI is under new management, CEO Mark Barrenechea and company have apparently decided that making money is more important than a TOP500 slot or deference to government customers. Good call.
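For what it's worth, the discount is easy to see with a little back-of-the-envelope arithmetic. Here's a minimal Python sketch using only the price and peak figures cited above; note that the 1.0 petaflop peak assumed for the PSC machine is my own placeholder, since its actual peak performance was never announced:

# Rough price-per-peak-petaflop arithmetic from the figures cited above.
# The 1.0 PF peak for the rumored PSC system is a placeholder assumption.

systems = {
    # name: (price, millions of USD; peak performance, petaflops)
    "PSC/NSF deal (rumored)":     (30.0, 1.0),
    "Ranger (TACC, 2008)":        (30.0, 0.5),   # hardware only; +$29M operations
    "Roadrunner (low estimate)":  (100.0, 1.375),
    "Roadrunner (high estimate)": (133.0, 1.375),
}

for name, (price, peak) in systems.items():
    print(f"{name:<28} ~${price / peak:5.0f}M per peak petaflop")

By that crude yardstick, the rumored PSC deal comes in at roughly half of what Ranger cost per peak petaflop, and a third or less of Roadrunner.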
I think Douglas Eadline at Linux Magazine nails the reason why HPC companies are tempted to engage in such practices:

One also must ask, why sell so cheap? The answer takes a little explaining, but basically, there is a lot of "buying business" at the high end of the HPC market. I like to call it "buying a press release", but the idea is the same. In addition, there is what I call a tradition of "give us a gift" mentality at many educational and government institutions.
Discounting goods or services below cost never seems to work out. In the short term, it produces an artificial type of competition between vendors that is not just a race to the bottom, but a race below the bottom. In the long term, this practice conditions customers to devalue the product, making subsequent sales more difficult, eventually forcing vendors to start stripping value out of their offerings.
If you take the premise of discounting to its logical conclusion, you reach something like what Wired magazine editor Chris Anderson is suggesting in his new book Free: The Future of a Radical Price. In it, he argues that digital commerce has effectively priced the value of online information and ideas at zero. Coming from a guy who makes a living selling information and ideas, that might sound somewhat contradictory (especially when you realize that his idea-laden book will run you $26.99).
Anderson's assertion is that companies can still make money around the thing that they're giving away, either in services or value-added products. So, for example, you might provide online music for free, but sell concert tickets around that to subsidize the musicians and Web site operation. Journalist and author Malcolm Gladwell picks apart the premise of Anderson's book rather thoroughly in a recent article in the New Yorker, so I won't rehash those particular arguments.
A general comment though: At the 50,000-foot level, it seems like a bad idea to reduce the transparency of financial transactions by hiding the costs somewhere else. It just furthers the misguided notion of a "free lunch" for both buyer and seller. You don't need much of an imagination to recall the economic carnage that has resulted from such thinking.
So if supercomputing centers want to buy petaflop machines at $30 million a pop, I'm sure that's still possible. They'll just have to wait a few years. And if the new SGI sticks to an honest business model, they will still be around to sell them.
Posted by Michael Feldman - July 16, 2009 @ 5:09 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.