November 04, 2010
IBM's HPC business unit, aka Deep Computing, has always been more about fielding cutting-edge platforms than making profits. Although the company has produced some ground-breaking supercomputing systems over the years and has captured a large chunk of the HPC server market, the business proposition was not always clear. But according to a conversation I had recently with Herb Schultz, marketing manager for IBM's Deep Computing unit, that looks to be changing.
According to Schultz, the company is revamping its approach to its high performance computing business in several dimensions. These include new alliances, sales strategies, and solutions, as well as a shift in HPC market segment focus. Overall, says Schultz, that will involve transitioning from a model that relies on selling hardware parts to one that offers complete integrated solutions.
Schultz admits IBM Deep Computing has probably put forth this story before, but according to him the incentives are now changing. And by that, he means monetary incentives. Schultz recalls that as recently as two or three years ago, the main metric for the HPC business was revenue. So there would be an awful lot of pressure, for example, to sell a $50 million supercomputer, even if it cost IBM $49.99 million to make. "There is really no appetite in IBM anymore -- with some of the leadership changes over the last few years -- for revenue that has no profit with it," says Schultz.
At the core of the strategy shift is the realization that some of the industry's fastest growing segments, like cloud computing and business analytics, are underpinned by high performance computing technology. Even IBM's "Smarter Planet" campaign, which covers segments like education, public safety and retail, will draw on HPC technologies. That includes hundreds of new applications, everything from optimizing city traffic flow in real-time using video streams to managing retail inventory with RFID tracking. HPC permeates this class of data-intensive applications.
IBM has decided these new application areas position Deep Computing as a growth engine (read profit center) for the company. At the same time, even traditional HPC -- science applications, financial analytics, seismic codes, bioinformatics, etc. -- is poised for robust growth. According to IDC, HPC server revenue is growing at more than twice the rate of the overall server market (a 6.3 percent CAGR versus 2.6 percent) and will account for 76 percent of the total market increase over the next four years.
But IBM plans to be a bit more particular about market segments. Specifically, it intends to give more attention to customers interested in "better reward for value" -- in other words, verticals that are willing to pay more for premium products. In the higher education market, where IBM is traditionally strong, customers are generally reluctant to pay for value; they tend to be very price-sensitive. On the other hand, the financial services industry and some manufacturing firms are much more willing to shell out some serious cash if the solution adds to their bottom line.
Schultz says Whirlpool, for example, was able to save a significant amount of money because of better packaging, modeled and designed via HPC. In this case, the number of damaged goods that had to be returned due to faulty packaging was greatly reduced. Schultz estimates that Whirlpool was able to recoup its HPC investment in a matter of weeks.
Devoting more attention to commercial HPC means the company will simultaneously be shifting the Deep Computing product mix, which has skewed heavily toward the high end. Schultz estimates that 70 to 80 percent of IBM's current HPC revenue is derived from supercomputing systems that cost over $500,000. "They're tremendous revenue producers, but the profit profile is not all that great."
The goal is to move a much greater proportion of the HPC sales into the mainstream HPC market -- that is, systems under $500K. According to Schultz, they're looking to increase revenue in this area from around 20 percent today to something closer to 50 percent. In other words, become less like Cray and more like HP and Dell. This is somewhat uncharted territory for the Deep Computing folks, though. "We've never been really good at this," admits Schultz. "We've never even tried to be good at this, actually."
They do have products that serve that market today, namely the System X (x86 server) products, but that group is more geared toward retail and telecom, where performance is not the driving criterion. Some of the System X shift to HPC is occurring organically. For example, the iDataPlex product, a dense x86 server design, was principally aimed at the Web 2.0 market -- the i in iDataPlex stands for Internet. But as it turns out, that product is garnering plenty of attention from HPC customers.
The plan is for the Deep Computing group to work closely with the System X team so that more HPC-specific x86-based machinery can be offered. Some of this is already in motion. The recent announcements of a GPU-equipped BladeCenter variant and the iDataPlex dx360 M3 suggest a more purposeful x86 HPC strategy.
But selling hardware alone is not in IBM's interest and is certainly not where the company's strength lies. There are already plenty of "value" server vendors out there for do-it-yourself HPC customers. From a company perspective, Big Blue has always made its best margins selling software, services and highly-integrated systems, and it wants to duplicate that model in the Deep Computing group.
High-value software like IBM's General Parallel File System (GPFS), math libraries, and Tivoli Workload Scheduler LoadLeveler has never been marketed or sold aggressively, and was sometimes just given away as an incentive to buy the hardware. "We're leaving 3 to 4 billion dollars on the table every year by not aggressively selling the system software that we've got," says Schultz.
At the same time, IBM plans to implement a better go-to-market strategy for the Deep Computing offerings, using stronger channel partner relationships, as well as greater incentives with business partners and ISVs. The company also plans to find a new route to the market via large system integration firms. The idea is that commercial customers will be able to buy HPC more like appliances, encompassing compute, storage and software, rather than as individual pieces to be cobbled together on-site.
None of this means IBM will be ceding the high-end supercomputer business to Cray. Especially for top 10 systems and future exascale machines, the company is committed to being a player. Schultz concedes that the initial return on these elite projects is not very good, but the ROI is there for future IBM products. It's certainly likely that IBM research projects in areas like phase change memory, 3D chip stacking, silicon photonics and advanced software technology will first show up in high-end HPC systems.
It's worth noting that this commitment is somewhat at odds with Deep Computing's more pragmatic business approach. But the way Schultz tells it, the company intends to get a good chunk of this high-end R&D funded via government programs, as it did with its DARPA HPCS-funded PERCS work. Down the road, the company is counting on getting a generous slice of the more than $1 billion that the US government plans to spend on exascale technologies over the next five to eight years.
As far as the HPC product mix goes, IBM will stick with its trio of Power-based systems, Blue Gene, and the aforementioned x86 product line. Schultz thinks the Power and Blue Gene lines may converge in five years or so, but for now they're keeping those products distinct. The first Blue Gene/Q system, Sequoia, is scheduled for delivery to the NNSA in 2011, with the main pipeline expected in 2012. Meanwhile, the Power7-based servers have already been out for a year, although the first really large deployment of the souped-up IH supercomputing variant of that server will be the NCSA's Blue Waters system, also in 2011.
The only product that failed to make the cut was the Cell-based (PowerXCell 8i) QS22 HPC blade, which one assumes will be phased out at some point. Although that blade was used in Roadrunner, the first petaflop supercomputer, the Cell processor turned out to be too specialized a solution, especially as GPGPU-based acceleration took hold over the last couple of years.
Whether this Deep Computing makeover works or not remains to be seen. But every large HPC server vendor is tweaking its strategy to one degree or another: Cray is dipping into the mainstream market with its CX1 and CX1000 lines; Dell is ramping up its product line with purpose-built performance gear; HP is doing likewise. All of this is being done to tap into what looks to be a burgeoning commercial HPC market. Like its rivals, IBM doesn't want to miss that opportunity. "This is one of the higher points for this business over the last 15 years," says Schultz.
Posted by Michael Feldman - November 04, 2010 @ 3:40 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.