
As the Paradigm Shifts


The industry's headlong rush into cloud computing is shaking up the old order, sometimes in ways even the biggest IT firms can't anticipate. And while there has not been a wholesale conversion to the idea of utility computing, momentum seems to be steadily building in spite of the dire economic situation -- or maybe because of it.

In a Wisconsin Technology Network column this week, Peter Coffee, the director of platform research at Salesforce.com, wonders whether the plummeting economic indicators may actually be obscuring an IT sector transformation taking place beneath all the financial carnage. His main argument is that capital spending may no longer be the best indicator of economic growth in the information age, since it's now possible for firms to rent things like compute cycles from utility computing providers:

You don't need to own a car if you live in a place that's served by Zipcar. You don't need to own a collection of recording media artifacts if you're just as happy with unlimited music on demand, for a fixed subscription fee, at Napster. And you don't need to buy, or even lease, a supercomputer to run complex models when you can buy capacity by the minute from Amazon.

While the main audience for utility computing is the larger enterprise market, HPC apps continue to show up in the cloud with increasing frequency. It's mainly smaller firms, the ones that have trouble justifying a large cluster purchase, that are being attracted to HPC in the cloud. But in these challenging financial times, companies of all sizes are likely to take a look at renting cycles off-site.
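
To see why renting can pencil out, here's a back-of-the-envelope comparison, written as a minimal Python sketch. Every figure in it (cluster price, amortization period, hourly rate, utilization) is an illustrative assumption, not vendor pricing:

    # Back-of-the-envelope: buy a cluster vs. rent cycles on demand.
    # All numbers are illustrative assumptions, not vendor pricing.
    CLUSTER_PRICE = 250_000     # assumed purchase price of a small cluster (USD)
    AMORTIZATION_YEARS = 3      # assumed useful life of the hardware
    ANNUAL_OPS_COST = 30_000    # assumed power, cooling, and admin per year (USD)

    NODES = 64                  # cluster size in nodes
    RENTED_NODE_HOURLY = 0.80   # assumed on-demand price per node-hour (USD)
    UTILIZATION = 0.20          # fraction of hours the machine is actually busy
    HOURS_PER_YEAR = 24 * 365

    # Owning: you pay for hardware and operations whether it's busy or not.
    own_per_year = CLUSTER_PRICE / AMORTIZATION_YEARS + ANNUAL_OPS_COST

    # Renting: you pay only for the node-hours you actually consume.
    rent_per_year = RENTED_NODE_HOURLY * NODES * HOURS_PER_YEAR * UTILIZATION

    print(f"own:  ${own_per_year:,.0f} per year")    # ~$113,333
    print(f"rent: ${rent_per_year:,.0f} per year")   # ~$89,702

    # At 20% utilization, renting wins; push utilization much higher and
    # ownership wins, which is why the cloud appeals most to bursty workloads.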

A recent article at Fortune points to Kenworth Truck Company's use of aerodynamic design software hosted on an IBM cluster to design truck mudflaps. The truck maker determined that it could rent access to supercomputer-class hardware for a fraction of the price of buying a machine outright. The design software used by Kenworth came from Exa, which noted that although two-thirds of its revenue still comes from selling software in the conventional way, sales from utility-based packages are "growing almost twice as fast."

If mudflaps seem a bit mundane, last week I wrote about biotech startup Pathwork Diagnostics, which was using Amazon EC2 and Univa UD's UniCloud as a platform for its cancer diagnostics tool. Pathwork's rationale for shifting to the cloud model: a two-thirds cost savings compared to buying a new machine, plus the flexibility to scale up for peak computing needs.
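
That scale-up-for-peaks pattern maps directly onto EC2's API. Here's a minimal sketch using the boto3 SDK purely for illustration; the AMI ID, instance type, and fleet size are hypothetical placeholders, and none of this reflects Pathwork's or Univa's actual tooling:

    # Illustrative only: burst out a fleet of workers for a peak run, then
    # terminate them once the job finishes. AMI ID and instance type are
    # hypothetical placeholders; assumes AWS credentials are configured.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Launch a fleet of workers sized to the peak workload.
    resp = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder machine image
        InstanceType="c5.xlarge",         # placeholder instance type
        MinCount=1,
        MaxCount=32,                      # scale the fleet to the peak
    )
    instance_ids = [i["InstanceId"] for i in resp["Instances"]]

    # ... dispatch the diagnostic workload and wait for it to complete ...

    # Pay-per-use: shut everything down as soon as the peak has passed.
    ec2.terminate_instances(InstanceIds=instance_ids)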

IT vendors are taking notice. Today, every major IT firm has a "cloud computing strategy," although it's way too soon to tell who the big winners and losers will be. A company like Microsoft would seem to have the furthest to go, since it has relied on its traditional client-side software for so long. Transitioning from a shrink-wrapped software model to a service model is going to be tricky for the software giant, but over the past few years the company has been making a huge effort to shift course. Last year it rolled out its Azure cloud operating system in the hopes of duplicating the success it enjoyed with its flagship Windows platform.

Of particular interest to the HPC crowd was Microsoft's announcement last week regarding a new research initiative named Cloud Computing Futures (CCF). The group is being led by long-time HPC'er Dan Reed and we'll be covering the project in more depth later this week. In a nutshell, CCF is a collection of hardware and software technologies -- including Azure -- that attempts to define the next-generation cloud platform. Considering that cloud computing 1.0 is still coalescing, that's a pretty ambitious undertaking.

One of the major goals of CCF is to come up with a much more energy- and cost-efficient cloud computing platform than is available today. Toward that end, the Microsoftians are experimenting with Intel Atom-based servers. The Atom is Intel's ultra-low-power CPU aimed at MIDs, netbooks, and nettops. Its big draw: for around 30 or 40 dollars and just a handful of watts, the chip gives you x86 compatibility.

Using the Atom in servers is not a new idea. Last year at SC08, SGI debuted an experimental Atom server called Molecule. Even though the performance of an individual Atom CPU was meager by Xeon standards, the performance-per-watt of the system was much better. Plus, the memory bandwidth of an Atom processor was about three times that of a conventional x86 CPU. A Molecule rack with 10,000 cores boasted an aggregate memory bandwidth of 15 terabytes per second.
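
That rack-level spec implies a per-core figure worth working out. Here's a quick arithmetic sketch in Python, derived entirely from the numbers above:

    # Derive the per-core memory bandwidth implied by SGI's SC08 Molecule
    # figures: a 10,000-core rack with 15 TB/s of aggregate bandwidth.
    cores = 10_000
    aggregate_bw_gb_s = 15.0 * 1_000  # 15 TB/s expressed in GB/s

    per_core_bw = aggregate_bw_gb_s / cores
    print(f"{per_core_bw:.1f} GB/s per core")  # 1.5 GB/s for every core in the rack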

Of course, Intel wouldn't be happy if Atom servers became all the rage in cloud computing. It would much rather sell its more expensive, higher-margin Xeon server parts to datacenter customers. Figuring out how to keep its Atoms in line could turn out to be a real challenge for Intel. The low power draw and low cost of mobile CPUs are exactly the attributes that make them so attractive for computing at scale. Yes, even for chipmakers, the rise of cloud computing may demand some tricky maneuvers.

Posted by Michael Feldman - March 03, 2009 @ 5:34 PM, Pacific Standard Time

Michael Feldman is the editor of HPCwire.

