HPCwire

Since 1986 - Covering the Fastest Computers
in the World and the People Who Run Them



A New Take on Utility Computing


Since making my foray into the world of grid computing, I have seen no shortage of efforts, solutions and practices dubbed "utility computing." Offerings such as Sun's Network.com, IBM's Blue Gene On-Demand and HP's Utility Computing Services are only the tip of the iceberg, as countless vendors -- from 3Tera to Egenera -- and organizations -- Media Grid, for example -- also have joined the fray, all adding their own spin to the idea of capacity on-demand. Last week, however, I saw utility computing take on what is (to me, at least) a whole new, yet awfully familiar, look.

You see, for most of the aforementioned utility computing solutions -- and for the majority of those I've omitted -- the catch is that the CPU resources come from outside of the company firewall, from some resource provider who spares you the hassle of maintaining a costly hardware infrastructure. In the case of the newly announced partnership between software provider Cassatt and IT consultant BearingPoint, however, the notion of utility computing comes in-house, with the IT department playing the role of service provider to the rest of the company. And on top of simply providing resources, the solution, based on Cassatt's Collage Platform, offers the increasingly prevalent Grid 2.0 characteristics of dynamic resource allocation, SLAs and the ability to manage both physical and virtual machines.

Frederic Veron, managing director at BearingPoint, acknowledges that utility computing has been broadly viewed as an external IT solution, but told me that in his opinion, the concept is far more specific to what rather than where. " 'Utility' means what?" he asked. "It means you can subscribe to it; you have a number of different service levels and, depending on the service level you select ... your unit cost will be different; and you have that notion of computing on-demand, or computing on-tap, and you pay as you go or as you use." Given his definition, it's difficult to argue that utility computing necessarily means off-site.
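Veron's definition lends itself to a quick back-of-the-envelope sketch. The tier names and per-unit rates below are invented for illustration -- the article doesn't specify any actual pricing -- but they capture the two ingredients he names: a unit cost that depends on the service level you subscribe to, and a bill that tracks only what you use.

```python
# Hypothetical pay-as-you-go pricing under Veron's definition of "utility."
# Tier names and rates are invented for this example, not taken from
# Cassatt, BearingPoint, or any real offering.
TIER_RATES = {          # dollars per CPU-hour, by service level
    "bronze": 0.10,     # best-effort scheduling
    "silver": 0.18,     # tighter SLA, higher unit cost
    "gold":   0.30,     # dedicated capacity, tightest SLA
}

def monthly_bill(tier: str, cpu_hours: float) -> float:
    """Pay-as-you-go: total cost = metered usage x the unit rate
    for the chosen service level."""
    return round(cpu_hours * TIER_RATES[tier], 2)

print(monthly_bill("silver", 1200))  # 1200 CPU-hours at $0.18 -> 216.0
```

The point of the sketch is that nothing in it cares whether the CPUs sit at an external provider or inside the corporate firewall -- which is exactly Veron's "what, not where" argument.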

Of course, the real key to delivering resources as a utility lies in the ability to accurately meter and monetize usage, and both Veron and Gamiel Gran, Cassatt's vice president of alliances, rank Collage's metering function among its most notable features. In fact, if you ask me, the emphasis on resource monitoring might be the big differentiator between Cassatt's solution and the slew of on-demand, datacenter-driven grid platforms that have been getting so much (virtual) ink in this publication. According to Gran, "This is distinct from a traditional grid-based infrastructure model that predominantly enforces usage of resources themselves. For us, it's more about full automation and optimization of a service delivery model." He cites characteristics like high availability, SLAs and policy enforcement, and metering as elements of this automation and optimization.
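In the in-house model, metering is what turns the IT department into a service provider: usage records get rolled up per internal "customer" so each group can be billed for what it consumed. The record format below is invented for illustration -- the article says nothing about how Collage actually represents metering data -- but the aggregation step is the essence of any chargeback scheme.

```python
from collections import defaultdict

# Hypothetical metering records of the kind a platform like Collage
# might emit: (internal department, CPU-hours consumed). The tuple
# format and department names are invented for this sketch.
records = [
    ("trading", 40.0),
    ("risk",    15.5),
    ("trading", 24.5),
]

def chargeback(records):
    """Aggregate metered usage per internal customer so the IT
    department can bill each group for exactly what it consumed."""
    usage = defaultdict(float)
    for dept, hours in records:
        usage[dept] += hours
    return dict(usage)

print(chargeback(records))  # {'trading': 64.5, 'risk': 15.5}
```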

Different from a "traditional grid-based infrastructure model"? Perhaps. But when compared to the current and, indeed, next generation of grid platforms, it is really the last item on Gran's list, metering, that sets Collage, and the Cassatt/BearingPoint definition of utility computing, apart from solutions that include "grid" somewhere in their descriptions. Veron, for his part, told me that while he doesn't consider grid and virtualization capabilities inherent to utility computing, "You need a little bit of each of these ingredients to really have a strong utility in your environment." But enough of this; I could argue semantics all day.

The real story here is that we're seeing the move toward on-demand, virtualized, highly available and policy-driven datacenter solutions gain even more momentum. In the case of Cassatt and BearingPoint, the partners will focus their collective energies initially on the financial services market -- as Gran said, "Where the pain is most severe is where we go first" -- but they believe that public sector institutions, along with companies in the technology, communications and media markets, also will be big customers of the product.

This announcement aside, however, there are a few other items in this issue that I think are worth noting, and on which I will be following up over the next couple of weeks. The first of these is Altair's new usage-based licensing model, which the company assures us will revolutionize the way grid shops utilize their infrastructures. Next up is Digipede landing yet another financial services customer, this one in the form of hedge fund manager III Offshore Advisors. I'll be speaking with Digipede President John Powers to get the low-down on why his company seems to be so popular with the financial marketplace. And, finally, expect to hear a little more about Objectivity, whose Objectivity/DB platform has just achieved the highest level of IBM grid compatibility. While this news by itself might not seem too important, when viewed in the context of the recent buzz around distributed databases and grid systems, it becomes considerably more significant.

Finally, as usual, I'll direct your attention to a few other announcements that -- if you haven't already seen them -- are sure to be big news for certain segments of our audience. These include: "DoE Launches First Segment of Next-Generation Network"; "Clemson, Sun Combine HPC With Transportation Industry"; "Juniper Networks Boosts Performance for SAP Applications"; and "BEA Addresses EDA With WebLogic Event Server."

Posted by Derrick Harris - June 04, 2007 @ 10:59 AM, Pacific Daylight Time


Derrick Harris

Derrick Harris is the Editor of On-Demand Enterprise

