August 03, 2007
In June, EverGrid debuted its Cluster Availability Management Suite (CAMS), a continuous availability and resource management software solution for high productivity computing environments and the utility enterprise datacenter. We asked Mitch Ratner, the VP of Product Management and Business Development at EverGrid, to give us some background on EverGrid's offering and talk about the company's strategy going forward. He also talks about their new Data Center Resource Manager (DCRM) product, which was announced on Tuesday.
HPCwire: First off, let's get a quick overview of your recently debuted Cluster Availability Management Suite (CAMS) -- what problem does it solve, how does it do it, and what makes it unique?
Ratner: CAMS is a suite of two distinct components: Resource Management and Availability Services. Collectively, they allow for complete server and application lifecycle management, and ensure continuous availability for all applications, without the need to change a single line of code in the app, and without modifying the operating system.
CAMS is targeted at the batch computing marketplace, typically found in high performance technical computing data centers. Using our core technology, known as "checkpoint/restore," EverGrid delivers two key capabilities within the HPC data center: stateful pre-emption, and checkpoint/restore of applications ranging from single-node to massively parallel.
Stateful pre-emption is the ability of a job queuing system to "pause" a low-priority job to disk to make room for a high-priority job to run. Once the high-priority job completes, the lower-priority job is simply "resumed" from its checkpointed state on disk. The current state of the art is to simply kill the lower-priority job, run the higher-priority job, then restart the lower-priority job from scratch -- losing all compute cycles spent on that job before it was killed.
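The pre-emption idea described above can be sketched in a few lines. This is a toy illustration, not EverGrid's implementation: the `Job` class, its iteration-counter "state," and pickle-based serialization are all assumptions made for the example (a real system checkpoints the process image itself, not application objects).

```python
import pickle, tempfile, os

class Job:
    """Toy job: its entire state is an iteration counter."""
    def __init__(self, name, priority):
        self.name = name
        self.priority = priority  # higher number = higher priority
        self.iteration = 0

    def step(self):
        self.iteration += 1

def checkpoint(job, directory):
    """'Pause' a job by serializing its state to disk."""
    path = os.path.join(directory, job.name + ".ckpt")
    with open(path, "wb") as f:
        pickle.dump(job, f)
    return path

def restore(path):
    """'Resume' a job from its checkpointed state on disk."""
    with open(path, "rb") as f:
        return pickle.load(f)

# A low-priority job makes progress, then a high-priority job arrives.
# Instead of killing the low-priority job, checkpoint it to disk.
workdir = tempfile.mkdtemp()
low = Job("low", priority=1)
for _ in range(500):
    low.step()
ckpt = checkpoint(low, workdir)   # pause, don't kill
# ... the high-priority job runs to completion here ...
resumed = restore(ckpt)           # resume from checkpointed state
assert resumed.iteration == 500   # no compute cycles lost
```

The contrast with the kill-and-restart approach is the final assertion: the resumed job keeps all 500 iterations of work instead of starting from zero.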
Checkpoint/restore allows single-node or massively parallel jobs to recover from faults in the environment with minimal loss of compute cycles. Many of the jobs in the HPC data center run for long periods of time (days, weeks or even months). Upon a failure, the current state of the art is that the job gets restarted from the beginning. With our checkpoint/restore technology, periodic checkpoints of the application state are taken, so that when a component failure takes an application down, CAMS can restore the application to its last checkpoint -- perhaps even on different servers -- and allow it to continue. This preserves virtually all the compute cycles already spent on the application, and provides a near continuous availability environment for applications.
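The periodic-checkpoint pattern can be sketched as follows. Again this is a hypothetical illustration, not EverGrid's code: the checkpoint interval, the single-integer "application state," and the simulated failure are assumptions, and a real fault-tolerance layer would capture full process and interconnect state transparently.

```python
import pickle, os, tempfile

CKPT = os.path.join(tempfile.mkdtemp(), "app.ckpt")
INTERVAL = 100  # iterations between periodic checkpoints

def run(total_iters, fail_at=None):
    """Run (or resume) a long computation, checkpointing periodically."""
    state = 0
    if os.path.exists(CKPT):           # restore from the last checkpoint
        with open(CKPT, "rb") as f:
            state = pickle.load(f)
    while state < total_iters:
        if fail_at is not None and state == fail_at:
            raise RuntimeError("simulated node failure")
        state += 1
        if state % INTERVAL == 0:      # periodic checkpoint to disk
            with open(CKPT, "wb") as f:
                pickle.dump(state, f)
    return state

try:
    run(1000, fail_at=750)   # "node" fails at iteration 750
except RuntimeError:
    pass
result = run(1000)           # restart: resumes from the 700-iteration checkpoint
assert result == 1000
```

The restart loses only the work since the last checkpoint (50 iterations here), rather than all 750 -- which is the whole value proposition when jobs run for days or weeks.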
HPCwire: CAMS is targeted to both high performance technical computing and high performance enterprise computing. Since this software uses a type of virtualization, how does it fit into the high performance model of computing? What's the virtualization technology being employed?
Ratner: CAMS is applicable to HPTC environments, and our soon-to-be-announced Data Center Resource Manager (DCRM) is targeted at the emerging enterprise dynamic data center. We will demonstrate a beta version of DCRM at LinuxWorld/NGDC, alongside CAMS.
Virtualization is a very overused term, and means many different things to different people. What we do is virtualize operating system calls by pre-loading our user-space library with the application. We call this our OS Abstraction Layer, and we "virtualize" every OS call that returns some form of OS handle to a resource in the environment. So, in addition to capturing application state, we virtualize all OS calls, which allows CAMS or DCRM to make the application mobile: the application can be physically moved, while running, to any other server, whether physical or inside a virtual machine.
Machine virtualization technologies such as VMware and XenSource are not popular in HPTC environments because of their operating overhead. Typically, HPTC applications consume as much of the CPU as possible and they don't want to lose any horsepower to existing virtualization technologies. Our OS abstraction layer captures all app state with less than five percent overhead, which is unprecedented. Given this low overhead and our value propositions in the HPTC data center, our OS abstraction layer is being accepted into this community.
HPCwire: What's your market strategy? Are you focusing your efforts in certain market sectors or datacenter environments where the high-availability (HA) and fault tolerance are already well-known issues? Or are you taking more of an evangelist approach and trying to reach a broader set of users?
Ratner: We have to do both. Most data centers are familiar with fault tolerance and HA, but evangelism is needed for our DCRM product, which does much more. DCRM is intended to allow complex environments to be treated as a single managed entity, with pools of resources being carved up as needed to satisfy any given application workload. DCRM implements policy-based management as well, to enable the dynamic nature required in a utility-based data center. Education is needed to clearly articulate the needs of the emerging data center and which type of solution is best. We believe our vertically integrated stack of functionality, built to scale to thousands of nodes as a basic core assumption, satisfies all needs of the new "utility computing" environments.
For the HPC arena, our value proposition of stateful job pre-emption and fault tolerance for massively parallel jobs is very clear, and solves critical pain points, so little evangelism is needed. In those environments, we simply need to run proofs of concept and show our low overhead, and the product sells itself.
HPCwire: Is the company looking to develop relationships with IBM, Sun, HP, Dell or maybe some of the smaller cluster vendors as a way to get better market penetration?
Ratner: EverGrid is going to build a strong channel, partnering with big-iron vendors as well as some niche players that have larger market share in their particular sectors. EverGrid will also partner with ISVs and their channels.
HPCwire: What's in the works for EverGrid for the next 6-12 months?
Ratner: We will roll out our GA version of DCRM, and continue to expand our CAMS product to support more fabrics and system interconnects, as well as getting shrink-wrapped applications supported by both EverGrid and the ISVs themselves. EverGrid will also begin to build out its channels and direct sales force. EverGrid is getting ready for a steep growth phase.