Too Early for HPC in the Cloud? Microsoft Responds...


Last week, as I prepared a lengthy article about the post-virtualization performance gap in high-performance computing, which suggests that it might be too early for HPC in the cloud for many users across a wide range of applications, I reached out to Microsoft's Director of Technical Computing, Vince Mendillo, for a quote on the company's position. What I received in return was so thorough that it seemed most appropriate to give it some room of its own. The answer is not only more complete than the typical response, it provides a unique glimpse into Microsoft's world, at least as it pertains to HPC and cloud.

The following is Mendillo's verbatim email answer, which sheds some light on the possibilities for the future, even if it doesn't touch on many of the significant challenges that actual HPC users are discussing. That might be because the application-specific complaints, which are almost always rooted in performance, are scattered across several different research areas and groups, or it might simply be that Microsoft has boundless hope for the possibilities of clouds for traditional HPC users. Either way, the response, which appears below in non-italics, reiterates Microsoft's position and lends some insight into where the company is heading in the coming months.

High performance computing is at an inflection point and the time has come for high performance computing in the cloud. We believe the cloud can provide enabling technology that will make supercomputing available to a much broader range of users. This means a whole new group of scientists, engineers and analysts who may not have the resources for, or access to, on-premises HPC systems can now benefit from their power and promise.

It’s important to note that certain HPC workloads are ready for the cloud today (e.g., stochastic modeling, embarrassingly parallel problems), while others (e.g., MPI-based workloads) will take longer to move to the cloud because they require high-speed interconnects and high bandwidth for low-latency, node-to-node communication. Data sensitivity and locality are also important considerations: large, highly sensitive data might be better suited to on-premises HPC, while publicly available data in the cloud could fuel new, innovative HPC work.
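To make that distinction concrete, here is a minimal sketch (my illustration, not Microsoft's) of the kind of embarrassingly parallel stochastic job Mendillo describes as cloud-ready, written in Python with only the standard library. Each worker runs an independent trial; the absence of node-to-node communication is precisely what makes such a workload easy to move off-premises.

```python
# Illustrative sketch (an assumption for this article, not Microsoft code):
# an embarrassingly parallel Monte Carlo estimate of pi. Workers are fully
# independent, so the job scatters across cloud nodes with no need for the
# low-latency interconnects that MPI-style workloads depend on.
import random
from multiprocessing import Pool

def estimate_pi(seed: int, samples: int = 1_000_000) -> float:
    """One independent trial; a distinct seed keeps workers uncorrelated."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(samples)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * hits / samples

if __name__ == "__main__":
    workers = 8  # hypothetical worker count; these could be cloud nodes
    with Pool(workers) as pool:
        estimates = pool.map(estimate_pi, range(workers))
    # The only coordination is a trivial reduction at the very end.
    print(sum(estimates) / workers)
```

An MPI-based solver, by contrast, would exchange data between workers at every step, which is where interconnect latency and bandwidth start to dominate.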

We feel that organizations will benefit from both on-premises HPC and HPC in the cloud. Among the benefits of this blended model are:

• Economics: On-premises computational resources include more than servers. For most organizations, much of the on-premises computing cost is infrastructure and labor; other expenses such as power, cooling, storage and facilities also have to be factored in. The cloud can provide economic advantages over on-premises-only computational resources. For example, take a “predictable bursting” scenario: to provision for an organization's computational requirements, including periods of peak demand, with on-premises resources alone, the organization would be paying for capacity that goes unutilized much of the time. By provisioning a predictable level of computational demand with on-premises resources, while accommodating “bursts” in demand with cloud computing, the organization achieves much better utilization rates and pays only for what it needs (see the sketch after this list).

• Access: Some stand-alone organizations (and workgroups inside larger companies) do not have access to on-premises HPC systems today. HPC in the cloud gives these organizations an entirely new resource. For example, a small finance or engineering firm that runs a periodic model but doesn't want a closet full of servers can access high performance computing on a moment's notice.

• Sharing and Collaboration: The cloud enables multiple organizations to share data, models and services easily. With on-premises HPC, sharing involves moving data back and forth across LAN/WAN links, which is impractical and costly for large data sets. By putting data and models into public clouds, sharing among multiple organizations becomes more practical, creating the possibility of new partnerships and collaborations.
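As a rough illustration of the “predictable bursting” economics in the first bullet above, the back-of-the-envelope Python model below compares provisioning for peak demand on-premises against owning the baseline and renting the burst. Every number in it (node counts, hours, prices) is an invented assumption for illustration, not data from Microsoft.

```python
# Illustrative back-of-the-envelope model; all figures are hypothetical.
HOURS_PER_YEAR = 8760

baseline_nodes = 100          # steady demand, nodes busy all year
peak_extra_nodes = 400        # extra nodes needed only during bursts
burst_hours = 500             # hours per year the bursts actually run

onprem_cost_per_node_year = 3000.0  # assumed: server, power, labor, space
cloud_cost_per_node_hour = 1.0      # assumed on-demand rate

# Option 1: own enough capacity for peak demand year-round.
peak_provisioned = (baseline_nodes + peak_extra_nodes) * onprem_cost_per_node_year

# Option 2: own the baseline, rent the burst only when it is needed.
blended = (baseline_nodes * onprem_cost_per_node_year
           + peak_extra_nodes * burst_hours * cloud_cost_per_node_hour)

# Utilization if the whole peak fleet is owned but mostly idle.
utilization = (baseline_nodes * HOURS_PER_YEAR
               + peak_extra_nodes * burst_hours) / (
               (baseline_nodes + peak_extra_nodes) * HOURS_PER_YEAR)

print(f"peak-provisioned on-premises: ${peak_provisioned:,.0f}/year")
print(f"blended on-premises + cloud:  ${blended:,.0f}/year")
print(f"utilization when provisioned for peak: {utilization:.0%}")
```

With these assumed numbers, owning the peak costs $1,500,000 a year at roughly 25% utilization, while the blended approach costs $500,000; the real figures depend entirely on an organization's demand profile and rates, which is the point of the bullet.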

By building a parallel computing platform, we can enable new models across science, academia and business to scale across desktop, cluster and cloud to both speed calculation and provide higher fidelity answers. The power of the cloud is a fantastic example of the tremendous computational resources that are becoming available.

Today, scientists, engineers and analysts build models, hand them to software developers to code (which can take weeks or months), and then hand them to their IT departments to run on a cluster. The time from math (or science) to model to answer is extraordinarily slow. Even once an application is coded, it can take days (or sometimes months) for a simulation to run. Imagine having greater computational power available to simulate interactively throughout the day, taking advantage of clusters and the cloud. Faster analysis reduces time to results and can speed discovery or create competitive advantage. This potential, across nearly every sector, is at the core of Microsoft’s Technical Computing initiative. Consider some of the possibilities when HPC power is more broadly available and accessible:

• Better predictions to help improve the understanding of pandemics, contagion and global health trends.

• Climate change models that predict environmental, economic and human impact, accessible in real-time during key discussions and debates.

• More accurate prediction of natural disasters and their impact, enabling more effective emergency response plans.

We’re working with partners and the technical computing community to bring this vision to life. You can tune into the conversation at http://www.modelingtheworld.com.

Posted by Nicole Hemsoth - June 27, 2010 @ 2:01 PM, Pacific Daylight Time


Nicole Hemsoth

Nicole Hemsoth is the managing editor of HPC in the Cloud and discusses a range of overarching issues related to HPC-specific cloud topics in her posts.


