February 04, 2010
The National Science Foundation and Microsoft have announced a collaboration that will provide researchers with free access to the Windows Azure cloud platform. In a Thursday morning webcast, Dan Reed, who leads Microsoft's eXtreme Computing Group, and Jeannette Wing, the NSF's assistant director for Computer & Information Science & Engineering, talked about the new agreement. The announcement came on the heels of Monday's commercial launch of Microsoft's Windows Azure platform.
In a nutshell, Windows Azure is a cloud operating system that lets users run their applications in Microsoft's own datacenters. Unlike Amazon's EC2 service, which basically lets you rent compute cycles, Microsoft is providing a more complete software platform for prospective cloud users. Microsoft has talked about offering Azure for "private clouds" at some future date, but the NSF's interest isn't based on that capability.
In fact, one of the major goals here is freeing scientists and engineers from being tied to local datacenters for their computational work. In general, NSF-funded researchers at universities are reliant on local systems -- desktops, clusters and full-blown supercomputers -- which, themselves, are often funded by the agency. But a lot of scientific applications are too big for desktops and too small for supercomputers, which means researchers are dependent upon compute and storage clusters housed in university facilities. The problem is that these institutions are not in the IT infrastructure business, so there is strong motivation to offload the procurement and management of these systems to someone else.
The other aspect to this is that the cloud model offers elastic compute and storage capacity for the end user. Because of the nature of scientific work, researchers need lots of capacity at certain stages, and none at other stages. The cloud is great for that kind of workflow since you only have to pay for what you use.
In this case, though, the researchers won't even be paying for it. Under the program, access to Azure will be free of charge for three years, and Microsoft will also be providing "an engagement team" to help researchers get their apps onto Azure. Wing said the NSF will be awarding access via a review process and plans to pony up $5 million to help fund the researchers. Initially, "at least tens of projects" will be selected, said Wing.
The typical application profile is one that is data-heavy and highly parallel, but doesn't require tight communication between compute nodes or top-10 supercomputing-level capability. A lot of scientific computation falls into this category, especially that which is based on parallel algorithms for mining large datasets. If that sounds like you and you're a principal investigator interested in giving Azure a whirl, you should take a look at the NSF letter on how to apply for funding.
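To make that workload profile concrete, here's a minimal, purely illustrative sketch (plain Python on a single machine, not any Azure API): the dataset is split into independent partitions, each worker scans its partition with no inter-node communication, and the partial results are combined at the end. The data and the filtering predicate are made-up stand-ins.

```python
from multiprocessing import Pool

def count_hits(chunk):
    """Scan one partition of the dataset independently.
    No communication with other workers is needed -- the hallmark
    of an embarrassingly parallel mining job."""
    return sum(1 for record in chunk if record % 7 == 0)

if __name__ == "__main__":
    # Stand-in for a large dataset, split into independent partitions.
    dataset = list(range(1_000_000))
    chunks = [dataset[i:i + 100_000]
              for i in range(0, len(dataset), 100_000)]

    with Pool() as pool:
        partial_counts = pool.map(count_hits, chunks)  # scatter work
    total = sum(partial_counts)                        # gather results
    print(total)  # 142858 multiples of 7 in 0..999999
```

Because each partition is processed independently, this shape scales out to as many cloud nodes as the budget allows, which is exactly why it suits pay-as-you-go platforms.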
This isn't the agency's first foray into the cloud. In 2008, the NSF launched its Cluster Exploratory (CLuE) program, which set up access to Google/IBM and HP/Intel/Yahoo systems to study cloud services platforms. Unlike those two platforms, which were based on open-source Hadoop, Azure supports a Windows-based programming interface. In typical NSF fashion, the agency is spreading its bets around. The driving force behind all of this, of course, is to get more computational value per dollar, or more precisely more scientific knowledge per dollar. As Microsoft's Reed noted, "The purpose of computing is insight, not numbers."
The drive to make computation more efficient is permeating the federal government these days. And despite the generous 8 percent budget increase for the NSF requested by the Obama administration, the agency is still looking to find better ways to meet its IT needs. It looks like cloud computing is destined to be a big part of that.
Posted by Michael Feldman - February 04, 2010 @ 6:21 PM, Pacific Standard Time
Michael Feldman is the editor of HPCwire.