June 05, 2010
Microsoft is no stranger to the concept of delivering capability to the mainstream, and accordingly, it has been pushing its technical computing message of late with the launch of ModelingTheWorld and a series of announcements driving home a simple point: it wants to be a forerunner in delivering high-performance computing to millions.
Yes, millions. By my count, that's something like 800,000 more HPC users than we might have seen before. But that statement just opens another can of worms (is it still HPC if anyone can use it, for instance? That's a culture question, though).
While these are still HPC applications we're talking about, Microsoft and others have seized on the wide range of figures suggesting we're entering a new phase of potential profit, one reached by bringing incredible compute capacity to the mom-and-pops of the world. And there are a lot of them; whether it's millions remains to be seen, but Microsoft is betting on it.
This week at ISC I sat down with Vince Mendillo to discuss Microsoft’s goals in the near future and how delivering large-scale capacity can be achieved. Mendillo broke down Microsoft’s initiatives into three parts, all of which are key hurdles for other companies with cloud investments. The difference between Microsoft and several other vendors in the same space is not difficult to imagine—capital.
Mendillo discusses these initiatives using the word "investment," which means not only funds, of course, but internal research resources as well. In other words, this is not an IBM Research-like effort with mere claims to fame for fame's sake. Microsoft means business. And it certainly does not end with (or even necessarily begin with) Azure.
Just as with other vendors, simply having the infrastructure is meaningless without a drive to complement that capacity with functionality. Simplifying the writing of parallel code given the relatively few MPI experts available, grappling with the exponential growth in data and figuring out how to manage it, and then extending the capabilities of Azure to make it the leading choice for HPC on demand: these are all behemoth goals, but they are coming from an industry giant.
Microsoft has set a goal of reaching the millions of users who need HPC but have been unable to clear the hundred hurdles it used to take to grasp it. This is a new movement in HPC because the potential for profit has created a race that, so far, only a few companies have prepared for.
It's still early in the game, but the players are lining up.
Posted by Nicole Hemsoth - June 05, 2010 @ 2:28 PM, Pacific Daylight Time
Nicole Hemsoth is the managing editor of HPC in the Cloud and will discuss a range of overarching issues related to HPC-specific cloud topics in posts.