May 20, 2010
Microsoft's ambitions have always been big. A quarter of a century ago, the company's primary mission was to put a consumer-friendly computer on every desktop. At least in the industrialized world, they can consider that mission accomplished. Now they want to do nothing less than model the world.
Actually, what they're envisioning is offering an integrated set of computing tools and platforms that enables others to model the world. The target applications include all the typical HPC suspects: scientific simulations, medical imaging, financial modeling, aerospace design, real-time predictive analytics, bioinformatics, and so on. The overarching plan is to integrate Microsoft's current portfolio of HPC server products, its newly hatched parallel computing tools, and the Azure cloud platform into a complete technical computing portfolio.
To go along with that vision, Microsoft has created a Technical Computing group that brings all the pieces together. Bill Hilf will be heading up marketing for the new group, with Kyril Faenov leading the engineering team. The group will be made up of the HPC team that Faenov started six years ago, the Interactive Supercomputing team brought aboard when that company was acquired last year, the parallel computing group, and a sprinkling of folks from the Microsoft Research division.
According to Faenov, Microsoft is aiming the new effort at the millions of scientists, engineers and analysts out there looking for more user-friendly technical computing, or in his words, "to make their lives easier, lower their costs of discovery, and make innovation faster." That, of course, was and is the theme of the company's current Windows HPC Server 2008 platform for cluster computing, and that same focus will now apply across all their HPC solutions, parallel computing tools and Windows Azure cloud offering.
Bringing the Azure cloud into the HPC fold was a no-brainer. In fact, Faenov says HPC and supercomputing applications already represent a large percentage of the early adopters for their new cloud offering. Microsoft sees Azure as a way to bring technical computing to a much broader set of customers -- either those that don't have the financial wherewithal (or expertise) to build their own HPC infrastructure or those that do have in-house cluster systems, but would like to burst to the cloud at times of peak demand.
Although little of this capability is in place today, the long-term goal is to be able to run a Windows-based HPC app on a local cluster running HPC Server, in the Azure cloud, on a workstation grid, or on some combination of the three. The idea is to make the underlying platform transparent to the applications, so that applications can be migrated as needed. The apps themselves could be in the form of SOA workloads, Dryad programs, or more traditional MPI-based applications.
To fill the parallel programming piece of the puzzle, Microsoft has Visual Studio 2010, which comes with support for things like multicore/manycore coding and MPI-aware debugging, profiling and runtime analysis. In the future, they will integrate support for GPU computing -- there's already a beta plug-in for NVIDIA's Parallel Nsight -- and extend the programming model to support a distributed runtime environment for clusters and clouds.
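To give a flavor of the multicore coding this tooling targets, here is a minimal sketch of fork/join data parallelism. It uses standard C++ threads as a generic stand-in rather than Microsoft's own libraries (Visual Studio 2010 ships the Parallel Patterns Library for this), so the code illustrates the pattern, not the specific API:

```cpp
#include <numeric>
#include <thread>
#include <vector>

// Fork/join sketch: split a reduction across worker threads, then
// combine the per-thread partial results on the calling thread.
long long parallel_sum(const std::vector<int>& data, unsigned nthreads) {
    std::vector<long long> partial(nthreads, 0);
    std::vector<std::thread> workers;
    const std::size_t chunk = data.size() / nthreads;
    for (unsigned t = 0; t < nthreads; ++t) {
        std::size_t begin = t * chunk;
        // Last worker picks up any remainder.
        std::size_t end = (t + 1 == nthreads) ? data.size() : begin + chunk;
        workers.emplace_back([&partial, &data, begin, end, t] {
            partial[t] = std::accumulate(data.begin() + begin,
                                         data.begin() + end, 0LL);
        });
    }
    for (auto& w : workers) w.join();  // wait for every chunk to finish
    return std::accumulate(partial.begin(), partial.end(), 0LL);
}
```

In PPL terms the loop body would become a lambda handed to a `parallel_for`-style construct, with the runtime choosing how to carve up the index range across cores.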
The third focus for the new group will be on tools and applications for technical domain specialists. Faenov says they are seeing significant demand from customers to be able to handle large-scale datasets and to create and visualize the models interactively. Since these tools are aimed at the technical end user rather than the professional software engineer, the environments must be high-level yet rich in mathematical abstractions. Microsoft already has some of these tools in its current stable of offerings (Excel and Microsoft SQL, for example), but more may be on the way. And all of the tools will be designed to work seamlessly across cluster and cloud platforms.
Microsoft has set up a Web site to explain its technical computing initiative. Currently, the site is mostly an infomercial for the new group (with some interesting commentaries from HPC movers and shakers), but eventually the company hopes to turn it into an ecosystem hub that attracts industry practitioners and academics across the community.
This is all about the future, though. Microsoft's announcement this week was the vision, not the product lineup. Becoming a technical computing superpower is going to take time. Faenov says Microsoft will begin laying out its roadmap and offering up some product details over the next few months.