August 01, 2013
Conspicuously missing from Microsoft's recent corporate reorganization was any mention of the company's HPC group, which develops a cluster-enabled version of Windows Server, among other products. That absence doesn't signal trouble, however: the program is going "full steam ahead," the manager in charge of Microsoft's HPC program said this week.
"Customers and partners have been asking us what the recent Microsoft organization changes mean for HPC. What does it mean for HPC?" Alex Sutton, the Group Program Manager for the Big Compute program at Microsoft asked in a recent blog post. "The answer--full steam ahead."
Sutton says his Big Compute team has several updates in the works that will bring new capabilities to existing products. "We're still working on the HPC Pack for Windows Server clusters and enabling new Big Compute scenarios in the cloud with Windows Azure," he says. "Because we are part of the Windows Azure Group, we are driving capabilities like low-latency RDMA networking in the fabric."
Microsoft has not achieved the success it envisioned when it entered the HPC business with the launch of Windows Compute Cluster Server 2003 back in 2006. Consider that the Top 500 list published in the fall of 2007 included six Windows-based clusters; the most recent Top 500 list included just three.
And one of Microsoft's current Top 500 listings is Windows Azure itself, an HP-based cluster with more than 8,000 cores and 60 TB of memory that ranked 241st on the list at 151 teraflops. (This isn't the entirety of Windows Azure, of course, but the portion used for HPC.)
Sutton stressed the importance of Windows Azure to Microsoft's HPC strategy going forward. This isn't surprising at all, considering that the company moved its HPC-focused Technical Computing group to the Windows Azure Group two years ago.
The ability to rapidly tap into a cloud-based HPC resource like Azure marks a fundamental change, Sutton says: it lets customers pay only for what they use and eliminates the need to invest in HPC resources that might otherwise sit idle.
"Our customers in research and industry get it," Sutton says in his blog. "Our enterprise customers are able to keep their on-premises servers busy, while running peak load in Azure. And now developers can cost-effectively test applications and models at scale. We are part of the Enterprise and Cloud Division at Microsoft for a reason."
Sutton also addressed criticisms that Microsoft's HPC strategy more strongly favors businesses and enterprises over academia and national laboratories, the traditional heart of HPC in the U.S. "The research community advances the leading edge of HPC. Our team and Microsoft Research continue to work closely with partners in academia and labs--we value the relationships and feedback," he said.