May 01, 2013
At the 2013 Open Fabrics International Developer Workshop, in Monterey, California, VMware's in-house HPC expert Josh Simons delivered a presentation [slides] on the Software-Defined Datacenter. While Simons mainly inhabits the HPC space, he donned his enterprise hat for this talk.
The phrase software-defined datacenter started making the rounds in the second half of 2012, spearheaded by VMware. At its essence, a software-defined datacenter is a prescriptive model for bringing the benefits of virtualization to the rest of the datacenter. It is an enabling technology for what some are calling Cloud 2.0.
In discussing the evolution of virtual platforms, Simons says "the next leap is going beyond the single-datacenter or beyond small or modestly-sized clusters to actually supporting a hybrid cloud model where you want mobility of those applications and workloads across a much wider range. You want to be able to do it across a full datacenter and also do it between, say, a private cloud deployment and a public cloud deployment."
According to Simons, getting to this next level, i.e., achieving this robust, scalable hybrid cloud, means first putting in place the software-defined datacenter.
"We create a virtual datacenter abstraction underpinned by a set of all-software services that allow us to provision networking, provision storage, provision compute and memory, etc., all the resources that you need to stand up the service that you're intent on standing up, but do that totally from a software-based perspective and do it at scale.
"That's absolutely necessary if we're going to move into the cloud. And that is, simply put, what the software-defined data center is about.
"The software-defined data center is the architecture that lets us deliver the cloud. It's how we would build it as a provider of software to customers that are building clouds. This is the underpinning that you would use to do this."
Where cloud is a way of offering computing services, the software-defined datacenter provides an architecture for delivering it, one in which:
• All infrastructure is virtualized
• Infrastructure is delivered as a service
• Control of the datacenter is entirely automated by software
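The claim that every resource is provisioned entirely in software can be made concrete with a small sketch. The names below (`ResourcePool`, `provision_service`) are hypothetical and not any VMware API; this is only a conceptual model of software-driven allocation of compute, memory, storage, and network resources from virtualized pools.

```python
# Hypothetical sketch of software-defined provisioning: every resource a
# service needs (compute, memory, storage, network) is allocated by
# software from virtualized pools, with no manual hardware configuration.
# Names (ResourcePool, provision_service) are illustrative, not a real API.

class ResourcePool:
    def __init__(self, cpus, memory_gb, storage_gb, vlans):
        self.cpus = cpus
        self.memory_gb = memory_gb
        self.storage_gb = storage_gb
        self.vlans = list(vlans)            # available virtual networks

    def allocate(self, cpus, memory_gb, storage_gb):
        """Carve a slice out of the pool; fail if capacity is exhausted."""
        if cpus > self.cpus or memory_gb > self.memory_gb or storage_gb > self.storage_gb:
            raise RuntimeError("insufficient capacity in pool")
        self.cpus -= cpus
        self.memory_gb -= memory_gb
        self.storage_gb -= storage_gb
        return {"cpus": cpus, "memory_gb": memory_gb,
                "storage_gb": storage_gb, "vlan": self.vlans.pop(0)}

def provision_service(pool, spec):
    """Stand up a whole service (a list of VM specs) purely in software."""
    return [pool.allocate(**vm) for vm in spec]

pool = ResourcePool(cpus=64, memory_gb=256, storage_gb=4096, vlans=[100, 101, 102])
service = provision_service(pool, [
    {"cpus": 4, "memory_gb": 16, "storage_gb": 100},   # web tier
    {"cpus": 8, "memory_gb": 64, "storage_gb": 500},   # database tier
])
```

The point of the sketch is the shape of the operation, not the details: standing up a service is a pure software transaction against pooled, virtualized capacity.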
In differentiating software-defined networking from the software-defined datacenter, Simons explains that the software-defined datacenter is a critical component for delivering the full value of cloud computing, while software-defined networking is the means by which networking will be decoupled from the underlying hardware.
The presentation also includes a discussion of the benefits of RDMA, which bypasses the operating system kernel to give applications direct access to the network hardware, with performance — low latency, high bandwidth, and reduced CPU overhead — as the primary goal.
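The performance argument for RDMA comes down to eliminating staging copies. The sketch below is purely conceptual — real RDMA goes through a verbs API such as libibverbs and requires supporting hardware — but it contrasts a two-sided, kernel-mediated transfer (which copies data through an intermediate buffer) with an RDMA-style one-sided write that lands directly in a pre-registered remote memory region. All names and copy counts are illustrative.

```python
# Conceptual sketch only (not a real RDMA stack): contrasts a kernel-
# mediated send path, which stages data through an intermediate buffer,
# with an RDMA-style one-sided write into pre-registered remote memory.

class Node:
    def __init__(self, size):
        self.memory = bytearray(size)   # application memory
        self.registered = None          # region pinned for remote access

    def register_region(self, start, length):
        """Pin a memory region so a peer can target it directly."""
        self.registered = (start, length)

def socket_send(src_data, dst):
    """Two-sided path: app buffer -> 'kernel' buffer -> app buffer."""
    kernel_buf = bytes(src_data)                # copy 1: into the kernel
    dst.memory[:len(kernel_buf)] = kernel_buf   # copy 2: out of the kernel
    return 2                                    # copies on this sketch's path

def rdma_write(src_data, dst, offset=0):
    """One-sided path: write straight into the registered remote region."""
    start, length = dst.registered
    assert offset + len(src_data) <= length, "write exceeds registered region"
    dst.memory[start + offset:start + offset + len(src_data)] = src_data
    return 1                                    # single placement, no staging copy
```

The one-sided write also needs no involvement from the remote CPU, which is where much of the latency and overhead savings come from in practice.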
While it is still early-stage thinking, Simons suggests that there may be a way to combine the benefits of SDN and RDMA. Currently, datacenter workloads are only about 50 percent virtualized, and while that number is set to increase, there will always be a segment of workloads that requires the performance of bare metal. Additionally, the need for low-latency, high-bandwidth interconnects in the enterprise is a clear trend (to support scale-out DBMS, big data, HPC, and so forth). Simons asserts that future SDDC and SDN implementations must accommodate these realities, so the question then becomes how to do that.
One possible path forward, according to Simons, is to have the SDN layer reach down into the physical infrastructure and pull out more data, employing techniques like metrics collection and topology sensing, so that the SDN layer can make better-optimized placement decisions in support of application performance. VMware engineers are contemplating the possibility of using RoCE (RDMA over Converged Ethernet) as the basis of an SDN environment that also supports RDMA.
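A topology-aware placement decision of the kind Simons describes can be sketched simply. The heuristic below is illustrative, not VMware's implementation: given per-host capacity metrics and hop counts gathered from the physical topology, place a workload on the closest host (to the data it depends on) that still has room. Host names, hop counts, and capacities are made up.

```python
# Illustrative sketch (not VMware's implementation) of how an SDN layer
# could use physical-topology data to place a workload: pick the host with
# enough free capacity that sits fewest network hops from the data the
# workload depends on. Hosts, hop counts, and capacities are made up.

def place_workload(hosts, required_cpus, data_host):
    """Choose by (hops from data, then lightest load) among hosts with capacity."""
    candidates = [h for h in hosts if h["free_cpus"] >= required_cpus]
    if not candidates:
        raise RuntimeError("no host has sufficient capacity")
    return min(candidates,
               key=lambda h: (h["hops"][data_host], -h["free_cpus"]))

hosts = [
    {"name": "hostA", "free_cpus": 2,  "hops": {"storage1": 1}},
    {"name": "hostB", "free_cpus": 16, "hops": {"storage1": 1}},
    {"name": "hostC", "free_cpus": 32, "hops": {"storage1": 3}},
]
best = place_workload(hosts, required_cpus=8, data_host="storage1")
# hostA lacks capacity; hostB and hostC both qualify, but hostB is closer.
```

The interesting part is the inputs: hop counts and capacity metrics are exactly the kind of physical-infrastructure data the SDN layer would need to "reach down" and collect.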
Simons concludes that RDMA is clearly important for future enterprise datacenters, but he emphasizes that the work is at a very early stage. VMware welcomes community involvement to advance this goal.