October 20, 2011
As storage customers look for a way off the spinning disk merry-go-round, SSDs have become the hottest gadgets in the enterprise. But a team of computer scientists at Stanford University thinks it can do even better. The researchers have come up with a scalable, high-performance storage approach dubbed RAMCloud -- RAM because it stores all the data in DRAM, and cloud because it can aggregate the memory resources of a whole datacenter.
The cloud reference also alludes to its main application space in the internet universe of Web page slinging and online database transacting. But the scalability and performance aspect of RAMCloud also makes it a candidate for high performance computing, particularly those applications that swing to the data-intensive, rather than compute-intensive, side of the spectrum.
The RAMCloud project is led by Stanford professor John Ousterhout, inventor of the Tcl scripting language. No stranger to the world of performance computing, Ousterhout's research work has delved into, among other things, distributed operating systems and high-performance file systems. Outside of the academic sphere, he serves as the chairman of Electric Cloud Inc., a company he founded in 2002 to provide high-performance software build tools.
In a nutshell, RAMCloud is a software platform that aggregates the memory of a large number of commodity servers to host all the application data in a datacenter or cluster. Since DRAM is being used, RAMCloud is said to deliver 100-1000x lower latency than disk-based storage and 100-1000x greater throughput. The software uses a combination of replication and backup techniques to deal with the fact that DRAM drops all its bits when power is cut off.
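The aggregation idea can be illustrated with a toy sketch. This is not the actual RAMCloud API or its data model (the class and method names below are invented for illustration); it simply shows the core notion of hashing each key to the in-memory table of one server so that many machines' DRAM behaves like a single key-value store:

```python
# Illustrative sketch only -- not the real RAMCloud software. It shows
# the basic idea of pooling many servers' DRAM into one key-value space
# by hashing each key to the in-memory table of one owning server.
import hashlib

class MemoryServer:
    """One commodity server contributing its DRAM to the pool."""
    def __init__(self, name):
        self.name = name
        self.table = {}          # data lives entirely in memory

class RamCloudSketch:
    def __init__(self, servers):
        self.servers = servers

    def _owner(self, key):
        # Hash the key to pick the server that stores it.
        h = int(hashlib.md5(key.encode()).hexdigest(), 16)
        return self.servers[h % len(self.servers)]

    def write(self, key, value):
        self._owner(key).table[key] = value

    def read(self, key):
        return self._owner(key).table[key]

cluster = RamCloudSketch([MemoryServer(f"node{i}") for i in range(4)])
cluster.write("user:42", "alice")
print(cluster.read("user:42"))   # -> alice
```

A real system layered on this idea would also replicate each write to backup servers (typically on disk or flash) so that a power loss does not take the data with it, which is exactly the durability problem the RAMCloud design addresses.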
The original RAMCloud design was described in detail in a 2009 paper and is encapsulated in a recent article in the Communications of the ACM. The researchers are convinced that the current reliance on hard disk technology will not suffice for data-intensive applications, which are quickly spreading into every aspect of enterprise computing. As the researchers proclaim in the article, "if RAMCloud succeeds, it will probably displace magnetic disk as the primary storage technology in data centers."
The two most important attributes of RAMCloud are its ability to scale across thousands of servers and its extremely low latency. Regarding the latter, we are talking latencies on the order of 5-10 µs, which is 1,000 times faster than disk and about 5 times faster than flash. The researchers admit this level of latency is probably overkill for any current Web-based applications, but it should encourage new applications that would take advantage of such performance. (Of course, for some HPC applications, single-digit-microsecond latencies would be greatly appreciated today.)
Unfortunately, network latency is going to impinge on the aggregate latency of a RAMCloud setup. While the researchers recognized that low-latency networks such as InfiniBand, Myrinet, and high-performance Ethernet from vendors like Arista can achieve 10 µs latencies across a datacenter, most facilities today employ TCP/IP on top of Ethernet, which typically delivers round-trip times on the order of 300-500 µs. Optimizing these networks for latency will be key to maximizing RAMCloud performance.
As far as scalability is concerned, using today's commodity server and memory technology, the researchers think RAMClouds as large as 500 TB can be constructed. At current memory prices, RAMCloud storage would cost around $60/GB. Within 5 to 10 years, they predict it will be possible to build RAMClouds as large as 1 to 10 petabytes at a cost of under $5/GB.
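The capacity and price figures above are easy to sanity-check with back-of-envelope arithmetic. The helper below is just that arithmetic (it assumes 1 TB = 1,024 GB; the exact convention changes little):

```python
# Back-of-envelope check of the article's capacity/cost figures.
# Assumes 1 TB = 1,024 GB.
def cluster_cost(capacity_tb, dollars_per_gb):
    """Total DRAM cost in dollars for a cluster of the given capacity."""
    return capacity_tb * 1024 * dollars_per_gb

today = cluster_cost(500, 60)        # 500 TB at $60/GB today
future = cluster_cost(10_000, 5)     # 10 PB at under $5/GB in 5-10 years

print(f"today:  ${today / 1e6:.1f}M")    # -> today:  $30.7M
print(f"future: ${future / 1e6:.1f}M")   # -> future: $51.2M
```

In other words, a maximum-size RAMCloud is a tens-of-millions-of-dollars proposition either way; what the projected price drop buys is roughly 20 times the capacity for a similar outlay.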
Of course, DRAM-based storage is always likely to be more expensive than disk or solid state storage. At current pricing, a DRAM storage system is about 50-100 times more costly than a disk-based setup and 5-10 times more costly than a flash memory system. But for high-throughput I/O applications, such prices are easier to justify. The researchers argue that if your code's execution is bound by how fast you can access data in storage, DRAM can actually be 10 to 100 times less expensive than disk.
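The throughput-bound argument comes down to dollars per operation rather than dollars per gigabyte. The sketch below makes that concrete; the prices and operation rates are rough, invented 2011-era assumptions for illustration, not figures from the RAMCloud paper:

```python
# Illustrative cost-per-throughput comparison. The device prices and
# random-access rates below are rough, assumed 2011-era numbers chosen
# only to show the shape of the argument.
def cost_per_kops(device_cost, ops_per_sec):
    """Dollars per 1,000 random accesses per second of capability."""
    return device_cost / (ops_per_sec / 1000)

# One hard drive: cheap per byte, but only ~200 random accesses/sec.
disk = cost_per_kops(device_cost=200, ops_per_sec=200)

# One DRAM-loaded server: far pricier, but ~1M in-memory ops/sec.
dram = cost_per_kops(device_cost=10_000, ops_per_sec=1_000_000)

print(f"disk: ${disk:,.0f} per kop/s")   # -> disk: $1,000 per kop/s
print(f"dram: ${dram:,.0f} per kop/s")   # -> dram: $10 per kop/s
```

Under these assumed numbers, DRAM comes out roughly 100 times cheaper per unit of random-access throughput, which is the upper end of the 10-100x advantage the researchers claim for access-bound workloads.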
There are a number of issues still to be worked out with the technology, including the exact data model and API, how to minimize the latency of remote procedure calls, data durability and availability, cluster management, application multi-tenancy, and support for atomic updates. Nevertheless, the researchers consider all of these solvable.
With the ongoing buildup of scaled-out datacenters, along with the emergence of data-intensive applications, much of the groundwork for RAMCloud is already being laid. No timeline has been offered to turn the RAMCloud research project into a commercial offering, but there don't appear to be any technological showstoppers. And given Ousterhout's entrepreneurial experience with Electric Cloud, a startup may not be too far off.
Posted by Michael Feldman - October 20, 2011 @ 6:44 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.