September 29, 2011
Violin Memory's launch this week of its latest and greatest flash memory arrays for primary storage got me thinking about how far and how fast solid state storage has traveled over the last few years.
Gone are the days when enterprise-grade flash was only considered for caching hyperactive data, aka tier 0 storage, layered on top of largely disk-based storage systems. We're now seeing a much more generalized solid state storage solution, encouraging at least one writer to state the case more starkly with an article titled, Violin Memory: This Is The Impact Event Before The Extinction Of Hard Disks.
While Violin is among the better-known and more successful solid state storage vendors, it's certainly not the first to go after tier 1 disks in the datacenter. Both Texas Memory Systems (TMS) and Nimbus Data Systems have SSD boxes that target primary storage.
Those two employ enterprise multi-level cell (eMLC) flash technology to deliver products that are cost-competitive with 15K disk-based arrays. Compared to single-level cell (SLC) flash, eMLC is somewhat less performant and needs more attentive error correction, but it is much less expensive.
Violin's newest 6000 series flash arrays come in both SLC and standard MLC flavors, and wrap a lot of enterprise goodies into the systems, such as high availability, redundancy, and serviceability. Violin is not making pricing public on the new product line, so there is no way to compare its offerings to those of Nimbus and TMS.
Even before Violin's 6000 boxes were launched, the company was already bumping against (and in some cases, displacing) storage stalwarts like EMC and NetApp, two companies that sprinkle flash atop their disk-based storage. Vendors like Violin, TMS, Nimbus and Huawei Symantec think they can skip that flash-cache approach with their latest all solid state arrays.
These vendors think they've solved the up-front cost gap, at least with regard to Fibre Channel and SAS 15K disk systems (though not the lower-cost SATA drives). Although the per-GB price gap between flash and disk componentry is still fairly wide, even for eMLC, once you wrap a complete storage system around it, the price differential shrinks away. Both TMS and Nimbus, for example, are in the $12 to $13/GB range for their flash system products.
On the other hand, no one that I know of is arguing that disk storage is going away completely. For capacity storage, especially where the data isn't in constant read/write demand, disks will be the technology of choice for the foreseeable future. The "flash and trash" model, where all active data will be on flash and the rest will be relegated to low-cost SATA drives, is where a lot of people in the industry think we are headed.
For the high performance computing crowd, the story may be a little different. At the upper edge of HPC, capacities are just too darn big for flash to swallow whole. The just-announced 55-petabyte NetApp storage system for the upcoming Sequoia supercomputer at Lawrence Livermore National Laboratory certainly could not be accomplished with a solid state setup today. Even at the aforementioned $12/GB price point, such a system would cost well over $600 million.
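The $600 million figure falls out of simple arithmetic, using the flash system pricing cited above and decimal capacity units (a sketch, not a vendor quote):

```python
# Back-of-the-envelope cost of a 55 PB array at flash system pricing.
# $12/GB is the low end of the TMS/Nimbus range cited above;
# decimal units assumed (1 PB = 1,000,000 GB).
capacity_pb = 55
price_per_gb = 12  # USD

cost_usd = capacity_pb * 1_000_000 * price_per_gb
print(f"${cost_usd / 1e6:,.0f} million")  # → $660 million
```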
That said, smaller HPC customers could certainly make flash a bigger part of their lives, as some commercial and government customers are already doing. Nimbus has installed 100 TB of its flash storage at eBay, and Violin has two petabyte-sized deployments of its memory arrays, one at AOL and the other at a US government agency. Given the 10-fold or so cost advantage in power and floor space, even premium-priced flash could make economic sense for reasonably large systems, and especially so for the kinds of data-intensive workloads that are becoming more and more common in HPC.
The largest flash storage deployment in HPC looks like it will be the Gordon supercomputer at the San Diego Supercomputer Center (SDSC). That system, built by Appro, will be outfitted with 300 TB of the new Intel Solid-State Drive 710 Series, enough to deliver 35 million IOPS to data-hungry science applications. According to the press release, "SDSC has taken delivery of Gordon's 64 I/O nodes equipped with Intel's 710 Series, and they are already available to users of Dash, a smaller, prototype version of Gordon."
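To put those numbers in per-node terms, the announced totals can be divided across Gordon's 64 I/O nodes (assuming, as a simplification, that the flash is spread evenly):

```python
# Rough per-node figures for Gordon's flash subsystem, derived from
# the totals in the announcement: 300 TB and 35 million IOPS across
# 64 I/O nodes. Even distribution is an assumption on my part.
total_tb = 300
total_iops = 35_000_000
io_nodes = 64

print(f"{total_tb / io_nodes:.1f} TB per node")       # → 4.7 TB
print(f"{total_iops / io_nodes:,.0f} IOPS per node")  # → 546,875
```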
As announced at IDF, the new Intel SSD parts are based on the less expensive, higher capacity standard MLC technology, but use Intel's own High Endurance Technology (HET), which the company claims offers "the same high levels of performance as single-level cell (SLC) memory but at a more attractive price point" — a price point that, according to various sources, looks to be about $6.45/GB. Keep in mind these are storage drives, not the more full-featured flash SAN boxes mentioned above.
A lot of HPC installations are probably going to gravitate toward these standalone SSDs or even PCIe-connected flash devices, so that solid state storage can be integrated intimately into the server infrastructure and deliver the best performance boost for the buck. On the other hand, Nimbus has revealed it has a number of HPC customers for its flash storage boxes in oil and gas, financial services, life science, and education. There's no reason to think that other like-minded users won't start adopting the technology too as it proves itself.
Posted by Michael Feldman - September 29, 2011 @ 6:21 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.