November 09, 2011
On Wednesday, DataDirect Networks unveiled the SFA12K, the third generation of its Storage Fusion Architecture (SFA) platform. Like previous SFA offerings, this one is aimed at super-sized HPC machines, but it also targets the big data applications that are spreading across the Internet and infiltrating enterprise datacenters.
The SFA12K follows the recently announced SFA10K-X and the original Storage Fusion platform, the SFA10000, which DDN introduced in 2009. The two big focus areas for this architecture are vastly increased performance and support for embedded applications -- what DDN likes to call in-storage processing. The design puts a lot of internal and external bandwidth in the hardware, along with enough processing power in the controllers for I/O applications to run natively inside the storage appliance.
"The SFA12K is really the first embodiment of that capability," says Alex Bouzari, CEO and co-founder of DataDirect Networks.
The idea is to consolidate data-intensive software and hardware in one platform, which in the HPC realm allows customers to embed parallel file systems or other custom storage software directly inside the storage gear. For enterprise and internet companies, it will enable businesses to blend advanced analytics and multimedia content/distribution software into the platform, with the intent to process huge amounts of unstructured data in real time. In this realm, typical applications include internet search, financial risk analysis, inventory management, personalized advertising, digital security, and fraud detection.
Bouzari says performing storage processing right where the data resides maximizes I/O flow, lowers latency, and eliminates the need to run these applications on external servers. That means the I/O can operate at memory speeds rather than network speeds. This will be especially critical as datasets get larger and applications demand real-time response and use random access patterns, says Bouzari.
Performance-wise, the SFA12K is certainly a speed demon. A single storage appliance can deliver 40 GB/sec of bandwidth, four times as fast as the original 2009-era SFA10000 and two and a half times as fast as the newer SFA10K-X. With just 25 full racks strung together with InfiniBand or Fibre Channel, an SFA12K deployment could deliver an aggregate bandwidth of 1 TB/sec.
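The 1 TB/sec figure follows directly from the per-appliance number, assuming one 40 GB/sec appliance per rack (the article implies but does not explicitly state this one-to-one mapping). A minimal arithmetic check:

```python
# Back-of-the-envelope check of the aggregate bandwidth quoted above.
# Assumption (not stated outright in the article): each of the 25 racks
# contributes one full 40 GB/sec SFA12K-40 appliance.
per_appliance_gb_per_sec = 40
racks = 25

aggregate_gb_per_sec = per_appliance_gb_per_sec * racks
print(aggregate_gb_per_sec)  # 1000 GB/sec, i.e. 1 TB/sec
```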
Those numbers are only attainable with the high-end model, known as the SFA12K-40. That model is for block storage only, as is the SFA12K-20 version, which tops out at 20 GB/sec. DDN is also offering a third 20 GB/sec appliance, called the SFA12K-20E. This last one is designed to host embedded storage software (thus the E designation), such as DDN's own ExaScaler and GridScaler parallel file systems, other HPC file systems, or any third-party storage application as described above.
From a storage media perspective, customers can configure the SFA12K with a mix of SSD, SATA, and SAS drives (as they could for the previous generation SFAs). The only difference in this newest offering is that it supports the latest eMLC flash memory for SSDs and moves up to 4TB drives on the SATA option. Thanks to the high-capacity disk, just two SFA12K racks (88U) maxed out with 4TB drives can house a whopping 6.72 PB.
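The 6.72 PB figure is consistent with high-density SAS enclosures. As a hedged sketch (the enclosure layout below is an assumption, not something the article specifies): twenty 84-drive enclosures across the two 88U racks, each drive at 4 TB, yields exactly that capacity.

```python
# Rough check of the quoted 6.72 PB for two maxed-out SFA12K racks.
# Assumptions (not from the article): 84-drive 4U enclosures, 20 of them
# across the two 88U racks, with the remaining rack units for controllers.
enclosures = 20
drives_per_enclosure = 84
drive_capacity_tb = 4

total_tb = enclosures * drives_per_enclosure * drive_capacity_tb
print(total_tb / 1000)  # 6.72 PB (using decimal units, as drive vendors do)
```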
Underneath the covers is DDN's Storage Fusion Fabric, a redundant non-blocking internal network of 160 6Gbps SAS lanes (40 x 6Gbps SAS x 4 lanes), designed to extract maximum performance from the storage media. That's not only important for aggregating the I/O from all those hard disks, but also accommodates the greater IOPS potential of SSDs, which can operate at much higher speeds than spinning disks.
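The fabric's raw signaling capacity comfortably exceeds the 40 GB/sec the appliance exposes externally, which is what makes the non-blocking claim plausible. A quick calculation (usable throughput will be lower once 8b/10b encoding and protocol overhead are subtracted, a detail the article does not address):

```python
# Raw signaling capacity of the Storage Fusion Fabric described above.
lanes = 40 * 4        # 40 SAS connections x 4 lanes each = 160 lanes
gbps_per_lane = 6     # 6 Gbps SAS per lane

raw_gbps = lanes * gbps_per_lane
print(raw_gbps)       # 960 Gbps raw
print(raw_gbps / 8)   # 120 GB/sec raw; real-world usable bandwidth is lower
                      # after 8b/10b encoding and SAS protocol overhead, but
                      # still well above the 40 GB/sec external figure.
```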
"All of this just makes the product an incredible powerhouse for HPC applications," says Bouzari.
The CEO says they have already booked more than 100 petabytes of customer orders for the SFA12K, split evenly between HPC and non-HPC customers. Of the 50 petabytes headed for high performance computing installations, 15 PB are going to Germany for the recently announced 3-petaflop "SuperMUC" iDataPlex cluster at the Leibniz Supercomputing Centre (LRZ). The other 35 petabytes will end up at Argonne to serve as primary storage for the 10-petaflop Blue Gene/Q "Mira" supercomputer. The Mira installation will use the SFA12K-20E platform, in which the in-storage application is IBM's GPFS file system.
The remaining 50 petabytes are destined for other customers, who are much more reticent to talk about their storage setups. According to Bouzari, a couple of deployments are going to cloud providers, a handful are in classified environments, and the remainder are headed to media content providers.
SFA12K systems will ship in the second quarter of 2012, but DataDirect is obviously already taking orders. Pricing was not provided.