June 10, 2008
Today NetApp announced new mid-range technologies that it says are aimed at the scale-out storage needs of engineering and HPC environments. The company's new kit combines a tighter footprint -- saving on infrastructure costs -- with software and appliances that speed the process of getting data from disk to user.
Known as Network Appliance until just a few months ago, NetApp is a nearly $3-billion-a-year company focused on network-attached storage systems. The company's products serve the archiving and content delivery needs of medium- and large-sized enterprises, including Yahoo! and Deutsche Telekom.
With this announcement NetApp is introducing solid advances in the mid-range of its hardware lineup following recent refreshes at the ends of the performance spectrum. The FAS2000 series covers storage needs at the low end up to the 100 terabyte range, while the FAS6080 enables configurations just over 1 petabyte. The FAS3140 and FAS3170 in today's announcement can be configured with up to 420 terabytes and 840 terabytes respectively and round out the middle of the company's storage lineup. These systems are complemented by the V3140 and V3170 systems, a modification that allows NetApp's hardware to be integrated with storage solutions from many of the company's competitors.
The FAS3140 and 3170 are scale-out storage products that aim to provide faster throughput with multiple points of access for stored data. As such, the filesystem is not ideal for every workload on its own, but it is well suited to workloads with largely independent data. In technical computing, and in HPC in particular, the company will face stiff competition from the established market positions held by Panasas, SGI, and BlueArc.
NetApp's Storage Acceleration Appliance, also announced today, addresses another key use case: multiple readers on a single data set, as found in applications such as genome search, financial services, and image processing. The appliance automatically caches copies of the data set to maintain maximum bandwidth to multiple independent readers. The cache holding the replicated datasets can be solid state or disk, and the solution offers centralized administration (there is still only one master copy of the data) with the benefits of distributed access.
The last piece of hardware in today's announcement is the Performance Acceleration Module, an add-on card that improves performance for workloads dominated by random read access (such as file serving). Up to five modules snap into PCI Express slots in the company's existing storage servers and provide an "intelligent" read cache. NetApp's software offers analysis tools that can predict whether your workload would benefit from the module before you make the investment.
Also included in today's announcement is a Remote Support Agent that monitors the health of your installation and proactively opens tickets on your behalf with NetApp to head off problems before they become downtime or lost data.
According to Brendon Howe, vice president and general manager of the NAS and V-Series business units, today's announcement is strategic for NetApp: "We have focused a lot of effort on the enterprise side of our storage offering lately, and now we're moving to aggressively market the new technologies we've been developing for the technical side of the computing market." But with established vendors already holding strong beachheads in this market, and HP, Sun, IBM, and others taking aim at a larger slice of the pie, it remains to be seen whether NetApp can find its niche in the HPC market.