How to Deploy Storage to Better Compete in Today’s Economy

Sponsored Content by Xyratex

Being competitive in today’s world economy means companies have to accelerate the time it takes to go from concept to profitable products and services. There is no shortage of good ideas and inventions to solve the problems we face, yet the world market demands solutions ever faster.

Specifically, high-performance parallel compute technologies have become more affordable for commercial research and development operations, but data storage technology has not kept pace.

As an example, huge leaps in sequencing technology have reduced the time it takes to sequence DNA from a month to a day. This shortens the front-end time it takes to get data into the chemical analysis process, and could help doctors shorten the time it takes to make a clinical diagnosis of a patient’s illness from weeks to just a few days.

To achieve this vision, storage bottlenecks have to be removed from the compute clusters. Today’s general-purpose network storage, while easy to implement, cannot keep up with the throughput demanded by the compute side of the analysis; we can keep adding compute cheaply but storage remains the bottleneck. Some high-end HPC storage solutions can address the performance bottleneck but are either too costly or too difficult to implement and manage.
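The bottleneck described above can be put in rough numbers. The sketch below is a back-of-envelope model, not a measurement; all figures (per-node I/O demand, the storage head’s throughput ceiling) are illustrative assumptions:

```python
# Back-of-envelope sketch of the storage bottleneck described above.
# All figures are illustrative assumptions, not vendor measurements.

def effective_per_node_bandwidth(nodes, per_node_demand_gbs, storage_throughput_gbs):
    """Bandwidth each node actually sees when shared storage caps total throughput."""
    aggregate_demand = nodes * per_node_demand_gbs
    delivered = min(aggregate_demand, storage_throughput_gbs)
    return delivered / nodes

# Assume a general-purpose NAS head topping out at ~2 GB/s,
# with each compute node wanting 0.5 GB/s:
for nodes in (8, 64, 256):
    bw = effective_per_node_bandwidth(nodes, per_node_demand_gbs=0.5,
                                      storage_throughput_gbs=2.0)
    print(f"{nodes:4d} nodes -> {bw:.4f} GB/s per node")
```

Once aggregate demand passes the storage ceiling, adding compute nodes only dilutes the bandwidth each node sees, which is exactly the pattern the paragraph describes: compute can be added cheaply, but storage remains the bottleneck.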

To make breakthroughs in diagnosis and treatment, we have to break this logjam throughout the entire system. Until this happens, additional chemical analysis will remain impractical, and we’ll remain restricted in our ability to fully understand autism, Alzheimer’s and other medical puzzles.

Similar data bottlenecks are emerging as research and development departments across a myriad of industries (manufacturing simulation and modeling, energy exploration, weather and climate analysis, media and entertainment, and economics and financial analysis) seek more data granularity, accuracy and resolution from their applications. These departments have attempted to leverage traditional general-purpose storage platforms sold to the enterprise, but they quickly hit performance barriers that affect sustained productivity. They recognize the need for high-performance storage to break their application bottlenecks, but they can’t justify the acquisition cost or the deployment and operational management expense that comes with supercomputing complexity. Could commercial research and development operations invest in this more costly technology? Of course, but the return on investment would suffer and the resulting end products would cost too much for end consumers.

Removing the storage bottleneck in a cost-effective manner for commercial ventures can accelerate time to results for all players in a given market and ultimately will benefit consumers. It is time for an affordable, next-generation HPC storage platform that can meet these new productivity demands. Research and development departments should seek out new storage solutions that can deliver on the following requirements:

Costs of deployment and management must be minimized. Storage system deployment should not require new expertise and should be something the in-house IT team or the R&D department can do on its own. With the exception of provisioning and service operations, ongoing storage system management must be hands-off.

Flexibility to scale out storage in both throughput performance and capacity. This will accommodate new product development, application of new algorithms, increased data resolutions and multiple simultaneous analyses.

High performance efficiency. Unlike general-purpose network file systems, next-generation HPC storage solutions must offer robust parallel file system technology to match compute cluster performance. Choosing an open source file system such as Lustre® provides flexible scalability, broad application interoperability, and ongoing enhancements from a large development, test and support community.

Enterprise availability. Research and development operations that were once confined to workstations now depend on clustered servers and shared storage. Next-generation HPC storage must provide out-of-the-box high availability, because accelerating application results depends on compute parallelization and shared storage resources.
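The scale-out and parallel file system requirements above can be sketched together. In a parallel file system such as Lustre, a file can be striped across multiple object storage targets (OSTs), so single-file bandwidth grows with the stripe count until some other link saturates; in scale-out designs, each added storage building block contributes both capacity and throughput. All numbers below are illustrative assumptions, not product specifications:

```python
# Sketch of the scale-out and striping requirements above.
# All numbers are illustrative assumptions, not product specifications.

def scale_out(building_blocks, capacity_tb_each=100.0, throughput_gbs_each=5.0):
    """Scale-out storage: each added block contributes capacity AND throughput."""
    return (building_blocks * capacity_tb_each,
            building_blocks * throughput_gbs_each)

def striped_file_bandwidth(stripe_count, per_ost_gbs=0.5, client_link_gbs=10.0):
    """Single-file bandwidth when a file is striped across `stripe_count` OSTs,
    capped by the client's network link."""
    return min(stripe_count * per_ost_gbs, client_link_gbs)

cap, bw = scale_out(4)
print(f"4 building blocks -> {cap:.0f} TB, {bw:.0f} GB/s aggregate")
for c in (1, 8, 32):
    print(f"stripe count {c:2d} -> {striped_file_bandwidth(c):.1f} GB/s to one client")
```

The second function shows why striping matters: a file confined to one storage target is limited to that target’s bandwidth, while striping delivers parallel throughput until the client’s own link becomes the cap.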

Deploying compute clusters to accelerate your time to results will make your organization more competitive. Use these requirements as a guide to choosing the right HPC storage solution, keeping in mind that the system you choose should be specifically designed to meet your new technical computing workloads now and in the future.

Interested in finding out more about Xyratex HPC storage products and solutions? Come see us, and don’t forget to check back with us from time to time to see what’s new.
