August 29, 2011
A research team at IBM's Almaden research lab in California has developed a disk drive array that can store 120 petabytes of data. At that capacity, the system can hold about a trillion average-sized files, providing enough storage for the most demanding supercomputing simulations.
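A quick back-of-the-envelope check of that claim: 120 petabytes spread across a trillion files works out to an average file size of roughly 120 KB, which is indeed in the range of a typical "average-sized" file.

```python
# Sanity check on the article's numbers: 120 PB holding ~1 trillion files
# implies an average file size of about 120 KB.
capacity_bytes = 120 * 10**15   # 120 petabytes (decimal)
num_files = 10**12              # one trillion files
avg_file_size = capacity_bytes / num_files
print(f"average file size: {avg_file_size / 1e3:.0f} KB")  # -> 120 KB
```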
According to a recent article in MIT's Technology Review, the system was developed for an unnamed customer that requires petascale simulations, but the technology could also apply to conventional ultra-scale storage systems. In particular, the 120-petabyte array could be a run-of-the-mill storage setup for the cloud computing systems of the future -- at least according to Bruce Hillsberg, director of storage research at IBM and leader of the petabyte storage project.
The storage array is made up of 200,000 conventional hard disk drives housed in extra-dense, extra-wide storage drawers. As is the case for much of IBM's cutting-edge supercomputing technology, the components are water-cooled rather than air-cooled.
Besides the challenge of packing so many disks into a reasonably sized system, there was the trickier problem of disk failure. With hundreds of thousands of drives involved, failures have to be treated as a fundamental property of the system. IBM uses the standard approach of striping copies of data across different disks, but employs software that keeps storage performance high even as hardware fails. According to Hillsberg, the system is designed to be robust enough not to lose any data for a million years, "without making any compromises on performance."
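The article doesn't detail IBM's redundancy scheme, but the general idea behind striping with replica copies can be sketched in a few lines: each chunk of data lands on two different disks, so the loss of any single disk leaves every chunk recoverable from a survivor. This is a minimal illustration, not IBM's implementation.

```python
# Minimal sketch (not IBM's actual scheme): stripe data across disks,
# keeping each chunk on two different disks so one disk failure loses nothing.
def stripe(data: bytes, num_disks: int, chunk_size: int = 4):
    """Return a per-disk list of (index, chunk) pairs with 2x replication."""
    disks = [[] for _ in range(num_disks)]
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    for i, chunk in enumerate(chunks):
        primary = i % num_disks
        replica = (i + 1) % num_disks   # replica always lands on another disk
        disks[primary].append((i, chunk))
        disks[replica].append((i, chunk))
    return disks

def recover(disks, failed: int) -> bytes:
    """Reassemble the original data even with one disk failed."""
    chunks = {}
    for d, contents in enumerate(disks):
        if d == failed:
            continue
        for i, chunk in contents:
            chunks[i] = chunk
    return b"".join(chunks[i] for i in sorted(chunks))

disks = stripe(b"supercomputing storage", num_disks=5)
assert recover(disks, failed=2) == b"supercomputing storage"
```

Production systems layer on rebuild throttling and erasure codes rather than plain mirroring, which is where the "no performance compromise" software engineering the article alludes to comes in.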
The Technology Review piece points out that the system's capabilities leverage recent enhancements to IBM's General Parallel File System (GPFS), which the company demonstrated in July. In that case, the file system was able to scan 10 billion files in 43 minutes, which according to the IBMers was 37 times faster than 2007-era GPFS.
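To put that demo figure in perspective, 10 billion files in 43 minutes is a sustained scan rate of nearly 4 million files per second:

```python
# Scan rate implied by the July GPFS demo: 10 billion files in 43 minutes.
files = 10 * 10**9
seconds = 43 * 60
rate = files / seconds
print(f"{rate:,.0f} files/second")  # roughly 3.9 million files/second
```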
Presumably we will find out who IBM's unnamed customer is when the 120-petabyte system is deployed.