September 01, 2011
Panasas made news in 2008 when it was selected as the storage system to power the world’s first petascale system, and the company is now looking ahead to 2018, the projected year that exascale computing will arrive.
This week the company’s co-founder and CTO, Garth Gibson, discussed some of the specific storage challenges that must be addressed before exascale systems can sustain the roughly 70 TB/s of bandwidth they will require.
Gibson said that most estimates contend that the storage system for checkpointing will need to deliver 60-70 TB/s of bandwidth, which leaves his company facing a lofty target. Panasas and other storage vendors with their eye on the 2018 prize can count on only about 20% per-year improvement in disk bandwidth to get to that level, and that is, as Gibson simply put it, “a lot of disks.”
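The scale of the problem can be sketched with some rough arithmetic. The per-disk rate and the specific figures below are illustrative assumptions, not numbers from the article; they simply show why 20% annual improvement still means “a lot of disks.”

```python
# Hedged sketch: back-of-the-envelope arithmetic, not vendor data.
# Assumed: a 2011 disk streams ~150 MB/s, improving ~20% per year.
per_disk_2011_mbs = 150          # assumed 2011 sequential rate, MB/s
growth = 1.20 ** (2018 - 2011)   # seven years of 20% annual improvement
per_disk_2018_mbs = per_disk_2011_mbs * growth

target_tbs = 60                  # low end of the 60-70 TB/s estimate
target_mbs = target_tbs * 1e6    # TB/s -> MB/s
disks_needed = target_mbs / per_disk_2018_mbs

print(f"Projected 2018 per-disk rate: {per_disk_2018_mbs:.0f} MB/s")
print(f"Disks needed for {target_tbs} TB/s: {disks_needed:,.0f}")
```

Even with seven years of compounding, the projected fleet runs to six figures of spindles, which is the gap SSD staging is meant to close.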
He claims that the most cost-effective way to reach these goals is to stage checkpoints through SSD, not so much for its non-volatility as because it provides cheap memory capacity and “decent megabytes per second per dollar.”
Gibson detailed the SSD slant to Panasas’ exascale strategy:
"I think there’s a good likelihood that we’ll see a checkpoint restart package evolve that will take data from the main memories of the nodes, drop it into SSD then go back to the compute and dribble to the storage—that will probably give us a factor of ten as the ratio of the amount of time we spend in capturing checkpoints versus the amount of time before the next checkpoint, so we’ll probably get to storage in the neighborhood of six terabytes per second.
Now that’s a much more achievable number in terms of the cost of the storage system but the disks aren’t going away because they’re three to thirty times the capacity of main memory and you don’t want to try to provision all of that in any other solid state technology because of cost."
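The staging flow Gibson describes—dump memory to SSD quickly, resume compute, then trickle the data to disk—can be sketched as a two-tier checkpoint. This is a minimal toy model, not Panasas code; the tier names and timings are assumptions for illustration.

```python
# Hedged sketch (not Panasas code): two-tier checkpointing in which node
# memory is captured quickly to a fast SSD tier, compute resumes, and the
# SSD copy drains ("dribbles") to the slower disk tier in the background.
import threading
import time

def checkpoint(state: bytes, ssd: list, disk: list) -> threading.Thread:
    """Capture state to SSD synchronously, then drain to disk async."""
    ssd.append(state)                  # fast capture; compute blocks only here
    def drain():
        time.sleep(0.01)               # stand-in for slow disk-tier latency
        disk.append(ssd.pop())         # move the checkpoint down a tier
    t = threading.Thread(target=drain)
    t.start()
    return t                           # caller joins before the next dump

ssd_tier, disk_tier = [], []
t = checkpoint(b"application state", ssd_tier, disk_tier)
# ... compute proceeds here while the drain is in flight ...
t.join()
print(disk_tier)  # checkpoint now resident on the disk tier
```

The point of the design is that compute time is charged only for the fast capture, which is how the time-in-checkpoint ratio Gibson cites (roughly a factor of ten) is achieved.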
In such conversations the issue of high-density capability always tends to arise. Gibson addressed it by pointing to a new movement in the drive industry, which he says is finally recognizing the value of read heads that are narrower than write heads, even though building them has been possible for some time. A narrower read head lets drive makers shift the write head half a track over on each pass, partially overwriting the previous track; because the read head can still resolve the half of the track that was not overwritten, the result is a higher-density drive.
While he notes that this isn’t “perfect,” he does say that it mirrors the behavior of current technologies. In his words, “The big problem is that when you go to do a rewrite of a sector, you would trash the downstream sectors along the way, which is going to force a change in behavior—but that’s what we do with SSDs anyway.”
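The rewrite problem Gibson mentions can be illustrated with a toy model of a shingled band. The function below is a hypothetical sketch, not real drive firmware: because tracks overlap, rewriting one track invalidates everything downstream of it in the band, which must then be rewritten.

```python
# Hedged sketch: a toy model of the shingled-write behavior described in
# the article. Tracks in a band overlap, so rewriting track i clobbers
# every track after it in the band.
def shingled_rewrite(band: list, index: int, data: str) -> list:
    """Rewrite one track; downstream tracks in the band are invalidated."""
    updated = band[:index] + [data]
    updated += [None] * (len(band) - index - 1)  # trashed downstream tracks
    return updated

band = ["t0", "t1", "t2", "t3"]
print(shingled_rewrite(band, 1, "t1'"))  # → ['t0', "t1'", None, None]
```

This is the same write-amplification pattern SSD firmware already manages with erase blocks, which is Gibson’s point that the required change in behavior is one storage software has already had to absorb.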
Full story at Panasas