HPCwire

Since 1986 - Covering the Fastest Computers
in the World and the People Who Run Them


HPC Center Traces Storage Selection Experience


We often hear about national labs and universities settling on a particular vendor for server and storage solutions, but details are usually in short supply when it comes to how vendors stacked up against one another in a head-to-head bidding war.

HP announced last week that the University of Utah's Center for High Performance Computing (CHPC) moved into its Converged Infrastructure arena by selecting the HP X9320 IBRIX Network Storage System coupled with ProLiant SL160z G6 servers. This announcement, like many others of its ilk, was full of the expected hyperbole about scalability and cost, so we followed up with Brian Haymore, who heads the HPC storage team at CHPC, to find out how the center evaluated the competing vendors to enhance its Updraft cluster and what ultimately led to the storage decision.

The I/O issue isn't new for Haymore's team. He says it was a pain point they recognized early on, but it came into sharper focus when one or two users would run large cases on the clusters while everyone else tried to hit the scratch file system to look at results they'd run weeks or months ago. At that point, he said, the file system would be dead in the water--quite a problem when their users expected interactive responsiveness. They knew the applications were saturating everything the existing file system could offer, and that it wasn't a network saturation issue. He remained convinced that NFS simply wouldn't offer the scalability some applications needed and that proprietary solutions might offer the only remedy.

The chemical and fuels engineering group at CHPC was running an application authored by the Center for the Simulation of Accidental Fires and Explosions. The application is a composite of code contributed by scientists across the country, which fine-tunes its results but makes it difficult to modify from an I/O perspective. For Haymore's team, this meant the storage selection process required more than comparing price points: they needed a file system that would fit the application without changing the application itself.

With that in mind, the I/O difficulties were at the heart of the performance hitches. In the baseline test, run against their standard NFS server, each iteration took about 90 seconds, with about 45 percent of that time consumed by I/O. In other words, nearly half of every iteration on the baseline NFS setup was spent waiting on storage.
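The article doesn't say how CHPC instrumented the application, but a per-iteration I/O fraction like the one above can be measured with simple wall-clock timers around the compute and write phases. The sketch below is a minimal, hypothetical illustration: `compute_step` and `io_step` are stand-ins, not functions from the actual code.

```python
import time

def timed_iteration(compute_step, io_step):
    """Run one solver iteration and return (total_s, io_s).

    compute_step and io_step are hypothetical stand-ins for the
    application's numerics and its scratch/checkpoint writes.
    """
    t0 = time.perf_counter()
    compute_step()
    t1 = time.perf_counter()
    io_step()
    t2 = time.perf_counter()
    return t2 - t0, t2 - t1

def io_fraction(total_s, io_s):
    """Fraction of one iteration's wall time spent in I/O."""
    return io_s / total_s

# CHPC's reported baseline: ~90 s per iteration, ~45% of it I/O,
# i.e. roughly 40.5 s of storage time per iteration.
total_s, io_s = 90.0, 0.45 * 90.0
print(f"I/O per iteration: {io_s:.1f} s "
      f"({io_fraction(total_s, io_s):.0%} of wall time)")
```

Numbers like these also bound what a faster file system alone can buy: the compute phase is untouched, so the win comes entirely out of that ~40 seconds of I/O per iteration.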

Four vendors were vying for a chance to improve the I/O capabilities at CHPC: Panasas; HP, with its IBRIX solution; partners Dell and Terascala, with their Lustre offering; and IBM and DDN, partnering to provide GPFS. Haymore told us that while these were the four main vendors considered, others, including Isilon, were evaluated early on. Isilon's solution would only have been suitable if the application could be changed, which was not a possibility.

Haymore says that Panasas provided no performance increase with their application. His team wanted to dig deeper with the Panasas engineering team to find the choke point, but they were unable to gain any traction with that process. Eventually, he says, the option timed out and they moved on to the alternatives.

While the Dell and Terascala Lustre offering tripled performance, the excitement over the gain was dampened by a troubling series of mysterious I/O errors that affected 50 percent of the runs, even those using the exact same dataset. As Haymore described it, there seemed to be no rhyme or reason--the "file system just puked."

He says they found good support from the Dell Terascala team, but they were never able to resolve the error. After determining it was not a tuning problem, they concluded it was likely a bug already filed against the Lustre package, one that could not be fixed in a reasonable timeframe. Beyond these practical concerns about stability, Haymore noted, the very status of the Lustre file system was in question as it was being handed off to Oracle.

In the end, the choice came down to the DDN/IBM GPFS and HP IBRIX solutions, which performed almost identically. The tipping point, he says, wasn't pricing alone: the support model was a major factor. As Haymore pointed out, getting hardware from DDN and software from IBM required two hops for support, whereas HP offered a single, unified support model--an important consideration in his team's final decision.

Make no mistake, however: price did play a role. While he admits he expected the HP solution to be quite expensive from the outset, he says it fit their budget--the icing on the cake, as far as Haymore was concerned.

On that note, we asked if he went into the closed bidding process thinking that one solution would win out. He says that he would have counted on Lustre as being the champion if he had to make an early pre-benchmarking guess. This is because, as he put it, “Part of us doing our jobs is to keep our finger on the pulse of what the big boys are doing and for us, those big boys are the national labs. Lustre is heavily deployed there but it’s hard to tell if it’s because that’s what won the bid on a price point or if it was really the king of performance….We don’t know why it is always selected. We just figured we’d mimic national labs since it’s been their trend for the last several years.” While he notes that they do use other file systems, he says he’s still surprised at the errors they faced with Lustre.
