April 06, 2011
The University of Texas has set about building its third data center, which officials expect to open later this year. Like the other two, this one will have capacity for almost three petabytes of data—a perfect fit for the genomic research project it is slated to handle.
The university system is home to the MD Anderson Cancer Center, which, like other organizations tackling genomics-driven problems, generates and requires quick access to incredible amounts of data.
The center, which is pursuing some cutting-edge work in the genomics-cancer arena, will require significant number-crunching capabilities. IDG reported that the center will house the world's largest HPC cluster dedicated to cancer research.
Lynn Vogel, CIO of MD Anderson in Houston, states that this effort is being fueled by an incredibly large private cloud on the order of 8,000 processors and a half-dozen shared large-memory machines with hundreds of terabytes of data storage attached.
As IDG reported, while the research center’s “general server infrastructure uses virtualization, the typical foundational technology for cloud, this specialized research environment doesn’t. Rather, the organization uses an AMD-based HPC cluster to underpin the research cloud.” To tap into the resource, researchers use a SOA-based web portal aptly named ResearchStation.
Vogel noted that the 8,000-processor HPC cluster sitting at the heart of the private cloud is already operating at 80-90% of capacity, as did its 1,100-processor predecessor. On the storage front, it will make use of an HP IBRIX system that supports extreme scale-out, he explained.
Interestingly, the group behind the research did briefly consider some public cloud alternatives, but there were problems that extended beyond the usual suspects when dealing with patient data. According to Vogel, “we’ve found on performance, access and in the management of that data, going to a public cloud is more risky than we’re willing to entertain—and we’re just not comfortable with the cloud given the actionable capability of a patient should there be a breach.”
Vogel also noted that public cloud providers lack an important ingredient: an understanding of the complexity of the center's data and goals. He says, “As much as public cloud providers would like us all to believe, this is not just about dumping data into a big bucket and letting somebody else manage it.”
Full story at IDG