June 03, 2013
The important scientific research questions of 2013 more often than not require significant collaboration and data sharing among the universities working on them.
In that light, University College London (UCL) has partnered with DataDirect Networks (DDN) to develop what will eventually become a 100-PB collaborative, cloud-based research network.
HPC in the Cloud spoke with DDN’s Jeff Denworth and with Dr. Daniel Hanlon, Storage Architect for Research Data Services in UCL’s Information Services Division, about the network and what it means for the research efforts of UCL and of major research universities across the United Kingdom.
Scientific data is one of the more valuable commodities to researchers today, especially as major scientific initiatives are globalizing. Access to data means being able to participate in those initiatives and just overall contributing to the advancement of modern science.
However, much of that potentially useful data is discarded once the specific project for which it was generated is finished. It is difficult to ascertain how useful that data would have been to future research projects simply because it is unknown what those projects will be. As Hanlon put it, “Sometimes a dataset that went behind a publication isn’t being maintained.”
The simple solution, of course, is to keep all of the data. Indeed, with DDN’s implementation, that is exactly what UCL plans to do. “We’re planning on keeping around everything,” Hanlon said. Such a development would mark a shift in how researchers approach and handle data in the context of their individual projects.
“It allows the university to effect not necessarily a technology change but a cultural one,” Denworth said of what DDN hopes will be a hallmark of the project going forward. The cultural change to which Denworth refers is one where researchers need not worry about managing their data, a task which is often only tangentially related to their field of study.
Of course, keeping all the data represents a difficult computational challenge, especially in terms of storage and access. According to Dr. Hanlon, UCL does not yet have that capability. However, it remains the expectation, and if UCL and DDN can indeed scale the shared storage infrastructure out to 100 PB, that expectation would be fulfilled.
That said, through the first phase of the network’s implementation, UCL currently has access to up to 600 TB of object storage, according to DDN.
University College London counts some of the world’s top scientists and researchers among its staff and alumni, including 25 Nobel Prize winners, and currently employs a network of about 3,000 researchers. The goal is to give that depth of research experience and renown a relatively simple path to worthwhile data.
“UCL is sitting on a treasure trove of existing research data that isn’t available for future exploitation,” said Dr. Hanlon. “Those datasets that are in the same field are not currently available for future research, so we want to enable that.”
When asked if UCL anticipates using the DDN network to collaborate with other universities and contribute to the overall scientific trend of using cloud-based technologies to evaluate global problems, Dr. Hanlon responded with a resounding yes.
“All of the other universities will be doing similar things. We fully expect to be collaborating with other universities all the way through. It’s too early to say how these emerging interactions will develop but we’ve already been involved in some initial testing of studying the prospects,” Dr. Hanlon said before going on to mention Oxford University as one of the noted research institutions looking to share information with UCL.
DDN built the system by combining its WOS distributed object storage architecture with the GRIDScaler parallel file system, a combination UCL hopes will serve as the gateway to the massive stored datasets generated by previous projects. Further, according to DDN, that system is coupled with the integrated Rule-Oriented Data System, or iRODS, which is meant to manage and ‘clean’ those datasets. According to UCL, the system will save the university hundreds of thousands of pounds in power, staffing, and maintenance costs.
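To give a flavor of the rule-driven management layer iRODS provides, the toy Python sketch below models the general idea: every dataset is registered in a metadata catalog, and policy rules run over that catalog to decide actions (here, routing large datasets to bulk object storage and small ones to the parallel file system). The class names, rule, and size threshold are all hypothetical illustrations, not UCL's or iRODS's actual configuration; real iRODS policies are written in its own rule language.

```python
import hashlib
import time

def checksum(data: bytes) -> str:
    """Fingerprint a dataset so duplicates and corruption are detectable."""
    return hashlib.sha256(data).hexdigest()

class Catalog:
    """Toy metadata catalog: maps dataset name -> attributes.
    Loosely inspired by the iRODS metadata catalog; not its real API."""
    def __init__(self):
        self.entries = {}

    def register(self, name: str, data: bytes, project: str):
        # "Keep everything": every dataset is recorded, never discarded.
        self.entries[name] = {
            "checksum": checksum(data),
            "size": len(data),
            "project": project,
            "registered": time.time(),
        }

    def apply_rule(self, rule):
        """Run a policy rule over every entry; collect the decisions."""
        return {name: rule(meta) for name, meta in self.entries.items()}

# Hypothetical placement rule: bulk data goes to object storage (WOS-like),
# small working data stays on the parallel file system (GRIDScaler-like).
def placement_rule(meta):
    return "object-store" if meta["size"] > 1024 else "parallel-fs"

catalog = Catalog()
catalog.register("survey-a", b"x" * 4096, project="astro")
catalog.register("notes", b"small dataset", project="astro")
actions = catalog.apply_rule(placement_rule)
print(actions)  # {'survey-a': 'object-store', 'notes': 'parallel-fs'}
```

The point of the sketch is the separation of concerns the article describes: researchers only register data, while placement and curation decisions are made centrally by rules they never have to think about.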
“It’s about UCL providing the facility that allows researchers to store data without having to deal with the burden of managing data,” Dr. Hanlon said in conclusion. “All the details, the implementation, the infrastructure, many of the researchers don’t care about that. They are faced with the choice of having to put data somewhere and we’re providing something that is easy for them to use, low burden of entry, and a system that can manage their data in a better way than they could already do.”
In the end, when a researcher’s job involves more actual research and less data management, that researcher’s time is put to better use. Further, with access to a system potentially scalable to 100 PB, researchers can spin up interesting and ground-breaking studies based on data already generated and in the system.