November 09, 2011
SEATTLE, WA, Nov. 9 -- The RENCI/North Carolina booth (#2942) will be one of several booths on the SC11 show floor participating in a demonstration that connects exhibits in the Washington State Convention Center with large data sets in the U.S. and Europe, creating a distributed, high-speed international data grid that allows researchers to share, store and manage large data sets.
The “Big Data” grid will connect the exhibition booths of DataDirect Networks (DDN, 2304), Karlsruhe Institute of Technology (2534) and RENCI/North Carolina to the Texas Advanced Computing Center (TACC) in Austin, Texas; RENCI (Renaissance Computing Institute) in Chapel Hill, NC; and the Karlsruhe Institute of Technology in Karlsruhe, Germany. The data grid will be built with DDN’s Web Object Scaler (WOS), a hyperscale geo-distributed cloud storage system, and will use the Integrated Rule-Oriented Data System (iRODS), data management software that handles large, complex data sets by applying management policies that control the execution of all data access and manipulation operations.
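The policy-driven model described above can be illustrated with a small sketch. The code below is a toy Python illustration of the general pattern — policy rules attached to data operations and enforced before each operation runs — not the real iRODS rule language or API; all class and function names are invented for the example.

```python
# Toy illustration of policy-based data management in the style of iRODS:
# named policy rules are registered against operations ("put", "get", ...)
# and are enforced before the operation executes. All names are invented
# for this sketch; this is not the iRODS API.

class PolicyViolation(Exception):
    """Raised by a policy rule to block an operation."""


class DataGrid:
    def __init__(self):
        self.objects = {}    # logical path -> data
        self.policies = {}   # operation name -> list of rule functions

    def add_policy(self, operation, rule):
        self.policies.setdefault(operation, []).append(rule)

    def _enforce(self, operation, **ctx):
        # Run every rule registered for this operation; a rule blocks
        # the operation by raising PolicyViolation.
        for rule in self.policies.get(operation, []):
            rule(self, **ctx)

    def put(self, path, data):
        self._enforce("put", path=path, data=data)
        self.objects[path] = data

    def get(self, path):
        self._enforce("get", path=path)
        return self.objects[path]


# Example policy: the archive is write-once, so overwrites are rejected.
def no_overwrite(grid, path, **_):
    if path in grid.objects:
        raise PolicyViolation(f"{path} already exists in the archive")


grid = DataGrid()
grid.add_policy("put", no_overwrite)
grid.put("/archive/run1.dat", b"results")
try:
    grid.put("/archive/run1.dat", b"new results")
except PolicyViolation as exc:
    print("blocked:", exc)
```

In iRODS itself, such rules are expressed in a dedicated rule language and evaluated at policy enforcement points on the server; the sketch only conveys the shape of that idea.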
WOS is an extremely fast and easy-to-deploy object storage system that can scale to unprecedented levels while still being managed as a single entity. It addresses the needs of organizations with petabytes of data that must be archived and shared across multiple data centers.
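The core idea of a geo-distributed object store — writes return an opaque object ID and replicas are placed at multiple sites, so any site can serve a read — can be sketched in a few lines. This is a conceptual illustration only, not the WOS API; the zone names and replication strategy are assumptions made for the example.

```python
# Conceptual sketch of a geo-replicated object store in the spirit of WOS:
# put() returns an opaque object ID (no filesystem namespace) and the
# object is copied to every zone, so reads can be served locally.
# This is an illustration of the idea, not DDN's actual API.

import uuid


class GeoObjectStore:
    def __init__(self, zones):
        # One replica map per site, e.g. the three sites in the demo grid.
        self.zones = {z: {} for z in zones}

    def put(self, data):
        oid = uuid.uuid4().hex              # opaque object ID
        for replicas in self.zones.values():
            replicas[oid] = data            # synchronous replication, for simplicity
        return oid

    def get(self, oid, preferred_zone=None):
        # Read from the nearest zone when one is given, otherwise any replica.
        zone = preferred_zone or next(iter(self.zones))
        return self.zones[zone][oid]


store = GeoObjectStore(["austin", "chapel_hill", "karlsruhe"])
oid = store.put(b"telescope image")
assert store.get(oid, "karlsruhe") == b"telescope image"
```

A real system would replicate asynchronously and apply placement policies per object, but the single-entity view — one ID resolvable from any site — is the property the sketch demonstrates.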
iRODS is developed by the Data Intensive Cyber Environments (DICE) research groups at the University of North Carolina at Chapel Hill and the University of California at San Diego, with support from a U.S. National Science Foundation grant, and by the iRODS@RENCI research group. iRODS is the core data management software being deployed by the DataNet Federation Consortium, an NSF-funded project to prototype a national data management infrastructure spanning six science and engineering disciplines.
Combining the high-speed, scalable WOS with iRODS results in a grid in which data is easily shared, managed and stored in persistent archives, according to Reagan Moore, head of the DICE group at UNC-Chapel Hill and RENCI chief scientist for data grids.
“This international data grid demonstrates how researchers can participate in collaborative research while analyzing massive data collections,” Moore said. “An iRODS-based WOS infrastructure greatly minimizes the effort required to manage and distribute large scientific data sets and make them available for such research.”
Moore and DDN Chief Scientist Dave Fellinger will provide an overview of the WOS-iRODS data grid and of a second international data grid connecting King's College London to RENCI; KEK in Tokyo, Japan; Academia Sinica in Taiwan; and IN2P3 in France. Their talk takes place at 3:30 p.m. Tuesday, Nov. 15, in the RENCI booth. At 2:30 p.m. Tuesday, also in the RENCI booth, Moore will highlight iRODS and some of the research organizations using it, including NASA, the National Optical Astronomy Observatory, the Australian Research Collaboration Service, and the Texas Digital Libraries.
For more on RENCI at SC11, see http://www.renci.org/news/releases/renci-sc11.