August 11, 2006
In June 2006, the National Center for Atmospheric Research (NCAR) and the San Diego Supercomputer Center (SDSC) signed a Memorandum of Understanding that provides a mechanism for the two sites to back up each other's critical data. The agreement formalizes a partnership that will ensure the reliable, long-term preservation of scientific data vital to the missions of both institutions. Under the agreement, the two institutions will work together to provide access to storage, implement networking procedures, share software tools, create documentation, conduct random tests of data retrieval from both sites, and develop mechanisms to track and catalog data. They will also jointly sponsor and participate in a workshop on data integrity and security, mount a coordinated storage infrastructure, and exchange expertise.
NCAR's Mass Storage System contains five enormous data silos that house data used by geoscientists around the world for long-range and long-term research. This year, NCAR will make 100 terabytes of archival storage space available for replication of SDSC data, while SDSC will reserve an equivalent amount for NCAR data through its Storage Resource Broker (SRB). The storage available at each site under the agreement will grow by 50 terabytes annually, reaching 300 terabytes by 2010, and can be increased further by mutual agreement of both sites.
"The direct benefit to NCAR is that we'll be able to store crucial scientific datasets offsite for business continuity purposes -- something we've been planning for several years," says Tom Bettge, deputy director of Operations and Services for NCAR's Scientific Computing Division. "In the event of an unexpected disaster, critical data on NCAR's Mass Storage System would be preserved."
"San Diego is in the same position as NCAR," he adds. "The SDSC storage silos are at one site, and a disaster could cause the loss of a significant amount of data. This agreement provides a near no-cost, temporary solution for what's called geographical data replication -- the duplication of data at different sites."
Some of the first NCAR data to be stored at San Diego will be portions of NCAR's Research Data Archive, which is managed and curated by the Scientific Computing Division (SCD). The Research Data Archive contains precious historic records and data from satellites and field experiments, as well as output from global climate-simulation models, mesoscale weather models, and other Earth science models.
"This collaboration will be one of NCAR's first tangible uses of the TeraGrid," Bettge says. "Since NCAR and SDSC are now TeraGrid partners, we'll use the TeraGrid servers at both locations and the 10-gigabit network to transfer data back and forth." NCAR joined the TeraGrid, the nation's most comprehensive and advanced infrastructure for open scientific research, in June.
"We look forward to developing future similar programs with our other TeraGrid partners," he notes.
The Mass Storage System at NCAR is a storehouse of irreplaceable research data, digitally archived on 45,000 tape cartridges. The geographical replication of data across different sites ensures the long-term preservation of important digital assets.
The data replication effort is a pilot project of the Chronopolis Consortium, a partnership that includes NCAR, SDSC, the University of Maryland, and the University of California Library System. Chronopolis aims to organize, preserve, and make accessible the increasing number of digital holdings that represent vital intellectual assets -- many of which, like NCAR's Research Data Archive, are irreplaceable.
Bettge points out that the SDSC data replication of NCAR data will be available only by invitation and not to all NCAR users. He also emphasizes that the data storage will be in "dark" archives. (Dark archives are inaccessible to the public, preserve data that are available elsewhere, and entail minimal transactions to access the data.)
Source: NCAR; SDSC
Photo: Lynda Lester, NCAR/CISL