December 16, 2005
A new resident of the Math Sciences Building is supporting the sophisticated data-storage needs of researchers at Purdue University and helping to establish the institution among the nation's supercomputing elite.
"Robbie the Robot," named for the mechanical star of the 1950s sci-fi classic "Forbidden Planet," is a cutting-edge, automated storage and retrieval system that will enable vast amounts of data to be seamlessly archived and quickly located for researchers' use.
The $1 million robot system has the capacity to store up to 1 petabyte of data.
"To put this in context, one petabyte equals 1,000 terabytes," says Dwight McKay, director of systems engineering with Information Technology at Purdue (ITaP). "The U.S. Library of Congress contains approximately 10 terabytes of data, and our capacity is about 100 times that amount.
"That is substantial considering all the Internet content in existence is estimated to be 8 petabytes. This system brings Purdue up to the kind of data storage that other large, high-performance computing centers have."
This initiative is part of ITaP's ongoing efforts to upgrade high-performance computing capabilities.
"We've been actively expanding our resources to attract researchers to Purdue, and this robot system is one of the tools to help us become competitive at the national level of supercomputing," McKay says.
This is especially needed to support the new Cyber Center for supercomputing that was announced last summer as part of Discovery Park, the university's multidisciplinary research center.
"Researchers are coming to Purdue and bringing their very large data sets with them," says Mike Marsh, senior engineer in the Rosen Center for Advanced Computing. "With this system, we have the ability to capture that data in our library and have it automatically available to them, and that's a big advantage."
The robot also will enable more researchers to move toward mining data collected from multiple, sophisticated simulations. Some of the current research that will benefit includes climatology modeling and structural biology.
"These researchers have large computations and simulations, as well as large data sets," McKay says. "This is the tool they need to be effective in doing this kind of science."
McKay and his team monitor researchers' use of and needs for the system, which is in the testing phase and set to be operational in the spring. Through a user group, ITaP is able to gather feedback and adjust to the needs of researchers.
"We're a partnership with researchers," McKay says. "We are familiar with their labs so we see how we can help and what kinds of resources they need."
The tape robot device is part of a hierarchical storage-management system that consists of a server computer attached to the robotic tape mechanism, all within a 6-by-20-foot space. It uses high-speed Fibre Channel connections. The software on the server presents users' data as online and available whenever they request it.
Behind the scenes and within about 10 seconds, the robotic arm - which resembles those used in automobile manufacturing - moves along a hallway of shelves storing data tapes to select and then load the requested data into the computer for researchers to access. Data that isn't being requested can be moved onto tapes for storage until it's needed. The entire process is lightning fast and carefully controlled by sophisticated sensors, Marsh says.
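The hierarchical storage-management behavior just described, where files always appear online to the user but a tape recall happens behind the scenes on access, can be sketched roughly as follows (the class and method names are illustrative only, not the actual software running on Purdue's server):

```python
class HSMLibrary:
    """Toy sketch of hierarchical storage management: files look
    permanently online to users, while cold data actually lives on
    tape and is recalled by the robot only when someone reads it."""

    ROBOT_FETCH_SECONDS = 10  # approximate recall time cited in the article

    def __init__(self):
        self.disk_cache = {}  # data currently staged on fast disk
        self.tape_shelf = {}  # data migrated out to tape cartridges

    def migrate_to_tape(self, name, data):
        # Free disk space: move data that isn't being requested to tape.
        self.tape_shelf[name] = data
        self.disk_cache.pop(name, None)

    def read(self, name):
        # To the user the file is simply "available"; if it lives on
        # tape, the robotic arm fetches the cartridge first (~10 s in
        # the real system) and the data is staged back to disk.
        if name not in self.disk_cache:
            self.disk_cache[name] = self.tape_shelf.pop(name)
        return self.disk_cache[name]
```

A usage example: after `migrate_to_tape("climate.dat", data)` the file no longer occupies disk, yet `read("climate.dat")` still returns it transparently, which is the "big advantage" Marsh describes.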
"Robbie" represents the third generation of such robots on campus.
"We've had similar, but much smaller, systems in the past," McKay says. "In this generation, we've added a significant piece of hardware with very large storage capability for archiving data and supporting data-intensive science."
The previous tape-storage robot - in use at Purdue since 1996 - could hold up to 60 terabytes of data on about 960 tapes with 15 tape drives that could each transfer 11 megabytes of data per second.
"Robbie" represents a quantum leap ahead, McKay says.
The new robot - an ADIC model using LTO-2 tape drives - has 5,400 tape slots and 36 drives that can each transfer 40 megabytes of data per second.
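Multiplying out the figures above (aggregate throughput is simply drives times per-drive rate) makes the generational jump concrete. The 200 GB-per-cartridge figure below is the standard LTO-2 native capacity, an assumption not stated in the article:

```python
# Aggregate transfer rates from the figures quoted in the article.
old_drives, old_rate = 15, 11  # MB/s per drive, 1996-era robot
new_drives, new_rate = 36, 40  # MB/s per drive, ADIC/LTO-2 "Robbie"

old_total = old_drives * old_rate  # 165 MB/s aggregate
new_total = new_drives * new_rate  # 1,440 MB/s aggregate
print(round(new_total / old_total, 1))  # 8.7 -> nearly 9x the throughput

# Capacity check: 5,400 slots at 200 GB native per LTO-2 cartridge
# (an assumed standard figure) is ~1.08 PB, consistent with the
# quoted 1-petabyte capacity.
print(5_400 * 200 / 1_000_000)  # 1.08 (petabytes)
```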
This type of system can be found at the Central Intelligence Agency, the Social Security Administration, national research labs and very large insurance companies - but not at many universities.
"These systems are expensive, physically large and require high-level staff to operate," Marsh says. "This robot is putting Purdue ahead of the curve."
The system can easily be doubled in size to two petabytes with additional tape drives. It also can accommodate 11 different models of tape drives from four different manufacturers, and many of the parts are engineered to be "hot-swappable" and redundant, which makes the system more flexible and able to stay online during maintenance.
"We can replace failed power supplies or tape drives while the library continues to run, which keeps the system available to researchers at all times," Marsh says.
The system operates 24 hours a day, providing continuous backup and automatic data retrieval for researchers. The old robot system will remain online for about a year while its data is migrated to the new system.
Marsh says the new system also provides more efficiency in meeting government requirements for storage of sensitive data.
"It's critical that data be backed up in a separate location in case of natural disaster," he says. "With this system, it will be possible to locate another robot system elsewhere, like Indianapolis, and duplicate critical data in that remote location."
While "Robbie" is putting Purdue in the upper echelon of supercomputing, tape-storage needs will continue to become more sophisticated.
"One exabyte is 1,000 petabytes, and it's estimated that a 5-exabyte library would be able to store all the words ever uttered by every person who has ever lived since the origin of our species," Marsh says. "We should have libraries capable of storing an exabyte of data within the next several years."