October 04, 2013
AUSTIN, Tex., Oct. 4 -- The Texas Advanced Computing Center (TACC) at The University of Texas at Austin and its partners today announced that they will design, build and deploy Wrangler, a groundbreaking data analysis and management system for the national open science community. Supported by a grant from the National Science Foundation (NSF), which includes $6M for deployment plus additional funding for operations, the new system is scheduled for production in January 2015.
"Wrangler advances the vision in data-centered science to tackle today’s most complex, extremely data-intensive challenges and issues," said Bob Chadduck of the NSF Computer and Information Science and Engineering Directorate's Division of Advanced Cyberinfrastructure. "NSF is proud to support this community-accessible, data-focused resource to advance science, engineering and education."
The design and implementation of Wrangler responds to developments in technology and research practice that are collectively referred to as Big Data or the Data Deluge, encompassing a variety of needs related to research data storage, analysis, and access in the sciences.
“Wrangler is designed from the ground up for emerging and existing applications in data intensive science,” said Dan Stanzione, the lead principal investigator (PI) for the project and deputy director at TACC. “Wrangler will be one of the highest performance data analysis systems ever deployed, and will be the most replicated, secure storage for the national open science community.”
Wrangler features a novel primary storage tier based on NAND Flash memory, which will enable reading and writing data at up to one terabyte per second and executing up to 275 million IOPS (input/output operations per second). In addition, the 10 petabyte disk storage system of Wrangler will be fully replicated to Indiana University, a partner in the project, providing data access reliability and security. Wrangler will support the popular Hadoop software framework and a full ecosystem of analytics methods and technologies for Big Data.
“This combination of unmatched transaction performance, massive bandwidth and capacity, and full data replication far exceeds what is currently available to the open science community,” Stanzione said.
Dell Inc. and DSSD Inc. are the two strategic partners providing the technologies that make up the core of Wrangler.
In addition to hosting part of the system, Indiana University will participate in operations and training, and will help users optimize network performance between their home institutions and Wrangler. The Computation Institute (CI), a joint initiative of the University of Chicago and Argonne National Laboratory, will integrate its Globus Online service into the Wrangler project to make transferring data to and from Wrangler simple and fast.
“Wrangler will meet critical needs for managing, moving and analyzing massive and diverse data sets in disciplines including energy, weather and the global climate, basic biology, health, and medicine, and will also support citizen science from astronomy to marine biology to zoology,” said Craig Stewart, co-PI and executive director of the Pervasive Technology Institute at Indiana University. “We anticipate Wrangler will support more than 1,000 researchers and students every year, and will serve as a model for smaller-scale data systems on campuses that will improve US research capabilities.”
“Globus is committed to facilitating open science,” said Ian Foster, director of the Computation Institute and professor of Computer Science at the University of Chicago and Argonne National Laboratory. “The Wrangler project demonstrates what is possible by connecting institutions and people using services like Globus Online that let researchers focus on research.”
Wrangler’s performance and storage capabilities for Big Data applications will be enhanced through tight integration with TACC’s Stampede supercomputer and with NSF Extreme Science and Engineering Discovery Environment (XSEDE) resources around the country. Immediately upon deployment, Wrangler will be a part of the broader XSEDE ecosystem. Integration with Globus Online, the official data transfer mechanism for XSEDE, will provide for rapid, reliable and secure data exchange with other elements of the national cyberinfrastructure.
When it enters production, Wrangler will offer:
• Massive, replicated, secure high-performance data storage (10 PB at each site).
• A large-scale flash storage tier for analytics, with bandwidth of 1 TB/s and 275 million IOPS.
• Embedded processing with more than 3,000 processor cores for data analysis.
• Flexible support for a wide range of software stacks, including Hadoop and relational database systems.
• Integration with Globus Online services for rapid and reliable data transfer and sharing.
• A fully scalable design that can grow with its user base and with the demands of data applications.
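To illustrate the kind of workload the Hadoop support above targets, here is a minimal sketch of the classic MapReduce word-count pattern, written in the style of a Hadoop Streaming job. This is an illustrative example, not part of the Wrangler project; the function names and the idea of running it standalone on stdin are assumptions for demonstration.

```python
# Sketch of a MapReduce word count, the canonical Hadoop example.
# In Hadoop Streaming the mapper and reducer run as separate scripts
# connected by the framework's sort phase; here we chain them locally.
import sys
from itertools import groupby

def map_words(lines):
    """Mapper: emit a (word, 1) pair for every whitespace token."""
    for line in lines:
        for word in line.split():
            yield word.lower(), 1

def reduce_counts(pairs):
    """Reducer: sum counts per word. Hadoop delivers mapper output
    sorted by key; we sort explicitly to mimic that guarantee."""
    for word, group in groupby(sorted(pairs), key=lambda kv: kv[0]):
        yield word, sum(count for _, count in group)

if __name__ == "__main__":
    # Run standalone on stdin to mimic one map+reduce pass locally.
    for word, total in reduce_counts(map_words(sys.stdin)):
        print(f"{word}\t{total}")
```

On a system like Wrangler, the same mapper/reducer logic would be submitted to the Hadoop framework, which partitions the input across the flash tier and embedded processing cores rather than a single stdin stream.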
“Each new large-scale system, especially ones that bring new classes of capabilities, has significant impacts on society,” Stanzione said. “Wrangler is sure to enable groundbreaking research and many communities are ready and committed to adopt the system on day one.”