September 04, 2013
AUSTIN, Tex., Sept. 4 -- The Texas Advanced Computing Center (TACC) at The University of Texas at Austin today merged a set of existing benchmark and computer performance activities under the Advanced Computing Evaluation Lab (ACELab). The goal of the lab is to analyze and accelerate effective use of future computing technologies for computational science research. The lab will achieve this through the deployment of state-of-the-art hardware, measurement and documentation of the characteristics of important user applications, and the creation of new and updated benchmarks.
"TACC constantly evaluates technologies in order to design future systems that provide the maximum performance and best capabilities to researchers," said Bill Barth, TACC's director of High Performance Computing. "The ACELab provides an environment that leverages our deep expertise and technology partnerships to conduct rigorous evaluations of new processors, storage, and networking technologies."
During the past 12 years, TACC has built a reputation as a leader in advanced computing technologies, offering insights, collaboration and consulting support to companies that rely on high-performance computing (HPC) for their business competitiveness. The ACELab will interact with TACC's industrial partners through the Science & Technology Affiliates for Research (STAR) program, offering benchmarking services for a wide array of important applications.
The ACELab will evaluate, develop and package open-source benchmarks appropriate for the HPC environment. These will include microbenchmarks and application-level or "user experience" benchmarks. The lab will also conduct research to produce new techniques and tools for using current and planned computing technologies effectively, and provide guidance and benchmarks for the design of future computing technologies.
"Benchmarks represent the important characteristics of user applications, but it is difficult to measure these characteristics in a busy production environment," said John McCalpin, veteran of processor and system design teams at Silicon Graphics, IBM and Advanced Micro Devices, and TACC's co-director of the ACELab.
"The dedicated systems of the ACELab will allow detailed measurements of application characteristics using a combination of hardware and software performance monitors, along with the ability to measure the sensitivity of application performance to the system configuration. This understanding of user applications will allow us to quantitatively assess the relevance of standard microbenchmarks for each application area, and guide us in the development of new benchmarks."
At the same time, the ACELab will produce application benchmarks that are easier to understand and use for scientists across disciplines.
"The application-based benchmarks will represent the current and future needs of the advanced computing community," said Carlos Rosales, a research scientist at TACC and co-director of the ACELab. "Areas such as life sciences or high-volume data processing have not historically been part of application-level benchmarks in the HPC space, but with the increasing use of advanced computing in these areas, the community requires a more comprehensive set of benchmarks."
Results from the ACELab will support existing and future research projects, including the prediction of application performance and energy consumption based on microbenchmarks and application profiling data. Such predictions could, for example, provide guidance for users on the effect of network and file system contention on application throughput, or provide a preview of which computational algorithms are likely to be most effective on future systems.
The results from benchmark development and evaluations will be published as whitepapers on the ACELab website. Research findings, benchmark descriptions and other results will be published in appropriate academic journals and presented at conferences.
Documentation will be a critical part of the ACELab effort, according to Rosales. "It's important for the community to have access not only to measurements, but to a complete documentation set that explains the procedures and tests."
Source: Texas Advanced Computing Center