Climate Science Triggers Torrent of Big Data Challenges

By Dawn Levy

August 15, 2012

Supercomputers at the Oak Ridge National Laboratory (ORNL) computing complex produce some of the world’s largest scientific datasets. Many are from studies using high-resolution models to evaluate climate change consequences and mitigation strategies. The Department of Energy (DOE) Office of Science’s Jaguar (the pride of the Oak Ridge Leadership Computing Facility, or OLCF), the National Science Foundation (NSF)/University of Tennessee’s Kraken (NSF’s first petascale supercomputer), and the National Oceanic and Atmospheric Administration’s Gaea (dedicated solely to climate modeling) all run climate simulations at ORNL to meet the science missions of their respective agencies.

Such simulations reveal Earth’s climate past, for example as described in a 2012 Nature article that was the first to show the role carbon dioxide played in helping end the last ice age. They also hint at our climate’s future, as evidenced by the major computational support that ORNL and Lawrence Berkeley National Laboratory continue to provide to U.S. global modeling groups participating in the upcoming Fifth Assessment Report of the United Nations Intergovernmental Panel on Climate Change.

Climate observations of many kinds come from remote sensing platforms such as DOE’s Atmospheric Radiation Measurement facilities, which support global climate research with a program studying cloud formation processes and their influence on heat transfer. Other observation facilities add to the variety, among them DOE’s Carbon Dioxide Information Analysis Center at ORNL and the ORNL Distributed Active Archive Center, which archives data from the National Aeronautics and Space Administration’s Earth science missions.

Researchers at the Oak Ridge Climate Change Science Institute (ORCCSI) use coupled Earth system models and observations to explore connections among atmosphere, oceans, land, and ice and to better understand the Earth system. These simulations and climate observations produce a lot of data that must be transported, analyzed, visualized, and stored.

In this interview, Galen Shipman, data-systems architect for ORNL’s Computing and Computational Sciences Directorate and the person who oversees data management at the OLCF, discusses strategies for coping with the “3 Vs” — variety, velocity, and volume — of the big data that climate science generates.

HPCwire: Why do climate simulations generate so much data?    

Galen Shipman: The I/O workloads in many climate simulations are based on saving the state of the simulation, the Earth system, for post analysis. Essentially, they’re writing out time series information at predefined intervals—everything from temperature to pressure to carbon concentration, basically an entire set of concurrent variables that represent the state of the Earth system within a particular spatial region.

If you think of, say, the atmosphere, it can be gridded around the globe as well as vertically, and for each subgrid we’re saving information about the particular state of that spatial area of the simulation. In terms of data output, this generally means large numbers of processors concurrently writing out system state from a simulation platform such as Jaguar.
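
A minimal sketch of this output pattern, assuming NetCDF output with illustrative grid sizes, variable names, and output interval (none of these are taken from an actual ORNL model configuration):

```python
# Hypothetical sketch: periodically dumping gridded model state to NetCDF,
# the file format most climate models use for post-analysis output.
# Grid sizes, variable names, and the 6-hourly interval are illustrative only.
import numpy as np
from netCDF4 import Dataset

NLEV, NLAT, NLON = 30, 180, 360
FIELDS = ("temperature", "pressure", "co2_concentration")

def write_snapshot(step, state):
    """Write one time slice of every saved field to its own file."""
    with Dataset(f"state_{step:06d}.nc", "w") as ds:
        ds.createDimension("lev", NLEV)
        ds.createDimension("lat", NLAT)
        ds.createDimension("lon", NLON)
        for name in FIELDS:
            var = ds.createVariable(name, "f4", ("lev", "lat", "lon"))
            var[:] = state[name]

# Toy driver: a real model advances the physics between outputs and typically
# writes in parallel (e.g., through PnetCDF or parallel HDF5), not serially.
state = {name: np.zeros((NLEV, NLAT, NLON), dtype="f4") for name in FIELDS}
for step in range(0, 240, 6):        # pretend 6-hourly output over 10 days
    write_snapshot(step, state)
```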

Many climate simulations write output to a large number of individual files over the course of a run; taken in aggregate, the files from a single run can exceed several terabytes. Over the past few years, we have seen these dataset sizes increase dramatically.

Climate scientists, led by ORNL’s Jim Hack, who heads ORCCSI and directs the National Center for Computational Sciences, have made significant progress in increasing the spatial and temporal resolution of climate models, along with their physical and biogeochemical complexity, and the models generate far more data as a result. Efforts such as increasing the frequency of sampling in simulated time are aimed at better understanding aspects of climate such as the daily cycle of the Earth’s climate. Increased spatial resolution is of particular importance when you’re looking at localized impacts of climate change.

If we’re trying to understand the impact of climate change on extreme weather phenomena, we might be interested in monitoring low-pressure areas, which can be done at a fairly coarse spatial resolution. But if we want to identify a smaller-scale low-pressure anomaly like a hurricane, we need to go to even higher resolution, which means even more data are generated and more analysis of that data is required following the simulation.
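
As a hedged illustration of why resolution matters here, low-pressure centers in a gridded pressure field can be found with a simple local-minimum search; the window size and threshold below are invented for the example, and a hurricane-scale low only survives such a search if the grid spacing resolves it:

```python
# Illustrative only: detect low-pressure centers as local minima in a gridded
# sea-level-pressure field. Window and threshold values are arbitrary.
import numpy as np
from scipy.ndimage import minimum_filter

def find_lows(sea_level_pressure, window=5, threshold_pa=100500.0):
    """Return (lat_idx, lon_idx) grid points that are deep local pressure minima."""
    local_min = sea_level_pressure == minimum_filter(sea_level_pressure, size=window)
    deep_enough = sea_level_pressure < threshold_pa
    return np.argwhere(local_min & deep_enough)

# On a ~100 km grid a hurricane core spans only a few cells and may be smoothed
# away; on a ~25 km grid the same search can resolve and track its center.
slp = 101325.0 + np.random.randn(721, 1440) * 300.0   # synthetic field, in Pa
centers = find_lows(slp)
```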

In addition to higher-resolution climate simulations, a drive to better understand the uncertainty of a simulation result, what can naively be thought of as putting “error bars” around a simulation result, is causing a dramatic uptick in the volume and velocity of data generation. Climate scientist Peter Thornton is leading efforts at ORNL to better quantify uncertainty in climate models as part of the DOE Office of Biological and Environmental Research (BER)–funded Climate Science for a Sustainable Energy Future project.

In many of his team’s studies, a climate simulation may be run hundreds, or even thousands, of times, each with slightly different model configurations in an attempt to understand the sensitivity of the climate model to configuration changes. This large number of runs is required even when statistical methods are used to reduce the total parameter space explored. Once simulation results are created, the daunting challenge of effectively analyzing them must be addressed.
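
One common statistical method for trimming the parameter space of such an ensemble is Latin hypercube sampling. The sketch below uses scipy.stats.qmc; the parameter names, ranges, and ensemble size are invented for illustration and are not taken from any ORNL configuration:

```python
# Illustrative ensemble design via Latin hypercube sampling.
import numpy as np
from scipy.stats import qmc

params = {                       # hypothetical uncertain model parameters
    "cloud_entrainment": (0.5, 2.0),
    "snow_albedo":       (0.6, 0.9),
    "soil_decomp_rate":  (0.01, 0.1),
}
lower, upper = map(list, zip(*params.values()))

sampler = qmc.LatinHypercube(d=len(params), seed=42)
unit_samples = sampler.random(n=256)                 # 256 ensemble members
configs = qmc.scale(unit_samples, lower, upper)      # one row per model run

for i, cfg in enumerate(configs[:3]):                # peek at the first few
    print(f"run {i:03d}:", dict(zip(params, cfg)))
```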

HPCwire: What is daunting about analysis of climate data?

Shipman: The sheer volume and variety of data that must be analyzed and understood are the biggest challenges. Today it is not uncommon for climate scientists to analyze multiple terabytes of data spanning thousands of files across a number of different climate models and model configurations in order to generate a scientific result. Another challenge that climate scientists are now facing is the need to analyze an increasing variety of datasets — not simply simulation results, but also climate observations often collected from fixed and mobile monitoring platforms.
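
A deliberately naive sketch of that workflow, assuming the file layout and variable name from the earlier output example: one diagnostic computed by looping serially over every file. Run against thousands of files totaling terabytes, exactly this kind of serial pass is what makes traditional tools take weeks:

```python
# Illustrative serial analysis over many NetCDF files: a global, time-mean
# near-surface temperature. File pattern and variable name are assumptions.
import glob
from netCDF4 import Dataset

running_sum, count = 0.0, 0
for path in sorted(glob.glob("run_*/state_*.nc")):
    with Dataset(path) as ds:
        field = ds.variables["temperature"][:]   # (lev, lat, lon) per file
        running_sum += float(field[0].mean())    # lowest model level only
        count += 1

if count:
    print(f"time-mean surface temperature over {count} files:",
          running_sum / count)
```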

The fusion of climate simulation and observation data is being driven by the need to develop increasingly accurate climate models and to validate their accuracy against historical measurements of the Earth’s climate. Conducting this analysis is a tremendous challenge, often requiring weeks or even months using traditional analysis tools. Many of the traditional analysis tools used by climate scientists were designed and developed over two decades ago, when the volume and variety of data that scientists must now contend with simply did not exist.

To address this challenge, DOE BER began funding a number of projects to develop advanced tools and techniques for climate data analysis, such as the Ultrascale Visualization Climate Data Analysis Tools (UV-CDAT) project, a collaboration including Oak Ridge National Laboratory, Lawrence Livermore National Laboratory, the University of Utah, Los Alamos National Laboratory, New York University, and Kitware, a company that develops a variety of visualization and analytic software. Through this project we have developed a number of parallel analysis and visualization tools specifically to address these challenges.

Similarly, we’re looking at ways of integrating this visualization and analysis toolkit within the Earth System Grid Federation, or ESGF, a federated system for managing geographically distributed climate data, to which ORNL is a primary contributor. The tools developed as a result of this research and development are used to support the entire climate science community.

While we have made good progress in addressing many of the challenges in data analysis, the geographically distributed nature of climate data, with archives of data spanning the globe, presents other challenges to this community of researchers.

HPCwire: Does the infrastructure exist to support sharing and analysis of this geographically distributed data?

Shipman: Much has been done to provide the required infrastructure to support this geographically distributed data, particularly between major DOE supercomputing facilities like the one at Lawrence Livermore National Laboratory that stores and distributes climate datasets through the Program for Climate Model Diagnosis and Intercomparison. To support the growing demands of data movement and remote analysis and visualization between major facilities at Oak Ridge, Argonne, and Lawrence Berkeley National Laboratories, for example, in 2009 the DOE Office of Advanced Scientific Computing Research began the Advanced Networking Initiative with the goal of demonstrating and hardening the technologies required to deliver 100-gigabit connectivity between these facilities, which span the United States.

This project has now delivered the capabilities required to transition the high-speed Energy Sciences Network (ESnet) to 100-gigabit communication between these facilities. ESnet serves thousands of DOE scientists and users of DOE facilities and provides connectivity to more than 100 other networks. This base infrastructure will provide a tenfold increase in performance for data movement, remote analysis, and visualization.
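
To put that tenfold figure in perspective, a rough back-of-the-envelope calculation, assuming an illustrative 50 TB dataset and a network link that is the only bottleneck:

```python
# Wall-clock time to move a 50 TB dataset at 10 Gb/s vs. 100 Gb/s, ignoring
# protocol overhead and storage-system limits. Dataset size is illustrative.
DATASET_TB = 50
bits = DATASET_TB * 1e12 * 8                 # terabytes -> bits

for gbps in (10, 100):
    seconds = bits / (gbps * 1e9)
    print(f"{gbps:>3} Gb/s: {seconds / 3600:6.1f} hours")
# Roughly 11 hours at 10 Gb/s versus about 1.1 hours at 100 Gb/s.
```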

Moreover, DOE BER, along with other mission partners, is continuing to make investments in the software technologies required to maintain a distributed data archive with multiple petabytes of climate data stored worldwide through the Earth System Grid Federation project. The ESGF system provides climate scientists and other stakeholders with the tools and technologies to efficiently locate and gain access to climate data of interest from any ESGF portal regardless of where the data reside. While primarily used for sharing climate data today, recent work in integrating UV-CDAT and ESGF allows users to conduct analysis on data anywhere in the ESGF distributed system directly within UV-CDAT as if the data were locally accessible.

Further advances such as integrated remote analysis within the distributed archive are still required, however, as even with dramatic improvements in the underlying networking infrastructure, the cost of moving data is often prohibitive. It is often more efficient to simply move the analysis to where the data reside rather than moving the data to a local system and conducting the analysis.
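
One existing way to limit data movement, short of the integrated remote analysis Shipman describes, is server-side subsetting: ESGF data nodes commonly expose OPeNDAP endpoints, so a client can slice a remote variable and pull back only the bytes it needs. A hedged sketch follows; the URL and variable name are placeholders, and the client’s netCDF library must be built with DAP support:

```python
# Illustrative remote subsetting over OPeNDAP: only the requested slice
# crosses the network, not the multi-terabyte archive behind it.
from netCDF4 import Dataset

OPENDAP_URL = "https://example-esgf-node.org/thredds/dodsC/some_experiment/tas.nc"

with Dataset(OPENDAP_URL) as ds:          # no full-file download happens here
    tas = ds.variables["tas"]
    regional_slice = tas[0, 100:140, 200:260]   # one time step, one region
    print(regional_slice.shape, regional_slice.mean())
```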

HPCwire: What challenges loom for data analysis, especially data visualization?

Shipman: The major challenge for most visualization workloads today is data movement. Unfortunately, this challenge will become even more acute in the future. As has been discussed broadly in the HPC community, performance improvements in data movement will continue to significantly lag performance improvements in floating-point performance. That is to say, future HPC systems are likely to continue a trend of significant improvements in total floating-point performance, most notably measured via the TOP500 benchmark, while the ability to move data both within the machine and to storage will see much more modest increases. This disparity will necessitate advances in how data analysis and visualization workloads address data movement.

One promising approach is in situ analysis, in which visualization and analysis are embedded within the simulation, eliminating the need to move data from the compute platform to storage for subsequent post-processing. Unfortunately, in situ analysis is not a silver bullet, and post-processing of data from simulations is often required for exploratory visualization and analysis. We are tackling this data-movement problem through advances in analysis and visualization algorithms, parallel file systems such as Lustre, and development of advanced software technologies such as ADIOS [the Adaptable Input/Output System, open-source middleware for I/O].
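
A conceptual sketch of the in situ idea, using plain NumPy rather than the ADIOS API: compute a reduced diagnostic inside the simulation loop and persist only that, instead of dumping the full 3-D state every interval for post-processing. The loop, fields, and reduction are stand-ins, not any production code path:

```python
# Conceptual in situ reduction: keep a scalar summary per interval,
# drop the full field. Production codes would route this through
# middleware such as ADIOS rather than this toy loop.
import numpy as np

NLEV, NLAT, NLON = 30, 180, 360
state = np.zeros((NLEV, NLAT, NLON), dtype="f4")

global_means = []                      # kilobytes instead of terabytes
for step in range(1000):
    state += np.random.randn(*state.shape).astype("f4") * 0.01  # fake physics
    if step % 10 == 0:
        global_means.append(float(state.mean()))   # in situ reduction

np.save("global_mean_timeseries.npy", np.array(global_means))
```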

HPCwire: What’s the storage architecture evolving to in a parallel I/O environment?

Shipman: From a system-level architecture perspective, most parallel I/O environments have evolved to incorporate a shared parallel file system, similar to the Spider file system that serves all major compute platforms at the OLCF. I expect this trend will continue in most HPC environments as it provides improved usability, availability of all datasets on all platforms, and significantly reduced total cost of ownership over dedicated storage platforms.

At the component level, the industry is clearly trending toward the incorporation of solid-state storage technologies as increases in hard-disk-drive performance significantly lag increases in capacity and continued increases in computational performance. There is some debate as to what this storage technology will be, but in the near term, probably through 2017, NAND Flash will likely dominate.

HPCwire: What hybrid approaches to storage are possible?       

Shipman: Introducing a new layer in the storage hierarchy, something between memory and traditional rotating media, seems to be the consensus. Likely technologies include flash and, in the future, other NVRAM technologies. As improved manufacturing processes are realized for NVRAM technologies, costs will fall significantly. These storage technologies are also more tolerant of varied workloads than rotating media.

For analysis workloads, which are often read-dominant, NVRAM will likely be used as a higher-performance, large-capacity read cache, effectively expanding the application’s total memory space while providing performance characteristics similar to that of a remote memory operation. Unlike most storage systems today, however, future storage platforms may provide more explicit control of the storage hierarchy, allowing applications or middleware to explicitly manage data movement between levels of the hierarchy.
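
As a purely illustrative sketch of the explicitly managed read cache Shipman describes, the class below stands in for an NVRAM tier sitting between an analysis code and slower storage; the fast tier here is just an in-memory dict with LRU eviction, and the slow-tier read function named in the usage comment is hypothetical:

```python
# Conceptual read cache for read-dominant analysis workloads, standing in
# for an NVRAM tier managed explicitly by the application or middleware.
from collections import OrderedDict

class ReadCache:
    def __init__(self, fetch_from_slow_tier, capacity_blocks=1024):
        self._fetch = fetch_from_slow_tier      # e.g., a parallel file system read
        self._capacity = capacity_blocks
        self._cache = OrderedDict()             # LRU order: oldest entry first

    def read(self, block_id):
        if block_id in self._cache:             # hit: serve from the fast tier
            self._cache.move_to_end(block_id)
            return self._cache[block_id]
        data = self._fetch(block_id)            # miss: go to rotating media
        self._cache[block_id] = data
        if len(self._cache) > self._capacity:   # evict least recently used
            self._cache.popitem(last=False)
        return data

# Usage (hypothetical helper): cache = ReadCache(read_block_from_lustre); cache.read(42)
```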

HPCwire: How does big data for climate relate to other challenges for big data at ORNL and beyond?

Shipman: Many of the challenges we face in supporting climate science at ORNL are similar to the three main challenges of big data — the velocity, variety, and volume of data. The velocity at which high-resolution climate simulations are capable of generating data rivals that of most computational environments of which I am aware and necessitates a scalable, high-performance I/O system.

The variety of data generated from climate science ranges from simulation datasets from a variety of global, regional, and local modeling simulation packages to remote sensing information from both ground-based assets and Earth-observing satellites. These datasets come in a variety of data formats and span a variety of metadata standards. We’re seeing similar volumes, and in some cases larger growth, in other areas of simulation, including fusion science in support of ITER.

In a recent release from the Office of Science and Technology Policy, the President highlighted many of the big data challenges faced not only by DOE but also by the National Science Foundation and the Department of Defense. A number of the solutions to these big-data challenges highlighted in this report have been developed in part here at Oak Ridge National Laboratory, including the ADIOS system, the Earth System Grid Federation, the High Performance Storage System, and our work in streaming data capture and analysis through the ADARA [Accelerating Data Acquisition, Reduction, and Analysis] project. ADARA aims to develop a streaming data infrastructure that allows scientists to go from experiment to insight and result in record time at the world’s highest-energy neutron source, the Spallation Neutron Source at Oak Ridge National Laboratory.
