April 23, 2013
University of Oklahoma associate professor Amy McGovern is working to revolutionize tornado and storm prediction. McGovern's ambitious tornado modeling and simulation project seeks to explain why some storms generate tornadoes while others don't. The research is giving birth to new techniques for identifying the likely path of twisters through both space and time.
McGovern's work was recently detailed in an article by Scott Gibson, science writer for the National Institute for Computational Sciences at the University of Tennessee, Knoxville.
A simulated radar image of a storm produced by CM1. The hook-like appendage in the southwest corner is an indication of a developing vortex.
This latest round of updates adds to an earlier report that was released in May 2011. That NICS article was published after a severe weather event spawned nearly 200 tornadoes, devastating large swaths of the southern United States. The disaster left 315 dead and caused billions of dollars' worth of damage.
McGovern says she and the other researchers "hope that with a more accurate prediction and improved lead time on warnings, more people will heed the warnings, and thus loss of life and property will be reduced."
Part of the challenge has been devising a realistic, reliable model and coming up with usable input data. A numerical model known as CM1 has proven valuable, allowing researchers to focus more on the science and less on the workflow.
The team has also made strides with regard to storm simulations. Rather than model storms that actually happened, they base their models on the conditions that are required for a tornado to take form.
They start with a bubble of warm air, which sets off the storm-building process. Then they introduce equations and parameters that factor into the storm's development. Getting the friction right is especially challenging as even the grass on the ground can affect this variable.
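The warm-bubble initialization described above is a standard way to kick off an idealized storm simulation. The sketch below is illustrative only, not the team's CM1 configuration: the grid spacing, bubble dimensions, and perturbation amplitude are assumed values.

```python
import numpy as np

# Illustrative warm-bubble initialization: add a localized potential
# temperature perturbation to a horizontally uniform base state.
# All dimensions and amplitudes here are assumed, not taken from
# the actual CM1 runs described in the article.

nx, nz = 64, 32            # grid points (horizontal, vertical)
dx, dz = 500.0, 250.0      # grid spacing in meters
x = np.arange(nx) * dx
z = np.arange(nz) * dz
X, Z = np.meshgrid(x, z, indexing="ij")

theta_base = 300.0         # base-state potential temperature (K)
amp = 2.0                  # bubble amplitude (K)
xc, zc = x.mean(), 1500.0  # bubble center (m)
rx, rz = 4000.0, 1000.0    # bubble radii (m)

# Normalized distance from the bubble center; the perturbation
# tapers smoothly to zero at r = 1 and is zero outside the bubble.
r = np.sqrt(((X - xc) / rx) ** 2 + ((Z - zc) / rz) ** 2)
theta = theta_base + np.where(r < 1.0,
                              amp * np.cos(0.5 * np.pi * r) ** 2,
                              0.0)
```

The warm bubble is buoyant relative to its surroundings, so once the model starts integrating the governing equations, the bubble rises and can trigger the convection that builds the simulated storm.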
The research team is working with the National Weather Service to implement an early storm warning system, called Warn-on-Forecast. The goal of the project is to inform the public of impending storms with 30-60 minutes of lead time. Getting this level of accuracy requires a high-resolution model, and that takes a lot of computing power.
The researchers want to figure out "what actually generates the tornado, and the only way you can confirm that is to make the high-resolution simulations," McGovern explains. "Those are not feasible to do all across the U.S. right now on a Warn-on-Forecast basis. We are running on 112 by 112 kilometer domain; now scale that up to the U.S. and ask it to run in real time. We're not quite there yet."
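A back-of-the-envelope comparison makes the gap McGovern describes concrete. Using an assumed round figure of about 8 million square kilometers for the contiguous United States (not a number from the article), scaling a single 112-by-112-kilometer simulation to national coverage means hundreds of times the area:

```python
# Rough scale-up estimate: one research domain vs. the contiguous U.S.
# The U.S. area below is an assumed round figure for illustration.
domain_km2 = 112 * 112        # 12,544 km^2 research domain
conus_km2 = 8_000_000         # approx. contiguous U.S. area (assumed)
factor = conus_km2 / domain_km2
print(f"A CONUS-wide run covers roughly {factor:.0f}x the area "
      f"of one simulation")
```

And because the compute cost of a high-resolution model grows with the number of grid points, hundreds of times the area at the same resolution means hundreds of times the work, before even asking the system to keep up with real time.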
They're using the University of Tennessee's Kraken supercomputer to run the simulations and UT's Nautilus supercomputer to analyze them.
"The biggest thing that Nautilus does for us right now is process the data so that we can mine it, because we're trying to cut these terabytes of data down to something that's usable metadata," McGovern reports. "I am able to reduce one week of computations down to 30 minutes on Nautilus, and post-processing time is reduced from several weeks to several hours."
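In outline, the kind of reduction McGovern describes means collapsing raw gridded model output into compact per-timestep summaries that data-mining algorithms can work with. The sketch below is hypothetical: the field name, statistics, and random stand-in data are illustrative choices, not the team's actual pipeline.

```python
import numpy as np

# Hypothetical post-processing step: reduce a full 3-D simulation
# field to a handful of summary statistics suitable for mining.
# Field names and statistics are illustrative assumptions.

def summarize_field(field: np.ndarray, name: str) -> dict:
    """Collapse a 3-D array into compact per-timestep metadata."""
    return {
        "name": name,
        "max": float(field.max()),
        "min": float(field.min()),
        "mean": float(field.mean()),
        # Grid location of the peak value, often the feature of interest
        "max_index": tuple(int(i) for i in
                           np.unravel_index(field.argmax(), field.shape)),
    }

# Random data standing in for one timestep of real model output
rng = np.random.default_rng(0)
vorticity = rng.normal(0.0, 0.01, size=(50, 50, 40))
summary = summarize_field(vorticity, "vertical_vorticity")
```

Applied across every field and timestep, a reduction like this turns terabytes of raw output into a small table of metadata, which is the form the mining algorithms actually consume.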
The researchers expect to have a more precise storm prediction system in place by December.