September 01, 2006
In 2006, the Department of Energy's Office of Science made two separate allocations of 400,000 processor hours of supercomputing time at the National Energy Research Scientific Computing Center (NERSC) to the U.S. Army Corps of Engineers for studying ways to improve hurricane defenses along the Gulf Coast. The research is being done in cooperation with the Federal Emergency Management Agency (FEMA).
As hurricanes move from the ocean toward land, the force of the storm causes the seawater to rise as it surges inland. The Corps of Engineers used its DOE supercomputer allocations to create revised models for predicting the effects of 100-year storm surges -- surges with a one-percent chance of being equaled or exceeded in any given year -- along the Gulf Coast. In particular, simulations were generated for the critical five-parish area of Louisiana surrounding New Orleans and the Lower Mississippi River. These revised effects, known as "storm-surge elevations," are serving as the basis of design for the levee repairs and improvements the Corps of Engineers is carrying out in the wake of Hurricane Katrina's destruction in the New Orleans metro area.
Additionally, Gulf Coast Recovery Maps were generated for Southern Louisiana based on FEMA's revised analysis of the frequency of hurricanes and estimates of the resulting waves. While still preliminary, these maps are being used on an advisory basis by communities currently rebuilding from the 2005 storms. Final maps are expected to be completed later this year.
The Corps used its first NERSC allocation, announced in February, to conduct storm-surge simulations using the ADvanced CIRCulation (ADCIRC) coastal model and the Empirical Simulation Technique (EST) to study both how high the storm-surge waters would rise and how often such surges would occur.
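The core idea behind this kind of frequency analysis can be illustrated with a minimal sketch. This is not the Corps' ADCIRC/EST code; it simply shows how a return-period surge elevation can be read off an empirical distribution of simulated annual-maximum surge heights, using hypothetical data and a standard Weibull plotting position.

```python
# Illustrative sketch only -- not the Corps' ADCIRC/EST software.
# Given simulated annual-maximum surge elevations, find the elevation
# whose annual exceedance probability matches a target return period.

def return_period_elevation(annual_maxima, return_period_years):
    """Elevation (ft) with annual exceedance probability 1/return_period."""
    exceedance_prob = 1.0 / return_period_years     # e.g. 0.01 for a 100-yr surge
    ranked = sorted(annual_maxima)
    n = len(ranked)
    # Weibull plotting position: the i-th smallest value is exceeded
    # with probability (n - i + 1) / (n + 1); return the first elevation
    # whose exceedance probability drops to or below the target.
    for i, elev in enumerate(ranked, start=1):
        if (n - i + 1) / (n + 1) <= exceedance_prob:
            return elev
    return ranked[-1]  # target event is rarer than any simulated storm

# Hypothetical annual-maximum surge elevations (ft) from simulated storms
surges = [4.2, 5.1, 6.0, 6.8, 7.5, 8.3, 9.1, 10.4, 11.9, 14.2]
print(return_period_elevation(surges, 5))   # 5-year surge elevation
```

In practice the Corps' models produce these elevations from thousands of physics-based storm simulations rather than a short list of numbers, but the mapping from "how often" to "how high" follows the same principle.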
The Corps of Engineers plans to use the second NERSC allocation, announced in July, to finalize the revised stage frequency relationships by the end of 2006. Having access to the NERSC supercomputer will allow the Corps of Engineers to create more detailed models of the effects of Hurricane Rita and other storms along the Texas-Louisiana coasts. Increased detail will give the Corps of Engineers and FEMA more information about the local effects of such storms. For example, storm surge elevations are greatly influenced by local features such as roads and elevated railroads. Representing these details in the model greatly improves the degree to which computed elevations match observed storm surge high-water marks and allows the Corps to make better recommendations to protect against such surges.
At NERSC, the Corps of Engineers team is running their simulations on an 888-processor IBM cluster called "Bassi." The cluster is powered by IBM's newest Power5 processors and is specially tuned for scientific computation. The Corps' simulations typically use 128 to 256 processors and run for two-and-a-half to four-and-a-half hours per simulation batch.
The Corps of Engineers team is also running hurricane simulations on the DoD Major Shared Resource computers at the Engineering Research and Development Center (ERDC). Because of the tremendous computational requirements and urgent timelines of these hurricane-protection projects, the Corps can deliver high-quality engineering solutions only by drawing on both DOE and DoD resources.
As a result of the runs, the Corps determined that the applications produced incorrect results at topographic boundaries in some instances, and the codes were modified to improve their accuracy. For example, the runs at NERSC have improved the Corps' ability to model the effects of vegetation and land use on storm surges that propagate far inland, as Hurricane Rita's surge did on Sept. 24, 2005.