February 03, 2011
"Everybody talks about the weather, but nobody does anything about it." That quote is over 100 years old, but if you swap in "climate change" for "the weather" you have a pretty good update for the 21st century. And if you've been following the news lately or have just stepped outside, you may have noticed that the climate is getting a little, shall we say, unpredictable.
Which of course brings me to high performance computing. Putting an HPC spin on the original quote: it seems like a lot of supercomputing cycles are being devoted to modeling climate change, but not nearly as many to modeling the solutions.
Fortunately though, some are. And there are plenty of solutions out there in need of big-time computer modeling. Among the most talked about solutions are fusion energy, solar power, biofuels, advanced battery technology, fuel cells, and carbon sequestration.
Of these, carbon sequestration -- aka carbon capture and storage -- doesn't seem to get as much press as the others. And that's too bad. Any rational plan to deal with climate change has to include removing the excess carbon dioxide we've already pumped, and are continuing to pump, into the atmosphere. Carbon sequestration has the advantage of offering a workable solution even if countries fail to cap their carbon emissions. And so far, that seems to be the most likely scenario.
There are lots of ways to capture and store carbon: stimulating uptake by plants via photosynthesis, creating a soil conditioner known as biochar, creating inert carbonates by reacting the CO2 with the appropriate minerals, and storing CO2 in the ground. They each have their own advantages and disadvantages, but they all share a common unknown: How will this man-made carbon cycling affect the environment? After all, the idea is not to substitute one natural disaster for another.
One of the promising (read least expensive) methods of carbon sequestration is to simply pump the CO2 from fossil fuel burning power plants into geologically stable formations like basalt, depleted oil/gas reservoirs, and saline aquifers. Saline aquifers are particularly attractive, since they are present over wide geographic areas and have very large capacities for CO2 storage.
To study the saline solution (so to speak), the hard-charging scientists at Berkeley Lab's Computational Sciences and Engineering and the National Energy Research Scientific Computing Center (NERSC) have developed an industrial-strength simulation code to model CO2 injection into these underground saline reservoirs. A recent article published by Berkeley Lab describes the work in some detail.
Injecting CO2 into brine seems simple enough, but the behavior below the surface becomes very complex. Dissolving the gas changes the liquid's behavior, in this case setting up convection currents that speed up the CO2's diffusion. Of course, you want to make sure that the CO2 stays put over thousands of years, that is, it doesn't vent back to the air, leak into aquifers used for drinking water, or create other dangerous side effects.
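To illustrate the diffusion half of that picture, here's a minimal explicit finite-difference sketch of a 1D concentration field spreading out over time. The parameters and grid are toy values of my own choosing, not anything from the Berkeley code, which additionally has to couple diffusion to the density-driven convection that makes the real problem hard:

```python
import numpy as np

def diffuse(c, D, dx, dt, steps):
    """Explicit finite-difference diffusion of a 1D concentration profile.

    Stable when dt <= dx**2 / (2 * D). Periodic boundaries (via np.roll)
    keep the toy example simple and conserve total mass exactly.
    """
    c = c.copy()
    for _ in range(steps):
        lap = (np.roll(c, 1) - 2 * c + np.roll(c, -1)) / dx**2
        c += dt * D * lap
    return c

# Toy setup: a slug of dissolved CO2 in the middle of a 1D column.
n, dx, D = 100, 1.0, 1.0
c0 = np.zeros(n)
c0[45:55] = 1.0
dt = 0.4 * dx**2 / (2 * D)  # safely inside the stability limit
c1 = diffuse(c0, D, dx, dt, steps=500)
# Total mass is conserved while the peak flattens and spreads.
print(c0.sum(), round(c1.sum(), 6), round(c1.max(), 3))
```

Even this cartoon shows why long time horizons are demanding: diffusion alone spreads the plume slowly, and resolving the convection currents that accelerate it is what eats the compute cycles.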
The new software developed by the Berkeley team provides a much finer-grained model than a traditional geological simulation code, and is able to generate a 3D model of the CO2 in solution over time. From the Berkeley Lab writeup:
The code combines a computing technique called adaptive mesh refinement (AMR) with high-performance parallel computing to create high-resolution simulations. The team's simulations were performed at NERSC using 2 million processor-hours and running on up to 2,048 cores simultaneously on a Cray XT4 system named Franklin.
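The idea behind AMR is to spend grid resolution only where the solution is changing fast, such as the edge of a spreading CO2 plume. A minimal sketch of one common refinement criterion, flagging cells where the gradient is steep (the threshold and field here are hypothetical toy values, and production AMR frameworks are far more involved):

```python
import numpy as np

def flag_for_refinement(field, threshold):
    """Flag cells whose local gradient magnitude exceeds a threshold.

    In AMR, flagged cells get covered by a finer grid patch while the
    rest of the domain stays coarse, concentrating compute effort at
    sharp fronts instead of refining the whole mesh uniformly.
    """
    gx, gy = np.gradient(field)
    grad_mag = np.sqrt(gx**2 + gy**2)
    return grad_mag > threshold

# Toy example: a sharp circular "plume" on a coarse 2D grid.
n = 64
x, y = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))
plume = np.where(x**2 + y**2 < 0.25, 1.0, 0.0)

flags = flag_for_refinement(plume, threshold=0.4)
# Only the thin ring of cells at the plume boundary gets flagged.
print(flags.sum(), "of", n * n, "cells flagged")
```

The payoff is exactly what the article describes: high resolution where it matters at a fraction of the cost of a uniformly fine grid.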
Even with that core count and computer time, the initial simulations were fairly modest in size, measuring only at the scale of meters. The eventual goal is to be able to use the physical characteristics of a particular aquifer to predict how much CO2 it can accommodate.
The article says the code could also be adapted to help geologists more accurately track and predict the migration of hazardous wastes underground and, get this, "to recover more oil from existing wells." Sigh.
Posted by Michael Feldman - February 03, 2011 @ 7:33 PM, Pacific Standard Time
Michael Feldman is the editor of HPCwire.