August 25, 2006
As an industry, hydrocarbon exploration and production operates in an increasingly challenging environment. The new challenges include more than high risk and high capital commitments, or declining fields and complex operations. Unconventional plays have become conventional, with fractured and/or tight porosity systems becoming commonplace. New environmental challenges require sophisticated and constrained operations. In this evolving regulatory, economic, and political environment, it is not enough to be creative, aggressive and technically adroit. One also wants to be smart.
The good news is that smart is a lot cheaper than it used to be. Specifically, high performance computers (HPCs) are a lot less expensive than they used to be, and a lot more powerful. The fastest computer in the world, Blue Gene/L, runs at nearly 300 teraflops, or 300 trillion floating point operations per second. The real revolution is that regular computer servers have become HPCs through parallel architectures, increasing industrial and market penetration.
A small cluster of Linux boxes -- 32 regular servers -- now outperforms the world's fastest computers from only a few years ago at 1/100th of the cost. These clusters also are compact and easily serviced. A 128-node cluster would take only three or four racks, easily fitting in a kitchen. These machines, the "big iron" of the world, have become readily available and powerful tools to tackle tough exploration, drilling and production problems.
Figure 1 shows the Thunder Linux cluster at Lawrence Livermore National Laboratory (LLNL). It is an 18-teraflop machine with more than 1,000 nodes and 4,000 central processing units, and ranks as the 11th fastest computer in the world. However, Thunder is about to be surpassed by an even faster and more powerful cluster system now being built for Lawrence Livermore. In late June, the Peloton supercomputing project was awarded to Appro for three 1U Quad XtremeServer clusters with a total of 16,128 cores based on next-generation AMD Opteron processors with DDR2 memory. To provide a production quality computing capacity, Peloton features a novel architecture that groups identical scalable units of 1,152 cores to form three shared-memory multiprocessor clusters.
The Peloton clusters will be used in an unclassified environment as a multi-programmatic and institutional (M&IC) resource and in the classified environment to solve complex computational problems related to the National Nuclear Security Administration's (NNSA) Stockpile Stewardship Program, which ensures the safety, security and reliability of the nation's nuclear deterrent. Multiple organizations and programs within LLNL will share these supercomputing clusters for large, medium and small scale scientific simulations.
With scalable computing power at affordable pricing points, it is not surprising that massively parallel computers are becoming more common in oil and gas companies and their allied service companies. They mostly operate in seismic processing, although they also tackle problems from financial modeling to molecular chemistry. And more and more companies are looking to HPCs to solve tough problems in reservoir characterization and management. The reasons are simple: improved recovery, reserves stewardship and cost reduction.
Like any tool, however, they must be pointed at the right problem and operated well. Despite the high power and low cost of high performance computers, any commercial oil and gas company must understand why it should buy a machine, what it could do with one, and how it would fit sensibly into its business model. It must also know how to deploy the techs and scientists hired to work these machines. This is where the challenges to conventional operations and approaches can inform smart business how to wield big iron to solve big problems and turn big profits.
Two areas come to the fore. First, how can one handle uncertainty in the subsurface and in geophysical interpretation? Second, how can one simulate reservoirs in the increasingly difficult operational environment to obtain extremely high recoveries?
The Realm Of Uncertainty
Workers in the subsurface know only one thing with certainty: They are wrong. No one knows what the rocks and fluids truly look like between wells. Common unknowns are saturations, lithologic distributions, fracture character and geometry, and large-scale connectivity. Even the very best geophysics and geological concepts still cannot shake the irreducible uncertainty in a single geological or reservoir model.
So why should a company limit itself to one? Or 10,000?
Stochastic integration and inversion tackle this uncertainty head-on. The approach generates thousands of forward models of some specific property, say, porosity, oil saturation or CO2 distribution. The inputs are trusted data such as well data, seismic constraints or production data; the outputs are a handful of configurations that match all the data, with a strict probabilistic ranking. This provides an operator with not one "best" model, but with several alternatives and their likelihoods. These models may vary in rock distribution, velocity or fluid properties in ways that are readily tested.
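The mechanics can be illustrated with a toy sketch: generate many candidate models, run each through a forward model, and keep the family that best fits the observations. Everything here -- the five-block porosity model, the smoothing operator standing in for the physics, and all numbers -- is invented for illustration; real inversions use field physics and far larger sample counts.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "truth": porosity in five blocks between two wells (hypothetical values).
true_porosity = np.array([0.10, 0.22, 0.18, 0.25, 0.12])

# Stand-in physics: a smoothing operator, the way a seismic attribute
# might blur block-scale porosity.  Rows: observations, columns: blocks.
G = np.array([[0.5, 0.5, 0.0, 0.0, 0.0],
              [0.0, 0.5, 0.5, 0.0, 0.0],
              [0.0, 0.0, 0.5, 0.5, 0.0],
              [0.0, 0.0, 0.0, 0.5, 0.5]])

# "Trusted data": the forward response of the truth plus measurement noise.
observed = G @ true_porosity + rng.normal(0.0, 0.005, 4)

# Generate many forward models and rank them by misfit to the data.
candidates = rng.uniform(0.05, 0.30, size=(100_000, 5))
misfit = np.linalg.norm(candidates @ G.T - observed, axis=1)
order = np.argsort(misfit)
accepted = candidates[order[:500]]  # keep the 500 best-fitting models

# The spread of accepted models is the residual uncertainty: several
# alternative configurations, not one "best" answer.
print(f"kept {len(accepted)} models that fit the data")
print("posterior mean porosity:", accepted.mean(axis=0).round(3))
print("posterior std per block:", accepted.std(axis=0).round(3))
```

Blocks the data constrain well show a narrow posterior spread; poorly constrained blocks stay wide, which is exactly the information an operator needs when weighing alternative interpretations.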
This gets to the heart of many industrial problems: What information is needed to make large business decisions? Stochastic inversion can be applied to early seismic processing (exploration), post-discovery development planning, early production verification, history matching, and tertiary recovery planning -- in short, every phase of the field life cycle.
In a tertiary recovery project in Wyoming, CO2 was injected and monitored using electrical resistance tomography (ERT) between abandoned wells. The initial, deterministic inversion looked noisy and unimpressive, and data collection ceased. Later, those same data served as the basis for a stochastic inversion. The likeliest solution still showed noise, but there were four other families of solutions, three of which showed a north-to-south trending plume and stimulation of a producing well.
To improve the analysis, another inversion was run with only one more piece of information: the volume of CO2 injected between ERT surveys. Suddenly, the highest-probability solution showed the north-to-south plume, and a secondary solution identified a possible anomaly around a water injector. A further difference-map analysis yielded even higher confidence. The operators are examining the field data to test the predictions of the inversion.
Figure 2 represents changes in resistivity among 19 abandoned wells after three weeks of injection over the 70-acre study area in the CO2 flood. The left image is the first difference map, showing mostly noise. The two middle maps show the two most likely solutions, noise and a CO2 plume. The right map shows the solution when only the total injection volume constraint was added.
For this case, no new data were collected after the first inversion. Instead, existing data and basic physics constrained the solution space very effectively. The analysis also pointed toward ways to test the model's predictions against production data and suggested new analyses. This technique allows the operator to leverage all relevant knowledge of the field and to test interpretations that are subject to debate. It also helps inform operators of multiple scenarios and what new information may be needed to choose the most promising course of development. In fact, the less correlated the data sets (e.g., temperature, water cut, tiltmeter, crosswell seismic, etc.), the better the inversion.
Stochastic inversion and integration are superior to conventional inversion and analysis in every way except one: They are very computationally intensive. A typical stochastic analysis generates thousands of possible solutions, and convergence may take hundreds or even thousands of CPU hours. On a conventional workstation, that many CPU hours would require weeks to months to complete.
But this is where HPCs come in. A 256-CPU, 64-node cluster could execute an analysis in hours, depending on the problem. Even including setup, parameterization, I/O and other concerns, a single HPC could tackle 30 to 100 problems a year. Although this may not be enough for every asset within a large company, it may help handle the most difficult cases, the highest-risk projects or the largest few assets within a company.
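The arithmetic behind that claim is straightforward; the total serial cost and the parallel efficiency below are assumed, illustrative figures, not measurements:

```python
# Back-of-envelope for the cluster sizing above.
cpu_hours = 2000       # serial cost of one stochastic analysis (assumed)
cpus = 256             # the 256-CPU, 64-node cluster from the text
efficiency = 0.8       # losses to setup, parameterization and I/O (assumed)

wall_hours = cpu_hours / (cpus * efficiency)
print(f"wall-clock time per analysis: {wall_hours:.1f} hours")

# At roughly two to three days per problem including setup and data
# preparation, one such machine handles on the order of 30-100 problems
# a year, consistent with the estimate in the text.
```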
A World Without Scale Up
Currently, these large assets comprise large reservoirs managed by engineers using large reservoir simulations. In many cases, the workflow for these simulations has not changed in years: Build a geological concept from the data, build a detailed static geological model from those concepts, scale up to a full-flow reservoir model, and someday attempt a history match. Many of these steps embed assumptions that cannot be verified, including relative permeability and scaling coefficients. The more of these assumptions introduced, the less unique the solution for a given reservoir simulation.
One approach is to not make the assumptions. Instead, brute force can be used to run very large simulations where the best geological understanding is rendered in detail. Already, models run at higher resolution than in the past. In the case of managing the world's largest asset, the Ghawar Oil Field, Saudi Aramco runs its POWERS simulator on a massively parallel HPC. As of 2004, this 128-node Pentium IV-based machine had run full field simulations with between 10 million and 100 million cells and more than 4,000 wells, with larger runs pending. These simulations are run with multicomponent hydrocarbon models, waterflooding with varying brine chemistries, and dual-perm response to match fracture-flow history. Some runs include CO2 floods.
This capability not only allows Saudi Aramco to run fairly large models with minimal or no scale up, but also to execute history matches extremely rapidly (in some cases, in hours to days). Saudi Aramco has used this capability for infill drilling, water cut management, breakthrough prediction and other basic reservoir engineering choices (Figure 3). New data can then be incorporated into updated geological models that underpin the simulations.
Almost all such full-flow models run on conventional finite volume codes, which have proven reliable in most fields. There are exceptions, however. Even in simulations with multicomponent oils, methane, CO2, water and dual-permeability systems, the treatment of many important processes is crude or absent altogether. While that is fine for many conventional cases, some require greater sophistication. This is true of thermal recovery, where extreme temperature and viscosity transients matter. The handling of fracture systems is still poor, relying on simple continuum models of complex geometries with nonlinear stress/flow response. Such models poorly predict dissolution or precipitation resulting from CO2 injection, bulk crustal deformation, or scale formation.
These processes require coupled, complex simulation tools called reactive transport models. Many research versions of these codes exist, including TOUGH2, NUFT, STOMP and others. Some are finite difference codes, some finite element, and some are coupled to discrete fracture and deformation codes. They have one commonality: They all require massively parallel machines to run cases with the 3-D stratigraphic and structural complexity of most hydrocarbon fields.
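At their core, such codes repeatedly apply conservative flux updates cell by cell and couple them to chemistry. A minimal one-dimensional finite-volume sketch conveys the idea: advective transport of an injected species with a first-order reaction sink. All values are illustrative and unrelated to any code named above; production codes solve millions of cells in 3-D with full chemistry.

```python
import numpy as np

# Minimal 1-D finite-volume sketch of reactive transport: upwind advection
# of an injected species plus a first-order reaction (e.g., dissolution).
nx, dx, dt = 100, 1.0, 0.4      # cells, cell size (m), time step (s)
v, k = 1.0, 0.01                # velocity (m/s), reaction rate (1/s)
c = np.zeros(nx)                # concentration in each cell

for _ in range(150):
    flux = v * c                               # upwind flux leaving each cell
    c[1:] += dt / dx * (flux[:-1] - flux[1:])  # conservative update
    c[0] = 1.0                                 # injection boundary condition
    c -= dt * k * c                            # reaction sink in every cell

print(f"front has advanced to roughly cell {np.argmax(c < 0.01)}")
```

The time step is chosen so the Courant number v*dt/dx = 0.4 stays below 1, keeping the explicit update stable; each added physical process (heat, multiple species, fracture flow) multiplies the work per cell, which is why these codes scale onto parallel machines.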
Again, for those fields where fractures dominate the flow field, or where chemistry is difficult, HPCs provide a way to target tough local questions that impact cost or operations. For unconventional reserves simulation, such as in-situ oil shale recovery, steam injection into thermal diatomite, or enhanced coalbed methane recovery, advanced simulators on massively parallel platforms provide the hope of tackling tough operational problems, such as reducing and mitigating well failure events, and substantially improving recovery factors.
Big Iron And The Future
Could these areas be combined and optimized? One can certainly imagine using some kind of stochastic integration to provide an initial reservoir field model, which is updated with in-field information and advanced simulations run on the fly. Sequential stochastic runs reduce cycle time, allowing for additional reservoir detail and physical and chemical processes to enter models as necessary.
In all cases, information is processed and mapped to optimize around changing parameters (production rate, maximum recovery, environmental integrity, etc.). Even this complex scenario could be managed by a fairly small HPC, perhaps 32 to 64 nodes, for a medium size field. While this scenario is not yet in operation, all the components exist and could be integrated quickly and easily. One can imagine how this workflow could lead to substantial improvements in total recovery and operating cost reduction.
As mentioned, HPC is not all things for all cases. It is best used for managing specific projects or assets of greatest risk or greatest value. Saudi Aramco chiefly built its computer and simulator to model Ghawar. Even if a company does not have an asset like Ghawar or Prudhoe Bay, it still will have ventures or operations that represent a major investment. HPC applications can help reduce the risk and improve the performance of these projects.
Increased competition helps. Competition between chip makers Intel and AMD has not only dropped prices, but also produced common architectures that can handle a wide range of realistic technical challenges. As such, the challenge to operators and researchers alike is one of fit. What is the real problem, and what is a smart approach to solve it?
This is likely to require some new thinking about the exploration, drilling and production workflow. Is it possible to jump past steps such as creating a complex static geomodel? What is the value of upscaling, and can it be avoided? How are operational data sets measured and tracked? Ultimately, the value of HPC applications is only how they affect the value chain to reduce the cost of operations or cycle time. Identifying the key technical choke points in the business and rethinking the technical workflow can help focus the big iron to produce something novel, sexy and powerful. Something useful.
And something smart.
The author acknowledges Roger Aines, Steve Ashby, Bill Boas, Garfield Bowen, Ali Dogru and Abe Ramirez for discussions leading to this article. He also thanks Appro and Anadarko for supporting these technologies and research. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under contract W-7405-ENG-48.
Adapted and reprinted with permission from the July issue of The American Oil & Gas Reporter.