September 15, 2006
Chevron and two of its partners recently discovered a new field in the deepwater Gulf of Mexico that could yield 3 to 15 billion barrels of oil, boosting U.S. reserves by as much as half. At the Council on Competitiveness' HPC Users Conference on September 7, Chevron CTO Dr. Donald Paul gave an impromptu talk about the discovery and the role HPC played in it.
Paul said HPC was crucial to enabling this important discovery. HPC has been used in seismic processing for many years, but Chevron's "Jack-2" reservoir and others like it in the deepwater Gulf of Mexico lie at the very edge of current seismic imaging capability. Imaging at the scale of this project was unprecedented, Paul explained, with data sets of up to a quadrillion (10^15) points. Processing such vast data sets was impossible until advances in HPC capabilities and visualization technologies over the past few years made it feasible.
The features of the newly discovered reservoir were invisible until recently, hidden beneath a canopy of salt that is in places miles thick, and geologists were skeptical about how much oil the region might hold. But with high performance computing, what was invisible became clear. "Geology's always been smarter than the geologists," said Paul. "Nature is so complex that our knowledge is very small in comparison. The machines get faster so you can see more, adjust the algorithms, and finally see what you're looking for. What we found is 300 miles long and 100 miles wide." This, he said, has been the whole history of seismic imaging: it is not an exact science, and it is "always a question of which approximation is best." Chevron evolves its algorithms every six months, he said, and this enables the company to "just see things that were not visible before."
Once HPC permitted Chevron to "see" the possibilities, the company had the confidence to proceed with the enormously expensive process of drilling a test well. HPC was then applied to the even larger challenge of modeling the drilling process itself, with the computer modeling done in real time.
Specialized ships were needed to drill through 7,000 feet of water and 20,000 feet of underlying rock. The steel drillstrings were five miles (8 kilometers) long and cost more than $1 billion each, and the drilling was run entirely by robotics.
The next stage, Paul said, is to model these reservoirs in order to decide how best to develop them. This will involve simulations with billions of cells. Again, the modeling will not be done in the lab, but "on the front line of production work."
Chevron used its own proprietary software for seismic imaging, on an HPC system that was "a cluster of a few thousand processors." Paul said the discovery "unveils an enormous accumulation trend of oil," but cautioned that "there's a big difference between accumulation and actual oil. We have a long way to go, years really."