A map of the Gulf of Mexico covers an entire wall of geophysicist John Potter’s office at Amerada Hess Corp.’s Houston-based R&D lab. It is as complex as it is large, charting thousands of square miles of underwater terrain. Hand-drawn concentric circles and lines divide thousands of miles of ancient rock formations and subterranean cliffs available for offshore oil exploration into hundreds of square lease units. A much more detailed map, this one digital, is stored on Potter’s desktop computer. Created from sound waves and complex mathematical algorithms, it measures the density and composition of bedrock miles beneath the Gulf floor, rock that dates back, in some cases, to the days when dinosaurs walked the earth.
But what’s most striking about the wall map and the computerized one that Potter and his colleagues in Hess’ Geophysical Group use is not so much the age of the underwater landscape as Hess’ detailed knowledge of it. The maps include not only precise measurements of the thickness of a rock layer in one area versus another, but also 3-D images, viewable from all angles, that offer clues about any undiscovered oil that might lie within.
Such maps would not have existed even five years ago: By necessity, oil exploration has been a guessing game of the highest order. Supercomputers, a relatively recent arrival, have helped fine-tune the analysis, but at a hefty price, one that has limited their use by some companies and made sustained number-crunching and digital depth analysis cost-prohibitive.
But times are changing. Thanks to the emergence of low-cost computing power in the form of Linux computing clusters, Hess and other oil companies now can run algorithms they never could have dreamed of running before. At far more affordable prices, Hess can now extract terabytes more information about what lies beneath the earth’s surface than it could with a supercomputer. Indeed, Linux cluster technology has “dropped our cost of computing by an order of magnitude or two,” says Hess CIO Richard Ross. “One of our guys wrote a program that allows him to interactively work with multiple terabytes of data. All the books stored in the Library of Congress would equal 20 terabytes. Just think about working with that much information in real time.”
Better yet, Hess’ Linux clusters have nearly doubled in power each year since 1998, enabling Hess engineers to process data more often and in a wider variety of formats. Ultimately, this gives Hess executives a continuously improving stream of information with which to make crucial decisions about which oil fields to lease, where to drill and how much money to bid for a particular field. The difference is like night and day: Ross says today’s seismic images put Hess’ old maps to shame. “It’s like the difference between looking at a low-resolution image, where you can barely make out two human figures, and a high-resolution image, where you realize there’s a man and woman holding flowers and candy,” he says. “Because we can now process more data going into our bids for oil leases, our risk of doing something stupid is lower.”