The Hess Geophysical Group didn't start off bullish on Linux. In 1998, when a companywide push to cut costs first moved Forsyth's crew to consider Linux, Hess became one of the first companies to look seriously at the then-untried operating system, which at the time was viewed more as a rogue movement than as a viable alternative to Microsoft and other makers of proprietary software. According to Davis, there was a lot of initial unease over the idea of entrusting Hess' most precious cache of information to Linux. The IBM SP2 supercomputer that Hess had been leasing at the time, similar to the chess-playing Deep Blue, had a tried-and-true track record. "It was the best machine I ever worked with in terms of reliability and performance," Davis recalls. "Linux is not at the same level. You have different expectations of a Mercedes and a Volkswagen."
But that year, cost pressures took priority. Hess' bottom line took a hit as oil prices dropped to one of the lowest points of the 1990s. At $2 million per year on a three-year lease, the supercomputer was an expense the Houston lab decided it had to question, especially since Davis and crew would need a second supercomputer to meet the improved performance levels that Hess needed to move forward. It would be a tough call: Hess needed a system that would perform tasks in the same amount of time as the supercomputer, but for a lot less money. And Hess couldn't compromise on reliability; its system would be performing 3-D seismic depth imaging on areas covering hundreds of square miles, a task that would require intense levels of complex number-crunching.
Just then, Scott Morton, a former oil industry expert for computer maker Silicon Graphics Inc., joined the Hess lab in Houston as a senior professional geophysical specialist, and it was Morton who finally convinced the group there was an alternative. At SGI, which offered high-powered Unix workstations and supercomputers (think special effects for Jurassic Park), Morton had watched as SGI's customers began shifting from Unix to the newly emerging Linux. For example, Morton recalls, the national defense labs "had already proved that parallel processing could be done very well," and for far less money, on clusters of PCs rather than a single massive supercomputer. He believed Hess' use of the supercomputer also could be transferred to parallel processing on PCs running Linux.
At most companies, such a radical change in hardware and software might take years to accomplish and require approval from a management committee. But Hess' Geophysical Group, a small, tightly knit coterie of eight engineers and IT specialists, was able to move swiftly and autonomously: Hess headquarters in New York traditionally had let its Houston R&D staff noodle as it pleased, as long as it met budget targets. "It was a local decision to experiment with Linux," says Ross. "They let me know what they were doing, but they didn't ask for my approval."
So Morton and some of Hess' IT experts, including Davis, began benchmarking Linux against the existing IBM SP2 supercomputer. Once again, there were worries. At first, "we were concerned about the lack of support on the hardware side," Morton recalls. PC suppliers were accustomed to a Microsoft environment, not Linux. Linux, though, proved a worthy alternative to the SP2. "In some cases the benchmarking results were the same; in other cases, the cluster was slower or faster," says Davis. On average, the Linux cluster and the supercomputer offered about the same processing speed. "That gave us a good idea of what we could do to replace the SP2," Morton says. With about eight months left before Hess' lease of the SP2 was set to expire, the group decided it would gradually shift from the supercomputer to Linux clusters, buying one 32-node Linux cluster and then another as the new system proved itself.
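The logic behind that kind of benchmarking is straightforward: seismic imaging is largely embarrassingly parallel, so a survey can be split into independent chunks and timed on one worker versus many. The sketch below is purely illustrative, not Hess' actual seismic code; the function names and the toy arithmetic workload are invented for the example, and real imaging jobs would distribute work across cluster nodes rather than local processes.

```python
# Toy benchmark comparing serial vs. parallel throughput on an
# embarrassingly parallel workload, in the spirit of the cluster-vs-SP2
# comparisons described above. All names and the workload are hypothetical.
import time
from multiprocessing import Pool

def crunch(chunk):
    """Stand-in for per-chunk number-crunching (e.g. one imaging tile)."""
    total = 0.0
    for x in chunk:
        total += (x * x) % 7.0
    return total

def make_chunks(n_points=200_000, n_chunks=8):
    """Split the 'survey' into equal, independent chunks of work."""
    step = n_points // n_chunks
    data = list(range(n_points))
    return [data[i * step:(i + 1) * step] for i in range(n_chunks)]

def bench(n_workers):
    """Run all chunks with n_workers processes; return (result, seconds)."""
    chunks = make_chunks()
    start = time.perf_counter()
    if n_workers == 1:
        results = [crunch(c) for c in chunks]      # serial baseline
    else:
        with Pool(n_workers) as pool:              # parallel run
            results = pool.map(crunch, chunks)
    elapsed = time.perf_counter() - start
    return sum(results), elapsed

if __name__ == "__main__":
    total_serial, t_serial = bench(1)
    total_parallel, t_parallel = bench(4)
    # Both paths must compute the same answer; only the wall time differs.
    assert abs(total_serial - total_parallel) < 1e-6
    print(f"serial: {t_serial:.3f}s  4 workers: {t_parallel:.3f}s")
```

As with the real benchmarks, the interesting output is the ratio of the two timings: if the parallel run keeps pace with (or beats) the baseline at a fraction of the hardware cost, the cheaper platform wins.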