Grid computing isn't entirely new. It's actually a variation on the notion of distributed computing, which takes an application off one server and runs it more effectively by spreading it across several servers on a network.
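The idea can be sketched in miniature: split one compute-heavy job into independent chunks and farm them out to whatever workers are available. The sketch below is illustrative only, using Python's multiprocessing pool on a single machine as a stand-in for servers on a network; the prime-counting task, chunk sizes, and worker count are all assumptions for the example.

```python
from multiprocessing import Pool

def count_primes(bounds):
    """Count primes in [lo, hi) -- a stand-in for any CPU-heavy chunk of work."""
    lo, hi = bounds

    def is_prime(n):
        if n < 2:
            return False
        i = 2
        while i * i <= n:
            if n % i == 0:
                return False
            i += 1
        return True

    return sum(1 for n in range(lo, hi) if is_prime(n))

if __name__ == "__main__":
    # Split one big job into independent chunks, one per "server".
    chunks = [(i, i + 25_000) for i in range(0, 100_000, 25_000)]
    # The pool plays the role of the network: each worker grabs a chunk.
    with Pool(processes=4) as pool:
        partials = pool.map(count_primes, chunks)
    print(sum(partials))  # total primes below 100,000
```

A real grid adds what this toy omits: discovering idle machines, shipping work across the network, and tolerating workers that disappear mid-task.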
A few early experiments with distributed computing, including a pair of programs called Creeper and Reaper, ran on the ARPAnet, the 1970s predecessor of today's Internet. Later, when Xerox Corp.'s Palo Alto Research Center (PARC) installed the first Ethernet, a program cruised the network at night, commandeering idle computers for CPU-intensive tasks. A later scavenger system, called Condor, developed by Miron Livny and his colleagues at the University of Wisconsin at Madison, is now running at more than a dozen universities and other sites. Condor (see "Flight of the Condor" in this article) roams within clusters of UNIX workstations, usually confined to a single laboratory or department, and delivers 400 CPU-days per day of free computing to academics at the university and elsewhere, more than many supercomputer centers provide.
In 1995, grid computing concepts were explored by computer scientists around the country in a project organized by the National Science Foundation called I-WAY, in which high-speed networks were used to connect, for a short time, high-end resources at 17 sites across North America. Out of this activity grew a number of Internet grid research projects. In 1997, the Entropia network, spearheaded by Argonne's Foster, was established to apply idle computers worldwide to problems of scientific interest. In just two years, this network grew to encompass 30,000 computers with an aggregate speed of over one teraflop, or the capacity to perform 1 trillion numerical calculations per second. Among its scientific achievements is the identification of the largest known prime number (see www.mersenne.org). Since then, venture funding has been raised to use grid computing for philanthropic concerns, including Parabon Computation Inc.'s Compute Against Cancer project, which analyzes patient responses to chemotherapy, and Entropia Inc.'s Fight AIDS at Home project, which evaluates prospective targets for drug discovery.
Now, grid computing is "gaining critical mass" in the business community, says Foster. Indeed, hardly an industry exists that doesn't need to crunch more data now than it did 10 years ago. Modeling and simulation software is available for everything from airplane wing design to data mining. This comes as CIOs feel enormous pressure to squeeze the most from their budgets. At the same time, their bosses demand they find new ways to use technology for competitive advantage. The grid strategy can help.
The right circumstances include research, engineering, financial analysis and other work that needs heavy number-crunching, usually on smaller sets of data that each require hundreds or thousands of calculations. At pharmaceutical firms, that means drug discovery. At Oracle Corp., grid computing is used to run more than 50,000 tests daily on the database software it develops. And Motorola Inc. uses grids in many areas, including verification tests on code for the semiconductors it builds. Intel Corp., meanwhile, says its internal grid has saved it $500 million over the past decade.