Viewpoint: Ian Foster on Grid Computing

Brainpower Collective

Science, like business, is increasingly collaborative and multidisciplinary, and it’s not unusual for teams to span institutions, states, countries and continents. But what if such groups could link their data, computers, sensors and other resources into a single virtual laboratory? The payoffs could be huge.

Some envision a cure for cancer or a new miracle drug, but what interests me is the ability to dramatically advance the quality of the tools we can bring to bear on complex problems, in business and in research. But let’s be less exotic. Imagine the possibilities of harnessing the power of idle computers to solve a specific business problem, big or small. You can do it faster and cheaper, and you can pose questions you couldn’t have asked before, because now you have the computing power.

The average computer, for example, is being used only 5 percent or 10 percent of the time. There are more than 400 million PCs around the world, many as powerful as a supercomputer was in 1990, and most are idle much of the time. Every large institution has hundreds, or thousands, of such systems. That’s an amazing amount of computing capacity going unused. At the University of Wisconsin, for example, a grid regularly delivers 400 CPU-days per day of essentially “free” computing to academics at the university and elsewhere, more than many supercomputer centers provide. SETI@home, the project that enlists volunteers’ PCs to look for signs of intelligent life in the universe, is now running on half a million PCs and delivering 1,000 CPU-years per day, making it in effect the fastest special-purpose computer in the world. Some players have now gone commercial, hoping to turn a profit by selling access to the world’s idle computers, or the software required to exploit idle computers within an enterprise. Though the business models that underlie this new industry are still being debugged, the potential benefits are enormous: Intel Corp., for example, estimates its own internal grid computing project has saved it $500 million over the past 10 years.
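To make the idle-cycle idea concrete, here is a minimal sketch in Python of how a coordinator might farm independent work units out to machines that report themselves idle. It is purely illustrative: the names, the in-process threads standing in for volunteer PCs, and the fixed batch of work units are assumptions for the example, not the design of Condor, SETI@home, or any other grid middleware.

```python
# Minimal sketch of cycle scavenging: a coordinator hands independent work
# units to whatever "machines" report themselves idle. Threads stand in for
# donated PCs; all names are illustrative, not a real grid API.
import queue
import threading
import time

work_units = queue.Queue()
for unit_id in range(20):          # pretend these are independent analysis tasks
    work_units.put(unit_id)

results = {}

def idle_worker(name: str) -> None:
    """Simulates a PC donating its otherwise idle cycles."""
    while True:
        try:
            unit = work_units.get_nowait()
        except queue.Empty:
            return                 # no work left; the PC goes back to normal use
        time.sleep(0.01)           # stand-in for real computation
        results[unit] = f"processed by {name}"
        work_units.task_done()

workers = [threading.Thread(target=idle_worker, args=(f"pc-{i}",)) for i in range(5)]
for w in workers:
    w.start()
for w in workers:
    w.join()

print(f"{len(results)} work units completed across {len(workers)} idle machines")
```

The point of the sketch is simply that each work unit is independent, so any idle machine can pick up the next one; that independence is what lets projects scale to hundreds of thousands of donated PCs.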

The focus for grid computing, so far, has been on cracking tough research problems, like finding a new prime number or analyzing patient responses to chemotherapy. Now, though, we’re seeing a growing interest in applying it across a whole range of industrial problems.

There are technical challenges. We have a good set of security technologies, for example, but there is more to be done. We need to better manage the security and policy issues that arise when you are sharing resources on a large scale. Then there’s the need to figure out the business model. When someone buys computing or software services from participants in a grid, how do the providers get paid? What should such services cost? What would be a fair price? And then, how do you keep track of who is using which cycles?
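On that last accounting question, a minimal sketch of the bookkeeping involved: record which user consumed how many CPU-hours on whose machine, then total the charges at some agreed rate. The record format, function names, and the flat rate are all hypothetical, not a description of any real grid accounting system.

```python
# Illustrative sketch of grid usage accounting: who used which cycles, on whose
# machine, and what that usage would cost at an assumed flat rate.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class UsageRecord:
    user: str          # who ran the job
    provider: str      # whose idle machine did the work
    cpu_hours: float   # cycles consumed, expressed as CPU-hours

ledger: list[UsageRecord] = []

def record_usage(user: str, provider: str, cpu_hours: float) -> None:
    ledger.append(UsageRecord(user, provider, cpu_hours))

def bill(rate_per_cpu_hour: float) -> dict[str, float]:
    """Totals each user's consumption at a flat, assumed rate."""
    totals: dict[str, float] = defaultdict(float)
    for rec in ledger:
        totals[rec.user] += rec.cpu_hours * rate_per_cpu_hour
    return dict(totals)

record_usage("chem-lab", "uni-pc-042", 3.5)
record_usage("chem-lab", "uni-pc-017", 1.0)
record_usage("risk-team", "uni-pc-042", 6.0)
print(bill(rate_per_cpu_hour=0.02))   # the rate is a placeholder assumption
```

Even this toy version shows why the business questions are hard: you need trustworthy metering on machines you don’t own, and an agreed price before anyone will settle the bill.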

Still, if this technology takes off as I think it will, it should become easier to outsource whole aspects of your business. Grid computing can help an ASP model work more cost-effectively and therefore have a bigger impact on the business. The way the world is moving, when we engage in computing in the future, we’ll do it in the context of loosely knit, dynamic organizations interested in short- or long-term collaboration. And outside groups will try to provide, more cheaply, services a CIO might otherwise provide. Those two forces seem to be pushing us toward thinking of computing not as something we buy and operate as an enterprise, but as something we acquire in a dynamic fashion. That’s what grid computing is all about.
