Strategy

By Karen S. Henrie  |  Posted 01-06-2006

Over the past 20 years, companies have amassed a glut of IT infrastructure and put up with levels of waste and inefficiency that they would never tolerate if IT were seen in the same light as more traditional corporate assets. Imagine an auto manufacturer building a plant that gets used for two hours a day. An oil company investing in a refinery that produces at just 5 percent of capacity. A bank with 20 five-story office buildings and only 15 employees working in each one.

Yet that sort of inefficiency is rampant in IT. According to Gartner Inc., a utilization rate of 5 percent to 10 percent on Intel servers is the rule rather than the exception. Most IT organizations run out and buy a new server every time they deploy a new application.

CIOs have willingly tolerated this situation in hopes of making sure they can supply enough IT resources to business users when needed. Many CIOs are distrustful of mixing and matching carefully configured applications on a single platform, for fear it will put one or all of those applications at higher risk of a system-level failure. The operating-system upgrade required by an accounting application, for instance, could bring down a payroll application running on the same server.

Steadily declining equipment costs have only made matters worse. Processing power, memory and disk space are getting cheaper by the day. Server prices alone have dropped 80 percent or more over the past decade, making it that much easier to simply buy another server or storage device, rather than rationalize an existing setup.

Bill Homa, senior vice president and CIO at Hannaford Brothers Co., a $5 billion grocery chain based in Scarborough, Me., recalls a store-based labor-scheduling system that ran on 33 Windows NT servers. Some ran the application in production, while others were used to develop and test newer versions. "Managing that was a nightmare," says Homa.

This profusion of infrastructure does not come cheap. According to a recent report by IDC, the labor required to maintain a single small application server can cost between $500 and $3,000 per month in a production environment—and that figure excludes costs associated with backup and recovery, network connectivity, power and air conditioning. Multiply that by the hundreds or even thousands of servers in the typical large IT organization, and it's easy to see why systems-management costs are skyrocketing.
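To put those figures in perspective, here is a rough, back-of-the-envelope calculation. The fleet size below is an illustrative assumption, not a number from the IDC report; only the per-server monthly range comes from the article.

```python
# Back-of-the-envelope estimate of annual server-management labor costs,
# using IDC's reported range of $500 to $3,000 per server per month.
# The fleet size (500 servers) is a hypothetical example.

LOW_PER_MONTH = 500      # low end of IDC's per-server labor estimate ($/month)
HIGH_PER_MONTH = 3000    # high end of IDC's per-server labor estimate ($/month)
SERVERS = 500            # assumed fleet size for a mid-sized IT shop

low_annual = LOW_PER_MONTH * SERVERS * 12
high_annual = HIGH_PER_MONTH * SERVERS * 12

print(f"Estimated annual labor cost for {SERVERS} servers: "
      f"${low_annual:,.0f} to ${high_annual:,.0f}")
# -> Estimated annual labor cost for 500 servers: $3,000,000 to $18,000,000
```

Even at the low end of the range, the labor bill for a modest fleet runs into the millions of dollars a year before hardware, power or cooling are counted.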

The problem isn't confined to servers. Many storage systems, networks, PCs and even applications are also vastly underutilized. According to a recent Gartner survey, companies routinely spend 70 percent to 80 percent of their total IT budgets supporting established applications and required infrastructure components.

That's why a growing number of IT organizations are now turning to virtualization. The term "virtual" goes back to the days of mainframe time-sharing, when it referred to partitioning and other technological sleights of hand that allowed one computer to carve up its processing, memory and storage resources so that it appeared to each user as a dedicated computer.

Present-day virtualization stems from those early roots. Dan Kusnetzky, vice president of System Software at IDC, says virtualization is akin to "creating and fostering a carefully designed illusion." Gartner defines it more prosaically as "the pooling of IT resources [e.g., computing power, memory, storage capacity] in a way that masks the physical nature and boundaries of those resources from the user."
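A minimal sketch can make that "illusion" concrete. The class and method names below are invented purely for illustration, not drawn from any vendor's product: a single physical host pools its processor and memory capacity and hands out slices of it as virtual machines, each of which looks like a dedicated computer to the workload running inside it.

```python
# Illustrative sketch of resource pooling: one physical host's capacity is
# carved into virtual machines, masking the host's physical boundaries from
# each guest. All names here are hypothetical, not a real hypervisor API.

class PhysicalHost:
    def __init__(self, cpu_cores: int, memory_gb: int):
        self.free_cpu = cpu_cores
        self.free_mem = memory_gb
        self.vms = []

    def create_vm(self, name: str, cpu_cores: int, memory_gb: int):
        # Each guest sees only the slice it was given.
        if cpu_cores > self.free_cpu or memory_gb > self.free_mem:
            raise RuntimeError("not enough free capacity on this host")
        self.free_cpu -= cpu_cores
        self.free_mem -= memory_gb
        vm = {"name": name, "cpu": cpu_cores, "mem_gb": memory_gb}
        self.vms.append(vm)
        return vm

# One 16-core, 64 GB server hosts workloads that once ran on separate boxes.
host = PhysicalHost(cpu_cores=16, memory_gb=64)
host.create_vm("accounting", cpu_cores=4, memory_gb=16)
host.create_vm("payroll", cpu_cores=4, memory_gb=16)
print(f"Free capacity left: {host.free_cpu} cores, {host.free_mem} GB")
# -> Free capacity left: 8 cores, 32 GB
```

The point of the sketch is the bookkeeping, not the code: instead of one application per box running at 5 percent utilization, several workloads share one machine, and the unused capacity remains visible and allocatable.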



 
