The Green IT Dilemma

I have been devoting a lot of time to thinking through the refresh of our rolling three-year IT infrastructure plan. A big piece of that thinking focuses on ways to steadily reduce the overall cost of computing, even as we grow the capacity of our computation, storage and connectivity resource pools.

Increasingly, this is leading us to look for ways to decrease our use of electrical power in the data center. This is pure economics; we’re not driven by any particular ecological agenda. Power costs money, and using less of it saves money.

We have been carefully measuring our energy-efficiency ratios: how much useful work we get out of our machines versus how much power we pump in to run them. With several different technology platforms to manage, there is no easy way to come up with a single formula for this. Instead, we use MIPS/watt, stored terabytes/watt and ports/watt to keep track of where the power goes.
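
To show what that bookkeeping looks like in practice, here is a minimal Python sketch; the platform names and wattages are invented for illustration, not figures from our actual inventory.

    # Illustrative only: the platforms and numbers are made up for this sketch.
    platforms = {
        "compute": {"output": 2_500, "unit": "MIPS", "watts": 18_000},
        "storage": {"output": 400, "unit": "stored TB", "watts": 12_000},
        "network": {"output": 1_200, "unit": "ports", "watts": 9_000},
    }

    # Each pool's efficiency is simply useful output divided by power drawn.
    for name, p in platforms.items():
        print(f"{name}: {p['output'] / p['watts']:.3f} {p['unit']}/watt")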

We also measure the ratio of “critical” power (what we need to run the technology itself) to total power used, which includes cooling systems, environmental management and so on. In a perfectly efficient world, this ratio would be 1:1. Ours is at about 1:1.6, and my objective is to get it to about 1:1.25 by 2010.
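
To make the target concrete, here is the same arithmetic as a short sketch, assuming a hypothetical 500 kW critical load (the load figure is invented for illustration):

    # Hypothetical 500 kW critical IT load; the ratios come from the text above.
    critical_kw = 500
    for ratio in (1.6, 1.25):  # where we are today versus the 2010 objective
        total_kw = critical_kw * ratio
        print(f"1:{ratio}: total {total_kw:.0f} kW, overhead {total_kw - critical_kw:.0f} kW")

On that assumed load, hitting the objective would cut the non-critical overhead from 300 kW to 125 kW.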

The trouble is, a lot of our other initiatives run counter to achieving this. When we started down the capacity virtualization path, our servers averaged about 20 percent utilization. The current and near-term generations of CPUs we use can throttle back their power under light loads, reducing consumption from around 30 watts per core to well under 10 watts when idle.

With virtualization, we achieve utilization levels of over 65 percent, and we will target even higher levels as our workload tuning improves. So I need fewer servers (good), but they run hotter all the time (bad). I’m also increasing the density of my server installations by using blades, and that creates hot spots that require fairly aggressive cooling. That, in turn, takes more power to move the air or water that carries off the excess heat.
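
A rough model captures both sides of that trade-off at once. The per-core wattages below come from the figures cited earlier; the fleet sizes are invented, and linear power scaling with utilization is a simplifying assumption.

    # Assumption: power per core scales linearly from idle to fully busy.
    CORES = 4
    IDLE_W, BUSY_W = 8, 30  # watts per core, per the ranges cited earlier

    def server_watts(utilization):
        return CORES * (IDLE_W + (BUSY_W - IDLE_W) * utilization)

    before = 100 * server_watts(0.20)  # 100 lightly loaded servers
    after = 31 * server_watts(0.65)    # roughly the same total work on 31 servers

    print(f"per server: {server_watts(0.20):.0f} W -> {server_watts(0.65):.0f} W")
    print(f"whole fleet: {before:,.0f} W -> {after:,.0f} W")

Each surviving server draws nearly twice the power, and sheds nearly twice the heat, of its lightly loaded predecessor, which is exactly what creates the hot spots; yet the fleet as a whole uses far less power.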

Of course, we could always spread things out more. But if we do that, we’ll incur additional space costs and increased connectivity costs for the fiber and Cat 6A cables.

We’re starting to run into the same problems with storage. Disks on all storage tiers are getting bigger, and even though the drive manufacturers do a pretty good job of keeping power requirements down, bigger disks encourage us to keep more data immediately available. Since even the most power-frugal spinning disk uses more power than a tape cartridge sitting in a library, we’re steadily losing that fight as well.
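
The size of the gap is clear even with rough numbers. The watts-per-terabyte figures below are assumptions for illustration, not vendor data.

    # Assumed figures: real drives and tape libraries vary widely.
    DISK_W_PER_TB = 8.0   # spinning disk, power amortized per stored TB
    TAPE_W_PER_TB = 0.3   # tape library overhead; idle cartridges draw nothing

    archive_tb = 200  # hypothetical archive size
    print(f"disk: {archive_tb * DISK_W_PER_TB:.0f} W, tape: {archive_tb * TAPE_W_PER_TB:.0f} W")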

Worst of all, every projection I have seen indicates that the cost of power will rise over the next decade. Even if we make our 1:1.25 efficiency objective, leverage all the power-saving features of the technology we use, and balance power density, cooling, space and cost optimally, power will still cost us more money five years from now than it does today.

Going green isn’t just a passing fashion or an ecologist’s dream. It’s hard commercial reality. We all need to get behind ways to save power in the data center, not only to save money but also to ensure that we have enough power to go around. That’s going to take a lot of work.

Better get started.