Power Surge

By Tony Kontzer  |  Posted 07-10-2007

Consider that in the past, every watt of power devoted to computer processing in data centers required a half-watt of power for cooling and lighting, and today that equation has flipped: Every watt consumed by computing resources now requires two watts of power for cooling and lighting, according to Gartner's Bell. "That means one-third of power is going to useful work, and two-thirds is devoted to non-productive tasks," he says.
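To make Bell's arithmetic concrete, a quick back-of-the-envelope calculation helps; the Python sketch below is illustrative, not from Gartner, and simply converts the overhead ratio into the share of total power doing useful work.

```python
def useful_fraction(overhead_watts_per_compute_watt: float) -> float:
    """Fraction of total data center power that reaches computing gear,
    given how many watts of cooling/lighting overhead each compute watt requires."""
    return 1.0 / (1.0 + overhead_watts_per_compute_watt)

# Yesterday's data center: 0.5 W of overhead per compute watt -> about two-thirds useful
print(useful_fraction(0.5))   # 0.666...

# Today's, per Gartner's Bell: 2 W of overhead per compute watt -> about one-third useful
print(useful_fraction(2.0))   # 0.333...
```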

That fast-changing equation is forcing IT executives to think more carefully about their data centers on multiple fronts. They must keep up with technology to ensure they have the computing resources to support today's complex business applications; they must squeeze more productivity out of their servers so those machines don't sit idle, as they often did in the past; and they must control the rising costs of powering and cooling those resources, which often means shrinking their data centers to cut down on waste.

Often, however, solving one problem causes another. A few years ago, brokerage firm Thomas Weisel Partners reached the point where it was running out of data center space. It tackled the problem with a combination of blade servers, which pack more computing power into smaller boxes, and virtualization software, which lets individual servers handle the workload of multiple machines. Virtualization had the added benefit of letting the company run multiple applications on a single server without worrying about intermingling customer data, a huge concern for financial services companies.
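The space math behind that kind of consolidation is simple enough to sketch. The function and figures below are hypothetical, not Thomas Weisel's own numbers, but they show how a virtualization ratio translates into physical footprint.

```python
def physical_servers_needed(num_workloads: int, vms_per_host: int) -> int:
    """Hosts required to run a set of workloads at a given consolidation ratio."""
    return -(-num_workloads // vms_per_host)  # ceiling division

# Hypothetical example: 200 lightly used servers consolidated at 10 VMs per blade
before = 200
after = physical_servers_needed(before, vms_per_host=10)
print(f"{before} standalone servers -> {after} blades")  # 200 -> 20
```

The footprint shrinks, but the surviving machines run denser and hotter, which is exactly the trade-off Fiore describes next.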

But the combination of smaller, hotter-running machines and virtualization led to heavier power consumption and cooling requirements. "I'm probably more likely to run out of power and cooling before I run out of space," says Kevin Fiore, Thomas Weisel's vice president and director of engineering services.

Sleepless Nights
The question of how to reduce power drain while beefing up processing capacity has proven to be a conundrum that keeps many IT executives awake at night. So it's no wonder technology vendors devote big money to reducing the data center power drain.

Intel and AMD are investing millions in their latest efforts to offer microprocessors that deliver more processing capability while consuming less power—Intel with its research into new (as yet unnamed) input/output technology that could support up to 10 processors on a single chip, and AMD with its soon-to-be-released Barcelona chip. IBM's billion-dollar "Project Big Green" initiative, intended to make computing more energy efficient and environmentally friendly, includes a five-step program for companies looking to cut power use in the data center. And a team at Hewlett-Packard Labs last year introduced "dynamic smart cooling" technology that links smart air-conditioning systems to a network of sensors measuring the temperature of air entering and leaving servers, in theory delivering cooling only when and where it's needed. "Cooling beyond needed levels is a waste of energy," says HP Fellow Chandrakant Patel, who heads up the effort. "We can reduce power consumption by 25 to 45 percent."
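HP hasn't published its control logic here, but the basic idea of sensor-driven cooling can be sketched as a simple feedback loop. The zone names, thresholds and air-conditioner interface below are hypothetical stand-ins, not HP's actual system.

```python
# Illustrative sketch of sensor-driven cooling, loosely in the spirit of
# HP's "dynamic smart cooling"; all names and thresholds are invented.

TARGET_INLET_C = 25.0   # desired server inlet air temperature
DEADBAND_C = 1.0        # tolerance before adjusting a zone

def adjust_cooling(zone_inlet_temps: dict[str, float]) -> dict[str, str]:
    """Decide, per zone, whether its air conditioning should ramp up, down, or hold."""
    actions = {}
    for zone, temp in zone_inlet_temps.items():
        if temp > TARGET_INLET_C + DEADBAND_C:
            actions[zone] = "increase cooling"
        elif temp < TARGET_INLET_C - DEADBAND_C:
            actions[zone] = "decrease cooling"   # avoid cooling beyond needed levels
        else:
            actions[zone] = "hold"
    return actions

print(adjust_cooling({"row-A": 27.3, "row-B": 24.8, "row-C": 22.1}))
# {'row-A': 'increase cooling', 'row-B': 'hold', 'row-C': 'decrease cooling'}
```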

Meanwhile, Schwartz and his executive team at Sun—which last fall introduced Project Blackbox, a self-contained, shippable data center—have established power consumption as a major area of engineering focus. In a blog entry posted last September, CTO Greg Papadopoulos made it clear that processing power is no longer the most important consideration in equipping data centers. "Just about every customer I speak with today has some sort of physical computing issue: They are … maxed out on space, cooling capacity or power needs—and frequently all three," Papadopoulos wrote. "My guess is that we'll look back at today's 'modern' systems to be about as efficient and ecologically responsible as we would now view the first coal-fired steam locomotives."
