Putting IT Metrics to Work
By Samuel Greengard | Posted 04-13-2009
Utter the word "metrics" to any CIO and you're likely to get an earful about opportunity and risk. It's a fairly simple concept: Measure the right things, and the business will likely thrive. Monitor the wrong factors, and the same enterprise will veer off course and perhaps even crash.
While no sane executive would ever dispute the validity of metrics and benchmarking, figuring out how to apply them can prove daunting. And nowhere is this more obvious than at the chaotic intersection of business and IT.
"It's very difficult to identify a benchmark that is actually relevant," says Bobby Cameron, Forrester Research's principal analyst for CIO issues. "So much of IT's performance and budget is defined by the business and technical context in which spending and investments occur." Too many organizations, he says, get lost in a tangle of measures and numbers that may or may not prove meaningful.
"The key is to measure IT's value to the business," Cameron suggests.
It's an argument that's tough to dispute, particularly in today's brutal economy. However, the path to success can prove remarkably bumpy. For one thing, no single set of benchmarks works for every organization--or even for the same organization over time. Attempting to apply a set of industry best practices or use a preset Balanced Scorecard or Six Sigma template can result in abysmal failure. "More than a few companies have gotten lost by chasing other companies' metrics," says former CIO Dan Gingras, now a partner at consulting firm Tatum. "Benchmarking must be internally rather than externally focused."
For another, "It's easy to fall into the pathology of choosing metrics that indicate success while an organization is actually struggling," says Harvard Business School Professor Robert S. Kaplan, co-creator of the Balanced Scorecard. "If you're too inwardly focused and you think mostly about IT as its own function, you can easily wind up measuring the wrong things."
Consider: A bulletproof e-mail system and five-nines availability may allow the IT department to toast its performance, but if customer satisfaction is lagging or revenues are sagging, the metric is useless.
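For perspective on what "five-nines" actually buys, the arithmetic is straightforward: availability expressed as a fraction of a year translates into a strikingly small downtime budget. A quick sketch (the function name is ours, not from the article):

```python
# How much downtime does a given availability target allow per year?
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

def allowed_downtime_minutes(availability: float) -> float:
    """Yearly downtime budget, in minutes, for a given availability level."""
    return MINUTES_PER_YEAR * (1 - availability)

for label, availability in [("three nines", 0.999),
                            ("four nines", 0.9999),
                            ("five nines", 0.99999)]:
    print(f"{label}: {allowed_downtime_minutes(availability):.1f} min/year")
# Five nines works out to roughly 5.3 minutes of downtime per year --
# a genuine engineering feat, yet still no proof of business value.
```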
Finally, it's important to develop tools and systems--ranging from business intelligence and analytics to customer surveys and focus groups--to capture the data that's needed to measure success against a benchmark. Inaccurate or missing data can lead to wrong conclusions and poor decisions. In fact, it can derail an entire benchmarking initiative and weaken an enterprise. Moreover, as an environment changes, CIOs must be aware of how underlying numbers, ratios and percentages may change and lead to deceptive results. For instance, a metric such as "technology spending versus operating expenses" may spike in a down economy as revenues drop, but then appear low during a robust quarter.
Beyond Conventional Metrics
Conventional IT metrics often center on server performance, applications, mips and other technical factors. But these measurements don't necessarily tie into profitability or agility over the long term. Even executives who use more targeted IT-business metrics--such as "technology as a percentage of operating expenses" or "technology spending versus revenues"--risk missing the big picture. It may be useful to expand outward to general metrics such as Economic Value Added (EVA) or information productivity, both of which can help an enterprise understand IT investments in the overall fabric of the business.
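Two of the metrics mentioned here can be sketched in a few lines. The numbers below are invented for illustration; the ratio example also shows the distortion noted earlier, where identical technology spending looks very different once operating expenses shrink in a downturn. EVA is shown in its textbook form (after-tax operating profit minus a charge for invested capital).

```python
def it_spend_vs_opex(it_spend: float, opex: float) -> float:
    """'Technology spending versus operating expenses' as a simple ratio."""
    return it_spend / opex

def eva(nopat: float, invested_capital: float, wacc: float) -> float:
    """Textbook EVA: after-tax operating profit minus a capital charge."""
    return nopat - invested_capital * wacc

# Same $4M of IT spend, robust quarter vs. down quarter:
print(it_spend_vs_opex(4e6, 80e6))  # 0.05 -> looks lean
print(it_spend_vs_opex(4e6, 50e6))  # 0.08 -> "spikes" with no change in IT behavior

# An investment returning $1.2M after tax on $10M of capital at a 9% cost of capital:
print(eva(1.2e6, 10e6, 0.09))  # ~300000.0 -> positive EVA: value created
```

The point of pairing them is the one the article makes: a ratio alone is context-blind, while a value-based measure such as EVA at least anchors IT spending to what the business earns on it.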
Yet, even these criteria are likely to provide more of a snapshot than a complete picture. An IT department must understand where it fits in, says Forrester's Cameron. Within some enterprises that operate in a regulatory environment or an industry that requires little more than system dependability and availability, a utility-based metrics model may fit the bill. At these firms, technology need only serve as an enabler.
At the other end of the spectrum, he notes, are organizations where IT must serve as a partner to spur technological innovation and help differentiate the business for customers and others. Not surprisingly, many IT organizations fall somewhere in the middle.
Complicating matters further is the fact that different IT systems and solutions may fit into any of these categories within a single organization. Problems typically develop when there's a mismatch between what a department, business unit or entire organization expects and what metrics the IT department and the executive suite have in place.
For example, Harvard's Kaplan relates the story of a British bank that rolled out a Balanced Scorecard with the goal of building an IT platform that could be used anywhere in the world. Three years later, the business was in shambles--despite exceeding all of its IT benchmarks. "There was a complete disconnect between the business needs and IT performance," Kaplan says.
If all of this seems like a three-dimensional game of chess, welcome to the real world of metrics. Benchmarking pioneer Howard Rubin, president of consulting firm Rubin Worldwide, stresses the importance of creating a "parallax view" in order to gain a good perspective: "An organization must look at things from different angles--and use different metrics--in order to understand if it is getting consistent readings." He describes metrics and benchmarking as a largely forensic process that allows a CIO and other executives to view patterns, form hypotheses and then optimize systems. "No single strategic metric is valuable by itself," Rubin explains.
When used effectively, a suite of high-level strategic metrics--experts say anywhere from eight to 12 is ideal--provides multiple dimensions in which to view overall performance on at least a quarterly basis, if not in real time. Underneath this layer, an organization may rely on dozens or hundreds of more tactical benchmarks to measure a variety of things. Some of these may be tied directly to IT--others indirectly or not at all.
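The two-tier structure described above can be sketched as data: a small strategic suite on top, each entry backed by more tactical measures beneath it. Every name and grouping here is hypothetical, purely to make the layering concrete.

```python
# Hypothetical two-tier metric suite: strategic metrics on top,
# tactical benchmarks underneath. Names are invented for illustration.
strategic = {
    "customer_satisfaction": {"tactical": ["nps", "support_csat", "churn_rate"]},
    "it_spend_vs_revenue":   {"tactical": ["infra_cost", "app_cost", "license_cost"]},
    "time_to_market":        {"tactical": ["release_cadence", "defect_escape_rate"]},
    # ...a real suite would carry the 8 to 12 strategic entries experts suggest
}

tactical_count = sum(len(m["tactical"]) for m in strategic.values())
print(f"{len(strategic)} strategic metrics backed by {tactical_count} tactical measures")
```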
Moreover, since metrics ultimately hinge on strategic objectives, they should remain relevant for three to five years ... or longer. Every few years, Kaplan says, it's good to go back to a blank slate and engage in a complete refresh of metrics. It's particularly important when a disruptive technology, such as the Internet or mobile communications, comes along and fundamentally changes the nature of business interaction.
Rubin also emphasizes that results aren't binary. Just because an organization deviates from a benchmark doesn't mean it has failed and that heads should roll. At the end of the day, it's essential to rely on human interpretation and analysis to combine hard data with expertise and experience. "People have to read the metrics and the indicators and make good decisions," he says. "A benchmark is best viewed as a continuous calibration tool [that can be used] to identify opportunities and form hypotheses."
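Rubin's "continuous calibration" framing can be sketched as a check that reports drift rather than a verdict, leaving interpretation to a person. The tolerance threshold and messages below are our own invention, not Rubin's.

```python
def calibrate(reading: float, benchmark: float, tolerance: float = 0.10) -> str:
    """Report drift from a benchmark as an interpretive signal, not pass/fail."""
    drift = (reading - benchmark) / benchmark
    if abs(drift) <= tolerance:
        return f"on track ({drift:+.1%} vs benchmark)"
    # Deviation triggers inquiry, not blame: form a hypothesis, then investigate.
    return f"investigate ({drift:+.1%} vs benchmark)"

print(calibrate(0.95, 1.00))  # within tolerance -> "on track (-5.0% vs benchmark)"
print(calibrate(0.70, 1.00))  # flags a -30% drift for human review
```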