Opinion: John Parkinson on the Ins and Outs of Benchmarking Your IT Department

Late in 2005, the CIO of a Fortune 20 company whom I have known for many years asked me to take a look at benchmarking his IT organization against its peers. On the surface, this was a reasonable request. The CIO wanted to know how well he was doing against some key performance indicators and whether there were any areas where he was significantly behind the competition on policy, practices and performance.

However, in my view, there were two major potential issues with the request. First, just who were his peers? He’s in an industry with only a dozen or so global players, only a handful of whom are really of comparable size. Roughly half of these run their IT largely in-house; the other half are largely outsourced, so valid analysis would be difficult and complex, and meaningful comparisons would be hard to make. Even if we were to drop industry as a selection criterion, there are only a few companies as large as his, and they are extremely diverse, making useful comparison even harder.

I knew exactly who he wanted to include in the benchmark study, but beyond the small potential sample size lies the second problem: confidentiality of data and results. If the sample is too small, it’s too easy to use publicly available information to “decode” the supposedly anonymous benchmarks and get straight to a ranked list of your competitors’ performance; that’s why getting other companies to participate is generally difficult. These two factors, plus the cost of collecting and analyzing the information required, make benchmarking for the largest organizations extremely challenging. And all too often the results can be meaningless or misleading—or both.

Here’s an example of how benchmarking results can lead you astray. A few years ago I worked on a major reorganization of an IT group that had had very good comparative benchmarking results for many years. According to the benchmarks, they were the leader in their industry and amongst the best performers in any industry. Yet when we were done with the reorganization—which involved new information-driven policies and practices as well as a new organization structure—they had reduced head count from around 700 to just over 250 without any loss of delivery volume or quality. That result calls into question their years of benchmark comparisons and “excellent” performance ratings: Either everyone has the same set of chronic problems, or there is something wrong with the measurement and analysis process.

All this got me thinking about IT performance benchmarking issues in general. A Fortune 20 IT organization is generally pretty large: Most IT budgets at this level would qualify as a Fortune 1000 company all by themselves (the threshold is about $1.6 billion), and a few would make the Fortune 500 (which takes a budget of roughly $3 billion). These IT organizations are also generally pretty diverse, thanks to the necessity of serving all the different areas of a Fortune 20 business. In many cases such large IT organizations are structured as a collection of smaller IT organizations, usually along functional or (increasingly) core business process lines.

With this in mind, I suggested a slightly different, three-part approach to the benchmark.

First, we would do a quick, mostly qualitative benchmark of about 20 global organizations selected solely for comparable size, using publicly available information and extracting whatever quantitative data was available. We would use this to build a “positioning map” to give the CIO a sense of who amongst the group was most like his organization and who was least like it. The map would have several dimensions, but we would not guarantee that it would serve up any specific or useful quantitative comparisons. Instead, it would let the CIO decide who, if anyone, he wanted to know more about. In particular, it would let him, his team and the business executives who are his peers think strategically about the role of the IT organization and how to organize IT resources. Are there “out of the box” practices he could learn from at companies radically different from his own business, even if adopting them would require a strategic shift? This could have a significant impact on his strategic planning process, but not on immediate day-to-day operations.
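To make the idea of a positioning map concrete, here is a minimal sketch of the calculation behind it. Everything in it is invented for illustration: the dimensions, the scores and the company labels are assumptions, not data from this engagement. The point is simply that once each organization is scored on a few normalized dimensions, “most like us” becomes a straightforward distance ranking.

```python
# Purely illustrative sketch: the profile dimensions, scores and company names
# below are made up, not data from the actual study.
import math

# Each hypothetical profile is scored 0-1 on four dimensions:
# (revenue scale, IT spend as % of revenue, share of IT outsourced, geographic spread)
profiles = {
    "Us":        (0.95, 0.40, 0.20, 0.90),
    "Company A": (0.90, 0.35, 0.80, 0.85),
    "Company B": (0.70, 0.55, 0.15, 0.60),
    "Company C": (0.98, 0.30, 0.50, 0.95),
}

def distance(a, b):
    """Euclidean distance between two profiles: smaller means more alike."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

us = profiles["Us"]
ranked = sorted(
    ((name, distance(us, p)) for name, p in profiles.items() if name != "Us"),
    key=lambda pair: pair[1],
)
for name, d in ranked:
    print(f"{name}: distance {d:.2f}")
```

In practice the choice and weighting of dimensions would be debated at length; the useful output is the ranking itself, which tells the CIO whom to look at more closely, not any particular number.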

Second, we would look at about 50 technology product and service organizations with revenues roughly the size of his IT budget. Here we would go for a more quantitative analysis and some specific rankings. We would, in essence, be benchmarking his IT organization against the best companies that operated at its scale and did comparable kinds of work. These results would show him how the broader market valued specific aspects of performance in technology services, give him an additional basis for internal return-on-investment decisions and perhaps suggest some new performance measures.

Third, we would benchmark each internal part of his company’s IT organization against the others (the IT organization is structured along business process lines, with a shared-services model for infrastructure management), taking care to compare like with like. These results would tell him whether there were any major differences in internal performance, which teams were his best and which his weakest performers, and how much variability his IT customers were experiencing. We would also be able to give him an idea of how much better his organization’s overall performance could be if he focused on improving the bottom quartile of his teams, using the practices and experience of his top quartile.
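The arithmetic behind that last estimate is simple enough to sketch. The team names and scores below are invented, and a real benchmark would use a basket of cost, quality and delivery measures rather than a single score, but the what-if logic is the same: find the quartile boundaries, then ask how much the total would improve if the bottom-quartile teams performed like the top-quartile ones.

```python
# Purely illustrative sketch: team names and scores are invented.
import statistics

# Hypothetical composite performance score for each internal IT team (higher is better).
scores = {
    "Team A": 62, "Team B": 71, "Team C": 55, "Team D": 88,
    "Team E": 93, "Team F": 67, "Team G": 49, "Team H": 80,
}

values = sorted(scores.values())
q1, _, q3 = statistics.quantiles(values, n=4)   # lower quartile, median, upper quartile

bottom = {team: s for team, s in scores.items() if s <= q1}
top = {team: s for team, s in scores.items() if s >= q3}
top_avg = statistics.mean(top.values())

current_total = sum(scores.values())
# What-if: every bottom-quartile team is lifted to the top-quartile average.
improved_total = current_total + sum(top_avg - s for s in bottom.values())

print(f"Bottom-quartile teams: {sorted(bottom)}")
print(f"Estimated overall uplift: {improved_total / current_total - 1:.1%}")
```

The number that comes out is only as good as the underlying measures, but even a rough version of this calculation tells a CIO whether closing the internal gap is worth a serious improvement program.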

He liked all three parts of the proposal, especially the last, because his organization had never done a formal “internal” comparative benchmark before. He saw immediately that they could link the benchmark to their balanced scorecard process and to his managers’ individual KPI sets, both of which they had been struggling to implement effectively. If properly designed, the benchmark could be used periodically to monitor the effects of process improvement programs such as CMMI and Six Sigma (he was working on initiatives for both). And because the various groups being benchmarked were all within a single organizational context (although distributed geographically and distinguished to some extent by different management styles), many of the external variables that make comparisons among separate corporations difficult or expensive would not be factors, or could be more easily controlled for.

The results will be in soon: The external benchmarks have taken about three months, and the internal effort has consumed about ten weeks this first time around, because the necessary information systems had to be established and some data rationalization was needed. Subsequent internal benchmarks should be much faster (about a week for analysis and report generation) and are planned to run once a quarter. Fortunately, most of the basic data collection mechanisms were already in place.

I’ve gotten good results from this internal comparison process over the past 20 years, and I’m always surprised that more organizations don’t use it routinely, though some do (GE comes immediately to mind). It can have implementation pitfalls if some of the IT divisions being measured feel they are being unfairly compared, or if the bottom quartile is penalized rather than encouraged to improve. So internal benchmarks have to be carefully designed, well implemented and effectively communicated. Do all three, however, and you may get a much better return on investment than from an external comparison that obscures more than it illuminates.

Once the results are in, I’ll follow up on this benchmarking effort in a future column.