In my February column, I looked at some of the ways CIOs are beginning to differentiate their strategies by building portfolios of projects that incorporate concepts from the world of financial investment. Buried in that column was a promise to look at approaches to justifying and funding projects when the benefits for those putting up the money aren't immediately obvious. I have three types of projects in mind: extensive infrastructure development and deployments that will eventually benefit everyone, but that have only a few initial users; projects and technologies that benefit a few users a great deal, but will never be widely leveraged; and pilot projects based on promising early-stage technologies that might not pan out.
Most IT organizations have to deal with all three kinds of projects from time to time, and some face them all the time. These projects are important for several reasons. First, it's generally impossible over the long run to fund only projects that have an assured return on investment and that benefit everyone equally. Second, few businesses are so homogeneous internally that every part of the organization gets equal benefit from every project. Third, the politics and sociology of organizations and their trading partners make "rational decision-making" in the pure economic sense of the phrase very difficult. Fourth, it's really not possible to get sustainable competitive advantage from the same technology that everyone else has and uses. Someone has to be first to do the differentiating and make it work.
Given that we can't avoid the hard parts entirely, or all the time, what should we do?
The first situation, investing in infrastructure, is the easiest of the three to attack.
No one wants to be asked to bear the total infrastructure cost of a new capability (think WiFi access, RFID and VoIP right now, but PDAs, cell phones and even networked PCs are recent past examples) just because they were the first to see the potential benefit and ask for it. And often, lots of potential users want to be the last to sign up because they know that by then all the bills will have been paid and they'll get something close to a free ride. CIOs faced with these all-too-common situations have a few options:
Use "corporate" funds to make the initial investment and repay the "loan" by charging everybody a usage "tax" as they implement the capability. This works well where there is a clear business case and high probability of successful deployment. The CIO is essentially investing on "margin" on behalf of all users.
Find more users willing to be early adopters. In many cases, this now means going outside the enterprise to create a "consortium" of adopters who together represent a big commitment, but where no individual member is bearing the entire cost. Given that many early attempts were in the supply-chain area, I'm surprised more efforts of this type haven't happened with RFID.
Get key technology partners to finance the project, either on a "pay-as-you-go" basis or by substituting external investment for the corporate funds used in the first option above. In the long run, this will generally cost you more than paying for it yourself (even on a discounted cash flow basis), but if you don't have the available capital and the ROI is there, it's a viable option. All the large technology vendors offer some form of project financing, often at very competitive rates.
In the right situation, you can use a combination of all three tactics. Getting the rate of return right isn't easy, but the necessary financial models are well-established, and the skills needed to use them can be borrowed from the CFO if they're not a part of IT.
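Those well-established financial models can be as simple as a net present value calculation over the project's cash flows. A minimal sketch, where the up-front cost, the usage-"tax" recoveries and the 10 percent discount rate are all hypothetical figures, not taken from this column:

```python
def npv(rate, cash_flows):
    """Net present value of a series of yearly cash flows.
    cash_flows[0] is the up-front investment (negative)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical infrastructure project: $500K up front, then growing
# usage-"tax" recoveries from users over the next four years.
flows = [-500_000, 100_000, 180_000, 220_000, 250_000]

print(round(npv(0.10, flows), 2))  # positive NPV: the "loan" pays back
```

A positive result at the chosen hurdle rate supports making the corporate "margin" investment; a negative one argues for the consortium or vendor-financing routes instead.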
The second kind of project is harder. The challenge of funding projects that will never benefit a majority of users, but will be of enormous benefit to just a few, is one of the core drivers of "shadow IT." If the CIO won't do it, business units use their own money and often go outside the organization for project resources. Some of these resources get to stay around, because the life-cycle costs of such projects are a lot larger than just the initial development cost (think BlackBerry).
In organizations with strong governance, where standards are understood and adhered to, this can be exactly the right answer. IT focuses on 65 percent to 80 percent of the portfolio and gives "license" to user departments to undertake the rest. Users can specify their own ROI hurdle rates and trade off the CapEx and OpEx consequences against other things they might want to invest in, not against the IT projects of other departments. What we have done is shift the yield calculation from a single "global" view (all of IT spending) toward a set of more "local" portfolio views covering all spending within an operating business.
It gets interesting when you run the yield models both ways to get a range of possible portfolio designs, with clear and quantified trade-offs. Business managers can now make informed decisions about priorities; you can even implement a system of trading credits to allow departments to swap priorities. You also can and should re-run the models from time to time to make sure the optimization trade-offs between global and local remain valid.
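Running the models "both ways" amounts to comparing one global ranking of projects against separate per-department rankings. The sketch below uses a simple greedy return-per-dollar heuristic; the departments, budgets and project figures are entirely made up for illustration:

```python
# Hypothetical projects: (department, cost, expected annual return).
projects = [
    ("sales",   200, 60), ("sales",   150, 30),
    ("ops",     300, 90), ("ops",     100, 20),
    ("finance", 250, 50), ("finance", 120, 40),
]

def pick(candidates, budget):
    """Greedily select projects by return-per-dollar within a budget."""
    chosen, spent = [], 0
    for proj in sorted(candidates, key=lambda p: p[2] / p[1], reverse=True):
        if spent + proj[1] <= budget:
            chosen.append(proj)
            spent += proj[1]
    return chosen

# Global view: one shared budget, one portfolio.
global_pick = pick(projects, 600)

# Local view: each department spends an equal share of the same budget.
local_pick = []
for dept in ("sales", "ops", "finance"):
    local_pick += pick([p for p in projects if p[0] == dept], 200)

print(sum(p[2] for p in global_pick))  # global portfolio yield: 150
print(sum(p[2] for p in local_pick))   # local portfolio yield: 120
```

Which view wins depends entirely on the figures and the budget split, which is why re-running the models as conditions change matters.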
I have looked at what the portfolio mix should be in large companies, and concluded that about 60 percent of IT spending should be managed centrally by IT to provide common services to all users. (Remember, this is a reference model, not a prescription.) About 25 percent should be allocated to projects that benefit some users, but not all, and the remaining 15 percent should go to focused projects that have relatively few beneficiaries. Where projects have certain critical characteristics (especially their need to access master-reference data and their potential impact on systemwide performance), IT should probably still be in charge. Otherwise, a more open sourcing process is fine.
The problem in practice is that strong governance is rare, and the resulting lack of common approaches and platforms raises IT and business costs significantly, usually without raising returns very much. In effect, the gains from our local optima aren't enough to offset the global inefficiencies they cause. This is another of the really hard parts, but here again there are tools to help and an emerging body of knowledge about how to use them.
Finally, let's look at the challenge of emerging technologies. Big organizations are often encouraged by their technology vendors to work with very early-stage technologies, and there are lots of ideas being pushed by smaller and newer players. Without much effort you can burn a lot of time and money trying out these early-stage ideas, most of which (even from the major players) never really make it into the mainstream market. Sure, somebody needs to be trying out all this new technology; otherwise it will be even less well-designed than a lot of what we now get to work with. But does it have to be you?
It turns out that the answer in most cases is clearly "no, it does not" and "no, it should not." If you measure how much of their experimental work directly influences their subsequent mainstream activities, very few IT organizations are effective experimenters: less than 5 percent of the companies we have studied or worked with qualified as good or excellent in this regard. You might have other reasons for working with new technologies (rewarding high-performing employees, for example, or responding to a trading partner's directive), but at most companies it's a waste of resources. And in the information-rich world we live in, you can learn just about as much by being a skilled observer as you can by being a participant, and at a much lower cost.
This is one kind of project that shouldn't be in most IT portfolios at all.
John Parkinson is chief technologist for the Americas at Capgemini. His next column will appear in June.