Windows Vista and the Price of Productivity

In my last column, I wrote at some length about the potential for a significant productivity gain from the new user interface and interaction technologies that will begin to appear later this year and next, most notably Microsoft's Windows Vista.

In this piece, I want to explore the downside: all the effort and cost we will probably have to incur to realize those gains.

My first thought was, “If the designers do a good job, the effort should be close to zero,” but on reflection I doubt it will be so easy, no matter how good the design is. We are simultaneously too familiar with what we already have (the various flavors of the windows-icons-menus-pointer, or WIMP, metaphor) and too light on universal intuition about how things should work.

Just try switching quickly between Windows and OS X or Linux and see how confusing someone else's intuitions about ease of use can be.

We need to remember that most human cognitive skills follow a “normal” (bell curve) distribution. The majority of the population has roughly the same (average) ability, with relatively small “tail” populations that are either much better or much worse than the average.

There is always a temptation to look for ways to leverage the “best” tail—in part because the “best” people can be 10 or 50 or 100 times better than the average (think race car driver or fighter pilot). Most people will never be that good, however, so it’s generally a better strategy to focus on improving the average ability of the population.

The ability to learn new skills follows this distribution, too. Some of us have no trouble at all; most of us can manage it with some effort; a few of us can't manage it at all.

In a perfect world, the approach to “learning” the new UI would recognize this. The new UI would start off life looking just like the old one, and software instrumentation would watch what we use it for. As consistent usage patterns emerged, the UI would automatically and incrementally adapt so that it continuously improved our ability to do our jobs.

No single increment would be so large as to be disruptive. Eventually (and hopefully painlessly) we would reach an individual “capability” plateau where further automatic tweaking wouldn't improve our performance. At that point we would have to make a conscious decision to learn and adopt the habits of the better, perhaps of the best, performers: habits that the automatic tuning of their UI experience would also have surfaced.
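To make the idea a bit more concrete, here is a minimal sketch, in Python, of the kind of usage instrumentation such an adaptive UI might sit on. It is only an illustration: the command names, the promotion threshold, and the “add a toolbar shortcut” tweak are my own assumptions, not anything Microsoft has announced.

```python
from collections import Counter

# Hypothetical sketch: count how often each command is invoked and
# promote heavily used ones to a more prominent spot, one small
# change at a time so no single increment is disruptive.

class AdaptiveUI:
    def __init__(self, promote_threshold=50, max_changes_per_session=1):
        self.usage = Counter()        # command name -> invocation count
        self.promoted = set()         # commands already given a shortcut
        self.promote_threshold = promote_threshold
        self.max_changes_per_session = max_changes_per_session

    def record(self, command):
        """Called by the instrumentation layer each time a command runs."""
        self.usage[command] += 1

    def end_of_session_tweaks(self):
        """Return at most a few small, incremental UI adjustments."""
        changes = []
        for command, count in self.usage.most_common():
            if len(changes) >= self.max_changes_per_session:
                break
            if count >= self.promote_threshold and command not in self.promoted:
                self.promoted.add(command)
                changes.append(f"add toolbar shortcut for '{command}'")
        return changes

# Example: after a day of watching one user...
ui = AdaptiveUI()
for _ in range(75):
    ui.record("insert-table")
for _ in range(12):
    ui.record("mail-merge")
print(ui.end_of_session_tweaks())   # ["add toolbar shortcut for 'insert-table'"]
```

The cap on changes per session is there for exactly the reason above: no single increment should be large enough to be disruptive.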

Because we would have information from a large population of users, it would be clear what “better” and “best” mean in the context of our work environments. To the extent that optimal interactions can be learned (and it's a good bet that some can't be), we could be given an incentive to take the training, practice the new skills until we master them, and thus improve our performance.

Because we won’t be as productive while we are learning the new skills, this is really a co-investment model. The business invests via short-term lowered productivity to get long-term higher productivity. We invest the emotional energy (and the frustration that comes with sudden incompetence at things we used to be good at) to learn in order to perform better with less effort later. And there will be some lifecycle costs even when we are done.

Support will be much harder when everyone is using a slightly different version of the UI and when identical interactions are actually implemented differently, because each computer will have been individually optimized for its user. At a minimum, support will involve being able to replicate the exact local context in order to understand the issues and make suggestions about how to solve them.

I don't expect the UIs about to be deployed to be that sophisticated, although I do think the support requirements will become more complex almost immediately. I do expect such adaptive approaches in the following generations of deployments. This time around, though, there's going to be some transition pain.

So back to the original question: what will it cost, and will it be worth it? Let's assume for a moment that the new platforms and UIs will actually work as intended right away and that the productivity potential is real.

From the early-stage testing I have done, I’d say that the “average” person is going to take about one month to get used to the new UI and that they will lose between 25% and 40% of their effective productivity during that period, even if the application designers do a really good job.

Assuming you get the anticipated productivity gains (which I estimated to be between 20% and 40%), you would show a positive return after two to three months.
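The break-even arithmetic behind that estimate is simple enough to sketch. The combinations below are illustrative picks from the ranges quoted above, treating the learning period as one month of reduced output that the later gains have to pay back.

```python
# Illustrative break-even arithmetic: about one month of learning at
# reduced output, then a steady productivity gain that repays the loss.

def months_to_break_even(loss_during_learning, gain_after, learning_months=1.0):
    """Months from rollout until cumulative output is back to the old baseline."""
    lost_output = loss_during_learning * learning_months  # output given up while learning
    return learning_months + lost_output / gain_after     # learning time plus payback time

# A middling combination: lose 30% for a month, then gain 30%.
print(months_to_break_even(0.30, 0.30))   # 2.0 months
# A less favorable one: lose 40% for a month, then gain only 20%.
print(months_to_break_even(0.40, 0.20))   # 3.0 months
```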

There are some imponderables. You'll have to look for people who aren't picking up the new habits and decide what to do about them. If they can't be helped to adjust, eventually they won't be able to cope with the work at all.

How much design effort will be needed to achieve the productivity potential clearly depends on the types of tasks involved, so there are some variables to look at here as well. But as far as I can see, the costs should be (mostly) manageable, and the benefits are definitely worthwhile.
