How Web 2.0 Changes Enterprise Thinking

Brian P. Watson

Andrew McAfee makes a compelling case for businesses to adopt collaborative technologies in his new book, Enterprise 2.0 (Harvard Business Press, available now). In this chapter, McAfee explains how sites like Wikipedia have changed the traditional “structures” of content.

Throughout the history of corporate computing, the norm has been to use technology to impose these work structures (to define workflows, interdependencies, decision right allocations, and/or information needs) in advance and then use software to put them in place. ERP, CRM, SCM, procurement, and other “enterprise systems” enjoyed explosive growth starting in the mid-1990s. These applications differed in many respects, but they shared one fundamental similarity: they were used to define, then deploy, business processes that cut across several organizational groups, thus helping to ensure that the processes would be executed the same way every time in every location. The applications did so by imposing the elements of work structure listed above.

The belief that technologies supporting collaborative work should impose work structures appears, in fact, to be an almost unquestioned assumption. Technology developers and corporate managers seem until quite recently to have shared the belief that good outcomes in group-level work come from tightly structured processes. This belief was reflected in the design of both groupware and KM systems. A review of groupware use by Paul Dourish of the University of California, Irvine, highlighted the fact that while the software was originally intended to facilitate “work as an improvised, moment-by-moment accomplishment,” it actually wound up being used to support workflows. According to Dourish, software containing “predefined formal descriptions of working processes” has constituted “perhaps the most successful form of groupware technology currently in use.” In other words, a genre of technology intended to support unstructured work has enjoyed its greatest success as a tool for imposing work structures. KM applications did not typically specify interdependencies between people or workflows, but they did tightly predefine the structure of the information to be included within the knowledge database, giving only certain people and groups the right to add to it.

It’s easy to understand where this faith in structure originates. It’s at least as old as the theories of the industrial engineer Frederick Winslow Taylor, who at the beginning of the twentieth century advocated studying work to determine the “one best way” of accomplishing it and then making sure all workers followed the new standard. Later in the century, the quality revolution led by W. Edwards Deming and others stressed that the best way to ensure consistently satisfactory outcomes was not to focus on the outcomes, but rather to control the process used to create them. This could mean, for example, taking away a worker’s right to adjust his machine after it generated a single bad part and giving that right instead to an engineer, who would use the techniques of statistical process control to determine whether the machine had truly drifted out of specification. These techniques and the philosophy underlying them soon spread from manufacturing to other industries, to the point where standardized, tightly defined processes became an almost universal goal.

The work design philosophy of good outcomes via imposed structure is clearly appropriate in many circumstances, but is it always? Are there circumstances in which it’s better not to try to impose control? Can high-quality outcomes result from an undefined, non-standardized, uncontrolled process?

Reprinted by permission of Harvard Business Press. Excerpted from Enterprise 2.0: New Collaborative Tools for Your Organization’s Toughest Challenges by Andrew McAfee. Copyright (c) 2009 by Andrew McAfee; All Rights Reserved.