This is a trick question. Imagine you’ve just had a major medical operation, and you’re being given a choice: Would you rather be put in a nursing unit that administers the wrong drug or the wrong amount once in every 500 “patient days,” or go to a unit that reports blunders ten times as often, where the odds against you appear to be 1 in 50 rather than 1 in 500?
Amy Edmondson, an associate professor of management at the Harvard Business School, was tricked, too. In the early 1990s, Edmondson was doing what she thought was a straightforward study of how leadership and coworker relationships influenced drug treatment errors in eight nursing units. Edmondson and the physicians from Harvard Medical School and the Harvard School of Public Health, which provided financial support for her research, were all stunned when questionnaires completed by these nurses showed that the units with superior leadership and better relationships between coworkers reported making far more errors. The best units appeared to be making more than ten times as many errors as the worst!
Puzzled, but undaunted, Edmondson brought in another researcher, who used anthropological methods to interview people at these eight units and observe them at work. Edmondson was careful not to tell this second researcher about her findings or hypotheses, so he wasn’t biased by what had already been discovered.
When Edmondson pieced together this independent researcher's findings with her own, and talked with the physicians who were supporting the study, she realized that the better units reported more errors because people felt "psychologically safe" to do so. In the two units that reported the most mistakes, nurses said "mistakes were natural and normal to document" and that "mistakes are serious because of the toxicity of the drugs, so you are never afraid to tell the nurse manager."
Now, consider the two units where errors were hardly ever reported. There, the story was completely different. Nurses said things like, "The environment is unforgiving, heads will roll," and "you get put on trial," and complained that the nurse manager "treats you as guilty if you make a mistake" and "treats you like a two-year-old." Edmondson concluded that the study could not have uncovered all the errors in these units, because people weren't talking about them; they were protecting themselves from repercussions.
Edmondson’s research is helping to change the meaning and interpretation of reported medical errors. The physicians at the schools that sponsored her research have changed their views 180 degrees: they no longer treat the error data as objective evidence, but as something driven in part by whether people are trying to learn from mistakes or trying to avoid getting blamed for them.
A recent best-selling book by experienced surgeon Atul Gawande, Complications: A Surgeon’s Notes on an Imperfect Science, makes much the same point. Gawande cites his extensive clinical experience (and also mentions Edmondson’s research) as evidence that mistakes are inevitable at even the best hospitals, and that the difference between good and bad surgeons (and good and bad hospitals) is that the good ones admit and learn from mistakes, while the bad ones deny making them, focusing their energies on pointing the finger at others rather than on what they can learn.
The implications of Edmondson’s perspective on learning from errors go far beyond the medical arena. Related research, including recent studies in the airline industry and in manufacturing plants, reinforces her perspective and adds important nuances. One of the most crucial lessons from these studies is that companies and groups that focus on how and why the system causes mistakes, rather than on which people and groups are to blame, not only encourage people to talk more openly about mistakes but also make changes that actually reduce errors.
Jody Hoffer Gittell is an assistant professor at Brandeis University who spent eight years studying the airline industry. She describes this research in her wonderful book, The Southwest Airlines Way: Using the Power of Relationships to Achieve High Performance. I was especially struck by her comparison of how Southwest Airlines and American Airlines handled delayed planes. At American, at least in the mid-1990s, employees repeatedly told her things like, “Unfortunately, in this company when things go wrong, they need to pin it on someone. You should hear them fight over whose departments get charged for the delay.” In contrast, at Southwest, the view was that when a plane was late, everyone needed to work together to figure out how the system could be changed so it wouldn’t happen again. As one Southwest station manager told Gittell, “If I’m screaming, I won’t know why it was late…. If we ask ‘Hey, what happened?’, then the next day the problem is taken care of…. We all succeed together—and all fail together.”
Certainly, there are many other reasons why American Airlines has lost billions of dollars in recent years, and Southwest continues to be profitable despite the horrible conditions facing the airline industry, but Gittell makes a powerful case that Southwest’s focus on repairing systemic problems rather than placing blame is an important part of the story. Her research suggests that psychological safety and a focus on fixing the system are both important.
Indeed, focusing on the fix more than on where the fault lies may be tough to accomplish: There is a well-documented tendency in countries like the United States that glorify rugged individualism to give excessive credit to individual heroes and to place excessive blame on individual scapegoats when things go wrong.
Research on two process improvement efforts by MIT professors Nelson Repenning and John Sterman examined how this tendency to “overattribute” success and failure to individuals undermines organizational change efforts, and how it can be overcome.
The professors contrasted a successful and a failed change effort. In the unsuccessful one (which focused on speeding product development), managers continually attributed good and bad performance to individual skills and effort, rather than systemic issues. Heroes and scapegoats were constantly produced, but little meaningful learning and change actually occurred. In the successful effort (which focused on improving manufacturing cycle time), managers consciously fought their natural tendency to identify who deserved credit and blame, and instead, focused on how to strengthen the system. A supervisor explained their success this way: “There are two theories. One says, ‘There is a problem, let’s fix it.’ The other says, ‘We’ve got a problem, someone is screwing up, let’s go beat them up.’ To make an improvement, we could no longer embrace the second theory, we had to use the first.” Clearly, CIOs and just about anyone else who manages complex systems face similar pressures to find and fix the causes of mistakes, and to create conditions where the system keeps getting better and better.
Emerging research on this challenge suggests two key guidelines. First, when mistakes happen, begin by assuming that the fault lies with the system, not the people. Second, forgive people who make mistakes and encourage them to talk openly about what they have learned. The best managers follow the mantra “forgive and remember” rather than “forgive and forget” or, worse yet, “blame, remember who screwed up, and hold a grudge.” Sure, there are still times when people lack the training or skills to do a job well, and the system is not really to blame. But if you manage people, think twice the next time you’re searching for a scapegoat. Try changing the system first.
Robert I. Sutton is co-author with Jeffrey Pfeffer of The Knowing-Doing Gap: How Smart Companies Turn Knowledge into Action. He co-leads Stanford University’s Center for Work, Technology and Organization. Professor Sutton’s next column will appear in August. Please send comments to editors@cioinsight-ziffdavis.com.