Decision-Support Systems: Lessons from the Military

A couple of decades ago I was involved in a project to review some extremely sophisticated decision support technology—the onboard avionics suite of a new high-performance military airplane.

The technology had been in development for over a decade, so it was already well behind the state of the art from a hardware perspective, but the software suite attempted to break new ground in managing cognitive complexity.

It needed to.

The military planners who had requested the new platform wanted to create an airplane that had one crew member instead of two and that could perform equally well in both all-weather air superiority fighter and ground-attack roles. It also had to be carrier capable.

So the “pilot” not only had to fly the plane but also act as mission manager, weapons officer, defensive systems coordinator and so on. The idea was to offload a lot of “routine” decision making in these areas to the avionics software (both on board the airframe and via remote communications links to one or more controller aircraft) and leave the pilot free to concentrate on “mission critical” activities.

Great in theory. Unfortunately, even highly experienced pilots couldn’t reliably “fly” even simple mission profiles in the simulator. The engineers blamed the pilots. The behavioral psychologists blamed the engineers.

The pilots—who are selected from a tiny proportion of the human ability spectrum and are extraordinarily well educated and capable people—were extremely frustrated. The program was in trouble, and I got called in to help fix it (the general running it was an acquaintance from a previous career cycle and called in a favor or two to get me involved).

The problem turned out to be relatively simple. The avionics suite was configured to tell the pilot everything all the time. The poor pilot had too much information, couldn’t figure out what to pay attention to and went into “overload,” triggering a lot of (serious) errors.

What the avionics lacked was sufficient “situational awareness”: the ability to selectively present information so that only what was really necessary “right now” was displayed.

We worked out how to add situational awareness capability (using a portfolio of pre-determined “mission scenarios”); the engineers reconfigured the avionics package fairly fast (a few months); and the problems were almost completely eliminated.

In the process, I learned a lot about decision making.

First, it’s possible to create a situation in which the next piece of information—even if it’s useful or even vital—can degrade decision-making capability, because it triggers an overload condition. Adding more information after that point always makes things worse.
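In software terms, that overload point argues for treating the decision maker’s attention as a hard budget. Here is a minimal sketch in Python (the names and the fixed budget are my own illustration, not anything from the actual avionics) of a prioritized filter that caps what gets displayed, no matter how useful the overflow items are individually:

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class InfoItem:
    priority: int                        # lower number = more urgent
    message: str = field(compare=False)  # ignored when ordering items

def select_for_display(items: list[InfoItem], attention_budget: int) -> list[InfoItem]:
    """Return at most `attention_budget` items, most urgent first.

    Everything past the budget is simply not shown; beyond the overload
    point, even useful information degrades decisions.
    """
    return heapq.nsmallest(attention_budget, items)

# Ten competing alerts, but only three display slots.
alerts = [InfoItem(priority=p, message=f"alert {p}") for p in range(10, 0, -1)]
for item in select_for_display(alerts, attention_budget=3):
    print(item.priority, item.message)
```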

Second, situational-awareness filters are not trivial, but they can be developed if “mission scenarios” are well thought out ahead of time. It’s necessary, however, for the decision maker to be aware of which scenario is in operation; otherwise, interpretation errors will occur.
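A rough sketch of what such a filter can look like in code (the scenario names, channels and API are invented for illustration, not taken from the real system). Note that the active scenario is always rendered first, so the decision maker knows which filter is shaping the display:

```python
# Hypothetical mapping from mission scenario to the channels worth showing.
SCENARIO_CHANNELS = {
    "carrier-launch": {"engine", "flight-controls", "deck-comms"},
    "air-intercept":  {"radar", "weapons", "threat-warning"},
    "ground-attack":  {"targeting", "terrain", "threat-warning"},
}

class SituationalFilter:
    def __init__(self, scenario: str) -> None:
        self.set_scenario(scenario)

    def set_scenario(self, scenario: str) -> None:
        if scenario not in SCENARIO_CHANNELS:
            raise ValueError(f"unknown scenario: {scenario!r}")
        self.scenario = scenario

    def render(self, feed: dict[str, str]) -> list[str]:
        # Show the active scenario first: if the decision maker doesn't
        # know which filter is in effect, they misread what's missing.
        lines = [f"[scenario: {self.scenario}]"]
        wanted = SCENARIO_CHANNELS[self.scenario]
        lines += [f"{ch}: {msg}" for ch, msg in feed.items() if ch in wanted]
        return lines

# Example: the same feed renders differently depending on the scenario.
feed = {"radar": "two contacts, bearing 040", "engine": "nominal"}
print(SituationalFilter("air-intercept").render(feed))
```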

Third, sometimes you need a “reset” button: a way to clear away accumulated clutter and start fresh. If you have to do this, it’s best to do it fast and restart at an easily recognizable point in a relevant system state.
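As a toy illustration (the class and its state layout are hypothetical, not the real system’s design), a reset can be modeled as dropping the accumulated clutter and reloading the most recent well-defined checkpoint, rather than trying to untangle the mess in place:

```python
class DecisionConsole:
    def __init__(self) -> None:
        self.checkpoints = [{"mode": "startup"}]   # recognizable restart points
        self.working_state = dict(self.checkpoints[-1])
        self.clutter = []                          # stale alerts, old messages

    def checkpoint(self) -> None:
        """Save the current state as a recognizable restart point."""
        self.checkpoints.append(dict(self.working_state))

    def reset(self) -> dict:
        """Fast reset: clear the clutter, restart at the last checkpoint."""
        self.clutter.clear()
        self.working_state = dict(self.checkpoints[-1])
        return self.working_state
```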

Fourth, practice improves decision-making ability. All those hours spent in the simulator flying “artificial” mission scenarios mean that the pilots can react faster and more accurately even if the “real-world” situations aren’t exactly the same as the simulations.

And that’s important, because very few of their real-world missions are exactly like anything they practice for; there are just too many variables in the real world to model completely. At the same time, all the most critical decisions seem to involve a lot of uncertainty, so practice is helpful but not a panacea.

Fifth, recording, analyzing and replaying decisions helps improve decision-making capability, especially the analysis of decisions made under particularly uncertain circumstances. At a minimum, you can improve the information environment. Ideally you can look for ways to avoid getting into the same position again.

The military does this extremely well, and it shows in improved learning curves and rapid assimilation times for even complex new situations, as well as consistently improving decision quality.
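One plausible minimal shape for that record-and-replay loop (the log format and the zero-to-one uncertainty score are assumptions made for illustration, not a description of any real military system) is an append-only decision log that can be filtered down to the high-uncertainty calls most worth reviewing:

```python
import json
import time

class DecisionLog:
    def __init__(self, path: str = "decisions.jsonl") -> None:
        self.path = path

    def record(self, decision: str, visible_info: list[str], uncertainty: float) -> None:
        """Append one decision plus the context it was made in."""
        entry = {"time": time.time(), "decision": decision,
                 "visible_info": visible_info, "uncertainty": uncertainty}
        with open(self.path, "a") as f:
            f.write(json.dumps(entry) + "\n")

    def replay(self, min_uncertainty: float = 0.7):
        """Yield the decisions most worth reviewing: the uncertain ones."""
        with open(self.path) as f:
            for line in f:
                entry = json.loads(line)
                if entry["uncertainty"] >= min_uncertainty:
                    yield entry
```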

Sixth, automation of routine decisions helps almost all the time—but not absolutely all the time. Even the best automation makes bad decisions sometimes, and there are interesting areas of performance where human decision makers simply perform better, often for no obvious reason and especially when ambiguity is high.

So automate by all means, but add some supervisory capability that looks for situations where the automatic decision is sending the “mission” in a bad direction.
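In code, that supervisory layer might look something like the following sketch (the function names and the mission-health score are hypothetical): the automated policy proposes each routine decision, while a separate check watches whether the mission is trending badly and, if so, escalates the call to a human:

```python
from typing import Callable

def supervised_decide(
    state: dict,
    auto_policy: Callable[[dict], str],        # routine decision automation
    mission_health: Callable[[dict], float],   # 1.0 = on track, 0.0 = failing
    escalate: Callable[[dict, str], str],      # hands the call to a human
    threshold: float = 0.4,
) -> str:
    """Let automation decide, unless the mission is trending badly."""
    proposed = auto_policy(state)
    if mission_health(state) < threshold:
        # Each automated decision may look fine locally while the mission
        # drifts off course; when it does, a human makes the call.
        return escalate(state, proposed)
    return proposed
```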

And finally, even the best simulations don’t eliminate the need for test pilots—the tiny fraction of skilled flyers who can fly the airplane before there is a simulator, and in the process help to discover what many of the decision criteria need to be.

Applying these lessons to decision making in the business world would, I think, give everyone a better chance at getting things right—even when the “mission” is only “business critical.”
