Strong Signals: How Real Is Real Time?

By John Parkinson  |  Posted 01-17-2003

There's been a lot written over the past couple of years on the concept of the "real-time enterprise." Let me start by admitting that, as a mathematician by training, the term "real time" carries a lot of baggage for me.

Nevertheless, there are a lot of software engineering ideas in areas such as embedded systems and safety-critical computing that focus on how you build information-based technologies that respond very quickly to changes in their environment. Because these systems require the ability to act as fast as the changes they sense and then respond to—in other words, in real time—they generally can't involve humans in their decision and control loops. Creating a real-time enterprise involves applying these ideas to a broad range of business systems.
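
To make the idea concrete, here is a minimal sketch in Python of the kind of loop such a system runs, with nothing between detection and action but a rule. The sensor, threshold and corrective action are invented stand-ins for whatever a real process would actually monitor and do.

import random
import time

THRESHOLD = 100.0   # hypothetical safe limit for whatever the process monitors

def read_sensor():
    # Stand-in for the sensing step: a pressure gauge, a queue depth, a price feed.
    return random.gauss(90.0, 8.0)

def respond(reading):
    # Stand-in for the automated corrective action; no human approval in the path.
    print(f"corrective action taken, reading was {reading:.1f}")

# A real control loop runs forever; it is bounded here so the sketch terminates.
for _ in range(1000):
    reading = read_sensor()
    if reading > THRESHOLD:    # the decision rule: act the moment the limit is crossed
        respond(reading)
    time.sleep(0.01)           # the cycle time is what makes the response "real time"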

Such systems are important. First, situations arise in which the ability to process a stream of complex data, identify patterns—both expected and unexpected—and then react appropriately is essential to the safe operation of a system. Electric power generation, medical monitoring, automobile emergency braking and aircraft flight controls are examples. In the business world, you can look at risk management, manufacturing quality, supply-chain operations, product profitability and customer satisfaction in the same way.

Second, systems that use rapid sense-and-respond principles can outperform systems that use more static decision rules and control models. Modern fighter aircraft are aerodynamically unstable: They can't fly at all without their flight computers. But letting the computers microcontrol their flight surfaces greatly expands the performance envelope of the airplane. In business, we could apply the same approach to product mix, customer-specific pricing, manufacturing capacity management and logistics.

Back in the 1960s, Peter Drucker introduced the idea that business management tasks were becoming so complex and information-intensive that we needed to apply automation to them, letting people concentrate on "what to do" decisions, not on "how to do it" activities. The integrated MIS efforts of the 1970s and 1980s were often directed at this goal. We made a lot of progress in areas such as inventory management, yield management and financial risk, but the objective fell out of favor because we underestimated how much sensing would be needed, how smart the response rules would need to be, and how rapidly the business environment changes. And, as often happens, we underestimated the human change-management dimension.

Coping Strategies

Can the technologies we have today cope? At the basic level, we certainly have more connectivity, computational capacity and data storage available than we did 20 years ago. We can also write much smarter and more flexible rules and rule processors. We know we don't have to automate everything to make the automation effort worthwhile, and we understand how to build in the capability to adapt going forward—sometimes automatically, more often through periodic human intervention via performance review and technology redesign.

Creating the real-time enterprise is still a major challenge, but there's a compelling core value proposition here. A functional real-time enterprise gives you a number of significant advantages:

Greater return on assets through dynamic control models. It's possible to make the case for a 30 percent to 40 percent improvement in asset utilization where dynamic models replace static ones.

Richer and more consistent customer interactions. This is possible because the same set of information is being used in every customer contact, and single point-of-contact constraints and costs can often be avoided.

Steady improvement in decision quality. The system can "remember" its decisions, and we can analyze the outcomes to improve both the rules and how they are applied.

Greater productivity from people at all levels. They can work with better information and focus their attention on critical processes. (However, this is a mixed blessing on a macroeconomic scale. Our own and others' models, and data from early adopters, indicate that extensive deployment of the real-time model could reduce demand for human resources in business by perhaps as much as 10 percent overall, and by much more in some processes. In an economy growing only in the single digits, that has serious social implications.)

At the same time, dynamic systems and fast responses can cause instability if the decision analysis and feedback aren't handled correctly. And automatic control systems have to be able to smooth out data flows when anomalies are detected but aren't significant to the underlying process; without that smoothing, automatic systems tend to overcontrol. These are new design considerations that business systems designers will have to learn pretty much from scratch. Software and systems quality is also critical when full automation is the goal: You can't have systems that fail in unexpected ways or that contain the number of defects that a lot of business software has today.
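
To illustrate the smoothing point, here is a rough sketch in Python, with made-up numbers, that filters a noisy reading through an exponential moving average and acts only on the smoothed value, so a single anomalous spike doesn't trigger a response the underlying process never needed:

ALPHA = 0.1          # smoothing weight; smaller values damp transients more heavily
THRESHOLD = 100.0    # hypothetical action limit

def smooth(readings, alpha=ALPHA):
    # Exponential moving average: each new reading only nudges the estimate,
    # so a one-off anomaly is damped instead of being acted on immediately.
    estimate = readings[0]
    smoothed = []
    for r in readings:
        estimate = alpha * r + (1 - alpha) * estimate
        smoothed.append(estimate)
    return smoothed

readings = [92, 95, 93, 160, 94, 96, 95]   # one anomalous spike at 160
for raw, est in zip(readings, smooth(readings)):
    raw_says = "act" if raw > THRESHOLD else "hold"
    smoothed_says = "act" if est > THRESHOLD else "hold"
    print(f"raw={raw:5.1f} ({raw_says})   smoothed={est:6.1f} ({smoothed_says})")

Acting on the raw value would have overcontrolled at the spike; acting on the smoothed value does not. Choose the weight badly, of course, and you have simply traded overcontrol for sluggishness, which is exactly the kind of design judgment I mean.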

This might seem like a lot to ask—and it is. Yet embedded systems and safety-critical software developers get very close to defect-free designs and implementations—at least 99.999 percent (the magic "five nines") correct most of the time. And they do it at productivity rates 10 to 20 times those achieved by business software developers. There is a lot to learn from their processes and practices.

Getting Started

So how would someone get started on the real-time road? Look at areas that are hard to do well when people are involved. Typically, these areas will have complex sets of business rules, large volumes of data whose values change frequently, and a need for correct decisions made quickly and at unpredictable times.

Look for the domain experts, the people who know how things really work, and start to capture what they know; they will often be the people who do the work, not the people who manage it. Their knowledge will be incomplete but should let you build good initial rule sets. Then look at where the closed-loop controls will have to be (right now they will often be people) and what smoothing functions will be required (probably more people). Build a model of the automated version of the process and run real data through it, working on eliminating errors and improving exception handling until the result is reliable enough to put into production.
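
Here is a sketch of what a first cut at such a rule set might look like in Python. The order fields, thresholds and actions are entirely invented; the point is only that each rule the experts describe becomes a condition and an action, and anything no rule covers goes to a person as an exception:

# Each rule captured from the domain experts becomes a (name, condition, action) entry.
# The order fields, thresholds and actions here are invented for illustration.
RULES = [
    ("expedite_large_order", lambda o: o["value"] > 50_000,     "expedite"),
    ("hold_poor_credit",     lambda o: o["credit_score"] < 500, "hold_for_review"),
    ("standard_order",       lambda o: o["value"] <= 50_000,    "standard_fulfillment"),
]

def decide(order):
    for name, condition, action in RULES:
        if condition(order):
            return name, action
    # Nothing matched: hand the case to a person rather than guess.
    return "no_rule_matched", "route_to_exception_queue"

orders = [
    {"value": 72_000, "credit_score": 640},
    {"value": 12_000, "credit_score": 450},
]
for order in orders:
    print(order, "->", decide(order))

That exception queue is where the people, and the closed loop, still live until the rules are good enough to take over more of the work.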

The models are really important, not only as a design and debugging aid, but also as control mechanisms to detect when improvements are needed. By comparing results from the real system against predicted results from the model, you can see what, where and when you need to adapt.
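
One simple form the comparison can take, again sketched in Python with invented numbers and an invented tolerance: track how far actual outcomes drift from what the model predicted, and flag the process for review when the average error moves outside an agreed band.

TOLERANCE = 0.05    # flag for review when average error exceeds 5 percent (an invented figure)

def drift(predicted, actual):
    # Mean absolute percentage error between what the model predicted and what happened.
    errors = [abs(a - p) / p for p, a in zip(predicted, actual)]
    return sum(errors) / len(errors)

predicted = [100, 110, 120, 130]    # what the process model said would happen
actual    = [101, 108, 131, 142]    # what the real system actually did

d = drift(predicted, actual)
print(f"average drift: {d:.1%}")
if d > TOLERANCE:
    print("model and reality have diverged; time to revisit the rules")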

Some major problems remain. There is a lack of available skills to build these levels of automation, even in the arena of embedded systems, and I don't see too many business systems courses that are currently trying to teach these design approaches. The level of change management required for reconfiguring people and processes to work this way is high and simply can't be ignored. "People are not peripherals," as one of my colleagues likes to remind me, and they don't ordinarily like being monitored as closely as the new systems may require.

It won't be an easy evolution to dynamic sense and respond. There's no doubt in my mind that it's going to happen, however, and it will probably start sooner and happen faster than we think. But there will very likely be some spectacular foul-ups along the way.

John Parkinson is chief technologist for the Americas at Cap Gemini Ernst & Young.

One Step at a Time

Of course, nothing actually happens in real time. The special theory of relativity tells us that in the real universe, the relativity of simultaneity and the limiting velocity of light mean that events occur in a definite sequence within an observer's frame of reference. By definition, events that occur simultaneously cannot affect each other; otherwise, causality breaks down and you get all sorts of interesting but paradoxical outcomes.

What's material is the reaction time to an event. And reaction times have to be optimized to the needs of the situation and the available responses. Respond too fast and you'll miss or waste resources in the overshoot. Respond too slowly and you'll miss and get shot. Getting this right isn't easy.