Expert Voice: Icosystem's Eric Bonabeau on Agent-Based Modeling

By Jeffrey Rothfeder  |  Posted 06-16-2003


How closely can agent-based modeling represent business systems? Icosystem's Eric Bonabeau says it depends on what you're looking for.

Eric Bonabeau, chairman and chief scientific officer of Icosystem Corp. in Cambridge, Mass., became interested in complexity theory and adaptive problem solving by watching insects. More than a decade ago, when he was a research engineer at France Telecom, Bonabeau spent time in Santa Fe, N.M., at the foot of the Rocky Mountains, studying insect colonies. He learned that individual insects, though relatively small and weak on their own, are collectively capable of finding food, building sophisticated shelters, dividing up labor and defending their territories. Complex systems such as computer networks, supply chains and equity markets, Bonabeau was convinced, are not much different: They rely on their individual parts for their ability to perform extremely complicated activities.

In the late 1990s, agent-based modeling, a way to simulate such complex systems, came into wider use as a management tool, and Bonabeau quickly became a convert. "Agent-based modeling is a mind-set as much as a technology," he says. "It's a perfect way to view things and understand them by the behavior of their smallest components." After a stint at BiosGroup Inc., a Santa Fe complexity theory consultancy, Bonabeau founded Icosystem in 2000; the company designs agent-based models for corporations, and has been profitable, Bonabeau says, since September 2001. CIO Insight Contributing Editor Jeffrey Rothfeder caught up with Bonabeau recently to discuss the present status and future potential of agent-based modeling.

CIO Insight: What is agent-based modeling?
Bonabeau: Agent-based modeling is the main tool in the complexity science toolbox. People have been thinking in terms of agent-based modeling for many years but just didn't have the computing power to actually make it useful until recently. With agent-based modeling, you describe a system from the bottom up, from the point of view of its constituent units, as opposed to a top-down description, where you look at properties at the aggregate level without worrying about the system's constituent elements. The novelty in agent-based modeling compared to what physicists would call micro-simulation is that we're talking about the possibility of modeling human systems, where the agents are human beings with complex behavior.

Give us an example of how an agent-based model would work.
Think about a traffic jam. It's very hard to capture the properties of a traffic jam at the aggregate level without describing what individual drivers do. These drivers are the agents in an agent-based model. Each of these agents/drivers is different, and the characteristics of their driving behavior become the rules in the model. Just a few variables, five to ten, can describe how aggressive they are, how they react to a slowdown, how often they change lanes and whether they like to pass on the right. When you run this model you can reproduce a traffic jam, but this time you can closely watch the individual behavior of the drivers, and you can inject different events (a forest fire near the highway, for instance) to see how they would affect the emergent properties, the visible properties, of the traffic jam.

These emergent properties, we find, are the result not only of the behavior of individual drivers but of the interactions between them as well. What I do on the road depends on what others do. Some of these emergent properties are counterintuitive. One example that I think is very interesting involves London's orbital motorway. They changed the speed limit from a uniform 50 miles an hour to 35 miles an hour in some portions, and then they varied speeds depending on traffic flow. They discovered that by reducing the speed limit, you actually increase the average speed of the cars. So that's an example of a counterintuitive phenomenon that you could only predict with an agent-based model. It could not be explained without looking at how the parts behave and interact to make the whole.
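A minimal sketch gives a feel for how few rules such a model needs. The following Python toy is a Nagel-Schreckenberg-style cellular automaton, not Icosystem's code, and every parameter in it is illustrative: drivers circle a ring road under three local rules (speed up when you can, never hit the car ahead, occasionally brake at random), and the average speed of the flow is the emergent property you watch.

```python
import random

random.seed(42)
ROAD_LENGTH = 100   # cells on a circular road
N_CARS = 30
V_MAX = 5           # top speed, in cells per time step
P_SLOW = 0.3        # per-step chance of random braking (driver "nervousness")

positions = sorted(random.sample(range(ROAD_LENGTH), N_CARS))
speeds = [0] * N_CARS

def step(positions, speeds):
    n = len(positions)
    new_speeds = []
    for i in range(n):
        # open cells between this car and the one ahead, wrapping the ring
        gap = (positions[(i + 1) % n] - positions[i] - 1) % ROAD_LENGTH
        v = min(speeds[i] + 1, V_MAX, gap)        # accelerate, never collide
        if v > 0 and random.random() < P_SLOW:    # the individual-level quirk
            v -= 1
        new_speeds.append(v)
    new_positions = [(p + v) % ROAD_LENGTH for p, v in zip(positions, new_speeds)]
    order = sorted(range(n), key=new_positions.__getitem__)  # re-sort after wraparound
    return [new_positions[i] for i in order], [new_speeds[i] for i in order]

for _ in range(500):
    positions, speeds = step(positions, speeds)

# The emergent property: average speed of the whole flow
print("average speed:", sum(speeds) / len(speeds))
```

Raise P_SLOW or add cars and "phantom" jams form and travel backward through the flow with no accident anywhere, the same flavor of counterintuitive emergent behavior as the London speed-limit result.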

How do companies that run agent-based simulations of their operations react to counterintuitive results?
We typically work with clients who already know that the world around them is so complex that there are things they don't understand, that they won't be able to grasp, and sometimes the solution that we propose is not understandable. But this is a very, very small fraction of all business executives: 1 percent of top executives at Fortune 500 corporations. These executives understand the idea of modeling well enough that they usually take what we offer on faith. There are millions of interactions and pathways through which events propagate, and each of them is a tiny fraction of the full explanation. You can't reduce the explanation to two or three or five simple sentences. It's often too complex for that.

For example?
We've been working for a software company, a leader in the storage field, that was interested in moving from a centralized storage system to a decentralized network storage system. The company wanted to implement rules for data management locally, in the various nodes of its storage network. They came to us with three sets of rules and asked us to test them, to see if they were the right ones and which was the best. But none of them was particularly good, because they came out of a centralized mind-set, which is all the company knew. When you implement these rules locally at a node in the storage network, then in some situations, for some configurations of traffic and document distribution over the network, an action you take can have ripple effects throughout the entire network, creating congestion or outrageous latency delays for something as simple as a document request. Because the thinking behind these rules was centralized, the effect of one node on another was quite strong. Without modeling it, though, it would be very hard for a human brain to predict this pathological behavior of the system, because there are so many pathways and so many influences in the network. By definition, what creates congestion is not a single message. It's the fact that there are many, many packets and messages traveling all over the network, which are themselves the result of many, many different things happening all over the network. You cannot reduce this to, oh, it is because A influences B but not C.
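A toy example (not the client's system) shows how an innocuous local rule can ripple. Suppose each node locates a document by querying a few peers, and any peer that lacks the document re-asks its own peers, down to a fixed search depth. The rule is deliberately naive (no node remembers what it has already been asked), and the traffic generated by one request grows exponentially:

```python
import random
from collections import deque

# A toy network: each node knows `degree` random peers.
def make_network(n=100, degree=4, seed=3):
    rng = random.Random(seed)
    return {i: rng.sample([j for j in range(n) if j != i], degree)
            for i in range(n)}

def flood_search(network, start, depth):
    """Count the messages one document request generates."""
    messages = 0
    frontier = deque([(start, depth)])
    while frontier:
        node, remaining = frontier.popleft()
        if remaining == 0:
            continue
        for peer in network[node]:
            messages += 1                            # every query is traffic
            frontier.append((peer, remaining - 1))   # the peer re-asks its peers
    return messages

net = make_network()
for depth in range(1, 7):
    print(f"search depth {depth}: {flood_search(net, start=0, depth=depth)} messages")
```

No single node does anything unreasonable; the congestion is a property of the interactions, exactly as Bonabeau describes.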

But if it's so complex and so difficult for humans to understand, how do you know that your simulation is an accurate representation of what you're modeling?
Well, first of all, the very notion of an accurate representation is a slippery concept. A model is always a simplified description of the real world. And there is no such thing as an accurate model without reference to the question it's trying to address. You have to know the question, the issue that you're addressing with the model. You have to have very, very specific objectives. And once you've got that, you have to decide what level of description you're going to use, what variables are going to be in your model, and so on. That's where the art resides. And then you're driven in your model-building process by what kind of data is available to validate the simulation. You're not going to build an agent-based model—or any kind of model, for that matter—without taking into account how you're going to calibrate it and validate it against what you're trying to model. And once you're happy with the model's accuracy, can you trust how the model responds to things it has never seen? This is where human judgment plays a role. You test an intervention, which is a new business condition—maybe a new pricing strategy or a change in regulatory policy—and see how the model reacts. Then you have to ask an expert, does it make sense to you? You don't have real data to back it up. You're using a model of something that exists to respond to something that has never existed before.
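Schematically, the calibrate-validate-intervene loop he describes might look like the sketch below. The "model" here is a deliberately trivial stand-in (a noisy linear response with one free parameter); the workflow, not the model, is the point.

```python
import random

def run_model(sensitivity, condition, seed=0):
    """Stand-in model: outcome = sensitivity * condition, plus noise."""
    rng = random.Random(seed)   # fixed seed keeps runs reproducible
    return sensitivity * condition + rng.gauss(0, 0.1)

# Historical observations: (business condition, observed outcome)
calibration_data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]
holdout_data = [(4.0, 8.1), (5.0, 9.8)]

def error(sensitivity, data):
    return sum((run_model(sensitivity, c) - y) ** 2 for c, y in data)

# Calibrate: pick the parameter value that best reproduces the past...
candidates = [s / 10 for s in range(1, 40)]
best = min(candidates, key=lambda s: error(s, calibration_data))

# ...then validate against data the model has never seen.
print("calibrated sensitivity:", best)
print("holdout error:", round(error(best, holdout_data), 3))

# Finally, test an intervention: a condition far outside the historical
# range. No data can confirm this prediction; a human expert has to
# judge whether it makes sense.
print("predicted outcome under the new condition:", round(run_model(best, 8.0), 2))
```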

What kinds of business systems does agent-based modeling work best with?
It works best when the system is composed of many constituent units that interact and where the behavior of the units can be described in simple terms. So it's a situation where the complexity of the whole system emerges out of relatively simple behavior at the lowest level. If you have a system in which the behavior is already very complex, and all these behaviors put together produce something that's even more complex, you might lose the power of the approach.

One of the most fascinating things about agent-based modeling is that often we're simulating human behavior. Human behavior is extremely complex, but depending on the issue that you're trying to address, you might actually be able to describe human behavior in very simple terms, like human beings in a supermarket. They have a shopping basket, there's a finite number of things they can do in a supermarket, they're very constrained. You might be able to characterize their behavior with 10 or 15 variables.


Do you test a lot of interventions before knowing you have the right solution?
Yes, you're building the model to solve a problem. So you have an objective. You want to maximize profitability by creating a pricing strategy that will do that, for example. Or you want to create a set of regulations for the stock market that will prevent collusion. Based on your research and on knowledge gathered from people in the field or experts in the organization, you try out a series of interventions. And you select the 3 or 4 or 10 percent of interventions that seem to produce the best outcomes with respect to your objective. Then you breed them and you mutate them, and you have a new generation of solutions, and so on and so forth.
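What Bonabeau describes is essentially an evolutionary search over candidate interventions. In the hedged sketch below, an intervention is a vector of five prices, and a made-up profit function stands in for the output of an agent-based simulation; all names and numbers are illustrative.

```python
import random

rng = random.Random(42)
N_PRICES = 5

def profit(prices):
    # Toy objective: revenue p * (10 - p) per item, demand never negative.
    return sum(p * max(0.0, 10.0 - p) for p in prices)

def breed(a, b):
    # Uniform crossover: each price comes from one parent or the other.
    return [rng.choice(pair) for pair in zip(a, b)]

def mutate(prices, rate=0.2, scale=0.5):
    return [p + rng.gauss(0, scale) if rng.random() < rate else p
            for p in prices]

population = [[rng.uniform(0, 10) for _ in range(N_PRICES)] for _ in range(100)]
for generation in range(50):
    population.sort(key=profit, reverse=True)
    elite = population[:10]   # keep the best ~10 percent of interventions
    population = elite + [mutate(breed(rng.choice(elite), rng.choice(elite)))
                          for _ in range(90)]

best = max(population, key=profit)
print("best prices found:", [round(p, 2) for p in best])
print("simulated profit:", round(profit(best), 2))
```

In a real engagement the call to profit() would be a full run of the agent-based model, which is why selecting only a small top fraction per generation matters: each evaluation is expensive.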

By the way, in many projects that we're working on, the objective is not to find an optimal solution; the objective is to find a robust solution. So a robust intervention, one that will work fine no matter what, is what you want, not one that will work optimally under a very specific set of conditions. Because if there is a change in the environment, if there is something that you forgot to include in the model that is actually key, then you end up with something that is optimal but just doesn't work in the real world; it's fragile and brittle. So optimality is another slippery concept.
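One common way to operationalize that distinction, shown in the made-up example below, is to score each candidate strategy across several scenarios and judge it by its worst case rather than its best. The scenario multipliers and payoffs are purely illustrative.

```python
# Each scenario multiplier stands for a different future environment.
scenarios = {"base": 1.0, "demand_shock": 0.6, "new_competitor": 0.4}

def tuned_strategy(env):
    # Finely optimized for the base case, brittle everywhere else.
    return 100.0 if env == 1.0 else 20.0 * env

def robust_strategy(env):
    # Never spectacular, never terrible.
    return 70.0 * env

for strategy in (tuned_strategy, robust_strategy):
    outcomes = [strategy(env) for env in scenarios.values()]
    print(f"{strategy.__name__}: best={max(outcomes)}, worst={min(outcomes)}")
```

Judged by best case, the tuned strategy wins; judged by worst case, the robust one does. Which criterion you pick is exactly the optimal-versus-robust decision Bonabeau is describing.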

This is becoming a big issue in IT areas like supply chains, where the notion of an optimal chain has been the ideal for many years. But now it's becoming clear that the most adaptive supply chain is what's desirable, not the most optimized one, because an optimized chain just ends up being inflexible.

A related concept is one that people don't think about enough: evolvability. Instead of building a system that is optimal or even robust, you may want to build it to prepare for future generations of products. If you're a software company that builds a routing algorithm, you don't want to make it too dependent on Cisco routers the way they are built today because in two years they may be very different, and then you're going to be stuck with software that you're going to have to redo from scratch. So what you want to do is make it evolvable, easy to transform for the next generation.

Can an agent-based model do that? Can it identify whether a system has the emergent property of evolvability?
Right, you could use an agent-based model to build scenarios for evolvability by determining the flexibility of different components in the system and how this affects the overall performance of the system.

What are some of the applications that you're working on?
We have clients that are interested in pricing strategies or in marketing strategies—we've worked on predicting the performance of new-product launches and on defining the right mix of marketing channels. We do portfolio management work for the pharmaceutical industry where we use agent-based modeling to simulate the operations of an R&D facility, and the intervention is how to implement a profitability strategy. Right now, we're predicting how consumers will choose new healthcare plans. We have a lot of data on how they made choices in the past and we're actually very, very good at predicting what they'll choose in the future. We're between 95 percent and 99 percent accurate. Consumer behavior is going to be one of the most successful applications of agent-based modeling in the next few years. That's because with agent-based modeling, you can go way beyond traditional techniques like econometrics. Using agent-based modeling you actually model the decision-making behavior, so you have access to a deeper layer inside the consumer.
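Icosystem has not published these models, but the difference between fitting an aggregate curve and modeling the decision rule itself can be illustrated generically: give each simulated consumer personal preference weights and a utility-based, multinomial-logit-style choice rule over hypothetical plans, then let the market shares emerge. Every plan, attribute and weight below is hypothetical.

```python
import math
import random

rng = random.Random(0)

plans = {               # name: (monthly premium in dollars, coverage score 0-1)
    "basic":    (150, 0.50),
    "standard": (300, 0.75),
    "premium":  (500, 0.95),
}

def choose_plan(price_sensitivity, coverage_weight):
    """Pick a plan with probability proportional to exp(utility)."""
    utilities = {name: coverage_weight * coverage - price_sensitivity * premium
                 for name, (premium, coverage) in plans.items()}
    weights = {name: math.exp(u) for name, u in utilities.items()}
    r = rng.random() * sum(weights.values())
    for name, w in weights.items():
        r -= w
        if r <= 0:
            return name
    return name  # guard against floating-point rounding

# A heterogeneous population: every agent weighs price and coverage differently.
choices = [choose_plan(price_sensitivity=rng.uniform(0.001, 0.02),
                       coverage_weight=rng.uniform(1.0, 10.0))
           for _ in range(10_000)]

for name in plans:
    print(name, choices.count(name))  # the emergent market shares
```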

What is a typical interaction between you and a potential client? How much do you have to educate potential clients about agent-based modeling?
The executives we work with are usually familiar with complexity science and agent-based modeling; otherwise it's an uphill battle that we can't afford. We're only working with people we know are going to be responsive and understand the strengths and weaknesses of our approach. That's number one. Number two is, depending on the kind of issue they want us to address, we have to assess whether or not we can do it, and we have to evaluate what kind of value we can bring to the client. Is it going to be prediction, or is it going to be insight? Usually, if it's insight, it's because we don't have enough data to do forecasting, but we can at least identify and describe how their system operates. I'm much more comfortable with prediction, because when it comes to insight, I really don't know how to measure the value that we're bringing to our clients. So when clients ask for insight modeling, I have to be very candid and say, 'Please don't make any decisions based on this model. It's a model that is aimed at helping you think about your problem, not a model that is aimed at solving your problem.'

Let's assume we're building a predictive model for a client. Usually we do a small pilot experiment in which we take the model's results and try them out in the real world. Hopefully, what occurs in the real world is consistent with the predictions the model made. Or if it's not completely consistent, we may discover certain things that we failed to take into account. But usually we come out of the pilot comfortable with what we've done, and then it's a matter of the client making a decision to use the model on a large scale in the real world.

But implementing a full-scale application like that requires a lot of faith from these executives. Failure isn't easily accepted in corporations. And faith in agent-based modeling will only spread after some high-profile company achieves quantifiable results with it. Real financial results. When a company says, 'We saved or made $200 million thanks to agent-based modeling,' there will be a big difference in the popularity of this approach. I think that is what is needed. We don't have that yet.

Please send questions and comments on this story to editors@cioinsight-ziffdavis.com.