Case Study: Nasdaq

By Jeffrey Rothfeder  |  Posted 06-16-2003


Here's a story that got lost as the dot-com bubble expanded. In 1998, as the Nasdaq Composite Index smashed through 2000—on its way to over 5000 just two years later—executives at The Nasdaq Stock Market Inc. were worried about illogically high equity valuations. Stock prices were going up faster than earnings, and outlandish price-to-earnings ratios—when there were any earnings to consider—weren't dissuading investors from buying more and more shares.

"Supply was racing to keep up with demand, and the traditional equilibria curves in which high P/Es curbed investor enthusiasm were falling by the wayside," says Alfred Berkeley, who was president of Nasdaq at the time. Berkeley was particularly concerned that the bubble then just beginning to expand would suddenly burst, taking with it both Nasdaq's profits and its reputation; he couldn't have known, of course, that the real run-up hadn't even begun. And he was eager to figure out a way to address the problem before it got worse.

Drawing on more than 20 years of experience as an investment banker focused on technology companies and startups before becoming president of Nasdaq in the mid-1990s, Berkeley was convinced that the deviation in market performance he had identified went well beyond anything traditional statistical modeling could handle. This was new territory: complex behavior that defied market expectations and broke routine economic rules. He thought standard number-crunching would be too rigid to explain the anomalies.

So Berkeley contacted Michael Brown, Nasdaq's chairman at the time (and a former chief financial officer at Microsoft Corp.), and asked him what Gates & Co. would do if faced with such a deeply perplexing business irregularity. Brown responded that Microsoft would hire the smartest people anywhere in the world to analyze it—and he told Berkeley that for this problem, he should contact BiosGroup Inc., a business strategy consultancy in Santa Fe, N.M.

Out of this conversation an intriguing collaboration emerged between Nasdaq and BiosGroup, in which a new, virtually untested programming technique called agent-based modeling was used to create an alternate, computer-based version of the exchange that closely mimicked the behavior of Nasdaq and its participants. By running this dead-ringer simulation of Nasdaq over a period of nearly three years, BiosGroup scientists succeeded in forecasting the exchange's performance under a range of market conditions. For instance, they mapped the influence of day traders on stock prices under varied circumstances, which provided clues about what was behind the big run-up in technology stocks. And they predicted what would happen to stock prices and liquidity when equity tick sizes were lowered to a penny from the traditional sixteenth of a dollar, or 6.25 cents, as the Securities and Exchange Commission was then planning to mandate. In the end, the approach didn't provide Berkeley with the specific information he had originally wanted—that is, a formula for keeping the dot-com bubble from expanding.

The crash of 2000 took care of that. But in Berkeley's view, it was the unexpected and unintended results of agent-based modeling that were most valuable: The program offered never-before-obtained insights into how markets like Nasdaq operate and how they respond to regulatory changes. "It was extremely useful to me because it had predictive power," says Berkeley, now Nasdaq's vice chairman. "It clarified for me which management issues are important to watch and that I must be prepared to address."

Traffic Jam

Nasdaq took a bit of a flyer when it turned to agent-based modeling in 1998. At the time, agent-based modeling was a promising but far from perfected technology that had its beginnings in complexity science, a popular area of research for computer and cognitive scientists during the past couple of decades. The central tenet of this theory is that complex systems emerge from a series of seemingly random actions by individual "agents" responding to each other. For instance, the flow of traffic on a freeway is the result of the different driving styles of hundreds of people behind the wheel; some are timid and hit the brakes frequently, others tailgate and go 20 miles over the speed limit, still others are alert or drowsy. Complexity theorists argue that it's impossible to understand or predict traffic patterns without examining those patterns from the bottom up—from the interactions among drivers, and not from the overall, visible movement on the freeway.
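
To make the bottom-up idea concrete, here is a minimal sketch in the spirit of the Nagel-Schreckenberg cellular-automaton traffic model, a standard classroom illustration of emergence. The road length, car count and per-driver braking probabilities are assumptions invented for this sketch; nothing here comes from the Nasdaq project.

```python
import random

ROAD_LENGTH = 200   # cells on a circular one-lane road
N_CARS = 60
V_MAX = 5           # maximum speed, in cells per time step

random.seed(11)
# Each driver agent has its own braking probability: timid drivers brake often,
# aggressive ones rarely. The two values are illustrative assumptions.
positions = sorted(random.sample(range(ROAD_LENGTH), N_CARS))
speeds = [0] * N_CARS
brake_prob = [random.choice([0.5, 0.1]) for _ in range(N_CARS)]

def step(positions, speeds):
    n = len(positions)
    new_speeds = []
    for i in range(n):
        gap = (positions[(i + 1) % n] - positions[i] - 1) % ROAD_LENGTH  # room ahead
        v = min(speeds[i] + 1, V_MAX)    # try to accelerate
        v = min(v, gap)                  # never drive into the car ahead
        if v > 0 and random.random() < brake_prob[i]:
            v -= 1                       # random braking, per driving style
        new_speeds.append(v)
    new_positions = [(p + v) % ROAD_LENGTH for p, v in zip(positions, new_speeds)]
    return new_positions, new_speeds

for _ in range(200):
    positions, speeds = step(positions, speeds)

# An emergent, system-level property that no single driver controls: average flow.
print("average speed after 200 steps:", sum(speeds) / len(speeds))
```

A handful of per-driver rules like these are enough to produce stop-and-go waves and swings in average speed that no individual driver intends, which is the sort of emergent behavior complexity theorists have in mind.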

Businesses, too, are made up of numerous complex systems. A supply chain, for example, is composed of factories, warehouses, trucks, raw materials, inventory and retailers, along with the managers and line employees working at various jobs throughout the network. By their actions, each of these "low-level" agents determines the emergent phenomena—or the performance—of the larger system, according to complexity scientists. Agent-based modeling is a computerized rendering of complexity science. It's a way to simulate complex systems by using mathematical algorithms to create models of agents—virtual people and virtual objects—that represent individual components in a system. By setting these agents in motion, managers can watch and measure how their organizations operate—and thus determine, for instance, specific points of inefficiency as well as the origin of such weaknesses. And by throwing in new wrinkles for the agents to react to—such as a regulatory change, a competitor with a similar, less expensive product, or an unlikely real-world decision such as moving trucks with less than full loads—it's possible to see how the organization will react or be affected.
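
The general pattern is the same whatever the business: define the agents and their rules, run the system, then introduce the wrinkle and compare the emergent outcome. The sketch below uses the trucks-with-partial-loads example from the paragraph above; the production rate, demand range, truck capacity and dispatch rule are all invented for illustration and describe no real supply chain.

```python
import random

def simulate(dispatch_threshold, days=365, seed=42):
    """One year of a toy factory-truck-retailer chain; returns lost sales."""
    random.seed(seed)
    dock = 0          # units waiting at the factory loading dock
    shelf = 120       # units on the retailer's shelf
    lost_sales = 0
    truck_capacity = 100
    for _ in range(days):
        dock += 40                             # daily production
        if dock >= dispatch_threshold:         # the truck agent's rule for rolling
            shipped = min(dock, truck_capacity)
            dock -= shipped
            shelf += shipped
        demand = random.randint(20, 60)        # the retailer agent's daily demand
        sold = min(demand, shelf)
        shelf -= sold
        lost_sales += demand - sold
    return lost_sales

# The "wrinkle": let trucks leave with partial loads instead of waiting to fill up,
# then compare the system-level outcome under the two policies.
print("full loads only  :", simulate(dispatch_threshold=100))
print("partial loads OK :", simulate(dispatch_threshold=40))
```

Changing one low-level rule, namely when a truck is allowed to leave the dock, changes a system-level number, lost sales, that no single agent in the chain controls.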

"An agent can be anything," says Stuart Kauffman, a BiosGroup founder and one of the first complexity scientists at Santa Fe Institute, a research center with an emphasis on understanding how complicated patterns and systems emerge from simple randomness. "It can be a pallet, it can be an SKU, it can be a truck, it can be a cross-loading dock. It can be any part of the system. Some agents make decisions and some agents only have a certain amount of time to decide what to do. Whatever they are, their interaction and how it changes the dynamics of the complex system we're studying are what we're interested in."

The results of agent-based modeling can be surprising. Consider the model built by Sainsbury's, the British supermarket chain. Based on bar-code data, video camera studies and expert knowledge, the model—called SimStore—duplicated shopping behavior down to such details as the percentage of shoppers who turn right after entering the store and the average time a consumer spends in different supermarket departments. Rules were incorporated into the model that mimicked real-life behavior: Some shoppers would always move from wherever they were to the nearest item on their list; others would go, instead, to the next item on the list.
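
A minimal sketch of just those two movement rules might look like the following, with a single hypothetical aisle of item locations; the layout, list lengths and distances are invented and bear no relation to Sainsbury's actual SimStore.

```python
import random

# Hypothetical positions of items along a one-dimensional aisle (not a real layout).
ITEM_LOCATIONS = {"bread": 2, "milk": 15, "cheese": 7, "wine": 28, "apples": 4, "tea": 20}

def shop(shopping_list, strategy):
    """Return the total distance walked by one shopper agent."""
    position = 0                       # the entrance
    remaining = list(shopping_list)
    walked = 0
    while remaining:
        if strategy == "nearest":      # always head for the closest remaining item
            item = min(remaining, key=lambda i: abs(position - ITEM_LOCATIONS[i]))
        else:                          # "in_order": follow the list as written
            item = remaining[0]
        walked += abs(position - ITEM_LOCATIONS[item])
        position = ITEM_LOCATIONS[item]
        remaining.remove(item)
    return walked

random.seed(1)
shoppers = [random.sample(list(ITEM_LOCATIONS), k=4) for _ in range(1000)]
for strategy in ("nearest", "in_order"):
    total = sum(shop(s, strategy) for s in shoppers)
    print(strategy, "average distance walked:", round(total / len(shoppers), 1))
```

Even in this stripped-down form, a single behavioral rule changes an aggregate outcome, the distance walked and, with it, the time spent in the store.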

Perhaps the most revelatory insight that Sainsbury's took away from SimStore, says Eric Bonabeau, chairman and chief scientific officer at Icosystem Corp. (see Expert Voices), a complexity theory consultancy in Cambridge, Mass., was that when there were more shoppers in the supermarket, sales of wine, displayed in the back of the stores, dropped. The reason, Bonabeau posits, is that as the store becomes crowded, there are more clusters of customers throughout the supermarket. That discourages people from fighting their way through these groups to reach the wine. This is extremely useful knowledge for Sainsbury's executives, the kind that could lead to creative decisions they might not have dared make without the agent-based model. For instance, the supermarket's managers may decide to limit discounting—typically viewed as a winning strategy that brings more customers into the store—because while it potentially increases sales of low-margin products, the crowds it attracts hinder more valuable high-margin wine purchases.

The Mockup

Nasdaq's Brown was a recent and enthusiastic convert to complexity theory when Berkeley asked him for help in solving the dot-com quandary. Just a short time earlier, on his way to a conference, Brown began to read Kauffman's At Home in the Universe (Oxford University Press, 1995). He was so enthralled by the explanations of the way enterprises work and by the descriptions of how agents at the lowest systems level can determine the efficiency of upper-tier operations that he couldn't put the book down. In fact, he stayed in his hotel room throughout the next day reading it—and missed the conference entirely. Soon after, Brown called Kauffman and the pair held long discussions about the ways companies might use complexity theory to understand and improve their performance. When Berkeley asked for help, Brown was excited about the possibility of seeing BiosGroup's agent-based modeling approach first-hand at the exchange.

During the initial meetings in 1998 between Nasdaq and BiosGroup, the decision was made to start with a simple model of the exchange to see if Nasdaq's real-world performance could be replicated. Nasdaq, an electronic exchange, essentially links market makers and other stock transaction companies over a network to manage the purchase and sale of about 3,600 stocks. Most market makers oversee dozens if not hundreds of different stocks, and their primary responsibility is to keep a stock liquid by matching buy and sell orders as they are placed, even if they have to use their own money to do so. While Nasdaq makes money from listing fees, market-data distribution and other financial products, it relies on market makers for much of its revenue, chiefly through a transaction charge placed on every trade executed on the exchange.

To construct the first Nasdaq model, BiosGroup scientists programmed eight different broker-dealer agents into the system, each of them acting as the market maker in a single stock. These agents, which were designed based on detailed records of trading patterns and interviews with market makers, were embedded with rules that allowed them to observe order flow through the simulation and to adapt their trading strategies as market conditions changed. If there was an influx of investors into a particular stock, for instance, the market maker could react by changing the quote. Investors were represented by about 50 different agents.
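
As a rough illustration of the kind of rule involved, the sketch below has a single dealer agent that quotes a bid and an ask one sixteenth apart, fills randomly arriving orders from 50 zero-intelligence investor agents, and leans its quotes against the inventory it accumulates. The class names, the inventory rule and every number are assumptions made for this sketch; BiosGroup's agents, built from trading records and interviews, were far more elaborate.

```python
import random

TICK = 1 / 16      # the pre-decimalization increment: a sixteenth of a dollar

class MarketMaker:
    """Dealer agent for one stock: quotes a bid and an ask, fills orders,
    and shifts its quotes when order flow becomes one-sided."""
    def __init__(self, mid=20.0, spread_ticks=1):
        self.mid = mid
        self.spread = spread_ticks * TICK
        self.inventory = 0

    def quotes(self):
        return self.mid - self.spread / 2, self.mid + self.spread / 2

    def fill(self, order):             # order: +1 = investor buys, -1 = investor sells
        bid, ask = self.quotes()
        self.inventory -= order        # the dealer takes the other side of the trade
        # Simple adaptive rule: lean the quotes against accumulating inventory.
        self.mid += 0.1 * TICK * (-self.inventory)
        return ask if order > 0 else bid

class Investor:
    """Zero-intelligence investor agent: buys or sells at random."""
    def order(self):
        return random.choice([+1, -1])

random.seed(0)
dealer = MarketMaker()
investors = [Investor() for _ in range(50)]
for _ in range(1000):
    dealer.fill(random.choice(investors).order())

print("final mid-quote:", round(dealer.mid, 4), " dealer inventory:", dealer.inventory)
```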

After running a series of simple mock trades that validated the prototype's ability to operate as an electronic market, the BiosGroup team proceeded to test the impact of decimalization on the model and on the way the model's agents interacted, in hopes of obtaining clues about how the real Nasdaq would react—a particularly timely experiment, because the SEC was then preparing to order U.S. exchanges to switch to share-price increments of a penny by mid-2000 (the deadline eventually was postponed until the following April). Nasdaq officials were concerned about this new equity tick, because its market makers generate their profits from the difference between the price at which a stock can be sold and the price at which it can be bought—the bid and the ask. Nasdaq management worried that if decimalization narrowed this gap by 5 1/4 cents, earnings of the market makers would suffer. In response, the dealers might try new trading strategies to avoid shortfalls. They might, for instance, avoid some small trades because the effort wasn't worth the meager profits these transactions would provide. Or they might migrate to Nasdaq's newest rivals, other electronic exchanges, such as Instinet and Island, that link buyers and sellers directly. Because of their peer-to-peer efficiency, these so-called ECNs (electronic communications networks) have tended to offer narrower bid-ask gaps than Nasdaq; consequently, Nasdaq's market makers had generally avoided trading on them. But by narrowing the spreads, decimalization could take away Nasdaq's competitive advantage—especially because some ECNs pay fees to market makers for shifting business to them. These and numerous other complex possibilities might reduce the number of transactions market makers execute on Nasdaq and thus trim its revenue.

When decimalization was introduced into the BiosGroup model, the average bid-ask spread, not surprisingly, narrowed: Often, the gap was only a penny. But the simulation's market-maker agents continued to trade with each other with no slowdown in activity. And so-called price discovery—which measures whether a stock is trading at its appropriate value (based on fundamental measures such as price-to-earnings ratios), and whether its price is responding appropriately to corporate events such as acquisitions, earnings reports and executive changes—was unaffected.

Then the modelers introduced aggressive day traders—investors programmed to take a chance on stocks if their analysis indicated that they could turn them quickly for a fast profit—into the simulation. As the post-decimalization bid-ask increment became smaller, gambles became less risky: Instead of a sixteenth of a dollar between the bid and ask, the gap was only a penny—and agents playing day traders saw their opening. The system was inundated with these speculative trades. The result: The share prices in the model no longer truly reflected company performance and business conditions. In other words, the agent-based model began to generate the very distant beginnings of a bubble. "It was a completely unexpected result: Letting rogue traders participate in the model to the extent that they would actively in the real world when the bid-ask decreases overwhelmed the market, and we lost both price discovery and liquidity," says Bob MacDonald, former CEO at BiosGroup and now a director at NuTech Solutions, which purchased BiosGroup earlier this year. "It wasn't an absolute characteristic of the market that you lost price discovery. It just occurred when investors were scrambling in between a small bid and ask gap."
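
The mechanism MacDonald describes can be caricatured in a few lines: once the spread collapses, momentum-chasing traders whose round-trip cost used to exceed any move worth chasing suddenly find small moves profitable, and their activity pushes prices away from fundamental value. The sketch below is only that caricature; the thresholds, coefficients and the single-number measure of price discovery are assumptions invented for illustration, and the BiosGroup simulation was vastly more detailed.

```python
import random

def run_market(tick, n_steps=5000, n_daytraders=20, seed=7):
    """Toy market, not the BiosGroup model. Value investors pull the price toward a
    slowly drifting fundamental value; day-trader agents chase momentum, but each
    participates only if the spread (its round-trip cost, assumed to equal one tick)
    is smaller than the move it hopes to capture."""
    random.seed(seed)
    spread = tick
    # The smallest move each day trader considers worth chasing (an assumption).
    thresholds = [random.uniform(0.01, 0.06) for _ in range(n_daytraders)]
    active = sum(1 for t in thresholds if spread <= t)

    fundamental, price, prev_price = 100.0, 100.0, 100.0
    total_gap = 0.0
    for _ in range(n_steps):
        fundamental += random.gauss(0, 0.02)       # slow drift in the "true" value
        momentum = price - prev_price
        prev_price = price
        price += (0.2 * (fundamental - price)      # value investors pull toward value
                  + 0.02 * active * momentum       # day traders push along the last move
                  + random.gauss(0, 0.01))         # residual order-flow noise
        total_gap += abs(price - fundamental)
    return total_gap / n_steps                     # crude stand-in for price discovery

print("sixteenths (tick = 1/16):", round(run_market(tick=1 / 16), 4))
print("pennies    (tick = 0.01):", round(run_market(tick=0.01), 4))
```

Under these assumptions the penny-tick run ends with a wider average gap between price and fundamental value than the sixteenths run, which echoes the qualitative pattern described above rather than reproducing the actual result.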

Scalability

Nasdaq's management wanted to know whether this outcome would be repeated in a much larger and more intricate simulation that represented even more closely the exchange's day-to-day activities. So BiosGroup scientists built a $1 million-plus version of the model. This second virtual Nasdaq included agents playing the roles of upwards of 50 market makers as well as dozens of day traders, portfolio managers at large financial institutions and conservative individual investors. This time, the market makers were much closer replicas of their real-world counterparts—programmers used a vast amount of historical data and weeks of observation to closely mimic trading patterns—and the agents were taught to trade in many stocks at once. In addition, they were embedded with rules allowing them to learn from their experiences in the simulated model; set their quotes and execute their trades based on the highest profit they could generate from a transaction; handle all types of market orders, including straight transactions, limit orders and negotiated orders; and make trades on either Nasdaq's electronic system or ECNs. Agents modeled after investors were given a huge amount of information for making trading decisions, including unsubstantiated rumors and verified news. They were programmed to decide whether to buy or sell by comparing the value that the available information gave to a stock with the price of the equity on Nasdaq or any other electronic marketplace. After hundreds of thousands of simulated runs, BiosGroup analysts determined that 70 percent of the time the model accurately reflected the real-world performance of market makers overall, and that it could serve as the basis for further refinement.
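
One ingredient described above, investor agents that form a private valuation from verified news and unsubstantiated rumors and then compare it with the best quote available on Nasdaq or an ECN, might be sketched as follows. The function names, reliability weights and quotes are invented for illustration; this is not BiosGroup's implementation.

```python
def private_valuation(base_value, signals):
    """Each signal is (value_hint, reliability): verified news carries a reliability
    near 1.0, an unsubstantiated rumor much less. The weights are assumptions."""
    estimate = base_value
    for hint, reliability in signals:
        estimate += reliability * (hint - estimate)
    return estimate

def decide(valuation, nasdaq_quote, ecn_quote):
    """Buy at whichever venue shows the cheaper ask if the stock looks underpriced,
    sell at whichever shows the better bid if it looks overpriced."""
    best_ask = min(nasdaq_quote["ask"], ecn_quote["ask"])
    best_bid = max(nasdaq_quote["bid"], ecn_quote["bid"])
    if valuation > best_ask:
        venue = "Nasdaq" if nasdaq_quote["ask"] <= ecn_quote["ask"] else "ECN"
        return ("buy", venue, best_ask)
    if valuation < best_bid:
        venue = "Nasdaq" if nasdaq_quote["bid"] >= ecn_quote["bid"] else "ECN"
        return ("sell", venue, best_bid)
    return ("hold", None, None)

signals = [(21.0, 0.9),    # verified earnings news: heavily weighted
           (24.0, 0.2)]    # an unsubstantiated rumor: lightly weighted
valuation = private_valuation(base_value=20.0, signals=signals)
nasdaq = {"bid": 20.50, "ask": 20.52}
ecn = {"bid": 20.51, "ask": 20.51}
print("valuation:", round(valuation, 2), "->", decide(valuation, nasdaq, ecn))
```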

As for the impact of decimalization on this model, says MacDonald, "second iteration, same outcome. Decimalization was clearly a threat to the established markets and to the equilibrium of price discovery."

That was valuable information for Berkeley: "It was a look at the mechanism of the markets, an inside view of the system that we never saw or documented before." For one thing, it explained, to a degree, what was happening in the dot-com bubble: Investors trading on noise and momentum, even with the larger fractional bid-ask gaps, were overpowering the stock market's traditional price discovery. For another, it provided ammunition to derail decimalization.

Berkeley presented the data that the agent-based model generated to the SEC and to Congress, hoping to persuade the regulators and lawmakers that decimalization could hinder the performance of the markets and harm investors. He was rebuffed. SEC officials were convinced that smaller increments would be a boon for individual investors. "They didn't want to hear it," says Berkeley. "There was a political mood to move to pennies and even the facts couldn't stop it."

A Future in Technology

In all, the BiosGroup model delivered six key findings—most of them troubling to Nasdaq. Besides the impact of decimalization, the most worrisome was the realization that market makers would begin to change their trading strategies as the bid-ask gap came under attack from individual day traders as well as institutional traders seeking the lowest transaction fees. The simulation wasn't completely clear on exactly which new tactics market makers would embrace, but it seemed likely that trading on ECNs would increase substantially—and Nasdaq's transaction-fee revenue would consequently drop.

Two years after BiosGroup finished its work with Nasdaq, the simulation has turned out to be a remarkably accurate predictor of the future. Five of its six conclusions proved to be accurate. Only the forecast that overall trade volume would increase did not come true—but that was primarily the result of the bursting of the bubble. That highlights a limitation of these models, says Alexander Outkin, a member of the BiosGroup team that created the Nasdaq simulation and now a chief scientist at Strategic Analytics Inc. in Santa Fe: "The model made its predictions based on the market conditions at the time the model was run. Then, the market bubble burst, fewer people wanted to buy stocks and volume dropped, proving us wrong. We were working with information that was correct for its time, but incorrect for the future."

As a management tool, says Berkeley, the agent-based model was especially potent. It allowed him to brace the organization for the ringing changes that decimalization brought when it was finally introduced in April 2001. For instance, as suggested by the model, Nasdaq's market share has dipped. In the first quarter of 2003, the percentage of share volume in Nasdaq-listed stock executed on Nasdaq's systems fell to 19 percent from 31.6 percent in the first quarter of 2002. What's more, Instinet now tops Nasdaq in trading volume on Nasdaq stocks, and other ECNs are making inroads. But this revenue hemorrhage to other electronic exchanges could have been worse had Nasdaq not focused during the past few years on building its own version of an ECN, called SuperMontage.

"Nasdaq has spent a lot to improve their technology and make their system more desirable to trade on," says Ken DeGiglio, chief technology officer at Renaissance Trading Technologies, which makes a portal system that links market makers to all of the over-the-counter electronic exchanges. "But the fact is Nasdaq will have to keep the pressure on, because now it's just another trading destination."

Meanwhile, Nasdaq management, expecting a consolidation of its market makers as spreads declined and their profits were hammered, has developed a cost-containment program that even now continues to reduce technology and overhead expenditures, by as much as 6 percent in the first quarter of 2003 over the year-earlier quarter. With all of this and a bear market, Nasdaq has at least held its own. Net income is up 13 percent in the first quarter compared with the prior quarter despite a 10 percent drop in revenue during the same period.

"The model clarified the important management issues that we had to face," says Berkeley. "Our traditional economists pooh-poohed all of it, but I'm confident that the agent-based model was a better way to look at our marketplace than statistically standard deviations. The hardest thing for a manager is to think clearly. The model forces that discipline on you."

Please send comments and questions on this story to editors@cioinsight-ziffdavis.com.