Wall Street has invested heavily in information technology, including systems that calculate risk in order to safeguard the big investment firms against pretty much exactly what's been happening to them this year.
But don't blame the computers for the meltdown of the financial system. The best technology can only do what people tell it to do, and people just weren't asking the boxes the right questions. This was a failure of management that goes right to the top of the big banks, broker-dealers and insurance companies. And the CIOs at those companies are implicated in the disaster.
"The machines weren't used the way they were designed to be used," says Gregg Berman, risk-management practice head at the RiskMetrics Group, an analysis and services firm.
The issue was not as simple as the old programmers' adage "Garbage In, Garbage Out," or GIGO. It was less about bad data than incomplete data, based on a worldview that was skewed in favor of keeping the money machine running, even as markets and products became so opaque that the bankers themselves often didn't understand the full implications of what they were doing.
Transparency, a necessity for free markets, was sacrificed. In his prescription for recovery, former Securities and Exchange Commission chair Arthur Levitt Jr. wrote in The Wall Street Journal that companies should disclose "the key assumptions they used, the risks they present, the range of movement of these assets, why the company expects the fair value to recover if the asset's value has shrunk, and a track record and outlook for the investment horizon."
That would represent quite a change. Modeling the risks associated with complex, packaged mortgage-backed securities is complicated work, full of assumptions and abstract variables, and the folks with their hands on the levers did not really want that work done. That's the way it often works in situations far beyond the current mortgage mess: People want to keep on making the deals that made them money, and so they ignore the worst-case scenarios.
"The tendency is to model exposure to a normal day's movement in the markets, but not every day is normal," says Clay Struve, a partner at the small trading firm CSS, who is known as the man who made the Black-Scholes option-trading model work in a real-life trading environment. "You've got to look at the extremes."
Struve has advocated continuous stress-testing and updating of risk models, with an eye for the potentially fatal unlikely event. That's the kind of thing technology was supposed to simplify, but it doesn't work if the machines are set to create risk-management models that are seaworthy in fair weather, but not in a storm.
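Struve's distinction between a normal day and an extreme one can be sketched in a few lines of code. The figures below are entirely invented for illustration: a hypothetical $100 million portfolio, an assumed 1% daily volatility in calm markets, and an arbitrary crisis scenario in which volatility quadruples and the portfolio takes a four-sigma hit.

```python
# A minimal sketch of the fair-weather-versus-storm problem, with made-up
# numbers: a 99% value-at-risk figure fitted to "normal" daily moves,
# compared against one hand-built stress scenario.
import random

random.seed(42)

PORTFOLIO = 100_000_000  # hypothetical $100M portfolio
NORMAL_VOL = 0.01        # assumed 1% daily volatility in calm markets

# Fair-weather model: simulate normal daily returns, read off the 1st
# percentile as the 99% one-day value-at-risk.
normal_days = sorted(random.gauss(0, NORMAL_VOL) for _ in range(10_000))
var_99 = -normal_days[int(0.01 * len(normal_days))] * PORTFOLIO

# Stress test: don't sample at all -- posit a crisis day where volatility
# quadruples and the move is four standard deviations (both illustrative).
stress_loss = 4 * (4 * NORMAL_VOL) * PORTFOLIO

print(f"99% one-day VaR (calm markets): ${var_99:,.0f}")
print(f"Stress-scenario loss:           ${stress_loss:,.0f}")
```

The point of the exercise is the gap between the two numbers: a model calibrated only to calm markets reports a loss several times smaller than what the stress scenario produces, which is exactly the fatal-but-unlikely event Struve says the models must be continuously tested against.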
Part of the problem is the traditional division at financial firms between the analysis of market risk and credit risk, says RiskMetrics' Berman. Modern products like credit default swaps combine the two, and up-to-date technology can handle the interrelationships between them. However, many companies maintained the old siloed approach to risk analysis. "Players who don't silo did well," Berman says.
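The arithmetic behind Berman's point is simple to sketch. The dollar figures here are invented: a hypothetical market-risk silo reporting a $10 million worst-case loss and a credit-risk silo reporting $8 million.

```python
# Illustrative only: why siloed risk numbers flatter a book of credit
# default swaps. Standalone 99% loss estimates for each silo ($MM, invented):
import math

market_var = 10.0
credit_var = 8.0

# Siloed aggregation often treated the two books as independent, producing
# a "diversified" square-root-of-sum-of-squares combined figure:
siloed_total = math.sqrt(market_var**2 + credit_var**2)

# But a credit default swap is exposed to market moves and credit quality
# at once; in a crisis the correlation heads toward 1 and losses simply add:
crisis_total = market_var + credit_var

print(f"Siloed estimate: ${siloed_total:.1f}M")
print(f"Crisis estimate: ${crisis_total:.1f}M")
```

The siloed shop books roughly $12.8 million of combined risk where the crisis actually delivers $18 million, because the separate analyses implicitly assume the two books' bad days don't coincide.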
The large investment banks also favor proprietary systems, which means their models may be built in relative isolation and may not reflect best practices, Berman says. (RiskMetrics and its competitors sell to banks, but mostly at the divisional level.)
It would have been tough for any CIO--or chief risk officer or investment committee--to have stopped the train, according to Berman. "The culture and the incentive structure were not designed so that people highlighting issues were valued," he says. "An understanding of the ramifications of extreme circumstances didn't boil up to the top."
Still, he adds, "This information was knowable and could have been acted upon."