September 23, 2008
The confluence of the U.S. financial meltdown and this week's High Performance on Wall Street conference in New York might be one of those coincidences that's trying to tell us something. To be honest, I'm not a big believer in cosmic happenstance, but in this case it made me wonder if the financial software models had anything to do with our current economic chaos. I didn't have to look very hard to find some correlation.
A great post by Saul Hansell at the New York Times explained why many of the risk models developed by quants didn't see the brick wall at the end of the tunnel (see How Wall Street Lied to Its Computers). According to Hansell, there were multiple points of failure at these firms, but in many cases the quantitative models themselves hid the risks they were supposed to be revealing. Writes Hansell:
Ultimately, the people who ran the firms must take responsibility, but it wasn’t quite that simple. In fact, most Wall Street computer models radically underestimated the risk of the complex mortgage securities, they said. That is partly because the level of financial distress is “the equivalent of the 100-year flood,” in the words of Leslie Rahl, the president of Capital Market Risk Advisors, a consulting firm. But she and others say there is more to it: The people who ran the financial firms chose to program their risk-management systems with overly optimistic assumptions and to feed them oversimplified data. This kept them from sounding the alarm early enough.
That sentiment reflects a recent conversation I had with Jerry Hanweck of Hanweck Associates, a firm that develops quantitative finance products. He told me some of the high-profile hedge funds that lost a lot of money last year were also relying on limited historical data to drive their models. Especially in high frequency trading and arbitrage trading situations, Hanweck thinks the traders often misapply their statistics. According to him, when you gather all this random data together and run regression analysis on it, some of the results are going to look reasonable, just by chance. "If you try to extract too much from the limited amount of data that we have available to us, you really can overfit the data," he explains.
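Hanweck's point is easy to demonstrate. The sketch below (a toy illustration, not anyone's actual trading model; all series and parameters are invented) generates a "target" return series that is pure noise, then screens 500 equally random candidate signals. The best-looking candidate ends up substantially correlated with the target purely by chance, which is exactly the trap of mining limited data for patterns:

```python
import random
import statistics

random.seed(7)

def correlation(a, b):
    """Pearson correlation of two equal-length samples (population form)."""
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / len(a)
    return cov / (statistics.pstdev(a) * statistics.pstdev(b))

n = 60  # five years of hypothetical monthly returns
target = [random.gauss(0, 1) for _ in range(n)]  # pure noise, no signal at all

# Screen 500 equally random candidate "signals" and keep the most correlated one.
candidates = [[random.gauss(0, 1) for _ in range(n)] for _ in range(500)]
best = max(abs(correlation(c, target)) for c in candidates)

print(f"best |correlation| found among random series: {best:.2f}")
```

The true correlation between the target and every candidate is zero by construction; the screening process alone manufactures an apparently meaningful relationship.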
In some cases, though, the inverse problem occurred. Hansell writes that some models were designed to dilute the risk by looking too far back -- into the last several years of trading history versus just the last several months -- when things were starting to get dicey. This hid short-term volatility behind a mask of long-term stability. But to keep profits flowing, Wall Street execs had a vested interest (literally) in keeping these less-than-stellar models humming along.
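The masking effect is a matter of simple arithmetic. In this sketch (hypothetical numbers throughout), a portfolio has two calm years of daily returns followed by three turbulent months; a volatility estimate over the full history comes out far lower than one over just the recent window:

```python
import random
import statistics

random.seed(1)

# Hypothetical daily returns: two calm years, then three turbulent months.
calm = [random.gauss(0, 0.01) for _ in range(500)]
stressed = [random.gauss(0, 0.04) for _ in range(60)]
returns = calm + stressed

def vol(window):
    """Annualized volatility estimated from the last `window` observations."""
    return statistics.pstdev(returns[-window:]) * 252 ** 0.5

short_view = vol(60)   # captures only the recent turmoil
long_view = vol(560)   # the full history dilutes the spike

print(f"60-day vol:  {short_view:.1%}")
print(f"560-day vol: {long_view:.1%}")
```

A risk system reporting the long-window number would tell management the portfolio was far safer than the last three months of trading actually indicated.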
Many economists think that the 2007 credit crunch that launched the current downward financial spiral was set in motion by the now-notorious collateralized debt obligations, or CDOs. These instruments had become infested with devalued subprime loans, and at some point it became clear to investors that the risk associated with CDOs was a lot larger than originally thought.
According to Hanweck, because of the complexity of CDOs, the risk of these instruments was estimated with simplified assumptions. In some cases, limits in computational power made these simplifications necessary just so the valuation models could be run at all. "That's what really started the problems last year and even back in 2005, when GM and Ford had their first batch of hiccups," he says. The nature of these CDOs suggests that the buyers -- investment banks, commercial banks, insurance companies, and other institutions -- were engaging in faith-based capitalism.
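To see how much damage one simplified assumption can do, consider a toy version of the kind of one-factor Gaussian copula model widely used to price CDO tranches. This is a minimal sketch, not any firm's actual model, and every parameter (default probability, attachment point, correlation values) is invented for illustration. The model collapses all dependence between loans into a single flat correlation number; nudging that number from an optimistic value to a crisis value multiplies the estimated chance that losses reach a supposedly safe tranche:

```python
import random
from statistics import NormalDist

random.seed(3)
nd = NormalDist()

def senior_breach_rate(corr, trials=5000, loans=100, p_default=0.05, attach=0.15):
    """One-factor Gaussian copula Monte Carlo: fraction of trials in which
    portfolio defaults exceed the tranche attachment point."""
    threshold = nd.inv_cdf(p_default)  # each loan defaults with prob. p_default
    a, b = corr ** 0.5, (1 - corr) ** 0.5
    breaches = 0
    for _ in range(trials):
        market = random.gauss(0, 1)  # one shared factor drives every loan
        defaults = sum(
            a * market + b * random.gauss(0, 1) < threshold for _ in range(loans)
        )
        if defaults / loans > attach:
            breaches += 1
    return breaches / trials

low = senior_breach_rate(0.1)   # optimistic correlation assumption
high = senior_breach_rate(0.6)  # correlations spike in a crisis

print(f"breach rate at corr=0.1: {low:.3f}")
print(f"breach rate at corr=0.6: {high:.3f}")
```

The model's output hinges entirely on a correlation input that nobody could observe directly, which is one reason buying these instruments on the strength of such models amounted to faith-based capitalism.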
And what about the subprime mortgages that started it all? Well, devising and selling these packages didn't have much to do with computers or quantitative models. Says Hanweck: "That was just plain old greed."
Posted by Michael Feldman - September 22, 2008 @ 9:00 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.