September 17, 2009
In the wake of this week's HPC on Wall Street conference, where the attendees gabbed about the wonders of high performance computing (which we dutifully reported on), it's worth remembering that all this cutting-edge technology failed to prevent the financial meltdown in the fall of 2008, and the subsequent global economic collapse. In fact, it's arguable that superfast computers and networks just sped up the process.
There's plenty of blame to go around, but some have pointed to the deficiencies of the technology itself. Specifically, there has been a good deal of criticism of the software models used to calculate financial risk of the now discredited collateralized debt obligations (CDOs) and credit default swaps (CDSs). Back in February, I wrote about a particularly questionable mathematical formula that was widely used by quantitative analysts (aka quants) to build these models.
With a year of hindsight to draw on, an even clearer picture is emerging. A recent New York Times article attempted to explain the failure of the models in a more systematic way. The author suggests the software was doomed from the start because it didn't factor in human fallibility:
The risk models proved myopic, they say, because they were too simple-minded. They focused mainly on figures like the expected returns and the default risk of financial instruments. What they didn’t sufficiently take into account was human behavior, specifically the potential for widespread panic. When lots of investors got too scared to buy or sell, markets seized up and the models failed.
It's a cautionary tale of what happens when people obsessed with math (quants) meet people obsessed with money (speculators). Perhaps the Wall Street crowd needs to get in touch with people obsessed with people (behavioral scientists).
By the way, the latest financial instruments Wall Street has come up with are called "life settlements." They've actually been around since 2005, but have spread rapidly since then. In a nutshell, it works like this: Financial institutions buy up and package lots of life insurance policies into securities, which they can then resell to investors. (Sound familiar?) The payout is made when the people who took out the policies die. The quicker the person dies, the bigger the yield.
Some interesting motivations are in play here. It puts the investors in the position of rooting for the death of the policyholders, which makes one wonder what steps they might take to increase their ROI. Also, since the value of a policy is inversely proportional to the policyholder's life expectancy, it creates a death-auction mentality for the buyer and seller. Try to model that, quant-boy.
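The morbid arithmetic is easy to see with a back-of-the-envelope sketch. Below is a minimal (and hypothetical) calculation of an investor's annualized return on a purchased policy: the buyer pays a lump sum up front, keeps paying the premiums until the insured dies, then collects the face value. All the dollar figures are invented for illustration, and the formula ignores discounting and taxes; the point is only that the return collapses as the policyholder lives longer.

```python
def life_settlement_return(purchase_price, annual_premium, face_value, years_to_payout):
    """Crude annualized return on a purchased life insurance policy.

    The investor pays purchase_price up front, pays annual_premium each
    year until the insured dies after years_to_payout years, then
    collects face_value. Discounting of cash flows is ignored.
    """
    total_cost = purchase_price + annual_premium * years_to_payout
    # Simple compound annual growth rate of total outlay vs. payout.
    return (face_value / total_cost) ** (1 / years_to_payout) - 1

# The same hypothetical $1M policy, bought for $200K with $20K/year premiums:
for years in (2, 5, 10, 20):
    r = life_settlement_return(200_000, 20_000, 1_000_000, years)
    print(f"payout after {years:2d} years -> {r:6.1%} annualized")
```

Run it and the incentive problem is obvious: a death in year two yields a triple-digit annual return, while a policyholder who hangs on for twenty years leaves the investor with low single digits. Hence the rooting.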
In fact, though, the human element is systematically ignored in lots of models. In another Times article, Nobel Prize-winning economist Paul Krugman argues that economists failed to predict the current crisis because they ignored the fact that irrational behavior from real live people doesn't adhere to a pure free-market model. Krugman summed it up thusly:
As I see it, the economics profession went astray because economists, as a group, mistook beauty, clad in impressive-looking mathematics, for truth. Until the Great Depression, most economists clung to a vision of capitalism as a perfect or nearly perfect system. That vision wasn’t sustainable in the face of mass unemployment, but as memories of the Depression faded, economists fell back in love with the old, idealized vision of an economy in which rational individuals interact in perfect markets, this time gussied up with fancy equations.
Besides financial instruments and macroeconomics, what other models could be headed for a hard landing? Well, presumably anything that involves people. One that comes to mind is climate change. There are plenty of global warming simulations out there, but I doubt any fully account for the interplay between the physical processes of the atmosphere, geosphere, and biosphere, and people's behavior. Throw in the variable of a carbon tax or, say, a cap-and-trade policy, and now you're talking about a real grand challenge.
Obviously trying to apply a mathematical model to human behavior is bound to be tricky inasmuch as we only have a crude understanding of the process of how people think, much less the interaction of large numbers of them. But it's hard to imagine how any of these models are going to be of much practical use until we figure out how to incorporate ourselves into them.
Posted by Michael Feldman - September 17, 2009 @ 7:14 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.