Tuesday, January 6, 2009

Risk and Regulation

The Times has a good article, "Risk Mismanagement". Between the lines, it well illustrates the dynamics by which ill-conceived financial-services regulation worked against the health and stability of the financial system.

There are many such models, but by far the most widely used is called VaR — Value at Risk... one reason VaR became so popular is that it is the only commonly used risk measure that can be applied to just about any asset class... Another reason VaR is so appealing is that it can measure both individual risks — the amount of risk contained in a single trader’s portfolio, for instance — and firmwide risk... Top executives usually know their firm’s daily VaR within minutes of the market’s close.

Risk managers use VaR to quantify their firm’s risk positions to their board. In the late 1990s, as the use of derivatives was exploding, the Securities and Exchange Commission ruled that firms had to include a quantitative disclosure of market risks in their financial statements for the convenience of investors, and VaR became the main tool for doing so. Around the same time, an important international rule-making body, the Basel Committee on Banking Supervision, went even further to validate VaR by saying that firms and banks could rely on their own internal VaR calculations to set their capital requirements. So long as their VaR was reasonably low, the amount of money they had to set aside to cover risks that might go bad could also be low.
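For concreteness, the VaR described above is typically computed by something like historical simulation. A minimal sketch (the P&L series here is synthetic, purely for illustration -- in practice it would be the firm's own trading history):

```python
import numpy as np

# Hypothetical daily P&L history (in $ millions) -- a synthetic
# stand-in for a firm's actual trade data.
rng = np.random.default_rng(0)
daily_pnl = rng.normal(loc=0.1, scale=2.0, size=500)

def historical_var(pnl, confidence=0.99):
    """One-day VaR by historical simulation: the loss exceeded
    on only (1 - confidence) of trading days."""
    return -np.percentile(pnl, 100 * (1 - confidence))

print(f"99% one-day VaR: ${historical_var(daily_pnl):.2f}m")
print(f"95% one-day VaR: ${historical_var(daily_pnl, 0.95):.2f}m")
```

One number, computable across any asset class with a P&L history -- which is exactly why, per the article, it spread so widely.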


The intent of the regulators was sensible enough. Firms ought to report risk exposures to investors. Banks with riskier investments ought to maintain larger capital cushions. In the end, however, attempts to enforce these good ideas with regulation are almost intrinsically problematic because risk is a rather difficult thing to quantify simply and objectively. There is almost willful obliviousness in coming up with a single number and blessing it as a functionally complete and objective measure of risk. But the bureaucracies -- large-firm management and the regulatory agencies -- required such a number.

Tangentially, it's worth making explicit what is inherently put at stake by the notion that risk can be quantified simply and objectively: were that true, free markets would be of limited practical value, and command economies would be the order of the day.

...Taleb, a trim, impeccably dressed, middle-aged man — inexplicably, he won’t give his age... He also went from being primarily an options trader to what he always really wanted to be: a public intellectual. When I made the mistake of asking him one day whether he was an adjunct professor, he quickly corrected me. “I’m the Distinguished Professor of Risk Engineering at N.Y.U.,” he responded. “It’s the highest title they give in that department.” Humility is not among his virtues. On his Web site he has a link that reads, “Quotes from ‘The Black Swan’ that the imbeciles did not want to hear.”
...
“Why do people measure risks against events that took place in 1987?” he asked, referring to Black Monday, the October day when the U.S. market lost more than 20 percent of its value and has been used ever since as the worst-case scenario in many risk models. “Why is that a benchmark? I call it future-blindness.

“If you have a pilot flying a plane who doesn’t understand there can be storms, what is going to happen?” he asked. “He is not going to have a magnificent flight. Any small error is going to crash a plane. This is why the crisis that happened was predictable.”
...
Eventually, though, you do start to get the point. Taleb says that Wall Street risk models, no matter how mathematically sophisticated, are bogus; indeed, he is the leader of the camp that believes that risk models have done far more harm than good. And the essential reason for this is that the greatest risks are never the ones you can see and measure, but the ones you can’t see and therefore can never measure.


There is something almost explicitly Svengali about this Taleb; his ultimate claim -- risk models are imperfect, ergo, they are useless -- is more theatrical than intelligent. (Tangentially, his argument mirrors that of a former manager of mine against using unit tests.)

On the other hand, the pilot analogy touches on a key point. What he describes -- any small error crashing the plane -- is a system that is not robust. Systems are not made robust by meditating over the unimaginable. They are made more robust, in the first instance, by being made more adaptable; in the second instance, by incorporating lessons learnt the hard way; and, above all, by redundancy. It's not difficult to demonstrate how well-meaning but ill-conceived attempts at financial-services regulation often -- by adding rigidity and introducing centralized points of failure -- make the system more brittle. And the lessons learnt by politician-regulators are often different from those of market participants.

...The Securities and Exchange Commission, for instance, worried about the amount of risk that derivatives posed to the system, mandated that financial firms would have to disclose that risk to investors, and VaR became the de facto measure. If the VaR number increased from year to year in a company’s annual report, it meant the firm was taking more risk. Rather than doing anything to limit the growth of derivatives, the agency concluded that disclosure, via VaR, was sufficient.

That, in turn, meant that even firms that had resisted VaR now succumbed. It meant that chief executives of big banks and investment firms had to have at least a passing familiarity with VaR. It meant that traders all had to understand the VaR consequences of making a big bet or of changing their portfolios...

...All over Wall Street, VaR numbers increased, but it still all seemed manageable — and besides, nothing bad was happening!


The primary job of the SEC, of course, is not to protect the stability of the financial system (for example, by limiting the growth of derivatives), but to protect investors. In mandating the publication of VaR, it actually succeeded, to the degree that investors were informed about the increasing riskiness of their investments.

The question, then, is why investors were not concerned. I believe it can be easily argued that regulations intended to make investment easier and safer -- to protect investors not just from fraud, but from research and diligence -- have the effect of dumbing down investors, and so reducing their ability to oversee the companies they own.

The way, in the end, VaR analysis owes its universal adoption to regulation illustrates a crowding-out effect. I interviewed in the credit risk department of a large -- relatively unscathed -- bank in March 2007. In one of my conversations we talked rather explicitly about the limitations of the sorts of risk numbers people threw around, but also about how regulatory requirements (Basel II above all) effectively forced banks to spend resources on risk analysis they understood to be less than useful -- resources that could have been better allocated to more useful analysis.

Which gets to some core problems with the current conception of regulation. If a regulator is better than a private firm at managing risk, then that firm ought not be in business. And if the firm is better, then the regulator ought not be telling it how to manage risk.

Further, this is an example of how regulation can introduce single points of failure (in this case: a flawed risk management practice) into a system.

In a crisis, Brown, the risk manager at AQR, said, “you want to know who can kill you and whether or not they will and who you can kill if necessary. You need to have an emergency backup plan that assumes everyone is out to get you. In peacetime, you think about other people’s intentions. In wartime, only their capabilities matter. VaR is a peacetime statistic.”


This hits the nail on the head. Which is to say, VaR is a wonderful tool, with great utility in certain contexts (peacetime), but it is not a one-size-fits-all measure of risk. That some people viewed it as such says more about those people than it does about the tool.
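To illustrate the "peacetime statistic" point: expected shortfall -- the average loss on just the days when VaR is breached -- is one complementary measure that at least looks into the tail. A sketch with synthetic heavy-tailed data (a Student's t distribution standing in for "wartime" markets):

```python
import numpy as np

rng = np.random.default_rng(2)
# Heavy-tailed daily P&L: Student's t with 3 degrees of freedom,
# a synthetic stand-in for crisis-prone returns.
pnl = rng.standard_t(df=3, size=10_000)

def var(pnl, confidence=0.99):
    # The loss exceeded on only (1 - confidence) of days.
    return -np.percentile(pnl, 100 * (1 - confidence))

def expected_shortfall(pnl, confidence=0.99):
    # The average loss on the days the VaR threshold is breached --
    # i.e. how bad the bad days actually are, not just where they start.
    threshold = -var(pnl, confidence)
    return -pnl[pnl <= threshold].mean()

print("99% VaR:               ", round(var(pnl), 2))
print("99% expected shortfall:", round(expected_shortfall(pnl), 2))
```

With fat tails, the shortfall comes out well beyond the VaR number -- the gap between where losses begin and how deep they run.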

And I think the frame of mind of those people is a key point here. If you are a "Risk Manager" in a giant bank, responsible for controlling risk across a mind-boggling array of products and activities, you need simple numbers. All the more so if you are a regulator with responsibility across a whole economy. This is the black hole at the heart of the system: the choice between coming up with "God-Blessed" (to use a term loved by a Risk Manager I once worked with), if not-entirely-meaningful, numbers, or throwing one's hands in the air.

Which is not to say that risk cannot be managed, only that risk management doesn't scale well. Or, rather, that it needs to be re-conceived as it scales. Which is to say, government regulators should be more concerned about, for example, structural risks like misaligned incentives, or the existence of firms too big to fail, than with how individual firms manage risk.

...the big problem was that it turned out that VaR could be gamed. That is what happened when banks began reporting their VaRs. To motivate managers, the banks began to compensate them not just for making big profits but also for making profits with low risks. That sounds good in principle, but managers began to manipulate the VaR by loading up on what Guldimann calls “asymmetric risk positions.” These are products or contracts that, in general, generate small gains and very rarely have losses. But when they do have losses, they are huge.


This is a really interesting angle that I had not recognized. The danger, in setting rules, is that they always have unintended consequences. The more universal a rule, the more dangerous those consequences.
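Guldimann's "asymmetric risk positions" are easy to demonstrate with made-up numbers: a book that collects a small premium almost every day and blows up very rarely can report a lower VaR than a plain symmetric book, while hiding a far worse tail.

```python
import numpy as np

rng = np.random.default_rng(1)
n_days = 10_000

# A "normal" book: symmetric daily P&L around zero.
symmetric = rng.normal(0.0, 1.0, n_days)

# An asymmetric book (think: selling deep out-of-the-money options):
# a small gain almost every day, a huge loss about 0.5% of the time.
crash = rng.random(n_days) < 0.005
asymmetric = np.where(crash, -40.0, 0.2)

def var(pnl, confidence=0.99):
    return -np.percentile(pnl, 100 * (1 - confidence))

# Because the blow-ups sit beyond the 99th percentile, the asymmetric
# book's 99% VaR looks tiny -- even negative -- while its worst day
# is an order of magnitude worse than the symmetric book's.
print("99% VaR  symmetric:", round(var(symmetric), 2))
print("99% VaR asymmetric:", round(var(asymmetric), 2))
print("worst day symmetric:", round(symmetric.min(), 2))
print("worst day asymmetric:", round(asymmetric.min(), 2))
```

A manager compensated on profit-per-unit-of-VaR has every incentive to load up on the second book.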
