Why "too big to fail" means "wait for it"

Nassim Taleb, writing in Foreign Affairs, describes why a Black Swan came to Cairo without anybody noticing and, more generally, why opinion leaders keep getting caught on the wrong foot by the arrival of "large-scale events that lie far from the statistical norm and were largely unpredictable to a given set of observers." The fall of the Berlin Wall was a surprise. The 2008 meltdown was a surprise. The Arab Spring was a surprise. "Why is surprise the permanent condition of the U.S. political and economic elite?"

The answer, he argues, is that the elites won't see such events coming rather than that they can't. Part of the problem is a consequence of their own damping: by attempting to centrally manage systems according to some predetermined scheme, they store up volatility rather than dispersing it. By kicking the can down the road they eventually condemn themselves to bumping into a giant pile of cans when they run out of road.

Complex systems that have artificially suppressed volatility tend to become extremely fragile, while at the same time exhibiting no visible risks. In fact, they tend to be too calm and exhibit minimal variability as silent risks accumulate beneath the surface. Although the stated intention of political leaders and economic policymakers is to stabilize the system by inhibiting fluctuations, the result tends to be the opposite.

Thus every bailout and rescue made in the name of preventing the demise of something deemed "too big to fail" builds up a head of steam until the point is reached when the system can no longer contain the pressure. Then the volatility goes from a seeming zero to an extremely high number. The Black Swan will have arrived. And it always will for as long as fiction is substituted for fact, failure is relentlessly reinforced and false assurances are given all around. In Auden's words, "The lights must never go out, the music must always play ... lest we should see where we are, lost in a haunted wood, children afraid of the night who have never been happy or good." The antidote, Taleb argues, is information: pricing risk into the present rather than hiding it to fester unseen beneath the surface.

But the elites cannot admit to surprise; nor can they admit to bad things starting on their watch. Therefore they keep sweeping things under the carpet until, as in some horror movie, the pile spawns a zombie. To make systems robust, says Taleb, you've got to admit that you can make mistakes and pay the price. You will have to in the end anyway.

The policy implications are identical: to make systems robust, all risks must be visible and out in the open -- fluctuat nec mergitur (it fluctuates but does not sink) goes the Latin saying. ...

In the United States, promoting these bad policies has been a bipartisan effort throughout. Republicans have been good at fragilizing large corporations through bailouts, and Democrats have been good at fragilizing the government. At the same time, the financial system as a whole exhibited little volatility; it kept getting weaker while providing policymakers with the illusion of stability, illustrated most notably when Ben Bernanke, who was then a member of the Board of Governors of the U.S. Federal Reserve, declared the era of “the great moderation” in 2004.

This is a daunting task. Given that politicians and economic managers are elected or promoted for their skill at "controlling events," they can hardly admit that they cannot. It will take an intellectual revolution to make everyone realize that human control over the real world is sharply limited. And yet accepting that volatility must be faced rather than hidden is the key to preventing the arrival of even more Black Swans.

What is needed is a system that can prevent the harm done to citizens by the dishonesty of business elites; the limited competence of forecasters, economists, and statisticians; and the imperfections of regulation, not one that aims to eliminate these flaws. Humans must try to resist the illusion of control: just as foreign policy should be intelligence-proof (it should minimize its reliance on the competence of information-gathering organizations and the predictions of “experts” in what are inherently unpredictable domains), the economy should be regulator-proof, given that some regulations simply make the system itself more fragile.

How can this be done as a practical matter? Here Taleb says very little (probably due to space limitations), and Leo Linbeck III's idea of shifting the decision points downward probably says more. The key difference between a central manager and more localized management is that the local managers do not feel compelled to create a grand unified theory of events. They do not have an incentive to keep stories consistent with the overall narrative. They have no talking points from which to stray. They only have problems, which they more or less try to solve.