Changing Our Minds About Iran

Those worried that the recent NIE is wrong in estimating that Iran is no longer pursuing a nuclear weapons program have grounds for concern. Intelligence assessments are often wrong. That may not accord with the glamorous movie portrayals of spies and espionage, but that’s how it is.

A list of historical intelligence failures makes depressing reading. John Hillen notes that in the 1990s intelligence agencies failed to anticipate India’s nuclear tests, despite Pakistan’s warning to US Ambassador Bill Richardson that they were imminent. In 2006 Israel underestimated the magnitude of Hezbollah’s rocket arsenal in Lebanon and suffered under a rain of high explosive for the duration of the war. And of course US intelligence is still looking for Saddam Hussein’s WMD arsenal to this day. These failures are small compared to the really big intelligence fiascos, some of which almost resulted in the annihilation of nations. Stalin, for instance, was totally surprised by Hitler’s blitz against him in 1941. Israel was almost destroyed by its failure to detect the Yom Kippur surprise attack in 1973. And despite billions of dollars at their disposal and the urgency of the Cold War, every major Western intelligence agency failed to anticipate the collapse of the Soviet Union.

How can really big developments be missed? The short answer is that some developments can’t be predicted because they happen in the future. The CIA, writing to explain why the ongoing Soviet collapse went largely undetected, argued that certain things could not be foreseen because they had yet to happen. The Agency diligently tracked the decline of the Soviet economy, but how was it supposed to know that a Gorbachev would emerge? Gorbachev’s ascendancy provided the match to the tinder. How could it foresee that Gorbachev would be the match? If not even the Soviets knew what was going to happen, how could the CIA?

The Agency did predict that the failing economy and stultifying societal conditions it had described in so many of its studies would ultimately provoke some kind of political confrontation within the USSR. The timing of this confrontation, however, depended on the emergence of a leadership to initiate it, and its form depended on the specific actions of that leadership. After that leadership finally appeared in the form of Gorbachev, the consequences of its actions (well intentioned but flawed) were dependent on diverse political variables and decisions that could be and were postulated but could not be predicted even by the principal actors themselves. … It was not inevitable that Gorbachev would succeed Chernenko.

Whether we like it or not, there are limits to what intelligence can know at any one time. The inescapable uncertainties may make it impossible to decide the status of Iran’s nuclear program “once and for all”. As in the case of the Soviet Union, changes in the situation and in leadership happen all the time. Honest analysts must keep revising the picture as new information comes to light. While Washington politics describes any change in intelligence estimates as evidence of ‘lying’ or incompetence, the plain fact is that altering assessments is endemic to the process. An unchanging intelligence picture is a wrong picture. Changing your mind is a natural thing to do.

What’s needed is a way to keep improving the picture with each successive measurement. Bruce Blair, a former Senior Fellow at the Brookings Institution, notes that as long as “changing one’s mind” is done scientifically using a mathematical tool called Bayes’ analysis, the result is a more accurate intelligence estimate.

Bayes’ analysis is often called the science of changing one’s mind. The mental process begins with an initial estimate – a preexisting belief – of the probability that, say, an adversary possesses weapons of mass destruction, or that an attack by those weapons is underway. This initial subjective expectation is then exposed to confirming or contradictory intelligence or warning reports, and is revised using Bayes’ formula. Positive findings strengthen the decisionmaker’s belief that weapons of mass destruction exist or that an attack is underway; negative findings obviously weaken it. … it is quite possible for the intelligence findings to be wildly off the mark for 10 or more cycles of assessment before settling down and converging on the truth.

A run of bad luck – failures to detect an actual attack, or false alarms if there is no actual attack – could drive the interpretation perilously close to a high-confidence wrong judgment.

The key to achieving a “convergence” between fact and perception is repeated measurement. In plain words, intelligence agencies must measure and re-measure until they are convinced that what they are seeing is “true”. To extend the Cold War problem of telling a false alarm from a real attack: the more radar detections of incoming missiles, the greater the belief that a real attack is in progress, while a failure to detect incoming missiles weakens that belief in the same way. This process of repeatedly updating the estimate of the situation as new data arrived was essential to avoiding accidental nuclear war during a period when both the Soviets and the US had thousands of weapons deployed. However, the war on terror adds a further task which was not present during the Cold War.
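To make the mechanics concrete, here is a minimal sketch of one such updating cycle in Python. It is not Blair’s model: the sensor error rates, the starting expectation and the run of radar reports below are illustrative assumptions, chosen only to show how each report, positive or negative, revises the estimate through Bayes’ formula.

```python
# A minimal sketch of Bayesian updating for the missile-warning example.
# Not Blair's model: the error rates and the report sequence are assumptions.

def update(belief_attack, detection, p_hit_if_attack=0.75, p_hit_if_no_attack=0.25):
    """Apply Bayes' formula once: revise the probability that an attack is
    underway after one radar report (detection=True) or one quiet sweep (False)."""
    if detection:
        lik_if_attack, lik_if_no_attack = p_hit_if_attack, p_hit_if_no_attack
    else:
        lik_if_attack, lik_if_no_attack = 1 - p_hit_if_attack, 1 - p_hit_if_no_attack
    numerator = belief_attack * lik_if_attack
    return numerator / (numerator + (1 - belief_attack) * lik_if_no_attack)

belief = 0.01  # assumed initial expectation that an attack is underway
for detection in (True, True, False, True, True):  # a hypothetical run of reports
    belief = update(belief, detection)
    print(f"report={'hit' if detection else 'miss'}  belief in attack = {belief:.3f}")
```

Each positive report pushes the number up and each negative one pulls it down, which is all “convergence” means here; with error-prone sensors, a run of bad luck can just as easily push the estimate the wrong way, as Blair warns above.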

During the long standoff with the USSR it was assumed that the Soviet leadership wanted to avoid national annihilation. The Kremlin was believed to be “rational”, and this made it less important to anticipate a Soviet first strike, because deterrence, or the fear of retaliation, made a first strike (a nuclear September 11) an unlikely Soviet strategy. But after the real September 11 occurred there was considerable doubt over whether radical Islamists, with their belief in a paradise (or in the case of Iran the return of the 12th Imam) following an apocalypse, could be deterred like the Soviets. Religious fanatics might actually welcome an apocalypse and bring it on at the first opportunity. After the disaster of 2001 it became one of the goals of US intelligence to detect, with a high degree of certainty, a possible enemy first strike, which presumably might occur the moment a radical Islamic regime acquired nuclear weapons. It’s the fear of a nuclear September 11 that drives the worry over the relatively modest Iranian nuclear weapons program. Having failed to detect the September 11 attack, the American leadership could not risk failing to detect a future “nuclear September 11”. Yet predicting when an enemy might get WMDs has proven very difficult. The evidence is rarely clear-cut. In a situation of ambiguity, what should a rational President do? Bruce Blair says: keep measuring. Not just once, but repeatedly and often. He writes:

In the case akin to pre-war Iraq, suppose that the national leader believes that dictator X is secretly amassing nuclear, biological or chemical weapons, but that U.S. spies cannot deliver the evidence proving the weapons’ existence. What should the leader believe then? Should the indictment be thrown out if the spies cannot produce any smoking guns? How long would a reasonable person cling to the presumption of the dictator’s guilt in the absence of damning evidence?

The mathematics of rationality (according to Bayes) throws surprising light on this question. It proves that a leader who continues to strongly believe in the dictator’s guilt is not being dogmatic. On the contrary, it would be irrational to drop the charges quickly on grounds of insufficient evidence. A rational person would not mentally exonerate the dictator until mounting evidence based on multiple intelligence assessments pointed to his innocence.

Let’s look at an example of how repeated measurement affects our estimate of how likely a threat is. Suppose the US received a single report, like an NIE, stating that dictator X had given up his WMD program after a long series of measurements claiming the contrary. Should the leader believe it? The answer is: not right away.

If the leader interpreting the intelligence reports holds the initial opinion that it is virtually certain that the dictator is amassing mass-destruction weapons – an opinion that may be expressed as a subjective expectation or probability of, say, 99.9 percent – then what new opinion should the leader reach if the intelligence community (or the head of a UN inspection team) weighs in with a new comprehensive assessment that finds no reliable evidence of actual production or stockpiling?

Adhering to the tenets of Bayes’ formula, the leader would combine the intelligence report with the previous opinion to produce a revised expectation. Upon applying the relevant rule of inductive reasoning, which takes into account the 25 percent error rates, the leader’s personal subjective probability estimate (the previous opinion) would logically decline from 99.9 percent to 99.7 percent! The leader would remain highly suspicious, to put it mildly, indeed very convinced of the dictator’s deceit. …

Believe it or not, a rational leader could receive four negative reviews in a row from the spy agencies and would still harbor deep suspicion of the dictator because the leader’s logically revised degree of belief that the dictator was amassing weapons would only fall to 92.5 percent.
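The arithmetic behind those figures is straightforward to check. The short Python sketch below reflects my reading of Blair’s example, not his own calculation: it assumes the “25 percent error rates” mean that each assessment misses real weapons a quarter of the time and falsely reports weapons a quarter of the time. Under that assumption, Bayes’ formula reproduces his 99.7 percent and 92.5 percent figures.

```python
# A back-of-the-envelope check of the figures in Blair's example.
# Assumption: "25 percent error rates" means a 25% chance of a clean report
# when the dictator is armed, and a 25% chance of a false alarm when he is not.

P_CLEAN_IF_ARMED = 0.25    # an armed dictator still yields a "no weapons" finding 25% of the time
P_CLEAN_IF_UNARMED = 0.75  # an unarmed one yields a "no weapons" finding 75% of the time

def revise(belief_armed, clean_reports):
    """Revise the leader's belief that the dictator is armed after a given
    number of consecutive assessments finding no reliable evidence."""
    for _ in range(clean_reports):
        armed = belief_armed * P_CLEAN_IF_ARMED
        unarmed = (1 - belief_armed) * P_CLEAN_IF_UNARMED
        belief_armed = armed / (armed + unarmed)
    return belief_armed

print(revise(0.999, 1))  # about 0.997 -- Blair's 99.7 percent after one clean report
print(revise(0.999, 4))  # about 0.925 -- Blair's 92.5 percent after four clean reports
```

The lopsided prior does most of the work: a 99.9 percent certainty is odds of 999 to 1, and each clean but error-prone report only cuts those odds by a factor of three, so four negative assessments still leave the leader more than 92 percent convinced.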

In a logical world the US would continue to treat dictator X with suspicion until long experience proved him harmless. But in a politicized world a President might be obliged to treat any hypothetical dictator X as if he were entitled to protection against double jeopardy: having once been declared innocent, never to be suspected again. Logic makes us measure existential national threats in one way, but politics, Perry Mason and media coverage compel us to adopt another, cinematic point of view. Of course, in Washington politics sometimes wins over reason.

Richard Fernandez is PJM Sydney editor; he also writes at the Belmont Club.
