Rocket man

Megan McArdle at the Atlantic believes that some of the data analysis and modeling problems now being found in the AGW thesis are due to confirmation bias, in which researchers’ observations are anchored on what had previously been reported. She looks at the devastating exposé on Watts Up With That? and asks “Climategate: Was the Data Faked?” But she’s not willing to concede the existence of a conspiracy, which she believes would have required too many conspirators. Instead, she posits the existence of an unconscious bias, and quotes Richard Feynman on how error crept into the measurements that followed Millikan’s electron-charge experiment to illustrate her point:

Millikan measured the charge on an electron by an experiment with falling oil drops, and got an answer which we now know not to be quite right. It’s a little bit off, because he had the incorrect value for the viscosity of air. It’s interesting to look at the history of measurements of the charge of the electron, after Millikan. If you plot them as a function of time, you find that one is a little bigger than Millikan’s, and the next one’s a little bit bigger than that, and the next one’s a little bit bigger than that, until finally they settle down to a number which is higher.

Why didn’t they discover that the new number was higher right away? It’s a thing that scientists are ashamed of–this history–because it’s apparent that people did things like this: When they got a number that was too high above Millikan’s, they thought something must be wrong–and they would look for and find a reason why something might be wrong. When they got a number closer to Millikan’s value they didn’t look so hard. And so they eliminated the numbers that were too far off, and did other things like that.

Confirmation bias “is a tendency to search for or interpret information in a way that confirms one’s preconceptions, leading to statistical errors.” A similar, but subtly different, kind of problem affected the Space Shuttle program. Let’s call it ‘incentive bias’. NASA grossly underestimated the probability of a launch failure, setting it at 1 in 100,000, because that was what the figure was bureaucratically believed to be. What it bureaucratically had to be. Richard Feynman, who was asked to look into the causes of the disaster, knew this number could not possibly be right. But he also knew how powerful an influence a bureaucratic bias could be. Among rocket scientists there was a consensus on how safe the vehicle was at launch. There was only one problem: the consensus had to be wrong.
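Feynman’s Millikan anecdote describes a statistical mechanism, and a toy simulation makes it concrete. The sketch below uses entirely invented numbers (a “true” value of 1.0 and a starting anchor of 0.92; none of this is Millikan’s actual data): each round of noisy measurements is filtered so that results sitting too far above the currently accepted figure are treated as suspect and thrown out, and the published value therefore creeps toward the truth instead of jumping to it.

```python
import random

def creeping_consensus(true_value, anchor, steps=8, noise=0.03,
                       tolerance=0.05, trials_per_step=50, seed=1):
    """Illustrative model of confirmation bias in sequential measurements.

    At each step an experimenter takes noisy measurements of the true value
    but discards any result that sits too far above the currently accepted
    figure (the anchor), on the theory that 'something must be wrong'.
    The mean of the surviving results becomes the new accepted figure.
    """
    rng = random.Random(seed)
    accepted = anchor
    record = [anchor]
    for _ in range(steps):
        raw = [rng.gauss(true_value, noise * true_value)
               for _ in range(trials_per_step)]
        # Results above the anchor (plus a tolerance) are rejected as suspect;
        # if everything were rejected, fall back to the raw batch.
        kept = [x for x in raw if x <= accepted * (1 + tolerance)] or raw
        accepted = sum(kept) / len(kept)
        record.append(round(accepted, 3))
    return record

# Made-up numbers for illustration only: the accepted value drifts upward a
# little at a time, as Feynman describes, rather than correcting in one step.
print(creeping_consensus(true_value=1.0, anchor=0.92))
```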

The first thing Feynman found while talking to people at NASA was a startling disconnect between engineers and management. Management claimed the probability of a launch failure was 1 in 100,000, but he knew this couldn’t be. He was, after all, a mathematical genius. Feynman estimated the probability of failure to be more like 1 in 100, and to test his theory he asked a number of NASA engineers to write down on a piece of paper what they thought it was. The result: most engineers estimated the probability of failure to be very close to his original estimate.
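It helps to run the arithmetic on how far apart those two figures really are. The numbers below are purely illustrative (a notional program of 100 flights, with launches treated as independent, which is itself a simplification): at the engineers’ 1 in 100, at least one loss over such a program is more likely than not; at management’s 1 in 100,000, it is a remote possibility.

```python
# Chance of at least one launch failure over a notional 100-flight program,
# under each per-flight estimate. The flight count is illustrative only.
n_flights = 100
for label, p in [("engineers, 1 in 100", 1 / 100),
                 ("management, 1 in 100,000", 1 / 100_000)]:
    p_any_loss = 1 - (1 - p) ** n_flights
    print(f"{label}: P(at least one loss in {n_flights} flights) = {p_any_loss:.2%}")
```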

He was disturbed not only by management’s illusion of safety, but by how these unrealistic estimates were used to convince a member of the public, the teacher Christa McAuliffe, to join the crew, only for her to be killed along with the six other crew members.

Feynman dug deeper and discovered a history of corner-cutting and bad science on the part of management. Management not only misunderstood the science; engineers at Morton Thiokol tipped Feynman off that it had ignored the science as well, most importantly when warned about a possible problem with an O-ring.

Feynman learned that on the space shuttle’s solid-fuel rocket boosters, an O-ring is used to prevent hot gas from escaping and damaging other parts. Engineers had raised concerns that the O-ring might not expand properly along with the rest of the hot booster parts, and so might fail to keep its seal, when outside temperatures fell below 32 degrees Fahrenheit. Because temperatures had never been that low, and there had never been a launch failure, management ignored the engineers. The temperature on launch day was below 32 degrees.

Feynman had his answer; he just had to prove it.

The perfect opportunity arrived when he was asked to present his findings at a televised hearing of the presidential commission investigating the disaster. With the cameras rolling, Feynman innocently questioned a NASA manager about the O-ring temperature issue. As the manager insisted that the O-rings would function properly even in extreme cold, Feynman pulled a sample of O-ring material, squeezed flat by a small clamp, out of a cup of ice water in front of him. He then removed the clamp. The O-ring stayed flat instead of springing back, demonstrating that it did in fact lose its resiliency when the temperature dropped.

In his own report Feynman described the terrible and corrupting influence of incentives and expectation upon science and engineering. Even literal rocket science was not exempt from human pressure. Feynman ended his discussion of the Challenger disaster with an observation that eerily speaks to the subject of “consensus” in scientific matters. Consensus doesn’t matter. Only science and engineering does. “For a successful technology, reality must take precedence over public relations, for nature cannot be fooled.”

“If a reasonable launch schedule is to be maintained, engineering often cannot be done fast enough to keep up with the expectations of originally conservative certification criteria designed to guarantee a very safe vehicle. In these situations, subtly, and often with apparently logical arguments, the criteria are altered so that flights may still be certified in time. They therefore fly in a relatively unsafe condition, with a chance of failure of the order of a percent (it is difficult to be more accurate).

Official management, on the other hand, claims to believe the probability of failure is a thousand times less. One reason for this may be an attempt to assure the government of NASA perfection and success in order to ensure the supply of funds. The other may be that they sincerely believed it to be true, demonstrating an almost incredible lack of communication between themselves and their working engineers.

In any event this has had very unfortunate consequences, the most serious of which is to encourage ordinary citizens to fly in such a dangerous machine, as if it had attained the safety of an ordinary airliner. The astronauts, like test pilots, should know their risks, and we honor them for their courage. Who can doubt that McAuliffe was equally a person of great courage, who was closer to an awareness of the true risk than NASA management would have us believe?

Let us make recommendations to ensure that NASA officials deal in a world of reality in understanding technological weaknesses and imperfections well enough to be actively trying to eliminate them. They must live in reality in comparing the costs and utility of the Shuttle to other methods of entering space. And they must be realistic in making contracts, in estimating costs, and the difficulty of the projects. Only realistic flight schedules should be proposed, schedules that have a reasonable chance of being met. If in this way the government would not support them, then so be it. NASA owes it to the citizens from whom it asks support to be frank, honest, and informative, so that these citizens can make the wisest decisions for the use of their limited resources.

For a successful technology, reality must take precedence over public relations, for nature cannot be fooled.”

Nature cannot be fooled, but man is capable of a great deal of self-deception, and politicians especially so. One commenter on Megan McArdle’s site unwittingly reiterated Feynman’s thesis. He argued that once the AGW money train began rolling, the danger of confirmation bias would rise almost unchecked. As with NASA’s Challenger, a launch schedule for carbon amelioration had been publicly announced by the politicians, the activists and the UN. McArdle’s commenter wrote eloquently of the terrible pressure to assume that the thesis had to be true, and of the horrible cost of standing in the way.

Megan,

With respect, you’re setting up a strawman. None of the scientists who have “come out” as climate skeptics allege a massive conspiracy by scientists, any more than there is a massive liberal conspiracy in Hollywood. What you have is a self-emergent, self-organizing bias. I hope I can illustrate it briefly.

I work in academic science (check my IP address if you wish). Scientists are, in general, uncompromising idealists for objective, physical truth. But occasionally, politics encroaches. Most of my work is funded by DoE, DoD, ONR, and a few big companies. We get the grants because we are simply the best in the field. But we don’t work in isolation. We work as part of a department, which has equipment, lab space, maintenance staff, IT, et cetera. We have a system for the strict partition of unclassified/classified research through collaboration with government labs. The department had set a research policy and infrastructure goal to attract defense funding, and it worked.

The same is true in climate science. Universities and departments have set policies to attract climate science funding. Climate science centers don’t spontaneously spring into existence – they were created, in increasingly rapid numbers, to partake in the funding bonanza that is AGW. This by itself is not political – currently, universities are scrambling to set up “clean energy” and “sustainable technology” centers. Before it was bio-tech and nanotechnology. But because AGW-funding is politically motivated, departments have adroitly set their research goals to match the political goals of their funding sources. Just look at the mission statements of these climate research institutes – they don’t seek to investigate the scientific validity or soundness of AGW-theory, they assume that it is true, and seek to research the implications or consequences of it.

This filters through every level. Having created such a department, the university must fill it with faculty who will carry out its mission statement. The department will hire professors who already believe in AGW and conduct research based on that premise. Those professors will hire students who will conduct their research without much fuss about AGW. And honestly, if you know anything about my generation, we will do or say whatever it is we think we’re supposed to do or say. There is no conspiracy, just a slightly cozy, unthinking myopia. Don’t rock the boat.

The former editor of the New Scientist, Nigel Calder, said it best: if you want funding to study the feeding habits of squirrels, you won’t get it. If you want to study the effects of climate change on the feeding habits of squirrels, you will. And so in these subtle ways, there is a gravitational pull towards the AGW monolith.

I think the most damning evidence for this soft tyranny is in the work of climate scientists whose scientific integrity has led them to publish results that clearly contradict basic assumptions in AGW modeling. Yet in their papers they are very careful to skirt around the issue, keeping their heads down, describing their results in a way that obfuscates the contradiction. They will describe their results as an individual case, with no greater implications, and issue reassuring boilerplate statements about how AGW is true anyway.

For the field as a whole, it’s not a conspiracy. It’s the unfortunate consequence of having a field totally dominated by politically-motivated, strings-attached money. In the case of the CRU email group, well, the emails speak for themselves. Call it whatever you want.

The kind of PR momentum behind AGW argues for greater, not less, scrutiny, because to the inherent uncertainties of scientific inquiry must be added one more: confirmation bias. Greenpeace sees what it has to see. Otherwise a lot of people in NGOs would be unemployed. That’s not to say that AGW doesn’t exist, but it does argue for careful double-checking. Has the data been faked? Better find out for sure. Those who argue that the world must embark on a multi-trillion-dollar program to climate-engineer the planet should ask themselves this. How much more confident are they in the AGW models produced at the University of East Anglia than they were in NASA’s risk analysis? How sure are they that actions taken according to the carbon model will not result in environmental catastrophe rather than amelioration, assuming arguendo that the catastrophe actually impends? How do they know this whole frigging contraption won’t blow up on launch? Is the risk one in a hundred, or one in a hundred thousand? How much resolution has been lost in the adjustments and corrections that have been applied to the data? Does anyone even know? And if they don’t know, doesn’t the precautionary principle demand that we look, and look carefully, before we leap?

In the past it was not the job of public policy to act on a bet or a maybe. To justify government intervention in economic activity, personal lifestyles, and the right to travel; to have the effrontery to prescribe how many children a population is allowed, how many sheets of toilet paper it can use, what it may purchase, and how much it can be taxed, a clear and compelling case formerly had to be put forward. Absent a compelling public interest, you were obliged to leave people alone. Without a sound foundation in “reality” it really is dangerous to regulate the world. Honest. Maybe the media had a consensus, but perhaps Megan McArdle is beginning to have her doubts.

