Climategate: The Perils of Global Warming Models

So the bottom-line question is this: Is there any way that a computer program can correctly handle all of these real-world possibilities -- even in this simple debt collection case? The answer is no.

We have considerable difficulty just translating the relatively simple thing we call language — e.g., Greek biblical texts into English. How many versions of the Bible are there? Why isn’t there just one?

Can we possibly hope to translate a process much more complicated than just words? We can certainly try, but clearly the answer is that there is a lot lost in the translation of any complex scenario (debtors, energy performance, etc.) into mathematical equations and computer code.

Some uninformed parties believe that the user controls all of the variables, and can manually (and accurately) change scenarios. That is incorrect: the user-controlled elements represent only a small fraction of the factors actually built into the computer model.
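To illustrate the point, here is a purely hypothetical sketch (every function name and constant is invented for this example, not taken from any real model): a simulation may expose two or three knobs to the user while many other factors sit hard-coded inside, invisible and unadjustable.

```python
# Hypothetical toy "projection model" -- all names and numbers are
# invented for illustration.

def project_output(capacity_mw, years, growth_rate=0.02):
    """The three arguments are the only things the user can adjust."""
    # Everything below is fixed inside the code -- the user never sees it.
    HOURS_PER_YEAR = 8760   # assumption: no leap years
    AVAILABILITY = 0.95     # assumption: 5% downtime, constant forever
    DEGRADATION = 0.005     # assumption: linear 0.5%-per-year decay
    total = 0.0
    for year in range(years):
        effective = capacity_mw * (1 + growth_rate) ** year
        effective *= AVAILABILITY * (1 - DEGRADATION * year)
        total += effective * HOURS_PER_YEAR
    return total

print(project_output(10, 5))
```

Change any one of the buried constants and the "scenario" changes — yet a user running the model would have no way to know.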

A similar fallacy is to think something like “we know the assumptions that the programmers made, and are adjusting accordingly.” This is wrong.

In writing a computer program of any complexity, there are literally hundreds of assumptions made. The computer programmer does not reveal all these to his customer, for much the same reasons that an accountant does not tell his client all of the assumptions made in preparing a tax return. He goes over a few of the more basic items, and then says “sign here.”
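Returning to the earlier debt-collection example: even a trivial interest calculation embeds unstated assumptions. The following sketch is hypothetical (the function and its choices are mine, for illustration only), but it shows how many decisions hide inside a few lines of code:

```python
# Hypothetical debt-balance calculation. Every choice below is an
# unstated assumption the programmer made, not a law of nature.

from decimal import Decimal, ROUND_HALF_UP

def monthly_balance(principal, annual_rate, months):
    """Compound a debt monthly and round to cents."""
    balance = Decimal(str(principal))
    # assumption: monthly rate is annual/12, not (1+r)**(1/12) - 1
    monthly_rate = Decimal(str(annual_rate)) / 12
    for _ in range(months):
        interest = balance * monthly_rate
        # assumption: round to cents every month, half-up
        # (banker's rounding, truncation, or rounding only at the
        # end of the term would each give a different answer)
        balance = (balance + interest).quantize(
            Decimal("0.01"), rounding=ROUND_HALF_UP)
    # assumptions not handled at all: late fees, partial payments,
    # grace periods, variable rates, leap years, currency conversion
    return balance

print(monthly_balance(1000, 0.12, 12))
```

If ten lines of code can hide this many judgment calls, a model with hundreds of thousands of lines hides correspondingly more.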

Oh, yes, this example brings up still another major variable (#7): the data the programmer uses as the basis for his creation.

Just as preparing a tax return depends on two parties working together, writing a computer model is a collaboration between scientist and programmer. If the taxpayer gives incomplete or inaccurate data to the accountant, the result will be wrong. What’s disconcerting is that in many cases, neither party will know that the results are in error.

Similarly, if the scientist (inadvertently) gives incomplete or inaccurate data to the programmer to use in his creation, the result will likewise be wrong. And neither party will know it.

There is still one more significant variable (#8) that we have to take into account. After a computer model is generated, there is an interpreter (e.g., IPCC) that translates the “results” for politicians and the public (i.e., the media).

Here’s a surprise: These public interpretations are influenced by such factors as political, religious, environmental, financial, and scientific opinions. In their public revelations, do the interpreters explain all of their underlying biases? By now you know the answer: absolutely not.

When these are introduced into the equation, we have obviously strayed so far from scientific fact that it is no longer even in sight.

So we need to think very carefully before we take major actions (e.g., spend a few trillion dollars based on climate predictions, wind energy projected performance, etc.) that are almost entirely based on computer models.

What to do? Should we just scrap all computer models?

No, that’s the other extreme. Computer models have merit -- but shouldn’t be the tail wagging the dog.

We should realistically see computer models for what they are -- tools to assist us in organizing our thoughts, and producers of highly subjective results that are simply starting points for real scientific analysis.

Because of their inherent limitations (which I’ve just touched on here), all computer models should be treated with a very healthy degree of skepticism.

To ensure appropriate integrity, all computer models regarding matters of importance should be subjected to the rigors of scientific methodology.

If they can’t accurately and consistently replicate real-world results, then they should be discarded.

Unfortunately, that is not what is happening.

We have gotten so addicted to the illusion that these programs are accurate -- and some have become so agenda-driven -- that we are now adjusting or discarding real-world data that doesn’t agree with the model. This is insane.

If a model has not been proven to fully reflect reality, then it has very limited use and should be treated with the same degree of consideration that one might give a horoscope.