When Is a Climate Model 'Useful'?

Recently, a discussion on Twitter brought this aphorism from statistician George Box to my attention:

"All models are wrong but some are useful."

Like all good aphorisms, it states an essential truth that bears closer examination and deeper thought, but it also risks becoming a slogan that excuses us from that deeper thought.

Misuse of this aphorism is particularly pernicious. On the one hand, people can seize on "all models are wrong" to dismiss modeling entirely. On the other hand, it's sometimes used to excuse bad models as being useful.

What is a model?

So let's think about this idea of models. First of all, of course, this isn't talking about Cindy Crawford and Twiggy, although I'm sure pictures of either one would help my search results immensely. We're talking about models as they're used in a scientific or statistical sense.

Apple's built-in dictionary gives as its third definition:

a simplified description, especially a mathematical one, of a system or process, to assist calculations and predictions: "a statistical model used for predicting the survival rates of endangered species." [Emphasis my own, for clarity.]

The essential point here is that a model is a simplified (and for our purposes always mathematical) description of a system or process to assist calculations or predictions.

A model, in other words, is a formal mechanism to state a hypothesis about some collection of observations. It's stated mathematically, or at least capable of being stated mathematically.
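
To make that concrete, here's a minimal sketch in Python of the dictionary's example in miniature: a straight-line hypothesis fit to a handful of observations, then used to make a prediction. The numbers and the linear form are purely illustrative assumptions, not from any real study.

```python
import numpy as np

# Hypothetical observations: x could be years, y some measured quantity.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

# The model is the hypothesis: y is (roughly) a linear function of x.
# Fitting picks the slope and intercept that best match the observations.
slope, intercept = np.polyfit(x, y, deg=1)

# The payoff: the model now makes a prediction we can go out and check.
x_new = 5.0
print(f"predicted y at x = {x_new}: {slope * x_new + intercept:.2f}")
```

This little model is certainly "wrong" in Box's sense; the real relationship is surely not exactly a line. Whether it's useful depends on how well that prediction holds up.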

Science is always stated as a model, but then all knowledge is. You know that if a cat jumps from the bookshelf, the cat will fall; although you didn't state it mathematically when you first observed this as a toddler, it's certainly capable of being stated mathematically. Newton stated it mathematically, apocryphally after watching an apple fall from a tree, but actually after trying to explain the observations of Galileo, Tycho Brahe, and Kepler. And he had to invent differential calculus to do it.

The thing is, Newton's model is wrong: if he'd had the right measuring instruments, he could have seen that the apple didn't fall exactly as fast as v = at predicts, because of air resistance he didn't (and couldn't) take into account. Some three hundred years later, Einstein's General Theory of Relativity predicted that Newton was slightly wrong in a way that took careful observations by Eddington to confirm.
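
Here's a minimal sketch of that gap in Python. The drag formula is the standard closed-form solution for air resistance proportional to the square of the speed, and the terminal velocity is an assumed, illustrative number rather than a measurement of any particular apple.

```python
import math

g = 9.81           # gravitational acceleration, m/s^2
v_terminal = 35.0  # assumed terminal velocity of the falling object, m/s

def v_ideal(t):
    """Newton's idealized free fall: v = a*t with a = g, no air."""
    return g * t

def v_with_drag(t):
    """Fall with quadratic air resistance: v(t) = v_t * tanh(g*t / v_t)."""
    return v_terminal * math.tanh(g * t / v_terminal)

for t in (0.5, 1.0, 2.0, 4.0):
    print(f"t = {t:3.1f} s   ideal: {v_ideal(t):6.2f} m/s   "
          f"with drag: {v_with_drag(t):6.2f} m/s")
```

Early in the fall the two models agree almost exactly; the longer the fall, the more the idealized prediction overshoots. "Wrong" is a matter of degree, which is exactly Box's point.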

Newton's theory of gravitation was wrong, but that didn't keep it from being useful for everything from computing the trajectory of cannonballs to the arc Luigi takes in Mario Brothers.
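
And the cannonball case really is a couple of lines of arithmetic. This sketch uses the classic Newtonian range formula with no air resistance; the muzzle velocity and elevation are assumed numbers chosen only for illustration.

```python
import math

g = 9.81                    # m/s^2
v0 = 120.0                  # assumed muzzle velocity, m/s
angle = math.radians(40.0)  # assumed elevation angle

# Newtonian projectile motion without drag:
# time aloft from the vertical motion, range from the horizontal.
flight_time = 2 * v0 * math.sin(angle) / g
range_m = v0 * math.cos(angle) * flight_time  # = v0**2 * sin(2*angle) / g

print(f"time of flight: {flight_time:.1f} s, range: {range_m:.0f} m")
```

A real gunner would want drag corrections, but the idealized model is already useful: it gets the shape and scale of the trajectory right.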

What makes Newton's model useful is that it's predictive: Newton had a hypothesis about the behavior of bodies under the influence of gravity, and even though Newton had no idea what gravity was (honestly, we still don't), his hypothesis was borne out by innumerable successful predictions.