On April 22, 1970, I — along with a teeming multitude of junior high school and high school kids, college students, hippies and New Leftists — participated in the first Earth Day, a “teach-in” organized by Wisconsin Senator Gaylord Nelson to make the world aware of the imminent environmental crisis it faced — starvation from Malthusian collapse, the coming Ice Age, and industrial pollution combined. In Pueblo, Colorado, we had a list of “demands” — the one I remember best was closing the City Park to car traffic so people wouldn’t be exposed to all that automobile exhaust.
It wasn’t a complete loss, as I met a couple of girls I’d have an unrequited crush on for years after, but it didn’t have much other effect locally — anti-pollution laws had already cut emissions from the CF&I steel mill significantly, and the City Park remained open to cars, even on weekends. The enabling legislation that led to the Environmental Protection Agency had been signed the preceding January, and my own enthusiasm dimmed somewhat when my father pointed out that a lot of the people who drove into City Park on summer weekends were poor people who lived in the poorer parts of town. The country club members who lived near us would be fine — it was my friends Manuel and Vern, who worked with me on the loading dock, who wouldn’t be able to visit the park.
In 1972, the Club of Rome published The Limits to Growth, which predicted the imminent environmental crisis that faced the world — starvation from Malthusian collapse, the exhaustion of oil and “nonrenewable” resources, and industrial pollution combined. The charts looked impressive, the model looked impressive — this was long before I became involved in modeling myself and learned how much of a model’s results depend on the assumptions of the modelers — and I really thought this was the real thing. My friends and I started planning a sort of neo-Mission-style adobe fort in order to survive the collapse.
That didn’t happen either; the brunette with the waist-length hair who was going to be the new Eve to my Adam never did actually sleep with me; I went off to college; the world didn’t end again. By then, I was starting to get more skeptical.
When the imminent environmental crisis of global warming faced the world, I read about it fairly widely. The model of CO2-forced warming seemed plausible, but too many of the predictions depended on models that I knew were more complicated than I would trust — and by then I’d lived through the predictions of imminent nuclear war, and nuclear winter, and nuclear winter’s baby cousin, the global cooling that would be caused if the U.S. were foolish enough to try to take back Kuwait, forcing Saddam Hussein to set fire to the oil fields, and a half dozen other imminent crises — and I had become a confirmed skeptic of imminent crises in general.
The Skeptical Environmentalist came out in 2001 — right after the imminent collapse of civilization that was to be caused by Y2K computer bugs — and I found it fascinating. The author, Bjørn Lomborg, had started out a believer, but in trying to refute the work of Julian L. Simon, he had realized that an awful lot of the imminent crises either weren’t supported by real data or were much less harmful than the solutions that were being proposed.
Still, I was only a sort of mildly interested skeptic myself; I proposed a pair of articles to PJM in 2007, one taking the pro-AGW position and one the anti, and I was finding the pro-AGW article hard to write. There were too many questions, and I was already seeing the way that the science and the politics had combined in 2004 and were shaping up for the 2008 election. Now, though, I was interested and reading much more widely.
Then, in late 2009, came the Climategate bombshell; we at PJM were among the first non-specialist media to break the story of the purloined files that showed how a relatively small clique was working to suppress research that contradicted the imminent environmental crisis predictions, while concealing the fact that its members had real problems even replicating their own research.
This whole history is a (possibly too long) lead-in to suggest that we can take a careful and appropriately skeptical look at the science, and at the reasoning behind the science, and make our own decisions, without being seduced by press-release science.
What is the hypothesis?
Basically, what I suggest is that we think about what chain of things must be true for the whole climate change hypothesis to be probable. Technically, this is called a chain of implications.
To start with, let’s set out the hypothesis clearly. The hypothesis of the Intergovernmental Panel on Climate Change has been that the evidence supports several statements:
- that the Earth’s overall average temperature is increasing and we can measure the magnitude of that warming with sufficient accuracy to reason further
- that the Earth’s climate is changing as the result of an overall increase in the global average surface temperature
- that the magnitude of this change is greater than can be accounted for by natural processes
- that the primary mechanism responsible for this warming is the change in radiative balance caused by increased CO2 content in the atmosphere (the “greenhouse effect”)
- that humans are responsible for this warming
- that this warming will continue
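The CO2 bullet above rests on a radiative-forcing relation that is itself fairly uncontroversial; the live dispute is over how much warming a given forcing produces. As a rough illustration — my own sketch, not anything from the IPCC reports, with the sensitivity values purely illustrative of the debated range — the standard simplified expression for CO2 forcing is logarithmic:

```python
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Simplified CO2 radiative forcing in W/m^2 (Myhre et al., 1998):
    delta_F = 5.35 * ln(C / C0), with C0 the preindustrial baseline."""
    return 5.35 * math.log(c_ppm / c0_ppm)

# A doubling of CO2 from the preindustrial baseline:
print(f"{co2_forcing(560.0):.2f} W/m^2")  # ~3.71 W/m^2

# The contested quantity is the sensitivity (deg C per W/m^2) that
# converts forcing into warming; the two values below are illustrative
# endpoints of the debated range, not endorsed figures.
for sensitivity in (0.3, 0.8):
    warming = sensitivity * co2_forcing(560.0)
    print(f"sensitivity {sensitivity}: {warming:.1f} deg C per doubling")
```

The logarithmic form is why each successive doubling adds roughly the same forcing; the argument in this column is not with that relation but with the model-dependent feedbacks folded into the sensitivity number.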
What are the questions?
So now, let’s consider what questions we have to answer to confirm this hypothesis — or better, what observations might disprove it.
Is there observable warming?
This is the obvious first step: if global warming is an issue, there has to be global warming. There seems to be general agreement that there has been some warming, at least over the 300 or so years since the thermometer was invented and there was some agreement on temperature scales. On the other hand, the distribution of thermometers around the world was very sparse at first, and is surprisingly sparse now; estimates of the global average surface temperature (GAST) still depend on statistical processes applied to the measurements that do exist. As a result, the estimates of GAST vary from source to source — as we’d expect.
There is an additional confounding factor, though: the measurements themselves can be affected by systematic error — that is, errors in the process by which the original measurements are made that bias the results in a consistent direction. As Anthony Watts and others have shown, siting changes — things like an air conditioner blowing its warm exhaust on a measurement station, or an asphalt parking lot being built where there had been an open field — have made a number of the surface stations in the United States questionable.
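To see why the statistical processing matters, here is a deliberately toy sketch — synthetic numbers, not real station data — of the gridding and area-weighting that any GAST estimate has to do. Real products are far more elaborate (they grid in longitude as well, handle anomalies against baselines, and infill), but they face the same sparse-coverage problem:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical sparse "stations": latitudes and temperature anomalies.
# Both are made-up illustrative values, not real observations.
stations_lat = rng.uniform(-90, 90, size=40)
anomalies = rng.normal(0.5, 0.3, size=40)

# Average stations into 30-degree latitude bands, then weight each
# band by the area it represents (proportional to cos(latitude)),
# so a cluster of stations in one region doesn't dominate the globe.
edges = np.arange(-90, 91, 30)
band_means, band_weights = [], []
for band_lo, band_hi in zip(edges[:-1], edges[1:]):
    mask = (stations_lat >= band_lo) & (stations_lat < band_hi)
    if mask.any():
        band_means.append(anomalies[mask].mean())
        band_weights.append(np.cos(np.radians((band_lo + band_hi) / 2)))

gast_estimate = np.average(band_means, weights=band_weights)
naive_mean = anomalies.mean()
print(f"area-weighted estimate: {gast_estimate:.3f}")
print(f"naive station mean:     {naive_mean:.3f}")
```

The two numbers differ, and every real dataset makes many more choices of this kind — which is exactly why independently produced GAST estimates disagree with one another at the margins.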
Is the Earth’s climate changing as a result?
Looked at narrowly, of course, this is a tautology: if the temperature is increasing, the climate is necessarily changing. The implication, though, is that it’s changing in harmful ways. Again, there is certainly some agreement that the Earth’s overall climate has changed over that same interval. The 300-to-400-year span since the invention of effective thermometers starts roughly at the depth of what is sometimes called the Little Ice Age, a period in which winters were significantly colder and harder.
There is less agreement about the magnitude of the change, and about how warm the preceding period, which is called the Medieval Warm Period or Little Climatic Optimum, really was.
Can the magnitude of this change be accounted for naturally?
Now we get to the meat of the argument. In 1998, Michael Mann and coauthors published a paper with a chart that became famous as the “Hockey Stick,” shown here in its best press-release-science form (taken from thinkprogress.org).
Notice several things about it — a pleasant cool mottled background, a very faint shading around the line in the past, the very flat downward-sloping green trend line, and the nice fiery red-and-orange line leaping upward. The faint beige is an attempt to represent error bands.
Mann and the others were reconstructing temperatures based on a variety of temperature proxies, such as patterns of tree growth. (You’ll notice the temperature values here go back long before thermometers.) These, as with many other parts of this argument, are based on statistical models — which isn’t to say they’re necessarily wrong, just that we can justly be suspicious of the results. For example, you may notice that this version doesn’t show a pronounced Medieval Warm Period at all, which isn’t very consistent with historical accounts of the weather at the time.
Now look at this version, which was published with an article about Dr. Mann.
The error bands are — how to say this? — somewhat wider. What’s more, the Medieval Warm Period is back, with a peak that actually is slightly higher than current temperatures. In fact, considering the error bars it’s hard to say that there has actually been any real change in GAST at all.
Mann et al. based their original reconstructed temperatures on a statistical process that selected which data to emphasize — essentially, which data to trust — and which data was less worthy of trust. In the interval between the Think Progress chart and the more recent one, McIntyre and McKitrick had published several papers critical of the approach used by Mann et al., culminating (in my mind at least) in a paper in which they showed that feeding random numbers to Mann’s original approach would produce the same “hockey stick”. (A more approachable explanation of this can be found here, and Rand Simberg wrote about it on PJ Media here.)
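The McIntyre and McKitrick point can be illustrated with a toy simulation — my own sketch of the idea, not their code, and all parameters here are illustrative. Generate persistent red noise containing no climate signal at all, then “short-center” each series on only its closing segment (roughly the MBH98-style decentering they criticized) rather than on its full length, and the leading principal component tends to come out blade-shaped anyway:

```python
import numpy as np

rng = np.random.default_rng(0)

def ar1_noise(n_series, length, rho=0.95):
    """Red-noise (AR(1)) pseudo-proxies with no real signal in them."""
    x = np.zeros((n_series, length))
    eps = rng.standard_normal((n_series, length))
    for t in range(1, length):
        x[:, t] = rho * x[:, t - 1] + eps[:, t]
    return x

def leading_pc(series, center_window=None):
    """First principal component; optionally 'short-center' on only
    the final `center_window` steps instead of the full record."""
    if center_window is None:
        centered = series - series.mean(axis=1, keepdims=True)
    else:
        centered = series - series[:, -center_window:].mean(axis=1, keepdims=True)
    # SVD of the (series x time) matrix; rows of vt are time patterns.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[0]

def hockey_stick_index(pc, blade=100):
    """How far the closing 'blade' departs from the rest, in std units."""
    return abs(pc[-blade:].mean() - pc[:-blade].mean()) / pc.std()

n_trials, n_series, length = 50, 70, 600
hsi_centered, hsi_short = [], []
for _ in range(n_trials):
    proxies = ar1_noise(n_series, length)
    hsi_centered.append(hockey_stick_index(leading_pc(proxies)))
    hsi_short.append(hockey_stick_index(leading_pc(proxies, center_window=100)))

print(f"mean blade index, full centering:  {np.mean(hsi_centered):.2f}")
print(f"mean blade index, short centering: {np.mean(hsi_short):.2f}")
```

The mechanism is simple: short-centering inflates the apparent variance of any noise series that happens to drift during the calibration window, so the PCA preferentially weights exactly those series — and the “signal” it extracts is an artifact of the centering choice.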
An ad hoc committee of statisticians, convened at the request of the House Energy and Commerce Committee, was formed to evaluate Mann’s statistical methods, and it found them to be flawed. The resulting report, called the Wegman Report after its lead author, Edward Wegman, has been criticized repeatedly, but never for its actual methodology.
The argument has been that the “hockey stick” shows an unusual warming in the last roughly 200 years. When you consider this, I think the most important point to consider is the “null hypothesis” I wrote about a few weeks ago. Basically, given those error bands, if there has been significantly more warming than we would expect, could we tell?
Is the major mechanism causing the warming a result of increasing CO2 concentration?
Again, there are a number of modeling approaches that suggest it should be, and again, all models should be considered suspect (even my own). There are, at the least, several competing hypotheses that ought to be considered:
- There may be variations in solar output that account for these changes.
- There may be systematic errors in the data itself.
- There may have been other mechanisms at play that are human-caused but not CO2-based.
Will the warming continue?
The imminent environmental crisis of climate change is based on, once again, the predictions of climate models. If the models are correct, then there will be much greater warming in the future, which is predicted to have various catastrophic effects.
We looked at that along with the null hypothesis in a previous column. Basically, there’s one real problem — the real climate refuses to behave correctly. I went into this at length then, so I won’t repeat the whole argument, but the basic point is this: the actual observed temperatures have been flat for almost 20 years, and are now at the edge of the confidence interval — that is, the modelers would have taken a 20-1 bet against the temperatures staying this low.
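The “20-1 bet” is just the flip side of a 95 percent confidence interval, and the check itself is mechanically simple. Here is a sketch with made-up numbers standing in for a model ensemble — the point is the logic of the test, not any particular dataset:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical ensemble: 30 model runs' decadal warming trends
# (deg C / decade). Illustrative values, not real model output.
model_trends = rng.normal(loc=0.25, scale=0.08, size=30)
observed_trend = 0.09  # an illustrative "flat-ish" observed trend

# The 95% range of the ensemble: a 20-1 bet against falling outside it.
lo, hi = np.percentile(model_trends, [2.5, 97.5])
inside = lo <= observed_trend <= hi
print(f"95% ensemble range: [{lo:.2f}, {hi:.2f}] deg C/decade")
print(f"observation {observed_trend:.2f} inside range: {inside}")
```

If observations sit at or beyond the edge of that range for an extended period, the honest responses are to widen the stated uncertainty or revisit the models — which is the null-hypothesis question again.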
There are no conclusions
Every scientific paper is supposed to end with a “Conclusions” section, and that’s always a fiction: there are no conclusions in science, just our best knowledge at the time.
Now that you have the questions, you can read up on the topic. Draw your own conclusions.