Uncritically, Media Accepts Misleading Global Warming Poll

The headline on an October 9 press release from the Yale Project on Climate Change Communication and the George Mason University Center for Climate Change Communication read: “Poll Shows Americans Believe Global Warming Is Making Extreme Weather Worse.”


Mainstream media quickly reported uncritically on the poll. The Chicago Tribune ran a Reuters newswire story: “Most Americans Link Weather to Global Warming: Survey.” At the Huffington Post: “Climate Change Survey Shows Most Americans Believe Warming Is Tied To Extreme Weather Events.”

But did the poll, “Climate Change in the American Mind,” actually demonstrate this?

To evaluate the significance of the survey results, we must compare the methodology employed by the researchers with that needed to generate meaningful results about this complex and controversial topic.

Conducting surveys that measure real public opinion about global warming is difficult. Because the hypothesis that humanity is causing dangerous warming is now loudly supported by most opinion leaders — media, educators, and government — and alternative viewpoints are condemned, most citizens are reluctant to express skepticism about the issue, regardless of what they actually think. The public will often give answers contrary to their opinions in order to conform to what they believe is socially acceptable on issues where the politically correct position is clear.

Researchers must attempt to account for at least some of this unavoidable “social desirability bias” in global warming polling by crafting questions so that opinions not currently fashionable are portrayed as acceptable. They must also avoid well-known biases that are completely within survey coordinators’ control by steering clear of the following problems:

  • Writing leading questions. A question’s wording must not be structured so as to unduly favor one answer over another. Questions should also not present as implied fact information that is under dispute.
  • Acquiescence Response Bias. This is the well-understood tendency for poll respondents to agree with statements no matter what their content. This problem is especially pronounced in polls that include agree-disagree questions. Research has shown that the least educated and least informed respondents are most likely to agree with statements presented in this way.

A Pew Research Center experiment (see p. 26 here) demonstrates the impact of Acquiescence Response Bias. Respondents were asked whether they agree or disagree with the following statement: “The best way to ensure peace is through military strength.” 55% agreed. 42% disagreed. When respondents were given a choice — “The best way to ensure peace is through military strength,” or “Diplomacy is the best way to ensure peace” — only 33% supported the military strength option while 55% supported the second choice.

  • Expectancy Bias. If respondents perceive that the questioner expects, or desires, a certain answer, they are more likely to give a response that they think will please the pollster.
  • Asking respondents to judge the veracity of statements about which they have little knowledge. People often answer questions about which they know little or nothing because they do not want to appear ignorant or because they wish to please the researcher. In such cases, respondents will usually say what they have most often heard to be true, and are often strongly influenced by the Acquiescence Response Bias.

So, how does the Yale/George Mason University poll fare when evaluated against these criteria? The two questions most highlighted by survey coordinators in their report, and focused on by media, are as follows:

  1. “How much do you agree or disagree with the following statements [sic]: ‘Global warming is affecting weather in the United States’?”
  2. “Some people say that global warming made each of the following events worse. How much do you agree or disagree?”

There are obviously serious problems here. Since they are both agree/disagree questions, they clearly result in Acquiescence Response Bias, boosting the case for U.S. public belief in a global warming/extreme weather connection.

Both of these questions are also poorly formulated because each assumes that the respondent agrees that global warming is actually happening. Few people know that global warming generally stopped in about 2003, and so it could not possibly have affected events over at least the last decade. In that sense, they are also leading questions, since they imply that the pollster believes we are still in a warming phase. This then activates the problem of Expectancy Bias.

If survey coordinators wanted to determine whether respondents think there is a connection between weather and global temperature trends, they should have asked a neutral, and far more important, question:

Do you think that weather in the United States is being dangerously affected by global temperature trends?


It must be “dangerous” weather change that is being asked about. If the change is not dangerous, then while the effect of temperature trends on weather is interesting to scientists, it should not be a public policy issue at all — let alone one worth spending billions of dollars trying to “fix.” Asking whether respondents believe there is or is not global warming should be an independent question.

The choices of answers for the question about whether “global warming made each of the following events worse” are also problematic. They were:

  1. “The current drought in the Midwest and the Great Plains”
  2. “The severe storm (known as a ‘derecho’) that knocked down trees and power lines from Indiana to Washington D.C. in June of 2012”
  3. “This year’s record forest fires in Colorado and elsewhere in the American West”
  4. “Record high Summer temperatures in the U.S. in 2012”
  5. “The unusually warm Spring across the United States in 2012”
  6. “The unusually warm Winter across the United States in 2011-2012.”

All of these events would have occurred during a period of no global warming (and some, as shown below, did not occur at all). So the question makes no sense.

Average high temperatures for the U.S. as a whole did not set a record this summer. Only when the nighttime lows are averaged in with the daytime highs does this past July come out a statistically insignificant one-fifth of a degree Fahrenheit warmer than July 1936. The average daytime high in July 1936 was still higher than this past summer’s.
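To see how that arithmetic works, consider a purely hypothetical illustration (the numbers below are invented for clarity, not taken from the temperature record): if July 1936 had averaged a daytime high of 101°F and a nighttime low of 69°F, its blended mean would be 85°F; if July 2012 had averaged a lower high of 100°F but a milder low of 70.4°F, its blended mean would be 85.2°F. The later month edges ahead by one-fifth of a degree on the combined figure even though its daytime highs were lower, simply because warmer nights pull the average up.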


Similarly, the frequency, intensity, and size of forest fires have not increased.

Asking whether global warming made “events worse” implies that the events are a problem. While that is true for droughts and derechos, most people would consider an unusually warm winter anything but a problem, and a warm spring is usually a welcome relief after winter. It makes no sense to ask if good events are made worse by anything, let alone by fictitious global warming.

In another part of the survey, respondents are asked: “Has extreme weather caused more or fewer of the following problems in your local area over the past few decades?” One of the answer options is “forest fires.” But forest fires are not necessarily caused by extreme weather. Such factors as forest management play important — and typically more important — roles.

In several places in the report, the authors treat “climate change” and “global warming” as synonyms. They are not. Climate change includes many factors, including cooling — and it is cooling, not warming, that is thought to result in increased extreme weather.

Regardless, asking respondents to judge the veracity of statements about which most would know almost nothing makes little sense. The relationship between global warming and extreme weather is a topic of intense research among climate scientists who do not really know what, if any, connection exists.

Pollsters who want the results of their surveys to provide meaningful measures of public opinion must work hard to overcome the biases that lead to such problems as the spiral of silence and the bandwagon effect. The Yale/George Mason pollsters have not done this. They must redesign and then readminister their poll in accordance with the fundamental principles of public opinion surveys if they want their work to assist in public policy formulation.


In the meantime, their poll results must be taken with a rather large grain of salt.
