The News Isn't Getting Any Better for Pollsters

(AP Photo/John Minchillo)

It turns out that all those “shy” Trump voters aren’t really all that “shy.” In fact, they were right there all along and countable. They just never answered the phone.

The failure of polling this election cycle was even worse than it was in 2016. One problem is changing communications technology that the pollsters have yet to fully understand. What's worse, fewer and fewer people are responding to pollsters' requests for their opinions. Polling companies now have to call two or three times as many people to assemble a usable cross-section of opinion.

Meanwhile, they are getting it embarrassingly wrong.

Yes, the polls correctly predicted that Joe Biden would win the presidency. But they got all kinds of details, and a number of Senate races, badly wrong. FiveThirtyEight’s polling models projected that Biden would win Wisconsin by 8.3 points; with basically all the votes in, he won by a mere 0.63 percent, a miss of more than 7 points. In the Maine Senate race, FiveThirtyEight estimated that Democrat Sara Gideon would beat Republican incumbent Susan Collins by 2 points; Gideon lost by 9 points, an 11-point miss.
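The misses cited above are just the projected margin minus the actual margin. A minimal sketch of that arithmetic, using only the figures from the text (margins are signed, with positive meaning the Democrat ahead):

```python
# Polling miss = projected margin minus actual margin.
# Figures taken from the article; positive margin = Democrat leading.
def miss(projected, actual):
    return projected - actual

wisconsin = miss(8.3, 0.63)   # Biden projected +8.3, actually won by +0.63
maine = miss(2.0, -9.0)       # Gideon projected +2, actually lost by 9

print(round(wisconsin, 2), round(maine, 2))  # 7.67 11.0
```

Note that a candidate can lead the polls and lose the race without the miss itself being any larger, which is why the Maine error (11 points) flipped the outcome while the similar-sized Wisconsin error did not.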

Biden’s lead was robust enough to hold even with this kind of polling error, but the leads of candidates like Gideon (or apparently, though it’s not officially called yet, Cal Cunningham in North Carolina) were not. Not all ballots have been counted yet, which could change polling-miss estimates, but a miss is already evident in states like Wisconsin and Maine where the votes are almost all in.

The race between incumbent Republican North Carolina Senator Thom Tillis and Democrat Cal Cunningham brings polling's failures into stark relief. An NBC News poll the week before the election had Cunningham up on Tillis by 10 points. Other polls had Tillis trailing by 3 to 5 points. Tillis appears to have eked out a narrow victory, which raises the question: is the process at fault or the science?

Polling is all about modeling. But the models are only as good as the data fed into them. What if that data is itself corrupted?

The theory is that the kind of people who answer polls are systematically different from the kind of people who refuse to answer polls — and that this has recently begun biasing the polls in a systematic way.

This challenges a core premise of polling, which is that you can use the responses of poll takers to infer the views of the population at large — and that if there are differences between poll takers and non-poll takers, they can be statistically "controlled" for by weighting according to race, education, gender, and so forth. (Weighting increases and decreases the importance of responses from particular groups in a poll to better match their share of the actual population.) But if the two groups differ in ways those demographic categories don't capture, the results come out biased no matter how the weights are set.
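The weighting step described above can be sketched in a few lines. All of the shares below are hypothetical, invented for illustration — the point is only the mechanics: each group's responses get scaled by (population share / sample share) so the sample's demographics match the electorate's.

```python
# Minimal sketch of demographic weighting (post-stratification).
# All numbers are hypothetical, for illustration only.
sample = {"college": 0.60, "non_college": 0.40}      # shares among respondents
population = {"college": 0.35, "non_college": 0.65}  # shares among actual voters

# Weight for each group: how much to up- or down-weight its responses.
weights = {g: population[g] / sample[g] for g in sample}

# Hypothetical support for Candidate A within each group.
support = {"college": 0.58, "non_college": 0.44}

raw = sum(sample[g] * support[g] for g in sample)            # unweighted estimate
weighted = sum(population[g] * support[g] for g in sample)   # weighted estimate

print(f"raw: {raw:.3f}, weighted: {weighted:.3f}")
```

Here the raw sample overstates Candidate A's support (0.524) because college graduates are overrepresented among respondents; weighting pulls the estimate down to 0.489. This works only when the bias runs *between* the demographic groups being weighted.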

Polling is intrusive. And non-respondents tend not to trust other people with their closely held beliefs. Pollsters used to believe they could smooth out the differences between non-respondents and respondents through weighting techniques. It’s not working anymore. Non-respondents skew heavily Republican.
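Why weighting stops working is worth making concrete. If Republicans within a given demographic group answer the phone less often than Democrats in the same group, the respondents in that group look demographically identical to the non-respondents, so no demographic weight can correct the skew. A toy illustration with hypothetical numbers:

```python
# Toy illustration (hypothetical numbers): bias *within* a demographic group
# is invisible to demographic weighting. Suppose one group is truly split
# 50/50, but its Republicans answer polls at half the rate of its Democrats.
true_split = {"R": 0.5, "D": 0.5}
response_rate = {"R": 0.05, "D": 0.10}

responders = {p: true_split[p] * response_rate[p] for p in true_split}
observed_D_share = responders["D"] / sum(responders.values())

print(f"true D share: 0.500, observed among respondents: {observed_D_share:.3f}")
```

The respondents come out two-thirds Democratic in a group that is actually split evenly, and because both partisans share the same demographics, reweighting by race, education, or gender leaves the error untouched.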

Now, in 2020, data scientist David Shor argues that the differences between poll respondents and non-respondents have gotten larger still. In part due to Covid-19 stir-craziness, Democrats — particularly highly civically engaged Democrats who donate to and volunteer for campaigns — have become likelier to answer polls. It's something to do when everyone is bored, and it feels civically useful. This biased the polls, Shor argues, in deep ways that even the best polls (including his own) struggled to account for.

Changes in technology, changes in the culture, and a lack of trust in big institutions generally are all making political polling harder to get right. It’s a challenge that probably won’t be met in the next couple of election cycles, so until the pollsters prove their worth, we would all do well to ignore them.