The Truth About Polling: Yes, Romney Is Probably Tied or Winning

It isn’t unusual for the state of the polls to be a big issue this close to an election, but this week has been different from any previous campaign I remember — it’s not who’s ahead in the polls, it’s the polls themselves that are the big topic of discussion. My article “Skewed and Unskewed Polls” got picked up by both Drudge Report and Rush Limbaugh’s Stack O’ Stuff and — along with plenty of other contributions — helped make how polls are conducted into a national topic.

The problem is that the discussion (as happens only too often) is now being led by people who don’t understand the whole topic very well. So let’s just talk about this a bit. I promise to almost completely eliminate the math; believe it or not, people can learn to reason about statistics without learning the central limit theorem.

Imagine a future day when, through technology and psychic powers, we can at any moment take an instant poll, checking what people mean in their heart of hearts to do when they vote in November. (And let’s not think what else could be done with that technology — this is a thought experiment.)

Secretary Dumbledore, Minister of Polling, goes into the office and pushes the appropriate button, and the poll is taken, click. It’s 51,267,303 for Romney and 49,109,941 for Obama, with 2,007,007 undecided. Does this tell us how the election is going to really come out? No, because it’s still 40 days (and nights) into the future. People may change their minds. Some people will die unexpectedly. And those two million-odd undecideds will have to either decide how to vote or decide not to vote at all.

And that’s the way it would work if we had this Perfect Magical Polling Wizardry. That would be a perfect poll.

Of course, we don’t. So this is what real polling companies do:

— First, they get a whole bunch of phone numbers. (When I was a kid, they actually sent pollsters door to door, but that’s pretty much died out.)

— Then, they establish some rules: the people qualified to answer are adults, or adults who are registered to vote. They want to make sure to sample enough of important demographics (women, minorities, and so forth), and they don’t want to sample too many people, because it costs them between $10 and $100 per person they call. So they set a goal for the number of respondents they want, say 1,000.

— They start to make calls, and at each call they ask some questions. Let’s say they call me, and I tell them I’m a male, mixed-race, Buddhist, registered Republican. If by chance they’ve already called the other one, they may say “okay, thanks, that’s all we need.” Or, they may go ahead and ask their questions.

— They repeat this thousands of times. Yes, thousands, plural: some people don’t answer, some people hang up, some say “Mommy’s in the bathroom and I’ve got a new kitty, want to hear him?” Eventually, they get a big enough sample.

(How, you may ask, do they know what “big enough” is? There’s a mathematical way of estimating what the probable error will be for a given population and sample size. You can look it up — look for “standard error” and “confidence intervals.” The main thing to remember is that it quickly leads to diminishing returns. You’ve got to get lots more respondents to improve the accuracy very much. This is why nearly every national poll has more or less the same margin of error: that’s as much accuracy as the press will pay for.)
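
For the curious, here is a minimal sketch of that calculation in Python, using the standard simple-random-sample formula at 95 percent confidence and the worst-case assumption of an even split. The sample sizes are just examples, not any particular pollster’s numbers:

```python
# A minimal sketch of the textbook margin-of-error formula for a simple
# random sample at 95 percent confidence, using the worst case p = 0.5.
# Real polls use fancier designs, but the diminishing returns look the same.
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95 percent margin of error for a sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (250, 500, 1000, 2000, 4000):
    print(f"n = {n:>4}: plus or minus {100 * margin_of_error(n):.1f} points")
```

Quadrupling the sample from 1,000 to 4,000 only cuts the error roughly in half, which is why most national polls stop at about the same size.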

The thing about this is that these people have been picked as closely as possible to be a random sample. Ideally, they use dice or something like them to pick which people’s numbers they call; practically, they can’t always get it perfectly random — what if you’re calling a Texas football town and the high school game is that night? — but they do try.

Finally, though, they have their sample, and it includes enough people according to their rules. But: because they have been picked randomly, they almost certainly don’t exactly represent the real population.

Think about it: the pool is 100 million people, and they can only pick 1,000 of them. There are lots and lots of ways to pick that random set (lots and lots: the interested reader can go to Wolfram Alpha and type in “100 million choose 1000” to see how many), and almost none of them will be a perfectly representative sample.
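
Or, if you’d rather not leave your keyboard, a couple of lines of Python (my illustration, not anything a pollster runs) give the same count Wolfram Alpha would:

```python
# The number of distinct 1,000-person samples you can draw from a pool
# of 100 million people is "100 million choose 1,000".
import math

count = math.comb(100_000_000, 1000)
print(f"100 million choose 1000 is a number with {len(str(count)):,} digits.")
```

The answer runs to thousands of digits, and only a vanishingly small fraction of those possible samples mirror the electorate exactly.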

Instead, the sample they end up with may come out as, say, 63 percent Democrats and 32 percent Republicans.

Now, here’s where the pollster’s special magic comes in. Through mathematical methods, they can reweight the raw numbers to reflect a different assumed population. So they create a mathematical model of the population, and they adjust the raw results to match that model. In an election poll like the ones we’re talking about, that model is called a turnout model, and it comes down to saying that you expect the electorate to be, say, 37 percent Democrats, 32 percent Republicans, and 31 percent independents.

So, the polling company takes their actual raw data, and they fit it to that turnout model, and that’s what they present as the result of this poll.
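
To make the mechanics concrete, here is a toy sketch of one simple version of that adjustment. Every number in it is invented for illustration, and real pollsters weight on several demographic variables at once, not just party ID; the point is only that the candidate split within each party group comes from the raw data, while the group shares come from the turnout model:

```python
# Toy illustration of fitting raw poll data to a turnout model.
# All numbers are invented; real weighting uses several variables at once.

# Raw sample: the share of respondents in each party group, and how each
# group split between the candidates (Obama, Romney).
raw_share = {"D": 0.43, "R": 0.32, "I": 0.25}
split = {"D": (0.92, 0.08), "R": (0.07, 0.93), "I": (0.48, 0.52)}

# Turnout model: what the pollster assumes the electorate will look like.
turnout_model = {"D": 0.37, "R": 0.32, "I": 0.31}

def topline(shares):
    """Combine party-group shares with each group's candidate split."""
    obama = sum(shares[p] * split[p][0] for p in shares)
    romney = sum(shares[p] * split[p][1] for p in shares)
    return 100 * obama, 100 * romney

raw = topline(raw_share)
weighted = topline(turnout_model)
print(f"Raw sample: Obama {raw[0]:.0f}, Romney {raw[1]:.0f}")
print(f"Weighted:   Obama {weighted[0]:.0f}, Romney {weighted[1]:.0f}")
```

In this made-up case, the D-heavy raw sample flatters Obama by a few points, and the weighting pulls the topline back toward the assumed electorate.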

Those results, however, depend on two things: how well the sample matches the real population, and how well the turnout model fits what people actually do on that longed-for day when the election finally happens and we can stop fretting about polling companies.

So with all that in mind, now let’s think about how to read a poll.

First: when you read a poll, you’ll always see it stated as something like “53 percent Democratic, margin of error of plus or minus 3.5 percent.” Here’s how you should read that: “The polling company believes that if the election were held today, there is 1 chance in 20 that the actual result will be more than 56.5 percent Democrat or less than 49.5 percent Democrat.”

You see, that’s what the “margin of error” really means: by statistical methods, they believe that 19 out of 20 times — or 95 percent of the time — the real value would come out between 49.5 percent and 56.5 percent.
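
If it helps to see the arithmetic spelled out, a tiny sketch does it (the function name is mine, just for illustration):

```python
# Turning a reported topline and margin of error into the 95 percent band.
def confidence_band(topline, margin):
    """Return the range expected to contain the true value 19 times in 20."""
    return topline - margin, topline + margin

low, high = confidence_band(53.0, 3.5)
print(f"19 times out of 20, the true value should fall between {low} and {high} percent.")
```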

Second, and this is where the controversy is coming now: the quality of those results depends on the accuracy of that original model.

So, when a Quinnipiac poll comes out and says that Ohio is now 53 percent Obama, what should you do? Ready, class?

First, we restate it. I can’t readily find the margin of error for this poll (which should make you suspicious from the start), but if you can’t find one, you’ll always be close to right if you assume it’s plus or minus 3 percent. So we re-read this as “they’re saying there’s 1 chance in 20 it will come out either above 56 or below 50.”

Second, we look for the poll’s “internals” — in other words, that turnout model. Now, here’s where this poll gets interesting. Hugh Hewitt interviewed Peter Brown of the Quinnipiac University Polling Institute and challenged him directly on the turnout model:

HH: I want to start with the models, which are creating quite a lot of controversy. In Florida, the model that Quinnipiac used gave Democrats a nine-point edge in turnout. In Ohio, the sample had an eight-point Democratic advantage. What’s the reasoning behind those models?

PB: Well, what is important to understand is that the way Quinnipiac and most other major polls do their sampling is we do not weight for party ID. We ask voters, or the people we interview, do they consider themselves a Democrat, a Republican, an independent or a member of a minor party. And that’s different than asking them what their party registration is. What you’re comparing it to is party registration. In other words, when someone starts as a voter, they have the opportunity of, in most states, of being a Republican, a Democrat, or a member of a minor party or unaffiliated … (emphasis added)

A little later, he says:

HH: Why would you guys run a poll with nine percent more Democrats than Republicans when that percentage advantage, I mean, if you’re trying to tell people how the state is going to go, I don’t think this is particularly helpful, because you’ve oversampled Democrats, right?

PB: But we didn’t set out to oversample Democrats. We did our normal, random digit dial way of calling people. And there were, these are likely voters. They had to pass a screen. Because it’s a presidential year, it’s not a particularly heavy screen.

In other words, the way Quinnipiac does it is they assume that the thousand or so people they poll on a certain day really do represent the population, and if that comes out with a D+8 distribution, that’s the way they report it.

Now, it’s tempting to jump to the conspiracy theory and think Quinnipiac is slanting the poll, but they’re being very upfront about it. They have a method they like and they’re running with it. They’re not adjusting to an assumed turnout model at all, which in a real-world way seems at least as reasonable as imposing one. But the effect is the same: whether the sample is skewed or the assumed turnout model is off, if what they’re going with doesn’t match the real world, the results will be skewed.

So what do you, as a lay reader, do? You can work out an exact correction with a little algebra, but you’ll be pretty close if you assume that every percentage point you correct the party distribution also shifts the result by one percentage point. So, if we have, say, Obama 53, Romney 45 and a D+8 sample, we can look at a range of assumptions (a short sketch after the table shows one way to run the numbers):

Turnout   Obama   Romney
D+8       53      45
D+7       52      46
D+6       51      47
D+5       50      48
D+4       49      49
D+3       48      50
D+2       47      51
D+1       46      52
Even      45      53
D-1       44      54
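
If you want to run these numbers yourself, here is a short sketch that generates the same table with the one-point-per-point rule of thumb described above (the code is mine for illustration; the starting figures Obama 53, Romney 45, and D+8 come from the poll under discussion):

```python
# Apply the "one point of turnout shift moves each candidate one point"
# rule of thumb to a poll reported as Obama 53, Romney 45 on a D+8 sample.
obama, romney, assumed_edge = 53, 45, 8

print("Turnout   Obama   Romney")
for d_plus in range(assumed_edge, -2, -1):        # D+8 down to D-1
    shift = assumed_edge - d_plus                 # points of correction applied
    label = f"D+{d_plus}" if d_plus > 0 else ("Even" if d_plus == 0 else f"D{d_plus}")
    print(f"{label:<9} {obama - shift:<7} {romney + shift}")
```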

Now, you’re ready to think for yourself. Do you really think turnout in Ohio will be 38 percent Democrat, 30 percent Republican? That was how it went in 2008, with the Lightworker in full cosmic wonder.

But notice that if the real turnout is only D+7, we’re within that margin of error. If it’s D+4, the odds are 50/50. If it’s D+1, the poll is starting to really predict a win for Romney.

The real lesson of all of this is: the only poll that really matters is the one on Tuesday, November 6. Or if you’re a Democrat, Wednesday, November 7.
