The Five False Assumptions Behind Poll-Skewing
• Polling companies need to be accurate in order to gain a reputation for reliability, so they have no motivation to lie.
This assumption, which the pollsters hope the public has, is partly true. A reputation for accuracy is one way for a polling company to attract clients.
But polling companies have a second motivation often at odds with and usually trumping the desire for accuracy: To give their clients (in this instance, political campaigns) what they want.
Want to see a poll that shows you're winning (so you can use those false stats to sway the electorate)? You got it!
Want us to weight the results so that opposing voters become too despondent to bother voting? You got it!
Want evidence that you were in the lead so that when voter fraud propels you to otherwise undeserved victory, it looks believable? You got it!
Campaigns will seek out any pollster who can provide them with the propaganda necessary to manipulate the election. Accuracy is only useful for secret internal polls; intentionally deceptive skewing is useful as a tool to trick voters.
Some voters have figured this out, and now place more trust in a campaign's secret internal polls than in publicly announced polls. But outsiders rarely get a glimpse of those internal polls unless they're intentionally leaked. Diabolical campaign managers have begun to realize that they can also sway those hard-to-discourage skeptical voters by "leaking" supposedly reliable internal polling numbers which support their propagandistic goal. And since the polls are secret, there's no expectation that the underlying breakdowns will be released, so the propagandists are free to concoct and release any "internal polling numbers" they desire.
All in all, the polling industry has at least as much structural motivation for corruption as it has for honesty, so we can't rely on the "marketplace" to weed out biased polls.
Five assumptions. Never questioned. All or most of them need to be true for poll-skewing to be effective. And yet upon closer inspection, none of them is proven to be true. On the other hand, there's no solid evidence that they're false, either; it's all a guessing game of untested hypotheses. From my vantage point, which in the absence of any solid data is as valid as anyone else's, many of the assumptions are not only false but inverted: The exact opposite assumption is more likely to be true.
Campaign strategists and their poll-skewing accomplices may be shooting themselves in the foot every single day by jumping to unproven conclusions about mass psychology. For all they know, every action they take backfires, and helps the opposition.
But they can't be bothered by doubts. Merrily they skew and skew, convinced of their cunning.
Thanks to a tip from "Rob Crawford" in the comments section, I have finally tracked down what I was looking for: a study which does in fact attempt to test and document what it calls "the bandwagon effect." You can download the pdf of the full study here:
The authors claim to have demonstrated experimentally that false poll reports can sway the electorate by as much as 6%. But a close examination of how the study was conducted reveals that its conclusions apply only to a highly specific situation, and are irrelevant to elections such as the contest between Obama and Romney.
In the study, which was conducted during the Republican presidential primaries in February 1996, self-identified Republicans were divided into two isolated groups, and each group was asked its preference between candidates Bob Dole and Steve Forbes. Each group was subsequently informed of a different false poll: the first group was told that a recent poll put Dole far in the lead against Forbes, while the second group was told the opposite, that a poll placed Forbes way out in front. The groups were then asked their preferences a second time, and (to gloss over the complicated statistics) the Dole-poll group swayed 6% toward Dole, while the Forbes-poll group swayed 6% toward Forbes.
While this might seem pretty convincing at first glance, there are several factors which render it meaningless vis-a-vis the current election:
• There really was very little ideological or political difference between Dole and Forbes. Both were moderate-conservative Republicans with similar positions on almost every issue. The only differentiating factor was that Forbes was pushing a flat-tax gimmick, the consequence of which wouldn't have been much different from what Dole was proposing more boringly. As a result, it was quite easy to switch from one to the other, and not a philosophical leap (as switching from Romney to Obama would be). One might as well have conducted a study proving that 6% of people would change their ice cream order from Mint Chocolate Chip to Mint Fudge Ripple once the waitress told them that Mint Fudge Ripple was more popular. But imagine if the choice was between Vanilla Pudding and Szechuan Gizzard Flambé; I doubt many people would change their orders no matter what the waitress said.
• Almost all voters rally around their party's eventual nominee, no matter who he turns out to be. Thus, it's hardly surprising that a room full of Republicans would embrace whichever Republican appeared to be winning the race. But the situation is completely different in a general election, which is not a friendly rivalry like an intra-party primary but rather a knock-down-drag-out showdown between competing worldviews. So the 1996 study is not really applicable to 2012 realities.
• The experiment was a hermetically sealed environment in which the participants had only one source of information, which they were told to trust implicitly. So it was effortless to mislead them. But a real-world campaign is far messier: try as it might, the Obama-friendly media does not have a monopoly over the narrative, and voters have to choose between competing claims — the polls are true, the polls are lies, Obama's leading, the race is a tie, Romney's ahead, and so forth. In our real-world election, the poll-debunkers have created such a fuss that even the Obama-loving media spends an inordinate amount of time discussing and trying to dismiss the doubters, which shatters its own narrative monopoly. And to top it off, record numbers of Americans distrust the media altogether.
• A secondary part of the 1996 study tested whether strong convictions could be altered by false polls. The experimenters' follow-up tests essentially proved that they can't be: strong opinions are not changed by false poll results, whereas weak opinions are. But that once again confirms that the study has no relevance to 2012, in which voters on both sides express strong support for each candidate in record numbers compared to earlier elections. It also suggests that the original subjects probably didn't really care about the difference between Dole and Forbes in the first place, since they were partly swayed by false polls.
All in all, the "Effects of Poll Reports on Voter Preferences" study made some interesting observations, but nothing in the study demonstrated that "the bandwagon effect" can induce people to change parties or fundamental ideologies — only that false polls can induce people to vacillate between two similar options.
One final note: "Effects of Poll Reports on Voter Preferences" makes oblique references to earlier studies of the bandwagon effect, though many of them were conducted decades ago. It would be interesting to dig them out and see whether any of them have relevance to our current election.
If poll-skewers are relying on evidence like this study to justify their deceptions, they may be in for a shock when election day rolls around.