Do Opinion Polls Reflect or Drive Voters’ Intentions?


Are snapshots of people’s viewpoints more reliable predictors of election outcomes than horoscopes or reading tea leaves? Should they be allowed during elections?

In the run-up to elections, and between them, the public is bombarded with surveys and polls revealing the views of random samples of voters: snapshot opinions from think-tanks, focus groups, and ordinary men and women in their guises as social types, income earners, married couples, homeowners, benefit recipients, taxpayers and consumers. This is a standard marketing tool, but in elections the real question is: do poll findings accurately reflect public opinion, or drive it?

If on a given day, say, 79% of single mothers answer the voting intention question by saying it’ll be Party A, does that influence other single mothers to believe there is no point in supporting Party B? Or convince those who are not single mothers that Party B is the best one? Or dissuade anyone from voting?

Opinion Poll History: a Mixed Bag?

Polls work by extrapolating generalities from a specific sample. The first such poll is believed to have been a localised straw poll for the Harrisburg Pennsylvanian in 1824, intended to help determine who would win the US Presidency, Andrew Jackson or John Quincy Adams. In February of the following year the House of Representatives decided that Adams should be President, so the local poll got it wrong. The idea of opinion-gathering from local sampling nevertheless caught on.
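The cost of extrapolating from a sample can be put in rough numbers. For a simple random sample, the standard 95% margin of error on a reported proportion shrinks only with the square root of the sample size. A minimal sketch (the figures are illustrative, not tied to any historical poll):

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a proportion p
    measured from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A 1,000-person sample with an even split carries roughly a 3-point margin:
moe = margin_of_error(0.5, 1000)
print(round(100 * moe, 1))  # ~3.1 percentage points
```

Quadrupling the sample only halves the margin, which is why pollsters care so much more about how representative a sample is than about its raw size.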

In 1916, the Literary Digest predicted from a national survey (postcards sent to its subscribers) that Woodrow Wilson would win. He did, and the magazine simply repeated the procedure for the next four elections. By 1936, it had failed to notice that its readership had skewed towards the wealthy, so it predicted Franklin D Roosevelt would lose; he won by a landslide. Meanwhile George Gallup interviewed a smaller, more demographically representative sample, and got it right.

Another US pioneer in political forecasting was Elmo Roper who was later joined by Louis Harris. Gallup set up a UK subsidiary which successfully predicted Labour’s landslide in 1945, while all other pundits expected that Conservative Winston Churchill would win after leading the nation to wartime victory.

During the November 2008 election of Barack Obama, media pundits harnessed the latest technologies to assess the candidates’ progress on the campaign trail, including, controversially, sampling people by cell (or mobile) phone. US baseball statistician Nate Silver was more or less spot-on with his prediction of Obama taking 53% of the popular vote to McCain’s 46%, based not on his own polling but on analysing every other poll and voting model.
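The simplest version of analysing every other poll is a weighted average, with larger samples counting for more. This is a hedged sketch of that idea only, not of Silver's actual model, and the poll figures below are hypothetical:

```python
def aggregate_polls(polls):
    """Weighted average of poll results for one candidate,
    weighting each poll by its sample size.
    polls: list of (share_for_candidate, sample_size) tuples."""
    total_n = sum(n for _, n in polls)
    return sum(share * n for share, n in polls) / total_n

# Three hypothetical polls, not real 2008 data:
polls = [(0.52, 800), (0.54, 1200), (0.51, 600)]
print(round(aggregate_polls(polls), 3))  # 0.527
```

Real aggregators also weight by each pollster's track record and how recent the poll is, but even this crude average smooths out the noise in any single survey.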

Are Poll Predictions as Often Wrong as Right?

Generally in the UK, polls after 1945 called all elections correctly. In 1992, however, they went spectacularly wrong, predicting a Labour victory by 0.8% on election morning. Exit polls, taken as people left the polling stations having cast their votes, indicated a Conservative lead of 4%. The actual outcome was a 7.5% lead for John Major, who governed for five further years.

The Market Research Society concluded that this error of more than 8% came from late swing (1-2%) and wrong sample quotas (2%), leaving almost 5% attributed to people who refused to answer, or answered untruthfully about, their intention to vote Conservative.
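The arithmetic behind that decomposition is simple subtraction: whatever late swing and bad quotas cannot explain is attributed to "shy" voters. A sketch, using the mid-point of the 1-2% swing estimate:

```python
# Gap between the final polls (Labour +0.8) and the result (Conservative +7.5):
total_error = 0.8 + 7.5   # 8.3 points
late_swing = 1.5          # MRS estimate: 1-2%, mid-point taken here
quota_error = 2.0         # wrong sample quotas

# The residual is what gets attributed to refused/untruthful answers:
shy_voters = total_error - late_swing - quota_error
print(round(shy_voters, 1))  # ~4.8 points, i.e. "almost 5%"
```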

Since then, not only have polling groups proliferated, but they have become meticulous about balancing samples, taking more factors into account, such as previous and expected party loyalties, owner-occupation, self-employment, number of dependents, locality and pension-holding. ICM now assume that 60% of those who say they don’t know how they will vote will vote as they did previously; NOP assume don’t-knows will identify with the party that appeals to their economic concerns.
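The ICM-style adjustment amounts to reallocating a slice of the don’t-knows according to their recalled past vote. This sketch shows only the mechanics; the party labels, shares and the helper name are hypothetical:

```python
def reallocate_dont_knows(shares, dont_know, recall_shares, rate=0.60):
    """Reallocate a 'don't know' fraction of the sample: `rate` of them
    (60% in the ICM-style assumption) are assumed to vote as they recall
    voting last time; the rest are left unallocated.
    shares / recall_shares: dicts mapping party -> fraction."""
    adjusted = dict(shares)
    for party, recall in recall_shares.items():
        adjusted[party] = adjusted.get(party, 0.0) + dont_know * rate * recall
    return adjusted

# Hypothetical figures: 10% don't-knows whose past votes split 50/30/20
result = reallocate_dont_knows(
    {"A": 0.40, "B": 0.35, "C": 0.15}, 0.10,
    {"A": 0.5, "B": 0.3, "C": 0.2})
print({p: round(v, 3) for p, v in result.items()})
# Party A gains 3 points, B gains 1.8, C gains 1.2
```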

That information, along with how people are interviewed (at home, at work, while commuting; by phone, face to face, or by post; what age they are, what demographic grouping they occupy), is almost never released. It is therefore unsurprising that political parties commission their own polling.

Sometimes this is designed to give the answer that is wanted: ask the right questions, and a poll can be made to prove anything. TV networks, newspapers and universities have joined the scramble, setting up their own interviewing machinery.

Politicians who receive unpalatable answers from voters usually keep them for private consumption. Adam Lovejoy, writing on Stirring Trouble Internationally, is far from alone in demanding that polls be banned during election campaigns, to prevent forged statistics and undue influence on voters.

Vested interests apart, while there may be a case for such a ban in the UK, where Parliamentary election dates are not fixed, it would be impossible in the USA, where the next four-year Presidential race starts as soon as the last one is over.

Rather than follow polls, perhaps the best course for confused, bamboozled voters with no particular reason to vote one way or another is to follow the smart money: look at the bookmakers’ odds on each candidate, then back the favourite. Or the outsider. People putting down real money on a particular outcome may be more accurate than people answering a pollster’s questions. Or maybe not.
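Bookmakers' odds can be read as rough probabilities. Fractional odds of num/den imply a win probability of den/(num+den), before allowing for the bookmaker's built-in margin. A small sketch with hypothetical odds:

```python
def implied_probability(num: int, den: int) -> float:
    """Implied win probability from fractional bookmaker odds num/den.
    Ignores the bookmaker's margin (the overround), so the true implied
    probabilities across all candidates sum to more than 1."""
    return den / (num + den)

# Hypothetical odds: a favourite at 4/6 and an outsider at 5/1
print(round(implied_probability(4, 6), 2))  # 0.6
print(round(implied_probability(5, 1), 2))  # 0.17
```

Because every candidate's price is shaded in the bookmaker's favour, the raw figures overstate each chance slightly; serious comparisons normalise them to sum to one.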