A lot of the points I made in my essay on how not to report polls boiled down to not taking a poll in isolation. Not making the outlier the story, comparing only like with like, not cherry-picking – they all amount to much the same thing, especially on voting intention.
In the last couple of days I’ve watched people getting overexcited over two polls. Yesterday’s ICM poll provoked lots of Tory excitement on Twitter, with comments about the Labour lead falling, it being a terrible poll for Labour and so on. ICM’s poll, of course, did not show Labour’s lead falling at all – it showed it steady for the fourth month in a row; ICM’s methodological approach simply produces consistently lower Labour leads. Saturday night saw the usual flurry of excitable UKIP comments on Twitter about the party being on the rise and becoming the third party after the Survation poll was published, conveniently ignoring the fact that 95% of polls this year have had them in fourth – often by a very long way. There was, needless to say, no similar excitement over UKIP being on 4%, eleven points behind the Lib Dems, in yesterday’s ICM poll.
Different pollsters take different approaches to things like weighting, likelihood to vote, how they deal with don’t knows, how they prompt and so on. While all the pollsters are politically neutral, these choices do have some consistent partisan effects – for example, ICM’s methods tend to produce the highest levels of support for the Liberal Democrats, while YouGov’s tend to produce the lowest. The graph below shows an estimate of the partisan house effects of each polling company’s voting intention methodology, calculated by comparing each company’s poll results to the rolling average of the YouGov daily poll (1)
[Graph: estimated partisan house effects of each company’s voting intention methodology]
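For anyone who wants to see the mechanics, here is a rough sketch in Python of the sort of calculation described in footnote (1). The data layout, column names and party codes are my own inventions for illustration, and it assumes one YouGov poll per day – a real implementation would need to handle fieldwork dates rather more carefully.

    import pandas as pd

    PARTIES = ["CON", "LAB", "LDEM"]

    def house_effects(polls: pd.DataFrame) -> pd.DataFrame:
        """polls: one row per poll, with a 'date' (datetime), a 'company'
        and one vote-share column per party."""
        polls = polls.sort_values("date")
        yougov = polls[polls["company"] == "YouGov"].set_index("date")
        # Rolling 5-day average of the YouGov daily poll as the reference series
        reference = yougov[PARTIES].rolling("5D").mean()
        others = polls[polls["company"] != "YouGov"]
        # Each poll's difference from the reference value at its date
        diffs = pd.DataFrame(
            others[PARTIES].to_numpy()
            - reference.reindex(others["date"], method="ffill").to_numpy(),
            columns=PARTIES,
        )
        diffs["company"] = others["company"].to_numpy()
        effects = diffs.groupby("company").mean()
        effects.loc["YouGov"] = 0.0  # zero by construction so far
        # Re-centre on the per-party average difference, so that YouGov
        # isn't automatically the mid-point
        return effects - effects.mean()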
YouGov, ICM and ComRes’s online polls tend to show the highest shares of the vote for the Conservative party. In YouGov’s case, however, this is cancelled out by a tendency to also show the highest levels of support for Labour, with the result that ICM show the lowest Labour leads while YouGov tend to show some of the highest, after Angus Reid and TNS. For the Liberal Democrats, ICM show far higher support than any other company, averaging plus 3.3 points; next highest are Survation and ComRes’s telephone polls. At the opposite end of the spectrum, YouGov tend to show significantly lower Liberal Democrat support.
It would take a much longer post to dissect the full methodology of each pollster and the partisan implications, but to pick up the general methodological factors that contribute to the house effects:
How pollsters account for likelihood to vote. Some companies, like YouGov and Angus Reid, take no account outside of election campaigns of how likely people say they are to vote (2). Companies like ICM and Populus weight by how likely people say they are to vote, so that someone who says they are 10/10 certain to vote counts for much more than someone who puts their chances at only 5/10. At the opposite end of the scale from YouGov, Ipsos MORI include only those people who are 10/10 certain to vote, and exclude everyone else from their topline figures. Other twists here are ICM, who also heavily downweight anyone who says they didn’t vote in 2010, and ComRes, who use a much harsher likelihood-to-vote question for people voting for minor parties than for the big three. Most of the time Conservative voters say they are more likely to vote than Labour voters, so the more harshly a pollster weights or filters by likelihood to vote, the better it is for the Tories.
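To make the difference concrete, here is a minimal sketch in Python of the two broad approaches – weighting by stated likelihood against filtering on the 10/10 certain. The respondents below are hypothetical, and real schemes have many more wrinkles.

    # Hypothetical respondents: declared vote plus 0-10 likelihood to vote
    respondents = [
        {"vote": "CON", "likelihood": 10},
        {"vote": "LAB", "likelihood": 7},
        {"vote": "LDEM", "likelihood": 5},
    ]

    def weighted_shares(sample):
        """ICM/Populus-style: weight each respondent by stated likelihood."""
        total = sum(r["likelihood"] / 10 for r in sample)
        shares = {}
        for r in sample:
            shares[r["vote"]] = shares.get(r["vote"], 0) + (r["likelihood"] / 10) / total
        return shares

    def filtered_shares(sample):
        """MORI-style: count only those 10/10 certain to vote, drop the rest."""
        certain = [r for r in sample if r["likelihood"] == 10]
        return {party: sum(r["vote"] == party for r in certain) / len(certain)
                for party in {r["vote"] for r in certain}}

    print(weighted_shares(respondents))  # CON ~0.45, LAB ~0.32, LDEM ~0.23
    print(filtered_shares(respondents))  # CON 1.0 - only the 10/10 respondent remains

In this toy sample the harsher the treatment of likelihood to vote, the better the Conservatives do, which is exactly the pattern described above.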
How pollsters deal with don’t knows. Somewhere around a fifth of people normally tell pollsters they don’t know how they would vote in an election tomorrow. Some pollsters, like YouGov, simply ignore these respondents. Some, like MORI, ask them a “squeeze question” along the lines of “which party are you most likely to vote for?”. Others estimate how those people would vote using other information from the poll, such as party ID (ComRes) or how they say they voted at the previous election (ICM and Populus). These adjustments tend to help parties that have lost support since the last general election – so currently ICM and Populus’s adjustment tends to help the Liberal Democrats and, to a lesser extent, the Conservatives. In past Parliaments it has helped the Labour party.
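As a toy illustration of that kind of reallocation, here is a short Python sketch in which a fixed fraction of don’t knows are assumed to return to the party they recall voting for in 2010. The 50% rate and all the figures are hypothetical – the real adjustments are rather more refined.

    REALLOCATION_RATE = 0.5  # hypothetical fraction of don't knows who "come home"

    def reallocate_dont_knows(counts, dont_knows_by_2010_vote):
        """counts: current voting-intention counts by party.
        dont_knows_by_2010_vote: don't knows grouped by recalled 2010 vote."""
        adjusted = dict(counts)
        for party, n in dont_knows_by_2010_vote.items():
            adjusted[party] = adjusted.get(party, 0) + REALLOCATION_RATE * n
        total = sum(adjusted.values())
        return {party: round(100 * n / total, 1) for party, n in adjusted.items()}

    # Don't knows who voted Lib Dem in 2010 nudge the Lib Dem share back up
    # (from 11.0 to 13.8 here), mirroring the pro-Lib Dem effect described above
    print(reallocate_dont_knows({"CON": 330, "LAB": 400, "LDEM": 90},
                                {"LDEM": 60, "CON": 30, "LAB": 10}))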
How the poll is conducted. About half the current regular pollsters do their research online, about half by telephone. While there is no obvious systematic difference between online and telephone polls in terms of support for the Conservatives, Labour and Liberal Democrats, there is a noticeable difference in support for UKIP, with polls conducted online consistently showing greater UKIP support. This may be down to an interviewer effect, with respondents being more willing to admit supporting a minor party in an online poll than to a human interviewer, or may be something to do with sampling.
How the poll is weighted. Almost all pollsters now use political weighting of some sort in their samples. In the majority of cases this means weighting the sample by how people said they voted at the last election – i.e. we know 37% of people who voted in Great Britain in 2010 voted Tory, so in a representative sample 37% of those who say they voted at the previous election should say they voted Tory. It isn’t quite as simple as that because of false recall – people tend to forget their vote, or misreport voting tactically, or claim they voted when they didn’t actually bother, or align their past behaviour with their present preferences and say how they wish they had voted with hindsight. Most pollsters estimate some level of false recall when setting their weighting targets; Ipsos MORI reject this on principle, with the effect that proportionally their samples tend to contain slightly more people who say they voted Labour at the last election, and somewhat fewer who say they voted Lib Dem.
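A minimal Python sketch of the weighting step itself, assuming (purely for illustration) raw 2010 shares as targets – real pollsters also weight demographically, and most adjust these targets for estimated false recall rather than using the election result as-is:

    # Roughly the 2010 Great Britain shares among those who voted
    TARGETS = {"CON": 37.0, "LAB": 30.0, "LDEM": 24.0, "OTH": 9.0}

    def past_vote_weights(recalled_shares):
        """Weight per recalled-2010-vote group so the weighted sample
        matches the target past-vote distribution."""
        return {party: TARGETS[party] / recalled_shares[party]
                for party in TARGETS}

    # A sample that "remembers" too much Labour and too little Lib Dem voting:
    sample = {"CON": 35.0, "LAB": 36.0, "LDEM": 19.0, "OTH": 10.0}
    print(past_vote_weights(sample))
    # Lib Dem recallers get a weight above 1, Labour recallers below 1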
How the poll is prompted. As discussed at the weekend, almost all companies prompt their voting intention question along the lines of Conservative, Labour, Lib Dem, the Scots Nats/Plaid where appropriate, and Other. Survation also include UKIP in their main prompt, leading to substantially higher UKIP support in their polls.
All these factors interact with one another – so you can’t look at one in isolation. For example, MORI’s samples tend to be a bit more Labour-inclined than other companies’, but their turnout filter is harsher than most other companies’, which disadvantages Labour and cancels out the pro-Labour effect of not weighting by past vote. ComRes’s online polls tend to find a higher level of UKIP support than many other companies, but their harsher likelihood-to-vote filter for minor parties cancels this out. House effects also change over time – so while the reallocation of don’t knows currently helps the Lib Dems, in past years it has helped Labour (and, when originally introduced in the 1990s, the Tories).
Inevitably the question arises of which polls are “right”. It cannot be answered. Come actual elections, polls using different methods all tend to cluster together and show very similar results – polls have a margin of error of plus or minus 3%, so judging which methodology is more accurate on the basis of a single poll every five years, when all the companies are within that margin of error, is an utter nonsense.
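That plus or minus 3% figure is simply the usual 95% confidence interval for a typical poll of around 1,000 people at a 50% share, the worst case – a quick sketch in Python (the sample size here is an assumption, not a universal rule):

    import math

    def margin_of_error(n, share=0.5, z=1.96):
        """95% margin of error, in percentage points, for a share measured
        from a simple random sample of n responses."""
        return 100 * z * math.sqrt(share * (1 - share) / n)

    print(round(margin_of_error(1000), 1))  # ~3.1 points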
Realistically it is more a philosophical question than a methodological one – the reason pollsters show different figures is that they are measuring different things. YouGov don’t make second guesses about don’t knows, and assume everyone who says they will vote actually will. Their figures are basically how people say they would vote tomorrow. In comparison, ICM weight by how likely people say they are to vote, assume people who didn’t vote last time are less likely to do so than they say, and estimate how people who say they don’t know would actually vote. Their figures are basically how ICM estimate people would actually vote tomorrow. They are two different approaches, and there is no right answer as to which one to take. Shouldn’t a pollster report what people say they’d do, rather than second-guessing what they’d really do? But if a pollster has good reason to think that people won’t behave as they say they will, shouldn’t they factor that in? There is no easy answer.
Given these differences, when you see a poll it is important to remember house effects and to look at the wider trends. A poll from ICM showing a smaller Labour lead than most other companies’ polls isn’t necessarily a sign of some great collapse in Labour’s lead – it’s more likely because ICM always show a smaller Labour lead than other companies (ditto a great big Labour lead in an Angus Reid poll). That said, even a big Labour lead from ICM or a small Labour lead from Angus Reid shouldn’t get people too excited, as any single poll can easily be an outlier. As ever, the rule remains to look at the broad trend across all the polls. Do not cherry-pick the polls that tell you what you want to hear, do not draw trends by comparing one company’s polls with another’s when they use different methods, and don’t get overexcited by single outlying polls.
(1) House effects were calculated using the daily YouGov poll as a reference point. I took a rolling 5-day average of the YouGov daily poll and compared it to each poll from another company, which gave each company’s average difference from the YouGov daily poll. The differences were then re-centred on the average difference for each party, so that YouGov wasn’t automatically the mid-point!
(2) YouGov do take likelihood to vote into account during election campaigns, using roughly the same approach as Populus.
Filed under: Methodology