Polls often give contrasting results. Sometimes this is because they were done at different times and public opinion has actually changed, but most of the time that’s not the reason. A large part of the difference between polls showing different results is often simple random variation, good old margin of error. We’ve spoken about that a lot, but today’s post is about the other reason: systematic differences between pollsters (or “house effects”).
Pollsters use different methods, and sometimes those choices result in consistent differences between the results they produce. One company’s polls, because of the methodological choices it makes, may consistently show a higher Labour score, or a lower UKIP score, or whatever. This is not a case of deliberate bias – unlike in the USA, there are no Conservative pollsters or Labour pollsters; every company is non-partisan. But the effect of their methodological decisions means some companies do have a tendency to produce figures that are better or worse for each political party – we call these “house effects”.
The graph above shows these house effects for each company, based upon all the polls published in 2014 (I’ve treated ComRes telephone and ComRes online polls as if they were separate companies, as they use different methods and show some consistent differences). To avoid any risk of bias from pollsters fielding more or fewer polls when a party is doing well or badly, I work out the house effects using a rolling average of the daily YouGov poll as a reference point – I see how much each poll departs from the YouGov average on the day its fieldwork finished and take an average of those deviations over the year. Then I take the average of all those deviations and graph them relative to that (just so YouGov aren’t automatically in the middle). It’s important to note that the pollsters in the middle of the graph are not necessarily more correct; these differences are relative to one another. We can’t tell what the deviations are from the “true” figure, as we don’t know what the “true” figure is.
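For anyone who wants to see the mechanics, the calculation described above can be sketched roughly like this. This is a minimal illustration, not the actual code used for the graph – the data structures, the window length and the function name are all my own assumptions:

```python
from collections import defaultdict

def house_effects(polls, yougov_daily, window=7):
    """Estimate relative house effects for one party's vote share.

    `polls` is a list of (pollster, end_date, share) tuples and
    `yougov_daily` maps each date to that day's YouGov share for the
    same party. Both structures are illustrative assumptions.
    """
    # Build a rolling average of the daily YouGov series as the reference.
    dates = sorted(yougov_daily)
    rolling = {}
    for i, d in enumerate(dates):
        win = dates[max(0, i - window + 1): i + 1]
        rolling[d] = sum(yougov_daily[w] for w in win) / len(win)

    # Average each pollster's deviation from the reference on the day
    # its fieldwork finished.
    devs = defaultdict(list)
    for pollster, end_date, share in polls:
        devs[pollster].append(share - rolling[end_date])
    effects = {p: sum(v) / len(v) for p, v in devs.items()}

    # Re-centre on the mean of all the deviations, so YouGov isn't
    # automatically in the middle of the graph.
    centre = sum(effects.values()) / len(effects)
    return {p: e - centre for p, e in effects.items()}
```

The re-centring step at the end is the important wrinkle: without it, YouGov would sit at zero by construction, which is exactly the artefact the method is trying to avoid.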
As you can see, the differences between the Labour and Conservative leads each company shows are relatively modest. Leaving aside TNS, who tended to show substantially higher Labour leads than other companies, everyone else is within 2 points of each other. Opinium and ComRes phone polls tend to show Labour leads that are a point higher than average; MORI and ICM tend to show Labour leads that are a point lower than average. Ashcroft, YouGov, ComRes online and Populus tend to be about average. Note that I’m comparing the Conservative-v-Labour gap between different pollsters, not the figures for each party. Populus, for example, consistently give Labour a higher score than Lord Ashcroft’s polls do… but they do exactly the same for the Conservatives, so when it comes to the party lead the two sets of polls tend to show much the same.
There is a much, much bigger difference when it comes to measuring the level of UKIP support. The most “UKIP friendly” pollster, Survation, tends to produce a UKIP figure that is almost 8 points higher than the most “UKIP unfriendly” pollster, ICM.
What causes the differences?
There are a lot of methodological differences between pollsters that make a difference to their end results. Some are very easy to measure and quantify, others are very difficult. Some contradict each other, so a pollster may do one thing that is more Tory than other pollsters, another that is less Tory, and end up in exactly the same place. They may also interact with each other, so weighting by turnout might have a different effect on an online poll than on a telephone poll. Understanding the methodological differences is often impossibly complicated, but here are some of the key factors:
Phone or online? Whether polls get their sample from randomly dialling telephone numbers (which gives you a sample made up of the sort of people who answer cold calls and agree to take part) or from an internet panel (which gives you a sample made up of the sort of people who join internet panels) has an effect on sample make-up, and sometimes that has an effect on the end result. It isn’t always the case – for example, raw phone samples tend to be more Labour-inclined… but this can be corrected by weighting, so phone samples don’t necessarily produce results that are better for Labour. Where there is a very clear pattern is on UKIP support – for one reason or another, online polls show more support for UKIP than phone polls. Is this because people are happier to admit supporting UKIP when there isn’t a human interviewer? Or is it because online samples include more UKIP-inclined people? We don’t know.
Weighting. Pollsters weight their samples to make sure they are representative of the British population and iron out any skews and biases resulting from their sampling. All companies weight by simple demographics like age and gender, but more controversial is political weighting – using past vote or party identification to make sure the sample is politically representative of Britain. The rights and wrongs of this deserve an article in their own right, but in terms of comparing pollsters most companies weight by past vote from May 2010, YouGov weight by party ID from May 2010, Populus by current party ID, MORI and Opinium don’t use political weighting at all. This means MORI’s samples are sometimes a bit more Laboury than other phone companies (but see their likelihood to vote filter below), Opinium have speculated that their comparatively high level of UKIP support may be because they don’t weight politically and Populus tend to heavily weight down UKIP and the Greens.
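To make the idea of political weighting concrete, here is a toy sketch of past-vote weighting on a single variable – scaling respondents so that recalled 2010 vote matches the actual 2010 shares. Real pollsters weight on many variables at once (and recalled vote itself is unreliable), so this is only an illustration; the function and data layout are my own assumptions:

```python
def past_vote_weights(respondents, target_shares):
    """Toy past-vote weighting: scale each respondent's weight so the
    sample's recalled past vote matches the target (e.g. actual 2010)
    shares. `respondents` is a list of dicts with a 'past_vote' key;
    the structure is an illustrative assumption."""
    # Count how each party's recalled vote is represented in the raw sample.
    counts = {}
    for r in respondents:
        counts[r["past_vote"]] = counts.get(r["past_vote"], 0) + 1
    n = len(respondents)

    # Each respondent's weight is target share / sample share for
    # their recalled past vote.
    factors = {party: target_shares[party] / (counts[party] / n)
               for party in counts}
    return [factors[r["past_vote"]] for r in respondents]
```

A sample with too many recalled Labour voters gets each of them weighted below 1, and everyone else weighted above 1, which is how a raw skew towards one party is ironed out before the voting intention figures are calculated.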
Prompting. This doesn’t actually seem to make a whole lot of difference, but has endlessly been accused of doing so! This is the list of options pollsters give when asking who people will vote for – obviously it doesn’t include every single party (there are hundreds), but companies draw the line in different places. The specific controversy in recent years has been UKIP and whether or not they should be prompted for in the main question. For most of this Parliament only Survation prompted for UKIP, and it was seen as a potential reason for the higher level of UKIP support that Survation found. More recently YouGov, Ashcroft and ComRes have also started including UKIP in their main prompt, but with no significant effect upon the level of UKIP support they report. Given that past testing found prompting did make a difference, this suggests that UKIP are now well enough established in the public mind that whether the pollster prompts for them or not no longer makes much difference.
Likelihood to vote. Most companies factor in respondents’ likelihood to vote somehow, but their methods vary sharply. Most of the time Conservative voters say they are more likely to vote than Labour voters, so if a pollster puts a lot of emphasis on how likely people are to actually vote it normally helps the Tories. Currently YouGov put the least emphasis on likelihood to vote (they just include everyone who gives an intention); companies like Survation, ICM and Populus weight according to likelihood to vote, which is a sort of mid-way point; Ipsos MORI have a very harsh filter, taking only those people who are 10/10 certain to vote (this probably helps the Tories, but MORI’s weighting is probably quite friendly to Labour, so it evens out).
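The difference between a harsh filter and likelihood weighting is easy to show in miniature. This sketch compares a MORI-style 10/10 filter with Survation/ICM-style weighting by stated likelihood; the respondent format and function name are my own illustrative assumptions:

```python
def filtered_vs_weighted(respondents):
    """Compare two ways of handling likelihood to vote, as described
    above. `respondents` is a list of (party, likelihood_out_of_10)
    tuples - an illustrative assumption, not any pollster's format."""
    def shares(pairs):
        # Turn (party, weight) pairs into vote shares.
        total = sum(w for _, w in pairs)
        out = {}
        for party, w in pairs:
            out[party] = out.get(party, 0) + w / total
        return out

    # Harsh filter: only the 10/10 certain-to-vote respondents count,
    # each with equal weight.
    filtered = shares([(p, 1) for p, l in respondents if l == 10])
    # Weighting: everyone counts, scaled by stated likelihood / 10.
    weighted = shares([(p, l / 10) for p, l in respondents])
    return filtered, weighted
```

With a sample where Conservative supporters all say 10/10 and some Labour supporters say 5/10, the filtered shares come out noticeably more Conservative than the weighted ones – exactly the pattern described above.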
Don’t knows. Another cause of the differences between companies is how they treat people who say they don’t know. YouGov and Populus just ignore those people completely. MORI and ComRes ask those people “squeeze questions”, probing to see if they’ll say who they are most likely to vote for. ICM, Lord Ashcroft and Survation go further and make some estimates about those people based on their other answers, generally assuming that a proportion of people who say don’t know will actually end up voting for the party they did last time. How this approach impacts on voting intention numbers depends on the political circumstances at the time; it tends to help any party that has lost lots of support. When ICM pioneered it in the 1990s it helped the Tories (and was known as the “shy Tory adjustment”); these days it helps the Lib Dems, and goes a long way to explaining why ICM tend to show the highest level of support for the Lib Dems.
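The reallocation idea can be sketched in a few lines. The 50% rate below is purely an illustrative figure (the companies’ actual proportions and implementations differ), and the data layout is my own assumption:

```python
def reallocate_dont_knows(intentions, dont_knows_by_past_vote, rate=0.5):
    """Sketch of ICM-style don't-know reallocation: assume a proportion
    (`rate`, an illustrative figure) of don't knows end up voting for
    the party they backed last time.

    `intentions` maps party -> count of declared supporters;
    `dont_knows_by_past_vote` maps party -> count of don't knows who
    voted for that party at the previous election.
    """
    adjusted = dict(intentions)
    for party, dk in dont_knows_by_past_vote.items():
        # Add back a fraction of each party's former voters who now
        # say they don't know.
        adjusted[party] = adjusted.get(party, 0) + dk * rate
    total = sum(adjusted.values())
    return {p: 100 * c / total for p, c in adjusted.items()}
```

Feed in a batch of don’t knows who mostly voted Lib Dem last time and the Lib Dem share rises, which is the mechanism behind ICM’s comparatively high Lib Dem figures described above.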
And these are just the obvious things – there will be lots of other subtle or unusual differences (ICM weight down people who didn’t vote last time, Survation ask people to imagine all parties are standing in their seat, ComRes have a harsher turnout filter for smaller parties in their online polls, and so on).
Are they constant?
No. The house effects of different pollsters change over time. Part of this is because political circumstances change and the different methods have different impacts. I mentioned above that MORI have the harshest turnout filter and that most of the time this helps the Tories, but that isn’t set in stone – if Tory voters became disillusioned and less likely to vote and Labour voters became more fired up it could reverse.
It also isn’t consistent because pollsters change methodology. In 2014 TNS tended to show bigger Labour leads than other companies, but in their last poll they changed their weighting in a way that may well have stopped that. In February last year Populus changed their weights in a way that reduced Lib Dem support and increased UKIP support (and changed even more radically in 2013 when they moved from telephone to online polling). So don’t assume that because a pollster’s methods had a particular skew last year it will always be that way.
So who is right?
At the end of the day, what most people asking the question “why are those polls so different” really want to know is which one is right. Which one should they believe? There is rarely an easy answer – if there was, the pollsters who were getting it wrong would correct their methods and the differences would vanish. All pollsters are trying to get things right.
Personally speaking, I obviously think YouGov polls are right, but every other pollster out there will think the same about the polling decisions they’ve made. I’ve always tried to make UKPollingReport about explaining the differences so people can judge for themselves, rather than championing my own polls.
Occasionally you get an election when there is a really big spread across the pollsters, when some companies clearly get it right and others get it wrong, and those who are wrong change their methods or fade away. 1997 was one of those elections – ICM clearly got it right when others didn’t, and other companies mostly adopted methods like ICM’s or dropped out of political polling. These instances are rare though. Most of the time all the pollsters show about the same thing and are all within the margin of error of each other, so we never really find out who is “right” or “wrong”. (As it happens, the contrast between the levels of UKIP support shown by different pollsters is so great that this may be an election where some polls end up being obviously wrong… or come the election the polls may converge and all show much the same. We shall see.)
In the meantime, with an impartial hat on all I can recommend is to look at a broad average of the polls. Sure, some polls may be wrong (and it’s not necessarily the outlying pollster showing something different to the rest – sometimes they’ve turned out to be the only one getting it right!) but it will at least help you steer clear of the common fallacy of assuming that the pollster showing results you like the most is the one that is most trustworthy.