Despite the major differences in methodology between the different pollsters, by hook or by crook the voting intention figures they produce all tend to be relatively similar to one another. The exception is the Liberal Democrats, whose score varies significantly between ICM, who give them the highest figures, and YouGov, who give them the lowest. In a post earlier this year I calculated the average difference between the Lib Dem scores from the two pollsters to be 3.0 points. Perhaps this is made even more noticeable because it makes it difficult to judge how well the Lib Dems are doing – there are sometimes significant differences between the pollsters’ Labour or Conservative scores, but since the overall picture recently has always been one of a Conservative lead over Labour, it doesn’t really change anything. When it comes to the Lib Dems all the polls show them down on the last election, but while with ICM it is a relatively small fall, with YouGov they appear to have been mercilessly squeezed.

Several people have asked what the reason might be – including a post by Mark Pack on Libdemvoice here – and which figure (if any) is right? Unfortunately there are no easy answers to who is right, but I can at least flag up some potential reasons:

Mode of questioning. ICM and Populus ask people over the phone, YouGov ask people online. In lots of cases people might give an answer to an anonymous computer screen that they would be embarrassed to give to a live interviewer on the other end of a phone line. Obvious cases are extremist parties like the BNP, but small fringe parties suffer in general because people are unsure about going out on a limb. With a mainstream party like the Lib Dems this shouldn’t be a factor…but actually it could be having a knock-on effect. Some support for the Liberal Democrats isn’t positive support for their ideas or policies, but a “neither of the above” vote. Potentially some of the higher support for the Lib Dems in phone polls could be “neither of the above” voters who might really be tending towards fringe or extreme parties but are unsure about naming a fringe party to an interviewer. Given that YouGov tend to have a higher “other” score than other pollsters, this might well be a factor.

Don’t knows. ICM and Populus both adjust their figures to take account of don’t knows. Based on past studies they assume that a proportion of don’t knows will actually end up voting for the party they did last time round. People normally think of this as an adjustment for “shy Tories”, but the net effect these days is rarely if ever to help the Conservatives; for the last couple of years it has normally helped Labour. The way it works, though, will help any party who finds that some of their past supporters have drifted away and are now telling pollsters they don’t know how they’ll vote. In ICM polls, 50% of previous Lib Dem voters who now say “don’t know” are added onto the Lib Dem score; in Populus polls they re-allocate 30% of previous Lib Dem voters. Does this increase the level of Lib Dem support in ICM polls? Because ICM publish the figures before and after the adjustment, this is one area where we can quantify the difference – the difference seems to have emerged after December 2005, so taking the 18 monthly ICM polls since then, the adjustment has been large enough to increase the Lib Dem score by 1 point six times, and has reduced it by 1 point once – so on average it increases Lib Dem support by 0.28 of a percentage point, leaving another 2.72 points of difference to be explained by other factors.
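The back-of-envelope averaging above can be reproduced in a couple of lines, using the figures quoted in the paragraph (six polls where the adjustment added a point, one where it removed a point, eleven where it made no difference to the rounded score):

```python
# Average effect of the "don't know" reallocation on ICM's rounded Lib Dem
# score across the 18 monthly ICM polls since December 2005, using the
# counts quoted above.
adjustments = [+1] * 6 + [-1] * 1 + [0] * 11
assert len(adjustments) == 18

average_effect = sum(adjustments) / len(adjustments)   # 5/18, about 0.28
remaining_gap = 3.0 - round(average_effect, 2)         # of the 3.0 point gap
```

This is only the effect on the published rounded scores, so it is itself an approximation, but it shows how little of the 3.0 point gap the reallocation accounts for.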

Likelihood to vote. Uniquely amongst the pollsters, YouGov do not filter or weight by likelihood to vote. If Lib Dem supporters were actually more likely to vote than supporters of other parties then this could explain some of the difference. In fact, it’s normally Tory supporters who are most likely to vote, followed by the Lib Dems, with Labour supporters further behind – so perhaps this could be contributing to a lower level of Lib Dem support in YouGov polls? Again, by looking at the figures before weighting by turnout in ICM’s detailed tables we can quantify this – since December 2005 ICM have published the breakdown for likelihood to vote in 11 of their Guardian polls. If you compare what the rounded figure would have been without turnout weighting with what it actually was afterwards, twice it increased the Lib Dem score by a point and twice it reduced it…so it has no overall effect at all. Even with MORI’s very harsh filter by turnout it makes only a minimal difference.
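For readers unfamiliar with how turnout weighting works in practice, here is a minimal sketch in the style of ICM’s approach, where each respondent counts in proportion to their stated likelihood to vote on a 1–10 scale. The handful of respondents below are entirely invented for illustration, not real poll data:

```python
# Hypothetical sketch of turnout weighting: each respondent is weighted by
# (stated likelihood to vote) / 10. All respondent data here is invented.
respondents = [
    ("Con", 9), ("Con", 10), ("Lab", 6), ("Lab", 8),
    ("LD", 8), ("LD", 5), ("Other", 4),
]

def shares(rows, turnout_weighted):
    """Vote shares in %, optionally weighting each row by likelihood/10."""
    totals = {}
    for party, likelihood in rows:
        w = likelihood / 10 if turnout_weighted else 1.0
        totals[party] = totals.get(party, 0.0) + w
    grand = sum(totals.values())
    return {p: round(100 * t / grand, 1) for p, t in totals.items()}

unweighted = shares(respondents, turnout_weighted=False)
weighted = shares(respondents, turnout_weighted=True)
```

In this invented sample the parties whose supporters claim the highest likelihood to vote gain share once weighting is applied, which is exactly the mechanism being ruled out above as an explanation for the ICM–YouGov gap.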

Sampling. Could there be a difference from the sampling techniques? Could the people who pop up in a telephone poll be more likely to be Lib Dem voters than people who join an online polling panel? In terms of political activists it probably works the other way if anything, but when it comes to normal voters, who knows? It is possible, but given that both ICM and YouGov weight their polls politically, if this was a problem it would be something that the pollsters should be able to address using political weighting – if YouGov’s sampling produced too few Lib Dems they would weight them upwards, if ICM’s produced too many they would weight them down (though in actual fact ICM tends to weight the proportion of past Lib Dem voters upwards, sometimes quite heavily). That brings us to…

Weighting targets. This is potentially where most of the difference lies. I suspect that, when weighting their polls politically, ICM and Populus are weighting their samples to a slightly higher proportion of past Lib Dem voters than YouGov are. Unfortunately it is impossible to directly compare the proportions used, because ICM and Populus weight using recalled past vote while YouGov weight using party identification.

The data used for political weighting. This is the most subtle difference, and the one that I suspect is behind a fair amount of the gap. Phone polls do their political weighting based on data that is collected now, so they have to take account of false recall and forgetfulness when drawing up weighting targets. In contrast, YouGov weight their polls using the data they collected back in May 2005, when who respondents had actually voted for that day was fresh in their minds. The people who voted Lib Dem in 2005 but who now don’t recall it, or say they voted for a different party, are, perhaps not surprisingly, also far less likely to say they would vote Lib Dem in a general election tomorrow. In other words, the past Lib Dem voters that phone pollsters find are the more committed Liberal Democrats. “Flakier” Liberal Democrats, who are more likely to switch to other parties, are more likely to have forgotten they voted Lib Dem in the first place. Those identified as Lib Dems in YouGov’s samples probably contain more of those “flaky” Lib Dems than those identified as Lib Dems in phone samples.
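As an illustration of how past-vote weighting itself works, here is a minimal sketch: each respondent is given a weight of (target share) / (sample share) for their recalled vote. Both the sample and the target shares below are invented for illustration and are not any pollster’s actual weighting targets:

```python
# Minimal sketch of political (past-vote) weighting. Sample composition and
# target shares are invented; they are not real pollster figures.
from collections import Counter

sample = ["Con"] * 300 + ["Lab"] * 350 + ["LD"] * 150 + ["Other"] * 200
targets = {"Con": 0.33, "Lab": 0.36, "LD": 0.23, "Other": 0.08}

counts = Counter(sample)
n = len(sample)

# e.g. recalled Lib Dem voters: sample share 0.15 vs target 0.23,
# so each one counts for more than 1 in the weighted figures.
weights = {party: targets[party] / (counts[party] / n) for party in targets}
```

In this invented example past Lib Dem voters are under-represented in the raw sample, so they receive a weight above 1 – the same direction of adjustment the post notes ICM often makes in practice. The whole dispute over targets is about what number to put in the `targets` dictionary.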

What is the ultimate reason for the difference? I don’t know. It looks as though we can rule out likelihood-to-vote weighting, and we can see that the “spiral of silence” adjustment is only a very small factor. The mode of questioning may be having some effect – people who say “Liberal Democrat” to a live interviewer might be willing to admit to a computer screen that they will actually vote for a smaller fringe party. In my opinion the difference is more likely to be somewhere in the weighting, and here it is almost impossible to draw any conclusions – ICM and YouGov weight using different measures, based on data collected at different times, and, as with all political weighting, choosing the targets they weight to is as much an art as a science.

Who’s right? It is impossible to say. If the reason had turned out to be something to do with likelihood to vote then you could have a rational discussion about to what extent and how turnout should be factored in, and ditto the way don’t knows are dealt with. It really is very difficult to make informed judgements about weighting. I am not one who believes there is any accurate way of telling how well parties are doing apart from polls: people vote differently in general elections than in local elections; as for Parliamentary by-elections, even if they were a decent guide, we must be approaching a record period without one; and local by-elections don’t even seem to be a good guide to local elections these days, let alone anything else! The only reliable way to tell which poll is producing more accurate results will be to wait until the next general election and see what the actual results are. Sadly, that doesn’t help you much in deciding who is right now.

I’ll give you two warnings. Firstly, it isn’t always the poll that is different from the rest that is wrong. In 1992 Harris was showing a Tory lead when everyone else had Labour ahead. People dismissed Harris as being wrong; the rest is history. Secondly, it is very tempting to believe the poll you want to believe – to see fault in every detail of the methodology of the poll you really hope isn’t right, and the obvious superiority of the poll you hope is true. In few if any cases do I think that people are deliberately talking up polls that favour their own party. I just think that somewhere deep in our subconscious we all tend to will ourselves into finding the arguments in favour of the methodology that produces the results we like more convincing :)
