There have been three polls over the last week – ComRes and Opinium in the Sunday papers, and the regular YouGov poll for the Times. Voting intention figures were:

Opinium – CON 37%, LAB 25%, LDEM 16%, BREX 13%, GRN 2% (tabs)
ComRes – CON 28%, LAB 27%, LDEM 20%, BREX 13%, GRN 5% (tabs)
YouGov – CON 32%, LAB 23%, LDEM 19%, BREX 14%, GRN 7% (tabs)

There isn’t really a consistent trend to report here – YouGov and ComRes have the Conservatives declining a little from the peak of the Johnson honeymoon, but Opinium show them continuing to increase in support. My view remains that voting intention probably isn’t a particularly useful measure to look at when we know political events are looming that are likely to have a huge impact. Whatever the position is now, it is likely to be transformed by whether or not we end up leaving the European Union next month, on what terms and under what circumstances.

What did receive some comment was the sheer contrast between the reported leads, particularly because the ComRes poll (a 1 point Tory lead) and the Opinium poll (a 12 point Tory lead) were published on the same day.

Mark Pickup, Will Jennings and Rob Ford wrote a good article earlier this month looking at the house effects of different pollsters. As you may expect if you've been watching recent polls, ComRes tend to show some of the largest Labour leads, YouGov some of the biggest Tory leads. Compared to the industry average, Opinium actually tend to be slightly better for Labour and slightly worse for the Tories, though I suspect that may be changing: "house effects" for pollsters are not set in stone and can shift, partly because pollsters change methods, partly because the impact of methodological differences changes over time.

What that doesn't tell us is why there is a difference. I saw various people pointing at the issue of turnout, and how pollsters model likelihood to vote. I would urge some caution there – in the 2017 election, most of the difference between polls was indeed down to how polling companies predicted likelihood to vote, and this was the biggest cause of polling error. However, when those new turnout models went wrong, polling companies dropped them. There are no longer any companies using demographic-based turnout models that have a huge impact on voting intention figures and weight down young people. These days almost everyone has gone back to basing their turnout models primarily on how likely respondents themselves say they are to vote, a filter that typically has only a modest impact. It may be one factor, but it certainly wasn't the cause of the difference between ComRes and Opinium.
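
To make that concrete, here is a minimal sketch of the kind of self-reported likelihood filter described above, with invented respondents and a simple likelihood/10 scaling. It is an illustration of the general approach, not any particular company's method.

```python
# A toy illustration (not any pollster's exact method) of a turnout filter
# based purely on self-reported likelihood to vote on a 0-10 scale: each
# response is scaled by likelihood/10, so the adjustment is usually modest.

from collections import defaultdict

respondents = [
    # (voting intention, self-reported likelihood to vote out of 10) - made-up data
    ("CON", 10), ("CON", 9), ("LAB", 10), ("LAB", 5),
    ("LDEM", 8), ("LAB", 10), ("CON", 7), ("BREX", 10),
]

weighted = defaultdict(float)
for party, likelihood in respondents:
    weighted[party] += likelihood / 10.0   # scale each response by stated likelihood

total = sum(weighted.values())
shares = {party: round(100 * wt / total, 1) for party, wt in weighted.items()}
print(shares)   # headline shares after the turnout adjustment
```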

While polling companies don't have radically different turnout models, it is true to say (as Harry does here) that ComRes tends to imply a higher level of turnout among young people than Opinium. One thing contributing to that in the latest poll is that Opinium ask respondents if they are registered to vote, and only include those who are, reducing the proportion of young people in their final figures. I expect, however, that some of it is also down to the respondents themselves, and how representative they are – in other words, because of the sample and weighting, ComRes may simply have young people who say they are more likely to vote than the young people Opinium have.
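
As a toy illustration of how a registration filter can shift the age profile of the sample that gets reported (the figures below are invented, not Opinium's actual data):

```python
# Made-up figures showing how dropping unregistered respondents can reduce
# the share of young people in the sample that is weighted and reported.

respondents = [
    # (age band, says they are registered to vote?) - invented for illustration
    ("18-24", True), ("18-24", False), ("18-24", False), ("25-49", True),
    ("25-49", True), ("50-64", True), ("65+", True), ("65+", True),
]

young_before = sum(age == "18-24" for age, _ in respondents) / len(respondents)
registered = [(age, reg) for age, reg in respondents if reg]
young_after = sum(age == "18-24" for age, _ in registered) / len(registered)

print(round(young_before, 2), round(young_after, 2))   # 0.38 -> 0.17
```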

As regular readers will know, one important difference between polling companies at the moment appears to be the treatment of past vote weighting, and how polling companies account for false recall. Every polling company except Ipsos MORI uses past vote in its weighting scheme. We know how Britain actually voted at the last election (CON 43%, LAB 41%, LDEM 8%), so a properly representative sample should have, among those people who voted, 43% who voted Tory, 41% who voted Labour and 8% who voted Lib Dem. If a polling company finds their sample has, for example, too many people who voted Tory at the previous election, they can weight those people down to make it representative. This is simple enough, apart from the fact that people are not necessarily very good at accurately reporting how they voted. Over time their answers diverge from reality – people who didn't vote claim they did, people forget, people say they voted for the party they wish they'd voted for, and so on. We know this for certain because of panel studies – experiments where pollsters ask people how they voted after an election, record it, then go back and ask the same people a few years later and see if their answers have changed.
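
The mechanics of that weighting step look something like the sketch below, using the 2017 shares quoted above as targets and an invented raw sample. It is deliberately simplified to a single weighting variable; in practice past vote is combined with demographic weights.

```python
# A simplified, single-variable sketch of past vote weighting: each respondent's
# weight is the target share for the party they recall voting for in 2017,
# divided by that party's share of the raw sample. The raw sample is invented.

from collections import Counter

recalled_2017 = ["CON"] * 48 + ["LAB"] * 36 + ["LDEM"] * 10 + ["OTHER"] * 6

targets = {"CON": 43, "LAB": 41, "LDEM": 8, "OTHER": 8}   # approx. actual GB shares
counts = Counter(recalled_2017)
sample_shares = {party: 100 * n / len(recalled_2017) for party, n in counts.items()}

weights = {party: round(targets[party] / sample_shares[party], 2) for party in targets}
print(weights)
# Past Tories are over-represented (48% v 43%) so get a weight below 1;
# past Labour voters are under-represented (36% v 41%) so get a weight above 1.
```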

Currently it appears that people are becoming less likely to remember (or report) having voted Labour in 2017. There's an example that YouGov ran recently here. YouGov took a sample of people whose votes they had recorded in 2017 and asked them again how they had voted. In 2017, 41% of those people told YouGov they'd voted Labour; when re-asked in 2019, only 33% of them said they had voted Labour. This causes a big problem for past vote weighting: how can you weight by it if people don't report it accurately? If a fifth of the Labour voters in a sample do not report that they voted Labour, and the pollster weights the remaining Labour voters up to the "correct" level, they would end up with too many past Labour voters, as they'd have 41% past Labour voters who admitted it, plus an unknown number of past Labour voters who did not.
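
To put rough numbers on that, here is a back-of-envelope calculation using the re-contact figures above. It treats the eight points of "hidden" past Labour voters very crudely (assuming they keep a weight of roughly one), since in practice their weights depend on what they now say they did.

```python
# Back-of-envelope illustration of the false recall trap, using the figures
# from the YouGov re-contact exercise described above.

true_labour_2017 = 41.0    # % of the sample who really voted Labour in 2017
recalled_labour = 33.0     # % who still say they voted Labour when re-asked
hidden_labour = true_labour_2017 - recalled_labour   # 8 points of false recall

# Weight the respondents who *admit* voting Labour up to the official 41%...
weight = true_labour_2017 / recalled_labour          # ~1.24

# ...and the weighted sample contains 41% admitted past Labour voters plus
# roughly 8 points of genuine 2017 Labour voters who no longer say so.
effective_past_labour = recalled_labour * weight + hidden_labour
print(round(effective_past_labour, 1))   # ~49.0 - too many past Labour voters
```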

There are several ways of addressing this issue. One is for polling companies to collect the data on how their panellists voted as soon as possible after the election, while it is fresh in their minds, and then use that contemporaneous data to weight future polls by. This is the approach YouGov and Opinium use. Another approach is to try to estimate the level of false recall and adjust for it – this is what Kantar have done: instead of weighting to the actual vote shares in 2017, they assume a level of false recall and weight to a larger Conservative lead than actually happened. A third approach is to assume there is no false recall and weight to the actual figures – one that I think currently risks overstating Labour support. Finally, there is the approach that Ipsos MORI have always taken – assuming that false recall is such an intractable problem that it cannot be solved, and not weighting by past vote at all.
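
As a rough sketch of how the first three approaches translate into different past-vote weighting targets (the adjustment below is invented purely to show the idea, not Kantar's actual figures):

```python
# Invented figures showing how different assumptions about false recall
# produce different past-vote weighting targets and implied 2017 leads.

actual_2017 = {"CON": 43, "LAB": 41, "LDEM": 8, "OTHER": 8}

# (a) Assume no false recall: weight to the actual result.
targets_no_adjustment = dict(actual_2017)

# (b) Assume some 2017 Labour recall has decayed: weight to a bigger Tory lead.
#     The three-point adjustment here is purely illustrative.
assumed_decay = 3
targets_adjusted = dict(actual_2017)
targets_adjusted["LAB"] -= assumed_decay
targets_adjusted["OTHER"] += assumed_decay

# (c) Weighting to recall collected straight after the election (panel data)
#     sidesteps the problem rather than estimating an adjustment.

for name, targets in [("no adjustment", targets_no_adjustment),
                      ("recall-adjusted", targets_adjusted)]:
    print(name, targets, "implied 2017 CON lead:", targets["CON"] - targets["LAB"])
```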

How pollsters deal with false recall is probably one reason for the present difference between them. Polling companies who are accounting for false recall, or using methods that get round the problem, are showing bigger Tory leads than those who do not. It is, however, probably not enough to explain all of the difference. Neither should we assume that the variation between pollsters is all down to those differences that are easy to see and compare in the published tables. Much of it is probably also down to the interaction of different weighting variables, or to the samples themselves. As Pat Sturgis, the chair of the 2015 enquiry into polling error, observed at the weekend, there's also the issue of the quality of the online panels the pollsters use – something that is almost impossible to measure objectively. While we are wondering about the impact of weights and turnout filters, the difference may just be down to some pollsters having better-quality, more representative panels than others.

