Rather than their usual poll for the Times, this week YouGov have a full MRP model of voting intention (that is, the same method that YouGov used for their seat projection at the general election). Topline voting intention figures from the YouGov MRP model are CON 39%, LAB 34%, LDEM 11%, UKIP 5%. The fieldwork was Sunday to Thursday last week, with just over 40,000 respondents.

The aim of an MRP model is not really the topline vote shares though; the whole point of the technique is to project shares down to seat level, and to project who would win each seat. The model currently has the Conservatives winning 321 seats, Labour 250, the Liberal Democrats 16 and the SNP 39. Compared to the 2017 election the Conservatives would make a net gain of just 4 seats, Labour would lose 12 seats, the Liberal Democrats would gain 4 and the SNP would gain 4. It would leave the Conservatives just shy of an overall majority (though in practice, given Sinn Fein do not take their seats and the Speaker and Deputies don’t vote, they would have a majority of MPs who actually vote in the Commons). Whether an extra four seats would really help that much is a different question.
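As a back-of-the-envelope check on that last point, here is a sketch only – it ignores which parties the Speaker and Deputy Speakers are drawn from (which shifts the threshold by a seat or two) and uses Sinn Fein’s 2017 tally of seven seats:

```python
# Rough arithmetic for an "effective" Commons majority, assuming the
# Speaker, three Deputy Speakers and seven Sinn Fein MPs do not vote.
# Illustrative only - the exact threshold depends on which parties the
# chair occupants come from.
total_seats = 650
non_voting = 1 + 3 + 7                # Speaker + deputies + Sinn Fein
voting_mps = total_seats - non_voting
threshold = voting_mps // 2 + 1       # smallest bloc that wins a division
print(threshold)                      # 320 - so 321 projected seats clears it
```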

The five point lead it shows for the Conservatives represents a swing of 1.4% from Labour to the Conservatives – very small, but on a pure uniform swing it would be enough for the Tories to win a proper overall majority. The reason they don’t here is largely because the model shows Labour outperforming in the ultra-marginal seats they won off the Conservatives at the last election (a well-known phenomenon – they gain the personal vote of the new Labour MP and lose any incumbency bonus from the former Tory MP. It is the same reason the Conservatives failed to gain a meaningful number of seats in 2001, despite a small swing in their favour).
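For anyone who wants the arithmetic, the two-party (Butler) swing is just half the change in the Conservative–Labour lead. A minimal sketch, using rounded 2017 UK-wide shares as the baseline – the 1.4% quoted above presumably comes from unrounded figures:

```python
# Butler (two-party) swing: half the change in the CON-LAB lead.
# Positive values are a swing to the Conservatives.
def butler_swing(con_then, lab_then, con_now, lab_now):
    return ((con_now - con_then) - (lab_now - lab_then)) / 2

# 2017 result (roughly CON 42.4, LAB 40.0) vs the MRP toplines (39, 34)
print(round(butler_swing(42.4, 40.0, 39, 34), 1))  # ~1.3 with rounded inputs
```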

For those interested in what MRP actually is, YouGov’s detailed explanation from the 2017 election is here (Ben Lauderdale & Jack Blumenau, who created the model for the 2017 election, also carried out this one). The short version is that it is a technique designed to allow projection of results at smaller geographical levels (in this case, individual constituencies). It works by modelling respondents’ voting intention based on their demographics and the political circumstances in each seat, and then applying the model to the demographics of each of the 632 seats in Great Britain. Crucially, of course, it also called the 2017 election correctly, when most of the traditional polls ended up getting it wrong.
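The post-stratification half of the technique is straightforward to illustrate. Below is a minimal sketch of that final step, with entirely made-up demographic cells, probabilities and counts – in the real model the cell probabilities come from a multilevel regression fitted to the tens of thousands of respondents plus political variables for each seat:

```python
# Post-stratification sketch: take (hypothetical) modelled vote
# probabilities for each demographic cell, and weight them by how
# common each cell is in a given constituency.

# Hypothetical modelled P(vote | age band, education); rows need not
# sum to 1 (the remainder covers other parties and non-voting).
cell_probs = {
    ("18-39", "degree"):    {"CON": 0.22, "LAB": 0.55, "LDEM": 0.15},
    ("18-39", "no degree"): {"CON": 0.35, "LAB": 0.45, "LDEM": 0.08},
    ("40+",   "degree"):    {"CON": 0.40, "LAB": 0.30, "LDEM": 0.18},
    ("40+",   "no degree"): {"CON": 0.52, "LAB": 0.28, "LDEM": 0.07},
}

# Hypothetical census-style counts of each cell in one constituency
seat_cells = {
    ("18-39", "degree"):    12000,
    ("18-39", "no degree"): 18000,
    ("40+",   "degree"):    15000,
    ("40+",   "no degree"): 25000,
}

def project_seat(cell_probs, seat_cells):
    electorate = sum(seat_cells.values())
    totals = {}
    for cell, count in seat_cells.items():
        for party, p in cell_probs[cell].items():
            totals[party] = totals.get(party, 0.0) + p * count
    return {party: round(100 * v / electorate, 1)
            for party, v in totals.items()}

print(project_seat(cell_probs, seat_cells))
```

Run over all 632 seats in Great Britain, the same cell probabilities produce a different projection in each seat, because each seat has a different demographic mix.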

Compared to more conventional polling, the Conservative lead is similar to that in YouGov’s recent traditional polls (which have shown Tory leads of between 5 and 7 points of late), but the MRP has both main parties at a lower level. Partly this is because it’s modelling UKIP & Green support in all seats, rather than just in the constituencies they contested in 2017 (when the MRP was done at the last election it was after nominations had closed, so it only modelled the actual parties standing in each seat) – in practice their total level of support would likely be lower.

The Times’s write-up of the poll is here, details from YouGov are here and technical details are here.


There are two new voting intention polls out today – YouGov for the Times, and Ipsos MORI’s monthly political monitor in the Evening Standard.

Ipsos MORI’s topline figures are CON 38%(nc), LAB 38%(nc), LDEM 10%(+1), UKIP 4%(nc). Fieldwork was between Friday and Tuesday (1st-5th), and changes are from MORI’s last poll back in December.

YouGov’s topline figures are CON 41%(+2), LAB 34%(nc), LDEM 10%(-1), UKIP 4%(-2). Fieldwork was on Sunday and Monday, and changes are from YouGov’s last poll in mid-January.

This does not, of course, offer us much insight into what is really happening. At the weekend a lot of attention was paid to a poll by Opinium showing a big shift towards the Conservatives and a seven point Tory lead. Earlier in the week Opinium also published a previously unreleased poll conducted for the People’s Vote campaign the previous week, which showed a four point Tory lead, suggesting their Observer poll was more than just an isolated blip. Today’s polls do little to clarify matters – MORI show no change, with the parties still neck-and-neck; YouGov show the Tories moving to a seven point lead, the same as Opinium, but YouGov have typically shown larger Tory leads of late anyway, so it doesn’t reflect quite as large a movement.

I know people look at polls hoping to find some firm evidence – the reality is they cannot always provide it. They are volatile, they have margins of error. Only time will tell for sure whether Labour’s support is dropping as events force them to take a clearer stance on Brexit, or whether we’re just reading too much into noise. As ever, the wisest advice I can give is to resist the natural temptation to assume that the polls you’d like to be accurate are the ones that are correct, and that the others must be wrong.

Ipsos MORI tables are up here, YouGov tables are here.



Opinium’s fortnightly poll in the Observer today has topline voting intention figures of CON 41%(+4), LAB 34%(-6), LDEM 8%(+1), UKIP 7%(nc). Fieldwork was between Wednesday and Friday, and changes are from Opinium’s previous poll in mid-January, conducted straight after May lost her vote on the deal, but won her no confidence vote.

A seven point Conservative lead is the largest since the election. While it is not significantly larger than the 5 or 6 point leads YouGov have been showing this month, it’s a noticeable change from Opinium’s recent polls, which have tended to show Labour and Conservative roughly neck-and-neck.

As ever, one should be a little cautious about reading too much into a single poll. Survation’s poll for Thursday’s Daily Mail had fieldwork conducted on Wednesday, so actually overlaps the fieldwork period for this poll and showed a one point Labour lead with no meaningful swing from Labour to Conservative. It would be wise to wait and see if subsequent polls confirm whether public opinion has shifted against Labour, or whether this is just an outlier.

Also, be cautious about reading too much into what has caused the change. We really don’t know if there has been a change yet, let alone exactly where it has come from and why (not that it will stop people assuming things). It has been two weeks since Opinium’s last poll, and an awful lot has happened – so one cannot pin the change on any one specific event. Neither can cross-breaks really give much guidance (as Michael Savage notes in the Observer, Labour are down among both remainers and leavers… though discerning any signal from the noise of crossbreaks would be difficult even if the change was all on one side).

The full tables from Opinium are here.


There have been several new polls with voting intention figures since the weekend, though all so far have been conducted before the government’s defeat on their Brexit plan.

ComRes/Express (14th-15th) – CON 37%(nc), LAB 39%(nc), LDEM 8%(-1), UKIP 7%(+1)
YouGov/Times (13th-14th) – CON 39%(-2), LAB 34%(-1), LDEM 11%(nc), UKIP 6%(+2)
Kantar (10th-14th) – CON 35%(-3), LAB 38%(nc), LDEM 9%(nc), UKIP 6%(+2)

Looking across the polls as a whole Conservative support appears to be dropping a little, though polls are still ultimately showing Labour and Conservative very close together in terms of voting intention. As ever there are some differences between companies – YouGov are still showing a small but consistent Tory lead, the most recent polls from BMG, Opinium and MORI had a tie (though Opinium and MORI haven’t released any 2019 polls yet), and Kantar, ComRes and Survation all showed a small Labour lead in their most recent polls.

Several people have asked me about the reasons for the differences between polling companies’ figures. There isn’t an easy answer – there rarely is. The reality is that all polling companies want to be right and want to be accurate, so if there were easy explanations for the differences and it was easy to know what the right choices were, they would all rapidly come into line!

There are two real elements responsible for house effects between pollsters. The first is what they do to the voting intention data after it is collected and weighted – primarily, how they account for turnout (to what extent they weight down or filter out people who are unlikely to vote), and what they do with people who say they don’t know how they’ll vote (do they ignore them, or use squeeze questions or inference to try to estimate how they might end up voting). The good thing about these sorts of differences is that they are easily quantifiable – you can look up the polling tables, compare the figures with turnout weighting and without, and see exactly the impact they have.
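To make that concrete, here is a toy sketch (all respondents and scores invented) of how a likelihood-to-vote adjustment moves a topline figure – the same raw data gives different shares depending on the turnout model applied:

```python
# Toy turnout weighting: each respondent has a vote intention and a
# 0-10 likelihood-to-vote score. All figures are invented.
respondents = [
    {"vote": "CON",  "ltv": 10}, {"vote": "CON", "ltv": 9},
    {"vote": "LAB",  "ltv": 10}, {"vote": "LAB", "ltv": 5},
    {"vote": "LDEM", "ltv": 8},  {"vote": "LAB", "ltv": 3},
]

def shares(resps, weight_fn):
    totals, total_w = {}, 0.0
    for r in resps:
        w = weight_fn(r)
        totals[r["vote"]] = totals.get(r["vote"], 0.0) + w
        total_w += w
    return {p: round(100 * v / total_w, 1) for p, v in totals.items()}

print(shares(respondents, lambda r: 1.0))             # no turnout model
print(shares(respondents, lambda r: r["ltv"] / 10))   # weight by LTV score
print(shares(respondents, lambda r: r["ltv"] == 10))  # filter to 10/10 only
```

A harsh filter (only counting those certain to vote, the last line) moves the figures further than a gentle weight, and that sort of choice is exactly what separated pollsters at the last election.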

At the time of the 2017 election these adjustments were responsible for a lot of the difference between polling companies. Some polls were using turnout models that really transformed their topline figures. However, those sorts of models largely turned out to be wrong in 2017, so polling companies are now using much lighter-touch turnout models, and little in the way of reallocating don’t knows. There are a few unusual cases (for example, I think ComRes still reallocate don’t knows, which helps Labour at present, but most companies do not. BMG no longer do any weighting or filtering by likelihood to vote, an adjustment which for other companies tends to reduce Labour support by a point or two). These small differences are not, by themselves, enough to explain the differences between polls.

The other big differences between polls are their samples and the weights and quotas they use to make them representative. It is far, far more difficult to quantify the impact of these differences (indeed, without access to raw samples it’s pretty much impossible). Under BPC rules polling companies are supposed to be transparent about what they weight their samples by and to what targets, so we can tell what the differences are, but we can’t with any confidence tell what the impact is.

I believe all the polling companies weight by age, gender and region. Every company except Ipsos MORI also weights by how people voted at the last election. After that polling companies differ – most weight by EU referendum vote, some weight by education (YouGov, Kantar, Survation), some by social class (YouGov, ComRes), income (BMG, Survation), working status (Kantar), level of interest in politics (YouGov), newspaper readership (Ipsos MORI) and so on.

Even if polling companies weight by the same variables, there can be differences. For example, while almost everyone weights by how people voted at the last election, there are differences in the proportion of non-voters they weight to. It makes a difference whether targets are interlocked or not. Companies may use different bands for things like age, education or income weighting. On top of all this, there are questions about when the weighting data is collected. For things like past general election vote and past referendum vote there is a well-known phenomenon of “false recall”, where people do not accurately report how they voted in an election a few years back. Hence weighting by past vote data collected at the time of the election, when it was fresh in people’s minds, can be very different from weighting by past vote data collected now, at the time of the survey, when people may be less accurate.
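For a sense of the mechanics, below is a minimal sketch of weighting a sample to non-interlocked marginal targets via raking (iterative proportional fitting), which is one common approach – every category, respondent and target here is invented:

```python
# Raking (iterative proportional fitting): adjust weights so the sample
# matches separate marginal targets for age and past vote. With
# non-interlocked targets each margin is hit in turn, repeating until
# the weights settle. All data here is invented.
respondents = [
    {"age": "18-39", "past": "CON"}, {"age": "18-39", "past": "LAB"},
    {"age": "40+",   "past": "CON"}, {"age": "40+",   "past": "CON"},
    {"age": "40+",   "past": "LAB"}, {"age": "18-39", "past": "LAB"},
]
targets = {
    "age":  {"18-39": 0.35, "40+": 0.65},
    "past": {"CON": 0.45, "LAB": 0.55},
}

weights = [1.0] * len(respondents)
for _ in range(50):  # a handful of passes is normally enough to converge
    for var, margin in targets.items():
        total = sum(weights)
        current = {cat: sum(w for w, r in zip(weights, respondents)
                            if r[var] == cat) / total
                   for cat in margin}
        weights = [w * margin[r[var]] / current[r[var]]
                   for w, r in zip(weights, respondents)]

print([round(w, 2) for w in weights])
```

Interlocked targets would instead specify a share for every joint age × past-vote cell and weight to those directly – stricter, but it needs a trustworthy estimate for each combination, which is one reason companies differ on whether to use it.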

Given there isn’t presently a huge impact from different approaches to turnout or don’t knows, the difference between polling companies is likely to be down to some of these factors, which are – fairly evidently – extremely difficult to quantify. All you can really conclude is that the difference is probably down to the different sampling and weighting of the different companies, and that, short of a general election, there is no easy way for observers (or indeed pollsters themselves!) to be sure what the right answer is. All I would advise is to avoid the temptation of (a) assuming that the polls you want to be true are correct… that’s just wishful thinking, or (b) assuming that the majority are right. There are plenty of instances (ICM in 1997, or Survation and the YouGov MRP model in 2017) when the odd one out turned out to be the one that was right.


The Guardian today has the results of a Populus poll for Best for Britain, apparently leaked without their permission. It found “almost a third” of respondents said they would be less likely to vote Labour if the party was committed to stopping Brexit, compared to 25% who said it would make them more likely – presumably the opposite of the headline finding the client was hoping for.

As regular readers will know, I think “would policy X make you more likely to vote Y” questions are of little or no worth anyway. Many respondents use them to indicate their support or opposition to the policy in question, regardless of whether it would actually change their vote, and you typically find that a substantial proportion of those who say a policy would make them more likely to vote for a party already vote for it (and many of those saying less likely never would anyway).

This means the response from Best for Britain in the Guardian write-up about the picture being skewed by Conservative and UKIP voters, while it may sound like special pleading, is probably quite right. I expect the third of people saying they’d be less likely to vote Labour are indeed largely Conservative and UKIP voters who wouldn’t vote Labour anyway. On the other hand, the people saying more likely are probably largely Labour voters who are already voting Labour – it’s why it is such a poor approach to the question.

In the meantime, it’s a reminder of why one needs to be a little cautious about polls commissioned by campaigns. You can never tell what other polls they did that they never released. It is the job of pollsters to make sure the actual questions are fair and balanced, but ultimately it’s often up to clients whether they keep a poll private, or stick it in a press release.