Here are the mid-week polls so far:

Kantar – CON 45%(+8), LAB 27%(nc), LDEM 16%(-1), BREX 2%(-7)
YouGov/Times/Sky – CON 42%(-3), LAB 30%(+2), LDEM 15%(nc), BREX 4%(nc)
ICM/Reuters – CON 42%(+3), LAB 32%(+1), LDEM 13%(-2), BREX 5%(-3)
Survation/GMB – CON 42%, LAB 28%, LDEM 13%, BREX 5%

A few things to note. Kantar and ICM have now removed the Brexit party as an option in the seats where they are not standing, which will have contributed to the increase in Conservative support and decrease in Brexit party support (YouGov had already introduced this change last week).

The Survation poll is the first telephone poll that they’ve conducted in this election campaign (all their other recent polls have been conducted online), hence they’ve recommended against drawing direct comparisons with their previous poll. The fourteen point Tory lead in this poll is substantially larger than in Survation’s previous poll, which had a lead of only six points, but it’s impossible to tell whether that’s down to an increase in Conservative support or the different methodology. At the last election their two approaches produced similar results, with their final poll being conducted by phone.

Finally, Kantar’s polling has received some criticism on social media for their approach to turnout weighting, with “re-weighted” versions of their figures doing the rounds. The details of this criticism are wrong on almost every single measure. It’s very easy for people to retweet figures claiming to show the turnout figures from Kantar, but it takes rather longer to explain why the sums are wrong. Matt Singh did a thread on it here, and RSS Statistical Ambassador Anthony Masters has done a lengthier post on it here.

In short, the claims confuse normal demographic weights (the ones Kantar use to ensure the proportion of young and old people in the sample matches the figures the ONS publish for the British population as a whole) with their turnout model. They also compare youth turnout to early estimates made straight after the 2017 election, when there have been subsequent measures from the British Election Study that were actually checked against the marked electoral register, and so are almost certainly more accurate. Compared to those figures, Kantar’s turnout levels look far more sensible. The figures do imply a small increase in turnout among older voters and a small drop amongst younger voters, but nowhere near the level that has been bandied about on social media.
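For what it’s worth, a demographic weight is nothing more exotic than this. The age bands and proportions below are invented for illustration, not ONS or Kantar figures:

```python
# Minimal age-weighting sketch with invented proportions.
# A demographic weight just corrects the sample's make-up to match
# known population totals; it says nothing about who will vote.
population = {"18-34": 0.28, "35-64": 0.50, "65+": 0.22}  # target shares
sample     = {"18-34": 0.20, "35-64": 0.50, "65+": 0.30}  # achieved shares

weights = {age: population[age] / sample[age] for age in population}
# Each respondent in an under-represented group counts for more than one.
# A turnout model (how likely each person is to actually vote) is a
# separate, additional adjustment applied on top of this.
```

Confusing the two, as the viral figures did, means treating an ordinary sample correction as if it were a claim about who will turn out to vote.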

However, if we leave aside the specific criticisms, it is true to say that turnout weighting has a different impact for different pollsters. In the 2017 election many pollsters adopted elaborate turnout models based on demographic factors. These models largely backfired, so pollsters dropped them. Most polling companies are now using much simpler turnout models that have much less of an impact, and which are based primarily on how likely respondents say they are to vote.

Kantar is the exception – in 2017 they used a model that predicted people’s likelihood to vote based not just on how likely they said they were to vote, but also on their past voting and their age. Unlike many other companies this worked well for them and they were one of the more accurate polling companies, so they kept it. That does mean that Kantar now have a turnout model that makes more difference than most.

Looking at the polls at the top of this post, factoring in turnout made no difference to the lead in YouGov’s poll (it was a 12 point Tory lead before turnout weighting, a 12 point Tory lead afterwards). The same is true of Survation – their poll would have had a 14 point lead before turnout was factored in, and a 14 point lead afterwards. In ICM’s poll, without turnout the lead would have been 7 points, with turnout it grows to 10 points. With Kantar’s latest poll, the tables suggest that the turnout weighting increased the Tory lead from 10 points to 18 points.
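To make concrete how a turnout weight can move a topline lead, here is a minimal sketch in Python. The respondents and turnout probabilities are invented for illustration; they are not any pollster’s actual data or method:

```python
# Illustrative only: made-up respondents, not real polling data.
# Each respondent has a stated vote intention and a modelled
# probability of actually turning out to vote.
respondents = [
    {"vote": "CON", "turnout": 0.9},
    {"vote": "CON", "turnout": 0.9},
    {"vote": "LAB", "turnout": 0.6},
    {"vote": "LAB", "turnout": 0.9},
    {"vote": "LDEM", "turnout": 0.7},
]

def shares(people, weighted):
    """Vote shares, optionally weighting each person by turnout probability."""
    totals = {}
    for p in people:
        w = p["turnout"] if weighted else 1.0
        totals[p["vote"]] = totals.get(p["vote"], 0.0) + w
    total = sum(totals.values())
    return {party: 100 * n / total for party, n in totals.items()}

unweighted = shares(respondents, weighted=False)
weighted = shares(respondents, weighted=True)
lead_before = unweighted["CON"] - unweighted["LAB"]
lead_after = weighted["CON"] - weighted["LAB"]
```

In this toy example the two parties are level on raw vote intention, but because the made-up Conservative respondents are likelier to vote, turnout weighting opens up a lead – the same mechanism by which a turnout model like Kantar’s can grow the Tory lead.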

Hence, while the specific claims about Kantar are nonsense, it is true to say their turnout model has more impact than that of some other companies. That does not, of course, mean it is wrong (turnout is obviously a significant factor in elections). However, before going off on one about how important turnout weighting is to the current polls, it’s rather important to note that for many companies it is contributing little or nothing to the size of the Tory lead.


There are (so far at least) five GB voting intention polls in the Sunday papers.

YouGov/Sunday Times – CON 45%(+3), LAB 28%(nc), LDEM 15%(nc), BREX 4%(nc). Fieldwork was Thursday and Friday, and changes are from the YouGov/Times/Sky poll mid-week. (tabs)

SavantaComRes/S Telegraph – CON 41%(+1), LAB 33%(+3), LDEM 14%(-2), BREX 5%(-2). Fieldwork was on Wednesday and Thursday, and changes are from the midweek poll for the Telegraph. (tabs)

Opinium/Observer – CON 44%(+3), LAB 28%(-1), LDEM 14% (-1), BREX 6% (nc). Fieldwork was Wednesday and Thursday, and changes are from last week. Tables here.

BMG/Independent – CON 37%(nc), LAB 29%(nc), LDEM 16%(nc), BREX 9%(nc). Fieldwork was Tuesday to Friday. Changes would be from last week, though in this case, the party shares are unchanged.

Deltapoll/Mail on Sunday – CON 45%(+4), LAB 30%(+1), LDEM 11%(-5), BREX 6%(nc). Changes are from last week.

There are differences in how companies have dealt with the Brexit party. As discussed earlier in the week, YouGov are only letting people pick the Brexit party if they live in a seat where the Brexit party are standing. ComRes are allowing people to pick the Brexit party everywhere, but are not including them in their main prompt. BMG are still including them everywhere, but will be adopting candidate lists from next week. I’m unclear what Deltapoll or Opinium are currently doing. Given Thursday was close of nominations and full candidate lists are now available, I’d expect many companies to switch to showing people the actual parties standing in their seats from next week.

We’ve got a mixture here of the pollsters who tend to show bigger Tory leads and the pollsters who tend to show smaller ones – showing the contrast between different companies. Three of the companies publishing tonight (YouGov, Opinium and Deltapoll) gave the Conservatives leads in the mid-teens, with the Conservatives at 44-45% and Labour at 28-30%. The other two companies (ComRes and BMG) both showed an eight point lead (though there is some contrast between their figures: BMG have both the Conservatives and Labour significantly lower than ComRes). (For those interested in the potential reasons behind the differences in the polls, I wrote more about it back in September.)

Perhaps more important is the trend – there is little sign of the Conservative lead narrowing here. Three of the polls have it growing (by 3, 3 and 4 points), ComRes have it narrowing by 2 (though their mid-week poll had the Tory lead growing by two, so the two cancel out), and BMG have everything static. Next week the Labour manifesto is released, which has the potential to change things. I would note, however, that the impact the manifestos had in 2017 was very much the exception to the rule. Historically the publication of manifestos has not tended to have any obvious impact upon party support.



A round-up of voting intention polls published during the week. We have had three polls with fieldwork conducted wholly after the announcement from Nigel Farage that the Brexit party would not stand in Conservative seats:

Panelbase (13th-14th) – CON 43%(+3), LAB 30%(nc), LDEM 15%(nc), BREX 5%(-3) – (tabs)
YouGov/Times/Sky (11th-12th) – CON 42%(+3), LAB 28%(+2), LDEM 15%(-2), BREX 4%(-6) – (tabs)
SavantaComRes/Telegraph (11th-12th) – CON 40%(+3), LAB 30%(+1), LDEM 16%(-1), BREX 7%(-2) – (tabs)

The three companies have taken different methodological approaches to this. The YouGov survey offered respondents a list of the parties likely to stand in their constituency (so if a respondent lived in a Conservative seat, they were not able to pick the Brexit party). The Panelbase survey offered people the full list of parties, but also asked their second preference, and used the second preferences of those people who said they were going to vote for the Brexit party but lived in a seat where they are not actually going to stand. ComRes still allowed people to say Brexit party in seats where the Brexit party are not going to stand, but no longer included them in their main prompt when asking who people were going to vote for. I expect some of these approaches will be purely temporary: going forward we will have the actual list of candidates in each seat, and I expect many companies will move towards giving respondents only the relevant candidates for their own constituency.

Obviously all three show Brexit party support falling sharply as fewer people are able to vote for them, and unsurprisingly this has favoured the Conservative party (though since any direct transfer from the Brexit party standing down will be concentrated in seats the Conservatives already hold, it won’t necessarily help them win any extra seats).

Since the weekend, but before the Farage announcement, we also had the following polls released.

ICM/Reuters (8th-11th) – CON 39%(+1), LAB 31%(nc), LDEM 15%(nc), BREX 8%(-1) (tabs)
Kantar (7th-11th) – CON 37%, LAB 27%, LDEM 17%, BREX 9% (tabs)
ComRes/BritainElects (8th-10th) – CON 37%(+1), LAB 29%(nc), LDEM 17%(nc), BREX 9%(-2) (tabs)
Survation (6th-8th) – CON 35%(+1), LAB 29%(+3), LDEM 17%(-2), BREX 10%(-2) (tabs)

Note that Kantar made significant changes to their methodology for this poll, adding a squeeze question for don’t knows, and imputing voting intention for those who still said don’t know. This change reduced Conservative support by 4 points, and Labour support by 1 point, so the like-for-like changes from their previous poll in October would have been Conservatives up 2, Labour up 3.

A word about trying to discern trends in support. As regular readers will know, the different methodological approaches taken by pollsters mean there tend to be some consistent differences between their figures: one company may typically have higher figures for the Conservatives, another may have higher figures for Labour. These are known as “house effects”. Currently ICM, ComRes and Survation tend to show lower Conservative leads, while Deltapoll, YouGov and Opinium tend to show higher ones.

The way the publication schedule has panned out, the companies showing higher leads are tending to publish more at the weekend (because they are polling for the Observer, Sunday Times and Mail on Sunday) while the polls for the companies with smaller leads are tending to come out midweek (as they are polling for the Daily Telegraph and Reuters). What this means in practice is that you’re liable to get two or three polls in a row showing smaller leads mid-week, and two or three polls in a row showing bigger leads at the weekend. It doesn’t mean the lead is falling and rising, it’s just the different approaches taken by pollsters. The thing to look at is the trend from the same pollster – is the lead up or down compared to the last poll from the same pollster? Are other pollsters showing the same trend? If so, something is afoot. If not, it’s probably noise.
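The “compare each pollster with itself” rule amounts to a trivial calculation. The lead figures below are invented purely to show the point:

```python
# Hypothetical series of Conservative leads by pollster (invented numbers).
leads = {
    "Pollster A": [10, 11, 10],   # tends to show bigger leads
    "Pollster B": [6, 7, 6],      # tends to show smaller leads
}

def trend(series):
    """Change from each pollster's own previous poll (the meaningful comparison)."""
    return [b - a for a, b in zip(series, series[1:])]

for pollster, series in leads.items():
    print(pollster, trend(series))

# Comparing Pollster A's latest poll (10) with Pollster B's previous
# one (7) would suggest a 3 point jump that is really just the gap
# between the two companies' house effects.
```

Both hypothetical pollsters show essentially no movement against their own previous polls, even though a naive reading of the mixed weekend/midweek schedule would suggest the lead bouncing up and down.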

On that basis, the lead appears to be broadly steady – both Labour and the Conservatives are gaining support at the expense of the Liberal Democrats and the Brexit party.

With four weeks to go, the Conservatives maintain a solid lead. Of course, it’s worth remembering that the Conservatives also had a solid lead at this point in the last election – much of the narrowing in the Tory lead came after the manifestos were published. In theory at least, there is time for things to change, although, that said, 2017 was an extremely unusual campaign in terms of the amount of change in party support.


Sunday polls round-up

Four voting intention polls in the Sunday papers.

YouGov/Sunday Times – CON 39%(+3), LAB 26%(+1), LDEM 17%(nc), BREX 10%(-1) (tabs)
Deltapoll/Mail on Sunday – CON 41%(+1), LAB 29%(+1), LDEM 16%(+2), BREX 6%(-5) (tabs)
Opinium/Observer – CON 41%(-1), LAB 29%(+3), LDEM 15%(-1), BREX 6%(-3) (tabs)
Panelbase – CON 40%(nc), LAB 30%(+1), LDEM 15%(+1), BREX 8%(-1) (tabs)

In most cases changes are from last week; YouGov’s changes are from their midweek poll for the Times & Sky. All four show the upwards trend in Labour support continuing (though given some of them also show the Conservatives gaining support, we cannot say that the Tory lead is narrowing). All four also show support for the Brexit party falling, particularly in the Deltapoll and Opinium figures. Apart from one outlier in August, 6% is the lowest the Brexit party have recorded since the European elections.

Following on from my post about MRP in the week, the Observer also reports the topline results from a second MRP model, again carried out by an organisation campaigning for tactical voting – this time Gina Miller’s Remain United. The model currently predicts 347 Conservative seats, 204 Labour seats and 24 Liberal Democrat seats. In comparison, the BestforBritain MRP results that Chris Hanretty scraped from their website seemed to imply a better showing for the Conservatives, with around 358 Conservative seats and just 188 Labour and 19 Lib Dem seats (note that the Best for Britain model only covered England & Wales). The average vote shares in the Remain United model imply a Conservative lead of only around 6 points, so the difference in seat numbers may very well just be down to projecting a higher level of Labour support, rather than a different pattern of swings. For all the fuss about “rival tactical voting sites”, by my count there are only 13 seats where RemainUnited suggest voting Labour and BestforBritain suggest voting Lib Dem.

There is a methodological explanation on the Remain United website here that says it is not actually an MRP model but an RRP model, which is apparently “similar” to an MRP model (I don’t know what the technical differences in approach are, but the explanation they give does indeed sound very similar). The model is based on a ComRes sample of only 6,097 responses, significantly smaller than the 46,000 sample that Best for Britain used and the 50,000 or so samples that YouGov were using at the last election. There was a similar ComRes/ElectoralData RRP model for the European elections earlier this year which did not perform particularly well – while it got the share of Brexit party support correct, it overstated Labour support by 10 points and understated the Greens by 6 points and the Liberal Democrats by 5, which would be rather a problem if your aim is to work out which remain party is best placed to win in each seat. That said, the data for their European election model was collected more than a week before polling day, and they may well have finessed the model since then.


Given the success of the approach at the 2017 election I expect we’ll see several MRP seat models this time round. The first one to emerge, however, is one constructed by Focaldata, using data from mixed sources, including YouGov, which Best for Britain have used to drive a tactical voting website. It has caused some controversy – particularly in the comment pages of the Guardian – with people arguing over the validity of its recommendations. I won’t get too far into that (vote for whoever the hell you want), but thought it was probably worth making a few comments about MRP itself, considering it will crop up again through the campaign.

What is MRP?

First, we need to understand what MRP is. It stands for multilevel regression and post-stratification, which almost certainly doesn’t help. There is an academic paper by Ben Lauderdale and his colleagues who run the YouGov MRP that explains it in great detail here, but the short version is that it’s a modelling technique aimed at producing robust estimates for small geographic areas from large national samples. In the context of elections, that means coming up with estimates of vote share in individual constituencies based on a big national sample.

Using traditional techniques, even very, very large samples don’t contain enough respondents to be a good guide to individual seats. Even a huge sample of 50,000, divided between 632 seats, would still give you fewer than 100 people per seat – not enough to produce decent data. I’ve seen this offered as a naive criticism of the Best for Britain MRP model (there are only 70 people per seat!), but that is exactly the problem that MRP is intended to solve.

MRP works by modelling the relationship between demographic and political variables and voting intention (the multilevel regression part), and then applying that to the demographics and political circumstances of each individual constituency (the post-stratification). So in this case, an MRP model would look at how demographics like age, gender and education relate to vote intention, and how that differs based on political variables (Is there an incumbent MP? Is it a remain or leave area?). That model is then applied to the known characteristics of each seat. What that means is that the projection in an individual seat is not just based upon how respondents in that seat say they would vote; it is effectively also based on how respondents with the same demographics, in seats with similar political circumstances, say they would vote.
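As a very rough illustration of the two stages, here is a toy sketch in Python. The demographic cells, numbers and the crude partial-pooling formula are all invented for illustration – a real MRP model fits a proper multilevel regression, not this simple shrinkage estimator:

```python
# Toy post-stratification sketch; invented data throughout.
# Stage 1 ("multilevel regression", here crudely approximated by
# partial pooling): estimate support within each demographic cell,
# shrinking small cells towards the national average.
# Stage 2 (post-stratification): re-weight those cell estimates by
# each constituency's known demographic make-up.

national_rate = 0.45          # overall share saying "Party X" in the sample
shrinkage = 20                # pseudo-observations pulling cells to the mean

# (age band, education) -> (respondents saying Party X, respondents in cell)
cells = {
    ("18-24", "degree"):    (12, 40),
    ("18-24", "no degree"): (25, 60),
    ("65+",   "degree"):    (45, 80),
    ("65+",   "no degree"): (130, 200),
}

def cell_estimate(supporters, n):
    """Partial pooling: small cells get pulled towards the national rate."""
    return (supporters + shrinkage * national_rate) / (n + shrinkage)

# Known demographic composition of one (invented) constituency.
seat_composition = {
    ("18-24", "degree"):    0.10,
    ("18-24", "no degree"): 0.15,
    ("65+",   "degree"):    0.25,
    ("65+",   "no degree"): 0.50,
}

# Projected share in this seat: cell estimates weighted by the seat's
# demographics, not just by the handful of respondents who live there.
seat_share = sum(
    weight * cell_estimate(*cells[cell])
    for cell, weight in seat_composition.items()
)
```

The key design point is visible even in this toy version: the seat projection borrows strength from every respondent nationally who shares a cell with that seat’s voters, which is why a seat with only 70 local respondents can still get a usable estimate.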

How well does it work?

At the last election YouGov had an MRP model that performed very well – correctly predicting the hung Parliament and some of the more unusual election results like Canterbury and Kensington. Clearly, given the accuracy of the YouGov model, it is possible to use MRP successfully to produce decent seat level estimates from a big national sample.

Best for Britain’s defence of their tactical recommendations relies heavily on how well the YouGov MRP model did in 2017. However, not all MRP models are necessarily equal. MRP isn’t one single model, it’s a technique, and it’s possible to do it well or badly. It is certainly not a magical guarantee of accuracy. If we look back to 2017, the YouGov MRP model got all the attention, but it wasn’t the only MRP model out there. Lord Ashcroft also commissioned an MRP model, but it wrongly predicted a Tory majority. Just as some polls have been more accurate than others in recent years, some MRPs may be more accurate than others.

The things that drive the quality of an MRP model are the quality of the data going into it and the quality of the model itself – have those designing it picked demographic and political factors that allow them to accurately model voting intentions? As an external observer, however, that is quite hard to judge. For the YouGov model there is its track record from 2017; for other MRP models, we’re driving a bit blind. We know it is a technique that can be very successful if done well, but we won’t really know whether it is being done well until it’s compared to actual election results.

Are tactical voting recommendations based on an MRP model sensible?

In principle, yes. MRP is obviously not perfect or infallible – nothing is – but it is an established technique for producing estimates of support in small geographical areas from a larger national poll. Certainly it should be better than using a crude uniform swing, or just basing recommendations on what the levels of support were at the previous election and assuming nothing has changed since.

In practice, of course, it depends on the quality of the model and the tactical decisions that people make based upon it – I certainly don’t intend to get into that debate, especially since I expect there will be various rival tactical voting sites with different recommendations, and perhaps different aims and motivations.