There is plenty of new polling in today’s papers, including two polls purporting to show that large numbers of people would vote for new political parties. One, by BMG for the Huffington Post, claims 58% of people would consider backing a new party at the next election; the other, a ComRes poll for BrexitExpress, claims 53% of people in a selection of Tory constituencies would consider voting for a single issue party campaigning to “conclude Brexit as quickly and as fully as possible”. There have been various other polls in recent weeks asking similar questions about how popular new parties would be.

These sound like large figures, but you should take them all with a huge pinch of salt – the reality is that quantifying the prospects of a new political party before it exists is an almost impossible task. Certainly it is not something that can be done with a single question.

First let’s look at the question itself. Polls tend to take two approaches to this question, both of which have flaws. The first is to say “Imagine there was a new party that stood for x, y and z – how likely would you be to consider voting for it?”. The problem with that as a question is that “consider” is a pretty low bar. Does thinking about something for a fleeting second before dismissing it count as “considering”?

An alternative approach is to say “Imagine there was a new party that stood for x, y and z. How would you vote if they stood at the next election?” and then prompt them alongside the usual political parties. This does at least force a choice, and sets the new hypothetical party alongside the established alternatives, prompting people to consider whether they would actually vote for their usual party after all.

There are, however, rather deeper problems with the whole concept. The first is the lack of information about the party – it asks people whether they would vote for a rather generic new party (a new anti-Brexit party, a new pro-Brexit party, a new pro-NHS party, or whatnot). That misses out an awful lot of the things that determine people’s vote. Who is the leader of the party? Are they any good? Do the party appear competent and capable? Do they share my values on other important issues? Can I see other people around me supporting them? Are they backed by voices I trust?

Perhaps most of all, it misses out the whole element of whether the party is seen as a serious, proper contender, or a wasted vote. It ignores the fact that for most new parties, a major hurdle is whether voters are even aware of you, have ever heard of you, or think you are a viable challenger. That is the almost insoluble problem with questions like this: by asking a question that highlights the existence of the new party and implies to respondents that it is a party worthy of serious consideration, a pollster has ignored the biggest and most serious problem most new parties face.

That’s the theory of why they should be treated with some caution. What about their actual record? What happened when people were polled about hypothetical parties that later became real parties standing in real elections? Well, there aren’t that many cases of large nationwide parties launching, though there are more instances of constituency level polls asking similar questions. Here are the examples I can find:

  • At the 1999 European elections two former Conservative MEPs set up a “Pro-Euro Conservative party”. Beforehand, a hypothetical MORI question asked how people would vote in the European elections “if breakaway Conservatives formed their own political party supporting entry to the single European currency”. 14% of those certain or very likely to vote said they would vote for the new breakaway pro-Euro Conservatives. In reality, the Pro-Euro Conservative party won 1.3%.
  • Back in 2012 when the National Health Action party was launched Lord Ashcroft did a GB poll asking how people would vote if “Some doctors opposed to the coalition government’s policies on the NHS […] put up candidates at the next election on a non-party, independent ticket of defending the NHS”. It found 18% of people saying they’d vote for them. In reality they only stood 12 candidates at the 2015 election, getting 0.1% of the national vote and an average of 3% in the seats they contested.
  • Just before the 2017 election Survation did a poll in Kensington for the Stop Brexit Alliance. Asked how they might vote if there was a new “Stop Brexit Alliance” candidate in the seat, 28% of those giving a voting intention said they’d back them. In the event there were two independent stop Brexit candidates in Kensington – Peter Marshall and James Torrance – who got 1.3% between them (my understanding, by the way, is that the potential pro-Europe candidates who did the poll are not the same ones who actually stood).
  • Survation did a similar poll in Battersea, asking how people would vote if a hypothetical “Independent Stop Brexit” candidate stood. That suggested he would get 17%. In reality that independent stop Brexit candidate, Chris Coghlan, got only 2%.
  • Advance Together were a new political party that stood in the local elections in Kensington and Chelsea earlier this year. In an ICM poll of Kensington and Chelsea conducted in late 2017, 64% of people said they would consider voting for such a new party. In reality Advance Together got 5% of the borough-wide vote in Kensington and Chelsea, and an average of 7% in the wards where they stood.

In all of these examples the new party ended up getting far, far, far less support than hypothetical polls suggested they might. It doesn’t follow that this would always be the case, or that a new party can’t succeed. I suspect a new party that was backed by a substantial number of existing MPs and had a well-enough known leader to be taken seriously as a political force could do rather well. My point is more that hypothetical polls really aren’t a particularly good way of judging it.


It is a year since the 2017 general election. I am sure lots of people will be writing articles looking back at the election itself and the year since, but I thought I’d write something about the 2017 polling error, which has gone largely unexamined compared to the 2015 error. The polling companies themselves have all carried out their own internal examinations and reported to the BPC, and the BPC will be putting out a report based on that in due course. In the meantime, here are my own personal thoughts about the wider error across the industry.

The error in 2017 wasn’t the same as 2015.

Most casual observers of polls will probably have lumped the errors of 2015 and 2017 in together, and seen 2017 as just “the polls getting it wrong again”. In fact the nature of the error in 2017 was completely different to that in 2015. It would be wrong to say they are unconnected – the cause of the 2017 errors was often pollsters trying to correct the error of 2015 – but the way the polls were wrong was completely different.

To understand the difference between the errors in 2015 and the errors in 2017 it helps to think of polling methodology as being divided into two bits. The first is the sample – the way pollsters try to get respondents who are representative of the public, be that through the sampling itself or the weights they apply afterwards. The second is the adjustments they make to turn that sample into a measure of how people would actually vote – how they model things like turnout, and how they account for people who say they don’t know or refuse to answer.

In 2015, the polling industry got the first of those wrong, and the second right (or at least, the second of those wasn’t the cause of the error). The Sturgis Inquiry into the 2015 polling error looked at every possible cause and decided that the polls had samples that were not representative. While the inquiry didn’t think the way pollsters predicted turnout was based on strong enough evidence, and recommended improvements there too, it ruled turnout out as a cause of the 2015 error.

In 2017 it was the opposite situation. The polling samples themselves had pretty much the correct result to start with, showing only a small Tory lead. More traditional approaches towards modelling turnout (which typically made only small differences) would have resulted in polls that only marginally overstated the Tory lead. The large errors we saw in 2017 were down to the more elaborate adjustments that pollsters had introduced. If you had stripped away all the attempts aimed at modelling turnout, don’t knows and suchlike (as in the table below) then the underlying samples the pollsters were working with would have got the Conservative lead over Labour about right:

What did pollsters do that was wrong?

The actual things that pollsters did to make their figures wrong varied from pollster to pollster. For ICM, ComRes and Ipsos MORI it looks as if new turnout models inflated the Tory lead; for BMG it was their new adjustment for electoral registration; for YouGov it was reallocating don’t knows. The details were different in each case, but the thing they had in common was that pollsters had introduced post-fieldwork adjustments that had larger impacts than at past elections, and which ended up over-adjusting in favour of the Tories.

In working out how pollsters came to make this error we need to take a closer look at the diagnosis of what went wrong in 2015. Saying that samples were “wrong” is easy; if you are going to solve the problem you need to identify how they were wrong. After 2015 the broad consensus among the industry was that the samples had contained too many politically engaged young people who went out to vote Labour, and not enough uninterested young people who stayed at home. Polling companies took a mixture of two different approaches towards dealing with this, though most companies did a bit of both.

One approach was to try and treat the cause of the error by improving the samples themselves, trying to increase the proportion of respondents who had less interest in politics. Companies started adding quotas or weights that had a more direct relationship with political interest, things like education (YouGov, Survation & Ipsos MORI), newspaper readership (Ipsos MORI) or straight out interest in politics (YouGov & ICM). Pollsters who primarily took this approach ended up with smaller Tory leads.

The other was to try and treat the symptom of the problem by introducing new approaches to turnout that assumed lower turnout among respondents from demographic groups who had not traditionally turned out to vote in the past, and where pollsters felt samples contained too many people who were likely to vote. The most notable example was the decision by some pollsters to replace turnout models based on self-assessment with turnout models based on demographics – downweighting groups like the young or working class who have traditionally had lower turnouts. Typically these changes produced polls with substantially larger Conservative leads.
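To make that concrete, here is a minimal sketch of how a purely demographic turnout adjustment works. The age bands, turnout probabilities and toy sample below are all invented for illustration – they are not any particular pollster’s actual model:

```python
# Minimal sketch of a demographic turnout adjustment (illustrative numbers only).
# Each respondent is weighted by an assumed turnout probability for their age
# group, taken from past-election turnout estimates, rather than by how likely
# they say they are to vote.

ASSUMED_TURNOUT_BY_AGE = {            # hypothetical figures for illustration
    "18-24": 0.45, "25-49": 0.62, "50-64": 0.72, "65+": 0.80,
}

respondents = [                       # toy sample: (age group, stated vote)
    ("18-24", "Lab"), ("18-24", "Lab"), ("25-49", "Lab"), ("25-49", "Con"),
    ("50-64", "Con"), ("50-64", "Lab"), ("65+", "Con"), ("65+", "Con"),
]

def weighted_shares(sample):
    totals = {}
    for age, vote in sample:
        w = ASSUMED_TURNOUT_BY_AGE[age]   # down-weight groups assumed less likely to vote
        totals[vote] = totals.get(vote, 0.0) + w
    total_weight = sum(totals.values())
    return {party: round(100 * w / total_weight, 1) for party, w in totals.items()}

print(weighted_shares(respondents))
# Because younger (here, more Labour-leaning) respondents are down-weighted,
# the headline figures move towards the Conservatives relative to the raw sample.
```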

So was it just to do with pollsters getting youth turnout wrong?

This explanation chimes nicely with the idea that the polling error was down to polling companies getting youth turnout wrong, that young people actually turned out at an unusually high level, but that polling companies fixed youth turnout at an artificially low level, thereby missing this surge in young voting. This is an attractive theory at first glance, but as is so often the case, it’s actually a bit more complicated than that.

The first problem with the theory is that it’s far from clear whether there was a surge in youth turnout. The British Election Study has cast doubt upon whether or not youth turnout really did rise that much. That’s not a debate I’m planning on getting into here, but suffice to say, if there wasn’t really that much of a leap in youth turnout, then it cannot explain some of the large polling misses in 2017.

The second problem with the hypothesis is that there isn’t much of a relationship between which polling companies had about the right proportion of young people in their final figures and which got the result right.

The chart below shows the proportion of voters aged under 25 in each polling company’s final polling figures. The blue bar is the proportion in the sample as a whole, the red bar the proportion in the final voting figures, once pollsters had factored in turnout, dealt with don’t knows and so on. As you would expect, everyone had roughly the same proportion of under 25s in their weighted sample (in line with the actual proportion of 18-24 year olds in the population), but the proportion among their sample of actual voters differs radically. At one end, less than 4% of BMG’s final voting intention figures were based on people aged under 25. At the other end, almost 11% of Survation’s final voting figures were based on under 25s.

According to the British Election Study, the closest we have to authoritative figures, the correct figure should have been about 7%. That implies Survation got it right despite having far too many young people. ComRes had too many young people, yet had one of the worst understatements of Labour support. MORI had close to the correct proportion of young people, yet still got it wrong. There isn’t the neat relationship we’d expect if this was all about getting the correct proportion of young voters. Clearly the explanation must be rather more complicated than that.

So what exactly did go wrong?

Without a nice, neat explanation like youth turnout, the best overarching explanation for the 2017 error is that polling companies seeking to solve the overstatement of Labour in 2015 simply went too far and ended up understating them in 2017. The actual details differed from company to company, but it’s fair to say that the more elaborate the adjustments that polling companies made for things like turnout and don’t knows, the worse they performed. Essentially, polling companies overdid it.

Weighting down young people was part of this, but it was certainly not the whole explanation and some pollsters came unstuck for different reasons. This is not an attempt to look in detail at each pollster, as they may also have had individual factors at play (in BMG’s report, for example, they’ve also highlighted the impact of doing late fieldwork during the daytime), but there is a clear pattern of over-enthusiastic post-fieldwork adjustments turning essentially decent samples into final figures that were too Conservative:

  • BMG’s weighted sample would have shown the parties neck-and-neck. With just traditional turnout weighting they would have given the Tories around a four point lead. However, combining this with an additional down-weighting by past non-voting and the likelihood of different age/tenure groups to be registered to vote changed this into a 13 point Tory lead.
  • ICM’s weighted sample would have shown a five point Tory lead. Adding demographic likelihood to vote weights that largely downweighted the young increased this to a 12 point Tory lead.
  • Ipsos MORI’s weighted sample would have shown the parties neck-and-neck, and MORI’s traditional 10/10 turnout filter looks as if it would have produced an almost spot-on 2 point Tory lead. An additional turnout filter based on demographics changed this to an 8 point Tory lead.
  • YouGov’s weighted sample had a 3 point Tory lead, which would’ve been unchanged by their traditional turnout weights (and which also exactly matched their MRP model). Reallocating don’t knows changed this to a 7 point Tory lead.
  • ComRes’s weighted sample had a 1 point Conservative lead, and by my calculations their old turnout model would have shown much the same. Their new demographic turnout model did not actually understate the proportion of young people, but did weight down working class voters, producing a 12 point Tory lead.

Does this mean modelling turnout by demographics is dead?

No. Or at least, it shouldn’t do. The pollsters who got it most conspicuously wrong in 2017 were indeed those who relied on demographic turnout models, but this may have been down to the way they did it.

Normally weights are applied to a sample all at the same time using “rim weighting” (an iterative process that lets you weight by multiple items without them throwing each other off). What happened with the demographic turnout modelling in 2017 is that companies effectively did two lots of weights. First they weighted the demographics and past vote of the data so it matched the British population. Then they effectively added separate weights by things like age, gender and tenure so that the demographics of the people included in their final voting figures matched the people who actually voted in 2015. The problem is that this second stage may well have thrown out the past vote figures, so the projected voters in their samples matched the demographics of 2015 voters, but no longer matched the politics of 2015 voters.
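To see the mechanics, here is a rough sketch of rim weighting (raking) on a toy sample with just two variables, followed by a separate second round of turnout weights by age. All the targets and turnout figures are invented, and real pollsters weight by far more variables, but it shows how that second stage can pull the past-vote margin away from its target:

```python
# Sketch of rim weighting (iterative proportional fitting) on a toy sample,
# followed by a separate second round of turnout weights by age.
# All targets, turnout rates and data are invented for illustration.

respondents = [                       # (age group, 2015 vote)
    ("18-34", "Lab"), ("18-34", "Lab"), ("18-34", "Con"),
    ("35-54", "Lab"), ("35-54", "Con"), ("35-54", "Con"),
    ("55+",   "Con"), ("55+",   "Con"), ("55+",   "Lab"),
]

age_targets  = {"18-34": 0.28, "35-54": 0.34, "55+": 0.38}   # hypothetical population shares
vote_targets = {"Con": 0.45, "Lab": 0.55}                    # hypothetical past-vote shares

def margin(index, weights):
    # Weighted share of each category of the chosen variable (0 = age, 1 = past vote).
    out = {}
    for resp, w in zip(respondents, weights):
        out[resp[index]] = out.get(resp[index], 0.0) + w
    total = sum(out.values())
    return {k: v / total for k, v in out.items()}

def rake(weights, rounds=20):
    # Alternately scale the weights to hit the age targets and the past-vote
    # targets until both margins match - this is the rim weighting step.
    for _ in range(rounds):
        for index, targets in ((0, age_targets), (1, vote_targets)):
            current = margin(index, weights)
            weights = [w * targets[r[index]] / current[r[index]]
                       for r, w in zip(respondents, weights)]
    return weights

raked = rake([1.0] * len(respondents))
print("past-vote margin after raking:  ", margin(1, raked))   # matches vote_targets

# Now apply a *separate* second round of turnout weights by age, roughly what
# some pollsters effectively did in 2017 (figures again invented).
turnout_by_age = {"18-34": 0.5, "35-54": 0.7, "55+": 0.85}
final = [w * turnout_by_age[r[0]] for r, w in zip(respondents, raked)]
print("past-vote margin after turnout: ", margin(1, final))
# The past-vote margin drifts off target: the projected electorate now matches
# the demographics of 2015 voters but no longer their politics.
```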

It’s worth noting that some companies used demographic based turnout modelling and were far more successful. Kantar’s polling used a hybrid turnout model based upon both demographics and self-reporting, and was one of the most accurate polls. Surveymonkey’s turnout modelling was based on the demographics of people who voted in 2015, and produced only a 4 point Tory lead. YouGov’s MRP model used demographics to predict respondents’ likelihood to vote and was extremely accurate. There were companies who made a success of it, so it may be more a question of how to do it well, rather than whether one does it at all.

What have polling companies done to correct the 2017 problems, and should I trust them?

For individual polling companies the errors of 2017 are far more straightforward to address than in 2015. For most polling companies it has been a simple matter of dropping the adjustments that went wrong. All the causes of error I listed above have simply been reversed – for example, ICM have dropped their demographic turnout model and gone back to asking people how likely they are to vote, ComRes have done the same. MORI have stopped factoring demographics into their turnout, YouGov aren’t reallocating don’t knows, BMG aren’t currently weighting down groups with lower registration.

If you are worried that the specific type of polling error we saw in 2017 could be happening now, you shouldn’t be – all the methods that caused the error have been removed. A simplistic view that the polls understated Labour in 2017 and that, therefore, Labour are actually doing better than the polls suggest is obviously fallacious. However, that is obviously not a guarantee that polls couldn’t be wrong in other ways.

But what about the polling error of 2015?

This is a much more pertinent question. The methodology changes that were introduced in 2017 were intended to correct the problems of 2015. So if the changes are reversed, does that mean the errors of 2015 will re-emerge? Will polls risk *overstating* Labour support again?

The difficult situation the polling companies find themselves in is that the methods used in 2017 would have got 2015 correct, but got 2017 wrong. The methods used in 2015 would have got 2017 correct, but got 2015 wrong. The question we face is what approach would have got both 2015 and 2017 right?

One answer may be for polling companies to use more moderate versions of the changes they introduced for 2017. Another may be to concentrate more on improving samples, rather than on post-fieldwork adjustments to turnout. As we saw earlier in the article, polling companies took a mixture of two approaches to solving the problems of 2015. The approach of “treating the symptom” by changing turnout models and similar ended up backfiring, but what about the first approach – what became of the attempts to improve the samples themselves?

As we saw above, the actual samples the polls used were broadly accurate. They tended to have the smaller parties too high, but the balance between Labour and Conservative was pretty accurate. For one reason or another, the sampling problem from 2015 appears to have completely disappeared by 2017. 2015 samples were skewed towards Labour, but in 2017 they were not. I can think of three possible explanations for this.

  • The post-2015 changes made by the polling companies corrected the problem. This seems unlikely to be the sole reason, as polling samples were better across the board, with those companies who had done little to improve their samples performing in line with those who had made extensive efforts.
  • Weighting and sampling by the EU ref made samples better. There is one sampling/weighting change that nearly everyone made – they started sampling/weighting by recalled EU ref vote, something that was an important driver of how people voted in 2017. It may just be that providence has provided the polling companies with a useful new weighting variable that meant samples were far better at predicting vote shares.
  • Or perhaps the causes of the problems in 2015 just weren’t an issue in 2017. A sample being wrong doesn’t necessarily mean the result will be wrong. For example, if I had too many people with ginger hair in my sample, the results would probably still be correct (unless there is some hitherto unknown relationship between voting and hair colour). It’s possible that – once you’ve controlled for other factors – in 2015 people with low political engagement voted differently to engaged people, but that in 2017 they voted in much the same way. In other words, it’s possible that the sampling shortcomings of 2015 didn’t go away, they just ceased to matter.

It is difficult to come to a firm answer with the data available, but whichever mix of these is the case, polling companies shouldn’t be complacent. Some of them have made substantial attempts to improve their samples since 2015, but if the problems of 2015 disappeared because of the impact of weighting by Brexit or because political engagement mattered less in 2017, then we cannot really tell how successful those attempts were. And it stores up potential problems for the future – weighting by a referendum that happened in 2016 will only be workable for so long, and if political engagement didn’t matter this time, it doesn’t mean it won’t in 2022.

Will MRP save the day?

One of the few conspicuous successes in the election polling was the YouGov MRP model (that is, multilevel regression and post-stratification). I expect come the next election there will be many other attempts to do the same. I will urge one note of caution – MRP is not a panacea for polling’s problems. It can go wrong, and it still relies on the decisions people make in designing the model it runs upon.

MRP is primarily a method of modelling opinion at lower geographical areas from a big overall dataset. Hence in 2017 YouGov used it to model the share of the vote in the 632 constituencies in Great Britain. In that sense, it’s a genuinely important step forward in election polling, because it properly models actual seat numbers and, from there, who will win the election and will be in a position to form a government. Previously polls could only predict shares of the vote, which others could use to project into a result using the rather blunt tool of uniform national swing. MRP produces figures at the seat level, so can be used to predict the actual result.

Of course, if you’ve got shares of the vote for each seat then you’ll also be able to use it to get national shares of the vote. However, at that level it really shouldn’t be that different from what you’d get from a traditional poll that weighted its sample using the same variables and the same targets (indeed, the YouGov MRP and traditional polls showed much the same figures for much of the campaign – the differences came down to turnout adjustments and don’t knows). Its level of accuracy will still depend on the quality of the data, the quality of the modelling and whether the people behind it have made the right decisions about the variables used in the model and how they model things like turnout… in other words, all the same things that determine whether an opinion poll gets it right or not.
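For readers who want a feel for the mechanics, here is a heavily simplified sketch of the post-stratification half of the approach, with the multilevel regression step replaced by plain cell estimates. Everything in it – the demographic cells, the support figures and the two constituencies – is invented for illustration and is not YouGov’s actual model:

```python
# Sketch of the post-stratification half of MRP. In full MRP the per-cell
# estimates below would come from a multilevel regression fitted to the poll
# data across many more variables; here they are just invented numbers.

# Step 1: estimated Conservative support within each demographic "cell".
con_support_by_cell = {
    ("18-34", "degree"): 0.25, ("18-34", "no degree"): 0.35,
    ("35+",   "degree"): 0.40, ("35+",   "no degree"): 0.55,
}

# Step 2: how many voters fall into each cell in two hypothetical seats
# (in practice this comes from census and similar data).
constituencies = {
    "Seat A": {("18-34", "degree"): 20000, ("18-34", "no degree"): 10000,
               ("35+",   "degree"): 15000, ("35+",   "no degree"): 25000},
    "Seat B": {("18-34", "degree"): 5000,  ("18-34", "no degree"): 15000,
               ("35+",   "degree"): 10000, ("35+",   "no degree"): 40000},
}

# Step 3: post-stratify - weight each cell's estimate by its size in the seat.
for seat, cells in constituencies.items():
    electorate = sum(cells.values())
    con_share = sum(con_support_by_cell[c] * n for c, n in cells.items()) / electorate
    print(f"{seat}: projected Conservative share {con_share:.1%}")

# The same cell estimates produce different projections in each seat because
# the seats have different demographic make-ups; national shares come from
# aggregating the seat projections.
```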

In short, I do hope the YouGov MRP model works as well in 2022 as it did in 2017, but MRP as a technique is not infallible. Lord Ashcroft also did an MRP model in 2017, and that was showing a Tory majority of 60.

TLDR:

  • The polling error in 2017 wasn’t a repeat of 2015 – the cause and direction of the error were complete opposites.
  • In 2017 the polling samples would have got the Tory lead pretty much spot on, but the topline figures ended up being wrong because pollsters added various adjustments to try and correct the problems of 2015.
  • While a few pollsters did come unstuck over turnout models, it’s not as simple as it being all about youth turnout. Different pollsters made different errors.
  • All the adjustments that led to the error have now been reversed, so the specific error we saw in 2017 shouldn’t reoccur.
  • But that doesn’t mean polls couldn’t be wrong in other ways (most worryingly, we don’t really know why the underlying problem behind the 2015 error went away), so pollsters shouldn’t get complacent about potential polling error.
  • MRP isn’t a panacea to the problems – it still needs good modelling of opinion to get accurate results. If it works though, it can give a much better steer on actual seat numbers than traditional polls.



We’ve had three voting intention polls in the last couple of days:

  • Ipsos MORI’s monthly political monitor had topline figures of CON 43%(+4), LAB 42%(nc), LDEM 6%(-3). Fieldwork was over last weekend (Fri-Wed), and changes are from January. Tabs are here.
  • YouGov/Times on Friday has toplines of CON 41%(nc), LAB 43%(+1), LDEM 7%(nc). Fieldwork was Mon-Tues and changes are from last week. Tabs are here.
  • Survation/GMB, reported in the Sunday Mirror, has CON 37%(-3), LAB 44%(+1), LDEM 9%(+1). Fieldwork was Wednesday and Thursday, and changes are from the tail end of January. No tabs yet.

There is no clear trend – Labour is steady across the board, Survation have the Tories falling, MORI have them rising. MORI and YouGov show the two main parties neck-and-neck, Survation have a clear Labour lead.

The better Labour position in Survation is typical, but it’s not really clear why. As regular readers will know, Survation do both online and telephone voting intention polls. Their phone polls really do have a significantly different methodology – rather than random digit dialling, they randomly select phone numbers from consumer databases and ring those specific people. That would be an obvious possible explanation for a difference between Survation phone polls and polls from other companies. However, this poll wasn’t conducted by telephone, it was conducted online, and Survation’s online method is pretty similar to everyone else’s.

Survation’s online samples at the general election were much the same as everyone else’s. The differences were down to other companies experimenting with things like demographic turnout modelling in order to solve the problems of 2015 – approaches that ultimately ended up backfiring. However, the polling companies that got it wrong have now dropped the innovations that didn’t work and largely gone back to simpler methods on turnout, meaning there is now no obvious reason for the difference.

Meanwhile, looking at the other questions in the surveys, the YouGov poll also included all their regular EU trackers, following Theresa May and Jeremy Corbyn’s speeches. Unsurprisingly, neither speech seems to have made much difference. 29% of people think that the Conservative party’s policy on Brexit is clear, up on a week ago (25%) but still significantly down from January (37%). 36% of people say they support May’s approach to Brexit, barely changed from a week ago (35%). For Labour, just 18% of people now think their Brexit policy is clear (down from 22% straight after Corbyn’s speech), and 21% of people say they support the approach that Jeremy Corbyn is taking towards Brexit.


Survation have a poll in today’s Mail on Sunday. Topline figures are CON 37%(-1), LAB 45%(+1), LDEM 6%(-1). Fieldwork was Thursday and Friday and changes are since early October.

The eight point Labour lead is the largest any poll has shown since the election, so has obviously attracted some attention. As regular readers will know, Survation carry out both telephone and online polls. Their telephone method is unique to them, so could easily explain getting different results (Ipsos MORI still use phone polling, but they phone randomly generated numbers (random digit dialling), as opposed to Survation who phone actual numbers randomly selected from telephone databases). However, this was an online poll, and online there is nothing particularly unusual about Survation’s method that might explain the difference. Survation use an online panel like all the other online pollsters, weight by very similar factors like age, gender, past vote, referendum vote and education, use self-reported likelihood to vote and exclude don’t knows. There are good reasons why their results are better for Labour than those from the pollsters showing the most Tory results, like Kantar and ICM (Kantar still use demographics in their turnout model, ICM reallocate don’t knows), but the gap compared to results from MORI and YouGov doesn’t have such an easy explanation.

Looking at the nuts and bolts of the survey, there’s nothing unusual about the turnout or age distribution. The most striking thing that explains the strong Labour position of the poll is that Survation found very few people who voted Labour in 2017 saying they don’t know how they would vote now. Normally even parties who are doing well see a chunk of their vote from the last election now saying they aren’t sure what they would do, but only 3% of Labour’s 2017 vote told Survation they weren’t sure how they would vote in an election, compared to about 10% in other polls. Essentially, Survation are finding a more robust Labour vote.
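To get a sense of how much that difference alone is worth, here is some toy arithmetic. The recalled-vote shares and the assumed 5% Conservative don’t-know rate are round illustrative numbers, not figures from Survation’s tables:

```python
# Toy arithmetic: how the share of 2017 Labour voters saying "don't know"
# feeds through to the headline lead once don't knows are excluded.
# All numbers are round illustrative ones, not any pollster's actual tables.

def labour_lead(lab_dont_know_rate):
    lab_2017, con_2017 = 41.0, 42.0                      # roughly the 2017 result
    lab_retained = lab_2017 * (1 - lab_dont_know_rate)   # Labour voters still giving a VI
    con_retained = con_2017 * (1 - 0.05)                 # assume 5% of 2017 Tories are unsure
    total = lab_retained + con_retained                  # other parties ignored for simplicity
    return 100 * (lab_retained - con_retained) / total   # Labour lead among those giving a VI

print(f"3% of 2017 Labour voters unsure:  Labour lead = {labour_lead(0.03):+.1f}")
print(f"10% of 2017 Labour voters unsure: Labour lead = {labour_lead(0.10):+.1f}")
# The difference in don't-know rates alone is worth three to four points on the
# headline lead, before any genuine movement between the parties.
```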

Two other interesting findings are worth highlighting. One is a question on a second referendum – 50% said they would support holding a referendum asking if people supported the terms of a Brexit deal, 34% said they would be opposed. This is one of those questions that gets very different answers depending on how you ask it – there are plenty of other questions that find opposition, and I’m conscious this question does not make it clear whether it would be a referendum on “accept deal or stay in EU”, “accept deal or continue negotiations” or “accept deal or no deal Brexit”. Some of these would be less popular than others. Nevertheless, the direction of travel is clear – Survation asked the same question back in April, when there was only a five point lead for supporting a referendum on the deal; now that has grown to sixteen points (50% support, 34% opposed).

Finally there was a question on whether Donald Trump’s visit to the UK should go ahead. 37% think it should, 50% think it should not. This echoes a YouGov poll yesterday which found 31% think it should go ahead and 55% think it should not. I mention this largely as an antidote to people being misled by Twitter polls suggesting people want the visit to go ahead – all recent polls with representative samples suggest the public are opposed to a visit.

Tabs for the Survation poll are here.


Kantar have published a new voting intention poll ahead of the budget, the first I’ve seen from them since the general election. Topline figures are CON 42%, LAB 38%, LDEM 9%, UKIP 5%. Fieldwork was between last Tuesday and this Monday.

This is the first poll to show a Conservative lead since September and the largest Tory lead in any poll since the election. As ever, it’s best to look carefully at any poll that shows an unusual result before getting too excited/dismayed. The reason for the unusual result appears to be methodological, rather than some sudden Tory recovery, and comes down to the way Kantar treat turnout. As regular readers will know, many polls came horribly unstuck at the 2017 election because instead of basing turnout on how likely respondents said they were to vote, they predicted respondents’ likelihood to vote based on factors like their age and class. These methods assumed young people would be much less likely to vote, and produced large Conservative leads that ended up being wrong. Generally speaking, these socio-economic models have since been dropped.

At the election Kantar took a sort of halfway position – they based their turnout model on respondents’ self-assessed likelihood to vote, whether they voted last time and their age, assuming that older people were more likely to vote than younger people. This actually performed far better than most other companies’ approaches; Kantar’s final poll showed a five point Conservative lead, compared to the 2.5 point lead they actually got. As such, Kantar appear to have kept using their old turnout model that partly predicts likelihood to vote based on age. The impact of this is clear – before turnout weighting Labour would have had a one point lead, very similar to other companies’ polls. After turnout weighting the Conservatives are four points ahead (the full tabs and methodology details are here).
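To illustrate the general shape of such a hybrid approach, here is a minimal sketch that blends self-reported likelihood to vote with age and past voting. The blend and the turnout probabilities are invented for illustration – this is not Kantar’s actual formula:

```python
# Sketch of a hybrid turnout weight combining self-reported likelihood to vote
# with age and whether the respondent voted last time. The blend and all the
# probabilities are invented for illustration, not any pollster's actual model.

ASSUMED_TURNOUT_BY_AGE = {"18-24": 0.50, "25-49": 0.65, "50-64": 0.75, "65+": 0.80}

def turnout_weight(self_reported_0_to_10, age_group, voted_last_time):
    self_score = self_reported_0_to_10 / 10.0        # stated likelihood to vote
    demo_score = ASSUMED_TURNOUT_BY_AGE[age_group]   # demographic prior for the age group
    past_score = 1.0 if voted_last_time else 0.6     # past voting as a habit signal
    # Simple average of the three components; a real model would calibrate the blend.
    return (self_score + demo_score + past_score) / 3

print(turnout_weight(10, "65+", True))     # enthusiastic older past voter  -> ~0.93
print(turnout_weight(10, "18-24", False))  # enthusiastic young non-voter   -> ~0.70
# Even respondents who say they are certain to vote get down-weighted if their
# age group and past behaviour point to lower turnout, which is how a model
# like this can turn a small Labour lead in the raw sample into a Conservative
# lead after weighting.
```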

(Another noticeable difference between Kantar’s method and that of other companies is that they use the leaders’ names in their voting intention question, though given there is not nearly as much of a gap between Theresa May and Jeremy Corbyn’s ratings as there used to be, I’m not sure that would still have much of an impact.)