The first Sunday of the election campaign, and as you expect, several polls in the Sunday papers:

The Telegraph have a poll from ORB. Topline figures are CON 36%, LAB 28%, LDEM 14%, BREX 12%. Fieldwork was Wednesday and Thursday. While they’ve published some trackers on support for Brexit, the last time I recall seeing a voting intention poll from ORB was back in April, so changes here aren’t really relevant.

ComRes in the Sunday Express have topline figures of CON 36%(+3), LAB 28%(-1), LDEM 17%(-1), BREX 10%(-2). Fieldwork was Wednesday and Thursday, with changes from mid-October, at the time of Johnson’s deal.

The regular Opinium poll for the Observer has topline figures of CON 42%(+2), LAB 26%(+2), LDEM 16%(+1), BREX 9%(-1), GRN 2%(-1). Fieldwork was Thursday and Friday, and changes are from last week.

YouGov in the Sunday Times has topline figures of CON 39%(+3), LAB 27%(+6), LDEM 16%(-2), BREX 7%(-6). Fieldwork was Thursday and Friday, with changes from midweek.

Finally, Deltapoll in the Mail on Sunday (they’ll be running weekly polls for them for the campaign) have topline figures of CON 40%(+3), LAB 28%(+4), LDEM 14%(-5), BREX 11%(nc). Fieldwork was Thursday to Saturday, with changes from mid-October, just after Johnson’s deal.

All the polls continue to show a sizeable Conservative lead, though as ever this varies somewhat from pollster to pollster. The eight point leads that the ORB and ComRes polls show would be quite tight between a Conservative majority and a hung Parliament on a uniform swing (not that I’d expect a uniform swing); the 12 and 16 point leads that YouGov, Deltapoll and Opinium show should give them a solid majority.

Note that all the polls are showing the Conservatives gaining support (so did the Survation, Panelbase and MORI polls published on Thursday and Friday), and most of them are showing the Labour party gaining too – suggesting perhaps that now an election is a reality, some wavering voters are moving their support behind the two main parties after all (something we also saw in the 2017 election, when UKIP support fell off a cliff almost as soon as the election was called).

Note also that the fieldwork for the YouGov, Deltapoll and Opinium polls was conducted on and after October 31st, when Britain had obviously not left the European Union on time. That does not appear to have either damaged the Tories or boosted the Brexit party. This looked likely anyway: it had been obvious for about a week and a half that Britain was not going to leave on the 31st, but there was always a small chance of an impact when the date actually passed. There wasn’t. (And I told you those hypothetical “How would you vote if Britain hadn’t left the EU by Oct 31st” questions showing Tory support slumping didn’t have any predictive value. Next time people do them – and they will – please do remember that they don’t work!)


The first voting intention polls published since the election was called were in this morning’s papers: Survation for the Mail, Ipsos MORI for the Standard and YouGov for the Times. Topline figures were

Survation – CON 34%, LAB 26%, LDEM 19%, BREX 12%, GRN 1% (tabs)
Ipsos MORI – CON 41%, LAB 24%, LDEM 20%, BREX 7%, GRN 3% (tabs)
YouGov – CON 36%, LAB 21%, LDEM 18%, BREX 13%, GRN 6% (tabs).

There’s quite a spread between the results – Ipsos MORI have the Conservatives up above 40, their highest in any poll since August, while YouGov and Survation have them in the mid-thirties. Labour’s support ranges from 26% with Survation down to 21% with YouGov. All three have the Lib Dems between 18% and 20%. This means that while the size of the Conservative lead varies, there is a consistent Conservative lead across the board as we start the campaign.

It’s worth noting that that Tory lead is largely down to a split opposition. Even in the MORI poll the Conservatives have lost support since the 2017 election (in the YouGov and Survation polls they’ve lost a lot of support). This is not a popular government – in the MORI poll its satisfaction rating is minus 55 – it’s just that the main opposition have lost even more support. The healthy Conservative lead comes down to the Conservatives retaining the bulk of the Leave vote, while the Remain vote is split between Labour, the Liberal Democrats, the Greens, the SNP, Plaid and so on.

For as long as this is the case, the Conservatives should do well; if it changes, they will struggle. If the Brexit party manage to get back into the race and take support from the Tories it would eat into their lead. The other risk for the Tories is the Remain vote swinging more decisively behind either Labour or the Liberal Democrats (or signs of more effective tactical voting, winning seats off the Conservatives despite a split vote). Essentially, Boris Johnson needs to keep the Leave vote united and the Remain vote divided.

It is also worth considering how the Conservative lead might translate into seats. In 2017 the Conservative lead over Labour was only two and a half percentage points. You would therefore expect an eight point Conservative lead to translate into a majority, and a fifteen or seventeen point lead to be a landslide. In reality that Survation poll could easily be touch-and-go for a Tory majority and, while the bigger leads would likely get a Tory majority, it may not be landslide territory.
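As a rough sketch of how the uniform swing arithmetic works, here is a minimal Python illustration. The 2017 national shares are real, but the poll figures and the constituency majorities are invented for illustration – this is not real seat data or any pollster’s model:

```python
# Illustrative uniform-swing sketch. The 2017 GB shares are real;
# the new poll figures and seat majorities below are made up.

def swing(con_old, lab_old, con_new, lab_new):
    """Two-party (Butler) swing: the average of the Con change
    and the (negated) Lab change, in percentage points."""
    return ((con_new - con_old) - (lab_new - lab_old)) / 2

# 2017 GB shares: CON 43.4, LAB 41.0 (a lead of about 2.5 points).
# Suppose a poll now shows CON 36, LAB 28 (an 8 point lead).
s = swing(43.4, 41.0, 36.0, 28.0)

# On a uniform swing, a Labour-held seat falls to the Conservatives
# if its percentage-point majority is less than twice the swing.
lab_majorities = [1.2, 3.5, 6.0, 10.4]   # hypothetical seats
gains = [m for m in lab_majorities if m < 2 * s]

print(f"swing to Con: {s:.2f} points, hypothetical seats gained: {len(gains)}")
```

The point of the arithmetic is that the lead change, not the lead itself, drives seat change under uniform swing – which is why an 8 point lead in 2019 projects only a modest advance on a 2.5 point lead in 2017.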

The reason that the Conservatives translated votes more effectively into seats in 2015 and 2017 was to do with the distribution of the vote. The Conservative re-emergence in Scotland meant that Tory votes up there were no longer wasted (but Labour votes increasingly were), the collapse of the Liberal Democrats in the South-West meant that the Tories vote there returned more MPs. If at the coming election we see those trends reverse, and the Conservatives lose seats to the SNP in Scotland and the Lib Dems in the South, then suddenly their votes won’t be translated so effectively into seats, and they’ll need to win more seats off Labour to make up for it.

Right now we have little evidence of how uniform or otherwise the changes in support are, or of whether there is tactical voting (Survation have released a couple of constituency polls conducted for the Liberal Democrats showing them doing very well in individual seats, but I don’t think it’s too cynical to imagine that the Lib Dems may have selectively published seats where they are doing particularly well). In the fullness of time I expect we will see the publication of MRP models along the lines of those YouGov conducted in 2017, which may give us a better steer, but I’ll come to that another day.

In the meantime, as we cross the starting line the Conservatives have a clear lead in the polls, but how it translates into seats is unclear. In the polls with the smaller Tory leads, it may not produce a majority at all. Equally, their lead is dependent upon the Leave vote remaining relatively united and the Remain vote remaining divided; if that changes, the race could end up being far closer.



Today the British Election Study published its face-to-face data for the 2017 election. The BES has two main elements: one is a large online panel element, using the same respondents in multiple waves so that they can track change at an individual level. The other part is a smaller face-to-face element, using a genuine random sample to try and get the best sample possible. The face-to-face element is also cross-referenced with the marked electoral register (that is, the copy of the register in polling stations where people’s names are crossed off as they vote) so that they can verify after the election whether people genuinely did or did not vote.

This means the face-to-face data is by far the best data we have on actual turnout levels, and on turnout levels among different demographic groups. When discussing turnout I’m often asked about the official figures for turnout among men and women, young and old, and have to explain that no such figures exist. While there are official figures for the number of votes cast in each constituency and the number of people on the electoral register (a different figure, note, from the number of people actually eligible to vote, for which there is no official data), there are no figures at all for turnout among demographic sub-groups of the population. We know how many people voted, but not their age, gender, class or other demographics.

Up until now there has been a widespread narrative that in 2017 Labour managed to engage young people who do not normally vote and substantially increased youth turnout at the general election (referred to by the rather irritating neologism “youthquake”). This was never based on particularly strong evidence. The narrative began to take hold during the campaign itself because of the difference between polls (put simply, companies showing a large Tory lead were weighting down younger respondents based on their past unlikelihood to vote, while companies showing smaller Tory leads were basing turnout more on self-reporting and, therefore, often showing higher youth turnout). A common and not unreasonable assumption before the general election was, therefore, that if youth turnout did increase, those polls showing a smaller Tory lead would be right; if youth turnout stayed low, the Tories would win comfortably. Another common talking point during the campaign was the enthusiastic crowds of young people attracted to Jeremy Corbyn’s events. People sensibly cautioned that what mattered was whether those crowds suggested normally uninterested young people would actually vote, or just represented the more politically engaged young.

By election day, there was a narrative that if all those enthusiastic young people actually came out to vote Labour would do well, and if it was just a mirage the Tories would win. Therefore when the Conservatives did do less well than most people expected the most easily available explanation to reach for was that young people had indeed been enthused to go out and vote Labour. In the immediate aftermath of the election an implausible claim that youth turnout was 72% was widely reported, without any apparent source. Shortly after that polling evidence from various companies emerged that did support a higher level of youth turnout. Given that the problem with polling accuracy in 2015 was that poll samples had too many of the sort of people who vote, particularly among young people, this evidence was rather dicey. It could have been that youth turnout had risen… or it could have been that polls still contained too many of the sort of young people who vote. The final bit of evidence was that seats that contained a larger proportion of young people did see their turnout rise more at the election… though as Chris Prosser and the rest of the BES team ably explain in their paper, this is not necessarily the strong evidence you might think: seats with more young people tend to be urban and more diverse, so it’s equally possible that urban areas in general saw a larger increase in turnout.

In fact the BES data released today – using a random sample and checked against the electoral register – does not find evidence of any increase in turnout among under 25s, though it does find some evidence of an increase in turnout among those aged 25 to 44. The boost in youth turnout that people have been using to explain the 2017 election may not actually exist at all (or if it does, it was among relatively young voters, rather than the youngest voters). That’s not to say that young voters were not still important in explaining the election result – age was still an important divide in how people voted, and young people did still heavily back Labour, so it is still fair to say Labour managed to enthuse young people more. It’s just that the level of turnout among under 25s does not appear to have risen; Labour simply took a greater share of support among younger voters.

This does raise some other questions about the polls at the 2017 election. Until now the most obvious explanation for why some polls got the figures very wrong and others got them right was that, by basing turnout patterns on what happened in 2015, some polls missed a genuine surge in youth turnout and therefore understated Labour support, while the polls showing higher youth turnout were closer to the actual result. However, if youth turnout didn’t actually rise then this explanation seems far less convincing. My own view is that the way turnout models were done was probably still a major factor in the error, but it may be more a case of how they were done rather than the principle (besides, some approaches, like the YouGov MRP model, used demographics in their turnout modelling and did well). More on that issue another time.

In the meantime, there’s a summary of the BES findings on youth turnout here and their full paper is here.


The British Election Study have released their data from the election campaign waves today – one large wave straight after the election was called, a wave of daily rolling polls from throughout the campaign itself and a third large wave conducted straight after the campaign. All three of these datasets were collected online by YouGov (the face-to-face element of the BES is still to come). If you’re au fait with stats software like SPSS, Stata or R the raw data is available for download on the British Election Study site here.

There’s already some analysis of the data by the BES team here (a longer version of the article you may have seen on BBC this morning), focusing on how people changed their votes between 2015 and 2017, and between the beginning and end of the election campaign.

The article breaks down 2015 vote by Remainers and Leavers. Looking at how 2015 voters who backed Leave ended up voting in 2017, the Conservatives kept the vast majority of their 2015 Leave voters and picked up over half of the 2015 UKIP vote (as well as a chunk of Labour Leavers). The collapse of UKIP wasn’t all to the Conservatives’ favour, though: 18% of UKIP Leavers ended up moving to Labour.

Turning to the Remain vote, Labour were the clear victor: around a third of 2015 Tories who voted remain drifted away from the party, either to Labour or to the Lib Dems, but Labour also picked up a chunk of the 2015 Lib Dem vote and most of the 2015 Green vote. Of course, while this is easy to view through the prism of Brexit, that doesn’t necessarily mean Brexit was the main driver (to give an obvious example, yes – a large proportion of Green Remain voters moved to Labour… but a large proportion of the 2015 Green vote had already moved to Labour before the referendum, presumably as a result of the direction Jeremy Corbyn had taken the party).

More interesting is the movement during the campaign itself. 19% of people changed how they would vote between the start and the end of the campaign. This is not in itself unusual – in 2015 the figure was 17%, and according to the BES team it was higher in 2010 and 2005. The difference in 2017 is that this movement was overwhelmingly in favour of the Labour party, whereas at previous elections the churn largely cancelled itself out. Hence during the campaign we can see significant numbers of Tory voters, Lib Dem voters and, most of all, don’t knows moving towards Labour, but very little movement away from Labour.

In terms of explanations for the movement: while the voters Labour attracted during the campaign were those you’d expect to be the most receptive (that is, tending to be opposed to a hard Brexit and left-leaning), the most obvious movement was on leadership ratings – the sharp collapse in Theresa May’s ratings and the steady increase in Jeremy Corbyn’s. Those who moved to Labour during the campaign were also those who showed the biggest increase in their perceptions of Jeremy Corbyn.

Ed and Chris’s full article is here.


I’ve only had a couple of hours sleep, so this is a very short comment on lessons from the polls at the election. The two best performing traditional polls seem to be those from Survation and Surveymonkey: Survation had a one point Conservative lead in their final GB poll, Surveymonkey a four point lead in their final UK poll. The actual lead is 2 or 3 points, depending on whether you look at UK or GB figures. Congratulations to both of them. While it wasn’t a traditional poll, YouGov’s MRP model also came very close – its final GB figures showed a four point lead (and some of the individual seat estimates that looked frankly outlandish, like Canterbury leaning Labour and Kensington being a tossup, actually turned out to be correct).

Looking across the board, the other companies all overstated the Tory lead to one degree or another. Their Tory vote shares were broadly accurate; rather, almost everyone understated Labour support. I have a lot of sympathy with Peter Kellner’s article in the Standard earlier – that to some degree it was a case of pollsters “trying too hard”. Companies have all been trying to correct the problems of 2015, and in many cases those changes seem to have gone too far.

A big gulf between pollsters that many commented on during the campaign was their attitude to turnout. The pollsters who were furthest out on the lead – ComRes, ICM and BMG – all used demographic-based turnout models that pumped up the Tory lead, rather than basing turnout on how likely respondents said they were to vote. This was in many ways a way of addressing a problem with the 2015 samples, which contained too many of the sort of young people who vote: the solution was to weight down turnout among the young (and among working class respondents, renters, or the less well educated – different pollsters used different criteria). This wasn’t necessarily the wrong solution, but it was a risky one, since it depends on modelling turnout correctly. If turnout among young people actually did rise, then pollsters replicating 2015 patterns of turnout would have missed it. That may be what happened.
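The difference between the two approaches can be sketched in a few lines of Python. The respondents and the age-group turnout rates below are entirely invented for illustration – they are not any pollster’s actual sample or model:

```python
# Sketch of self-reported vs demographic turnout weighting.
# All numbers here are invented for illustration only.
# Each respondent: (age group, vote, self-reported likelihood out of 10).
respondents = [
    ("18-24", "Lab", 9), ("18-24", "Lab", 10), ("18-24", "Con", 8),
    ("65+",   "Con", 10), ("65+",  "Con", 9),  ("65+",  "Lab", 10),
]

# Approach 1: weight each respondent by their own stated likelihood.
self_reported = {}
for age, vote, likelihood in respondents:
    self_reported[vote] = self_reported.get(vote, 0) + likelihood / 10

# Approach 2: weight by an assumed historical turnout rate for the
# age group, ignoring what the respondent says (rates are illustrative).
turnout_by_age = {"18-24": 0.45, "65+": 0.80}
demographic = {}
for age, vote, likelihood in respondents:
    demographic[vote] = demographic.get(vote, 0) + turnout_by_age[age]

# The demographic model down-weights the (here Lab-leaning) young
# respondents regardless of their stated enthusiasm, so the same raw
# sample produces a wider Conservative lead.
print("self-reported:", self_reported)
print("demographic:  ", demographic)
```

With these made-up numbers the self-reported weighting has Labour ahead and the demographic weighting has the Conservatives ahead – which is the gulf between pollsters described above: same respondents, different turnout assumptions, different lead.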

That said, one shouldn’t jump to conclusions too quickly. It may be a case of how demographic turnout models were applied (by weighting the whole sample to match 2015 recalled vote and then separately weighting different demographic groups up or down based on likelihood to vote there’s a risk of “double-counting”). Most importantly, the YouGov MRP model and the Surveymonkey survey both based their turnout models on demographics too, and they both got the election right, so clearly it’s an approach that has the potential to work if done correctly.

Personally I’m pleased the YouGov model worked, disappointed the more traditional YouGov poll had too big a lead… but that at least gives us something to learn from (and for most of the campaign the two showed a similar lead, so rolling back some decisions and learning from the model seems a good starting point).

And with that, I’m going to get some sleep.