Today the British Election Study published its face-to-face data for the 2017 election. The BES has two main elements: a large online panel, which interviews the same respondents in multiple waves so that change can be tracked at an individual level, and a smaller face-to-face survey, which uses a genuine random sample to get the best quality sample possible. The face-to-face element is also cross-referenced against the marked electoral register (that is, the copy of the register in polling stations where people’s names are crossed off as they vote), so that the BES can verify after the election whether people genuinely did or did not vote.

This means the face-to-face data is by far the best data we have on actual turnout levels and on turnout levels among different demographic groups. When discussing turnout I’m often asked about the official figures for turnout among men and women, young and old, and have to explain that these figures do not exist. While there are official figures for the number of votes cast in each constituency and the number of people on the electoral register (a different figure, note, from the number of people who are actually eligible to vote, for which there is no official data), there are no official figures for turnout among demographic sub-groups of the population. We know how many people voted, but not their age, gender, class or other demographics.

Up until now there has been a widespread narrative that in 2017 Labour managed to engage young people who do not normally vote and substantially increased youth turnout at the general election (referred to by the rather irritating neologism “youthquake”). This was never based on particularly strong evidence. The narrative began to take hold during the campaign itself because of the difference between polls (put simply, companies showing a large Tory lead were weighting down younger respondents based on their past unlikelihood to vote, while companies showing smaller Tory leads based turnout more on self-reporting and, therefore, often showed higher youth turnout). A common and not unreasonable assumption before the general election was, therefore, that if youth turnout did increase, the polls showing a smaller Tory lead would be right; if youth turnout stayed low, the Tories would win comfortably. Another common topic of discussion during the campaign was the enthusiastic crowds of young people attracted to Jeremy Corbyn’s events. People sensibly cautioned that what mattered was whether those crowds suggested normally uninterested young people would vote, or just represented the more politically engaged young people.

By election day, there was a narrative that if all those enthusiastic young people actually came out to vote Labour would do well, and if it was just a mirage the Tories would win. Therefore when the Conservatives did do less well than most people expected the most easily available explanation to reach for was that young people had indeed been enthused to go out and vote Labour. In the immediate aftermath of the election an implausible claim that youth turnout was 72% was widely reported, without any apparent source. Shortly after that polling evidence from various companies emerged that did support a higher level of youth turnout. Given that the problem with polling accuracy in 2015 was that poll samples had too many of the sort of people who vote, particularly among young people, this evidence was rather dicey. It could have been that youth turnout had risen… or it could have been that polls still contained too many of the sort of young people who vote. The final bit of evidence was that seats that contained a larger proportion of young people did see their turnout rise more at the election… though as Chris Prosser and the rest of the BES team ably explain in their paper, this is not necessarily the strong evidence you might think: seats with more young people tend to be urban and more diverse, so it’s equally possible that urban areas in general saw a larger increase in turnout.

In fact the BES data released today – using a random sample and checked against the electoral register – does not find evidence of any increase in turnout among under 25s, though it does find some evidence of an increase in turnout among those aged between 25 and 44. The boost in youth turnout that people have been using to explain the 2017 election may not actually exist at all (or, if it does, it was among relatively young voters rather than the youngest voters). That’s not to say that young voters were not still important in explaining the election result – age was still an important divide in how people voted, and young people still voted heavily for Labour, so it is still fair to say Labour managed to enthuse young people more. It’s just that the level of turnout among under 25s does not appear to have risen; Labour simply took a greater share of support among younger voters.

This does raise some other questions about the polls at the 2017 election. Until now the most obvious explanation for why some polls got the figures very wrong and others got them right was that, by basing turnout patterns on what happened in 2015, some polls missed a genuine surge in youth turnout and therefore understated Labour support, while polls showing higher youth turnout were closer to the actual result. However, if youth turnout didn’t actually rise then this explanation seems far less convincing. My own view is that turnout modelling was probably still a major factor in the error, but it may be more a case of how the models were done rather than the principle itself (besides, there were some approaches, like the YouGov MRP model, that used demographics in their turnout modelling and did well). More on that issue another time.

In the meantime, there’s a summary of the BES findings on youth turnout here and their full paper is here.


The British Election Study have released their data from the election campaign waves today – one large wave straight after the election was called, a wave of daily rolling polls from throughout the campaign itself and a third large wave conducted straight after the campaign. All three of these datasets were collected online by YouGov (the face-to-face element of the BES is still to come). If you’re au fait with stats software like SPSS, Stata or R the raw data is available for download on the British Election Study site here.

There’s already some analysis of the data by the BES team here (a longer version of the article you may have seen on BBC this morning), focusing on how people changed their votes between 2015 and 2017, and between the beginning and end of the election campaign.

The article breaks down 2015 vote by Remainers and Leavers. Looking at how 2015 voters who backed Leave ended up voting in 2017, the Conservatives kept the vast majority of their 2015 Leave voters and picked up over half of the 2015 UKIP vote (as well as a chunk of Labour Leavers). The collapse of UKIP wasn’t all to the Conservatives’ favour though: 18% of UKIP Leavers ended up moving to Labour.

Turning to the Remain vote, Labour were the clear victor: around a third of 2015 Tories who voted Remain drifted away from the party, either to Labour or to the Lib Dems, but Labour also picked up a chunk of the 2015 Lib Dem vote and most of the 2015 Green vote. Of course, while this is easy to view through the prism of Brexit, that doesn’t necessarily mean Brexit was the main driver (to give an obvious example, yes – a large proportion of Green Remain voters moved to Labour… but a large proportion of the 2015 Green vote had already moved to Labour before the referendum, presumably as a result of the direction Jeremy Corbyn had taken the party).

More interesting is the movement during the campaign itself. 19% of people changed how they would vote between the start and the end of the campaign. This is not in itself unusual – in 2015 the figure was 17%, and according to the BES team it was higher in 2010 and 2005. The difference in 2017 is that this movement was overwhelmingly in favour of the Labour party, whereas at previous elections the churn largely cancelled itself out. Hence during the campaign we can see significant numbers of Tory voters, Lib Dem voters and, most of all, don’t knows moving towards Labour, but very little movement away from Labour.
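To make the distinction concrete, here is a toy sketch (all flow numbers invented purely for illustration, not taken from the BES data) of how the same amount of gross churn can produce very different net effects, depending on whether the flows cancel out or mostly point one way:

```python
# Hypothetical vote flows over a campaign, as shares of the whole electorate.
# Keys: (vote at start of campaign, vote at end of campaign).
flows = {
    ("Con", "Lab"): 0.05, ("Lab", "Con"): 0.05,  # symmetric churn cancels out
    ("DK",  "Lab"): 0.06, ("LD",  "Lab"): 0.03,  # one-sided churn shifts the race
}

# Gross churn: total share of voters who switched at all.
gross_churn = sum(flows.values())

# Net movement to Labour: inflows minus outflows.
net_to_lab = (sum(v for (src, dst), v in flows.items() if dst == "Lab")
              - sum(v for (src, dst), v in flows.items() if src == "Lab"))

print(f"gross churn: {gross_churn:.0%}")
print(f"net gain for Lab: {net_to_lab:.0%}")
```

With these invented numbers, 19% of voters switch but Labour’s net gain is only 9%; make the Con/Lab flows equal in both directions and the same gross churn would produce no net change at all, which is the pattern of earlier elections.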

In terms of explanations for the movement: while the voters Labour attracted during the campaign were those you’d expect to be the most receptive (that is, tending to be opposed to a hard Brexit and left-leaning), the most obvious movement was on leadership ratings – the sharp collapse in Theresa May’s ratings and the steady increase in Jeremy Corbyn’s. Those people who moved to Labour during the campaign were also those who displayed the biggest improvement in their perceptions of Jeremy Corbyn.

Ed and Chris’s full article is here.



I’ve only had a couple of hours’ sleep, so this is a very short comment on lessons from the polls at the election. The two best performing traditional polls seem to be those from Survation and Surveymonkey. Survation had a one point Con lead in their final GB poll; Surveymonkey had a four point lead in their final UK poll. The actual lead is 2 or 3 points depending on whether you look at UK or GB figures. Congratulations to both of them. While it wasn’t a traditional poll, YouGov’s MRP model also came very close – its final GB figures were a four point lead (and some of the individual seat estimates that looked frankly outlandish, like Canterbury leaning Labour and Kensington being a tossup, actually turned out to be correct).

Looking across the board, the other companies all overstated the Tory lead to one degree or another. Their estimates of the Tory vote share were broadly accurate; rather, it was that almost everyone understated Labour support. I have a lot of sympathy with Peter Kellner’s article in the Standard earlier – that to some degree it was a case of pollsters “trying too hard”. Companies have all been trying to correct the problems of 2015, and in many cases those changes seem to have gone too far.

A big gulf between pollsters that many commented on during the campaign was their attitude to turnout. The pollsters who were furthest out on the lead – ComRes, ICM and BMG – all used demographic-based turnout models, which pumped up the Tory lead, rather than basing turnout on how likely respondents said they were to vote. This was in many ways a way of addressing a problem from 2015, when polling samples contained too many of the sort of young people who vote, by weighting down youth turnout (and turnout among working class respondents, renters, or the less well educated – different pollsters used different criteria). This wasn’t necessarily the wrong solution, but it was a risky one – it depends on modelling turnout correctly. If turnout among young people actually did rise, pollsters replicating 2015 patterns of turnout would miss it. That may be what happened.
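The difference between the two approaches can be sketched in a few lines. This is a minimal illustration with a tiny invented sample and made-up turnout figures, not any pollster’s actual method:

```python
# Each respondent: (age group, vote intention, self-reported likelihood to vote 0-10).
sample = [
    ("18-24", "Lab", 9), ("18-24", "Lab", 8), ("18-24", "Con", 10),
    ("65+",   "Con", 10), ("65+",  "Con", 9), ("65+",  "Lab", 10),
]

# Demographic model: turnout probability fixed by group, based on a past election
# (illustrative 2015-style figures - young people assumed far less likely to vote).
past_turnout = {"18-24": 0.43, "65+": 0.78}

def shares(weight_fn):
    """Tally vote shares with each respondent weighted by a turnout probability."""
    totals = {}
    for age, vote, likelihood in sample:
        w = weight_fn(age, likelihood)
        totals[vote] = totals.get(vote, 0.0) + w
    total = sum(totals.values())
    return {party: round(v / total, 3) for party, v in totals.items()}

# Demographic turnout: young respondents weighted down regardless of enthusiasm.
demographic = shares(lambda age, likelihood: past_turnout[age])
# Self-reported turnout: stated enthusiasm feeds straight through.
self_report = shares(lambda age, likelihood: likelihood / 10)

print(demographic)
print(self_report)
```

On this invented sample the demographic model shows a clearly bigger Con lead than the self-report model, which is exactly the gulf seen between the pollsters: if young people really did turn out at 2015 rates, the demographic model is right, but if their turnout rose, it is baked into missing the change.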

That said, one shouldn’t jump to conclusions too quickly. It may be a case of how demographic turnout models were applied (by weighting the whole sample to match 2015 recalled vote and then separately weighting different demographic groups up or down based on likelihood to vote there’s a risk of “double-counting”). Most importantly, the YouGov MRP model and the Surveymonkey survey both based their turnout models on demographics too, and they both got the election right, so clearly it’s an approach that has the potential to work if done correctly.

Personally I’m pleased the YouGov model worked, disappointed the more traditional YouGov poll had too big a lead… but that at least gives us something to learn from (and for most of the campaign the two showed a similar lead, so rolling back some decisions and learning from the model seems a good starting point).

And with that, I’m going to get some sleep.


So, here goes – the eve of the election means we get the final call polls. We already got Opinium’s final poll yesterday and Ipsos MORI won’t be till tomorrow, but everyone else should be reporting today.

ICM have tended to show the strongest leads for the Conservatives during the campaign – their final poll for the Guardian continues that trend with topline figures of CON 46%(+1), LAB 34%(nc), LDEM 7%(-1), UKIP 5%(nc), a Tory lead of twelve points. Fieldwork was yesterday and today. Note that these are preliminary figures and that ICM are continuing to collect data through the evening, so they will confirm final results later. The tables for the preliminary results are here.

ComRes for the Independent have final figures of CON 44%(-3), LAB 34%(-1), LDEM 9%(+1), UKIP 5%(+1). Fieldwork was between Monday and today. Along with ICM ComRes tend to show the largest leads for the Conservatives, and the ten point lead is actually their lowest of the campaign. Tables are here.

Surveymonkey for the Sun report just a four point lead for the Conservatives: CON 42%(-2), LAB 38%(nc), LDEM 6%(nc), UKIP 4%(nc). Fieldwork was Sunday to Tuesday and changes are from a week ago. Surveymonkey aren’t a BPC member so I don’t have more details, though we should be getting some later. Regular readers will remember that Surveymonkey polled at the last general election and got the Conservative lead right, albeit getting both main parties too low. There are more details of Surveymonkey’s approach here.

Panelbase have final figures of CON 44%(nc), LAB 36%(nc), LDEM 7%(nc), UKIP 5%(nc), GRN 2%(-1). Fieldwork was between Friday and today, and obviously shows no substantial change from their previous poll.

Kantar‘s final poll has topline figures of CON 43%(nc), LAB 38%(+5), LDEM 7%(-4), UKIP 4%(nc). Fieldwork was between Thursday and today and shows a narrowing of the Tory lead to just five points – Kantar have previously tended to show larger leads. Note that there is a very minor methodology change here, Kantar have fixed the share of the 2017 vote coming from 2015 Conservative and Labour voters at 61% – I’m not sure exactly what that means, but it has only a minor effect anyway, increasing the Tory lead by one point. Tables are here.

YouGov‘s final poll for the Times has topline figures of CON 42%, LAB 35%, LDEM 10%, UKIP 5%, GRN 2%. Fieldwork was Monday to today. Minor method change here too – adding candidate names to the voting question, and reallocating don’t knows using past vote (which knocked down Labour support by just over a point). Full details here.

Survation‘s final poll (using their phone methodology, rather than their online one) has topline figures of CON 41%, LAB 40%, LDEM 8%, UKIP 2%, GRN 2% – the one point Tory lead is the closest we’ve seen, though effectively the same as Survation’s last poll. Fieldwork was Monday and Tuesday and tables are here.

BMG, who haven’t polled since back in 2016, have also put out a final poll. Their topline figures are CON 46%, LAB 33%, LDEM 8%, UKIP 5%.


Last night we got a leaked version of the Labour manifesto. Over the next week it will be joined by the manifestos from all the other parties too. Lots of people will write articles about their impact. We will see polls asking about those policies and whether people approve of them. Lots of people will ask what impact they will have on voting intention or the result. The answer is probably not much. Specific policies make very little difference to voting intention.

This is counter-intuitive to many. Surely, in an election about who is going to run the country, what the parties say they’ll do will matter? One might very well think that is what elections SHOULD be about. The thing is, it’s not really how people work.

First, most people don’t know what the policies are, so they can’t be influenced by them. One of the most difficult things for people who follow politics closely (which probably includes most people reading this) to grasp is how different they are from the vast majority who don’t pay much attention to politics. For example, in the first few weeks of the campaign Theresa May was the subject of mockery from people who follow politics for continually using the soundbite “strong and stable leadership”. While it sounded absurd to those of us who heard it a thousand times, when YouGov asked a representative sample of the public if they could recall any slogans or messages she had said only 15% remembered it. Most policies make no difference because most people have no knowledge of them.

Even if people were more aware of policies, it’s not really the sort of thing they vote upon. There is a huge body of academic research around elections and voter choice, and the general consensus is that the important factors in deciding how people vote are which party they normally identify with, what their perceptions of the leaders are and which party they think would most competently handle the big issues of the day.

As human beings we don’t tend to be particularly good judges of what leads to our decisions (we all tend to overestimate how thoughtful and rational we are, when in reality our decisions are normally based on a jumble of bias, instinct and rules of thumb, which we rationalise afterwards). Even so, when asked directly, voters don’t think that policies are why they vote the way they do – most people say it’s the broad values and priorities of a party that matter, or how good its leader is, not the specific policies it offers.

Of course that doesn’t mean policies aren’t part of the mix. Whether a party’s policies seem sensible and well thought through probably feeds into whether the public think that party is competent, and the sort of policies a party puts forward contributes to what people make of its values and principles. Policies are not irrelevant, but they are only a small part of a much bigger mix. What this all means is that one can’t look at the popularity of individual policies and conclude a party will gain support. Any party can put together a shopping list of superficially attractive-sounding policies – what matters is how those policies, the people putting them forward, the values they represent and how competently they come across all combine to create a party that people identify with and think would offer a competent government.

In short, in the absence of other big events in the coming week, don’t be surprised if the polls carried out after the manifestos appear are much the same as the polls carried out before they were published.