One of the key bits of evidence on why the polls got it wrong has today popped into the public domain – the British Election Study face-to-face survey. The data itself is downloadable here if you have SPSS or Stata, and the BES team have written about it here and here. The BES has two elements – an online panel study, going back to the same people before, during and after the election campaign, and a post-election random face-to-face study, allowing comparison with similar samples going back to the 1964 BES. This is the latter part.

The f2f BES poll went into the field just after the election and fieldwork was conducted up until September (proper random face-to-face polls take a very long time). On the question of how people voted in the 2015 election the topline figures were CON 41%, LAB 33%, LDEM 7%, UKIP 11%, GRN 3%. These figures are, of course, still far from perfect – the Conservatives and Labour are both too high, UKIP too low – but the gap between Labour and Conservative, the problem that bedevilled all the pre-election polls, is much closer to reality.

This is a heavy pointer towards the make-up of samples having been a cause of the polling error. If the problems had been caused by people incorrectly reporting their voting intentions (“shy Tories”) or by people saying they would vote when they did not, then it is likely that exactly the same problems would have shown up in the British Election Study (indeed, given the interviewer effect those problems could have been worse). The difference between the BES f2f results and the pre-election polls suggests that the error is associated with the thing that makes the BES f2f so different from the pre-election polls – the way it is sampled.

As regular readers will know, most published opinion polls are not actually random. Most online polls are conducted using panels of volunteers, with respondents selected using demographic quotas to model the British public as closely as possible. Telephone polls are quasi-random, since they do at least select randomised numbers to call, but the fact that not everyone has a landline and that the overwhelming majority of people do not answer the call or agree to take part means the end result is not really close to a random sample. The British Election Study was a proper randomised study – it randomly picked constituencies, then addresses within them, then a person at that address. The interviewer then repeatedly attempted to contact that specific person to take part (in a couple of cases up to 16 times!). The response rate was 56%.
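(Purely for illustration – every constituency, address and name below is invented rather than taken from the BES – here is a minimal Python sketch of what that kind of multi-stage random selection looks like: pick constituencies at random, then addresses within them, then one named person per address, with no substitutes allowed.)

```python
import random

# Invented sampling frame, purely for illustration -- not BES data.
constituencies = {
    "Exampleshire North": ["1 High St", "2 High St", "3 Mill Lane"],
    "Exampleshire South": ["4 Bridge Rd", "5 Bridge Rd", "6 Church Row"],
}
occupants = {
    "1 High St": ["Alice", "Bob"],
    "2 High St": ["Carol"],
    "3 Mill Lane": ["Dan", "Erin", "Frank"],
    "4 Bridge Rd": ["Grace"],
    "5 Bridge Rd": ["Heidi", "Ivan"],
    "6 Church Row": ["Judy"],
}

def draw_sample(n_seats=2, addresses_per_seat=2):
    """Multi-stage random draw: constituencies -> addresses -> one person each."""
    sample = []
    for seat in random.sample(list(constituencies), n_seats):
        for address in random.sample(constituencies[seat], addresses_per_seat):
            person = random.choice(occupants[address])  # this named person, no substitutes
            sample.append((seat, address, person))
    return sample

print(draw_sample())
```

The expensive part is not drawing the sample but what follows it: the interviewer has to keep going back for that specific person, which is why the fieldwork ran on into September.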

Looking at Jon Mellon’s write-up, this ties in well with the idea that polls were not including enough of the sort of people who don’t vote. One of the things that pollsters have flagged up in the investigations of what went wrong is that they found less of a gap in people’s reported likelihood of voting between young and old people than in the past, suggesting polls might no longer be correctly picking up the differential turnout between different social groups. The f2f BES poll did this far better. Another clue is in the comparison between whether people voted and how difficult it was to get them to participate in the survey – amongst people whom the BES managed to contact on the first attempt, 77% said they had voted in the election; among those who took six or more goes, only 74% had voted. A small difference in the bigger scheme of things, but perhaps indicative.

This helps us diagnose the problem at the election – but it still leaves the question of how to solve it. I should pre-empt a couple of wrong conclusions that people will jump to. One is the idea that polls should go back to face-to-face – this mixes up mode (whether a poll is done by phone, in person, or online) with sampling (how the people who take part in the poll are selected). The British Election Study poll appears to have got it right because of its sampling (because it was random), not because of its mode (because it was face-to-face). The two do not necessarily go hand-in-hand: when face-to-face polling used to be the norm in the 1980s it wasn’t done using random sampling, it was done using quota sampling. Rather than asking interviewers to contact a specific randomly selected person and to attempt contact time and again, interviewers were given a quota of, say, five middle-aged men, and any old middle-aged men would do.
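(By way of contrast, and again with made-up quotas and respondents, a quota-sampling sketch: the interviewer isn’t chasing a named individual at all, just filling demographic cells with whoever comes along.)

```python
# Invented quotas, purely illustrative -- "five middle-aged men, and any will do".
quotas = {("male", "35-54"): 5, ("female", "35-54"): 5}
filled = {cell: 0 for cell in quotas}
accepted = []

def try_respondent(person):
    """Accept any passer-by who fits an unfilled quota cell."""
    cell = (person["sex"], person["age_band"])
    if quotas.get(cell, 0) > filled.get(cell, 0):
        filled[cell] += 1
        accepted.append(person)
        return True
    return False

try_respondent({"name": "any middle-aged man at all", "sex": "male", "age_band": "35-54"})
print(filled)
```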

That, of course, leads to the next obvious question: why don’t pollsters move to genuine random samples? The simple answers there are cost and time. I think most people in market research would agree a proper random sample like the BES is the ideal, but the cost is vastly higher. This isn’t more expensive in a “well, they should pay a bit more if they want better results” sort of way – it’s more expensive on a completely different scale, the difference between a couple of thousand and a couple of hundred thousand. No media outlet could ever justify the cost of a full-scale random poll; it’s just never going to happen. It’s a shame – I for one would obviously be delighted were I to live in a world where people were willing to pay hundreds of thousands of pounds for polls – but such is life. Things like the BES only exist because of big funding grants from the ESRC (and at some elections that has needed to be matched by grants from other charitable trusts).

The public opinion poll industry has always been about finding a way of measuring public opinion that combines accuracy with being affordable enough for people to actually buy and speedy enough to react to events, and whatever solutions emerge from the 2015 experience will have those same aims. Changing sampling techniques to make them resemble random sampling more closely could, of course, be one of the routes that companies look at. Or controlling their sampling and weighting in ways that better address the shortcomings of the samples. Or different ways of modelling turnout, like ComRes are looking at. Or something else no one has yet suggested. Time will tell.

The other important bit of evidence we are still waiting for is the BES’s voter validation exercise (the large scale comparison of whether poll respondents’ claims on whether they voted or not actually match up against their individual records on the marked electoral register). That will help us understand a lot more about how well or badly the polls measured turnout, and how to predict individual respondents’ likelihood of voting.

Beyond that, the polling inquiry team have a meeting in January to announce their initial findings – we shall see what they come up with.

Things remain very quiet on the polling front, but we do at least have the weekly ICM tracker of EU referendum voting intention. Latest figures are REMAIN 46%, LEAVE 38%. 46% is the highest ICM have recorded for Remain in their weekly tracker, though it’s still well within the normal margin of error. For now the picture from ICM’s regular polling remains one of a small but stable lead for Remain, rather than any movement in either direction.

Full tabs are here.


The referendum on EU membership will naturally cover the whole of the United Kingdom, but the vast majority of polling covers only Great Britain. This is because Northern Irish politics are so radically different from the rest of the UK. I suppose one could make a similar case for Scotland in the post-devolution age, as Scottish politics diverges more and more from English politics, but we are where we are – the default position is still for polls to cover Great Britain but not Northern Ireland. When we get closer to the referendum I expect we’ll see some start to include Northern Ireland, but for the time being many questions will simply be asked on the back of regular omnibus surveys covering just Great Britain.

The Belfast Telegraph today have a new poll from Lucidtalk asking specifically about EU voting intention in Northern Ireland. Current Northern Ireland voting intentions are REMAIN 56%, LEAVE 28%. Unionist voters are more than two-to-one against EU membership (REMAIN 21%, LEAVE 54%), Nationalist voters are overwhelmingly pro-EU (REMAIN 91%, LEAVE 8%).

Northern Ireland is only 3% of the UK population so is unlikely to have a decisive effect in the EU referendum unless it’s extremely close – even if Northern Ireland does vote two-to-one in favour of EU membership, that would increase the REMAIN lead in the UK as a whole by about one percentage point. Still, worth remembering when looking at GB polls that the UK position will be ever so marginally more pro-EU once Northern Ireland is included.
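(As a rough back-of-the-envelope check on that figure – the GB numbers here are just the ICM ones quoted further up, used purely for illustration – folding Northern Ireland in shifts the UK-wide lead by roughly 3% of the gap between the NI lead and the GB lead.)

```python
# Rough illustration of the "about one percentage point" point above.
ni_share = 0.03          # Northern Ireland is roughly 3% of the UK electorate
ni_lead = 56 - 28        # Lucidtalk Remain lead in NI, in percentage points

# Adding NI to a GB-only figure moves the overall lead by ni_share * (ni_lead - gb_lead),
# so with a reasonably close race in GB the adjustment is around a point or a bit under.
for gb_lead in (0, 4, 8):
    shift = ni_share * (ni_lead - gb_lead)
    print(f"GB lead {gb_lead:>2} pts -> UK lead {gb_lead + shift:.1f} pts (shift {shift:+.2f})")
```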

We have two new voting intention polls today. First is a telephone poll from ComRes for the Daily Mail – topline figures are CON 38%(-1), LAB 33%(+3), LDEM 8%(-1), UKIP 10%(-2), GRN 3%(-1). Since introducing their new turnout model based on socio-economic factors, ComRes have tended to show the biggest leads for the Conservative party, typically around twelve points. So while this poll is pretty similar to the sort of Conservative leads that MORI, ICM, YouGov and Opinium have recorded over the last month, compared to previous ComRes polls it represents a narrowing of the Conservative lead. Full tabs are here.

The second new poll is from BMG research, a company that conducted a couple of voting intention polls just before the general election for the May2015 website, but hasn’t released any voting intention figures since then. Their topline figures are CON 37%, LAB 31%, LDEM 6%, UKIP 15%, GRN 5%. BMG have also adopted a methodology including socio-economic factors – specifically, people who don’t give a firm voting intention but who say they are leaning towards voting for a party (a “squeeze question”) or who do say how they voted last time are included in the final figures, but weighted according to age, with younger people being weighted harshly downwards. Full tabs are here.
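(A very rough sketch of one way such an adjustment could work – BMG’s actual factors aren’t spelled out above, so the weights, age bands and respondent counts here are all invented, and the code follows just one possible reading in which the age-based down-weighting is applied to the squeezed “leaners”.)

```python
# Invented figures: firm voting intentions plus squeezed "leaners",
# with younger leaners weighted down hard before being added in.
firm = {"CON": 370, "LAB": 310, "LDEM": 60, "UKIP": 150, "GRN": 50}
leaners = [
    ("LAB", "18-24", 40), ("GRN", "18-24", 20), ("LDEM", "25-44", 15),
    ("UKIP", "45-64", 25), ("CON", "65+", 30),
]
age_weight = {"18-24": 0.35, "25-44": 0.6, "45-64": 0.8, "65+": 0.95}  # hypothetical

totals = dict(firm)
for party, age_band, count in leaners:
    totals[party] += count * age_weight[age_band]

grand_total = sum(totals.values())
shares = {party: round(100 * n / grand_total, 1) for party, n in totals.items()}
print(shares)  # headline shares after folding in the down-weighted leaners
```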

BMG also asked voting intention in the European referendum, with headline figures of Remain 52%, Leave 48%. ICM also released their regular EU referendum tracker earlier in the week, which had toplines of Remain 54%, Leave 46%. A third EU referendum poll from YouGov found it 50%-50% – though note that poll did not use the actual referendum question (YouGov conduct a monthly poll across all seven European countries they have panels in, asking the same questions in all seven and including a generic question on whether people would like their own country to remain in the EU – this is that question, rather than a specific British EU referendum poll, where YouGov do use the referendum question).

A quick note on the Individual Electoral Registration vote tonight and what it means, since I fear it will be badly reported elsewhere. As readers may know, electoral registration has now moved over from household registration (where one member of the household filled in a form to register everyone) to individual registration (there is still a household form to sign off for no change, but new registrations need to be done individually). Making it a little harder to register has created a lot of concern about whether it will lead to falling registration, particularly in residential communities like student halls of residence, where in the past the university authority could have registered everyone en masse.

That, however, is for another time. Tonight’s vote isn’t about the principle of individual registration and will make no difference to whether it happens or not. It is on one narrow, but important, part of the transition from household registration to individual registration.

The normal process of electoral registration is – crudely speaking – that once a year there is a canvass of every household, asking people if their details on the electoral register are correct. Local councils will remove entries that are no longer accurate and add on new people. People who don’t reply at all will be badgered with extra letters and knocks on the door, but eventually some people won’t reply. Those people are left on the register for a year, and then, if they don’t reply to two canvasses in a row, they are deleted from the register. People get one year’s grace without being removed.

During the transition that process was different. The annual canvass in 2014 was cancelled – people on the old register were matched against government databases, like benefits records, and those who matched were automatically moved across to the new system. Only people who couldn’t be automatically moved across were contacted and required to register on the new system. There was no cleaning of the register though: even those who couldn’t be automatically moved across and didn’t respond to contacts were kept on the new Dec 2014 register, to make sure they didn’t miss out on the general election.

For 2015 the annual canvass was started again, so every household got a letter asking people to confirm their existing details on the register. People who replied were updated (though new people now need to fill in an individual registration) and people who didn’t reply at all were chased. The question to be decided tonight is what to do with people who didn’t reply (or more specifically, people who didn’t reply this year and weren’t verified or registered last year either – the year’s grace remains either way).

The legislation setting up individual registration said that people who don’t reply in 2015 should NOT be removed in 2015, but it also specifically gave the government the power to change this by statutory instrument and recommence cleaning in 2015 if they preferred. The Electoral Commission recommended that the government not do this, and instead give people the extra year’s grace. This is what tonight’s vote is on – are people who weren’t transitioned or re-registered on the new system in 2014 AND did not reply to this year’s electoral canvass left on the register or not?

In May 2015 there were 1.9m people still on the register who hadn’t been registered under the new system. Of course, not all of these entries will be removed, as there has been a full canvass since then and many of them will have replied to this year’s canvass and now be on the new system. It is still likely to be a substantial number. The change only affects people who replied to the electoral canvass at an address in 2013 but have not replied to electoral registration officers at that address since then, despite efforts having been made to contact them several times for the transition to individual registration and in this year’s annual canvass. They will probably have had to ignore about nine letters reminding them to register. They also need to not be in receipt of benefits at that address and not on other government databases used for data matching, or they would have been automatically registered. In short, a lot of those people probably couldn’t be matched because they don’t live at that address any more, and may or may not be living or registered somewhere else. Finally, it’s worth remembering that people who are left off this December’s register can register to vote up until a couple of weeks before the local/mayoral/police/Scottish/Welsh elections next year.

In terms of the impact on individual voters, I fear there is some hyperbole going on. However, the impact of the vote isn’t just on individual voters – it’s important for another reason, arguably more so. The registers published on the 1st December this year are the ones that will be used for the new boundary review, and the removal of these rolled-over names will make a difference. In the twenty council areas with the highest number of people held over from the 2013 register, about 11% of people on the register in May 2015 were held over; in the twenty council areas with the lowest number, about 1% were. The places with lots of held-over entries are mostly (but not exclusively) Labour-held areas; the places with few are mostly (but not exclusively) Conservative-held areas. Again, remember many of these people will probably have been picked up in this year’s canvass, so it doesn’t mean 11% and 1% will be removed – the numbers will be lower than that – but it does mean the number of people on the registers will drop more in Labour areas than in Conservative areas.

Cleaning people who have not responded to the canvass off the register will decrease the registered electorate in inner-city Labour areas and make the boundary review better for the Conservatives. Leaving them on will make the boundary review better for Labour. We don’t know what proportion of the rolled-over entries on the register relate to real people still living at those addresses and what proportion are “dead entries” relating to people who no longer live there. The Conservatives can argue that leaving inaccurate entries on the register would skew the review by bumping up the electorate in areas with registers full of outdated entries; Labour can argue that harshly pruning the register would skew the review by under-representing the electorate in areas of social deprivation with populations who are less likely to register to vote. I suspect neither is entirely free from self-interest, but one way or the other it has to be decided: Parliament has until Monday to annul the statutory instrument or it remains law.