One of the key bits of evidence on why the polls got it wrong has today popped into the public domain – the British Election Study face-to-face survey. The data itself is downloadable here if you have SPSS or Stata, and the BES team have written about it here and here. The BES has two elements – an online panel study, going back to the same people before, during and after the election campaign, and a post-election random face-to-face survey, allowing comparison with similar samples going back to the 1964 BES. This is the latter part.

The f2f BES poll went into the field just after the election and fieldwork ran until September (proper random face-to-face polls take a very long time). On the question of how people voted in the 2015 election the topline figures were CON 41%, LAB 33%, LDEM 7%, UKIP 11%, GRN 3%. These figures are, of course, still far from perfect – the Conservatives and Labour are both too high and UKIP too low – but the gap between Labour and Conservative, the problem that bedevilled all the pre-election polls, is much closer to reality.

This is a heavy pointer towards the make-up of samples having been a cause of the polling error. If the problems had been caused by people incorrectly reporting their voting intentions (“shy Tories”) or by people saying they would vote when they did not, then it is likely that exactly the same problems would have shown up in the British Election Study (indeed, given the interviewer effect those problems could have been even worse). The difference between the BES f2f results and the pre-election polls suggests that the error is associated with the thing that makes the BES f2f so different from the pre-election polls – the way it is sampled.

As regular readers will know, most published opinion polls are not actually random. Most online polls are conducted using panels of volunteers, with respondents selected using demographic quotas to model the British public as closely as possible. Telephone polls are quasi-random, since they do at least select randomised numbers to call, but the fact that not everyone has a landline and that the overwhelming majority of people do not answer the call or agree to take part means the end result is not really close to a random sample. The British Election Study was a proper randomised study – it randomly picked constituencies, then addresses within them, then a person at that address. The interviewer then repeatedly attempted to contact that specific person to take part (in a couple of cases up to 16 times!). The response rate was 56%.
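To make the distinction concrete, here is a minimal sketch of that multi-stage selection in Python. Every name and data structure in it is a hypothetical stand-in (the real BES design involves more than this, such as stratification); the key point is that a specific individual is named before any contact is attempted.

```python
import random

def draw_bes_style_sample(constituencies, addresses, occupants,
                          n_constituencies, n_addresses):
    """Multi-stage random sampling: pick constituencies, then addresses
    within them, then one person per address. All inputs here are
    hypothetical stand-ins for illustration only."""
    sample = []
    for seat in random.sample(constituencies, n_constituencies):
        for addr in random.sample(addresses[seat], n_addresses):
            # A specific named individual is selected here; interviewers
            # must then pursue that person, not any willing substitute.
            sample.append(random.choice(occupants[addr]))
    return sample
```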

Looking at Jon Mellon’s write-up, this ties in well with the idea that polls were not including enough of the sort of people who don’t vote. One of the things that pollsters have flagged up in their investigations of what went wrong is that they found less of a gap in people’s reported likelihood of voting between young and old people than in the past, suggesting polls might no longer be correctly picking up the differential turnout between different social groups. The f2f BES poll did this far better. Another clue is in the comparison between whether people voted and how difficult it was to get them to participate in the survey – among people who the BES managed to contact on their first attempt, 77% said they had voted in the election; among those who took six or more goes, only 74% voted. A small difference in the bigger scheme of things, but perhaps indicative.
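To see why even that three-point gap matters, here is a toy calculation. Only the 77% and 74% figures come from the BES contact data above; the 60/40 split between easy- and hard-to-reach people is invented purely for illustration.

```python
# Each entry is a person's probability of voting. 60% of this toy
# population are easy to reach (77% turnout), 40% hard to reach (74%).
population = [0.77] * 60 + [0.74] * 40

full_sample = population        # persistent random sampling reaches everyone
quick_sample = population[:60]  # a poll that only ever reaches the easy group

print(sum(full_sample) / len(full_sample))    # ~0.758
print(sum(quick_sample) / len(quick_sample))  # 0.77 (overstates turnout)
```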

This helps us diagnose the problem at the election – but it still leaves the question of how to solve it. I should pre-empt a couple of wrong conclusions that people will jump to. One is the idea that polls should go back to face-to-face – this mixes up mode (whether a poll is done by phone, in person, or online) with sampling (how the people who take part in the poll are selected). The British Election Study poll appears to have got it right because of its sampling (because it was random), not because of its mode (because it was face-to-face). The two do not necessarily go hand-in-hand: when face-to-face polling was the norm in the 1980s it wasn’t done using random sampling, it was done using quota sampling. Rather than asking interviewers to contact a specific randomly selected person and to attempt contact time and again, interviewers were given a quota of, say, five middle-aged men, and any old middle-aged men would do.
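For contrast, here is the quota approach in the same sketch form: whoever turns up first fills the cell. Again, all names and structures are hypothetical.

```python
from collections import Counter

def fill_quotas(passers_by, quotas):
    """1980s-style quota sampling: take whoever comes along until each
    demographic cell is full. `passers_by` yields (person, cell) pairs.
    Contrast with the random sketch above, which names individuals first."""
    filled = Counter()
    sample = []
    for person, cell in passers_by:
        if filled[cell] < quotas.get(cell, 0):
            filled[cell] += 1
            sample.append(person)
        if sum(filled.values()) == sum(quotas.values()):
            break  # all quota cells are full
    return sample

# e.g. fill_quotas(street, {"middle-aged man": 5, "young woman": 5})
```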

That, of course, leads to the next obvious question: why don’t pollsters move to genuine random samples? The simple answers are cost and time. I think most people in market research would agree a proper random sample like the BES is the ideal, but the cost is vastly higher. This isn’t more expensive in a “well, they should pay a bit if they want better results” way – it’s more expensive as in a completely different scale of expense, the difference between a couple of thousand and a couple of hundred thousand. No media outlet could ever justify the cost of a full-scale random poll; it’s just not ever going to happen. It’s a shame – I for one would obviously be delighted were I to live in a world where people were willing to pay hundreds of thousands of pounds for polls – but such is life. Things like the BES only exist because of big funding grants from the ESRC (and at some elections that has needed to be matched by grants from other charitable trusts).

The public opinion poll industry has always been about finding a way of measuring public opinion that combines accuracy with being affordable enough for people to actually buy and speedy enough to react to events, and whatever solutions emerge from the 2015 experience will have those same aims. Changing sampling techniques to resemble random sampling more closely could, of course, be one of the routes that companies look at. Or controlling their sampling and weighting in ways that better address the shortcomings of the sampling. Or different ways of modelling turnout, like ComRes are looking at. Or something else entirely. Time will tell.

The other important bit of evidence we are still waiting for is the BES’s vote validation exercise (the large-scale comparison of whether poll respondents’ claims about whether they voted actually match up against their individual records on the marked electoral register). That will help us understand a lot more about how well or badly the polls measured turnout, and how to predict individual respondents’ likelihood of voting.

Beyond that, the polling inquiry team have a meeting in January to announce their initial findings – we shall see what they come up with.

We have two new voting intention polls today. First is a telephone poll from ComRes for the Daily Mail – topline figures are CON 38%(-1), LAB 33%(+3), LDEM 8%(-1), UKIP 10%(-2), GRN 3%(-1). Since introducing their new turnout model based on socio-economic factors ComRes have tended to show the biggest leads for the Conservative party, typically around twelve points, so while this poll is pretty similar to the sort of Conservative leads that MORI, ICM, YouGov and Opinium have recorded over the last month, compared to previous ComRes polls it represents a narrowing of the Conservative lead. Full tabs are here.

The second new poll is from BMG Research, a company that conducted a couple of voting intention polls just before the general election for the May2015 website, but hasn’t released any voting intention figures since then. Their topline figures are CON 37%, LAB 31%, LDEM 6%, UKIP 15%, GRN 5%. BMG have also adopted a methodology including socio-economic factors – specifically, people who don’t give a firm voting intention but who say they are leaning towards voting for a party (a “squeeze question”), or who say how they voted last time, are included in the final figures, but weighted according to age, with younger people weighted harshly downwards. Full tabs are here.
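As a rough illustration of how that sort of adjustment could work, here is a hypothetical sketch. The weight values and field names are invented; BMG’s own published tables, not this sketch, are the authority on what they actually do.

```python
AGE_WEIGHTS = {"18-24": 0.4, "25-49": 0.8, "50+": 1.0}   # invented values

def effective_vote(r):
    # Prefer a firm intention, then the "squeeze" answer, then 2015 recall.
    return r.get("intention") or r.get("squeeze") or r.get("vote_2015")

def weighted_shares(respondents):
    totals = {}
    for r in respondents:
        party = effective_vote(r)
        if party is None:
            continue  # genuine don't-knows stay excluded
        w = AGE_WEIGHTS[r["age_band"]]  # younger respondents count for less
        totals[party] = totals.get(party, 0.0) + w
    grand_total = sum(totals.values())
    return {party: n / grand_total for party, n in totals.items()}
```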

BMG also asked voting intention in the European referendum, with headline figures of Remain 52%, Leave 48%. ICM also released their regular EU referendum tracker earlier in the week, which had toplines of Remain 54%, Leave 46%. A third EU referendum poll from YouGov found it 50%-50% – though note that poll did not use the actual referendum question (YouGov conduct a monthly poll across all seven European countries where they have panels, asking the same questions in all seven and including a generic question on whether people would like their own country to remain in the EU – this is that question, rather than a specific British EU referendum poll, where YouGov do use the referendum question).

Ipsos MORI have published their September political monitor for the Evening Standard. Topline voting intention figures are CON 39%, LAB 34%, LDEM 9%, UKIP 7%, GRN 4%.

MORI have made another methodological change in the light of the polling error at the general election. They had already started including how regularly people say they usually vote in their turnout filter; now they have also added weighting by newspaper readership. Again, the methodology review is still an ongoing process, and MORI make clear they anticipate making further changes.
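For readers unfamiliar with how weighting by a new variable works, a minimal sketch is below: each respondent in a group counts for the group’s target share divided by its achieved share in the sample. The readership shares here are invented placeholders, not MORI’s actual targets.

```python
# Invented population targets and achieved sample shares by readership.
TARGET = {"tabloid": 0.30, "broadsheet": 0.15, "none": 0.55}
SAMPLE = {"tabloid": 0.22, "broadsheet": 0.25, "none": 0.53}

# Each respondent is weighted by target_share / sample_share for their group.
weights = {group: TARGET[group] / SAMPLE[group] for group in TARGET}
# e.g. a tabloid reader counts for ~1.36 respondents, a broadsheet reader 0.60
```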

The rest of the poll had a series of questions about perceptions of the party leaders and parties.

Jeremy Corbyn’s first satisfaction rating is minus 3 (33% are satisfied with him as leader, 36% dissatisfied). At first glance that isn’t bad – it’s a better net rating than Cameron’s or the government’s! In a historical context though it’s not good. New leaders normally get a polling honeymoon: the public give them the benefit of the doubt to begin with, and Corbyn’s net rating is the worst MORI have recorded for a new leader of one of the big two parties (the initial ratings for past party leaders were Miliband +19, Brown +16, Cameron +14, Howard +9, IDS 0, Hague -1, Blair +18, Smith +18, Major +15, Kinnock +20, Foot +2).

Looking at the more detailed questions on perceptions of Jeremy Corbyn, his strengths and weaknesses compared to David Cameron are very similar to the ones we got used to in Cameron v Miliband match-ups: Cameron scores better on things like being a capable leader, being good in a crisis and having sound judgement; Corbyn scores better on being in touch with ordinary people, having more substance than style and being more honest than most politicians. Asked overall who would make the most capable Prime Minister, Cameron wins by 53% to 27%.

Of course, all of Jeremy Corbyn’s ratings need to be seen in the context that he is very new to the job and the public don’t know a whole lot about him beyond the initial negative press. Early perceptions of him may yet change. His figures may get better… or worse.

MORI also asked about perceptions of the Labour and Conservative parties, and here the impact of Corbyn’s victory on how the Labour party itself is seen was very evident. The proportion of people seeing the party as divided is up 33 points to 75%, extreme is up 22 points to 36% and out of date is up 19 points to 55%. Both the Labour party and the Conservative party had a big jump in the proportion of people saying they were “Different to other parties” – I suppose it takes two parties to be different from each other!

Full details of the MORI poll are here.

ICM have released their August poll for the Guardian. Topline voting intention figures are CON 40%, LAB 31%, LDEM 7%, UKIP 10%, GRN 4%. Full tables are here.

This is the first ICM poll since the election to feature an updated methodology in light of the polling error. Since 1993 or so ICM have reallocated people who don’t give a voting intention based on how they say they voted at the previous election. Colloquially this is often known as a “shy Tory” adjustment, as when it was initially introduced in the 1992 to 1997 Parliament it tended to help the Tories, though after the Iraq war it actually tended to be a “shy Labour” adjustment, and in the last Parliament it was a “shy Lib Dem” adjustment.

In practice ICM didn’t reallocate all their don’t knows and refusals, as many people who refuse to give a current voting intention also refuse to say how they voted at the last election (ICM call these people “total refusals”, as opposed to “partial refusals” who say how they voted last time but not this time). Under the new method ICM are also attempting to estimate the views of these “total refusals” – they are reallocated at the same rate as “partial refusals”, but are assumed to split slightly more in favour of the Conservatives, based upon what ICM found in their post-election re-contact survey. The effect of this change on ICM’s headline figures this month is to increase the level of Conservative support by one point and decrease Labour support by one point.
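As a simplified illustration of that reallocation arithmetic, here is a sketch with invented rates and splits. Note that in ICM’s actual method partial refusals are reallocated according to their own recalled 2015 vote; here both groups are collapsed to aggregate splits for brevity.

```python
REALLOCATION_RATE = 0.5                                   # invented
PARTIAL_SPLIT = {"CON": 0.40, "LAB": 0.35, "OTH": 0.25}   # invented
TOTAL_SPLIT   = {"CON": 0.45, "LAB": 0.32, "OTH": 0.23}   # nudged Tory-wards

def reallocate(raw_counts, n_partial, n_total):
    """Add back a fraction of partial and total refusals to the raw
    party counts, with total refusals splitting slightly more CON."""
    counts = dict(raw_counts)
    for n, split in ((n_partial, PARTIAL_SPLIT), (n_total, TOTAL_SPLIT)):
        for party, share in split.items():
            counts[party] = counts.get(party, 0) + n * REALLOCATION_RATE * share
    return counts
```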

The implication of this adjustment is that at least some of the error at the general election was down to traditional “shy Tories”, that those who refused to answer pollsters’ questions were disproportionately Conservative supporters. However, from being on panels with Martin Boon since the election and hearing him speak at the British Polling Council inquiry meeting I don’t think he’ll have concluded that “shy Tories” was the whole of the problem, and in ICM’s tables they are clear that they “expect to produce further methodological innovations in the future.”

Polling news round-up

Labour leadership

Regular polling remains sparse given the ongoing inquiry and the odd sort of political interregnum we’re in, with Labour yet to elect their new leader, but there have been a couple of polls on the Labour contest and the EU. A new ORB poll on the Labour leadership earlier in the week showed Andy Burnham was seen as the candidate most likely to help Labour’s chances at the next election (36%), followed by Liz Kendall on 25%. Full tabs are here.

I would be extremely cautious about polling on the Labour leadership election. Essentially there are two real questions: who is going to win, and who would be best at winning votes for Labour. For the first, we need a poll of Labour party members, and we don’t have a recent one (there is some data from a YouGov poll of party members for Tim Bale & Paul Webb, but that was done straight after the election, before the candidates were clear). For the second, I suspect any data is fatally flawed by the public’s low awareness of the candidates – right now, polls about the Labour leadership are little more than name-recognition contests. Looking at the tables for the ORB poll, it looks to me as if the main reason the prospective leaders scored so highly is that the question didn’t offer a don’t-know option; if it had, I bet don’t know would have had a runaway victory.

Worth looking at as a corrective is this ICM poll on the Labour leadership that asked people to identify photos of Andy Burnham, Yvette Cooper, Liz Kendall and Jeremy Corbyn. 23% were able to identify Burnham, 17% Cooper, 10% Kendall and 9% Corbyn. Essentially, if 90% of the respondents to a poll can’t even recognise a photo of Liz Kendall or Jeremy Corbyn, how good a judge are they going to be on what sort of Labour leader they’d be? “I’m a particular fan of the one I’ve never heard of and know nothing at all about” said no one, ever.

Sky poll on the EU

Sky News have released a poll they have carried out themselves amongst a panel of BSkyB subscribers. The poll shows nothing particularly new (people think the EU is good for the British economy by 39% to 31%, etc, etc), but the concept is interesting – it’s a proper effort to get a representative sample from their subscriber database, weighted by age, gender, past vote, Experian segment (as an alternative to class), ethnicity, tenure and so on. It is, however, unavoidably made up only of Sky subscribers, which will bring with it its own biases. The question is to what extent those biases can be cancelled out by the weighting and sampling. We shall see. The tables for this first poll are here.

Parliamentary debates

Last week there were two Parliamentary debates on regulating opinion polls. The first, last Thursday, was prompted by Lord Lipsey and concerned whether polling companies needed regulating to prevent them asking leading and biased questions – though it was largely taken up with the specifics of a single poll on mitochondrial donation. The other was the second reading of George Foulkes’s private member’s bill regulating opinion polls, which included a good response from Andrew Cooper of Populus. Lord Bridges, for the government, stated they had no plans to regulate polls. Lord Foulkes’s bill was nodded through to the committee stage, so will trundle on for a little longer.

Herding pollsters

Finally there’s a great piece by Matt Singh on pollster herding here. Matt mentions some of the possible reasons for herding, but more importantly actually does the sums on whether there was any herding… and finds there wasn’t. The spread between different pollsters in the final polls was very much in line with what you’d expect to find.