A quick update on some polling figures from the last few days.

ComRes released a new telephone poll for the Daily Mail on Friday. Topline voting intention figures were CON 37%, LAB 32%, LDEM 6%, UKIP 12%, GRN 4% (tabs are here). On the EU referendum ComRes had voting intentions of REMAIN 54%, LEAVE 36%, DK 10%.

YouGov also released new figures on voting intention and the EU referendum on their website. Their latest topline VI figures are CON 39%, LAB 30%, LDEM 6%, UKIP 17%, GRN 3% (tabs are here). On the EU referendum they have Leave slightly ahead – REMAIN 38%, LEAVE 42%, DK/WNV 20%.

Finally Ipsos MORI also released EU referendum figures (part of the monthly Political Monitor survey I wrote about earlier in the week). Their latest figures are REMAIN 50%, LEAVE 38%, DK 12%.

There continues to be a big contrast between EU referendum figures in polls conducted by telephone and those conducted online. The telephone polls from ComRes and Ipsos MORI both have very solid leads for Remain; the online polls from ICM, YouGov, Survation and others all tend to have the race very close. In one sense the contrast seems in line with what we saw in pre-election polls – while there was little consistent difference between online and telephone polls in terms of the position of Labour and the Conservatives (particularly in the final polls), there was a great big gulf in the levels of UKIP support they recorded. In the early part of 2015 there was a spread of about ten points between the (telephone) pollsters showing the lowest levels of UKIP support and the (online) pollsters showing the highest. It doesn’t seem particularly surprising that this online/telephone gap in UKIP support also translates into an online/telephone gap in support for leaving the EU. In terms of which is the better predictor it doesn’t give us much in the way of clues though – the 13% UKIP ended up getting was bang in the middle of that range.

The other interesting thing about the telephone/online contrast in EU referendum polling is the don’t knows. Telephone polls are finding far fewer people who say they don’t know how they’ll vote (you can see it clearly in the polls in this post – the two telephone polls have don’t knows of 10% and 12%, the online poll has 20% don’t knows, and the last couple of weekly ICM online polls have had don’t knows of 17-18%). This could have something to do with the respective levels of interest in politics and the EU among the people the different sampling approaches are picking up, or perhaps something to do with people’s willingness to give their EU voting intention to a human interviewer. The surprising thing is that this is not a typical difference – in polls on how people would vote in a general election the difference is, if anything, in the other direction: telephone polls find more don’t knows and refusals than online polls do. Why it’s the other way round on the EU referendum is an intriguing mystery.


ICM released their final monthly voting intention poll of 2015 yesterday, with topline figures of CON 39%, LAB 34%, LDEM 7%, UKIP 10%, GRN 3%. I assume it’s the last voting intention poll we will see before Christmas. The full tables are here, where ICM also make an intriguing comment on methodology. They write,

For our part, it is clear that phone polls steadfastly continue to collect too many Labour voters in the raw sample, and the challenge for phone polling is to find a way to overcome the systematic reasons for doing so. The methodological tweaks that we have introduced since the election in part help mitigate this phenomenon by proxy, but have not overcome the core challenge. In our view, attempting to fully solve sampling bias via post-survey adjustment methods is a step too far and lures the unsuspecting pollster into (further) blasé confidence. We will have more to say on our methods in the coming months.



One of the key bits of evidence on why the polls got it wrong has today popped into the public domain – the British Election Study face to face survey. The data itself is downloadable here if you have SPSS or Stata, and the BES team have written about it here and here. The BES has two elements – an online panel study, going back to the same people before, during and after the election campaign, and a post-election random face-to-face study, allowing comparison with similar samples going back to the 1964 BES. This is the latter part.

The f2f BES poll went into the field just after the election and fieldwork was conducted up until September (proper random face-to-face polls take a very long time). On the question of how people voted in the 2015 election the topline figures were CON 41%, LAB 33%, LDEM 7%, UKIP 11%, GRN 3%. These figures are, of course, still far from perfect – the Conservatives and Labour are both too high and UKIP too low – but the gap between Labour and Conservative, the problem that bedevilled all the pre-election polls, is much closer to reality.

This is a heavy pointer towards the make-up of samples having been a cause of the polling error. If the problems had been caused by people incorrectly reporting their voting intentions (“shy Tories”) or by people saying they would vote when they did not, then it is likely that exactly the same problems would have shown up in the British Election Study (indeed, given the interviewer effect those problems could have been worse). The difference between the BES f2f results and the pre-election polls suggests that the error is associated with the thing that makes the BES f2f so different from the pre-election polls – the way it is sampled.

As regular readers will know, most published opinion polls are not actually random. Most online polls are conducted using panels of volunteers, with respondents selected using demographic quotas to model the British public as closely as possible. Telephone polls are quasi-random, since they do at least dial randomised numbers, but the fact that not everyone has a landline and that the overwhelming majority of people do not answer the call or agree to take part means the end result is not really close to a random sample. The British Election Study was a proper randomised study – it randomly picked constituencies, then addresses within them, then a person at that address. The interviewer then repeatedly attempted to contact that specific person to take part (in a couple of cases up to 16 times!). The response rate was 56%.
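For the technically minded, here is a minimal sketch of what that multi-stage selection looks like in code. The toy sampling frame, function name and numbers drawn are all my own illustration, not the BES's actual procedure:

```python
import random

def draw_sample(frame, n_constituencies, n_addresses_each):
    """Three-stage random selection: constituencies, then addresses,
    then one named person per address (a toy version of the BES design)."""
    sample = []
    for constituency in random.sample(list(frame), n_constituencies):
        addresses = frame[constituency]
        for address in random.sample(list(addresses), n_addresses_each):
            # One specific individual is drawn; the interviewer must keep
            # trying that exact person rather than any willing substitute.
            person = random.choice(addresses[address])
            sample.append((constituency, address, person))
    return sample

# Hypothetical sampling frame: constituency -> address -> residents
frame = {
    "Anytown North": {"1 High St": ["Alice", "Bob"], "2 High St": ["Carol"]},
    "Anytown South": {"3 Mill Rd": ["Dan", "Eve"], "4 Mill Rd": ["Fay"]},
}
print(draw_sample(frame, n_constituencies=2, n_addresses_each=1))
```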

Looking at Jon Mellon’s write-up, this ties in well with the idea that polls were not including enough of the sort of people who don’t vote. One of the things pollsters have flagged up in their investigations of what went wrong is that they found less of a gap than in the past between young and old people’s reported likelihood of voting, suggesting polls might no longer be correctly picking up the differential turnout between different social groups. The f2f BES poll did this far better. Another clue is in the comparison between whether people voted and how difficult it was to get them to participate in the survey – among people the BES managed to contact on the first attempt, 77% said they had voted in the election; among those who took six or more attempts to reach, only 74% voted. A small difference in the bigger scheme of things, but perhaps indicative.

This helps us diagnose the problem at the election – but it still leaves the question of how to solve it. I should pre-empt a couple of wrong conclusions that people will jump to. One is the idea that polls should go back to face-to-face – this mixes up mode (whether a poll is done by phone, in person, or online) with sampling (how the people who take part in the poll are selected). The British Election Study poll appears to have got it right because of its sampling (it was random), not because of its mode (it was face-to-face). The two do not necessarily go hand in hand: when face-to-face polling was the norm in the 1980s it wasn’t done using random sampling, but quota sampling. Rather than asking interviewers to contact a specific randomly selected person and attempt contact time and again, interviewers were given a quota of, say, five middle-aged men, and any five middle-aged men would do.
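By way of contrast with the random sketch above, quota filling looks more like this – again a toy illustration with invented categories, not any pollster's actual field rules:

```python
def fill_quota(people_encountered, wanted_category, quota):
    """Quota sampling: accept the first matching people who come along.
    Any five "middle-aged men" will do, unlike the named-person
    random approach, where substitutes are not allowed."""
    matches = [p for p in people_encountered if p["category"] == wanted_category]
    return matches[:quota]  # stop once the quota is filled

people_encountered = [
    {"name": "Greg", "category": "middle-aged man"},
    {"name": "Hana", "category": "other"},
    {"name": "Ian", "category": "middle-aged man"},
    {"name": "Jack", "category": "middle-aged man"},
]
print(fill_quota(people_encountered, "middle-aged man", quota=5))
```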

That, of course, leads to the next obvious question: why don’t pollsters move to genuine random samples? The simple answers are cost and time. I think most people in market research would agree that a proper random sample like the BES is the ideal, but the cost is vastly higher. This isn’t more expensive in a “well, they should pay a bit more if they want better results” way – it’s a completely different scale of expense, the difference between a couple of thousand pounds and a couple of hundred thousand. No media outlet could ever justify the cost of a full-scale random poll; it’s just not going to happen. It’s a shame – I for one would be delighted to live in a world where people were willing to pay hundreds of thousands of pounds for polls – but such is life. Things like the BES only exist because of big funding grants from the ESRC (and at some elections that has needed to be matched by grants from other charitable trusts).

The public opinion poll industry has always been about finding a way of measuring public opinion that combines accuracy with being affordable enough for people to actually buy and speedy enough to react to events, and whatever solutions emerge from the 2015 experience will have those same aims. Changing sampling techniques to resemble random sampling more closely could, of course, be one of the routes companies look at. Or controlling their sampling and weighting in ways that better address the shortcomings of the sampling. Or different ways of modelling turnout, like ComRes are looking at. Or something else entirely. Time will tell.

The other important bit of evidence we are still waiting for is the BES’s voter validation exercise (the large-scale comparison of poll respondents’ claims about whether they voted against their individual records on the marked electoral register). That will help us understand a lot more about how well or badly the polls measured turnout, and how to predict individual respondents’ likelihood of voting.

Beyond that, the polling inquiry team have a meeting in January to announce their initial findings – we shall see what they come up with.


We have two new voting intention polls today. First is a telephone poll from ComRes for the Daily Mail – topline figures are CON 38%(-1), LAB 33%(+3), LDEM 8%(-1), UKIP 10%(-2), GRN 3%(-1). Since introducing their new turnout model based on socio-economic factors, ComRes have tended to show the biggest leads for the Conservative party, typically around twelve points. So while this poll is pretty similar to the sort of Conservative leads that MORI, ICM, YouGov and Opinium have recorded over the last month, compared to previous ComRes polls it represents a narrowing of the Conservative lead. Full tabs are here.

The second new poll is from BMG Research, a company that conducted a couple of voting intention polls just before the general election for the May2015 website, but hasn’t released any voting intention figures since then. Their topline figures are CON 37%, LAB 31%, LDEM 6%, UKIP 15%, GRN 5%. BMG have also adopted a methodology that takes account of socio-economic factors – specifically, people who don’t give a firm voting intention but who say they are leaning towards a party (a “squeeze question”), or who say how they voted last time, are included in the final figures but weighted according to age, with younger people weighted harshly downwards. Full tabs are here.
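To make that description concrete, here is a rough sketch of how such a squeeze-and-downweight scheme could work. The weight values and the age cut-off are invented for illustration, since BMG's actual figures aren't given here:

```python
def headline_weight(respondent):
    """Weight a respondent's answer carries in the headline figures
    (hypothetical values, not BMG's published scheme)."""
    if respondent["firm_vi"]:
        return 1.0  # firm voting intentions count in full
    if respondent["leaning"] or respondent["recalled_2015_vote"]:
        # Squeezed answers are kept but downweighted, younger people harshly so
        return 0.3 if respondent["age"] < 35 else 0.7
    return 0.0  # remaining don't knows drop out of the headline figures

respondents = [
    {"firm_vi": "CON", "leaning": None, "recalled_2015_vote": "CON", "age": 60},
    {"firm_vi": None, "leaning": "LAB", "recalled_2015_vote": None, "age": 24},
    {"firm_vi": None, "leaning": None, "recalled_2015_vote": None, "age": 45},
]
for r in respondents:
    print(r["age"], "->", headline_weight(r))
```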

BMG also asked voting intention in the European referendum, with headline figures of Remain 52%, Leave 48%. ICM also released their regular EU referendum tracker earlier in the week, which had toplines of Remain 54%, Leave 46%. A third EU referendum poll from YouGov found it 50%-50% – though note that poll did not use the actual referendum question (YouGov conduct a monthly poll across all seven European countries they have panels in, asking the same questions in each and including a generic question on whether people would like their own country to remain in the EU – this is that question, rather than a specific British EU referendum poll, where YouGov do use the referendum question).


Ipsos MORI have published their September political monitor for the Evening Standard. Topline voting intention figures are CON 39%, LAB 34%, LDEM 9%, UKIP 7%, GRN 4%.

MORI have made another methodological change in the light of the polling error at the general election. They had already started including how regularly people say they usually vote in their turnout filter; now they have also added weighting by newspaper readership. The methodology review is still an ongoing process, and MORI make clear they anticipate making further changes.
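For illustration, weighting by newspaper readership amounts to simple post-stratification: each respondent is weighted so the sample's readership profile matches population targets. The categories and target shares below are invented, not MORI's actual scheme:

```python
from collections import Counter

def readership_weights(sample, targets):
    """Post-stratification weights: target share divided by sample share
    for each readership group (invented targets, not MORI's)."""
    counts = Counter(r["paper"] for r in sample)
    n = len(sample)
    return [targets[r["paper"]] / (counts[r["paper"]] / n) for r in sample]

sample = [{"paper": "tabloid"}, {"paper": "tabloid"},
          {"paper": "broadsheet"}, {"paper": "none"}]
targets = {"tabloid": 0.30, "broadsheet": 0.15, "none": 0.55}  # hypothetical shares
print(readership_weights(sample, targets))  # the weights sum to the sample size
```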

The rest of the poll had a series of questions about perceptions of the party leaders and parties.

Jeremy Corbyn’s first satisfaction rating is minus 3 (33% are satisfied with him as leader, 36% dissatisfied). At first glance that isn’t bad – it’s a better net rating than Cameron’s or the government’s! In a historical context, though, it’s not good. New leaders normally get a polling honeymoon, with the public giving them the benefit of the doubt to begin with, and Corbyn’s net rating is the worst MORI have recorded for a new leader of one of the big two parties (the initial ratings for past party leaders were Miliband +19, Brown +16, Cameron +14, Howard +9, IDS 0, Hague -1, Blair +18, Smith +18, Major +15, Kinnock +20, Foot +2).

Looking at the more detailed questions on perceptions of Jeremy Corbyn, his strengths and weaknesses compared to David Cameron are very similar to the ones we got used to in Cameron v Miliband match-ups: Cameron scores better on things like being a capable leader, being good in a crisis and having sound judgement; Corbyn scores better on being in touch with ordinary people, having more substance than style and being more honest than most politicians. Asked overall who would make the most capable Prime Minister, Cameron wins by 53% to 27%.

Of course, all of Jeremy Corbyn’s ratings need to be seen in the context that he is very new to the job and the public don’t know a whole lot about him beyond the initial negative press. Early perceptions of him may yet change. His figures may get better… or worse.

MORI also asked about perceptions of the Labour and Conservative parties, and here the impact of Corbyn’s victory on how the Labour party itself is seen was very evident. The proportion of people seeing the party as divided is up 33 points to 75%, extreme is up 22 points to 36% and out of date is up 19 points to 55%. Both the Labour party and the Conservative party had a big jump in the proportion of people saying they were “Different to other parties” – I suppose it takes two parties to be different from each other!

Full details of the MORI poll are here.