On Friday the BPC/MRS inquiry into the polls at the 2015 general election started rolling. The inquiry team had their first formal meeting in the morning and in the afternoon there was a public meeting, addressed by representatives of most of the main polling companies. It wasn’t a meeting intended to produce answers yet – it was all still very much work in progress, and the inquiry itself isn’t due to report until next March (Patrick Sturgis explained what some see as a very long timescale with reference to the need to wait for some useful data sources, like the BES face-to-face data and the data that has been validated against marked electoral registers, neither of which will be available until later in the year). There will, however, be another public meeting sometime before Christmas when the inquiry team will present some of their initial findings. Friday’s meeting was for the pollsters to present their initial thoughts.
Seven pollsters spoke at the meeting: ICM, Opinium, ComRes, Survation, Ipsos MORI, YouGov and Populus. There was considerable variation in how much they said – some companies offered some early changes they were making, others only went through possibilities they were looking at rather than offering any conclusions. As you’d expect there was a fair amount of crossover. Further down I’ve summarised what each individual company said, but there were several things that came up time and again:
- Most companies thought there was little evidence of late swing being a cause. Most of the companies had done re-contact surveys, reinterviewing people surveyed before the election and comparing their answers before and afterwards to see if they actually did change their minds after the final polls. Most found only small changes, which either cancelled out or produced negligible movement to the Tories. Only one of the companies who spoke thought it was a major factor.
- Most of the pollsters seemed to be looking at turnout as being a major factor in the error, but this covered more than one root cause. One was people saying they will vote but not doing so, and this not being adequately dealt with by the existing 0-10 models of weighting and filtering by likelihood to vote. If that is the problem the solution may lie in more complicated turnout modelling, or using alternative questions to try and identify those who really will vote.
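The 0-10 likelihood-to-vote weighting mentioned above can be sketched roughly as follows. This is a minimal illustration, not any company’s actual model – the respondent data is invented, and real models combine this with demographic and political weighting:

```python
# Minimal sketch of 0-10 likelihood-to-vote weighting.
# Respondent data below is invented purely for illustration.
from collections import defaultdict

respondents = [
    {"vote": "Con", "likelihood": 10},
    {"vote": "Lab", "likelihood": 7},
    {"vote": "Lab", "likelihood": 10},
    {"vote": "Con", "likelihood": 9},
    {"vote": "LD",  "likelihood": 5},
]

def weighted_shares(respondents):
    """Weight each respondent by stated likelihood / 10, then express
    each party's weighted total as a share of the weighted whole."""
    totals = defaultdict(float)
    for r in respondents:
        totals[r["vote"]] += r["likelihood"] / 10
    grand = sum(totals.values())
    return {party: round(100 * w / grand, 1) for party, w in totals.items()}

print(weighted_shares(respondents))
```

Because respondents who say 7/10 count for less than those who say 10/10, parties whose supporters report lower certainty are scaled down – which is exactly the mechanism that fails if people misreport whether they will actually vote.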
- However several pollsters also talked about turnout problems coming not from respondents inaccurately reporting whether they would vote, but from pollsters simply interviewing the sort of people who are more likely to vote, with this affecting some groups more than others. If that’s the cause, then it is more a problem of improving samples, or doing something to address getting too many politically engaged people in samples.
- One size doesn’t necessarily fit all: the problems affecting phone pollsters may end up being different to those affecting online pollsters, and the solutions that work for one company may not work for another.
- Everyone was very wary of the danger of just artificially fitting the data to the last election result, rather than properly identifying and solving the cause(s) of the error.
- No one claimed they had solved the issue, everyone spoke very much about it being a work in progress. In many cases I think the factors they presented were not necessarily the ones they will finally end up identifying… but those where they had some evidence to show so far. Even those like ComRes who have already made some initial conclusions and changes in one area were very clear that their investigations were continuing, they were still open minded about possible reasons and conclusions and there were likely more changes to come.
Martin Boon of ICM suggested that ICM’s final poll showing a one point Labour lead was probably a bit of an outlier, and in that limited sense a bit of bad luck – ICM’s other polls during the campaign had shown small Conservative leads. He suggested this could possibly have been connected to doing the fieldwork for the final poll during the week – ICM’s fieldwork normally straddles the weekend – and the political make-up of C1/C2s in his final sample was significantly different from their usual polls: they broke for Labour, when ICM’s other campaign polls had them breaking for the Tories (Martin has already published some of the same details here). However, bad luck aside, he was clear that there was a much deeper problem: the fundamental error that has affected polls for decades – a tendency to overestimate Labour – has re-emerged.
ICM did a telephone recall poll of 3000 people who they had interviewed during the campaign. They found no significant evidence of a late swing, with 90% of people reporting they voted how they said they would. The recall survey also found that don’t knows split in favour of the Conservatives and that Conservative voters were more likely to actually vote… ICM’s existing reallocation of don’t knows and 0-10 weighting by likelihood to vote dealt well with this, but ICM’s weighting down of people who didn’t vote in 2010 was not, in the event, a good predictor (it didn’t help at all, though it didn’t hurt either).
Martin’s conclusion was that “shy Tories” and “lazy Labour” were NOT enough to explain the error, and there was probably some deeper problem with sampling that probably faced the whole industry. Typically ICM has to ring 20,000 phone numbers in order to get 1,000 responses – a response rate of 5% (though that will presumably include numbers that don’t exist, etc) and he worried again about whether our tools could get a representative sample.
Adam Drummond of Opinium also provided data from their recontact survey on the day of the election. They too found no evidence of any significant late swing, with 91% of people voting how they said they would. Opinium identified a couple of specific issues with their methodology that went wrong. One was that their age weighting was too crude – they used to weight age using three big groups, with the oldest being 55+. They found that within that group there were too many people in their 50s and 60s and not enough in their 70s and beyond, and that the much older group were more Tory. Opinium will be correcting that by using more detailed age weights, with over 75s weighted separately. They also identified failings in their political weightings that weighted the Greens too high, and will be correcting that now they have the 2015 results to calibrate it by.
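The age-weighting fix Opinium describe is essentially post-stratification with finer bands: each band gets a weight of target share divided by sample share, so an under-represented 75+ band gets its own (larger) weight instead of sharing one with everyone over 55. The proportions below are invented placeholders, not real census or panel figures:

```python
# Sketch of post-stratification by age band: weight = target / sample.
# All shares here are invented placeholders for illustration.

def age_weights(sample_shares, target_shares):
    """Return a per-band weight so the weighted sample matches targets."""
    return {band: target_shares[band] / sample_shares[band]
            for band in target_shares}

# Coarse scheme: one broad 55+ band hides the shortfall of over-75s.
coarse = age_weights(
    sample_shares={"18-34": 0.30, "35-54": 0.38, "55+": 0.32},
    target_shares={"18-34": 0.28, "35-54": 0.35, "55+": 0.37},
)

# Finer scheme: the 75+ band, under-represented in the sample,
# now gets its own larger weight instead of the averaged 55+ one.
fine = age_weights(
    sample_shares={"18-34": 0.30, "35-54": 0.38, "55-74": 0.26, "75+": 0.06},
    target_shares={"18-34": 0.28, "35-54": 0.35, "55-74": 0.26, "75+": 0.11},
)

print(coarse["55+"], fine["75+"])
```

With the coarse scheme the surplus of 55-74s and shortage of over-75s cancel inside one band, so the (more Tory) over-75s stay under-counted even after weighting.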
These were side issues though – Opinium thought the main issue was one of turnout, or more specifically, interviewing people who are too likely to vote. If they weighted the different age and social class groups to the turnout proportions suggested in MORI’s post-election survey it would have produced figures of CON 37%, LAB 32%… but of course, you can’t weight to post-election turnout data before an election, and comparing MORI’s data across past elections the level of turnout in different groups changes from election to election.
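The arithmetic of that kind of exercise looks something like this – each group’s effective weight is its share of the electorate times its turnout rate, and party shares are aggregated across groups. The shares and turnout rates below are invented placeholders, not Ipsos MORI’s published estimates:

```python
# Illustrative turnout-weighting by subgroup. All numbers invented.

groups = [
    # (share of electorate, turnout rate, Con share in group, Lab share)
    (0.25, 0.45, 0.30, 0.45),   # e.g. younger voters: lower turnout
    (0.40, 0.65, 0.38, 0.33),
    (0.35, 0.78, 0.44, 0.28),   # e.g. older voters: higher turnout
]

def turnout_weighted(groups):
    """Weight each group by (electorate share x turnout rate), then
    aggregate party shares across groups."""
    con = lab = total = 0.0
    for share, turnout, p_con, p_lab in groups:
        w = share * turnout
        con += w * p_con
        lab += w * p_lab
        total += w
    return round(100 * con / total, 1), round(100 * lab / total, 1)

print(turnout_weighted(groups))
```

Because the more Conservative groups turn out at higher rates in this toy data, the turnout-weighted figures shift towards the Tories relative to a flat average – the same direction as Opinium’s real recalculation.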
Looking forwards Opinium are going to correct their age and political weightings as described, and are considering whether or not to weight different age/social groups differently for turnout, or perhaps trying priming questions before the main voting intention. They are also considering how they reach more unengaged people – they already have a group in their political weighting for people who don’t identify with any of the main parties… but that isn’t necessarily the same thing.
Tom Mludzinski and Andy White of ComRes offered an initial conclusion: there was a problem with turnout. Between the 2010 and 2015 elections actual turnout rose by 1%, but the proportion of people who said they were 10/10 certain to vote rose by 8%.
Rather than looking at self-reported levels of turnout in post-election surveys ComRes did regressions on actual levels of turnouts in constituencies by their demographic profiles, finding the usual patterns of higher turnout in seats with more middle class people and older people, lower turnout in seats with more C2DE voters and younger voters. As an initial measure they have introduced a new turnout model that weights people’s turnout based largely upon their demographics.
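A toy version of that approach: regress constituency-level turnout on demographic composition, then use the fitted coefficients to assign respondents a demographically-implied turnout probability. The constituency figures below are invented for illustration and the model is deliberately tiny – ComRes’s actual regressions are not public in this form:

```python
# Toy demographic turnout model: fit turnout against constituency
# demographics, then score individual profiles. Data is invented.
import numpy as np

# Columns: share aged 65+, share in social grades C2DE (per constituency)
X = np.array([
    [0.15, 0.55],
    [0.22, 0.40],
    [0.30, 0.30],
    [0.18, 0.50],
    [0.27, 0.35],
])
turnout = np.array([0.58, 0.66, 0.73, 0.61, 0.70])

# Ordinary least squares with an intercept term
A = np.column_stack([np.ones(len(X)), X])
coefs, *_ = np.linalg.lstsq(A, turnout, rcond=None)

def predicted_turnout(aged_65_plus, c2de):
    """Turnout probability implied by the fitted demographic model."""
    return coefs[0] + coefs[1] * aged_65_plus + coefs[2] * c2de

# An older, more middle-class profile scores higher than a younger,
# more C2DE one on this toy data, matching the pattern ComRes found.
print(predicted_turnout(0.30, 0.30), predicted_turnout(0.15, 0.55))
```

The appeal of this route is that it sidesteps self-reported likelihood to vote entirely: the turnout probabilities come from how people like the respondent actually behaved, not from what respondents say they will do.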
ComRes have already discussed this in more detail than I have space for on their own website, including many of the details and graphs they used in Friday’s presentation.
Damian Lyons Lowe of Survation discussed their late telephone poll on May 6th that had produced results close to the actual election result, either through timing or through the different approach to telephone sampling it used. Survation suggested a large chunk of the error was probably down to late swing – their recontact survey had found around 85% of people saying they voted the way they had said they would, but those who did change their minds produced a movement to the Tories that would account for some of the error (it would have moved the figures to a 3 point Conservative lead).
Damian estimated late swing made up 40% of the difference between the final polls and the result, with another 25% made up from errors in weighting. The leftover error he speculated could be caused by “tactical Tories” – people who didn’t actually support the Conservatives, but voted for them out of fear about a hung Parliament and SNP influence and wouldn’t admit this to pollsters either before or after the election, pointing to the proportion of people who refused to say how they voted in their re-contact survey.
Tantalisingly, Damian also revealed that they were going to be able to release some of the private constituency polling they did during the campaign for academic analysis.
Gideon Skinner said Ipsos MORI’s thinking was still largely along the lines of Ben Page’s presentation in May, which was (perhaps a little crudely!) summarised as “lazy Labour”. MORI’s thinking is that their problem was not understating Tory support, but overstating Labour support. Like ComRes, they noted how the past relationship between stated likelihood to vote and actual turnout had got worse since the last election. At previous elections actual turnout had been about 10 points lower than the proportion of people who said they would definitely vote; at this election the gap was 16 points.
Looking at the difference between people’s stated likelihood to vote in 2010 and their answers this time round the big change was amongst Labour voters. Other parties’ voters had stayed much the same, but the proportion of Labour voters saying they were certain to vote had risen from 74% to 86%. Gideon said how this had been noticed at the time (and that MORI had written about it as an interesting finding!), but it had seemed perfectly plausible that now the Labour party were in opposition their supporters would become more enthusiastic about voting to kick out a Conservative government than they had been at the end of a third-term Labour government. Perhaps in hindsight it was a sign of a deeper problem.
MORI are currently experimenting with including how regularly people have voted in the past as an additional variable in their turnout model, as we discussed in their midweek poll.
Joe Twyman of YouGov didn’t present any conclusions yet, but went through the data they are using and the things they are looking at. YouGov did the fieldwork for two academic election surveys (the British Election Study and the SCMS) as well as their daily polling, and all three used different question ordering (daily polling asked voting intention first, the SCMS after a couple of questions, the BES after a bank of questions on important issues, which party is more trusted and party leaders), so this will allow testing of the effect of “priming questions”. YouGov are looking at the potential for errors like “shy Tories”; the geographical spread of respondents (are there the correct proportions of respondents in Labour and Conservative seats, safe and marginal seats?); whether respondents to surveys are too engaged; whether there is a panel effect; and how to deal with turnout (including using the validated data from the British Election Study respondents).
Andrew Cooper and Rick Nye of Populus also found no evidence of significant late swing. Populus did their final poll as two distinct halves and found no difference between the fieldwork done on the Tuesday and the fieldwork done on the Wednesday. Their recontact survey a fortnight after the election still found support at Con 33%, Lab 33%.
On the issue of turnout Populus had experimented with more complicated turnout models during the campaign itself – using some of the methods that other companies are now suggesting. Populus had weighted different demographic groups differently by turnout using the Ipsos MORI 2010 data as a guide, and they also had tried using how often over 25s said they had voted in the past as a variable in modelling turnout. None of it had stopped them getting it wrong, though they are going to try and build upon it further.
Instead Populus have been looking for shortcomings in the sampling itself, looking at other measures that have not generally been used in sampling or weighting but may be politically relevant. Their interim approach so far is to include more complex turnout modelling and to add disability, public/private sector employment and level of education to the measures they weight by, to try and get more representative samples. Using those factors would have given them figures of CON 35%, LAB 31% at the last election… better, but still not quite there.