While the gap between online and telephone polls on the EU referendum has narrowed of late, it is still there, and Populus have put out an interesting paper looking at possible explanations, written by James Kanagasooriam of Populus and Matt Singh of Number Cruncher Politics. The full paper is here.
Matt and James essentially suggest three broad reasons. The first is don’t knows. Most telephone polls don’t prompt people with the option of saying don’t know, though respondents are free to volunteer it. In contrast, in online polls people can only pick from the options presented on the screen, so don’t know has to be offered up front as an option. (Personally, I suspect there is a mode effect as well as a prompting effect on don’t knows. When there is a human interviewer, people may feel a certain social pressure to give an answer – saying don’t know feels somehow unhelpful.)
Populus tested this in two parallel surveys, one online and one by phone, each split into two versions. The phone survey was split between prompting people just with the options of Remain or Leave, and explicitly including don’t know as an option in the prompt. The online survey had one split offering don’t know as an option, and another with the don’t know option hidden away in smaller font at the bottom of the page (a neat idea to try to simulate not explicitly prompting for an option in an online survey).
- The phone test had a Remain lead of 11 points without a don’t know option (the way phone polls normally ask), but with an explicit don’t know it would have shown only a 3 point Remain lead. Prompting for don’t knows made a difference of eight points in the lead.
- The online survey had a Leave lead of six points with a don’t know prompt (the way they normally ask), but with the don’t know option hidden down the page it had only a one point Leave lead. Making the don’t know prompt less prominent made a difference of five points in the lead. (The sketch after this list illustrates the mechanism.)
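To make the mechanics concrete, here is a minimal sketch of how pushing soft supporters into (or out of) the don’t know column moves the headline lead. The vote shares below are invented for illustration – only the resulting leads are made to match the phone experiment above.

```python
# Hypothetical shares chosen so the leads match the phone experiment above;
# the underlying splits are invented, not Populus's raw data.

def lead(shares):
    """Remain lead in points (negative would mean a Leave lead)."""
    return shares["remain"] - shares["leave"]

# Forced-choice version: soft, mostly Remain-leaning respondents pick a side.
forced_choice = {"remain": 52, "leave": 41, "dk": 7}

# Don't know prompted: the same soft respondents now sit in the don't know column.
dk_prompted = {"remain": 44, "leave": 41, "dk": 15}

print(lead(forced_choice))  # 11 point Remain lead
print(lead(dk_prompted))    # 3 point Remain lead, an 8 point difference
```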
The impact here is actually quite chunky, accounting for a fair amount of the difference. Comparing recent phone and online polls the gap is about seven or so points, so if you looked just at the phone experiment here the difference in don’t knows could in theory account for the whole lot! I don’t think that is the case though: things are rarely so simple. Earlier this year there was a much bigger gap, and I suspect there are probably also some issues to do with sample make-up and interviewer effects in the actual answers. In the Populus paper they assume it makes up about a third of a gap of fifteen points between phone and online, though obviously that total gap is smaller now.
The second thing Populus looked at was attitudinal differences between online and phone samples. The examples looked at here are attitudes towards gender equality, racial equality and national identity. Essentially, people give answers that are more socially liberal in telephone polls than they do in online polls. This is not a new finding – plenty of papers in the past have found these sorts of differences between telephone and online polling, but because attitudinal questions are not directly tested in general elections they are never compared against reality and it is impossible to be certain which are “right”. Neither can we really be confident how much of the difference is down to different types of people being reached by the two approaches and how much to interviewer effects (are people more comfortable admitting views that may be seen as racist or sexist to a computer screen than to a human interviewer?). It’s probably a mixture of both. What’s important is that how socially liberal people were on these scales correlated with how pro- or anti-EU they were, so to whatever extent the difference is down to sample make-up rather than interviewer effect, it explains another couple of points of difference between EU referendum voting intention in telephone and online polls. The questions that Populus asked had also been used in the face-to-face BES survey: the answers there were in the middle – more socially liberal than online polls, less socially liberal than phone polls. Of course, if there are interviewer effects at play here, face-to-face polling also has a human interviewer.
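As a rough, purely illustrative sketch of that compositional point (all the shares below are invented, not taken from the Populus paper): if the two modes reach different proportions of socially liberal respondents, and those respondents are more pro-Remain, the headline lead shifts by a couple of points even if no individual answers any differently.

```python
# Invented figures: socially liberal respondents are assumed to break more
# heavily for Remain, so a sample containing more of them shows a bigger lead.

def remain_lead(liberal_share, remain_if_liberal=0.60, remain_otherwise=0.45):
    """Overall Remain minus Leave, in points, ignoring don't knows."""
    remain = liberal_share * remain_if_liberal + (1 - liberal_share) * remain_otherwise
    return round((remain - (1 - remain)) * 100, 1)

print(remain_lead(0.55))  # phone-style sample, more socially liberal:  6.5
print(remain_lead(0.45))  # online-style sample, less socially liberal: 3.5
```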
Populus think these two factors explain most of the difference, but are left with a gap of about 3 points that they can’t readily explain. They float the idea that this could be because online samples contain more partisan people who vote down the line (so, for example, online samples have fewer of those odd “UKIP for Remain” voters), when in reality people are more often rather contradictory and random. It’s an interesting possibility, and chimes with my own views about polls containing people who are too politically aware, too partisan. The impact of YouGov adopting sampling and weighting by attention paid to politics last month was mostly to increase don’t knows on questions, but when we were testing it before rollout it did improve the position of Remain relative to Leave on the EU question, normally by two or three points, so that would chime with Populus’s theory.
According to Populus, therefore, the gap comes down partly to don’t knows, partly to the different attitudinal make-up of the samples, and a final chunk to online samples being, in their view, more partisan. Their estimate is that reality will be somewhere in between the results being shown by online and telephone polls, a little closer to telephone. We shall see.
(A footnote for just the really geeky among you who have paid close attention to the BPC inquiry and the BES team’s posts on the polling error; it is probably too technical for most readers. When comparing the questions on race and gender Populus also broke down the answers in the BES face-to-face survey by how many contacts it took to interview people. This is something the BES team and the BPC inquiry team also did when investigating the polling error last May. The inquiries looking at the election polls found that if you took just those people the BES managed to interview on their first or second attempt, the make-up of the sample was similar to that from phone polls and was too Labour, while people who were trickier to reach were more Conservative. Hence they took “easy for face-to-face interviewers to reach” as a sort of proxy for “people likely to be included in a poll”. In this study Populus did the same for the social liberalism questions and it didn’t work the same way: phone polls were much more liberal than the BES f2f poll, but the easy-to-reach people in the BES f2f poll were the most conservative and the hard-to-reach the most liberal, so “easy to reach f2f” didn’t resemble the telephone sample at all. Populus theorise that this is a mobile sampling issue, but I think it raises some deeper questions about the assumptions we’ve made about what difficulty of contact in the BES f2f sample can teach us about other samples. I’ve never seen any logical justification for why people whom it takes multiple attempts to reach face-to-face will necessarily be the same group that is hard to reach online – they could easily be two completely different groups. Perhaps “takes multiple tries to reach face-to-face” is not a suitable proxy for the sort of people phone polls can’t reach either…)