In Defence of Polling

2015 is unlikely to be remembered as a high point in opinion polling.

In the months since the election I’ve spoken at various events and appeared on various panels, and at almost every one at some point there’s been a question from the audience along the lines of “Why should I ever believe what a poll says ever again?”. It’s normally from a middle-aged man, who looks very pleased with himself afterwards and folds his arms with his hands sort of tucked in his armpits. It may be the same man, stalking me. I fear he may be out in my front garden right now, waiting to pounce upon me with his question when I go to take the bins out.

Anyway, this is my answer.

Following the general election we pollsters took a lot of criticism and that’s fair enough. If you get things wrong, you get criticised. The commentariat all spent months writing about hung Parliaments and SNP-Labour deals and so on, and they did it because the polls were wrong. The one bit of criticism that I recall particularly grating with me, though, was a tweet (I can’t find it now, but I think it was from Michael Crick) saying that journalists should have talked to more people, rather than looking at polls.

And you know what, I thought to myself, maybe if you had talked to more people, maybe if you really ramped it up and talked to not just a handful of people in vox pops, but to thousands of people a day. Maybe then you could have been as wrong as we were, because that’s exactly what we were doing.

Polling is seen as being all about prediction – however often we echo Bob Worcester’s old maxim of a poll being a snapshot, the public and the media treat them as predictors, and we pollsters as some sort of modern-day augur. It isn’t, polling isn’t about prediction, it’s about measurement. Obviously there is a skill in interpreting what you have measured, and in measuring the right things in the first place, but ultimately the irreducible core of a poll is just asking people questions, and doing it in a controlled, quantifiable, representative way.

Polling about voting intention relies on a simple belief that the best way to find out how people are going to vote at a general election is to actually go and ask those people how they will vote at the general election. In that sense we are at one with whoever it was who wrote that tweet. You want to know how people will vote? Then talk to them.

The difference is that if you want to make that meaningful in any sense, you need to do it in an organised and sensible fashion. There is no point talking to 1000 people at, say, the Durham Miners’ Gala, or at Henley Regatta. There is no point only talking to people you can find within five minutes of the news studio. You are not going to get a proper picture if everyone you talk to is under 40 and white, or the sort of people walking round a shopping centre on a mid-week afternoon.

If you want to actually predict a general election based on talking to people, you’re going to have to make sure that the thousand people you talk to are properly reflective of all the people in Britain, that you’ve got the right number of people of different ages, genders, races, incomes, from every part of the country. And if you find out that despite your best efforts you have too many men and too few women, or too many old people and too few young people, you need to put it right by giving more weight to the answers from the women or young people you do have.
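To make that concrete, here is a minimal sketch of that sort of corrective weighting in Python. The target shares and the 60/40 sample below are invented for illustration, and real pollsters weight on several variables at once, but the principle is the same: each respondent gets a weight of target share divided by achieved share for their group.

```python
# Minimal post-stratification sketch: under-represented groups are weighted up,
# over-represented groups down. Shares below are illustrative, not real targets.
from collections import Counter

def demographic_weights(respondents, targets):
    """respondents: list of dicts with a 'group' key.
       targets: dict mapping each group to the share it should have."""
    counts = Counter(r["group"] for r in respondents)
    n = len(respondents)
    return [targets[r["group"]] / (counts[r["group"]] / n) for r in respondents]

# A sample that came back 60% men and 40% women, weighted to a 49/51 target:
sample = [{"group": "male"}] * 60 + [{"group": "female"}] * 40
weights = demographic_weights(sample, {"male": 0.49, "female": 0.51})
# each man now counts for about 0.82 of a response, each woman for about 1.28
```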

And that’s it. Ultimately all pollsters do is ask people questions, and try and do it in the fairest, most controlled and representative way we can. Anyone saying all polls are wrong or they’ll never believe a poll again is essentially saying it is impossible to find out how people will vote by asking them. It may be harder than you’d think, it may face challenges from declining response rates, but there’s no obviously better way of finding the information out.

No pollster worth his or her salt has ever claimed that polls are infallible or perfect. Self-evidently they weren’t, as they’d already got it wrong in 1970 and 1992. People can and do change their minds between polls being conducted and the election (sometimes even between the final polls and the election). Lots of people don’t know how they’ll vote yet. Some people do lie, sometimes people can’t answer a question because they don’t really know themselves, or don’t really have an opinion and are just trying to be helpful.

Pollsters know the limitations of what we do. We know a poll years out can’t predict an election. We know there are things that you really can’t find out by asking people straight (don’t get me started on “will policy X make you more likely to vote for Y?”). Our day-to-day job is often telling people the limitations of our product, of saying what we can’t do, what we can’t tell. If anything, the pendulum had swung too far before the election – some people did put too much faith in polls when we know they are less than perfect. I obviously hope the industry will sort out and solve the problems of this May (more on that here, for those who missed it). It’s a good thing that in 2020 journalists and pundits will caveat their analysis of the polling and the election with an “if the polls are right” and mean it, rather than just assuming they will be. For all its limitations though, opinion polling – that is, asking people what they think in a controlled and representative way – is probably the best we have, and it’s a useful thing to have.

Public opinion matters because we are a democracy, because people’s opinions drive how they vote and how they vote determines who governs us. Because it matters, it’s worth measuring.

What would the world be like without polling? Well, I’m not going to pretend it would come to a shuddering halt. There are people who would like to see less polling because they think it would lead to a better press, better political reporting. The argument goes that political reporting concentrates too much on the horse race and the polling figures and not enough on policies. I don’t think a lack of polls would change that – if anything it would give more room for speculation. The recent Oldham by-election gave us an example of what our elections would be like without polls: still a focus on who was going to win and what the outcome might be, except informed only by what campaign insiders were saying, what people picked up on the doorstep and what “private polls” were supposedly saying. Dare I whisper it after all the opprobrium, but perhaps if Lord Ashcroft had done one of his constituency polls early on and found UKIP were, in fact, a country mile behind Labour, the reporting of the whole by-election might have been a tad better.

There was a time (up to the point that the Times commissioned YouGov to do a public poll) in the Labour leadership election when it looked like it might be the same, that journalists would be reporting the campaign based solely upon what campaigns claimed their figures were showing, on constituency nomination figures and on a couple of private polls that Stephen Bush had glimpsed. While it may have been amusing if the commentariat had covered the race as if it was between Burnham and Cooper, only to find Corbyn a shock winner, I doubt it would have served democracy or the Labour party well. Knowing Corbyn could win meant he was scrutinised rather than being treated as the traditional left-wing also-ran, meant Labour members could cast their votes in the knowledge of the effect it might have, and vote tactically for or against a candidate if they wished.

And these are just election polls. To some extent there are other ways of measuring party support, like claimed canvassing returns, or models based upon local by-election results or betting markets. What about when it comes to other issues – should we legalise euthanasia? Should we bomb Syria? I might make my living measuring public opinion, but for what it’s worth I’m a grumpy old Burkean who is quite happy for politicians to ignore public opinion and employ their own good judgement. However, if you think politicians should reflect public opinion in their actions, they need the tools to do so. Some people (and some politicians themselves) do think politicians should reflect what their voters want, and that means they need some half-decent way of measuring it. That is, unless you’d like them to measure it by who can fill in the most prewritten “write to your MP” letters, turn out the largest number of usual suspects on a march, or manipulate the most clicks on a voodoo poll.

In 2016 we will have the results of the BPC inquiry, we’ll see what different methods the pollsters adopt to address whatever problems are identified and we’ll have at least three elections (London, Scotland and Wales) and possibly a European referendum to see if they actually work. Technically they won’t actually tell us if the polls have solved their problems or not (the polling in Scotland and Wales in May was actually fine anyway, and referendum polling presents its own unique problems), but we will be judged upon them nevertheless. We shall see how it pans out. In the meantime, despite the many difficulties there are in getting a representative sample of the British public, I still think those difficulties are surmountable, and that ultimately, it’s still worth trying to find out and quantify what the public think.


ICM released their final monthly voting intention poll of 2015 yesterday, with topline figures of CON 39%, LAB 34%, LDEM 7%, UKIP 10%, GRN 3%. I assume it’s the last voting intention poll we will see before Christmas. The full tables are here, where ICM also make an intriguing comment on methodology. They write,

For our part, it is clear that phone polls steadfastly continue to collect too many Labour voters in the raw sample, and the challenge for phone polling is to find a way to overcome the systematic reasons for doing so. The methodological tweaks that we have introduced since the election in part help mitigate this phenomenon by proxy, but have not overcome the core challenge. In our view, attempting to fully solve sampling bias via post-survey adjustment methods is a step too far and lures the unsuspecting pollster into (further) blasé confidence. We will have more to say on our methods in the coming months.



What Went Wrong

Today YouGov have put out their diagnosis of what went wrong at the election – the paper is summarised here and the full report, co-authored by Doug Rivers and myself, can be downloaded here. As is almost inevitable with investigations like this there were lots of small issues that couldn’t be entirely ruled out, but our conclusions focus upon two issues: the age balance of the voters in the sample and the level of political interest of people in the sample. The two issues are related – the level of political interest in the people interviewed contributed to the likely voters in the sample being too young. There were also too few over-seventies in the sample because YouGov’s top age band was 60+ (meaning there were too many people aged 60-70 and too few aged over 70).

I’m not going to go through the whole report here, but concentrate upon what I think is the main issue – how politically interested the people who respond to polls are, and how that impacts on the age of people in samples. In my view it’s the core issue that caused the problems in May; it’s also the issue that is more likely to have impacted on the whole industry (different pollsters already have different age brackets) and the issue that is more challenging to solve (adjusting the top age bracket is easily done). It’s also rather more complicated to explain!

People who take part in opinion polls are more interested in politics than the average person. As far as we can tell that applies to online and telephone polls and as response rates have plummeted (the response rate for phone polls is somewhere around 5%) that’s become ever more of an issue. It has not necessarily been regarded as a huge issue though – in polls about the attention people pay to political events we have caveated it, but it has not previously prevented polls being accurate in measuring voting intention.

The reason it had an impact in May is that the effect, the skew towards the politically interested, had a disproportionate effect on different social groups. Young people in particular are tricky to get to take part in polls, and the young people who have taken part in polls have been the politically interested. This, in turn, has skewed the demographic make up of likely voters in polling samples.

If the politically disengaged people within a social group (like an age band, or social class) are missing from a polling sample then the more politically engaged people within that same social group are weighted up to replace them. This disrupts the balance within that group – you have the right number of under twenty-fives, but you have too many politically engaged ones, and not enough with no interest. Where once polls showed a clear turnout gap between young and old, this gap has shrunk… it’s less clear whether it has shrunk in reality.

To give a concrete example from YouGov’s report, people who are under the age of twenty-five make up about 12% of the population, but they are less likely than older people to vote. Looking at the face-to-face BES survey, 12% of the sample would have been made up of under twenty-fives, but only 9.1% of those people who actually cast a vote were under twenty-five. Compare this to the YouGov sample – once again, 12% of the sample would have been under twenty-five, but they were more interested in politics, so 10.7% of YouGov respondents who actually cast a vote were under twenty-five.
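The arithmetic behind those shares is worth sketching out: a group’s share of actual voters is its population share multiplied by its turnout rate, renormalised across all groups. In the snippet below the turnout rates are invented assumptions, chosen only so the outputs land close to the figures quoted above – the point is simply to show how a more engaged (and so higher-turnout) group of young respondents ends up as too large a share of the likely voters in a sample.

```python
# Share of voters = population share x turnout rate, renormalised.
# Turnout rates here are illustrative assumptions, not figures from the report.

def voter_shares(pop_shares, turnout):
    weighted = {g: pop_shares[g] * turnout[g] for g in pop_shares}
    total = sum(weighted.values())
    return {g: w / total for g, w in weighted.items()}

pop_shares = {"under_25": 0.12, "25_plus": 0.88}

bes_like    = voter_shares(pop_shares, {"under_25": 0.50, "25_plus": 0.68})
yougov_like = voter_shares(pop_shares, {"under_25": 0.62, "25_plus": 0.70})

print(round(bes_like["under_25"], 3))     # ~0.091: young people are a small share of voters
print(round(yougov_like["under_25"], 3))  # ~0.108: engaged young respondents vote more,
                                          # so make up too large a share of "voters"
```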

Political interest had other impacts too – people who paid a lot of attention to politics behaved differently to those who paid little attention. For example, during the last Parliament one of the givens was that former Liberal Democrat voters were splitting heavily in favour of Labour. Breaking down 2010 Liberal Democrat voters by how much attention they pay to politics though shows a fascinating split: 2010 Lib Dem voters who paid a lot of attention to politics were more likely to switch to Labour; people who voted Lib Dem in 2010 but who paid little attention to politics were more likely to split to the Conservatives. If polling samples had people who were too politically engaged, then we’d have too many LD=>Lab people and too few LD=>Con people.

So, how do we put this right? We’ll go into the details of YouGov’s specific changes in due course (they will largely be the inclusion of political interest as a target and updating the age bands, but as ever, we’ll test them top to bottom before actually rolling them out on published surveys). However, I wanted here to talk about the two broad approaches I can see going forward for the wider industry.

Imagine two possible ways of doing a voting intention poll:

  • Approach 1 – You get a representative sample of the whole adult population, weight it to the demographics of the whole adult population, then filter out those people who will actually vote, and ask them who they’ll vote for.
  • Approach 2 – You get a representative sample of the sort of people who are likely to vote, weight it to the demographics of people who are likely to vote, and ask them who they’ll vote for.

Either of these methods would, in theory, work perfectly. The problem is that pollsters haven’t really been doing either of them. Lots of people who don’t vote don’t take part in polls either, so pollsters actually end up with a sample of the sort of people who are likely to vote, but then weight it to the demographics of all adults. This means the final samples of voters over-represent groups with low turnouts.
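Here is a toy demonstration of that failure mode, with all the numbers invented: a low-turnout group that makes up 30% of adults, but is under-represented (and unusually likely to vote) among the poll’s respondents, ends up over-represented among the weighted likely voters simply because the sample is weighted back to all-adult targets.

```python
# Toy example of weighting a likely-voter-ish sample to all-adult targets.
# All numbers are invented for illustration.
from collections import Counter

# In the "real" population: young adults are 30% of adults, turn out at 50%
# versus 70% for everyone else, so they are only ~23% of actual voters.
true_young_share = 0.30 * 0.50 / (0.30 * 0.50 + 0.70 * 0.70)

# The poll mostly reaches engaged people: only 20 of its 100 respondents are
# young, and 18 of those 20 will vote.
respondents = (
    [{"age": "young", "votes": True}] * 18 + [{"age": "young", "votes": False}] * 2 +
    [{"age": "old",   "votes": True}] * 60 + [{"age": "old",   "votes": False}] * 20
)

adult_targets = {"young": 0.30, "old": 0.70}          # weighted to ALL adults
counts = Counter(r["age"] for r in respondents)
weights = {g: adult_targets[g] / (counts[g] / len(respondents)) for g in counts}

voters = [r for r in respondents if r["votes"]]
young_share_in_poll = (sum(weights[r["age"]] for r in voters if r["age"] == "young")
                       / sum(weights[r["age"]] for r in voters))

print(round(true_young_share, 2), round(young_share_in_poll, 2))  # 0.23 vs 0.34
```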

Both methods present real problems. May 2015 illustrated the problems pollsters face in getting the sort of people who don’t vote in their samples. However, approach two faces an equally challenging problem – we don’t know the demographics of the people who are likely to vote. The British exit poll doesn’t ask demographics, so we don’t have that to go on, and even if we base our targets on who voted last time, what if the type of people who vote changes? While British pollsters have always taken the first approach, many US pollsters have taken a route closer to approach two and have on occasion come unstuck on that point – assuming an electorate that is too white, or too old (or vice-versa).

The period following the polling debacle of 1992 was a period of innovation. Lots of polling companies took lots of different approaches and, ultimately, learnt from one another. I hope there will be a similar period now – to follow John McDonnell’s recent fashion of quoting Chairman Mao, we should let a hundred flowers bloom.

From the point of view of an online pollster using a panel, the ideal way forward for us seems to be to tackle samples not having enough “non-political” people. We have a lot of control over who we recruit to samples so can tackle it at source: we record how interested in politics our panellists say they are, and add it to sampling quotas and weights. We’ll also put more attention towards recruiting people with little interest in politics. We should probably look at turnout models too – we mustn’t get lots of people who are unlikely to vote in our samples and then assume they will vote!
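One way of folding political interest into the weighting, sketched below, is simple raking (iterative proportional fitting) with interest treated as just another weighting dimension alongside the demographics. To be clear, this is not YouGov’s actual implementation and the target shares are invented; it is only meant to show the mechanics of adding such a target.

```python
# Raking / iterative proportional fitting over several weighting dimensions,
# with political interest included as one of them. Targets are invented.

def rake(respondents, targets, iterations=50):
    """respondents: list of dicts; targets: {dimension: {category: share}}.
       Returns one weight per respondent."""
    weights = [1.0] * len(respondents)
    for _ in range(iterations):
        for dim, shares in targets.items():
            totals = {}
            for r, w in zip(respondents, weights):
                totals[r[dim]] = totals.get(r[dim], 0.0) + w
            grand = sum(totals.values())
            # scale respondents so this dimension's weighted shares hit target
            weights = [w * shares[r[dim]] / (totals[r[dim]] / grand)
                       for r, w in zip(respondents, weights)]
    return weights

sample = [
    {"age": "18-24", "interest": "high"},
    {"age": "18-24", "interest": "high"},
    {"age": "25+",   "interest": "high"},
    {"age": "25+",   "interest": "low"},
    {"age": "25+",   "interest": "low"},
]
targets = {
    "age":      {"18-24": 0.12, "25+": 0.88},
    "interest": {"high": 0.30, "low": 0.70},   # invented target shares
}
print([round(w, 2) for w in rake(sample, targets)])
# the two highly engaged under-25s are weighted well down; the politically
# disengaged respondents are weighted up
```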

For telephone polling there will be different challenges (assuming, of course, that they diagnose similar causes – they may find the cause of their error was something completely different). Telephone polls struggle enough as it is to fill quotas without also trying to target people who are uninterested in politics. Perhaps the solution there may end up being along the second route – recasting quotas and weights to aim at a representative sample of likely voters. While they haven’t explicitly gone down that route, ComRes’s new turnout model seems to me to be in that spirit – using past election results to create a socio-economic model of the sort of people who actually vote, and then weighting their voting intention figures along those lines.
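Below is a sketch of a model in that spirit – not ComRes’s actual method, and the turnout probabilities by age band and social grade are invented – just to show the general idea of weighting each respondent’s stated vote by an estimated probability of voting for people like them, rather than by their own claimed likelihood to vote.

```python
# Weight stated voting intentions by demographic turnout probabilities.
# The probabilities and the tiny sample are invented for illustration.
from collections import defaultdict

# assumed turnout probabilities by (age band, social grade)
TURNOUT_PROB = {
    ("18-34", "ABC1"): 0.55, ("18-34", "C2DE"): 0.40,
    ("35-54", "ABC1"): 0.70, ("35-54", "C2DE"): 0.55,
    ("55+",   "ABC1"): 0.85, ("55+",   "C2DE"): 0.75,
}

def turnout_weighted_shares(respondents):
    """respondents: list of dicts with 'age', 'grade' and 'vote' keys."""
    totals = defaultdict(float)
    for r in respondents:
        totals[r["vote"]] += TURNOUT_PROB[(r["age"], r["grade"])]
    grand = sum(totals.values())
    return {party: round(share / grand, 2) for party, share in totals.items()}

sample = [
    {"age": "18-34", "grade": "C2DE", "vote": "Labour"},
    {"age": "18-34", "grade": "ABC1", "vote": "Labour"},
    {"age": "35-54", "grade": "ABC1", "vote": "Conservative"},
    {"age": "55+",   "grade": "ABC1", "vote": "Conservative"},
]
print(turnout_weighted_shares(sample))
# {'Labour': 0.38, 'Conservative': 0.62} - older, higher-turnout groups count
# for more towards the headline figures than an unweighted 50/50 split would
```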

Personally I’m confident we’ve got the cause of the error pinned down; now we have to tackle getting it right.


One of the key bits of evidence on why the polls got it wrong has today popped into the public domain – the British Election Study face to face survey. The data itself is downloadable here if you have SPSS or Stata, and the BES team have written about it here and here. The BES has two elements – an online panel study, going back to the same people before, during and after the election campaign, and a post-election random face-to-face study, allowing comparison with similar samples going back to the 1964 BES. This is the latter part.

The f2f BES poll went into the field just after the election and fieldwork was conducted up until September (proper random face-to-face polls take a very long time). On the question of how people voted in the 2015 election the topline figures were CON 41%, LAB 33%, LDEM 7%, UKIP 11%, GRN 3%. These figures are, of course, still far from perfect – the Conservatives and Labour are both too high and UKIP too low – but the gap between Labour and Conservative, the problem that bedevilled all the pre-election polls, is much closer to reality.

This is a heavy pointer towards the make-up of samples having been a cause of the polling error. If the problems had been caused by people incorrectly reporting their voting intentions (“shy Tories”) or by people saying they would vote when they did not, then it is likely that exactly the same problems would have shown up in the British Election Study (indeed, given the interviewer effect those problems could have been worse). The difference between the BES f2f results and the pre-election polls suggests that the error is associated with the thing that makes the BES f2f so different from the pre-election polls – the way it is sampled.

As regular readers will know, most published opinion polls are not actually random. Most online polls are conducted using panels of volunteers, with respondents selected using demographic quotas to model the British public as closely as possible. Telephone polls are quasi-random, since they do at least select randomised numbers to call, but the fact that not everyone has a landline and that the overwhelming majority of people do not answer the call or agree to take part means the end result is not really close to a random sample. The British Election Study was a proper randomised study – it randomly picked constituencies, then addresses within them, then a person at that address. The interviewer then repeatedly attempted to contact that specific person to take part (in a couple of cases up to 16 times!). The response rate was 56%.

Looking at Jon Mellon’s write-up, this ties in well with the idea that polls were not including enough of the sort of people who don’t vote. One of the things that pollsters have flagged up in the investigations of what went wrong is that they found less of a gap in people’s reported likelihood of voting between young and old people than in the past, suggesting polls might no longer be correctly picking up the differential turnout between different social groups. The f2f BES poll did this far better. Another clue is in the comparison between whether people voted and how difficult it was to get them to participate in the survey – amongst people whom the BES managed to contact on their first attempt, 77% said they had voted in the election; among those who took six or more goes, only 74% voted. A small difference in the bigger scheme of things, but perhaps indicative.

This helps us diagnose the problem at the election – but it still leaves the question of how to solve it. I should pre-empt a couple of wrong conclusions that people will jump to. One is the idea polls should go back to face-to-face – this mixes up mode (whether a poll is done by phone, in person, or online) with sampling (how the people who take part in the poll are selected). The British Election Study poll appears to have got it right because of its sampling (because it was random), not because of its mode (because it was face-to-face). The two do not necessarily go hand-in-hand: when face-to-face polling used to be the norm in the 1980s it wasn’t done using random sampling, it was done using quota sampling. Rather than asking interviewers to contact a specific randomly selected person and to attempt contact time and again, interviewers were given a quota of, say, five middle-aged men, and any old middle-aged men would do.

That, of course, leads to the next obvious question of why don’t pollsters move to genuine random samples? The simple answers there are cost and time. I think most people in market research would agree a proper random sample like the BES is the ideal, but the cost is exponentially higher. This isn’t more expensive in a “well, they should pay a bit more if they want better results” type way – it’s more expensive as in a completely different scale of expense, the difference between a couple of thousand and a couple of hundred thousand. No media outlet could ever justify the cost of a full-scale random poll, it’s just not ever going to happen. It’s a shame – I for one would obviously be delighted were I to live in a world where people were willing to pay hundreds of thousands of pounds for polls – but such is life. Things like the BES only exist because of big funding grants from the ESRC (and at some elections that has needed to be matched by grants from other charitable trusts).

The public opinion poll industry has always been about finding a way of measuring public opinion that can combine accuracy with being affordable enough for people to actually buy and speedy enough to react to events, and whatever solutions emerge from the 2015 experience will have those same aims. Changing sampling techniques to make them resemble random sampling more closely could, of course, be one of the routes that companies look at. Or controlling their sampling and weighting in ways that better address the shortcomings of the sampling. Or different ways of modelling turnout, like ComRes are looking at. Or something else nobody has yet thought of. Time will tell.

The other important bit of evidence we are still waiting for is the BES’s voter validation exercise (the large scale comparison of whether poll respondents’ claims on whether they voted or not actually match up against their individual records on the marked electoral register). That will help us understand a lot more about how well or badly the polls measured turnout, and how to predict individual respondents’ likelihood of voting.

Beyond that, the polling inquiry team have a meeting in January to announce their initial findings – we shall see what they come up with.


ICM have released their August poll for the Guardian. Topline voting intention figures are CON 40%, LAB 31%, LDEM 7%, UKIP 10%, GRN 4%. Full tables are here.

This is the first ICM poll since the election to feature an updated methodology in light of the polling error. Since 1993 or so ICM have reallocated people who don’t give a voting intention based on how they say they voted at the previous election. Colloquially this is often known as a “shy Tory” adjustment, as when it was initially introduced in the 1992 to 1997 Parliament it tended to help the Tories, though after the Iraq war it actually tended to be a “shy Labour” adjustment, and in the last Parliament it was a “shy Lib Dem” adjustment.

In practice ICM didn’t reallocate all their don’t knows and refusals, as many people who refuse to give a current voting intention also refuse to say how they voted at the last election (ICM call these people “total refusals”, as opposed to “partial refusals” who say how they voted last time but not this time). Under the new method ICM are also attempting to estimate the views of these “total refusals” – they are reallocated at the same rate as “partial refusals” but are assumed to split slightly more in favour of the Conservatives, based upon what ICM found in their post-election re-contact survey. The effect of this change on ICM’s headline figures this month is to increase the level of Conservative support by one point and decrease Labour support by one point.
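For readers who find it easier to follow in code, here is a rough sketch of a reallocation adjustment in this general spirit. The reallocation rate, the Conservative-leaning split for total refusals and all of the counts are invented for illustration – ICM’s actual proportions are in their published tables, and this is not their exact procedure.

```python
# Sketch of a "shy voter" style reallocation: people who won't give a current
# voting intention are partly reallocated to the party they recall voting for
# last time; those who won't even give a past vote are reallocated using an
# assumed (here, Conservative-leaning) split. All numbers are invented.
from collections import Counter

def reallocate(stated_vi, partial_refusals_by_past_vote, n_total_refusals,
               rate=0.5, total_refusal_split=None):
    adjusted = Counter(stated_vi)
    # partial refusals: assume a proportion vote as they did last time
    for party, n in partial_refusals_by_past_vote.items():
        adjusted[party] += rate * n
    # total refusals: reallocated at the same rate, using an assumed split
    if total_refusal_split:
        for party, share in total_refusal_split.items():
            adjusted[party] += rate * n_total_refusals * share
    total = sum(adjusted.values())
    return {party: round(100 * n / total, 1) for party, n in adjusted.items()}

headline = reallocate(
    stated_vi=Counter({"Con": 320, "Lab": 300, "LD": 60, "UKIP": 90, "Grn": 30}),
    partial_refusals_by_past_vote=Counter({"Con": 40, "Lab": 30, "LD": 25}),
    n_total_refusals=60,
    total_refusal_split={"Con": 0.5, "Lab": 0.3, "LD": 0.2},  # Con-leaning split
)
print(headline)
```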

The implication of this adjustment is that at least some of the error at the general election was down to traditional “shy Tories”, that those who refused to answer pollsters’ questions were disproportionately Conservative supporters. However, from being on panels with Martin Boon since the election and hearing him speak at the British Polling Council inquiry meeting I don’t think he’ll have concluded that “shy Tories” was the whole of the problem, and in ICM’s tables they are clear that they “expect to produce further methodological innovations in the future.”