Yesterday the British Polling Council held an event on how the polls have changed since 2015. This included collecting data from all the companies on what they’ve done to correct the 2015 error and what they are now doing differently – all those summaries are collected here.

In looking at what’s changed it’s probably best to start with what actually went wrong and what problem the pollsters are trying to solve. As all readers will know, the polls in 2015 overstated Labour support and understated Conservative support. The BPC/MRS inquiry under Pat Sturgis concluded this was down to unrepresentative samples.

Specifically, it looked as if polls had too many younger people who were too engaged and too interested in politics. The effect of this was that while in reality there was a big difference between the high turnout among older people and the low turnout among young people, among the sort of people who took part in polls this gap was too small. In short, the sort of young people who took part in polls went out and voted Labour; the sort of young people who weren’t interested and stayed at home didn’t take part in polls either.

So, what have polling companies done to correct the problems? There is a summary for each individual company here.

There have been a wide variety of changes (including YouGov interlocking past vote & region, ICM changing how they reallocate don’t knows, ICM and ComRes now both doing only online polls during the campaign). However, the core changes seem to boil down to two approaches: some companies have focused on improving the sample itself, trying to include more people who aren’t interested in politics, who are less well educated and don’t usually vote. Other companies have focused on correcting the problems caused by less than representative samples, changing their turnout model so it is based more on demographics, and forcing it to more accurately reflect turnout patterns in the real world. Some companies have done a bit of both.

Changes to make samples less politically engaged…

  • ICM and YouGov have both added a weight by respondents’ level of interest or attention to politics, based upon the British Election Study probability survey (there is a rough sketch of how this kind of weighting works after this list). YouGov have also added weights by level of educational qualification.
  • Ipsos MORI haven’t added political interest weights directly, but have added education weights and newspaper readership weights, which correlate with political interest.
  • Kantar have added education weighting, and also weight down turnout to the level they project it to be as a way of reducing the overall level of political engagement in their sample.
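
For readers who like to see the mechanics, here is a minimal sketch of how a single weight of this kind could be applied. The categories, target shares and respondents are invented for illustration and are not any company’s actual scheme – real pollsters apply several interlocking weights (rim weighting) rather than a single variable at a time.

```python
# Illustrative post-stratification weight on a single variable: attention to
# politics. The target shares are invented stand-ins for an external benchmark
# such as the BES face-to-face probability survey.

sample = [
    {"attention": "high", "vote": "Lab"},
    {"attention": "high", "vote": "Con"},
    {"attention": "low",  "vote": "Con"},
    # ...many more respondents in a real poll
]

target = {"high": 0.45, "low": 0.55}   # assumed benchmark distribution

# Observed distribution of attention in the (over-engaged) raw sample.
counts = {}
for r in sample:
    counts[r["attention"]] = counts.get(r["attention"], 0) + 1
observed = {k: v / len(sample) for k, v in counts.items()}

# Weight each respondent by target share / observed share, so low-attention
# respondents count for more and high-attention respondents for less.
for r in sample:
    r["weight"] = target[r["attention"]] / observed[r["attention"]]

lab_share = (sum(r["weight"] for r in sample if r["vote"] == "Lab")
             / sum(r["weight"] for r in sample))
print(f"weighted Labour share: {100 * lab_share:.1f}%")
```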

Changes to base turnout on demographics…

  • ComRes have changed their turnout model, so it is based more on respondents’ demographics rather than on how likely they claim they are to vote. The effect of this is essentially to downweight people who are younger and more working class, on the assumption that the pattern of turnout we’ve seen at past elections remains pretty steady. ICM have a method that seems very similar in its aim (I’m not sure of the technicalities) – weighting the data so that the pattern of turnout by age & social grade is the same as in 2015. There is a simplified sketch of this kind of demographic turnout model after this list.
  • Kantar (TNS) have a turnout model that is partially based on respondents age (so again, assuming that younger people are less likely to vote) and partially on their self-reported likelihood.
  • ORB weight their data by education and age so that it matches not the electorate as a whole, but the profile of those respondents to the 2015 British Election Study who actually voted (they also use the usual self-reported likelihood to vote weighting on top of this).
  • Opinium, MORI and YouGov still base their turnout models on people’s answers rather than their demographics, but they have all made changes. YouGov and MORI now weight down people who didn’t vote in the past, Opinium downweight people who say they will vote for a party but disapprove of its leader.
  • Panelbase and Survation haven’t made any radical changes since 2015, but Panelbase say they are considering using BES data to estimate likelihood to vote in their final poll (which sounds to me as if they are considering something along the lines of what ICM are doing with their turnout model).
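
As a rough illustration of how a demographic turnout model differs from simply believing people’s 0-10 answers, here is a sketch in which each respondent is weighted by an assumed probability of voting for their age band. The probabilities and respondents are made up; a real model would also use social grade, past voting and other variables.

```python
# Illustrative demographic turnout weighting: each respondent's weight is
# multiplied by an assumed probability that someone with their profile
# actually votes. The probabilities below are invented, loosely echoing the
# familiar pattern of older people voting at much higher rates.

turnout_by_age = {"18-24": 0.45, "25-49": 0.60, "50-64": 0.70, "65+": 0.80}

respondents = [
    {"age_band": "18-24", "vote": "Lab", "weight": 1.0},
    {"age_band": "25-49", "vote": "Lab", "weight": 1.0},
    {"age_band": "65+",   "vote": "Con", "weight": 1.0},
    # ...many more in a real poll
]

def turnout_weighted_share(party):
    """Vote share after applying the demographic turnout probabilities."""
    total = sum(r["weight"] * turnout_by_age[r["age_band"]] for r in respondents)
    party_total = sum(r["weight"] * turnout_by_age[r["age_band"]]
                      for r in respondents if r["vote"] == party)
    return 100 * party_total / total

print(f"Con {turnout_weighted_share('Con'):.1f}%, Lab {turnout_weighted_share('Lab'):.1f}%")
```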

In terms of actual outcomes, the pollsters who have adopted demographic turnout-models (ComRes, ICM and Kantar) tend to show larger Conservative leads than companies who have tried to address the problem only through sampling and weighting changes. We cannot really tell which is more likely to be right until June 8th. In short, for companies who have concentrated only on making samples more representative, the risk is that it hasn’t worked well enough, and that there are still too many of the sort of young engaged voters who are attracted to Jeremy Corbyn in their samples. For companies who have instead concentrated on demographic-based turnout models, the risk is that the pattern of turnout in 2017 differs from that in 2015, and that Jeremy Corbyn’s Labour really does manage to get more young people to come out to vote than Ed Miliband did. We will see what happens and, I expect, the industry will learn from whatever is seen to work this time round.


Donald Trump has won, so we have another round of stories about polling shortcomings, though thankfully it’s someone else’s country this time round (this is very much a personal take from across an ocean – the YouGov American and British teams are quite separate, so I have no insider angle on the YouGov American polls to offer).

A couple of weeks ago I wrote about whether there was potential for the US polls to suffer the same sort of polling mishap as Britain had experienced in 2015. It now looks as if they have. The US polling industry actually has a very good record of accuracy – they obviously have a lot more contests to poll, a lot more information to hand (and probably a lot more money!), but nevertheless – if you put aside the 2000 exit poll, you have to go back to 1948 to find a complete polling catastrophe in the US. That expectation of accuracy means they’ll probably face a lot of flak in the days ahead.

We in Britain have, shall I say, more recent experience of the art of being wrong, so here’s what insight I can offer. First the Brexit comparison. I fear this will be almost universal over the next few weeks, but when it comes to polling it is questionable:

  • In the case of Brexit, the polling picture was mixed. Put crudely, telephone polls showed a clear lead for Remain, while online polls showed a tight race, with Leave often ahead. Our media expected Remain to win and wrongly focused only on those polls that agreed with them, leading to a false narrative of a clear Remain lead rather than a close-run thing. Some polls were wrong, but the perception that they were all off is itself wrong – it was a failure of interpretation.
  • In the case of the USA, the polling picture was not really mixed. With the exception of the outlying USC Dornsife/LA Times poll, the national polls tended to show Clinton leading, backed up by state polls showing Clinton leads consistent with the national picture. People were quite right to interpret the polls as showing Clinton heading towards victory… it was the polls themselves that were wrong.

How wrong were they? As I write, it looks as if Hillary Clinton will actually get the most votes, but lose in the Electoral College. In that sense, the national polls were not wrong when they showed Clinton ahead: she really was. It’s one of the most frustrating situations to be in as a pollster, those times when statistically you are correct… but your figures have told the wrong narrative, so everyone thinks you are wrong. That doesn’t get the American pollsters off the hook though: the final polls were clustered around a 4 point lead for Clinton, when in reality it looks to be about 1 point. More importantly, the state polls were often way out: polls had Ohio as a tight race when Trump stomped it by 8 points; all the polls in Wisconsin had Clinton clearly ahead, yet Trump won; polls in Minnesota were showing Clinton leads of 5-10 points, and it ended up on a knife edge. Clearly something went deeply wrong here.

Putting aside exactly how comparable the Brexit polls and the Trump polls are, there are some potential lessons in terms of polling methodology. I am no expert in US polling, so I’ll leave it to others more knowledgeable than I to dig through the entrails of the election polls. However, based on my experience of recent mishaps in British polling, there are a couple of places I would certainly start looking.

One is turnout modelling – US pollsters often approach turnout in a very different way from how British pollsters traditionally did it. We’ve always relied on weighting to the profile of the whole population and asking people if they are likely to vote. US pollsters have access to far more information on which people actually do vote, allowing them to weight their samples to the profile of actual voters in a state. This has helped the normally good record of US pollsters… but it carries a potential risk if the type of people who vote changes, if there is an unexpected increase in turnout among demographics who don’t usually vote. This was one of the ways British pollsters did get burnt over Brexit. After getting the 2015 election wrong lots of British companies experimented with a more US-style approach, modelling turnout on the basis of people’s demographics. Those companies then faced problems when there was unexpectedly high turnout from more working-class, less well-educated voters at the referendum. Luckily for US pollsters, the relatively easy availability of data on who voted means they should be able to rule this in or out quite easily.
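
To make the difference between the two targets concrete, here is a toy calculation – every number in it is invented – showing how weighting the same raw data to the profile of all adults versus the (older) profile of people who voted last time shifts the headline figure, and why that shift becomes a liability if turnout patterns change.

```python
# Toy illustration of why the choice of weighting target matters. All numbers
# are invented: a hypothetical Republican-minus-Democrat lead within each age
# band, aggregated using either the adult population profile or the profile
# of people who actually voted at the previous election.

rep_lead_by_band = {"18-29": -18.0, "30-44": -4.0, "45-64": 6.0, "65+": 12.0}

adult_population = {"18-29": 0.21, "30-44": 0.25, "45-64": 0.34, "65+": 0.20}
past_voters      = {"18-29": 0.13, "30-44": 0.22, "45-64": 0.38, "65+": 0.27}

def overall_lead(profile):
    """Aggregate lead implied by weighting the age bands to the given profile."""
    return sum(profile[band] * rep_lead_by_band[band] for band in profile)

print(round(overall_lead(adult_population), 1))  # lead on a whole-population target
print(round(overall_lead(past_voters), 1))       # lead on a past-voter target
# The past-voter target produces a noticeably better Republican figure - and is
# exactly the assumption that breaks if low-turnout groups suddenly turn out.
```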

The second is sampling. The inquiry into our general election polling error in 2015 found that unrepresentative samples were the core of the problem, and I can well imagine that this is a problem that risks affecting pollsters anywhere. Across the world landline penetration is falling, response rates are falling and it seems likely that the dwindling number of people still willing to take part in polls are ever more unrepresentative. In this country our samples seemed to be skewed towards people who were too educated, who paid too much attention to politics, followed the news agenda and the political media too closely. We under-represented those with little interest in politics, and several UK pollsters have since started sampling and weighting by that to try and address the issue. Were the US pollsters to suffer a similar problem one can easily imagine how it could result in polls under-representing Donald Trump’s support. If that does end up being the case, the question will be what US pollsters do to address the issue.



Donald Trump has been citing Brexit as the model of how he could win the election despite expectations, and his surrogates have suggested there might be a shy Trump vote, just as with Brexit. So what, if any, lessons can we learn about the US election from recent polling experience in Britain?

In 2015 the British polls got the general election wrong. Every company had Labour and Conservative pretty much neck-and-neck, when in reality the Conservatives won by seven points. In contrast, the opinion polls as a whole were not wrong on Brexit, or at least, they were not all that wrong. Throughout the referendum campaign polls conducted by telephone generally showed Remain ahead, but polls conducted online generally showed a very tight race. Most of the online polls towards the end of the campaign showed Leave ahead, and polls by TNS and Opinium showed Leave ahead in their final eve-of-referendum polls.

That’s the first point where the parallel falls down – Brexit wasn’t a surprise because the polls were wrong. The polls were showing a race that was neck-and-neck. It was a surprise because people hadn’t believed or paid attention to that polling evidence. The media expected Remain would win, took polls showing Remain ahead more seriously, and a false narrative built up that the telephone polls were more accurately reflecting the race when, in the event, the online polls showing Leave ahead were right. This is not the case in the US – the media don’t think Trump will lose because they are downplaying inconvenient polling evidence, they think Trump will lose because the polling evidence consistently shows that.

In the 2015 general election however the British polls really were wrong, and while some of the polls got Brexit right, some did indeed show solid Remain victories. Do either of those have any relevance for Trump?

The first claim is the case of shy voters. Much as 1948 is the famous example of polling failure in the US, in this country 1992 was the famous mistake, and was put down to “Shy Tories” – that is, people who intended to vote Conservative, but were unwilling to admit it to pollsters. Shy voters are extremely difficult to diagnose. If people lie to pollsters about how they’ll vote before the election but tell the truth afterwards, then it is impossible to distinguish “shy voters” from people changing their minds (in recent British polls this does not appear to have happened: in both the 2015 election and the 2016 EU referendum, recontact surveys found no significant movement towards the Conservatives or towards Leave). Alternatively, if people are consistent in lying to pollsters about their intentions beforehand and lying about how they voted afterwards, it’s impossible to catch them out.

The one indirect way of diagnosing shy voters is to compare the answers given to surveys using live interviewers, and surveys conducted online (or in the US, using robocalls – something that isn’t regularly done in the UK). If people are reluctant to admit to voting a certain way, they should be less embarrassed when it isn’t an actual human being doing the interviewing. In the UK the inquiry used this approach to rule out “shy Tories” as a cause of the 2015 polling error (online polls did not have a higher level of Tory support than phone polls).
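
The comparison itself is straightforward arithmetic. Here is a rough sketch of the kind of check involved, with invented figures – in practice you would compare many polls and worry about house effects, but the core question is whether the self-completed polls show a meaningfully higher share for the supposedly “shy” choice.

```python
# Two-proportion z-test on the gap between self-completed (online/robocall)
# polls and interviewer-administered polls. All figures are invented.
from math import sqrt

def mode_gap_z(p_self, n_self, p_interviewer, n_interviewer):
    """z-statistic for (self-completed share minus interviewer share)."""
    p_pool = ((p_self * n_self + p_interviewer * n_interviewer)
              / (n_self + n_interviewer))
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_self + 1 / n_interviewer))
    return (p_self - p_interviewer) / se

# e.g. 38% in online polls (n=2000) vs 37% by phone (n=1000): z is well under
# 2, so no sign of the gap a genuine "shy voter" effect would produce.
print(round(mode_gap_z(0.38, 2000, 0.37, 1000), 2))
```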

In the US election there does appear to be some prima facie evidence of “Shy Trumpers”* – online polls and robopolls have tended to produce better figures for Donald Trump than polls conducted by a human interviewer. However, when this same difference was evident during the primary season the polls without a live interviewer were not consistently more accurate (and besides, even polls conducted without a human interviewer still have Clinton reliably ahead).

The more interesting issue is sample error. It is wrong to read directly across from Brexit to Trump – while there are superficial similarities, these are different countries, very different sorts of elections, in different party systems and traditions. There will be many different drivers of support. To my mind the interesting similarity though is the demographics – the type of people who vote for Trump and voted for Brexit.

Going back to the British general election of 2015, the inquiry afterwards identified sampling error as the cause of the polling error: the sort of people who were able to be contacted by phone and agreed to take part, and the sort of people who joined online panels were unrepresentative in a way that weights and quotas were not then correcting. While the inquiry didn’t specify how the samples were wrong, my own view (and one that is shared by some other pollsters) is that the root cause was that polling samples were too engaged, too political, too educated. We disproportionately got politically-aware graduates, the sort of people who follow politics in the media and understand what is going on. We don’t get enough of the poorly educated who pay little attention to politics. Since then several British companies have adopted extra weights and quotas by education level and level of interest in politics.

The relevance for Brexit polling is that there was a strong correlation between educational qualification and how people voted. Even within age cohorts, graduates were more likely to vote to Remain, while people with few or no educational qualifications were more likely to vote to Leave. People with a low level of interest in politics were also more likely to vote to Leave. These continuing sampling issues may well have contributed to the errors of those pollsters who got it wrong in June.

One thing that Brexit does have in common with Trump is those demographics. Trump’s support is much greater among those without a college degree, and I suspect if you asked you’d find it was also greater among people who don’t normally pay much attention to politics. In the UK those are groups we’ve had difficulty in properly representing in polling samples – if US pollsters have similar issues, then there is a potential source for error. College degree seems to be a relatively standard demographic in US polling, so I assume that is already accounted for. How much interest people have in politics is more nebulous, less easy to measure or control.

In Britain the root cause of polling mishaps in 2015 (and for some, but not all, companies in 2016) seems to be that the declining pool of people still willing to take part in polls under-represented certain groups, and that those groups were less likely to vote for Labour, more likely to vote for Brexit. If (and it’s a huge if – I am only reporting the British experience, not passing judgement on American polls) the sort of people who American pollsters struggle to reach in these days of declining response rates are more likely to vote for Trump, then they may experience similar problems.

Those thinking that the sort of error that affected British polls could happen in the US are indeed correct… but could happen is not the same as is happening. Saying something is possible is a long way from there being any evidence that it actually is happening. “Some of the British polls got Brexit wrong, and Trump is a little bit Brexity, therefore the polls are wrong” really doesn’t hold water.


*This has no place in a sensible article about polling methodology, but I feel I should point out to US readers that in British schoolboy slang when I was a kid – and possibly still today – to Trump is to fart. “Shy Trump” sounds like it should refer to surreptitiously breaking wind and denying it.


Opinium have a new EU referendum poll in the Observer. The topline figures are REMAIN 43%, LEAVE 41%, Don’t know 14%… if you get the data from Opinium’s own site (the full tabs are here). If you read the reports of the poll on the Observer website, however, the topline figures have Leave three points ahead. What gives?

I’m not quite sure how the Observer ended up reporting the poll as it did, but the Opinium website is clear. Opinium have introduced a methodology change (incorporating some attitudinal weights) but have included what the figures would have been on their old methodology, to allow people to see the change over the last fortnight. So their proper headline figures show a two point lead for Remain. However, the methodology change improved Remain’s relative position by five points, so the poll actually reflects a significant move to Leave since their poll a fortnight ago showing a four point lead for Remain. If the method had remained unchanged we’d be talking about a move from a four point Remain lead to a three point Leave lead; on top of the ICM and ORB polls last week, that’s starting to look as if something may be afoot.
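
To spell out the arithmetic in that paragraph (the figures are just the published toplines restated as Remain-minus-Leave leads):

```python
# Leads expressed as Remain minus Leave, in points.
previous_poll_old_method = +4   # a fortnight ago, old methodology
current_poll_old_method  = -3   # this poll recalculated on the old methodology
current_poll_new_method  = +2   # this poll as published with the new weights

method_effect = current_poll_new_method - current_poll_old_method
like_for_like_change = current_poll_old_method - previous_poll_old_method

print(f"methodology change helps Remain by {method_effect} points")        # 5
print(f"like-for-like movement since the last poll: {like_for_like_change}")  # -7, i.e. 7 points to Leave
```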

Looking in more detail at the methodology change, Opinium have added weights by people’s attitudes towards race and whether people identify as English, British or neither. These both correlate with how people will vote in the referendum and clearly do make a difference to the result. The difficulty comes with knowing what to weight them to – while there is reliable data from the British Election Study face-to-face survey, race in particular is an area where there is almost certain to be an interviewer effect (i.e. if there is a difference between answers in an online poll and a poll with an interviewer, you can’t be at all confident how much of the difference is sample and how much is interviewer effect). That doesn’t mean you cannot or should not weight by it – most potential weights face obstacles of one sort or another – but it will be interesting to see how Opinium have dealt with the issue when they write more about it on Monday.

It also leaves us with an ever more varied picture in terms of polling. In the longer term this will be to the benefit of the industry – hopefully some polls of both modes will end up getting things about right, and other companies can learn from and adapt whatever works. Different companies will pioneer different innovations; the ones that fail will be abandoned and the ones that work copied. That said, in the shorter term it doesn’t really help us work out what the true picture is. That is, alas, the almost inevitable result of getting it wrong last year. The alternative (all the polls showing the same thing) would only give us false clarity: the picture would appear to be “clearer”… but that wouldn’t mean it wasn’t wrong.


In January the BPC inquiry team announced their initial findings on what went wrong in the general election polls. Today they have published their full final report. The overall conclusions haven’t changed, we’ve just got a lot more detail. For a report about polling methodology written by a bunch of academics it’s very readable, so I’d encourage you to read the whole thing, but if you’re not in the mood for a 120 page document about polling methods then my summary is below:

Polls getting it wrong isn’t new

The error in the polls last year was worse than in many previous years, but wasn’t unprecedented. In 2005 and 2010 the polls performed comparatively well, but going back further there has often been an error in Labour’s favour, particularly since 1983. Last year’s error was the largest since 1992, but was not that different from the error in 1997 or 2001. The reason it was seen as so much worse was twofold. First, it meant the story was wrong: the polls suggested Labour would be the largest party when actually there was a Tory majority, whereas in 1997 and 2001 the only question was the scale of the Labour landslide. Second, in 2015 all the main polls were wrong – in years like 1997 and 2001 there was a substantial average error in the polls, but some companies managed to get the result right, so it looked like a failure of particular pollsters rather than of the industry as a whole.

Not everything was wrong: small parties were right, but Scotland wasn’t

There’s a difference between getting a poll right, and being seen to get a poll right. All the pre-election polls were actually pretty accurate for the Lib Dems, Greens and UKIP (and UKIP was seen as the big challenge!). It was seen as a disaster because they got the big two parties wrong, and therefore got the story wrong. It’s the latter bit that’s important – in Scotland there was also a polling error (the SNP were understated, Labour overstated) but it went largely unremarked because it was a landslide. As the report says, “underestimating the size of a landslide is considerably less problematic than getting the result of an election wrong”.

There was minimal late swing, if any

Obviously it is possible for people to change their minds in the 24 hours between the final poll fieldwork and the actual vote. People really can tell a pollster they’ll vote party A on Wednesday, but chicken out and vote party B on Thursday. The Scottish referendum was probably an example of genuine late swing – YouGov recontacted the same people they interviewed in their final pre-referendum poll on polling day itself, and found a small net swing towards NO. However, when pollsters get it wrong and blame late swing it does always sound a bit like a lame excuse: “Oh, it was right when we did it, people must have changed their minds”.

To conclude there was late swing I’d want to see some pretty conclusive evidence. The inquiry team looked, but didn’t find any. Changes from the penultimate to final polls suggested any ongoing movement was towards Labour, not the Conservatives. A weighted average of re-contact surveys found a change of only 0.6% from Lab to Con (and that was including some re-contacts from late campaign surveys rather than final call surveys – including only re-contacts of final call surveys, the average movement was towards Labour).
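
For anyone unfamiliar with how that sort of figure is arrived at, here is a bare-bones sketch of the calculation behind a re-contact check – the counts are invented purely to show the arithmetic, not the inquiry’s data.

```python
# Net movement between the two main parties among re-contacted respondents.
n_recontacted = 5000   # respondents interviewed before and after the election
lab_to_con = 90        # said Labour beforehand, reported voting Conservative
con_to_lab = 60        # said Conservative beforehand, reported voting Labour

net_movement_to_con = 100 * (lab_to_con - con_to_lab) / n_recontacted
print(f"net movement to the Conservatives: {net_movement_to_con:.1f} points")
```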

There probably weren’t any Shy Tories

“Shy Tories” is the theory that people who were not natural Tories were reluctant to admit to interviewers (or perhaps even to themselves!) that they were going to vote Conservative. If people had lied during the election campaign but admitted it afterwards, this would have shown up as late swing and it did not. This leaves the possibility that people lied before the election and consistently lied afterwards as well. This is obviously very difficult to test conclusively, but the inquiry team don’t believe the circumstantial evidence supports it. Not least, if there was a problem with shy Tories we could reasonably have expected polls conducted online without a human interviewer to have shown a higher Tory vote – they did not.

Turnout models weren’t that good, but that didn’t cause the error

Most pollsters modelled turnout using a simple method of asking people how likely they were to vote on a 0-10 scale. The inquiry team tested this by looking at whether people in re-contact surveys reported actually voting. For most pollsters this didn’t work out that well; however, it was not the cause of the error – the inquiry team re-ran the data replacing pre-election likelihood-to-vote estimates with whether people reported actually voting after the election, and the polls were just as wrong. As the inquiry team put it – if pollsters had known in advance which respondents would and would not vote, they would not have been any more accurate.
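
A stripped-down sketch of that counterfactual test is below – the field names and data are invented, but the idea is simply to compute the same poll estimate twice, once screening on the pre-election likelihood score and once on reported/validated turnout, and see whether the error moves.

```python
# Re-running a poll estimate with stated likelihood to vote replaced by
# whether respondents actually reported voting afterwards.

def con_lead(respondents, include):
    """Con minus Lab, in points, among respondents passing the filter."""
    voters = [r for r in respondents if include(r)]
    total = sum(r["weight"] for r in voters)
    con = sum(r["weight"] for r in voters if r["vote"] == "Con")
    lab = sum(r["weight"] for r in voters if r["vote"] == "Lab")
    return 100 * (con - lab) / total

poll = [
    {"vote": "Con", "weight": 1.0, "likelihood": 10, "voted": True},
    {"vote": "Lab", "weight": 1.1, "likelihood": 9,  "voted": True},
    {"vote": "Lab", "weight": 0.9, "likelihood": 8,  "voted": False},
    # ...thousands more in the real datasets
]

lead_stated   = con_lead(poll, lambda r: r["likelihood"] >= 9)  # pre-election filter
lead_reported = con_lead(poll, lambda r: r["voted"])            # post-election filter
print(round(lead_stated, 1), round(lead_reported, 1))
```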

Differential turnout – that Labour voters were more likely to say they were going to vote and then fail to do so – was also dismissed as a factor. Voter validation tests (checking poll respondents against the actual marked register) did not suggest Labour voters were any more likely to lie about voting than Tory voters.

Note that in this sense turnout is about the difference between people *saying* they’ll vote (and pollsters estimates of if they’ll vote) and whether they actually do. That didn’t cause the polling error. However, the polling error could still have been caused by samples containing people who are too likely to vote, something that is an issue of turnout but which comes under the heading of sampling. It’s the difference between having young non-voters in your samples and them claiming they’ll vote when they won’t, and not having them in your sample to begin with.

Lots of other things that people have suggested were factors, weren’t factors

The inquiry put to bed various other theories too – postal votes were not the problem (samples contained the correct proportion of them), excluding overseas voters was not the problem (they are only 0.2% of the electorate), and voter registration was not the problem (in the way it showed up it would have been functionally identical to misreporting of turnout – people who told pollsters they were going to vote, but did not – and for the narrow purpose of polling error it doesn’t matter why they didn’t vote).

The main cause of the error was unrepresentative samples

The reason the polls got it wrong in 2015 was the sampling. The BPC inquiry team reached this conclusion first by using the Sherlock Holmes method – eliminating all the other possibilities, leaving just one which must be true. However they also had positive evidence to back up the conclusion: first, the comparison with the random probability surveys conducted by the BES and BSA later in the year, where past vote recall more closely resembled the actual election result; second, some observable shortcomings within the samples themselves. The age distribution within bands was off, and the geographical distribution of the vote was wrong (polls underestimated Tory support more in the South East and East). Most importantly in my view, polling samples contained far too many people who vote, particularly among younger people – presumably because they contain people who are too engaged and interested in politics. Note that these aren’t necessarily the specific sample errors that caused the error: the BPC team cited them as evidence that sampling was off, not as the direct causes.
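
One simple diagnostic of that last point is to compare reported turnout by age band in a poll sample against external estimates of real turnout. The sketch below uses invented numbers rather than the inquiry’s figures, but it is the shape of the comparison that matters: polls contain far too many voters, and the excess is biggest among the young.

```python
# Reported turnout in a (hypothetical) poll sample vs an external estimate
# of actual turnout by age band. All numbers are illustrative.
sample_turnout = {"18-24": 0.78, "25-49": 0.82, "50-64": 0.85, "65+": 0.90}
actual_turnout = {"18-24": 0.43, "25-49": 0.62, "50-64": 0.74, "65+": 0.78}

for band in sample_turnout:
    gap = 100 * (sample_turnout[band] - actual_turnout[band])
    print(f"{band}: sample turnout too high by {gap:.0f} points")
```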

In the final polls there was no difference between telephone and online surveys

Looking at the final polls there was no difference at all between telephone and online surveys. The average Labour lead in the final polls was 0.2% in phone polls, and 0.2% in online polls. The average error compared to the final result was 1.6% for phone polls and 1.6% for online polls.

However, at points during the 2010-2015 Parliament there were differences between the modes. In the early part of the Parliament online polls were more favourable towards the Conservatives; for a large middle part of the Parliament phone polls were more favourable; during 2014 the gap disappeared entirely; phone polls became more favourable towards the Tories again during the election campaign, but came bang into line for the final polls. The inquiry suggest that could be herding, but that there is no strong reason to expect mode effects to be stable over time anyway – “mode effects arise from the interaction of the political environment with the various errors to which polling methods are prone. The magnitude and direction of these mode effects in the middle of the election cycle may be quite different to those that are evident in the final days of the campaign.”

The inquiry couldn’t rule out herding, but it doesn’t seem to have caused the error

That brings us to herding – the final polls were close to each other. To some observers they looked suspiciously close. Some degree of convergence is to be expected in the run-up to an election: many pollsters increased their sample sizes for their final polls, so the variance between figures should be expected to fall. However, even allowing for that, the polls were still closer than would have been expected. Several pollsters made changes to their methods during the campaign and these did explain some of the convergence. It’s worth noting that all the changes increased the Conservative lead – that is, they made the polls *more* accurate, not less accurate.

The inquiry team also tested to see what the result would have been if every pollster had used the same method. That is, if you think pollsters had deliberately chosen methodological adjustments that made their polls closer to each other, what if you strip out all those individual adjustments? Using the same method across the board the results would have ranged from a four point Labour lead to a two point Tory lead. Polls would have been more variable… but every bit as wrong.
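
For what it’s worth, the basic herding check is also fairly simple to sketch: compare the observed spread of the final polls’ leads with the spread pure sampling variation would produce. The figures below are invented and the formula ignores design effects, so treat it as an illustration of the logic rather than the inquiry’s actual test.

```python
# Is the spread of final poll leads suspiciously small compared with what
# sampling error alone would produce?
from math import sqrt
from statistics import pstdev

final_leads = [0.0, 1.0, -1.0, 0.0, 1.0, 0.0]   # Con minus Lab, in points
n = 2000                                         # typical final sample size
p_con, p_lab = 0.34, 0.34                        # rough final-poll shares

observed_sd = pstdev(final_leads)
# Sampling SD of a lead (difference of two multinomial shares), in points.
expected_sd = 100 * sqrt((p_con * (1 - p_con) + p_lab * (1 - p_lab)
                          + 2 * p_con * p_lab) / n)

print(round(observed_sd, 2), round(expected_sd, 2))  # an observed spread much
                                                     # below expected hints at herding
```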

How the pollsters should improve their methods

Dealing with the main crux of the problem, unrepresentative samples, the inquiry have recommended that pollsters take action to improve how representative their samples are within their current criteria, and to investigate potential new quotas and weights that correlate with the sort of people who are under-represented in polls, and with voting intention. They are not prescriptive as to what the changes might be – on the first point they float possibilities about longer fieldwork and more callbacks in phone polls, and more incentives for under-represented groups in online polls. For potential new weighting variables they don’t suggest much at all, worrying that if such variables existed pollsters would already be using them, but we shall see what changes pollsters end up making to their sampling to address these recommendations.

The inquiry also makes some recommendations about turnout, don’t knows and asking if people have voted by post already. These seem perfectly sensible recommendations in themselves (especially asking if people have already voted by post, which several pollsters already do anyway), but given none of these things contributed to the error in 2015 they are more improvements for the future than addressing the failures of 2015.

And how the BPC should improve transparency

If the recommendations for the pollsters are pretty vague, the recommendations to the BPC are more specific, and mostly to do with transparency. Pollsters who are members of the BPC are already supposed to be open about methods, but the inquiry suggest they change the rules to make this more explicit – pollsters should give the exact variables and targets they weight to, and flag up any changes they make to their methods (the BPC are adopting these changes forthwith). They also make recommendations about registering polls and providing microdata to help any future inquiries, and for changes in how confidence margins are reported in polls. The BPC are looking at exactly how to do that in due course, but I think I’m rather less optimistic than the inquiry team about the difference it will make. The report says “Responsible media commentators would be much less inclined, however, to report a change in party support on the basis of one poll which shows no evidence of statistically significant change.” Personally I think *responsible* media commentators are already quite careful about how they report polls, the problem is that not all media commentators are responsible…
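
On that last point, the check a careful reader can do is not complicated. Here is a rough sketch, with invented figures, of asking whether a reported “change” between two polls is bigger than the noise two independent samples would generate anyway.

```python
# Is a change in one party's share between two polls statistically
# distinguishable from sampling noise? Figures are invented.
from math import sqrt

def change_is_significant(p1, n1, p2, n2, z=1.96):
    """True if the change exceeds roughly the 95% margin of error on the
    difference between two independent sample proportions."""
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return abs(p2 - p1) > z * se

# A move from 34% to 36% between two polls of 1,000 is well within noise.
print(change_is_significant(0.34, 1000, 0.36, 1000))   # False
```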

There’s no silver bullet

The inquiry team don’t make recommendations for specific changes that would have corrected the problems and don’t pretend there is an easy solution. Indeed, they point out that even the hugely expensive “gold standard” BES random probability surveys still managed to get the Conservative and UKIP shares of the vote outside the margin of error. They do think there are improvements that can be made though – and hopefully the changes that some pollsters have already introduced are improving matters. They also say it would be good if stakeholders were more realistic about the limits of polling, of how accurately it is really possible to measure people’s opinions.

Polling accuracy shouldn’t be black and white. It shouldn’t be a choice between “polls are the gospel truth” and “polls are worthless, ignore them all”. Polls are a tool, with advantages and limitations. There are limits on how well we can model and measure the views of a complex and mobile society, but that should be a reason for caveats and caution, not a reason to give up. As I wrote last year, despite the many difficulties there are in getting a representative sample of the British public, I still think those difficulties are surmountable, and that ultimately it’s still worth trying to find out and quantify what the public think.