Almost a month on from the referendum campaign I’ve had a chance to sit down and collect my thoughts about how the polls performed. This isn’t necessarily a post about what went wrong since, as I wrote on the weekend after the referendum, for many pollsters nothing at all went wrong. Companies like TNS and Opinium got the referendum resolutely right, and many polls painted a consistently tight race between Remain and Leave. However some did less well, and in the context of last year’s polling failure there is plenty we can learn about which methodological approaches adopted by the pollsters did and did not work for the referendum.

Mode effects

The most obvious contrast in the campaign was between telephone and online polls, and this contributed to the surprise over the result. Telephone and online polls told very different stories – if one paid more attention to telephone polls then Remain appeared to have a robust lead (and many in the media did, having bought into a “phone polls are more accurate” narrative that turned out to be wholly wrong). If one had paid more attention to online polls the race would have appeared consistently neck-and-neck. If one made the – perfectly reasonable – assumption that the actual result would be somewhere in between phone and online, one would still have ended up expecting a Remain victory.

While there was a lot of debate about whether phone or online was more likely to be correct during the campaign, there was relatively little to go on. Pat Sturgis and Will Jennings of the BPC inquiry team concluded that the true position was probably in between phone and online, perhaps a little closer to online, by comparing the results of the 2015 BES face-to-face data to the polls conducted at the time. Matt Singh and James Kanagasooriam wrote a paper called Polls Apart that concluded the result was probably closer to the telephone polls because they were closer to the socially liberal results in the BES data (an issue I’ll return to later). A paper by John Curtice could only conclude that the real result was likely somewhere in between online and telephone, given that at the general election the true level of UKIP support was between phone and online polls. During the campaign there was also a NatCen mixed-mode survey based on recontacting respondents to the British Social Attitudes survey, which found a result somewhere in between online and telephone.

In fact the final result was not somewhere in between telephone and online at all. Online was closer to the final result and, far from being in between, the actual result was more Leave than all of them.

As ever, the actual picture was not quite as simple as this and there was significant variation within modes. The final online polls from TNS and Opinium had Leave ahead, but Populus’s final poll was conducted online and had a ten point lead for Remain. The final telephone polls from ComRes and MORI showed large leads for Remain, but Survation’s final phone poll showed a much smaller Remain lead. ICM’s telephone and online polls had been showing identical leads, but ceased publication several weeks before the result. On average, however, online polls were closer to the result than telephone polls.

The referendum should perhaps also provoke a little caution about probability studies like the face-to-face BES. These are hugely valuable surveys, done to the highest possible standards… but nothing is perfect, and they can be wrong. We cannot tell what a probability poll conducted immediately before the referendum would have shown, but if it had been somewhere between online and phone – as the earlier BES and NatCen data were – then it would also have been wrong.

People who are easy or difficult to reach by phone

Many of the pieces looking at the mode effects in the EU polling examined the differences between people who responded quickly and slowly to polls. The BPC inquiry into the general election polls analysed the samples from the post-election BES/BSA face-to-face surveys and showed how people who responded to the face-to-face surveys on the first contact were skewed towards Labour voters; only after including those respondents who took two or three attempts to contact did the polls correctly show the Conservatives in the lead. The inquiry team used this as an example of how quota sampling could fail, rather than evidence of actual biases which affected the polls in 2015, but the same approach has become more widely used in analysis of polling failure. Matt Singh and James Kanagasooriam’s paper in particular focused on how slow respondents to the BES were also likely to be more socially liberal and concluded, therefore, that online polls were likely to have too many socially conservative people.

Taking people who are reached on the first contact attempt in a face-to-face poll seems like a plausible proxy for people who might be contacted by a telephone poll that doesn’t have time to ring back people who it fails to contact on the first attempt. Putting aside the growing importance of sampling mobile phones, landline surveys and face-to-face surveys do both depend on the interviewee being at home at a civilised time and willing to take part. It’s more questionable why it should be a suitable proxy for the sort of person willing to join an online panel and take part in online surveys that can be done on any internet device at any old time.

As the referendum campaign continued there were more studies that broke down people’s EU referendum voting intention by how difficult they were to interview. NatCen’s mixed-mode survey in May and June found that the respondents who took longer to contact tended to be more Leave (as well as being less educated, and more likely to say don’t know). BMG’s final poll was conducted by telephone, but used a six day fieldwork period to allow multiple attempts to call back respondents. Their analysis painted a mixed picture – people contacted on the first call were fairly evenly split between Remain and Leave (51% Remain), people reached on the second call were strongly Remain (57% Remain), but people reached on later calls were more Leave (49% Remain).

Ultimately, the evidence on hard-to-reach people ended up being far more mixed than initially assumed. While the BES found hard-to-reach people were more pro-EU, the NatCen survey’s hardest to reach people were more pro-Leave, and BMG found a mixed pattern. This also suggests that one suggested solution for making telephone sampling better – taking more time to make call-backs to those people who don’t answer the first call – is not a guaranteed fix. ORB and BMG both highlighted their decision to spend longer over their fieldwork in the hope of producing better samples, both taking six days rather than the typical two or three. Neither was obviously more accurate than phone pollsters with shorter fieldwork periods.

Education weights

During the campaign YouGov wrote a piece raising questions about whether some polls had too many graduates. Level of educational qualification correlated with how likely people were to support EU membership (graduates were more pro-EU, people with no qualifications more pro-Leave, even after controlling for age), so this did have the potential to skew figures.

The actual proportion of “graduates” in Britain depends on definitions (the common NVQ Level 4+ categorisation in the census includes some people with higher education qualifications below degree-level), but depending on how you define it and whether or not you include HE qualifications below degree level the figure is around 27% to 35%. In the Populus polling produced for Polls Apart 47% of people had university level qualifications, suggesting polls conducted by telephone could be seriously over-representing graduates.

Ipsos MORI identified the same issue with too many graduates in their samples and added education quotas and weights during the campaign (this reduced the Remain lead in their polls by about 3-4 points, so while their final poll still showed a large Remain lead, it would have been more wrong without education weighting). ICM, however, tested education weights on their telephone polls and found it made little difference, while education breaks in ComRes’s final poll suggest they had about the right proportion of graduates in their sample anyway.
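To make the mechanics concrete, here is a minimal sketch of how an education weight of this kind works. All of the group shares and Remain percentages below are invented for illustration (they are not figures from MORI, ICM or anyone else), but they show how down-weighting an excess of graduates to a census-style target pulls the headline Remain figure down.

```python
# Minimal sketch of education weighting, with entirely hypothetical numbers.
# Suppose an achieved sample is 47% graduates when the population target is ~30%,
# and graduates are more Remain. Weighting graduates down to the target shifts
# the headline Remain share.

def weighted_remain_share(groups):
    """groups: list of (sample_share, target_share, remain_share_within_group)."""
    return sum(target * remain for _, target, remain in groups) / \
           sum(target for _, target, _ in groups)

# (share of achieved sample, population target, Remain share within group) - all assumed
groups = [
    (0.47, 0.30, 0.58),   # graduates
    (0.53, 0.70, 0.44),   # non-graduates
]

unweighted = sum(s * r for s, _, r in groups) / sum(s for s, _, _ in groups)
print(f"Unweighted Remain share:         {unweighted:.1%}")                      # ~50.6%
print(f"Education-weighted Remain share: {weighted_remain_share(groups):.1%}")   # ~48.2%
```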

This doesn’t entirely put the issue of education to bed. Data on the educational make-up of samples is spotty, and the overall proportion of graduates in the sample is not the end of the story – because there is a strong correlation between education and age, just looking at overall education levels isn’t enough. There need to be enough people with few qualifications in younger age groups, not just among older generations where having no formal qualifications is commonplace.

The addition of education weights appears to have helped some pollsters, but it clearly depends on the state of the sample to begin with. MORI controlled for education, but still over-represented Remain. ComRes had about the right proportion of graduates to begin with, but still got it wrong. Getting the correct proportion of graduates does appear to have been an issue for some companies, and dealing with it helped some companies, but alone it cannot explain why some pollsters performed badly.

Attitudinal weights

Another change introduced by some companies during the campaign was weighting by attitudes towards immigration and national identity (whether people considered themselves to be British or English). Like education, both these attitudes were correlated with EU referendum voting intention. Where they differ from education is that there are official statistics on the qualifications held by the British population, but there are no official stats on national identity or attitudes towards immigration. Attitudes may also be more liable to change than qualifications.

Three companies adopted attitudinal weights during the campaign, all of them online. Two of these used the same BES questions on racial equality and national identity from the BES face-to-face survey that were discussed in Polls Apart… but with different levels of success. Opinium, who were the joint most-accurate pollster, weighted people’s attitudes to racial equality and national identity to a point half-way between the BES findings and their own findings (presumably on the assumption that half the difference was sample, half interviewer effect). According to Opinium this increased the relative position of remain by about 5 points when introduced. Populus weighted by the same BES questions on attitudes to race and people’s national identity, but in their case used the actual BES figures – presumably giving them a sample that was significantly more socially liberal than Opinium’s. Populus ended up showing the largest Remain lead.

It’s clear from Opinium and Populus that these social attitudes were correlated with EU referendum vote and including attitudinal weighting variables did make a substantial difference. Exactly what to weight them to is a different question though – Populus and Opinium weighted the same variable to very different targets, and got very different results. Given the sensitivity of questions about racism we cannot be sure whether people answer these questions differently by phone, online or face-to-face, nor whether face-to-face probability samples have their own biases, but choosing what targets to use for any attitudinal weighting is obviously a difficult problem.
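As a rough illustration of the choice involved, the sketch below (with invented distributions rather than the real BES or panel figures) shows an Opinium-style "halfway" target alongside weighting all the way to the benchmark; the only difference between the two approaches is the target each answer category is weighted to.

```python
# Sketch of attitudinal weighting targets. The distributions are invented for
# illustration; the real BES and panel figures differ.

def halfway_targets(benchmark, panel):
    """Split the difference between a face-to-face benchmark and the panel's own distribution."""
    return {k: (benchmark[k] + panel[k]) / 2 for k in benchmark}

def weights(target, sample):
    """Per-category weight = target share / achieved sample share."""
    return {k: round(target[k] / sample[k], 2) for k in target}

# Hypothetical shares for a social-attitude question:
bes_benchmark = {"liberal": 0.55, "conservative": 0.45}   # assumed face-to-face figure
panel_sample  = {"liberal": 0.45, "conservative": 0.55}   # assumed online panel figure

midpoint_target  = halfway_targets(bes_benchmark, panel_sample)  # Opinium-style midpoint
benchmark_target = bes_benchmark                                 # weight fully to the benchmark

print("Midpoint target: ", midpoint_target,  "weights:", weights(midpoint_target, panel_sample))
print("Benchmark target:", benchmark_target, "weights:", weights(benchmark_target, panel_sample))
# Weighting to the full benchmark pushes the sample further towards the socially
# liberal group, which in this context would push the topline further towards Remain.
```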

While it may have been a success for Opinium, attitudinal weighting is unlikely to have improved matters for other online polls – online polls generally produce social attitudes that are more conservative than suggested by the BES/BSA face-to-face surveys, so weighting them towards the BES/BSA data would probably only have served to push the results further towards Remain and make them even less accurate. On the other hand, for telephone polls there could be potential for attitudinal weighting to make samples less socially liberal.

Turnout models

There was a broad consensus that turnout was going to be a critical factor at the referendum, but pollsters took different approaches towards it. These varied from the traditional approach of basing turnout weights purely on respondents’ self-assessment of their likelihood to vote, through models that also incorporated how often people had voted in the past or their interest in the subject, to models based on the socio-economic characteristics of respondents, estimating people’s likelihood to vote from their age and social class.

In the case of the EU referendum Leave voters generally said they were more likely to vote than Remain voters, so traditional turnout models were more likely to favour Leave. People who didn’t vote at previous elections leant towards Leave, so models that incorporated past voting behaviour were a little more favourable towards Remain. Demographically based models were more complicated, as older people were more likely to vote and more Leave, but middle class and educated people were more likely to vote and more Remain. On balance, models based on socio-economic factors tended to favour Remain.

The clearest example is NatCen’s mixed-mode survey, which explicitly modelled the two different approaches. Their raw results without turnout modelling would have been REMAIN 52.3%, LEAVE 47.7%. Modelling turnout based on self-reported likelihood to vote would have made the results slightly more “Leave” – REMAIN 51.6%, LEAVE 48.4%. Modelling the results based on socio-demographic factors (which is what NatCen chose to do in the end) resulted in topline figures of REMAIN 53.2%, LEAVE 46.8%.
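A minimal sketch of these two broad approaches is below, using a handful of made-up respondents rather than NatCen’s actual data: each respondent gets a turnout probability, either from their own 0-10 answer or from an assumed demographic propensity score, and the headline figure is the turnout-weighted share.

```python
# Sketch of the two broad turnout approaches, with invented numbers.

def turnout_weighted_share(respondents, prob_key):
    """Turnout-weighted Remain share using the given turnout probability field."""
    remain = sum(r[prob_key] for r in respondents if r["vi"] == "Remain")
    total = sum(r[prob_key] for r in respondents)
    return remain / total

# Hypothetical respondents: self-reported 0-10 likelihood, plus a demographic
# propensity score of the kind estimated from turnout at a previous election.
respondents = [
    {"vi": "Leave",  "self_reported": 10, "demographic_propensity": 0.65},
    {"vi": "Leave",  "self_reported": 9,  "demographic_propensity": 0.60},
    {"vi": "Remain", "self_reported": 8,  "demographic_propensity": 0.85},
    {"vi": "Remain", "self_reported": 7,  "demographic_propensity": 0.80},
    {"vi": "Remain", "self_reported": 10, "demographic_propensity": 0.75},
]

# Traditional approach: scale the 0-10 answer into a probability.
for r in respondents:
    r["self_prob"] = r["self_reported"] / 10

print(f"Self-reported model, Remain: {turnout_weighted_share(respondents, 'self_prob'):.1%}")
print(f"Demographic model, Remain:   {turnout_weighted_share(respondents, 'demographic_propensity'):.1%}")
# With these assumed numbers the self-reported model is a little better for Leave
# and the demographic model a little better for Remain, mirroring the pattern
# described above; the figures themselves are illustrative only.
```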

In the event ComRes & Populus chose to use methods based on socio-economic factors, YouGov & MORI used methods combining self-assessed likelihood and past voting behaviour (and in the case of MORI, interest in the referendum), Survation & ORB a traditional approach based just on self-assessed likelihood to vote. TNS didn’t use any turnout modelling in their final poll.

In almost every case the adjustments for turnout made the polls less accurate, moving the final figures towards Remain. For the four companies who used more sophisticated turnout models, it looks as if a traditional approach of relying on self-reported likelihood to vote would have been more accurate. An unusual case was TNS’s final poll, which did not use a turnout model at all, but did include data on what their figures would have been if they had. Using a model based on people’s own estimate of their likelihood to vote, past vote and age (but not social class) TNS would have shown figures of 54% Leave, 46% Remain – less accurate than their final call poll, but with an error in the opposite direction to most other polls.

In summary, it looks as though attempts to improve turnout modelling since the general election have not improved matters – if anything the opposite was the case. The risk of basing turnout models on past voting behaviour at elections, or on the demographic patterns of turnout at past elections, has always been what would happen if patterns of turnout changed. It’s true that middle class people normally vote more than working class people, and that older people normally vote more than younger people. But how much more, and how much does that vary from election to election? If you build a model that assumes the same levels of differential turnout between demographic groups as at the previous election, it risks going horribly wrong if levels of turnout are different… and in the EU ref it looks as if they were. In their post-referendum statement Populus have been pretty robust in rejecting the whole idea – “turnout patterns are so different that a demographically based propensity-to-vote model is unlikely ever to produce an accurate picture of turnout other than by sheer luck.”

That may be a little harsh; it would probably be a wrong turn if pollsters stopped looking for more sophisticated turnout models than just asking people, and past voting behaviour and demographic considerations may be part of that. It may be that turnout models based on past behaviour at general elections are more successful at modelling general election turnout than turnout at referendums. Thus far, however, innovations in turnout modelling don’t appear to have been particularly successful.

Reallocation of don’t knows

During the campaign Steve Fisher and Alan Renwick wrote an interesting piece about how most referendum polls in the past have underestimated support for the status quo, presumably because of late swing or don’t knows breaking for Remain. Pollsters were conscious of this and, rather than just ignore don’t knows in their final polls, the majority of pollsters attempted to model how don’t knows would vote. This ranged from simple squeeze questions asking which way don’t knows think they’ll end up voting or which way they are leaning (TNS, MORI and YouGov), to projecting how don’t knows would vote based upon their answers to other questions. ComRes had a squeeze question and estimated how don’t knows would vote based on how people thought Brexit would affect the economy, Populus on how risky don’t knows thought Brexit was. ORB just split don’t knows 3 to 1 in favour of Remain.
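As a rough illustration of how these reallocation approaches work mechanically, here is a small sketch with invented topline figures (not any pollster’s actual numbers): one branch applies an ORB-style fixed 3:1 split, the other a squeeze-question style reallocation of only those don’t knows who say they lean one way.

```python
# Sketch of two ways of handling don't knows, using invented topline figures.

remain, leave, dk = 0.44, 0.42, 0.14   # hypothetical pre-adjustment shares

# ORB-style fixed split: allocate don't knows 3 to 1 in favour of Remain.
remain_fixed = remain + dk * 0.75
leave_fixed  = leave + dk * 0.25
print(f"Fixed 3:1 split  - Remain {remain_fixed:.1%}, Leave {leave_fixed:.1%}")

# Squeeze-question style: reallocate only the don't knows who say they lean one
# way, then repercentage excluding those still undecided.
dk_lean_remain, dk_lean_leave = 0.05, 0.03   # assumed leaners within the 14%
r, l = remain + dk_lean_remain, leave + dk_lean_leave
print(f"Squeeze question - Remain {r/(r+l):.1%}, Leave {l/(r+l):.1%}")
```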

In every case these adjustments helped Remain, and in every case this made things less accurate. Polls that made estimates about how don’t knows would vote ended up more wrong than polls that just asked people how they might end up voting, but this is probably coincidence; both approaches had a similar sort of effect. This is not to say they were necessarily wrong – it’s possible that don’t knows did break in favour of Remain, and that while the reallocation of don’t knows made polls less accurate, it was because it was adding a swing to data that was already wrong to begin with. Nevertheless, it suggests pollsters should be careful about assuming too much about don’t knows – for general elections at least such decisions can be based more firmly upon how don’t knows have split at past general elections, where hopefully more robust models can be developed.

So what can we learn?

Pollsters don’t get many opportunities to compare polling results against actual election results, so every one is valuable – especially when companies are still attempting to address the 2015 polling failure. On the other hand, we need to be careful about reading too much into a single poll that’s not necessarily comparable to a general election. All those final polls were subject to the ordinary margins of error and there are different challenges to polling a general election and a referendum.

Equally, we shouldn’t automatically assume that anything that would have made the polls a little more Leave is necessarily correct, anything that made polling figures more Remain is necessarily wrong – everything you do to a poll interacts with everything else, and taking each item in isolation can be misleading. The list of things above is by no means exhaustive either – my own view remains that the core problem with polls is that they tend to be done by people who are too interested and aware of politics, and the way to solve polling failure is to find ways of recruiting less political people, quota-ing and weighting by levels of political interest. We found that people with low political interest were more likely to support Brexit, but there is very little other information on political awareness and interest from other polling, so I can’t explore to what extent that was responsible for any errors in the wider polls.

With that said, what can we conclude?

  • Phone polls appeared to face substantially greater problems in obtaining a representative sample than online polls. While there was variation within modes, with some online polls doing better than others, some phone polls doing worse than others, on average online outperformed phone. The probability based samples from the BES and the NatCen mixed-mode experiment suggested a position somewhere between online and telephone, so while we cannot tell what they would have shown, we should not assume they would have been any better.
  • Longer fieldwork times for telephone polls are not necessarily the solution. The various analyses of how people who took several attempts to contact differed from those who were contacted on the first attempt were not consistent, and the companies who took longer over their fieldwork were no more accurate than those with shorter periods.
  • Some polls did contain too many graduates and correcting for that did appear to help, but it was not a problem that affected all companies and would not alone have solved the problem. Some companies weighted by education or had the correct proportion of graduates, but still got it wrong.
  • Attitudinal weights had a mixed record. The only company to weight attitudes to the BES figures overstated Remain significantly, but Opinium had more success at weighting them to a halfway point. Weighting by social attitudes faces problems in determining weighting targets and is unlikely to have made other online polls more Leave, but could be a consideration for telephone polls that may have had samples that were too socially liberal.
  • Turnout models that were based on the patterns of turnout at the last election and whether people voted at the last election performed badly and consistently made the results less accurate – presumably because of the unexpectedly high turnout, particularly in more working class areas. Perhaps there is potential for such models to work in the future and at general elections, but so far they don’t appear successful.

ORB have a new poll out tonight for the Independent showing a ten point lead for leave: REMAIN 45%(-4), LEAVE 55%(+4). Changes are since their last comparable poll, all the way back in April. Unlike the weekly ORB telephone polls for the Telegraph, their more infrequent polls for the Indy are done online – hence the results that are far more pro-Brexit than their poll in the week. Even accounting for that, it shows a shift towards leave that we’ve seen in many recent polls.

The ten point lead is large, but as ever, it is only one poll. Don’t read too much into it unless we see it echoed in other polling. As things stand most other online polls are still tending to show a relatively close race between Remain and Leave.

Also out today was a statement on some methodology changes from Ipsos MORI. As well as following their normal pre-election practice of filtering out people who aren’t registered to vote now the deadline for registration has passed, from their poll next week they are also going to start quotaing and weighting by education, aimed at reducing an over-representation of graduates. MORI suggest that in their last poll the change would have reduced the Remain lead by 3 or 4 points.

While they haven’t yet decided how they’ll do it, in their article they also discuss possible approaches they might take on turnout. MORI have included examples of modelling turnout based on people who say they are certain to vote and voted last time, or say the referendum is important, or who say they usually vote and so on. Exactly which one MORI end up opting for probably doesn’t make that much difference, as they all have a very similar impact: reducing the Remain share by a couple of points and increasing the Leave share by a couple of points.

The combined effect of these changes is that the MORI poll in the week is going to be better for Leave due to methodological reasons anyway. If it does show another shift towards Leave, take care to work out how much of that is because of the methodology change and how much of it is due to actual movement before getting too excited/distraught.


Opinium have a new EU referendum poll in the Observer. The topline figures are REMAIN 43%, LEAVE 41%, Don’t know 14%… if you get the data from Opinium’s own site (the full tabs are here). If you read the reports of the poll on the Observer website however the topline figures have Leave three points ahead. What gives?

I’m not quite sure how the Observer ended up reporting the poll as it did, but the Opinium website is clear. Opinium have introduced a methodology change (incorporating some attitudinal weights) but have included what the figures would have been on their old methodology to allow people to see the change in the last fortnight. So their proper headline figures show a two point lead for Remain. However the methodology change improved Remain’s relative position by five points, so the poll actually reflects a significant move to leave since their poll a fortnight ago showing a four point lead for Remain. If the method had remained unchanged we’d be talking about a move from a four point remain lead to a three point leave lead; on top of the ICM and ORB polls last week that’s starting to look as if something may be afoot.

Looking in more detail at the methodology change, Opinium have added weights by people’s attitudes towards race and whether people identify as English, British or neither. These both correlate with how people will vote in the referendum and clearly do make a difference to the result. The difficulty comes with knowing what to weight them to – while there is reliable data from the British Election Study face-to-face poll, race in particular is an area where there is almost certain to be an interviewer effect (i.e. if there is a difference between answers in an online poll and a poll with an interviewer, you can’t be at all confident how much of the difference is sample and how much is interviewer effect). That doesn’t mean you cannot or should not weight by it (most potential weights face obstacles of one sort or another), but it will be interesting to see how Opinium have dealt with the issue when they write more about it on Monday.

It also leaves us with an ever more varied picture in terms of polling. In the longer term this will be to the benefit of the industry – hopefully some polls of both modes will end up getting things about right, and other companies can learn from and adapt whatever works. Different companies will pioneer different innovations, the ones that fail will be abandoned and the ones that work copied. That said, in the shorter term it doesn’t really help us work out what the true picture is. That is, alas, the almost inevitable result of getting it wrong last year. The alternative (all the polls showing the same thing) would only be giving us false clarity, the picture would appear to be “clearer”… but that wouldn’t mean it wasn’t wrong.


ComRes had a new EU telephone poll in this morning’s Daily Mail. Topline figures are REMAIN 52%(-1), LEAVE 41%(+3), Don’t know 7%(-2). Tabs are here.

Note that this poll is now adjusted for likelihood to vote, using ComRes’s turnout model based on socio-economic factors like age and class (the changes are adjusted to reflect this). Adjusting turnout using ComRes’s model has marginally increased support for Remain (before the adjustment the figures would have been 51 and 41).

There’s a broad assumption that differential turnout is more likely to favour Leave in the EU referendum campaign, largely based on the fact that polls normally show Leave voters claiming they are more likely to be 10/10 certain to vote, and that Leave voters are older. I’m not so sure. Self-reported likelihood is a blunt tool (people who say they are 10/10 certain to vote are not really much more likely to do so than 8/10 or 9/10 people), and the age skew that should favour Leave in terms of turnout (older people are more likely to vote, and more Leave) will to some degree be cancelled out by the social class and educational skews that favour Remain (middle class people and graduates are more likely to vote, and more Remain).

On the subject of education, YouGov also had an interesting article up today. Like Populus and ICM they have carried out parallel telephone and online surveys, but unlike other such tests, which found a big gulf between phone and online results, YouGov found results that were very similar to each other: both the phone and online polls found a small lead for Leave.

This result wasn’t just down to the weighting (even before weighting, the raw sample was a lot more “leave” than the raw samples from other phone polls), suggesting it is something to do with the sampling. Obviously we can’t tell for certain what the reason is – the most obvious difference is that the YouGov poll was conducted over the period of a fortnight, so was slower than most telephone polls and there was more opportunity to ring back people who were unavailable on the first call – but there could be other differences to do with quotas or the proportion of mobile calls (the YouGov poll was about a third mobile, two-thirds landline; my understanding is that most phone polls are about 50/50 now, though MORI is about 20/80).

Looking at the actual demographics of the sample YouGov highlight the difference between their landline sample and the samples for the Populus paper looking at phone/online differences – specifically on education. In the Populus telephone samples between 44-46% of people had degrees, whereas the actual figure in the Census and Annual Population Survey is around 30%. The YouGov phone sample had a lower proportion of people with degrees to begin with, and weighted it to the national figure.

There is a clear correlation between education and attitudes to the EU referendum (in the YouGov polls there was a Leave lead of about 30 points among people who left school at 16 and a Remain lead of 33 points among those who were educated beyond the age of twenty; this is partially to do with age, but it remains true even within people of the same age), so if samples are too educated or not educated enough it could easily make a difference. As it is we’ve only got education data for the Populus polling – we don’t know if there’s the same skew in other phone polls, or how much of a difference it would make if corrected, but different levels of education within achieved samples is a further hypothesis that could explain the ongoing difference between phone and online samples for the EU referendum.


Last year the election polls got it wrong. Since then most pollsters have made only minor interim changes – ComRes, BMG and YouGov have conducted the biggest overhauls, many others have made only tweaks, and all the companies have said they are continuing to look at further potential changes in the light of the polling review. In light of that I’ve seen many people assume that until changes are complete many polls probably still overestimate Labour support. While on the face of it that makes sense, I’m not sure it’s true.

The reason the polls were wrong in 2015 seems to be the samples were wrong. That’s sometimes crudely described as samples including too many Labour voters and too few Conservative voters. This is correct in one sense, but is perhaps describing the symptom rather than the cause. The truth is, as ever, rather more complicated. Since the polls got it wrong back in 1992 almost all the pollsters have weighted their samples politically (using how people voted at the last election) to try and ensure they don’t contain too many Labour people or too few Conservative people. Up until 2015 this broadly worked.

The pre-election polls were weighted to contain the correct number of people who voted Labour in 2010 and who voted Conservative in 2010. The 2015 polls accurately reflected the political make-up of Britain in terms of how people voted at the previous election; what they got wrong was how people voted at the forthcoming election. Logically, therefore, what the polls got wrong was not the people who stuck with the same party, but the proportions of people who changed their vote between the 2010 and 2015 elections. There were too many people who said they’d vote Labour in 2015 but didn’t in 2010, too many people who voted Tory in 2010 but said they wouldn’t in 2015, and so on.
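For anyone unfamiliar with the mechanics, past-vote weighting itself is very simple. The sketch below uses an invented raw sample and purely illustrative target shares to show how each recalled-vote group is scaled to match the previous election's result, and why that fixes the mix of past votes without saying anything about the switchers.

```python
# Sketch of political (recalled past vote) weighting. Both distributions are
# illustrative, not real polling data or exact election results.

election_result = {"Con": 0.37, "Lab": 0.30, "Other": 0.33}   # last-election shares (illustrative)
raw_recall      = {"Con": 0.33, "Lab": 0.34, "Other": 0.33}   # hypothetical raw sample of recalled vote

past_vote_weights = {party: election_result[party] / raw_recall[party]
                     for party in election_result}

for party, w in past_vote_weights.items():
    print(f"{party} recallers: weight {w:.2f}")
# Con recallers are weighted up, Lab recallers down. This corrects the mix of
# *past* votes, but says nothing about whether the people who switched between
# that election and the next are represented correctly - which is where the
# 2015 polls went wrong.
```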

The reason for this is up for debate. My view is that it’s due to poll samples containing people who are too interested in politics, other evidence has suggested it is people who are too easy to reach (these two explanations could easily be the same thing!). The point of this post isn’t to have that debate, it’s to ask what it tells us about how accurate the polls are now.

The day after an election, how you voted at that election is an extremely strong predictor of how you would vote in a fresh election held the next day. If you voted Conservative on Thursday, you’d probably do so again on Friday given the chance. Over time events happen and people change their minds and their voting intention; how you voted last time becomes a weaker and weaker predictor. You also get five years of deaths and five years of new voters entering the electorate, who may or may not vote.

Political weighting is the reason why the polls in Summer 2015 all suddenly showed solid Conservative leads when the same polls had shown the parties neck-and-neck a few months earlier – it was just the switch to weighting to May 2015 recalled vote**. In the last Parliament, polls were probably also pretty much right early in the Parliament, when people’s 2010 vote correlated well with their current support, but as the Lib Dems collapsed and UKIP rose, scattering and taking support from different parties in different proportions, polls must have gradually become less accurate, ending with the faulty polls of May 2015.

What does it tell us about the polls now? Well, it means while many polling companies haven’t made huge changes since the election yet, current polls are probably pretty accurate in terms of party support, simply because it is early in the Parliament and party support does not appear to have changed vastly since the election. At this point in time, weighting samples by how people voted in 2015 will probably be enough to produce samples that are pretty representative of the British public.

Equally, it doesn’t automatically follow that we will see the Conservative party surge into a bigger lead as polling companies do make changes, though it does largely depend on the approach different pollsters take (methodology changes to sampling may not make much difference until there are changes in party support, methodology changes to turnout filters or weighting may make a more immediate change).

Hopefully it means that polls will be broadly accurate for the party political elections in May, the Scottish Parliament, Welsh Assembly and London Mayoral elections (people obviously can and do vote differently in those elections to Westminster elections, but there will be a strong correlation to how they voted just a year before). The EU referendum is more of a challenge given it doesn’t correlate so closely to general election voting and will rely upon how well pollsters’ samples represent the British electorate. As the Parliament rolls on, we will obviously have to hope that the changes the pollsters do end up making keep polls accurate all the way through.

(**The only company that doesn’t weight politically is Ipsos MORI. Quite how MORI’s polls shifted from neck-and-neck in May 2015 to Tory leads afterwards I do not know. They have made only a relatively minor methodological change to their turnout filter. Looking at the data tables, it appears to be something to do with the sampling – ICM, ComRes and MORI all sample by dialling random telephone numbers, but the raw data they get before weighting is strikingly different. Looking at the average across the last six surveys, the raw samples that ComRes and ICM get before they weight their data have an equal number of people saying they voted Labour in 2015 and saying they voted Tory in 2015. MORI’s raw data has four percent more people saying they’d voted Conservative than saying they’d voted Labour, so a much less skewed raw sample. Perhaps MORI have done something clever with their quotas or their script, but it’s clearly working.)