Almost a month on from the referendum campaign I’ve had a chance to sit down and collect my thoughts about how the polls performed. This isn’t necessarily a post about what went wrong since, as I wrote on the weekend after the referendum, for many pollsters nothing at all went wrong. Companies like TNS and Opinium got the referendum resolutely right, and many polls painted a consistently tight race between Remain and Leave. However some did less well, and in the context of last year’s polling failure there is plenty we can learn about which methodological approaches adopted by the pollsters did and did not work for the referendum.

Mode effects

The most obvious contrast in the campaign was between telephone and online polls, and this contributed to the surprise over the result. Telephone and online polls told very different stories – if one paid more attention to telephone polls then Remain appeared to have a robust lead (and many in the media did, having bought into a “phone polls are more accurate” narrative that turned out to be wholly wrong). If one had paid more attention to online polls the race would have appeared consistently neck-and-neck. If one made the – perfectly reasonable – assumption that the actual result would be somewhere in between phone and online, one would still have ended up expecting a Remain victory.

While there was a lot of debate about whether phone or online was more likely to be correct during the campaign, there was relatively little to go on. Pat Sturgis and Will Jennings of the BPC inquiry team concluded that the true position was probably in between phone and online, perhaps a little closer to online, by comparing the results of the 2015 BES face-to-face data to the polls conducted at the time. Matt Singh and James Kanagasooriam wrote a paper called Polls Apart that concluded the result was probably closer to the telephone polls because they were closer to the socially liberal results in the BES data (an issue I’ll return to later). A paper by John Curtice could only conclude that the real result was likely somewhere in between online and telephone, given that at the general election the true level of UKIP support was between phone and online polls. During the campaign there was also a NatCen mixed-mode survey based on recontacting respondents to the British Social Attitudes survey, which found a result somewhere in between online and telephone.

In fact the final result was not somewhere in between telephone and online at all. Online was closer to the final result and, far from being in between, the actual result was more Leave than all of them.

As ever, the actual picture was not quite as simple as this and there was significant variation within modes. The final online polls from TNS and Opinium had Leave ahead, but Populus’s final poll was conducted online and had a ten point lead for Remain. The final telephone polls from ComRes and MORI showed large leads for Remain, but Survation’s final phone poll showed a much smaller Remain lead. ICM’s telephone and online polls had been showing identical leads, but ceased publication several weeks before the result. On average, however, online polls were closer to the result than telephone polls.

The referendum should perhaps also provoke a little caution about probability studies like the face-to-face BES. These are hugely valuable surveys, done to the highest possible standards… but nothing is perfect, and they can be wrong. We cannot tell what a probability poll conducted immediately before the referendum would have shown, but if it had been somewhere between online and phone – as the earlier BES and NatCen data were – then it would also have been wrong.

People who are easy or difficult to reach by phone

Many of the pieces looking at the mode effects in the EU polling examined the differences between people who responded quickly and slowly to polls. The BPC inquiry into the General Election polls analysed the samples from the post-election BES/BSA face-to-face polls and showed how people who responded to the face-to-face surveys on the first contact were skewed towards Labour voters; only after including those respondents who took two or three attempts to contact did the polls correctly show the Conservatives in the lead. The inquiry team used this as an example of how quota sampling could fail, rather than evidence of actual biases which affected the polls in 2015, but the same approach has become more widely used in analysis of polling failure. Matt Singh and James Kanagasooriam’s paper in particular focused on how slow respondents to the BES were also likely to be more socially liberal and concluded, therefore, that online polls were likely to have too many socially conservative people.

Taking people who are reached on the first contact attempt in a face-to-face poll seems like a plausible proxy for people who might be contacted by a telephone poll that doesn’t have time to ring back people who it fails to contact on the first attempt. Putting aside the growing importance of sampling mobile phones, landline surveys and face-to-face surveys do both depend on the interviewee being at home at a civilised time and willing to take part. It’s more questionable why it should be a suitable proxy for the sort of person willing to join an online panel and take part in online surveys that can be done on any internet device at any old time.

As the referendum campaign continued there were more studies that broke down people’s EU referendum voting intention by how difficult they were to interview. NatCen’s mixed-mode survey in May to June found the respondents that it took longer to contact tended to be more leave (as well as being less educated, and more likely to say don’t know). BMG’s final poll was conducted by telephone, but used a 6 day fieldwork period to allow multiple attempts to call-back respondents. Their analysis painted a mixed picture – people contacted on the first call were fairly evenly split between Remain and Leave (51% Remain), people on the second call were strongly Remain (57% Remain), but people on later calls were more Leave (49% Remain).

Ultimately, the evidence on hard-to-reach people ended up being far more mixed than initially assumed. While the BES found hard-to-reach people were more pro-EU, the NatCen survey’s hardest to reach people were more pro-Leave, and BMG found a mixed pattern. This also suggests that one suggested solution to make telephone sampling better – taking more time to make more call-backs to those people who don’t answer the first call – is not necessarily a guaranteed solution. ORB and BMG both highlighted their decision to spend longer over their fieldwork in the hope of producing better samples, both taking six days rather than the typical two or three. Neither were obviously more accurate than phone pollsters with shorter fieldwork periods.

Education weights

During the campaign YouGov wrote a piece raising questions about whether some polls had too many graduates. Level of educational qualifications correlated with how likely people were to support EU membership (graduates were more pro-EU, people with no qualifications more pro-Leave, even after controlling for age) so this did have the potential to skew figures.

The actual proportion of “graduates” in Britain depends on definitions (the common NVQ Level 4+ categorisation in the census includes some people with higher education qualifications below degree-level), but depending on how you define it and whether or not you include HE qualifications below degree level the figure is around 27% to 35%. In the Populus polling produced for Polls Apart 47% of people had university level qualifications, suggesting polls conducted by telephone could be seriously over-representing graduates.

Ipsos MORI identified the same issue with too many graduates in their samples and added education quotas and weights during the campaign (this reduced the Remain lead in their polls by about 3-4 points, so while their final poll still showed a large Remain lead, it would have been more wrong without education weighting). ICM, however, tested education weights on their telephone polls and found it made little difference, while education breaks in ComRes’s final poll suggest they had about the right proportion of graduates in their sample anyway.
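
To make the mechanics concrete, here is a minimal sketch of how re-weighting an over-graduate sample to a population education target moves the headline figure. It is purely illustrative – the group shares and vote splits below are invented numbers, not any pollster’s actual data.

    # Minimal sketch of post-stratification weighting by education.
    # All figures are invented for illustration, not real polling data.
    sample_share = {"graduate": 0.47, "non_graduate": 0.53}      # a sample with too many graduates
    population_share = {"graduate": 0.32, "non_graduate": 0.68}  # a census-style weighting target
    remain_share = {"graduate": 0.62, "non_graduate": 0.42}      # hypothetical Remain share in each group

    def topline_remain(mix):
        """Headline Remain share implied by a given demographic mix."""
        return sum(mix[group] * remain_share[group] for group in mix)

    print(f"Unweighted:         {topline_remain(sample_share):.1%} Remain")      # about 51%
    print(f"Education-weighted: {topline_remain(population_share):.1%} Remain")  # about 48%

On these invented numbers the education weight moves the topline by roughly three points – the same order of magnitude as the effect MORI reported.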

This doesn’t entirely put the issue of education to bed. Data on the educational make-up of samples is spotty, and the overall proportion of graduates in the sample is not the end of the story – because there is a strong correlation between education and age, just looking at overall education levels isn’t enough. There need to be enough poorly qualified people in younger age groups, not just among older generations where it is commonplace.

The addition of education weights appears to have helped some pollsters, but it clearly depends on the state of the sample to begin with. MORI controlled for education, but still over-represented Remain. ComRes had about the right proportion of graduates to begin with, but still got it wrong. Getting the correct proportion of graduates does appear to have been an issue for some companies, and dealing with it helped some companies, but alone it cannot explain why some pollsters performed badly.

Attitudinal weights

Another change introduced by some companies during the campaign was weighting by attitudes towards immigration and national identity (whether people considered themselves to be British or English). Like education, both these attitudes were correlated with EU referendum voting intention. Where they differ from education is that there are official statistics on the qualifications held by the British population, but there are no official stats on national identity or attitudes towards immigration. Attitudes may also be more liable to change than qualifications.

Three companies adopted attitudinal weights during the campaign, all of them online. Two of these used the same BES questions on racial equality and national identity from the BES face-to-face survey that were discussed in Polls Apart… but with different levels of success. Opinium, who were the joint most-accurate pollster, weighted people’s attitudes to racial equality and national identity to a point half-way between the BES findings and their own findings (presumably on the assumption that half the difference was sample, half interviewer effect). According to Opinium this increased the relative position of remain by about 5 points when introduced. Populus weighted by the same BES questions on attitudes to race and people’s national identity, but in their case used the actual BES figures – presumably giving them a sample that was significantly more socially liberal than Opinium’s. Populus ended up showing the largest Remain lead.

It’s clear from Opinium and Populus that these social attitudes were correlated with EU referendum vote and including attitudinal weighting variables did make a substantial difference. Exactly what to weight them to is a different question though – Populus and Opinium weighted the same variable to very different targets, and got very different results. Given the sensitivity of questions about racism we cannot be sure whether people answer these questions differently by phone, online or face-to-face, nor whether face-to-face probability samples have their own biases, but choosing what targets to use for any attitudinal weighting is obviously a difficult problem.
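
As a rough illustration of how much the choice of target matters, here is a sketch of weighting an attitudinal variable all the way to the BES figure versus to a point halfway between the BES figure and the panel’s own figure. The percentages are invented for the example.

    # Sketch of choosing attitudinal weighting targets; all figures invented.
    panel_liberal_share = 0.40  # hypothetical share of an online panel giving the socially liberal answer
    bes_liberal_share = 0.56    # hypothetical share in the face-to-face BES

    # Populus-style: weight all the way to the BES figure.
    target_full = bes_liberal_share

    # Opinium-style: weight to the halfway point, on the assumption that half the
    # gap is sample composition and half is mode/interviewer effect.
    target_half = (panel_liberal_share + bes_liberal_share) / 2

    print(f"Target weighted fully to BES: {target_full:.0%} liberal")  # 56%
    print(f"Halfway target:               {target_half:.0%} liberal")  # 48%

With the same underlying correlation between social attitudes and referendum vote, the two targets would clearly push the headline figures apart – consistent with Populus and Opinium ending up with very different final numbers.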

While it may have been a success for Opinium, attitudinal weighting is unlikely to have improved matters for other online polls – online polls generally produce social attitudes that are more conservative than suggested by the BES/BSA face-to-face surveys, so weighting them towards the BES/BSA data would probably only have served to push the results further towards Remain and make them even less accurate. On the other hand, for telephone polls there could be potential for attitudinal weighting to make samples less socially liberal.

Turnout models

There was a broad consensus that turnout was going to be a critical factor at the referendum, but pollsters took different approaches towards it. These varied from a traditional approach of basing turnout weights purely on respondents’ self-assessment of their likelihood to vote, through models that also incorporated how often people had voted in the past or their interest in the subject, to models based on the socio-economic characteristics of respondents, estimating people’s likelihood to vote from their age and social class.

In the case of the EU referendum Leave voters generally said they were more likely to vote than Remain voters, so traditional turnout models were more likely to favour Leave. People who didn’t vote at previous elections leant towards Leave, so models that incorporated past voting behaviour were a little more favourable towards Remain. Demographic based models were more complicated, as older people were more likely to vote and more Leave, but middle class and educated people were more likely to vote and more Remain. On balance models based on socio-economic factors tended to favour Remain.

The clearest example is NatCen’s mixed-mode survey, which explicitly modelled the two different approaches. Their raw results without turnout modelling would have been REMAIN 52.3%, LEAVE 47.7%. Modelling turnout based on self-reported likelihood to vote would have made the results slightly more “leave” – REMAIN 51.6%, LEAVE 48.4%. Modelling the results based on socio-demographic factors (which is what NatCen chose to do in the end) resulted in topline figures of REMAIN 53.2%, LEAVE 46.8%.
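
The same contrast can be sketched in code. This is not any pollster’s actual model – the respondents, likelihood scores and turnout propensities below are all invented – but it shows how the two families of adjustment weight the same interviews differently.

    # Sketch of two turnout adjustments applied to the same (invented) respondents.
    respondents = [
        # (vote, self-reported likelihood 0-10, age group, social grade)
        ("Leave",  10, "65+",   "C2DE"),
        ("Remain",  9, "18-34", "ABC1"),
        ("Leave",   8, "35-64", "C2DE"),
        ("Remain",  7, "18-34", "ABC1"),
        ("Remain", 10, "65+",   "ABC1"),
    ]

    # Hypothetical turnout propensities by demographic group, e.g. estimated from a past election.
    propensity = {("65+", "ABC1"): 0.85, ("65+", "C2DE"): 0.75,
                  ("35-64", "ABC1"): 0.75, ("35-64", "C2DE"): 0.60,
                  ("18-34", "ABC1"): 0.60, ("18-34", "C2DE"): 0.45}

    def topline(weight_fn):
        """Remain share after weighting each respondent by the given turnout weight."""
        weights = [weight_fn(r) for r in respondents]
        remain = sum(w for r, w in zip(respondents, weights) if r[0] == "Remain")
        return remain / sum(weights)

    # Traditional approach: weight by self-reported likelihood to vote.
    print(f"Self-reported likelihood model: {topline(lambda r: r[1] / 10):.1%} Remain")
    # Demographic approach: weight by the group's turnout propensity at a past election.
    print(f"Demographic propensity model:   {topline(lambda r: propensity[(r[2], r[3])]):.1%} Remain")

If real turnout patterns then diverge from the past-election propensities – as they appear to have done at the referendum – the demographic version carries that error straight into the headline figure.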

In the event ComRes & Populus chose to use methods based on socio-economic factors, YouGov & MORI used methods combining self-assessed likelihood and past voting behaviour (and in the case of MORI, interest in the referendum), Survation & ORB a traditional approach based just on self-assessed likelihood to vote. TNS didn’t use any turnout modelling in their final poll.

In almost every case the adjustments for turnout made the polls less accurate, moving the final figures towards Remain. For the four companies who used more sophisticated turnout models, it looks as if a traditional approach of relying on self-reported likelihood to vote would have been more accurate. An unusual case was TNS’s final poll, which did not use a turnout model at all, but did include data on what their figures would have been if they had. Using a model based on people’s own estimate of their likelihood to vote, past vote and age (but not social class) TNS would have shown figures of 54% Leave, 46% Remain – less accurate than their final call poll, but with an error in the opposite direction to most other polls.

In summary, it looks as though attempts to improve turnout modelling since the general election have not improved matters – if anything the opposite was the case. The risk of basing turnout models on past voting behaviour at elections or the demographic patterns of turnout at past elections has always been what would happen if patterns of turnout changed. It’s true that middle class people normally vote more than working class people, and that older people normally vote more than younger people. But how much more, and how much does that vary from election to election? A model that assumes the same levels of differential turnout between demographic groups as at the previous election risks going horribly wrong if levels of turnout are different… and in the EU referendum it looks as if they were. In their post-referendum statement Populus have been pretty robust in rejecting the whole idea – “turnout patterns are so different that a demographically based propensity-to-vote model is unlikely ever to produce an accurate picture of turnout other than by sheer luck.”

That may be a little harsh: it would probably be a wrong turn if pollsters stopped looking for more sophisticated turnout models than just asking people, and past voting behaviour and demographic considerations may be part of that. It may be that turnout models based on past behaviour at general elections are more successful at modelling general election turnout than referendum turnout. Thus far, however, innovations in turnout modelling don’t appear to have been particularly successful.

Reallocation of don’t knows

During the campaign Steve Fisher and Alan Renwick wrote an interesting piece about how most referendum polls in the past have underestimated support for the status quo, presumably because of late swing or don’t knows breaking for remain. Pollsters were conscious of this and, rather than just ignore don’t knows in their final polls, the majority of pollsters attempted to model how don’t knows would vote. This ranged from simple squeeze questions – which way do don’t knows think they’ll end up voting, which way are they leaning, and suchlike (TNS, MORI and YouGov) – to projecting how don’t knows would vote based upon their answers to other questions. ComRes had a squeeze question and estimated how don’t knows would vote based on how people thought Brexit would affect the economy, Populus on how risky don’t knows thought Brexit was. ORB just split don’t knows 3 to 1 in favour of Remain.
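
The simplest version of this is easy to show in code. The sketch below uses invented final-call figures and applies an ORB-style fixed 3:1 split of the don’t knows towards Remain.

    # Sketch of reallocating don't knows; the starting figures are invented.
    remain, leave, dont_know = 0.44, 0.45, 0.11   # hypothetical final-call shares

    # ORB-style: allocate don't knows 3 to 1 in favour of Remain.
    remain_adj = remain + dont_know * 0.75
    leave_adj = leave + dont_know * 0.25

    # Repercentage to the two-way figures usually reported.
    total = remain_adj + leave_adj
    print(f"Remain {remain_adj / total:.1%}, Leave {leave_adj / total:.1%}")  # roughly 52% / 48%

On these made-up numbers a narrow Leave lead among those expressing a view becomes a small Remain lead after reallocation, which is the direction of the adjustments described below.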

In every case these adjustments helped Remain, and in every case this made things less accurate. Polls that made estimates about how don’t knows would vote ended up more wrong than polls that just asked people how they might end up voting, but this is probably coincidence; both approaches had a similar sort of effect. This is not to say they were necessarily wrong – it’s possible that don’t knows did break in favour of Remain, and that while the reallocation of don’t knows made polls less accurate, it was because it was adding a swing to data that was already wrong to begin with. Nevertheless, it suggests pollsters should be careful about assuming too much about don’t knows – for general elections at least, such decisions can be based more firmly upon how don’t knows have split at past general elections, where hopefully more robust models can be developed.

So what can we learn?

Pollsters don’t get many opportunities to compare polling results against actual election results, so every one is valuable – especially when companies are still attempting to address the 2015 polling failure. On the other hand, we need to be careful about reading too much into a single poll that’s not necessarily comparable to a general election. All those final polls were subject to the ordinary margins of error and there are different challenges to polling a general election and a referendum.

Equally, we shouldn’t automatically assume that anything that would have made the polls a little more Leave is necessarily correct, anything that made polling figures more Remain is necessarily wrong – everything you do to a poll interacts with everything else, and taking each item in isolation can be misleading. The list of things above is by no means exhaustive either – my own view remains that the core problem with polls is that they tend to be done by people who are too interested and aware of politics, and the way to solve polling failure is to find ways of recruiting less political people, quota-ing and weighting by levels of political interest. We found that people with low political interest were more likely to support Brexit, but there is very little other information on political awareness and interest from other polling, so I can’t explore to what extent that was responsible for any errors in the wider polls.

With that said, what can we conclude?

  • Phone polls appeared to face substantially greater problems in obtaining a representative sample than online polls. While there was variation within modes, with some online polls doing better than others, some phone polls doing worse than others, on average online outperformed phone. The probability based samples from the BES and the NatCen mixed-mode experiment suggested a position somewhere between online and telephone, so while we cannot tell what they would have shown, we should not assume they would have been any better.
  • Longer fieldwork times for telephone polls are not necessarily the solution. The various analyses of how people who took several attempts to contact differed from those who were contacted on the first attempt were not consistent, and the companies who took longer over their fieldwork were no more accurate than those with shorter periods.
  • Some polls did contain too many graduates and correcting for that did appear to help, but it was not a problem that affected all companies and would not alone have solved the problem. Some companies weighted by education or had the correct proportion of graduates, but still got it wrong.
  • Attitudinal weights had a mixed record. The only company to weight attitudes to the BES figures overstated Remain significantly, but Opinium had more success at weighting them to a halfway point. Weighting by social attitudes faces problems in determining weighting targets and is unlikely to have made other online polls more Leave, but could be a consideration for telephone polls that may have had samples that were too socially liberal.
  • Turnout models that were based on the patterns of turnout at the last election and whether people voted at the last election performed badly and consistently made the results less accurate – presumably because of the unexpectedly high turnout, particularly among more working class areas. Perhaps there is potential for such models to work in the future and at general elections, but so far they don’t appear successful.

892 Responses to “What we can learn from the referendum polling”

  1. 1st. No to Jexit.

  2. 2nd jez we can

  3. Too many variables to fix the problem. Maybe it was just a truly exceptional election that was impossible to poll accurately.

  4. Well I noticed this comment in the ICM poll

    https://www.icmunlimited.com/polls/

    “1.The release of email invites staggered over a weekend in order to prevent certain types of respondent bed blocking geo-demographic quotas (introduced pre-referendum).
    2.Additional quotas set on voting in the 2015 General Election, allowing for DK/Refusal contributions (introduced late 2015).
    3.Past vote weighting to the 2015 result. The impact of this is negligible, given the application of political quotas as stated in (2) immediately above”

    So in terms of visibility, we no longer have visibility to the weights due to the “quotas” that are not described in the tables explaining the weighting? So what is the point of the tables now, if more and more of the weighting is done by quotas before the tables are even produced.

    Without visibility to the quotas, we really have no way to compare much from pollster to pollster, or any way to analyse or compare anything. And even if we did, who knows what the results would have shown if the quota was not imposed, and the weighting done at the end?

  5. “Maybe it was just a truly exceptional election that was impossible to poll accurately.”

    Just like the last one.

  6. Another independent Candidate for Batley and Spen

    https://twitter.com/Waqas_Batley

    That’s now Labour plus four others.

  7. My impression is that the general impression is that “the pollsters got it wrong again”, however unfair that might be.

    After the election and the referendum I think polling companies are going to have to get a result fairly close to on the nail before this perception goes away. And it might not then, either, as “polling company correct” isn’t such a great headline.

  8. I say independent. He stood for UKIP in Shipley, Bradford in the 2015 GE.

  9. @AW Presumably part of the problem is that referenda (thank heavens) do not come that often and are difficult to predict on the basis of past voting patterns. Is there any mileage in using the early polls for a referendum to estimate the association between factors such as age, party affiliation etc and stated voting intention, and then deriving the weights from these associations combined with population data on ages, party affiliation etc?

  10. Thanks Anthony for such a thoughtful and detailed piece.

    For what it’s worth, I still agree that it’s the sample frames that are the likely root cause of polling error. Too many politically engaged people, not enough respondents from certain age, ethnic and social class groups. These two factors result in a skewed sample magnified by weighting.

    Added to this is the possibility that the additional voters who came out for referendum may stay engaged and could well be from difficult to reach groups – meaning that polling might be about to get harder.

    On the up side, though the outcomes of using social attitudes as a filter were mixed on this occasion, there does seem to be some potential mileage in this approach, so this could be an area worth experimenting with. Plenty of below the line questions have fairly long histories to use.

    Good luck!

  11. I said consistently during the campaign that the pollsters were underestimating the strength of the Leave vote.

    This was based on unscientific, unweighted and self-selecting responses on non-political forums, comments under newspaper articles, social media, posters on display, canvass returns and what the blokes down the pub said.

    But for the tragic death of Jo Cox, and Farage’s ill-advised immigration poster, I believe the Leave vote would have been in the region of 56-58%. The final week saw a swing back to Remain, but not by enough to change the result.

  12. Very interesting blog, Anthony. I do think that the referendum was different to GEs, in that it brought out voters who do not usually bother, and also that traditional Tory and Labour voters could vote for the way they felt without feeling disloyal to their party. Although Cameron and Osborne were strong remainers there were several prominent Tories on the other side, and apparently Labour’s campaign was so invisible that many Labour voters did not even know what the official position was.

    Any methodological changes to correct for the errors might run the risk of being wrong for the next GE, because the Referendum and GE are such different beasts. Party loyalty being a key difference.

  13. @DAVID CARROD

    Interesting – I think I agree with you. My gut feeling all along was that leave was running away with it until the last few days when remain mounted a nearly successful comeback.

  14. Anthony, these are interesting observations.

    However – what are the questions that this post gives the answer to. Without the questions it’s rather difficult to agree or disagree beyond the statement that it is interesting.

  15. O/T

    Theresa May has a lot in common with ordinary Brits. See

    https://blog.findmypast.co.uk/famous-family-trees-theresa-may-1406260824.html

    Her paternal grandmother Amy Patterson Brasier was a parlour maid before she got married. Her maternal grandmother Violet Welland Barnes was a nanny/nurse at the age of 17.

    Maternal great grandfather was a carpenter and builder at various points. Paternal grandfather was a butler.

    If the Bullingdon Boys were patronising her because of her roots, it explains why she was so brutal and culled them all.

  16. PeteB – ” I do think that the referendum was different to GEs, in that it brought out voters who do not usually bother”

    True.

    The question now is whether they will continue to vote, now they’ve got a taste for what it can achieve. Or whether they will lapse back into non-voting.

  17. The political parties are going to be analyzing the raw data frantically. Trying to find out who these extra voters were and what makes them tick and more importantly how they can be enticed to become regular voters. Personally I think they will search in vain

  18. Thanks for your impeccable analysis, AW but really it’s all a bit of a waste of time because @David Carrod, backed up by @Tancred, have come up with a foolproof polling methodology:

    “This was based on unscientific, unweighted and self-selecting responses on non-political forums, comments under newspaper articles, social media, posters on display, canvass returns and what the blokes down the pub said.”

    Perhaps they should go into the polling business together, they could make a fortune.

  19. Guymonde

    If its any consolation my gut feeling was wrong but then I thought remain would win by a point or less, which was my opinion for a long time

  20. @Guymonde

    You may scoff at unscientific analysis of social media and forums etc

    But algorithmic analysis of the above produced an almost spot on result. See

    http://sense-eu.info/

    They predicted Leave by 51.79%. The actual result was 51.9%

  21. Candy

    Thanks for that, it was fascinating but also quite scary

  22. I don’t think the question is being framed correctly.

    Some of the polls were properly wrong, and even those that were almost right all erred in the wrong direction. But what was the overall impression created by the opinion polling, taken as a whole?

    That the referendum would be very tight.

    What was the result?

    The referendum was very tight.

    For me it’s not a question of what went wrong with the polling. I think the errors in the polling were trivial, and although it’s right for the pollsters to try and learn from each election, I don’t think there’s much point in making a big deal out of it.

    The tools of polling simply aren’t sharp enough to reliably predict election outcomes. And I don’t think they ever will be.

    What needs to change is the over-reliance on polls in political discourse. Yes, we’re all polling geeks here, but I do get a bit fed up of the “your side’s definitely going to lose, just look at the opinion polls” theme that sometimes permeates.

    Not to mention the instant “outlier” shouts whenever a poll looks like the Tories might be doing half-decently.

    Margins of error sit at around +/- 3% for most polls. Pretty much every election comes in somewhere close to that margin. There’s simply nothing wrong.

  23. Candy

    That is an interesting link. Thanks.

    It makes sense, if you can measure the changes in opinion through examining millions of social interactions of real people chatting amongst themselves – as opposed to only the tiny set of opinions that most of us can sample.

  24. You can always tweak weighting methods for class and age and gender and self-reported likelihood to vote and so on. There are interesting questions about the best way to model these so that your sample is the most representative possible.

    It seems to me that you can do this for pretty much any characteristic you choose to define, except perhaps with what you might call “engagement”. If voting intentions happen to divide along such lines, with different results from those who are more engaged with society, and those who are less engaged or even disengaged, is there any method to pick this up? Is any sample population not going to be very much skewed towards the “engaged”, those who tend to interact more widely with society? Are these not ultimately the only people you can ever hope to model with any sampling method?

    Anecdotally, this certainly seemed to be a dividing line with respect to the EURef. Is modelling intention on this characteristic simply practically difficult, or is it even theoretically a no-go?

    For instance, on the point of the changing responses by phone call contact attempts, the most interesting distinction it seems to me (beyond the difference between those with and without a phone line at all) would be between those attempted contactees which are successful at some point and those who are never successful at all. What happens to those null responses? Is there any possibility of modelling them from the trend in contact attempts? It looks like there are attempts to try and assess the impact of engagement, but to truly address it do you not need *some* method of estimating and integrating the null data points, whatever sampling method you use?

  25. The flaw in the UK GE election polling in 2015 seemed to be concentrated in England, outwith London.

    I had a look at the 2016 polls for Scotland as recorded in the Wiki article https://en.wikipedia.org/wiki/Opinion_polling_for_the_United_Kingdom_European_Union_membership_referendum

    Other than TNS, which consistently over-estimated [1] the Remain vote (but had around 30% of respondents as Undecided), and Survation, which over-estimated Remain by just under 4%, the polling was reasonably accurate.

    Indeed, Panelbase in April and ICM in May, were less than 1% out.

    Excluding TNS, the 2016 Scottish “poll of polls” had Remain at 64%, compared with the actual 62%.

    That doesn’t seem too bad!

    [1] While I say “over-estimated”, that isn’t really the case. It may well have been the case that Scottish support for Remain was higher earlier in 2016, and that some more people shifted to Leave by the time of the Referendum itself.

  26. A very interesting piece on pb by Mike Smithson :-

    “A staggering 54% of Corbyn supporters in the YouGov members’ poll think their man will lead them to victory
    July 20th, 2016”

    He concludes:
    “How can you argue with people who totally believe this?
    Whenever I get into arguments with them they’ll tell you that you can’t trust the polls because of what happened in May 2015.

    What they don’t appreciate is that in the three general elections in modern times where the polls were badly wrong the LAB position was overstated in every case. Go look at the numbers for 1970, 1992, and 2015”

    ………..so ……..the only way to get rid of Corbyn is to show his supporters that the Polls can be trusted :-)

  27. The thing is, of course, that the political shibboleths have been proven wrong in recent years time after time. The old certainties no longer apply. It’s not at all impossible to imagine a Corbyn-led Labour gaining ground through a period of economic turmoil at least to the extent of being able to force the Tories from power in favour of a rainbow coalition. An uphill struggle, but eminently feasible.

  28. The question which perhaps the pollsters should have asked to anyone aged 59 or over, was whether they voted in the original 1975 referendum, and if so how they voted then and whether they have changed their position now.

    A common theme on the doorsteps was that people voted Yes in 1975 to a free trade club, but had they known what it would evolve into, they would have voted differently.

  29. it seems to me that there are a number of reasons the polls may be ‘wrong’ about a vote

    a) people change their minds between the poll and the vote
    b) what is measured (stated voting intention) is a poor indicator of what is of interest (future voting behaviour)
    c) the composition of those responding to the poll is very different to the composition of those actually voting (e.g. there are too many educated people or too many old people in the sample)
    d) the fact of participating in the poll is associated with voting behaviour (e.g. anyone who responded to an internet poll is per se more likely to vote in a particular way)

    My suggestion to AW above is only related to difficulty C. I made it because there seems to be uncertainty over what to weight by (e.g. should we take into account education and if so how important is it). This difficulty appears to be related to the fact that some variables (e.g. education) play out differently in some contexts as against others.

    My suggestion was that one might approach this problem ‘empirically’. It is possible in early polls to predict voting intention (e.g. to run a logistic regression which will in essence yield a probability that an individual with certain characteristics will have a stated intention to vote in a certain way). Such a regression will suggest that ‘having a degree’ adds a certain amount to the ‘logit’ (a function of the probability) relating to voting, being over 60 or whatever adds or subtracts a bit more, and so on. If one has a very large random sample of the population one can then use these estimated values to calculate how each individual in that population would be expected to vote. This in turn should yield you an overall estimate of how the population overall would vote. Obviously one can make the model more or less complicated by adding in such things as probability of voting and seeing what difference that makes.

    It may be that this suggestion would not work at all. It is, however, something that could be done with the data that the polling companies have available to them (At least I assume that they can lay their hands on a very large random population with the needed measures). I am not sure why they do not do it.
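
    In rough code the idea might look something like the sketch below – every variable, figure and data frame here is hypothetical; it is just meant to show the shape of the calculation.

        # Sketch: fit a model of stated intention on (invented) early poll data,
        # then project it onto a large random sample of the population.
        import pandas as pd
        from sklearn.linear_model import LogisticRegression

        early_polls = pd.DataFrame({
            "degree":  [1, 0, 1, 0, 0, 1, 0, 0],
            "over_60": [0, 1, 0, 1, 1, 0, 0, 1],
            "leave":   [0, 1, 0, 1, 1, 0, 1, 1],   # stated intention: 1 = Leave
        })
        model = LogisticRegression().fit(early_polls[["degree", "over_60"]], early_polls["leave"])

        # A large probability sample of the population with the same characteristics measured.
        population_frame = pd.DataFrame({
            "degree":  [1, 0, 0, 0, 1, 0],
            "over_60": [0, 0, 1, 1, 0, 1],
        })

        # Average predicted probability of a Leave intention across the population frame.
        leave_estimate = model.predict_proba(population_frame[["degree", "over_60"]])[:, 1].mean()
        print(f"Projected Leave share: {leave_estimate:.1%}")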

  30. @Oldnat – “It makes sense, if you can measure the changes in opinion through examining millions of social interactions of real people chatting amongst themselves….”

    I find this fascinating, for another reason.

    When government ministers like Theresa May try to initiate ‘snoopers charters’ and other eavesdropping techniques, there is general outrage among the liberal campaign set about infringement of civil liberties etc etc.

    Yet these same people are happy to pass over vast amounts of personal information to their credit card companies, to supermarkets and shops through their store cards, and to global mega corporations via their lives on social media. But let government get hold of any data – that’s an enormous no no.

    I personally trust government much more than big business, albeit still with worries regarding protection for individuals, but I find the fact that private companies can collect individual postings on social media and from this aggregate the national will very interesting – not least because had the government done this there would be reams of press articles denouncing ‘Big Brother’. (Or is it ‘Big Sister’ these days?)

  31. @Alec

    You’re quite right. Tracking the activities of “normal” people is very straightforward. It’s the clandestine, secretive people who are difficult to get a handle on. By and large, criminals, terrorists and other bad people fall into the latter group.

    That’s why we need surveillance powers. Not because I want to read your emails to your wife. If I wanted to know about your relationship with your wife I could probably read about it on social media, or look at your shopping habits via your Loyalty Cards, or check where your car had been on ANPR cameras.

    You’re probably not chatting with your wife via TOR using an unattributable mobile mi-fi device, and embedding secret messages to her in the metadata of image files (etc, etc).

  32. @Candy
    Fascinating link, thanks. So on 15th June some catastrophe happened to Remain, from which it never really recovered. Any theories?

  33. Great post from AW. I think he and his friends will have another opportunity to test their EU referendum techniques quite soon.

    @Colin – that is the staggering delusion that is destroying Labour, and which we can see some evidence of on UKPR.

    Alongside the 54% of Labour members polled who actually think Corbyn can win, I’d like to know how many of them think he can’t but that this doesn’t matter, as that is the other side of Labour’s problem.

    This is one of the reasons why I think the coup plotters have been very dim. While I thought Corbyn’s delivery of Labour’s referendum campaign was dreadful, his central message I thought was sound – he was just unable or unwilling to articulate this properly or get on the stump in the places that mattered. Launching a coup at this time was a mistake.

    Similarly, the May elections were poor for Labour, but not disastrous, even while the coup plotters tried to make them so. In the English locals, while there were obvious stresses in some of Labour’s core heartlands, they managed to hold and win some seats in southern areas. Wales was bad, and Scotland a complete disaster, but it’s a moot point how much blame should be laid at Corbyn’s door for that.

    If Corbyn is going to be deposed, his acolytes need to be faced with much more incontrovertible evidence that the party is heading for electoral oblivion. A couple more local election cycles will probably do the trick, in terms of providing the numbers, but whether his fans accept these (‘it’s an MSM plot to smear him!’) we don’t know.

    Labour’s fear now must be that by being too twitchy and trying to launch a coup not once but twice, the plotters have created resistance to any move to unseat Corbyn and made their task even harder.

  34. Alec

    In that poll only 10% of Corbyn supporters said he was ‘Likely to lead Labour to defeat at the next general election.’

    Not much ammunition there for people who think his supporters aren’t interested in parliamentary success. Unless there’s massive social desirability bias affecting responses, which seems fairly unlikely.

  35. @Neil A – very sensible.

    I personally don’t have a problem with government eavesdropping. My only concern is more on the practical side.

    My impression (from things like the Iraq invasion, for example) is that western governments think that intelligence can be gathered by spending lots of money on electronic surveillance in all its forms. In reality, generating a mass of data doesn’t give us the protection we need unless we can analyse it successfully. In the case of Iraq, they had reams of data, but almost no analysts with the right language skills to assess the stuff.

    I’m not sure what the score is with criminal intelligence or with current UK security services, but for a long time we seem to have put less emphasis on human intelligence and getting infiltrators where we need them.

  36. @ Alec

    I think what they try to say that they (their ideas) can win rather than Corbyn (he is just a representation of these).

    If you recall @ Guymonde’s and @ Norbold’s comments on the relatively low participation of the new members, it suggests that it is not the groupthink of the Corbyn followers, but a particular social group’s (or several groups) view. In that YouGov poll used by Sussex Uni’s research it said that a sizeable proportion of committed followers of Corbyn voted for Greens against Khan in the London mayoral elections.

    As to MSM, they have a point.

    http://www.lse.ac.uk/[email protected]/research/pdf/JeremyCorbyn/Cobyn-Report-FINAL.pdf

  37. Committed Labour supporters perceive themselves as on the left of their party as a self-declaration, but in their more detailed values they are markedly to the left. These were the people to whom Corbyn appealed last year, and he then attracted more of them to the party.

    Here’s some data on it: http://blogs.lse.ac.uk/politicsandpolicy/ideology-is-in-the-eye-of-the-beholder/

  38. When studies like the SENSE-EU piece linked above come close to actual results they look impressive, but it’s pure coincidence.

    There are huge structural problems with social media data that mean it’s functionally useless for large scale quant / longitudinal studies of this type (it can be useful for more qual-ish studies, but that’s not what is attempted here).

    Those issues:

    1. Only a tiny amount of social data (substantially less than 10%) has trustworthy (or any) metadata about posters. For a study of this type, you would want to at bare minimum validate that comments are made by people that might be qualified to vote (i.e. identified as UK citizens and / or physically located in the UK). Without this, a substantial portion of posts will be sourced from the US in particular (along with Kremlin troll ops; asian content farms and the like). Those that do have location metadata in particular are mostly from an unusual niche group of posters (people posting to Twitter from mobile devices, with a specific set of posting settings enabled)

    2. The social conversations collected in the study would be extremely unrepresentative. Social listening focuses overwhelmingly on Twitter (because it’s easy to collect Twitter data, and the source delivers relatively high volumes of data; I expect to see 60 – 80% of content in studies of this type coming from Twitter). The demographic of active Twitter users is not remotely representative of the UK voting population. Facebook, which is fairly representative, would not be covered by a study of this type at all (Facebook conversations are not available for text mining in anything other than a very superficial way)

    I stress these are structural problems – they mean that the sample being analysed doesn’t correlate to the group that performance is being benchmarked to.

  39. As 74% of the £3 supporters were ABC1, and their average age was 51, if they had not become a member by January, and if they remained supporters of Corbyn (judging from the YouGov survey of new members they remained), they won’t have a problem with paying the £25 in the short window of signing up.

  40. @ Chris

    Thank you for the detailed comment on analysing social media content.

    Roughly this is what an industry expert said on a VC fund application for an analytical software for this purpose (the VC turned the application down – it was the second round funding).

  41. ALEC

    The really dramatic & relevant factor in that Poll is the difference in attitudes to Corbyn of the two cohorts-those at GE 2015 & those joining after it.

    The Poll sample has more of the latter than the former-I presume this reflects the whole?

    It certainly spells out Owen Smith’s task in no uncertain terms. ( 6% of the post 2015 membership believe he will lead Labour to victory at the next GE)

    I hope YouGov track LP members opinions through the hustings-though Smith was complaining this morning about the lack of head to head debates with JC.

  42. There is an interesting difference between those who became LP members and those who chose the £3 route after the last GE.

    Union membership was lower among the latter, and very few of them were Momentum members (proportionally – 3%, while the party recruits were 9%). So these people are likely to decide whether they pay the £25 on their own rather than being mobilised by Momentum or the unions.

    It is also interesting that registered supporters were more likely to have voted for Greens (19%) and less likely to have voted for Labour (64%) in comparison with the party joiners (13% and 72% respectively).

  43. Laszlo

    Just because £3 supporters were predominantly ABC1 last year doesn’t mean that the same demographic is in play this year. Or that the people who aren’t ABC1 should be ignored. Unfortunately there are many who have been disenfranchised as members that would like to vote in the leadership election but can’t afford the £25.

    According to one MP anyone who can’t afford the 25 quid is unlikely to be interested in politics. This may be true, but it’s not something we should be happy about, and as a defence of the £25 voting tax it embodies all that’s wrong with Labour.

    I suppose that suspending members for organizing to help pay the poll tax of those that can’t afford it is going to endear the party establishment to ordinary members who have been saved from their socialist tendencies by the party. I see the legal point but it’s difficult to see the moral point unless we actually see the unwanted as non-people.

  44. @ CambridgeRachel

    Yes, I know, and it is unfortunate.

    I just wanted to provide data for a better evaluation, and counter the general (false) impression (with data).

  45. Labour can at least be grateful for one thing.

    They are still in a better state than the GOP.

  46. Somehow I managed to cut off the bottom of my previous comment.

    There is a sizeable upper middle, middle and upper working class population that has moved distinctly to the left. Many of them found home in the LP, but many of them actually don’t have high loyalty to the LP.

    Some call them purists, which might as well be true, but the key point is that they decide on what they do (like voting in an election) on a case by case basis. They should be in leadership, but they tend to choose a vague followership role. This is what makes them so elusive – the new swarm.

  47. “If you recall @ Guymonde’s and @ Norbold’s comments on the relatively low participation of the new members”

    ———–

    Worth considering, that if you are a new, young, pro-Corbyn member, joining an established group of members who are decidedly not-for-Corbyn, and you wanna campaign for Corbyn, but the established members would rather you didn’t… your participation might not be that welcome. Of course they might blame you for this…

  48. @Cambridgerachel – I genuinely think it’s a difficult issue. The low cost way to become a voting member of the party frankly distorts internal Labour party politics. For example, Toby Young of the Daily Telegraph joined last time round to vote for Corbyn. [he didn’t, as his membership was blocked, but believe me – there will be many other anti-Labour people out there who are extremely happy to sign up and vote Corbyn, happy in the knowledge that they are destroying their political opponents].

    There is a really tough balancing act between trying to broaden out political engagement and allowing entryism. The open primary to select Sarah Wollaston seems to have worked (when I thought it was nothing more than a gimmick at the time) but Labour’s cut price membership has led to problems.

    I personally don’t believe political parties should have their leadership decided upon by people who have no real engagement with the party other than joining for the price of a cup of coffee. We know from both anecdote and from detailed polling evidence that the £3 members are not very active in the party. I think it would be perfectly justified for parties to require voting members to have been members for 6 or 12 months before the announcement of a leadership campaign and to have attended a minimum number (3?) of local party meetings. Otherwise you will just attract the Toby Youngs of this world and leave your party in the hands of a mixture of its enemies and delusional nutters who believe what they read on social media.

  49. A week before the referendum on this website (see below) I forecast turnout of 72 per cent (it was 71.8) and that Remain would poll Gibraltar 95 per cent (it was 95.9), Scotland 70 (62), NI 65 (55), Wales 50 (52.5), London 58 (60), Rest of England 45 (44.7).

    I also recommended the best punt – William Hill had Wales Leave 5/2, which you would have won (as did I).

    ——-

    Remain’s problem is they do best in territories with the smallest electorates. Approximate eligible electorates: England 37.4 million (of which London 5.2 million), Wales 2.2 million, Scotland 3.9 million, NI 1.2 million, Gibraltar 23,000. Total 44.7 million.
    Scottish referendum turnout 85 per cent, GE 66 per cent. Assume 72 per cent, so England 26.9 million (of which London 3.74 million), Wales 1.58 million, Scotland 2.8 million, NI 860,000, Gibraltar 20,000 (assumes 90 per cent).
    Remain polls, say, Gibraltar 95 per cent, Scotland 70 per cent, NI 65 per cent, Wales 50 per cent (William Hill has Wales Leave on 5/2, which is about the best punt), London 58 per cent, Rest of England 45 per cent.
    So Remain/Leave = Gibraltar 19,000/1,000, Scotland 1,960,000/840,000, NI 559,000/301,000, Wales 790,000/790,000, London 2,169,200/1,570,800, Rest of England 10,422,000/12,738,000. Total 15,919,200/16,240,800. 49.5/50.5 per cent
    Of the 5.6 million ex-pats, three million can vote (15 year rule). Two years ago only 23,366 were registered, but of course many more could now do so. About a million live in the EU, so possibly 700,000 are eligible. Let’s assume 350,000 register and that 90 per cent vote to Remain. This adds 315,000/35,000. (Others outside the EU will also register, but assume they’re evenly split).
    So the new Remain/Leave total is 16,234,200/16,275,800. Leave majority 41,600.

  50. “We know from both anecdote but also from detailed polling evidence that the £3 members are not very active in the party.”

    —————-

    Worth considering, that if you are a new, young, pro-Corbyn member, joining an established group of members who are decidedly not-for-Corbyn, and you wanna campaign for Corbyn, but the established members would rather you didn’t… your participation might not be that welcome. Of course they might blame you for this…
