It is a year since the 2017 general election. I am sure lots of people will be writing articles looking back at the election itself and the year since, but I thought I'd write something about the 2017 polling error, which has gone largely unexamined compared to the 2015 error. The polling companies themselves have all carried out their own internal examinations and reported to the BPC, and the BPC will be putting out a report based on them in due course. In the meantime, here are my own personal thoughts on the wider error across the industry.

The error in 2017 wasn’t the same as 2015.

Most casual observers of polls will probably have lumped the errors of 2015 and 2017 together, and seen 2017 as just "the polls getting it wrong again". In fact the nature of the error in 2017 was completely different to that in 2015. It would be wrong to say they are unconnected – the cause of the 2017 errors was often pollsters trying to correct the error of 2015 – but the way the polls were wrong was completely different.

To understand the difference between the errors in 2015 and the errors in 2017 it helps to think of polling methodology as being divided into two parts. The first is the sample – the way pollsters try to get respondents who are representative of the public, be that through the sampling itself or the weights they apply afterwards. The second is the adjustments they make to turn that sample into a measure of how people would actually vote – how they model things like turnout, and how they account for people who say they don't know or refuse to answer.

In 2015, the polling industry got the first of those wrong, and the second right (or at least, the second of those wasn't the cause of the error). The Sturgis Inquiry into the 2015 polling error looked at every possible cause of error, and decided that the polls had samples that were not representative. While the inquiry didn't think the way pollsters predicted turnout was based on strong enough evidence, and recommended improvements there too, it ruled turnout out as the cause of the 2015 error.

In 2017 it was the opposite situation. The polling samples themselves had pretty much the correct result to start with, showing only a small Tory lead. More traditional approaches to modelling turnout (which typically made only small differences) would have resulted in polls that only marginally overstated the Tory lead. The large errors we saw in 2017 were down to the more elaborate adjustments that pollsters had introduced. If you had stripped away all the attempts at modelling turnout, don't knows and suchlike (as in the table below) then the underlying samples the pollsters were working with would have got the Conservative lead over Labour about right:

What did pollsters do that was wrong?

The actual things that pollsters did to make their figures wrong varied from pollster to pollster. For ICM, ComRes and Ipsos MORI, it looks as if new turnout models inflated the Tory lead; for BMG it was their new adjustment for electoral registration; for YouGov it was reallocating don't knows. The details were different in each case, but what they had in common was that pollsters had introduced post-fieldwork adjustments that had larger impacts than at past elections, and which ended up over-adjusting in favour of the Tories.

In working out how pollsters came to make this error we need to take a closer look at the diagnosis of what went wrong in 2015. Saying that samples were "wrong" is easy; if you are going to solve the problem you need to identify how they were wrong. After 2015 the broad consensus within the industry was that the samples had contained too many politically engaged young people who went out to vote Labour and not enough uninterested young people who stayed at home. Polling companies took a mixture of two different approaches to dealing with this, though most companies did a bit of both.

One approach was to try and treat the cause of the error by improving the samples themselves, trying to increase the proportion of respondents who had less interest in politics. Companies started adding quotas or weights that had a more direct relationship with political engagement, things like education (YouGov, Survation & Ipsos MORI), newspaper readership (Ipsos MORI) or outright interest in politics (YouGov & ICM). Pollsters who primarily took this approach ended up with smaller Tory leads.

The other was to try and treat the symptom of the problem by introducing new turnout models that assumed lower turnout among demographic groups that had not traditionally turned out to vote in the past, and where pollsters felt their samples contained too many likely voters. The most notable example was the decision by some pollsters to replace turnout models based on self-assessment with turnout models based on demographics – downweighting groups like the young or working class who have traditionally had lower turnouts. Typically these changes produced polls with substantially larger Conservative leads.
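
To make the mechanics concrete, here is a minimal sketch of how a demographic turnout adjustment of this kind can work. The age bands, turnout probabilities and sample are all invented for illustration – they are not any polling company's actual figures:

```python
# Sketch of a demographic turnout adjustment. Each respondent's weight is
# multiplied by an assumed probability of voting for their demographic
# group, so groups with historically low turnout (e.g. the young) count
# for less in the headline figures. All numbers are hypothetical.
TURNOUT_BY_AGE = {"18-24": 0.45, "25-49": 0.60, "50-64": 0.70, "65+": 0.80}

def apply_demographic_turnout(respondents):
    """Scale each respondent's weight by their group's assumed turnout."""
    for r in respondents:
        r["weight"] *= TURNOUT_BY_AGE[r["age_band"]]
    return respondents

def vote_shares(respondents):
    """Weighted vote shares among respondents expressing a preference."""
    totals = {}
    for r in respondents:
        if r["vote"] is not None:
            totals[r["vote"]] = totals.get(r["vote"], 0.0) + r["weight"]
    grand = sum(totals.values())
    return {party: round(100 * w / grand, 1) for party, w in totals.items()}

sample = [
    {"age_band": "18-24", "vote": "Lab", "weight": 1.0},
    {"age_band": "25-49", "vote": "Lab", "weight": 1.0},
    {"age_band": "50-64", "vote": "Con", "weight": 1.0},
    {"age_band": "65+",   "vote": "Con", "weight": 1.0},
]
# The raw sample splits 50/50; after the adjustment the older,
# higher-turnout groups pull the weighted shares towards the Tories.
print(vote_shares(apply_demographic_turnout(sample)))  # Con 58.8, Lab 41.2
```

The adjustment itself is mechanical – the contentious part in 2017 was the set of turnout probabilities fed into it, which assumed turnout patterns like those of past elections.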

So was it just to do with pollsters getting youth turnout wrong?

This explanation chimes nicely with the idea that the polling error was down to polling companies getting youth turnout wrong – that young people actually turned out at an unusually high level, but polling companies fixed youth turnout at an artificially low level, thereby missing this surge in youth voting. This is an attractive theory at first glance but, as is so often the case, it's actually a bit more complicated than that.

The first problem with the theory is that it’s far from clear whether there was a surge in youth turnout. The British Election Study has cast doubt upon whether or not youth turnout really did rise that much. That’s not a debate I’m planning on getting into here, but suffice to say, if there wasn’t really that much of a leap in youth turnout, then it cannot explain some of the large polling misses in 2017.

The second problem with the hypothesis is that there isn't much of a relationship between the polling companies who had about the right proportion of young people in their final figures and those who got the result right.

The chart below shows the proportion of voters aged under 25 in each polling company's final polling figures. The blue bar is the proportion in the sample as a whole, the red bar the proportion in the final voting figures, once pollsters had factored in turnout, dealt with don't knows and so on. As you would expect, everyone had roughly the same proportion of under-25s in their weighted sample (in line with the actual proportion of 18-24 year olds in the population), but among their samples of actual voters it differed radically. At one end, less than 4% of BMG's final voting intention figures were based on people aged under 25. At the other end, almost 11% of Survation's final voting figures were.

According to the British Election Study, the closest we have to authoritative figures, the correct figure should have been about 7%. That implies Survation got it right despite having far too many young people. ComRes had too many young people, yet had one of the worst understatements of Labour support. MORI had close to the correct proportion of young people, yet still got it wrong. There isn't the neat relationship we'd expect if this were all about getting the correct proportion of young voters. Clearly the explanation must be rather more complicated than that.

So what exactly did go wrong?

Without a nice, neat explanation like youth turnout, the best overarching explanation for the 2017 error is that polling companies, seeking to solve the overstatement of Labour in 2015, simply went too far and ended up understating Labour in 2017. The details differed from company to company, but it's fair to say that the more elaborate the adjustments polling companies made for things like turnout and don't knows, the worse they performed. Essentially, polling companies overdid it.

Weighting down young people was part of this, but it was certainly not the whole explanation, and some pollsters came unstuck for different reasons. This is not an attempt to look in detail at each pollster, as they may also have had individual factors at play (in BMG's report, for example, they also highlighted the impact of doing late fieldwork during the daytime), but there is a clear pattern of over-enthusiastic post-fieldwork adjustments turning essentially decent samples into final figures that were too Conservative:

  • BMG’s weighted sample would have shown the parties neck-and-neck. With just traditional turnout weighting they would have given the Tories around a four point lead. However, combining this with an additional down-weighting by past non-voting and the likelihood of different age/tenure groups to be registered to vote changed this into a 13 point Tory lead.
  • ICM’s weighted sample would have shown a five point Tory lead. Adding demographic likelihood to vote weights that largely downweighted the young increased this to a 12 point Tory lead.
  • Ipsos MORI’s weighted sample would have shown the parties neck-and-neck, and MORI’s traditional 10/10 turnout filter looks as if it would have produced an almost spot-on 2 point Tory lead. An additional turnout filter based on demographics changed this to an 8 point Tory lead.
  • YouGov's weighted sample had a 3 point Tory lead, which would have been unchanged by their traditional turnout weights (and which also exactly matched their MRP model). Reallocating don't knows changed this to a 7 point Tory lead (a toy illustration of this reallocation effect follows this list).
  • ComRes’s weighted sample had a 1 point Conservative lead, and by my calculations their old turnout model would have shown much the same. Their new demographic turnout model did not actually understate the proportion of young people, but did weight down working class voters, producing a 12 point Tory lead.
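
To illustrate the don't-know effect in the YouGov bullet, here is a toy calculation. It assumes don't knows are reallocated to the party they recall previously voting for – one common reallocation scheme, not necessarily the exact method any pollster used – and all the counts are invented:

```python
# Toy example: how reallocating "don't knows" can stretch a headline lead.
# Don't knows are assigned back to the party of their recalled past vote --
# a common scheme, not any pollster's exact 2017 method. Counts invented.

def shares(counts):
    total = sum(counts.values())
    return {p: 100 * n / total for p, n in counts.items()}

voters = {"Con": 400, "Lab": 370, "Other": 230}     # expressed a preference
dk_by_recall = {"Con": 90, "Lab": 40, "Other": 20}  # don't knows, by recalled vote

before = shares(voters)
after = shares({p: voters[p] + dk_by_recall[p] for p in voters})

print(f"Lead before reallocation: {before['Con'] - before['Lab']:.1f}")  # 3.0
print(f"Lead after reallocation:  {after['Con'] - after['Lab']:.1f}")    # 7.0
```

Because recalled Conservatives dominate the don't knows in this invented example, the reallocation stretches a 3 point lead to 7 points – the same direction of travel as the adjustments listed above.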

Does this mean modelling turnout by demographics is dead?

No. Or at least, it shouldn’t do. The pollsters who got it most conspicuously wrong in 2017 were indeed those who relied on demographic turnout models, but this may have been down to the way they did it.

Normally weights are applied to a sample all at the same time using "rim weighting" (an iterative process that lets you weight by multiple items without them throwing each other off). What happened with the demographic turnout modelling in 2017 is that companies effectively did two separate rounds of weighting. First they weighted the demographics and past vote of the data so that it matched the British population. Then they effectively added separate weights by things like age, gender and tenure, so that the demographics of the people included in their final voting figures matched the people who actually voted in 2015. The problem is that this may well have thrown out the past vote figures: the 2015 voters in their samples matched the demographics of 2015 voters, but no longer matched the politics of 2015 voters.
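
For readers unfamiliar with it, here is a bare-bones sketch of rim weighting, with invented respondents and targets (real implementations add convergence checks, weight trimming and many more variables):

```python
import numpy as np

# Bare-bones rim weighting (iterative proportional fitting, or "raking"):
# weights are adjusted one variable at a time, cycling until every weighted
# margin matches its target simultaneously. Data and targets are invented.

def rake(weights, categories, targets, cycles=50):
    w = weights.astype(float).copy()
    for _ in range(cycles):
        for var, target in targets.items():
            labels, total = categories[var], w.sum()
            for cat, share in target.items():
                mask = labels == cat
                current = w[mask].sum()
                if current > 0:
                    w[mask] *= share * total / current  # hit this margin exactly
    return w

weights = np.ones(6)
categories = {
    "gender": np.array(["m", "m", "m", "m", "f", "f"]),
    "age":    np.array(["18-24", "25+", "25+", "25+", "18-24", "25+"]),
}
targets = {
    "gender": {"m": 0.5, "f": 0.5},
    "age":    {"18-24": 0.3, "25+": 0.7},
}
print(rake(weights, categories, targets))  # both margins now match targets
```

The failure mode described above is what happens when a second, separate pass of turnout weights is bolted on after this process has converged: the second pass can knock the earlier margins – including past vote – away from their targets.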

It's worth noting that some companies used demographic-based turnout modelling and were far more successful. Kantar's polling used a hybrid turnout model based upon both demographics and self-reporting, and was one of the most accurate polls. SurveyMonkey's turnout modelling was based on the demographics of people who voted in 2015, and produced only a 4 point Tory lead. YouGov's MRP model used demographics to predict respondents' likelihood to vote and was extremely accurate. Some companies made a success of it, so it may be more a question of how to do it well than of whether to do it at all.
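
The piece doesn't describe how Kantar actually combined the two signals, so the following is only a guess at the general shape of a hybrid model – blending a self-reported likelihood to vote with a demographic base rate, using assumed numbers throughout:

```python
# Hypothetical sketch of a "hybrid" turnout score: blend a self-reported
# likelihood to vote (the commonly asked 0-10 scale) with a demographic
# base rate. The blend weight and base rates are assumptions for
# illustration, not Kantar's actual formula.

BASE_RATE_BY_AGE = {"18-24": 0.45, "25-49": 0.60, "50-64": 0.70, "65+": 0.80}

def hybrid_turnout_prob(self_report, age_band, blend=0.5):
    """Weighted average of self-assessment (0-10) and demographic base rate."""
    return blend * (self_report / 10) + (1 - blend) * BASE_RATE_BY_AGE[age_band]

# A young respondent who says they are certain to vote keeps a high score,
# rather than being downweighted purely for being young:
print(hybrid_turnout_prob(10, "18-24"))  # 0.725
```

The attraction of a hybrid is visible even in this toy version: it lets unusual individuals (a highly engaged young voter, a disengaged pensioner) escape the average behaviour of their demographic group.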

What have polling companies done to correct the 2017 problems, and should I trust them?

For individual polling companies the errors of 2017 are far more straightforward to address than those of 2015. For most polling companies it has been a simple matter of dropping the adjustments that went wrong. All the causes of error I listed above have simply been reversed – for example, ICM have dropped their demographic turnout model and gone back to asking people how likely they are to vote, and ComRes have done the same. MORI have stopped factoring demographics into their turnout filter, YouGov are no longer reallocating don't knows, and BMG are not currently weighting down groups with lower registration.

If you are worried that the specific type of polling error we saw in 2017 could be happening now, you shouldn't be – all the methods that caused the error have been removed. The simplistic view that the polls understated Labour in 2017 and that, therefore, Labour are actually doing better than the polls suggest is fallacious. However, that is not a guarantee that polls couldn't be wrong in other ways.

But what about the polling error of 2015?

This is a much more pertinent question. The methodology changes that were introduced in 2017 were intended to correct the problems of 2015. So if the changes are reversed, does that mean the errors of 2015 will re-emerge? Will polls risk *overstating* Labour support again?

The difficult situation the polling companies find themselves in is that the methods used in 2017 would have got 2015 correct, but got 2017 wrong. The methods used in 2015 would have got 2017 correct, but got 2015 wrong. The question we face is: what approach would have got both 2015 and 2017 right?

One answer may be for polling companies to use more moderate versions of the changes they introduced in 2017. Another may be to concentrate more on improving samples, rather than on post-fieldwork adjustments for turnout. As we saw earlier in the article, polling companies took a mixture of two approaches to solving the problem of 2015. The approach of "treating the symptom" by changing turnout models and the like ended up backfiring, but what about the first approach – what became of the attempts to improve the samples themselves?

As we saw above, the actual samples the polls used were broadly accurate. They tended to show the smaller parties too high, but the balance between Labour and Conservative was pretty accurate. For one reason or another, the sampling problem from 2015 appears to have completely disappeared by 2017. 2015 samples were skewed towards Labour; in 2017 they were not. I can think of three possible explanations for this.

  • The post-2015 changes made by the polling companies corrected the problem. This seems unlikely to be the sole reason, as polling samples were better across the board, with those companies who had done little to improve their samples performing in line with those who had made extensive efforts.
  • Weighting and sampling by the EU ref made samples better. There is one sampling/weighting change that nearly everyone made – they started sampling/weighting by recalled EU ref vote, something that was an important driver of how people voted in 2017. It may just be that providence has provided the polling companies with a useful new weighting variable that meant samples were far better at predicting vote shares.
  • Or perhaps the causes of the problems in 2015 just weren’t an issue in 2017. A sample being wrong doesn’t necessarily mean the result will be wrong. For example, if I had too many people with ginger hair in my sample, the results would probably still be correct (unless there is some hitherto unknown relationship between voting and hair colour). It’s possible that – once you’ve controlled for other factors – in 2015 people with low political engagement voted differently to engaged people, but that in 2017 they voted in much the same way. In other words, it’s possible that the sampling shortcomings of 2015 didn’t go away, they just ceased to matter.

It is difficult to come to a firm answer with the data available, but whichever mix of these is the case, polling companies shouldn't be complacent. Some of them have made substantial attempts to improve their samples since 2015, but if the problems of 2015 disappeared because of the impact of weighting by Brexit, or because political engagement mattered less in 2017, then we cannot really tell how successful those attempts were. And it stores up potential problems for the future – weighting by a referendum that happened in 2016 will only be workable for so long, and if political engagement didn't matter this time, it doesn't mean it won't in 2022.

Will MRP save the day?

One of the few conspicuous successes in the election polling was the YouGov MRP model (that is, multilevel regression and post-stratification). I expect that come the next election there will be many other attempts to do the same. I will urge one note of caution – MRP is not a panacea for polling's problems. MRP models can go wrong, and they still rely on the decisions people make in designing the model they run on.

MRP is primarily a method of modelling opinion in smaller geographical areas from a big overall dataset. Hence in 2017 YouGov used it to model the share of the vote in each of the 632 constituencies in Great Britain. In that sense, it's a genuinely important step forward in election polling, because it properly models actual seat numbers and, from there, who will win the election and be in a position to form a government. Previously polls could only predict shares of the vote, which others could use to project into a result using the rather blunt tool of uniform national swing. MRP produces figures at the seat level, so it can be used to predict the actual result.
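
As a rough illustration of the post-stratification half of the technique, here is a sketch with invented cell estimates and population counts; the multilevel regression that would actually produce the cell estimates is omitted:

```python
# Sketch of the post-stratification step in MRP. A fitted model (not shown)
# supplies an estimated Con vote probability for each demographic cell;
# census-style counts say how many people fall in each cell in a given
# constituency. All numbers are invented for illustration.

cell_estimates = {            # P(vote Con | age, education), from the model
    ("18-24", "degree"):    0.20,
    ("18-24", "no degree"): 0.35,
    ("65+",   "degree"):    0.50,
    ("65+",   "no degree"): 0.65,
}

constituency_cells = {        # head counts per cell in one constituency
    ("18-24", "degree"):     4_000,
    ("18-24", "no degree"):  6_000,
    ("65+",   "degree"):     8_000,
    ("65+",   "no degree"): 12_000,
}

def poststratify(estimates, cells):
    """Population-weighted average of the cell-level estimates."""
    total = sum(cells.values())
    return sum(estimates[c] * n for c, n in cells.items()) / total

print(f"Estimated Con share: {poststratify(cell_estimates, constituency_cells):.1%}")
# Repeating this for every constituency gives the seat-level predictions.
```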

Of course, if you've got shares of the vote for each seat then you'll also be able to use it to get national shares of the vote. However, at that level it really shouldn't be that different from what you'd get from a traditional poll that weighted its sample using the same variables and the same targets (indeed, the YouGov MRP and YouGov's traditional polls showed much the same figures for much of the campaign – the differences came down to turnout adjustments and don't knows). Its level of accuracy will still depend on the quality of the data, the quality of the modelling, and whether the people behind it have made the right decisions about the variables used in the model and about how they model things like turnout… in other words, all the same things that determine whether an opinion poll gets it right or not.

In short, I do hope the YouGov MRP model works as well in 2022 as it did in 2017, but MRP as a technique is not infallible. Lord Ashcroft also ran an MRP model in 2017, and that was showing a Tory majority of 60.

TLDR:

  • The polling error in 2017 wasn’t a repeat of 2015 – the cause and direction of the error were complete opposites.
  • In 2017 the polling samples would have got the Tory lead pretty much spot on, but the topline figures ended up being wrong because pollsters added various adjustments to try and correct the problems of 2015.
  • While a few pollsters did come unstuck over turnout models, it's not as simple as it all being about youth turnout. Different pollsters made different errors.
  • All the adjustments that led to the error have now been reversed, so the specific error we saw in 2017 shouldn’t reoccur.
  • But that doesn't mean polls couldn't be wrong in other ways (most worryingly, we don't really know why the underlying problem behind the 2015 error went away), so pollsters shouldn't get complacent about potential polling error.
  • MRP isn't a panacea for polling's problems – it still needs good modelling of opinion to get accurate results. If it works, though, it can give a much better steer on actual seat numbers than traditional polls.


138 Responses to “Why the polls were wrong in 2017”


  2. Today’s Times reports YouGov Poll

    Con 44 +2
    Lab 37 -2
    LD 8 -1

  3. Thanks again AW for a fascinating insight into polling and the adjustments made to the raw figures. I agree with Pete B, polling is going to be very difficult over the next 3-4 years.

    OLDNAT

    Thanks for the polling details from the latest Scottish poll – they seem to be contrary to claims I have seen here that Labour are consistently second in Scotland now.

  4. Colin

    Thanks for the latest YouGov figures in the Times. That’s the biggest Tory lead for some time I think.

    Have a good day all. Off to the coast for some walking and a few rounds of putting.


  7. @AW – not sure if others are experiencing problems, but I'm experiencing intermittent problems with missing posts. Don't know if it's to do with my browser (Firefox) or the UKPR site?

  8. TOH

    "Have a good day all. Off to the coast for some walking and a few rounds of putting."

    Fascinating stuff as always there, TOH.

  9. TO @ 7.22 am

    I agree with both your reasons why Jeremy Corbyn has lost popularity in Scotland.

    But the latest YouGov polling that ON has put up looks too extreme a drop to me. There have been other polls showing SLAB in front of SCON, which suggests this YouGov sample in Scotland has been inadequate.

    Indeed I find it bizarre that for the whole of GB, YouGov have shown the Tory lead increasing when the government is struggling with disunity and not achieving anything positive.

  10. I am having endless problems with access, passwords etc etc – what's going on?

  12. I know it is only 1 poll so far in June, but the YG averages for each month this year show the trend clearly (hope the formatting holds):
    Month Con Lab
    Jan 41.0 41.7
    Feb 41.0 41.4
    Mar 42.0 39.8
    Apr 42.0 39.3
    May 42.4 38.2
    Jun 44.0 37.0



  17. The YouGov tables are out. I seriously don't understand the movement from last week – can only think it is a rogue poll? Con have gone up 2 in the raw data, from 29 to 31. Interestingly there are more voters going from Con to Lab than last week. In the subsamples it looks like the Lib Dems are tending to break to Con a bit more than to Labour now, but it is a very small subsample.

    Right to leave and wrong to leave are now level (on 44), despite a slight net increase in the number of people who think the negotiations are going badly. Labour’s wrong to leave figure is now 63 (-10).

    https://d25d2506sfb94s.cloudfront.net/cumulus_uploads/document/z1w1jcj6s9/TimesResults_180605_VI_Trackers_w.pdf

  18. Perhaps the rise in CON VI to 44% reflects some voters thinking TM is going to avoid a Hard Brexit and ignore the demands of Boris Johnson and JRM.

    But the latest leaks and news will surely bring down the YouGov lead.

    Boris Johnson's speech, where he says some people (who??) have been turned into "a quivering wreck" because of a fear of Hard Brexit, won't help.

    Nor will the comment from Donald Trump's aides that he is tired of Theresa May's schoolmistress lecturing and won't waste his time talking to her at the G7.

    TM ought to have stood up to Trump's steel tariffs weeks ago, and told him his July visit to the UK was cancelled. Bullies are best dealt with by puncturing their vanity.

  19. @BazinWales – I was thinking similarly, and all this at a time when the Government's publicity is almost universally bad. It's quite difficult to assume there's polling error when so many polls in different places point in the same general direction (as referenced by the Scottish poll). Interestingly, though, the Corbyn critics within the party aren't using any of this as a stick to beat him with, so either they don't think it has any internal traction (who cares about winning when you can be principled?), they think it less important than Brexit (true), or their internal polls say something different.

    The other slight comfort for Labour is that they will probably face a more right-wing leader than TM at the next election, who may be significantly less popular (perhaps 'more unpopular' is the right phrase) than her.

  20. Just looking at Con popularity amongst leave/remain voters in the last 5 YouGov polls (headline figure), with the most recent one on the right:

    Leave +43 +45 +42 +43 +42
    Remain -28 -28 -28 -33 -24

    It's only the most recent poll which shows Con doing better amongst remainers. The idea that going for a soft Brexit will result in remainers returning seems plausible, but will the leavers start moving away?

  21. @Alec

    Yeah, I’ve had some missing posts, even when switching browsers and devices. By the looks of repeated posts, so have some others.

  22. @Carfrew
    Interesting study in the Times today about when tipping points can occur.
    I agree it is very interesting. In essence the results seem to depend on a) the existence of an intransigent minority, b) random pairing, and c) salience – they had to name the individual and got money if they got the 'right' answer. I suspect that the mechanisms will produce different answers in different circumstances.
    Where the issue is the adoption of new technology or the growth of mass enthusiasm for thorium, there will be an intransigent minority of enthusiasts, initially low salience, and initially no counter-enthusiasts. Gradually the enthusiasts may be expected to convert those in immediate contact with them to whom the issue is salient (i.e. pairing is not random). Once a critical mass is reached the salience will also increase, others will see the benefits, and mass adoption may occur with only misanthropes and isolates staying apart from the herd.
    Where the issue is political there will probably be opposing intransigents. Nothing much may happen until the issue becomes salient (e.g. there is a referendum). In practice intransigents on both sides are likely to be in touch with others who are less interested in the topic but whose self-interest is served by the same result (i.e. contact is again not random). They will tend to convert these groups. There will thus grow up two opposing groups who will find it aversive to have contact with each other on this issue, and who will also be unreceptive to facts and arguments that do not suit their position. Tipping points in this situation are more applicable to the sub-groups than to the population as a whole.
    Applying this to the last election, it looked to me as if the tipping point applied to Labour and not the Conservatives. The latter had made up their minds and did not waver. Labour voters may well have had Labour attitudes (as per my interpretation of Laszlo) but have been less likely to vote because they felt they were going to lose, were disenchanted with Labour politicians and so on. Enter the intransigent minority, and they swept up the potential Labour vote only to come up against the solid wall of Tory diehards to whom Jeremy Corbyn is the route to Venezuela.
    Applying this to the issue of this thread, a crucial thing to measure becomes 'momentum within groups', i.e. the degree to which the campaign is converting potential (say) Labour voters into people who will actually turn out.

  24. I don't know whether anyone can provide any insight into the YouGov weightings that are being applied, but I find them interesting. Nice to be able to discuss some actual polling too!

    The unweighted sample includes data on how respondents said that they’d voted at the EU Ref. It shows slightly more people who said that they’d voted Remain than had voted Leave.

    The weighted sample reverses the position, showing approximately 48% Remain and 52% Leave (in line with the referendum result). So is the raw data being weighted based upon the referendum result, or is it just a coincidence?

    The weighted data suggests the EU Ref preferences of 1,327 respondents out of 1,619. That would be equivalent to an 82% turnout at the referendum; the actual turnout was 72%.

    I’m not particularly interested in the relationship between current polling and the referendum result, but I do wonder whether Yougov (and possibly the other polling companies) are picking up too many people who are “politically active”. There must be a danger of that and I’ve no idea what impact it would have on the figures that are being produced.

    I also note that they are having to significantly upweight those in the 18-24 age group, presumably because they are struggling to find enough individuals who are genuinely in that group?


  29. Scottish poll tables out too.

    One thing I have noticed about Scottish polls is that people who voted for the Unionist parties in 2017 are a lot more likely to be don't knows than those who voted SNP in 2017. This is perhaps not surprising: if you are a Scottish nationalist, there is one main party which supports your views; if you are a Unionist, you have several different options. A simplification perhaps, but one which I think might be quite valid.

    The don't know figures in this poll, by 2017 vote, are as follows:

    SNP 9
    CON 20
    LAB 15
    LIB 25

    If there were to be an election shortly then it seems reasonable to assume that the don't knows would tend to break for the Unionist parties, so the headline figures (which I don't think reassign don't knows – they are just excluded?) are probably overplaying the SNP vote and underplaying the Unionist parties.

    Looking at the party leaders, Richard Leonard has a -20% approval figure, with don't knows at 54%. Amongst Labour voters it is +2% approval, with don't knows at 57%. Quite a high number of don't knows for a party leader – is he struggling to get much visibility?

    https://d25d2506sfb94s.cloudfront.net/cumulus_uploads/document/yd4yntmtw2/TheTimes_180605_Scotland_Westminster_VI_for_web_FRIDAY.pdf

  30. Geordie Greig is to be the new editor of the Daily Mail.

    This is potentially interesting, as he is, apparently, a ‘staunch remainer’ and the Guardian have said – “A source with knowledge of the discussions told the Guardian that Greig’s appointment was part of a process of “detoxifying the Daily Mail” after Dacre’s editorship.”

  32. Ouch, missed that stat. Just 52% of Labour 2017 voters think that Corbyn would make the best PM, and just 63% of current Labour VI.

  33. Frosty,
    “Right to leave and wrong to leave are now level (on 44),”

    I haven't checked it rigorously, but I think any poll with Con up also shows Leave up. The two are very strongly correlated. So whichever poll is right or wrong, the two are moving together, and presumably so are the other Brexit attitude questions.

    I am starting to get interested in changes in the table details poll by poll, which unfortunately are more complicated to track. It might be that the Con sample's attitude is rock steady, the Lab attitude rock steady, and really what we see is about the total assigned to Lab or Con, which is subject to random fluctuations and the normalising process.

    Incidentally, some weirdness about posting here too. My post didn't show up after posting, but in my case the site rejected a repost. Possibly after a Firefox update.

  34. It looks like someone, probably God, has unilaterally decided to restrict posting by certain posters.

  35. Charles

    Yes, I think that the hypothetical (far away) party preference questions are attitude ones – and the methodology of the polling reflects it (reflecting this attitude better and better).

    When it comes to elections the polling has to be action oriented, and methodologies are changed during the campaign (actually, a few polling companies changed their methodology in the last week of the campaign).

    ————

    There is one point that reflects the change in particular. The sample, through adjustments, is made representative (there are slightly different interpretations of this by polling companies) – except for the DKs. Those are not representative, which introduces an error. It is not particularly important in the inter-election period (although it can introduce flawed interpretations), but becomes a distribution-distorting factor in election campaign periods.

  36. BazinHampshire;

    Re ”The weighted sample reverses the position, showing approximately 48% Remain and 52% Leave (in line with the Referendum result). So is the raw data being weighted based upon the Referendum results, or just coincidence?”

    Yes, Roger M has posted about this: we know Remain voters were more likely to vote in the GE than Leave voters – so much so that had the referendum been based on 2017 GE voters alone, Remain would have won. (Probably not true, as some Leave voters who thought the job done and did not bother at the GE may well have voted for a leave/ref candidate.)

    Nonetheless, weighting to 52/48 probably does overweight Leave, which in turn boosts Con v Labour.

    Notwithstanding that, and that this may be at the edge of MOE, the Tories clearly have a lead over Labour of 2% or more across many pollsters.

    It could be that a few soft Tory remainers are switching back, having lent their vote to Labour at the GE believing a soft Brexit (or even, naively, a reversal) was more likely with Labour. They may have concluded that there is little or no difference between the HMG and Labour positions.

  38. Hmmm – my posts appear, but when I refresh, all the recent posts by everyone disappear.

    AW – any help would be appreciated, thanks.

  39. @Colin (and others) – what appears to be happening is that when I submit a post, it then disappears for a while. If I refresh the page or close the browser and reopen, posts then sometimes appear as normal submitted posts, but usually there is a delay of varying length. If I click the 'go back one page' button I get back to the page with my post written out but unsubmitted.

    Please note that these missing posts aren’t going into moderation – if they were, you would see your own post with the italics message telling you it is in moderation. This means that if your post does go missing, and you then submit a second attempt, after a while both posts will appear.

    This is a little odd, and is something that AW will want to sort out, but let's look on the bright side – it slows down responses and gives everyone a chance to think more clearly about what we are saying.

  40. https://www.bbc.co.uk/news/uk-northern-ireland-44398502

    Fascinating poll out today showing that a lot of people now want NI to leave the UK as a consequence of Brexit.

  41. Frosty:

    As a poster having no problems getting messages up, I can answer that: Leonard is struggling to get attention.

    His Yorkshire accent could be putting off a few Scottish voters, but his problem is much more the Corbyn indecision on Brexit. Most Labour voters in Scotland want continued membership of the SM and CU, and he needs to say this should be Labour policy.

    Ruth Davidson has been vocal in her opposition to Theresa May, and Nicola Sturgeon is totally clear in demanding we stay in the SM and CU and have continued immigration. RL needs to shout out, or else people will decide he is a failure.

    Potentially SLAB could win many seats at an election soon, since voter fatigue with the SNP has set in over non-Brexit issues.

  42. There’s a huge gender gap in this poll:

    F: Con 29 Lab 29
    M: Con 33 Lab 24

    F: Right to leave 39 Wrong 47
    M: Right to leave 49 Wrong 40

  43. Hampshirebaz,
    “is the raw data being weighted based upon the Referendum results, or just coincidence?”

    I think the answer is yes. I made a post on page 1 where I said weighting by referendum result probably helped the accuracy of polling results, and I still think so. But there is a but. Doing this assumes that choosing people in the ratio of their referendum responses still gives an accurate representation. It might not, and in general any such process gets less accurate as time goes by. In 2017 it was still pretty close to the referendum; it is twice as far away now, and things are starting to happen which might affect results.

    It seems likely that if there is an age effect, whereby Leavers are gradually dying off, then this would cause the normalising ratio to be wrong. Applying it would have the effect of cancelling out a slow drift to Remain for such reasons, so it does not show up in the polling. This looks likely to be a systematic error across all polling companies, if it is happening. (There was a Kellner article discussing it.)

    People simply changing their minds ought to work out correctly. So of the 52% who voted Leave at the referendum, if 4% had changed their minds, then that ought to be reflected in the results – assuming they honestly reply that they voted Leave but have now changed their view. If they don't admit to having voted Leave, or have honestly forgotten (and we are perhaps especially interested in those not very interested in politics, who might), then the process won't work. YouGov, though, use panels of respondents who are questioned over years, so they should have previous answers on record to check.

    There is another possible problem with normalisation I can see: if people disproportionately on one side or the other refuse to answer surveys. Maybe they were motivated before, but are now disillusioned. So if either Remainers or Leavers are disproportionately more inclined to refuse to do surveys now, once again normalising by leave/remain might introduce errors. Say 2% of Remainers are so sick of it all that they will not vote and will not answer questions, whereas no Leavers are so affected. Applying normalisation back to the referendum result will promote more Remainers into the sample to replace the disaffected ones. But this is wrong, because the disaffected need to be counted as such, and as no longer supporting Remain.

    The figures we have suggest a slight move to Remain and slightly more disaffection among Leavers than Remainers. If there is a normalising bias, it would probably tend to suppress a swing from showing up, and in this case would be underestimating the swing we do see, to Remain.

    Are there too many politically active people being sampled? Yes. There are far too few non-voters being polled. On the one hand it is obvious why pollsters would do this: if they aren't going to vote, why waste time asking them? The problem comes if there is a turnover between voting and non-voting groups, and we certainly saw this in the referendum and the last election. So it is important if normally non-voting people are suddenly getting involved.

  44. Two reflections on NE Scotland attitudes.

    I had dealings with my local Tory MP's aides yesterday about the inflexibility of the Right-to-Work legislation, and the demands that I make a 300-mile journey to prove my identity and show my birth certificate and passport in person. Fortunately I have avoided that by arguing with the bureaucrats, plus telling them I have been in contact with my MP, so eventually we Skyped.

    The aide seemed very sympathetic, which maybe isn`t surprising since he is obviously foreign, and the other staffers working for the MP also seemed foreign.

    And second, just now my wife has been talking to some very hard-working builders operating near us and talking in a foreign language. In just two minutes it emerged: "we Polish, I here 10 years, if Brexit we go back to Poland, then building stops". These three men had no need to turn the conversation to Brexit.

    And likewise other neighbours commenting previously on this house-building had no need to applaud the foreigners' presence and hard work, since to some extent it could be seen as depriving their own youngsters of jobs. But I feel sure that local opinion here is near unanimous in approving this immigration, and also at professional levels in our society here.

  45. @Charles

    Thanks for your comment Charles, and yes, your point is taken. The study picked an issue for transmission which people were unlikely to be all that invested in, whereas in the real world, issues may be clung to more fiercely. And as you point out, in the real world pairings may not be random.

    Hence your conclusion about movement being more likely among subgroups who are in regular contact and not clinging so forcefully to the opposing view. As a result of your concerns, the tipping point may not always be 25%; despite this, however, it's particularly notable how rapidly things shift once you get past 25%. To quote a bit more of the article…


    “The researchers argue the effect is likely to be applicable to wider social norms — such as the acceptance of homosexuality. The 25 per cent figure is likely to be dependent upon the specifics of the experiment, rather than an absolute rule. But the scientists said that wherever the tipping point did lie, the research showed that change when it comes could be extremely sudden.

    “When a community is close to a tipping point to cause large-scale social change, there’s no way they would know this,” said Damon Centola, from the University of Pennsylvania. “But remarkably, just by adding one more person and getting above the 25 per cent tipping point, their efforts can have rapid success in changing the entire population’s opinion.”

    Regarding more movement for Labour than the Conservatives, obviously this was the case in the election, but that may be because there was more new information about Labour. Partly because the media were forced to be more balanced, and also Labour keeping their powder dry till the election.

    I should add that of course, over time, views may die off, however forcefully clung to at the time. The Max Planck effect. A typical example might be renewables. Eventually, if renewables dominate, those who said it couldn't happen will be rather in the minority, as facts prove their view increasingly barking.

    The interesting thing here might be to consider just where the tipping point lies in terms of evidence. How much renewable power does there need to be before someone goes "OK, I'm utterly barking!"? 30%? 70%? 90%?

  47. On tipping points, it can often be more than just numbers, with events being just as important. We have just seen a classic example of this with attitudes to plastics.

    Up until a few months ago, concern over plastic pollution was largely restricted to a few hairy @rsed environmentalists like myself, dedicated national campaign groups like Surfers Against Sewage, and international conservation bodies.

    This has been talked about for years (decades, in fact) with almost zero traction, until the last year or so. Many people cite the BBC's Blue Planet 2 as creating the tipping point, almost single-handedly. Whether that is true or not I don't know – I would suspect it's a little more complex than one TV series – but there is no doubt that seeing media coverage of the impacts of plastic on the ocean's wildlife has been transformative at every level.

  48. @Carfrew – thanks for the extra information and thoughts. They need thinking about. One of the frustrating things about UKPR is that the breadth of experience and expertise on show is very great, but our ability to build on each other's posts seems very limited.

    @Davywell – Nick Robinson did an interesting programme in, I think, Mansfield. He asked people if there were too many immigrants and everyone said 'yes'. He then went through a list of jobs – lorry driver, chef etc – and asked which ones should not be freely available to immigrants, and with the exception of one stalwart woman everyone welcomed immigrants for all the jobs chosen.

  49. @Alec – In terms of the experiment, I would have thought that what the media does is create salience and talking points. The experiment did this by requiring people to choose names for a photo and paying them if they got the same one. It also had people who were amenable to this financial inducement. My guess is that the people who got concerned about plastic were the kind of people who were in any case vaguely environmentally minded, and were galvanised by the programme and by discussing it with other like-minded people.

  50. From today's YouGov poll:

    ABC1: 42/38
    C2DE: 48/37

    !!!!

