The MRS/BPC Polling Inquiry under Pat Sturgis is due to release its initial findings at a meeting this afternoon (and a final written report in March). While I expect much more detail this afternoon, they have press-released the headline findings overnight. As I suggested here, they point to unrepresentative samples as the main cause of the polling error back in May. There is not yet any further detail beyond “too much Labour, not enough Conservative”; we’ll have to wait until this afternoon to find out what they have said about the exact problems with the samples and why they may have arisen.

The inquiry conclude that other potential causes of error – such as respondents misreporting their intentions (“shy Tories”), unregistered voters, and question wording and ordering – made at most a modest contribution to the error. They say the evidence on late swing was inconclusive, but that even if it did happen, it accounted for only a small proportion of the error. The inquiry also say they could not rule out herding.

The overnight press release doesn’t hint at any conclusions or recommendations about how polls are reported or communicated to the press and the public, but again, perhaps there will be more on that this afternoon. Till then…


37 Responses to “Initial findings of the Polling Inquiry”

  1. @Hannah

    “From the last thread, I’m unconvinced that “it’s all about the leadership ratings (or the economy) really” is something you can take home from the headline polling failure.”

    It would appear that the consensus as to why the pollsters got it so wrong is that they tended to use samples that weren’t representative of the electorate, inadvertently selecting more Labour voters than Tory ones. That said, I do think that the questions and responses on leadership and economic credibility are key indicators of eventual voting behaviour. I can well believe a voter responding with his or her heart in terms of voting intention but responding with his or her head when asked about leadership qualities and economic competence.

    If that theory is correct, the question that then arises is to what extent the head rules the heart in the secrecy of a polling booth. I’ve told a pollster I’m going to vote Labour, but are those troubling questions and nagging doubts making my pencil hover over the ballot paper? I think there may well have been quite a few people caught in this dilemma last May, certainly enough to distort opinion polls. They said one thing, but did another on the day.

    I’m also attracted to the theory that more and more people “game” surveys and polls these days, either mischievously or out of sheer spite. In other words, their responses are some way away from their real views and intentions. Maybe some people respond peevishly and angrily out of pure irritation at being asked to participate. Who knows, but my personal and recent experience of a phone poll conducted by Populus tells me that patience and attention can be severely tested!

    How well are the telephone callers trained and how reliable is their data capture? Again, my experience with Populus didn’t inspire confidence.

    Who’d be a pollster, hey?

    Anthony – answers on a postcard, please!

  2. Surely everyone is mis-reporting this, and I fear this includes you. The problem is not that the samples were unrepresentative — the samples have always been unrepresentative, and everyone knew that and corrected for it using well-proven statistical techniques. The problem is that the way in which the samples were unrepresentative changed unexpectedly, and so the corrections to the sample were not adequate, since they were based on the old sampling bias and not on the new and unknown one.
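    To illustrate the kind of correction being described, here is a minimal sketch of cell weighting – the simplest form of demographic adjustment – in Python. All categories and figures are invented for illustration; real pollsters use more dimensions (and rim weighting), but the principle is the same:

```python
# Hedged sketch of cell weighting. All categories and figures are invented.
population = {"18-34": 0.28, "35-54": 0.35, "55+": 0.37}  # known target shares
sample = {"18-34": 0.20, "35-54": 0.34, "55+": 0.46}      # achieved sample shares

# Each respondent in group g counts for population[g] / sample[g] people,
# so under-represented groups are scaled up and over-represented ones down.
weights = {g: population[g] / sample[g] for g in population}

respondents = [
    {"age": "18-34", "vi": "Lab"},
    {"age": "35-54", "vi": "Con"},
    {"age": "55+", "vi": "Con"},
    {"age": "55+", "vi": "Lab"},
]

def weighted_share(party):
    """Weighted voting-intention share for one party."""
    num = sum(weights[r["age"]] for r in respondents if r["vi"] == party)
    den = sum(weights[r["age"]] for r in respondents)
    return num / den

# The catch the comment identifies: weighting only corrects along the
# dimensions you model (age here). If the sampling bias shifts to an
# unmodelled dimension, these weights silently fail to fix it.
```

    The design point is exactly the one made above: if the *way* the sample is unrepresentative changes, weights calibrated to the old bias stop working, however carefully they are applied.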

  3. Ben Bradshaw MP (he of ‘red Tory’/ ‘neoliberal’ sic fame) on Twitter:

    “Until the pollsters spend enough to produce accurate forecasts we must subtract 3% from @UKLabour & add 3% to the Tories in every poll.”

    Seems about correct to me – aside from ICM, where perhaps (until full rollout of their methodology changes) it should be -4 and +4 respectively.

    However, Corbynites seem to believe (sincerely) that the polls are massively under-representing “Jeremy’s” huge popularity ‘amongst the masses’!

  4. Presumably the 20 million-plus ‘sample size’ of the 2015 general election (66% turnout) was big enough to rule out it being “unrepresentative”? Or did Corbyn supporters not vote?

  5. @Dave

    “Presumably the 20 million-plus ‘sample size’ of the 2015 general election (66% turnout) was big enough to rule out it being “unrepresentative”? Or did Corbyn supporters not vote?”

    Don’t know about it being an unrepresentative sample, but the respected Electoral Reform Society described it as producing the “most disproportionate result in British election history”.

    Their extensive report makes very interesting, and dispiriting, reading for all those more interested in the health of our democracy than in whether their team “won” or not.

    https://www.electoral-reform.org.uk/file/1767/download?token=cY4ruQ3t

  6. I am uncomfortable about saying, as the Inquiry seems about to, that the weighting of the sample was wrong, and presumably that we should therefore change the weighting to include more of the under-represented group, i.e. Conservatives. This seems to me not far short of deciding what answer we want and then choosing people to question who will give us that answer. Would it not be better to have a truly random sample and then adjust the data after it has been collected?

    The Inquiry is reported as indicating that there are large numbers of “Victor Meldrews”: older Tories who refuse to answer or, worse, lie. In my experience as a “battery cage” telephone interviewer, a worse problem is the huge number of young voters who only use mobile phones, and who again are too busy to answer (they are often down the pub!). There are two problems here. One is that the pollsters have been slow to catch up with the proportion of calls they should make to mobiles. The other is that I doubt whether these non-responders vote Conservative disproportionately.

    If people are “gaming” the polls, wouldn’t they be likely to claim to be voting for minor parties such as the Greens and UKIP? But the pollsters got the percentages for these parties roughly correct.

    In answer to Crossbat11: I worked on the telephones for one of the pollsters (not Populus – and of course the work was contracted out to a call centre) a couple of elections ago. We got two days’ general training and about an hour on the specific survey (we didn’t only, or even mainly, do election polls).

    To be honest, I don’t think we have solved this problem yet.

  7. I remember trying to find evidence of swing (late or otherwise) and only found a bit of slight wobbling around both ways over the last 7-10 days of the campaign. The last (tiny) wobble was toward Labour. I think basically everyone just got their methodology wrong.

    Interesting to note the effect of a sudden fall of a third party under FPTP. Releasing a tide of votes up for grabs in a few areas makes a heck of a difference for everyone else. Did the pollsters anticipate this?

  8. One problem with a late swing hypothesis is that these days large numbers of votes have been cast by post days before the nominal polling day.

  9. The CON figure was understated – but a few of us were saying this for weeks before the election date – see especially 20 April and 27 April upthread.

    KIERAN W for example: “…the error will be entirely in the direction of understating the status quo and/or centre right option, as has so often been the case (in previous elections)…” 27 April

    And a week before the election NICK ROBINSON said on BBC NEWS that “….we might well yet see a good old fashioned majority government”.

    A few things occur to me, amongst others:

    1. It is much more likely that the undecided or unsure will stick with the status quo and vote for the government. And the harsher or more right-wing the Tories are, the stronger the “shy Tory” factor will be. I am thinking here mainly of the Tories’ harsh policies on benefits.

    2. The undecided figures weren’t taken seriously enough or handled properly by the pollsters.

    3. Call centre market research telephonists are poorly paid and treated and most people know or suspect this (or might glean it from the telephone manner of the caller – accent etc).

    Is it possible that a small but significant number of respondents cannot find it in themselves to admit to the person on the other end of the phone that they’ll vote for a right-wing party which they suspect the interviewer is against?

  10. @Deepthroat
    ‘ It is much more likely that the undecided or unsure will stick with the status quo and vote for the government. ‘

    But that is contradicted by quite a few general election results in which support, if anything, fell away from the incumbent in the last week of the campaign. I would cite 1964, 1966, 1970, Feb 1974, Oct 1974, 1979, 1983, 2001 and 2005.

  11. Ipsos seems to be suggesting that, since political polling has damaged its reputation and is a low income earner, there shouldn’t be political polling in 2020. Talk about throwing the baby out with the bath water…

    Yesterday I met a friend (a stockbroker), and I was reminded of ToH’s correct calling of the elections. Remembering that he specialises in retail, I asked if he made money on Tesco’s unexpected turnover. He did, and he used his ‘normal’ method: visiting Tesco stores on certain days, noting the number of customers and the approximate value of their shopping trolleys (he claims that online shopping hasn’t much altered the predictive power, except for wine), and drawing his conclusion.

    Now, it is certainly not scientific in the proper sense of the word (although successful); it is a kind of sampling, and there is an implicit regression model behind it. However, quite clearly, most of the time when I try to apply the method, I fail.

    So, even if it works, it is not recommended for polling companies.

  12. @Crossbat.
    “most disproportionate result in British election history”
    The Electoral Reform Society report is an interesting document, but its main concern – the mismatch between seats and votes – is totally irrelevant to the question of whether total votes cast for each party in the election were accurately predicted by the polls. That mismatch would have been just as serious if the polls had predicted the actual result 100% accurately.

  13. @ Crossbat
    I should also have said that I share your concern for the health of our democracy, but with turnout down to the low 60%s this century, some fingers have to be pointed at electors’ failure to vote, and others at the failure of the major political parties to offer policies which might attract support from 50% of the electorate. The latter failure goes unpunished if voters do not vote against the culprits. If that led to the election of a short-lived Monster Raving Loony government, so be it, especially if that was swiftly followed by victory for an Electoral and Representational Reform Party.

  14. @Dave,

    I share your views but I was responding to your somewhat tongue-in-cheek (or, at least, I thought it was!) comment about the 20 million sample size in the May 2015 general election and whether this was representative or not.

    Of course, you’re absolutely right when you say that the accuracy or otherwise of opinion polls is a completely different issue from how representative and fair our voting system is. There’s no link between the two: the polls could have been spot-on last May and we would still have got the most unrepresentative result in British election history.

    If Corbyn wanted to really “renew” our politics then there’s no better place to start than reforming our obsolete electoral system. How ironic, looking back, that old dinosaurs like Blunkett, Reid and Beckett joined Cameron and the right wing media in scuppering Clegg’s feeble attempts to introduce AV. Talk about not knowing their backsides from their elbows doesn’t begin to describe the parochial idiocy of their position.

  15. I’m getting increasingly irritated by the “well we know you can’t trust the polls” cliché in the media whenever an opinion poll result is cited these days.

    The election polls overstated the Labour position by 2-3% and understated the Conservative position by a similar amount (less if one includes margin of error).

    If I’m looking at a poll where a +/- 2-3% difference is likely to be crucial I would be very cautious about calling the result, but that was obvious before the election.

    The relatively small error in the May election is being used to dismiss polls with margins of political significance of 10-20% and higher.

  16. “The inquiry conclude that other potential causes of error – such as respondents misreporting their intentions (“shy Tories”), unregistered voters, and question wording and ordering – made at most a modest contribution to the error. They say the evidence on late swing was inconclusive, but that even if it did happen, it accounted for only a small proportion of the error. The inquiry also say they could not rule out herding.”

    ———-

    Thus far, there does not appear to be any conclusive (or palatable) way of avoiding any of these sources of error, or even reliably quantifying their effects…

    Which makes things a bit problematic. Even if you fix the sampling…

  17. This is quite a useful finding for the pollsters, in that it identifies specific problems which can theoretically be addressed by methodological adjustments.

    I wonder what the response would have been if no such specificity had been identified.

  18. There doesn’t seem to be anything more substantial, but the slides from today’s presentation are available from their documents list here:

    http://www.ncrm.ac.uk/polling/document.php

    as The Inquiry into the 2015 pre-election polls: preliminary findings and conclusions.

    It also includes the associated press release, a background document on the last acknowledged disaster (1992) and links to the responses to the situation from the main pollsters (ComRes, ICM, Ipsos-MORI, Opinium, Populus, Survation, YouGov). Unfortunately Ashcroft doesn’t seem to have agreed to co-operate (he’s not a BPC member so there’s no obligation).

  19. GRAHAM – OK, the evidence seems inconclusive, although it’s easy to believe instinctively that this is the case.

    But the swing back to CON seems a real phenomenon.

    Differences between polls one week out and the election result:

    Average poll share in the week before the election compared with the actual GE share (difference in %):

    Year: Con / Lab (governing party)
    1959: 0.6 / -1.6 (Con)
    1966: 1.1 / -2.8 (Lab)
    1970: 1.9 / -3.9 (Lab)
    Oct 74: 2.6 / -3.0 (Lab)
    1979: 1.2 / -2.8 (Lab)
    1987: 0.3 / -2.6 (Con)
    1992: 5.5 / -3.6 (Con)
    1997: 0.7 / -3.1 (Con)
    2001: 2.0 / -3.6 (Lab)
    2005: 1.2 / -1.6 (Lab)
    ----------------------------------------
    Feb 74: 0.5 / 2.1 (Con) (Lib -3.3)
    1983 (Falklands factor): -2.2 / 2.2
    2010 (Cleggasm): 1.7 / 2.2 (Lab) (LD -3.9%)
    ----------------------------------------
    So, just to be clear, for example: in 1970 the Conservatives did 1.9% better in the GE result than their poll average one week before the GE, and Labour did 3.9% worse than their poll average a week out.
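    A quick way to check the average swing-back implied by the ten main rows of the table above, taking the figures exactly as quoted (each value is the GE result minus the week-before poll average):

```python
# Figures as quoted in the table above (GE result minus week-before poll
# average, in percentage points); the three exceptional years are excluded.
swing_back = [
    ("1959", 0.6, -1.6), ("1966", 1.1, -2.8), ("1970", 1.9, -3.9),
    ("Oct 74", 2.6, -3.0), ("1979", 1.2, -2.8), ("1987", 0.3, -2.6),
    ("1992", 5.5, -3.6), ("1997", 0.7, -3.1), ("2001", 2.0, -3.6),
    ("2005", 1.2, -1.6),
]

avg_con = sum(c for _, c, _ in swing_back) / len(swing_back)
avg_lab = sum(l for _, _, l in swing_back) / len(swing_back)
# On these figures, the Conservatives outperform their week-out polling by
# about 1.7 points on average, and Labour underperform by about 2.9.
```

    Nothing more than averaging the quoted numbers, but it makes the claimed pattern concrete: a fairly consistent pro-Conservative movement between the final week's polls and the result.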

  20. I took the trouble to re-read Number Cruncher’s article from 06/05/15. It is quite spine-tingling to see the clarity with which he identified the statistical pattern of inaccuracy in the polls.

  21. @Deepthroat
    “Is it possible that a small but significant amount of respondents cannot find it in them to admit to the person on the other end of the phone that they’ll vote for a right wing party, who they suspect the telephonist is against?”

    No.

    Initially pollsters had trouble with UKIP – it was new and rapidly growing – but they get it right these days. At the GE they were close to spot-on, IIRC. IMO (and I say this as a UKIP voter), that pretty much rules out this theory.

  22. Wood: the findings from the inquiry suggest that unrepresentative samples (or rather, inaccurate weighting of those samples) were the biggest problem, rather than people lying about their voting intentions.

  23. “Sample wrong” seems a non-excuse. Samples are always wrong, and that is why they are adjusted to make them representative. The polling companies’ adjustments were wrong.

  24. I think TONY CORNWALL may be close to the truth.

    NEIL A…where exactly is the post by NUMBER CRUNCHER?

  25. @John

    The issue here is not margin of error stuff, in my opinion.

    My day job includes MSA (Measurement System Analysis). Imagine trying to measure C (the likely Conservative share of the vote) and L (the likely Labour share of the vote). You have a number of instruments to measure C and L: the different polling companies’ methodologies.

    What happened?

    They pretty much all overestimated L and underestimated C. That is principally an issue of gauge bias (bias in the sense that an error is created systematically, not political bias). All the methodologies made the same error to some degree or another.

    If it were just MOE, you would get some pollsters getting L and C very close or on the money, with others above and below those figures.

    I worry that political polling on the scale seen up to the 2015 GE is now dead. If the sample is the issue, better methods are likely to be too expensive for frequent polling.
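    A toy simulation of the gauge-bias point: if one party's supporters are systematically more likely to end up in the sample, every poll errs in the same direction, which averaging cannot cure. The "true" shares and response propensities below are invented purely to show the shape of the error:

```python
import random

random.seed(42)

# Invented "true" vote shares and response propensities, purely illustrative.
TRUE = {"Con": 0.38, "Lab": 0.31, "Oth": 0.31}
PROPENSITY = {"Con": 1.0, "Lab": 1.5, "Oth": 1.0}  # Lab voters answer more

# Share of *respondents* (not voters) supporting each party.
raw = {p: TRUE[p] * PROPENSITY[p] for p in TRUE}
total = sum(raw.values())
biased = {p: raw[p] / total for p in TRUE}

def one_poll(n=2000):
    """Draw n respondents from the biased response distribution."""
    drawn = random.choices(list(biased), weights=list(biased.values()), k=n)
    return {p: drawn.count(p) / n for p in biased}

polls = [one_poll() for _ in range(10)]

# Gauge bias, not MOE: every poll errs the same way. More polls tighten
# the spread around the *biased* centre; they never recover the truth.
n_lab_high = sum(poll["Lab"] > TRUE["Lab"] for poll in polls)
n_con_low = sum(poll["Con"] < TRUE["Con"] for poll in polls)
```

    With pure margin-of-error noise you would see polls scattered on both sides of the true figures; here all ten land on the same side, which is the signature described in the comment above.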

    AW,

    Roughly, how many times more does a 1,000-respondent poll cost by personal interview than by telephone or t’internet, please?

  26. Haven’t posted on here for years. Way back in 2010 I predicted the result of the election – I said it would be either a majority for Labour or the Tories. Not such a joke actually – just about everyone spent the next 5 years predicting a hung parliament because we were in an “era of coalition”. In fact, under FPTP we will never really be in an era of coalitions.
    And there’s the problem for the pollsters. The FPTP system creates a kind of chaotic behaviour (in the mathematical sense of the word). Small variations create results which are pulled to one of two attractors – in this case a Tory majority or a Labour majority. Predicting this kind of chaotic outcome will always be difficult, if not impossible, for the pollsters.
    Interestingly, this is good news for any opposition party. For all the present talk of a “mountain to climb” for Labour to win a majority at the next election – I remember exactly the same talk of an unprecedented mountain for the Tories to climb to win a majority after the 2010 election. In reality, relatively few votes changing hands in a few marginals can be all it takes to create dramatic changes.

  27. To make it clear, I predicted the 2015 election would be a majority for Lab or Tories in 2010. Do I win a prize?

  28. The reason the Conservatives got a majority in the 2015 election was because they swept the south-west and won seats off the LibDems.

    If people from the south-west weren’t well represented in the polls, the polls would have missed that development completely. YouGov seems to have a “Rest of the South” category – but I don’t know if they try to get a proper south-east/south-west split within that.

    It seems to me that the reason they got Scotland right (despite it having only 8% of the population) was that during the referendum they were forced to build a representative panel of that region, the urban/rural split, the west Scotland/east Scotland split and so on.

    I doubt they’ve done that for any other part of Britain apart from London – so if one region suddenly deviates from the group, it can throw the entire poll off.

    The answer has to be larger samples, with an attempt to get representation from all regions within them – so instead of “Rest of South” you have South East and South West, and instead of “Midlands/Wales” you have the Midlands separately from Wales, with an attempt to get a response from each of those sub-regions.
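    A sketch of how such a region-by-region (stratified) estimate would be combined into a national figure – the regions, population weights and voting-intention shares below are all invented for illustration:

```python
# Hedged sketch of a stratified national estimate built from separately
# polled regions. Every number here is invented for illustration.
regions = {
    # region: (share of GB electorate, {party: regional VI share})
    "London":     (0.13, {"Con": 0.35, "Lab": 0.44}),
    "South East": (0.27, {"Con": 0.48, "Lab": 0.26}),
    "South West": (0.09, {"Con": 0.47, "Lab": 0.22}),
    "Midlands":   (0.16, {"Con": 0.42, "Lab": 0.33}),
    "Wales":      (0.05, {"Con": 0.27, "Lab": 0.37}),
    "North":      (0.22, {"Con": 0.32, "Lab": 0.42}),
    "Scotland":   (0.08, {"Con": 0.15, "Lab": 0.24}),
}

def national_share(party):
    """Population-weighted sum of the regional estimates."""
    return sum(w * vi[party] for w, vi in regions.values())
```

    One consequence of estimating this way: a sudden shift in a single region (say, a Lib Dem collapse in the South West) only moves the national figure by that region's population weight, and it is visible in the regional breakdown rather than being smeared across an undifferentiated “Rest of South” cell.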

  29. @catmanjeff “They pretty much all overestimated L and underestimated C. ”
    That all polls made the same error means that the position is analogous to measuring a distance with a stretched tape measure. To put this right, you need only know how much the tape was stretched, not how or why. The indication is that the make-up of the sample by the usual methods led to an error such that the Conservative share of the vote needed to be corrected upwards by about 5%. It is not unreasonable to apply a similar correction to current VI polls.
    Whether this will still be so with a different demographic in 5 years time is open to question. Unfortunately voting in other elections (council, by-elections, EU or regional Assemblies) may not be comparable to general elections, especially in turnout, so these will not reliably indicate any future ‘stretch’ in the tape.

  30. @Dave

    “Unfortunately voting in other elections (council, by-elections, EU or regional Assemblies) may not be comparable to general elections, especially in turnout, so these will not reliably indicate any future ‘stretch’ in the tape.”

    This is the crux of GE polling issues.

    You only get to ‘calibrate’ the system once every five years.

    If you have been wrong up to that point, there’s nothing you can do about it.

  31. Candy

    “It seems to me that the reason they got Scotland right (despite it having only 8% of the population) was that during the referendum they were forced to build a representative panel of that region, the urban/rural split, the west Scotland/east Scotland split and so on.”

    Alas for your theory, “Full Scottish” polls weren’t invented in 2014! They have been around for at least 40 years!

    However, Scots polls (like London and Welsh ones) are more accurate than the wee samples from those areas in GB/UK polls.

    Trying to measure “GB political opinion”, as if it was still the single phenomenon measurable by a cardboard swingometer in the 1970s, is patent nonsense.

    As you suggest, building up the overall picture from polling regions separately might be more successful – if more expensive.

  32. @Polltroll, I know – I was just responding to DT wrt something I saw an easy way of refuting.

  33. On polling by region – this clearly would have picked up those Lib Dem collapses; that’s why the polls were wrong and why we have a Conservative majority. Labour/Conservative discussions seem to miss the point, and clearly miss that the Conservatives were targeting that area.

  34. @ CMJ

    Reading the above, you may like this (and so might anyone who has a deep suspicion about Fisher and any methodology built around his stuff).

    https://m.youtube.com/watch?v=yy4nsEvKh2E

  35. @Laszlo

    I enjoyed that, thanks :-)

  36. @JULIAN GILBERT

    Good to see you back… and I’ll give you a prize for your prediction. And another to CMJ, who I remember saying that Labour would need the LDs to maintain their support.

    It is extraordinary the level of support that the LDs lost in just about every constituency… paralleling how the Labour vote collapsed in Scotland. It was the relative redistribution of those lost votes between Con and Lab, plus their respective core levels of support, that determined the outcome in many constituencies. Just as you say, ‘small variations create results which are pulled to one of two attractors’. Bloody moths, flapping around in Siberia!