We don’t have any more information on how the British Polling Council’s review of the election polls will progress, beyond it being chaired by Pat Sturgis, but several pollsters have given some thoughts today that go beyond the initial “we got it wrong and we’ll look at it” statements most pollsters put out on Friday. Obviously no one has come to any conclusions yet – there’s a lot of data to go through, and we need thoughtful analysis and solutions rather than jumping at the first possibility that raises its head – but they are all interesting reads:

Peter Kellner of YouGov has written an overview here (a longer version of his article in the Sunday Times at the weekend), covering some of the potential causes of error like poor sampling, late swing and “shy Tories”.

Martin Boon of ICM has written a detailed deconstruction of ICM’s final poll, which would have been an interesting piece anyway in terms of giving a great overview of how the different parts of ICM’s methodology come together to turn the raw figures into the final headline VI. Martin concludes that all of ICM’s adjustment techniques seemed to make the poll more accurate, but that the sample itself seemed to be at fault (and he raises the pessimistic possibility that sampling techniques may no longer be up to delivering decent samples).

Andrew Cooper of Populus has written an article in the Guardian here – despite the headline most of the article isn’t about what Cameron should do, but about how the polls did.

Finally ComRes have an overview on their site, discussing possibilities like differential response and the need to identify likely voters more accurately.


674 Responses to “The polling post-mortem – some pollsters’ thoughts”

  1. I think that this is going to take a bit of sorting out

  2. Nobody mentions retired people, typical Conservative voters, who don’t have mobile phones or use computers, so that pollsters miss them out; they are an ever growing part of the population.

  3. Jonathan Frewen – all polls are weighted by age, so all contain the right proportion of older people. The reason is – alas – going to be more complicated than that…
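
    To illustrate what weighting by age does, here is a minimal sketch (the age bands and target shares are made up for illustration – this is not any pollster’s actual scheme):

    ```python
    # Toy cell weighting by age band. Targets are hypothetical population shares.
    from collections import Counter

    def age_weights(sample_ages, target_shares):
        """Weight each respondent so the sample's age profile matches the target."""
        counts = Counter(sample_ages)
        n = len(sample_ages)
        return {band: target_shares[band] / (counts[band] / n)
                for band in counts}

    # A sample that under-represents 65+ relative to a (hypothetical) target:
    sample = ["18-34"] * 40 + ["35-64"] * 45 + ["65+"] * 15
    target = {"18-34": 0.30, "35-64": 0.45, "65+": 0.25}

    w = age_weights(sample, target)
    # The 65+ respondents get weight 0.25 / 0.15 ≈ 1.67, so the weighted
    # sample contains the "right" proportion of older people even though
    # fewer of them were actually reached.
    ```

    Note that this only fixes the proportions: if the older people a pollster does manage to reach are unrepresentative of older people generally, weighting up their numbers cannot correct for that.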

  4. It’s obvious where the pollsters got it wrong – they just asked the wrong people.

  5. Were Ashcroft’s constituency polls more accurate (locally)?

    If so, perhaps it would be more sensible to do away with national polls. People are behaving differently by region and constituency.

    If Cameron sent out a letter to people in Con marginals, but not in safe seats, polling won’t account for people changing their minds late on.

    How can pollsters factor in tactical offensives by parties, weeks and months before (or even if ) they happen? I don’t think they can.

  6. @AW

    Thanks for the reply (previous post). I’m happy to stand corrected.

  7. Anthony

    ” (and he raises the pessimistic possibility that sampling techniques may no longer be up to delivering decent samples)”

    Any idea how that will affect the more important part of pollsters’ work – convincing commercial clients that you can accurately measure the effects of their branding exercises?

    Were I in charge of a research budget for a large company, I might currently be searching around for alternative approaches.

  8. May I say, on behalf of the many people I know, how disappointing it is to be blocked by Mike Smithson for making comments on his Twitter page, after his immediate boast about getting the Liberal lost deposits right and nothing about getting anything else wrong (this was on May 7th/8th). Thank you, as we can’t say anything directly. I noticed his followers went down by hundreds – how many were blocked?!

  9. Survation should be fined for burying that more accurate poll!

  10. I hope a poll is coming out tonight

  11. All pretty unconvincing stuff so far isn’t it?

  12. Trust everyone on here – and in the polling companies – knows how serious this is.
    Credibility is zero; no one will take opinion polls seriously until the next national elections.
    All the gear and no idea. Come back Bob Worcester – no computer but much more accurate.

  13. Perhaps a truly random sample has become tougher to achieve. It is also possible that the electorate have become more “sophisticated” in their response to polls. Is there an element of gaming involved here? I have never been randomly polled by phone in my 32 years as a voter, but I have self-selected to participate in online polls. I don’t really understand how a meaningful poll can be achieved when there is a significant proportion of Don’t Knows, nor can I see how weighting “probability to vote” based on prior participation can usefully be achieved. Moreover, allocating DKs on the basis of prior voting seems a bit too ad hoc. There is clearly nothing that pollsters can do about late swings. As a layman, I await the report with interest.
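
    The “allocating DKs on the basis of prior voting” adjustment mentioned above can be sketched roughly like this (a simplified version of an ICM-style reallocation; the 50% fraction and the counts are illustrative, not ICM’s actual figures):

    ```python
    # Rough sketch: add a fraction of each party's past voters who now say
    # "Don't Know" back to that party's total before re-percentaging.
    def reallocate_dks(stated, dks_by_past_vote, fraction=0.5):
        adjusted = dict(stated)
        for party, n_dk in dks_by_past_vote.items():
            adjusted[party] = adjusted.get(party, 0) + fraction * n_dk
        return adjusted

    stated = {"Con": 340, "Lab": 330, "LD": 80, "Other": 150}
    dks = {"Con": 40, "Lab": 20, "LD": 30}   # Don't Knows grouped by 2010 vote

    adj = reallocate_dks(stated, dks)
    # Con gains 20, Lab gains 10, LD gains 15 before re-percentaging.
    ```

    The ad hoc element the commenter worries about is visible in the code: the fraction reallocated is a modelling choice, not something measured directly from the data.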

  14. I suspect the lost Survation poll was not an accurate poll, but luckily on the money by random chance.

    Don’t forget a broken clock is right twice a day.

  15. JASPER 22.
    Rallings and Thrasher: do you know what they said about the GE? I think they used to be good, basing their numbers on local election results.

  16. I think it was a few things that tripped the pollsters up:

    1) Shy Tories. I expect they still exist, and all the bombardment from the media about cuts to the disabled and suchlike had an effect that we have all seen before.

    2) Asking the wrong people. Probably way off the mark, but I think Labour supporters are much more proactive and engaged in politics; Conservative voters strike me as not the sort of people to bother with polling companies, and a large part of the YouGov sample was riddled with a Labour bias.

    3) Pollsters tinkering with their own polls if they looked too out of line with other pollsters – everyone was afraid to be the odd one out.

    Like I say, I’m probably way off, but I expect we will never know.

  17. @ CMJ

    Anyone who looks at the tables of that poll would come to the same conclusion: under that sampling methodology, it was a wide outlier.

  18. The summary statement in ICM’s analysis suggests something quite serious:

    “The British Polling Council inquiry will no doubt uncover much to discuss, but this paper’s simple analysis of our final poll supports a nagging fear that has been present for some time: whether or not telephone (but this also very much applies to online) data collection techniques remain capable of delivering a representative sample which can be modelled into an accurate election prediction”

    Which reminds me of this article I read a few days ago

    http://www.spectator.co.uk/features/9520412/what-opinion-polls-feel-like-from-the-other-end-of-the-line/

    That included this comment

    “Towards the end of every shift the upper age categories are closed and there is a mad scramble to get enough 18- to 44-year-olds and go home. I receive a group memo. ‘Those of you coding pensioners as 18–24. We know what you are doing. If they sound elderly when we listen in we will go back and check!’”

    I can imagine finding 18-24s is especially difficult with constituency polling, when you are trying to find 1,000 voters in a constituency of only 70-80k and only 20% answer the phone. I was wondering how they managed to do that.

    It looks like finding people who admitted they voted Lib Dem in 2010 was also very difficult.

    But that would only apply to phone polls. Online pollsters, especially YouGov, have a massive panel with recorded 2010 votes, so it is easy to find those folk.

  19. @ Bluebob

    Could be, and these are plausible, but I still think that the collapse of the Lib Dems caused a hell of a lot of sampling problems, especially as the regional churns seem to be very different.

    I’m bothered by the crossover on policy issues in March 2014 (in YouGov) and then the stuck VIs from October.

  20. I’d expect to find a combination of these:
    – UKIP coming home to the Tories (not stating correct VI)
    – Late Lab-to-UKIP switchers (not stating correct VI)
    – Lib protest voters abandoning their stated VI (away from Lib, but not to Lab)
    – Undecideds being a large group of each of the above

    But this hinges on me having blind faith in the pollsters’ methodology being flawless (and the respondents being dishonest). It fits better in my partisan universe than other theories.

  21. I have not read all of these post-mortems, but are any of them taking into account the possibility that the polls in the final two days (showing Labour closing the small Tory lead) may not only have been wrong but may also have affected either people’s VI or their decision as to whether or not to vote?

    It is plausible to suggest that the ‘real’ VI on Wednesday was Con 35, Lab 32, rather than the 34/34 reported by the polls, but that the 34/34 polls actually caused the eventual outcome to be 37/31 as ‘bluekippers’ et al panicked at the thought of Red Ed and voted Tory instead of other parties or not voting at all.

    If the final polls had reported 35/32, maybe the outcome would have been 35/32 or some difference in outcome that could have meant 5 to 10 marginals not going Tory.

    Just a thought.

  22. The information the electorate have is changing and will continue to change dramatically. Has the amount of tactical voting changed? Voters use of internet, phone land lines, newspapers has definitely changed in last 5 years.
    Changing the polling methodologies to get the right result in this election could well result in an incorrect forecast in 5 years time.
    Difficult times.

  23. My two main theories are:

    Undecided voters breaking decidedly towards the Tories.
    Quite a lot of Bluekippers backing the Tories, and a late swing of Labour voters to UKIP (as it would appear that UKIP support didn’t really drop much from what the final polls were saying).

    One thing that some pollsters (especially YouGov) can be quite pleased with is how accurately they predicted UKIP VI as this was expected to be a big challenge bearing in mind their huge spike in support etc.

  24. @ Laszlo

    “I’m bothered by the crossover on policy issues in March 2014 (in yougov) and then stuck VIs from October.”

    Yes, agreed. I wrote here before the election that issues showing big movement were having no effect on VI; it just did not sit right that nothing was changing.

  25. Oh, sorry, to develop my point above. The small difference in election outcome that could conceivably (I would actually say probably) have been caused by the faulty tied polls last Wednesday could well have big consequences.

    I went to bed at 11.30pm on election night as I had work in the morning, so I went to sleep thinking it was going to be another coalition. When I awoke to find a small Tory majority, my first thought was ‘OMG, the Tories have won too well – it is going to be the mid-1990s all over again’.

  26. @billywhitehurst

    I think there was definitely some sort of recursive effect – the existing error in the polls pushed the actual result even FURTHER out. The hung parliament scenario was all over the media. Everyone was saying it’s the tightest election ever.

    So this isn’t as simple as looking at methodology. To account for the full difference between polls and reality, you have to look at how the polls changed real VI.

    It’s a real mess.

  27. What has annoyed me is the faith all political parties are putting into polls.
    Many constituencies were not defended or attacked because one party thought it had already lost it and the other the opposite.

  28. @statgeek

    There have been problems with Ashcroft’s polls as well. They were reasonably accurate in Scotland, but in England they painted too rosy a picture of the Con v Lab battles for Labour (as with the national polls) and the SVI results were closer to the mark than (headline) CVI for the Con v LD seats.

    ComRes have kind of talked up their regional polls (particularly the Con v LD and Con v Ukip polls), but also admit they were too optimistic about Labour in Scotland and in the Con / Lab marginals.

  29. @Bluebob

    Generally, the way politicians use polls makes bad politics, in my very humble opinion.

    I’d rather a party work out what it wants to do, follow that instinct, and argue their point up and down the land. Instead, they float an idea, watch the polls, consult focus groups….

    We end up getting the politics we deserve. We are poorer for it.

  30. Labour pollster claiming their internal data showed crossover last Autumn.

    http://www.bbc.co.uk/news/uk-politics-32606713

  31. Also there might be too much focus on “shy Tories” in the discussion on here.

    Equally as important as shy Tories is the Labour overstatement. The polls had Labour on 34%, 35%.

    The outcome was 31%.

    That’s a big gap

  32. Democracy
    I agree with your comments. I also agree with those who say “it will take some sorting out”. It is wrong in both directions.

  33. I wonder if response bias could also be a problem? We could see the UKIPpers had to be significantly downweighted, correctly as it now turns out, because they were more keen to respond to polls, and that was across the board.

    I was listening to John Mann on BBC news earlier explaining what he found on the doorstep in a very frank interview. His conclusion – people didn’t like the Tories, but they also didn’t like Ed or Labour’s message, so they voted for the status quo.

    I think that was a recurring theme over the last few years, there was no enthusiasm for either party, because there was no vision and such negative campaigns.

    But there were some Labour voters who were as enthusiastic as the UKIPpers. The disabled, and those hit by the bedroom tax and other benefit changes, were as animated as the UKIPpers. So maybe they were over-represented in the polls? I.e. the 2010 Labour voters who responded to the polls wanted to send a message that they were unhappy with the cuts, so they agreed to answer the poll, and voted Labour; the 2010 Labour voters who didn’t really care, and who swapped to the Tories when they voted, didn’t respond to the poll as they had no message they wanted to send.

    I have been polled twice by the local council. I asked what each poll was about. One covered local services, so it allowed me to have my say about the parking situation here and the expensive leisure centre, so I said OK. The next one wanted to poll me on health services, and as I had nothing I wanted to say about health, I said no thanks.
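
    The differential-response idea in this comment can be made concrete with a toy simulation (all the shares and response rates below are invented): if one party’s supporters are keener to answer polls, the raw sample overstates that party even when the demographics are exactly right.

    ```python
    # Toy simulation of differential response bias. Shares and rates invented.
    import random

    random.seed(1)
    TRUE_SHARES = {"Con": 0.38, "Lab": 0.31, "Other": 0.31}
    RESPONSE_RATE = {"Con": 0.10, "Lab": 0.15, "Other": 0.10}  # Lab keener to respond

    # Build a population with the true shares, then let each person decide
    # whether to answer the poll according to their party's response rate.
    population = [party for party, share in TRUE_SHARES.items()
                  for _ in range(int(share * 100_000))]
    sample = [party for party in population
              if random.random() < RESPONSE_RATE[party]]

    polled_shares = {party: sample.count(party) / len(sample)
                     for party in TRUE_SHARES}
    # Labour's polled share lands several points above its true 31%, purely
    # because its supporters answered more often.
    ```

    Demographic weighting cannot see this kind of skew, because the over-keen respondents look demographically identical to the ones who stayed silent.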

  34. So far everyone is talking about methodology, which is of course important, but there is another issue that has received less attention: the almost total absence of control on when polls can be published. The mistakes in the polls also raise this issue, because there is at least a chance that the figures they were showing had a material impact on the outcome, i.e. fear of some form of Labour/SNP government was exaggerated, causing more people to vote Tory than would otherwise have been the case. Given this, and the clear questions that now have to be asked about the reliability of any future polling, I wonder whether we should move to an arrangement under which polls are banned in a specified period (say a week) before election day. I appreciate that this is not a perfect solution to the problem, but it would at least add a little more uncertainty in the final week, discouraging people from voting on the basis of what they think the result will be.

  35. @ James

    It is interesting, but by YouGov’s measures immigration wasn’t an issue. The Con advantage kept coming down. Now it could be a UKIP effect, of course (and indeed the others were higher than the Conservatives by the autumn, while Labour was 4-7% behind the Conservatives …).

  36. I know a lot of the trouble in polling is in identifying swing voters and trying to pin down who they are most likely to support. It’s just that maybe UKIP-Lab supporters are not necessarily the same sort of voters as Con-Lab ones.

    Perhaps pollsters should ask voters which party is their 2nd choice?

  37. Robert Newark
    ‘Yes of course a majority of 12 is slim, but remember it wasn’t just by-elections that Major suffered from, it was his MPs joining other parties as well. Lose a by-election and you have lost one seat; if somebody crosses the floor the effect is double that.’

    I think you need to recalculate. When a Government loses a seat at a by election the Opposition gains a seat so that its majority declines by two!
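
    The arithmetic is easy to check (ignoring the Speaker and any vacancies): a government’s majority is its own seats minus everyone else’s, so one seat changing hands, whether by a by-election loss or by an MP crossing the floor, moves the majority by two.

    ```python
    # Working majority = government seats minus all other seats.
    def majority(gov_seats, total_seats=650):
        return gov_seats - (total_seats - gov_seats)

    print(majority(331))   # 12
    print(majority(330))   # 10: losing one seat cuts the majority by two
    ```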

  38. @billywhitehurst.

    I agree with the thrust of what you are saying.

    The pollsters need to recognise the active role they play in our democracy. With the benefit of hindsight, we had quantity but not quality. To me, there needs to be more openness about the assumptions they are making (such that they are accessible not just to polling geeks), so that anyone, whether party strategist or humble voter, understands the risk they take if they accept a poll at face value.

  39. Tune into Newsnight

    It’s all about polls, Labour talking about early crossover, and Kellner is there

  40. I suggested on the last thread to AW that some more polling is needed!!!

    First, poll all the previously polled online YG respondents to find out how many changed at the last minute. At present we have no idea about these sorts of developments, and one has to strike while the memories are still green.

  41. @NewForestRadical

    Oddly, I was saying exactly the same thing to my wife at the weekend. Other countries do it. A week is too short though; I’d stop polling for the duration of the “short campaign” (3 to 4 weeks).

    This would have to apply to private as well as public polls, as otherwise leaks happen; and the internet, of course, can’t be controlled.

  42. Graham

    What a duh brain I am. It’s late. Time to go to bed I think.

  43. I actually used to work as phone interviewer for a market research company. We were paid per interview (although with the minimum wage floor as well). Difficult jobs resulted in a bonus as well. Zero hours contract. (Okay as I was a student at the time).

    Obviously the more interviews you did, the more money you earned.

    One crafty old worker realised that positive interviews took less time than ones which led to lots of complaints. He could anticipate which would be which by the way people reacted when he called. On the easy jobs, he would tell the complainers he’d call back and stick to the positive ones. This would distort the results. No one shopped him (up the workers) ;-)

    How interviews are carried out and how worker incentives work might be worth investigating.

  44. @Omni
    “Also there might be too much focus on “shy Tories” in the discussion on here.
    Equally as important as shy Tories is the Labour overstatement. The polls had Labour on 34%, 35%.
    The outcome was 31%.
    That’s a big gap”

    Survation’s ballot paper polling showed Labour around 31% on a couple of occasions in the last week. And of course their only foray into phone ballot paper polling was pretty spot on.

  45. Strauss tells Pietersen England career over – BBC.

  46. I’ve noticed that pollsters don’t like to admit that they might actually be affecting future opinion with their poll results.

  47. @AW

    “Jonathan Frewen – all polls are weighted by age, so all contain the right proportion of older people. The reason is – alas – going to be more complicated than that”

    Must a poll that contains the right proportions of elderly people contain the different kinds of elderly people in the right proportions?

  48. @James – that BBC article is fascinating.

    I can’t see any reason why Labour would not be truthful about their internal polling, and their general observations about the pattern of the polls ‘feel’ right. People on UKPR did feel Labour were sliding in autumn 2014, but also that they had a good start to the campaign itself. Then came the SNP scare tactic, which many of us felt to be highly effective, but which didn’t seem to shift the public polls much.

    The main points made seem to be:
    1) Online polling is fine
    2) Changing sampling method wasn’t a cause of the divergence between the public polls and Labour’s internal polling.
    3) The key difference seems to be in public polls asking VI first, when the internal polls went through issues before VI.

    This matches with numbers reported about a Lab/SNP pact being the least favoured option for the government.

    If Labour’s analysis is correct, it is encouraging in that it may mean that sampling and weighting techniques themselves aren’t flawed, but we need to move to more in depth surveys. This is bad news for telephone pollsters, but at least the article suggests online panels work well enough as a sampling technique.

  49. As this article, written before the election, shows, Labour’s canvass returns were warning that they were losing, and more so in the target seats. The point about canvassing is that it aims (ideally) to contact most of the electorate rather than just a sample, has reasonable response rates if done at times when people are generally at home, and also focuses only on the registered electorate. So that supports the idea that the problems were down to (a) the difficulty of getting representative samples from phone and online polls and (b) polls, but not canvassers, contacting people who were not registered, suggesting a need for a question on registration.

    http://www.newstatesman.com/politics/2015/05/are-labour-losing

  50. @newforestradical

    The argument for not publishing polls in the pre-election period is that in some sense it allows a “purer” response from the electorate, by withholding information of the sort that the parties and some others are likely to have access to. However, in a FPTP system voters commonly, as is their right, prefer to use their vote against someone rather than for someone. Polling information helps them decide whether to use their vote this way and how to cast it.

    Even without polling, there is differential access to information within the electorate in other ways. Compare and contrast the position of a voter in a (very) safe constituency and one in a marginal. I have lived in both types. In a safe constituency, I can confidently vote for my favoured candidate (whether it is the certain winner or one of the certain losers), knowing that I will not affect the seat or national outcome but will help the vote share of my favoured party. However, in a marginal I may choose to vote for my 2nd-preference party A to stop party B, whom I dislike, rather than voting for my 1st-preference party. No one seems to have any great moral objection to this behaviour, and indeed all parties encourage it to some degree, especially in tight contests.

    This most recent election was effectively a marginal at national scale. If (and it is still an if) the vote moved toward the Tories because some of the electorate did not want a hung parliament, that would not be an unfair outcome if the decision was made on accurate information. The legitimate concern is that the electorate were reacting to inaccurate information. However, unless you are a conspiracy theorist, the inaccuracy is due to error rather than deception.

    Consider the alternative where no polling data is available, but party X puts out deliberately misleading information that a hung parliament is likely, to scare soft support back into their camp, and party Y puts out conflicting misleading information that in fact party X has a comfortable lead. As a voter, who do you believe, and what do you do? Without independent polling the electorate is robbed of the opportunity to make an informed choice.

    The case for withholding polling information is not a simple one.
