I’ve just got back from the BBC after working all night (you may have seen my bald spot sat just to the left of Emily Maitlis’s big touchscreen last night) and am about to go and put my feet up and have a rest – I’ll leave other thoughts on the election until later in the weekend or next week, but a few quick thoughts about the accuracy of the polls.

Clearly, they weren’t very accurate. As I write there is still one result to come, but so far the GB figures (as opposed to the UK figures!) are CON 38%, LAB 31%, LDEM 8%, UKIP 13%, GRN 4%. Ten of the final eleven polls had the Conservatives and Labour within one point of each other, so essentially everyone underestimated the Conservative lead by a significant degree. More importantly in terms of perceptions of polling it told the wrong story – when I was writing my preview of the election I wrote about how an error in the Scottish polling wouldn’t be seen so negatively because there’s not much difference between “huge landslide” and “massive landslide”. This was the opposite – there is a whole world of difference between polls showing a hung Parliament on a knife edge and polls showing a Tory majority.

Anyway, what happens now is that we go away and try to work out what went wrong. The BPC have already announced an independent inquiry to try to identify the causes of error, but I expect individual companies will also be digging through their own data. For any polling company, there inevitably comes a time when you get something wrong – the political make-up, voting drivers and cleavages of society change, and how people relate to surveys changes. Methods that work at one election don’t necessarily work forever, and sooner or later you get something wrong. I’ve always thought the mark of a really good pollster is someone who puts their hands up to the error, says they’ve messed up, and then goes and puts it right.

In terms of what went wrong this week, we obviously don’t know yet – certainly I wouldn’t want to rush to any hasty conclusions before properly looking at all the data. There are some things I think we can probably flag up to start with, though:

The first is that there is something genuinely wrong here. For several months before the election the polls consistently showed Labour and the Conservatives roughly neck-and-neck. Individual polls showed larger Conservative or Labour leads, and some companies tended towards a small Labour or small Conservative lead, but no company consistently showed anything even approaching a seven-point Conservative lead. The difference between the polls and the result was not just random sampling error – something was wrong.
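To put a rough number on that point, here is a minimal simulation – the sample size of 1,000 and the "true" shares of Con 38% / Lab 31% are assumptions chosen for illustration, not any pollster's actual design – showing how rarely pure sampling noise would shrink a genuine seven-point lead to one point or less in a single poll:

```python
import random

random.seed(42)

# Assumed for illustration only: "true" shares of roughly Con 38% /
# Lab 31%, and independent simple random samples of n = 1,000 per poll.
CON, LAB = 0.38, 0.31
N = 1_000
POLLS = 5_000

def simulate_lead():
    """Draw one poll of N respondents; return the Con-Lab lead in points."""
    con = lab = 0
    for _ in range(N):
        r = random.random()
        if r < CON:
            con += 1
        elif r < CON + LAB:
            lab += 1
    return 100 * (con - lab) / N

leads = [simulate_lead() for _ in range(POLLS)]

# How often does sampling error alone cut a true 7-point lead to <= 1 point?
share_small = sum(1 for lead in leads if lead <= 1) / POLLS
print(f"polls showing a lead of 1 point or less: {share_small:.1%}")
```

Under these assumptions only around one simulated poll in a hundred shows a lead of one point or less, so ten of the eleven final polls doing so simultaneously is essentially impossible by chance alone – the error has to be systematic rather than random.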

I don’t think it was a late swing either. YouGov did a re-contact survey on the day and found no significant evidence of one. I think Populus and Ashcroft did some on-the-day work too (though I don’t know if it was a call-back survey), so as the inquiry progresses other evidence may come to light, but I’d be surprised if any survey found enough people changing their minds between Wednesday and Thursday to create a seven-point lead.

Mode effects don’t seem to be the cause of the error either, as the final polls conducted online and the final polls conducted by telephone produced virtually identical figures for the Labour/Conservative lead (though as I said on Wednesday, they differed on UKIP). In fact, having a similar error in both telephone and online polls is evidence against some other possibilities too – unless by freakish coincidence unrelated problems with online and telephone polling produced almost identical errors, it means things that only affect one mode are unlikely to have been the cause. For example, if the problem was caused by more people using mobile phones, it shouldn’t have affected online polls; if the problem was caused by panel effects, it shouldn’t have affected phone polls.

Beyond that there are some obvious areas to look at. Given that the pre-election polls were wrong but the exit poll was right, how pollsters measure likelihood to vote is definitely worth looking at (exit polls obviously don’t have to worry about likelihood to vote – they only interview people physically leaving a polling station). Differential response rates are also worth examining (“shy voters”… though I think over-enthusiastic voters are just as much of a risk!), and the make-up of samples is obviously a major factor in the accuracy of any poll.
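As a toy illustration of why likelihood-to-vote adjustments matter, here is a minimal sketch – the respondents, the 0–10 likelihood scale, and the simple proportional weighting are all invented for the example; real turnout models are considerably more sophisticated – showing how down-weighting respondents by their stated likelihood of voting can flip a raw lead:

```python
# All data here is invented for illustration; real likelihood-to-vote
# models are considerably more sophisticated than this proportional scheme.
respondents = [
    {"party": "con", "likelihood": 10},  # certain to vote
    {"party": "con", "likelihood": 9},
    {"party": "lab", "likelihood": 10},
    {"party": "lab", "likelihood": 5},   # might stay at home
    {"party": "lab", "likelihood": 3},
]

def weighted_shares(people):
    """Weight each response by stated likelihood (0-10) and normalise."""
    totals = {}
    for p in people:
        totals[p["party"]] = totals.get(p["party"], 0.0) + p["likelihood"] / 10
    grand = sum(totals.values())
    return {party: t / grand for party, t in totals.items()}

# Raw shares (everyone counted equally) vs turnout-weighted shares.
raw = weighted_shares([{**p, "likelihood": 10} for p in respondents])
adjusted = weighted_shares(respondents)
print(raw)       # Lab ahead, 3 responses to 2
print(adjusted)  # Con edges ahead once turnout weighting is applied
```

The point of the sketch: if one party's supporters are systematically less likely to turn out, a raw sample that looks like a Labour lead can conceal a Conservative lead among actual voters – which is exactly why the likelihood filter is one of the first places to look.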

And of course, it might be something completely unrelated to these things that hasn’t crossed our minds yet. Time will tell, but first some sleep.


710 Responses to “Back from the election”

  1. “addressing the issues is much easier before a viable alternative for your vote share emerges and starts to devour your base”

    Never were truer words spoken, as once-great political parties the world over have learnt to their cost.

  2. I doubt if the right could “pollute” the sample population sufficiently across so many pollsters with different pools.

    I think it more likely that a social category used in pool construction is actually two groups with different views, e.g. the working class splitting into Labour voters and red-kippers. The demise of UKIP may very well mask that difference again, but it is there to be exploited.

    In the longer term, a trend away from Labour can continue, e.g. the older generation Asian vote Labour but that is not their natural home by cultural values. I think those sentiments are in decline in the next generation.

    Society is steadily fragmenting because shared experience is shrinking – TV is balkanised, the local church is no longer the social centre, the uni experience of the Russell Group student is entirely different from that of the bottom-of-table post-92, etc. We seek entertainment in increasingly diverse ways. Information is drawn online from a wide range of sources which can differ greatly between individuals. There are more groups that only have limited contact with each other. Sampling opinion accurately in such an environment is going to be a hell of a job.

  3. New Lib Dem members apparently over 4,000 since Thursday evening?!

    I think that is an increase of about 10% in the membership in two days!

  4. Fewmet
    Good points. Lots of pubs closing as well. That doesn’t help social cohesion.

  5. I think Angela Eagle could be a good Labour leadership contender.

  6. UKIP won control of its first UK council on Thursday. They took overall control of Thanet district council from Labour, increasing their seats on the local authority from 2 to 33.

    Will be interesting to see how this first test of forming an administration plays out. It is a key stage for any party that aspires to greater things than just being a protest movement.

  7. A failure of this magnitude by the polling companies is a disaster and seriously calls into question the validity of a pure statistical approach, and not just with regard to polling. There was an enormous level of arrogance involved. The fact is, the “shy Tory” effect was well known and predictable (I did in fact predict it, and some made big money out of it at the bookies). It isn’t a factor that shows up with any consistency and isn’t reducible to a formula – but it was pretty easy to figure out that a bunch of people were going to lie about voting Tory this time around.
    Remember this the next time some statistician is pontificating about oil prices, the global economy or whatever: they just don’t know, because the universe is a more complex place than they imagine.

  8. New thread (quite a few hours ago)

  9. While the polls did get the Tory and Labour figures wrong, they appear to have been spot on with the Lib Dem and UKIP polling.

    Even the Tory and Lab figures were not far outside the margin of error, surely?

    Could it be that many people, when sat with pencil in hand, just decided the risk of a Lab/SNP government was too much and could not bring themselves to vote for a candidate that might help that happen?

  10. Flap Zappa, if the polling methodology were correct the errors would be roughly equally split between one side and the other. When the errors (almost) all go one way there is a systemic problem, even if they fall within the sampling error for an individual poll.
