A few people have asked me whether there is a spreadsheet of the general election results available, so they can crunch the numbers and explore the results themselves. Until now I’ve been using results scraped off the BBC website, but the British Election Study team have now released a data set of the election results for download here.


Stephen Bush over at the New Statesman has written an interesting article about the mountain that faces Labour at the next election. I’ve now had a chance to sit down and play with the election results, and the picture is as bleak for Labour as Stephen paints – for various reasons, the electoral system has now tilted against Labour in the same way it was tilted against the Conservatives at the last few elections.

Looking at how the vote was distributed at the general election, the Conservatives should, on a uniform swing, be able to secure a majority on a lead of about six points; Labour would need a lead of almost thirteen points. On an equal share of the vote – 34.5% apiece – the Conservatives would have almost fifty seats more than Labour, and Labour would need a lead of about four points over the Conservatives just to become the largest party in a hung Parliament. The way the cards have fallen, the system is now even more skewed against Labour than it was against the Conservatives.
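For anyone who wants to check these numbers against the new results file, a uniform swing projection is easy enough to run yourself. Here is a minimal sketch, assuming a list of constituency vote shares loaded from the data set; the party keys below are illustrative rather than the file’s actual column headers.

```python
# Minimal sketch of a uniform national swing (UNS) projection:
# apply the same Con/Lab shift to every constituency and re-count winners.
# `results` is assumed to be a list of dicts of vote shares per seat.

def project_uniform_swing(results, swing_to_con):
    """Apply the same Con/Lab swing everywhere and count seats won.

    A swing of 1 point means Con +1 and Lab -1 in every seat (the
    conventional two-party swing), so the Con lead moves by 2 points.
    """
    parties = ("con", "lab", "ld", "ukip", "snp", "grn", "other")
    seats = {}
    for seat in results:
        shares = {p: seat.get(p, 0.0) for p in parties}
        shares["con"] += swing_to_con
        shares["lab"] -= swing_to_con
        winner = max(parties, key=lambda p: shares[p])
        seats[winner] = seats.get(winner, 0) + 1
    return seats

# e.g. sweep swings to find the smallest Con lead yielding 326 of 650 seats:
# for s in range(0, 100):
#     if project_uniform_swing(results, s / 10).get("con", 0) >= 326: ...
```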

How did this happen? It’s probably a mixture of three factors. One is the decline of the Liberal Democrats and of tactical voting – one of the reasons the electoral system had worked against the Tories in recent decades was that Labour and Lib Dem voters had been prepared to vote tactically against the Tories, and the Lib Dems held lots of seats in areas that would otherwise be Tory. Those factors have vanished. At the same time, the new dominance of the SNP in what was a Labour heartland has tilted the system against Labour: Labour had a nine-point lead over the Conservatives in Scotland, but Labour and the Conservatives got the same number of Scottish seats because the SNP took almost all of them.

Finally, there is how the swing was distributed at this election. Overall there was virtually no swing at all between Labour and the Conservatives across Great Britain, but underneath this there were variations. In the Conservative-held target seats that Labour needed to gain there was a swing towards the Conservatives (presumably because most of these seats were being defended by first-time Conservative incumbents). In the seats Labour already held there was a swing towards Labour – in short, Labour won votes in places where they were of no use, piling up extra votes in seats they already held.

[Chart: how the Labour/Conservative swing varied by type of seat]
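To reproduce that kind of breakdown from the results file, you can compute the conventional two-party (Butler) swing per constituency and average it within each category of seat. A minimal sketch, with illustrative variable names – `res10` and `res15` would hold the 2010 and 2015 vote shares per seat, and `category` a label for each seat:

```python
# Mean Con/Lab swing by type of seat, e.g. "Lab target" vs "Lab held".
# All of the container names here are illustrative assumptions.

def butler_swing(before, after):
    """Two-party swing to Con: average of Con gain and Lab loss, in points."""
    return ((after["con"] - before["con"]) + (before["lab"] - after["lab"])) / 2

by_category = {}
for seat, cat in category.items():
    by_category.setdefault(cat, []).append(butler_swing(res10[seat], res15[seat]))

for cat, swings in sorted(by_category.items()):
    print(f"{cat}: mean swing to Con {sum(swings) / len(swings):+.1f} pts")
```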

And, of course, these figures are on current boundaries. Any boundary review is likely to follow the usual pattern of reducing the number of seats in northern cities, where there is a relative decline in population, and increasing the number of seats in the south, where the population is growing… further shifting things in the Conservatives’ favour.



We don’t have any more information on how the British Polling Council’s review of the election polls will progress, beyond it being chaired by Pat Sturgis, but several pollsters have given some thoughts today beyond the initial “we got it wrong and we’ll look at it” statements most pollsters put out on Friday. Obviously no one has reached any conclusions yet – there’s a lot of data to go through, and we need thoughtful analysis and solutions rather than jumping at the first possibility that raises its head – but they are all interesting reads:

Peter Kellner of YouGov has written an overview here (a longer version of his article in the Sunday Times at the weekend), covering some of the potential causes of error like poor sampling, late swing and “shy Tories”.

Martin Boon of ICM has written a detailed deconstruction of ICM’s final poll, which would have been an interesting piece anyway in terms of giving a great overview of how the different parts of ICM’s methodology come together to turn the raw figures into the final headline voting intention (VI). Martin concludes that all of ICM’s techniques seemed to make the poll more accurate, but the sample itself seemed to be at fault (and he raises the pessimistic possibility that sampling techniques may no longer be up to delivering decent samples).
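For readers unfamiliar with how those adjustments stack up, here is a purely illustrative sketch of the kind of chain that turns raw responses into a headline VI figure. The steps, weights and field names below are generic assumptions of mine, not ICM’s actual method:

```python
# Illustrative adjustment chain from raw responses to headline VI.
# Everything here (field names, the 0.5 reallocation fraction) is a
# generic assumption for illustration, not any pollster's real recipe.

def headline_vi(respondents):
    """respondents: dicts with 'weight' (demographic weight, assumed
    already computed), 'vi' (party name or None for don't know),
    'ltv' (0-10 stated likelihood to vote) and 'past_vote' (recalled
    previous vote, if any)."""
    totals = {}
    for r in respondents:
        w = r["weight"] * r["ltv"] / 10          # turnout weighting
        if r["vi"] is not None:
            totals[r["vi"]] = totals.get(r["vi"], 0.0) + w
        elif r.get("past_vote"):
            # reallocate a fraction of don't-knows to their previous party
            p = r["past_vote"]
            totals[p] = totals.get(p, 0.0) + 0.5 * w
    total = sum(totals.values())
    return {p: round(100 * w / total, 1) for p, w in totals.items()}
```

The point of Martin’s piece is that each link in a chain like this seemed to pull ICM’s figures in the right direction – it was the raw sample feeding into it that was off.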

Andrew Cooper of Populus has written an article in the Guardian here – despite the headline, most of the article isn’t about what Cameron should do, but about how the polls did.

Finally ComRes have an overview on their site, discussing possibilities like differential response and the need to identify likely voters more accurately.


I’ve just got back from the BBC after working all night (you may have seen my bald spot sat just to the left of Emily Maitlis’s big touchscreen last night) and am about to go and put my feet up and have a rest – I’ll leave other thoughts on the election until later in the weekend or next week, but a few quick thoughts about the accuracy of the polls.

Clearly, they weren’t very accurate. As I write there is still one result to come, but so far the GB figures (as opposed to the UK figures!) are CON 38%, LAB 31%, LDEM 8%, UKIP 13%, GRN 4%. Ten of the final eleven polls had the Conservatives and Labour within one point of each other, so essentially everyone underestimated the Conservative lead by a significant degree. More importantly, in terms of perceptions of polling, they told the wrong story – when I was writing my preview of the election I wrote about how an error in the Scottish polling wouldn’t be seen so negatively because there’s not much difference between “huge landslide” and “massive landslide”. This was the opposite – there is a whole world of difference between polls showing a hung Parliament on a knife edge and polls showing a Tory majority.

Anyway, what happens now is that we go away and try and work out what went wrong. The BPC have already announced an independent inquiry to try and identify the causes of error, but I expect individual companies will be digging through their own data and trying to work out what went wrong too. For any polling company, there inevitably comes a time when you get something wrong – the political make-up, voting drivers and cleavages of society change, and how people relate to surveys changes. Methods that work at one election don’t necessarily work forever, and sooner or later you get something wrong. I’ve always thought the mark of a really good pollster is someone who puts their hands up to the error, says they’ve messed up and then goes and puts it right.

In terms of what went wrong this week, we obviously don’t know yet, and I certainly wouldn’t want to rush to any hasty conclusions before properly looking at all the data. There are some things I think we can probably flag up to start with, though:

The first is that there is something genuinely wrong here. For several months before the election the polls consistently showed Labour and the Conservatives roughly neck-and-neck. There were individual polls showing larger Conservative or Labour leads, and some companies tended to show a small Labour lead or a small Conservative lead, but no company consistently showed anything even approaching a seven-point Conservative lead. The difference between the polls and the result was not just random sampling error; something was wrong.

I don’t think it was a late swing either. YouGov did a re-contact survey on the day and found no significant evidence of one. I think Populus and Ashcroft did some on-the-day work too (though I don’t know if it was a call-back survey), so other evidence may come to light as the inquiry progresses, but I’d be surprised if any survey found enough people changing their minds between Wednesday and Thursday to create a seven-point lead.

Mode effects don’t seem to be the cause of the error either, as the final polls conducted online and the final polls conducted by telephone produced virtually identical figures in terms of the Labour/Conservative lead (though, as I said on Wednesday, they were different on UKIP). In fact, having a similar error in both telephone and online polls is evidence against some other possibilities too – unless, by freakish coincidence, unrelated problems with online and telephone polling produced almost identical errors, things that affect only one mode are unlikely to have been the cause. For example, if the problem was caused by more people using mobile phones, it shouldn’t have affected online polls. If the problem was caused by panel effect, it shouldn’t have affected phone polls.

Beyond that there are some obvious areas to look at. Given that the pre-election polls were wrong but the exit poll was right, how pollsters measure likelihood to vote is definitely worth looking at (exit polls obviously don’t have to worry about likelihood to vote – they only interview people physically leaving a polling station). I think differential response rates are worth examining (“shy voters”… though I think enthusiastic voters are just as much of a risk!), and the make-up of samples is obviously a major factor in the accuracy of any poll.
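To show what differential response could do, here is a toy simulation: even drawing from a perfectly known electorate, if one party’s supporters are slightly less willing to take part, the raw sample skews. Every number in it is made up purely for illustration.

```python
import random

# Toy simulation of differential response. The 'true' vote shares and
# the per-party response rates below are invented for illustration only.
TRUE_SHARES = {"con": 38, "lab": 31, "other": 31}
RESPONSE_RATE = {"con": 0.25, "lab": 0.30, "other": 0.28}

def simulate_poll(n=2000):
    """Draw people from the 'true' electorate, but let willingness to
    respond vary by party; return the resulting raw sample shares."""
    sample = []
    while len(sample) < n:
        party = random.choices(list(TRUE_SHARES),
                               weights=list(TRUE_SHARES.values()))[0]
        if random.random() < RESPONSE_RATE[party]:
            sample.append(party)
    return {p: round(100 * sample.count(p) / n, 1) for p in TRUE_SHARES}

print(simulate_poll())  # Con comes out noticeably below its 'true' 38%
```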

And of course, it might be something completely unrelated to these things that hasn’t crossed our minds yet. Time will tell, but first some sleep.


Most pollsters produced their final polls last night, ready to go in the first edition of whichever paper commissioned them. Today we have the final few companies – Ipsos MORI, who do polling for the Evening Standard so always publish on election day itself, and Populus and Ashcroft, who do their polls of their own accord, so didn’t have to finish in time for a print deadline last night. We also have the final figures from ICM, who put out interim figures for the Guardian yesterday but then continued fieldwork into the evening.

  • Lord Ashcroft’s final poll has topline figures of CON 33%, LAB 33%, LDEM 10%, UKIP 11%, GRN 6%. Full tabs are here.
  • Ipsos MORI have final figures of CON 36%, LAB 35%, LDEM 8%, UKIP 11%, GRN 5%. Full details are here.
  • Populus have final figures of CON 33%, LAB 33%, LDEM 10%, UKIP 14%, GRN 5%. Tabs are here.
  • Finally ICM have published their final figures for the Guardian. Yesterday’s interim numbers were 35-35, today’s final figures shift only slightly to CON 34%, LAB 35%, LDEM 9%, UKIP 11%, GRN 4%. Tabs are here.

I said on Tuesday I’d revisit my final prediction in light of the final polls. My earlier prediction was based on Con and Lab being neck and neck, so no change there. The final few Scottish polls have shown slightly smaller leads for the SNP – between 20% and 23% – so while Labour are still neck-and-neck nationally, perhaps they are doing a little better in Scotland and a little worse in England than I predicted. We shall see.

As was the picture yesterday, all the polls are essentially showing a neck-and-neck race – they’ll either all be about right, or all be wrong. The only company showing a gap of more than one point between Conservative and Labour is Panelbase, who have a two-point Labour lead. Over the past few weeks there has been some comment on an apparent difference between phone polls and internet polls – whether phone polls were showing a Conservative lead and online polls were not. If this ever was a pattern, rather than just coincidence, it’s not present in the final results: the average for the final telephone polls is CON 34.5%, LAB 34.3%; the average for the final online polls is CON 33.0%, LAB 33.0%. You’ll note that online polls have both Labour and Conservative lower – that’s because there is a significant difference between the pollsters on how well they think UKIP will do. Telephone pollsters all have UKIP on 11–12%, but online pollsters vary, from 12% at YouGov, Opinium and BMG right up to 16% at Survation and Panelbase.
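If you want to reproduce that comparison yourself, here is a quick sketch, seeded only with the final polls quoted in this post – the remaining final polls would need to be filled in, and I’ve marked the gap rather than guess at figures:

```python
# Average the final polls by fieldwork mode. Only the polls quoted in
# this post are listed; the rest of the final polls still need adding.
final_polls = [
    {"pollster": "Lord Ashcroft", "mode": "phone",  "con": 33, "lab": 33},
    {"pollster": "Ipsos MORI",    "mode": "phone",  "con": 36, "lab": 35},
    {"pollster": "ICM",           "mode": "phone",  "con": 34, "lab": 35},
    {"pollster": "Populus",       "mode": "online", "con": 33, "lab": 33},
    # ... remaining final polls
]

for mode in ("phone", "online"):
    polls = [p for p in final_polls if p["mode"] == mode]
    con = sum(p["con"] for p in polls) / len(polls)
    lab = sum(p["lab"] for p in polls) / len(polls)
    print(f"{mode}: CON {con:.1f}%, LAB {lab:.1f}%")
```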

And, that’s it. The next poll will be the broadcasters/NOP/MORI poll at 10pm. I’ll be working on the BBC election coverage through the night so won’t be posting any analysis here overnight, but feel free to stay and chat in the comments section if you want. In the meantime, good luck to all standing and campaigning. Good luck to all pollsters on getting it right. And good luck to those poor souls who keep or lose their jobs tonight based on a public vote.