I think I can assume everyone reading this is already aware there will likely be an early general election on the 8th June. There will be lots of polling ahead, but here are a few initial thoughts:

The overall polling position is a strong lead for the Conservative party. There is some variation between polling companies, but all the polls are showing robust Conservative leads; most are showing extremely strong leads up in the high teens, with a few breaking twenty. As the polls currently stand (and, obviously, there are seven weeks to go) a Conservative majority looks very, very likely. The size of it is a different matter – the twenty-one point lead in the recent YouGov, ICM and ComRes polls would produce a majority in excess of a hundred; a nine point lead like the one in the Opinium poll at the weekend would only see a small increase in the Tory majority.

It’s harder to tell from the polls how well the Liberal Democrats will do. The swing between Labour and the Conservatives will normally give us a relatively good guide to the outcome between those two parties. The Lib Dems are a trickier question – the polls generally show them increasing their support, and this has been more than backed up by local by-elections. How that translates into seats is a more difficult question. My guess is that their support will be concentrated in areas that voted Remain, and the Lib Dems have a history of very effective constituency campaigning, so I would expect them to do better in terms of seats than raw swing calculations would suggest.

The election will be a test of the degree to which pollsters have corrected the problems of 2015. The BPC inquiry into what went wrong at the general election concluded that the main problem was with sampling. Polling companies have reacted to that in different ways – some have adopted new quotas or weighting mechanisms to try to ensure their polls have the correct proportions of non-graduates and of people who have little interest in politics; others have instead concentrated on turnout, moving their models to ones based upon respondents’ age and social class rather than just how likely they say they are to vote; some have switched from telephone to online (and some have done all of these!). The election will be a chance to see whether these changes have been enough to stop the historical overestimation of Labour support, or indeed whether they’ve gone too far and resulted in a pro-Tory skew. I’ll look in more detail at the different methodological approaches during the campaign.
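To make the contrast between those two turnout approaches concrete, here is a rough sketch. The turnout rates, respondents and party shares below are all invented for illustration, and no pollster’s actual model is anywhere near this simple:

```python
# Illustrative sketch only: contrast a self-reported turnout filter with a
# demographic turnout model that weights respondents by (hypothetical)
# historical turnout rates for their age band.

HISTORICAL_TURNOUT = {  # made-up turnout probabilities by age band
    "18-24": 0.45, "25-49": 0.60, "50-64": 0.72, "65+": 0.80,
}

respondents = [  # invented respondents, not real polling data
    {"vote": "Con", "age_band": "65+",   "stated_likelihood": 9},
    {"vote": "Lab", "age_band": "18-24", "stated_likelihood": 9},
    {"vote": "Lab", "age_band": "25-49", "stated_likelihood": 6},
    {"vote": "Con", "age_band": "50-64", "stated_likelihood": 8},
]

def shares(weighted):
    """Convert party weight totals into percentage shares."""
    total = sum(weighted.values())
    return {party: round(100 * w / total, 1) for party, w in weighted.items()}

# Model A: weight each respondent by stated likelihood to vote (0-10 scale).
stated = {}
for r in respondents:
    stated[r["vote"]] = stated.get(r["vote"], 0) + r["stated_likelihood"] / 10

# Model B: weight by historical turnout for the respondent's age band.
demographic = {}
for r in respondents:
    demographic[r["vote"]] = (demographic.get(r["vote"], 0)
                              + HISTORICAL_TURNOUT[r["age_band"]])

print(shares(stated))       # self-reported turnout model
print(shares(demographic))  # demographic turnout model
```

With these invented figures the demographic model gives the Conservatives a bigger share than the self-reported one, because older respondents (who lean Tory here) are historically likelier to actually vote than they are to claim – which is exactly the kind of shift the new models are designed to capture.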

Elections that look set to produce a landslide result may bring their own problems – in 1983 and 1997 (both elections where polling relied mostly on face-to-face fieldwork, so not necessarily relevant to today’s polling methodologies) the polls substantially overstated the victorious party’s lead.

The local elections will still go ahead at the start of May, part way through the campaign. It’s been a long time since that happened – in recent decades general elections have normally been held on the same day as the local elections – but it’s not unprecedented. In 1983 and 1987 the local elections were in May and the general elections followed in June. Notably, they were really NOT a good predictor of the general election a month later. Comparing the Rallings & Thrasher estimates for the local elections in those years with the subsequent general elections: in the 1983 local elections the Conservatives were ahead by 3 points… they won the general election the next month by 14 points. In the 1987 local elections the Conservatives were ahead by 6 points; in the general election a month later they were ahead by 11 points. In both cases the SDP-Liberal Alliance did much better in the locals than in the general a month later. In short, when the local elections happen in May and Labour aren’t 20 points behind, don’t get all excited/distraught about the polls being wrong… people just vote differently in local elections. It may well give the Lib Dems a nice boost during the campaign though.

The Fixed Term Parliament Act probably ended up a bit of a damp squib. There is still a vote to be had tomorrow and the government still need two-thirds of all MPs so it’s not quite all tied up, but as things stand it appears to have been much less of an obstacle than many expected. There was no need for a constructive vote of no confidence, the opposition just agreed to the election. The problem with the two-thirds provision was always the question of whether it would be politically possible for an opposition to say no to a general election.

The boundary changes obviously won’t go ahead in time for the general election, but that does not mean they won’t happen. The legislation governing the boundary reviews doesn’t say they happen each Parliament, but that they happen every five years. Hence, unless the government changes the rules to bring them back into line with the election cycle, the review will continue, will still report in 2018, but will now first be used at the next general election in 2022. An increased Tory majority would probably make the boundary changes more likely to go ahead – getting them through Parliament always looked slightly dodgy with a small majority.


A few people have asked me if I know where there is a spreadsheet of the general election results available so they can crunch the numbers and explore results themselves. Until now I’ve been using results scraped off the BBC website, but the British Election Study team have now released a data set of the election results for download here.



Stephen Bush over at the New Statesman has written an interesting article about the mountain that faces Labour at the next election. I’ve now had a chance to sit down and play with the election results, and the picture is as bleak for Labour as Stephen paints – for various reasons, the electoral system has now tilted against Labour in the same way it was tilted against the Conservatives at the last few elections.

Looking at how the vote was distributed at the general election, the Conservatives should, on a uniform swing, be able to secure a majority on a lead of about 6%. Labour would need a lead of almost thirteen points. On an equal share of the vote – 34.5% apiece – the Conservatives would have almost fifty more seats than Labour, and Labour would need a lead of about four points over the Conservatives just to get the most seats in a hung Parliament. The way the cards have fallen, the system is now even more skewed against Labour than it was against the Conservatives.
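For anyone who wants to crunch this sort of projection themselves, the uniform swing calculation is simple to sketch. The seat figures below are invented toy numbers, not real constituency results:

```python
# Minimal uniform national swing (UNS) sketch with made-up toy seats.
# A uniform swing applies the same change in vote share to every seat;
# each toy seat here holds just (Con share, Lab share) for simplicity.

toy_seats = [  # hypothetical (con_share, lab_share) pairs, in percent
    (55, 35), (52, 38), (48, 42), (45, 45), (41, 49), (38, 52), (30, 60),
]

def seats_won(seats, swing_to_con):
    """Apply a uniform two-party swing (half off one party's share, half
    onto the other's) and count the seats won by each party."""
    con = lab = 0
    for c, l in seats:
        c2, l2 = c + swing_to_con, l - swing_to_con
        if c2 > l2:
            con += 1
        elif l2 > c2:
            lab += 1
    return con, lab

print(seats_won(toy_seats, 0))  # no swing
print(seats_won(toy_seats, 3))  # 3-point swing to the Conservatives
```

Run over the full set of real constituency results rather than toy data, this is essentially how “Labour needs a lead of about four points just to get the most seats” is derived: sweep the swing value and find where the seat totals cross.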

How did this happen? It’s probably a mixture of three factors. One is the decline of the Liberal Democrats and of tactical voting – part of the reason the electoral system worked against the Tories in recent decades was that Labour and Lib Dem voters were prepared to vote tactically against the Tories, and the Lib Dems held lots of seats in areas that would otherwise be Tory. Those factors have vanished. At the same time, the new dominance of the SNP in what was a Labour heartland has tilted the system against Labour: Labour had a 9% lead over the Conservatives in Scotland, but Labour and the Conservatives ended up with the same number of Scottish seats because the SNP took almost all of them.

Finally, there is how the swing was distributed at this election. Overall there was virtually no swing at all between Labour and the Conservatives across Great Britain, but underneath this there were variations. In the Conservative-held target seats that Labour needed to gain there was a swing towards the Conservatives (presumably because most of these seats were being defended by first-time Conservative incumbents). In the seats that Labour already held there was a swing towards Labour – in short, Labour won votes in places where they were of no use to them, piling up useless votes in seats they already held.
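For reference, the conventional two-party (“Butler”) swing used in this sort of analysis is just half the change in the Conservative lead over Labour between two elections. A quick sketch with invented vote shares:

```python
# Two-party ("Butler") swing: half the change in the Con-Lab gap
# between two elections. The figures below are invented purely to
# show the arithmetic, not real constituency results.

def butler_swing(con_then, lab_then, con_now, lab_now):
    """Positive = swing to the Conservatives, in percentage points."""
    return ((con_now - con_then) - (lab_now - lab_then)) / 2

# A hypothetical Con-held marginal: Con up 2, Lab up 1 -> 0.5pt to Con.
print(butler_swing(40, 38, 42, 39))

# A hypothetical safe Labour seat: Con down 1, Lab up 3 -> 2pt to Lab.
print(butler_swing(30, 50, 29, 53))
```

Computed seat by seat like this, the pattern described above shows up as positive swings in the Con-held marginals and negative swings in Labour’s own seats, even while the national figure nets out to roughly zero.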

[Chart: how the swing to/from Labour was distributed across seats]

And, of course, these figures are on current boundaries. Any boundary review is likely to follow the usual pattern of reducing the number of seats in northern cities, where there is a relative decline in population, and increasing the number of seats in the south, where the population is growing… further shifting things in the Conservatives’ favour.


We don’t have any more information on how the British Polling Council’s review of the election polls will progress beyond it being chaired by Pat Sturgis, but several pollsters have given some thoughts today beyond the initial “we got it wrong and we’ll look at it” statements most pollsters put out on Friday. Obviously no one has come to any conclusions yet – there’s a lot of data to go through, and we need thoughtful analysis and solutions rather than jumping at the first possibility that raises its head – but they are all interesting reads:

Peter Kellner of YouGov has written an overview here (a longer version of his article in the Sunday Times at the weekend), covering some of the potential causes of error like poor sampling, late swing and “shy Tories”.

Martin Boon of ICM has written a detailed deconstruction of ICM’s final poll, which would have been an interesting piece anyway in terms of giving a great overview of how the different parts of ICM’s methodology come together to turn the raw figures into the final headline VI. Martin concludes that all of ICM’s techniques seemed to make the poll more accurate, but the sample itself seemed to be at fault (and he raises the pessimistic possibility that sampling techniques may no longer be up to delivering decent samples).

Andrew Cooper of Populus has written an article in the Guardian here – despite the headline most of the article isn’t about what Cameron should do, but about how the polls did.

Finally ComRes have an overview on their site, discussing possibilities like differential response and the need to identify likely voters more accurately.


I’ve just got back from the BBC after working all night (you may have seen my bald spot sat just to the left of Emily Maitlis’s big touchscreen last night) and am about to go and put my feet up and have a rest – I’ll leave other thoughts on the election until later in the weekend or next week, but a few quick thoughts about the accuracy of the polls.

Clearly, they weren’t very accurate. As I write there is still one result to come, but so far the GB figures (as opposed to the UK figures!) are CON 38%, LAB 31%, LDEM 8%, UKIP 13%, GRN 4%. Ten of the final eleven polls had the Conservatives and Labour within one point of each other, so essentially everyone underestimated the Conservative lead by a significant degree. More importantly, in terms of perceptions of polling, they told the wrong story – when I was writing my preview of the election I wrote about how an error in the Scottish polling wouldn’t be seen so negatively, because there’s not much difference between “huge landslide” and “massive landslide”. This was the opposite – there is a whole world of difference between polls showing a hung Parliament on a knife edge and polls showing a Tory majority.

Anyway, what happens now is that we go away and try to work out what went wrong. The BPC have already announced an independent inquiry to try to identify the causes of error, but I expect individual companies will be digging through their own data and trying to work out what went wrong too. For any polling company there inevitably comes a time when you get something wrong – the political make-up of society, its voting drivers and cleavages, and how people relate to surveys all change. Methods that work at one election don’t necessarily work forever, and sooner or later you get something wrong. I’ve always thought the mark of a really good pollster is someone who puts their hands up to the error, says they’ve messed up and then goes and puts it right.

In terms of what went wrong this week, we obviously don’t know yet; certainly I wouldn’t want to rush to any hasty conclusions before properly looking at all the data. There are some things I think we can flag up to start with though:

The first is that there is something genuinely wrong here. For several months before the election the polls were consistently showing Labour and the Conservatives roughly neck-and-neck. There were individual polls showing larger Conservative or Labour leads, and some companies tended to show a small Labour or a small Conservative lead, but no company consistently showed anything even approaching a seven point Conservative lead. The difference between the polls and the result was not just random sampling error; something was wrong.

I don’t think it was a late swing either. YouGov did a re-contact survey on the day and found no significant evidence of this. I think Populus and Ashcroft did some on-the-day research too (though I don’t know if it was a call-back survey), so as the inquiry progresses other evidence may come to light, but I’d be surprised if any survey found enough people changing their minds between Wednesday and Thursday to create a seven point lead.

Mode effects don’t seem to be the cause of the error either, as the final polls conducted online and the final polls conducted by telephone produced virtually identical figures in terms of the Labour/Conservative lead (though, as I said on Wednesday, they were different on UKIP). In fact, having a similar error with both telephone and online polls is evidence against some other possibilities too – unless, by freakish coincidence, unrelated problems with online and telephone polling produced almost identical errors, it means things that only affect one mode are unlikely to have been the cause. For example, if the problem was caused by more people using mobile phones, it shouldn’t have affected online polls. If the problem was caused by panel effect, it shouldn’t have affected phone polls.

Beyond that there are some obvious areas to look at. Given that the pre-election polls were wrong but the exit poll was right, how pollsters measure likelihood to vote is definitely worth looking at (exit polls obviously don’t have to worry about likelihood to vote – they only interview people physically leaving a polling station). I think differential response rates are worth examining (“shy voters”… though I think enthusiastic voters are just as much of a risk!), and the make-up of samples is obviously a major factor in the accuracy of any poll.

And of course, it might be something completely unrelated to these things that hasn’t crossed our minds yet. Time will tell, but first some sleep.