The day after the election the British Polling Council announced it was going to hold an inquiry into what went wrong with the polls, and we’ve now got some more information about how the inquiry will proceed. Over on the National Centre for Research Methods website they have announced the membership of the inquiry team, the timings and the terms of reference.
The Chairman of the inquiry, Pat Sturgis, was announced earlier this month. The rest of the team includes several names regular readers will be familiar with: Steve Fisher of Oxford University, who ran the ElectionEtc model and worked on the exit poll, as did Jouni Kuha of the LSE; Will Jennings of Southampton University, who is part of the Polling Observatory team; Jane Green of Manchester University, the current Director of the British Election Study; and Ben Lauderdale of the LSE, who built the ElectionForecast model featured on Newsnight and 538. Completing the inquiry team are Nick Baker of Quadrangle Research, Mario Callegaro of Google and Patten Smith of Ipsos MORI.
The terms of reference for the inquiry are to assess the accuracy of the 2015 polls and investigate the causes of any inaccuracy, including whether it is connected to inaccuracy at previous elections; to look into the possibility of herding; to consider whether enough information was provided and communicated to people about how polls were done and what they meant; and to make recommendations on how polls are conducted and published in the future and on the rules and obligations of the BPC.
The inquiry is inviting written submissions via its website, and there will be a public meeting on the 19th June – it’s due to report to the BPC and MRS by the 1st March next year.
Opinion polls are a little light at the moment, and probably will be for the next few weeks. Even at the best of times there is little polling in the weeks immediately following a general election – we’ve just had an actual general election to judge people’s voting behaviour, attention is elsewhere and newspapers will generally have blown their polling budgets in the campaign. I’d expect even less polling over the next few weeks because of the errors in the polls at the general election. Some of the long running trackers like the ICM/Guardian series and MORI political monitor will likely continue just to avoid a gap in the data series, but generally speaking most of the regular polls will probably pause for a bit while they work out what went wrong and sort out solutions to it.
As it is, the next political events we have to look forward to aren’t about Great Britain anyway, but the Scottish, Welsh and London elections next year – I’m sure polling on them will start firing up in the next few months. The other, more immediate, race is the Labour leadership election.
We have had a little polling on that already – the YouGov/Sunday Times poll at the weekend (results here) asked the general public their preferences for Labour leader. Chuka Umunna came first on 17% (fieldwork was conducted before he withdrew), followed by Andy Burnham on 14%, Yvette Cooper on 8%, Tristram Hunt on 3%, Liz Kendall on 2% and Mary Creagh on 1%. Amongst Labour’s own voters Andy Burnham was ahead on 22%, with Chuka Umunna on 19%.
Obviously the key conclusion here isn’t really who is ahead… it’s how low everyone’s figures are. 55% of the general public said don’t know, and 40% of Labour voters did too. YouGov also asked separately whether people thought each of the contenders would make a good or bad leader, and in each case a clear majority of respondents said they didn’t know or didn’t know enough about the person to say. This is a race where the public simply aren’t familiar enough with the candidates to have any clear opinion yet. That’s not necessarily a bad thing for the next Labour leader – the public having no clear image of you is better than carrying negative baggage – it just means they need to be pretty careful to make sure people’s first impressions are good ones, as impressions are difficult to shift once the public have formed them.
On the other outstanding issue – what caused the polling error – I’m beavering away looking at what went wrong and how to put it right, as I am sure are the other companies. I’m not planning on giving a running commentary, though I gave some thoughts at the end of last week on Keiran Pedley’s Polling Matters podcast here.
A few people have asked me if I know where there is a spreadsheet of the general election results available so they can crunch the numbers and explore results themselves. Until now I’ve been using results scraped off the BBC website, but the British Election Study team have now released a data set of the election results for download here.
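For anyone who wants to crunch the numbers themselves, here’s a minimal sketch of reading a constituency-level results file in Python. The filename and the “winner” column are hypothetical placeholders – check the actual BES download for its real layout before relying on this:

```python
import csv

def seat_counts(path):
    """Count seats won per party from a one-row-per-constituency CSV.

    Assumes a hypothetical 'winner' column naming the winning party;
    adjust the column name to match the real data set.
    """
    counts = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            party = row["winner"]
            counts[party] = counts.get(party, 0) + 1
    return counts

# usage: seat_counts("results_2015.csv")
```

From there it’s easy to extend the same loop to vote shares, majorities or regional breakdowns.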
Stephen Bush over at the New Statesman has written an interesting article about the mountain that faces Labour at the next election. I’ve now had a chance to sit down and play with the election results, and the picture is as bleak for Labour as Stephen paints – for various reasons, the electoral system is now tilted against Labour in the same way it was tilted against the Conservatives at the last few elections.
Looking at how the vote was distributed at the general election, the Conservatives should, on a uniform swing, be able to secure a majority on a lead of about six points; Labour would need a lead of almost thirteen. On an equal share of the vote – 34.5% apiece – the Conservatives would have almost fifty seats more than Labour, and Labour would need a lead of about four points over the Conservatives just to get the most seats in a hung Parliament. The way the cards have fallen, the system is now even more skewed against Labour than it was against the Conservatives.
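For the curious, the uniform swing calculation behind figures like these is simple to sketch. The seat shares below are made-up illustrations rather than real constituencies – the point is just the mechanics: apply the same swing to every seat and re-count the winners:

```python
def project_seats(seats, swing):
    """Apply a uniform two-party swing (in points, positive = to Con)
    to every seat and count which party comes first in each."""
    tally = {"Con": 0, "Lab": 0}
    for con, lab in seats:
        con_new = con + swing   # Con gains the swing...
        lab_new = lab - swing   # ...Lab loses it
        tally["Con" if con_new > lab_new else "Lab"] += 1
    return tally

# Three hypothetical two-party seats as (Con %, Lab %)
seats = [(45.0, 40.0), (38.0, 42.0), (35.0, 48.0)]

print(project_seats(seats, 0.0))  # no swing: Lab holds two of the three
print(project_seats(seats, 2.5))  # a 2.5-point swing flips one marginal
```

Run over all 650 real results, this is the calculation that tells you what lead each party needs for a majority.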
How did this happen? It’s probably a mixture of three factors. One is the decline of the Liberal Democrats and of tactical voting – one of the reasons the electoral system had worked against the Tories in recent decades was that Labour and Lib Dem voters had been prepared to vote tactically against the Tories, and the Lib Dems held lots of seats in areas that would otherwise be Tory. Those factors have vanished. A second is the new dominance of the SNP in what was a Labour heartland, which has tilted the system against Labour: Labour had a nine-point lead over the Conservatives in Scotland, but the two parties got the same number of Scottish seats because the SNP took nearly all of them.
Finally there is how the swing was distributed at this election. Overall there was virtually no swing at all between Labour and Conservative across Great Britain, but underneath this there was variation. In the Conservative-held target seats that Labour needed to gain there was a swing towards the Conservatives (presumably because most of these seats were being defended by first-time Conservative incumbents). In the seats that Labour already held there was a swing towards Labour – in short, Labour won votes where they were of no use, piling them up in seats the party already held.
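The conventional two-party (“Butler”) swing underlying that analysis is just the average of the Conservative gain and the Labour loss in each seat. A quick sketch, using made-up vote shares to illustrate the pattern described above:

```python
def butler_swing(old, new):
    """Two-party (Butler) swing in points; positive = swing to Con.

    'old' and 'new' are (Con %, Lab %) pairs for the same seat at
    successive elections.
    """
    (con0, lab0), (con1, lab1) = old, new
    return ((con1 - con0) - (lab1 - lab0)) / 2

# Hypothetical before/after shares for two kinds of seat:
con_target = ((40.0, 35.0), (42.0, 34.0))  # Con-held marginal
lab_held   = ((30.0, 45.0), (29.0, 48.0))  # safe Labour seat

print(butler_swing(*con_target))  # positive: swing to the Conservatives
print(butler_swing(*lab_held))    # negative: swing to Labour
```

Averaging this per group of seats (marginals vs. safe seats) is how you see swing going one way in the battleground and the other way in the heartlands, even when the national swing is near zero.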
And, of course, these figures are on current boundaries. Any boundary review is likely to follow the usual pattern of reducing the number of seats in northern cities, where population is in relative decline, and increasing the number of seats in the south, where the population is growing… further shifting things in the Conservatives’ favour.
We don’t have any more information on how the British Polling Council’s review of the election polls will progress beyond it being chaired by Pat Sturgis, but several pollsters have today given some thoughts beyond the initial “we got it wrong and we’ll look at it” statements most put out on Friday. Obviously no one has come to any conclusions yet – there’s a lot of data to go through, and we need thoughtful analysis and solutions rather than jumping at the first possibility that raises its head – but they are all interesting reads:
Peter Kellner of YouGov has written an overview here (a longer version of his article in the Sunday Times at the weekend), covering some of the potential causes of error like poor sampling, late swing and “shy Tories”.
Martin Boon of ICM has written a detailed deconstruction of ICM’s final poll, which would have been an interesting piece anyway in terms of giving a great overview of how the different parts of ICM’s methodology come together to turn the raw figures into the final headline voting intention. Martin concludes that all of ICM’s adjustments seemed to make the poll more accurate, but the sample itself seemed to be at fault (and he raises the pessimistic possibility that sampling techniques may no longer be up to delivering decent samples).
Andrew Cooper of Populus has written an article in the Guardian here – despite the headline most of the article isn’t about what Cameron should do, but about how the polls did.
Finally ComRes have an overview on their site, discussing possibilities like differential response and the need to identify likely voters more accurately.