Today the polling inquiry under Pat Sturgis presented its initial findings on what caused the polling error. Pat himself, Jouni Kuha and Ben Lauderdale all went through their findings at a meeting at the Royal Statistical Society – the full presentation is up here. As we saw in the overnight press release, the main finding was that unrepresentative samples were to blame, but today’s meeting put some meat on those bones. Just to be clear, when the team said unrepresentative samples they didn’t just mean the sampling part of the process: they meant the samples pollsters end up with as a result of their sampling AND their weighting – it’s all interconnected. With that out of the way, here’s what they said.

Things that did NOT go wrong

The team started by quickly going through some areas that they have ruled out as significant contributors to the error. Any of these could, of course, have had some impact, but if they did it was only minor. The team investigated and dismissed postal votes, falling voter registration, overseas voters and question wording/ordering as causes of the error.

They also dismissed some issues that had been more seriously suggested. The first was differential turnout reporting (i.e., Labour supporters overestimating their likelihood to vote more than Conservative supporters): in the vote validation studies the inquiry team did not find evidence to support this, suggesting that if it was an issue it was too small to be important. The second was the mode effect – ultimately, whether a survey was done online or by telephone made no difference to its final accuracy. This finding met with some surprise from the audience, given there were more phone polls showing Tory leads than online ones. Ben Lauderdale of the inquiry team suggested that was probably because phone polls had smaller sample sizes and hence more volatility, so they spat out more unusual results… but that the average lead in online polls and the average lead in telephone polls were not that different, especially in the final polls.
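
As a rough illustration of the sample size point (my own back-of-the-envelope sketch, not the inquiry’s), treating a poll as a simple random sample, the standard error of the Con–Lab lead grows as the sample shrinks:

    # Rough sketch (my own illustration, not the inquiry's): standard error
    # of the Con-Lab lead in a simple random sample of size n.
    from math import sqrt

    def lead_se(p_con, p_lab, n):
        # Var(pC_hat - pL_hat) = (pC + pL - (pC - pL)**2) / n for a multinomial sample
        return sqrt((p_con + p_lab - (p_con - p_lab) ** 2) / n)

    for n in (2000, 1000, 500):
        se = lead_se(0.34, 0.34, n)  # two parties roughly tied on 34%
        print(f"n={n}: SE of lead = {100 * se:.1f} pts, 95% interval ~ +/-{196 * se:.1f} pts")

Real polls are quota samples with weighting, so the true design effect differs, but the direction is the same: smaller phone samples throw out more outliers without changing the average.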

On late swing the inquiry said the evidence was contradictory. Six companies had conducted re-contact surveys, going back to people who had completed pre-election surveys to see how they actually voted. Some showed movement, some did not, but on average they showed a movement of only 0.6% to the Tories between the final polls and the result, so late swing can only have made a minor contribution at most. People deliberately misreporting their voting intention to pollsters was also dismissed – as Pat Sturgis put it, if those people had told the truth after the election it would have shown up as late swing (it did not), and if they had kept on lying it should have affected the exit poll, BES and BSA as well (it did not).

Unrepresentative Samples

With all those things ruled out as major contributors to the poll error, the team were left with unrepresentative samples as the most viable explanation. In terms of positive evidence for this they looked at the differences between the BES and BSA samples (done by probability sampling) and the pre-election polls (done by variations on quota sampling). This wasn’t a recommendation to use probability sampling: while the inquiry didn’t make recommendations at this stage, Pat did rule out any recommendation that polling switch wholesale to probability sampling, recognising that the cost and timing would be wholly impractical, and that the BES and BSA had been wrong in their own ways rather than being perfect solutions.

The two probability-based surveys were, however, useful as comparisons to pick up possible shortcomings in the samples. For example, the pre-election polls that provided precise age data for respondents all had skews within age bands: specifically, within the oldest band there were too many people in their 60s and not enough in their 70s and 80s. The team agreed with the suggestion that samples were too politically engaged – in their investigation they looked at likelihood to vote, finding most polls had samples that were too likely to vote, and didn’t show the correct contrast between young and old turnout. They also found samples didn’t have the correct proportions of postal voters among young and old respondents. They didn’t suggest all of these errors were necessarily related to why the figures were wrong, but that they were illustrations of the samples not being properly representative – and that ultimately led to getting the election wrong.

Herding

Finally the team spent a long time going through the data on herding – that is, polls producing figures that were closer to each other than random variation suggests they should be. On the face of it the narrowing looks striking – the penultimate polls had a spread of about seven points between the poll with the biggest Tory lead and the poll with the biggest Labour lead. In the final polls the spread was just three points, from a one point Tory lead to a two point Labour lead.

Analysing the polls earlier in the campaign, the spread between different polls was almost exactly what you would expect from a stratified sample (what the inquiry team considered the closest approximation to the politically weighted samples used by the polls). In the last fortnight the spread narrowed, though, with the final polls all close together. The reason for this seems to be methodological change – several of the polling companies made adjustments to their methods during the campaign or for their final polls (something that has been typical at past elections; companies often add extra adjustments to their final polls). Without those changes the polls would have been more variable… and less accurate. In other words, some pollsters did make changes in their methodology at the end of the campaign which meant the figures were clustered together, but they were open about the methods they were using and the changes made the figures LESS Labour, not more Labour. Pollsters may or may not, consciously or subconsciously, have been influenced in their methodological decisions by what other polls were showing. However, from the inquiry’s analysis we can be confident that any herding did not contribute to the polling error – quite the opposite: all those pollsters who changed methodology during the campaign were more accurate using their new methods.
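
For intuition about how much spread sampling noise alone should produce, here is a minimal simulation (my own sketch, using simple random sampling rather than the stratified benchmark the inquiry actually used): draw nine polls of 1,000 respondents from the same tied electorate and measure the gap between the most Tory and the most Labour lead.

    # Minimal sketch (my own, not the inquiry's analysis): the expected spread
    # of leads across nine simultaneous polls, from sampling noise alone.
    import random

    def simulated_lead(p_con=0.34, p_lab=0.34, n=1000):
        # Draw one poll of size n, return the Con-Lab lead in points.
        con = lab = 0
        for _ in range(n):
            r = random.random()
            if r < p_con:
                con += 1
            elif r < p_con + p_lab:
                lab += 1
        return 100 * (con - lab) / n

    random.seed(1)
    spreads = []
    for _ in range(1000):  # 1,000 simulated "election eves" of nine polls each
        leads = [simulated_lead() for _ in range(9)]
        spreads.append(max(leads) - min(leads))
    print(f"average max-min spread: {sum(spreads) / len(spreads):.1f} pts")

On these assumptions the typical spread comes out at around seven or eight points – in line with the penultimate polls, and well above the three-point spread of the final ones.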

For completeness, the inquiry also took everyone’s final data and weighted it using the same methods – they found a normal level of variation. They also took everyone’s raw data and applied the weighting and filtering the pollsters said they had used to see if they could recreate the same figures – the figures came out the same, suggesting there was no sharp practice going on.
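
For anyone unfamiliar with what “applying the weighting” means in practice, here is a toy version of the basic cell-weighting step (invented numbers, not any pollster’s actual targets): each respondent is weighted by the ratio of their demographic group’s population share to its share of the sample, and voting intention is then tallied with those weights.

    # Toy cell-weighting sketch (invented targets, not any pollster's method):
    # weight = population share of the respondent's cell / sample share of that cell.
    from collections import Counter

    sample = [("18-34", "Lab"), ("18-34", "Con"), ("65+", "Con"),
              ("65+", "Con"), ("65+", "Lab"), ("35-64", "Con")]
    targets = {"18-34": 0.28, "35-64": 0.46, "65+": 0.26}  # hypothetical shares

    counts = Counter(age for age, _ in sample)
    weights = {age: targets[age] / (counts[age] / len(sample)) for age in counts}

    weighted = Counter()
    for age, vi in sample:
        weighted[vi] += weights[age]
    total = sum(weighted.values())
    for party in sorted(weighted):
        print(f"{party}: {100 * weighted[party] / total:.1f}%")

Real schemes weight on many interlocking variables and then filter by turnout likelihood, but the inquiry’s check works at any level of complexity: re-running each pollster’s published scheme on their raw data should reproduce their published figures, and it did.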

So what next?

Today’s report wasn’t a huge surprise – as I wrote at the weekend, most of the analysis so far has pointed to unrepresentative samples as the root cause, and the official verdict is in line with that. In terms of the information released today there were no recommendations; it was just about the diagnosis. The inquiry will be submitting their full written report in March. It will have some recommendations on methodology – though no silver bullet – but with the diagnosis confirmed the pollsters can start working on their own solutions. Many of the companies released statements today welcoming the findings and agreeing with the cause of the error; we shall see what different ways they come up with to solve it.


224 Responses to “What the Polling Inquiry said”

  1. Carfrew
    Hmm. Not sure you’ll be able to find a suitable binary question for political polling. That was why I was asking for an example. If you’re suggesting that one should ask people a question but then classify their responses according to some algorithm one hasn’t disclosed then how do you convince your audience that your response mapping is accurate and 100 percent reliable?

    Even if – for argument’s sake – there were a physiological marker of VI and you could poll by taking finger-prick blood samples, the chances are your marker is still a continuous variable. Best you can hope for is a sharp change, so that almost all responses will fall one side or the other of your threshold.

    Or have I misunderstood you?

  2. @Sorbus

    Oh, I wasn’t saying it was easy to overcome the problem. I was just highlighting the nature of the problem, such that in other related fields they have to go the extra mile.

    Even trick questions would be awkward in polling because polling is repeated, it’s not a one-off experiment so peeps may know they’re coming and start second guessing.

    (Though at times, some of the questions already seem like trick questions…)

  3. @Sorbus

    I suppose what I’m really saying is that at the very least you need to test for the same thing in lots of different ways, some indirect, so you can build up a more reliable picture. If you want to know what someone really thinks about summat, maybe a single direct question is insufficient.

  4. @Sorbus

    And also, some of the time, pollsters seem keen to avoid questions that give flawed responses. You can see the point, but at the same time it can be useful to have more of those in there to check the validity of responses.

  5. @Sorbus

    Obviously I wasn’t suggesting pollsters visit people taking samples of chemical markers. I was pointing out the challenge that afflicts polling, in terms of overcoming misleading responses. It’s so bad that in other fields, they have to do that sort of thing, is the point…

  6. @Sorbus

    But everyone else seems to get what I mean so it’s OK…

  7. Roger Mexico
    ‘ Of the 1001 people ICM contacted 251 said they had voted Labour and only 242 Conservative[1]. But ICM do then weight their sample so it reflects the actual result of the election[2] among other things. Even after that ICM only show Conservatives just 4 voters ahead, before they get to LTV and all the other adjustments ICM use or have added since the election. And even after those the Conservatives have a 5 point lead, which is smaller than most online pollsters are showing.’

    I may be missing something here – though I am usually up to speed with psephological issues. The ICM sample was initially clearly too Labour in terms of how people had voted in 2015. Surely, though, the weighting process carried out by ICM should deal with that! Why is it at all problematic that after this process the Conservatives were only 4 voters ahead? Why would that not represent the real underlying picture before taking account of LTV etc?

  8. It’s worth pointing out that it’s not even clear that the problems with the polls can be analysed from the BES and BSA surveys. For example, both clearly found fewer UKIP voters than they should have done – presumably with UKIP voters over-represented in the nearly half of respondents who refused to take part. So the UKIP voters they did interview[1] may not be representative of all UKIP voters and may not be a good guide.

    There may also not be enough of them. BES will only have spoken to around 236 UKIP voters and BSA only 277. Given that many of these will have voted UKIP in 2010, not voted, voted for other parties etc.[2], the allocation of movement to UKIP from Con and Lab is probably based on under 150 voters in each. As such the numbers probably aren’t enough for these surveys to assure us that the UKIP vote came equally from Lab and Con, rather than much more from Con, as has been suggested as a solution to why the polls were wrong. It may indeed be the case, but the sub-samples aren’t big enough to confirm the hypothesis, even if we are sure they are representative of all UKIP voters.

    [1] BSA had 9.0% UKIP in a sample of 4328 of whom 71% voted (3073). BES had 10.7% in a sample of 2987 of whom 73.8% voted (2204). The actual UKIP GB vote was 12.9%. (All figures derived from Curtice BSA report).

    [2] Ashcroft’s immediate post-voting poll, based on 1672 UKIP voters (from a mixture of online and telephone):

    http://lordashcroftpolls.com/wp-content/uploads/2015/05/Post-vote-poll-GE-2015-150507-Full-tables.pdf#page=5

    gave the 2010 votes of these as: Con 37%, Lab 12%, Lib Dem 16%, UKIP 16%, Green/SNP/PC 2%, Other/DNV 18%. While there’s nothing to say that this is a more accurate sample than the BES/BSA ones (at 14% it got the UKIP percentage nearer, but was far worse on the Con lead), it suggests that a rule of thumb that half the UKIP vote came from parties other than Con and Lab might be a good one.
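
    For anyone checking, the subsample arithmetic in footnote [1] works out like this:

        # Reconstructing the UKIP subsample sizes in footnote [1]:
        for name, sample, voted, ukip in [("BSA", 4328, 0.71, 0.090),
                                          ("BES", 2987, 0.738, 0.107)]:
            voters = sample * voted
            print(f"{name}: {voters:.0f} voters, ~{voters * ukip:.0f} UKIP voters")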

  9. Would it be outrageous to suggest that the pollsters should just carry on as normal, and the rest of us can apply whatever rule of thumb we feel is applicable to the poll numbers to reflect “what usually happens”?

    If the polls say that Labour are 5 points ahead of the Tories a year before the next GE, I don’t see any problem with saying “but of course the polls generally overestimate Labour’s position, so they may not be ahead by much, if at all”.

    Hitherto, such comments have generally been met with responses like “This is a polling site, can you point to a poll that shows that?” or “What’s your evidence for that?”.

    Perhaps in the future we can be more sanguine about the polls.

  10. @Neil A

    Sure, it’s fine to say that. It may even be what happens.

    It’s just that, not being police peeps, we’re not all in a rush to go “case closed!!” and move on. It’s interesting wrestling with these tricky, quicksilver issues. Well, tricky for the likes of me anyways…

  11. Looking back at General Elections, final poll predictions overstated Labour in relation to the Tories in 1959, 1966, 1970, Oct 1974, 1979, 1987, 1992, 1997, 2001, 2005 and 2015.
    Labour was understated in relation to the Tories in Feb 1974 and 1983.
    The Labour–Tory gap was much as predicted in 1964 and 2010 (2005 was not far out either).
    Some allowance probably has to be made for the fact that several of the elections were landslides in the making, so that by polling day the result appeared to have become a foregone conclusion – perhaps discouraging some supporters of the party apparently miles ahead from turning out. This would apply to 1966, maybe Oct 1974 (where polls suggested a big Labour win), 1983, 1997 and 2001.

  12. “Perhaps in the future we can be more sanguine about the polls.”

    I think we need to be more sanguine about a lot of things. Not long before Christmas, the Chancellor and the Bank of England Governor were telling us the economy was going to grow, wages were going to outstrip inflation, the deficit would disappear and interest rates were about to rise.

    It’s not just the pollsters that get things wrong.

  13. Well, I’m waiting for the final report, because I have rarely seen such a flawed (at least presentation-wise) analysis publicised so widely (yes, I spent a lot of time scrutinising the slides tonight). There are no real hypotheses to test against the data – it is data-trawling and then speculating about the cause. There is really only a narrative, and data is presented selectively (and not much of it, by the way – there is no mention of statistical testing of the claims).

    It could have been an application of an anomaly-detection method (it seems to start as such), but it is clearly abandoned halfway through the presentation. It is unfortunate, as then there is no actionable recommendation – and so no chance of reliably testing the conclusions.

    It also leaves the door wide open to selective reporting, and to false positives and false negatives all over the place (as Roger Mexico pointed out using a different frame and different terminology).

    Ashcroft’s absence is troubling, as it is unlikely that he wouldn’t have the means to ensure appropriate sampling (plus constituency-level polling).

    There are a few things: those secondary indices… they were pro-Con. The polling companies have the individual-level data, so creating a proper multilevel correlation/regression should be feasible (and hence offering some assumptions). The other one is analysing the DKs – again against the secondary indices (e.g. if you really think that EM would be a disaster for the country, how could you be a DK?).

    It comes back to a fundamental question (raised in different ways in the last few days – and earlier by different people): what is really aggregated in the headline figures? The polling companies have the individual respondent data (hopefully) to evaluate it, but this seems to have been bypassed. Fisher’s curse?

  14. Roger Scully clearly knows the changes that YouGov are planning to roll out in Wales (also presumably Scotland too – maybe even in England, or parts of it).

    “recent changes by YouGov to their methodology – which should be rolled out in full for the first time in Wales with our next Barometer poll ”

    However, his main point is that, while Welsh polls are pretty accurate, Labour’s actual vote in Wales tends to underperform its polling – and especially so in low-turnout elections.

    “What I am saying is that the evidence seems to suggest that Welsh Labour have been having some problems getting their vote out in recent years; moreover, those problems seem to have been of greater magnitude in those elections that generally attract a lower voter turnout. Unless Labour can successfully address that problem before May, this year’s National Assembly election might just turn out to be a little more competitive than the polls would currently seem to indicate is probable.”

    http://blogs.cardiff.ac.uk/electionsinwales/2016/01/18/all-that-is-solid-2/

  15. Well here’s one thought.

    Those 5% of voters who’re hard to find for polling purposes (or at least, hard to poll) voted Tory last time. But that doesn’t prove they are overwhelmingly Tory, just that they voted Tory.

    It is possible (I’m not saying likely, mind) that this group will be much more likely to favour Corbyn and old style Labour. If you can’t ask them, or they won’t answer, then you can’t know.

    Even if they split 50-50 you have a tie and Labour are back in the game, possibly (probably?) even in power.

  16. @NickP

    I suspect that the hard to reach voters include the very elderly, who basically don’t answer the phone much, are unlikely to use the internet and are naturally suspicious of people knocking at their door.

    From years of canvassing experience, they are very reluctant to tell canvassers how they will vote either.

    I can’t see Corbyn getting any traction among this group.

  17. A big problem with the post-mortem research is that it does not seem to consider the impact of the polling results on the actual election result.

    Polling, especially in the 2015 election, was most clearly not a neutral act of reporting in terms of its *impact* on the final result. Both in terms of the political issues that were given prominence as a consequence of the hung parliament polls (SNP support, Miliband’s credibility and so on) and the more general energising of the grey vote, flawed polling had a significant impact on the result. To suggest otherwise is to embrace positivist naivety in its crudest form and to suspend the experience of being human during an election campaign where, it was obvious, frontline sentiments from doorstep canvassing did not in any way, shape or form match what the polls were saying.

    My fear is that the post-mortem research is, intentionally or not, a self-serving retrospective by an industry that needs to reassert its value and relevance: by improving our methodology through statistically valid sampling we can – and will – provide a better service in future.

    With only a few exceptions – and some of Peter Kellner’s journalistic posts did seem to grasp this – the polls did not pick up the massive and in-your-face mismatch between a tangible and easily accessible pro-Tory mood music and the alternate reality that the polls were projecting.

    There was clearly an anxiety-based over-mobilisation of the grey vote (especially in the 75+ age range) and a complacent mobilisation of the younger vote (18-35). Some of the latter was about political strategy on the part of the Labour Party, but the former was a direct and powerful consequence of a destructive feedback loop created by polling methodologies that did not, would not and could not step back and reflect on the lived experience of a powerful, sectional and increasingly pro-Tory sentiment.

  18. Catmanjeff

    “The state and religion should not mix, and certainly politics and religion should not either, even in polling.”

    Well, however you want to define it, if there’s a subgroup who make up to 25% of the electorate in some seats (like Oldham recently) and have a dramatically higher turnout than the average, then polling isn’t going to work until it’s included.

  19. Laszlo – the inquiry team also had access to a lot of individual level data. All the pollsters provided them with the raw individual level data from their final poll, their penultimate poll, their first poll of the campaign and any re-contact polls they did.

  20. @catmanjeff
    “I suspect that among the hard to reach voters includes the very elderly, who basically don’t answer the phone much, are unlikely to use the internet and naturally suspicious of people knocking at their door. From years of canvassing experience, they are very reluctant to tell canvassers how they will vote either.”

    Have you considered the possibility that the characteristics you describe as ‘belonging to the very elderly’ are strongly linked to characteristics which help them to reach a great age, and so are spread through the younger parts of the population, albeit in smaller proportions?
    I have in mind especially the characteristics “That’s my business, not some stranger’s” and “Don’t believe all you see on TV or the Internet, or read in the papers”.

  21. If you want to predict elections, you need to look at economic as well as polling data.

    If there is a another crisis, all bets are off.

  22. @Hawthorn

    It depends very much on where the crisis is.

    In the pre-globalisation era there might have been a correlation between the FTSE and the UK economy but this broke down long ago.

    In 2007, while the credit markets were tightening and there was a bank run on Northern Rock, the FTSE was making new highs. Because the sovereign funds of the oil producers were awash with cash and looking to park their money anywhere they could. Now that’s gone into reverse, they’re liquidating. Both times the FTSE was a reflection of their economies, not ours.

    The cheap oil should be very good for the eurozone, and should help them come out of their doldrums, which will be a great relief to us.

    P.S. The extent of the liquidation of the sovereign funds is amazing.

    China’s stabilisation fund has fallen from $15.5 billion to $14 billion at the end of 2015, and their foreign exchange reserves have gone from $4 trillion in 2014 to $3.5 trillion. Azerbaijan’s foreign reserves have gone from $16.5 billion in 2014 to $7.3 billion. Saudi’s sovereign fund has fallen from $737 billion in 2014 to $635 billion now. Even Norway has been drawing down its sovereign fund – it’s gone from $900 billion in 2014 to $790 billion now.

    We’re witnessing the Great Fund Unwinding – but it should affect asset prices not the real economy.

  23. Neil
    “Would it be outrageous to suggest that the pollsters should just carry on as normal, and the rest of us can apply whatever rule of thumb we feel is applicable to the poll numbers to reflect “what usually happens”

    I agree but you know what these psephologists are like.

    Another alternative would be for Anthony to ask TOH for his take once a month. I’m sure his fee would be reasonable.

  24. Well he currently does it more often than that for free….

    “My fear is that the post-mortem research is, intentionally or not, a self-serving retrospective by an industry that needs to reassert its value and relevance: by improving our methodology through statistically valid sampling we can – and will – provide a better service in future.”

    ———

    do pollsters actually do anything, or plan to do anything, about the impact their polls may have on polling itself?

  26. I.e., to the extent that polling may have a predictive element, to what extent should polling’s impact on itself be taken into account?…

  27. GRAHAM

    Why is it at all problematic that after this process the Conservatives were only 4 voters ahead? Why would that not represent the real underlying picture before taking account of LTV etc?

    Well it could well be, but if it is, it would mean that the phone polls were producing different results from the online ones[1], and we’d then have to be looking at why the latter had a pro-Con bias that is wrong. So it’s problematic for the pollsters, who are now huddling together, a little scared to produce figures that stand out from the crowd.

    But the potential problem is that the wisdom of crowds doesn’t always work – it didn’t in May, after all. And so even the certainties of political establishment groupthink – such as ‘Corbyn Can’t Win’ – may turn out to be false; they have had a lot of shocks this year: the General Election, the SNP landslide, Corbyn winning the leadership, even the Oldham by-election.

    However the media in the UK are highly conformist, and it is usually better to be wrong with the pack than right on your own. So the pollsters are edgy about what might happen if they alter their methods and end up muffling the signals of change, but at the same time worried if they are reporting anything that looks different.

    [1] Not just about UKIP – the big gap on which has re-appeared – but on the all-important Con-Lab gap.

  28. Candy

    Asset prices include house prices.

    Terrifying middle England homeowners is what did for both Major and Brown.

    I do not agree that it would have no effect on the real economy either, but that is not a discussion for a polling board. We shall have to see.

  29. @GRAHAM, ROGER

    The problem would seem to be compounded by there being too many adjustments which often counter each other. Adjusting for the correct number of 2015 voters in each camp means reducing the Labour number by 31 and increasing the Conservative number by 15. The VI numbers, however, only show a Conservative increase of 10 and a Labour reduction of 12. All of the other adjustments push them closer together. Of course, in the real world, there will be some new voters and some deaths, but this would hardly explain the discrepancy here, as there is no specific adjustment that takes these changes into account.

    In short, the figures, before turnout adjustments, just look wrong and if they look wrong then they probably are.

  30. @ Anthony Wells

    Thank you for your answer. Hopefully there will be references to these in the final report (aggregating the aggregates – as in the slides – is very problematic).

    I’m sure that all the polling companies did the analysis – yet I haven’t seen any reference to it (or I missed it) when the case was being argued for sampling error.

    I’m not saying that the real cause was not sampling error, only that I haven’t seen the evidence (or explanations for the false negatives – the SNP in particular, but also regional variations), nor the variance of errors among polling companies.

    The presentation kicks off really well by listing the anomalies. However, if one follows this method, first one has to show that these are systemic anomalies and then attribute them to a cause while explicitly excluding other causes (hence making the assumptions falsifiable). This is a terribly long cascading process (especially in this case), so the law of diminishing utility comes in. I know that the people involved in the report know all this very well, which is why it puzzles me.

    A further difficulty with the aggregation is that while all pollsters made the error, it is quite possible that they each made it in their own way, and hence the aggregation of pollsters’ headline figures eliminates this path.

    The conclusion is very similar to “management by exception” – in Argyris’s terminology, single-loop learning, when it is possible that double-loop learning is needed.

    Well, we will see the final report.

  31. Has the polling industry considered and explained the large discrepancy between the bookmakers and the polls prior to the election?

    The bookmakers, or more correctly the punters, were predicting a significant Tory win, whilst the polls were suggesting a neck-and-neck situation.

    Why was the average punter sceptical of the polling evidence? I don’t buy the idea that most punters are Tories as an explanation.

    My suspicion is that punters to a certain extent rely upon their own experience – conversations at work and at the pub, and posters in windows – and concluded that Tory support was solid and likely to vote.

  32. @Candy and Hawthorn

    In the Times today, they’re throwing another thing into the mix, or the mire: the balance of payments deficit.

    Now, this has frequently been a deficit of some significance since the Eighties, but peeps didn’t bother too much because it was offset by income from overseas investments.

    However, there’s now a new fly in the ointment, because the income from these overseas investments is now falling…

    Just putting it out there…

  33. @Millie
    “My suspicion is that punters to a certain extent rely upon their own experience – conversations at work and at the pub, and posters in windows – and concluded that Tory support was solid and likely to vote.”

    The bookies must have had better intel than anyone else.

    Maybe they had their own no expense spared polls commissioned with tons of metadata that proved far more accurate than the others.

  34. Raf – as Millie says it was the punters not the bookies themselves that in effect made the odds.

    Does anyone know if the wisdom index did any better than the VI polls?

  35. @Jim Jam

    Noted. But the bookies seemed very confident that the punters were right – especially that bloke from Ladbrokes who spoke to the media a couple of weeks before election day telling them what was about to happen.

  36. Millie
    “The bookmakers, or more correctly the punters, were predicting a significant Tory win, whilst the polls were suggesting a neck-and-neck situation.”

    I remember mentioning the bookies’ odds before the election, but there was wide disagreement that the bookies’ odds (and therefore punters’ opinions) could be right. Perhaps we’ll know better next time.

  37. Apparently Labour gained a council seat (by-election) from UKIP in Thanet. Don’t know anything about the ward …

  38. Laszlo

    I presume the febrile Twitter political community are getting excited about a by-election in Thanet?

    Much as their Scots equivalents are focussing on the council by-election in Hamilton, where the SNP moved to 1st place compared to 2nd in 2012, but by a smaller swing – Tories up by 8% being the big story.

    On a 20% turnout, I would hesitate to draw any conclusions, but that some former Labour voters might be attracted to Davidson’s message of resolute enthusiasm for the UK Union has always been a possibility.

    That the BBC’s “major” party, the LDs, came 5th out of 5 may be of more importance (unlike the LDs themselves).

  39. @ OldNat

    I didn’t draw any conclusion, but indeed it was on Twitter.

    I tried to make my feelings clear by “apparently” and stating my complete ignorance about the ward.

    As to Scotland – virtually all my relatives moved to vote for SNP (Glasgow and its Eastern suburbs). One is still Labour and one is S. Green.

  40. Laszlo

    Unlike Zola, I wasn’t accusing. :-)

    My comment was about the haruspicy of political tweeters examining the entrails of council by-elections to foretell the future! :-)

  41. MILLIE

    The bookmakers, or more correctly the punters, were predicting a significant Tory win, whilst the polls were suggesting a neck-and-neck situation.

    No, the bookies weren’t. Here, for example, is an article from 1 May discussing the differences between the two:

    http://theconversation.com/pollsters-v-bookies-whos-on-the-money-in-election-2015-40933

    which states:

    Certainly by comparison with the polls, the bookies are united: increasingly confident the Conservatives will get most seats, but that Miliband is the more likely next PM. Best odds on most seats, therefore, from any of Oddschecker’s 24 bookmakers are 3/10 on the Conservatives and shortening, 3/1 on Labour and lengthening. For next PM, Miliband is available at 4/6, Cameron at 11/8.

    which rather misses the point that bookies are always going to be more united than the polls. Because if any bookie was offering more advantageous odds, the punters would flock there and they would be forced to move their odds nearer their rivals’. Pollsters ought to vary more, or we would be suspecting ‘herding’, as discussed.

    But in terms of general prediction, the polls and the bookies basically agreed. Cameron was expected to get more votes and probably seats, but Miliband was more likely to become PM. Both turned out to be wrong.
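
    For anyone converting the fractional odds quoted above, odds of a/b imply a probability of b/(a+b), before stripping out the bookmaker’s margin:

        # Implied probabilities from the fractional odds quoted above
        # (ignoring the bookmaker's margin): odds of a/b imply b/(a+b).
        def implied(a, b):
            return b / (a + b)

        for label, (a, b) in [("Con most seats (3/10)", (3, 10)),
                              ("Lab most seats (3/1)", (3, 1)),
                              ("Miliband PM (4/6)", (4, 6)),
                              ("Cameron PM (11/8)", (11, 8))]:
            print(f"{label}: {100 * implied(a, b):.0f}%")

    So the market had the Conservatives at roughly 77% to win most seats but Miliband at about 60% to end up PM – which is the agreement with the polls described above.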

  42. MRJONES

    Well, however you want to define it, if there’s a subgroup who make up to 25% of the electorate in some seats (like Oldham recently) and have a dramatically higher turnout than the average, then polling isn’t going to work until it’s included.

    The trouble with that is that, given that members of Asian communities traditionally vote Labour disproportionately and we know that such groups are also under-represented in opinion polls, regularly weighting nationally[1] for ethnicity or religion is only going to make the problem of polls being ‘too Labour’ worse.

    But there’s no evidence that they do have a ‘dramatically higher’ turnout. If that was true you would expect constituencies with a high Muslim population to have a high turnout, which doesn’t seem to be the case.

    It works on a smaller scale as well. You mentioned Oldham, and as it happens I looked at the wards of Oldham West and Royton using this May’s results from Oldham Council’s website:

    http://committees.oldham.gov.uk/mgElectionElectionAreaResults.aspx?EID=26&RPID=9738732

    What is more, because those elections were held on the same day as the General Election, they give us a good idea of the respective turnouts for a parliamentary election[2]. Turnout in a by-election is lower of course (40.3% versus 59.6% in May) but it should give some idea.

    Ward / LG Turnout / % Muslim
    Chadderton C / 62.89% / 7.30%
    Chadderton N / 64.70% / 17.60%
    Chadderton S / 56.10% / 4.40%
    Coldhurst / 61.81% / 64.20%
    Hollinwood / 51.55% / 6.90%
    Medlock Vale / 55.88% / 32.30%
    Royton N / 62.88% / 0.90%
    Royton S / 60.11% / 2.80%
    Werneth / 59.21% / 68.20%

    The first figure is the turnout and the second the percentage of Muslims in the ward from the 2011 Census. As you can see, there isn’t really any relation between the two figures, with the two most Asian wards being in the middle of a range that doesn’t vary that much. This is a good constituency for judging any effect, because the percentage of Muslims varies so much, from nearly 70% down to almost nothing. If there was any effect it would show up strongly with such a range.
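
    As a quick check on that (a simple calculation from the table above), the correlation between the two columns comes out close to zero:

        # Pearson correlation between ward turnout and % Muslim (table above).
        turnout = [62.89, 64.70, 56.10, 61.81, 51.55, 55.88, 62.88, 60.11, 59.21]
        muslim = [7.30, 17.60, 4.40, 64.20, 6.90, 32.30, 0.90, 2.80, 68.20]

        n = len(turnout)
        mt, mm = sum(turnout) / n, sum(muslim) / n
        cov = sum((t - mt) * (m - mm) for t, m in zip(turnout, muslim))
        var_t = sum((t - mt) ** 2 for t in turnout)
        var_m = sum((m - mm) ** 2 for m in muslim)
        print(f"r = {cov / (var_t * var_m) ** 0.5:.2f}")  # ~0.08 on these figures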

    It is possible that in lower-turnout elections the Asian/Muslim vote (they’re pretty similar in this seat) might turn out more strongly, and this could have happened in the by-election. But certainly where it matters most, in a general election, it’s simply not happening.

    [1] YouGov do weight on ethnicity for their London polls, where the percentages are large, and I have seen weighting by ethnicity and religion on polls where it was relevant to the topics. When polling some individual constituencies it might be useful as well (Ashcroft used a one-size-fits-all approach and that may have given him poor samples). But across GB, individual ethnic groups are too small in percentage terms to weight individually.

    [2] Strictly speaking turnout won’t be the same, because it is based on the number of local government electors, which will include some EU citizens, who can’t vote for MPs. But I get the impression that there aren’t that many of those in Oldham, and there is probably massive under-registration of those there are. (There probably aren’t many members of the House of Lords either.) Any effect would mean LG turnout being slightly lower than for Westminster.

  43. Labour gain from UKIP in Newington, Thanet (council by-election):

    Newington (Thanet) result:
    LAB: 37.7% (+1.3)
    UKIP: 30.0% (-14.2)
    CON: 20.4% (+0.9)
    IND: 6.4% (+6.4)
    GRN: 2.6% (+2.6)
    LDEM: 1.6%
    IND: 1.3%

    Perhaps not terribly important in the great scheme of things.

    But look at that drop in UKIP support….

  44. lurkinggherkin

    And just look at the performance of a “major” party – even in England! 2nd bottom.

  45. Actually the UKIP result in Thanet isn’t too bad when you consider the mess that the local Party has got into running the local council, including splits which presumably gave rise to some of the Independents. Holding onto 30% shows more resilience than you might expect, especially given that UKIP aren’t good at by-elections and this is a natural Labour ward.

    One of the little remarked features of recent months has been the solidity of the UKIP vote. Many assumed that it would melt away after the failure in terms of seats in May, but it’s barely been affected.

  46. Alec

    “the Chancellor and the Bank of England Governor were telling us the economy was going to grow, wages were going to outstrip inflation, the deficit would disappear and interest rates were about to rise.”

    Well there is still growth in the economy, wages are still growing faster than inflation, the deficit is still reducing and interest rates will eventually rise.

    I’m not sure I get the point you’re making. We could perhaps agree that there are threats in the world economy that could affect the UK economy, but that has always been true.

  47. This all highlights that particular methodologies only work in particular political conditions. The next GE, for all we know, could have conditions much more similar to 2010, meaning that the old methodology would be fine.

    I’m actually pleased they got it wrong (though I didn’t say so at the time because I was still stunned by the awful result). This now means that politicians will be able to dismiss polls much more easily in their interviews, and might mean the questions evolve to be more about… well, POLICY!

  48. Laszlo –

    I believe the inquiry team worked by splitting up the different areas, with different people responsible for each one (e.g. Jane Green analysed the data on late swing, Ben Lauderdale the data on herding, etc.).

    Given he presented it, I’m assuming Jouni Kuha was responsible for the sampling stuff. He is very much a numbers guy – his background is in the Statistics and Methodology departments at the LSE, rather than political science like many of his colleagues on the inquiry – so I’d be surprised if the sampling chapter in the final report isn’t quite stats-heavy. Time will tell.

  49. RM

    Point taken with regard to UKIP’s resilience.

    However, it does demonstrate how much of their support in the area that elected the nation’s first UKIP-controlled council was actually gained by campaigning on a specific issue local to Thanet; making a promise regarding the airport that they then reneged on in fairly short order. In what is arguably one of the most UKIP-friendly areas of the country, they didn’t win power on their core platform of anti-immigration, anti-EU sentiment.

    I don’t know how many people here know this, but this by-election was triggered by the sitting UKIP councillor emigrating to Thailand.
