In January the BPC inquiry team announced their initial findings on what went wrong in the general election polls. Today they have published their full final report. The overall conclusions haven’t changed; we’ve just got a lot more detail. For a report about polling methodology written by a bunch of academics it’s very readable, so I’d encourage you to read the whole thing, but if you’re not in the mood for a 120-page document about polling methods then my summary is below:

Polls getting it wrong isn’t new

The error in the polls last year was worse than in many previous years, but wasn’t unprecedented. In 2005 and 2010 the polls performed comparatively well, but going back further there has often been an error in Labour’s favour, particularly since 1983. Last year’s error was the largest since 1992, but was not that different from the error in 1997 or 2001. The reason it was seen as so much worse was twofold. First, it meant the story was wrong (the polls suggested Labour would be the largest party when actually there was a Tory majority; in 1997 and 2001 the only question was the scale of the Labour landslide). Second, in 2015 all the main polls were wrong – in years like 1997 and 2001 there was a substantial average error in the polls, but some companies managed to get the result right, so it looked like a failure of particular pollsters rather than the industry as a whole.

Not everything was wrong: small parties were right, but Scotland wasn’t

There’s a difference between getting a poll right and being seen to get a poll right. All the pre-election polls were actually pretty accurate for the Lib Dems, Greens and UKIP (and UKIP was seen as the big challenge!), but it was seen as a disaster because they got the big two parties wrong, and therefore got the story wrong. It’s the latter bit that’s important – in Scotland there was also a polling error (the SNP were understated, Labour overstated) but it went largely unremarked because the result was a landslide. As the report says, “underestimating the size of a landslide is considerably less problematic than getting the result of an election wrong”.

There was minimal late swing, if any

Obviously it is possible for people to change their minds in the 24 hours between the final poll fieldwork and the actual vote. People really can tell a pollster they’ll vote for party A on Wednesday, but chicken out and vote for party B on Thursday. The Scottish referendum was probably an example of genuine late swing – YouGov re-contacted the same people they had interviewed in their final pre-referendum poll on polling day itself, and found a small net swing towards NO. However, when pollsters get it wrong and blame late swing it does always sound a bit like a lame excuse: “Oh, it was right when we did it, people must have changed their minds”.

To conclude there was late swing I’d want to see some pretty conclusive evidence. The inquiry team looked, but didn’t find any. Changes from the penultimate to final polls suggested any ongoing movement was towards Labour, not the Conservatives. A weighted average of re-contact surveys found a change of only 0.6% from Lab to Con (and that included some re-contacts from late-campaign surveys rather than final-call surveys; counting only re-contacts of final-call surveys, the average movement was towards Labour).
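For the curious, the arithmetic involved is just a sample-size-weighted mean. A minimal sketch, with pollster names and figures invented purely to show the calculation (they are not the inquiry’s data):

```python
# Hypothetical re-contact surveys: (pollster, sample size, net swing in
# percentage points from Lab to Con between final poll and re-contact).
surveys = [
    ("Pollster A", 1900, 0.9),
    ("Pollster B", 1500, -0.3),  # negative = movement towards Labour
    ("Pollster C", 1100, 0.8),
]

total_n = sum(n for _, n, _ in surveys)
weighted_swing = sum(n * swing for _, n, swing in surveys) / total_n
print(f"Weighted average swing: {weighted_swing:+.2f} points (Lab to Con)")
```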

There probably weren’t any Shy Tories

“Shy Tories” is the theory that people were reluctant to admit to interviewers (or perhaps even to themselves!) that they were going to vote Conservative. If people had lied during the election campaign but admitted it afterwards, this would have shown up as late swing, and it did not. That leaves the possibility that people lied before the election and consistently lied afterwards as well. This is obviously very difficult to test conclusively, but the inquiry team don’t believe the circumstantial evidence supports it. Not least, if there was a problem with shy Tories we could reasonably have expected polls conducted online, without a human interviewer, to have shown a higher Tory vote – they did not.

Turnout models weren’t that good, but they didn’t cause the error

Most pollsters modelled turnout using a simple method: asking people how likely they were to vote on a 0-10 scale. The inquiry team tested this by looking at whether people in re-contact surveys reported actually voting. For most pollsters this didn’t work out that well; however, it was not the cause of the error – the inquiry team re-ran the data replacing pre-election likelihood-to-vote estimates with whether people reported actually voting after the election, and the polls were just as wrong. As the inquiry team put it, if pollsters had known in advance which respondents would and would not vote, they would not have been any more accurate.
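To make the mechanics concrete, here is a toy sketch of that counterfactual. The respondents and figures are invented and real pollsters’ models differ in detail; the point is simply swapping the stated-likelihood weight for actual reported turnout:

```python
# Toy version of the inquiry's turnout counterfactual (invented data).
# Each respondent: (voting intention, stated 0-10 likelihood, voted?).
respondents = [
    ("Con", 10, True), ("Lab", 10, False), ("Lab", 9, True),
    ("Con", 8, True), ("Lab", 10, True), ("Con", 7, False),
]

def con_share(rows, weight):
    """Weighted Con share of the two-party (Con + Lab) vote."""
    con = sum(weight(r) for r in rows if r[0] == "Con")
    lab = sum(weight(r) for r in rows if r[0] == "Lab")
    return con / (con + lab)

# Standard pre-election method: weight each person by stated likelihood / 10.
by_stated = con_share(respondents, lambda r: r[1] / 10)
# Counterfactual: weight by whether the respondent actually reported voting.
by_actual = con_share(respondents, lambda r: 1.0 if r[2] else 0.0)

print(f"Con share with stated-likelihood weights: {by_stated:.1%}")
print(f"Con share with actual-turnout weights:    {by_actual:.1%}")
```

The inquiry’s finding, in these terms, was that the two weighting schemes produced similarly wrong headline figures.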

Differential turnout – that Labour voters were more likely to say they were going to vote and then fail to do so – was also dismissed as a factor. Voter validation tests (checking poll respondents against the actual marked register) did not suggest Labour voters were any more likely to lie about voting than Tory voters.

Note that in this sense turnout is about the difference between people *saying* they’ll vote (and pollsters’ estimates of whether they’ll vote) and whether they actually do. That didn’t cause the polling error. However, the polling error could still have been caused by samples containing people who are too likely to vote – something that is an issue of turnout, but which comes under the heading of sampling. It’s the difference between having young non-voters in your sample who claim they’ll vote when they won’t, and not having them in your sample to begin with.

Lots of other things that people have suggested were factors, weren’t factors

The inquiry put to bed various other theories too – postal votes were not the problem (samples contained the correct proportion of them), excluding overseas voters was not the problem (they are only 0.2% of the electorate), and voter registration was not the problem (in the way it would have shown up, it would have been functionally identical to misreporting of turnout – people who told pollsters they were going to vote but did not – and for the narrow purpose of diagnosing the polling error it doesn’t matter why they didn’t vote).

The main cause of the error was unrepresentative samples

The reason the polls got it wrong in 2015 was the sampling. The BPC inquiry team reached this conclusion partly by the Sherlock Holmes method – eliminating all the other possibilities, leaving just one which must be true. However, they also had positive evidence to back up the conclusion: first, the comparison with the random probability surveys conducted by the BES and BSA later in the year, where past vote recall more closely resembled the actual election result; second, some observable shortcomings within the samples. The age distribution within bands was off, and the geographical distribution of the vote was wrong (polls underestimated Tory support more in the South East and East). Most importantly in my view, polling samples contained far too many people who vote, particularly among younger people – presumably because they contain people who are too engaged and interested in politics. Note that these aren’t necessarily the specific sample errors that caused the polling miss: the BPC team cited them as evidence that sampling was off, not as the direct causes.

In the final polls there was no difference between telephone and online surveys

Looking at the final polls, there was no difference at all between telephone and online surveys. The average Labour lead in the final polls was 0.2% in phone polls and 0.2% in online polls. The average error compared with the final result was 1.6% for phone polls and 1.6% for online polls.

However, at points during the 2010-2015 Parliament there were differences between the modes. In the early part of the Parliament online polls were more favourable towards the Conservatives; for a large middle part of the Parliament phone polls were more favourable; during 2014 the gap disappeared entirely; phone polls started being more favourable towards the Tories during the election campaign, but came bang into line for the final polls. The inquiry suggest that could be herding, but also that there is no strong reason to expect mode effects to be stable over time anyway – “mode effects arise from the interaction of the political environment with the various errors to which polling methods are prone. The magnitude and direction of these mode effects in the middle of the election cycle may be quite different to those that are evident in the final days of the campaign.”

The inquiry couldn’t rule out herding, but it doesn’t seem to have caused the error

That brings us to herding – the final polls were close to each other, and to some observers they looked suspiciously close. Some degree of convergence is to be expected in the run-up to an election: many pollsters increased their sample sizes for their final polls, so the variance between figures would be expected to fall. However, even allowing for that, the polls were still closer than would have been expected. Several pollsters made changes to their methods during the campaign, and these did explain some of the convergence. It’s worth noting that all the changes increased the Conservative lead – that is, they made the polls *more* accurate, not less accurate.

The inquiry team also tested to see what the result would have been if every pollster had used the same method. That is, if you think pollsters had deliberately chosen methodological adjustments that made their polls closer to each other, what if you strip out all those individual adjustments? Using the same method across the board the results would have ranged from a four point Labour lead to a two point Tory lead. Polls would have been more variable… but every bit as wrong.
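The shape of that test can be sketched like so – the leads below are invented, though the range mirrors the four-point Labour lead to two-point Tory lead spread the report describes:

```python
# Imagine every pollster's raw sample re-processed with one common
# weighting and turnout scheme. Leads are in points: negative = Labour
# lead, positive = Conservative lead. Illustrative figures only.
reprocessed_leads = [-4.0, -2.5, -1.0, 0.0, 1.0, 2.0]

spread = max(reprocessed_leads) - min(reprocessed_leads)
mean_lead = sum(reprocessed_leads) / len(reprocessed_leads)
actual_lead = 6.5  # approximate Conservative lead on the day, in points

print(f"Spread across pollsters: {spread:.1f} points")
print(f"Mean reprocessed lead: {mean_lead:+.1f} vs actual: +{actual_lead}")
# More variable than the published final polls, and every bit as wrong.
```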

How the pollsters should improve their methods

Dealing with the crux of the problem – unrepresentative samples – the inquiry have recommended that pollsters take action to improve how representative their samples are within their current criteria, and investigate potential new quotas and weights that correlate both with the sort of people who are under-represented in polls and with voting intention. They are not prescriptive as to what the changes might be – on the first point they float possibilities like longer fieldwork and more callbacks in phone polls, and more incentives for under-represented groups in online polls. For potential new weighting variables they don’t suggest much at all, worrying that if such variables existed pollsters would already be using them, but we shall see what changes pollsters end up making to their sampling to address these recommendations.
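The report isn’t prescriptive about technique, but the kind of adjustment at stake is quota and post-stratification weighting. Here is a minimal one-variable sketch with invented targets (real pollsters weight several variables simultaneously, via rim weighting or raking):

```python
# Post-stratification: scale each group so the weighted sample matches
# known population targets. All numbers here are invented.
population_targets = {"18-34": 0.28, "35-54": 0.34, "55+": 0.38}
sample_counts = {"18-34": 180, "35-54": 350, "55+": 470}  # n = 1,000

n = sum(sample_counts.values())
weights = {
    band: population_targets[band] / (sample_counts[band] / n)
    for band in sample_counts
}
print(weights)  # 18-34s get weight ~1.56: under-represented, so scaled up
```

The catch – and the inquiry’s central point – is that weighting only helps if the under-represented people you do reach resemble the ones you don’t. If the few young people in a sample are unusually politically engaged, scaling them up amplifies the engagement bias rather than correcting it.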

The inquiry also makes some recommendations about turnout, don’t knows and asking if people have voted by post already. These seem perfectly sensible recommendations in themselves (especially asking if people have already voted by post, which several pollsters already do anyway), but given that none of these things contributed to the error in 2015, they are improvements for the future rather than remedies for the failures of 2015.

And how the BPC should improve transparency

If the recommendations for the pollsters are pretty vague, the recommendations to the BPC are more specific, and mostly to do with transparency. Pollsters who are members of the BPC are already supposed to be open about methods, but the inquiry suggest they change the rules to make this more explicit – pollsters should give the exact variables and targets they weight to, and flag up any changes they make to their methods (the BPC are adopting these changes forthwith). They also make recommendations about registering polls and providing microdata to help any future inquiries, and for changes in how confidence margins are reported in polls. The BPC are looking at exactly how to do that in due course, but I think I’m rather less optimistic than the inquiry team about the difference it will make. The report says “Responsible media commentators would be much less inclined, however, to report a change in party support on the basis of one poll which shows no evidence of statistically significant change.” Personally I think *responsible* media commentators are already quite careful about how they report polls, the problem is that not all media commentators are responsible…
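On reporting confidence margins, the arithmetic behind “no evidence of statistically significant change” is roughly the following – a standard two-proportion check, simplified by assuming simple random sampling (weighting and design effects widen the margins in practice):

```python
import math

def change_significant(p1, n1, p2, n2, z=1.96):
    """Crude two-proportion z-test at ~95% confidence."""
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return abs(p2 - p1) > z * se

# A party moving from 34% to 36% between two polls of 1,000 people
# is comfortably within sampling noise:
print(change_significant(0.34, 1000, 0.36, 1000))  # False
```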

There’s no silver bullet

The inquiry team don’t make recommendations for specific changes that would have corrected the problems and don’t pretend there is an easy solution. Indeed, they point out that even the hugely expensive “gold standard” BES random probability surveys still ended up with Conservative and UKIP shares of the vote outside the margin of error. They do think there are improvements that can be made though – and hopefully there are (the changes that some pollsters have already introduced may be improving matters already). They also say it would be good if stakeholders were more realistic about the limits of polling – about how accurately it is really possible to measure people’s opinions.

Polling accuracy shouldn’t be black and white. It shouldn’t be a choice between “polls are the gospel truth” and “polls are worthless, ignore them all”. Polls are a tool, with advantages and limitations. There are limits on how well we can model and measure the views of a complex and mobile society, but that should be a reason for caveats and caution, not a reason to give up. As I wrote last year, despite the many difficulties there are in getting a representative sample of the British public, I still think those difficulties are surmountable, and that ultimately it’s still worth trying to find out and quantify what the public think.


151 Responses to “What the BPC inquiry’s final report says”

  1. Magisterial ! It’s the sampling …

  2. Shocking council/counsel of despair error on page 79

  3. The shift from random samples taken on the street to “politically interested” people forming parts of panels has, despite all statistical manipulation, led to these errors.

  4. I am pretty sure the report is right, though other people are probably more qualified to judge.

    The opinion polls certainly had Labour and Conservatives very close and the actual result was the Conservatives had 99 seats more than Labour, and also had a small overall majority.

    The mechanism by which FPTP delivered a majority to the Conservatives was not widely predicted at the time.
    What was remarked just after the election was how this worked with regional variations and FPTP.

    No doubt some seats were won directly by the Conservatives from Labour, but at least as important were Labour’s loss of all but one seat in Scotland, and the LibDems’ loss of all their seats in south-west England, all or nearly all to the Conservatives.

    There was much discussion at the time, if I remember rightly, of how inadequate uniform swing might be in such a multi-party election, and I think that was true. The devil really was in the detail.

  5. Cracking post. It seems to be a fairly basic error in the end, and I suppose that it is probably a good thing that this has been identified, and would suggest that much of the rest of polling methodology is functioning reasonably well.

    My only concern is that the BPC missed the obvious source of error – namely that the polls were indeed correct, but the people just voted for the wrong government.

  6. So it will be interesting to see if the pollsters get their Scottish, Welsh and London Assembly election predictions right.

    I still think the issue of frequency of voting is an important factor as different people come out in different elections.

  7. Watching part of the coverage of the 1966 election on Parliamentary Channel on Monday, I was struck by how uniform the swing seemed to be around the bulk of the country (Scotland’s swing to Labour was much less than that in England, but had been noticeably greater in 1964).
    Surely the 2015 election showed us that ‘uniform swing’ is all but dead as a concept and that much more regional polling is required. Had there been such polling, for example in the SW of England, then the Tory victory might not have come as such a shock to so many.
    Just a thought……

    Excellent summary, btw, so thanks, as ever, to AW

  8. Ha-ha. Some others might say it was wrong, but perhaps the least wrong, as you can never have a perfect government. Yet others…enough from me – at least with politics there is always another chance for parties if not for individuals.

  9. Let’s have fewer polls but bigger samples.

  10. In relation to
    ‘Pollsters who are members of the BPC are already supposed to be open about methods, but the inquiry suggest they change the rules to make this more explicit – pollsters should give the exact variables and targets they weight to, and flag up any changes they make to their methods (the BPC are adopting these changes forthwith)’

    Does anyone know if this will be retrospective, to include any changes already made since the last General Election?

  11. @Jasper22

    Why would you want bigger samples?

    If the issue is biased samples, then bigger samples will give you the same bias with a bit less variance! What seems to be needed is *better* sampling. So if you have only so much resource (time, money) at your disposal, you ought to be looking at getting fewer but more representative responses, using a method which is more tuned to representative sampling at the expense of being more resource-intensive per response (see the sketch at the end of this comment).

    If necessary, be prepared to sacrifice precision (moe) for accuracy!
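    To illustrate – all numbers invented, with the bias simply baked in to stand for whatever systematic skew the sampling method has:

    ```python
    import random

    TRUE_SHARE = 0.38   # invented "true" Con share
    BIAS = -0.05        # invented systematic skew from bad sampling

    def biased_poll(n):
        """Each respondent is drawn from the skewed, not true, population."""
        hits = sum(random.random() < TRUE_SHARE + BIAS for _ in range(n))
        return hits / n

    for n in (1_000, 10_000, 100_000):
        print(n, round(biased_poll(n), 3))
    # Bigger samples converge ever more tightly... on ~0.33, the wrong answer.
    ```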

  12. Whether or not the pollsters are able to improve their methodology in such ways as to make them more accurate and reliable, I’m still of the view, especially after last year’s debacle, that they should be banned during general election campaigns. The motivation behind Ashcroft’s intense polling in the marginals worried me and I think there was no doubt that the polls orchestrated too much of the debate and influenced the press narrative and party campaign strategies. The fact that all the polls turned out to be misinforming and not educating compounded the problem.

    It’s time we joined the banana republics and tyrannies of Brazil, Canada, Greece, Mexico, Norway, Poland, Venezuela, France, Italy and Spain, who all have various restrictions on opinion polling during election campaigns. Many ban them altogether.

  13. Good evening all from Hampshire.

    Very interesting article AW.

    IMO the polls were correct all along and all this incredible in-depth analysis is over-complicating what was a very simple factor during the election and into polling day itself: the electorate simply had a good look at the ballot sheet and didn’t like the alternative.

    The hints were in the detailed breakdown of the polls around the economy and trust etc…..wake up and smell the Lidl coffee!!

    Sorry to hear Ronnie Corbett has passed away. They seem to be dropping off at an alarming rate this year.

  14. So all this polling is just a waste of money – nothing new there then.

  15. are there going to be any polls on the tata steel crisis and the government’s handling of it?

  16. ALLAN “MUDWRESTLING” CHRISTIE:
    “the electorate simply had a good look at the ballot sheet and didn’t like the alternative.”

    If you’d bothered to read AW’s excellent summary, you’d see that the report specifically rules this out. No late swing. The polls were unrepresentative. The polls were accurate in measuring whom those polled would vote for, but those polled were a skewed section of the population.

  17. JOHN POOLE:
    “So all this polling is just a waste of money”

    We’re all entitled to our own opinions. If that is your opinion, I recommend you cease commissioning and paying for polls. That way you won’t feel you’ve wasted your money. How much money did you waste, in the end?

  18. I can understand the desire to ban polls in the immediate run-up to elections. However I have come to the conclusion that it won’t work in the modern age. You can’t stop people carrying out polling. The results of this will be selectively leaked, with a nod and a wink, and we will be worse off than if the data had been made freely available. The formidable ability to use big data is already a worry in relation to democracy.

    The only thing is for people to understand that you really can’t get a fully representative sample, quite apart from random sampling error. Surely the last election will help in this understanding. Next time less credence will be placed on the polls.

    Polling is still the best method we have for ascertaining public views on issues. Predicting elections is not the best use for it.

  19. “Are there going to be any polls on the tata steel crisis and the government’s handling of it?”

    ———–

    Well there might be polling on how badly Corbyn might handle it…

  20. Pollsters admitting that they got the wrong result because they used unrepresentative samples seems pretty fundamental. It’s not some abstruse, peripheral or unusual factor.

    It’s like a GP saying: sorry, but I shouldn’t have told you you have bronchitis. I used the wrong tests – you have lung cancer.

  21. “It’s time we joined the banana republics and tyrannies of Brazil, Canada, Greece, Mexico, Norway, Poland, Venezuela, France, Italy and Spain who all have various restrictions on opinion polling during election campaigns. Many ban them all together.”

    This would be very much in Labour’s interest if/when they return to power. They are invariably the ones traumatized by inaccurate polling.

  22. COLIN:
    “they used unrepresentative samples”

    I think the more charitable way of saying it is that they tried to control for all factors they could, to make it representative, but they fell short in that respect. It’s not like pollsters didn’t know they had to take a representative sample.
    I think the issue is that there was a confounding variable they weren’t controlling for, which turned out to have an important correlation with both voting intention and the likelihood that the pollster would successfully request and receive a response to the poll. There are literally thousands of things one could think of that could, theoretically, have such an effect. The problem with people is that they are highly individual and hard to fit into neat boxes. Anything that tries to extrapolate relies on being able to do just that. It’s actually really difficult even when you know what you’re doing, so don’t be too hard on them.

  23. @ Allan Christie

    As Alun009 has said, AW ‘s article answers exactly your point. See the paragraph:

    “There was minimal late swing, if any”.

  24. John B

    I’m catching up with the 1966 election broadcast on BBC i-player. It was more exciting the first time around, when I watched it after casting my first ever vote – for Donald Dewar in Aberdeen South, having been working for the Liberals in Aberdeen North (the candidate being the Mum of a friend), since the SNP had no candidates in Aberdeen back then!

    What a very different political world back then!

    For the first time, NI sent a non-Unionist to Westminster. Women candidates were all labelled as “Mrs” or “Miss”, as opposed to the men who were just identified by surname.

    And the accents of both politicians (of all stripes) and the presenters – all clipped English RP.

    It is just as different from modern Scottish politics as the US primaries that I have just escaped from.

  25. Anthony

    “ultimately, it’s still worth trying to find out and quantify what the public think.”

    I agree – though since your income depends on people believing that proposition, and mine doesn’t alter by a penny, your viewpoint may be a lot more biased by self-interest than mine! :-)

  26. The English EUref campaign has now been described as “officially insane”. “Project Fear has now become Project Zombie Invasion” says a hitherto ignored Scot (me).

    http://www.telegraph.co.uk/news/2016/03/31/exclusive-england-to-face-euro-2016-ban-if-britain-votes-to-leav/?utm_source=dlvr.it&utm_medium=twitter

    “England’s hope of winning the Euro 2016 football championships were thrown into turmoil last night after it emerged that French and German Uefa officials were poised to file a legal petition suspending England from the competition if Britain votes to leave Europe on June 23.

    The motion, which would also impact Wales and Northern Ireland if they clear the group stages, would cast the three home nations into a legal limbo just days before the start of the Round of 16 on June 25.”

    Oldnat commented “Jeez! there were lots of F***ing stupid statements by many on all sides during the indyref, but the English have reasserted their claim to be top of the world’s stupidity political league – surpassing even Trump in the process.”

  27. Regional polling – All the way.

    YouGov was polling around 10,000 per week (5 x 2000) or so.

    With 12 regions, including Northern Ireland, each region could get a fortnightly poll of 1,001, amounting to a UK-wide poll every fortnight of 12K with a UK margin of error of less than 1% – the arithmetic is sketched below (ok, so it’s not really weighted that way, but it’s far better than 10 x polls of +/- 3% that didn’t work so well).

    Certainly far better than lumping Wales in with Midlands to hide the tiny Welsh samples.
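    For reference, the margin-of-error arithmetic behind those figures, assuming the worst case (p = 0.5) and simple random sampling – real-world weighting would widen these margins:

    ```python
    import math

    def moe(n, p=0.5, z=1.96):
        """Classical margin of error at ~95% confidence."""
        return z * math.sqrt(p * (1 - p) / n)

    print(f"n=1,001:  +/-{moe(1001):.1%}")   # ~3.1% for a single regional poll
    print(f"n=12,012: +/-{moe(12012):.1%}")  # ~0.9% for the pooled fortnight
    ```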

  28. Oldnat:
    The football story sounds like an April 1st story :)

  29. In 1997 the opinion polls were spot on with the Tory share for the whole of April. The only thing they couldn’t get right was the way the anti-Tories would vote. They overstated Labour while underplaying the LibDems and the smaller parties.

    In the last election it was the LibDem collapse that gave the Tories a majority although they would have still been the largest party without that help.

  30. Good Morning All from sunny Bournemouth.
    When Labour lost in 1951 Mr Attlee was asked for the reasons.

    He replied to the journalist: ‘Didn’t get enough votes’.

    AW thanks as always for your new thread here.

  31. There was a lot of tactical anti-Tory voting in 1997, which is why the Libs gained so many seats without a massive overall vote increase. I think it’s likely the polls didn’t pick up how many basically Labour supporters were going to vote something else in a particular constituency to get the Tory out.

  32. The greatest concern about the polls up to GE 2015 is that they influenced party strategies, media coverage and, possibly, voter behaviour.

    Surely, had Lab not been convinced the party would poll circa 34% (the so-called 35% vote strategy) it would IMO have been more ambitious and left wing with its manifesto etc.

    Similarly the Cons might have been less inclined to include things in their manifesto that they thought would never be implemented.

    The common error in the polls calls into question the purpose/motives/effect of polling and the commissioners of polls.

  33. Morning folks. Thanks AW for the summary – the full report might have to wait until bedtime reading :)

    I couldn’t agree more with Statgeek on the necessity for regional breakdowns – if the UK had a PR-type system, then the national vote share would make sense, but with a FPTP constituency-based system, the national vote share is fairly meaningless.

    In an ideal world, with infinite cash resources, we’d have lovely polls of each constituency. However, we saw clearly that Ashcroft’s constituency polls were wrong. The whole “think about your own constituency” question led to highly misleading conclusions – and my feeling was that this fed into some of the models of seat counts.

    Regional samples (using Statgeek’s six-poll averages), even if the overall picture was a bit skewed, showed that the Lib Dems were going to lose a lot to the Conservatives in the South. On the other hand, the Midlands samples were way off – and illustrated a need to have properly-weighted polls of the regions.

    John Curtice gave a talk after the election (it may have been linked from here?) about the results of the election, and one point that was very clear was that, in order to achieve an overall majority in 2020, Labour need to have a very large swing (10-11%?). With a cursory analysis of some results in the South and Midlands, it does indeed appear that even with a 34%-34% split in the overall vote shares, the Conservatives would have had a large seat advantage – because they lost votes in the North-East and some urban areas, they had fewer “wasted losing votes” than in previous elections. Good regional polling would have indicated (even if the overall national sample was a bit wrong) that Labour were not going to make many gains in the Midlands, and instead would pile on votes in areas where they would win in any case (a bit like the Conservatives in 2005, for example).

    I’m not too keen on last-minute eve-of-election polls, but I don’t really believe that the story of the polls being even during the election actually had much effect on the outcome. If the fear of a Labour-SNP coalition was such a motivation for voters, then I would have expected a bigger turnout (like 1992) – turnout was basically flat (outwith Scotland).

  34. @MikeN – I see where you’re coming from here – but if Labour’s real strategy was a 35% one (which it may well have been), it indicates a great lack of understanding of the electoral system on their part. Forget vote share – to have any chance of a credible government, Labour needed to be winning 25-30 seats directly from the Conservatives – knowing that they would definitely lose 20+ seats to the SNP, and that the Lib Dems would lose a few to the Tories, the key to winning had to be convincing moderate Conservatives in the Midlands and Wales to vote Lab.

    There’s a bit of a message to Corbynites here too – I’m not convinced that he’s unelectable, but the Labour manifesto in 2020 cannot be seen as preaching to the converted – that’s not how to win elections of any kind. But at that stage, who knows how many Conservative parties there will be…. the People’s Conservative Party, anyone? ;)

  35. The debate over Tata has initially settled to the kneejerk ideological battleground of nationalise/don’t nationalise. Frankly, Corbyn has missed a trick here (there is still time) as the precise ownership structure of the UK steel industry will merely mean a shifting around of losses as long as the structural problems remain. Nationalisation can work (Rolls-Royce anyone – nationalised by the Tories, privatised by the Tories, and now one of the UK’s few genuinely world-beating companies) but this time it looks to me to be a simple retreat into Corbyn’s comfort blanket rather than a genuine plan.

    This time, the structural issues are largely, although not completely, external to the UK. In some ways that makes nationalisation for a period more attractive, but only if those external factors can be addressed. News that the UK government was instrumental in blocking EU tariffs on Chinese steel imports is the real campaign issue that Corbyn should be focusing on. This damages the Tories, and Cameron/Osborne in particular, as well as diverting attention from blaming the EU itself.

    Both Sanders and Trump have shown the benefits of campaigning against unfettered free trade, and I suspect Labour would mine a rich electoral seam if they contrasted Osborne’s ‘march of the makers’ claims with his willingness to sacrifice UK manufacturing jobs in order to keep China happy. But this needs explaining to a confused electorate, willing to blame general economic conditions and the EU for anything.

    The truth is that the Tata sale is one issue where the current UK government’s industrial policy has been found out, with possibly catastrophic consequences. There is no ‘Labour’s fault’ argument here, and they own this issue, in the parlance.

    If Port Talbot goes the way of Redcar, Labour should be able to make a very big issue of this, but not if they insist on only shouting about nationalisation.

  36. I did enjoy Carfrew’s post of 2:28 am, though I did not read it until later this morning.

    It’s from the Independent.

    http://www.independent.co.uk/news/uk/home-news/scotland-and-wales-form-own-country-if-uk-votes-brexit-leave-eu-a6962431.html

    There is one obvious error though. Isle of Man is the correct spelling, not with two ‘n’s.

  37. Alec

    The argument is about nationalisation as a short-term stabilisation plan to avoid the plants being closed or bought by asset strippers. No good talking about the long term if you never get there in the first place.

    The rest is wonk-stuff that does not cut through to the public. If it did then Ed Miliband would be PM.

    What needs to cut through is the message that the government is letting down the country. That seems to be getting through if the newspaper coverage is anything to go by. There is also the strategy of moving the debate leftwards which is happening with steel as it did with disabled benefits.

    Whatever Corbyn’s (many) flaws, he (or his advisors) at least appears to understand that this difficult political work has to be done, rather than trying to fool the electorate with cheap triangulation.

  38. On “unrepresentative samples”, if you end up with a “representative sample” – perfectly stratified by how people voted in the last election / age / socioeconomic class etc. etc. – you’re going to end up with… …a poll that shows how people voted in the last election! (or pretty close). This shows the idiocy and futility of low budget polling. It really is a load of bunkum pseudo science and not worth anything at all really. I used to read what the pollsters and YouGov in particular said with a modicum of interest but not any more. It’s complete waffle and an utter waste of time!

  39. Disappointing – obviously not Anthony’s write-up, which was as clear and honest as ever, but the lack of anything approaching a silver bullet and no clear guide as to how polling companies are going to get the “right” samples.

    One of the worst aspects of the polling I think was that all the analysis people were doing on the “churn” became meaningless because the figures in those polls were wrong. It is still not clear where the movements came from. We can guess that the Lab>UKIP vote held up more than the Con>UKIP vote and we obviously know that the LD>Con vote was a deciding factor in the outcome but equally we don’t know the detailed movements and whether turnout among Tory voters was higher than among Labour voters.

    My only conclusion is that the less politically engaged changed their vote in different ways to the politically engaged (or at least the group willing to answer surveys).

  40. There really is no point in doing any kind of analysis on poll data when the input samples are so rubbish and the polling methodologies so cheap. And even less point in commenting thereon. It’s 30 years since I was at Cambridge, but maths and statistical analysis fundamentals haven’t changed. GIGO.

  41. @ Mike N
    I am just reading John O’Farrell’s brilliant book ‘Things can only get better’ about life as a Labour activist during the Thatcher/Major years.

    One of the main themes is the ten year delusion suffered by most Labour activists (including the author) that if they only offered the British electorate a truly socialist solution Labour would romp home; it only took four consecutive defeats to finally convince enough Labour members that this was simply rubbish and get Blair elected.

    Even as a non-Labour supporter I fear for this country if Labour goes down the same route again – talk about the definition of insanity…

    Oh, and it is also quite interesting looking back at how often the polls then over-stated Labour support – 1987 and even more so in 1992 (not to mention over-stating Alliance support in 1983). It’s not a new phenomenon that Tory voting intentions are understated in polls.

  42. @Mark Perrett – actually, statistical analysis has changed significantly in the last 30 years.

    Simply stating that the input samples are wrong isn’t terribly helpful. The whole point of the BPC investigation was to find reasons why the input samples might be wrong (spoiler alert: not everyone is interested enough in politics to answer questions on it).

    Your 10.37 post didn’t make a lot of sense to me – as it doesn’t take into account the rather obvious effect – people changing their voting intention over time…

    Out of interest, seeing as you think what pollsters do is “pseudo science”, what would you recommend?

  43. louiswalshvotesgreen and bigfatron

    The point about Lab’s 35% strategy (if there ever was one) is that it would (probably) mean no Con govt – so a Lab govt whether or not in coalition. So, based on the polls in the months and years ahead of GE 2015, this 35% looked ‘certain’. If however the polls had been more representative (accurate), it is highly probable that Lab would have had a different strategy (and ditto the Cons).

    I can only conclude that the polls (whether or not there was ‘herding’) influenced the outcome of GE 2015.

    I am also concerned that commissioners of these polls (eg The Times and The Sun, predominantly) used them to influence voter behaviour.

  44. Bigfatron

    John O’Farrell’s book is great, in fact I have met quite a few of the people he writes about.

    In 1959, Labour lost with a “modernising” leader and in 1945 won on a manifesto much further to the left on economic policy than the current Labour Party’s. Some of Labour’s 1983 manifesto is now Conservative Party policy.

    Times change and the early 1980s are as distant from today’s world as the 1940s were back then.

  45. BIGFATRON

    Actually you could argue that some of the Labour right are making the same mistake as the Labour left in the 1980s by living 30-40 years in the past.

  46. @MikeN – I don’t really know (or care) about whether the Times or Sun used the polls to influence voter behaviour (I don’t really read either) – so I’ll leave that for others to comment on. All I’d say is that the polls commissioned for the Mirror and Guardian/Observer gave pretty much the same results.

    As I said above, the 35% strategy was flawed even in its own context. Firstly, it was by no means certain – from the middle of 2014 Labour’s poll average was never above 35% (even with the faulty polls). Secondly, on a 35-35% or 35-33% split between Labour and the Conservatives, it’s not clear that Labour would even have been the biggest party in seat numbers, and we’d be back to the “who has the moral right to govern” story again.

    Of course, it’s impossible to prove that the result was or wasn’t affected by the polls, but it’s never a great idea for a party which has lost to blame the polls (or the media, or the electoral system, or the stupid public)! It’s better to withdraw with a bit of dignity, and try harder the next time :)

  47. louiswalshvotesgreen

    “it’s never a great idea for a party which has lost to blame the polls”

    I’m not making excuses about Lab’s 2015 performance. I think it could/should have been better.

    But there was a lot of discussion on threads here prior to GE 2015 about the outcome based on the VI that YG and other pollsters were presenting. These influenced our behaviour – it is not a big crazy step to imagine the behaviour of joe public (indeed the political parties) was influenced too.

  48. Mike N
    You never know, discussions on here may even have had a small influence on some politicians. I’d be surprised if none of them read this site, as it’s one of the more intelligent ones around. Having said that, they’d be fools to take too much notice as we’re all much more politically engaged than most voters so our views won’t be typical.

  49. @MikeN – Sorry, I’m not picking on you – I meant that more generally in terms of Labour’s performance. Unless you’re actually Alistair Campbell ;)

    I think it would be interesting (and it would certainly have aided the BPC inquiry) if Labour and the Conservatives gave some details on what their internal polls were saying. There were claims after the election by both sides that they knew the polls were wrong – but without any evidence or hard numbers that’s not proof of anything. Canvass returns can be a bit misleading (certainly they were for the Tories in ’97, who were genuinely shocked by the magnitude of the defeat), but surely in individual constituencies, the parties must have had a sense of the way things were going. Unless of course, they were deluding themselves.
