Today the polling inquiry under Pat Sturgis presented its initial findings on what caused the polling error. Pat himself, Jouni Kuha and Ben Lauderdale all went through their findings at a meeting at the Royal Statistical Society – the full presentation is up here. As we saw in the overnight press release the main finding was that unrepresentative samples were to blame, but today’s meeting put some meat on those bones. Just to be clear, when the team said unrepresentative samples they didn’t just mean the sampling part of the process, they meant the samples pollsters end up with as a result of their sampling AND their weighting: it’s all interconnected. With that out of the way, here’s what they said.

Things that did NOT go wrong

The team started by quickly going through some areas that they had ruled out as significant contributors to the error. Any of these could, of course, have had some impact, but if they did it was only minor. The team investigated and dismissed postal votes, falling voter registration, overseas voters and question wording/ordering as causes of the error.

They also dismissed some issues that had been more seriously suggested – the first was differential turnout reporting (i.e. Labour people overestimating their likelihood to vote more than Conservative people): in vote validation studies the inquiry team did not find evidence to support this, suggesting that if it was an issue it was too small to be important. The second was the mode effect – ultimately whether a survey was done online or by telephone made no difference to its final accuracy. This finding met with some surprise from the audience, given there were more phone polls showing Tory leads than online ones. Ben Lauderdale of the inquiry team suggested that was probably because phone polls had smaller sample sizes and hence more volatility, so spat out more unusual results… but that the average lead in online polls and the average lead in telephone polls were not that different, especially in the final polls.
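To make the sample-size point concrete, here is a minimal simulation of my own (the sample sizes of 1,000 and 2,000 and the underlying vote shares are assumptions for illustration, not figures from the inquiry): smaller samples produce noticeably more poll-to-poll variation in the reported lead, without any difference in the average.

    # Illustrative only: how sample size alone affects volatility in the Con-Lab lead.
    # Vote shares are set to roughly the 2015 GB result; sample sizes are assumptions.
    import random
    import statistics

    TRUE_SHARES = {"Con": 0.38, "Lab": 0.31, "Other": 0.31}

    def simulated_lead(n):
        """Draw n respondents from the assumed distribution, return the Con-Lab lead in points."""
        votes = random.choices(list(TRUE_SHARES), weights=list(TRUE_SHARES.values()), k=n)
        return 100 * (votes.count("Con") - votes.count("Lab")) / n

    random.seed(1)
    for label, n in [("phone-sized sample", 1000), ("online-sized sample", 2000)]:
        leads = [simulated_lead(n) for _ in range(2000)]
        print(f"{label} (n={n}): typical variation in the lead ≈ ±{statistics.pstdev(leads):.1f} points")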

On late swing the inquiry said the evidence was contradictory. Six companies had conducted re-contact surveys, going back to people who had completed pre-election surveys to see how they actually voted. Some showed movement, some did not, but on average they showed a movement of only 0.6% to the Tories between the final polls and the result, so it can only have made a minor contribution at most. People deliberately misreporting their voting intention to pollsters was also dismissed – as Pat Sturgis put it, if those people had told the truth after the election it would have shown up as late swing (but did not); if they had kept on lying it should have affected the exit poll, BES and BSA as well (it did not).

Unrepresentative Samples

With all those things ruled out as major contributors to the poll error the team were left with unrepresentative samples as the most viable explanation for the error. In terms of positive evidence for this they looked at the differences between the BES and BSA samples (done by probability sampling) and the pre-election polls (done by variations on quota sampling). This wasn’t a recommendation to use probability sampling (while they didn’t do recommendations, Pat did rule out any recommendation that polling switch to probability sampling wholesale, recognising that the cost and timing made it wholly impractical, and that the BES & BSA had been wrong in their own way, rather than being perfect solutions).

The two probability based surveys were, however, useful as comparisons to pick up possible shortcomings in the sample – so, for example, the pre-election polls that provided precise age data for respondents all had age skews within age bands: specifically, within the oldest age band there were too many people in their 60s and not enough in their 70s and 80s. The team agreed with the suggestion that samples were too politically engaged – in their investigation they looked at likelihood to vote, finding most polls had samples that were too likely to vote, and didn’t have the correct contrast between young and old turnout. They also found samples didn’t have the correct proportions of postal voters for young and old respondents. They didn’t suggest all of these errors were necessarily related to why the figures were wrong, but that they were illustrations of the samples not being properly representative – and that ultimately led to getting the election wrong.

Herding

Finally the team spent a long time going through the data on herding – that is, polls producing figures that were closer to each other than random variation suggests they should be. On the face of it the narrowing looks striking – the penultimate polls had a spread of about seven points between the poll with the biggest Tory lead and the poll with the biggest Labour lead. In the final polls the spread was just three points, from a one point Tory lead to a two point Labour lead.

Analysing the polls earlier in the campaign, the spread between different polls was almost exactly what you would expect from a stratified sample (what the inquiry team considered the closest approximation to the politically weighted samples used by the polls). In the last fortnight the spread narrowed though, with the final polls all close together. The reason for this seems to be methodological change – several of the polling companies made adjustments to their methods during the campaign or for their final polls (something that has been typical at past elections, companies often add extra adjustments to their final polls). Without those changes the polls would have been more variable… and less accurate. In other words, some pollsters did make changes in their methodology at the end of the campaign which meant the figures were clustered together, but they were open about the methods they were using and it made the figures LESS Labour, not more Labour. Pollsters may or may not, consciously or subconsciously, have been influenced in the methodological decisions they made by what other polls were showing. However, from the inquiry’s analysis we can be confident that any herding did not contribute to the polling error, quite the opposite – all those pollsters who changed methodology during the campaign were more accurate using their new methods.
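As a rough sense check of why a three-point spread looks narrow, here is a back-of-the-envelope simulation of my own (not the inquiry’s calculation: it assumes nine final polls, simple random samples of 1,500, a roughly level Con/Lab position, and no design effects or weighting): pure sampling error alone would usually produce a wider spread of leads than the final polls showed.

    # Sketch: spread of Con-Lab leads across a set of "final polls" if the only source
    # of variation were simple random sampling. Poll count, sample size and vote shares
    # are assumptions for illustration; real polls use quota samples and weighting.
    import random
    import statistics

    TRUE = {"Con": 0.34, "Lab": 0.34, "Other": 0.32}   # a roughly level race, as the final polls suggested
    N_POLLS, SAMPLE_SIZE, SIMS = 9, 1500, 2000

    def poll_lead(n):
        draws = random.choices(list(TRUE), weights=list(TRUE.values()), k=n)
        return 100 * (draws.count("Con") - draws.count("Lab")) / n

    random.seed(2)
    spreads = []
    for _ in range(SIMS):
        leads = [poll_lead(SAMPLE_SIZE) for _ in range(N_POLLS)]
        spreads.append(max(leads) - min(leads))

    print(f"median spread across {N_POLLS} simulated polls: {statistics.median(spreads):.1f} points")
    print(f"simulations with a spread of 3 points or less: {sum(s <= 3 for s in spreads) / SIMS:.0%}")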

For completeness, the inquiry also took everyone’s final data and weighted it using the same methods – they found a normal level of variation. They also took everyone’s raw data and applied the weighting and filtering the pollsters said they had used to see if they could recreate the same figures – the figures came out the same, suggesting there was no sharp practice going on.

So what next?

Today’s report wasn’t a huge surprise – as I wrote at the weekend, most of the analysis so far has pointed to unrepresentative samples as the root cause, and the official verdict is in line with that. In terms of the information released today there were no recommendations, it was just about the diagnosis – the inquiry will be submitting their full written report in March. It will have some recommendations on methodology – though no silver bullet – but with the diagnosis confirmed the pollsters can start working on their own solutions. Many of the companies released statements today welcoming the findings and agreeing with the cause of the error; we shall see what different ways they come up with to solve it.


224 Responses to “What the Polling Inquiry said”

  1. “They also dismissed some issues that had been more seriously suggested – the first was differential turnout reporting (i.e. Labour people overestimating their likelihood to vote more than Conservative people): in vote validation studies the inquiry team did not find evidence to support this, suggesting that if it was an issue it was too small to be important.”

    If it was an issue with specifically wwc voters then you’d need to compare by class and ethnicity.

    You’d also need to know if Muslim voter turnout was substantially higher than other groups as then lumping the total C2/D/E Lab vote together would be pointless.

  2. Mr Jones,

    Muslims are not a separate ethnicity.

    Muslims follow the religion of Islam, and I can assure you (living in West Yorkshire) there are Muslims who are white, working class and with a longer British heritage than our Royal Family. There are Muslims of Asian heritage and African heritage, even Russian and Chinese heritage.

    The state and religion should not mix, and certainly politics and religion should not either, even in polling.

  3. Not sure the state and politics should mix either…

  4. “As Pat Sturgis put it, if those people had told the truth after the election it would have shown up as late swing (but did not), if they had kept on lying it should have affected the exit poll, BES and BSA as well (it did not).”

    ————-

    What if, the exit poll just felt like an extension of voting, only minutes later, at the booth, so they told the truth, but polling later felt like, well, polling, so they didn’t…

  5. Interesting. The more politically engaged people are (ie. the more well-read, the more you have learned about the subject, the more you care… ) the more they are likely to vote Labour.

    For the left that fits with the argument: people should have to pass a test to vote. It shouldn’t be a given. If you don’t know some of the basics, you can’t vote.

    There is some sense in that.

  6. D in F
    “The more politically engaged people are (ie. the more well-read, the more you have learned about the subject, the more you care… ) the more they are likely to vote Labour.”

    The bit in brackets is your own assumption. My own experience is that politically engaged people are not necessarily more well-read or have learned about the subject, but might just have various personal grievances that they think it is someone else’s job to sort out for them.

  7. The accusations of herding were never taken that seriously – the BPC guidelines make openness an essential. Any attempt to adjust results to get the ‘right’ answer would usually be spotted and the miscreants subject to the derision of their peers and loss of commercial reputation. Because British pollsters are commercial organisations, mainly working in non-political fields, rather than more specialised (as you see in the US) there is too much to lose.

    The only complaints really came from the Telegraph Columnist Who Must Not Be Named and, like most effusions from New Labour Dogmatists, were based on looking at the US and assuming the same applied in the UK, without actually understanding how either country works. In this case Nate Silver had done some sophisticated analysis on herding in US pollsters and, because the polls were wrong, it was vaguely assumed that the same thing had happened in Britain.

    Matt Singh demolished the whole idea back in June:

    http://www.ncpolitics.uk/2015/06/is-there-evidence-of-pollsters-herding.html/

    applying the same techniques and finding no evidence. In any case inaccuracy is not a sign of herding – if anything it’s the opposite.

    I actually think that ‘herding’ is probably the wrong word for this sort of phenomenon as it implies some sort of outside agency pushing the polls together. ‘Huddling’ might be a better term as the pollsters look at each other to make sure they are doing things ‘right’. Where such copying confines itself to methodology, it’s more about best practice than plagiarism. And some of the changes that took place in polling during the campaign were actually pre-planned and part of a regular routine such as YouGov moving to using LTV or maybe MORI adopting political weighting.

    That’s not to say that there might not have been some limited suppression of polls that didn’t ‘look right’ – most famously Survation’s late poll. But vanishing polls are more likely to be due to the desire of commissioners to tell the story they want rather than pollsters’ manipulation of the actual result.

  8. @ Roger Mexico

    I think differentiation has to be made among conspiracies (there was none), caution against data (quite clearly some), overall polls and individual polls (as methodology).

    When I find some really striking data, let’s say in US public companies based on research design – I don’t have any problem with taking up the argument against the majority. If I find striking data by accident and I can only create the assumption post-hoc, I’m much more cautious even if I know I’m right, and dim the argument … There seem to be some in-between areas in the polling that make me uncomfortable (and I really would like to see some reflections on those constituency based experimental data by YouGov). The reason being the somewhat uniform adjustment that makes the assumption of continuity rather than radical change.

    I believed the polling headlines, but looking back I’m quite confident that the change that caused the off-mark result was not continuous, but a radical change (essentially UKIP, LibDem and DKs) in the interpretation. If that is the case, chasing after the elusive Con voters is an assumption of continuity, while something more fundamental has been going on (radicalisation in all directions). So, while I don’t believe that Labour could win an election tomorrow (the proper sense of VI), I also think that the polls overestimate the gap between Con and Lab (VI as it was interpreted – by most – in the last parliament).

  9. Not all methodological changes may have helped however – even if, as Anthony states, the general effect was to move them a bit nearer to reality (though nowhere near enough). For example the decision by YouGov to only poll panelists in the election period who had responded to political polls in January and February was probably a mistake.

    It presumably meant polls got completed quicker and also meant that it was easy to trace how individuals were changing their votes. But such ‘first responders’ seem to be an atypical group and will certainly belong to the ‘more interested in politics’ class. There may also be an under-representation from certain groups because the requisite number of participants has been found before they get a chance to reply.

    As has been noted on here, the YouGov London and Scottish polling was fairly accurate in May, and I wonder if it is a coincidence that these polls normally have a survey period of two to four days rather than the 20-22 hours that YouGov’s GB polls normally take. There’s other differences as well of course, including different weighting, but a longer and perhaps more selective approach might help polling companies locate the harder to reach members of the public.

  10. “For the left that fits with the argument: people should have to pass a test to vote. It shouldn’t be a given. If you don’t know some of the basics, you can’t vote.”

    ——-

    The standard test should naturally be: do you understand margin of error?

  11. CATMANJEFF
    what about Irish Catholics? Gays? retired people? Single mums?
    Is it just Muslims you don’t want to know about?

  12. Polling companies are going to discourage as much as possible the notion that a significant portion of respondents lie to them, thus screwing up the results.

    They don’t want to admit this as this has implications for the wider market research / marketing areas.

    I think that the huge emphasis on – “we picked the wrong people” doesn’t sound right.

    Instinct surely is telling us that a small BUT SIGNIFICANT number of respondents either lied consciously to call centre market research telephonists (a lot of whom are on approx £6.50 per hr, on zero hours contracts with no paid annual leave) – or simply couldn’t bring themselves to admit that yes, they supported a govt tough on benefits claimants and on the side of business owners.

    And they knew fairly well they would end up voting CON.

    This thinking may be supported by the fact that the online polls showed regular CON leads.

  13. It seems extraordinary to me that a product which claims to estimate the Opinion of the UK Voting Public is now declared to be systemically unrepresentative.

    It’s like selling something on eBay with the wrong photo attached – and when the purchaser complains, saying: ah, sorry, it doesn’t look like the photo we showed.

  14. @Catmanjeff:

    Thanks for your response to my comment on the last thread but I think I need to clarify my last post as I wasn’t denying that there was a systematic error.

    I was simply saying that the error was nowhere near as significant as suggested in the media and was even smaller if you consider margin of error.

    Here’s what I wrote:

    ————–
    “I’m getting increasingly irritated by the “well we know you can’t trust the polls” cliché in the media whenever an opinion poll result is cited these days.

    The election polls overstated the Labour position by 2-3% and understated the Conservative position by a similar amount (less if one includes margin of error).

    If I’m looking at a poll where a +/- 2-3% difference is likely to be crucial I would be very cautious about calling the result, but that was obvious before the election.

    The relatively small error in the May election is being used to dismiss polls with margins of political significance of 10-20% and higher.”

    ———-

    If, for instance, the EU referendum polling showed a 53%-47% differential I would have no confidence in calling the result, i.e. in almost any situation other than general election polling most sensible people are cognisant that there is an inevitable uncertainty in polling that goes BEYOND the statistical margin of error.

    However the media are using the relatively small May 2015 polling error to dismiss polls that have differentials well beyond this margin of uncertainty.
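    For reference, a minimal sketch of the textbook sampling-only margin of error for a poll of about 1,000 (the sample size is an assumption, and the calculation deliberately ignores the uncertainty beyond sampling error discussed above, so treat it as a floor):

        # Textbook 95% margin of error for a single share and for the gap between two
        # parties, assuming a simple random sample of size n (quota samples and
        # weighting only add to this, so these are optimistic figures).
        import math

        def moe_share(p, n, z=1.96):
            return z * math.sqrt(p * (1 - p) / n)

        def moe_lead(p1, p2, n, z=1.96):
            # the two shares are negatively correlated, which widens the interval on the gap
            var = p1 * (1 - p1) + p2 * (1 - p2) + 2 * p1 * p2
            return z * math.sqrt(var / n)

        n = 1000
        print(f"share of 34%:      +/- {100 * moe_share(0.34, n):.1f} points")
        print(f"lead when 34 v 34: +/- {100 * moe_lead(0.34, 0.34, n):.1f} points")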

  15. Carfrew – it sure isn’t a perfect test, but if people lie consistently before and after it is extremely difficult to evidence. You end up having to rely on that sort of indirect evidence.

    Roger – the fieldwork period is a good question. I don’t like polls that have too short a fieldwork period. However, the “go back to people who answered in February” thing we did during the election campaign should actually have cancelled that out because of the way we did it – essentially, we ran samples over several days but sliced the data by when people answered (so, for example, Monday’s results would be made up of the fast responders from the sample that went out on Sunday, the medium speed respondents from the sample that went out on Saturday and the slow respondents from the sample that went out on Friday). Anyway, our final poll and the Sun on Sunday poll during the campaign were done using normal methodology (just to check the different sampling technique wasn’t skewing things!) and they had the same error.

    Deepthroat – online polls did not show more Tory results. You’re quite right about the test you propose (if people were reluctant to tell the truth to pollsters they should be more honest on a computer screen than to a human interviewer, and online polls should have shown better figures for the Tories) but that isn’t the pattern that we found.
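    In case the slicing described above is hard to picture, here is a toy sketch (the day labels and speed bands are invented; only the pooling pattern matters): each day’s published figures pool the fast responders from the newest sample with the medium and slow responders from the two previous days’ samples, so slower responders still end up represented.

        # Toy illustration of pooling response-speed bands across overlapping samples.
        samples = {
            # sample sent out on ...: responses grouped by how quickly they came back
            "Friday":   {"fast": ["Fri-fast"], "medium": ["Fri-med"], "slow": ["Fri-slow"]},
            "Saturday": {"fast": ["Sat-fast"], "medium": ["Sat-med"], "slow": ["Sat-slow"]},
            "Sunday":   {"fast": ["Sun-fast"], "medium": ["Sun-med"], "slow": ["Sun-slow"]},
        }

        def mondays_poll(samples):
            """Monday's figures: fast responders from Sunday's sample, medium from
            Saturday's, slow from Friday's."""
            return (samples["Sunday"]["fast"]
                    + samples["Saturday"]["medium"]
                    + samples["Friday"]["slow"])

        print(mondays_poll(samples))   # ['Sun-fast', 'Sat-med', 'Fri-slow']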

  16. David in France

    “Interesting. The more politically engaged people are (ie. the more well-read, the more you have learned about the subject, the more you care… ) the more they are likely to vote Labour.”

    LOL :-0

    I care passionately and have never voted Labour in my life and could not imagine doing so. What twaddle.

  17. One could equally argue that voting should be restricted to those with the most experience of life, the over 60s.

  18. It so happens that around the time of the General Election, my wife was contacted by the British Social Attitudes Survey, a Government organisation, and their method of operation is very different from that of the opinion polls.

    1. They have a totally random method of choosing respondents. They use the post office list of addresses, and choose (if I remember correctly) one address in 25,000.

    2. They then contact that address. If it is a single-person household, they interview that person. If it is a two-person household, then they interview the second person alphabetically. I think there is a formula for which person they interview in larger households. So, unlike in opinion polls, it is completely random.

    3. Then they must interview that person. They will not be fobbed off with someone else.

    4. The interview is then conducted in person face-to-face. It takes around three-quarters of an hour, no-one else present.

    5. If that person is difficult to get hold of they will come back time and time again till they get them.
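    (As a toy sketch of that kind of fixed selection rule: the 1-in-25,000 address step and the two-person rule follow the description above, while the rule for larger households is a made-up placeholder, since the real survey has its own formula.)

        # Toy sketch only: systematic address selection plus a fixed within-household rule.
        import random

        def select_addresses(address_list, interval=25000):
            """Take every interval-th address from a random starting point."""
            start = random.randrange(interval)
            return address_list[start::interval]

        def select_respondent(household_members):
            members = sorted(household_members)      # alphabetical order
            if len(members) <= 2:
                return members[-1]                   # sole occupant, or second person alphabetically
            return members[len(members) // 2]        # placeholder rule for larger households

        print(select_respondent(["Priya", "Alex"]))  # -> 'Priya'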

    Obviously they ask many different questions, but as regards the election, they report:

    “In contrast to the polls, BSA 2015 replicates the outcome of the election quite closely. Just 70% said that they had voted, only a little above the official turnout figure of 66%. Meanwhile, amongst those who did say that they voted, support for the Conservatives is six points higher than that for Labour, very close to the near seven point lead actually recorded in the election.”

    There is much more in the report for those who want to delve deeper, see:

    http://www.bsa.natcen.ac.uk/latest-report/british-social-attitudes-33/random-sampling.aspx

    This says to me that the defects in opinion polling are due to sampling error as reported. Opinion poll respondents are to a great extent self-selecting, being members of online panels of the politically-interested, or at least being interested enough to be willing to undergo a long phone interview. The ‘results’ obtained from these interviews are then subject to a huge amount of weighting to try and get the right proportions of different kinds of people. In the end the relationship of the outcome to what you would get from a truly random sample must be enormously variable, and it is easy to see how a pro-Labour bias can be universally present.

    Pollsters have saved a lot of time and money by resorting to phone and internet polling, but in doing so they have sacrificed accuracy, and credibility.

  19. Getting a representative sample is never an easy thing. I have no doubt that the BBC tries its best to get a representative sample for QT but they often fail, with an audience seemingly full of doctors for a recent programme.

    In what seems like a previous life, as a Conservative councillor, I was aware of a large number of Conservative voters who really did not want to be contacted at all but would certainly vote. I would say that there is a significant group of mainly older people who think that their vote should be an entirely private matter and I would judge are overwhelmingly Conservative by inclination.

  20. Anthony,
    Just a possibility that’s occurred to me, when thinking about this, and I don’t recall seeing mentioned. Also tying into whether the electorate are just deceitful so-and-so’s, just doing it to annoy the innocent pollster-about-town.

    You’ve always said that a poll is just a snapshot of current opinion. Which of course it is, it’s all it can be. But we all know that the press report it as the possible election result – and they’re your clients for most polls. But also the public are almost certainly confused about this. Many people suspect that voters “game” the polls – so might change their vote preference to pollsters over a single issue they’re unhappy about, knowing that this wouldn’t really alter their voting intention. But will give the government the kick up the backside they feel they deserve. This is after all how voters often treat by-elections and European/council ones, which are far more serious than just lying to a pollster.

    So, has there been an effect (conscious or subconscious) on polling methodology and weighting?

    After all, what you’re trying to do is capture current public mood. But that’s not actually the question any of us are really asking. We want to know who’ll win next time – so are pollsters also tempted by predicting the future election result, and if so, is this having a significant effect?

    This is almost unprovable of course. And presumably unsolvable. As your only check on what you’re doing is elections. So you have to change weightings and methods accordingly. But there’s no easy way to test public opinion now – and compare it to opinion-as-polled. Unless we conclude that the BSA is more accurate, being larger and because it has the money to try to contact the harder to reach people. And then use that as a test on mid-term polls.

  21. Theoretically, shouldn’t online panels be more accurate than telephone polling? Or become so over time.

    Given that you’ve got a history on every panellist, so are able to compare their answers to past answers. It’s one thing to lie to pollsters, but maintaining the lie consistently over time is surely more effort than anyone is going to bother with. Or is the churn of panellists so large that this data isn’t available?

  22. @ANTHONY WELLS

    “Carfrew – it sure isn’t a perfect test, but if people lie consistently before and after it is extremely difficult to evidence. You end up having to rely on that sort of indirect evidence.”

    Indeed. This is why in marketing, they use control questions and stuff, to elicit what peeps really think. Or in psychological research, they’ll test subconscious responses. Or pretend they’re testing summat completely different.

    Whatever you effing do though, don’t be trusting the buggers to play fair, even with themselves. If you could do that, you wouldn’t need modding. Many, many questions you don’t even think are worth asking, so little do you value the responses (e.g. would x, y or z change your vote).

  23. “If you could do that, you wouldn’t need modding.”

    ————-

    Just wanna clarify, this bit sounds ambiguous. I wasn’t suggesting that your posts need modding. I meant that you wouldn’t need to mod other peeps posts.

  24. Not that I’m saying your posts don’t need modding either. It’s just a different thing…

  25. “After all, what you’re trying to do is capture current public mood. But that’s not actually the question any of us are really asking. We want to know who’ll win next time – so are pollsters also tempted by predicting the future election result, and if so, is this having a significant effect?”

    I don’t know how it was treated by the inquiry, but there is an argument that the pollsters who reallocate “don’t knows” are effectively attempting to “predict” preferences in a real election and thus moving away from the ‘snapshot’ that they claim to be taking. But given that the inaccuracy was across the board including among pollsters who don’t reallocate (such as YG) this was presumably not a decisive fault.
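    For anyone unfamiliar with the reallocation being referred to, a bare-bones sketch of the general idea (the 50% fraction and the toy sample are assumptions for illustration, not any particular pollster’s published method): a fraction of don’t-knows is assumed to drift back to the party they recall voting for last time.

        # Sketch of "don't know" reallocation by recalled past vote (fraction is illustrative).
        from collections import Counter

        def reallocate_dks(respondents, fraction=0.5):
            """respondents: list of (current_vi, recalled_past_vote); 'DK' marks a don't-know."""
            totals = Counter()
            for current, past in respondents:
                if current == "DK":
                    if past is not None:
                        totals[past] += fraction     # partial reallocation to recalled past vote
                else:
                    totals[current] += 1
            grand = sum(totals.values())
            return {party: round(100 * n / grand, 1) for party, n in totals.items()}

        sample = [("Con", "Con"), ("Lab", "Lab"), ("DK", "Con"), ("DK", "Lab"),
                  ("Lab", "LD"), ("Con", "Con"), ("DK", None)]
        print(reallocate_dks(sample))   # -> {'Con': 50.0, 'Lab': 50.0}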

  26. I should add, you already do some stuff at times that assumes incorrect responses. E.g. the shy Tories thing. Which is introducing a fudge factor to compensate for the less than one hundred percent honest answer.

    But the problem with this is that it’s after the event. One might as well play safe and go the whole hog and see polling as a kind of lie detector test.

  27. @Carfrew:

    To an extent that’s what political analysts (as opposed to pollsters) do.

    They look at the topline poll results but then interpret these figures through the prism of the internals such as assessments of leadership and economic competence.

    However, I’m not sure that pollsters could get away with adjusting the topline voting intention figure using these internals. I think most would consider that this oversteps the bounds of simple weighting.

    Peter Kellner is an interesting example, being both an analyst and a pollster. He kept writing articles describing what the internals were telling him but then just couldn’t bring himself to deviate his projections too far from his polling figures.

    Perhaps we should accept voting intention opinion polls for what they are and then use our own judgement to analyse them?

  28. Surely this just goes back to my original suggestion from a couple of days ago. That the pollsters should club together and fund a voter capture and torture centre.

    Wouldn’t we all enjoy the election night broadcasts more if Jeremy Vine were standing in front of the Scream-o-meter?

  29. One question I’d like to pose which doesn’t seem to have been commented on is that if things were fine 5, 10, 15 years ago with the polling compared to the actual result, then what has changed in such a short space of time?

    Surely if it was just down to sampling issues this would gradually creep in over time and polls would have gradually become less accurate?

    The only theory I can come up with is that the people less interested/active in politics this time behaved differently to the ones who were more interested in politics, whereas before the different groups had behaved in much the same way (assuming political interest equates to willingness to answer polling questions).

    For me the two key elections were 1992 and 2015 (and an element of 2010) and the polls got it wrong both times. I can’t recall 1992 that well and how rabid the right wing press were back then (they may have mellowed a tiny bit from the 1980s but I’m pretty sure they were still very aggressive towards Kinnock) and they were certainly rabid this time around.

    So perhaps, if polling companies can’t get their samples right, there will always be a problem with polling in a tight year when you have a flat out campaign by various elements influencing the less politically inclined voter?

  30. SHEV11
    Do you think the polls would be more accurate if the right wing press were less rabid?

  31. Shevii,
    I’d say that Kinnock got far more of a monstering than Miliband ever suffered from the tabloids. But I’m not sure that’s particularly relevant to polling changes over time.

    Firstly, this is not a new problem. The polls have been too “Labourey”, as AW put it in a post once, for a couple of decades now.

    Several things have also changed in recent years. There were no online panels for polling back then. I don’t know exactly when they started, but I’m pretty sure it was around or after 2005.

    People’s attitude to the telephone has also changed. I know plenty of people now that don’t have a landline. Or at least it’s only used for broadband, and there’s no phone handset plugged into it. There’s also been a huge increase in cold calling – particularly the more unscrupulous end – now that you can make international calls for virtually nothing via the internet. So people who do have phones are less likely to answer, or to believe it when the caller says this is just a survey.

    Many people also don’t answer their phones (landline or mobile) if they don’t recognise the number. And there’s still a resistance to accepting cold calls on mobiles (seen as more personal), even though many people don’t have/use landlines.

    I believe that all methods of polling are suffering from declining response rates. Not sure if that’s as true for online panel type surveys, but then it’s looking like the problem with those is that the sample is “too young” and “too likely to vote” – and thus throwing the results off.

  32. TOH
    “What twaddle”
    What a great word that is and so much more genteel than the American version.

    At the back of my mind is a response a panellist on QT (David Steel?) once gave to a question along the lines of whether you should have to pass a test before being allowed to vote. His answer ran along the lines of, “Every adult should have the right to vote, some might not be very intelligent but they are probably the best represented group in the HoC.”

  33. Anthony

    The fieldwork dates on the pre-election polls gave the usual one night gap, but in any case that’s not really the problem. What YouGov did was to exclude everyone who was not quick on the trigger in January and February. How you then arranged things later on is irrelevant: slow responders were already missing from the pool you were drawing your samples from.

    There may of course be a bias against such people in all online polling. If you’re not able to respond quickly and constantly get screened out because the survey has been completed (or the quota they are in has been filled) then they may stop responding altogether and drop from the panel. It would be interesting to look at response times to surveys and whether the ‘hard to get’ people are being used.

    I know pollsters try to avoid such bias by sending out their requests in more than one tranche, but even then within those tranches there may be problems if quotas get filled too quickly.

  34. @ Carfrew

    The probing questions don’t always work in marketing either. For example, green-credential products score high in both surveys and focus groups, and it is reported to the client, who has the sales figures which show that when it comes to actually spending money, people act differently from the survey or focus group results.

    I don’t know if the marketing companies go back to the participants and ask them if they subsequently bought products with green credentials. If they do, and people say they did, then obviously the marketing company had the errors in the sampling ….

    What fascinates me is the contradiction between the VI figures and the results to the leadership, etc. questions.

    Do people actually reply to a different question compared to the one they are asked? E.g. I would vote for Labour, but not in my constituency/not with this leader/not so soon, etc.

    Wouldn’t it be a better way (I know it came up back in April) to ask conditional questions (or negative ones)? The former would mean a different realm of statistics, the latter would set the maximum VI for a party (rather than the relative share).

  35. Carfrew
    As you say psychologists use various techniques to minimise socially desirable responding and get around the problem of lying. Asking people to report their own behaviour (in this case voting behaviour) is notoriously unreliable. It’s much, much better to measure actual behaviour. In some fields researchers are experimenting with implicit association tests to assess attitude. These involve measuring reaction times for various word association tests. The principle is that you’ll respond more quickly when the association is congruent with your attitude (X-‘good things’ faster than X-‘bad things’). I think ICM have been experimenting with a variation on this.

    The problem is that the difference in reaction times is a population difference. In other words you can’t reliably allocate a respondent to a category on the basis of one data point (think height: men are on average taller than women, but given the height of a person you can’t reliably say what sex that person is). The smaller the difference between the populations, the less confident you can be about classifying one data point. I suspect that in the case of reaction times to voting intention or political issue questions the difference will be small…
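    To put a rough number on that, a small sketch with entirely made-up figures (two populations whose mean reaction times differ by far less than the spread between individuals): classifying each person from a single measurement barely beats a coin toss.

        # Made-up illustration: a real population-level difference in reaction times
        # that is still almost useless for classifying individuals.
        import random
        import statistics

        random.seed(3)
        group_a = [random.gauss(620, 150) for _ in range(10000)]   # assumed mean 620ms, sd 150ms
        group_b = [random.gauss(640, 150) for _ in range(10000)]   # assumed mean 640ms, sd 150ms

        cut = (statistics.mean(group_a) + statistics.mean(group_b)) / 2
        correct = sum(x < cut for x in group_a) + sum(x >= cut for x in group_b)
        print(f"group means differ by ~{statistics.mean(group_b) - statistics.mean(group_a):.0f}ms")
        print(f"single-measurement classification accuracy: {correct / 20000:.0%}")   # barely above 50%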

  36. David in France writes

    For the left that fits with the argument: people should have to pass a test to vote. It shouldn’t be a given. If you don’t know some of the basics, you can’t vote.

    I find this surprising. If such a test was applied to Labour candidates, most would be disqualified as being unable to distinguish between debt and deficit.

  37. Good Afternoon everyone from a sunny Bournemouth East; east Southbourne Ward.

    I have read somewhere that Survation had the correct poll, but they did not publish the figures; does anyone know if this is true?

  38. @ Chris Lane

    They themselves claimed it.

    http://survation.com/snatching-defeat-from-the-jaws-of-victory/

    What we don’t know is whether there were other, opposite-direction polls not published by them or by others.

    I really don’t think this particular poll and the events are important (in isolation).

  39. @ Anthony Wells

    Sorry for asking it again (I promise this is the last), but how reliable were those constituency level bands of VI predictions on the YouGov site? Has YouGov done any analysis on them? They felt intuitively good, and for the NW they were correct (except for VIs for parties not standing …).

  40. @John

    The internals can mislead though; they aren’t necessarily as definitive as some tests you might run. And they may of course conflict.

    Meanwhile, trusting your data is an age old problem. Even Einstein, not liking a particular consequence of a theory, introduced a fudge factor to change the outcome, then regretted it later.

    Then years later peeps start wondering if the fudge factor actually had some merit…

    It’s difficult, this thing of finding out what the hell is really going on…

  41. @Laszlo

    “I don’t know if the marketing companies go back to the participants and ask them if they subsequently bought products with green credentials. If they do, and people say they did, then obviously the marketing company had the errors in the sampling…”

    ———

    Unless… They’re telling porkies.

    And yes, to confirm, sometimes people do answer a different question to that which is asked, e.g. when registering disquiet as a protest.

  42. Kaslo, British Columbia
    Canada

    In the recent Canadian federal election three pollsters were spot on with their prediction of what the winning party (the Liberals) were going to get, being 0.4 to 0.5 points too low and 0.5 points too high.

    One other pollster was off by 3.7%, but interestingly was absolutely on the money in terms of what the opposition Conservatives would get.

    Two were spot on with what the third party social democrat NDP would achieve and two were slightly off:

    https://en.wikipedia.org/wiki/Opinion_polling_in_the_Canadian_federal_election,_2015#Campaign_period

    Only one pollster correctly predicted the Green vote, but two correctly predicted the size of the fourth placed Bloc Quebecois.

    Size of sample does not seem to make a difference and all but one pollster was well within the quoted margin of error.

    One pollster, Nanos, took the time to poll which parties voters were or would consider voting for, and this helped explain that the Conservatives had little room for growth and that a large number of voters were trying to choose between the Liberals and NDP so they could ensure the Conservatives were no longer the government.

    I think if UK pollsters had adopted Nik Nanos’s approach they would have found that while some UK voters wanted change it was not enough to switch to Labour and Ed Miliband.

    I remember thinking at one point that the Liberal Democrats were in deep trouble, because one poll I read showed that there was an inherent danger that the bleed off to other parties had the potential to be huge.

    In the end it appears that a considerable number of Liberal Democrats opted to vote Conservative to keep Labour out, which supports my theory that the left leaning Liberal Democrats had long gone on to support the Labour and Green parties.

    I chickened out on my LD seat prediction and went with the herd.

    I still feel that the one way to get truth in polling and canvassing is to actually visit the voter in person and then you can read their body language.

    At least with telephone polling you can get the sound of their voice, its inflections, whereas with online communication I am not sure how you read the authenticity of what a person is saying.

    I would be very interested to know if the Canadian pollsters did a better job of getting the age demographic of their sample right, noting that the one pollster, EKOS, who reported that the result would depend on whether the cell phone generation would show up to vote was spot on with their Conservative vote prediction.

    Our turnout went up from 61.1% to 68.5% across Canada, and was even higher in some provinces, such as 77.4% on Prince Edward Island and 70.4% in British Columbia.

  43. @Sorbus

    Yes, something with a continuous range of values can be problematic. You can get around this using tests with more definitive yes/no responses, but subconsciously given.

    Otherwise, you have to calibrate, as with lie detectors.

  44. […] ultimately whether a survey was done online or by telephone made no difference to its final accuracy. This finding met with some surprise from the audience, given there were more phone polls showing Tory leads than online ones. Ben Lauderdale of the inquiry team suggested that was probably because phone polls had smaller sample sizes and hence more volatility, hence spat out more unusual results… but that the average lead in online polls and average lead in telephone polls were not that different, especially in the final polls

    This only really works if you concentrate on Lab/Con lead as being the one thing that polling should be assessed on and (as stated) only judge polling on the last polls before the election. But if you’re assessing the difference between modes of polling, how accurate they are is irrelevant. What matters is whether there is a systematic difference and what causes it.

    Restricting any assessment to one poll from each pollster is also going to mean that the sample you are using to judge the situation is also very small – something that pollsters are usually concerned about – so the absence of an effect could be due to chance. In practice there does seem to have been a weak effect where phone polls showed a bigger Conservative VI, though whether this was due to the mode or just an accidental outcome of the various house effects is another matter.

    A more serious mode effect though is in the assessment of the UKIP VI which was usually (and continues to be) much lower in phone polls than in online ones. Oddly enough the two tended to converge just before the election (evidence perhaps of ‘huddling’?). This may be evidence of UKIP supporters being split into both ‘shy’ and ‘loud’ (with perhaps a gender link), with online polls having too many of the latter and phone polling not getting enough of the former. Though again the ‘shy’ UKIP voters may be more absent than reticent.

  45. Related to what peeps have been saying… Another problem is the situation-specificity of some questions.

    Thus, the question whether someone is trusted. This can depend on the situation. You might say polling takes this into account, by asking if trusted on the economy etc.

    But peeps can change their view according to the nature of a situation. Thus, some folk are more trusted in a crisis than at other times. Or when dispassionate, even ruthless decision-making is required, but not so much at other times.

    Which can mess with your internals, as it were…

  46. Carfrew
    Not sure what kind of test you mean. Can you give an example?

  47. Is it true that on the exit polls people were asked to put on a piece of paper how they voted rather than verbally telling the pollster?

  48. I wonder if the polling companies will consider it worth the extra financial cost to get to those seemingly difficult to find respondents that were missing from last year’s samples?

  49. @Sorbus

    For example, looking for a chemical marker that might not normally be present and is only produced in certain situations.

    Alternatively, sidestep the problem with using trick questions requiring binary choices, but where you’re actually testing summat different to what they think. Then you avoid the problem of a continuous range of values requiring calibration, but keep it subconscious.

  50. On the subject of phone polls, Martin Boon having One Of His Meltdowns is always good fun, but if you look at the tables for the latest ICM, you can see why he is worried. Of the 1001 people ICM contacted, 251 said they had voted Labour and only 242 Conservative[1]. But ICM do then weight their sample so it reflects the actual result of the election[2] among other things. Even after that ICM show the Conservatives just 4 voters ahead, before they get to LTV and all the other adjustments ICM use or have added since the election. And even after those the Conservatives have a 5 point lead, which is smaller than most online pollsters are showing.

    One possible reason may be suggested in John Curtice’s report using the BSA data:

    http://www.bsa.natcen.ac.uk/media/39018/random-sampling.pdf#page=15

    regarding “Reported Vote By Number of Calls Made to Achieve Interview”:

    Call 1: Con 35% Lab 41% (13% of sample)[3]

    Call 2: Con 39% Lab 33% (23%)

    Call 3-6: Con 42% Lab 31% (55%)

    Call 7-9: Con 34% Lab 42% (9%)

    So among those who were interviewed on the first call there were disproportionately many Labour voters[4]. But these would be the same people who answer the phone and respond to a phone survey – phone pollsters don’t ring back 9 (or 16) times. This means the same Labour bias would be found among those who reply to phone polls[5].

    It’s been known for a long time that phone polls tend to find too many people who vote Labour. The real worry is that even after adjusting for this using conventional methods the sample may give a false reading. And if the reason for a bias is unknown, then not only is it difficult to correct for it, but those corrections may swing things the wrong way if the reason ceases to apply.

    That said, even if it’s only 5% who are responding, that’s still more than belong to any one particular online panel. So phone polls still get to people that the internet pollsters don’t.

    [1] This is a current problem with telephone polls. If you look at ComRes’s December poll for the Mail, they had 245 Labour voters and only slightly more Tory ones (249). And after weighting to the result, they actually had Labour ahead (modified to a 4 point Tory lead in the final figure).

    [2] One perpetual problem is how you weight the people who didn’t vote last time (or refuse to say how). There is a danger that you can upweight this group too much before you then lose most of them through LTV or other adjustments. This may distort your other weights and also increases the margin of error (already large in phone polls).

    [3] I actually find this distribution of the percentages of calls needed a bit odd. Normally if there was a fixed probability of contacting someone each time, you would expect the most productive number of calls to be 1, instead of, as it appears, 2 or even 3. Either the first call is scheduled oddly or interviewers are using it to arrange a time for a later meeting (the BSA interview takes about an hour) which is the one that ‘counts’.

    [4] There were some factors that would account for this – too many SEG DEs, for example. Similarly the 7-9 group had too many young voters. But even after adjusting for such things the pro-Labour lean was strong.

    [5] Also given that a bit under half of those contacted by BSA (and BES) refused to take part in the survey, if you halve that 13% you get fairly near to the sort of 1 in 20 response rate that telephone pollsters currently report.
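    For anyone unfamiliar with the past-vote weighting mentioned above, here is a bare-bones sketch of the general idea (a single weighting variable only, with invented raw counts and approximate target shares; real pollsters weight on several variables at once):

        # Sketch: weight recalled 2015 vote so its shares match target figures.
        # Targets are roughly the GB result; the raw sample is invented and slightly too Labour.
        from collections import Counter

        TARGET = {"Con": 0.38, "Lab": 0.31, "Other": 0.31}   # approximate shares among 2015 voters

        def past_vote_weights(recalled_votes):
            """One weight per respondent so recalled-vote shares hit TARGET."""
            counts = Counter(recalled_votes)
            n = len(recalled_votes)
            return [TARGET[v] / (counts[v] / n) for v in recalled_votes]

        recalled = ["Lab"] * 350 + ["Con"] * 330 + ["Other"] * 320   # invented raw recall figures
        weights = past_vote_weights(recalled)

        weighted = Counter()
        for vote, w in zip(recalled, weights):
            weighted[vote] += w
        total = sum(weighted.values())
        print({party: f"{100 * share / total:.0f}%" for party, share in weighted.items()})
        # -> roughly {'Lab': '31%', 'Con': '38%', 'Other': '31%'}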
