“But the sheer size of the survey […] makes it of interest…”

One of the most common errors in interpreting polls and surveys is the presumption that because something has a really huge sample size it is more meaningful. Or indeed, meaningful at all. Size isn’t what makes a poll meaningful; it is how representative the sample is. Picture it this way: if you’d done an EU referendum poll of only over-60s you’d have got a result that was overwhelmingly LEAVE… even if you polled millions of them. If you’d done a poll that only included people under 30 you’d have got a result that was overwhelmingly REMAIN… even if you polled millions of them. What matters is that the sample accurately reflects the wider population you want it to represent – that you have the correct proportions of both young and old (and male and female, rich and poor, and so on). Size alone does not guarantee that.

The classic real-world example of this is the 1936 Presidential election in the USA. I’ve referred to it many times, but I thought it worth recounting the story in full, if only so people can direct others to it in future.

Back in 1936 the most respected barometer of public opinion was the survey conducted by the Literary Digest, a weekly news magazine with a hefty circulation. At each Presidential election the Digest carried out a survey by mail, sending ballots to its million-plus subscriber base and to a huge list of other people gathered from telephone directories, membership organisations, subscriber lists and so on. There was no attempt at weighting or sampling – just a pure numbers grab, with literally millions of replies. This method had correctly called the winner of the 1920, 1924, 1928 and 1932 Presidential elections.

In 1936 the Digest sent out more than ten million ballots. The sample size for their final results was 2,376,523. This was, obviously, huge. One can imagine how today’s papers would write up a poll of that size and, indeed, the Digest wrote up their results with not a little hubris. If anything, they wrote them up with huge, steaming, shovel-loads of hubris. They bought all the hubris in the shop, spread it across the newsroom floor and rolled about in it cackling. Quotes included:

  • “We make no claim to infallibility. We did not coin the phrase ‘uncanny accuracy’ which has been so freely applied to our Polls.”
  • “Any sane person can not escape the implication of such a gigantic sampling of popular opinion as is embraced in THE LITERARY DIGEST straw vote.”
  • “The Poll represents the most extensive straw ballot in the field—the most experienced in view of its twenty-five years of perfecting—the most unbiased in view of its prestige—a Poll that has always previously been correct.”


You can presumably guess what is going to happen here. The final vote shares in the 1936 Literary Digest poll were 57% for Alf Landon (Republican) and 43% for Roosevelt (Democrat). This worked out as 151 electoral votes for Roosevelt and 380 for Landon. The actual result was 62% for Roosevelt, 38% for Landon. Roosevelt received 523 votes in the electoral college, Landon received 8 – one of the largest landslide victories in US history. Wrong does not even begin to describe how badly off the Literary Digest was.

At the same time George Gallup was promoting his new business, carrying out what would become proper opinion polls and using them for a syndicated newspaper column called “America Speaks”. His methods were still quite far removed from modern ones – he used a mixed-mode approach, with mail-out surveys for richer respondents and face-to-face interviews for poorer, harder-to-reach respondents. The sample size was also still huge by modern standards, about 40,000*. The important difference from the Literary Digest poll, however, was that Gallup attempted to get a representative sample – the mail-out surveys and the sampling points for face-to-face interviews had quotas on geography and on urban and rural areas, and interviewers had quotas for age, gender and socio-economic status.

[Thumbnail: Gallup’s final 1936 poll]

Gallup set out to challenge and defeat the Literary Digest – a battle between a monstrously huge sample and Gallup’s smaller but more representative one. Gallup won. His final poll predicted Roosevelt 55.7%, Landon 44.3%.** Again, by modern standards it wasn’t that accurate (the poll by his rival Elmo Roper, who set quotas based on the census rather than on turnout estimates, was actually better, predicting Roosevelt on 61%… but he wasn’t as media-savvy). Nevertheless, Gallup got the story right and the Literary Digest got it hideously wrong. George Gallup’s reputation was made and the Gallup organisation became the best-known polling company in the US. The Literary Digest’s reputation was shattered and the magazine folded a couple of years later. The story has remained a cautionary tale of why a representative poll with a relatively small sample is more useful than a poll that makes no effort to be representative, even if it is absolutely massive.

The question of why the Digest poll was so wrong is interesting in itself. Its huge error is normally explained by where the sample came from – it was drawn from things like magazine subscribers, automobile association members and telephone listings. In Depression-era America many millions of voters didn’t have telephones and couldn’t afford cars or magazine subscriptions, creating an inbuilt bias towards wealthier Republican voters. In fact it appears to be slightly more complicated than that – Republican voters were also far more likely to return their slips than Democrat voters were. All of these factors – a skewed sampling frame, a differential response rate and no attempt to combat either – combined to make the Literary Digest’s sample incredibly biased, despite its massive and impressive size.
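To make the mechanism concrete, here is a minimal sketch in Python – all the rates below are invented for illustration, and only the direction of the biases mirrors the Digest’s situation. A huge mail-out drawn from a Republican-leaning frame, with Republican voters likelier to reply, lands far from the truth; a much smaller representative sample does not:

```python
import random

random.seed(1936)

# The true electorate voted 62% Roosevelt (the actual 1936 result).
def poll(landon_share_of_frame, landon_reply_rate, roosevelt_reply_rate, mailed):
    """Simulate a mail-out straw poll and return (Roosevelt share, number of replies)."""
    roosevelt_replies = 0
    total_replies = 0
    for _ in range(mailed):
        is_landon = random.random() < landon_share_of_frame
        reply_rate = landon_reply_rate if is_landon else roosevelt_reply_rate
        if random.random() < reply_rate:
            total_replies += 1
            roosevelt_replies += not is_landon
    return roosevelt_replies / total_replies, total_replies

# Digest-style: the mailing list over-represents Landon voters (55% vs a true 38%),
# and Landon voters are half again as likely to send their ballot back.
share, n = poll(0.55, 0.30, 0.20, 1_000_000)
print(f"Biased poll:         Roosevelt {share:.1%} on {n:,} replies")

# Gallup-style: frame matches the electorate, equal reply rates, far fewer people.
share, n = poll(0.38, 0.25, 0.25, 40_000)
print(f"Representative poll: Roosevelt {share:.1%} on {n:,} replies")
```

The huge poll puts Roosevelt well under 50% – wrong in exactly the Digest’s direction – while the small representative one lands close to the true 62%. Extra sample size never repairs a biased frame.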

Ultimately, it’s not the size that matters in determining whether a poll is any good; it’s whether it’s representative. Of course, a large representative poll is better than a small representative poll (though it is a case of diminishing returns), but representativeness is a prerequisite for it being of any use at all.

So next time you see some open-access poll shouting about having tens of thousands of responses and are tempted to think “Well, it may not be that representative, but it’s got a squillion billion replies so it must mean something, mustn’t it?” – don’t. If you want something you can use to draw conclusions about the wider population, it really is whether it reflects that population that counts. Size alone won’t cut it.


* You see different sample sizes quoted for Gallup’s 1936 poll – I’ve seen people cite 50,000, or just 3,000. The final America Speaks column before the 1936 election doesn’t include the number of responses he got (though it does mention he sent out about 300,000 mail-out surveys to get them). However, the week after (8th Nov 1936) the Boston Globe carried an interview with the organisation going through the details of how they did it, which says they aimed at 40,000 responses.
** If you are wondering why the headline in that thumbnail says 54% when I’ve said Gallup called the final share as 55.7%, it’s because the polls were sometimes quoted as share of the vote for all candidates, sometimes for share of the vote for just the main two parties. I’ve quoted both polls as “share of the main party vote” to keep things consistent.


475 Responses to “Size alone is not enough – the tale of the Literary Digest”

  1. A quick note about the Trinity Mirror survey that inspired this post, but which I don’t have enough info on to draw a firm conclusion.

    As readers may have seen, Mirror Group newspapers had a survey of 44,000 people yesterday about Brexit. The methodology information provided is somewhat unclear… but on either interpretation it would be of dubious worth. According to the Mirror’s write-up it was a Google Survey of people visiting Mirror Group websites. According to one write-up, Google Surveys uses age, gender and location to build a representative sample, but Jason Beattie’s commentary on the poll caveated it by saying “it is not a poll which are weighted by age, geography and class and use previous polls as benchmarks”.

    There are two possibilities here. One is that it was just a newspaper website poll done using Google’s software. In that case, it’s just a voodoo poll done on a particular software platform and can be safely ignored.

    The other possibility is that it was a UK equivalent of the Google Consumer Surveys in the US. These are interesting. Google do only weight by age, gender and location, but nevertheless their track record is pretty good. In the US election they had Clinton 2 points ahead in their final national poll, so got it bang on. However, the reason such a lightly weighted poll is accurate is probably because of Google’s sheer ubiquity – everyone uses Google, so drawing a sample from Google users probably gives them a damn good sample. Drawing a sample just from readers of Mirror Group newspapers… perhaps less so.

    For that reason I’d be extremely cautious and keep on watching Brexit questions in regular polls… which continue to show no obvious sign of any Bregret.

  2. Many thanks, Anthony.

    Would be worth pinning this somewhere on the home page for easy reference to the uninitiated.

  3. Thank you AW, excellent topic.

    BTW: Do pollsters analyse the raw data from other pollsters using their own methodology, to sanity-check data collection techniques?

    [No, we don’t often have any access to the raw data from other pollsters. In the aftermath of the 2015 election errors some companies did release raw individual level data so we got to compare and contrast, but as a general rule it’s rarely available – AW]

  4. Great article AW.

    Leading up to the 2015 GE, Ashcroft conducted quite a lot of polling in individual constituencies. Some of his polls were quite close to the actual results, but in the Scottish seats he polled he was quite a bit off the mark, overstating Labour and understating the SNP.

    Each of his individual constituency polls would have had a representative sample, but he still got it wrong in quite a few of the seats polled, despite 1,200+ being quite a large sample for a single seat and despite using all the usual polling methodology.

    Big or small representative polls…..polling peeps can still get it wrong.

  5. On the Mirror poll…

    “For that reason I’d be extremely cautious and keep on watching Brexit questions in regular polls… which continue to show no obvious sign of any Bregret.”
    _________

    Yeah, I think most of us dismissed its authenticity instantly.

  6. I think we can all accept that having a representative sample is the foundation of credible polling.

    But the point AW doesn’t address is: representative of what?

    It’s no good having a perfect representation of the electorate, because anything up to 50% of the electorate won’t be voting.

    Ideally you’d have a sample composed entirely of people who are going to vote. In practice that’s not possible, so you have to weight by likelihood to vote. But that depends on self-assessed likelihood. And motivation depends on myriad factors: the nature of the election or referendum (Brexit brought out lots of ‘unlikely’ voters, probably leading the polls to underestimate the leave vote), the weather on the day…

    The sample base of an internet polling company – say YouGov – is a self-selected group of people willing to spend time completing boring questionnaires for negligible reward. Clearly not representative, so the panel for any given survey has to be pushed and pulled into shape by weighting by age, sex, income and presumably lots of other criteria. The political questions are usually tacked onto the end of a tedious commercial poll on TV channels, coffee machines, takeaways or pensions. The people who are interested enough to wade through an interminable questionnaire on takeaways or electronic devices are probably not the same as those who stick with a questionnaire on pensions.

    So after all that, is a ‘proper’ YouGov poll necessarily more representative than one based on Mirror readership? The representativeness of both depends largely on the efficacy of the weighting and statistical manipulation – which is pretty much an unknown. That, after all, is surely why one current poll can show UKIP on 6% and another on 16% (or whatever).

    In short, I think promoting YouGov as the gold standard and dismissing the Mirror/Google approach as voodoo risks a visitation of – to use one of AW’s favourite words – hubris.

  7. “The people who are interested enough to wade through an interminable questionnaire on takeaways or electronic devices are probably not the same as those who stick with a questionnaire on pensions.”

    ———

    I dunno, I think I could prolly find it ok if it was a survey on synths.

  8. Not entirely sure about the “size won’t cut it” thing.

    I mean, for example, surely it would cut it if they polled everyone, or nearly everyone…

  9. @ carfrew
    “Not entirely sure about the ‘size won’t cut it’ thing.”

    I mean, for example, surely it would cut it if they polled everyone, or nearly everyone…

    Ah you are forgetting the shy Tory/Kipper/Labour/Green/LD voters who wouldn’t tell the truth: so some statistical weighting method would have to be applied >:-0

  10. @WB

    well yes, I suppose if one were using polling to try and predict the outcome, heaven forfend, you’d have that issue. Although you could always buy some of Howard’s hats/wipes/bins etc.

    However, as an explanatory thing after the event to determine how many shysters etc., it would still be useful…

  11. @ Carfrew

    Shysters! Hm what a useful double meaning.

  12. @AW

    okies. (tbh honest thrashing out the ideology thing can save a lot of hassle in future, but to be fair the ‘beverage’ thing probably not!!)

  13. @Graham

    If you want a poll of polls that claims to weight YouGov equally with other, less frequent polling companies, you can try BritainElects (http://britainelects.com/). It is not entirely clear what they do, but I imagine they average a series of YouGov polls and then treat them as one. It is a seven-point moving average which, with that methodology, could stretch back into 2016 (it was last updated on Feb 10th). If change is in fact still occurring in the direction it has since the Labour peak in late April, the Labour average of 27.9% on Feb 10th is likely to be an overestimate.

    Change in Labour vote from Sept 5th to Feb 10th is minus 1.9%. Change in Lib Dem vote over the same period is +1.7%. If you pick the most favourable late summer starting date (Aug 28th) you can get the fall in Labour vote down to 1.5%. If you pick a starting date in the Labour trough of Nov 4th you can imagine the Labour vote has stabilised, but that is certainly not what a regression line through the graph would show

    You may not like YouGov but there is absolutely no objective reason to exclude it…

  14. AW

    Here is an article I would like to see you write:

    “Is accurate constituency polling possible?”

    My own feeling is that leaving aside the Ashcroft questions that clearly favoured the Lib Dems, (for reasons that are still not very apparent to me – why would people suddenly lie when asked how they would vote in their constituency??), the demographic corrections based on national data may not be accurate on a constituency basis…

  15. well the problem with accuracy is that summat new could have arisen to skew things. Hard to be sure till after an election. Whereupon it may change again.

    Polling can be akin to shutting stable doors after the event. Or tail-chasing…

  16. Meanwhile, on the question of the article and the Mirror “poll”..

    Clearly this is not an opinion poll… it is a survey..

    However, to make it wrong you would have to assume that people who voted Leave and have now switched to Remain are more likely to take part in such a poll than people who voted Remain and have now switched to Leave.. Perhaps that is true.. I am sure people on here could come up with a reason for it… And others could find reasons why it may underestimate the switch..

    It is interesting that quite a lot of people are changing in both directions, suggesting the argument is far from over. Again it could be that such a survey attracts people who have changed their minds…

  17. We do know if the Ashcroft constituency polls were wrong when they were conducted just that the GE result was different.

    There is evidence that, faced with the prospect of a rainbow coalition with the SNP tail wagging the Labour and LD dogs, people who had genuinely told Ashcroft’s pollsters in 2014 that they would vote for the local sitting LD MP switched to Conservative in the run-up to the 2015 GE. Certainly this was a large part of the Tory campaign in such seats.

    You could argue that Ashcroft achieved his aim of furthering a Cons Government in that his polls focused CCO’s mind about the need to develop a narrative to woo key voters in such seats?

  18. Aaarrgghh!

    We do NOT know if the Ashcroft constituency polls were wrong when they were conducted just that the GE result was different

  19. “You could argue that Ashcroft achieved his aim of furthering a Cons Government in that his polls focused CCO’s mind about the need to develop a narrative to woo key voters in such seats?”

    ———–

    they could have furthered the aim of selling more of Howard’s hats…

  20. Don’t see why the Chocolate hats can’t be flavoured, Strawberry, Banana, Honeycombe maybe, not coffee though as no-one would buy (always left in the box of chocolates last)

    When Howard’s business is floated I will claim my share.

  21. http://www.ediblehats.co.uk

    Howard’s business is already up and running: that’s him in the photo…

  22. Some of the Ashcroft polls like Eastbourne were conducted in 2014 and you could argue that things changed.

    Others showed quite stable intentions across several polls (sometimes right up to April 2015), but the results were very different..

    I was particularly surprised by Watford, where the three Ashcroft polls in 2014 showed the Lib Dems gaining ground on the Tories and in a close second, but in the end Dorothy Thornhill came third with 18%, despite having been elected as Mayor by large majorities more than once, including during the coalition years when the Lib Dems were on 8%… It seemed like opinion in that constituency had actually been tested with the Lib Dems in the doldrums, unlike all the others…

  23. “You could argue that Ashcroft achieved his aim of furthering a Cons Government in that his polls focused CCO’s mind about the need to develop a narrative to woo key voters in such seats?”

    This is to completely misunderstand what Ashcroft was doing. Opinion polling was only the surface activity. On top of the polls that were published, he was testing out how different wordings and formulations of policy played with target floating voters. These were then used for targeted campaigning in these and other marginal seats, as well as shaping the national campaign.

    It was a large-scale covert focus-group exercise, designed to enable the Tories to hone their message for maximum impact. Far from merely “focussing CCO’s mind about the need to develop a narrative to woo key voters in such seats”, it was performed for that precise purpose.

  24. @Millie

    Millie that’s an excellent find. That could easily be Howard in the photo. Not only does the hat harbour the flora and fauna of his travels, and is almost an allotment in itself, but that could easily be a game of cricket going on in the background. It all fits…

  25. “not coffee though as no-one would buy (always left in the box of chocolates last)”

    ———

    I like the coffee ones!!!…

  26. Well it would be interesting to see a Stoke Central poll today.

    In a normal world, Paul Nuttall would be in trouble. I think he certainly would be if he was standing in Liverpool, but it begins to look like he is little more than a serial l!ar.

    It was previously known that his LinkedIn page implied he had a PhD; when challenged on this he denied he had put the page up, but the reference was mysteriously edited out thereafter.

    He has also made numerous claims about being at Hillsborough, despite no one who knew him at the time actually thinking he was there. Today he has admitted that a claim on his webpage that he lost close personal friends in the disaster was false. After first claiming he never said such a thing, he then changed tack and said he didn’t know who had put the claims onto his personal website after they were read out to him.

    Meanwhile, the organisations and people who have campaigned long and hard on the Hillsborough case remain somewhat aghast that a serving politician who was apparently present right in the centre of the dreadful events hasn’t ever spoken out, appeared in support, or generally lifted a finger to help in all these years.

    It all seems pretty contemptible, and could well be another example of fake news coming home to roost. Whether Labour is so weakened that it can’t take advantage is another matter, but there have been non-Labour voices in Stoke suggesting UKIP is insufficiently organised to win.

  27. NC Politics tweets:

    Panelbase/Wings Scotland (Scot locals):

    SNP 47 (+15)
    CON 26 (+13)
    LAB 14 (-17)
    LD 5 (-2)
    UKIP 3 (+3)
    GRN 4 (+2)
    OTH 1 (-11)

    Chg vs 2012

    Those are dramatic swings, if accurate. Do any of our Scots friends here have a view on the state of play, or on Panelbase/Wings’ accuracy?

  28. Somerjohn
    “Ideally you’d have a sample composed entirely of people who are going to vote. In practice that’s not possible, so you have to weight by likelihood to vote. But that depends on self-assessed likelihood.”

    I wonder whether there would be any point in using actual turnout to calculate likelihood to vote? e.g. youngsters’ votes could be weighted down compared to old folks’.

    I suppose one snag is that every survey would have to ask questions about age, educational attainment and so on, and people might lie about those.
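That turnout-weighting idea can be sketched minimally in Python – the turnout rates and age bands below are invented purely for illustration, not real figures. Each respondent gets a weight proportional to the assumed historical turnout of their age band, so a young respondent counts for less than an old one:

```python
# Hypothetical historical turnout rates by age band (illustrative numbers only).
TURNOUT = {"18-24": 0.43, "25-49": 0.62, "50-64": 0.74, "65+": 0.81}

def turnout_weights(respondents):
    """Weight each respondent by their age band's assumed turnout rate."""
    return [TURNOUT[r["age_band"]] for r in respondents]

sample = [{"age_band": "18-24", "vote": "A"},
          {"age_band": "65+", "vote": "B"}]
print(turnout_weights(sample))  # [0.43, 0.81] – the older voter counts nearly twice as much
```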

  29. Alec

    While the UKIP leader is made of alternative facts, the Labour candidate also has an interesting (awful, really) Twitter history.

  30. @Alec

    His school has strong roots in the local community and links to LFC – Jamie Carragher went there. They say they identified all their students who were at Hillsborough and ensured they all got counselling. Nuttall was not identified as having been there. Still, you know what Catholic priests are like (it was a Catholic school), so I’m sure it’s them and not Nuttall at fault.

    He’s also claimed to be an LFC season ticket holder. If you read any LFC fans forum, let’s just say that they all take a keen interest in this story, nobody recalls seeing him at Anfield – or on aways for that matter – and there are a great many Reds who would love to have a nice chat with him to put these allegations to bed.

    Given that noted Everton fan Andy Burnham has been happy to talk to LFC fan groups over Hillsborough, I’m sure Nuttall will be happy to chat with his fellow Reds on the matter and we can all put this behind us.

  31. The reliability of constituency polling is an interesting one on many levels.

    The population is obviously smaller, so the sampling has more noise (the confidence intervals would probably be less reliable). On the other hand, back in 1959 the Kennedy campaign experimented with micro-polling with great success (in a similar way to what @Robin described of Ashcroft above).

  32. Alec and Chris Riley

    The close friend he had earlier claimed to have lost at Hillsborough turns out not to have been close at all, only somebody he saw on the street. At least one of his Hillsborough stories must be an alternative fact, but maybe both. He is also damaging the Harris Tweed brand.

    But, these are things I would have expected from him. The lack of vetting of the Labour candidate is more problematic for me.

  33. Gallup’s tracking poll shows Trump’s approval rating continuing to decline.

    And today it seems we are into “what did the President know and when did he know it?” over the Flynn resignation. It appears that the acting Attorney General Sally Yates (whom Trump later fired) and a senior intelligence officer advised the Trump White House soon after the inauguration that Flynn had been compromised by his telephone conversation with the Russian ambassador and was susceptible to blackmail. It was only when this leaked that Flynn went, raising huge questions about why the WH allowed him to stay in post.

    Fascinating article in the Washington Post about the 10 unanswered questions:

    https://www.washingtonpost.com/news/powerpost/paloma/daily-202/2017/02/14/daily-202-10-unanswered-questions-after-michael-flynn-s-resignation/58a25127e9b69b1406c75cb0/?tid=sm_tw&utm_term=.0e546993e4ce

  34. The disgusting twisting of reality re Hillsborough in Stoke is shameful, but given even people on here are happy to believe it, seems like it’s going to work. There are so many things to criticise Nutall for, it’s sick that they’re bringing this stuff into it. It seems things that are beyond the pale one minute are suddenly endorsed by all when they can be made anti-UKIP.

    I really hope people, by which I mean everyone, yes including pseudonyms on this website, think some more about what they’re doing and apologise. I foresee arguments about 7/7 in 30 years, and it’s just plain sick. This piece of spin is foul.

  35. @Saffer

    I’ll start with the rider that I know next to nothing about Scottish politics, but..

    I think the swings only look dramatic because they are between two polls five years apart.

    Taking a step back and looking at the relative fortunes of SNP, Labour and Tory over the long view is quite breathtaking.

    However, the poll looks about par for the course for recent polls and crossbreaks. Maybe a bit flattering for the Tories, and I wonder if the Scottish LDs might actually get some of the fairy dust of their Southern Cousins sometime soon.

  36. Good evening all from central London.

    MILLIE

    That’s some hat Howard has. If any of his future electoral predictions are wrong then he can certainly do some major foraging in that hat. ;-)

  37. @Wood

    Disgust works both ways.

    To lie about Hillsborough for political advantage would be grotesque, whoever does it.

  38. If, however, a polling company conducts two polls using identical methods, but one has 1,000 participants and the other 2,000, the margin of error on the larger poll would still be lower.
    I think that many people tend to lump ‘the polls’ and ‘pollsters’ together, hence at any point in time ‘They’ get it right or ‘They’ get it wrong. Then it starts to make sense to think that the polls with the larger samples might be a bit more accurate than the smaller ones… just as sub-prime mortgages get safer and safer the more of them you buy. :)
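For what it’s worth, the textbook 95% margin-of-error formula (which assumes a simple random sample – something real polls only approximate) bears out the 1,000 vs 2,000 comparison:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for an estimated share p from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (1000, 2000):
    print(f"n={n}: +/- {margin_of_error(n):.1%}")
# n=1000: +/- 3.1%
# n=2000: +/- 2.2%
```

Note the diminishing returns: doubling the sample only shrinks the margin by a factor of the square root of 2.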

  39. @David Colby

    It’s obviously true that a larger sample size reduces the margin of error, but of course even that doesn’t mean that a bigger poll is more accurate than a smaller one.

    If a poll with a margin of error of +/- 2 gives the Tories a 10% lead over Labour (in effect a lead between 6 and 14%) and a poll with a margin of error of +/- 3 gives the Tories a 6% lead over Labour (in effect a lead between 0% and 12%) there is a temptation to assume that the “bigger” poll is “better” and that the Tories are well ahead, whereas in fact they may well both be accurate with a true lead around the bottom end of the bigger poll’s MOE.

    Instinctively people cling to the polls that appear to “look better” for the party or cause they support, and therefore pick on any factor that differentiates the “good” polls from the “bad” ones. I think that’s the tendency AW’s tilting at really.

    In reality the sample size is almost always going to be way down the list of relevant factors that determine how accurate a poll is.
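The overlapping-intervals point is easy to check numerically – a quick sketch using the leads and margins of error quoted above (remembering that a margin of error on each party’s share can move the *lead* by up to twice that amount):

```python
def lead_interval(lead, moe):
    """Range of true leads consistent with a reported lead, given a per-share margin of error."""
    # A +/- moe swing on each of the two parties' shares moves the lead by up to 2*moe.
    return (lead - 2 * moe, lead + 2 * moe)

big_poll = lead_interval(10, 2)   # (6, 14)
small_poll = lead_interval(6, 3)  # (0, 12)

overlap = (max(big_poll[0], small_poll[0]), min(big_poll[1], small_poll[1]))
print(overlap)  # (6, 12): both polls are consistent with a true lead of 6-12 points
```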

  40. SAFFER
    NC Politics tweets:
    Panelbase/Wings Scotland (Scot locals):
    SNP 47 (+15)
    CON 26 (+13)
    LAB 14 (-17)
    LD 5 (-2)
    UKIP 3 (+3)
    GRN 4 (+2)
    OTH 1 (-11)
    Chg vs 2012
    Those are dramatic swings, if accurate. Do any of our Scots friends here have a view on the state of play, or on PanelbaseWings accuracy?
    ___________

    As an ex-Scottish peep I think I can comment on this. Not sure how accurate the Panelbase/Wings poll is, but it’s pretty much in line with all the Scottish sub-samples in the national polls.

    If this poll reflects how Scots actually vote in May’s locals then it really would mean the end for Scottish Labour. Even after the 2011 Scottish GE, where the SNP won their first landslide, Labour still won big in the subsequent local elections.

    Currently, the SNP have the most councillors but Labour controls more councils outright and via coalitions thanks to the Tories.
    Based on this poll Labour would almost certainly lose Glasgow, North Lanarkshire, South Lanarkshire and West Dunbartonshire which have been Labour since the first brick was laid for the Great Wall of China.

    My feelings are Labour will be wiped out in every council in West Central Scotland, Aberdeen and Stirling and in Edinburgh where they are in coalition with the SNP, the SNP will form a minority administration.
    The SNP don’t do deals with the Tories in local gov (so they say) so after the elections, we may see a lot of minority controlled councils.

    The big elephant in the room, of course, is the electoral system …STV so any polling might be of little help as to what happens on the night.

  41. I am slightly perturbed by that big drop in “other”.

    Does that represent independents? And if so are opinion polls accurately capturing the support for what are by definition local candidates?

  42. Back to the Ashcroft constituency and national polls. I think that at the time they were accurate and people were not lying about their voting intentions, but when it came to the election people had a right good old look at what was on offer and a good number switched direction at the last minute.

    It’s easier to predict landslides but almost impossible to predict a late switch when the polls had both the main parties neck and neck for months.

  43. NEIL A

    It will be mostly independents who form the second or third biggest block in terms of elected councillors, but they are mostly confined to the large rural councils like the Western Isles, Orkney & Shetland, Argyll and Bute, Highland and Moray. A big drop in independents won’t have much impact in the central belt.

  44. @Wood – not quite sure what your point is?

    Is it Nuttall’s behaviour that you find so distasteful, or the fact that it is being reported on? Or do you believe the facts as reported aren’t true?

  45. I know I’ll get pasted for saying this, and clearly it’s obvious that Labour’s problems in Scotland have a much longer genesis, but didn’t Corbyn set one of his objectives as recovery in Scotland?

    Now, they are about half the Con score, let alone recovering vis a vis the SNP.

    For politicians, one day, responsibility comes knocking, and it’s a bit like death – it might not be your fault, but you are the one it’s looking for.

  46. “I think we can all accept that having a representative sample is the foundation of credible polling. But the point AW doesn’t address is: representative of what? It’s no good having a perfect representation of the electorate, because anything up to 50% of the electorate won’t be voting. Ideally you’d have a sample composed entirely of people who are going to vote. In practice that’s not possible, so you have to weight by likelihood to vote. But that depends on self-assessed likelihood. And motivation depends on myriad factors: the nature of the election or referendum (Brexit brought out lots of ‘unlikely’ voters, probably leading the polls to underestimate the leave vote), the weather on the day…”

    As I have commented several times before, it depends on whether you are trying to find out what people think about an issue – often – indeed usually – the point of polling. Or whether you are trying to predict the result of an election, which is a very different and rather specialist thing.

  47. Robin – your 3.38,
    Perhaps in my naivety I had not considered that level of complicity, but I guess your premise supports my notion that we can’t say the Ashcroft constituency polls were wrong, as they may have been correct at the time they were taken.

  48. Carfrew & Millie

    As it happens I was wearing my biological recorder’s hat all day, as I was out recording.

  49. @JimJam

    Indeed, I come across the same problem in my own social research field (where sample lead times can be rather longer).

    It is possible that public opinion is currently so volatile, and events move so quickly, that a poll could be completely accurate at the time it was taken and yet perfectly wrong in predicting the future election it is polling for.

  50. @ Alec

    Mr Nuttall said he was 12 at the time of the Hillsborough disaster. He said he went with his Dad. Also said he had known one of the victims but had not been close friends with any.
