General election campaigns provoke a lot of attention and criticism of opinion polls. Some of that is sensible and well-informed… and some of it is not. This is about the latter – a response to some of the more common criticisms that I see on social media. Polling methodology is not necessarily easy to understand and, given that many people only take an interest in it around election time, most people have no good reason to know much about it. This will hopefully address some of the more common misapprehensions (or, in those cases where they aren’t entirely wrong, add some useful context).

This Twitter poll has 20,000 responses, TEN TIMES BIGGER than so-called professional polls!

Criticisms about sample size are the oldest and most persistent of polling criticisms. This is unsurprising, given that it is rather counter-intuitive that only 1,000 interviews should be enough to get a good steer on what 40,000,000 people think. The response that George Gallup, the founding father of modern polling, used to give is still a good one: “You don’t need to eat a whole bowl of soup to know if it’s too salty, providing it’s properly stirred a single spoonful will suffice.”

The thing that makes a poll meaningful isn’t so much the sample size, it is whether or not it is representative. That is, does it have the right proportions of men and women, old and young, rich and poor, and so on? If it is representative of the wider population in all those ways, then one hopes it will also be representative in terms of opinion. If not, then it won’t be. If you took a sample of 100,000 middle-class homeowners in Surrey, it would be overwhelmingly Tory, regardless of the large sample size. If you took a sample of 100,000 working-class people on Merseyside, it would be overwhelmingly Labour, regardless of the large sample size. What counts is not the size, it is whether it is representative. The classic example of this is the 1936 Presidential Election, where Gallup made his name – correctly predicting the election using a representative sample when the Literary Digest’s sample of 2.4 million(!) called it wrongly.
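To put a rough number on it: the statistical margin of error depends on the size of the sample, not the size of the population (for 1,000 interviews it is roughly plus or minus 3 points), while a skewed sample stays skewed however big it gets. A toy simulation makes the point – this is illustrative Python with entirely made-up figures, not anything a pollster actually runs:

```python
import random

random.seed(42)

TRUE_BLUE_SHARE = 0.43  # hypothetical "true" support for party Blue

def poll(n, blue_bias=1.0):
    """Simulate polling n people. blue_bias > 1 means Blue voters are
    that many times more likely to end up in the sample."""
    p = (TRUE_BLUE_SHARE * blue_bias) / (
        TRUE_BLUE_SHARE * blue_bias + (1 - TRUE_BLUE_SHARE))
    blue = sum(random.random() < p for _ in range(n))
    return blue / n

# A representative sample of 1,000 lands within a few points of 43%...
print(f"n=1,000, representative: {poll(1_000):.1%}")
# ...while 100,000 interviews that over-recruit Blue voters miss badly.
print(f"n=100,000, skewed:       {poll(100_000, blue_bias=2.0):.1%}")
```

With a two-fold skew towards one party, the 100,000-person “poll” reports around 60% support for a party whose true support is 43%, while the representative 1,000 lands within a couple of points of the truth.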

Professional polling companies will sample and weight polls to ensure they are representative. However well intentioned, Twitter polls will not be (indeed, there is no way of doing so, and no way of measuring the demographics of those who have participated).

Who are these pollsters talking to? Everyone I know is voting for party X!

Political support is not evenly distributed across the country. If you live in Liverpool Walton, then the overwhelming majority of other people in your area will be Labour voters. If you live in Christchurch, then the overwhelming majority of your neighbours will likely be Tory. This is further entrenched by our tendency to be friends with people like us – most of your friends will probably be of a roughly similar age and background and, very likely, have similar outlooks and things in common with you, so they are probably more likely to share your political views (plus, unless you make pretty odd conversation with people, you probably don’t know how everyone you know will vote).

An opinion poll will have sought to include a representative sample of people from all parts of the country, with a demographic make-up that matches the country as a whole. Your friendship group probably doesn’t look like that. Besides, unless you think that literally *everyone* is voting for party X, you need to accept that there probably are voters of the other parties out there. You’re just not friends with them.

Polls are done on landlines so don’t include young people

I am not sure why this criticism has resurfaced, but I’ve seen it several times over recent weeks, often widely retweeted. These days the overwhelming majority of opinion polls in Britain are conducted online rather than by telephone. The only companies who regularly conduct GB voting intention polls by phone are Ipsos MORI and Survation. Both of them conduct a large proportion of their interviews using mobile phones.

Polls of single constituencies are still normally conducted by telephone but, again, a large proportion of those calls are made to mobile phones. I don’t think anyone has done a voting intention poll on landlines only for well over a decade.

Who takes part in these polls? No one has ever asked me

For the reason above, your chances of being invited to take part in a telephone poll that asks about voting intention are vanishingly small. You could be waiting many, many years for your phone number to be randomly dialled. If you are the sort of person who doesn’t pick up unknown numbers, they’ll never be able to reach you.

Most polls these days are conducted using internet panels (that is, panels of people who have given pollsters permission to email them and ask them to take part in opinion polls). Some companies, like YouGov and Kantar, have their own panels; other companies may buy in sample from providers like Dynata or Toluna. If you are a member of such panels you’ll inevitably be invited to take part in opinion polls. Though of course, remember that the vast majority of surveys tend to be stuff about consumer brands and so on… politics is only a tiny part of the market research world.

The polls only show a lead because pollsters are “Weighting” them, you should look at the raw figures

Weighting is a standard part of polling that everyone does. Standard weighting by demographics is unobjectionable – but it is sometimes presented as something suspicious or dodgy. At this election, this has sometimes been because it has been confused with how pollsters account for turnout, a more controversial and complicated issue that I’ll return to below.

To deal with ordinary demographic weighting first though: this is just a way of ensuring that the sample is representative. So, for example – we know that the adult British population is about 51% female, 49% male. If the raw sample a poll obtained was 48% female and 52% male, then it would have too many men and too few women, and weighting would be used to correct it. Every female respondent would be given a weight of 1.06 (that is, 51/48) and would count as 1.06 of a person in the final results. Every male respondent would be given a weight of 0.94 (that is, 49/52) and would count as 0.94 of a person in the final results. Once weighted, the sample would be 51% female and 49% male.
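For anyone who wants the sums spelt out, here is the same arithmetic in a few lines of Python (purely an illustration of the example above):

```python
# Known population targets and the raw (unweighted) sample, as proportions.
TARGET = {"female": 0.51, "male": 0.49}
RAW    = {"female": 0.48, "male": 0.52}

# Each respondent's weight is simply target share / raw sample share.
weights = {g: TARGET[g] / RAW[g] for g in TARGET}
print(weights)   # {'female': 1.0625, 'male': 0.9423...} -- the 1.06 and 0.94 above

# Check: applying the weights makes the sample match the population.
weighted = {g: RAW[g] * weights[g] for g in RAW}
print(weighted)  # {'female': 0.51, 'male': 0.49}
```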

Actual weighting is more complicated than this, because samples are weighted by multiple factors – age, gender, region, social class, education, past vote and so on. The principle, however, is the same – it is just a way of correcting a sample that has the wrong proportions of people compared to the known demographics of the British population.
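Because the targets for different factors interact, weighting by several of them at once cannot be done with simple one-off ratios. The usual approach (often called rim weighting or raking) is iterative: adjust the weights to hit one set of targets, then the next, and repeat until all of them hold at once. Here is a simplified sketch with an invented six-person sample and invented targets – real implementations are more sophisticated, but the principle is the same:

```python
# Invented mini-sample: (gender, region) for each respondent.
respondents = [("female", "north"), ("female", "south"), ("female", "south"),
               ("male", "north"), ("male", "north"), ("male", "south")]

targets = {0: {"female": 0.51, "male": 0.49},   # tuple slot 0 = gender
           1: {"north": 0.45, "south": 0.55}}   # tuple slot 1 = region

weights = [1.0] * len(respondents)

for _ in range(50):                              # iterate until the weights settle
    for slot, target in targets.items():
        total = sum(weights)
        # Scaling factor per category: target share / current weighted share.
        factors = {cat: share * total / sum(w for w, r in zip(weights, respondents)
                                            if r[slot] == cat)
                   for cat, share in target.items()}
        weights = [w * factors[r[slot]] for w, r in zip(weights, respondents)]

print([round(w, 2) for w in weights])            # final weight for each respondent
```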

Polls assume young people won’t vote

This is a far more understandable criticism, but one that is probably wrong.

It’s understandable because it is part of what went wrong with the polls in 2017. Many polling companies adopted new turnout models that did indeed make assumptions about whether people would vote or not based upon their age. While it wasn’t the case across the board, in 2017 companies like ComRes, ICM and MORI did assume that young people were less likely to vote and weighted them down. The way they did this contributed to those polls understating Labour support (I’ve written about it in more depth here).

Naturally, people looking to explain the differences between polls this time round have jumped on this as a possible cause. This is where they go wrong. Almost all the companies who were using age-based turnout models dropped them straight after the 2017 election and went back to basing their turnout models primarily on how likely respondents say they are to vote. Put simply, polls are not making assumptions about whether different age groups will vote or not – any differences in likelihood to vote between age groups will be down to people in some age groups telling pollsters they are less likely to vote than people in other age groups.
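To make the distinction concrete, a turnout model of that sort does something like the following: scale each respondent’s weight by their own stated likelihood of voting (most British pollsters ask for this on a 0–10 scale), with no reference to their age at all. A minimal sketch with invented respondents:

```python
# Invented respondents: (party, demographic weight, stated 0-10 likelihood to vote).
respondents = [("Lab", 1.0, 10), ("Lab", 1.0, 5), ("Lab", 1.0, 8),
               ("Con", 1.0, 10), ("Con", 1.0, 9), ("Con", 1.0, 10)]

# Turnout weighting: scale each weight by stated likelihood / 10.
totals = {}
for party, weight, likelihood in respondents:
    totals[party] = totals.get(party, 0.0) + weight * likelihood / 10

grand_total = sum(totals.values())
print({party: f"{t / grand_total:.0%}" for party, t in totals.items()})
# Any gap between the parties here comes only from what respondents themselves said.
```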

The main exception to this is Kantar, who do still include age in their turnout model, so can fairly be said to be assuming that young people are less likely to vote than old people. They kept the method because, for them, it worked well (they were one of the more accurate companies at the 2017 election).

Some of the criticism of Kantar’s turnout model (and of the relative turnout levels in other companies’ polls) is based on comparing the implied turnout in their polls with turnout estimates published straight after the 2017 election, which were themselves based on polls done during the 2017 campaign. Compared to those figures, the turnout for young people may look a bit low. However, there are much better estimates of 2017 turnout from the British Election Study, which has validated turnout data (that is, rather than just asking people if they voted, they look their respondents up on the marked electoral register and see if they actually voted) – these figures are available here, and this is the data Kantar uses in their model. Compared to these figures, the levels of turnout in Kantar’s and other companies’ polls look perfectly reasonable.

Pollster X is biased!

Another extremely common criticism. It is true that some pollsters show figures that are consistently better or worse for a particular party. These are known as “house effects” and can be explained by methodological differences (such as what weights they use, or how they deal with turnout), rather than some sort of bias. It is in the strong commercial interests of all polling companies to be as accurate as possible, so it would be self-defeating for them to be biased.
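House effects are measurable rather than mysterious: compare each company’s figures with the average of all polls conducted over the same period, and a methodological lean shows up as a consistent deviation. A toy illustration with invented poll leads:

```python
# Invented leads (in points) from polls conducted over the same period.
polls = {"Pollster A": [10, 12, 11, 13],
         "Pollster B": [7, 8, 6, 8],
         "Pollster C": [9, 10, 9, 11]}

# The overall average lead across every poll in the period...
overall = (sum(sum(leads) for leads in polls.values())
           / sum(len(leads) for leads in polls.values()))

# ...and each company's stable deviation from it: its "house effect".
for name, leads in polls.items():
    print(f"{name}: {sum(leads) / len(leads) - overall:+.1f} points vs the average")
```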

The frequency of this criticism has always baffled me, given that, to anyone in the industry, it is quite absurd. The leading market research companies are large, multi-million pound corporations. Ipsos, YouGov and WPP (Kantar’s parent company) are publicly listed companies – they are owned largely by institutional shareholders, and the vast bulk of their profits come from non-political commercial research. They are not the personal playthings of their CEOs’ political whims, and the idea that people like Didier Truchot ring up their UK political team and ask them to shove a bit on the figures to make the party they support look better is tin-foil hat territory.

Market research companies sell themselves on their accuracy, not on telling people what they want to hear. Political polling is done as a shop window, a way of getting name recognition and (all being well) a reputation for solid, accurate research. They have extremely strong commercial and financial reasons to strive for accuracy, and pretty much nothing to be gained by being deliberately wrong.

Polls are always wrong

There is a grain of truth here: there have been several instances of the polls being wrong of late, though the record is somewhat overegged. The common perception is that the polls were wrong in 2015 (indeed, nearly all of them were), at the 2016 referendum (some were wrong, some were correct – but the media paid more attention to the wrong ones), at Donald Trump’s election (the national polls were actually correct, but some key state polls were wrong, so Trump’s victory in the electoral college wasn’t predicted), and at the 2017 election (most were wrong, a few were right).

You should not take polls as gospel. It is obviously possible for them to be wrong – recent history demonstrates that all too well. However, they are probably the best way we have of measuring public opinion, so if you want a steer on how Britain is likely to vote it would be foolish to dismiss them totally.

What I would advise against is assuming that they are likely to be wrong in the same direction as last time, or in the direction you would like them to be. As discussed above, the methods that caused the understatement of Labour support in 2017 have largely been abandoned, so the specific error that happened in 2017 is extremely unlikely to recur. That does not mean polls couldn’t be wrong in different ways, but it is worth considering that the vast majority of previous errors have been in the opposite direction: polls in the UK have tended to overstate Labour support. Do not assume that polls being wrong automatically means understating Labour.

