I’m writing a longer piece rounding up public opinion towards Brexit, but rather than let that get dominated by a big rant, I thought I’d write a separate piece about self-selecting newspaper polls apparently showing changing attitudes to Brexit. Several newspapers in the North East and the Black Country have had open-access “polls” on their websites that have had more responses supporting remain than leave. These have been widely picked up on social media by people who support Britain remaining in the EU, who have given them rather more weight than they probably would give to most open-access polls on newspaper websites.

I have spent a decade making the same point about self-selecting polls – the phone-in polls in papers, the press-the-red-button polls on TV, the click-here-to-vote boxes at the bottom of newspaper articles: they are bunkum. That venerable old sage Bob Worcester had been doing it for decades before me, coining the term “voodoo polls” to describe them.

However, they keep on coming back. This time it’s a little different because of the people doing it. I keep seeing otherwise intelligent and learned people on social media quoting them. Even people who recognise that they are not a robust way of measuring public opinion sometimes suggest they must mean “something”.

They really don’t.

To roll over the old arguments again…

1) Polls are only meaningful if they are representative, and self-selecting polls are not. To be meaningful, a sample needs to mirror whatever population it is trying to measure. If it has the same gender balance, age balance, class balance, etc, then it should have the same balance of opinion. Properly conducted polls ensure this is the case through sampling (normally using randomisation and/or demographic quotas) and weighting (adjusting the sample after fieldwork to ensure it does indeed have the right number of men and women, old and young, rich and poor). Self-selecting website and newspaper polls do not do this.

More specifically, if you look at the professionally conducted polls asking how people would vote in a referendum now, almost all of them are weighted to ensure they are representative in terms of how people voted last time. For example, look at the ComRes/CNN poll, the most recent poll to ask how people would vote in a referendum today. The first question ComRes asked was how people voted back in June; this ensured the sample was representative, that it matched the actual result and was not skewed towards people who voted Remain in June or people who voted Leave in June. This means we know ComRes’s sample accurately reflects the British population, and if it had shown people would vote differently now, we would have known there had been a genuine change in opinion. Without controls like this, without an attempt to make sure the sample is representative, it could simply be that a sample was full of people who voted Remain in June to begin with.
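
To make the mechanics concrete, here is a minimal sketch of how that sort of past-vote weighting works. The raw respondent counts below are invented purely for illustration (they are not ComRes’s actual data); the only real figure is the 52/48 Leave/Remain split among those who voted in June 2016.

```python
# A minimal sketch of weighting by recalled referendum vote. The raw sample
# counts are invented for illustration; the targets are the actual June 2016
# result among those who voted (Leave 52%, Remain 48%).

raw_sample = {"Leave": 430, "Remain": 570}   # hypothetical unweighted respondents
targets = {"Leave": 0.52, "Remain": 0.48}    # shares the weighted sample should match

total = sum(raw_sample.values())
weights = {group: targets[group] / (raw_sample[group] / total) for group in raw_sample}

print(weights)
# {'Leave': 1.209..., 'Remain': 0.842...}
# Leave voters are weighted up and Remain voters down, so the weighted sample
# no longer over-represents people who voted Remain in June.
```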

2) Getting a large number of responses does not make a poll meaningful. Self-selecting “polls” often trumpet a large sample size as a way of suggesting their data have some validity. This is false. If you conducted a poll entirely of, say, Guardian readers, then sheer numbers would never make it representative of the whole country. This was famously tested to destruction in 1936. Back then the best-known “pollster” was the Literary Digest magazine, which conducted mail-in surveys. They sent out some ten million surveys based on subscription directories, phone directories and so on, and received literally millions of responses. Based on those they confidently predicted that Alf Landon would win the 1936 Presidential election. George Gallup, the pioneer of modern polling methods, had a far smaller sample, but made it properly representative – as a result he accurately predicted Roosevelt’s landslide victory. It was the birth of modern polling but, alas, not the death of newspapers conducting voodoo polls.

3) Self-selecting polls reflect the views of those who are interested and have an axe to grind. Any poll where people can choose to take part, rather than the polling company controlling who is invited, will attract those people with strong views on the subject and under-represent those with only a limited interest (indeed, this is a problem that even proper polls are not immune to, given low response rates). It probably explains some of the apparent shift from the pre-referendum newspaper “polls” to the current ones – before the referendum it was angry Leave supporters with the motivation to take part in “polls” on obscure newspaper websites; now the Leave supporters have what they want, and it is the frustrated Remainers hammering away on website “polls”, grasping for evidence that public opinion has moved their way. To measure public opinion properly, you need to represent all the public, not just those who are the most fired up.

4) Self-selecting polls can be orchestrated and fixed. Open-access polls don’t usually have any limits on who can take part (do you actually live in the Black Country?) or any method of preventing multiple votes. They can never stop people sharing or distributing the link to like-minded people to encourage them to vote (indeed, given they often exist purely as clickbait, they are very much intended to be distributed in that way). This doesn’t have to be an organised attempt, so much as people sharing links on social media with a “vote here and show people that not everyone wants Brexit”. Again, properly controlled polls have measures preventing such manipulation or skews.

Almost no new information is entirely useless. If we lived in an information vacuum then this sort of thing would perhaps be an interesting straw in the wind, a pointer to something that might be worth proper investigation. We do not live in an information vacuum though – there have been numerous properly conducted opinion polls using properly representative samples over the last six months (I will write something on them later, but in the meantime John Curtice has collected them at the beginning of this article) and these have painted a different picture. Properly conducted polls in recent months have consistently shown very little net movement in whether the British public want to leave or remain in the EU.

There are only three obvious ways of resolving the conflict between the picture painted by professionally conducted national polls and the self-selecting website polls. One, that there has been a vast shift of opinion in favour of Remain in the North East and the Black Country, but it has been balanced out by a big shift in the other direction elsewhere in the country so at a national level it evens out. Two, that professionally conducted polls with representative samples (polls that even on their very worst days, still end up within a handful of percentage points of the actual result) have somehow completely missed a 45% swing in public attitudes to Brexit. Or three, that open-access polls on newspaper websites that let any old bugger vote multiple times without any attempt to get a sample that is politically or demographically representative are an utterly useless way of measuring opinion.

I know it’s the third explanation, and deep in their hearts, I think most of those people sharing them know it’s the third explanation too. The kindest advice I can give to those who would like Britain to remain in the EU is that they need to change public opinion, not grasp at voodoo polls to kid themselves that it already agrees with them.


Ipsos MORI have re-asked their questions on the junior doctors’ dispute ahead of the second strike today. The overall level of support remains the same, with two-thirds backing the strike, but underneath that opinions appear to be polarising. While the 66% of people supporting the strike is the same percentage as last month, within that the proportion saying “strongly support” has risen while the proportion saying “tend to support” has fallen. Among the other third of the population the proportion saying they don’t know or have no feelings either way has fallen (from 19% to 12%), while the proportion saying they oppose the strike has risen (from 15% to 22%).

Asked who is to blame for the dispute continuing this long, 64% blamed the government, 13% the doctors and 18% both equally. Full details of the poll are here, and my write-up of the January figures is here.

As well as the quality polling by MORI, there is also sadly a new outbreak of newspaper reporting of voodoo polls on the issue. The Indy and Mirror are reporting a “poll” apparently showing 90% of junior doctors would resign if the contract was imposed. We’ve already had one outbreak of voodoo polling in this dispute, that one claiming 70% of junior doctors would resign… which turned out to be a “survey” conducted among the members of a Facebook group campaigning against the contract. This time the two papers reporting it are very tight-lipped about where it was conducted, so I don’t know if it’s the same forum – the only clue is that it was organised by Dr Ben White, who is campaigning against the contract. From the Mirror’s write-up Dr White did at least ensure respondents were real doctors, but false or multiple responses are far from the only thing that stops voodoo polls being meaningful – it also matters where you do it, and whether you recruit respondents in a manner that gets a representative and unbiased sample. You would, for example, get a very different result on foxhunting in a survey conducted on a Countryside Alliance forum or a League Against Cruel Sports forum, even if you took measures to ensure all participants were genuine countryside dwellers.

Questions along the lines of “If the thing you oppose happens, will you do x?” are extremely dicey anyway – people pick the answers that will best express their anger and opposition (Dr White himself seems to take that perfectly sensible angle in his quote to the Mirror, presenting his findings as an expression of anger). To quote what I wrote last time…

From a respondent’s point of view, if you are filling in a survey about something you oppose, you are likely to give the answers that most effectively express your opposition. Faced with a question like this, it’s far more effective to say you might leave your job if your contract is changed than to say you’d meekly accept it and carry on as usual.

We see this again and again in polls seeking to measure the impact of policies. For example, before tuition fees were increased there were lots of polls claiming to show how many young people would be put off going to university by increased fees (such as here and here). After the rise, they miraculously continued to apply anyway. Nobody wants to tell a pollster that they would just swallow the thing they oppose.

I don’t doubt that many or most junior doctors are unhappy with the new contract […but…] you shouldn’t necessarily believe people telling pollsters about the awful consequences that will happen if something they don’t like happens. It’s a lot easier to make a threat to a pollster that you’ll resign from your job than it is to actually do it.

And that’s before we get to the fact that “considering resigning” is very different from “resigning”. I consider taking up jogging every January, yet the people of Dartford are yet to be subjected to even the briefest glimpse of me in jogging gear.



I thought it a good opportunity to provide a round-up of the available evidence we have about what the public think of airstrikes against Islamic State in Syria… and to give a reminder to people of what is NOT good evidence. First, here’s the recent polling evidence:

  • Survation/Mirror polled after Cameron’s statement this week, and found 48% agreed with Britain beginning airstrikes against Islamic State alongside France and the US, 30% of people disagreed (tabs here)
  • YouGov earlier this week asked if people would approve or disapprove of the RAF taking part in airstrikes against ISIS in Syria. 59% would approve, 20% would disapprove (tabs here).
  • ComRes for the Indy on Sunday asked a question on whether people would support Britain taking part in airstrikes against ISIS without UN approval (there wasn’t a parallel version with UN approval). 46% would support airstrikes without UN approval, 32% would not. Tabs are here. A slightly earlier ComRes poll for the Daily Mail asked if people would back British military air strikes against Syria – 60% would, 24% would oppose (tabs here)
  • BMG for the Standard asked a question on whether Britain should extend its current airstrikes against ISIS in Iraq to cover Syria as well. This found an even split – 50% thought they should, 50% thought they should not (tabs here)
  • I don’t think ICM have asked a direct support/oppose bombing question, but last week they asked a question about Parliamentary consent. It found 46% supported airstrikes if Parliament agreed, 23% supported airstrikes without Parliamentary consent, 12% opposed airstrikes regardless, 19% didn’t know (tabs here)

The precise levels of support differ from poll to poll as they are all asking slightly different questions using slightly different wordings. However the overall picture is pretty clearly one where the balance of public support is in favour of airstrikes – even the most sceptical finding, from BMG, has people evenly divided. That’s not to say British public opinion shows gung-ho enthusiasm for conflict: if you look through the rest of those surveys there is plenty of doubt (for example, several polls have found that people think intervening will increase the risk of terrorism here in Britain). On balance, however, public opinion seems to be in favour.

On Twitter and other social media there is lots of sharing of a “poll” by ITV that apparently shows a large majority against Britain taking part. The reason this “poll” gives such sharply different answers is because it is not representative and has no controls upon it. I have written about this many, many times (and for many decades before I was writing, the great Bob Worcester dutifully fought that same long fight). The sort of open-access polls that used to be on Ceefax, that people used to phone in to newspapers, and that these days pop up at the bottom of newspaper stories and in the sidebars of websites are completely useless as a way of accurately measuring public opinion.

Opinion polls are meaningful for one reason and one reason alone: because the sample is representative. It has the same proportion of young people and old people as Britain as a whole, the same proportion of rich people and poor people as Britain as a whole, the same proportions of left-wing and right-wing people… and therefore it should have the same proportion of people who are anti-bombing and pro-bombing as Britain as a whole. An open-access poll on a website has no such controls.

When a poll is properly done the researcher will use some sort of sampling method that produces a sample that is demographically representative of the country as a whole. Then when it’s finished, they’ll fine-tune it using weighting to make sure it is representative (e.g. if the proportion of women in the sample is lower than 51% they’ll weight the female respondents up). The people answering the poll will be invited and contacted by the researcher, preventing people or organised campaigns skewing a poll by deliberately directing lots of people who share their views to fill it in.
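
As an illustration of that fine-tuning step, here is a minimal sketch of raking (iterative proportional fitting), one common way of weighting a sample to several demographic targets at once – I’m naming the technique myself rather than quoting any particular pollster, and every figure below is invented for the example except the 51% female target mentioned above.

```python
import random

random.seed(0)

# Toy raking (iterative proportional fitting): respondents get weights so the
# sample matches population targets on gender and age simultaneously.
# All respondents and age targets are invented purely for illustration.

respondents = [{"gender": random.choice(["F", "M", "M"]),   # women deliberately under-sampled
                "age": random.choice(["18-34", "35-54", "55+"])}
               for _ in range(1_000)]

targets = {
    "gender": {"F": 0.51, "M": 0.49},
    "age": {"18-34": 0.28, "35-54": 0.34, "55+": 0.38},
}

weights = [1.0] * len(respondents)

for _ in range(20):                      # a handful of passes is enough to converge here
    for variable, target in targets.items():
        # current weighted total for each category of this variable
        totals = {cat: 0.0 for cat in target}
        for r, w in zip(respondents, weights):
            totals[r[variable]] += w
        grand = sum(totals.values())
        # scale each respondent's weight by (target share / current weighted share)
        for i, r in enumerate(respondents):
            cat = r[variable]
            weights[i] *= target[cat] / (totals[cat] / grand)

share_f = sum(w for r, w in zip(respondents, weights) if r["gender"] == "F") / sum(weights)
print(f"Weighted share of women: {share_f:.3f}")   # matches the 51% target
```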

Numbers alone do not make a poll representative. A common error is to see a professionally conducted poll of 1,000 people and a bigger open-access “poll” of 10,000 people and think that the latter is therefore more meaningful. This is wrong – it’s how representative the sample is that matters, not how big it is. The classic example of this is the 1936 US Presidential race, the one that made the reputation of George Gallup. Back then the Literary Digest conducted a mail-in poll with a sample of around 2.4 million people, while Gallup conducted a normal-sized professional poll. The Digest’s poll predicted that Alf Landon would easily win the election; Gallup correctly predicted that Roosevelt would win a crushing landslide. The problem was that while the Literary Digest’s poll had a vast sample (probably the biggest sample of any opinion poll, ever) it wasn’t representative – it was skewed towards richer people who were more likely to vote Republican. Gallup’s sample was tiny compared to his competitor’s, but it had proper controls and was properly representative.
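
A quick toy simulation makes the same point vivid. The population split and the response rates below are made up purely for illustration, but the pattern is general: a self-selected sample many times larger than a random one can still be hopelessly wrong.

```python
import random

random.seed(1)

# Toy population: 52% hold view A, 48% hold view B (figures invented for illustration).
population = ["A"] * 52_000 + ["B"] * 48_000

# Self-selected "poll": B supporters are far more motivated to respond,
# so they take part at five times the rate of A supporters.
response_rate = {"A": 0.02, "B": 0.10}
voodoo = [p for p in population if random.random() < response_rate[p]]

# Properly sampled poll: 1,000 respondents drawn at random from the population.
proper = random.sample(population, 1_000)

def share_a(sample):
    return 100 * sample.count("A") / len(sample)

print(f"Voodoo poll:  n={len(voodoo):6,}  A = {share_a(voodoo):.1f}%")
print(f"Proper poll:  n={len(proper):6,}  A = {share_a(proper):.1f}%")
# The self-selected sample is several times bigger but wildly wrong (roughly 18% A);
# the small random sample lands close to the true 52%.
```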

Unlike the polls by ComRes, ICM, Survation and YouGov, the ITV “poll” has no controls to make sure the sample is representative of the British public – indeed, ITV don’t even collect any demographics to see whether it is or not. There is nothing stopping organised campaigns from seeking to influence an open poll – for example, StoptheWar could’ve sent an email out to their mailing list encouraging them all to fill it in. There is nothing stopping anyone with the wherewithal to delete a cookie from their computer voting many, many times. It is, in short, meaningless.

Of course, as we saw in May, even properly conducted polls can get things wrong – but that’s a reason to be wary of even properly conducted polls, not a reason to suddenly put trust in “polls” that don’t even attempt to do things properly.

“Polls” like this, which Bob Worcester christened “voodoo polls”, are a blight on journalism and opinion research. Because to the casual observer they can’t easily be distinguished from a properly conducted poll, they mislead readers into thinking they are meaningful. I assume newspaper websites use them because they drive traffic, but serving up “news” that any journalist with a basic grasp of stats should know is meaningless – or in this case, actively wrong – is doing a disservice to their readers and listeners. At least when the BBC do things like this, caveats are normally added saying the “poll” was self-selecting and isn’t representative. ITV don’t caveat it at all, so who can blame all the excited anti-airstrikes people on social media for thinking it means something and jumping upon it? I’m sorry, but it doesn’t – properly conducted polling suggests the public split broadly in favour.

Of course, none of this means that it is necessarily correct for Britain to take part in airstrikes. Polls are not a magic 8 ball where you ask the public and they spit out the “correct” answer. Public opinion can be wrong, and often is. The evidence is that public opinion favours bombing ISIS in Syria, it doesn’t necessarily follow that the government or opposition should do so.


Voodoo polling corner

Back in 2012 I wrote about the Observer reporting an open-access poll on a website campaigning against the government’s health bill as if it was representative of members of the Royal College of Physicians. I also wrote to the Observer’s readers’ editor, Stephen Pritchard, who wrote this article about it.

The Guardian today is making the same error – they have an article claiming that seven out of ten junior doctors will leave the profession if the new junior doctors’ contract goes through. The headline presents it as representative of all junior doctors and it is referred to as a poll and a survey in the first two paragraphs. Only in the final, seventeenth paragraph is it revealed that it wasn’t conducted by any reputable market research organisation, but was a self-conducted survey of members of a Facebook group, the Junior Doctors Contract Forum, which is campaigning against the new contract (the Telegraph had a similar article earlier this month that appears to be based on the same data).

We cannot tell if efforts were made to limit the poll to actual doctors or to make it representative of junior doctors in terms of career stage, age, region and so on – it doesn’t really matter, as it is fatally undermined by being conducted in a forum campaigning against the contract. It would be like conducting a poll on fox hunting in the Countryside Alliance’s Facebook group and presenting that as representative of the countryside’s views on foxhunting. The flaw should be screamingly obvious.

Questions along the lines of “If the thing you oppose happens, will you do x?” are extremely dubious anyway. The problem is that respondents to opinion polls are not lab rats; they are human beings who seek to use polls to express their opinion, even when it’s not exactly what the question asks. From a respondent’s point of view, if you are filling in a survey about something you oppose, you are likely to give the answers that most effectively express your opposition. Faced with a question like this, it’s far more effective to say you might leave your job if your contract is changed than to say you’d meekly accept it and carry on as usual.

We see this again and again in polls seeking to measure the impact of policies. For example, before tuition fees were increased there were lots of polls claiming to show how many young people would be put off going to university by increased fees (such as here and here). After the rise, they miraculously continued to apply anyway. Nobody wants to tell a pollster that they would just swallow the thing they oppose.

I don’t doubt that many or most junior doctors are unhappy with the new contract, but you can’t get a representative poll by surveying campaigning groups, and you shouldn’t necessarily believe people telling pollsters about the awful consequences that will happen if something they don’t like happens. It’s a lot easier to make a threat to a pollster that you’ll resign from your job than it is to actually do it.

UPDATE: While I’m here in voodoo polling corner, I should also highlight this cracking example of a voodoo poll in the Daily Mail. It claims “One in three women admit they watch porn at least once a week”… but it seems to be an open access poll of Marie Claire readers, certainly it is in no way representative of all women in terms of things like age. It contains the delightful line that “Out of the more than 3,000 women surveyed, 91 per cent of the survey’s respondents identify as female, eight per cent identify as men and one per cent is transgender.” I don’t know how to break it to them, but you probably can’t include the 8% who are men in a survey of 3000 women.


I hope most of my regular readers would assume a Daily Express headline about a “poll” showing 80% of people want to leave the EU was nonsense anyway, but it’s a new year, a new election campaign, and it’s probably worth writing again about why these things are worthless and misleading as measures of public opinion. If nothing else, it will give people an explanation to point rather overexcited people on Twitter towards.

The Express headline is “80% want to quit the EU, Biggest poll in 40 years boosts Daily Express crusade”. This doesn’t actually refer to a sampled and weighted opinion poll, but to a campaign run by two Tory MPs (Peter Bone and Philip Hollobone) and a Tory candidate (Thomas Pursglove) consisting of them delivering their own ballot papers to houses in their constituencies. They apparently got about 14,000 responses, which is impressive as a campaigning exercise, but doesn’t suddenly make it a meaningful measure of public opinion.

Polls are meaningful only to the extent that they are representative of the wider public – if they contain the same proportions of people of different ages, of men and women, of different social classes and incomes, and from different parts of the country as the population as a whole, then we can hope they will also hold the same views as the population as a whole. Just getting a lot of people to take part does not in any way guarantee that the balance of people who end up taking the poll will be representative.

I expect lots of people who aren’t familiar with how polling works will see a claim like this, see that 14,000 took part, and think it must therefore be meaningful (in the same way, a naive criticism of polls is often that they only interview 1,000 people). The best example of why this doesn’t work was the polling for the 1936 Presidential election in the USA, which heralded modern polling and tested big sample sizes to destruction. Back then the best-known poll was that done by a magazine, the Literary Digest. The Literary Digest, too, sent out ballot papers to as many people as it could – it sent them to its subscribers, to other subscription lists, to everyone in the phone directory, to everyone with a car, etc, etc. In 1936 it sent out 10 million ballot papers and received 2.4 million responses. Based on these replies, they confidently predicted that the Republican candidate Alf Landon would win the election. Meanwhile the then little-known George Gallup interviewed just a few thousand people, but used proper demographic quotas to get a sample that was representative of the American public. Gallup’s data predicted a landslide win for the Democrat candidate Franklin D Roosevelt. Gallup was of course right, the Literary Digest embarrassingly wrong. The reason was that the Literary Digest’s huge sample of 2.4 million was drawn from the sort of people who had telephones, cars and magazine subscriptions and, in depression-era America, these people voted Republican.
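
On the “they only interview 1,000 people” complaint, the arithmetic is worth spelling out. For a genuinely random sample the uncertainty depends on the size of the sample rather than the size of the country, and at 1,000 respondents it is already only about ±3 points. A back-of-envelope calculation (assuming simple random sampling, which real polls only approximate) looks like this:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion p from a simple random sample of n."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (1_000, 2_000, 10_000):
    print(f"n = {n:6,}:  +/- {100 * margin_of_error(n):.1f} points")
# n =  1,000:  +/- 3.1 points
# n =  2,000:  +/- 2.2 points
# n = 10,000:  +/- 1.0 points
# Extra size brings rapidly diminishing returns - and no amount of size fixes
# a sample that is skewed in the first place, as the Literary Digest found.
```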

Coming back to the Express’s “poll”, a campaign about leaving Europe run by three Tory election candidates in the East Midlands is likely to be responded to largely by Conservative sympathisers with strong views about Europe, hence the result. Luckily we have lots of properly conducted polls that are sampled and weighted to be representative of the whole British public and they consistently show a different picture. There are some differences between different companies – YouGov ask it a couple of times a month and find support for leaving the EU varying between 37% and 44%, Survation asked a couple of months ago and found support for leaving at 47%, Opinium have shown it as high as 48%. For those still entranced by large sample sizes, Lord Ashcroft did a poll of 20,000 people on the subject of Europe last year (strangely larger than the Express’s “largest poll for 40 years”!) and found people splitting down the middle: 41% stay – 41% leave.

And that’s about where we are – there’s some difference between different pollsters, but the broad picture is that the British public are NOT overwhelmingly in favour of leaving the EU, they are pretty evenly divided over whether to stay in the European Union or not.