Compare and contrast

A couple of weeks ago someone sent me a link to a “poll” in the Tab (which, one understands, is some form of newspaper for students) that claimed to show the Conservatives were in the lead amongst students. Nonsense of course – it was an open-access voodoo poll with no attempt to get a meaningful or representative sample (hell, 10% of the sample were Cambridge students!). Of course, it was only a poll in a campus newspaper so I didn’t bother writing rude things about it; the only other media I found foolish enough to cite it were Vice and Breitbart.

Just for the record though, today’s Independent has a properly conducted poll of students by YouthSight (we’ve met them here before, under the name of Opinionpanel). This was a panel-based survey amongst full-time undergraduate students, recruited via UCAS and validated through an ac.uk email address, weighted by type of university (Russell Group, pre-1992, post-1992, specialist), year of study and gender. In contrast to the voodoo poll above, it shows Labour with a solid lead amongst students who say they are likely to vote – Labour 43%, Conservatives 24%, Lib Dems 6%, Greens 14%, UKIP 5%. Compare and contrast.
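For readers who wonder what “weighted by type of university” means in practice, here is a minimal sketch of cell weighting, the simplest form of that adjustment. The target shares below are invented for illustration – they are not YouthSight’s actual figures, and YouthSight may well use a more sophisticated scheme.

```python
# Minimal sketch of demographic cell weighting (post-stratification).
# All target shares are HYPOTHETICAL illustration values, not the
# YouthSight weighting targets.
from collections import Counter

# Each respondent is tagged with their weighting cell (university type).
respondents = (["russell_group"] * 450 + ["pre_1992"] * 250 +
               ["post_1992"] * 250 + ["specialist"] * 50)

# Assumed population targets for full-time undergraduates (invented).
targets = {"russell_group": 0.30, "pre_1992": 0.25,
           "post_1992": 0.35, "specialist": 0.10}

counts = Counter(respondents)
n = len(respondents)

# Weight = target share / achieved share: over-represented cells are
# weighted down, under-represented cells are weighted up.
weights = {cell: targets[cell] / (counts[cell] / n) for cell in counts}

for cell, w in sorted(weights.items()):
    print(f"{cell:>13}: sample {counts[cell] / n:5.1%}, "
          f"target {targets[cell]:5.1%}, weight {w:.2f}")
```

Each respondent’s answers are then multiplied by their cell’s weight before the headline percentages are calculated; in practice pollsters weight by several variables at once (here, year of study and gender as well).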


A couple of years ago I wrote a piece giving advice on how to report opinion polls, or rather, how not to. Look specifically at the third point on being careful of extremely small sample sizes in cross-breaks.

There was a cracking example of the media failing on this front on BBC Look East this week, one which has done the rounds on Twitter. The clip is here; the offending portion starts at the three-minute mark. It claims to show the results of a poll of the Eastern region that put UKIP on 44% of the vote.

The figures come from page 36 of this ComRes poll. It isn’t a bespoke, properly weighted poll of the Eastern region; it’s a cross-break on a normal national poll. The figures are based upon only 58 respondents, giving a margin of error of plus or minus 13 points. The figures are not even accurately quoted – the Lib Dems are actually on 7%. There were no caveats about sample size offered (the YouTube clip from UKIP cuts out suddenly, but at the moment the full programme is on iPlayer). This is truly appalling reporting of polls – there is no way that such a tiny cross-break should be reported out of context as if it were a representative poll.
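For what it’s worth, the plus-or-minus 13 figure drops straight out of the standard simple-random-sample approximation – which, if anything, flatters a cross-break, since it ignores the design effects of weighting. A quick sketch of the arithmetic:

```python
# 95% margin of error for a proportion from n respondents, using the
# standard simple-random-sample approximation at its widest (p = 0.5).
import math

n = 58        # the Eastern region cross-break
p = 0.5       # worst case; UKIP's quoted 44% is close to it anyway
moe = 1.96 * math.sqrt(p * (1 - p) / n)
print(f"+/- {moe:.1%}")   # prints +/- 12.9%
```

In other words, UKIP’s “44%” in that cross-break is statistically compatible with anything from roughly 31% to 57% – not a finding any broadcaster should be leading on.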


Time for some bad poll reporting, or more specifically, bad poll headlining (Nicholas Watt’s actual article is eventually perfectly clear about the details of the poll). Tonight the Guardian reports that “Labour support up 14 points after Miliband’s energy pledge”. Now, one might very well interpret that as meaning Labour’s share of support in the polls has risen fourteen points since Ed Miliband made his pledge on energy prices. Of course, this isn’t the case. Labour were in the high thirties before conference and they are in the high thirties now – perhaps a tad higher, it’s still unclear. What the poll actually shows is that amongst middle-class people who say they are struggling to make ends meet, Labour are up 14 points since the general election in 2010. Given that the vast majority of Labour’s increase in the polls happened in the tail end of 2010 or after the omnishambles budget in 2012, it’s fair to assume this was not the result of Ed Miliband’s energy pledge.

That said, 14 points is a big increase considering Labour are only up about 8 or 9 points overall. Once Peter Kellner’s actual article and the tables are out it will be interesting to see the contrast between those people who are struggling and those who are doing well (though it’s worth considering that the correlation won’t only work one way – people who feel badly off may be more likely to support Labour, but I suspect people who support Labour are also more likely to say they are struggling. Poorer people will already be more Labour anyway; the interesting contrast will be the changes). It’s not up on the Progress website yet, but presumably will be in the next few days.

Today’s papers also have some ropey poll reporting from a different source, the Telegraph. It reports a poll of Countryside Alliance members, but headlines it as if it were representative of the views of rural voters as a whole. Again, the problem is the headline; Steven Swinford’s actual article is fine. Needless to say, the membership of the Countryside Alliance is not interchangeable with the entire population of rural areas, for reasons which I would hope were blindingly obvious (it’s a pressure group, so it attracts more politically active and engaged people; it grew from the campaign against the hunting ban, so it attracts more pro-hunting people; it doesn’t restrict its membership to people actually from rural areas, etc, etc). The Speccy has got very excited about the same poll because it shows 13% of Countryside Alliance members saying they’d vote UKIP… so, roughly the same proportion as in the country as a whole. If anything, one might have expected a more rural and conservative demographic to be more supportive of UKIP than the population as a whole; in fact, they seem to be exactly the same. It strikes me a bit as a “Pope is no more Catholic than anyone else” shocker.

Finally, while I’m picking on people, I might as well waste a few pixels being horrid to the Daily Express, which today claims 98% of people think Britain should close its doors to all new immigrants. It seems almost superfluous to point out that almost any survey in the Express is complete tripe, like making the effort to write that things in the National Enquirer may be untrue. Perhaps so, but I feel the need to point it out occasionally – it would hardly be fair for me to pick upon the motes in the eyes of the Guardian and the Telegraph and ignore the forest sprouting from the Express. Express “phone polls” are premium-rate numbers they put in the paper, to get people to ring up and vote yes or no (multiple times if they wish), presumably after reading a foam-flecked Express rant on the subject in question. There is obviously no attempt to get a representative sample, and they always show around 97% or 98% in agreement with whatever the Express’s line is. The Express’s old website used to have a wonderful archive of them, but they don’t seem to be put up online anymore, presumably to stop people laughing at them.


You may remember my blog post a couple of weeks ago about the Observer reporting a voodoo poll as if it were representative of members of the Royal College of Physicians. Back then an open-access poll hosted on a website campaigning against the government’s NHS bill found 92.5% of respondents wanted the RCP to “publicly call for the withdrawal of the Health and Social Care Bill”, and this was reported as being representative of the RCP’s membership.

I dutifully wrote a letter to the Observer’s readers’ editor, Stephen Pritchard, and he addresses the report in his column for the Observer today. Mr Pritchard writes “we know opposition among hospital doctors is extremely high, but readers have a right to expect that things that we proclaim to be polls are properly conducted, using scientifically weighted samples of a population or group” and, as I have before, points journalists to Peter Kellner’s British Polling Council guide for journalists on how to report opinion polls. Full marks to the Observer for addressing the matter seriously.

Meanwhile, the RCP has since commissioned a ballot of its whole membership, professionally carried out by Electoral Reform Services. The ballot managed a 35% response rate. It found that 69% of members were opposed to the bill, but that only 49% thought the RCP should seek the withdrawal of the bill, with 46% saying the College should work constructively with the government to try and improve it.

That’s 49% who wanted the RCP to call for the Bill to be withdrawn, not 92.5%. That, dear readers, is an example of why voodoo polls are bunkum.

(Nigel Hawkes at Straight Statistics also has a post here welcoming the RCP conducting a proper survey of their membership rather than touting voodoo polls)


Last month Chris Elliott, the Guardian’s readers’ editor, quoted a letter from a reader saying there “seemed to be a cultural problem among Guardian reporters that it is of no consequence if you completely misunderstand or mis-report the figures in a story [...] I hope that you can urge on the editor some training of reporters on basic understanding of statistics”. Chris Elliott said he had organised three sessions with external statistical experts for Guardian journalists in the past year (and Nigel Hawkes at Straight Statistics reveals he was one of them).

The Observer’s readers’ editor should probably do the same. Earlier this month the Guardian’s front-page story mentioned an open-access voodoo poll on the Royal Medical Journal’s website that had been touted round Twitter as if it were meaningful. The Observer this weekend was on a similar subject, but worse – hanging a whole story on very dubious figures.

The story is titled “Nine out of 10 members of Royal College of Physicians oppose NHS bill”, and claims that “a new poll reveals that nine out of ten members of the Royal College of Physicians – hospital doctors – want the NHS shake-up to be scrapped.”

The story is based upon an open-access survey created by and linked from a website campaigning against the health bill, callonyourcollege.blogspot.com, and, again, bandied around Twitter. The survey was open access, so there could have been no attempt at proper sampling, and it contained no demographic information that could have been used to weight it. It should go without saying that a survey from a website campaigning against the NHS reforms and co-ordinating opposition to them amongst the Medical Royal Colleges is more likely to be found and completed by people opposed to the bill (in much the same way that a poll carried out on, say, the Conservative party’s website might be considerably more supportive).

Any poll actually measuring the opinion of members of the RCP would have needed to randomly sample members, or at least contact members in a way that would not have introduced any skew in those likely to reply. For all we know a properly conducted poll might also have shown overwhelming opposition – but we cannot judge that from an open-access survey liable to have obtained an extremely biased sample.
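To make the mechanism concrete, here is a toy simulation of self-selection bias. Every number in it is invented for illustration – it is not modelling the actual RCP figures – but it shows how a modest difference in the propensity to respond turns a divided population into an apparent landslide.

```python
# Toy simulation of self-selection bias. All numbers are HYPOTHETICAL.
# Suppose 60% of a membership opposes a bill, but opponents are five
# times as likely to find and complete a survey hosted on a
# campaigning website.
import random

random.seed(1)
TRUE_OPPOSED = 0.60
P_RESPOND = {True: 0.25, False: 0.05}   # response rate: opposed vs not

responses = []
for _ in range(100_000):                # the membership we simulate
    opposed = random.random() < TRUE_OPPOSED
    if random.random() < P_RESPOND[opposed]:
        responses.append(opposed)

print(f"True opposition:  {TRUE_OPPOSED:.0%}")
print(f"Survey 'shows':   {sum(responses) / len(responses):.0%}")
```

With a genuine 60–40 split, opponents being merely five times likelier to respond pushes the survey figure up to nearly 90% – much the same shape as the “nine out of ten” headline above, and exactly why a sampling-free web poll cannot be read at face value.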

Once again, I would urge any journalist thinking of including any polling figures in a story to look at this guidance from the British Polling Council, particularly on how to judge whether to take a poll seriously or not. If those questions had been asked, the Observer would never have got to this point…

Who conducted the poll? Was it a reputable, independent polling company? If not, then regard its findings with caution

In this case, the poll was not conducted by a polling company, but by a group lobbying against the bill they were asking about. This should have been the first alarm bell.

How many people were interviewed for the survey? The more people, the better — although a small-sample scientific survey is ALWAYS better than a large-sample self-selecting survey.

In this case, the number of people interviewed is not mentioned. It could be high, it could be low. But note Peter’s other point… this was a self-selecting survey anyway…

How were those people chosen? If the poll purports to be of the public as a whole (or a significant group of the public), has the polling company employed one of the methods outlined in points 2, 3 and 4 above? If the poll was self-selecting — such as readers of a newspaper or magazine, or television viewers writing, telephoning, emailing or texting in — then it should NEVER be presented as a representative survey.

This was a self-selecting poll of doctors directed there from a site campaigning against the legislation. There is no way it should have been presented as a representative survey.

UPDATE: Credit where it is due. Denis Campbell, one of the authors of the piece, wrote about the same poll on the Guardian’s rolling blog the next day, but this time caveated it with “But that was to a website run by anti-Bill doctors and a self-selecting rather than scientific poll, so may not reflect opinion precisely.” In a perfect world I’d hope that journalists would spurn non-representative polls completely, but progress nonetheless.