I hope most of my regular readers would assume a Daily Express headline about a “poll” showing 80% of people want to leave the EU was nonsense anyway, but it’s a new year, a new election campaign, and it’s probably worth writing again about why these things are worthless and misleading as measures of public opinion. If nothing else, it will give people an explanation to point rather overexcited people on Twitter towards.

The Express headline is “80% want to quit the EU, Biggest poll in 40 years boosts Daily Express crusade”. This doesn’t actually refer to a sampled and weighted opinion poll, but to a campaign run by two Tory MPs (Peter Bone and Philip Hollobone) and a Tory candidate (Thomas Pursglove) consisting of them delivering their own ballot papers to houses in their constituencies. They apparently got about 14,000 responses, which is impressive as a campaigning exercise, but doesn’t suddenly make it a meaningful measure of public opinion.

Polls are meaningful only to the extent that they are representative of the wider public – if they contain the same proportions of people of different ages, of men and women, of different social classes and incomes and from different parts of the country as the population as a whole, then we can hope they will also hold the same views as the population as a whole. Just getting a lot of people to take part does not in any way guarantee that the balance of people who end up taking the poll will be representative.
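As a toy illustration of what "weighted to be representative" means (all the numbers below are invented for the example, not from any real poll), each respondent can be given a weight equal to their group's share of the population divided by that group's share of the sample:

```python
# Toy illustration of demographic weighting – all numbers are hypothetical.
# Each respondent's weight = population share of their group / sample share.

# Suppose the population is 50% men / 50% women, but our sample skews male.
population_share = {"men": 0.50, "women": 0.50}
sample = (
    [{"group": "men", "vote": "Leave"}] * 420
    + [{"group": "men", "vote": "Remain"}] * 280
    + [{"group": "women", "vote": "Leave"}] * 120
    + [{"group": "women", "vote": "Remain"}] * 180
)

n = len(sample)  # 1000 respondents: 700 men, 300 women
sample_share = {g: sum(r["group"] == g for r in sample) / n for g in population_share}
weights = {g: population_share[g] / sample_share[g] for g in population_share}

# Weighted share voting Leave
leave_weight = sum(weights[r["group"]] for r in sample if r["vote"] == "Leave")
total_weight = sum(weights[r["group"]] for r in sample)
print(f"Unweighted Leave share: {sum(r['vote'] == 'Leave' for r in sample) / n:.1%}")  # 54.0%
print(f"Weighted Leave share:   {leave_weight / total_weight:.1%}")  # 50.0%
```

The unweighted figure is skewed towards the over-represented group; weighting corrects for that skew – but only for the characteristics you actually weight by, which is why a self-selecting "poll" cannot be rescued this way.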

I expect lots of people who aren’t familiar with how polling works will see a claim like this, see that 14,000 people took part, and assume it must therefore be meaningful (in the same way, a naive criticism of polls is often that they only interview 1,000 people). The best example of why this doesn’t work is the polling for the 1936 Presidential election in the USA, which heralded modern polling and tested big sample sizes to destruction. Back then the best-known poll was the one run by a magazine, the Literary Digest. The Literary Digest also sent out ballot papers to as many people as it could – to its subscribers, to other subscription lists, to everyone in the phone directory, to everyone with a car, and so on. In 1936 it sent out 10 million ballot papers and received 2.4 million responses. Based on these replies, it confidently predicted that the Republican candidate Alf Landon would win the election. Meanwhile the then little-known George Gallup interviewed just a few thousand people, but used proper demographic quotas to get a sample that was representative of the American public. Gallup’s data predicted a landslide win for the Democratic candidate Franklin D Roosevelt. Gallup was of course right, the Literary Digest embarrassingly wrong. The reason was that the Literary Digest’s huge sample of 2.4 million was drawn from the sort of people who had telephones, cars and magazine subscriptions and, in Depression-era America, those people voted Republican.
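The 1936 lesson can be sketched in a toy simulation (the proportions below are invented to make the point, not the real 1936 figures): a huge sample drawn from a skewed frame loses to a tiny sample drawn from everyone.

```python
# Toy simulation of the 1936 lesson: a huge sample from a skewed frame
# is less accurate than a small representative one. All numbers invented.
import random

random.seed(36)

# Imaginary electorate: "affluent" voters (phone/car/magazine owners)
# lean Republican; everyone else leans Democrat.
def voter():
    affluent = random.random() < 0.30          # 30% of electorate is affluent
    p_dem = 0.35 if affluent else 0.71         # affluent voters lean Republican
    return affluent, random.random() < p_dem   # (affluent?, votes Democrat?)

electorate = [voter() for _ in range(200_000)]
true_dem = sum(dem for _, dem in electorate) / len(electorate)

# Digest-style poll: enormous, but drawn only from the affluent frame.
digest = [dem for aff, dem in electorate if aff][:50_000]

# Gallup-style poll: tiny, but a simple random sample of everyone.
gallup = [dem for _, dem in random.sample(electorate, 2_000)]

print(f"True Democrat share: {true_dem:.1%}")
print(f"Digest estimate (n={len(digest)}): {sum(digest) / len(digest):.1%}")
print(f"Gallup estimate (n={len(gallup)}): {sum(gallup) / len(gallup):.1%}")
```

The Digest-style estimate misses by twenty-odd points despite its vast sample, because sample size shrinks random error but does nothing about a biased sampling frame.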

Coming back to the Express’s “poll”, a campaign about leaving Europe run by three Tory election candidates in the East Midlands is likely to be responded to largely by Conservative sympathisers with strong views about Europe, hence the result. Luckily we have lots of properly conducted polls that are sampled and weighted to be representative of the whole British public, and they consistently show a different picture. There are some differences between companies – YouGov ask the question a couple of times a month and find support for leaving the EU varying between 37% and 44%, Survation asked a couple of months ago and found support for leaving at 47%, and Opinium have shown it as high as 48%. For those still entranced by large sample sizes, Lord Ashcroft did a poll of 20,000 people on the subject of Europe last year (strangely, larger than the Express’s “largest poll for 40 years”!) and found people splitting down the middle: 41% stay – 41% leave.

And that’s about where we are – there’s some difference between different pollsters, but the broad picture is that the British public are NOT overwhelmingly in favour of leaving the EU, they are pretty evenly divided over whether to stay in the European Union or not.


Rob Hayward, the former Tory MP turned psephologist, gave a presentation at ComRes on Monday which has stirred up some comment about whether the polls are underestimating Conservative support.

Historically the polls have tended to underestimate Conservative support and/or overestimate Labour support. It was most notable in 1992, but was a fairly consistent historical pattern anyway. Since the disaster of 1992 this bias has steadily reduced as pollsters have gradually switched methods and adopted some form of political control or weighting on their samples. In 2010 – at last! – the problem seemed to have been eliminated. I hope that the polling industry has now tackled and defeated the problem of Labour bias in voting intention polls, but it would be hubris to assume that because we’ve got it right once the problem has necessarily gone away and we don’t need to worry about it anymore.

In his presentation Rob compared polls last year with actual elections – the polls for the European elections, for the by-elections and for the local elections.

I looked at how the polls for the European election did here and have the same figures as Rob. Of the six pollsters who produced figures within a week or so of the election five underestimated Conservative support. The average level of Tory support across those polls was 22.2%, the Tories actually got 23.9%. The average for Labour was 27%, when they actually got 25.4%.
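The size of the skew falls straight out of those averages; as a quick sketch using only the figures quoted above:

```python
# Gap between the 2014 European election polling averages and the actual
# result, using only the averages and results quoted in the text.
averages = {"CON": 22.2, "LAB": 27.0}   # mean of the final polls
actual = {"CON": 23.9, "LAB": 25.4}     # actual vote shares

for party in averages:
    error = averages[party] - actual[party]
    direction = "overestimated" if error > 0 else "underestimated"
    print(f"{party}: polls {direction} by {abs(error):.1f} points")
```

So the average poll understated the Conservatives by 1.7 points and overstated Labour by 1.6 – a symmetrical skew of a little over three points on the lead.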

Looking at by-elections, Rob has taken ten by-election polls from 2014 and compared them to results. Personally I’d be more wary. By-election campaigns can move fast, and some of those polls were taken a long time before the actual campaign – the Clacton polls, for example, were conducted a month before the actual by-election took place, so any difference between the results and the polling could just as likely be a genuine change in public opinion. Taking those polls done within a week or so of the actual by-elections shows the same pattern though – Conservatives tend to be underestimated (except in Heywood and Middleton), Labour tends to be overestimated.

Finally in Rob’s presentation he has a figure for polls at the local elections in 2014. I think he’s comparing the average of national Westminster polls at the time with Rallings and Thrasher’s National Equivalent Vote (NEQ), which I certainly wouldn’t recommend – the Lib Dems, for example, always do better in local election NEQ than in national polls, but that’s because they are different types of election, not because the national polls are wrong. As it happens, there was at least one actual local election poll, from Survation.

Survation local election: CON 24%, LAB 36%, LDEM 13%, UKIP 18%, Others 10%
R&T local election vote: CON 26%, LAB 36%, LDEM 11%, UKIP 16%, Others 12%

Comparing it to the actual result (that is, the actual total votes cast at the local election, which is what Survation were measuring, NOT the National Equivalent Vote) these figures were actually pretty good, especially given the sample size was only 312 and that it will be skewed in unknown ways by multi-member wards. That said, the pattern is the same – it’s the Conservatives who are a couple of points too low, Labour spot on.

So, Rob is right to say that polls in 2014 that could be compared to actual results tended to show a skew away from the Conservatives and towards Labour. Would it be right to take a step on from that and conclude that the national Westminster polls are showing a similar pattern? Well, let me throw out a couple of caveats. To take the by-election polls first, these are conducted solely by two companies – Lord Ashcroft and Survation… and in the case of Survation they are done using a completely different method to Survation’s national polling, so cannot reasonably be taken as an indication of how accurate their national polling is. ICM is a similar case, their European polling was done online while all their GB Westminster polling is done by telephone. None of these examples includes any polling from MORI, Populus or ComRes’s telephone polling – in fact, given that there were no telephone based European polls, the comparison doesn’t include any GB phone polls at all, and looking at the house effects of different pollsters, online polls tend to produce more Labour-friendly figures than telephone polls do.

So what can we conclude? Well, looking at the figures by-election polls do seem to produce figures that are a bit too Laboury, but I’d be wary of assuming that the same pattern necessarily holds in national polls (especially given Survation use completely different methods for their constituency polling). At the European elections the polls also seemed to be a bit Laboury… but the pollsters who produced figures for that election included those pollsters that tend to produce the more Laboury figures anyway, and didn’t include any telephone pollsters. It would be arrogant of me to rule out the possibility that the old problems of pro-Labour bias may return, but for the time being consider me unconvinced by the argument.

UPDATE: Meanwhile the Guardian have published their monthly ICM poll, with topline figures of CON 30%(+2), LAB 33%(nc), LDEM 11%(-3), UKIP 11%(-3), GRN 9%(+4) – another pollster showing a significant advance for the Green party.



Three new polls today – two GB polls and one Scottish one (and YouGov to come later).

A week ago we had sharply contrasting polls from Lord Ashcroft and Populus – one showing a chunky Conservative lead, one showing a chunky Labour lead, both probably outliers. Today’s Ashcroft and Populus polls are far more normal, both showing a tight race between Conservative and Labour.

Topline figures from Populus are CON 35%, LAB 36%, LDEM 8%, UKIP 13%, GRN 4%. (tabs). Lord Ashcroft’s weekly poll has topline figures of CON 29%(-5), LAB 28%(nc), LDEM 9%(+1), UKIP 15%(-1), GRN 11%(+3) (tabs). While Ashcroft’s gap between Labour and Conservatives looks a little more normal, the poll has an eye-catching Green score – up to 11%. This is the highest the Greens have scored in any poll since their initial but short-lived breakthrough back in 1989.

As ever, be wary of giving too much attention to the poll that looks interesting and exciting and ignoring the dull ones. The Greens certainly are increasing their support, but there is much variation between pollsters. Below are the latest levels of Green support from those companies who have polled so far in 2015:

[Chart: latest Green voting intention figures by pollster]

Support varies from 11% with Ashcroft down to just 3% with Populus. For the very low scores from Populus and ComRes there are at least clear methodological reasons: Populus downweight respondents who identify as Green supporters quite heavily, while ComRes’s online polls appear to have applied a much stricter turnout filter to Green and UKIP voters since they started prompting for UKIP. At the other end of the scale, Lord Ashcroft’s polls have consistently tended to show higher levels of support for parties outside the traditional big three, though the reasons for this are unclear.
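To see why downweighting by party identification moves a topline, here is a minimal sketch – the counts and the half-weight are invented for illustration and are not Populus’s actual scheme:

```python
# Illustrative sketch of downweighting by party identification.
# Counts and weights are invented; this is NOT Populus's actual scheme.

# (party identification, vote, respondent count) in an imaginary sample of 1000
sample = [
    ("Green", "GRN", 60),
    ("Other", "GRN", 30),
    ("Other", "other", 910),
]

# Downweight self-identified Green supporters to half their raw weight.
def weight(party_id):
    return 0.5 if party_id == "Green" else 1.0

grn_weighted = sum(c * weight(pid) for pid, v, c in sample if v == "GRN")
total_weighted = sum(c * weight(pid) for pid, v, c in sample)
grn_raw = sum(c for pid, v, c in sample if v == "GRN")
total_raw = sum(c for _, _, c in sample)

print(f"Raw Green share:      {grn_raw / total_raw:.1%}")        # 9.0%
print(f"Weighted Green share: {grn_weighted / total_weighted:.1%}")
```

Even a modest downweight on a party’s own identifiers knocks a couple of points off its topline share, which is why two pollsters can poll the same electorate and publish quite different Green figures.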

Meanwhile there was a new Scottish poll from Survation for the Daily Record. Topline Westminster voting intentions, with changes from Survation’s previous poll, are CON 14%(-2), LAB 26%(+2), LDEM 7%(+2), SNP 46%(-2), GRN 3%(+2), UKIP 4%(nc). (tabs). It shows a small narrowing of the SNP lead, but it was an extremely large lead last time, so the SNP are still left with a huge twenty point lead.


YouGov’s first poll of the year is out tonight, with topline figures of CON 31%, LAB 34%, LDEM 7%, UKIP 14%, GRN 8%.

YouGov have made a couple of methodological changes to start the election year. The first and most interesting is to include UKIP in the main prompt for voting intention. Prompting is something I’ve written about here many times before, most recently here. It’s tricky because it really can make a difference, yet there is no real way of knowing when it is appropriate and when it isn’t. There are instances when pollsters have overestimated the level of support for minor parties because they prompted when they probably shouldn’t have (YouGov & UKIP in the 2004 European election, and the Greens in the 2007 Scottish election), but go back to the 1980s and polls that failed to prompt for the Liberals & SDP tended to underestimate their support. It’s clear from history that you can both get it wrong by prompting when you shouldn’t, and get it wrong by failing to prompt when you should.

I’ve written before about the difficulties of making the judgement call on this. There is no obvious way of drawing the line – whenever I write about it, people in the comments section make helpful suggestions like “why not prompt if the party is in third place” or “if the party is over x%” or whatever… but all of these are utterly arbitrary – perfectly reasonable in themselves, but no help in getting it right. It’s not even clear what we should be looking for – is it a certain level of support, or a level of public awareness and familiarity, or a level of media coverage?

In the event, however, the difficult decision pretty much made itself. It’s something YouGov have been quietly testing on and off over the years, and it seems to be making less and less of a difference. Testing at the end of last year showed about the same level of support for UKIP in prompted polls as in unprompted polls. Presumably UKIP are now established enough in the public mind for prompting not to make a difference, at which point the decision became a simple one. Note that the fairly low UKIP score in today’s YouGov poll – 14% – is not a result of the change: in our testing last year we were showing UKIP at around 17% when prompted, at a time when they were around 16%-17% in unprompted polls.

With YouGov shifting over it means three companies (YouGov, ComRes and Survation) now include UKIP in the main prompt, Populus, Opinium, Ipsos MORI, ICM and Lord Ashcroft polls do not. When ComRes made the switch they too seemed to show very little difference in levels of UKIP support (though their online polls seem to have made some changes to turnout weighting at the same time) but just because prompting doesn’t make much difference to YouGov polls, it does not follow that it won’t make a difference anywhere else – in particular, the impact of prompting may be very different in an internet survey with two pages to click through than in a telephone survey with a human interviewer. What is the correct approach for one sort of polling will not necessarily be the correct approach for another company’s polls.

The other change in YouGov’s methods is much smaller, a tidying up of the sample spec to try and reduce some of the oversampling and reducing the amount of weighting needed. The overall quota targets are the same and the weighting remains exactly the same so there should be no difference at all in the published voting intention figures. The only difference anyone might notice is in crossbreaks: YouGov have started to include ethnicity in the sample quotas for London, which may have an impact in the London crossbreak.

UPDATE: I somehow managed to miss the first Populus poll of this year this morning – figures there were CON 34%, LAB 36%, LDEM 9%, UKIP 12%. Tabs are here.


ComRes’s monthly online poll for the Indy on Sunday and Sunday Mirror is out tonight and has topline figures of CON 30%, LAB 34%, LDEM 8%, UKIP 19%, GRN 3%. Tabs are here.

On the face of it there is very little change from a month ago – the Conservatives are down one, the Lib Dems up one. However, there is actually an important methodological change. As regular readers will remember, last month ComRes ran a split-sample experiment in their online poll, with half the sample asked voting intention with UKIP in the main prompt and half without. This apparently made five points of difference to UKIP, with the prompted half of the sample showing UKIP up on 24%. ComRes have now switched over to prompting for UKIP all the time in their online and telephone polls, but it obviously didn’t have the same dramatic effect in this month’s poll. I suppose comparing prompted poll to prompted poll UKIP are down five points since last month, but perhaps last month’s figure was an anomaly and the impact of prompting is just smaller than the split-sample experiment suggested.

ComRes’s press release suggests they have also tweaked their weightings this month. I’ll update with details once they are confirmed, but looking through the tables nothing jumps out at me so it is probably relatively minor.