In this post back in January I wrote about the partisan effects of the different methodologies polling companies use – how some companies tend to show consistently higher or lower scores for different parties. Since then I’ve been meaning to do a reference post explaining those different methods. This is that – an attempt to summarise each company’s methods in one place, so you can check whether company A prompts for UKIP or what company B does with their don’t knows. As ever, this is from the published methodology details of each company and my own understanding of them – any mistakes are mine and corrections are welcome!

Phone polls

There are four regular telephone polls – Ipsos MORI, ICM, Ashcroft and ComRes/Daily Mail (ComRes do both telephone and online polls). All phone polls are conducted using Random Digit Dialing (RDD) – essentially taking phone numbers from the BT directory and then randomising the digits at the end to ensure the sample includes some ex-directory numbers. All phone polls now also include some mobile phone numbers, though the pollsters have all said this makes little actual difference to results and is being done as a precaution. All telephone polls are weighted by some common demographics, like age, gender, social class, region, housing tenure, holidays taken and car ownership.

Ipsos MORI

Now the most venerable of the regular pollsters, Ipsos MORI are also the most traditional in their methods. They currently do a monthly political poll for the Evening Standard. Alone among GB pollsters they use no form of political weighting, viewing the problem of false recall as insurmountable. Their samples are weighted using standard demographics, but also by public and private sector employment.

MORI do not (as of March 2015) include UKIP in their main prompt for voting intention. For people who say don’t know, MORI ask who they are most likely to vote for, and count that equally as a voting intention. People who still say don’t know or won’t say are ignored. In terms of likelihood to vote, MORI have the tightest filter of any company, including only those respondents who say they are absolutely 10/10 certain to vote.

ICM

ICM are the second oldest of the current regular pollsters, and were the pioneer of most of the methods that became commonplace after the polling industry changed methods following the 1992 debacle. They currently do a monthly poll for the Guardian. They weight by standard demographics and by people’s past vote, adjusted for false recall.

ICM don’t currently include UKIP in their main voting intention prompt. People who say they don’t know how they will vote are reallocated based on how they say they voted at the previous election, but weighted down to 50% of the value of people who actually give a voting intention. In terms of likelihood to vote, ICM weight by likelihood so that people who say they are 10/10 certain to vote are fully counted, people who say they are 9/10 likely to vote count as 0.9 of a vote and so on. Additionally ICM weight people who did not vote at the previous election down by 50%, the only pollster to use this additional weighting.
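For those who like to see the mechanics spelt out, here is a rough sketch of how that sort of turnout weighting and don’t-know reallocation works. It is purely illustrative – the function and the toy respondents are mine, not ICM’s actual procedure, and it leaves out the demographic and past-vote weighting (and the down-weighting of 2010 non-voters) entirely – but the parameter values follow the description above.

```python
from collections import defaultdict

def icm_style_shares(respondents):
    """Each respondent is a dict with 'intention' (party or None for don't know),
    'likelihood' (0-10 certainty to vote) and 'vote_2010' (party or None)."""
    totals = defaultdict(float)
    for r in respondents:
        turnout_weight = r["likelihood"] / 10.0          # 10/10 counts fully, 9/10 as 0.9...
        if r["intention"]:
            totals[r["intention"]] += turnout_weight
        elif r["vote_2010"]:
            # don't knows reallocated to their 2010 party at half the value
            totals[r["vote_2010"]] += 0.5 * turnout_weight
    grand_total = sum(totals.values())
    return {party: round(100 * n / grand_total, 1) for party, n in totals.items()}

sample = [
    {"intention": "Lab", "likelihood": 9,  "vote_2010": "Lab"},
    {"intention": None,  "likelihood": 8,  "vote_2010": "LD"},   # a don't know
    {"intention": "Con", "likelihood": 10, "vote_2010": "Con"},
]
print(icm_style_shares(sample))   # {'Lab': 39.1, 'LD': 17.4, 'Con': 43.5}
```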

Ashcroft

Lord Ashcroft commissions a regular weekly poll, carried out by other polling companies but on a “white label” basis. The methods are essentially those Populus used to use for their telephone polls, rather than the online methods Populus now use for their own regular polling. Ashcroft polls are weighted by standard demographics and by past vote, adjusted for false recall.

Ashcroft’s voting intention question has included UKIP in the main prompt since 2015. People who say they don’t know how they will vote are reallocated based on how they say they voted at the previous election, but at a different ratio to ICM (Ashcroft weights Conservatives and Labour down to 50%, Lib Dems down to 30%, others I think are ignored). In terms of likelihood to vote, Ashcroft weights people according to how likely they say they are to vote in a similar way to ICM.

ComRes

ComRes do a monthly telephone poll, previously for the Independent but since 2015 for the Daily Mail. This is separate to their monthly online poll for the Independent on Sunday and there are some consistent differences between their results, meaning I treat them as two separate data series. ComRes’s polls are weighted using standard demographics and past vote, adjusted for false recall – in much the same way as ICM and Ashcroft.

ComRes have included UKIP in their main voting intention prompt since late 2014. People who say they don’t know how they will vote or won’t say are asked a squeeze question on how they would vote if it was a legal requirement, and included in the main figures. People who still say don’t know are re-allocated based on the party they say they most closely identify with, though unlike the ICM and Ashcroft reallocation this rarely seems to make an impact. In terms of likelihood to vote ComRes both filter AND weight by likelihood to vote – people who say they are less than 5/10 likely to vote are excluded completely, people who say they are 5/10 to 10/10 are weighted according to this likelihood.
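The practical difference between weighting alone (ICM, Ashcroft) and ComRes’s filter-and-weight approach is easiest to see side by side. Again, this is only a sketch of the rules as described above, not any company’s actual code.

```python
def weight_only(likelihood):
    """ICM/Ashcroft-style: everyone counts, scaled by stated likelihood (0-10)."""
    return likelihood / 10.0                 # 9/10 certain counts as 0.9 of a vote

def filter_and_weight(likelihood):
    """ComRes-style: below 5/10 excluded entirely, 5/10 to 10/10 weighted by likelihood."""
    if likelihood < 5:
        return 0.0
    return likelihood / 10.0
```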

Online Polls

Online poll sampling can be somewhat more opaque than telephone sampling. In most cases polls are conducted through existing panels of online volunteers (either the pollster’s own panel, like the YouGov panel or PopulusLive, or panels from third party providers like Toluna and Research Now). Surveys are conducted by inviting panellists with the required demographics to complete the poll – this means that while panels are self-selecting, individual surveys aren’t (that is, you can choose to join a company’s online panel, but you can’t choose to fill in their March voting intention survey, you may or may not get randomly invited to it). Because panellists’ demographics are known in advance, pollsters can set quotas and invite people with the right mix of demographics to reflect the British public. Some pollsters also use random online sampling – using pop-ups on websites to randomly invite respondents. As with telephone polling, all online pollsters use some common demographic weighting, with all companies weighting by things like age, gender, region and social class.

YouGov

YouGov are the longest standing online pollster, currently doing daily voting intention polls for the Sun and Sunday Times. The length of time they have been around means they have data on their panellists from the 2010 election (and, indeed, in some cases from the 2005 election) so their weighting scheme largely relies on the data collected from panellists in May 2010, updated periodically to take account of people who have joined the panel since then. As well as standard demographics, YouGov also weight by newspaper readership and party identification in 2010 (that is, people are weighted by which party they told YouGov they identified with most in May 2010, using targets based on May 2010).

YouGov have included UKIP in their main prompt since January 2015. They do not use any weighting or filtering by likelihood to vote at all outside of the immediate run up to elections (in the weeks leading up to the 2010 election they weighted by likelihood to vote in a similar way to Ashcroft, Populus and ICM). People who say don’t know are excluded from the final figures; there is no squeeze question or reallocation.

Populus

Populus used to conduct telephone polling for the Times, but since ceasing to work for the Times have switched to carrying out online polling, done using their PopulusLive panel. Currently they publish two polls a week, on Mondays and Fridays. As well as normal demographic weightings they weight using party identification, weighting current party ID to estimated national targets.

Populus have included UKIP in their main prompt since February 2015. They weight respondents according to their likelihood to vote in a similar way to ICM and Ashcroft. People who say don’t know are excluded from the final figures; there is no squeeze question or reallocation.

ComRes

Not to be confused with their telephone polls for the Daily Mail, ComRes also conduct a series of monthly online polls for the Independent on Sunday and Sunday Mirror. These are conducted partly from a panel and partly from random online sampling (pop-ups on websites directing people to surveys). In addition to normal demographic weightings they weight using people’s recalled vote from the 2010 election.

ComRes have included UKIP in their main prompt since December 2014. Their weighting by likelihood to vote is slightly different to their telephone polls – for the Conservatives, Labour and Liberal Democrats it’s the same (include people who say 5+/10, and weight those people according to their likelihood) but for UKIP and Green I believe respondents are only included if they are 10/10 certain to vote. Their treatment of don’t knows is the same as in their phone polls: people who say they don’t know how they will vote or won’t say are asked a squeeze question and included in the main figures, people who still say don’t know are re-allocated based on the party they say they most closely identify with.

Survation

Survation do a regular poll for the Daily Mirror and occasional polls for the Mail on Sunday. Data is weighted by the usual demographics, but uses income and education rather than social class. Recalled 2010 vote is used for political weighting. Survation have included UKIP in their main prompt for several years. They weight by likelihood to vote in the same way as ICM, Populus and Ashcroft. People who say don’t know are reallocated to the party they voted for in 2010, but weighted down to 30% of the value of people who actually give a voting intention.

Note that Survation’s constituency polls are done using a completely different method to their national polls, using telephone sampling rather than online sampling and different weighting variables.

Opinium

Opinium do regular polling for the Observer, currently every week for the duration of the election campaign. Respondents are taken from their own panel and the data is weighted by standard demographics. Historically Opinium have not used political weighting, but from February 2015 they switched to weighting by “party propensity” for the duration of the election campaign. This is a variable based on which parties people would and wouldn’t consider voting for – for practical purposes, it seems to be similar to party identification.

Opinium do not include UKIP in their main prompt (meaning they only appear as an option if a respondent selects “other”). They filter people by likelihood to vote, including only respondents who say they will definitely or probably vote. People who say don’t know are excluded from the final figures.

TNS

TNS are a huge global company with a long history in market research. In terms of public opinion polling in this country they are actually the successors to System Three – who used to be a well known Scottish polling company and ended up part of the same company through a complicated series of mergers and buy-outs by BMRB, NFO, Kantar and WPP, currently their ultimate parent company. At the last election TNS were the final company doing face-to-face polling, since then they have switched over to online. The sample is taken from their Lightspeed panel and is weighted using standard demographics and recalled 2010 vote. TNS do include UKIP in their main prompt, and also prompt for the BNP and Green. TNS filter and weight people according to likelihood to vote and exclude don’t knows and won’t says from their final figures.

Putting all those together, here’s a summary of the methods.

[Table: summary of each company’s methods]

As to the impact of the different methods, it is not always easy to say. Some are easy to quantify from published tables (for example, ICM and Ashcroft publish their figures before and after don’t knows are reallocated, so one can comfortably say “that adjustment added 2 points to the Lib Dems this week”), others are very difficult to quantify (the difference the choice of weighting regimes makes is very difficult to judge, the differences between online and telephone polling even more so). Many methods interact with one another, and the impacts of different approaches change over time (a methodology that helps the Tories one year may help the Lib Dems another year as public opinion changes). Rather than guess whether each pollster’s methods are likely to produce this effect or that, it is probably best to judge them from actual observed results.

UPDATE:
TNS have confirmed they do prompt for UKIP, and also prompt for the BNP and Green – I’ll update the table later on tonight.


Survation have a new poll out in Sheffield Hallam which gives a ten point lead to Labour. Naturally this has produced a lot of crowing from people who don’t much like Nick Clegg and some possibly unwise comments from Nick Clegg about the poll being “bilge”, commissioned by the Labour-affiliated Unite (which it was, but it shouldn’t make any difference to the voting intention figures). Tabs are here.

The poll has been compared to Lord Ashcroft’s one last year which showed Nick Clegg ahead in his seat, albeit only narrowly. The reason for the difference is nothing at all to do with who commissioned the polls though, and everything to do with differences between the methodology Ashcroft uses and the methodology Survation use for all their clients (Unite, and anyone else).

One difference that people commented on yesterday is that Lord Ashcroft uses political weighting in his constituency polls, but Survation do not. This has the potential to make a sizeable difference in the results, but I don’t think it is the case here – looking at the recalled vote in Survation’s poll it looks fairly close to what actually happened, weighting by past vote would probably have bumped up the Lib Dems a little, but the reason the Lib Dems are so far behind is not because of the weighting, it’s because more than half of the people who voted Lib Dem in 2010 aren’t currently planning on doing so again.

However, there are other methodology differences that probably do explain the gap between the Ashcroft poll and the Survation one. If we start off with the basic figures each company found we get this:

In Survation’s poll the basic figures, weighted by likelihood to vote, were CON 22, LAB 33, LD 23, UKIP 9
In Ashcroft’s poll the basic figures, weighted for likelihood to vote, were CON 23, LAB 33, LD 17, UKIP 14

Both had a chunky Labour lead – in fact, Ashcroft’s was slightly bigger than Survation’s. Ashcroft however did two things that Survation did not do. He asked a two-stage question, asking people their general voting intention and then how they would vote in their own constituency, and he reallocated don’t knows.

When Lord Ashcroft does constituency polls he asks a standard voting intention question, then asks people to think about their own constituency. This makes a minimal difference in most seats, where people’s “real” support is normally the same as how they actually vote. In seats with Lib Dem MPs it often makes a massive difference, presumably because tactical voting and incumbency are so much more important for Lib Dem MPs than those from any other party.

This is a large part of the difference between Survation and Ashcroft. In Ashcroft’s second question, asking people to think about their own constituency, he found figures of CON 18%, LAB 32%, LD 26%, UKIP 14% – so the two-stage constituency question added 9 percentage points to the Lib Dems. Survation actually asked people to think about their constituency in their question, probably explaining why they had the Lib Dems 6 points higher than Ashcroft’s first question did, but I think the constituency prompt has more effect when it is asked as a second question, and respondents are given a chance to register their “national choice” first.

The other significant methodological difference is how Survation and Ashcroft treat people who say don’t know. In their local constituency polls Survation just ignore don’t knows, while Ashcroft reallocates them based on how they voted at the previous election, reallocating a proportion of them back to the party they previously voted for. Currently this helps the Liberal Democrats (something we also see in ICM’s national polls), as there are a lot of former Lib Dems out there telling pollsters they don’t know how they will vote.

In this particular case the reallocation of don’t knows changed Ashcroft’s final figures to CON 19, LAB 28, LD 31, UKIP 11, pushing the Lib Dems up into a narrow first place. Technically I think there was an error in Ashcroft’s table – they seem to have reallocated all don’t knows, rather than the proportion they normally do. Done correctly the Lib Dems and Labour would probably have been closer together, or Labour a smidgin ahead, but the fact remains that Ashcroft’s method produces a tight race, Survation’s a healthy looking Labour lead.

So which one is right?

The short answer is we don’t know for sure.

Personally I have confidence in the two-stage constituency question. It’s something I originally used in marginal polling for PoliticsHome back in 2008 and 2009, to address the problem that any polling of Lib Dem seats always seemed to show a big jump for Labour and a collapse for the Lib Dems. This would look completely normal these days of course, but you used to find the same thing in polls when Labour were doing badly nationally and the Lib Dems well. My theory was that when people were asked about their voting intention they did not factor in any tactical decisions they might actually make – that is, if you were a Labour supporter in a LD-v-Con seat you might tell a pollster you’d vote Labour because they were the party you really supported, but actually vote Lib Dem as a tactical anti-Tory vote. The way that it only has a significant effect in Lib Dem seats has always given me some confidence it is working, and that people aren’t just feeling obliged to give a different answer – the overwhelming majority of people answer the same to both questions.

However the fact is the two-stage constituency question is only theoretical – it hasn’t been well tested. Going back to its original use for the PoliticsHome marginal poll back in 2009, polling in Lib Dem seats using the normal question found vote shares of CON 41, LAB 17, LDEM 28. Using the locally prompted second question the figures became CON 37, LAB 12, LDEM 38. In reality those seats ended up voting CON 39, LAB 9, LDEM 45. Clearly in that sense the prompted question gave a better steer to how well the Lib Dems were doing in their marginals… but the caveats are very heavy (it was 9 months before the election, so people could just have changed their minds, and it’s only one data point anyway). I trust the constituency prompted figures more, but that’s a personal opinion – the evidence isn’t there for us to be sure.

As to the reallocation of don’t knows, I’ve always said it is more a philosophical decision than a right or wrong one. Should pollsters only report how respondents say they would vote in an election tomorrow, or should they try and measure how they think people actually would vote in an election tomorrow? Is it better to only include those people who give an opinion, even if you know that the undecideds you’re ignoring appear more likely to favour one party than another, or is it better to make some educated guesses about how those don’t knows might split based on past behaviour?

Bottom line, if you ask people in Sheffield Hallam how they would vote in a general election tomorrow, Labour have a lead, varying in size depending on how you ask. However, there are lots of people who voted for Nick Clegg in 2010 who currently tell pollsters they don’t know how they would vote, and if a decent proportion of those people in fact end up backing Nick Clegg (as Ashcroft’s polling assumes they will) the race would be much closer.



Polls often give contrasting results. Sometimes this is because they were done at different times and public opinion has actually changed, but most of the time that’s not the reason. A large part of the difference between polls showing different results is often simple random variation, good old margin of error. We’ve spoken about that a lot, but today’s post is about the other reason: systemic differences between pollsters (or “house effects”).

Pollsters use different methods, and sometimes those different choices result in consistent differences between the results they produce. One company’s polls, because of the methodological choices they make, may consistently show a higher Labour score, or a lower UKIP score, or whatever. This is not a case of deliberate bias – unlike in the USA there are not Conservative pollsters or Labour pollsters, every company is non-partisan – but the effect of their methodological decisions means some companies do have a tendency to produce figures that are better or worse for each political party. We call these “house effects”.

[Chart: house effects of each pollster, based on polls published in 2014]

The graph above shows these house effects for each company, based upon all the polls published in 2014 (I’ve treated ComRes telephone and ComRes online polls as if they are separate companies, as they use different methods and have some consistent differences). To avoid any risk of bias from pollsters conducting more or fewer polls when a party is doing well or badly, I work out the house effects by using a rolling average of the daily YouGov poll as a reference point – I see how much each poll departs from the YouGov average on the day its fieldwork finished and take an average of those deviations over the year. Then I take the average of all those deviations and graph them relative to that (just so YouGov aren’t automatically in the middle). It’s important to note that the pollsters in the middle of the graph are not necessarily more correct – these differences are relative to one another. We can’t tell what the deviations are from the “true” figure, as we don’t know what the “true” figure is.
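For anyone who wants to reproduce something similar, the calculation is roughly as follows. This is a sketch with made-up column names and an arbitrary seven-day rolling window, not the exact code behind the graph.

```python
import pandas as pd

def house_effects(polls: pd.DataFrame, party: str, window: int = 7) -> pd.Series:
    """polls: one row per published poll, with columns 'end_date' (fieldwork end),
    'pollster' and a share column for each party. Returns each pollster's average
    deviation for `party`, re-centred on the mean deviation."""
    # Reference series: rolling average of the daily YouGov figures, one value per day
    yougov = (polls.loc[polls["pollster"] == "YouGov"]
                   .groupby("end_date")[party].mean()
                   .sort_index()
                   .rolling(window, min_periods=1).mean())
    others = polls.loc[polls["pollster"] != "YouGov"].copy()
    # How far each poll sits from the YouGov average on the day its fieldwork finished
    others["deviation"] = others[party].values - yougov.reindex(others["end_date"]).values
    per_pollster = others.groupby("pollster")["deviation"].mean()
    # Re-centre on the average deviation so YouGov isn't automatically in the middle
    return per_pollster - per_pollster.mean()
```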

As you can see, the differences between the Labour and Conservative leads each company shows are relatively modest. Leaving aside TNS, who tended to show substantially higher Labour leads than other companies, everyone else is within 2 points of each other. Opinium and ComRes phone polls tend to show Labour leads that are a point higher than average, MORI and ICM tend to show Labour leads that are a point lower than average. Ashcroft, YouGov, ComRes online and Populus tend to be about average. Note I’m comparing the Conservative-v-Labour gap between different pollsters, not the absolute figures for each party. Populus, for example, consistently give Labour a higher score than Lord Ashcroft’s polls do… but they do exactly the same for the Conservatives, so when it comes to the party lead the two sets of polls tend to show much the same.

There is a much, much bigger difference when it comes to measuring the level of UKIP support. The most “UKIP friendly” pollster, Survation, tends to produce a UKIP figure that is almost 8 points higher than the most “UKIP unfriendly” pollster, ICM.

What causes the differences?

There are a lot of methodological differences between pollsters that make a difference to their end results. Some are very easy to measure and quantify, others are very difficult. Some contradict each other, so a pollster may do something that is more Tory than other pollsters, something that is less Tory than other pollsters, and end up in exactly the same place. They may interact with each other, so weighting by turnout might have a different effect on an online poll than on a telephone poll. Understanding the methodological differences is often impossibly complicated, but here are some of the key factors:

Phone or online? Whether polls get their sample from randomly dialling telephone numbers (which gives you a sample made up of the sort of people who answer cold calls and agree to take part) or from an internet panel (which gives you a sample made up of the sort of people who join internet panels) has an effect on sample make up, and sometimes that has an effect on the end result. It isn’t always the case – for example, raw phone samples tend to be more Labour inclined… but this can be corrected by weighting, so phone samples don’t necessarily produce results that are better for Labour. Where there is a very clear pattern is on UKIP support – for one reason or another, online polls show more support for UKIP than phone polls. Is this because people are happier to admit supporting UKIP when there isn’t a human interviewer? Or is it because online samples include more UKIP-inclined people? We don’t know.

Weighting. Pollsters weight their samples to make sure they are representative of the British population and to iron out any skews and biases resulting from their sampling. All companies weight by simple demographics like age and gender, but more controversial is political weighting – using past vote or party identification to make sure the sample is politically representative of Britain. The rights and wrongs of this deserve an article in their own right, but in terms of comparing pollsters: most companies weight by past vote from May 2010, YouGov weight by party ID from May 2010, Populus by current party ID, and MORI and Opinium don’t use political weighting at all. This means MORI’s samples are sometimes a bit more Laboury than those of other phone companies (but see their likelihood to vote filter below); Opinium have speculated that their comparatively high level of UKIP support may be because they don’t weight politically; and Populus tend to heavily weight down UKIP and the Greens.

Prompting. Doesn’t actually seem to make a whole lot of difference, but was endlessly accused of doing so! This is the list of options pollsters give when asking who people vote for – obviously, it doesn’t include every single party – there are hundreds – but companies draw the line in different places. The specific controversy in recent years has been UKIP and whether or not they should be prompted for in the main question. For most of this Parliament only Survation prompted for UKIP, and it was seen as a potential reason for the higher level of UKIP support that Survation found. More recently YouGov, Ashcroft and ComRes have also started including UKIP in their main prompt, but with no significant effect upon the level of UKIP support they report. Given that in the past testing found prompting was making a difference, it suggests that UKIP are now well enough established in the public mind that whether the pollster prompts for them or not no longer makes much difference.

Likelihood to vote. Most companies factor in respondents’ likelihood to vote somehow, but using sharply varying methods. Most of the time Conservative voters say they are more likely to vote than Labour voters, so if a pollster puts a lot of emphasis on how likely people are to actually vote it normally helps the Tories. Currently YouGov put the least emphasis on likelihood to vote (they just include everyone who gives an intention), companies like Survation, ICM and Populus weight according to likelihood to vote, which is a sort of mid-way point, and Ipsos MORI have a very harsh filter, taking only those people who are 10/10 certain to vote (this probably helps the Tories, but MORI’s weighting is probably quite friendly to Labour, so it evens out).

Don’t knows. Another cause of the differences between companies is how they treat people who say don’t know. YouGov and Populus just ignore those people completely. MORI and ComRes ask those people “squeeze questions”, probing to see if they’ll say who they are most likely to vote for. ICM, Lord Ashcroft and Survation go further and make some estimates about those people based on their other answers, generally assuming that a proportion of people who say don’t know will actually end up voting for the party they did last time. How this approach impacts on voting intention numbers depends on the political circumstances at the time; it tends to help any party that has lost lots of support. When ICM first pioneered it in the 1990s it helped the Tories (and was known as the “shy Tory adjustment”); these days it helps the Lib Dems, and goes a long way to explain why ICM tend to show the highest level of support for the Lib Dems.

And these are just the obvious things – there will be lots of other subtle or unusual differences (ICM weight down people who didn’t vote last time, Survation ask people to imagine all parties are standing in their seat, ComRes have a harsher turnout filter for smaller parties in their online polls, etc, etc).

Are they constant?

No. The house effects of different pollsters change over time. Part of this is because political circumstances change and the different methods have different impacts. I mentioned above that MORI have the harshest turnout filter and that most of the time this helps the Tories, but that isn’t set in stone – if Tory voters became disillusioned and less likely to vote and Labour voters became more fired up it could reverse.

It also isn’t consistent because pollsters change methodology. In 2014 TNS tended to show bigger Labour leads than other companies, but in their last poll they changed their weighting in a way that may well have stopped that. In February last year Populus changed their weights in a way that reduced Lib Dem support and increased UKIP support (and changed even more radically in 2013 when they moved from telephone to online polling). So don’t assume that because a pollster’s methods last year had a particular skew it will always be that way.

So who is right?

At the end of the day, what most people asking the question “why are those polls so different” really want to know is which one is right. Which one should they believe? There is rarely an easy answer – if there was, the pollsters who were getting it wrong would correct their methods and the differences would vanish. All pollsters are trying to get things right.

Personally speaking, I obviously think YouGov polls are right, but all the other pollsters out there will think the same thing about the polling decisions they’ve made, and I’ve always tried to make UKPollingReport about explaining the differences so people can judge for themselves, rather than championing my own polls.

Occasionally you get an election when there is a really big spread across the pollsters, when some companies clearly get it right and others get it wrong, and those who are wrong change their methods or fade away. 1997 was one of those elections – ICM clearly got it right when others didn’t, and other companies mostly adopted methods like those of ICM or dropped out of political polling. These instances are rare though. Most of the time all the pollsters show about the same thing and are all within the margin of error of each other, so we never really find out who is “right” or “wrong” (as it happens, the contrast between the level of support for UKIP shown by different pollsters is so great that this may be an election where some polls end up being obviously wrong… or come the election the polls may end up converging and all showing much the same. We shall see).

In the meantime, with an impartial hat on all I can recommend is to look at a broad average of the polls. Sure, some polls may be wrong (and it’s not necessarily the outlying pollster showing something different to the rest – sometimes they’ve turned out to be the only one getting it right!) but it will at least help you steer clear of the common fallacy of assuming that the pollster showing results you like the most is the one that is most trustworthy.


I hope most of my regular readers would assume a Daily Express headline about a “poll” showing 80% of people want to leave the EU was nonsense anyway, but it’s a new year, a new election campaign, and it’s probably worth writing again about why these things are worthless and misleading as measures of public opinion. If nothing else, it will give people an explanation to point rather overexcited people on Twitter towards.

The Express headline is “80% want to quit the EU, Biggest poll in 40 years boosts Daily Express crusade”. This doesn’t actually refer to a sampled and weighted opinion poll, but to a campaign run by two Tory MPs (Peter Bone and Philip Hollobone) and a Tory candidate (Thomas Pursglove) consisting of them delivering their own ballot papers to houses in their constituencies. They apparently got about 14,000 responses, which is impressive as a campaigning exercise, but doesn’t suddenly make it a meaningful measure of public opinion.

Polls are meaningful only to the extent that they are representative of the wider public – if they contain the correct proportions of people of different ages, of men and women, of different social classes and incomes, and from different parts of the country, then we can hope they will also hold the same views as the population as a whole. Just getting a lot of people to take part does not in any way guarantee that the balance of people who end up taking the poll will be representative.
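A toy example makes the point. The figures below are invented, but they show how a properly conducted poll corrects a lopsided sample by weighting each group back to its share of the population – and why a self-selecting pile of 14,000 replies that simply misses the people who disagree can’t be rescued by its size.

```python
# Invented figures: a sample skewed towards older respondents is weighted back
# to the population's age profile. Weighting can fix a known skew like this;
# it cannot conjure up the groups of people a self-selecting exercise never reaches.
population_share = {"18-34": 0.28, "35-59": 0.42, "60+": 0.30}
sample_share     = {"18-34": 0.10, "35-59": 0.35, "60+": 0.55}

weights = {group: population_share[group] / sample_share[group]
           for group in population_share}
print(weights)   # 18-34s weighted up (x2.8), over-60s weighted down (x0.55)
```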

I expect lots of people who aren’t familiar with how polling works will see a claim like this, see that 14,000 took part, and think it must therefore be meaningful (in the same way, a naive criticism of polls is often that they only interview 1000 people). The best example of why this doesn’t work was the polling for the 1936 Presidential election in the USA, which heralded modern polling and tested big sample sizes to destruction. Back then the most well known poll was that done by a magazine, the Literary Digest. The Literary Digest too sent out ballot papers to as many people as it could – it sent them to its subscribers, to other subscription lists, to everyone in the phone directory, to everyone with a car, etc, etc. In 1936 it sent out 10 million ballot papers and received 2.4 million responses. Based on these replies, they confidently predicted that the Republican candidate Alf Landon would win the election. Meanwhile the then little known George Gallup interviewed just a few thousand people, but using proper demographic quotas to get a sample that was representative of the American public. Gallup’s data predicted a landslide win for the Democrat candidate Franklin D Roosevelt. Gallup was of course right, the Literary Digest embarrassingly wrong. The reason was that the Literary Digest’s huge sample of 2.4 million was drawn from the sort of people who had telephones, cars and magazine subscriptions and, in depression era America, these people voted Republican.

Coming back to the Express’s “poll”, a campaign about leaving Europe run by three Tory election candidates in the East Midlands is likely to largely be responded to by Conservative sympathisers with strong views about Europe, hence the result. Luckily we have lots of properly conducted polls that are sampled and weighted to be representative of the whole British public, and they consistently show a different picture. There are some differences between different companies – YouGov ask it a couple of times a month and find support for leaving the EU varying between 37% and 44%, Survation asked a couple of months ago and found support for leaving at 47%, Opinium have shown it as high as 48%. For those still entranced by large sample sizes, Lord Ashcroft did a poll of 20,000 people on the subject of Europe last year (strangely larger than the Express’s “largest poll for 40 years”!) and found people splitting down the middle: 41% stay, 41% leave.

And that’s about where we are – there’s some difference between different pollsters, but the broad picture is that the British public are NOT overwhelmingly in favour of leaving the EU, they are pretty evenly divided over whether to stay in the European Union or not.


Rob Hayward, the former Tory MP turned psephologist, gave a presentation at ComRes on Monday which has stirred up some comment about whether the polls are underestimating Conservative support.

Historically the polls have tended to underestimate Conservative support and/or overestimate Labour support. It was most notable in 1992, but was a fairly consistent historical pattern anyway. Since the disaster of 1992 this bias has steadily reduced as pollsters have gradually switched methods and adopted some form of political control or weighting on their samples. In 2010 – at last! – the problem seemed to have been eliminated. I hope that the polling industry has now tackled and defeated the problem of Labour bias in voting intention polls, but it would be hubris to assume that because we’ve got it right once the problem has necessarily gone away and we don’t need to worry about it anymore.

In his presentation Rob compared polls last year with actual elections – the polls for the European elections, for the by-elections and for the local elections.

I looked at how the polls for the European election did here and have the same figures as Rob. Of the six pollsters who produced figures within a week or so of the election five underestimated Conservative support. The average level of Tory support across those polls was 22.2%, the Tories actually got 23.9%. The average for Labour was 27%, when they actually got 25.4%.

Looking at by-elections, Rob has taken ten by-election polls from 2014 and compared them to results. Personally I’d be more wary. By-election campaigns can move fast, and some of those polls were taken a long time before the actual campaign – the Clacton polls, for example, were conducted a month before the actual by-election took place, so any difference between the results and the polling could just as likely be a genuine change in public opinion. Taking those polls done within a week or so of the actual by-elections shows the same pattern though – Conservatives tend to be underestimated (except in Heywood and Middleton), Labour tends to be overestimated.

Finally in Rob’s presentation he has a figure for polls at the local elections in 2014. I think he’s comparing the average of national Westminster polls at the time with Rallings and Thrasher’s national equivalent vote share (NEQ), which I certainly wouldn’t recommend – the Lib Dems, for example, always do better in local election NEQ than in national polls, but that’s because they are different types of election, not because the national polls are wrong. As it happens there was at least one actual local election poll, from Survation.

Survation local election: CON 24%, LAB 36%, LDEM 13%, UKIP 18%, Others 10%
R&T local election vote: CON 26%, LAB 36%, LDEM 11%, UKIP 16%, Others 12%

Comparing it to the actual result (that is, the actual total votes cast at the local election, which is what Survation were measuring, NOT the National Equivalent Vote), these figures were actually pretty good, especially given the sample size was only 312 and that it will be skewed in unknown ways by multi-member wards. That said, the pattern is the same – it’s the Conservatives who are a couple of points too low, Labour spot on.

So, Rob is right to say that polls in 2014 that could be compared to actual results tended to show a skew away from the Conservatives and towards Labour. Would it be right to take a step on from that and conclude that the national Westminster polls are showing a similar pattern? Well, let me throw out a couple of caveats. To take the by-election polls first, these are conducted solely by two companies – Lord Ashcroft and Survation… and in the case of Survation they are done using a completely different method to Survation’s national polling, so cannot reasonably be taken as an indication of how accurate their national polling is. ICM is a similar case: their European polling was done online, while all their GB Westminster polling is done by telephone. None of these examples includes any polling from MORI or Populus, or from ComRes’s telephone operation – in fact, given that there were no telephone based European polls, the comparison doesn’t include any GB phone polls at all, and looking at the house effects of different pollsters, online polls tend to produce more Labour-friendly figures than telephone polls do.

So what can we conclude? Well, looking at the figures, by-election polls do seem to produce figures that are a bit too Laboury, but I’d be wary of assuming that the same pattern necessarily holds in national polls (especially given Survation use completely different methods for their constituency polling). At the European elections the polls also seemed to be a bit Laboury… but the pollsters who produced figures for that election included those that tend to produce the more Laboury figures anyway, and didn’t include any telephone pollsters. It would be arrogant of me to rule out the possibility that the old problems of pro-Labour bias may return, but for the time being consider me unconvinced by the argument.

UPDATE: Meanwhile the Guardian have published their monthly ICM poll, with topline figures of CON 30%(+2), LAB 33%(nc), LDEM 11%(-3), UKIP 11%(-3), GRN 9%(+4) – another pollster showing a significant advance for the Green party.