Or at least, questions that should be treated with a large dose of scepticism. My heart always sinks when I see questions like those below in polls. I try not to put up posts here just saying “this poll is stupid, ignore it”, but sometimes it’s hard. As ever, I’d much rather give people the tools to do it themselves, so here are some types of question that do crop up in polls and that you really should handle with care.

I have deliberately not linked to any examples of these in polls, though they are sadly all too easy to find. This is not an attempt to criticise any polling companies or polls in particular: only he who is without sin should cast the first stone, you can find rather shonky questions from all companies, and I’m sure I’ve written surveys myself that committed all of the sins below. Note that these are not necessarily misleading or biased questions; there is nothing unethical or wrong with asking them. They are just a bit rubbish, and often lead to simplistic or misleading interpretations, particularly when they are asked in isolation rather than as part of a longer poll that properly explores the issues. It’s the difference between a question you’d downright refuse to run if a client asked for it, and a question you’d advise a client could probably be asked much better. The point of this article, however (while I hope it will also encourage clients not to ask rubbish questions), is to warn you, the reader of research, when a polling question really should be read with caution.

Is the government doing enough of a good thing?

A tricky question. This is obviously trying to gauge a very legitimate and real opinion – the public do often feel that the government hasn’t done enough to sort out a problem or issue. The question works perfectly well when there are genuinely two sides to the issue and it is intrinsically one of balance, where you can ask if people think the government has not done enough, has gone too far, or has got the balance about right. The problem comes when the aim is not contentious, and it really is just a question of doing enough – stopping tax evasion, or cutting crime, for example. Very few people are going to think that a government has done too much to tackle tax evasion, or pushed crime down too low (“Won’t someone think of the poor tax evaders?”, “A donation of just £2 could buy Fingers McStab a new cosh”), so the question is set up from the beginning to fail. The problem can be alleviated a bit with wording like “Should be doing more” vs “Has done all they reasonably can be expected to do”, but even then you should treat questions like this with some caution.

Would you like the government to give you a pony?*
(*Hat tip to Hopi Sen)

Doesn’t everyone like nice things? There is nothing particularly wrong with questions like this where the issue in question is controversial and something that people might disagree with. It’s perfectly reasonable to ask whether the government should be spending money on introducing high speed rail, or shooting badgers, or whatever. The difficulty comes when you are asking about something that is generally seen as a universal good – what are essentially valence issues. Should the government spend extra on cutting crime, or educating children, or helping save puppies from drowning? Questions like this are meaningless unless the downside is there as well – “would you like the government to buy you a pony if it meant higher taxes?”, “would you like the government to buy you a pony, or could the money be better spent elsewhere?”

How important is worthy thing? Or How concerned are you about nasty thing?

Asking if people support or oppose policies, parties or developments is generally pretty straightforward. Measuring the salience of issues is much harder, because you run into problems of social desirability bias and of taking issues out of context. For example, in practical terms most people don’t actually do much about third world poverty, but ask them if they care about children starving to death in Africa and you’d need a heart of stone to say you don’t. The same applies to anything that sounds very important and worthy: people don’t want to sound ignorant and uninterested by saying they don’t really care much. If you ask whether people care about an issue, are concerned about it or think it is important, they will invariably say they do care, they are concerned and it is important. The rather more pertinent question, which is not always asked, is whether it is important when considered alongside all the other important issues of the day. The best way of measuring how important people think an issue is will always be to give them a list of issues and ask them to pick out those they consider most important (or better, just give them an empty text box and ask them what issues matter to them).

Will policy X make you more likely to vote for this party?

This is a very common question structure, and probably one I’ve written more rude things about on this site than any other type of question. There are contexts where it can work, so long as it is carefully interpreted and is asked about a factor that is widely acknowledged to be a major driver of voting intention. For example, it’s sometimes used to ask if people would be more or less likely to vote for a party if a different named politician was leader (though questions like that have their own issues).

It becomes much more problematic when it is used as a way of showing an issue is salient. Specifically, it is often used by campaigning groups to try and make out that whatever issue they are campaigning on will have an important impact on votes (and that, therefore, MPs should take it seriously or their jobs may be at risk). This is almost always untrue. Voting intention is overwhelmingly driven by broad brush themes: party identification, perceptions of the party leaders, perceived competence on the big issues like the economy, health or immigration. It is generally NOT driven by specific policies on low salience issues.

However, if you ask people directly whether a specific policy on a low salience issue would make them more or less likely to vote for a party, they will normally claim it would. This is for a number of reasons. One is that you are taking that single issue and giving it false prominence, when in reality it would be overshadowed by big issues like the economy, health or education. The second is that people tend to use the question simply as a way of signalling whether they like a policy or not, regardless of whether it would actually change their vote. The third is that it takes little account of current voting behaviour – you’ll often find that the people saying a policy makes them more likely to vote Tory are people who would vote Tory anyway, and the people saying it makes them less likely are people who wouldn’t vote Tory if hell froze over.

There are ways to try and get round this problem – in the past YouGov used to offer “Makes no difference – I’d vote party X anyway” and “Makes no difference – I wouldn’t vote for party X” options, to strip out all those committed voters whose opinion wouldn’t actually change. In some of Lord Ashcroft’s polling he has given people the options of saying they support a policy and it might change their vote, or that they support it but it wouldn’t change their vote. The best way I’ve come up with of doing it is to give people a long list of issues that might influence their vote, get them to tick the top three or four, and only then ask whether the issue would make them more or less likely to vote for a party (like we did here for gay marriage, for example). This tends to show that many issues have little or no effect on voting intention, which is rather the point.

Should the government stop and think before going ahead with a policy?

This is perhaps the closest I’ve seen to a “when did you stop beating your wife?” question in recent polls. Obviously it carries the assumption that the government has not already stopped to consider the implications of the policy. Matthew Parris once wrote about a rule of thumb for understanding political rhetoric: if the opposite of a political statement is something that no one could possibly argue for, the statement itself is meaningless fluff. So a politician arguing for better schools is fluff, because no one would ever get up to the podium to argue for worse schools. These sorts of questions fall into the same trap – no one would argue the opposite, that the best thing for the government to do is to pass laws willy-nilly without regard for the consequences, so people will agree with the statement on almost any subject. Agreement does NOT necessarily indicate opposition to or doubt about the policy in question, just a general preference for sound decision making.

Do you agree with pleasant uncontentious thing, and that therefore we should do slightly controversial thing?

Essentially the problem here is one of combining two statements within a single agree/disagree statement, and therefore not giving people the chance to agree with one but not the other. For example, “Children are the future, and therefore we should spend more on education” – people might well agree that children are the future (in fact, it’s relatively hard not to), but might not agree with the course of action the question presents as a natural consequence.

How likely are you to do the right thing? Or Would you signify your displeasure at a nasty thing?

Our final duff question falls foul of social desirability bias. People are more likely to say they’ll do the right thing in a poll than they are to actually do it in real life. This is entirely as you’d expect. In real life the right thing is sometimes hard to do: it might involve giving money when you really don’t have much to spare, donating your time to volunteer, or inconveniencing yourself by boycotting something convenient or cheap in favour of something ethical. Answering a poll isn’t like that – you just have to tick the box saying that you would probably do the right thing. Easy as pie. So any poll where you see loads of people saying they’d volunteer for something worthwhile, boycott something else, or give money to something worthy, take it with a pinch of salt.


House effects

A lot of the points I made in my essay on how not to report polls boiled down to not taking a poll in isolation. Not making the outlier the story, only comparing apples to apples, not cherry picking – they all amount to much the same thing, especially on voting intention.

In the last couple of days I’ve watched people getting overexcited over two polls. Yesterday’s ICM poll provoked lots of Tory excitement on Twitter and comments about the Labour lead falling, it being a terrible poll for Labour and so on. ICM’s poll, of course, did not show Labour’s lead falling at all – it showed it steady for the fourth month in a row. ICM’s methods merely produce consistently lower Labour leads than other companies’. Saturday night had the usual flurry of excitable comments on Twitter about UKIP being on the rise and now the third party, after the Survation poll was published, conveniently ignoring the fact that 95% of polls this year have had them in fourth – often by a very long way. There was, needless to say, no similar excitement over UKIP being on 4%, 11 points behind the Lib Dems, in the ICM poll yesterday.

Different pollsters have different approaches on things like weighting, likelihood to vote, how they deal with don’t knows, how they prompt and so on. While all the pollsters are politically neutral, these do have some consistent partisan effects – for example, ICM’s methods tend to produce the highest levels of support for the Liberal Democrats, while YouGov’s methods tend to produce the lowest. The graph below shows an estimate of the partisan house effects of each polling company’s voting intention methodology, calculated by comparing each company’s poll results to the rolling average of the YouGov daily poll (1).

YouGov, ICM and ComRes’s online polls tend to show the highest shares of the vote for the Conservative party. However, in the case of YouGov this is cancelled out by a tendency to also show the highest levels of support for Labour, with the result that ICM show the lowest Labour leads while YouGov tend to show some of the highest Labour leads, after Angus Reid and TNS. For the Liberal Democrats, ICM show far higher support for the party than any other company, averaging plus 3.3 points; next highest are Survation and ComRes’s telephone polls. At the opposite end of the spectrum, YouGov tend to show significantly lower Liberal Democrat support.

It would take a much longer post to dissect the full methodology of each pollster and the partisan implications, but to pick up the general methodological factors that contribute to the house effects:

How pollsters account for likelihood to vote. Some companies, like YouGov and Angus Reid, take no account of how likely people say they are to vote outside of election campaigns (2). Companies like ICM and Populus weight by how likely people say they are to vote, so that someone who says they are 10/10 certain to vote counts for much more than someone who puts their chances of voting at only 5/10. At the opposite end of the scale from YouGov, Ipsos MORI include only those people who are 10/10 certain to vote, and exclude everyone else from their topline figures. Other twists here are ICM, who also heavily downweight anyone who says they didn’t vote in 2010, and ComRes, who use a much harsher likelihood-to-vote question for people voting for minor parties than for the big three. Most of the time Conservative voters say they are more likely to vote than Labour voters, so the more harshly a pollster weights or filters by likelihood to vote, the better it is for the Tories.
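To make the difference concrete, here is a minimal sketch of three of these approaches in Python, using a handful of made-up respondents (the parties and likelihood scores are purely illustrative, not real polling data):

```python
from collections import defaultdict

# Toy respondents: (party, stated likelihood to vote on a 0-10 scale).
respondents = [
    ("Con", 10), ("Con", 9), ("Con", 10), ("Lab", 10),
    ("Lab", 6), ("Lab", 5), ("LD", 8), ("LD", 10), ("UKIP", 7),
]

def shares(weight_fn):
    """Weighted vote shares, given a function mapping likelihood to a weight."""
    totals = defaultdict(float)
    for party, ltv in respondents:
        totals[party] += weight_fn(ltv)
    total = sum(totals.values())
    return {party: round(100 * w / total, 1) for party, w in sorted(totals.items())}

print(shares(lambda ltv: 1.0))                        # no turnout adjustment
print(shares(lambda ltv: ltv / 10))                   # weight by stated likelihood
print(shares(lambda ltv: 1.0 if ltv == 10 else 0.0))  # only the 10/10 certain count
```

Run on this toy sample, the harsher the turnout treatment, the better the Conservatives do – exactly the pattern described above.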

How pollsters deal with don’t knows. Somewhere around a fifth of people normally tell pollsters they don’t know how they would vote in an election tomorrow. Some pollsters like YouGov simply ignore these respondents. Some like MORI ask them a “squeeze question”, something like “which party are you most likely to vote for?”. Others estimate how those people would vote using other information from the poll, such as party ID (ComRes) or how those people say they voted at the previous election (ICM and Populus). These adjustments tend to help parties that have lost support since the last general election – so currently ICM and Populus’s adjustment tends to help the Liberal Democrats and, to a lesser extent, the Conservatives. In past Parliaments it has helped the Labour party.
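A crude sketch of the reallocation approach, assuming (purely for illustration) that half of each don’t know is credited back to the party they recall voting for last time – the fraction and the respondents are invented, not any company’s actual figures:

```python
# Toy data: current intention ("DK" = don't know) and recalled 2010 vote.
respondents = [
    {"intention": "Lab", "past": "Lab"},
    {"intention": "Con", "past": "Con"},
    {"intention": "DK",  "past": "LD"},
    {"intention": "DK",  "past": "Con"},
    {"intention": "LD",  "past": "LD"},
]

REALLOCATE_FRACTION = 0.5  # illustrative assumption

counts = {}
for r in respondents:
    if r["intention"] != "DK":
        counts[r["intention"]] = counts.get(r["intention"], 0) + 1.0
    else:
        # Credit part of each don't know back to their previous party.
        counts[r["past"]] = counts.get(r["past"], 0) + REALLOCATE_FRACTION

total = sum(counts.values())
print({p: round(100 * c / total, 1) for p, c in sorted(counts.items())})
```

As the text says, any such adjustment mechanically helps parties that have lost support since the last election, because that is where the lapsed past voters sit.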

How the poll is conducted. About half the current regular pollsters do their research online, and about half do it by telephone. While there is no obvious systematic difference between online and telephone polls in terms of support for the Conservatives, Labour and the Liberal Democrats, there is a noticeable difference in support for UKIP, with polls conducted online consistently showing greater UKIP support. This may be down to interviewer effect, with respondents being more willing to admit supporting a minor party in an online poll than to a human interviewer, or it may be something to do with sampling.

How the poll is weighted. Almost all pollsters now use political weighting of some sort in their samples. In the majority of cases this means weighting the sample by how people said they voted at the last election – i.e. we know 37% of people who voted in Great Britain in 2010 voted Tory, so in a representative sample 37% of those who say they voted at the previous election should say they voted Tory. It isn’t quite as simple as that because of false recall – people forget their vote, or misreport voting tactically, or claim they voted when they didn’t actually bother, or align their past behaviour with their present preferences and say how they wish they had voted with hindsight. Most pollsters estimate some level of false recall when deciding their weighting targets; Ipsos MORI reject past vote weighting on principle, with the effect that proportionally their samples tend to contain slightly more people who say they voted Labour at the last election, and somewhat fewer who say they voted Lib Dem.
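A minimal sketch of past-vote weighting, using the raw 2010 shares as targets purely for illustration (real pollsters adjust these targets to allow for false recall, and the sample here is invented):

```python
# Approximate 2010 GB vote shares among people who voted, used as targets.
TARGETS = {"Con": 0.37, "Lab": 0.30, "LD": 0.24, "Other": 0.09}

# Toy sample: recalled 2010 vote for each respondent (note the excess of
# recalled Labour voters, a typical pattern in raw samples).
sample = ["Con"] * 30 + ["Lab"] * 35 + ["LD"] * 20 + ["Other"] * 15

# Weight for each group = target share / observed share, so the weighted
# sample matches the target past-vote distribution.
observed = {p: sample.count(p) / len(sample) for p in TARGETS}
weights = {p: round(TARGETS[p] / observed[p], 2) for p in TARGETS}
print(weights)  # here, recalled Labour voters end up downweighted
```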

How the poll is prompted. As discussed at the weekend, almost all companies prompt their voting intention question along the lines of Conservative, Labour, Lib Dem, Scots Nats/Plaid if appropriate, and Other. Survation also include UKIP in their main prompt, leading to substantially higher UKIP support in their polls.

All these factors interact with one another, so you can’t look at any one in isolation. For example, MORI’s samples tend to be a bit more Labour-inclined than other companies’, but their turnout filter is harsher than most, which disadvantages Labour and cancels out the pro-Labour effect of not weighting by past vote. ComRes’s online polls tend to find a higher level of UKIP support than many other companies, but their harsher likelihood-to-vote filter for minor parties cancels this out. The effects also change over time – so while the re-allocation of don’t knows currently helps the Lib Dems, in past years it has helped Labour (and when originally introduced in the 1990s it helped the Tories).

Inevitably the question arises of which polls are “right”. The question cannot be answered. Come actual elections, polls using different methods all tend to cluster together and show very similar results – polls have a margin of error of plus or minus 3 points, so judging which methodology is more accurate from one single poll every five years, when all the companies are within that margin of error, is an utter nonsense.
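For reference, the familiar “plus or minus 3” figure is just the 95% confidence half-width for a sample of around a thousand, on the (generous) assumption of a pure random sample:

```python
import math

def margin_of_error(p=0.5, n=1000):
    # 95% half-width for an estimated share p from a simple random sample.
    return 1.96 * math.sqrt(p * (1 - p) / n)

print(round(100 * margin_of_error(), 1))  # ~3.1 points at n = 1000
```

Real quota and panel samples aren’t simple random samples, so in practice this understates the true uncertainty, which makes single-poll verdicts on methodology even shakier.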

Realistically it is more a philosophical question than a methodological one – the reason pollsters show different figures is that they are measuring different things. YouGov don’t make second guesses about don’t knows, and assume everyone who says they will vote actually will; their figures are basically how people say they would vote tomorrow. In comparison, ICM weight by how likely people say they are to vote, assume people who didn’t vote last time are less likely to do so than they say, and estimate how people who say they don’t know would actually vote; their figures are basically how ICM estimate people would actually vote tomorrow. They are two different approaches, and there is no right answer as to which one to take. Shouldn’t a pollster report what people say they’d do, rather than second-guessing what they’d really do? But if a pollster has good reason to think that people wouldn’t behave how they say they will, shouldn’t they factor that in? There is no easy answer.

Given these differences though, when you see a poll, it is important to remember house effects and to look at the wider trends. A poll from ICM showing a smaller Labour lead than in most other companies’ polls isn’t necessarily a sign of some great collapse in Labour’s lead, it’s more likely because ICM always show a smaller Labour lead than other companies (ditto a great big Labour lead in an Angus Reid poll). That said, even a big Labour lead from ICM or a small Labour lead from Angus Reid shouldn’t get people too excited either, as any single poll can easily be an outlier. As ever, the rule remains to look at the broad trend across all the polls. Do not cherry pick the polls that tell you what you want to hear, do not try to draw trends from one company to another when they use different methods and don’t get overexcited by single outlying polls.

(1) House effects were calculated using the daily YouGov poll as a reference point. I took a rolling 5 day average of the YouGov daily poll and compared that to each poll from another company, which gave each company’s average difference from the YouGov daily poll. This was then recentred on the cross-company average difference for each party, so that YouGov wasn’t automatically the mid-point!
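In case it helps, here is roughly what that procedure looks like as code, with made-up figures and pandas purely for convenience (the real series and dates are obviously different):

```python
import pandas as pd

# Invented daily YouGov Labour shares and two invented non-YouGov polls.
yougov = pd.Series(
    [42, 41, 43, 42, 41, 42, 43, 42, 41, 42],
    index=pd.date_range("2012-07-01", periods=10),
)
others = pd.DataFrame(
    {"company": ["ICM", "TNS"], "Lab": [39, 45]},
    index=pd.to_datetime(["2012-07-05", "2012-07-08"]),
)

# Step 1: rolling 5-day YouGov average as the reference series.
reference = yougov.rolling(5).mean()

# Step 2: each poll's difference from the reference on its fieldwork date.
others["diff"] = others["Lab"] - reference.reindex(others.index)

# Step 3: average difference per company, with YouGov at zero by
# construction, then recentre on the overall mean so that YouGov isn't
# automatically the mid-point of the scale.
house = pd.concat([others.groupby("company")["diff"].mean(),
                   pd.Series({"YouGov": 0.0})])
print(house - house.mean())
```

The same steps would be repeated for each party to build up the full set of house effects.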

(2) YouGov do take likelihood to vote into account during election campaigns, using roughly the same approach as Populus.



Some of the internet got very excited over a LibDemVoice poll earlier this week showing 46% of Lib Dem members don’t want Nick Clegg to stay on as party leader at the next election.

The question itself was rather more nuanced than some of the comment upon it suggested – it gave respondents options of Clegg staying for the election, stepping down just before the election or stepping down sooner than that (and asked separately about stepping down as leader and as Deputy Prime Minister). Most of the 46% of Lib Dem members who wanted Clegg to go were happy for him to stay on for now – 32% of respondents wanted him to step down as party leader at some point, compared to only 14% who wanted him to step down in the next year. It suggests to me that this is more about Lib Dem members thinking Clegg is probably not the leader to win them votes at the next general election than a sign of unhappiness with or opposition to him per se.

While I’m here I should write quickly about how representative the polls on LibDemVoice are. Stephen Tall and Mark Pack don’t make huge claims for them, and are always quick to stress that they can’t claim their surveys are representative. This is admirable, but sadly it is not enough by itself: however much the person doing a poll hedges it with caveats and warnings, these are rarely picked up by third parties, who are more interested in making a poll newsworthy than in reporting it well.

That said, I think they are actually pretty worthwhile. They have the huge advantage of being able to check respondents against the Liberal Democrat membership database, so we can be certain that respondents actually are paid-up Lib Dem members and not entryists, pissed-off former members, other parties’ supporters causing trouble, and so on. LDV also have access to some proper demographic data on the actual membership of the party, so while their sample is unrepresentative in some ways (it’s too male, for example), they know this and can test whether it makes a difference. They have also compared their results against some YouGov polling of Lib Dem members, which produced very similar findings, and against actual Lib Dem party ballots, with excellent results in 2008 and rather ropey ones in 2010. Mark Pack has a good defence of them here.

Of course, there are caveats too. The danger for such polls is if they end up getting responses disproportionately from one wing of the party or another, from supporters or opponents of the leadership. I am not a Lib Dem activist, so such things may be over my head, but from an outside perspective the LibDemVoice website doesn’t seem to be pushing any particular agenda within the party that might skew the opinions of its readers or affect which party members take its polls. If reading LDV does influence respondents’ opinions, though, it could obviously make them different from the wider party membership (for example, here Stephen suggests Nick Harvey’s increase in approval ratings could be the effect of his making regular posts on Lib Dem Voice, which would indeed be a skew… but not on a particularly important question!)

I do also worry about whether polls that are essentially recruited through online party-political websites or supporter networks get too many activists and not enough of the armchair members, or less political party members (not an oxymoron, but the type of party member who joins for family or social reasons, because their partner is a member or because they want to contribute to their local community through being a councillor and the party is really just the vehicle).

All that said, while they aren’t perfect and Mark and Stephen never claim they are, I think they are a good straw in the wind and worth paying attention to, especially given the verification that respondents really are party members.


Lord Ashcroft has released some more polling on gay marriage, asking a question on whether people would be more or less likely to vote for a party that legalised same-sex marriage.

As regular readers will know, I have an awful lot of reservations about would X make you more or less likely to vote Y questions. To tick them off quickly –

(a) people tend to use the question to register their support or opposition to a policy, regardless of whether it would actually change their vote
(b) people are extremely poor at understanding the drivers of their own voting behaviour anyway
(c) if it asks about a specific party, people who are already voting for that party regardless say it makes them more likely to vote Y, and people who would never vote for them anyway say it makes them less likely to vote Y. Neither of these groups matter
(d) by singling the issue out, it gives it a false prominence, when actually a lot of other equally or more important issues would be there influencing people’s votes
(e) more or less likely is a pretty low bar. It isn’t saying people definitely would or would not vote Y if X happened, just more or less likely. It’s pretty easy to tick a box to signal your support or opposition to a policy; it can be a more difficult decision when it comes to an actual ballot box

Despite these problems, more-or-less-likely-to-vote questions are much beloved of campaigning and pressure groups, as they make whatever pissant little issue they are campaigning on seem like something incredibly important that will decide elections.

Anyway, this isn’t particularly to criticise Ashcroft’s question, since his pollsters have done all they can to get a decent question out of it – they gave people the option of saying they supported or opposed gay marriage but that it wouldn’t affect their vote, and they looked separately at current Tory voters and potential Tory voters.

Overall, Ashcroft found people in favour of gay marriage by 42% to 31%, with 27% saying they had no real opinion either way. People who were opposed to gay marriage were more likely, however, to say it would affect their vote – overall 10% of people said they were more likely to vote for a party that supported gay marriage, while 12% said they were less likely.

It becomes more interesting when we look at the crossbreaks. Amongst people who voted Tory in 2010 and would still vote Tory today, the vast majority say the issue makes no difference – 6% say it would make them more likely to vote for a party, 9% less likely. Amongst lost Tory voters, who voted for the party in 2010 but wouldn’t now, 26% say supporting gay marriage would make them less likely to vote for a party and only 4% more likely – this fits nicely with the idea that the Conservatives have lost support to their right and to UKIP.

However, there are two sides to the equation. Looking at the votes the Conservatives have gained since the election, 15% say they are more likely to vote for a party that legalises gay marriage, compared to 11% less likely. Looking at those who are not voting Tory but might consider it, 12% say they are more likely to vote for a party that supports gay marriage, compared to 9% less likely.

Of course, while Ashcroft and his pollsters have done their level best to write a good question, most of the caveats above still apply – questions like this give undue prominence to an issue of low salience, and even with wording like this it probably grossly overestimates the importance of the issue in voting intention. It does, however, as Ashcroft concludes, demonstrate that the effect of gay marriage on voting intention is not all one way.


The Boris bandwagon rolls on, and an ICM poll for the Sunday Telegraph tonight apparently has another question trying to measure whether the Conservatives would do better with Boris Johnson as leader. There are two things to consider with hypothetical “who would you vote for if X was leader” questions.

The first is that they need to be exactly comparable. The difference between voting intention with two different people as leader of a party is often only a few points. However, adjustments like weighting by likelihood to vote or reallocating don’t knows can also make a couple of points of difference, so if you want to be confident the difference is down to the leader, the questions need to be done in exactly the same way. If the main figures are weighted or filtered by likelihood to vote, the hypothetical figures need to be too (ideally with likelihood asked separately for each question); if there is a squeeze question or don’t knows are reallocated in the main question, the same needs to happen in the hypothetical questions.

Trickier to control for is the question itself. Normal voting intention questions don’t mention the party leaders, so if asking how people would vote with Boris as Tory leader increases the Tory vote by 2 points, we can’t conclude that he’d do better than Cameron without checking that mentioning David Cameron as Tory leader in the question wouldn’t do the same. This is why, when YouGov run these questions, they ask a control question including the names of the current party leaders.

The second thing to consider is quite how hypothetical these questions are! In many cases we are asking about politicians the general public know very little about – apart from very well known figures like party leaders and Chancellors of the Exchequer, many other ministers, even cabinet ministers, are almost complete unknowns to the majority of people. Even when a politician is relatively well known, like Gordon Brown pre-2007 or Boris Johnson now, people answering questions like this don’t know what they would do as a party leader, what sort of mission and narrative they’d set out, or what policy priorities they’d follow, and all these things could change how they are viewed.

However, that doesn’t necessarily mean questions like this are never useful. Back before Gordon Brown became Labour leader, polls like this consistently showed him doing less well than Tony Blair. At the time I made all the same caveats as above, but said that in the specific context of Gordon Brown it probably was showing that Brown would do badly, because of why people gave him negative ratings. The polls said people saw him as competent and efficient and capable… but they didn’t like him. If people had seen Brown as incompetent or inexperienced he could have changed those impressions in office, but his ratings on those fronts were already positive. The polls were telling us that his problem was a negative that would be difficult to change: simply not being likeable.

So to Boris. What can we tell from hypothetical polls about him? Well, I haven’t seen the ICM poll yet, but YouGov have done two hypothetical polls about him. The first, in May, showed Boris doing basically the same as David Cameron. The second, a week or so ago, had Boris doing 5 points better than Cameron, presumably because of the effect the Olympics have had on how Boris is seen. We shall see if ICM shows the same sort of pattern.

Is this really meaningful? Well, just as Gordon Brown seemed to do badly simply because people didn’t warm to him personally, Boris Johnson seems to be the opposite case – he seems to do well because he is likeable and eccentric. It’s an open question to what extent that would transfer were he to become Prime Minister or Conservative leader – a politician’s ability to come across as likeable and to connect with people seems to be innate to some degree, so it would probably benefit Boris in any role. On the other hand, being seen as a bit of a buffoon is not necessarily on the job description of a PM. Would something that seems like a wizard prank in a hypothetical opinion poll seem rather less funny in an actual election? We don’t know.

A more concrete caveat to keep in mind is that all these Boris questions are being asked in the midst of the London Olympics, Boris’s big moment in the sun. Before the Olympics the polls didn’t suggest Boris would do any better than Cameron. I’d wait until the publicity around the Olympics fades before drawing any long term conclusions…