As you might imagine, a year out from a general election the one question I get asked more than any other is “well, who is going to win then?”. It’s a question I try to avoid answering like the plague. The simple answer is we don’t know – polls measure public opinion now, not a year ahead. Assuming no change from the polls now (which would imply a Labour majority) is a naive approach that would have served you poorly in past Parliaments. Assuming public opinion will change in the same way as it has towards the end of previous elections gives you a prediction (hung Parliament, Tories the biggest party), but also gives you huge confidence intervals that stretch from a Tory landslide to a comfortable Labour majority.

My answer, therefore, tends to be to give the five questions (and one observation) that I think will decide the next election one way or another…

1) How will the growing economy affect the polls?

The economy has consistently been the issue that voters see as most important in polls since the economic crisis began, and as such has been a major influence on voting intention. Economic confidence amongst the general public started rising sharply early in 2013 in all the major trackers, and has continued on a broadly positive trend. This has coincided with movement in public attitudes towards the government’s economic policies. From being neck-and-neck with Labour on the economy last year the Conservatives now have a consistent lead. Last year YouGov’s cuts trackers consistently showed that people thought the government’s cuts to public spending, while necessary, were actually bad for the economy; now they show more people think the cuts were good for the economy. It is impossible to draw a causal link, but over the same period the average Labour lead has dropped from about 10 points to around 5 or 6 points.

Whether or not the facts back them up (that is a discussion for some economic blog elsewhere), the recovering economy seems to be having the effect of convincing some of the public that the government’s economic policy was right and that the Conservatives can be trusted more on the economy. I am not an economist and this is not an economics blog, so I have no educated view on whether the economy will continue to grow, which seems to be what commentators expect. Assuming it does, will that lead to continuing increases in government approval and a bigger lead for them on the economy, and will that translate into increased support? More specifically, while polls show people are more optimistic about the economy as a whole, they are still downbeat about their own personal finances. Will people themselves start to feel better off in the next 14 months, and would that translate into more government support?

A final thing to watch there is how important people say the economy is. There are examples of governments losing elections despite being ahead on the economy – the classic is 1997. The reason becomes apparent if you look at what issues people told pollsters were important in 1997 – it wasn’t the economy (where the Tories were still holding their own), it was public services like the NHS and education where Labour were a mile ahead. I don’t think 14 months is long enough for this to happen, but if the economy really starts getting better keep an eye on whether people stop telling pollsters it is such an important issue.

2) Will Ed Miliband’s unpopularity matter any more than it does now?

I find the contrast between Ed Miliband’s ratings and Labour’s support a puzzle. There really is a gulf between them. The basic facts are straightforward – for an opposition leader whose party has been consistently ahead in the polls for years Ed Miliband’s ratings are horrible. His approval ratings are horrid, down at IDS, Howard and Hague levels; best Prime Minister ratings normally track voting intention pretty closely but Ed Miliband trails behind David Cameron by around 15 points. Polls consistently find that people think Ed Miliband is weak and not up to the job of Prime Minister. This is not just a case of opposition leaders always polling poorly compared to incumbent Prime Ministers – if you compare Ed Miliband’s ratings now to David Cameron’s in opposition Miliband is doing far worse. For example, in 2008 49% thought David Cameron looked like a PM in waiting, only 19% think the same about Ed Miliband now. To claim that Miliband’s ratings are not dire is simply denial. Yet Labour consistently lead in the polls.

Pause for a second, and imagine that we didn’t ask voting intention in polls. Imagine all you had to go on was all the other figures – the polls asking who people would trust more on the economy, who would make the better Prime Minister, who people trust on the issues they currently think are most important. Based on those figures alone the next election looks as if it should be a Conservative walkover…and yet Labour consistently lead in the polls.

The paradox between the underlying figures, which in most areas are increasingly favourable to the Conservatives, and the headline figures, consistently favourable to Labour, is fascinating. It is something I’ve returned to time and again without apology, as I’m sure there’s a key message here. Whatever the result of the next election, it’ll tell the loser something very important. If the Conservatives win, Labour will need to learn about using the goodwill an opposition gets to actually build up the foundations to, well, support their support (I suspect they’d also have to accept that getting a leader who people take seriously as a potential PM really is a prerequisite). If Labour win, the Conservatives should take home the message that leadership, economic competence and being preferred on policies really isn’t enough, that they have a serious issue with how people perceive their party and its values that needs to be addressed (I doubt they would learn that lesson, but there we go).

Given Labour are ahead now, I think the question is whether perceptions of the opposition and the choice of Prime Minister increase in importance as the election approaches and voting intention becomes less of a way of people indicating their opinion of the government, and more a choice between two alternatives. The reason Labour poll badly on so many of these underlying questions is not because Labour voters say they prefer Cameron and the Tories, it’s because many Labour voters simply say don’t know (or none of them). They aren’t convinced Miliband would be a good PM or Labour would run the economy well. Will those people overcome those doubts? Vote Labour regardless? Or do something else?

3) What level of support will UKIP get at the general election?

Looking back over UKIP’s performance in the Parliament so far their support has mostly followed a pattern of election successes leading to boosts in the polls, followed by a decline to a new, higher plateau. I think UKIP can fairly comfortably expect a strong performance in the European elections (personally I would still expect them to come top, but whatever happens it’s going to be a strong showing). This will in turn be followed by another publicity boost and another boost in the Westminster polls. It will vary between different pollsters, as ever, but I think we can expect UKIP in the mid to high teens with the telephone polls and up in the low-twenties with the more favourable online companies.

From then on, it’s probably a case of a decline as we head towards the general election as the focus moves more towards the Con-v-Lab battle. The question is how quickly that support fades and to what extent. Here we are very much in unknown territory. UKIP got up to around 8% in the Westminster polls following the 2009 European elections, but declined to around 3% by the 2010 election; the Greens got up to 8% in the polls following the 1989 European election, but declined to 0.5% of the vote by the 1992 election. This time round is clearly different in terms of the size and scale of UKIP’s support and history provides no good guide. Neither does present polling – people are notoriously bad at answering questions on whether they’ll change their mind or what might make them change their mind. We are flying blind – but given that UKIP support has thus far disproportionately come from people who supported the Conservatives at the last election it is something that would have implications for the level of Tory support come the general election.

4) How resilient will Liberal Democrat incumbents be?

The three points so far have been about levels of overall support at the next election. The fourth is instead about distribution of the vote and therefore the outcome in numbers of MPs. On a uniform swing the Liberal Democrats will face severe losses in the election. It obviously depends just how badly they do, whether they are still in the coalition, whether they recover towards the election and so on, but projections of them losing half their seats are not unusual. However, there is also an expectation that Liberal Democrats will do better than this because of their incumbent MPs’ personal vote. Analysis from past elections and from studies like the PoliticsHome and Lord Ashcroft polls of marginal seats are pretty consistent in showing that Liberal Democrat MPs benefit more from personal votes than politicians from other parties; they handily won the Eastleigh by-election and have managed to hold on to councillors in some (but not all) of the areas where they have MPs. This would point to the Liberal Democrats actually doing better in terms of MPs than the raw numbers would suggest, although don’t expect magic… Lib Dem MPs might outperform the national trend, but it doesn’t render them immune to it. If you’ve lost a third to a half of your support, it has to come from somewhere, and it would be naive to expect it all to come from places you don’t need it.

As a caveat to this Lib Dem optimism though, look at the Scottish Parliament election in 2011. In that case Lib Dem incumbents didn’t seem to do any better – if anything the Liberal Democrats lost more support in areas where they had the most support to begin with, the very opposite pattern. The cause of this is probably a floor effect (the Lib Dems lost 8% of the vote in the election, but started off with less than 8% in many seats, so by definition more of their lost support had to come in their stronger seats). If the Lib Dems do really badly we may see the same effect at Westminster – if the Lib Dems lose enough support it’s impossible for it all to come from seats where they have hardly any support to begin with! The question is to what degree, if any, Lib Dem MPs can outperform the swing against them.

5) Will Scotland be voting?

Or perhaps more accurately, will the Scottish MPs be sticking around afterwards! All the polls on the Scottish referendum so far have shown the NO campaign in the lead. There has been a slight trend in the direction of yes, but nothing more than that. Personally I would expect the NO campaign to win, but there is obviously a chance they won’t and if so it would massively change our predictions for the next election. Exactly how and when Scottish MPs cease to be members of the House of Commons would need to be decided, but it would obviously disproportionately affect Labour – the Conservatives have only one Scottish MP to lose. More important though would be the wider effect on politics; thus far the Scottish independence referendum is something that has had minimal effect upon politics south of the border. Until January the London based media barely even mentioned it, it’s still something that’s very much a sideshow. If Scotland were to vote yes then the negotiations in the following 18 months would suddenly become an issue of paramount importance, David Cameron’s position would presumably come under some pressure and, either way, nothing would be the same anymore. I don’t expect it to happen, but it would be remiss of me not to include it here.

So, five things that I think will decide the election. I said there was an observation too – remember the impact of the electoral system. This one isn’t a question, we know that the system is more helpful to Labour than the Conservatives and, given the government’s failure to get the boundary review through, will remain that way. Getting ahead in the polls is not enough for the Conservatives – it would probably leave Labour as still the largest party. To get an overall majority the Conservatives need a lead of somewhere in the region of 7 points. We can’t be certain of the exact figures (the double incumbency bonus of MPs newly elected last time round will shift things a bit), but we can be confident that just being ahead isn’t enough for the Tories – they need to be well ahead.


Over on the right hand side of this site is a projection of how the current polls would translate into seats at a general election tomorrow, if there was a uniform swing. On twitter and suchlike I sometimes see it referred to as UKPR’s current prediction, but I’m afraid it isn’t. Polls don’t predict the next election, they measure support now, so the polling average here isn’t my best guess for the shares of the vote at the next election, it’s a measure of support in an election tomorrow. Of course, there isn’t an election tomorrow, and if there was, the polls probably wouldn’t be as they are – if there really was an election tomorrow then the last three weeks would have been full of manifestos, policy announcements, campaigns and debates which may or may not have had some impact.

It’s also worth noting that while uniform national swing is not a bad guide by any means, it can certainly be bettered. To start with it’s definitely worth dealing with Scotland separately based on Scottish polling figures; it might also be useful to include some assumptions about incumbency effects in seats with new MPs, and some degree of random variation at the margins.
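For readers unfamiliar with the mechanics, the basic uniform national swing calculation is simple enough to sketch. All the seat and poll figures below are invented for illustration, not real constituency data:

```python
# A minimal sketch of a uniform national swing (UNS) projection:
# apply the national change in each party's vote share to every seat
# and see who comes out on top. All figures here are illustrative.

def uniform_swing(seats, last_election, current_poll):
    """seats: list of {party: share} at the previous election.
    Returns the projected winner of each seat."""
    swing = {p: current_poll[p] - last_election[p] for p in last_election}
    winners = []
    for seat in seats:
        projected = {p: share + swing.get(p, 0) for p, share in seat.items()}
        winners.append(max(projected, key=projected.get))
    return winners

# Two illustrative seats with their previous-election shares:
seats = [
    {"Con": 42.0, "Lab": 38.0, "LD": 15.0},   # a Con-Lab marginal
    {"Con": 30.0, "Lab": 28.0, "LD": 36.0},   # an LD-held seat
]
last = {"Con": 37.0, "Lab": 30.0, "LD": 24.0}   # national shares last time
poll = {"Con": 32.0, "Lab": 38.0, "LD": 10.0}   # current poll average

print(uniform_swing(seats, last, poll))
```

The refinements mentioned above amount to modifying this loop: use a separate Scottish swing for Scottish seats, dampen the swing against incumbents, and add a little random noise per seat.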

I deliberately don’t make predictions this far out, given the huge number of unknowns. I tend to find most people who do predict this far out with any degree of confidence are – probably unconsciously – merely predicting what they would like to be the case. It’s rare to find someone confidently predicting a Labour victory who wouldn’t like a Labour victory (or who has an ideological axe to grind against the Tory leadership), or vice-versa on the Conservative side. Given the prominence of Nate Silver and other election prediction sites at the last US election I would expect a plethora of more academic and sensible election prediction models come the actual election (hell, I know for certain of several groups of academics working on various models), but so far virtually the only prediction I have seen that moves beyond wish-fulfillment to actually come up with a poll-based model is the attempt by Steve Fisher at Oxford here, with an explanation of the model here.

Steve’s model is a simple one – it is purely based upon voting intention polls and how they have tended to relate to the election result that follows*. We cannot assume that the polls will remain unchanged in the run up to the next election, given that in past Parliaments they have tended to change. Past change has not been a random walk, with equal likelihood of governments gaining or losing in the polls – this is the key to Steve’s model. In the past the polls have tended to regress towards the result of the previous election (usually in the form of the government recovering). What Steve has done therefore is to take the current polls, and then factor in the sort of size and scale of changes that have typically happened to the polls over the last years of previous Parliaments, then based a prediction on that. At past elections this would have proven to be a more accurate predictor than just taking the current polls. That is not to say that it is a particularly accurate prediction, only that in the past it would have been more accurate than assuming no change.

On that basis, if the polls over the next year behave like the polls in the last year of previous Parliaments the most likely result come the general election is a Conservative lead of 5 points over Labour, which would produce a hung Parliament with the Conservatives the largest party. The most important word in that sentence is probably the “if”, and perhaps the most important thing to note in Steve’s projection are the large prediction intervals around it. Steve’s model predicts the Conservative vote will be 37%, plus or minus 8.5 (so between 29% and 46%), the Labour vote at 32%, plus or minus 6.4 (so between 26% and 39%). These are huge gaps. Of course, results towards the centre of those ranges are still considered more likely, but it underlines the imprecision of the projection, and the limitations on using current polling data to predict a general election a year away. Polls a year out from the election are not a very good prediction of the election. It would be wrong to say that anything could happen (Steve’s model, for example, suggests it is unlikely that Labour would get over 40, or that the Conservatives would fall below 29), but certainly a lot of different outcomes could happen.
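To make the mechanics concrete, here is a deliberately simplified sketch of that kind of forecast: pull the current poll share part of the way back toward the previous election result, and attach a symmetric prediction interval. The pull fraction and interval width below are invented for illustration – they are not the coefficients of Steve’s actual model, which estimates them from historical polling data:

```python
# Illustrative "regression toward the previous result" forecast.
# The pull fraction (how far the polls revert) and interval width
# are made-up numbers, not fitted parameters.

def forecast(current_share, last_result, pull=0.5, interval=8.5):
    """Return (low, central, high) for a party's projected vote share."""
    central = current_share + pull * (last_result - current_share)
    return (central - interval, central, central + interval)

low, mid, high = forecast(current_share=33.0, last_result=37.0)
print(mid, (low, high))
```

The point the sketch makes is the same one as the text: even if the central estimate is sensible, the interval around it spans a very wide range of outcomes.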

It also reflects the sheer variety of elections. One criticism I’ve seen of Steve’s model is that this election will be different because of the coalition, the UKIP factor and the realignment of the Lib Dem vote. That may very well be true, but we could say the same about other elections – 1964 had two late changes of leader, 1966 wasn’t a whole term, 1974 was different because the Liberals started contesting all seats, 1979 was different because of the Lib-Lab pact, or the winter of discontent, 1983 was different because of the Falklands and the SDP split, 1992 was different because of Thatcher’s removal, 1997 was different because of the sheer scale of the landslide. 2001 was different because Labour never really had any mid-term blues to come back from. The infrequency of elections means that almost by definition each one has things that make it unique and different – yet Steve’s out-of-sample predictions show the model would have been a better tool at predicting those past elections from 20, 12 or 6 months out than just looking at what the polls 20, 12 or 6 months out were saying (it also underlines the difficulty for political scientists in coming up with any decent models at all – you only get 16 data points and they are all weird).

That doesn’t mean it would have been a particularly good prediction at those past points, just that it was better than the alternative of just looking at the polls 20, 12 or 6 months out. The polls now are a snapshot of public support now, they are not a prediction of what will happen in May 2015. If polls move in the sort of way they have in the run up to past elections we can expect the Conservatives to significantly recover. If they don’t, then they won’t, simple as that. Polls do not move by magic, drawn towards past election results by some invisible force. If they narrow, it will be because of the economy, because of changing attitudes to the parties, because, perhaps, of different factors weighing upon people’s political choices as an election becomes more imminent… that, however, is a post for another day.

(*I should also add that this is NOT Steve’s personal prediction of the election – it’s an attempt to see to what degree you can predict election results months in advance using just national poll data. I expect if Steve was making a personal prediction he probably would ponder what the impact of the economy, the party situation etc would be, but that would be a very different and more subjective model.)


Or at least, questions that should be treated with a large dose of scepticism. My heart always sinks when I see questions like those below in polls. I try not to put up posts here just saying “this poll is stupid, ignore it” but sometimes it’s hard. As ever, I’d much rather give people the tools to do it themselves, so here are some types of question that do crop up in polls that you really should handle with care.

I have deliberately not linked to any examples of these in polls, though they are sadly all too easy to find. It’s not an attempt to criticise any polling companies or polls in particular and only he who is without sin should cast the first stone, I’m sure you can find rather shonky questions from all companies and I’m sure I’ve written surveys myself that committed all of the sins below. Note that these are not necessarily misleading questions or biased questions, there is nothing unethical or wrong with asking them, they are just a bit rubbish, and often lead to simplistic or misleading interpretations – particularly when they are asked in isolation, rather than part of a longer poll that properly explores the issues. It’s the difference between a question that you’d downright refuse to run if a client asked for it, and a question that you’d advise a client could probably be asked much better. The point of this article, however (while I hope it will encourage clients not to ask rubbish questions) is to warn you, the reader of research, when a polling question really should be read with caution.

Is the government doing enough of a good thing?

A tricky question. This is obviously trying to gauge a very legitimate and real opinion – the public do often feel that the government hasn’t done enough to sort out a problem or issue. The question works perfectly well when there are two sides to the issue and it is intrinsically one of balance – where you can ask if people think the government has not done enough, or gone too far, or got the balance about right. The problem is when the aim is not contentious, and it really is a question of doing enough – stopping tax evasion, or cutting crime, for example. Very few people are going to think that a government has done too much to tackle tax evasion, or that they have pushed crime down too low (“Won’t someone think of the poor tax evaders?”, “A donation of just £2 could buy Fingers McStab a new cosh”), so the question is set up from the beginning to fail. The problems can be alleviated a bit with wording like “Should be doing more” vs “Has done all they reasonably can be expected to do”, but even then you should treat questions like this with some caution.

Would you like the government to give you a pony?*
(*Hat tip to Hopi Sen)

Doesn’t everyone like nice things? There is nothing particularly wrong with questions like this where the issue in question is something controversial that people might disagree with. It’s perfectly reasonable to ask whether the government should be spending money on introducing high speed rail, or shooting badgers, or whatever. The difficulty comes when you are asking about something that is generally seen as a universal good – what are essentially valence issues. Should the government spend extra on cutting crime, or educating children, or helping save puppies from drowning? Questions like this are meaningless unless the downside is there as well – “would you like the government to buy you a pony if it meant higher taxes?”, “would you like the government to buy you a pony or could the money be better spent elsewhere?”

How important is worthy thing? Or How concerned are you about nasty thing?

Asking if people support or oppose policies, parties or developments is generally pretty straightforward. Measuring the salience of issues is much harder, because you run into problems of social desirability bias and taking issues out of context. For example, in practical terms most people don’t actually do much about third world poverty, but ask people if they care about children starving to death in Africa and you’d need a heart of stone to say that you really don’t. The same applies to things that sound very important and worthy; people don’t want to sound ignorant and uninterested by saying they don’t really care much. If you ask about whether people care about, are concerned about or think an issue is important they will invariably say they do care, they are concerned and it is important. The rather more pertinent question that is not always asked is whether it is important when considered alongside all the other important issues of the day. The best way of measuring how important people think an issue is will always be to give them a list of issues and ask them to pick out those they consider most important (or better, just give them an empty text box and ask them what issues are important to them).

Will policy X make you more likely to vote for this party?

This is a very common question structure, and probably one I’ve written more rude things about on this site than any other type of question. There are contexts where it can work, so long as it is carefully interpreted and is asked about a factor that is widely acknowledged to be a major driver of voting intention. For example, it’s sometimes used to ask if people would be more or less likely to vote for a party if a different named politician was leader (though questions like that have their own issues).

It becomes much more problematic when it is used as a way of showing an issue is salient. Specifically, it is often used by campaigning groups to try and make out that whatever issue they are campaigning about will have an important impact on votes (and, therefore, MPs should take it seriously or their jobs may be at risk). This is almost always untrue. Voting intention is overwhelmingly driven by big brush themes, party identification, perceptions of the party leaders, perceived competence on the big issues like the economy, health or immigration. It is generally NOT driven by specific policy issues on low salience issues.
However, if you ask people directly about the impact of a specific policy on a low salience issue, and whether it would make them more or less likely to vote for a party, they will normally claim it does. This is for a number of reasons. One is that you are taking that single issue and giving it false prominence, when in reality it would be overshadowed by big issues like the economy, health or education. The second is that people tend to just use the question as a way of signalling whether they like a policy or not, regardless of whether it would actually change their vote. The third is that it takes little account of current voting behaviour – you’ll often find that the people saying a policy makes them more likely to vote Tory are largely people who would vote Tory anyway, and the people saying it makes them less likely to vote Tory are people who wouldn’t vote Tory if hell froze over.

There are ways to try and get round this problem – in the past YouGov used to offer “Makes no difference – I’d vote party X anyway” and “Makes no difference – I wouldn’t vote for party X” to try and get rid of all those committed voters whose opinion wouldn’t actually change. In some of Lord Ashcroft’s polling he’s given people the options of saying they support a policy and it might change their vote, or that they’d support it but it wouldn’t change their vote. The best way I’ve come up with doing it is to give people a long list of issues that might influence their vote, getting them to tick the top three or four, and only then asking people whether the issue would make them more or less likely to vote for a party (like we did here for gay marriage, for example). This tends to show that many issues have little or no effect on voting intention, which is rather the point.

Should the government stop and think before going ahead with a policy?

This is perhaps the closest I’ve seen to a “when did you stop beating your wife” question in recent polls. Obviously it carries the assumption that the government has not already stopped to consider the implications of policy. Matthew Parris once wrote about a rule of thumb on understanding political rhetoric, saying that if the opposite of a political statement was something that no one could possibly argue for, the statement itself was meaningless fluff. So a politician arguing for better schools is fluff, because no one would ever get up to the podium to argue for worse schools. These sort of questions fall into the same trap – no one would argue the opposite, that the best thing for the government to do is to pass laws willy-nilly without regard for consequences, so people will agree with the statement in regard to almost any subject. It does NOT necessarily indicate opposition to or doubt about the policy in question, just a general preference for sound decision making.

Do you agree with pleasant uncontentious thing, and that therefore we should do slightly controversial thing?

Essentially the problem is one of combining two statements together within an agree/disagree statement, and therefore not giving people the chance to agree with one but not the other. For example “Children are the future, and therefore we should spend more on education” – people might well agree that children are the future (in fact, it’s relatively hard not to), but might not agree with the course of action that the question suggests is a natural consequence of this.

How likely are you to do the right thing? or Would you signify your displeasure at a nasty thing?

Our final duff question falls into the problem of social desirability bias. People are more likely to say they’ll do the right thing in a poll than they are to do it in real life. This is entirely as you’d expect. In real life the right thing is sometimes hard to do. It might involve giving money when you really don’t have much to spare, or donating your time to volunteer, or inconveniencing yourself by boycotting something convenient or cheap in favour of something ethical. Answering a poll isn’t like that, you just have to tick the box saying that you would probably do the right thing. Easy as pie. Any poll where you see loads of people saying they’d volunteer to do something worthwhile, boycott something else, or give money to something worthy, take with a pinch of salt.


House effects

A lot of the points I made in my essay on how not to report polls boiled down to not taking a poll in isolation. Not making the outlier the story, only comparing apples to apples, not cherry picking – they all boil down to similar things, especially on voting intention.

In the last couple of days I’ve watched people getting overexcited over two polls. Yesterday’s ICM poll provoked lots of Tory excitement on Twitter and comments about the Labour lead falling and it being a terrible poll for Labour and so on. ICM’s poll, of course, did not show Labour’s lead falling at all – it showed it steady for the fourth month in a row. ICM’s methodology merely produces consistently lower leads for Labour due to their methodological approach. Saturday night had the usual flurry of excitable UKIP comments on Twitter about being on the rise and being the 3rd party after the Survation poll was published, conveniently ignoring the fact that 95% of polls this year have had them in fourth – often by a very long way. There was, needless to say, no similar excitement over UKIP being on 4%, 11 points behind the Lib Dems, in the ICM poll yesterday.

Different pollsters have different approaches, on things like weighting, likelihood to vote, how they deal with don’t knows, how they prompt and so on. While all the pollsters are politically neutral, these do have some consistent partisan effects – for example, ICM’s methods tend to produce the highest levels of support for the Liberal Democrats, YouGov’s methods tend to produce the lowest levels of support for the Liberal Democrats. The graph below shows an estimate of the partisan house effects of each polling company’s voting intention methodology, calculated by comparing each company’s poll results to the rolling average of the YouGov daily poll (1)
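The basic calculation behind such an estimate is simple enough to sketch: line up each company’s polls against the reference series on the dates they share, and average the differences. All the figures below are invented for illustration, not real ICM or YouGov numbers:

```python
# Sketch of a house-effect estimate: a company's average deviation
# from a reference series (e.g. a rolling average of a daily poll).
# All figures are invented for illustration.

def house_effect(company_polls, reference):
    """company_polls / reference: {date: share for one party}.
    Returns the mean difference on the dates both series cover."""
    diffs = [company_polls[d] - reference[d] for d in company_polls if d in reference]
    return sum(diffs) / len(diffs)

reference = {"jan": 10.0, "feb": 9.0, "mar": 11.0}   # smoothed daily-poll LD share
company = {"jan": 13.0, "feb": 12.5, "mar": 14.5}    # one company's LD readings

print(round(house_effect(company, reference), 2))
```

Note that a calculation like this only measures deviation from the chosen reference, not from the “true” figure – if the reference pollster itself has a house effect, every other company’s estimate is shifted by the same amount.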

YouGov, ICM and ComRes’s online polls tend to show the highest shares of the vote for the Conservative party. However, in the case of YouGov this is cancelled out by a tendency to also show the highest levels of support for Labour, so the result is that ICM show the lowest Labour leads while YouGov tend to show some of the highest Labour leads after Angus Reid and TNS. For the Liberal Democrats, ICM show far higher support for the party than any other company, averaging at plus 3.3 points. Next highest is Survation and ComRes’s telephone polls. At the opposite end of the spectrum YouGov tend to show significantly lower Liberal Democrat support.

It would take a much longer post to dissect the full methodology of each pollster and the partisan implications, but to pick up the general methodological factors that contribute to the house effects:

How pollsters account for likelihood to vote. Some companies like YouGov and Angus Reid take no account of how likely people say they are to vote outside election campaigns (2). Companies like ICM and Populus weight by how likely people say they are to vote, so that someone who says they are 10/10 certain to vote counts for much more than someone who puts their chances of voting at only 5/10. At the opposite end of the scale from YouGov, Ipsos MORI include only those people who are 10/10 certain to vote, and exclude everyone else from their topline figures. Other twists here are ICM, who also heavily downweight anyone who says they didn’t vote in 2010, and ComRes, who use a much harsher likelihood-to-vote question for people voting for minor parties than for the big three. Most of the time Conservative voters say they are more likely to vote than Labour voters, so the more harshly a pollster weights or filters by likelihood to vote, the better it is for the Tories.
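To make the difference concrete, here is a minimal Python sketch (all names and figures are invented for illustration, not real polling data) contrasting an ICM/Populus-style certainty weighting with a MORI-style 10/10 filter:

```python
# Toy sample of respondents with a voting intention and a 0-10
# certainty-to-vote score. Figures are made up for illustration.
respondents = [
    {"party": "Con", "certainty": 9},
    {"party": "Lab", "certainty": 6},
    {"party": "Lab", "certainty": 10},
    {"party": "Con", "certainty": 10},
    {"party": "LD",  "certainty": 5},
]

def weighted_shares(sample):
    """ICM/Populus-style: weight each respondent by certainty/10."""
    totals = {}
    for r in sample:
        totals[r["party"]] = totals.get(r["party"], 0) + r["certainty"] / 10
    grand = sum(totals.values())
    return {p: round(100 * t / grand, 1) for p, t in totals.items()}

def filtered_shares(sample):
    """MORI-style: count only respondents who are 10/10 certain to vote."""
    certain = [r for r in sample if r["certainty"] == 10]
    totals = {}
    for r in certain:
        totals[r["party"]] = totals.get(r["party"], 0) + 1
    grand = sum(totals.values())
    return {p: round(100 * t / grand, 1) for p, t in totals.items()}
```

On this toy sample the two approaches give noticeably different shares, which is the point: the harsher the treatment of less-certain voters, the more the topline figures can move.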

How pollsters deal with don’t knows. Somewhere around a fifth of people normally tell pollsters they don’t know how they would vote in an election tomorrow. Some pollsters like YouGov simply ignore these respondents. Some like MORI ask them a “squeeze question”, something like “which party are you most likely to vote for?”. Others estimate how those people would vote using other information from the poll, such as party ID (ComRes) or how those people say they voted at the previous election (ICM and Populus). These adjustments tend to help parties that have lost support since the last general election – so currently ICM and Populus’s adjustment tends to help the Liberal Democrats and, to a lesser extent, the Conservatives. In past Parliaments it has helped the Labour party.
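The reallocation adjustment can be sketched as follows – a hypothetical example in the spirit of the ICM/Populus approach, with all counts and the reallocation fraction invented for illustration:

```python
# Respondents who gave a voting intention (invented counts)
voters = {"Con": 320, "Lab": 380, "LD": 90}

# "Don't knows", grouped by how they say they voted in 2010 (invented)
dont_knows_2010 = {"Con": 40, "Lab": 20, "LD": 60}

# Assume half of the don't knows return to their 2010 party
# (the real fraction each pollster uses is their own choice)
REALLOCATE_FRACTION = 0.5

adjusted = dict(voters)
for party, n in dont_knows_2010.items():
    adjusted[party] += n * REALLOCATE_FRACTION

total = sum(adjusted.values())
shares = {p: round(100 * v / total, 1) for p, v in adjusted.items()}
```

Because the don't knows are fed back in proportion to past vote rather than current intention, the adjustment mechanically boosts parties that have lost support since the last election – here the Lib Dems gain most, exactly as described above.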

How the poll is conducted. About half the current regular pollsters do their research online, about half do it by telephone. While there is no obvious systematic difference between online and telephone polls in terms of support for the Conservatives, Labour and Liberal Democrats, there is a noticeable difference in support for UKIP, with polls conducted online consistently showing greater UKIP support. This may be to do with interviewer effect, with respondents being more willing to admit supporting a minor party in an online poll than to a human interviewer, or may be something to do with sampling.

How the poll is weighted. Almost all pollsters now use political weighting of some sort in their samples. In the majority of cases this means weighting the sample by how people said they voted at the last election – i.e. we know 37% of people who voted in Great Britain in 2010 voted Tory, so in a representative sample 37% of people who say they voted at the previous election should say they voted Tory. It isn’t quite as simple as that because of false recall – people tend to forget their vote, or misreport voting tactically, or claim they voted when they didn’t actually bother, or align their past behaviour with their present preferences and say how they wish they had voted with hindsight. Most pollsters estimate some level of false recall in deciding their weighting targets. Ipsos MORI reject past-vote weighting on principle, with the effect that proportionally their samples tend to contain slightly more people who say they voted Labour at the last election, and somewhat fewer who say they voted Lib Dem.
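The basic mechanics of past-vote weighting can be sketched like this – the target shares and sample counts below are invented, and in practice each pollster would first adjust the targets for the false recall described above:

```python
# Target shares of recalled 2010 vote (illustrative figures,
# notionally already adjusted for false recall)
targets = {"Con": 0.37, "Lab": 0.30, "LD": 0.24, "Oth": 0.09}

# Raw sample by recalled 2010 vote - skewed toward Labour,
# as raw samples often are (invented counts)
sample = {"Con": 330, "Lab": 360, "LD": 210, "Oth": 100}
n = sum(sample.values())

# Weight for each past-vote group = target share / observed share,
# so over-represented groups count for less and vice versa
weights = {p: targets[p] / (sample[p] / n) for p in sample}
```

Each respondent then counts as their group’s weight rather than as one person, pulling the sample’s past-vote profile into line with the target.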

How the poll is prompted. As discussed at the weekend, almost all companies prompt their voting intention along the lines of Conservative, Labour, Lib Dem, Scots Nats/Plaid if appropriate and Other. Survation also include UKIP in their main prompt, leading to substantially higher UKIP support in their polls.

All these factors interact with one another – so you can’t look at one in isolation. For example, MORI’s sample tends to be a bit more Labour-inclined than other companies’, but their turnout filter is harsher than most, which disadvantages Labour and cancels out the pro-Labour effect of not weighting by past vote. ComRes’s online polls tend to find a higher level of UKIP support than many other companies, but their harsher likelihood-to-vote filter for minor parties cancels this out. These effects also change over time – so while re-allocation of don’t knows currently helps the Lib Dems, in past years it has helped Labour (and when originally introduced in the 1990s it helped the Tories).

Inevitably the question arises which polls are “right”. The question cannot be answered. Come actual elections polls using different methods all tend to cluster together and show very similar results – polls have a margin of error of plus or minus 3%, so judging which methodology is more accurate based on one single poll every five years when all the companies are within the 3% margin of error is an utter nonsense.
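For reference, the “plus or minus 3%” figure is simply the standard 95% margin of error for a proportion around 50% in a sample of roughly a thousand people:

```python
import math

def margin_of_error(p, n):
    """95% margin of error for a proportion p in a simple random
    sample of size n: 1.96 * sqrt(p(1-p)/n)."""
    return 1.96 * math.sqrt(p * (1 - p) / n)

# A 50% share in a sample of 1,000 - roughly 3 points either way
moe = margin_of_error(0.5, 1000)
```

Real polls are not simple random samples, so this is a rule of thumb rather than a precise figure, but it shows why a single election result cannot separate methodologies that all land within a few points of the outcome.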

Realistically it is more a philosophical question than a methodological one – the reason pollsters show different figures is that they are measuring different things. YouGov don’t make second guesses about don’t knows and assume everyone who says they’ll vote will. Their figures are basically how people say they would vote tomorrow. In comparison ICM weight by how likely people say they are to vote, assume people who didn’t vote last time are less likely to do so than they say they are, and make estimates of how people who say don’t know would actually vote. Their figures are basically how ICM estimate people would actually vote tomorrow. They are two different approaches, and there is no right answer as to which one to take. Shouldn’t a pollster actually report what people say they’d do, rather than making second guesses about what they’d really do? But if a pollster has good reason to think that people wouldn’t behave how they say they will, shouldn’t they factor that in? No easy answer.

Given these differences though, when you see a poll, it is important to remember house effects and to look at the wider trends. A poll from ICM showing a smaller Labour lead than in most other companies’ polls isn’t necessarily a sign of some great collapse in Labour’s lead, it’s more likely because ICM always show a smaller Labour lead than other companies (ditto a great big Labour lead in an Angus Reid poll). That said, even a big Labour lead from ICM or a small Labour lead from Angus Reid shouldn’t get people too excited either, as any single poll can easily be an outlier. As ever, the rule remains to look at the broad trend across all the polls. Do not cherry pick the polls that tell you what you want to hear, do not try to draw trends from one company to another when they use different methods and don’t get overexcited by single outlying polls.

(1) House effects were calculated using the daily YouGov poll as a reference point. I took a rolling 5-day average of the YouGov daily poll and compared it to each poll from another company, which gave each company’s average difference from the YouGov daily poll. The figures were then recentred on the average difference for each party, so that YouGov wasn’t automatically the mid-point!
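As a rough illustration of the procedure in footnote (1) – with entirely invented numbers, not the real polling data – the calculation might look like this:

```python
from statistics import mean

# One party's share in the YouGov daily poll, by day (invented)
yougov_daily = [33, 34, 32, 33, 35, 34, 33, 32, 34, 33]

def rolling5(series, day):
    """5-day rolling average of the YouGov daily poll ending on `day`."""
    window = series[max(0, day - 4): day + 1]
    return mean(window)

# Other companies' polls: (company, day index, share) - invented
other_polls = [
    ("ICM", 4, 36), ("ICM", 9, 35),
    ("MORI", 4, 31), ("MORI", 9, 32),
]

# Average difference from the YouGov rolling average, per company
diffs = {}
for company, day, share in other_polls:
    diffs.setdefault(company, []).append(share - rolling5(yougov_daily, day))
raw_effects = {c: mean(d) for c, d in diffs.items()}

# Recentre on the all-company average difference, so that YouGov
# isn't automatically the mid-point of the scale
centre = mean(raw_effects.values())
house_effects = {c: round(e - centre, 2) for c, e in raw_effects.items()}
```

The rolling average smooths out day-to-day noise in the reference series, so each comparison measures a company’s systematic offset rather than sampling wobble in a single YouGov poll.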

(2) YouGov do take likelihood to vote into account during election campaigns, using roughly the same approach as Populus.


Some of the internet got very excited over a LibDemVoice poll earlier this week showing 46% of Lib Dem members don’t want Nick Clegg to stay on as party leader at the next election.

The question itself was rather more nuanced than some of the comment upon it suggested – it gave respondents options of Clegg staying for the election, stepping down just before the election or stepping down sooner than that (and also separate options for stepping down as leader and as deputy PM). Most of the 46% of Lib Dem members who wanted Clegg to go were happy for him to stay on for now – 32% of respondents wanted him to step down as party leader at some point, compared to only 14% who wanted him to step down in the next year. It suggests to me that this is more about Lib Dem members thinking Clegg is probably not the leader to get them votes at the next general election, rather than a sign of unhappiness or opposition to him per se.

While I’m here I should write quickly about how representative the polls on LibDemVoice are. Stephen Tall and Mark Pack don’t make huge claims about representativeness and are always quick to stress that they can’t claim their polls are representative. This is admirable, but sadly offers no real protection: however much the person running a poll hedges it with caveats and warnings, these are rarely picked up by third parties, who tend to be more interested in making a poll newsworthy than in reporting it well.

That said, I think they are actually pretty worthwhile. They have the huge advantage of being able to actually check respondents against the Liberal Democrat member database, so we can be certain that respondents actually are paid-up Lib Dem members and not entryists, pissed-off former members, other parties’ supporters causing trouble, etc. LDV also have access to some proper demographic data on the actual membership of the Lib Dem party, so while their sample is unrepresentative in some ways (it’s too male, for example), they know this and can test to see if it makes a difference. They have also compared their results against some YouGov polling of Lib Dem members, which had very similar results, and against actual Lib Dem party ballots, which had excellent results in 2008 and rather ropey ones in 2010. Mark Pack has a good defence of them here.

Of course, there are caveats too. The danger for such polls is if they end up getting responses disproportionately from one wing of the party or another, from supporters or opponents of the leadership. I am not a Lib Dem activist so such things may be over my head, but from an outside perspective the LibDemVoice website doesn’t seem to be pushing any particular agenda within the party that might skew the opinions of their readers or which party members take their polls. If reading LDV does influence their opinions though, it could obviously make respondents different to the wider Lib Dem party (for example, here Stephen suggests Nick Harvey’s increase in approval ratings could be the effect of making regular posts on Lib Dem Voice, which would indeed be a skew… but not on a particularly important question!)

I do also worry about whether polls that are essentially recruited through online party-political websites or supporter networks get too many activists and not enough of the armchair members, or less political party members (not an oxymoron, but the type of party member who joins for family or social reasons, because their partner is a member or because they want to contribute to their local community through being a councillor and the party is really just the vehicle).

All that said, while they aren’t perfect and Mark and Stephen never claim they are, I think they are a decent straw in the wind and worth paying attention to, especially given the verification of whether respondents are party members.