Survation have a new poll out in Sheffield Hallam which gives a ten point lead to Labour. Naturally this has produced a lot of crowing from people who don’t much like Nick Clegg, and some possibly unwise comments from Nick Clegg about the poll being “bilge”, commissioned by the Labour-affiliated Unite (which it was, but that shouldn’t make any difference to the voting intention figures). Tabs are here.
The poll has been compared to Lord Ashcroft’s one last year which showed Nick Clegg ahead in his seat, albeit only narrowly. The reason for the difference is nothing at all to do with who commissioned the polls though, and everything to do with differences between the methodology Ashcroft uses and the methodology Survation use for all their clients (Unite, and anyone else).
One difference that people commented on yesterday is that Lord Ashcroft uses political weighting in his constituency polls, but Survation do not. This has the potential to make a sizeable difference to the results, but I don’t think it is the case here. Looking at the recalled vote in Survation’s poll, it looks fairly close to what actually happened; weighting by past vote would probably have bumped up the Lib Dems a little, but the reason the Lib Dems are so far behind is not the weighting, it’s that more than half of the people who voted Lib Dem in 2010 aren’t currently planning on doing so again.
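For readers unfamiliar with political weighting, the mechanism is simple enough to sketch in a few lines: scale each respondent so that recalled past vote in the sample matches the actual result. All the figures below are invented purely to illustrate the arithmetic – they are not Survation’s data or the real Sheffield Hallam result.

```python
from collections import Counter

# Hypothetical 2010 result and hypothetical recalled vote in the sample.
actual_2010 = {"CON": 0.24, "LAB": 0.16, "LD": 0.53, "OTH": 0.07}
sample_recall = {"CON": 0.25, "LAB": 0.18, "LD": 0.50, "OTH": 0.07}

# Weight respondents by how under- or over-represented their recalled
# 2010 vote is in the sample.
weight = {p: actual_2010[p] / sample_recall[p] for p in actual_2010}

# Hypothetical respondents: (recalled 2010 vote, current intention).
respondents = [("LD", "LAB"), ("LD", "LD"), ("CON", "CON"),
               ("LAB", "LAB"), ("LD", "LD"), ("OTH", "UKIP")]

tally = Counter()
for past, now in respondents:
    tally[now] += weight[past]
```

In this made-up sample 2010 Lib Dem voters are slightly under-represented, so everything derived from them gets nudged up a little – which is why past-vote weighting could bump the Lib Dems up, but cannot manufacture support among former Lib Dem voters who now say they will vote for someone else.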
However, there are other methodology differences that probably do explain the gap between the Ashcroft poll and the Survation one. If we start off with the basic figures each company found we get this:
In Survation’s poll the basic figures, weighted by likelihood to vote, were CON 22, LAB 33, LD 23, UKIP 9
In Ashcroft’s poll the basic figures, weighted for likelihood to vote, were CON 23, LAB 33, LD 17, UKIP 14
Both had a chunky Labour lead; in fact, Ashcroft’s was slightly bigger than Survation’s. Ashcroft, however, did two things that Survation did not do. He asked a two-stage question, asking people their general voting intention and then a constituency-specific question, and he reallocated don’t knows.
When Lord Ashcroft does constituency polls he asks a standard voting intention question, then asks people to think about their own constituency. This makes a minimal difference in most seats, where people’s “real” support is normally the same as how they actually vote. In seats with Lib Dem MPs it often makes a massive difference, presumably because tactical voting and incumbency are so much more important for Lib Dem MPs than those from any other party.
This is a large part of the difference between Survation and Ashcroft. In Ashcroft’s second question, asking people to think about their own constituency, he found figures of CON 18%, LAB 32%, LD 26%, UKIP 14% – so the two-stage constituency question added 9 percentage points to the Lib Dems. Survation actually asked people to think about their constituency in their question, probably explaining why they had the Lib Dems 6 points higher than Ashcroft in their first question, but I think the constituency prompt has more effect when it is asked as a second question, and respondents are given a chance to register their “national choice” first.
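The effect of the two-stage design is easiest to see by cross-tabulating the two answers at respondent level. A toy sketch with invented data (not Ashcroft’s actual tables), shaped so a few Labour supporters switch to the Lib Dems when prompted to think about their own seat, while most people answer the same twice:

```python
from collections import Counter

# Hypothetical (national answer, constituency answer) pairs.
pairs = [
    ("LAB", "LAB"), ("CON", "CON"), ("LAB", "LD"), ("LD", "LD"),
    ("UKIP", "UKIP"), ("LAB", "LD"), ("CON", "CON"), ("LD", "LD"),
]

national = Counter(n for n, _ in pairs)      # first, standard question
constituency = Counter(c for _, c in pairs)  # second, locally prompted
switchers = sum(1 for n, c in pairs if n != c)
```

Here only two of eight respondents change their answer, but both switches go the same way, so the Lib Dem figure rises on the second question while every other party’s barely moves – the same pattern the blog describes in Lib Dem-held seats.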
The other significant methodological difference is how Survation and Ashcroft treat people who say don’t know. In their local constituency polls Survation just ignore don’t knows, while Ashcroft reallocates them based on how they voted at the previous election, reallocating a proportion of them back to the party they previously voted for. Currently this helps the Liberal Democrats (something we also see in ICM’s national polls), as there are a lot of former Lib Dems out there telling pollsters they don’t know how they will vote.
In this particular case the reallocation of don’t knows changed Ashcroft’s final figures to CON 19, LAB 28, LD 31, UKIP 11, pushing the Lib Dems up into a narrow first place. Technically I think there was an error in Ashcroft’s table – they seem to have reallocated all don’t knows, rather than the proportion they normally do. Done correctly the Lib Dems and Labour would probably have been closer together, or Labour a smidgen ahead, but the fact remains that Ashcroft’s method produces a tight race, Survation’s a healthy-looking Labour lead.
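Mechanically, the reallocation amounts to adding some proportion of each party’s former voters who now say don’t know back onto that party’s total, then repercentaging. A sketch of the arithmetic with invented counts and a 50% add-back (the fraction ICM have traditionally used; whatever proportion Ashcroft actually applies, the principle is the same):

```python
def reallocate_dks(vi_counts, dk_by_2010_vote, fraction=0.5):
    """Add `fraction` of the don't knows back onto the party they
    voted for in 2010, then repercentage to shares."""
    totals = dict(vi_counts)
    for party, n in dk_by_2010_vote.items():
        totals[party] = totals.get(party, 0) + fraction * n
    grand = sum(totals.values())
    return {p: round(100 * c / grand, 1) for p, c in totals.items()}

# Invented counts: 450 respondents giving a voting intention, plus a
# don't-know pool dominated by 2010 Lib Dem voters.
vi_counts = {"CON": 90, "LAB": 160, "LD": 130, "UKIP": 70}
dk_by_2010_vote = {"CON": 10, "LAB": 10, "LD": 60, "UKIP": 5}

adjusted = reallocate_dks(vi_counts, dk_by_2010_vote)
```

With these made-up numbers the Lib Dems go from about 28.9% of deciders to about 32.5% after reallocation – exactly the sort of movement that turns a comfortable Labour lead into a tight race.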
So which one is right?
The short answer is we don’t know for sure.
Personally I have confidence in the two-stage constituency question. It’s something I originally used in marginal polling for PoliticsHome back in 2008 and 2009, to address the problem that any polling of Lib Dem seats always seemed to show a big jump for Labour and a collapse for the Lib Dems. This would look completely normal these days of course, but you used to find the same thing in polls when Labour were doing badly nationally and the Lib Dems well. My theory was that when people were asked about their voting intention they did not factor in any tactical decisions they might actually make – that is, if you were a Labour supporter in a LD-v-Con seat you might tell a pollster you’d vote Labour because they were the party you really supported, but actually vote Lib Dem as a tactical anti-Tory vote. The way that it only has a significant effect in Lib Dem seats has always given me some confidence it is working, and people aren’t just feeling obliged to give a different answer – the overwhelming majority of people answer the same to both questions.
However, the fact is the two-stage constituency question is only theoretical – it hasn’t been well tested. Going back to its original use in the PoliticsHome marginal poll in 2009, polling in Lib Dem seats using the normal question found vote shares of CON 41, LAB 17, LDEM 28. Using the locally prompted second question the figures became CON 37, LAB 12, LDEM 38. In reality those seats ended up voting CON 39, LAB 9, LDEM 45. Clearly in that sense the prompted question gave a better steer to how well the Lib Dems were doing in their marginals… but the caveats are very heavy (it was 9 months before the election, so people could simply have changed their minds, and it’s only one data point anyway). I trust the constituency-prompted figures more, but that’s a personal opinion; the evidence isn’t there for us to be sure.
As to the reallocation of don’t knows, I’ve always said it is more a philosophical decision than a right or wrong one. Should pollsters only report how respondents say they would vote in an election tomorrow, or should they try to measure how they think people actually would vote in an election tomorrow? Is it better to only include those people who give an opinion, even if you know that the undecideds you’re ignoring appear more likely to favour one party than another, or is it better to make some educated guesses about how those don’t knows might split based on past behaviour?
Bottom line, if you ask people in Sheffield Hallam how they would vote in a general election tomorrow, Labour have a lead, varying in size depending on how you ask. However, there are lots of people who voted for Nick Clegg in 2010 who currently tell pollsters they don’t know how they would vote, and if a decent proportion of those people in fact end up backing Nick Clegg (as Ashcroft’s polling assumes they will) the race would be much closer.