ComRes have done their monthly online poll for the Independent on Sunday. Topline figures are CON 31%(+2), LAB 34%(-1), LDEM 7%(nc), UKIP 19%(nc), GRN 4%(nc). For clarification, given some of the misunderstandings on Twitter earlier today, this is using ComRes’s normal methodology and prompting, they haven’t changed anything (I have no idea if they intend to do so or not… though I expect they’ll be getting a lot of people asking them tonight!). The sample size, however, was smaller than usual: ComRes used the other half of the sample to carry out an experiment, asking the voting intention question with UKIP included in the main voting intention prompt. The result using that different method was CON 29%, LAB 31%, LDEM 7%, UKIP 24%, GRN 5%.

Now, I should underline the importance of noting that this is just one poll. It is comparing two samples of 1000 or so people, with the usual margins of error that implies – so not all the difference will necessarily be prompting, some could just be normal sample variation. Please don’t go away with the idea that prompting for UKIP will always have the effect of bumping up UKIP by 5% – it’s just one data point. I think it probably does make a difference (we’ve tested it in the past), but five points does seem rather high. Also remember that prompting may affect different methods differently, so the way it affects a ComRes online poll using their methods would not necessarily reflect the way it would affect any other poll (I am personally intrigued by the possibility that prompting may have a different impact in telephone polls, where people may feel obliged to pick one of the options offered by a human interviewer, than in an online poll where it’s just clicking through to another list of options – but obviously I don’t have phone polls to test it on!)

Knowing that prompting does make a difference – something that pollsters knew anyway – doesn’t actually get us any closer to an answer to the real question though, whether prompting for UKIP produces more or less accurate results in GB election polls. Is the ComRes figure of 19% more or less accurate than the figure of 24%? Whether or not polls prompt for UKIP is an issue that produces a lot of comment. Part of that is from people whose concern is, shall I say, more to do with maximising the reported level of support for UKIP than with maximising the accuracy of polling. Part of it is that, prima facie, it does seem somewhat strange that a party (normally) running in third place isn’t prompted for when the party that’s (normally) running in fourth place is. Another part is people looking for an explanation for the big difference in reported levels of UKIP support between different pollsters; typically the companies showing the highest levels of support, Survation and Opinium, show UKIP at about twice the support of ICM or MORI, who typically show the lowest. In the latter case I think the attention is misplaced – the reason for the biggest differences in levels of UKIP support in the polls appears to lie elsewhere – companies like Opinium manage to show some of the higher figures without any prompting! Rather the difference appears to be a contrast between telephone polling and online polling: for some reason online polls show consistently higher levels of UKIP support than telephone polls. That may be something to do with the mode (perhaps people are more ready to admit they are voting UKIP to an anonymous computer screen than to a human interviewer) or it could be something to do with sampling (for some reason phone samples have fewer of the sort of people who vote UKIP than online samples do).

As a pollster it is more important that methods produce the most accurate results than it is whether they appear “fair” (and certainly it’s more important to be accurate than to produce the highest possible score for UKIP!). The fact is that there isn’t a hard and fast rule about when you do and don’t prompt, we don’t have the evidence to say the cut off point is x% support, or y place, or z number of MPs. It’s a matter of judgement. We know from experience over the last couple of decades that prompting for smaller parties tends to overestimate their support (probably because it gives them a prominence and perception of equality with the major parties that may not be there among the general public), we also know that in the 1980s NOT prompting for the Lib Dems used to underestimate their support, so getting it wrong either way can produce error. Sometimes you can get it wrong by prompting, sometimes you can get it wrong by not prompting. There is no real way of knowing when a party switches from a position where prompting risks overestimating them to one where not prompting risks underestimating them – but clearly we are equally keen to avoid both errors. If UKIP establish themselves to the point that they have lots of MPs, consistent support over time, have well-known people and policies, and are treated as a major established party that is given equal treatment by the BBC and OfCom and so on, the time will come when the risk of not prompting outweighs the risk of prompting (it has already come, for example, in European elections)… but when do you reach that point? It’s a judgement call.

It’s in a bigger context too. The last general election was held in the middle of “Cleggmania” and a surge of enthusiasm for the Liberal Democrats. The polls overestimated their support. The European elections earlier this year saw a great big surge of enthusiasm for UKIP… and in the polls of the final week all but one company overestimated their support. In the Scottish referendum I don’t think anyone could deny that the YES campaign were the more enthused, and the polls seem to have all slightly overestimated their support. I may very well be reading something into these that isn’t there, but you get my drift – polls may be overestimating support for parties and movements that have particularly enthusiastic and zealous supporters. There’s also that unexplained difference in UKIP support between telephone companies and online companies, and what might be behind that. Getting UKIP right at the next election is the big challenge facing pollsters, but it’s about more than just prompting.


This morning’s YouGov poll for the Sun has topline figures of CON 33%, LAB 37%, LDEM 8%, UKIP 13%.

All very normal, but worth noting a slight update in methodology. As regular readers will know, YouGov’s political weighting is based on panelists’ recorded party identification in May 2010, meaning they don’t have to worry about changes in party ID over time – they weight people’s 2010 ID to 2010 targets. However, over the years new people join the panel, so the target weights need to adapt and reflect, to some degree, that Lib Dem ID has fallen and UKIP ID has grown – hence once a year YouGov update the weights to reflect this. The changes this year decrease the target weight for Lib Dem ID and increase the target for Other (primarily UKIP) ID.
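As a minimal sketch of how this kind of target weighting works – the party ID shares below are made up for illustration, not YouGov’s actual targets or panel figures:

```python
# Hypothetical 2010 party ID shares - NOT YouGov's real targets or panel data.
targets = {"Con": 0.33, "Lab": 0.32, "LD": 0.20, "Other": 0.15}  # population targets
sample  = {"Con": 0.30, "Lab": 0.35, "LD": 0.22, "Other": 0.13}  # shares in the raw sample

# Each respondent is weighted so that their (fixed) 2010 party ID group
# matches its target share: weight = target share / sample share.
weights = {party: targets[party] / sample[party] for party in targets}

for party, w in weights.items():
    print(f"{party}: weight {w:.2f}")
```

The point of the annual update is simply that the `targets` dictionary changes as the panel’s composition drifts; each respondent’s own recorded 2010 ID stays fixed.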

The end result is that the new weights tend to show UKIP 1 point higher, the Conservatives, Labour and Lib Dems very slightly lower (less than a percentage point in all cases).


The August ICM poll for the Guardian is out tonight and has topline figures of CON 31%(-3), LAB 38%(+5), LDEM 12%(nc), UKIP 10%(+1). It’s a much higher Labour lead than ICM have shown of late, their polls for the last few months have been showing Labour and Conservative essentially neck-and-neck. As ever, don’t read too much into a single poll, it might be the start of a broader Labour increase… or may just be normal sample volatility.

The poll also asked how people would vote with Boris Johnson as Tory leader, and found the Tories on 34% (3 points higher), Labour on 37% (one point lower) and UKIP two points lower. This is different from the conclusion of the YouGov poll at the weekend that showed virtually no change from a Boris leadership, but it appears to be the result of slightly different approaches to asking the question. Compared to their standard question, both YouGov and ICM found Labour’s lead reduced by three points when you asked how people would vote with Boris as leader. However, the difference is that YouGov also asked a control question of how people would vote if the leaders remained Cameron, Miliband and Clegg, and that also reduced the Labour lead by 2 points, accounting for much of the apparent “Boris difference”.

That said, I wouldn’t take “How would you vote with X as leader” questions too seriously anyway. People are rubbish at answering hypothetical questions, and here we’re expecting them to say how they’d vote with X as leader without knowing what changes X would make, what priorities and policies they’d adopt or anything else about what an X leadership would look like. They can be useful straws in the wind, but really, they are no more than that.

UPDATE: Meanwhile the Sun have just tweeted the daily YouGov poll: CON 33%, LAB 37%, LD 8%, UKIP 12%


Marginal polls

Back in May when ComRes first launched their marginal seat Omnibus I wrote about some of my reservations about marginal polling, why it isn’t usually quite as useful as it should be, and why I hoped that might change. Marginal seat polls matter because they are the seats that might change hands, and therefore the seats that will decide the election. If they behave differently to the national polls, and if different groups of marginal seats behave differently to one another, it’s obviously a very big deal.

What has limited their usefulness in the past is their infrequency, and the lack of comparability and empirical testing. Marginal polls used to only come along occasionally, varied a lot, polled different groups of seats, and didn’t often happen right before elections so weren’t tested against reality, meaning methods weren’t finessed and improved over time in the same way national polls are.

In practice their rarity and inconsistency rendered them a very blunt tool when we’re looking to spot quite subtle differences – the reality is that marginal seats aren’t that different from the country as a whole:

  • In English & Welsh seats at the last election (the swing in Scottish seats is consistently different) the average swing from Lab to Con was 5.8%. In the 50 most marginal seats the swing was 5.6% – no real difference at all. In the real core battleground (Lab-v-Con seats), there was a slightly more noticeable difference, but it was still small. Amongst all Lab-v-Con seats the swing was 6.7%, amongst those with a majority of less than 10% the swing was 8% – so 1.3 percentage points bigger.
  • In 2005 the average swing in all English seats was 3.2%. In the Lab-v-Con battleground seats it was 3.5%, in Lab-v-Con marginal seats the swing was also 3.5%. No difference.
  • In 2001 the average swing in all English seats was 1.6%, the average swing in Lab-v-Con seats was also 1.6%, the average swing in marginal Lab-v-Con seats was -0.5% (that is, overall there was a small swing to the Conservatives, but on average there was a tiny swing to Labour in the Lab-v-Con marginals).

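The swings quoted above are conventional two-party (Butler) swings – the average of one party’s rise and the other’s fall between two elections. A minimal sketch of the calculation, using illustrative figures rather than results from any particular seat:

```python
def butler_swing(con_prev, con_now, lab_prev, lab_now):
    """Two-party (Butler) swing from Lab to Con, in percentage points.

    Positive values mean a swing towards the Conservatives.
    """
    return ((con_now - con_prev) + (lab_prev - lab_now)) / 2

# Illustrative shares only: Con rises from 33% to 37%, Lab falls from 36% to 30%.
print(butler_swing(33, 37, 36, 30))  # 5.0 point swing to Con
```

A negative result, as in the 2001 Lab-v-Con marginals above, simply means the swing ran the other way, towards Labour.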
You can see that marginals do behave a little differently sometimes – the Conservatives managed a better swing in their target Labour marginals in 2010, Labour did better in those seats where they had fresh incumbency in 2001 – but the differences aren’t huge. We’re talking 1 or 2 percent difference. That’s enough to make a genuine difference in seat numbers, but is very difficult to determine from a single opinion poll. The difference between the national picture and the marginal picture will normally be so subtle that it could easily be lost under or mistaken for normal sample variation, or the methodological differences in doing marginal polls (or vice-versa, normal volatility or methodological impacts could be mistaken for a different pattern in the marginals when there is none).

More recently though things have been looking up. We’ve seen an increase in marginal polls and, more importantly, we’ve seen an increase in regular marginal polls – Lord Ashcroft and ComRes are both doing regular polls of the same groups or group of marginal seats. Different pollsters are also doing marginal polls of roughly the same marginal seats – Ashcroft, ComRes and this week Survation have all done polls that include ultra-marginal Conservative -v- Labour seats. However, despite covering the same ground, the results are very different.

The table below is an attempt to make the results roughly comparable. There are much more obvious differences between different battlegrounds (that is, between seats that are Con-v-Lab battles and seats that are Con-v-LD battles), so I’ve looked at only the Con-v-Lab battleground – those marginal seats with the Conservatives in first place ahead of Labour. Sample size for each poll is just for the Con-v-Lab marginals, the swing just those seats, and I’ve compared it to the average of national polls at the time of each marginals poll’s fieldwork.

[Table: marginalstable3 – swing in Con-v-Lab marginals compared with national polls at the time of fieldwork, by pollster]

As you can see – three companies, three completely different stories. ComRes show the Conservatives doing much better than nationally in these key marginals. Lord Ashcroft shows very little difference between the national picture and equivalent marginals. The Survation poll today showed Labour doing much better in similar marginals.

Some of the differences will be methodological. For example, Ashcroft uses a two-stage question wording to try and coax out local considerations, though frankly it makes very little difference in Con-v-Lab marginals. I don’t think ComRes prompt for UKIP in their marginal polls, but Survation and Ashcroft both do. The weighting regimes are very different – I think Ashcroft weights by age, gender, social class and recalled vote; ComRes weight by the same plus housing tenure; Survation appear to weight only by age and gender, with no political or socio-economic weights. Lord Ashcroft’s polls are also, it’s worth noting, of a substantially larger size – they are an aggregate of full size single constituency polls, rather than a poll of a group of marginals.

You pays your money, you takes your choice. My own expectation is that, if there is a relatively small Con to Lab swing, the Conservatives will do very slightly better in the marginals thanks to the double-incumbency effect – the historical evidence for such an effect is extremely strong and I see no obvious reason for it not to happen this time round. If, on the other hand, there is a hefty swing towards Labour then it might well be cancelled out by a stronger performance in Labour target seats, like we saw for the Conservatives in 2010 or Labour in 1997. Time will tell. Either way, I wouldn’t expect Con-Lab marginals to perform radically differently to the national picture – if there’s a systemic difference between marginals and the country as a whole, I’d expect it to be a small one. In a close election that could still be the difference between a majority and a hung Parliament, so don’t underestimate its potential importance, but it would be a remarkable election if the swing in marginal seats really was 4 or 5 points bigger or smaller than the national picture.


Two new polls out today, both good for Labour. Populus this morning had toplines of CON 31%, LAB 38%, LDEM 9%, UKIP 14% (tabs here). Lord Ashcroft’s weekly poll has topline figures of CON 27%, LAB 34%, LDEM 11%, UKIP 15% (tabs here).

The Ashcroft poll comes after a poll last week that showed the Conservatives 2 points ahead, and has naturally provoked some comment about volatility. In one sense it’s fair comment – Ashcroft polling has been volatile. In another sense it’s not – Ashcroft’s polling hasn’t necessarily been any more volatile than you should expect, it’s just that we sometimes have slightly unrealistic expectations of how accurate a poll of 1000 people should be!

The standard margin of error on a poll of 1000 people is plus or minus 3 points. However, voting intention figures aren’t based on the whole sample, only on those who give a voting intention – in a phone sample of 1000 that’s typically 500 or so people, giving a margin of error of plus or minus 4 points. I should add that the margin of error is based upon what the margin would be in a pure random sample. This is very much a polite fiction – no voting intention polls are actual pure random samples. Many are from internet panels, even quasi-random phone polls aren’t actually random because of low response rates. Weighting effects would also change the actual margin of error.
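As a rough sketch of where those figures come from – this is the textbook formula for a pure random sample, which as noted is a polite fiction for real polls:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a simple random sample, in percentage points.

    p=0.5 is the worst case, the convention behind the usual
    "plus or minus 3 points" rule of thumb. Real polls are not pure
    random samples, so treat this as a lower bound on the uncertainty.
    """
    return z * math.sqrt(p * (1 - p) / n) * 100

print(round(margin_of_error(1000), 1))  # ~3.1 points
print(round(margin_of_error(500), 1))   # ~4.4 points
```

Halving the effective sample size from 1000 to 500 doesn’t double the margin of error – it grows with the square root, from roughly 3 points to roughly 4.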

Looking at Ashcroft’s nine regular polls to date, the average level of Labour support has been 33%, and all nine polls have been within 2 points of this. The average Lib Dem support has been 8.5%, and all nine polls have been within 2.5% of this. What’s made them look erratic is the level of Tory support, which has averaged 29%, but has varied between 25% and 34% – two of Ashcroft’s Tory scores have differed from the average by 4 points, one by 5 points. This assumes that there hasn’t been any genuine movement in Tory support, when it’s possible there has. Ashcroft’s highest Tory score came in his first poll in mid-May, at a time when ICM also showed a Tory lead and YouGov a neck-and-neck. Ashcroft’s lowest Tory score came just after the European results when UKIP had a post-European election boost.
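To put those Tory deviations in context, a rough calculation using the figures above (and again assuming pure random sampling):

```python
import math

def sampling_sd(p, n):
    """Standard deviation of a poll's estimate of a share p (sample size n),
    in percentage points, under pure random sampling."""
    return math.sqrt(p * (1 - p) / n) * 100

# Average Tory share across the nine polls was 29%, on roughly 500
# voting-intention responses per poll (figures from the text above).
sd = sampling_sd(0.29, 500)
print(round(sd, 1))  # ~2.0 points

# A 4-point deviation is therefore about two standard deviations -
# uncommon for any single poll, but not surprising somewhere in a run of nine.
print(round(4 / sd, 1))
```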

Bottom line is that while Ashcroft’s polls look erratic, they probably aren’t much more erratic than we should expect from topline figures based on 500 people. There isn’t anything strange about their methodology, nothing odd going on, it’s just the normal limits of how precise polling with a given sample size can be. And it’s a useful reminder of why we shouldn’t read too much into individual polls – it’s the underlying trend and average that count.