Political Weighting

MORI’s most recent political monitor included a question asking how people voted at the last election. Since MORI don’t use past vote for weighting purposes it isn’t a question they regularly ask (or at least, it isn’t one they regularly publish), so it’s a good opportunity to see just how much difference political weighting makes to a poll.

I mention political weighting in polls a lot here, but it’s been a long time since I’ve looked at what it is, why it is done and what difference it makes. In short, all polls use methods that are supposed to generate representative samples, i.e. they have the correct number of people from each region in the country, the right spread of people in different age brackets, the right mix of men and women and so on. No method is perfect though, so weighting is used to iron out the differences. For example, amongst UK adults 52% of the population are female and 48% male. If you had a sample that was 55% female and 45% male you’d have too many women in your sample, so you would weight them down – specifically, you’d make every female respondent count as 0.95 of a person, and every man count as 1.07 of a person, so that when you totalled everything up it would be the equivalent of having 52% women and 48% men.
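For anyone who wants to see the arithmetic spelled out, here is a minimal sketch in Python using the gender figures above (the variable names are mine, not anything a pollster actually uses):

```python
# Known population targets (e.g. from the census) and observed sample shares.
population = {"female": 0.52, "male": 0.48}
sample     = {"female": 0.55, "male": 0.45}

# Each respondent's weight is simply target share / observed share.
weights = {group: population[group] / sample[group] for group in population}
print(weights)  # female ~0.945, male ~1.067 -> roughly the 0.95 and 1.07 above

# Check: the weighted sample now matches the population targets.
for group in population:
    print(group, round(sample[group] * weights[group], 2))  # 0.52 and 0.48
```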

Political weighting is more controversial and more difficult to do because it isn’t clear what the correct proportions are. On age and gender we have figures from the census, so we know what the real demographics are. For people’s politics we don’t – the best we have is the last general election. We know that in May 2005 around 33% of those who voted backed the Tories, around 36% backed Labour and so on. In theory a pollster should be able to ask respondents how they voted in the last election and then weight the sample so it matches. The problem with this is “false recall” – in a panel study, i.e. one that asks the same group of people how they voted at the last election, then asks them the same question 6 months later, and then another 6 months later, they should give the same response each time: we can’t, after all, go back and change how we voted. In practice, when it has been tried, the answers do change over time. People who didn’t actually vote start pretending they did, people who voted tactically give the name of the party they really supported, people say how they’d have liked to have voted rather than how they really did, people who voted for minor parties forget, and so on. Because past vote recall isn’t fixed in stone and changes in this way, it is arguably unsuitable for use as a weighting variable – we never know what the correct picture we should be aiming at is.

So why do all the main phone pollsters still do it? Because they think it’s better than the alternative of doing nothing. When it comes to actual elections, polls without weighting (or some other major adjustment) grossly overestimate the level of Labour support. Without political weighting, if you ask how people voted at the last election you will tend to get answers around CON 26%, LAB 48%, LDEM 17%. Given that this is 7 points below what the Conservatives actually got at the last election and 12 points above what Labour got, it seems self-evident that such a sample is grossly over-sampling Labour supporters and grossly under-representing Conservative supporters. Some of that discrepancy though is not due to a biased sample but to false recall, so it isn’t as simple as weighting to the actual result of the last election. Instead the pollsters who weight by past vote need to estimate what levels of recalled vote a truly representative sample would produce, and weight to that. Populus do this by assuming that the discrepancy is split roughly 50/50 between sample bias and false recall, and weighting to a point half way between the actual result and the average recalled vote in their unweighted samples. ICM do something similar, but put the point closer to the actual result.
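As a rough sketch of how that half-way target might be calculated (illustrative Python – the 2005 Lib Dem share and the exact interpolation are my assumptions, and the pollsters’ real targets are based on their own rolling averages of recalled vote):

```python
# Approximate 2005 results and typical unweighted recalled vote, from the text.
actual_2005  = {"CON": 33, "LAB": 36, "LDEM": 23}   # LDEM share is approximate
avg_recalled = {"CON": 26, "LAB": 48, "LDEM": 17}

def weighting_target(actual, recalled, alpha=0.5):
    # alpha=0.5 is the rough 50/50 split attributed to Populus above;
    # an ICM-style target would use a higher alpha, closer to the actual result.
    return {party: alpha * actual[party] + (1 - alpha) * recalled[party]
            for party in actual}

print(weighting_target(actual_2005, avg_recalled))
# {'CON': 29.5, 'LAB': 42.0, 'LDEM': 20.0}
```

The targets the pollsters actually use (see the figures below) come out rather more favourable to the Conservatives than this toy calculation, because they are based on each company’s own recalled vote figures and estimates rather than the round numbers here.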

Weighting by past vote (or other political weighting) also has the advantage of stability – the political make-up of the sample each month is, in theory at least, the same, so if Labour go up 4 points from last month we can be confident that they have actually gone up, rather than us just having a sample with more past Labour voters in it (within the normal bounds of sampling error and so on, of course).

So where does MORI come into this? MORI don’t weight by past vote because of concerns about the volatility of past vote recall. They are concerned that past vote recall itself can change from month to month – ICM and Populus’s figures suggest that it is relatively stable over time, but that doesn’t mean it can’t shift in the future. MORI don’t normally use phone polling, they use quota sampling, so there is actually no reason to think their raw samples will resemble the phone samples used by ICM and Populus. Last month’s figures though suggest that they do – MORI’s sample had a recalled vote of CON 27%, LAB 47%, LDEM 19%. Populus’s last poll had unweighted figures of CON 29%, LAB 47%, LDEM 16%; ICM’s last unweighted figures were CON 27%, LAB 47%, LDEM 21%. As you can see, in terms of past vote, all three samples were pretty similar. The difference is that ICM and Populus then both weighted their samples to reduce the proportion of past Labour voters and increase the proportion of past Conservative voters, so that it was closer to what actually happened at the last election. Specifically, Populus weighted to shares of CON 32%, LAB 39%, LDEM 21% and ICM weighted to shares of CON 32%, LAB 39%, LDEM 22%. Hence ICM and Populus ended up using samples that contained far more Conservative supporters and far fewer Labour supporters than MORI’s sample did.
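To make that final step concrete, here is a hypothetical sketch of what those targets imply for the weights themselves, using the ICM figures quoted above (each past voter is simply scaled by target share divided by observed share):

```python
observed = {"CON": 27, "LAB": 47, "LDEM": 21}  # ICM's unweighted recalled vote
target   = {"CON": 32, "LAB": 39, "LDEM": 22}  # the shares ICM weighted to

weights = {party: target[party] / observed[party] for party in observed}
for party, w in weights.items():
    print(party, round(w, 2))
# CON 1.19, LAB 0.83, LDEM 1.05 -> each past Labour voter counts for
# less than a person, each past Conservative voter for rather more.
```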

The topline voting intention figures published in the newspapers by the three pollsters weren’t that different – MORI and ICM both gave the Tories a 7 point lead, Populus a lower 4 point lead – though that was after Blair’s resignation. The reason for this is that MORI apply a very strict filter by likelihood to vote – ignoring everyone who doesn’t say they are 10/10 certain to vote – which vastly boosts the Conservatives. Without that filter Labour would have had a 2 point lead.
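A minimal sketch of that filter (the respondents here are invented, purely to show the mechanics):

```python
# Only respondents who say they are 10/10 certain to vote count towards
# MORI's headline voting intention figure.
respondents = [
    {"intention": "LAB",  "likelihood": 7},
    {"intention": "CON",  "likelihood": 10},
    {"intention": "LAB",  "likelihood": 10},
    {"intention": "CON",  "likelihood": 9},
    {"intention": "LDEM", "likelihood": 10},
]

certain_voters = [r for r in respondents if r["likelihood"] == 10]
print(len(certain_voters), "of", len(respondents), "respondents pass the filter")
```

Since certainty to vote is not evenly spread across the parties’ supporters, which respondents survive the filter can shift the topline figures substantially – in this case enough to turn a 2 point Labour lead into a 7 point Conservative one.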

Via various different adjustments and filters the pollsters all arrive at roughly similar figures for voting intention. The thing to remember is the effect on all the other political questions – there is no filtering by likelihood to vote on things like approval figures for party leaders, or whether X or Y would make a good Prime Minister. So remember, when you are looking at MORI figures on David Cameron’s approval ratings, or which party would be best on pensions or whatever, they are the opinions of a sample in which around 47% of the people who say they voted claim they voted Labour. When you see the same questions in an ICM poll they are the opinions of a sample in which only 39% of respondents who say they voted say they voted Labour. It’s also worth keeping a beady eye on quick questions done by the phone pollsters on omnibus polls – in ICM and Populus’s monthly polls for the Guardian and the Times, which include voting intention questions, the sample will always have been weighted by past vote. In polls without voting intention questions they might not have been weighted as such, and they too might have rather more Labour supporters than you’d normally expect.

