With MORI’s monthly figures, I often find myself adding a caveat that their polls do tend to be rather more volatile than their competitors’. In the past twelve months MORI has shown two shifts of 11 points in a party’s poll rating, one 9 point shift, one 8 point shift and two 7 point shifts. No other pollster has shown a single change of this size in their regular monthly polls. The average change in each party’s support in MORI’s monthly poll since the General Election has been 3.3 points, compared to 1.8 for ICM, 1.9 for Populus and 1.6 for YouGov. It’s pretty undeniable that MORI’s figures are more volatile. The question is why.
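For anyone who wants to reproduce the measure, the “average change” used throughout this piece is just the mean of the absolute month-on-month changes in a party’s rating. A minimal sketch, using made-up monthly figures rather than any real pollster’s series:

```python
def mean_abs_change(ratings):
    """Mean absolute difference between consecutive poll readings."""
    changes = [abs(b - a) for a, b in zip(ratings, ratings[1:])]
    return sum(changes) / len(changes)

# Hypothetical monthly ratings for one party, not real data
labour_series = [36, 39, 32, 35, 34]
print(mean_abs_change(labour_series))  # four changes: 3, 7, 3, 1
```

Note that five polls only yield four changes, which is why a short run of polls (as discussed below) is a thin basis for comparing pollsters.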

Until now I have put this down to two factors. Firstly MORI do not weight their sample politically and secondly they filter their headline figures based on likelihood to vote.

All the pollsters weight their samples by demographic factors, things like age, gender, employment, social class and so on. Hence all samples will contain, for example, 52% women and 48% men – just like the adult population as a whole. Therefore if, for example, more women than men voted Liberal Democrat, the poll wouldn’t be skewed against the Lib Dems by having too few women. The problem is that things like gender don’t correlate very well with how people vote. For this reason all the pollsters except MORI try to weight their sample politically, using something that does correlate well with how people vote. In the case of YouGov this is how people identify politically – 25% of a YouGov sample will be people who say they are Conservatives, 34% will be people who say they are Labour, 25.5% will be people who say they don’t identify with any party and so on. Because people might change their political identification, YouGov weight to the shares they found in May 2005, using the data people told them at the time.
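The mechanics of this kind of weighting are straightforward: each respondent in a group gets a weight of the target share divided by the group’s share of the raw sample, so the weighted sample matches the population (or, in YouGov’s case, the May 2005 party-ID shares). A hypothetical sketch with made-up figures:

```python
from collections import Counter

def cell_weights(sample, targets):
    """Return a per-group weight that makes the sample match the targets."""
    counts = Counter(sample)
    n = len(sample)
    return {g: targets[g] / (counts[g] / n) for g in targets}

# e.g. a raw sample that is only 45% women, weighted to the 52/48 split
sample = ["F"] * 45 + ["M"] * 55
weights = cell_weights(sample, {"F": 0.52, "M": 0.48})
# Each woman now counts for a little more than one person,
# each man for a little less, and the weighted total is unchanged.
```

The same arithmetic applies whether the groups are gender bands, age bands or party identifiers; only the choice of targets differs between pollsters.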

YouGov use a panel, allowing them to use data people gave them a year ago. Pollsters who sample from the wider population don’t have that luxury – instead ICM and Populus weight according to how people voted in 2005. At the last election we know what proportion of people voted Labour, voted Tory and so on. ICM and Populus ask people how they voted last time, and weight their sample so it matches…or they would do, except that people aren’t very good at remembering how they actually voted. People who didn’t vote claim they did, people say how they would have liked to vote with hindsight rather than how they actually voted, people forget they voted Lib Dem in protest and so on – it’s known as “false recall”. ICM and Populus take account of this by using formulas based on their average results and the actual vote. In contrast MORI view the problem of false recall as intractable, and therefore do not use any political weighting.
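The exact false-recall formulas are the pollsters’ own and aren’t spelled out here, but to make the idea concrete: one simple version of a “formula based on their average results and the actual vote” would be to set the weighting targets part-way between the real 2005 shares and the shares people typically recall. A purely illustrative sketch, with invented recall figures:

```python
# Real 2005 vote shares (approximate) versus hypothetical average
# recalled shares across recent samples - the recalled figures here
# are invented to illustrate false recall, not measured values.
actual_2005 = {"Lab": 0.36, "Con": 0.33, "LD": 0.23}
avg_recalled = {"Lab": 0.40, "Con": 0.31, "LD": 0.20}

# Illustrative adjustment: target the midpoint of actual and recalled,
# on the assumption that some of the recall error is genuine mis-memory
# that would also show up in the weighted sample.
targets = {p: (actual_2005[p] + avg_recalled[p]) / 2 for p in actual_2005}
```

Whatever the precise formula, the point is that the targets move only slowly, which is what makes ICM’s and Populus’s samples politically stable from month to month.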

What this difference means is that, while we cannot know for sure if ICM and Populus have got their respective weightings correct, the political make up of their samples is at least stable. If 40% of ICM’s sample in April say they voted Labour in 2005, about 40% of their May sample will also be people who say they voted Labour in 2005, ditto in June and July (ICM and Populus’s weightings do adapt and change over time, but it is a very gradual process). In contrast, without such weighting the political make up of MORI’s samples will differ within standard margins of error from month to month. One month 25% of the sample might be past Labour voters, the next month 29% and so on. Assuming there is a correlation between past and present voting intention, this means that in theory at least, MORI’s polls should be slightly more volatile.

The second reason is the way MORI factor in likelihood to vote. In reality turnout at the last election was about 61%. However in polls very few people say that they won’t vote – hence opinion polls are including the views of some people who, in reality, won’t actually vote. MORI, ICM and Populus all seek to deal with this by asking people to rate themselves on a scale of 1-10 on how likely they are to vote. The difference between their approaches is that ICM and Populus weight by the resulting figure – i.e. if someone rates their chances of voting at 10/10 they are included as is (given a weighting of 1.00), if someone says they are 8/10 likely to vote, they only count as 8/10 of a person who is 10/10 likely (given a weighting of 0.80), and so on. In contrast MORI use an all-or-nothing approach, including only those people who say they are 10/10 certain to vote and excluding other people entirely. The impact this might have on volatility is clear – using ICM or Populus’s approach, if people who were 10/10 certain to vote Conservative one month are now only 9/10 certain to vote, the effect on Conservative support in the poll’s headline figures would be relatively minor. Using MORI’s methodology, it would result in an apparent slump in support.
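The contrast between the two turnout treatments is easy to demonstrate with a toy example. The sketch below uses four invented respondents, each a (party, likelihood-to-vote) pair on the 1-10 scale; in “month 2” a single Conservative slips from 10/10 to 9/10 certain:

```python
def weighted_share(respondents, party):
    """ICM/Populus style: weight each respondent by likelihood / 10."""
    total = sum(l / 10 for _, l in respondents)
    return sum(l / 10 for p, l in respondents if p == party) / total

def filtered_share(respondents, party):
    """MORI style: count only the 10/10 certain, exclude everyone else."""
    certain = [p for p, l in respondents if l == 10]
    return certain.count(party) / len(certain)

month1 = [("Con", 10), ("Con", 10), ("Lab", 10), ("Lab", 8)]
month2 = [("Con", 10), ("Con", 9), ("Lab", 10), ("Lab", 8)]

# Under weighting, Con support barely moves (about 52.6% to 51.4%);
# under the all-or-nothing filter it slumps (66.7% to 50%).
```

Nothing about the underlying preferences changed between the two months; the filter alone manufactures a sixteen-point drop, which is exactly the mechanism Dr Mortimore points to below.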

At the weekend Dr Roger Mortimore of MORI put up his own explanation for the volatility, with some surprising figures. According to Dr Mortimore, it is all down to the turnout filtering, and nothing to do with the different approaches to weighting. MORI continue to publish their monthly figures without the filter they apply to their headline figures, so that people can continue to follow long-term trends from before the filter was adopted. Dr Mortimore has taken the average changes from those unfiltered figures during 2006, and finds that MORI’s average change there is only 1.9 points. Comparing this to ICM’s and Populus’s figures over the same time period, their average changes are 2.3 and 1.9 respectively – in other words, if you ignore their turnout filter MORI are less volatile than ICM and just as stable as Populus.

The clear implication of Dr Mortimore’s figures is that the volatility of MORI’s monthly figures has everything to do with how they deal with turnout and nothing to do with past vote weighting. I have to say that I am not convinced. There have only been seven polls so far this year (and therefore six changes), which is not a huge sample to base conclusions upon. Those six bits of data include the polls around the local elections, when everyone was recording big changes in the vote, obscuring volatility behind genuine change. If you take a longer data series, you start getting different results and MORI go back to being more volatile than the other pollsters, even using unfiltered figures. If you look at all the data since September 2005 the average change in YouGov’s polls is 1.6, Populus’s 1.9, ICM’s 2.0 and MORI’s unfiltered polls 2.56. If you look at all the data since the general election, YouGov’s average change is 1.6, ICM’s 1.8 and MORI’s unfiltered polls 2.4.

If you look back at the last Parliament – not such a good measure because YouGov and Populus only started half-way through – MORI’s unfiltered figures still seem to be more volatile than ICM and YouGov, with average changes of 1.9, 1.5 and 1.6 respectively. Populus were on 1.9, the same as MORI’s unfiltered figures, but their past vote weighting at the time was based on a formula that changed their weighting targets from week to week, meaning that their past vote weighting did not serve to dampen down volatility at all.

It looks to me as though the last six months have just happened to contain some comparatively stable unfiltered MORI results, while ICM have been uncharacteristically volatile. Of course, it could be that MORI have changed their sampling or weighting regime in some way we don’t know about and have hence become more stable. If so, well done! Unless that turns out to be the case though, I see no reason to take January as a cut-off point and I’m inclined to put Dr Mortimore’s figures down to just a statistical blip. Of course, everything is subject to change – it will be worth coming back to this in six months’ time and seeing if MORI’s figures have become more stable for some reason. Until then, looking at a longer data sequence even MORI’s unfiltered figures are more volatile, which in my own personal view still suggests that their volatility is a result of both their turnout filter and their approach to political weighting.

And finally, where does that leave us on how to treat MORI’s topline figures? MORI’s harsh turnout filter is based on the fact that actual turnouts are even lower than the percentage of people who say they are 10/10 certain to vote. Regardless of whether the reason for the volatility in MORI’s headline figures is a result of political weighting or the turnout filter, the fact remains their headline figures are more volatile, and will presumably continue to be so. That doesn’t mean the more stable unfiltered figures are preferable, as Dr Mortimore says “the proportion of a party’s supporters that turn out is as much a part of the equation that makes up a final election result as the number of supporters the party has in the first place, we believe it is also a more meaningful measure.” So keep on looking at the topline figures, but bear in mind they are a bit more volatile.

(While I’m on the article – I heartily endorse Dr Mortimore’s opinion of “polls of polls”, which he says “are akin to measuring England’s sporting performance by averaging the most recent scores of the football, rugby and cricket teams”. If all pollsters used exactly the same methodology polls-of-polls would be very useful, in effect creating one great big sample with a lower margin of error. Not all pollsters use the same methodology, though: some are factoring in turnout, some aren’t, some are accounting for how they think don’t knows might vote, some aren’t, and so on. Therefore while some of the difference between them is normal sample error, a lot of it is that different polling companies are measuring slightly different things.)
