Looking again at MORI’s review of their methodology, I spotted the rather interesting nugget that “the academic British Election Study (BES) from the 2005 general election […] has the advantage of having had the turnout of its individual respondents validated against the marked-up electoral register”. Indeed it does, and the data can be downloaded here, as long as you have some stats software to handle it.

People who were interviewed during the election campaign – both online and face-to-face – were contacted again after the election and asked how they actually ended up voting. People who said they were 10/10 likely to vote were indeed the most likely to say they had voted (97% of those interviewed online afterwards said they voted, as did 90% of the 10/10s interviewed face-to-face), and there is a pretty steady correlation between the two answers: the less likely you say you are to vote beforehand, the less likely you are to claim afterwards that you did.
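
To make that relationship concrete, here is a minimal sketch of how you might tabulate claimed turnout by stated likelihood from BES-style data. The column names and the toy figures here are my own illustrative assumptions, not the actual BES variables:

```python
import pandas as pd

# Hypothetical BES-style records: pre-election likelihood score (0-10)
# and post-election claimed vote (True/False). Toy data for illustration.
df = pd.DataFrame({
    "likelihood": [10, 10, 9, 7, 4, 0, 10, 9],
    "claimed_vote": [True, True, True, True, False, False, True, False],
})

# Share claiming to have voted, within each likelihood band
print(df.groupby("likelihood")["claimed_vote"].mean())
```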

But, you may ask, if we don’t trust their predictions of how likely they are to vote, why should we trust them when they later claim they voted? Well, we don’t need to. As MORI mentioned in their review, the BES also looked respondents up on the marked register to see if they actually voted. Of those people who claimed in the post-election face-to-face survey to have voted, the marked register revealed 5% were lying (or voted somewhere else). Of those people who claimed they did not vote in 2005, 3% were lying (or had been victims of personation!).
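
Those two misreporting rates are all you need to turn a claimed turnout figure into a validated estimate. A minimal sketch, using the 5% over-reporting and 3% under-reporting figures above (the function name is mine):

```python
def validated_turnout(claimed_share, over_report=0.05, under_report=0.03):
    """Adjust a claimed turnout share for validated misreporting.

    over_report:  share of claimed voters the marked register showed
                  had not voted (5% in the 2005 BES face-to-face data)
    under_report: share of claimed non-voters who had in fact voted (3%)
    """
    return claimed_share * (1 - over_report) + (1 - claimed_share) * under_report

# e.g. if 90% of a group claim to have voted:
print(validated_turnout(0.90))  # 0.858 – a little lower than claimed
```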

Whether or not people with postal votes actually voted cannot be discerned from the marked register, so they are shown separately on the graph. People who say they are 9/10 or 10/10 certain to vote over-estimate their likelihood of voting, and people who say they are certain not to vote under-estimate it… but broadly speaking, the more likely someone says they are to vote, the more likely they are to do so.

What there noticeably isn’t is a huge contrast between people who say they are 10/10 certain to vote and people who say they are 9/10 likely to vote. I can’t see any justification for a sharp cut-off like that – looking at these figures alone, Populus’s method of including everyone, but weighting them according to their likelihood of voting, seems the obviously correct approach; the sketch below illustrates the difference.
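
Here is a minimal sketch contrasting a hard 10/10 cut-off with Populus-style likelihood weighting. The score/10 weight and the toy respondents are my own assumptions – the post does not spell out the exact Populus weighting scheme:

```python
from collections import Counter

# Hypothetical respondents: (self-reported likelihood 0-10, vote intention)
respondents = [(10, "Lab"), (10, "Con"), (9, "Con"), (7, "Lab"), (4, "LD")]

def cutoff_shares(data, threshold=10):
    """Hard cut-off: count only respondents at or above the threshold."""
    votes = Counter(party for score, party in data if score >= threshold)
    total = sum(votes.values())
    return {party: n / total for party, n in votes.items()}

def weighted_shares(data):
    """Include everyone, weighted by stated likelihood (score/10 –
    an illustrative assumption, not necessarily Populus's exact weights)."""
    weights = Counter()
    for score, party in data:
        weights[party] += score / 10
    total = sum(weights.values())
    return {party: w / total for party, w in weights.items()}

print(cutoff_shares(respondents))   # only the two 10/10 respondents count
print(weighted_shares(respondents)) # 9/10s count almost as much as 10/10s
```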


3 Responses to “Testing likelihood to vote”

  1. I am sure I made a similar comment (about Internet polls reflecting the more committed voter) to Mark Senior a few months ago (in a YouGov blog). So no surprises for me. :)

    That said, your graphic is hard to read. The closeness of postal and proxy votes (in palette as well as in dimensions) initially led me to believe that proxy-voting was growing. [OK, I mistook the likelihood dimension (x-axis) for a time axis, but I have since examined it more closely.]

    Needless to say, my annual appointment at the optician’s is due. Better phone up a certain optician’s, hey Virgil…!

  2. You use the phrase “10/10 likely to vote” and also the phrase “10/10 certain to vote”. Which one was used in the survey? I trust it was “likely”?

    Surely thinking people could never say 10/10 they were certain to vote (as all sorts of events might prevent them), though they could say 10/10 they were likely to vote.

    George

  3. Hi George – yep, the question is likelihood to vote, with the scale running from “very unlikely” to “very likely”.