Polling news round up

Labour leadership

Regular polling remains sparse, given the ongoing inquiry and the odd sort of political interregnum we’re in with Labour yet to elect their new leaders, but there have been a couple of polls on the Labour contest and the EU. A new ORB poll on the Labour leadership earlier in the week showed Andy Burnham was seen as the candidate most likely to help Labour’s chances at the next election (36%), followed by Liz Kendall on 25%. Full tabs are here.

I would be extremely cautious of polling about the Labour leadership election. Essentially there are two real questions about the Labour leadership – who is going to win, and who would be best at winning votes for Labour. For the first, we need a poll of Labour party members, and we don’t have a recent one (there is some data from a YouGov poll of party members for Tim Bale & Paul Webb, but that was done straight after the election, before the candidates were clear). For the second, I suspect any data is fatally flawed by the public’s low awareness of the candidates – right now, polls about the Labour leadership are little more than name recognition contests. Looking at the tables for the ORB poll, it looks to me as if the main reason the prospective leaders scored so highly is that the question didn’t offer a don’t know option – if it had, I bet the don’t knows would have had a runaway victory.

Worth looking at as a corrective is this ICM poll on the Labour leadership that asked people to identify photos of Andy Burnham, Yvette Cooper, Liz Kendall and Jeremy Corbyn. 23% were able to identify Burnham, 17% Cooper, 10% Kendall and 9% Corbyn. Essentially, if 90% of the respondents to a poll can’t even recognise a photo of Liz Kendall or Jeremy Corbyn, how good a judge are they going to be on what sort of Labour leader they’d be? “I’m a particular fan of the one I’ve never heard of and know nothing at all about” said no one, ever.

Sky poll on the EU

SkyNews have released a poll they have carried out themselves amongst a panel of BSkyB subscribers. The poll itself shows nothing particularly new (people think the EU is good for the British economy by 39% to 31%, etc, etc), but the approach is interesting – it’s a proper effort to get a representative sample from their subscriber database, weighted by age, gender, past vote, Experian segment (as an alternative to class), ethnicity, tenure and so on. It is, however, unavoidably made up only of Sky subscribers, which will bring its own biases. The question is to what extent those biases can be cancelled out by the weighting and sampling. We shall see. The tables for this first poll are here.

Parliamentary debates

Last week there were two Parliamentary debates on regulating opinion polls. The first, last Thursday, was prompted by Lord Lipsey and concerned whether polling companies needed regulating to prevent them asking leading and biased questions – though it was largely taken up with the specifics of a single poll on mitochondrial donation. The other was the second reading of George Foulkes’s private member’s bill on regulating opinion polls, which included a good response from Andrew Cooper of Populus. Lord Bridges stated for the government that they had no plans to regulate polls. Lord Foulkes’s bill was nodded through to the committee stage, so it will trundle on for a little longer.

Herding pollsters

Finally there’s a great piece by Matt Singh on pollster herding here. Matt mentions some of the possible reasons for herding, but more importantly actually does the sums on whether there was any herding… and finds there wasn’t. The spread between different pollsters in the final polls was very much in line with what you’d expect to find.


On Friday the BPC/MRS inquiry into the polls at the 2015 election started rolling. The inquiry team had their first formal meeting in the morning, and in the afternoon there was a public meeting addressed by representatives of most of the main polling companies. It wasn’t a meeting intended to produce answers yet – it was all still very much work in progress, and the inquiry itself isn’t due to report until next March. (Patrick Sturgis explained what some see as a very long timescale with reference to the need to wait for some useful data sources, like the BES face-to-face data and the data validated against marked electoral registers, neither of which will be available until later in the year.) There will, however, be another public meeting sometime before Christmas, when the inquiry team will present some of their initial findings. Friday’s meeting was for the pollsters to present their initial thoughts.

Seven pollsters spoke at the meeting: ICM, Opinium, ComRes, Survation, Ipsos MORI, YouGov and Populus. There was considerable variation in how much they said – some companies offered some early changes they were making, others only went through possibilities they were looking at rather than offering any conclusions. As you’d expect there was a fair amount of crossover. Further down I’ve summarised what each individual company said, but there were several things that came up time and again:

  • Most companies thought there was little evidence of late swing being a cause. Most of the companies had done re-contact surveys, reinterviewing people surveyed before the election and comparing their answers before and afterwards to see whether they actually changed their minds after the final polls. Most found either little change, changes that cancelled each other out, or a negligible movement to the Tories. Only one of the companies who spoke thought it was a major factor.
  • Most of the pollsters seemed to be looking at turnout as a major factor in the error, but this covered more than one root cause. One was people saying they will vote but then not doing so, and this not being adequately dealt with by the existing 0-10 models of weighting and filtering by likelihood to vote (a minimal sketch of that standard approach follows this list). If that is the problem, the solution may lie in more complicated turnout modelling, or in using alternative questions to try to identify those who really will vote.
  • However, several pollsters also talked about turnout problems coming not from respondents inaccurately reporting whether they vote, but from pollsters simply interviewing the sort of people who are more likely to vote, with this affecting some groups more than others. If that’s the cause, then it is more a problem of improving samples, or of addressing the over-representation of politically engaged people in samples.
  • One size doesn’t necessarily fit all: the problems affecting phone pollsters may turn out to be different from those affecting online pollsters, and the solutions that work for one company may not work for another.
  • Everyone was very wary of the danger of just artificially fitting the data to the last election result, rather than properly identifying and solving the cause(s) of the error.
  • No one claimed they had solved the issue; everyone spoke very much about it being a work in progress. In many cases I think the factors they presented were not necessarily the ones they will finally end up identifying… just those where they had some evidence to show so far. Even those like ComRes who have already drawn some initial conclusions and made changes in one area were very clear that their investigations were continuing, that they were still open-minded about possible reasons and conclusions, and that there were likely more changes to come.
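To make the repeated references to “0-10 models” concrete, here is a minimal sketch of that standard likelihood-to-vote adjustment – illustrative only, with an invented threshold and invented respondents, not any individual pollster’s actual method:

```python
# Minimal sketch of a standard 0-10 likelihood-to-vote adjustment:
# each respondent counts in proportion to their self-reported chance
# of voting, and low scorers can also be filtered out entirely.
from collections import defaultdict

respondents = [
    # (stated vote, self-reported likelihood to vote, 0-10)
    ("CON", 10), ("LAB", 8), ("LAB", 3), ("CON", 9), ("UKIP", 5),
]

MIN_SCORE = 5  # hypothetical filter threshold

weighted = defaultdict(float)
for party, score in respondents:
    if score < MIN_SCORE:
        continue                   # filter out unlikely voters
    weighted[party] += score / 10  # weight the rest by likelihood

total = sum(weighted.values())
shares = {party: round(100 * w / total, 1) for party, w in weighted.items()}
print(shares)  # {'CON': 59.4, 'LAB': 25.0, 'UKIP': 15.6}
```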

Martin Boon of ICM suggested that ICM’s final poll, showing a one point Labour lead, was probably a bit of an outlier, and in that limited sense a bit of bad luck – ICM’s other polls during the campaign had shown small Conservative leads. He suggested this could be connected to doing the fieldwork for the final poll during the week: ICM’s fieldwork normally straddles the weekend, and the political make-up of the C1/C2s in his final sample was significantly different from their usual polls (they broke for Labour, when ICM’s other campaign polls had them breaking for the Tories). Martin has already published some of the same details here. Bad luck aside, though, he was clear that there was a much deeper problem: the fundamental error that has affected polls for decades – a tendency to overestimate Labour – had re-emerged.

ICM did a telephone recall poll of 3000 people who they had interviewed during the campaign. They found no significant evidence of a late swing, with 90% of people reporting they voted how they said they would. The recall survey also found that don’t knows split in favour of the Conservatives and that Conservative voters were more likely to actually vote… ICM’s existing reallocation of don’t knows and 0-10 weighting by likelihood to vote dealt well with this, but ICM’s weighting down of people who didn’t vote in 2010 was not, in the event, a good predictor (it didn’t help at all, though it didn’t hurt either).
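For readers unfamiliar with the reallocation adjustment mentioned above, the flavour of it is that a share of don’t knows and refusers is credited back to the party they say they voted for last time. The sketch below is illustrative only – the 50% fraction and all the numbers are my assumptions, not ICM’s exact rule:

```python
# Illustrative sketch of a "don't know" reallocation adjustment of the
# kind described above (the 0.5 fraction and the counts are invented):
# a share of don't knows is added back to each respondent's past-vote party.
current = {"CON": 340, "LAB": 330, "OTHER": 130}   # stated intentions
dont_knows_by_past_vote = {"CON": 120, "LAB": 80}  # DKs, by 2010 vote

REALLOCATE = 0.5  # assumed fraction of DKs returning to their old party

adjusted = dict(current)
for party, n in dont_knows_by_past_vote.items():
    adjusted[party] += REALLOCATE * n

total = sum(adjusted.values())
print({p: f"{100 * v / total:.1f}%" for p, v in adjusted.items()})
# {'CON': '44.4%', 'LAB': '41.1%', 'OTHER': '14.4%'}
```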

Martin’s conclusion was that “shy Tories” and “lazy Labour” were NOT enough to explain the error, and that there was probably some deeper sampling problem facing the whole industry. Typically ICM has to ring 20,000 phone numbers in order to get 1,000 responses – a response rate of 5% (though that will presumably include numbers that don’t exist, etc) – and he worried again about whether such tools can produce a representative sample.

Adam Drummond of Opinium also provided data from their recontact survey on the day of the election. They too found no evidence of any significant late swing, with 91% of people voting how they said they would. Opinium identified a couple of specific issues with their methodology that went wrong. One was that their age weighting was too crude – they used to weight age using three big groups, with the oldest being 55+. They found that within that group there were too many people in their 50s and 60s and not enough in their 70s and beyond, and that the much older group were more Tory. Opinium will correct that by using more detailed age weights, with over-75s weighted separately. They also identified failings in their political weighting that weighted the Greens too high, and will correct that now they have the 2015 results to calibrate against.
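As a toy illustration of the age-band problem (all the numbers below are invented, not Opinium’s data): standard post-stratification sets each band’s weight to its population share divided by its sample share, so splitting the over-75s out into their own band is what allows them to be weighted up at all:

```python
# Toy example of why a coarse "55+" band can go wrong: if the sample
# under-recruits the over-75s specifically, a single 55+ weight can't fix it.
population = {"18-34": 0.28, "35-54": 0.34, "55-74": 0.26, "75+": 0.12}
sample     = {"18-34": 0.30, "35-54": 0.35, "55-74": 0.29, "75+": 0.06}

# Standard post-stratification: weight = population share / sample share.
weights = {band: population[band] / sample[band] for band in population}
for band, w in weights.items():
    print(f"{band}: weight {w:.2f}")
# The 75+ weight (2.00) shows how heavily that under-sampled, more
# Conservative group has to be weighted up once it gets its own band.
```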

These were side issues though: Opinium thought the main issue was one of turnout or, more specifically, of interviewing people who are too likely to vote. If they weighted the different age and social class groups to the turnout proportions suggested by MORI’s post-election estimates it would have produced figures of CON 37%, LAB 32%… but of course, you can’t weight to post-election turnout data before an election, and comparing MORI’s data across past elections, the level of turnout in different groups changes from election to election.
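For anyone who wants to see the mechanics, this is the sort of arithmetic involved – the group sizes, turnout rates and vote shares below are entirely invented for illustration. Each group’s contribution to the topline is scaled by its assumed turnout:

```python
# Hypothetical arithmetic (invented numbers) showing how weighting
# groups by differential turnout shifts the topline figures.
groups = {
    # group: (share of sample, assumed turnout, CON share, LAB share)
    "ABC1 55+": (0.25, 0.78, 0.48, 0.26),
    "ABC1 <55": (0.30, 0.65, 0.38, 0.34),
    "C2DE 55+": (0.20, 0.70, 0.36, 0.38),
    "C2DE <55": (0.25, 0.52, 0.28, 0.44),
}

def topline(party_index):
    # Effective contribution of a group = sample share x turnout.
    num = sum(s * t * vote[party_index] for s, t, *vote in groups.values())
    den = sum(s * t for s, t, *_ in groups.values())
    return 100 * num / den

print(f"CON {topline(0):.1f}%, LAB {topline(1):.1f}%")
# CON 38.6%, LAB 34.5% with these made-up inputs: the lower-turnout,
# more Labour groups count for less once turnout is applied.
```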

Looking forwards, Opinium are going to correct their age and political weightings as described, and are considering whether to weight different age/social groups differently for turnout, or perhaps to try priming questions before the main voting intention question. They are also considering how to reach more unengaged people – they already have a group in their political weighting for people who don’t identify with any of the main parties… but that isn’t necessarily the same thing.

Tom Mludzinski and Andy White of ComRes offered an initial conclusion: that there was a problem with turnout. Between the 2010 and 2015 elections actual turnout rose by 1%, but the proportion of people who said they were 10/10 certain to vote rose by 8%.

Rather than looking at self-reported levels of turnout in post-election surveys, ComRes ran regressions of actual constituency turnout on constituencies’ demographic profiles, finding the usual patterns: higher turnout in seats with more middle class and older people, lower turnout in seats with more C2DE and younger voters. As an initial measure they have introduced a new turnout model that weights people’s turnout based largely upon their demographics.
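A rough sketch of that kind of analysis – the data and the choice of variables below are invented for illustration, not ComRes’s actual model – is to fit turnout against seat demographics, then use the fitted coefficients to assign a turnout probability to a respondent’s demographic profile:

```python
# Sketch (invented data): regress actual constituency turnout on the
# demographic make-up of the seat, then score a respondent's profile.
import numpy as np

# Columns: proportion AB, proportion C2DE, proportion aged 65+
X = np.array([
    [0.30, 0.25, 0.20],
    [0.15, 0.45, 0.15],
    [0.22, 0.35, 0.25],
    [0.10, 0.55, 0.12],
])
turnout = np.array([0.72, 0.58, 0.68, 0.52])  # observed turnout per seat

# Ordinary least squares with an intercept term.
X1 = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(X1, turnout, rcond=None)

# Predicted turnout for a hypothetical respondent's demographic profile.
profile = np.array([1.0, 0.20, 0.40, 0.18])
print(f"predicted turnout weight: {profile @ coef:.2f}")  # about 0.63 here
```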

ComRes have already discussed this in more detail than I have space for on their own website, including many of the details and graphs they used in Friday’s presentation.

Damian Lyons Lowe of Survation discussed their late telephone poll on May 6th that had produced results close to the actual result, whether through timing or through the different approach to telephone sampling it used. Survation suggested a large chunk of the error was probably down to late swing – their recontact survey had found around 85% of people saying they voted the way they had said they would, but those who did change their minds produced a movement to the Tories that would account for some of the error (it would have moved the figures to a 3 point Conservative lead).

Damian estimated that late swing made up 40% of the difference between the final polls and the result, with another 25% coming from errors in weighting. The remaining error, he speculated, could be caused by “tactical Tories” – people who didn’t actually support the Conservatives, but voted for them out of fear about a hung Parliament and SNP influence, and wouldn’t admit this to pollsters either before or after the election – pointing to the proportion of people who refused to say how they voted in their re-contact survey.

Tantalisingly, Damian also revealed that they were going to be able to release some of the private constituency polling they did during the campaign for academic analysis.

Gideon Skinner of Ipsos MORI was still thinking largely along the lines of Ben Page’s presentation in May, which was (perhaps a little crudely!) summarised as “lazy Labour”. MORI’s thinking is that their problem was not understating Tory support, but overstating Labour support. Like ComRes, they noted how the past relationship between stated likelihood to vote and actual turnout had weakened since the last election. At previous elections actual turnout had been about 10 points lower than the proportion of people who said they would definitely vote; at this election the gap was 16 points.

Looking at the difference between people’s stated likelihood to vote in 2010 and their answers this time round, the big change was amongst Labour voters. Other parties’ voters had stayed much the same, but the proportion of Labour voters saying they were certain to vote had risen from 74% to 86%. Gideon said this had been noticed at the time (and that MORI had written about it as an interesting finding!), but it had seemed perfectly plausible that, with the Labour party in opposition, their supporters would become more enthusiastic about voting to kick out a Conservative government than they had been at the end of a third-term Labour government. Perhaps in hindsight it was a sign of a deeper problem.

MORI are currently experimenting with including how regularly people have voted in the past as an additional variable in their turnout model, as we discussed in their midweek poll.

Joe Twyman of YouGov didn’t present any conclusions yet, just went through the data they are using and the things they are looking at. YouGov did the fieldwork for two academic election surveys (the British Election Study and the SCMS) as well as their daily polling, and all three used different question ordering (daily polling asked voting intention first, the SCMS after a couple of questions, the BES after a bank of questions on important issues, which party is more trusted and party leaders), so they will allow testing of the effect of “priming questions”. YouGov are looking at the potential for errors like “shy Tories”, the geographical spread of respondents (are there the correct proportions of respondents in Labour and Conservative seats, safe and marginal seats?), whether respondents to surveys are too engaged, whether there is a panel effect, and how to deal with turnout (including using the validated data from the British Election Study respondents).

Andrew Cooper and Rick Nye of Populus also found no evidence of significant late swing. Populus did their final poll as two distinct halves and found no difference between the fieldwork done on the Tuesday and the fieldwork done on the Wednesday. Their recontact survey a fortnight after the election still found support at CON 33%, LAB 33%.

On the issue of turnout Populus had experimented with more complicated turnout models during the campaign itself – using some of the methods that other companies are now suggesting. Populus had weighted different demographic groups differently by turnout using the Ipsos MORI 2010 data as a guide, and they also had tried using how often over 25s said they had voted in the past as a variable in modelling turnout. None of it had stopped them getting it wrong, though they are going to try and build upon it further.

Instead Populus have been looking for shortcomings in the sampling itself, examining other measures that have not generally been used in sampling or weighting but may be politically relevant. Their interim approach so far is to include more complex turnout modelling and to add disability, public/private sector employment and level of education to the measures they weight by, to try to get more representative samples. Using those factors would have given them figures of CON 35%, LAB 31% at the last election… better, but still not quite there.


Ipsos MORI’s monthly political monitor is out, their first since the election. Topline figures are CON 39%, LAB 30%, LDEM 9%, UKIP 8%, GRN 6%. As with other recent voting intention polls, the figures themselves are perhaps less interesting than the methodology changes. In the case of Ipsos MORI, they’ve made an adjustment to their turnout filter. In the past they used to take only those respondents who said they were 10/10 certain to vote, the tightest of all the companies’ approaches. Their new approach is a little more complex, filtering people based on how likely they say they are to vote at an election and how regularly they say they usually vote – now they include only people who say their likelihood to vote is 9/10 or 10/10 AND who say they usually or always vote or “it depends”. People who say they rarely, never or sometimes vote are excluded.
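Expressed as code, my reading of that rule (a sketch of the filter as described above, not MORI’s actual implementation) is:

```python
# Sketch of the new MORI-style turnout filter as described: keep only
# respondents scoring 9 or 10 out of 10 on likelihood to vote AND who
# say they usually or always vote, or that "it depends".
def passes_turnout_filter(likelihood: int, voting_habit: str) -> bool:
    """likelihood: 0-10 self-rating; voting_habit: self-described record."""
    keep_habits = {"always", "usually", "it depends"}
    return likelihood >= 9 and voting_habit in keep_habits

# People who say they "sometimes", "rarely" or "never" vote are excluded
# even if they claim to be 10/10 certain to vote this time.
print(passes_turnout_filter(10, "sometimes"))  # False
print(passes_turnout_filter(9, "usually"))     # True
```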

The impact of this doesn’t appear to be massive. We can tell from the tables that the old method would have produced similar results of CON 39%, LAB 29%, LDEM 10%, UKIP 8%, GRN 6%. In their comments on their topline results MORI are very explicit that this is just an interim measure, and that they anticipate making further changes in the future as their internal inquiry and the BPC inquiry continue.

Looking at the other questions in the survey, MORI also asked about the Labour leadership election, and found results in line with other polling we’ve seen so far… a solid lead for don’t know! Amongst the minority who expressed an opinion, Andy Burnham led on 15%, followed by Yvette Cooper on 14%, Liz Kendall on 11%, Jeremy Corbyn on 5% and a dummy candidate (“Stewart Lewis”) on 3%.


ComRes have released their first voting intention poll since the election, and have topline figures of CON 41%, LAB 29%, LDEM 8%, UKIP 10%, GRN 5%. Full details are here.

The ComRes poll also contains the first attempt at a methodology change to address the failings of the polls at the 2015 general election – though as ComRes make clear in their explanation, this is not ComRes’s final word on the topic: they are continuing their internal review and may make further changes too.

As with all the pollsters who use political weighting, the initial change is to move from past vote weighting using the 2010 election to past vote weighting using the 2015 election, something that would have been done anyway. The second change is a new model of turnout weighting. This is based on the theory that a cause of the error was people overestimating their likelihood to vote in an uneven way – that is, we all know people overestimate their likelihood of voting, but ComRes suggest they overestimate it unevenly: people in some social groups (who happened to support Labour this time) overestimated their likelihood to vote more than other groups, thus skewing the polls.

In the past almost all the pollsters accounted for likelihood to vote using a straightforward system: asking people to rate their likelihood to vote on a scale of 0 to 10, and then either filtering out those who gave a low score, weighting people according to how likely they said they were to vote, or a combination of the two. ComRes’s new method still filters out people who say they are less than 5/10 likely to vote, but after that bases likelihood-to-vote weighting on demographics, drawing on patterns of turnout at the general election – specifically, that turnout tends to be lower in areas of social deprivation and in areas with a high proportion of social classes DE and a low proportion of ABs.

The mechanics of this aren’t completely clear yet (I’ve asked ComRes for some more details, which I’ll update later), but essentially it looks as if younger and more working class respondents are assumed to be less likely to vote than they claim and are weighted downwards accordingly. It means, in effect, that the final headline voting intention figures are made up of 41% AB, 31% C1, 19% C2 and just 9% DE, so the effective sample, once it’s modelled for the sort of people who actually turn out to vote, is far more middle class than the pre-election samples that got it wrong.

The impact of the change is, as you might expect, to produce significantly more Conservative figures. In this particular poll it increased the Conservative lead from eight points to twelve points. In ComRes’s final pre-election poll it would have changed the result from a one point Tory lead to a five point Tory lead, significantly nearer what actually happened.

UPDATE: ComRes have got back to me with some more details of their turnout model. In my original version of this post I’d assumed ComRes were still weighting people according to their 0-10 score, but were adjusting this score based on demographics too. In fact ComRes are now only using the 0-10 score to filter out people who say they are less than 5/10 likely, otherwise the turnout weights are all based on demographics.
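Putting the update together with the description above, a minimal sketch of the scheme as I understand it (the demographic turnout probabilities below are invented for illustration, not ComRes’s figures) would be:

```python
# Sketch of the ComRes scheme as described in this post and the update:
# the 0-10 score is used purely as a filter at 5/10, and the turnout
# weight itself comes from the respondent's demographic group.
demographic_turnout = {  # hypothetical modelled turnout probabilities
    ("AB", "65+"): 0.82, ("AB", "18-34"): 0.60,
    ("DE", "65+"): 0.65, ("DE", "18-34"): 0.38,
}

def turnout_weight(likelihood: int, social_grade: str, age_band: str) -> float:
    if likelihood < 5:  # drop those who say they are under 5/10 likely
        return 0.0
    return demographic_turnout[(social_grade, age_band)]

print(turnout_weight(10, "DE", "18-34"))  # 0.38: weighted well down
print(turnout_weight(10, "AB", "65+"))    # 0.82: counts nearly in full
```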


After the first leaders debate there was a single poll showing Ed Miliband with a better approval rating than David Cameron. It produced a typical example of rubbish media handling of polls – everyone got all excited about one unusual poll and talked about it on news bulletins and so on, giving it far more prominence than the far greater number of polls showing the opposite. Nevertheless, it flagged up a genuine increase in Ed Miliband’s ratings.

I talk about leader ratings, but the questions different companies ask actually vary greatly. Opinium, for example, ask if people approve or disapprove of what each leader is doing, YouGov ask if they are doing well or badly, ICM if they are doing a good or bad job. To give an obvious example of how these could produce different figures: UKIP have quadrupled their support since the election, so objectively it’s quite hard to argue that Nigel Farage hasn’t done well as UKIP leader… but someone who supported EU membership and freedom of movement probably wouldn’t approve of his leadership.

The graph below shows the net ratings for David Cameron and Ed Miliband from the four pollsters who ask leader ratings at least monthly (ComRes, ICM and Ashcroft all ask their versions of the question too, but not as frequently).

[Graph: net leader ratings for David Cameron and Ed Miliband, from the four pollsters who ask at least monthly]

You can see there is quite a lot of variation between pollsters, but the trends are clear. Ed Miliband’s ratings have improved over the course of the campaign, though on most pollsters’ measures he remains significantly behind David Cameron. The main exception is Survation’s rating – I suspect because of the time frame of their question (Survation ask people to think specifically about the last month, while other companies just ask in general – my guess is that the difference reflects people thinking Ed Miliband has done well in the campaign). Cameron’s ratings have also improved according to three of the four pollsters, but not to the extent of Miliband’s.

What’s the impact of this? Theoretically I suppose it makes the potential for voters to be deterred from voting Labour by a lack of confidence in Ed Miliband that bit smaller. Whether that makes any real difference is another matter. On one hand, while it is one of the Conservative party’s hopes that Miliband’s poor ratings will drive people towards voting Tory at the last minute, that’s very different from it actually happening – this could be a case of a window closing on something that didn’t seem to be happening anyway. The alternative point of view is that no one realistically expects some vast last-minute swing producing a Tory landslide – we are talking about grinding out a few percentage points at the margins. Hence Miliband’s ratings overall don’t necessarily make much difference – much of the increase is amongst Labour’s own voters anyway – what matters is how he is seen amongst those small groups who are undecided whether or not to vote Labour, and those who are undecided whether or not to vote Tory to stop Miliband.

As we come to the final weeks of the election campaign, that’s key to understanding a lot of polling. Most voters have already made up their minds (and even many of those who say they haven’t are probably less likely to switch than they think). As postal votes start to go out, an increasing number of people will have actually voted already. Most things that happen over the next two and a half weeks will have no impact on public opinion at all – and for those that do, it’s not national opinion that will make the difference, it’s the impact on that dwindling group of people who may yet be persuaded.