As well as the new EU poll, Friday’s Times also had a new YouGov Scottish poll, and there was a new TNS Scottish poll during the week. Topline voting intentions for Holyrood were:

YouGov (tabs)
Constituency: SNP 50%(-1), LAB 19%(-2), CON 20%(+1), LDEM 6%(+1)
Regional: SNP 42%(-3), LAB 20%(nc), CON 20%(+1), GRN 6%(nc), LDEM 5%(nc)

TNS (tabs)
Constituency: SNP 57%(-1), LAB 21%(-2), CON 17%(+5), LDEM 3%(-1)
Regional: SNP 52%(-2), LAB 19%(-1), CON 17%(+5), GRN 6%(-3), LDEM 6%(+2)

While the scale differs, both polls show the usual overwhelming lead for the SNP, and the obvious expectation is that they’ll easily secure a landslide win come May. More interesting is the battle for second place. YouGov have Labour and the Conservatives essentially equal (in the constituency vote the Conservatives are a point ahead after rounding… though this was nearly all in the rounding!). YouGov have tended to show the highest levels of Conservative support in Scotland, and have had Labour only a whisker ahead of them in their last couple of polls, but other companies now seem to be showing the Labour-Conservative gap in Scotland narrowing too. TNS have the Conservatives up five points since December, bringing the gap in the regional vote down to two points; a Panelbase poll earlier this month also had only a two point gap between Lab & Con in the regional vote; and MORI had the gap falling to 2-3 points in their last poll. Survation’s last Scottish poll this month still showed a 4-5 point gap, but that was down from an eight point gap in their previous poll.

Personally I’d still see the Conservatives coming second in Scotland as unlikely – while Ruth Davidson is well regarded (her approval ratings in the YouGov poll were substantially better than Kezia Dugdale’s) their brand seems almost irretrievably tarnished in Scotland. However if Scottish Labour fall far enough, I suppose it is possible. We shall see.


Tomorrow’s Times has a YouGov poll on the EU, conducted after the announcement of the draft renegotiation proposals. Topline referendum voting intentions are REMAIN 36%(-2), LEAVE 45%(+3), DK/WNV 19%. While the changes since YouGov’s last poll a week ago aren’t huge, since summer YouGov’s referendum polls have tended to show the race neck-and-neck, so today’s nine point lead for leave is a significant departure, and the largest YouGov have shown since 2014. The Times’s story is here and the YouGov tabs are here.

Asked about the details of the draft renegotiation (the emergency brake, child benefit changes, the “red card” and so on) most people were broadly supportive. However, people are clearly judging the deal as more than just the sum of its parts: while the individual measures are popular, overall the draft agreement is seen as a bad deal for Britain by 46%, with 22% saying it’s a good deal. A majority of respondents thought the deal did not go far enough (17% thought it was about right, 4% too far) and 50% thought the deal represented little or no real change. In short, the public’s reaction seems to be “nice as far as it goes… but not nearly enough”.

The poll was conducted on Wednesday and Thursday, so in the context of some very negative press coverage. To some degree this may be a short-term reaction to that coverage, and we may see things revert to the neck-and-neck position as its impact fades. Indeed, when people were asked in the poll how they would vote if Cameron managed to secure the draft deal at the EU meeting in February, the LEAVE lead dropped back to three points, far more typical of YouGov’s polling. We shall see.

(On other matters, the Daily Express have, tragically, taken their front page headline from the latest results of an open-access voodoo poll on their own website. I really can’t be bothered to rehearse my usual rant, so here’s one I prepared earlier.)



A quick update on some polling figures from the last few days.

ComRes released a new telephone poll for the Daily Mail on Friday. Topline voting intention figures were CON 37%, LAB 32%, LDEM 6%, UKIP 12%, GRN 4% (tabs are here.) On the EU referendum ComRes had voting intentions of REMAIN 54%, LEAVE 36%, DK 10%.

YouGov also released new figures on voting intention and the EU referendum on their website. Their latest topline VI figures are CON 39%, LAB 30%, LDEM 6%, UKIP 17%, GRN 3% (tabs are here). On the EU referendum they have Leave slightly ahead – REMAIN 38%, LEAVE 42%, DK/WNV 20%.

Finally Ipsos MORI also released EU referendum figures (part of the monthly Political Monitor survey I wrote about earlier in the week). Their latest figures are REMAIN 50%, LEAVE 38%, DK 12%.

There continues to be a big contrast between EU referendum figures in polls conducted by telephone and polls conducted online. The telephone polls from ComRes and Ipsos MORI both have very solid leads for remain; the online polls from ICM, YouGov, Survation and others all tend to have the race very close.

In one sense the contrast seems to be in line with the contrast we saw in pre-election polls. While there was little consistent difference between online and telephone polls in terms of the position of Labour and the Conservatives (particularly in the final polls), there was a great big gulf in the levels of UKIP support they recorded – in the early part of 2015 there was a spread of about ten points between the (telephone) pollsters showing the lowest levels of UKIP support and the (online) pollsters showing the highest. It doesn’t seem particularly surprising that this online/telephone gap in UKIP support also translates into an online/telephone gap in support for leaving the EU. In terms of which is the better predictor it doesn’t give us much in the way of clues, though – the 13% UKIP ended up getting was bang in the middle of that range.

The other interesting thing about the telephone/online contrast in EU referendum polling is the don’t knows. Telephone polls are finding far fewer people who say they don’t know how they’ll vote (you can see it clearly in the polls in this post – the two telephone polls have don’t knows of 10% and 12%, the online poll has 20%, and the last couple of weekly ICM online polls have had don’t knows of 17-18%). This could have something to do with the respective levels of interest in politics and the EU among the people the different sampling approaches are picking up, or perhaps something to do with people’s willingness to give their EU voting intention to a human interviewer. The surprising thing is that this is not a typical difference – in polls on how people would vote in a general election the difference is, if anything, in the other direction: telephone polls find more don’t knows and refusals than online polls do. Why it’s the other way round on the EU referendum is an intriguing mystery.


Ipsos MORI’s monthly political monitor is out today, with topline figures of CON 40%, LAB 31%, LDEM 7%, UKIP 11%, GRN 4%. Full details and tables are here.

MORI also asked respondents to choose between the parties on various more specific measures – a bank of questions with back data going back to 1989:

  • On having the “best policies for the country as a whole” the Conservatives now lead by ten points (compared to a two point Tory lead in 2010 and 2014, and a Labour lead from 1992 to 2005).
  • On being the most clear and united about its policies the Conservatives lead by twenty points (compared to ten points in 2014, five points in 2010. The last time there was a lead this big was a 31 point lead for Labour in 2001.)
  • On having the best “team of leaders” the Conservatives lead by twenty-seven points (compared to eleven points in 2014 and five points in 2010 – again you need to go back to Labour in 2001 to find a larger lead.)
  • The only measure where Labour haven’t collapsed is “looking after the interests of people like yourself” – here the Conservatives have a narrow lead of four points, compared to a two point Labour lead in 2014 and a four point Tory lead in 2010.

The poll also had questions about two policy issues facing Labour. One was Jeremy Corbyn’s suggestion that companies should be barred from paying dividends if they don’t pay the living wage. In principle the idea seems popular. MORI ran a split-sample experiment: half the sample were asked about the policy without any attribution, and half were asked about it after being told it was Jeremy Corbyn’s suggestion. Without attribution, 66% of people said they would support it and 17% were opposed; when the policy was identified as coming from Corbyn, support was lower – 60% support, 24% opposed.
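
As a quick aside for anyone wanting to sanity-check that sort of split-sample gap: with each half of a typical ~1,000-person Political Monitor sample at roughly 500 respondents (an assumed base here – the published tables give the exact figures), a standard two-proportion z-test puts the 66% vs 60% difference right on the edge of conventional statistical significance. A minimal sketch:

```python
from math import sqrt

def two_prop_z(p1, n1, p2, n2):
    """z statistic for the difference between two independent sample proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)          # pooled proportion under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Assumed bases of ~500 per half-sample; the real tables give exact numbers.
z = two_prop_z(0.66, 500, 0.60, 500)
print(f"z = {z:.2f}")  # ≈1.96, right on the conventional 95% significance threshold
```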

The obvious conclusion is that identifying a policy as coming from Jeremy Corbyn makes it less popular. This is probably true… but I wouldn’t get too excited about it. Conservative party modernisers used to make their case using similar data showing policies were less popular when associated with the Conservative party. I think the reality is that strong partisan supporters of other political parties will almost always be turned off a policy when it is associated with an opponent, so yes, putting Jeremy Corbyn’s name to a policy would make it less popular, but so would putting the Labour party’s name to the policy, or the Conservative party’s name, or Osborne or Cameron’s name.

The other policy MORI asked about was Trident. 58% of people opposed Britain getting rid of nuclear weapons, rising to 70% when people were asked specifically about unilateral disarmament – a similar figure to when MORI asked the same question in the 1980s.


Today the polling inquiry under Pat Sturgis presented its initial findings on what caused the polling error. Pat himself, Jouni Kuha and Ben Lauderdale all went through their findings at a meeting at the Royal Statistical Society – the full presentation is up here. As we saw in the overnight press release, the main finding was that unrepresentative samples were to blame, but today’s meeting put some meat on those bones. Just to be clear, when the team said unrepresentative samples they didn’t just mean the sampling part of the process: they meant the samples pollsters end up with as a result of their sampling AND their weighting – it’s all interconnected. With that out of the way, here’s what they said.

Things that did NOT go wrong

The team started by quickly going through some areas that they have ruled out as significant contributors to the error. Any of these could, of course, have had some minor impact, but if they did it was only minor. The team investigated and dismissed postal votes, falling voter registration, overseas voters and question wording/ordering as causes of the error.

They also dismissed some issues that had been more seriously suggested. The first was differential turnout reporting (i.e. Labour supporters overestimating their likelihood to vote more than Conservative supporters did): in vote validation studies the inquiry team did not find evidence to support this, suggesting that if it was an issue it was too small to be important. The second was a mode effect – ultimately, whether a survey was done online or by telephone made no difference to its final accuracy. This finding met with some surprise from the audience, given that more phone polls than online polls showed Tory leads. Ben Lauderdale of the inquiry team suggested that was probably because phone polls had smaller sample sizes and hence more volatility, so they spat out more unusual results – but the average lead in online polls and the average lead in telephone polls were not that different, especially in the final polls.
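
Lauderdale’s sample-size point is easy to illustrate: the sampling error of a poll lead shrinks with the square root of the sample size, so a 500-person phone poll will throw out eye-catching leads far more often than a 2,000-person online poll drawn from the same population. A back-of-the-envelope sketch (the shares and sample sizes are illustrative assumptions, not the inquiry’s figures):

```python
from math import sqrt

def lead_se(p1, p2, n):
    """Standard error of the lead (p1 - p2) for a simple random sample of size n.
    For multinomial shares, Var(p1_hat - p2_hat) = [p1(1-p1) + p2(1-p2) + 2*p1*p2] / n."""
    return sqrt((p1 * (1 - p1) + p2 * (1 - p2) + 2 * p1 * p2) / n)

for n in (500, 1000, 2000):
    se = 100 * lead_se(0.34, 0.34, n)  # assume a tied race, as in the final 2015 polls
    print(f"n={n:5d}: lead SE = {se:.1f} pts, 95% interval ~ ±{1.96 * se:.1f} pts")
```

With these assumptions a 500-person poll has a lead standard error of nearly four points, against under two points for a 2,000-person sample – enough to produce the occasional striking outlier without any real difference in the underlying averages.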

On late swing the inquiry said the evidence was contradictory. Six companies had conducted re-contact surveys, going back to people who had completed pre-election surveys to see how they actually voted. Some showed movement, some did not, but on average they showed a movement of only 0.6% to the Tories between the final polls and the result, so late swing can only have made a minor contribution at most. People deliberately misreporting their voting intention to pollsters was also dismissed – as Pat Sturgis put it, if those people had told the truth after the election it would have shown up as late swing (it did not), and if they had kept on lying it should have affected the exit poll, BES and BSA as well (it did not).

Unrepresentative Samples

With all those things ruled out as major contributors to the polling error, the team were left with unrepresentative samples as the most viable explanation. In terms of positive evidence for this they looked at the differences between the BES and BSA samples (done by probability sampling) and the pre-election polls (done by variations on quota sampling). This wasn’t a recommendation to use probability sampling: while the team didn’t make recommendations at this stage, Pat did rule out any recommendation that polling switch to probability sampling wholesale, recognising that the cost and timing were wholly impractical, and that the BES and BSA had been wrong in their own ways rather than being perfect solutions.

The two probability-based surveys were, however, useful as comparisons for picking up possible shortcomings in the samples. For example, the pre-election polls that provided precise age data for respondents all had age skews within age bands: specifically, within the oldest band there were too many people in their 60s and not enough in their 70s and 80s. The team also agreed with the suggestion that samples were too politically engaged – looking at likelihood to vote, they found most polls had samples that were too likely to vote and did not show the correct contrast between young and old turnout. They also found samples didn’t have the correct proportions of postal voters among young and old respondents. They didn’t suggest all of these errors were necessarily related to why the figures were wrong, but they were illustrations of the samples not being properly representative – and that is what ultimately led to getting the election wrong.

Herding

Finally the team spent a long time going through the data on herding – that is, polls producing figures that were closer to each other than random variation suggests they should be. On the face of it the narrowing looks striking – the penultimate polls had a spread of about seven points between the poll with the biggest Tory lead and the poll with the biggest Labour lead. In the final polls the spread was just three points, from a one point Tory lead to a two point Labour lead.

Analysing the polls from earlier in the campaign, the spread between different companies was almost exactly what you would expect from a stratified sample (what the inquiry team considered the closest approximation to the politically weighted samples used by the polls). In the last fortnight the spread narrowed, though, with the final polls all close together. The reason seems to be methodological change – several of the polling companies made adjustments to their methods during the campaign or for their final polls (something that has been typical at past elections: companies often add extra adjustments to their final polls). Without those changes the polls would have been more variable… and less accurate. In other words, some pollsters did make changes in their methodology at the end of the campaign which meant the figures were clustered together, but they were open about the methods they were using and it made the figures LESS Labour, not more Labour. Pollsters may or may not, consciously or subconsciously, have been influenced in their methodological decisions by what other polls were showing. However, from the inquiry’s analysis we can be confident that any herding did not contribute to the polling error – quite the opposite: all the pollsters who changed methodology during the campaign were more accurate using their new methods.
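
To see why the inquiry compared the observed spread against sampling theory, it helps to simulate what a set of genuinely independent polls would look like. Under simple random sampling (a cruder model than the stratified approximation the inquiry used, with an assumed sample size and a tied race), nine independent final polls would typically be spread over roughly six points – about double the three-point spread actually observed:

```python
import random
from math import sqrt

def expected_spread(n_polls=9, n=1500, p1=0.34, p2=0.34, sims=20_000):
    """Average max-minus-min lead (in points) across n_polls independent polls,
    using a normal approximation to the sampling distribution of each lead."""
    se = 100 * sqrt((p1 * (1 - p1) + p2 * (1 - p2) + 2 * p1 * p2) / n)
    total = 0.0
    for _ in range(sims):
        leads = [random.gauss(0, se) for _ in range(n_polls)]
        total += max(leads) - min(leads)
    return total / sims

print(f"expected spread ≈ {expected_spread():.1f} points")  # ≈6, versus the 3 seen
```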

For completeness, the inquiry also took everyone’s final data and weighted it using the same methods – they found a normal level of variation. They also took everyone’s raw data and applied the weighting and filtering the pollsters said they had used to see if they could recreate the same figures – the figures came out the same, suggesting there was no sharp practice going on.
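
For anyone unfamiliar with what “applying the weighting” involves, here is a toy illustration of the basic idea – post-stratification, where each respondent is weighted so the sample’s demographic profile matches population targets before vote shares are computed. The micro-data and targets below are invented for illustration; real pollsters weight on many more variables:

```python
# Toy post-stratification: reweight respondents so the sample's age profile
# matches assumed population targets, then compute weighted vote shares.
respondents = [
    {"age": "18-34", "vote": "Lab"}, {"age": "18-34", "vote": "Con"},
    {"age": "35-59", "vote": "Con"}, {"age": "35-59", "vote": "Lab"},
    {"age": "35-59", "vote": "Con"}, {"age": "60+",   "vote": "Con"},
]
targets = {"18-34": 0.28, "35-59": 0.42, "60+": 0.30}  # assumed population shares

sample_share = {g: sum(r["age"] == g for r in respondents) / len(respondents)
                for g in targets}
for r in respondents:
    r["weight"] = targets[r["age"]] / sample_share[r["age"]]

total = sum(r["weight"] for r in respondents)
for party in ("Con", "Lab"):
    share = sum(r["weight"] for r in respondents if r["vote"] == party) / total
    print(f"{party}: {100 * share:.0f}%")  # weighted shares, not raw counts
```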

So what next?

Today’s report wasn’t a huge surprise – as I wrote at the weekend, most of the analysis so far has pointed to unrepresentative samples as the root cause, and the official verdict is in line with that. In terms of the information released today there were no recommendations; it was just about the diagnosis – the inquiry will submit its full written report in March. That will contain some recommendations on methodology – though no silver bullet – but with the diagnosis confirmed the pollsters can start working on their own solutions. Many of the companies released statements today welcoming the findings and agreeing with the cause of the error; we shall see what different ways they come up with to solve it.