Straight after the Greek referendum was announced polling evidence was quite light, but there has now been the expected rush of polling. Polls from a handful of different companies are all painting a consistent picture of YES and NO being neck and neck. In fieldwork conducted on Monday and Tuesday there was still a small lead for NO, but across all the polls conducted in the last couple of days the position has been almost a dead heat.

The most recent polls are below:

Metron/Parapolitik (Thurs-Fri) – YES 46%, NO 47% (No ahead by 1%)
GPO/Mega TV (Wed-Fri) – YES 44.1%, NO 43.7% (Yes ahead by 0.4%)
Alco/Proto Thema (Wed-Fri) – YES 41.7%, NO 41.1% (Yes ahead by 0.6%)
Ipsos (Tues-Fri) – YES 44%, NO 43% (Yes ahead by 1%)
Uni of Macedonia/Bloomberg (Thurs) – YES 42.5%, NO 43% (No ahead by 0.5%)

In the week we also had the monthly ComRes/Daily Mail poll. Latest voting intention figures are CON 41%, LAB 29%, LDEM 8%, UKIP 10%, GRN 5%. Tabs are here.

UPDATE: And the actual Greek result (with just over a third of the votes counted) looks like a solid victory for NO, absolutely miles away from what the Greek polls were showing. Ouch! I don’t know enough about Greek politics or Greek polling to hazard any guesses as to what they got wrong, but I imagine a country in economic turmoil is not the easiest to poll correctly, in terms of contacting people or of getting firm demographic figures to weight or sample by – and that’s before you get to whether people feel able to answer the question honestly. As it happens most of the Greek polls were pretty good at the general election earlier this year, but clearly not this time.


Grexit polls

On Sunday there is a referendum in Greece on whether to accept the deal that was put to the Greek government before negotiations broke down (or at least, there was as I write – who knows what the position will be by the time you read this). What can the polling tell us about the likely result? There have not been any polls since the referendum announcement yet – though I don’t think there is anything preventing them (Greece previously had a ban on polls in the last fortnight of election campaigns, but this was repealed before the election earlier this year; I don’t know about referendums or any subsequent legal changes).

There were, however, two Greek polls conducted in the three days before the referendum announcement that have been widely reported. A Kapa Research poll conducted between Wednesday and Friday actually asked how people would vote in a then hypothetical referendum, with 47% saying they would vote yes, 33% that they would vote no. Of course the poll was conducted prior to the referendum announcement so may not reflect current Greek opinion at all – people taking it as a sign Greece is about to vote yes should probably hold on a sec. Respondents may have been imagining a referendum on a deal that had the support of the Greek government, rather than a referendum where the government are opposed and backing a No vote.

The rest of the Kapa poll found 72% of Greeks wanted the country to remain within the EU and 68% wanted them to keep the Euro. There was a pretty even split over the government’s strategy – 49% had a positive opinion, 50% a negative opinion.

A second poll by Alco found negative opinions about the proposals on the table, but continuing goodwill towards Syriza. People didn’t think the proposals met their pre-election promises, but by 53% to 34% thought this was because Syriza hadn’t realised how difficult it would be rather than an attempt to mislead the people. By 61% to 33% respondents rejected the idea that the last Greek government would have done any better. Syriza continue to hold a robust lead in voting intention. Again, this is sometimes being reported as showing Greeks will vote Yes, but I’d be wary. It found people would, in principle, prefer a deal to default… but that’s not the same as saying they will vote YES in a referendum on a specific offer that the Greek government doesn’t support.

Turning to the attitude in other countries in Europe, YouGov polled the countries it has panels in a week ago and in most countries the public expected Greece to leave the Euro, and would prefer it if they did. In Britain people would prefer Greece to leave by 35% to 26%, Denmark by 44% to 24%, Sweden 35% to 26%, Finland 47% to 26%. France was the only country polled where people would prefer Greece to stay within the Euro, though only by 36% to 33%. In Germany 53% of the public thought Greece should leave the Eurozone, only 29% would prefer Greece to remain. Note, of course, that the countries YouGov operate in are largely Northern Europe… the public in Southern and Eastern European countries may have different views.

UPDATE: We finally have a poll on the referendum conducted after it was announced. Prorata for Efsyn found 51% of Greeks intending to vote NO in the referendum. The fieldwork appears to have straddled the announcement that the banks would close – before the announcement NO led by 57% to 30%, after it NO led by only 46% to 37%. On the face of it that looks like NO leading in a very fluid situation, but I don’t know what the sample sizes were before and after the bank closure (or indeed whether the early and late respondents to the poll were comparable), so I cannot tell whether that apparently shrinking lead is meaningful.
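
To see why the sub-sample sizes matter, here is a quick illustrative calculation in Python. The splits are entirely invented, since the actual pre- and post-closure sample sizes weren’t published; the point is that the same two sub-sample results are consistent with quite different overall NO shares depending on how the fieldwork divided.

```python
# Illustrative only: the pre/post bank-closure sub-sample sizes were not
# published, so these splits are assumptions to show why they matter.

def combined_share(pre_share, post_share, pre_n, post_n):
    """Overall share when two sub-samples are pooled."""
    return (pre_share * pre_n + post_share * post_n) / (pre_n + post_n)

# Reported sub-sample results: NO 57% before the closure, 46% after.
for pre_n, post_n in [(300, 700), (500, 500), (700, 300)]:
    no = combined_share(0.57, 0.46, pre_n, post_n)
    print(f"pre={pre_n}, post={post_n}: overall NO = {no:.1%}")
# A roughly even split reproduces something close to the 51% headline.
```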



Polling news round up

Labour leadership

Regular polling remains sparse, given the ongoing inquiry and the odd sort of political interregnum with Labour yet to elect their new leaders, but there have been a couple of polls on the Labour contest and the EU. A new ORB poll on the Labour leadership earlier in the week showed Andy Burnham was seen as the candidate most likely to help Labour’s chances at the next election (36%), followed by Liz Kendall on 25%. Full tabs are here.

I would be extremely cautious about polling on the Labour leadership election. Essentially there are two real questions about the Labour leadership – who is going to win, and who would be best at winning votes for Labour. For the first, we need a poll of Labour party members, and we don’t have a recent one (there is some data from a YouGov poll of party members for Tim Bale & Paul Webb, but that was done straight after the election, before the candidates were clear). For the second, I suspect any data is fatally flawed by the public’s low awareness of the candidates – right now, polls about the Labour leadership are little more than name recognition contests. Looking at the tables for the ORB poll, it looks to me as if the main reason the prospective leaders scored so highly is that the question didn’t offer a don’t know option; if it had, I bet don’t know would have had a runaway victory.

Worth looking at as a corrective is this ICM poll on the Labour leadership that asked people to identify photos of Andy Burnham, Yvette Cooper, Liz Kendall and Jeremy Corbyn. 23% were able to identify Burnham, 17% Cooper, 10% Kendall and 9% Corbyn. Essentially, if 90% of the respondents to a poll can’t even recognise a photo of Liz Kendall or Jeremy Corbyn, how good a judge are they going to be on what sort of Labour leader they’d be? “I’m a particular fan of the one I’ve never heard of and know nothing at all about” said no one, ever.

Sky poll on the EU

SkyNews have released a poll they have carried out themselves amongst a panel of BSkyB subscribers. The poll itself shows nothing particularly new (people think the EU is good for the British economy by 39% to 31%, etc, etc), but the concept is interesting – it’s a proper effort to get a representative sample from their subscriber database, weighted by age, gender, past vote, Experian segment (as an alternative to class), ethnicity, tenure and so on. It is, however, unavoidably made up only of Sky subscribers, which will bring its own biases. The question is to what extent those biases can be cancelled out by the weighting and sampling. We shall see. The tables for this first poll are here.

Parliamentary debates

Last week there were two Parliamentary debates on regulating opinion polls. The first, last Thursday, was prompted by Lord Lipsey and concerned whether polling companies needed regulating to prevent them asking leading and biased questions – though it was largely taken up with the specifics of a single poll on mitochondrial donation. The other was the second reading of George Foulkes’s private member’s bill regulating opinion polls, which included a good response from Andrew Cooper of Populus. Lord Bridges stated for the government that they had no plans to regulate polls. Lord Foulkes’s bill was nodded through to the committee stage, so will trundle on for a little longer.

Herding pollsters

Finally there’s a great piece by Matt Singh on pollster herding here. Matt mentions some of the possible reasons for herding, but more importantly actually does the sums on whether there was any herding… and finds there wasn’t. The spread between different pollsters in the final polls was very much in line with what you’d expect to find.


On Friday the BPC/MRS inquiry into the polls at the 2015 election started rolling. The inquiry team had their first formal meeting in the morning, and in the afternoon there was a public meeting addressed by representatives of most of the main polling companies. It wasn’t a meeting intended to produce answers yet – it was all still very much work in progress, and the inquiry itself isn’t due to report until next March (Patrick Sturgis explained what some see as a very long timescale with reference to the need to wait for some useful data sources, like the BES face-to-face data and the data that has been validated against marked electoral registers, neither of which will be available until later in the year). There will, however, be another public meeting sometime before Christmas when the inquiry team will present some of their initial findings. Friday’s meeting was for the pollsters to present their initial thoughts.

Seven pollsters spoke at the meeting: ICM, Opinium, ComRes, Survation, Ipsos MORI, YouGov and Populus. There was considerable variation between how much they said – some companies offered some early changes they were making, some only went through possibilities they were looking at rather than offering any conclusions. As you’d expect there was a fair amount of crossover. Further down I’ve summarised what each individual company said, but there were several things that came up time and again:

  • Most companies thought there was little evidence of late swing being a cause. Most of the companies had done re-contact surveys, reinterviewing people surveyed before the election and comparing their answers before and afterwards to see if they actually did change their minds after the final polls. Most found only small changes that either cancelled themselves out or produced negligible movement to the Tories. Only one of the companies who spoke thought it was a major factor.
  • Most of the pollsters seemed to be looking at turnout as a major factor in the error, but this covered more than one root cause. One was people saying they will vote but not doing so, and this not being adequately dealt with by the existing 0-10 models of weighting and filtering by likelihood to vote (a sketch of how such models typically work appears after this list). If that is the problem, the solution may lie in more complicated turnout modelling, or in using alternative questions to try and identify those who really will vote.
  • However, several pollsters also talked about turnout problems coming not from respondents inaccurately reporting whether they vote, but from pollsters simply interviewing the sort of people who are more likely to vote, and this affecting some groups more than others. If that’s the cause, then it is more a problem of improving samples, or of doing something to address getting too many politically engaged people in samples.
  • One size doesn’t necessarily fit all: the problems affecting phone pollsters may end up being different from those affecting online pollsters, and the solutions that work for one company may not work for another.
  • Everyone was very wary of the danger of just artificially fitting the data to the last election result, rather than properly identifying and solving the cause(s) of the error.
  • No one claimed they had solved the issue; everyone spoke very much about it being a work in progress. In many cases I think the factors they presented were not necessarily the ones they will finally end up identifying… but those where they had some evidence to show so far. Even those like ComRes, who have already drawn some initial conclusions and made changes in one area, were very clear that their investigations were continuing, that they were still open-minded about possible reasons and conclusions, and that there were likely more changes to come.
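
For readers unfamiliar with the 0-10 models mentioned above, here is a minimal sketch of the general idea in Python. A common approach is to weight each respondent by their stated likelihood of voting divided by ten; some companies filter rather than weight, and every company’s scheme differs in detail, so treat this as an illustration rather than any pollster’s actual model.

```python
# A minimal sketch of likelihood-to-vote weighting: each respondent counts
# towards the headline figures in proportion to their stated 0-10
# likelihood of voting. Illustrative data; not any company's real model.

respondents = [
    {"party": "Con", "likelihood": 10},
    {"party": "Lab", "likelihood": 7},
    {"party": "Con", "likelihood": 9},
    {"party": "Lab", "likelihood": 10},
    {"party": "Lab", "likelihood": 4},
]

def weighted_shares(sample):
    totals = {}
    for r in sample:
        weight = r["likelihood"] / 10        # turnout weight
        totals[r["party"]] = totals.get(r["party"], 0) + weight
    grand_total = sum(totals.values())
    return {party: t / grand_total for party, t in totals.items()}

print(weighted_shares(respondents))   # low-likelihood voters count for less
```

The critique in the bullets above is that stated likelihood may not track actual turnout closely enough for this sort of adjustment to correct the error.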

Martin Boon of ICM suggested that ICM’s final poll showing a one point Labour lead was probably a bit of an outlier, and in that limited sense a bit of bad luck – ICM’s other polls during the campaign had shown small Conservative leads. He suggested this could possibly have been connected to doing the fieldwork for the final poll during the week (ICM’s fieldwork normally straddles the weekend), and the political make-up of the C1/C2s in his final sample was significantly different from their usual polls: they broke for Labour, when ICM’s other campaign polls had them breaking for the Tories. (Martin has already published some of the same details here.) Bad luck aside, however, he was clear that there was a much deeper problem: the fundamental error that had affected polls for decades – a tendency to overestimate Labour – had re-emerged.

ICM did a telephone recall poll of 3000 people who they had interviewed during the campaign. They found no significant evidence of a late swing, with 90% of people reporting they voted how they said they would. The recall survey also found that don’t knows split in favour of the Conservatives and that Conservative voters were more likely to actually vote… ICM’s existing reallocation of don’t knows and 0-10 weighting by likelihood to vote dealt well with this, but ICM’s weighting down of people who didn’t vote in 2010 was not, in the event, a good predictor (it didn’t help at all, though it didn’t hurt either).
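
The “reallocation of don’t knows” mentioned here is a long-standing ICM adjustment: a fraction of people who say don’t know are assumed to end up voting for the party they recall supporting last time. A rough sketch of the general idea follows; the 50% fraction is an assumption for illustration, not ICM’s exact published method.

```python
# A rough sketch of reallocating don't knows: add a fraction of them back
# to the party they recall voting for at the previous election. The 50%
# fraction is an illustrative assumption, not ICM's exact method.
REALLOCATE_FRACTION = 0.5

def reallocate(current_counts, dont_knows_by_past_vote):
    adjusted = dict(current_counts)
    for past_party, n in dont_knows_by_past_vote.items():
        adjusted[past_party] = adjusted.get(past_party, 0) + REALLOCATE_FRACTION * n
    return adjusted

current = {"Con": 320, "Lab": 330, "LD": 80}     # stated intentions
dont_knows = {"Con": 60, "Lab": 40, "LD": 30}    # don't knows, by past vote
print(reallocate(current, dont_knows))           # Con gains most here
```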

Martin’s conclusion was that “shy Tories” and “lazy Labour” were NOT enough to explain the error, and that there was probably some deeper problem with sampling facing the whole industry. Typically ICM has to ring 20,000 phone numbers to get 1,000 responses – a response rate of 5% (though that will presumably include numbers that don’t exist, etc) – and he worried about whether such tools could still produce a representative sample.

Adam Drummond of Opinium also provided data from their recontact survey on the day of the election. They too found no evidence of any significant late swing, with 91% of people voting how they said they would. Opinium identified a couple of specific things in their methodology that went wrong. One was that their age weighting was too crude – they used to weight age using three big groups, with the oldest being 55+. They found that within that group there were too many people in their 50s and 60s and not enough in their 70s and beyond, and that the much older group were more Tory. Opinium will correct that by using more detailed age weights, with over-75s weighted separately. They also identified failings in their political weightings that weighted the Greens too high, and will correct that now they have the 2015 results to calibrate against.
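
The mechanics of that age-weighting fix are straightforward: split the single 55+ bucket into finer bands and weight each to its population share. A minimal sketch, with invented targets and sample numbers:

```python
# Sketch of moving from one coarse 55+ bucket to finer age bands. The
# targets and achieved-sample numbers are invented for illustration; the
# point is that a single 55+ bucket can hide a shortfall of over-75s.

achieved = {"55-64": 220, "65-74": 180, "75+": 60}      # sample counts
targets = {"55-64": 0.40, "65-74": 0.33, "75+": 0.27}   # share of 55+ population

n_55plus = sum(achieved.values())
weights = {band: targets[band] * n_55plus / achieved[band] for band in achieved}
print(weights)   # over-75s get weighted up (weight > 1), the others down
```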

These were side issues, though: Opinium thought the main problem was one of turnout – or, more specifically, of interviewing people who are too likely to vote. If they weighted the different age and social class groups to the turnout proportions suggested in MORI’s post-election survey it would have produced figures of CON 37%, LAB 32%… but of course, you can’t weight to post-election turnout data before an election, and comparing MORI’s data at past elections, the level of turnout in different groups changes from election to election.

Looking forwards Opinium are going to correct their age and political weightings as described, and are considering whether or not to weight different age/social groups differently for turnout, or perhaps trying priming questions before the main voting intention. They are also considering how they reach more unengaged people – they already have a group in their political weighting for people who don’t identify with any of the main parties… but that isn’t necessarily the same thing.

Tom Mludzinski and Andy White of ComRes offered an initial conclusion: that there was a problem with turnout. Between the 2010 and 2015 elections actual turnout rose by one point, but the proportion of people who said they were 10/10 certain to vote rose by eight points.

Rather than looking at self-reported turnout in post-election surveys, ComRes ran regressions of actual constituency turnout on constituencies’ demographic profiles, finding the usual patterns: higher turnout in seats with more middle class and older people, lower turnout in seats with more C2DE and younger voters. As an initial measure they have introduced a new turnout model that weights people’s likelihood of voting largely on their demographics.
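
The general shape of that approach – fit constituency turnout against demographic composition, then use the fitted relationship to assign respondents a turnout probability – can be sketched briefly. Everything below (the data, the choice of predictors, the simple linear model) is invented for illustration; ComRes’s actual model is their own.

```python
# Sketch of a demographic turnout model: regress constituency turnout on
# demographic shares, then predict a turnout probability from a
# respondent's demographics. All numbers are invented for illustration.
import numpy as np

# Columns: intercept, share aged 65+, share in social grades C2DE
X = np.array([
    [1, 0.25, 0.40],
    [1, 0.15, 0.55],
    [1, 0.30, 0.30],
    [1, 0.20, 0.50],
])
turnout = np.array([0.57, 0.48, 0.62, 0.52])   # constituency turnout

coefs, *_ = np.linalg.lstsq(X, turnout, rcond=None)

def predicted_turnout(share_65plus, share_c2de):
    return float(coefs @ np.array([1, share_65plus, share_c2de]))

print(predicted_turnout(0.28, 0.35))   # older, more middle-class: higher turnout
```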

ComRes have already discussed this in more detail than I have space for on their own website, including many of the details and graphs they used in Friday’s presentation.

Damian Lyons Lowe of Survation discussed their late telephone poll on May 6th that had produced results close to the actual result, whether through timing or through the different approach to telephone sampling it used. Survation suggested a large chunk of the error was probably down to late swing – their recontact survey found around 85% of people saying they voted the way they had said they would, and those who did change their minds produced a movement to the Tories that would account for some of the error (it would have moved the figures to a three point Conservative lead).

Damian estimated late swing made up 40% of the difference between the final polls and the result, with another 25% made up from errors in weighting. The leftover error he speculated could be caused by “tactical Tories” – people who didn’t actually support the Conservatives, but voted for them out of fear about a hung Parliament and SNP influence and wouldn’t admit this to pollsters either before or after the election, pointing to the proportion of people who refused to say how they voted in their re-contact survey.

Tantalisingly, Damian also revealed that they were going to be able to release some of the private constituency polling they did during the campaign for academic analysis.

Gideon Skinner said Ipsos MORI’s thinking was still largely along the lines of Ben Page’s presentation in May, which was (perhaps a little crudely!) summarised as “lazy Labour”. MORI’s view is that their problem was not understating Tory support, but overstating Labour support. Like ComRes, they noted how the past relationship between stated likelihood to vote and actual turnout had worsened since the last election: at previous elections actual turnout had been about 10 points lower than the proportion of people who said they would definitely vote; at this election the gap was 16 points.

Looking at the difference between people’s stated likelihood to vote in 2010 and their answers this time round, the big change was amongst Labour voters. Other parties’ voters had stayed much the same, but the proportion of Labour voters saying they were certain to vote had risen from 74% to 86%. Gideon said this had been noticed at the time (MORI had even written about it as an interesting finding!), but it had seemed perfectly plausible that, with the Labour party now in opposition, their supporters would be more enthusiastic about voting to kick out a Conservative government than they had been at the end of a third-term Labour government. Perhaps in hindsight it was a sign of a deeper problem.

MORI are currently experimenting with including how regularly people have voted in the past as an additional variable in their turnout model, as we discussed in their midweek poll.

Joe Twyman of YouGov didn’t present any conclusions yet, but went through the data they are using and the things they are looking at. YouGov did the fieldwork for two academic election surveys (the British Election Study and the SCMS) as well as their daily polling, and all three used different question ordering (daily polling asked voting intention first, the SCMS after a couple of questions, the BES after a bank of questions on important issues, which party is more trusted and party leaders), so will allow testing of the effect of “priming questions”. YouGov are looking at the potential for errors like “shy Tories”, the geographical spread of respondents (are there the correct proportions of respondents in Labour and Conservative seats, and in safe and marginal seats?), whether respondents to surveys are too engaged, whether there is a panel effect, and how to deal with turnout (including using the validated data from the British Election Study respondents).

Andrew Cooper and Rick Nye of Populus also found no evidence of significant late swing. Populus did their final poll as two distinct halves and found no difference between the fieldwork done on the Tuesday and the fieldwork done on the Wednesday. Their recontact survey a fortnight after the election still found support at CON 33%, LAB 33%.

On the issue of turnout Populus had experimented with more complicated turnout models during the campaign itself – using some of the methods that other companies are now suggesting. Populus had weighted different demographic groups differently by turnout using the Ipsos MORI 2010 data as a guide, and they also had tried using how often over 25s said they had voted in the past as a variable in modelling turnout. None of it had stopped them getting it wrong, though they are going to try and build upon it further.

Instead Populus have been looking for shortcomings in the sampling itself, looking at other measures that have not generally been used in sampling or weighting but may be politically relevant. Their interim approach so far is to include more complex turnout modelling and to add in disability, public/private sector employment and level of education into the measures they weight by to try and get more representative samples. Using those factors would have given them figures of CON 35%, LAB 31% at the last election… better, but still not quite there.
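
Folding extra variables like education, sector of employment and disability into a weighting scheme is typically done by raking (iterative proportional fitting) – repeatedly adjusting weights so each variable’s marginal distribution matches its target. A minimal sketch, with invented data and targets (Populus’s actual weighting scheme is, of course, their own):

```python
# A minimal sketch of raking (iterative proportional fitting): repeatedly
# scale weights so each variable's weighted distribution matches its
# target. Data and targets are invented for illustration.

def rake(rows, margins, n_iter=20):
    """rows: dicts with a 'weight' key; margins: {variable: {category: target share}}."""
    for _ in range(n_iter):
        for var, targets in margins.items():
            total = sum(r["weight"] for r in rows)
            observed = {}
            for r in rows:
                observed[r[var]] = observed.get(r[var], 0) + r["weight"]
            for r in rows:
                r["weight"] *= targets[r[var]] * total / observed[r[var]]
    return rows

sample = [{"educ": "degree",    "sector": "public",  "weight": 1.0},
          {"educ": "degree",    "sector": "private", "weight": 1.0},
          {"educ": "no degree", "sector": "private", "weight": 1.0}]
margins = {"educ":   {"degree": 0.3, "no degree": 0.7},
           "sector": {"public": 0.2, "private": 0.8}}
print(rake(sample, margins))   # weights now approximately match both margins
```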


Ipsos MORI’s monthly political monitor is out, their first since the election. Topline figures are CON 39%, LAB 30%, LDEM 9%, UKIP 8%, GRN 6%. As with other recent voting intention polls, the figures themselves are perhaps less interesting than the methodology changes. In the case of Ipsos MORI, they’ve made an adjustment to their turnout filter. In the past they used to take only those respondents who said they were 10/10 certain to vote, the tightest of all the companies’ approaches. Their new approach is a little more complex, filtering people based on how likely they say they are to vote at an election and how regularly they say they usually vote – now they include only people who say their likelihood to vote is 9/10 or 10/10 AND who say they usually or always vote or “it depends”. People who say they rarely, never or sometimes vote are excluded.
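
Since the new filter is a simple rule, it can be written down directly. This is just a restatement of the published description in code form, not MORI’s own implementation:

```python
# Ipsos MORI's new turnout filter, as described above: include only
# respondents who are 9/10 or 10/10 likely to vote AND who say they
# usually or always vote, or "it depends". A paraphrase of the published
# description, not MORI's own code.

INCLUDED_HABITS = {"usually", "always", "it depends"}

def include_in_topline(likelihood_0_to_10, voting_habit):
    """True if the respondent counts towards the headline figures."""
    return likelihood_0_to_10 >= 9 and voting_habit in INCLUDED_HABITS

print(include_in_topline(10, "always"))      # True
print(include_in_topline(9, "it depends"))   # True
print(include_in_topline(10, "sometimes"))   # False: excluded habit
print(include_in_topline(8, "always"))       # False: below 9/10
```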

The impact of this doesn’t appear to be massive. We can tell from the tables that the old method would have produced similar results of CON 39%, LAB 29%, LDEM 10%, UKIP 8%, GRN 6%. In their comments on their topline results MORI are very explicit that this is just an interim measure, and that they anticipate making further changes in the future as their internal inquiry and the BPC inquiry continue.

Looking at the other questions in the survey, MORI also asked about the Labour leadership election, and found results in line with other polling we’ve seen so far… a solid lead for don’t know! Amongst the minority who expressed an opinion, Andy Burnham led on 15%, followed by Yvette Cooper on 14%, Liz Kendall on 11%, Jeremy Corbyn on 5% and a dummy candidate (“Stewart Lewis”) on 3%.