On Friday the BPC/MRS inquiry into the polls at the 2015 election started rolling. The inquiry team had their first formal meeting in the morning and in the afternoon there was a public meeting, addressed by representatives of most of the main polling companies. It wasn't a meeting intended to produce answers yet – it was all still very much a work in progress, and the inquiry itself isn't due to report until next March (Patrick Sturgis explained what some see as a very long timescale with reference to the need to wait for some useful data sources, like the BES face-to-face data and the data validated against marked electoral registers, neither of which will be available until later in the year). There will however be another public meeting sometime before Christmas, when the inquiry team will present some of their initial findings. Friday's meeting was for the pollsters to present their initial thoughts.

Seven pollsters spoke at the meeting: ICM, Opinium, ComRes, Survation, Ipsos MORI, YouGov and Populus. There was considerable variation in how much they said – some companies described early changes they are already making, others only went through possibilities they were looking at rather than offering any conclusions. As you'd expect there was a fair amount of crossover. Further down I've summarised what each individual company said, but several things came up time and again:

  • Most companies thought there was little evidence of late swing being a cause. Most of the companies had done re-contact surveys, reinterviewing people surveyed before the election and comparing their answers before and afterwards to see if they actually did change their minds after the final polls. Most found little change – what movement there was either cancelled itself out or produced only a negligible shift to the Tories. Only one of the companies who spoke thought it was a major factor.
  • Most of the pollsters seemed to be looking at turnout as a major factor in the error, but this covered more than one root cause. One was people saying they would vote but then not doing so, and this not being adequately dealt with by the existing 0-10 models of weighting and filtering by likelihood to vote (a minimal sketch of that sort of model follows this list). If that is the problem, the solution may lie in more complicated turnout modelling, or in using alternative questions to try and identify those who really will vote.
  • However several pollsters also talked about turnout problems coming not from respondents inaccurately reporting whether they vote, but from pollsters simply interviewing the sort of people who are more likely to vote, and this affecting some groups more than others. If that's the cause, then it is more a problem of improving samples, or of doing something to correct for getting too many politically engaged people in samples.
  • One size doesn't necessarily fit all: the problems affecting phone pollsters may end up being different from those affecting online pollsters, and the solutions that work for one company may not work for another.
  • Everyone was very wary of the danger of just artificially fitting the data to the last election result, rather than properly identifying and solving the cause(s) of the error.
  • No one claimed they had solved the issue; everyone spoke very much about it being a work in progress. In many cases I think the factors they presented were not necessarily the ones they will finally end up identifying… but rather those where they had some evidence to show so far. Even those like ComRes who have already reached some initial conclusions and made changes in one area were very clear that their investigations were continuing, that they were still open minded about possible reasons and conclusions, and that there were likely more changes to come.
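As background to the turnout point above, here is a minimal sketch of how a 0-10 likelihood-to-vote scale is typically used – either weighting each respondent by their score, or filtering down to those who say 10/10. The respondents, parties and the score/10 rule are illustrative assumptions, not any particular company's model.

```python
from collections import defaultdict

# Hypothetical respondents: (stated voting intention, 0-10 likelihood-to-vote score)
respondents = [
    ("CON", 10), ("LAB", 10), ("LAB", 7), ("CON", 9),
    ("LAB", 5), ("UKIP", 10), ("LD", 8), ("CON", 10),
]

def headline_vi(data, turnout_weight):
    """Weight each respondent by an assumed probability of voting, then
    express each party's weighted total as a share of the weighted whole."""
    totals = defaultdict(float)
    for party, score in data:
        totals[party] += turnout_weight(score)
    grand = sum(totals.values())
    return {party: round(100 * w / grand, 1) for party, w in totals.items()}

# Two simplified treatments of the 0-10 scale:
weight_by_score = lambda s: s / 10                    # a 7/10 counts as 0.7 of a vote
filter_certain = lambda s: 1.0 if s == 10 else 0.0    # only the 10/10s are counted

print(headline_vi(respondents, weight_by_score))
print(headline_vi(respondents, filter_certain))
```

The two treatments can give noticeably different headline figures from the same raw answers, which is why the choice of turnout model matters so much.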

Martin Boon of ICM suggested that ICM's final poll showing a one point Labour lead was probably a bit of an outlier, and in that limited sense a bit of bad luck – ICM's other polls during the campaign had shown small Conservative leads. He suggested this could possibly have been connected to doing the fieldwork for the final poll during the week, when ICM's fieldwork normally straddles the weekend: the political make-up of the C1/C2s in his final sample was significantly different from their usual polls (they broke for Labour, when ICM's other campaign polls had them breaking for the Tories). Martin has already published some of the same details here. However, bad luck aside, he was clear that there is a much deeper problem: the fundamental error that has affected polls for decades – a tendency to overestimate Labour – has re-emerged.

ICM did a telephone recall poll of 3,000 people they had interviewed during the campaign. They found no significant evidence of a late swing, with 90% of people reporting that they voted how they said they would. The recall survey also found that don't knows split in favour of the Conservatives and that Conservative voters were more likely to actually vote. ICM's existing reallocation of don't knows and 0-10 weighting by likelihood to vote dealt well with this, but ICM's weighting down of people who didn't vote in 2010 was not, in the event, a good predictor (it didn't help at all, though it didn't hurt either).
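For readers unfamiliar with the "reallocation of don't knows", the general idea is to add a fraction of don't knows back to the party they reported voting for at the previous election. The sketch below uses a 50% fraction and invented counts purely for illustration; it is not ICM's published adjustment.

```python
# Minimal sketch of reallocating don't knows by past vote. The 0.5 fraction
# and the counts are illustrative assumptions, not ICM's actual figures.
def reallocate_dks(vi_counts, dk_past_vote, fraction=0.5):
    """Add a fraction of each don't-know group back to the party they
    reported voting for at the previous election, then rebase to 100."""
    adjusted = dict(vi_counts)
    for party, n_dks in dk_past_vote.items():
        adjusted[party] = adjusted.get(party, 0) + fraction * n_dks
    total = sum(adjusted.values())
    return {p: round(100 * n / total, 1) for p, n in adjusted.items()}

vi_counts = {"CON": 320, "LAB": 330, "LD": 80, "UKIP": 120, "Other": 50}
dk_past_vote = {"CON": 60, "LAB": 40, "LD": 30}   # don't knows, by 2010 vote

print(reallocate_dks(vi_counts, dk_past_vote))
```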

Martin's conclusion was that "shy Tories" and "lazy Labour" were NOT enough to explain the error, and that there was probably some deeper problem with sampling that likely faces the whole industry. Typically ICM has to ring 20,000 phone numbers in order to get 1,000 responses – a response rate of 5% (though that will presumably include numbers that don't exist, etc) – and he again worried about whether the industry's tools can get a representative sample.

Adam Drummond of Opinium also provided data from their recontact survey on the day of the election. They too found no evidence of any significant late swing, with 91% of people voting how they said they would. Opinium did identify a couple of specific things in their methodology that went wrong. One was that their age weighting was too crude – they used to weight age using three big groups, with the oldest being 55+. They found that within that group there were too many people in their 50s and 60s and not enough in their 70s and beyond, and that the much older group were more Tory. Opinium will correct that by using more detailed age weights, with over-75s weighted separately. They also identified failings in their political weightings that weighted the Greens too high, and will correct that now they have the 2015 results to calibrate against.

These were side issues though; Opinium thought the main issue was one of turnout – or more specifically, of interviewing people who are too likely to vote. If they weighted the different age and social class groups to the turnout proportions suggested in MORI's post-election estimates it would have produced figures of CON 37%, LAB 32%… but of course, you can't weight to post-election turnout data before an election, and comparing MORI's data from past elections shows that the level of turnout in different groups changes from election to election.
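The mechanics of that kind of adjustment are simple: scale each demographic group's contribution by both its share of the sample and its estimated turnout. The groups, vote shares and turnout rates below are invented for illustration, not Opinium's or MORI's figures.

```python
# Minimal sketch of re-weighting headline VI by estimated subgroup turnout.
# All numbers are illustrative assumptions.
groups = {
    # group: (share of sample, assumed turnout, {party: share of group's VI})
    "18-34":  (0.28, 0.45, {"CON": 0.30, "LAB": 0.43, "Other": 0.27}),
    "35-54":  (0.34, 0.65, {"CON": 0.35, "LAB": 0.35, "Other": 0.30}),
    "55plus": (0.38, 0.78, {"CON": 0.42, "LAB": 0.28, "Other": 0.30}),
}

headline = {}
for share, turnout, vi in groups.values():
    for party, p in vi.items():
        # each group's contribution is scaled by its size and its turnout
        headline[party] = headline.get(party, 0.0) + share * turnout * p

total = sum(headline.values())
print({party: round(100 * v / total, 1) for party, v in headline.items()})
```

Because older, more Conservative groups get the largest turnout multipliers, this sort of adjustment tends to push the headline figures towards the Tories – which is exactly the effect Opinium describe.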

Looking forwards, Opinium are going to correct their age and political weightings as described, and are considering whether or not to weight different age/social groups differently for turnout, or perhaps trying priming questions before the main voting intention question. They are also considering how to reach more unengaged people – they already have a group in their political weighting for people who don't identify with any of the main parties… but that isn't necessarily the same thing.

Tom Mludzinski and Andy White of ComRes offered the initial conclusion that there was a problem with turnout. Between the 2010 and 2015 elections actual turnout rose by 1 point, but the proportion of people who said they were 10/10 certain to vote rose by 8 points.

Rather than looking at self-reported turnout in post-election surveys, ComRes ran regressions of actual constituency-level turnout on constituencies' demographic profiles, finding the usual patterns: higher turnout in seats with more middle class and older people, lower turnout in seats with more C2DE and younger voters. As an initial measure they have introduced a new turnout model that weights people's likelihood of voting largely according to their demographics.
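As a rough illustration of this sort of approach (not ComRes's actual model), the sketch below fits a least-squares regression of invented constituency turnout figures on two demographic shares, then uses the fitted coefficients to assign a turnout weight to a respondent's demographic profile.

```python
# Minimal sketch of a demographically driven turnout model: regress
# constituency turnout on demographic shares, then treat the prediction for
# a respondent's profile as their turnout weight. All figures are invented.
import numpy as np

# Columns: share aged 65+, share ABC1, intercept
X = np.array([
    [0.22, 0.58, 1.0],
    [0.15, 0.40, 1.0],
    [0.30, 0.65, 1.0],
    [0.18, 0.45, 1.0],
    [0.25, 0.55, 1.0],
])
turnout = np.array([0.71, 0.58, 0.78, 0.61, 0.69])  # observed constituency turnout

coef, *_ = np.linalg.lstsq(X, turnout, rcond=None)

def turnout_weight(is_65plus, is_abc1):
    """Predicted turnout for a respondent, using 0/1 indicators in place of
    the constituency shares (a deliberate simplification)."""
    return float(np.clip(coef @ [is_65plus, is_abc1, 1.0], 0.0, 1.0))

# An older ABC1 respondent gets a larger weight than a younger C2DE one
print(round(turnout_weight(1, 1), 2), round(turnout_weight(0, 0), 2))
```

The attraction of this route is that it leans on actual turnout data rather than on respondents' own (over-optimistic) claims about how certain they are to vote.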

ComRes have already discussed this in more detail than I have space for on their own website, including many of the details and graphs they used in Friday’s presentation.

Damian Lyons Lowe of Survation discussed their late telephone poll on May 6th, which had produced results close to the election result, either through timing or through the different approach to telephone sampling it used. Survation suggested a large chunk of the error was probably down to late swing – their recontact survey had found around 85% of people saying they voted the way they had said they would, but those who did change their minds produced a movement to the Tories that would account for some of the error (it would have moved the figures to a 3 point Conservative lead).

Damian estimated late swing made up 40% of the difference between the final polls and the result, with another 25% made up of errors in weighting. The remaining error, he speculated, could be down to "tactical Tories" – people who didn't actually support the Conservatives, but voted for them out of fear of a hung Parliament and SNP influence, and wouldn't admit this to pollsters either before or after the election – pointing to the proportion of people who refused to say how they voted in the re-contact survey.

Tantalisingly, Damian also revealed that they were going to be able to release some of the private constituency polling they did during the campaign for academic analysis.

Gideon Skinner of Ipsos MORI said their thinking was still largely along the lines of Ben Page's presentation in May, which was (perhaps a little crudely!) summarised as "lazy Labour". MORI's thinking is that their problem was not understating Tory support, but overstating Labour support. Like ComRes, they noted how the past relationship between stated likelihood to vote and actual turnout had deteriorated since the last election. At previous elections actual turnout had been about 10 points lower than the proportion of people who said they would definitely vote; at this election the gap was 16 points.

Looking at the difference between people's stated likelihood to vote in 2010 and their answers this time round, the big change was amongst Labour voters. Other parties' voters had stayed much the same, but the proportion of Labour voters saying they were certain to vote had risen from 74% to 86%. Gideon said this had been noticed at the time (and that MORI had written about it as an interesting finding!), but it had seemed perfectly plausible that, with the Labour party in opposition, their supporters would become more enthusiastic about voting to kick out a Conservative government than they had been at the end of a third-term Labour government. Perhaps in hindsight it was a sign of a deeper problem.

MORI are currently experimenting with including how regularly people have voted in the past as an additional variable in their turnout model, as we discussed in their midweek poll.

Joe Twyman of YouGov didn't present any conclusions yet, instead going through the data they are using and the things they are looking at. YouGov did the fieldwork for two academic election surveys (the British Election Study and the SCMS) as well as their daily polling, and all three used different question ordering (daily polling asked voting intention first, the SCMS after a couple of questions, the BES after a bank of questions on important issues, which party is more trusted and party leaders), so they will allow testing of the effect of "priming questions". YouGov are looking at potential errors like "shy Tories", the geographical spread of respondents (are there the correct proportions of respondents in Labour and Conservative seats, and in safe and marginal seats?), whether survey respondents are too engaged, whether there is a panel effect, and how to deal with turnout (including using the validated data from British Election Study respondents).
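One way to test for a priming effect across the three surveys – a sketch of the general idea, not YouGov's analysis plan – is a chi-square test of whether stated voting intention is independent of which question ordering the respondent saw. The counts below are invented.

```python
# Minimal sketch of testing for a question-ordering ("priming") effect:
# does the distribution of stated VI differ by survey design? Counts are
# invented for illustration.
from scipy.stats import chi2_contingency

#                 CON   LAB   Other
vi_by_survey = [
    [340, 350, 310],   # daily polling: VI asked first
    [355, 335, 310],   # SCMS: VI after a couple of questions
    [375, 320, 305],   # BES: VI after a bank of issue/leader questions
]

chi2, p_value, dof, expected = chi2_contingency(vi_by_survey)
print(f"chi2 = {chi2:.2f}, p = {p_value:.3f}")
# A small p-value would suggest question ordering is associated with
# different reported voting intentions, i.e. a possible priming effect.
```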

Andrew Cooper and Rick Nye of Populus also found no evidence of significant late swing. Populus did their final poll as two distinct halves and found no difference between the fieldwork done on the Tuesday and the fieldwork done on the Wednesday. Their recontact survey a fortnight after the election still found support at CON 33%, LAB 33%.

On the issue of turnout, Populus had experimented with more complicated turnout models during the campaign itself – using some of the methods that other companies are now suggesting. Populus had weighted different demographic groups differently by turnout using the Ipsos MORI 2010 data as a guide, and had also tried using how often over-25s said they had voted in the past as a variable in modelling turnout. None of it had stopped them getting it wrong, though they are going to try and build upon it further.

Instead Populus have been looking for shortcomings in the sampling itself, examining other measures that have not generally been used in sampling or weighting but may be politically relevant. Their interim approach so far is to include more complex turnout modelling and to add disability, public/private sector employment and level of education to the measures they weight by, to try and get more representative samples. Using those factors would have given them figures of CON 35%, LAB 31% at the last election… better, but still not quite there.
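Adding extra variables to a weighting scheme like this is usually done by raking (iterative proportional fitting): respondent weights are repeatedly scaled so the weighted sample matches population targets on each variable in turn. The sketch below is a generic illustration with invented targets, not Populus's actual weighting.

```python
# Minimal raking (iterative proportional fitting) sketch: adjust weights so
# the weighted sample matches target shares on several variables at once.
# Variables and targets are invented for illustration.
def rake(respondents, targets, iterations=50):
    """respondents: list of dicts of categorical attributes.
    targets: {variable: {category: target population share}}."""
    weights = [1.0] * len(respondents)
    for _ in range(iterations):
        for var, shares in targets.items():
            # current weighted share of each category of this variable
            totals = {}
            for w, r in zip(weights, respondents):
                totals[r[var]] = totals.get(r[var], 0.0) + w
            grand = sum(totals.values())
            # rescale weights so this variable hits its target shares
            for i, r in enumerate(respondents):
                weights[i] *= shares[r[var]] / (totals[r[var]] / grand)
    return weights

respondents = [
    {"age": "18-34", "education": "degree"},
    {"age": "18-34", "education": "no_degree"},
    {"age": "55+",   "education": "degree"},
    {"age": "55+",   "education": "no_degree"},
]
targets = {
    "age":       {"18-34": 0.3, "55+": 0.7},
    "education": {"degree": 0.3, "no_degree": 0.7},
}
print([round(w, 2) for w in rake(respondents, targets)])
```

The practical question Populus raise is not the mechanics but the choice of variables: whether adding things like education, sector of employment and disability pulls the sample closer to the kind of people who are currently under-represented.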


170 Responses to “The Polling Inquiry public meeting”

  1. The things you do to make Martyn happy…(not that he will be).

  2. On recontacting – if they were shy the first time, why would they be any more open the second?

    I can accept lazy as a factor – the whole election seemed subdued, grudging – so no surprise Labour voters were less likely to go than they said, and less likely than Tories. Means Labour needs to rethink its ground operation.

    Main issue tho is that, apart from Survation, they reject late swing, which means that they think the polls were wrong for quite a while.

  3. Indeed, a combination of demographic and political (beyond party affiliation) distribution (such as constituency political characteristics) could be a way forward in sampling. Benchmarking wouldn’t be available though …

    By the way, first :-)

  4. Third …

  5. Very interesting post. What isn’t covered though is why the polls converged to a greater extent than is statistically likely in the last couple of days. I suppose the pollsters are looking at their own polls so there wasn’t any scope to look at comparisons between them, but the fact they are identifying different causes of the inaccuracy makes the convergence even odder.

    Shy Tory still seems to be the likeliest cross-industry issue to me, though perhaps it should be renamed as ‘shyness’ seems the wrong term to describe people filling in online forms inaccurately.

  6. @Roger Mexico

    It’s my charming smile and winning ways…:-)

  7. @AnthonyWells

    Genuinely, thank you for this. I was going to contact the admin staff at RSS to see if anybody had taken a recording. I am very pleased that you have written this and I hope my nagging didn’t put you off. Thank you for your efforts, I appreciate this.

  8. What a fascinating account – thank you Anthony. Gideon Skinner’s comment about finding Labour supporters much keener to vote in 2015 than 2010 and yet not doing so on the day is fascinating. I had a feeling just towards the end of the campaign that somehow the Labour campaign drifted and lost energy and substance with the Edstone etc.

    However, that doesn’t explain all the companies getting the predictions based on the raw figures they had so wrong just hours before people actually voted. The raw figures to which various and variable criteria were applied must have been skewed IMHO.

    I suspect too that the voters who like to engage with pollsters are by definition more likely to be “engaged” or “open” about their opinions.

    In my experience most people on the Left are quite open about it, whereas those who similarly disclose their politics on the Right tend to be confident business types or politicos, not the Army of quiet conservative-minded people who keep themselves to themselves.

    This latter group has been described as “Shy Tories” – certainly we saw a similar phenomenon in 1992 – however, if they really exist it begs the question what ever happened to this quiet army of voters in 1997, 2001, 2005 and 2010?

    I think the answer might be that this hidden cohort actually voted Labour by-and-large in 1997, and gradually drifted back to Conservative voting over the subsequent three elections, although their return to norm was blunted in 2010 by Cleggmania, which held what would have been natural swingback to the Conservatives to a minimum – thus the hung parliament in 2010.

    In 2015, with the LibDems on a sticky wicket shall we say, this quiet non-pollster engaging group voted Tory, just as they had in 1992.

    In other words I suspect there is an entire cohort of voters out there who are missed because they do not willingly engage in surveys and polls. When they split towards Labour, or evenly, or even slightly towards the Tories, they don’t have a major impact on polling. But when they break heavily towards the Tories, as in 1992 and 2015, it completely wrecks the predictions.

  9. Thank you Anthony,

    I, like many, have had my faith in pollsters and polling severely dented, but my confidence in you and love of this site remain true.

    One other thought: there is a lot of talk of Tories or certain socio-economic groups being the main drivers of the delta between stated turnout and actual turnout … I wonder if this is right? Could it be that there is something about turnout being influenced by the power of the why message … In my gut, I feel that Tories had more reason to vote than Labour folks … Or to put it another way, I think Miliband and Labour did not run a good campaign and THAT influenced turnout … It means that the “Vision” thing might be important after all, and it is not enough to just try to get your vote out.

  10. @Johnpolitico

    I think the greater reason to vote Conservative could be described as :

    Cry SNP and let loose the dogs of war

    I think the fear of the SNP among English voters was pivotal, especially in the older demographic.

    Of course, the big danger occurs now an acceptable way forward to resolve the Scottish issues is required, and packing all that fear back into Pandora’s Box might prove exceedingly difficult.

  11. As far as I know, nobody has mentioned one other possible cause – whether the people in the samples were actually on the electoral register and hence able to vote.

    Presumably, both the telephone interviewees and the online panels are taken from the whole population – not just people on the electoral register. Not everybody knows whether they are on the register. A possible hypothesis is that Labour supporters (younger, more mobile, in rented accommodation, less articulate) are more likely to be unaware that they are not on the register – and hence cannot carry out their firm intention to vote Labour. Thus Labour support is overestimated. Is this a possible factor?

    If so, with individual registration being phased in, the problem is likely to get worse.

    I had thought about certain weighting being adjusted based on political changes. IIRC it’s ICM who adjust some don’t knows back to how people voted at the previous election, a weighting which made their accuracy worse in terms of Lib Dem & UKIP (& maybe SNP, I forget). Everyone’s more focused on the con/lab thing, but perhaps that weighting should be weaker for parties whose support has more than halved/doubled (or whatever) since the last election…

    On a less serious note, perhaps those few swingers in marginals who decide elections respond less to general purpose headline polling, cos they’re fed up of being polled so much in all the other surveys trying to specifically target them…

  13. Thank you Anthony for the summary and we await interim and final reports in due course.

    I shall just make 3 comments:

    1. I assume the enquiry is to look at error in RECENT polling for GE2015 and let’s be honest the campaign started not 31 March but on or before 2 January. For what period of last parliament were polls in error? 6 months? 12 months? 2 years? We will never know but if Conservatives had led Labour in polls in summer 2014, Labour may have got rid of Miliband.

    2. I agree with Joe Twyman that polling should take account of seats held by particular parties and whether marginal or safe. Also, from January 2020 until GE2020, polls should refer to likely candidates, as per Ashcroft.

    3. I agree sampling, weighting and turnout need to be thoroughly examined, but am inclined to think the media may simply have to pay pollsters more, either for regional polls or for larger-sample national polls – preferably the former, as it may be that national swing has gone for the foreseeable future.

    Don’t get me wrong – I want polling to be accurate not just for elections but always if that could ever be achievable.

  14. CMJ

    “Of course, the big danger occurs now an acceptable way forward to resolve the Scottish issues is required, and packing all that fear back into Pandora’s Box might prove exceedingly difficult.”

    Seems that Yvette Cooper doesn’t want to face up to the fear – maybe she’s scared?

    https://archive.is/SONAx

    She said: “I don’t think you can have any kind of agreement with a party that wants to separate a country or fracture a country in that way.”

    Attacking the Tory “English nationalist” election strategy, she went on: “I do feel quite worried about the way in which about the way David Cameron had decided to play that politics.”

    “I feel worried about the careless way in which David Cameron thinks it is OK to play on the fear of Scotland as being the way for them to get more votes and the way in which that divides a nation.”

    She added: “I think that they [the Tories] will continue to do this and it’s a real challenge for the Labour Party for us to try and build that sense of things that we have in common rather than things that pull us apart.”

    Rather a strange idea that rejecting things that (E&W) Lab have in common with other parties is her seeking consensus?

    Even LiS understand that there are broad areas of consensus in which parties in Scotland can (and do) work together.

    Why is she so determined to boost the Tory fear campaign?

    As we are regularly reminded, my side lost the indyref, (we don’t actually need the reminders :-) ) but it seems that politicians in England are assuming a zero sum game. If Scotland (or any other nation in the UK) gets extra powers over its own affairs, then that seems somehow to threaten England.

    That is a wholly irrational position – so possibly a good line to take in the Labour Party in those areas where they actually have more than one MP.

  15. @Anthony

    Random thoughts / suggestions / comments

    Perhaps we need more regional polling. The pollsters seemed to get Scotland fairly accurately (within MoE).

    As far as I can tell, there’s basically been a holding action by Con against Lab, and all parties having at the Lib Dem VI. This ceded 15 LD seats to Con in the S. West and 19 in the South overall.

    This all but gave the Tories the election if they could hold elsewhere. They did, and took another 12 LD seats elsewhere, and lost a couple to Lab (net losses).

    This doesn’t explain the polling, but regional polling might go further, and reduce the likelihood of regional voter concentration being hidden within UK polls. Turnout was hardly up in rUK, while it was up 7% in Scotland. So basically we had a shift from LD to all others, and the polls were not able to ascertain where, due to methodologies (not criticism; observation).

    Too often we saw samples in Scotland of less than 200, some less than 100 and a few less than 50. A pointless exercise statistically, and whenever someone actually reports on the data, another comes along saying “small sample”. What are folks to do with dire data samples? You use what you have.

    Despite the woeful samples, the ones we had told a fairly accurate story. My own guess of 50 SNP seats was a split between my beliefs and my worries. I believed they might get 55, but was worried it was all smoke and would be closer to 45. In that sense, I should have just trusted the data. I expected 2-3 Con and 3-10 Lab seats.

    Back to the polling. Rather than do 2,000 per day, it would be more telling to split the UK into 10 parts (all regions of UK, less N. Ireland, and perhaps lump the NE and NW in together, considering the size of the former).

    If each was polled once a fortnight with all UK polled over ten working days, we would have far more reliable sample sizes for the same or less cost. I’m sure the polls could also vary a little in size, as long as all are 1,001 or more.

    Other than that, perhaps re-visiting the reliability of land-line or telephone polling in general, and weighting it appropriately. On-line polling might be the most simple way to gather additional data. How to get reliable samples on-line though?

    Also not convinced that newspaper type is a reliable metric. Circulation is down, and due to free or easy access, most read several. I might read the Scotsman, but not if their comments section is offensive. Or I might look in to the Guardian, but sometimes their take on things is through their own prism. I’m not a good example, as I don’t identify with any news source.

    I have a feeling pigeon-holing voters by such metrics, with 4-8 parties on offer is doomed to failure. I would be interested to see if the size of the ‘consider self a floating voter’ category has increased.

  16. @Oldnat

    Not completely irrational if you follow things to a conclusion. If Scotland gains FFA, it will streamline and cut its economy according to its cloth, and will focus on what it will be best at.

    As a result it might easily out-perform rUK in some industries, so Scotland competes and wins on quality of some areas, if not quantity.

    In addition, with FFA, Scotland will be able to demonstrate fiscal responsibility / ability etc. and prove that Scotland is not too wee, poor etc.

    I don’t think Westminster wants either scenario. All my opinion, of course. It could be twaddle.

  17. @Statgeek and Old Nat

    The problem with FFA within the UK is that the UK economy as a whole is still being run from and for London and the South East. FFA minus Barnett (which the Tories are almost certain to abolish during this parliament) means a very tight squeeze for Scotland, at least to begin with. That’s fine by me, but many will squeal!

    Anyway, this thread is about the Polls – and thanks to AW for the input on where we’re up to.

    I think Statgeek (3.44 a.m.) is right in wanting more detailed regional/national (Scotland and Wales) polls. It seems to me that the ‘one size fits all’ approach which certainly worked up until the 1980s is now long gone. There are now several different battlegrounds, each of which needs its own analysis, and only when all that has been done and assessed ought all the figures to be put together in order to produce UK wide (or GB wide) numbers. See the third point made by Peterelectionfollower.

    I know that will take a lot more effort, but if polling companies want to be taken seriously again they will have to treat the UK as it is, not as they might like it to be.

    One last point: the next boundary review will be a real corker south of the border and may disrupt the pollsters’ thinking still further.

  18. JOHN B.
    Good Morning to you. I agree that there are many battlegrounds in contemporary politics. Additionally I think that, unlike, for example, in the 1951 GE, most voters do not align themselves with a political party, so past voting is not a reliable guide to the way they might vote in the future. Gower was lost to Labour for the first time since 1922 and a senior Welsh Labour MP said that Labour could no longer rely so heavily on tribal sentimental feelings.
    The UKIP voters of 2015 could go anywhere in 2020, thus boosting either Tory or even Labour numbers.
    Non voters are even more difficult to work out.
    Yvette Cooper and a competent team might make Labour less unattractive to voters outside the heartlands and outside London.

  19. Thank you to Anthony and all those on here for the work you are doing in this area.

    I don’t know the answer to what happened. The point made by ICM that their phone polls had a 5% response rate seems striking, and the post by Robin P about individual voter registration seems intuitively very plausible.

    If we are talking about building up the predictions regionally, as the expert Statgeek seems to be doing, I would agree, on the basis that different parties seem to be in contention in the different areas. At the least, reporting Wales and the (English) Midlands separately would make for more useful polling results.

    I will return to the site much later today to read more of this discussion, which is surely rather important.

  20. Old nat, Cooper is very conservative on all the constitutional issues. Very, very anti electoral reform as well.

    She would be quite hopeless if Labour and Tories are level pegging after 2020 and she would have to negotiate with other parties – this is one of the reasons I won’t be voting for her. Imo she is as hard right as Ed B.

    Sadly the one Labour politician who would have had a genuine constitutional convention has left the stage. Burnham will be pragmatic, I suspect. Kendall has written off Scotland, thinking she can’t do much, so she is only targeting the southern and midlands marginals.

  21. They are getting there, but a long way to go. The actual reason the pollsters were likely to get it very wrong was clear well before the election, and is summarised in this article. http://bit.ly/1cw9zBK

  22. LEIGHTON VAUGHAN WILLIAMS.
    Many thanks for the brilliant article.
    I felt in my bones, in the last week of the campaign, that the Tories were ahead. The mood was not very ‘laboury’, some of us said on UKPR.

  23. I had no idea what was about to happen until it did. The reason was geographical more than anything else. I live in a safe Labour seat being targeted by the Greens, and was campaigning in a Lib-Lab marginal. The opportunity was simply not there for me to see the Labour/Tory situation.

  24. LEIGHTON VAUGHAN WILLIAMS

    Listening to the wonderful 5th Symphony of your namesake as I write this. Great music by a British master. Cheers the soul!

    Thanks for the reference, an interesting article. It’s amazing how much the political scene has changed since the election. Yesterday’s demonstrations seem so irrelevant to the real world.

    Looking forward with great interest to the July budget. Who will be the winners and who the losers?

  25. THE OTHER HOWARD.
    Good Afternoon to you. As another Master said: ‘The last shall be first and the first shall be last.’

    In my own field of education: I am taking retirement at end of August, and have been negotiating part time contracts in teaching. I have had a glimpse of austerity in the public sector recently.

    On Friday afternoon I was offered a full time contract in tough school in Southampton by the employment agency in charge of supplying teachers.
    They proposed £20,900 a year for a full time post; this is £17K below the normal salary for an experienced classroom teacher. They explained that they could source teachers from abroad who would teach for that sum.
    Unsure about the political impact of such trends.

  26. CHRISLANE1945

    Listening to Wagner’s Die Walküre now. Sublime music by a musical genius, but a very unpleasant man in many ways.

    “Unsure about the political impact of such trends.”

    Do you think it is a reflection of the free movement of labour within the EU affecting pay rates? If the local authority can get fully competent teachers for those rates of pay then it makes sense for them to do so at a time when the nation still has a deficit to eliminate and a massive debt burden.

  27. Is it possible that Tory voters were more likely to lie to the pollsters? Presumably they would also lie when they are re-contacted.

    This could be for various reasons one of which may be effectively to tell the pollsters to mind their own business!

    Just a thought,

  28. @Robin P
    “As far as I know, nobody has mentioned one other possible cause – whether the people in the samples were actually on the electoral register and hence able to vote.”

    Indeed. It is not encouraging that, in all of AW’s detailed write up of the meeting, none of the participants are shown as referring to potential problems with the changes to electoral registration. We do however know that the BES study is taking this very seriously and has been designed accordingly.

    The Electoral Commission have just published a report on the implementation of IER which strikes me as generally very complacent in tone, especially as some of the statistics revealed tend to confirm the problems. Notably:
    1. The rate of addition of people to the register through normal churn fell markedly, from a fairly stable 12% over each 12 months under the old system to 11% over the 17 months of the new system (i.e. since the last register under the old system in Dec 2013) – a much lower monthly rate. So on an annualised basis around 6% fewer new people added their names to the register over those 17 months than would have been expected.
    2. Although the register grew slightly in overall numbers thanks to late registrations, this is cosmetic, as it appears to have been achieved only by leaving people on the register at non-responding addresses much longer than normal, so the register became much more out of date and inaccurate, containing more ghost electors who had long since moved on. Note that an increase in registration during a general election year is entirely predictable and the absence of it would be strange when the overall population is growing rapidly. So the fact that it has been achieved by artificial means should have been alarming to the Commission.
    3. The number of “attainers” (16-18 year olds) fell by 47% from what was already a lousy response level. This is complacently dismissed by the EC (without citing any evidence) as being down to the absence of a universal household canvass in 2014, rather than to the greater hassle of registering under the new system.

    I absolutely agree with you about the perils of the full roll out of IER, especially as the EC seem to be complacently avoiding the warning signs that it’s going to make things far worse rather than better.

    The report is here:
    http://www.electoralcommission.org.uk/__data/assets/pdf_file/0006/190464/IER-June-report.pdf

    Two other points:
    1. Could the “lazy Labour” phenomenon be “increasingly non-registered Labour”?
    2. Could the marked and unexpected drop in turnout in such a crucial election even when new parties of protest were on the scene be something to do with the deterioration in the electoral register – i.e. the ghost elector problem referred to above?

    @AW
    Thank you for the detailed write up.
    I suggest that YouGov add some questions such as “When did you last register to vote?” and “Was it at your present address?” as part of improved methods of measuring likely turnout.
    It would though be very important to still publish figures including the VIs of people who are not registered.

  29. @PhilHaines

    Some very good points regarding registration. Here in East Devon, we had tremendous problems, with numbers alarmingly down, and culminating in our Chief Executive being summoned to a Select Committee to explain himself ( or not, depending on your point of view ).

    Chris Malthouse is probably correct about the ‘mind your own business’ element, which seems like a Tory characteristic! I also think there might be a bit of deliberate misleading involved. Is it possible that Tories deliberately said they intended to vote Labour to guard against complacency within their own party, or even to keep the odds more generous at the bookies!?

    Unlikely, I’m sure, but what about the forthcoming EU referendum, where everyone knows that Cameron’s negotiating position is improved if the prospect of Britain’s withdrawal is a real one? As someone likely to vote ‘Yes’, I could easily imagine being tempted to say ‘No’ to a pollster just to strengthen DC’s hand with Merkel.

  30. @Millie

    I have noticed over many elections that Conservatives seem almost pathologically reluctant to display window posters or garden stakes at home, notwithstanding the tendency of rural hedgerows to do so. There might be some of the same trait in non-responses to pollsters too.

  31. PHIL HAINES.

    If people don’t display their political preference, how do you know what it is?

  32. @Colin

    In the Conservatives’ strongest ward here locally, where they obtained around 60% of the vote in the local and general election, I can recall seeing only one Conservative poster, despite having spent a considerable amount of time there during the campaign. This was one of the key marginals they were defending at the GE, and Conservative literature was coming through the door on an almost daily basis, so there was no shortage of material to display.

  33. @Leighton Vaughan Williams

    “…Interestingly, those who invested their own money in forecasting the outcome performed a lot better in predicting what would happen than did the pollsters. The betting markets had the Conservatives well ahead in the number of seats they would win right through the campaign and were unmoved in this belief throughout…The Tories, they were convinced, were going to win significantly more seats than Labour…I have interrogated huge data sets of polls and betting markets over many, many elections stretching back years and this is part of a well-established pattern….”

    Mr V-W, I read your article with some interest. May I ask which “huge data sets of polls and betting markets” you are referring to please?

  34. THE OTHER HOWARD,
    Good Evening to you.
    In terms of Local Education Authorities: they do not have much to do with employment now; it is done at school level and schools often ‘outsource’ recruitment.

    In terms of fully competent teachers at low pay rates I would not want to comment, except to say that in middle class private and public schools this policy of low pay recruitment would not, I think, apply.

  35. “Is it possible that Tory voters were more likely to lie to the pollsters? Presumably they would also lie when they are re-contacted.”

    ———

    An elephant in the room, alongside the registration thing.

    Are pollsters really covering all the bases?

  36. Though to be clear, could also for example apply to Labour voters who said they’d vote but didn’t…

  37. “I suggest that YouGov add some questions such as “When did you last register to vote?” and “Was it at your present address?” ”

    ————

    Along with…

    “are you lazy, or shy, or both?!!”

    “How do you feel about others knowing your political views”

    “would you ever vote for a party you dislike to keep Scots peeps from holding the balance of power, and would this embarrass you?”

    “Would you ever lie to a pollster?”
    etc.

  38. Carfrew

    Of course, all these questions assume that respondents consistently tell the truth or not.

    Suppose there are two poll respondents, one who always tells the truth and one who always lies. A single yes-no question would identify which was which.

    But with polling respondents their truthfulness may depend on their mood, or annoyance level with pollsters/politicians.

  39. Seems there was another leadership hustings for some party or other in Stevenage.

    It’s not clear from the report which party!

    http://www.itv.com/news/2015-06-20/labour-leadership-candidates-booed-at-campaign-hustings/

  40. @Oldnat

    Yes, I wasn’t suggesting those be the actual questions, lol, it’s more that they suggest the issues.

    I am aware of the ironic effects of asking these questions not necessarily giving the info. required and yes, peeps can be fickle and variable.

    (So we can add questions to try and determine fickleness etc. if you like…)

  41. Carfrew

    I imagine most people’s interest in politics is much the same as mine on the reputation of various companies and related issues re product branding – ranging from total lack of interest to downright hostility to bloody stupid questions.

    YG can poll me as many times as they like (at 50 points a poll) on whether I would be “proud” or “ashamed” to work for particular companies, but are never likely to get useful information.

    As a polling geek, I normally answer as honestly as I can (by giving no response) but I might equally well randomly click buttons.

  42. “If people don’t display their political preference, how do you know what it is?”

    You can smell the gin.

  43. BBC Scotland are doing a retrospective on the fall of LiS – “in their own words”.

    As one would expect, it’s a mixture of commendable honesty, back stabbing opponents, self justification, and all the other things that politicians (like everyone else) do.

    Whose comments fall into which category is much more difficult to discern!

    http://www.bbc.co.uk/news/uk-scotland-scotland-politics-33198969

    While some of the concerns are Scottish only, there might be issues for Labour in England [1] to consider as well.

    [1] To be consistent, I should contract “Labour in England” to LiE – but that would be impolite.

  44. Can’t we have simple questions, that put the person being polled into a yes/no corner?

    – Are you happy with the current government?

    – Did you vote for the current government?

    – Will you vote for the current government next time?

    No, no, no suggests unlikely to vote for current (Con in this case)

    Yes, yes, yes suggests very likely to vote Con.

    Mixture of noes and yesses suggests swing one way or the other. False recall will be largely unimportant in the YYY or NNN scenario.

    By all means let’s get party ID later, but let’s get Yes / No to current government first. I can’t imagine people having false recall on that, quite as much as picking from a list of parties.

    Let’s poll these people on their voting intention, not waste time asking them if they like leader ‘x’ or ‘y’ or why they want to vote for a party. Those are the polls the political parties should pay for. Anyway, we vote for MPs, and not leaders. The sooner we depart from these media-driven, image-driven elections, the sooner we get politicians with substance.

    …whinge, moan etc. :))

  45. ‘Add to that late tactical switching and the well-established late swing in the polling booth to incumbents and we have, I believe, a large part of the answer’

    @ Leighton Vaughan Williams

    I don’t really swallow that at all. There was no evidence of a late swing to the incumbent in 2005 – 2001 – 1983 – 1979 – Oct 1974 – Feb 1974 – 1970 – 1966 or 1964.

  46. @Oldnat

    It seems I am not being clear enough for you.

    As I said, I was not suggesting asking those questions. Or indeed any of a similar nature. I thought the rather obviously direct nature of some of them would make that clear.

    It was a reflection on the difficulty of trying to gauge views/intentions via asking questions. If such things are to be gauged, then the intent probably needs to be disguised. And even then, tricky…

  47. “BBC Scotland are doing a retrospective on the fall of LiS – “in their own words”.”

    ————-

    Lol, the gift that keeps on giving. We’ll never get Amber back at this rate…

  48. The torygraph are becoming more eurosceptic by the day.

    http://www.telegraph.co.uk/news/worldnews/europe/11686444/The-EU-commandments-10-things-David-Cameron-must-change-in-Europe.html

    Murdoch is rumoured to have gone cold on Brexit so perhaps the Barclays see a business opportunity.

  49. Interesting views on the BBC by a former editorial director.

  50. All of this is very interesting, but the question it raises is why now? Apart from the slight overestimate of the Lib Dem vote in 2010, election polling has been, by and large, accurate for the past twenty years.
