On Tuesday the BPC/MRS’s inquiry into why the polls went wrong publishes its first findings. Here’s what you need to know in advance.

The main thing the inquiry is looking at is why the polls were wrong. There are, essentially, three broad categories of problem. First, there could have been a late swing – the polls could have been perfectly accurate at the time, but people changed their minds. Secondly, respondents could have given inaccurate answers – people could have said they’d vote and then not done so, or said they’d vote Labour but actually voted Tory, and so on. Thirdly, the samples themselves could have been wrong – people responding to polls were honest and didn’t change their minds, but the pollsters were interviewing the wrong mix of people to begin with.

Some potential problems can straddle those groups. For example, polls could be wrong because of turnout, but that could be because pollsters incorrectly identified which respondents would vote, or because they interviewed people who were too likely to vote (or a combination of the two). You end up with the same result, but the root causes are different and so are the solutions.

Last year the BPC held a meeting at which the pollsters gave their initial thoughts on what went wrong. I wrote about it here, and the actual presentations from the pollsters are online here. Since then YouGov have also published a report (writeup, report), the BES team have published their thoughts based on the BES data (write up, report) and last week John Curtice also published his thoughts.

The most common theme through all these reports so far is that sampling is to blame. Late swing has been dismissed as a major cause by most of those who’ve looked at the data. Respondents giving inaccurate answers doesn’t look like it will be a major factor in terms of who people said they would vote for (it’s hard to prove anyway, unless people suddenly start being honest after the event, but what evidence there is doesn’t seem to back it up), but could potentially be a contributory factor in how accurately people reported whether they would vote. The major factor, though, looks likely to be sampling – pollsters interviewing people who are too easy to reach, too interested in politics and too engaged with the political process, and – consequently – getting the differential turnout between young and old wrong.

Because of the very different approaches pollsters use, I doubt the inquiry will be overly prescriptive in its recommended solutions. I doubt they’ll say pollsters should all use one method, and the solutions for online polls may not be the same as those for telephone polls. Assuming the report concludes that the polls got it wrong because their samples were made up of people who were too easily contactable, too politically engaged and too likely to vote, I see two broad approaches to getting it right. One is to change the sampling and weighting in a way that gets more unengaged people – perhaps ringing people back more in phone polls, or putting some measure of political attention or engagement into sampling and weighting schemes. The other is to use post-collection filters, weights or models to get to a more realistic pattern of turnout. We shall see what the inquiry comes up with as the cause, and how far they go in recommending specific solutions.
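As a toy illustration of that second, post-collection approach, here is a minimal sketch (in Python, with entirely invented figures and a made-up turnout model – not any pollster’s actual scheme) of how weighting respondents by an estimated probability of voting shifts a headline figure when the raw sample over-represents high-turnout groups:

```python
# Toy sketch of the "post-collection" turnout fix described above.
# All numbers are invented; the turnout probabilities are a made-up
# stand-in for whatever model a pollster might actually use.

def weighted_vi(respondents):
    """Headline VI (%) weighting each respondent by turnout probability."""
    totals, weight_sum = {}, 0.0
    for party, turnout_prob in respondents:
        totals[party] = totals.get(party, 0.0) + turnout_prob
        weight_sum += turnout_prob
    return {p: round(100 * w / weight_sum, 1) for p, w in totals.items()}

# A sample that over-represents engaged young respondents: near-level raw VI.
sample = (
    [("Lab", 0.6)] * 35      # younger, lower assumed turnout
    + [("Con", 0.9)] * 33    # older, higher assumed turnout
    + [("Other", 0.7)] * 32
)

unweighted = weighted_vi([(party, 1.0) for party, _ in sample])
weighted = weighted_vi(sample)
print("unweighted:", unweighted)      # Lab narrowly ahead
print("turnout-weighted:", weighted)  # Con pulls ahead
```

In this invented sample the parties are nearly level on raw numbers, but once each respondent is down-weighted by a notional turnout probability the older, higher-turnout group pulls ahead – the same mechanism by which a sample of over-engaged respondents can overstate the turnout of the young.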

While the central plank of the inquiry will presumably be what went wrong, there were other tasks within the inquiry’s terms of reference. They were also asked to look at the issue of “herding” – that is, pollsters artificially producing figures that are too close to one another. To some degree a certain amount of convergence is natural in the run-up to an election, given that some of the differences between pollsters come down to different ways of treating things like don’t knows. As the public make their minds up, these will cause less of a difference (e.g. if one difference between two pollsters is how they deal with don’t knows, it will matter more when 20% of people say don’t know than when 10% do). I think there may also be a certain sort of ratchet effect – pollsters are only human, and perhaps we scrutinise our methods more closely if we’re showing something different from everyone else. The question for the inquiry is whether there was anything more than that – any deliberate fingers on the scales to make polls match?
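The don’t-know point can be made concrete with a toy sketch (Python, invented numbers, and a deliberately crude reallocation rule standing in for any real house method): two pollsters who agree exactly on decided voters still publish different headline figures, and the gap between them shrinks as the don’t-know share falls.

```python
# Toy sketch of the don't-know effect described above. Invented numbers;
# "give all DKs to one party" is a deliberately crude stand-in for any
# real reallocation rule.

def headline_con(decided_con, dk, reallocate_to_con):
    """Headline Con % from raw shares (decided shares + dk sum to 1)."""
    if reallocate_to_con:
        con = decided_con + dk          # pollster B: reallocate DKs to Con
    else:
        con = decided_con / (1 - dk)    # pollster A: drop DKs, renormalise
    return round(100 * con, 1)

for dk in (0.20, 0.10):
    # decided voters split 55/45 Con/Lab within the non-DK share
    con = 0.55 * (1 - dk)
    a = headline_con(con, dk, reallocate_to_con=False)
    b = headline_con(con, dk, reallocate_to_con=True)
    print(f"DK share {dk:.0%}: pollster A {a}%, pollster B {b}%, gap {round(b - a, 1)}")
```

With a 20% don’t-know share the two (hypothetical) houses are nine points apart on identical raw data; halve the don’t knows and the gap roughly halves too – convergence with no herding at all.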

Finally, the inquiry have been asked about how polls were communicated to the commentariat and the public – what sort of information is provided, and what guidance is given as to how they should be understood and reported. Depending on what the inquiry find and recommend, this area could turn out to be quite important for how polls are released and reported in the future. Again, we shall see what they come up with.
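One concrete option sometimes suggested in this area is publishing ranges rather than single figures. As a rough sketch (assuming pure random sampling, which real polls only approximate, and ignoring design effects and house weighting), the familiar “plus or minus three points” is just the 95% sampling margin of error for a share near 50% on a sample of about 1,000:

```python
# Sketch of publishing a range instead of a point estimate. Assumes simple
# random sampling; real polls have design effects this ignores.
import math

def moe_95(share_pct, n):
    """95% margin of error, in percentage points, for a proportion."""
    p = share_pct / 100.0
    return 1.96 * math.sqrt(p * (1 - p) / n) * 100

def as_range(share_pct, n):
    """The share +/- its margin of error, rounded to one decimal place."""
    m = moe_95(share_pct, n)
    return (round(share_pct - m, 1), round(share_pct + m, 1))

low, high = as_range(34, 1000)
print(f"Con 34% on n=1000 -> {low}-{high}")  # roughly 31.1-36.9
```

On that simple model a reported Con 34% on a sample of 1,000 would be published as roughly 31–37; real-world uncertainty is, if anything, larger.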


71 Responses to “This week’s Polling Inquiry”

  1. Since the inquiry will be concentrating on the areas where pollsters got it wrong, I wonder if the areas where they got it right will be ignored?

    Worst of all would be a change of methodology which corrects an error (sampling/weighting or other) in England outwith London, but screws up everywhere else!

  2. The pollsters got it right in Scotland so any explanation should also explain why the particular factor(s) did not apply to Scotland.

    My view is the inaccurate polling went a long way to helping the Tories to victory, as ‘hung parliament’ set the narrative and there was little/no scrutiny of the policies of a majority Tory government.

  3. I liked Rawnsley’s article on this subject today in the Observer and his position is very similar to mine. Imperfect as they are, opinion polls are still the best tool we’ve got to inform us about the state of public opinion. Like him, I too suspect that people “game” the pollsters much more readily than they used to, and in the age of the internet, and with an ageing electorate, it’s hellishly difficult now, however sophisticated the methodology may become, to obtain a truly representative sample. Maybe the 3% plus-or-minus MOE will have to be broadened.

    Rawnsley also raised two very interesting discussion points. Firstly, and very much in line with Anthony’s wise counsel (and AOH’s too, it has to be said), the sub-headline question responses can tell us much more than the voting intention figures in terms of forecasting an election result. Miliband’s persistently poor personal ratings and Labour’s chronic lack of credibility on the economy were giving us telltale clues that many ignored, including me, as we became hoodwinked by the overall VI ratings. I shan’t make the same mistake again!

    Secondly, Rawnsley, whilst rebuffing calls to ban opinion polls during an election campaign, did posit the thought that the neck-and-neck opinion polls may well have helped the Tories in 2015. Not only did they raise the spectre of a minority Labour Government propped up by the SNP, they also steered attention away from a far more focused discussion of what a potential majority Tory Government might do. Polls consistently showing a 6-7% Tory lead – the real figure, as it turned out – would have changed the whole tone of the campaign. Instead, what became the key issues turned out to be bogus ones based on wholly erroneous data.

    A scary thought indeed.

  4. It seems like the message is that professional polls have got too close to convenience sampling. So it seems like there is going to be a challenge to pollsters in how to most economically deal with the need to improve that.

  5. Thinking of the views from Couper2802 and Crossbat11: it seems to me that for pollsters to be allowed to affect the course of elections in the way that happened, there is an obligation on the pollsters to use methods that don’t have known avoidable flaws.

  6. I suspect that the inquiry will come to conclusions that would involve the Pollsters in unwelcome increased expenditure in order to improve sampling, which does seem to be the favourite cause of the problems so far.

    Will the Polling Industry be interested in the extra expenditure necessary for more accurate results? I doubt it – perhaps a little, but with no consequential increase in costs.

    Political polling just isn’t worth it for them: getting political results more right than another pollster brings little in the way of increased financial returns from the commercial sector.

    AW may tell me I am wrong, and I stand to be corrected – but my awful fear is that the Polling Industry will be attracted by half-measures of little cost, rather than the full cost implications of the inquiry’s findings, resulting in polls being as unreliable as ever. That is unless there is some very rich Fairy Godmother who is willing to pay for political polls using the more expensive sampling methodology proposed!!

  7. Apropos the sampling, I would argue that it’s not just the pollsters who can’t find everyone.

    The BBC insists that its audiences are a cross-section of society, but if you were to listen to the overwhelming majority of opinions from the floor of Question Time you’d conclude that Labour are winning by a mile. The same thing was true of the predominantly ‘yes-leaning’ audiences during the Scottish referendum.

    It seems conservatives (note the ‘c’) vote, but they don’t seem to want to talk politics or worse still, answer intrusive questions.

  8. Tony Dean: at the same time there is a financial imperative for pollsters to make their polls reliable. Political – and especially general election – polls are the ‘shop window’ for the industry. The public perception of polling took a hammering (greater than was warranted, imo), which has caused, and will continue to cause, harm to the industry in the UK and the companies within it. It might be that avoiding that harm and repairing reputations over the next 5 years will be enough incentive to improve methods, even if there is a cost attached. We shall see.

  9. Good evening all from cold clear crisp Westminster North.

    Whatever the reasons are for the polls getting it wrong I did say on several occasions leading up to the 2015 election that the top line Labour VI was masking a multitude of sins (polling on other key indicators was rotten for them) and that the Tories would poll ahead of Labour when the votes were counted.

    I think the pollsters are making a lot of noise and making it too complex as to why they got it wrong, when it was simply the voters bolting from a party they may have said they would have voted for and, in the end, sticking with the devil they knew.

    The polls got the top line VI correct in Scotland because in the end there was no real shift on polling day.

  10. #there

  11. @Oldnat / Couper

    If the ‘shy Tories’ theory is correct, it wouldn’t affect Scotland, as there are far fewer of them there, while in England it made a difference.

    Personally I believe that the Conservatives with Crosby targeted the collapsed Lib Dem vote in the South West, taking all the seats there and winning the election (and holding existing seats for the most part).

    Maybe places like the South West get less regional in-depth scrutiny, and national calcs miss out on regional variations (e.g. Scottish list votes).

  12. CROSSBAT11

    Rawnsley [said] the sub-headline question responses can tell us much more than the voting intention figures in terms of forecasting an election result. Miliband’s persistently poor personal ratings and Labour’s chronic lack of credibility on the economy, were giving us tell tale clues that many ignored, including me, as we became hoodwinked by the overall VI ratings.

    But Rawnsley is wrong in this, because if those other statistics were a much better predictor of electoral outcome, then we would see a lot of people in the recontact surveys sheepishly admitting that they had changed their minds and voted Tory[1].

    Instead there’s very little evidence of a late swing large enough to explain the failure of the polls – the recontact surveys come up with results similar to the original polling, not to the actual result. The same people who said they were unhappy on leadership or economics are the same ones who gave the faulty VIs. And if they’re not telling the truth about their vote, why should they be doing so about other things?

    [1] Similarly Lord Ashcroft did an immediate post-vote mega-poll:

    http://lordashcroftpolls.com/2015/05/why-did-people-vote-as-they-did-my-post-vote-poll/

    with a sample of 12,553 voters using a mixture of online and phone which gave a result of Con 34%, Lab 31%, Lib Dem 9%, UKIP 14%, Green 5%

  13. Statgeek

    While the GB/English narrative is “hard to find Tories”, it would be surprising if it was preferred VI that made such people hard to find, and not other demographic/lifestyle factors.

    There may be equivalent souls in London, Wales and Scotland – but if they are better spread across the parties, then the polls in these places would still be accurate.

    If pollsters simply weight to increase the weight of the missing Bedfordshire Tories, but apply such a simple (and very cheap!) adjustment across GB, then they will simply replace one error with another.

  14. Panelbase/Sunday Times & Heart Radio polls in both Scotland and E&W

    http://www.heart.co.uk/scotland/news/local/65-scots-reject-brexit-from-eu/#R5Jh0P1A0ryzrwv1.97

    Scots are overwhelmingly opposed to “Brexit” (by 65% to 35%) while those south of the border in England are narrowly in favour (53% to 47%).

    Because of England’s far larger population, just over 51% support for Brexit there would be enough to overturn Scottish support for the EU.

    In the event of a UK vote for Brexit, a majority of voters in Scotland (54%) say they would want a second independence referendum to be held and in those circumstances support for a separate Scotland rises to form a small majority.

    The poll puts current support for Scottish independence at 47%, two points up from the September 2014 referendum, with 53% opposed – unchanged since last September.

    However, backing for Scotland to become independent grows to 52% when voters are asked if their minds would be changed by the UK withdrawing from the European Union.

  15. In a previous thread, Graham raised the question as to why this became a particular problem in 2015 as compared with earlier elections, suggesting a sharp decline in response rates as a possible cause.

    It’s a very good question and applies not just to UK general elections, but also to how well (or not) polling did in other elections such as the Euros. But any explanation of the polling failures[1] has to also account for polling successes, otherwise the danger is fitting to one data point and throwing out the rest. And of course that includes the polling that was correct in May as well – in Scotland and London, for example.

    Pollsters got 2010 fairly close. They over-estimated the Lib Dems at the expense of both Lab and Con, but most got the 7 point Con lead or thereabouts. But in 2015 most predicted a tie or a 1-2 point Con lead rather than the actual 6.6 points. So with a similar result for the top two they got 2015 wrong, using very similar methods.

    Curtice’s British Social Attitudes paper doesn’t really mention this – it’s more concerned with making the case for these big-sample random surveys and what they tell us. As such it’s a useful confirmation of similar conclusions from the British Election Study random survey and the paper contains useful comparisons of the results of the two[2].

    But while both BSA and BES did better than the polls with the Lab-Con margin, both over-estimate Lab and Con by a few points each mainly at the expense of UKIP and also (by about a point) the Lib Dems[3]. The UKIP underestimating is particularly striking with them failing to find about a quarter of the UKIP vote. It suggests that exhaustive random sampling may not be the panacea either.

    I don’t think a decline in response rate can really be responsible for all of this (though it doesn’t help). That shouldn’t affect online polls, and the drop-off in phone polls seems to have been happening for a while. The contact rates for BSA and BES also seem to be dropping. So there must be other reasons as well, perhaps related to the sort of people the hard-to-contact are.

    [1] As I keep on pointing out there really needs to be a look at constituency polling as well – specifically in the Lib Dem held seats. If those had stayed Lib Dem with the Ashcroft predictions there would be no Conservative majority. Anecdotal evidence suggests that people voted ‘nationally’ rather than locally, perhaps encouraged by press campaigns and direct mail etc, but why it should be so much more effective in 2015 than other times is an additional issue.

    It might be interesting to compare the failure of UK constituency polling with the recent Canadian federal election where there also seemed to be a lot of polling in individual ridings.

    [2] One thing that Curtice doesn’t emphasise is that the two surveys use slightly different samples – BSA looking at all British adults (except those NW of the Great Glen) while BES is concerned only with those who can vote (ie not most EU nationals etc). This probably explains the slightly lower percentage of respondents who said they voted in the BSA (70.3% versus BES 73.6% – both as weighted). Both are higher than the actual figure 66.4% which you would expect for some technical reasons which Curtice mentions, but also because non-voters would be more likely to refuse to take part. Even after all the chasing up only 51% of the BSA sample agreed to take part (and only 56% of BES’s).

    [3] The discrepancies are actually made worse by weighting. I suspect that this may partly be a ‘shy UKIP’ factor which we have seen in telephone polls as well. Though in this case ‘shy’ may be more being unwilling to take part rather than giving the wrong answers. In online polls this may be compensated by some UKIP supporters being more likely to join panels. So too few of one type of UKIP supporter are balanced by too many of another.

    [Updated and corrected version of comment on earlier thread]

  16. @Ben Foley et al

    ‘it seems to me that for pollsters to be allowed to affect the course of elections in the way that happened, there is an obligation on the pollsters to use methods that don’t have known avoidable flaws.’

    That is very true but it was primarily the responsibility of journalists to report critically, and thoroughly assess, the policy proposals of the different political parties. A large part of the problem in the 2015 GE was that the media were carried away with the opinion polls and matched their coverage accordingly.

    I was aware of endless caveating on the part of opinion pollsters, but journalists simply took the results at face value and endlessly obsessed about the options for a hung parliament to the exclusion of virtually everything else. The MSM heralds its ‘free-ness’ as playing a vital role in democracy, but the 2015 GE exemplified its deficit in decent reporting of the different political options. The fact that the opinion polls suggested a hung parliament was worthy of mention, but the focus should nevertheless have been on the manifestos, regardless of what the polls were suggesting. The Conservative policies in particular were given little attention… and the BBC were as bad as the privately owned press.

  17. Mike Smithson said before the May general election, and also in recent weeks with reference to Corbyn, that the safest polling indicators with regard to the broad direction of travel (i.e. which party is most likely to win) are leadership ratings and who would make the best PM. Pro-EdM posters pooh-poohed this for 5 long miserable years (“people vote for a party not a leader; this is not a ‘presidential’ system” etc etc ad nauseam).

    Today’s numbers on Corbyn are (just) the latest batch of disastrous personal ratings.

    Oh – and the party numbers are terrible as well: though luckily fieldwork was completed before Corbyn on smart and Thornberry on SP…

  18. @ Roger Mexico

    ‘As I keep on pointing out there really needs to be a look at constituency polling as well – specifically in the Lib Dem held seats. If those had stayed Lib Dem with the Ashcroft predictions there would be no Conservative majority.’

    Absolutely, and those seats were almost certainly the ones targeted by the 40/40 strategy, with micro-profiling and literature tailored for individual groups of voters. There is still secrecy about which constituencies the Conservatives focused on using these methods but according to Conservative Home, there seems to have been a lot of discontent amongst those excluded constituencies because they received no resources and were expected to campaign exclusively in their nearest target seats.

    I wonder what impact the concentration on 80 constituencies might have had in skewing the national polls. Looking at individual constituencies, it is clear that the LD vote collapsed nationally and that both Con and Lab benefitted to some degree but a targeted campaign may well have differentially increased the LD to Con vote beyond the national average with relatively few additional votes.

    In addition, in some LD-held seats, like Sutton and Cheam and Lewes, there was a Conservative gain with hardly any increase in the Tory vote (+184 and +805)… the LD vote was dissipated to Lab, UKIP and the Greens.

    This fits with Labour increasing its overall vote in England and Wales by 1.5m whilst the Conservatives achieved a majority government with a gain of only 500k votes.

  19. “The pollsters got it right in Scotland so any explanation should also explain why the particular factor(s) did not apply to Scotland.”
    ________________________________________________________________

    That would suggest it is something to do with the Conservative voters, as there aren’t too many of them in Scotland.

    Shy Tories? So is it overlooked Tories? Inaccurately weighted Tories?

  20. Let’s remember though that a number of the Lib Dem seats, particularly in the south, were always Tory really and have merely returned to the norm. I think particularly of Lewes, in which constituency I used to live. It only went Liberal in 1997 – I helped it along its way, being fed up with the Tories at that time with all the sleaze and a weak PM – but it was always going to return to blue eventually.

    That these seats held out for so long is merely a reflection of the quality of the individual Lib Dem MPs. However, in 2015, when it looked like the country was going to be run by the strange one with a belligerent SNP calling the shots in a hung parliament, then of course the old Tories returned to the fold and the LDs were sacrificed on the altar of political oblivion. There is no surprise to me that those seats were aggressively targeted by the Tories (all’s fair in love and war) – they were always going to be the easiest to win over. Easy to argue that the LDs had been a roadblock to whatever policy, we must get rid of them, sort of thing.

    Might the result have been different if the polls had been accurate? Quite possibly, but not certainly. Cameron is the most left-wing leader the Tories have ever had. Blair was the most right-wing leader Labour has ever had. In other words, they are both in the centre, which is where UK general elections are won.

    Labour under Ed had swung well to the left and spent 5 years rubbishing New Labour. It is now swinging to the extreme left under Corbyn, supported by the great unwashed, many of whom won’t actually bother voting when it comes to it and are probably concentrated in Labour’s heartlands anyway.

    The majority of British people don’t do extremes, and until Labour rediscovers that, it will never be in government. Equally, the Tories must take care not to swing to the extreme right when Cameron stands down. It is something they could easily do if they are not careful.

  21. I wonder what corrections the Pollsters need to make in order for Corbyn to win the GE :-)

  22. @Syzygy

    “This fits with Labour increasing its overall vote in England and Wales by 1.5m whilst the Conservatives achieved a majority government with a gain of only 500k votes.”

    The England and Wales battleground in May 2015 did indeed give a rather mixed picture and while it is true to say that Labour performed poorly in historical terms, they did improve quite significantly on their 2010 performance. The LD collapse and the SNP landslide in Scotland gave the UK wide election a rather different look and has allowed many commentators to spin it as some extraordinary Tory triumph. In part it was, I suppose, certainly in the sense that the country elected its first Tory majority government for more or less a generation, but in terms of popular support, never has one emerged with such a dubious and weak mandate.

    This is another by-product of the inaccurate election opinion polls. Such was the surprise at the result, when the actual outcome defied expectations to such a large extent, that it allowed many people to exaggerate the scale of the Tory triumph. Labour gloom and disappointment helped this narrative take hold too.

    Victors tend to write the history, I know, but a detailed analysis of the 2015 GE result presents a much more confused and nuanced picture. Tory triumph and Labour rout doesn’t really do it justice.

  23. CROSSBAT11

    “never has one emerged with such a dubious and weak mandate.”

    That’s just personal bias, you forgot the IMO.

    It was a Tory triumph in the sense that their targeting especially of LibDem seats worked really well for them.

  24. COLIN

    Restriction of voting to Labour Party members only.

    :-)

  25. CROSSBAT11

    I agree about Rawnsley’s article in the Observer, The sub-headline questions are key to identifying who is going to win elections.

  26. @Syzygy 12.22 a.m.

    “I was aware of endless caveating on the part of opinion pollsters but journalists simply took the results at face value, and endlessly obsessed about the options for a hung parliament to the exclusion of virtually everything else.”

    Pollsters could perhaps help journalists by including the dks in all their figures. Perhaps issuing the range of % points instead of only one might help as well (e.g. Con 32-35, Lab 30-33 etc.)

    But at the end of the day, the pollsters are going to have to find some way of allowing for greater flexibility and variety across the regions and nations of the UK than is currently the case. Perhaps we ought to insist that Northern Ireland figures always be included, just as a reminder to everyone that not all the UK population lives in English suburbia!

    I say this not because I am against English suburbia in particular, but as a reminder that geographical concentrations matter. The SNP wins 56 seats whereas UKIP wins 1. A general ‘across the board’ approach may have worked twenty five years ago, but it certainly doesn’t in the present political climate.

  27. TOH

    :-) -wouldn’t be surprised to see it proposed actually.

    Quite enjoying the narrative here that Cons didn’t really win the GE-except in perverse voting sort of a way, engineered by Pollsters & LibDems.

    Dick Tuck always comes to mind on these occasions.

  28. Publishing the expected range of poll results as suggested above would initially confuse many people but would be far more honest and perhaps help everyone understand the reality of polling, that it is an inexact science.

    I too believe the hyping of the polls was stifling.

  29. @ John B

    In some countries on the Continent pollsters give the headline figures for both full population and those naming the VI for a party (rather than by likelihood of voting), and both are reported, yet they often get it wrong (e.g. Greece, in spite of their voting law).

  30. Hello everyone.

    I’ve been lurking on here for the last 5 years, since about this point in the last electoral cycle. I finally decided to create an account and join the fun and games. Thanks for the interesting discussion, and of course thanks to AW for creating the site.

    It’s an interesting problem the more you look at it. Someone makes the point above that if the leadership and economy ratings were the underlying thing we should have spotted, then how come the people polled seem to have voted the way they said they would? That is, assuming they’re telling the truth when re-contacted. But you’d have thought that the people who run an online panel have quite a lot of past information on their panellists now, and so have a much better chance of spotting inconsistencies in responses than the users do of remembering the different times they’ve lied to them.

    Of course the answer to that could just be that Miliband’s personal ratings were actually even worse than polled – and it’s just that we weren’t capturing the people who disliked him even more.

    I do feel sorry for the pollsters now. I’ve answered one political poll in the past. But as everyone who phones you up now says “I’m not selling anything, just doing a survey” – I can’t be bothered to sort the sheep from the goats, and tend to just put the phone down. Plus if they phone in the day, I’m at work, and if they phone just after I’ve got back from work I’ve either just sat down, or am making dinner. If online panels of willing volunteers aren’t representative of the population – I’m struggling to see how we can successfully poll, short of randomly going out and lassoing people off the streets.

    Could the pollsters club together and fund some kind of voter capture and torture centre? Should at least cut down on the don’t knows…

    Finally, a point on showing the results of polls. I don’t often click through to see the tables – but what the press reports always seem to show are pie charts of voting intentions.

    As journalists are both busy and lazy, could you improve the reporting by not giving them these? Instead, give them a line graph of voting intentions for the last few months, if not since the last election. That way, one-off fluctuations in the lead will look like the noise they usually are – and people can’t do the lazy headline of “Labour lead leaps 4 points, voters now see Cameron as the anti-Christ and want to have Miliband’s babies!” They even run versions of that when the lead has “jumped” by one point.

    What with showing at least 4 parties on the national graphs, I presume it wouldn’t be so easy to show the margin of error.

  31. Roger Mexico’s point on the LibDem vote-collapse is important for the polling, but also the direction of the churn in different parts of the country.

    Considering the reinterviewing outcomes, apart from the sampling issue, DKs could also be a major problem. When graphing VIs with the DKs included, it seems that the change in VI is often not direct from party to party, but via the DKs (especially striking in the LibDem VI at the beginning of the last parliament). However, this is not a valid method (and probably rests on an incorrect assumption as well), if I recall a discussion from 2011 correctly, as the DKs are not weighted. But then any error coming from the DKs couldn’t be discovered either…

  32. There is some muttering that the inaccuracy of the polls affected the result, and while this may be true, it is impossible to know in which direction. Everyone loves a winner, and the fact that the Tories were projected to lose to a Labour-led coalition might have depressed their vote in some areas. The Lib Dems used to exploit this tendency with their “LDs winning here” posters (before they lost all their activists).

  33. Is there a lesson here for all “populist protest” political parties? It’s no good pleading that the world is more complex than you thought – disillusion is followed by rejection.

    http://sputniknews.com/europe/20160117/1033271835/greece-nd-poll.html

    (unless the Greek polls are wrong, of course)

  34. Have done a write-up of the Scottish Survation poll:

    http://www.statgeek.co.uk/2016/01/survation-holyrood-poll-january-2016/

    It looks like the SNP will tread water, while UKIP might gain two seats. Labour seem to be the big losers (seems to be the recurring theme in Holyrood polling).

  35. @ Colin

    Thanks for the link for the Greek polling.

    This one gives more details:

    http://www.electograph.com/2016/01/greece-january-2016-alco-poll.html

  36. @ Crossbat11

    ‘Victors tend to write the history, I know, but a detailed analysis of the 2015 GE result presents a much more confused and nuanced picture. Tory triumph and Labour rout doesn’t really do it justice.’

    Victors, and those who want to argue for a return to New Labour. It will be interesting to find out whether the Beckett report differs markedly from the Blue Labour conclusions of Jon Cruddas.

    The LD results are usefully analysed in a two-part series

    http://www.socialliberal.net/lib_dem_seats_in_2010_5_where_did_the_votes_go_part_1_of_2

  37. A couple of observations:

    Is the underestimation of Tory and (possibly) UKIP VI in the 2015 GE because these voters tend to be less ‘engaged’ with the modern world – i.e. less likely to use computers (so more reliant on traditional news sources); older and less socially engaged (a narrower social circle); and less inclined to respond to what may be perceived as intrusive or impertinent enquiries (especially by phone)? That would be a sufficient explanation for lower response rates.

    Secondly, I feel that commercial opinion polling has been able to get away with increasing sloppiness. If you get people’s preference between McDonald’s and KFC wrong by 5%, no-one will be any the wiser. Only at a GE does the ‘right’ answer emerge to provide a wake-up call.

    Like many posters here, I respond to YouGov surveys and I am frequently appalled by questions so ambiguous and badly written that you end up answering almost at random. The commercial imperative is clearly to get the survey done as quickly and cheaply as possible, providing results that the client will swallow and pay for.

  38. Anthony
    My 7.57 post tripped into auto-mod for some reason. I have just rewritten and reposted it, but while I was doing so you released the original, and the rewritten one has also been auto-modded!
    Can you delete the one at 12.01, as it is a repeat, and can you tell me what caused the auto-mod in both cases please? Presumably a particular word, but I can’t work out which one.

  39. There is another oddity about polling. YouGov had an experimental constituency-level prediction on their website (using more than just standard demographic data). It has not been accessible since the elections, but surely it still is for YouGov.

    If my memory is correct (no guarantee), the relative VIs were pretty accurate for the NW, with a small underestimation of Labour (including those on the other side of the water – Wirral, and also Cheshire and Greater Manchester). I can’t recall other regions at all. But surely it is a potential source for data mining.

  40. I Ain’t Spartacus

    Welcome, and thanks for coming out of the shadows.

    “Could the pollsters club together and fund some kind of voter capture and torture centre?”

    That’s the kind of innovative thinking that is required around here.

    Perhaps we could make the subjects watch endless loops of PPBs? Guaranteed to weaken the strongest resistance.

  41. Statgeek

    Thanks. I have my bag of popcorn ready!

  42. @Robert

    I generally find that posting about polling is what most often gets me modded – that and food, for some reason, if that helps any. Cricket is OK though.

  43. @ Robert Newark

    Interesting reflections on the Lewes constituency. It seemed from the Ashcroft polling that Norman Baker, the LD incumbent, was heavily reliant on tactical voting and a personal vote drawn from both Lab and Con supporters.

    I am also pretty sure that Lewes was not one of the 40/40 target seats, being considered a safe LD seat. That was certainly my impression on the ground. There was no visible campaigning on behalf of the Conservatives. Perhaps Lewes should be the ‘Wokingham’ standard by which to measure the impact of the American/Australian style of election campaigning.

  44. Robert Newark
    ‘Cameron is the most left wing leader the Tories have ever had. Blair was the most right wing leader labour has ever had.’

    I agree about Blair but not Cameron. Macmillan was more left-wing, as was Heath after the U-turn.

  45. ROBERT NEWARK

    @”Equally, the Tories must take care not to swing extreme right, when cameron stands down. It is something they could easily do if they are not careful.”

    Yes-this is the other side of the Corbyn coin for Cons.

    On one side – never-ending navel-gazing by the main party of opposition & so little examination of policy failures & mistakes.

    On the other side – the temptation for complacency & a “we can now get away with it” mentality.

  46. Colin

    There is something to be said for the “give ’em enough rope” argument. Tax Credits springs to mind. The government would be in serious trouble now but for the Lords.

  47. HAWTHORN

    The House of Lords is not Her Majesty’s Official Opposition.

    They have an important function, to be sure, but an overpopulated, unelected Chamber is no substitute for effective opposition in the elected Chamber. The HoL is filling the vacuum at present.

  48. Colin

    The Wiki article on Greek polling is always very useful as well:

    https://en.wikipedia.org/wiki/Opinion_polling_for_the_next_Greek_legislative_election

    There aren’t really enough polls yet to tell whether the change is anything more than noise or a passing effect due to ND having elected a new leader (the previous one was always pro tem anyway). Individual pollsters seem to vary quite a lot and house effects are quite strong.

    The change in lead isn’t so much due to Syriza losing votes as ND gaining extra votes from DKs and smaller Parties. Though even the movement from the latter is thinly spread – it’s not like the centrist Parties (such as the Union of Centrists or the River) have suddenly collapsed. So it may just be a honeymoon effect – we’ll need another month’s worth of polls at least. If the previous ND leader was a turn-off for some voters then it may be more permanent.

    But of course the UK wasn’t the only place where the pollsters had problems in 2015. In the run-up to the Greek election on 20 September most of the polls showed Syriza only a point or two ahead of, or tied with, ND. In the end they had a 7-point lead (as in the UK, the exit polls were more reliable). They also got the referendum wrong in a similar manner.

    On the other hand the Greek pollsters called the January 2015 election (with a very similar result to September’s) pretty accurately. So it’s possible that Greek pollsters have even more confusing problems than British ones. Though whether they’re doing anything about it (apart from hoping it goes away) I don’t know.
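    Roger’s “anything more than noise” question above can be given a rough quantitative form. A sketch with hypothetical figures, assuming two independent simple random samples (design effects and house effects in real polls widen the interval further):

```python
import math

def change_is_noise(p1: float, n1: int, p2: float, n2: int, z: float = 1.96) -> bool:
    """Crude check: is the movement in one party's share between two
    independent polls within 95% sampling noise? Assumes simple
    random samples of sizes n1 and n2."""
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return abs(p2 - p1) <= z * se

# A one-point "jump" (34% -> 35%) between two hypothetical
# 1,000-person polls is comfortably within sampling noise:
print(change_is_noise(0.34, 1000, 0.35, 1000))  # True
```

    By this yardstick, a single party would need to move around four points between two 1,000-person polls before the change outran pure sampling error, which is why a month or more of polls is needed to separate a real ND recovery from a honeymoon blip.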

  49. ROGER

    Thanks

    I noted the context of ND’s new leader. We will see whether it sustains.

    In any event, they have nearly as long as we do before a GE, so current polling – right or wrong – is a bit academic in both countries.

  50. Polling? Academic???

    How can you say that? Sacrilege!!
