ICM’s (presumably) final poll of the year has topline figures, with changes from their last poll three weeks ago, of CON 39%(-2), LAB 34%(+4), LDEM 18%(-1), suggesting a significant recovery for Labour from the worst of their lows. Unlike in the YouGov poll yesterday, there is no boost for the Lib Dems, but at least some of the poll would have been conducted before Nick Clegg was crowned Lib Dem leader, so we shouldn’t expect to see any boost from the publicity surrounding the new leader yet.

The poll was conducted almost simultaneously with yesterday’s YouGov poll, yet they tell sharply contrasting stories: YouGov had Labour still declining, ICM recovering. The other pollsters don’t give us much help – Populus also had Labour in steep decline, Ipsos MORI had them recovering.

Incidentally, something odd is happening with ICM polls lately. Normally I assume all polls done by the same pollster using the same methodology are perfectly comparable, ignoring considerations like what time of the week the poll was conducted or what paper commissioned it.

Looking at ICM’s polls since the Tories retook the lead in October though, there seems to be a consistent pattern of lower Tory shares in ICM’s Guardian polls compared to the ones for the Sundays. Their first poll to put the Conservatives back ahead was a poll for the Sunday Telegraph that put the Tories up on 43%; the following weekend’s poll for the Guardian put them down 3 points at 40%. A fortnight later a poll for the Sunday Express had them back up at 43%, the following Guardian poll had them back down to 37%, a week later they were back up at 41% in a News of the World poll, and now they are back down to 39%. Until this poll there was a similar up-and-down pattern to the Lib Dem score in ICM polls, with higher Lib Dem scores in polls for the Guardian.

I was pondering whether this difference was due to Guardian polls being done at the weekend, unlike ICM’s polls for the Sundays, which are conducted mid-week. This poll was conducted mid-week though. There is no apparent difference in any of the weightings or adjustments made to the polls ICM do for the Guardian or for other papers, so perhaps it is purely coincidence.

UPDATE: ICM have also updated their analysis of regional breaks in the vote. I always advise people to be wary of the regional breaks in normal polls because they have very low sample sizes, and because polls are only weighted at the national level, so may be skewed within regions. Every now and again ICM aggregate their data from several months of polls to produce regional breaks – this doesn’t necessarily do much to alleviate problems with weighting, but does at least take away concerns about sample size. When they did this exercise back in August they found the Tory advance was largely concentrated in the South outside London, with only modest advances in the Midlands and London and them going backwards in the North.

The latest analysis, based on data collected since October, shows the Tory advance is still strongest in the South, but they are now making strong progress in the North and the Midlands too. Only in Scotland and Wales are they relatively static, suggesting the Conservatives are now making a broader advance.


26 Responses to “ICM show Labour recovering”

  1. Anthony – the poll was carried out on Tuesday and Wednesday so most of the interviews probably took place after the announcement of Clegg’s victory. I think that with YouGov the fieldwork started on the Monday. Given your information on previous polls I would assume that more than half the responses came in on the Monday.

    Yes – I had noticed that ICM Guardian polls seem to be less favourable to the Tories than those for the Sunday. I think that the November poll for the Guardian was also carried out mid-week.

    It will be interesting to see how ComRes, which I assume is polling this weekend, comes out.

  2. Anthony, I have responded on that thread to the point you raised on the previous thread.
    I pointed out the differences between the Guardian polls and the ICM Sunday newspaper polls a month or so ago. It would appear that the day(s) of the week when the poll is conducted does have an effect on the results. This could well be more true at this time of the year, when people are rushing about doing Xmas shopping and at holiday time.
    The detailed data may give some clues as to what the reasons are.

  3. I agree with Mark that it must be the time of the year; polling taken through holiday periods would tend to produce strange results.

    A 4% increase in Labour’s vote for no apparent reason looks suspect to me.

  4. Anthony Wells:

    Given that both Yougov and ICM provide polls for multiple customers, if timing were the key factor why does this not seem to affect Yougov polls in the same way (internet factor perhaps)?

    Of course there are significant differences in the polling methods (polling medium, weighting method etc.) Could it be that this provides part of the answer? Similarly could it simply be that as ICM polls have a significantly smaller sample that they are just more prone to such volatility?

    On the regional issue I agree that the ICM method of using aggregate polls seems open to question. The comparison between periods of different length in the latest Guardian poll seems to undermine the conclusions to some extent, and by its aggregate nature it is likely to underestimate the actual current situation.

    I think a snapshot view would be better than an aggregate view.

    Given that accurate regional information would give a clearer picture of how an election might pan out, can we not still use some of the data provided by the pollsters to identify a trend?

    In particular Yougov provide a fairly good breakdown of the regional scene (small samples accepted).

    Doing the same comparison as used in the Guardian (August/December) gives the following changes and current position.

    London: Con +4% Lab -3% (Con lead by 7%)
    South: Con +13% Lab -8% (Con Lead by 30%)
    Midlands/Wales: Con +9% Lab -12% (Con Lead By 12%)
    North: Con +18% Lab -14% (Con Lead by 5%)

    Furthermore, the Yougov poll of 30th November has a sample of over 4000 and regional samples that equal or exceed the normal weighted sample for the ICM national poll. This indicated a position of:

    London: Con Led by 12% (sample of 583)
    South: Con Led by 30% (sample of 1286)
    Midlands: Con Led by 16% (sample of 888)
    North: Lab led by 8% (sample of 923)

    Is this not a valid regional snapshot (weighting issues aside)?

    Given the size of these samples and the supporting information from both Yougov and ICM, does this not give greater credence to the idea that the Conservatives now dominate in the South and Midlands and that Labour have pretty much lost their advantage in the North and London, and perhaps could be behind in both these regions?

  5. Mike – the YouGov poll would have been almost all from before Clegg. It varies of course, but normally about half the responses are the first day, then about a third the second day. Only the sorry final few would have dragged in late on Wednesday afternoon after Clegg had been declared.

    The November Guardian poll was indeed midweek, as was their first of two Guardian polls in October, so an explanation based on when in the week the poll was conducted is hard to sustain – only one of the Guardian polls we’re talking about was done over the weekend!

    As Mark and Ralph say, Xmas shopping (and people simply not thinking too much about politics) probably does do funny things to the polls at this time of year. If ComRes does poll over this weekend when most people really will be in holiday mode I think there’s potential for really quite odd results!

    Jsfl – well, as I’ve said, time of week doesn’t look like a very good explanation here, but if it were we shouldn’t expect things to work the same with internet polling as phone polling. Phone polling relies on someone being there when the phone rings, and that only happens between certain hours of the day (I can’t remember when ICM stop ringing people, but I assume they call it a day at some point during the evening!). With internet polling the email sits waiting in the respondent’s inbox until whatever time they check it, even if that is 11pm after late night shopping or a long commute.

    ICM have smaller sample sizes, but historically at least they have tended to be one of the least volatile of the pollsters, with a pretty comparable level of volatility to YouGov. Sample size isn’t everything in this game :)

  6. jsfl – The problem with the Yougov regional samples, even more so than with Yougov itself, is that they are not a representative sample of the people in the region. Yougov get responses from too many over-55 ABs living in London and the South East. Take the 4,000 survey you refer to. This had 1658 in the 55+ age group (weighted sample 1421), 2441 ABC1s (weighted sample 2161) and 583 people living in London (weighted sample 505). The sample as a whole had 36.5% Con, 25% Lab, 10% LD and 8% others, with 20.5% don’t knows/won’t votes. The typical ICM/Populus poll has figures something like 26/22/11/7 with 34% don’t knows/won’t votes.
    Clearly the Yougov panel, presumably because it has a greater proportion of politically interested people than the population as a whole, has a much higher % who say they will vote, and these extra people seem to be Conservative inclined. Weighting adjustments reduce this Conservative advantage by 2% or so, but even so you would not expect it to behave in the same way as a telephone poll, as telephone polls are random and a panel survey by definition is not.

  7. Mark Senior-
    Your post, together with those of others who are clearly very knowledgeable about opinion polls, leaves people like me (who are not) wondering whether they impart any data which is worth considering.
    Peter Hitchens wrote a piece in MoS recently highlighting the scale of the “Don’t Knows” which left me feeling similarly uncertain.

    Reference the regional figures shown in YouGov Polls-
    In respect of the Telegraph sequence 26/28 Sept-3/4 Oct-22/24 Oct-26/29 Nov, the trends shown are as follows ( Con/Lab-data in time sequence)
    London 31/44-35/40-45/36-45/33
    South 40/40-47/31-51/30-54/24
    Mid/Wls 33/44-37/43-40/40-47/31
    North 27/49-28/50-34/50-36/44

    …From which I draw the conclusions that:-
    a) Conservatives have gained 14 percentage points in “London/South/Mid-Wales” over that period and have a significant lead over Labour in all three at the end of it.
    b) Conservatives have narrowed the deficit with Labour by 14 percentage points in “North” over that period but Labour still lead at the end of it.

    In your opinion are these reasonable conclusions or not?

  8. Mark Senior:

    We had this debate not so long ago. I’m not convinced that your arguments, which are no doubt factually correct (I haven’t checked), necessarily distort the regional figures sufficiently to significantly affect the trends indicated.

    I’ll agree that small regional figures are generally unreliable but considering that ICM Guardian/Populus headline figures seem slightly out of step at the moment with all the other pollsters I certainly would not consider ICM polls any more accurate than Yougov.

    So we will have to agree to differ and wait and see what happens at the next GE.

  9. Colin, it is no surprise that you can conclude from Sept to Nov that the Conservatives have improved their position substantially against Labour in the regions you mention. They must have done, because Yougov now give the Conservatives a big lead over Labour whilst in Sept. Labour had an equally big lead, and it is reasonable to conclude that the change is pretty much evenly spread throughout the country.
    I would not though like to quote firm figures for those changes. The position in the South in Sept, given as 40/40, may have been anything from 35/45 to 45/35, and there is a similar range for the figures now.
    The trends are clear, but the exact amount of the trend cannot be quantified, and in fact you only need to compare the headline poll figures to see the trend, not the regional ones.

  10. As noted in comments on the last entry – this poll is clearly a rogue. It was done at the same time as the YouGov that gave a 12 pt lead.

    Although I don’t believe it, even if we accept the result this poll makes the WMA 41:32:16 so has an error of 4. It also makes the YouGov/Sunday Times poll show a retrospective error of 4 and that is very unlikely indeed: about a 3% chance. I wonder whether one reason that this (Telephone) poll seems to under-estimate C support by 2% and over-estimate Lab and LibDem by the same amount is that C supporters (and in particular people who have switched from L to C) are more likely to be out celebrating whereas diehard Labour and LibDems are at home?

  11. Mark-
    “The trends are clear but the exact amount of the trend cannot be quantified and in fact you only need to compare the headline poll figures to see the trend not the regional ones”

    Yes of course-but the regional trends are both interesting and important .
    I don’t understand why you allocate validity to the trend in headline figures but not to the regional figures which are its constituent parts.

  12. NBeale – I’m slightly confused. When you say ‘has an error of 4..’ what exactly do you mean? And why is it very unlikely the YouGov poll should have this error but you clearly think that ICM does? I’ve also had to assume the last comment regarding differential celebrations between different party supporters affecting the poll numbers is not serious – at least I hope it’s not being put forward as a genuine reason for major poll variations.
    Overall, I’m still unsure why everyone thinks this ICM poll is a rogue. Surely that judgement can only be made in retrospect, once more polls are in and trends become more established. On a non statistical basis it would surely be entirely expected for the Tory lead to shrink slightly after a brief period of respite from bad news stories about the government and a couple of half decent performances from Brown. Why such a clear cut opinion about differential results?
    I know these things are almost irrelevant, but it’s interesting to note that in December local by elections Tories have lost a few seats to Lab/LD – not what we might expect from 45% and a towering poll lead?

  13. Is there not a broader question here about the accuracy of opinion polling more generally, given we have two polls which came out simultaneously from respected pollsters which show very different stories? Both cannot be correct.

    My hunch is that the true figures are about 40-41% Con, 33-34% Lab and 16-17% Lib.

  14. Luke Blaxill

    Yes it does.

    Polls have far more to do with how well propaganda is working than much else.

    That’s why watching and reading what the media is saying, or more importantly quite often is NOT SAYING, is a far better guide to eventual election results. If you can stand the constant brainwashing and the resultant offence to your intellect.

    It’s the media, and the BBC in particular, that select governing parties. Surely you are old enough to understand this by now.

  15. Colin, the regional trends are important IF they show differential swings between the regions. The trouble is that the regional swings based on subsamples from the polls are not statistically accurate enough to discern any difference in regional behaviour, except of course in Scotland.
    The headline figures from Sept to end Nov show an 11% or so swing from Labour to Conservative, which will be a bit higher in the English regions. All the regional subsample swings are in range of this, but are not accurate enough to say that the swing is higher/lower in one particular region.

  16. Mark, thank you.
    Why aren’t they “statistically accurate”? What would make them so? How do you know that “South in Sept given as 40/40 may have been anything from 35/45 to 45/35”? That is a huge range – how do you arrive at it?

    In the YouGov Telegraph sequence under discussion the swing for North is not within the range of the rest, being considerably lower. Is this totally without significance in your view?

  17. Colin, simple M of E calculations based on sample sizes for the regional subsamples: 311 for London up to 711 for South. The simplified formula for the 95% confidence level for properly weighted samples is 0.98 divided by the square root of the sample size. Therefore the London figures would be +/- 6% and the South figures +/- 4%. The fact that the subsamples are individually unweighted would increase these M of E further.
    The figure for the North is, as you say, considerably lower, but there is insufficient data to say whether the difference is significant or just a sampling quirk. I would refer you to the figures I quoted from ICM Nov polls, where one particular figure for the Midlands had the LibDems at 26%, completely out of line with the Midlands figures in other ICM polls and with the general belief that the Midlands is in fact the LibDems’ weakest area, not their strongest.
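Mark’s simplified formula can be checked numerically. A minimal sketch: the z = 1.96 and worst-case p = 0.5 that produce the 0.98 constant are the standard assumptions behind such formulas, not stated explicitly in the comment.

```python
import math

def margin_of_error(n: int, z: float = 1.96, p: float = 0.5) -> float:
    """Worst-case 95% margin of error for a simple random sample of size n.

    z * sqrt(p * (1 - p) / n) with p = 0.5 reduces to roughly
    0.98 / sqrt(n), the simplified formula quoted above.
    """
    return z * math.sqrt(p * (1 - p) / n)

# The regional subsample sizes mentioned in the comment
for region, n in [("London", 311), ("South", 711)]:
    print(f"{region}: +/- {margin_of_error(n) * 100:.1f}%")
```

For those subsample sizes this gives roughly ±5.6% for London and ±3.7% for the South, which the comment rounds to ±6% and ±4%.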

  18. Having been reading the views on here for many, many months, I await the inevitable “ICM are not right”, “they always underestimate the Conservative vote” etc etc.

    As someone put it the other week on here, Mandy would be proud of some of the posters on here who only accept whichever poll has the better result for their party, which on here is predominantly Conservative.

    Spin doctors had better watch their jobs.

  19. Mark – thanks for those stats.
    I cannot understand why YouGov (or any other pollster) publish the data derived from unweighted subsamples if these have no statistical significance.
    A caveat giving the M of E for each subsample would seem to be the minimum requirement for an informed reading of the data.

  20. Colin, I would not go so far as to say they have NO statistical significance; they do have some. But the problem is that, even more so than with the headline figures, margin of error and differences in methodology are completely forgotten, and each poll is treated as a statement of the true current position or as a rogue depending on whether it agrees with each of our preconceived ideas of how the parties should be standing.
    I often quote the case of German opinion polls, where one pollster, Forsa, always gives an SPD rating 2-3% lower than all the other German pollsters. It could be that they are correct and all the other pollsters are wrong, though the reverse is statistically more likely, and of course the last German GE was significant for the fact that all the pollsters were wrong.

  21. At the time of a poll you can only calculate the “error” wrt the Weighted Moving Average. This means that, if there is a moving trend in the polls, a poll could be spot-on and still show a WMA error. After 2 more polls have been published, I can calculate a “retrospective” error based on the average of the 5 polls with that one in the middle, and this will generally give a more robust estimate.
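The “retrospective” error described here can be sketched as follows. The unweighted centred mean of five polls is an assumption, since the comment does not specify the exact weights used in the Weighted Moving Average, and the lead figures below are hypothetical.

```python
def retrospective_error(leads, i, window=5):
    """Error of poll i's lead versus the average of `window` polls
    with poll i in the middle, as the comment describes.

    Returns None near the ends of the series, where no centred
    window fits yet.
    """
    half = window // 2
    if i < half or i + half >= len(leads):
        return None
    centred = leads[i - half : i + half + 1]
    return leads[i] - sum(centred) / window

# Hypothetical sequence of Conservative leads from successive polls
leads = [10, 12, 6, 11, 9]
print(retrospective_error(leads, 2))  # the 6-point poll sits 3.6 below the centred average
```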

    Yougov is noticeably more accurate than the other pollsters, so it is a lot less likely that they should be out by 4pts than ICM, who recently have had polls all over the place.

  22. Nick, Yougov is not a normal random opinion poll; it is a poll taken from a self-selected group of people, relatively small compared to the population at large, with a much higher interest in politics than the average person (it includes myself, for example). Leaving aside the wisdom of projecting national opinion from an unrepresentative sample, you would expect a poll from a small sample to vary less than a random national poll.
    I have been polled 3/4 times by Yougov in 3 or so years but never by a telephone pollster, and once many, many years ago by a face-to-face pollster, I think Gallup.

  23. I’m not talking about variations within a single polling organisation, but variations from the underlying averages of all polls. Yougov is substantially more accurate on this basis.

  24. Nick, I understand this, but including Yougov in your average of all polls is not necessarily a sensible thing to do, as they are a completely different type of poll from the other pollsters.
    If you were compiling a moving average of the prices of 5 deciduous fruits, 4 of which were apples and 1 a pear, the price of the pear may remain stable and not vary as much as 1 or 2 of the varieties of apples, but it will tell you nothing about the change in price of apples or deciduous fruit as a whole.

  25. I should interject just to encourage people to remember the difference between how volatile a poll is, and how accurate they are. A poll can be very, very consistent and still entirely wrong, so it’s probably best not to use the word accurate to describe a polling company that produces polls that have very low volatility. They are two separate things.

    Incidentally, taking a sample from a smaller universe doesn’t necessarily make much difference to volatility once that universe is beyond a certain size. For example, phone pollsters theoretically take their samples from a universe of everyone with a phone (though in reality they take them from the considerably smaller universe of people with a phone who are willing to spend 15 minutes answering a stranger’s questions), so about 40 million or so adults. A sample of 1000 gives you a margin of error of 3.1%.

    Draw a random sample of 1000 people from a panel of 200,000 and you’d expect a lower margin of error, right? Not really: the margin of error is 3.09%. Counter-intuitive it may be, but once the universe you are sampling gets beyond a certain size it’s pretty much irrelevant.
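The two margins of error quoted can be reproduced with the finite population correction. A sketch, assuming the usual z = 1.96 and worst-case p = 0.5 (neither is stated in the comment itself):

```python
import math

def moe_finite(n: int, population: int, z: float = 1.96, p: float = 0.5) -> float:
    """95% margin of error for a sample of n drawn without replacement
    from a finite population, using the finite population correction."""
    fpc = math.sqrt((population - n) / (population - 1))
    return z * math.sqrt(p * (1 - p) / n) * fpc

# The two universes from the comment: ~40 million phone-owning adults
# versus a 200,000-strong panel, both sampled at n = 1000.
print(f"{moe_finite(1000, 40_000_000):.4f}")  # about 0.0310, i.e. 3.1%
print(f"{moe_finite(1000, 200_000):.4f}")     # about 0.0309, i.e. 3.09%
```

The correction factor only bites when the sample is a sizeable fraction of the universe, which is why the two results are nearly identical.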

    In practice of course Ipsos MORI and YouGov don’t use random sampling, so the actual calculation for margin of error doesn’t apply, and when using quota samples panel surveys can produce much lower levels of volatility because of the amount of demographic information that is readily available on panelists to construct samples.

  26. Well I’ve now re-analysed the data from the other polling companies eliminating YouGov entirely. It makes essentially no difference: the retrospective errors are: ICM:0.4, IMori:-1.4, Populus:-0.4, BPIX:1.5, ComRes:0.7 vs 0.5, -1.6,-0.2,1.0,0.9 and the retrospective StDevs are 2.5, 2.6, 2.4, 2.2, 3.4 vs 2.5, 2.7, 2.6, 2.0, 3.4. Statistically therefore the conclusion is clear: even though YouGov has a different methodology they appear to be sampling from the same underlying population: they are just appreciably more accurate.

    PS Anthony: shall we cross-link blogs?