The tables for MORI’s monthly poll are now up on their website so we can dig around inside them and look at the maths. Obviously with such a surprising shift in support, the thing I looked at first was Liberal Democrat support. What actually caused that jump in their figure?

The sample itself wasn’t massively more Liberal Democrat – last month 9% of the sample said they had voted Lib Dem in 2005, this month 10% said they had. The raw proportion of people saying they were voting Lib Dem was up from 17% to 20%, but again, that’s a lot less than 8 points! A major factor seems to be the filtering by likelihood to vote.

I have written a long article on the site here looking in detail at how pollsters deal with likelihood to vote. The simplified version, with a rough code sketch of the differences after the list, is as follows…

YouGov ignore it,
Populus – weight by it, so someone who says they are 9/10 likely to vote is worth 90% of someone who says they are 10/10 likely to vote (and so on),
ComRes – do similar, but entirely exclude those who are less than 5/10 likely,
Ipsos MORI – filter by it, so someone who says they are 10/10 likely to vote is counted, and someone who says they are 9/10 likely to vote (or lower) is excluded,
ICM – also filter by it, but less strictly, taking those who rate their chances at 7/10 or higher.
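
To make that concrete, here is a minimal sketch in Python of how each approach would turn a batch of responses into a headline share. It is purely illustrative; the real companies apply demographic (and in some cases political) weights on top of this, and the ComRes method is simplified here.

    # Illustrative only: how the five likelihood-to-vote treatments described
    # above would be applied to a list of (party, likelihood 1-10) responses.
    from collections import defaultdict

    def topline(respondents, method):
        weights = defaultdict(float)
        for party, likelihood in respondents:
            if method == "yougov":       # likelihood ignored entirely
                w = 1.0
            elif method == "populus":    # weight in proportion to likelihood
                w = likelihood / 10.0
            elif method == "comres":     # as Populus, but drop anyone under 5/10
                w = likelihood / 10.0 if likelihood >= 5 else 0.0
            elif method == "mori":       # hard filter: 10/10s only
                w = 1.0 if likelihood == 10 else 0.0
            elif method == "icm":        # looser filter: 7/10 or above
                w = 1.0 if likelihood >= 7 else 0.0
            else:
                raise ValueError(method)
            weights[party] += w
        total = sum(weights.values())
        return {party: round(100 * w / total, 1) for party, w in weights.items()}

    # Tiny made-up sample, just to show the different toplines each method gives.
    sample = [("Con", 10), ("Lab", 8), ("LD", 9), ("Con", 7), ("LD", 10)]
    print(topline(sample, "mori"), topline(sample, "populus"))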

In last month’s MORI poll, of all the people who said they would vote Liberal Democrat, only 47% of people said they were 10/10 certain to vote. In this month’s MORI poll 69% of Liberal Democrats said they were 10/10 certain to vote, so a much larger proportion were included in the topline voting intention, contributing to the massive increase in Lib Dem support.

Interestingly though it wasn’t a massive shift in the likelihood of Lib Dem supporters to vote. Last month 81% of Lib Dem supporters said they were 7/10 likely to vote or above. This month 85% of Lib Dem supporters said they were 7/10 likely to vote or above. What actually happened is that lots of Lib Dem supporters who had said they were very likely to vote, rating their chances at 7 to 9 out of 10, moved to saying they were certain to vote, and because that tipped them over the 10/10 mark it moved them from being excluded from the poll to being included. It’s the result of having a straight cut-off, rather than a sliding scale of the sort ComRes and Populus use.
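
As a rough numerical illustration of that cliff-edge effect: the 47%/69% and 81%/85% figures below come from the tables quoted above, while the rest of the distribution is invented purely to make the sums work.

    # Toy certainty distributions for 100 Lib Dem supporters: 10/10s go from
    # 47 to 69 and 7-or-above from 81 to 85, matching the figures quoted above;
    # everything else is made up for illustration.
    before = {10: 47, 9: 14, 8: 12, 7: 8, 6: 5, 5: 4, 4: 3, 3: 2, 2: 2, 1: 3}
    after  = {10: 69, 9: 6,  8: 6,  7: 4, 6: 4, 5: 3, 4: 2, 3: 2, 2: 2, 1: 2}

    def counted_under_filter(dist):          # MORI-style: 10/10s only
        return dist[10]

    def counted_under_weighting(dist):       # Populus-style: likelihood / 10
        return sum(n * score / 10.0 for score, n in dist.items())

    for label, dist in (("before", before), ("after", after)):
        print(label,
              "filtered:", counted_under_filter(dist),
              "weighted:", round(counted_under_weighting(dist), 1))
    # The filtered count jumps from 47 to 69 (up roughly 47%), while the
    # weighted count only moves from 82.3 to 87.9 (up roughly 7%), so the same
    # underlying shift looks far larger under a hard cut-off.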


68 Responses to “The reason behind that 8 point Lib Dem jump”

  1. What is your interpretation of this, Anthony? There doesn’t seem to have been a specific event which would shore up the Lib Dem vote.

  2. Daft methodology. Properly daft. Surely weighting is the right approach?

  3. It seems to me that both the YouGov and Ipsos MORI approaches are over-simplistic and less likely to be accurate as a result.

    I think that a big feature of the GE will be the falling away of Labour’s support from the previous GE by something like 30% or more. Whether these people simply stay at home or switch will make a big difference to the result.

  4. Interestingly, if the same poll had used YouGov’s methodology, the Tory lead would only be 6%!

  5. Although I don’t support the Liberals, I do think it is time that a party started to challenge Tory and Labour dominance. Instead of having a choice of 2 parties to lead us, we should have a choice of 4. (I include the Greens in the main 4.)

  6. Anthony’s analysis only appears to deal with the Lib Dem figure. What were the unadjusted MORI figures for all three parties, disregarding likelihood to vote?

  7. I have always said that the more accurate figures to take from a MORI poll are those they publish in their tables for those 6-10 likely to vote. This month that would give Con 39 Lab 31 LD 20. We can make a good estimate of how these figures would be if ICM past vote weighting was used, as the detailed data is available in the tables. The figures would be Con 38 Lab 30 LD 22/23, which looks quite reasonable and pretty comparable to this year’s local election results.

  8. I think at the moment the polls are too unstable to get any real picture.

  9. Anton – no it wouldn’t, since if it had used YouGov’s methodology it would also have been politically weighted, have asked different questions, and been carried out online. You can’t take one part of the methodology in isolation :)

  10. To me, the most striking feature of the polls since the turn of the year is – excluding the highest (20% lead) and lowest (7% lead) – that virtually every poll has shown a Tory lead in the 10-14% range with almost no movement. It may be that day-to-day news is not fundamentally affecting voting intentions and that these are becoming set in stone while the overall environment – recession with little by way of early signs of a recovery – remains in place.

  11. If the LibDem vote is firming up, which is what seems to be happening, this could be the result of ex-Labour voters changing from toying with the idea of voting LibDem to being pretty certain that they will. This would presumably be because they definitely want a change from this government, but can’t bring themselves to vote Tory.
    As always, one poll doesn’t make a trend, but it will be very interesting to see how post-budget polls go. I wouldn’t be surprised to see the LibDem vote holding firm, or even rising higher still. The Tories seem unable to get much higher than where they are now, and the budget will give Vince Cable some publicity, and he’s the LibDem ace in terms of popular support.

  12. This set of numbers is odd :-

    Full time working:- 38/24/27 (Con/Lab/LD)
    Not full time / not working:- 42/30/18

    i.e. the more unemployment rises, the better the support for Labour?… Or is it only people in work who indulge themselves with a third party vote, while those out of work concentrate on the two main parties and the Lib Dems get squeezed, i.e. the more unemployment rises, the lower the support for the Lib Dems?

    and:-

    Public Sector:- 24/34/29
    Rest:- 43/27/22

    I thought we had some Poll figures a while ago which refuted the Public Sector=Labour voters equation?

  13. Instead of elections, we should put all the party leaders in the big brother house, and then see who wins, he/she then becomes prime minister.
    The Queen could act as Big Brother, and obviously Davina would host.

    Or maybe host a Eurovision sort of event, where each party has 3 minutes to present their manifesto, and then the parties all vote for the other parties, awarding 1-8, 10 and 12 points to their favourite top 10, with the favourite receiving the 12 and so on.

    The party with the most points wins, and gets to be the party in power.

  14. MORI would argue that their method of only including people 10/10 certain to vote has produced accurate figures so far so there’s no reason to change it. For example they helped to produce, with NOP, a 100% accurate exit poll for the 2005 election in terms of the overall majority for Labour.

  15. “Ukip, quietly confident of a June Election win in Europe, complained to whoever deals with copyright, that Libertas was stealing their ideas, and that there was room for only one Eurosceptic pressure group, OOPS, I mean party, in the UK.”

    Quietly confident? I would assume that they are anything but that. The party is in its dying days.

  16. Possibly not the best example to choose Andy – exit polls by definition don’t need to factor in likelihood to vote ;)

  17. Well whichever example I choose next time will be better than the one I chose in my previous comment. :)

  18. Do MORI’s polls fluctuate more than those of the other pollsters because of their black and white methodology (where you get lots for 10/10 and zero for 9/10)? I think they do, but some kind of protracted standard-deviation analysis would have to be done to confirm it.
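
    A minimal sketch of the sort of check being suggested here (the figures below are placeholders, not real poll series):

        # Compare how "spiky" each pollster's series is by taking the standard
        # deviation of month-to-month changes in a party's share.
        from statistics import stdev

        series = {
            "MORI": [40, 48, 42, 43, 39, 45],   # hypothetical Con shares
            "ICM":  [42, 43, 41, 42, 44, 42],   # hypothetical Con shares
        }

        for pollster, shares in series.items():
            changes = [b - a for a, b in zip(shares, shares[1:])]
            print(pollster, "sd of month-to-month changes:", round(stdev(changes), 1))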

  19. I wonder how on earth people calculate that they are 9/10 certain to vote. Are these people who can’t bring themselves to commit due to niggling fears of being hit by a bus on the way to the polling station?

  20. Anthony, I don’t think you’ve got the approach to turnout weighting of other polling companies quite right. Populus does, as you say, weight in direct proportion to a respondent’s declared likelihood to vote (so that, for example, someone saying they are 4 out of 10 on the likelihood to vote scale has their voting intention valued at 0.4). But I believe ICM include in their calculation of party support only those respondents putting themselves at 7 or above on the 10 point scale. And I think ComRes do something more complicated (involving a squeeze question for those saying they aren’t very likely to vote).

    I also think Andy Stidwell is not correct in saying that NOP only included those saying they were absolutely certain to vote, in their final election poll in 2005, which as he notes got the result exactly right.

    The reason that Ipsos-MORI include only those saying they are absolutely certain to vote is that that group has an inherent Tory bias (because those saying they are 10 out of 10 certain to vote are disproportionately older, and older voters are disproportionately Tory-leaning). They use this pro-Tory skew to try and cancel out the pro-Labour skew that generally results from sample bias. It is a substitute for a political weighting, which YouGov, Populus, ICM and ComRes (and NOP, at the last election) all use to ensure their samples are politically representative, stable and comparable.

    There are several flaws in the Ipsos-MORI approach. Firstly, it implies a turnout of about 35% (when don’t knows are excluded, that is roughly the proportion of the original sample which ends up in Ipsos-MORI’s vote intention calculation; a back-of-envelope version of this sum is sketched at the end of this comment). Secondly, we know for an absolute fact (from the British Election Survey, which tracks the same panel of voters throughout a Parliament, and from post-election call-back polls) that some of those who say they are certain to vote will end up not doing so – and a very large number of people who say they are not certain to vote (even the day before an election) will end up doing so. Thirdly, sample bias is not stable and nor is likelihood to vote – so Ipsos-MORI is using one volatile random skew to try and cancel out another. Sometimes this seems to work, but often it doesn’t – which is why Ipsos-MORI polls tend to produce much spikier numbers than anyone else.

    There is a lot of evidence that most voters are very bad predictors of their own likelihood to vote (and as with a lot of 10 point scale questions, responses tend to cluster – so most people always say ‘10 out of 10’, then there is a bit of a cluster around 8, 5 and 1). In the USA many polling organisations ask several other questions to try and model ‘likely voters’ in their samples, e.g. ‘are you someone who always votes, often votes, occasionally votes or rarely votes?’, ‘how important to you are the issues at stake in this election?’. The main reason that this is not done (or at least experimented with) in published polls in this country is cost.

    Furthermore small differences in wording of the question can have an effect: we recently did some analysis that established that if the top of the scale is defined as meaning ‘you will definitely vote’ Lib Dem voters were slightly more likely to give this answer than they were if 10 on the scale was defined as ‘you are absolutely certain to vote’.
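
    A back-of-envelope version of the ‘implied turnout’ point above, with made-up numbers (the real proportions vary from poll to poll):

        sample_size        = 1000   # hypothetical sample
        ten_out_of_ten     = 430    # say they are 10/10 certain to vote (hypothetical)
        certain_and_naming = 350    # of those, also name a party (hypothetical)

        implied_turnout = certain_and_naming / sample_size
        print(f"implied turnout: {implied_turnout:.0%}")   # roughly 35%, as above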

  21. Leslie makes a good point. The Conservative lead is so static that there’s no reason to believe they won’t poll a lead of between 10% and 14% at the general election. Of course the difference between 10% and 14% is the difference between a small majority and a landslide, so it’s quite an important difference, but as far as who will win the election goes, it actually seems to me the public have more or less made up their mind.

    FWIW I would expect a result of 42/30/20, so a 12% Tory lead giving them a majority of 30-50 seats.

  22. Very interesting post Andrew. :)

    FYI, MORI have always been amongst my least favourite pollsters, precisely because they tend to have some very extreme results. ICM is my favourite pollster followed closely, I might add, by Populus. :)

  23. Depends on the campaign itself. The Liberal Democrats have gone up by 6% in a 4 week campaign before due to increased exposure. Why wouldn’t the same apply at the next election, particularly if people aren’t convinced by the Tory and Labour party proposals? Up until now both parties have been slagging each other off, and for the Tories in particular their policies have not come under much scrutiny.
    When they do during an election campaign floating voters may decide they do not like them and vote Liberal Democrat instead.

  24. Hi Andrew,

    ComRes is indeed more complicated – it’s the grossly simplified version above! ComRes do the same likelihood-to-vote weighting as you for people who rate their chance of voting at 5/10 or above, but additionally exclude entirely anyone who says their chance of voting is under 5.

    ICM I’m not sure about – I think they switched at some point, but the blurb on their tables still refers to weighting by likelihood, rather than filtering by likelihood. Looking at the actual figures in the last Guardian poll though, they do correspond with taking 7/10 and above. I’ll double check with Nick Sparrow.

    I think Andy was referring to the joint MORI/NOP exit poll, not their eve of election poll. I can’t recall off the top of my head what approach NOP took at the last election, though clearly they were doing something right :)

    (Since you’ve mentioned it, the BES didn’t just call back voters after the election to see if they voted. It also looked them up on the marked register to see if they genuinely voted. Here is the comparison between how certain people told the BES they were to vote, and whether the marked register showed they actually did…

  25. The theory that the Lib Dems always – or generally – benefit from a significant election campaign bounce, which Richard Whelan’s post refers to, is not really true any longer.

    It largely dates from the era when voting intention polls were unprompted, i.e. did not list the parties. Since all pollsters started using a prompted question, and using political weights to make sure the sample is properly representative, the polls have become much more accurate.

    In 2005 the Lib Dem poll rating was 21% in the run-up to the election campaign and they got 22.6%.

  26. I’ve heard back from Nick, ICM are indeed filtering by likelihood to vote and taking those who say they are 7/10 or more likely to vote.

    Nick also tells me ICM have also tried out their data using the Populus and ComRes approaches to likelihood to vote, and they found it didn’t make any significant difference: all three methods had much the same effect.

  27. Why do ComRes always seem quite volatile then, Anthony? After MORI I would have ComRes as the second most volatile pollster.

  28. This has been a staggeringly good and sophisticated thread; the science and the mathematics are amazing and the expertise of the contributors undeniable.

    But, if the next election is close, then I think pollsters are just as likely to get the result wrong as they have been before.

    Just a normal human being speaking!

  29. Nothing to do with how they deal with likelihood to vote. I think it may be their political weighting.

    ComRes, ICM and Populus all calculate their weighting targets for past vote based upon the actual 2005 vote and the raw recalled past vote in their recent surveys.

    If I recall correctly, Populus weight to a point 50% of the way between the actual 2005 vote and the average recalled vote in their last 10 polls. ICM weight to a point 80% of the way between the actual 2005 vote and the average recalled vote in their last 20 polls (a rough sketch of the arithmetic is at the end of this comment). However, I have never been able to confirm how ComRes do their calculation.

    It’s important because ICM and Populus’s past vote weighting targets are very, very steady and change only very slowly over time. In contrast ComRes’s past vote weighting targets seem to change quite a lot from month to month.
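
    For anyone who wants to see the arithmetic, a rough sketch of the target calculation as described above (the recalled figures are invented and the 2005 shares are only approximate):

        # Move a fixed fraction of the way from the actual 2005 result towards
        # the average recalled past vote: 0.5 for Populus, 0.8 for ICM, per above.
        actual_2005  = {"Con": 33, "Lab": 36, "LD": 23}    # approximate GB shares
        avg_recalled = {"Con": 31, "Lab": 39, "LD": 21}    # hypothetical recall average

        def weighting_target(actual, recalled, fraction):
            return {p: actual[p] + fraction * (recalled[p] - actual[p]) for p in actual}

        print("Populus-style:", weighting_target(actual_2005, avg_recalled, 0.5))
        print("ICM-style:    ", weighting_target(actual_2005, avg_recalled, 0.8))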

  30. Thanks Anthony. :) Very interesting.

    I wonder why ComRes won’t confirm to you and Mike Smithson how they do their calculation?

    Agree with Clive, BTW, a really good and informative thread this one. :)

  31. As luck would have it, I was just compiling a few responses to this post when Andrew’s post popped up. I think I’ve addressed his points in the comments below, but I am happy to answer further queries if anyone would like.

    In response to James (21st, 4.50pm):
    I can assure you that Ipsos MORI does of course weight our data, as well as apply the filter Anthony so accurately describes above. We weight by gender, age, class, working status and sector, region and cars in household. However, as I’ve posted occasionally on this site and on politicalbetting.com, Ipsos MORI chooses not to weight on the basis of reported past vote because – quite frankly – we are not convinced that there is an accurate way to do it. We know from experimentation that many respondents report their past vote inaccurately (either because they don’t remember how they voted, or whether they voted, or whether they voted tactically), and we also know it is impossible to predict either how inaccurate they are or how that inaccuracy may change month-on-month. We prefer not to make a ‘guess’ or assumptions about the answers a representative sample would give to the ‘past vote’ question, and to then use that guess as a weighting target! (Not least because it would be very difficult to explain how we arrived at that figure.)
    We maintain that a representative sample of the public (and of voters), with an appropriate filter to distinguish between those who will and won’t vote, is the best way to approach political polling. However, I should note that we work continually to review our approach and methodology to ensure that we are as accurate as possible — an example is our introduction of the public vs. private sector worker weight last spring (full description here http://www.ipsos-mori.com/content/ipsos-mori-june-2008-methodology-review.ashx).

    In response to Colin (21st, 8.40pm):
    The full time/not-full time crossbreaks cannot be analysed in relation to changes in unemployment. We include this measure (and crossbreak) because it is an important quota and weight for us. However, it is likely that the figures you have quoted for this crossbreak are correlated with other factors such as age and social grade; for example, individuals of a higher social grade are also more likely to work full time (and ABC1s are more likely than C2DEs to support the LibDems, as you’ll see from the ‘social class’ crossbreak). In addition, older individuals are more likely than younger people to vote Conservative – and are also more likely to be retired (i.e. not working). But my main point is simply that changes in the unemployment rate in Britain are simply too small (in terms of the total population) for this phenomenon to be measured in a national opinion poll of c. 1,000 individuals.
    As for your second point — the tables do show that public sector workers are more likely than private sector workers to support Labour. You will notice that 34% of public sector workers (who are ‘certain to vote’) have chosen Labour, compared to 27% of non-public sector workers.
    However, the final point to keep in mind here is that the base sizes for all of these column breaks are much smaller than the total base size, and so of course the margins of error are much larger than for the sample overall, making it more difficult to draw conclusions about month-on-month changes, or even differences between the relevant sub-groups.
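
    For a rough sense of scale, the classic margin of error at 95% confidence for a proportion p on a base of n, assuming a simple random sample (real polls also carry design effects on top), is 1.96 × sqrt(p(1-p)/n); the base sizes below are hypothetical:

        from math import sqrt

        def margin_of_error(p, n):
            # 95% margin of error for a proportion under simple random sampling.
            return 1.96 * sqrt(p * (1 - p) / n)

        print(f"n=1000, p=0.30: +/-{margin_of_error(0.30, 1000):.1%}")   # about +/-2.8 points
        print(f"n=150,  p=0.30: +/-{margin_of_error(0.30, 150):.1%}")    # about +/-7.3 points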

    Luke Blaxill (22nd, 1.34am):
    Ipsos MORI’s polls actually do not fluctuate more than other pollsters, and in fact the polls are all very well in line at the moment, with very few exceptions. This is because it is crucial to look at the share, rather than the lead, when analysing these trends — see Bob Worcester’s explanation here http://www.ipsos-mori.com/content/the-polls-are-all-over-the-place.ashx (out of date now, but the conclusions still hold true as we keep the spreadsheet updated — happy to share). However, it is also true that the application of a political weight (ie. a weight for what we assume or guess to be the political profile of the population) is likely to dampen down natural volatility within the electorate. I would argue (and Anthony and Andrew may disagree! :) ) that it is important that polls reflect this volatility in the electorate, as we are in uncertain political times and it seems logical that public voting intentions would experience some fluctuation.

    Finally, just two additional points on the ‘certain to vote’ filter and our sampling. We have done a lot of experimenting around using the 7/10s, 8/10s, 9/10s etc (including using Populus’ approach of weighting scaled according to each likelihood factor) and have found the ‘10/10’ filter to be the most accurate. And finally, in terms of other weights, modelling and second-guessing, there is far less difference between our raw sample profile and our weighted one than for most other companies – that is, we get our sample profiles correct in terms of the overall population profile because we spend time ensuring we meet our quotas of some of the harder-to-reach groups such as young 18-24 year old males, etc.

    Cheers,

    Julia Clark
    Head of Political Research, Ipsos MORI

  32. Hi Julia (I do have an esteemed comment thread today!)

    Your last paragraph has reminded me of something I was pondering a couple of weeks back and meant to look at – the different ways companies decide who in a household to interview when the phone is picked up. I think way back in the 1990s some attention was paid to it (I vaguely recall a paper, probably by Nick Sparrow, that looked in great detail at the differences between how ICM and Gallup conducted phone conversations, how they introduced themselves, how they picked the interviewee and so on) and in the past I’m aware of things like the person with the next birthday being chosen, but I have no idea how any of the companies conducting phone polls currently do it.

    How do you combine quota sampling with the switch to RDD phone sampling? Is it a case of, for example, asking “Is there anyone male and under the age of 24 in your household?”

  33. Hi Anthony

    Regarding RDD, we use this on both quota projects and also for random probability approaches – RDD just defines how the numbers are selected for dialling as opposed to how we select within the household.

    For our Political Monitor and also the majority of our other types of polls we use a quota approach in terms of selection: we simply set the quotas as identical to our weights (the quotas have a flexibility of about +/-10%). As for the actual execution, we do have an initial extra quota filter at the start of the survey, asking specifically to speak to any members of the household between 18 and 24 (as that is the most difficult quota to fill) — but if the answer is ‘no’ we simply proceed with meeting the quotas as normal (and screening out people from those quotas we have already filled). We do often spend a good deal of time screening for those harder-to-reach groups like young men.

    We do occasionally adopt a form of random probability sampling selection on some of our work when the situation or client calls for it. We don’t currently utilise it for the Political Monitor – although it is certainly something we are experimenting with.

    I should also note that our quota approach means that our raw unweighted data is much closer to the actual population profile than would ‘naturally’ fall out (if we didn’t use quotas and just used e.g. a ‘next birthday’ approach) — and this reduces the design effect. If you look at some of the raw unweighted demographics from other companies you will see some quite distinct differences between the raw unweighted figures in comparison to the weighted ones – and this effect is minimised in our approach.

    Cheers,

    Julia

  34. Julia Clark is absolutely right to say that many respondents report their past vote inaccurately – and this makes defining a past vote weighting factor complicated. Unfortunately, however, the evidence of the last 3 general elections (i.e. since the concept of past vote weighting was introduced – by ICM) is that those who weight by past vote achieve consistently more accurate results than those who don’t, at election time, and consistently less volatile results than those who don’t, between elections.

    Past vote recall can be quite spiky from one sample to another, but is remarkably stable over time – i.e. if you put a trend line through the recalled past vote data it is virtually flat.

    This suggests that the actual pattern of recall – or misreporting – of past vote doesn’t change much during the Parliament and that the spikiness between the recalled past vote of different poll samples is largely a function of sample error. Having an abnormally large number of people who, for example, say they voted Labour at the last election is, in other words, a sign that you have a sample with a Labour bias in the fieldwork, not a sign that there has been an abrupt shift in the pattern by which voters misremember or misstate how they voted. And because recall is quite spiky from one sample to another, unless it is weighted it is literally impossible to tell how much of any movement in topline support is actual switching and how much is simply a read-through from the sample error.

    Putting the recall of our latest sample into a series of other recent polls and calculating the past vote weighting factor from the average – as ICM and Populus both do – ensures that we pick up over time any recurring shift in the pattern of recall, but that we also iron out spikes in recall that are just a result of sample skew.

  35. What a disastrous budget! Surely this is the last nail in Labour’s coffin. Government to fall by July, maybe August.

  36. Neil, as they have a very healthy majority I can’t see why the government would fall in the summer. We’re going all the way to 2010.

  37. Re the apparent Lib Dem surge in support, I may be imagining this, but it seems to me that third party support goes up when the economy is in trouble – the 1974 and 1983 general elections, for example. Perhaps it is because people are hoping for some fresh ideas that otherwise would not be considered.

    Either that or it’s a blip.

  38. Growth of -3%, 1.25% then 3.5%. I think we can all agree those are optimistic forecasts and even those result in nearly £500bn of borrowing over 3 years.

    Poll-wise, now we play the waiting game.

  39. 3.5% growth in 11/12 is laughable. And that’s what he bases his borrowing needs on. Terrifying.

  40. M,

    Yes, the “growth” forecast of 1.25% for 2010 is much too optimistic, and 3.5% in 2011 is hardly credible either – but it would have been totally incredible if the 2010 figure had been nearer the 0.3% consensus of independent forecasters.

    But, Darling needs those growth figures to show revenues rising and spending falling for 2010 and 2011 – otherwise even the £600bn deficit over four years is way too optimistic and we could be looking at £200bn in fiscal 2010/11.

    Just to put these truly terrifying figures in context, we are talking about a budget deficit of nearly £3,000 p.a. for every man, woman and child in the UK for the next couple of years.

    Truly a case of Mum’s eyes, Dad’s nose, Gordon’s debt.

  41. Some very interesting posts from the pollsters themselves on this thread. Shame that ComRes have not responded. Mike S on pb.com has commented that the ComRes past vote weightings seem to vary wildly from one month to the next, contrary to both Populus and ICM, and I seem to recall one month when the changes in the headline figures were caused by this rather than by any significant actual change in voting intentions in the sample.

  42. The budget seemed a bit of a damp squib really, apart from the confessions of idiocy. It didn’t look like a budget that will magically reverse Labour’s fortunes but perhaps I’m missing something.

  43. I would expect that budget to prompt a DROP in the polls for Labour. But maybe I am missing something.

  44. Gin, I wouldn’t exactly describe Labour’s majority as healthy. Big, but not very healthy. A fall of the government in the summer is probably unlikely though, as a VONC would be too big a risk to take unless the government had some major problem between now and then.

    If there are more rumblings about the Labour leadership after June 4th (I still believe that June 4th will be a more defining point than today) then things could get very interesting.

  45. I wonder whether it’s possible to conceive of those voters in the 25% to 30% bracket of Labour support (assuming they are on at least 30% in a particular poll) as an identifiable group of voters for whom it is possible to say to whom they are most likely to defect if they stop supporting Labour. My idea was that maybe they’re more likely to move to the LDs than the Tories compared to those voters Labour attracts when it is polling between 30% and 36%. This would mean if Labour does fall below 30% the Tories would not be able to expect to pick up as large a percentage of their votes as they might hope for. Some of the recent polls seemed to suggest this, with the LDs picking up some support when Labour falls below 30%. Or maybe it isn’t possible to think of voters in this way.

  46. “The budget seemed a bit of a damp squib really”

    I disagree.

    It was a bombshell – a defining moment for both New Labour & the Conservatives.

    It will shape the battle ground for the GE campaign.

  47. @Neil

    I disagree. I think a VONC is a perfectly reasonable thing to go for. It would show that Cameron is seriously concerned for the country, rather than just happy to take over power when Labour is forced to call an election.

    He probably wouldn’t win it, but at least it would show him to be decisive and might shake off that worry people have about him that he is just an empty suit.

  48. Mark M

    I agree – a VONC may well be forthcoming in the next few weeks (if not days). Cameron intimated as much in his response to the budget when he suggested it was time for Labour to go. (I even heard a mention somewhere that Cameron would welcome a general election on 4th June).

    The tricky part is how to put a VONC into a context where it has maximum impact – and it looks as though Brown has accidentally played into the opposition hands on an issue which has nothing to do with the budget.

    Timing is important here. Any VONC needs to be tabled before the Finance Bill is presented, otherwise the Government will be entitled to say: “the House just passed the budget – what better statement of confidence can there be?”

    But, Brown has created a superb opportunity with his cack-handed attempt to play politics on MP expenses. The opposition are united against Brown’s proposals. Now they need to put an “anti-sleaze” pitch to the more principled Labour back-benchers and Ministers. If Brown loses the vote on MP expenses, then a VONC is a logical next step – and he might actually lose it. Then we can have a 4 June general election.

  49. Gin,

    As Neil says, they have a large majority, but it is not healthy.

    Despite its paper majority, the government is in deep trouble and the next few weeks are not going to be pleasant. Even if they would probably survive a confidence motion in next couple of weeks, the aftermath of the Euro Elections could well justify another.

    I don’t seriously expect many backbenchers to defect to other parties – or even vote against Brown – but there will be a lot of soul-searching going on, and I expect quite a few abstentions.

    If Brown survives a first VONC, but with a majority below 50, then expect another VONC in June. If at each event his majority falls further, expect repeat votes until he actually falls.

    As John Major can testify, there is nothing more debilitating for a government than to face repeated challenges to its authority. With so many unhappy members on his back benches, Brown has a bigger discipline problem than Major did.

  50. @ Colin – I’m curious – what did you think was “bombshell” about it?
