Last week, while sharing despair at Twitter throwing itself into another frenzy over a crossbreak of fewer than fifty respondents, Sunder Katwala suggested to me that it might be a good idea to put together a post summarising the things not to do when writing about polls. I thought that was a good idea. This probably isn’t the sort of post Sunder was thinking of – I expect he envisaged something shorter – but nevertheless, here’s how NOT to report opinion polls.

1) Don’t report Voodoo polls

For a poll to be useful it needs to be representative. 1,000 people represent only themselves; we can only assume their views represent the whole of Britain if the poll is sampled and weighted in a way that reflects the whole of Britain (or whatever other country you are polling). At a crude level, the poll needs to have the right proportions of people in terms of gender, age, social class, region and so on.

Legitimate polls are conducted in two main ways: random sampling and quota sampling (where the pollster designs a sample and then recruits respondents to fill it, getting the correct number of Northern working-class women, Midlands pensioners, etc). In practice true random sampling is impossible, so most pollsters’ methods are a bit of a mixture of the two.
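As a rough sketch of what the weighting step looks like in practice, here is a minimal Python example of weighting one demographic back to a population target; the sample counts and population shares are invented purely for illustration:

```python
# Minimal sketch of demographic weighting (post-stratification).
# The sample counts and population shares below are invented.

sample_counts = {"male": 550, "female": 450}       # achieved sample
population_share = {"male": 0.49, "female": 0.51}  # target proportions

n = sum(sample_counts.values())

# Each respondent's weight = target share / achieved share of their group
weights = {
    group: population_share[group] / (count / n)
    for group, count in sample_counts.items()
}

# After weighting, each group's effective share matches the target
for group, count in sample_counts.items():
    print(group, round(weights[group], 3), round(count * weights[group] / n, 2))
```

Real pollsters weight on several dimensions at once (age, class, region and so on), often iteratively, but the principle is the same: respondents from under-represented groups count for slightly more than one, and those from over-represented groups for slightly less.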

Open-access polls (pejoratively called “voodoo polls”) are sometimes mistakenly reported as proper polls. These are the sort of instant polls displayed on newspaper websites or through pushing the red button on digital TV, where anyone who wishes to can take part. There are no sampling or weighting controls, so a voodoo poll may, for example, have a sample that is far too affluent, educated or interested in politics. If the poll was conducted on a campaign website, or a website that appeals to people of a particular viewpoint, it will be skewed attitudinally too.

More importantly there are no controls on who takes part, so people with strong views on the issue are more likely to participate, and partisan campaigns or supporters on Twitter can deliberately direct people towards the poll to skew the results. Polls that do not sample or weight to get a proper sample or that are open-access and allow anyone to take part should never be reported as representing public opinion.

Few people would mistake “instant polls” on newspaper websites for properly conducted polls, but there are many instances of open-access surveys on specialist websites or publications (e.g. Mumsnet, PinkNews, etc) being reported as if they were properly representative polls of mothers, LGBT people, etc, rather than non-representative open-access polls.

Case study: The Observer reporting an open-access poll from the website of a campaign against the government’s NHS reforms as if it was representative of the views of members of the Royal College of Physicians; the Express miraculously finding that 99% of people who bothered to ring up an Express voting line wanted to leave Europe; The Independent reporting an open-access poll of Netmums in 2010.

2) Remember polls have a margin of error

Most polling companies quote a margin of error of around plus or minus 3 points. Technically this is based on a pure random sample of 1,000 and doesn’t account for other factors like sample design and the degree of weighting, but it is generally a good rule of thumb. What it means is that 19 times out of 20 the figure in a poll will be within 3 percentage points of what the “true” figure would be if you’d surveyed the entire population.
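The rule of thumb comes from the standard formula for the sampling error of a proportion. A quick sketch, assuming a pure random sample (which real polls only approximate):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion p from a pure random sample of n."""
    return z * math.sqrt(p * (1 - p) / n)

# p = 0.5 is the worst case, which is where the familiar figure comes from
print(round(100 * margin_of_error(1000), 1))  # -> 3.1 points for n = 1,000
```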

What it means when reporting polls is that a change of a few percentage points doesn’t necessarily mean anything – it could very well just be down to normal sample variation within the margin of error. A poll showing Labour up 2 points, or the Conservatives down 2 points does not by itself indicate any change in public opinion.

Unless there has been some sort of seismic political event, the vast majority of voting intention polls do not show changes outside the margin of error. This means that, taken alone, they are singularly unnewsworthy. The correct way to look at voting intention polls is, therefore, to look at the broad range of ALL the opinion polls and whether there are consistent trends. Another way is to take averages over time to even out the volatility.

One poll showing the Conservatives up 2 points is meaningless. If four or five polls are all showing the Conservatives up by 2 points, then it is likely that there is a genuine increase in their support.
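Averaging over time is straightforward to do. Here is a minimal sketch using a simple moving average over invented poll figures:

```python
# Hypothetical Conservative shares from ten successive polls (invented data)
con_shares = [37, 39, 36, 38, 40, 37, 38, 36, 39, 38]

def moving_average(series, window=5):
    """Average each run of `window` consecutive polls to smooth out noise."""
    return [sum(series[i:i + window]) / window
            for i in range(len(series) - window + 1)]

smoothed = moving_average(con_shares)
print(smoothed)  # the smoothed series varies far less than the raw polls
```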

Case study: There are almost too many to mention, but I will pick out the Guardian’s reporting of their January 2012 ICM poll, which described the Conservatives as “soaring” in the polls after rising three points. Newspapers do this all the time of course, and Tom Clark normally does a good job writing up ICM polls… I’m afraid I’m picking this one out because of the hubris the Guardian displayed in their editorial the same day, when they wrote “this is not a rogue result. Rogue polls are very rare. Most polls currently put the Tories ahead. A weekend YouGov poll produced a very similar result to today’s ICM, with another five-point Tory lead. So the polls are broadly right. And today’s poll is right. Better get used to it.”

It was sound advice not to hand-wave away polls that bring unwelcome news, but unfortunately in this case the poll probably was an outlier! Those two polls showing a five-point lead were the only ones in the whole of January to show such big Tory leads; the rest of the month’s polls showed the parties basically neck-and-neck – as did ICM’s December poll before, and their February poll afterwards. Naturally the Guardian didn’t write up the February poll as “reversion to mean after wacky sample last month”, but as “Conservative support shrinks as voters turn against NHS bill”. The bigger picture was that party support was pretty much steady throughout January 2012 and February 2012, with a slight drift away from the Tories as the European veto effect faded. The rollercoaster ride of public opinion that the Guardian’s reporting of ICM implied never happened.

3) Beware cross breaks and small sample sizes

A poll of 1,000 people has a margin of error of about plus or minus three points. However, smaller sample sizes have bigger margins of error. Where this is most important to note is in cross-breaks. A poll of 1,000 people in Great Britain as a whole might have fewer than 100 people aged under 25 or living in Scotland. A crossbreak made up of only 100 people has a margin of error of plus or minus ten points. Crossbreaks of under 100 people should be treated with extreme caution; under 50 they should be ignored.
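The standard sampling-error formula shows how quickly the margin grows as the crossbreak shrinks – a sketch assuming pure random sampling, so real crossbreaks are if anything worse:

```python
import math

def moe(n, p=0.5, z=1.96):
    """Worst-case 95% margin of error, as a proportion, for a sample of n."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (1000, 400, 100, 50):
    print(n, "+/-", round(100 * moe(n), 1), "points")
```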

An additional factor is that polls are weighted so that they are representative overall. It does not necessarily follow that cross-breaks will be internally representative. For example, a poll could have the correct number of Labour supporters overall, but have too many in London and too few in Scotland.

You should be very cautious about reading too much into small crossbreaks. Even if two crossbreaks appear to show a large contrast between two social groups, if they are within each other’s margins of error this may be pure sample variation.
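One way to sanity-check an apparent gap between two crossbreaks is the margin of error on the difference between two proportions. A hedged sketch, with entirely invented figures:

```python
import math

def diff_moe(p1, n1, p2, n2, z=1.96):
    """Approximate 95% margin of error on the difference between two
    independent sample proportions."""
    return z * math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)

# Invented example: 48% of 100 under-25s vs 38% of 100 over-65s
gap = 0.48 - 0.38
moe = diff_moe(0.48, 100, 0.38, 100)
print(round(100 * gap), "point gap, +/-", round(100 * moe, 1), "points")
print("distinguishable from noise?", gap > moe)
```

Here a ten-point gap between two 100-person crossbreaks is comfortably inside the roughly fourteen-point margin on the difference, so it tells you nothing by itself.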

Be particularly wary of national polls that claim to say something about the views of ethnic or religious minorities. In a standard GB poll the number of ethnic minority respondents is too small to provide any meaningful findings. It is possible to deliberately oversample these groups to get meaningful findings, but there have been several instances where news articles have been based on the extremely small religious or ethnic subsamples in normal polls.

Extreme caution should be given to crossbreaks on voting intention. With voting intention, small differences of a few percentage points take on great significance, so figures based on small sample sizes that are not internally weighted are virtually useless. Voting intention crossbreaks may reveal interesting trends over time, but in a single poll are best ignored.

Case study: Again, this is a common failing, but the most extreme examples are reports taking figures for religious minorities. Take, for example, this report of an ICM poll for the BBC in 2005 – the report says that Jews are the least likely to attend religious services, and that 31% of Jews said they knew nothing about their faith. These figures were based on a sample of FIVE Jewish respondents. Here is the Telegraph making a similar error in 2009 claiming that “79 per cent of Muslims say Christianity should have strong role in Britain”, based on a subsample of just 21 Muslims.

4) Don’t cherry pick

In my past post on “Too Frequently Asked Questions” one of the common misconceptions I cite about polls is that pollsters only give the answers that clients want. This is generally not the case – published polling is only a tiny minority of what a polling company produces, the shop window as it were, and the major clients that actually pay the bills want accuracy, not sycophancy.

A much greater problem is people reading the results and seeing only the answers they want, and the media reporting only the answers they want (on the latter, this is more a problem with pick-up of polls from other media sources; papers who actually commission a poll will normally report it all). Political opinion polls are a wonderful tool: interpreted properly, they allow you to peep into what the electorate sees, thinks and what drives their voting intention. As a pollster it’s depressing to see them interpreted by people chucking out and dismissing anything that undermines their prejudices, while trumpeting and waving anything they agree with. It sometimes feels like you’ve invented the iPad, and people insist on using it as a doorstop.

It should almost go without saying, but you should always look at poll findings in the round. Public opinion is complicated and contradictory. For example, people don’t think prison is very effective at reforming criminals, but tend to be strongly opposed to replacing prison sentences with alternative punishments. People tend to support tax cuts if asked, but also oppose the spending cuts they would require. Taking a single poll finding out of context is bad practice, picking poll findings that bolster your argument while ignoring those that might undermine it is downright misleading.

Case study: Almost all of the internet! For a good example of highly selective and partial reporting of opinion polls on a subject in the mainstream press though, take the Telegraph’s coverage of polling on gay marriage. As we have looked at here before, most polling shows the public generally positive towards gay marriage if actually asked about it – polls by ICM, Populus, YouGov and (last year) ComRes have all found pretty positive opinions. The exception to this is ComRes polling for organisations opposed to gay marriage which asked a question about “redefining marriage” that didn’t actually mention gay marriage at all, and which has been presented by the campaign against gay marriage as showing that 70% of people are opposed to it.

Leaving aside the merits of the particular questions, the Telegraph stable has dutifully reported all the polling commissioned by organisations campaigning against gay marriage – here, here, here and here. As far as I can tell they have never mentioned any of the polling from Populus or YouGov showing support for gay marriage. The ICM polling was actually commissioned by the Sunday Telegraph, so they could hardly avoid mentioning it, but their report heavily downplayed the finding that people supported gay marriage by 45% to 36% (or as the Telegraph put it, “opinion was finely balanced”, which stretched the definition of balanced somewhat), instead running heavily on a question on whether it should be a priority or not. Anyone relying on the Telegraph for their news will have a very skewed view of what polling says about gay marriage.

5) Don’t make the outlier the story

If 19 times out of 20 a poll is within 3 points of the “true” picture, that means 1 time out of 20 it isn’t – it is what we call a “rogue poll”. This is not an aspersion or criticism of the pollster; it is an inevitable and unavoidable part of polling. Sometimes random chance will produce a wacky result. This goes double for cross-breaks, which have a large margin of error to begin with. In the headline figures 1 in 20 polls will be off by more than 3 points; in a crossbreak of 100 people 1 in 20 of those crossbreaks will be off by more than 10 points!

There are around 30 voting intention polls conducted each month, and each of them will often have 15-20 crossbreaks on them too. It is inevitable that random sample error will spit out some weird rogue results within all that data. These will appear eye-catching, astounding and newsworthy… but they are almost certainly not. They are just random statistical noise.
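A back-of-envelope calculation using the figures above shows why: with a 1-in-20 chance of any given result falling outside its margin of error, the sheer volume of published numbers guarantees a steady supply of rogues.

```python
# Rough expected count of "rogue" results per month, using the post's
# figures: ~30 voting intention polls, each with 15-20 crossbreaks.
polls_per_month = 30
crossbreaks_per_poll = 15          # taking the lower end of 15-20
p_rogue = 1 / 20                   # chance of falling outside the margin

results = polls_per_month * (1 + crossbreaks_per_poll)  # headline + breaks
expected_rogues = results * p_rogue
print(results, "results a month,", expected_rogues, "expected rogues")
```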

Always be cautious about any poll showing a sharp movement. If a poll is completely atypical of other data, then assume it is a rogue unless other polling data backs it up. Remember Twyman’s Law: “any piece of data or evidence that looks interesting or unusual is probably wrong”.

Case study: Here’s the Guardian in February 2012 claiming that the latest YouGov polling showed that the Conservatives had pulled off an amazing turnaround and won back the female vote, based on picking out one day’s polling that showed a six-point Tory lead amongst women. Other YouGov polls that week showed Labour leading by 3 to 5 points amongst women; that day’s data was an obvious outlier. See also PoliticalScrapbook’s strange obsession with cherry-picking poor Lib Dem scores in small crossbreaks.

6) Only compare apples with apples

All sorts of things can make a difference to the results a poll finds. Online and telephone polls will sometimes find different results due to things like interviewer effect (people may be more willing to admit socially embarrassing views to a computer screen than an interviewer), the way a question is asked may make a difference, or the exact wording used, or even the question order.

For this reason, if you are looking for change over time, you need to compare apples to apples. You should only compare a question asked now to a question asked using the same methods and the same wording, otherwise any apparent change could actually be down to wording or methodology, rather than reflecting a genuine change in public opinion.

You should never draw changes from comparing voting intention figures from one company’s polls to another’s. There are specific house effects from different companies’ methodologies which render this meaningless. For example, ICM normally show the Lib Dems a couple of points higher than other companies and YouGov normally show them a point or so lower… so it would be wrong to compare a new ICM poll with a YouGov poll from the previous week and conclude that the Lib Dems had gained support.

114 Responses to “How not to report opinion polls”

  1. How on earth did a company get a figure of 31% from five people? Surely each one is “worth” 20%.

  2. I think he wanted something he could tweet, Anthony – not a dissertation. ;-)

  3. Paul D – they would all have been weighted, so maybe it was two people with weights of 0.76!

    Amber – it sort of grew…
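[The arithmetic of that reply can be sketched as follows; the answers and weights below are invented purely to show how two weighted respondents out of five can produce a figure like 31% rather than 40%:

```python
# Five hypothetical respondents: 1 = "knew nothing about their faith"
answers = [1, 1, 0, 0, 0]
weights = [0.76, 0.76, 1.10, 1.10, 1.18]  # invented demographic weights

weighted_share = sum(a * w for a, w in zip(answers, weights)) / sum(weights)
print(round(100 * weighted_share))  # -> 31, not the unweighted 40
```
]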

  4. Mon July 2, 10 p.m. BST
    Latest YouGov/The Sun results 2nd July CON 34%, LAB 44%, LD 8%, UKIP 8%; APP -38

  5. @Amber

    Rogue poll!

    Of course (unfortunately) rogue polls don’t have a big sign saying this one is a rogue.

  6. @ The Sheep

    LOL :-)

  7. Excellent post Anthony! Bookmarked.

  8. First class post. Congratulations and thanks

  9. Anthony,

    Like I suggested before I honestly think people don’t care about letting the facts get in the way of a good story.


  10. I have lurked on this site for two years or more and am thoroughly impressed by the almost unfailing intelligence and courtesy of the contributors. For which, I think, much credit should go to Anthony for somehow setting the tone.

    And in keeping with this, I would have said that his notes on what not to trust in polls are excellent. One caveat though. In my only experience of commissioning questions from a major polling organisation I was asked what kind of impression I wanted to give to those reading the report. The pollster told me that the wording of the questions would have a big influence on the numbers answering one way or another and offered to guide me in wording mine.

    So although I am sure that polling organisations are scrupulously honest in reporting results, I am not sure that they are always above wording their questions in a way that ensures their paymasters will be pleased with the results.

  11. How about going for an acronym… CREWS.

    C. Make sure it’s compatible, Apples with Apples
    R. Beware rogues one poll isn’t a trend.
    E. There is always a margin of error use a pinch of salt.
    W. Unweighted polls aren’t worth a damn.
    S. Small polls can’t be relied on.

    I am sure someone can do better but if it is for twitter that’s what it will take.


  12. AW – superb piece. Thanks

  13. I get very annoyed when someone states that some reality TV programme produced more votes than (say) the general election.
    It is never mentioned that the former allow multiple voting. Some enthusiasts will (in spite of the cost) cast up to 100 votes for their favourite.

  14. On that sort of thing in the USA there seems to be one stream of opinion that believes polls of “likely voters” are ‘better’ than those of “registered voters”.

    Of course the commentators still cherry pick the “likely voters” polls rather invalidating the argument.

    It is true that weighting by likelihood to vote, and indeed how don’t knows are broken down, has a big impact here too.

  15. “The pollster told me that the wording of the questions would have a big influence on the numbers answering one way or another and offered to guide me in wording mine.”

    Pollsters good and bad should always offer guidance in writing good questions. The question is whether that guidance is correctly aimed at asking fair and unbiased questions that will actually answer the question the client wants to find the answer to…. or whether it is colluding with a client in asking a skewed question. One hopes the former vastly outnumber the latter.

  16. Jonathan – it is true here too. If one was to use the US terminology, YouGov polls are of All Voters away from election time (as are Angus Reid and, from memory, Opinium). MORI and ComRes are of Likely Voters; Populus and ICM are, I suppose, somewhere in between (they include everyone but downweight less likely voters).

    The more tightly a pollster filters or weights by likelihood to vote, the better their results tend to be for the Conservatives.

  17. @AW @jonathan

    In the US some states have allowed registration up until polling day, so registered voters hasn’t necessarily been a good idea…

  18. AW

    I asked the question the other day as to whether the polls leading up to the 2015 GE will prove to be the least accurate in polling history?

    Reason for this.

    1) Boundary & Constituency changes
    2) UK & EU economic woes ( votes to other parties including UKIP)
    3) The health of the UK economy. People split on whether the coalitions policies have made matters worse or better.
    4) Coalition government fallout. ( LD’s possibly split and arguments between LD & Tories)
    5) Possibility of NHS candidates being fielded.

    There are probably more issues to list, but I am not sure pollsters will be able to pick up the various issues that may affect the outcomes in constituencies up and down the country.

    Are YouGov involved in any specific work that is looking at the possible issues that may affect the outcome of the 2015 GE?

  19. @R Huckle

    I would say the morale and organisation of the two main parties might come in to play – in which case Labour are far in front.

  20. “The pollster told me that the wording of the questions would have a big influence on the numbers answering one way or another and offered to guide me in wording mine.”

    That caveat goes to the very heart of the matter.

    Asking the public a basic question about their voting intention is one thing… attempts to massage opinion on behalf of a client is another; there should be a very clear distinction between any genuine attempt to interrogate public opinion, and the market research, marketing strategies and the PR side of things.

    It could be said that the very act of asking a non-VI question is an attempt to raise the salience of an issue. A good pollster will be constantly engaged in self-examination: “…what assumptions am I making by phrasing the question in this way?”

    “For example, people don’t think prison is very effective at reforming criminals, but tend to be strongly opposed to replacing prison sentences with alternative punishments.”

    Polling has to keep the question fairly simple, and the answer even more so. No space for a detailed consideration of the academic studies, nuanced positions or alternative approaches. “Keep prisoners who can’t be reformed locked up, and make prison more effective at reforming the rest” would be consistent with both of the above statements – which at first sight appear to be contradictory.

  21. R Huckle –

    Boundaries are irrelevant to standard GB polling – the polls are of GB as a whole, so boundaries at lower levels don’t come into it. They become an issue when projecting seat totals based on shares of the vote, but that’s hardly an exact science and we’ll deal with it the usual way.

    Economic woes, the health of the economy and so on are all things that will affect people’s voting decisions, but shouldn’t have any impact on people’s ability to tell pollsters what it is. Polls don’t predict voting intention based on other answers… just on how people say they would vote.

    More challenging would be the emergence of *significant* new parties which naturally makes things like past vote weighting or party ID weighting less useful in ensuring samples are balanced. It doesn’t mean there will be any problem, but it makes things a bit more challenging.

  22. In his resignation letter Bob D says: “The external pressure placed on Barclays has reached a level that risks damaging the franchise …”

    I had to look up “franchise”, as BD’s usage seems somewhat inappropriate…”an authorisation granted by a government or company to an individual or a group enabling them to carry out specified commercial activities…”

    Is “franchise” a pop at US and UK govts? Or an indication that BD saw Barclays as akin to say McDonalds or Burger King?

  23. “More challenging would be the emergence of *significant* new parties which naturally makes things like past vote weighting or party ID weighting less useful in ensuring samples are balanced. ”

    Did this cause any problems with the accuracy of the polls leading up to the 1983 election, after the SDP split off?

  24. Looks like we are moving towards Ed’s public enquiry. It’s being reported now that Tory MPs on the Treasury committee share Labour disquiet over the narrow remit for Cameron’s enquiry, and it would be very surprising now if this went ahead as formulated.

    We seem to have passed a Milly Dowler moment for the banks, and I doubt it will ever be the same again. All those people who warned for years that pandering to the narrow interests of the City was a mistake have been proven correct.

  25. Hannah – no, as far as I am aware there wasn’t any political weighting back in 1983, it was just demographic stuff, so a new party should not have posed any particular problem. It wouldn’t *necessarily* pose any problem now.

    Mike N – I thought it was a strange choice of phrase too. Perhaps he meant the UK part of Barclays (or perhaps the opposite, and he actually meant global brand?)

  26. AW
    Perhaps its use reveals much about BD’s mindset..
    Very very odd, and disturbing.

  27. Mike N,

    By franchise he just meant brand, which is fine as we were talking about reputational damage. As with spin doctors, when a CEO stops making the story the good things about the brand and becomes the story themselves, the writing is on the wall.

    In Diamond’s case it was in neon, which I suppose is better than Fred Goodwin’s, which was in blood!


  28. PeterCairns
    “Brand” makes more sense, I guess. But “brand” and “franchise” are somewhat different.

    I’m reminded that America and UK are separated by a common language.

  29. Anthony

    The pollster offered a type of question that would give a good impression of the organisation in question. He didn’t offer any forms of words that would yield a negative or ‘objective’ picture.

    It seems that different questions can yield very different impressions of who is (say) responsible for the current state of the economy. Do commissioners always want to know the ‘truth’ and never answers that show that most people agree with their point of view?

  30. – Don’t report Voodoo polls
    – Remember polls have a margin of error
    – Beware cross breaks and small sample sizes
    – Don’t cherry pick
    – Don’t make the outlier the story
    – Only compare apples with apples

    From now on, Anthony, I shall check every one of your written introductions and summaries even more assiduously than I currently do so that I can be sure that you’re conforming to your self-penned “Six Commandments of Polling Reporting”!

    Mind you, if we followed all of these strictures to the letter, where would the fun be? Or, more to the point, what on earth would ITN and Sky News do to replace their existing current affairs programmes? lol

  31. Charles –

    The big ticket commercial and academic projects always want to get accurate answers. Polls from pressure groups, campaigning organisations, etc, doing it for PR purposes normally want either a poll that will get publicity and pickup, or one that shows everyone agrees with them. That does not, of course, mean they get it!

    The sensible and legitimate way for them to do that is to pick areas of the topic under question where the public agree with them, and an unbiased and fair question will help them (hence it’s still a good idea to be a bit sceptical of polls commissioned by partisan campaigns – what questions didn’t they commission? I wrote more about it here.)

    The other way is to try and get pollsters to ask skewed questions for them. One hopes that any pollsters with any self respect should refuse to do this and should only sign off on a question that they think is fair and unbiased.

    Of course there are times when they are pushing at an open door and the public do agree with them (I always find it particularly depressing when pressure groups in that situation try to commission biased polls anyway, when a fair one would still show most people agreeing with them!)

  32. Slightly at odds with the theme, but when looking at struggling Highland Post Offices a few years back I became interested in Hybrid Mail.

    In effect it is printing close to post, where you send an e-mail and it is then printed out as close to the final destination as possible rather than being shipped all the way. It is already used widely in Australia and Canada.

    However the thought struck me that what the PO should actually do is create a free E-Mail address for every home, sort of a number followed by the post [email protected].

    The idea was to make direct marketing and the like far cheaper and easier be it hybrid or just e-mail.

    The relevance to polling is that I think the government should give every property an e-mail address and then surveys and the like could be made more efficient including polling.

    Just a thought.


  33. @Alec “Looks like we are moving towards Ed’s public enquiry. It’s being reported now that Tory MPs on the Treasury committee share Labour disquiet over the narrow remit for Cameron’s enquiry, and it would be very surprising now if this went ahead as formulated.”

    There are stirrings in the Conservative press, with the Mail backing the call for a judicial inquiry.

    And how on earth can Andrew Tyrie be expected to independently chair the Government-proposed inquiry, when he’s the author of a CPS paper highly critical of the scale of (over)regulation of the banks by the last Government?

  34. @PHIL
    “There are stirrings in the Conservative press, with the Mail backing the call for a judicial inquiry”

    But the Lib Dems seem keen on a Parliamentary inquiry… They seem more keen to nail Balls than to clean up the banking system.

  35. Anthony,

    many thanks.

  36. @Phil

    Patrick Wintour has filed four pieces for the Guardian on this subject so far today… in “Arguments against politician-led banking inquiry come large and small” he reminds us of a little incident which provoked some amusement last year:


  37. Jeff Randall’s interview with Agius on Sky – brutal & embarrassing.

    Great stuff.

  38. Labour have lost a vote in the HoL calling for a judge-led inquiry – 197 to 251.
    Likely the same result on Thursday in the Commons, given that the coalition is unified on wanting a parliamentary inquiry.

  39. Diamond before TSC tomorrow is a must-watch – have cancelled everything else.

    What has he got to lose now-apart from that £18 m that is ?

  40. I’m not sure what exactly is meant by “Whitehall” in the context of this, but I assume the implications are significant?

  41. Anthony

    Not sure why my link puts me in moderation. It’s currently being quoted on Ch4 News.

  42. This is the text of the email I linked to (RED is believed to be Bob Diamond):

    File Note: Call to RED from Paul Tucker, Bank of England

    Date: 29th October 2008

    Further to our last call, Mr Tucker reiterated that he had received calls from a number of senior figures within Whitehall to question why Barclays was always towards the top end of Libor pricing.

    His response was “you have to pay what you have to pay”.

    I asked if he could relay the reality, that not all banks were providing quotes at the levels that represented real transactions, his response “oh, that would be worse”.

    I explained again our market rate driven policy and that it had recently meant that we appeared in the top quartile and on occasion the top decile of the pricing. Equally, I noted that we continued to see others in the market posting rates at levels that were not representative of where they would actually undertake business.

    This latter point has on occasion pushed us higher than would otherwise appear to be the case. In fact, we are not having to ‘pay up’ for money at all.

    Mr Tucker stated the levels of calls he was receiving from Whitehall were ‘senior’ and that while he was certain we did not need advice, that it did not always need to be the case that we appeared as high as we have recently.


    I’m not sure what exactly is meant by “Whitehall” in the context of this, but I assume the implications are significant?

  43. I think there is a difference between ‘Whitehall figures’ worried about Libor, putting in legal measures to bring it down, and actively encouraging illegal Libor manipulation.

    I’ll guess that the defence would be the former for a few people.

  44. I would read the note as indicating that Tucker thought that their “interest rate driven policy” was the problem, and that they should perhaps adjust their policy to bring their rates down.

    It does, however, imply that Diamond told Tucker at that time that other banks were misreporting.

    Of course, since Barclays had been misreporting for 3 years prior to this phonecall, it’s difficult to see how any reading of the discussion can possibly give them any sort of alibi.

  45. In US business language a franchise is usually a licence or agreement to conduct business in a particular geographic area.

    That used to be the reason for having franchises of Macdonald’s etc. They would not allow another Macdonald’s within a specific geographical area once a particular owner/ manager had purchased a franchise. The brand used to be seen as a secondary benefit; securing the area was the primary benefit. Now the brand is considered primary, hence the public perception that brand & franchise are synonymous.

    Therefore Diamond may have been hinting that he was concerned that a conflict between Barclays & the BoE could lead to speculation in the financial media about Barclays’s banking licences in the UK. Given Robert Peston (BBC ‘expert’) is saying that Mervyn King was instrumental in Diamond going, it seems reasonable to conclude that this may be what Diamond was implying.

  46. Alistair Darling put in some very strong denials about the BoE or Treasury ever asking banks to manipulate Libor. The ‘smoking gun’ email also at this point appears weak – it’s a note of Bob D’s own impressions of the call, not a transcript, and doesn’t include any direct statements that he was told to manipulate the rate. More may yet come out, but this isn’t a killer blow as far as the BoE is concerned.

    Very disappointing to see the government turn this all party political. C4 are reporting senior Tories looking to this issue to score against Labour – depressing. It’s about the future of the country, not our political parties.

  47. @ALEC
    “C4 are reporting senior Tories looking to this issue to score against Labour – depressing”

    Not surprising though, is it? Labour have been 10 points ahead for more than 3 months and whatever Cameron’s pronouncements recently, this hasn’t changed… So they need a game-changer and this could be it… I think this will reduce Labour’s lead for sure.


  48. It’s not an area of my expertise, but the Crown today confirmed investigations are currently taking place into possible banking irregularities connected to Scottish banks.

    My understanding is that would involve activities conducted by employees of banks HQ’d in Scotland – regardless of where those employees are located. Again, I’m told that the possibility of prosecutions is easier under Scots law which treats such transgressions as common law offences instead of a reliance on statute as is required in England.

    However, I’d like to see a proper legal assessment of that (and Lallands Peat Worrier hasn’t commented on this yet!)

  49. Confirmation from the BBC that prosecutions in this area are easier under Scots Law.

  50. Points from tonight’s excellent Jeff Randall :-

    Barclays has been trying to “finger” BoE with that exchange between Diamond & Tucker for some days.

    Tucker has yet to speak-he will certainly be called before TSC.

    The frantic few days in 2008 , when the whole UK Banking structure seemed at risk would have produced extraordinary pressures on all the players.

    The regulation of Banks & the safeguarding of customers has seen a catalogue of failure :-

    PPI. Interest rate SWAPS. LIBOR.

    Adair Turner’s final annual report from FSA today indicates why – a “caveat emptor” approach.

    A contributor to Randall said that was only possible if there was product transparency-and there wasn’t/isn’t.

    Talk of a collapse in confidence in London is wide of the mark. Regulation is being changed – FSA out, FCA/PRA in; Financial Services Bill & Vickers implementation in train; heads rolling etc.

