Ipsos MORI’s monthly political monitor for the Evening Standard came out earlier today. Topline voting intention figures are CON 40%(-3), LAB 29%(-2), LDEM 13%(+2), UKIP 9%(+3). The Tory lead remains pretty steady (note that the increase in the UKIP vote is probably largely a reversion to the mean following an anomalous 6% last month).

Satisfaction ratings with the party leaders are plus 17 for Theresa May (53% are satisfied, 36% are dissatisfied) and minus 38 for Jeremy Corbyn (24% satisfied and 62% dissatisfied). That includes 22% of Tory voters who say they are “satisfied” with Corbyn’s leadership… I suspect they don’t mean that in a complimentary way.

Nothing else has been published yet (MORI normally ask a few other questions, but I expect they’ve held them back to give the Standard another story); all the details so far are over here.


“But the sheer size of the survey […] makes it of interest…”

One of the most common errors in interpreting polls and surveys is the presumption that because something has a really huge sample size it is more meaningful. Or indeed, meaningful at all. Size isn’t what makes a poll meaningful; it is how representative the sample is. Picture it this way: if you’d done an EU referendum poll of only over 60s you’d have got a result that was overwhelmingly LEAVE… even if you polled millions of them. If you did a poll and only included people under 30 you’d have got a result that was overwhelmingly REMAIN… even if you polled millions of them. What matters is that the sample accurately reflects the wider population you want it to represent, that you have the correct proportions of both young and old (and male & female, rich & poor, etc, etc). Size alone does not guarantee that.
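To make that concrete, here is a minimal simulation sketch (in Python, with invented numbers purely for illustration) pitting a huge but skewed sample against a small but representative one:

```python
import random

random.seed(1)

# Invented population of 1,000,000 voters, 52% of whom back "Leave".
# Older voters lean heavily Leave, younger voters lean heavily Remain.
population = (
    [("old", "Leave")] * 360_000 + [("old", "Remain")] * 140_000 +
    [("young", "Leave")] * 160_000 + [("young", "Remain")] * 340_000
)

def leave_share(sample):
    return sum(vote == "Leave" for _, vote in sample) / len(sample)

# Huge sample, but drawn only from the older group: size does not rescue it.
old_only = [person for person in population if person[0] == "old"]
print(leave_share(random.sample(old_only, 200_000)))   # ~0.72, hopelessly off

# Small sample drawn from the whole population: close to the true 0.52.
print(leave_share(random.sample(population, 1_000)))   # ~0.52, give or take ~3 points
```

The biased sample of 200,000 lands miles from the truth every time; the representative sample of 1,000 merely wobbles a little around the right answer.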

The classic real world example of this is the 1936 Presidential Election in the USA. I’ve referred to this many times but I thought it worth reciting the story in full, if only so people can direct others to it in future.

Back in 1936 one of the most respected barometers of public opinion was the survey conducted by the Literary Digest, a weekly news magazine with a hefty circulation. At each Presidential election the Digest carried out a survey by mail, sending surveys to its million-plus subscriber base and to a huge list of other people, gathered from phone directories, membership organisations, subscriber lists and so on. There was no attempt at weighting or sampling, just a pure numbers grab, with literally millions of replies. This method had correctly called the winner for the 1920, 1924, 1928 and 1932 Presidential elections.

In 1936 the Digest sent out more than ten million ballots. The sample size for their final results was 2,376,523. This was, obviously, huge. One can imagine how today’s papers would write up a poll of that size and, indeed, the Digest wrote up their results with not a little hubris. If anything, they wrote it up with huge, steaming, shovel loads of hubris. They bought all the hubris in the shop, spread it across the newsroom floor and rolled about in it cackling. Quotes included:

  • “We make no claim to infallibility. We did not coin the phrase “uncanny accuracy” which has been so freely applied to our Polls”
  • “Any sane person can not escape the implication of such a gigantic sampling of popular opinion as is embraced in THE LITERARY DIGEST straw vote.”
  • “The Poll represents the most extensive straw ballot in the field—the most experienced in view of its twenty-five years of perfecting—the most unbiased in view of its prestige—a Poll that has always previously been correct.”

[Image: the Literary Digest’s write-up of its 1936 straw poll]

You can presumably guess what is going to happen here. The final vote shares in the 1936 Literary Digest poll were 57% for Alf Landon (Republican) and 43% for Roosevelt (Democrat). This worked out as 151 electoral votes for Roosevelt and 380 for Landon. The actual result was 62% for Roosevelt, 38% for Landon. Roosevelt received 523 votes in the electoral college, Landon received 8, one of the largest landslide victories in US history. Wrong does not even begin to describe how badly off the Literary Digest was.

At the same time George Gallup was promoting his new business, carrying out what would become proper opinion polls and using them for a syndicated newspaper column called “America Speaks”. His methods were still quite far removed from those used today – he used a mixed-mode approach, a mail-out survey for richer respondents and face-to-face interviews for poorer, harder to reach respondents. The sample size was also still huge by modern standards, about 40,000*. The important difference from the Literary Digest poll, however, was that Gallup attempted to get a representative sample – the mail-out surveys and sampling points for face-to-face interviews had quotas on geography and on urban and rural areas, and interviewers had quotas for age, gender and socio-economic status.
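For anyone wondering what those quotas amount to in practice, here is a minimal sketch (in Python, with hypothetical cell targets rather than Gallup’s actual ones): respondents are only accepted while their demographic cell still has room, so the finished sample mirrors known population proportions rather than whoever happens to reply.

```python
from collections import Counter

# Hypothetical quota targets for a 1,000-person sample, set from population
# proportions (e.g. census figures) rather than from whoever responds first.
quotas = {
    ("urban", "male"): 260, ("urban", "female"): 280,
    ("rural", "male"): 220, ("rural", "female"): 240,
}
filled = Counter()
sample = []

def try_add(respondent):
    """Accept a respondent only if their demographic cell still has room."""
    cell = (respondent["area"], respondent["gender"])
    if filled[cell] < quotas.get(cell, 0):
        filled[cell] += 1
        sample.append(respondent)
        return True
    return False  # cell already full – this respondent is turned away

# A toy stream of volunteers that skews heavily towards urban men; the quota
# logic stops that skew carrying through into the finished sample.
stream = ([{"area": "urban", "gender": "male"}] * 2_000 +
          [{"area": "urban", "gender": "female"}] * 400 +
          [{"area": "rural", "gender": "male"}] * 300 +
          [{"area": "rural", "gender": "female"}] * 300)
for r in stream:
    try_add(r)
print(filled)  # urban men capped at 260, despite 2,000 volunteering
```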

[Image: thumbnail of Gallup’s “America Speaks” election column]

Gallup set out to challenge and defeat the Literary Digest – a battle between a monstrously huge sample and Gallup’s smaller but more representative sample. Gallup won. His final poll predicted Roosevelt 55.7%, Landon 44.3%.** Again, by modern standards it wasn’t that accurate (the poll by his rival Elmo Roper, who was setting quotas based on the census rather than turnout estimates, was actually better, predicting Roosevelt on 61%… but he wasn’t as media savvy). Nevertheless, Gallup got the story right, the Literary Digest hideously wrong. George Gallup’s reputation was made and the Gallup organisation became the best known polling company in the US. The Literary Digest’s reputation was shattered and the magazine folded a couple of years later. The story has remained a cautionary tale of why a representative poll with a relatively small sample is more use than a poll that makes no effort to be representative, even if it is absolutely massive.

The question of why the Digest poll was so wrong is interesting in itself. Its huge error is normally explained through where the sample came from – it was drawn from things like magazine subscribers, automobile association members and telephone listings. In Depression-era America many millions of voters didn’t have telephones and couldn’t afford cars or magazine subscriptions, creating an inbuilt bias towards wealthier Republican voters. In fact it appears to be slightly more complicated than that – Republican voters were also far more likely to return their slips than Democrat voters were. All of these factors – a skewed sampling frame, a differential response rate and no attempt to combat either – combined to make the Literary Digest’s sample incredibly biased, despite its massive and impressive size.
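As a rough worked example (with invented figures, purely to show how a skewed frame and differential response compound):

```python
# Invented figures: suppose the mailing list itself leaned towards Republicans
# relative to the real electorate, and Republicans were also keener to post
# their ballots back.
frame = {"Dem": 0.50, "Rep": 0.50}          # share of ballots *sent out*
response_rate = {"Dem": 0.15, "Rep": 0.30}  # share of each group replying

returned = {party: frame[party] * response_rate[party] for party in frame}
total = sum(returned.values())
print({party: round(share / total, 2) for party, share in returned.items()})
# -> {'Dem': 0.33, 'Rep': 0.67}: a comfortable "Landon landslide" from a sample
#    that was enormous but doubly skewed, in an election Roosevelt won 62-38.
```

Neither skew on its own would have produced an error that large, but multiplied together they comfortably overturn the real result – and no amount of extra sample fixes either of them.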

Ultimately, it’s not the size that matters in determining if a poll is any good. It’s whether it’s representative or not. Of course, a large representative poll is better than a small representative poll (though it is a case of diminishing returns), but representativeness is a prerequisite for it being of any use at all.
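The “diminishing returns” point can be made concrete: for a genuinely random sample, the margin of error shrinks only with the square root of the sample size, so each extra respondent buys less and less precision. A quick sketch, using the standard ±z·√(p(1−p)/n) approximation:

```python
from math import sqrt

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion estimated from a
    simple random sample of size n (ignores weighting and design effects)."""
    return z * sqrt(p * (1 - p) / n)

for n in (1_000, 10_000, 100_000, 1_000_000):
    print(f"n = {n:>9,}: ±{margin_of_error(n):.1%}")
# n =     1,000: ±3.1%
# n =    10,000: ±1.0%
# n =   100,000: ±0.3%
# n = 1,000,000: ±0.1%
```

Going from 1,000 to 1,000,000 respondents buys about three points of extra precision – and, crucially, the formula only applies to a representative sample in the first place. For a biased sample like the Literary Digest’s, the bias stays the same size no matter how large n gets.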

So next time you see some open-access poll shouting about having tens of thousands of responses and are tempted to think “Well, it may not be that representative, but it’s got a squillion billion replies so it must mean something, mustn’t it?” Don’t. If you want something that you can use to draw conclusions about the wider population, it really is whether it reflects that population that counts. Size alone won’t cut it.


* You see different sample sizes quoted for Gallup’s 1936 poll – I’ve seen people cite 50,000 as his sample size, or just 3,000. The final America Speaks column before the 1936 election doesn’t include the number of responses he got (though it does mention he sent out about 300,000 mail-out surveys to try and get them). However, the week after (8th Nov 1936) the Boston Globe had an interview with the organisation going through the details of how they did it, which said they aimed at 40,000 responses.
** If you are wondering why the headline in that thumbnail says 54% when I’ve said Gallup called the final share as 55.7%, it’s because the polls were sometimes quoted as share of the vote for all candidates, sometimes for share of the vote for just the main two parties. I’ve quoted both polls as “share of the main party vote” to keep things consistent.



ComRes have a poll in Sunday’s Independent and the Sunday Mirror. Most interestingly, it found that people agreed by 45% to 39% that John Bercow was right to refuse to invite Donald Trump to address the Commons, but also that people thought by 47% to 37% that the Queen should meet Donald Trump if he visits the country. As we’ve already seen elsewhere, the British public have little sympathy for Donald Trump’s immigration policy (33% think he was right, 52% think he was wrong) though it’s worth noting that the question wording went considerably wider than Trump’s actual policy (ComRes asked about halting immigration from “Muslim-majority” countries in general, whereas Donald Trump’s policy deals with seven specific countries they claim have an issue with terrorism or vetting).

The poll also had voting intention figures of CON 41%, LAB 26%, LDEM 11%, UKIP 11%, GRN 4%. This is the first ComRes voting intention poll since way back in June 2016 – after one of the poorer performing polls in the EU referendum (the final ComRes poll had Remain eight points ahead), they paused their voting intention polls while they conducted a review into their methods. They have now recommenced voting intention polls with – as far as I can tell – no changes to their pre-referendum methods. ComRes’s view appears to be that the referendum was an exceptional event, and while the turnout model they adopted after the polling errors of 2015 worked badly there, it worked well at the London mayoral election, so is being retained for Westminster polls. For better or for worse, the ComRes results seem to be very much in line with those from other companies, with a Conservative lead in the mid-teens.

Full tabs for the ComRes poll are here.

While I’m here, I should also mention a BMG Scottish poll that came out at the start of the week (I’ve been laid low with a heavy cold). Voting intention in a second independence referendum stood at YES 49%(+3.5%), NO 51%(-3.5%). This is the lowest lead for NO that any Scottish Indy poll has recorded since the EU referendum. This was interpreted by the Herald as a response to Theresa May’s announcement of her negotiating stance on Brexit. I think that is somewhat premature – so far we’ve had two Scottish polls conducted since May’s speech, a Panelbase poll showing a very small (and not statistically significant) movement towards NO and a BMG poll showing a somewhat larger (but still barely significant) movement towards YES. In short, there is nothing yet that couldn’t be normal sample variation – wait for the next few polls on attitudes towards Scottish independence before concluding whether there is or is not any movement. Full tabs are here.
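For a sense of how wide “normal sample variation” is here: assuming samples of roughly 1,000 each (a typical size, though the exact figures aren’t given above) and simple random sampling, a 3.5-point move in the YES share sits comfortably inside the noise. A rough sketch:

```python
from math import sqrt

def within_sampling_noise(p1, p2, n1=1_000, n2=1_000, z=1.96):
    """Rough check of whether a change between two independent polls could
    plausibly be ordinary sampling variation (simple random samples assumed)."""
    se_diff = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return abs(p2 - p1) < z * se_diff

# A move in the YES share from 45.5% to 49% between two ~1,000-person polls:
print(within_sampling_noise(0.455, 0.49))  # True – the noise band is about ±4.4 points
```

On those assumptions the 95% band for a change between two polls is around ±4.4 points, so a 3.5-point shift on its own proves very little.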


ICM’s regular poll for the Guardian came out today, topline voting intention figures are CON 42%(nc), LAB 27%(+1), LDEM 10%(nc), UKIP 12%(-1), GRN 4%(-1). There is no significant change since a fortnight ago and the Conservatives retain a formidable lead.

The poll also asked about expectations of Brexit. People tend to think it will have a negative impact on the economy (by 43% to 38%) and on their own personal finances (34% to 12%), but on the overall way of life in Britain they are slightly more positive (41% expect a positive impact, 36% a negative one). All these answers are, as you would expect, strongly correlated with referendum vote – very few Remainers expect anything good to come of Brexit, very few Leavers expect any negative consequences. Full tabs are here.

For those who’ve missed it, I also have a long piece over on YouGov’s website about the Brexit problem facing Labour and how to respond to it. Labour were already a party whose electoral coalition was under strain, with sharp divides between their more liberal, metropolitan middle-class supporters and their more socially conservative traditional working class support. Brexit splits the party right down that existing fault line, and their choice on whether to robustly oppose or accept Brexit will upset one side or another of the Labour family.

More of Labour’s supporters backed Remain than Leave and a substantial minority of Labour voters would be delighted were the party to oppose Brexit. However, such a policy would also drive away a substantial chunk of their support: 20% of people who voted Labour in 2015 say they would be “angry” if Labour opposed Brexit. In contrast, if Labour accept Brexit but campaign for a close relationship with the EU once we leave, it would delight fewer voters but also anger far fewer (only 7% of Labour’s 2015 vote would be angry). If Labour’s aim is to keep their electoral coalition together, then a “soft Brexit” would be acceptable to a much wider segment of their support.

Of course it’s more complicated than that. This is only how voters would react right now. Labour may want to gamble on public opinion turning against Brexit in the future and get ahead of the curve. Alternatively, they may think Brexit is such an important issue that Labour should do what they think right and damn the electoral consequences. That’s a matter for the party itself to decide, but in terms of current public opinion I think Jeremy Corbyn’s position on Brexit may actually be the one most likely to keep Labour together. Full article is here.


The BBC have a quite detailed Ipsos MORI poll on public attitudes towards funding the NHS. So far I think the BBC’s coverage has only briefly mentioned it in relation to (predictable) public support for increasing the charges on foreign visitors who use the NHS, but the full tables have a lot of interesting things.

MORI asked people if they thought it was acceptable or unacceptable to increase funding for the NHS in various ways. The least popular method was – obviously – a move to an insurance model of NHS funding. The defining feature of the NHS is that we don’t have to worry about insurance and suchlike; people are free to go to the doctor without worrying about money. Nevertheless, a surprisingly high 33% of people thought this would be acceptable. People also rejected (by 51% to 37%) the idea of charging for services that are currently free. Asked about specific charges, 43% of people say they would be willing to pay for a guaranteed GP appointment within 24 hours, 51% would not (the average amount people were willing to pay was £11).

Increasing income tax to fund the NHS was rejected by 50% to 40%. This is in contrast to a recent YouGov poll that asked a similar question and found slightly more people supported paying more income tax for the NHS than opposed it. I think this difference is down to wording – YouGov asked specifically about increasing income tax from 20% to 21%, while the MORI poll did not specify the size of the increase. Indeed, a later question in MORI’s poll asks more specifically about an increase in the basic rate from 20% to 21%, and this bumps support up to 50%. It looks like people are happy to pay more income tax for the NHS… so long as it’s only a modest rise. Increasing the higher rate of income tax (which most people wouldn’t have to pay themselves) is more popular still, with 61% support.

As with the YouGov poll, MORI also found a higher level of support (53%) when it asked about funding the NHS by increasing National Insurance. For the majority of respondents a 1p increase in income tax would be functionally identical to a 1p increase in National Insurance, yet the NI increase is always more popular. Part of this difference may be down to the responses of over 65s, who do not have to pay National Insurance, but looking at MORI’s breakdown the higher support is there across all age groups, so it is presumably also down to people being less aware of how National Insurance payments work. For what it’s worth, the MORI question did not specify employees’ NI contributions, so some respondents may have been thinking about employers’ NI.

MORI also asked about the potential for charging people for illnesses that are “caused by their lifestyle” or for missing appointments. These are similar in a way – the logic behind both is presumably that people are, through their behaviour, costing the NHS money. Public attitudes are completely different though – 71% think it is acceptable to charge the public for missing appointments, but only 44% think it would be acceptable to charge for lifestyle-related illnesses. Perhaps it comes down to different levels of perceived moral culpability, different potential costs, different likelihoods of being personally affected, or a sense that charging for illness infringes too much on the principle of care being free at the point of delivery. When MORI asked about two specific cases later on in the survey, people were far less forgiving: only 33% think liver transplants should always be available for free for alcoholics, and only 27% think weight loss surgery should be freely available for obese patients (25% think it shouldn’t be available at all). That said, both of these are quite unsympathetic examples.

So what can we conclude from all that? Well, around about half the population say they would support an increase in general taxation to pay for the NHS, depending on the size of the increase and which tax or tax band it applied to. Only a minority (though perhaps a larger minority than you’d expect) would consider a change to the funding basis of the NHS acceptable. Asked in general, only a minority of people would support charges for treating conditions that are seen as “self-inflicted”, but shown specific examples most people would support restrictions on treatment in cases like transplants for alcoholics or weight loss operations for the obese.