Below are the polls that have come out since the weekend.

SavantaComRes/Telegraph (2nd-3rd Dec) – CON 42%(-1), LAB 32%(-1), LDEM 12%(-1), BREX 3%(-1) (tabs)
YouGov/Times/Sky (2nd-3rd Dec) – CON 42%(-1), LAB 33%(-1), LDEM 12%(-1), BREX 4%(+2) (tabs)
ICM/Reuters (29th Nov-2nd Dec) – CON 42%(+1), LAB 35%(+1), LDEM 13%(nc), BREX 3%(-1) (tabs)
Kantar (28th Nov-2nd Dec) – CON 44%(+1), LAB 32%(nc), LDEM 15%(+1), BREX 2%(-1) (tabs)
Survation/GMB (26th-30th Nov) – CON 42%(+1), LAB 33%(+3), LDEM 11%(-4), BREX 3%(-2) (tabs)

Last week there appeared to be a consistent narrowing of the Conservative lead across all the polls. That now appears to have come to a halt or, at least, there is no obvious sign of it continuing. Four of the polls published this week have shown no sign of the lead narrowing (and the exception – the Survation poll for Good Morning Britain – was actually conducted last week, at a time when other polls were showing the lead falling). Note that the ComRes poll reflects a change in methodology to prompt for candidate names, something that somewhat unusually led to all the parties falling and “other others” going up by four.

As things stand the polls show a consistent Conservative lead, varying between 6 points from BMG and 15 points from Opinium, with the average around about 10 points. It is hard to be certain what sort of lead the Conservatives need for a majority (it depends on swings in different areas and how they do in the different battlegrounds), but a reasonable assumption is somewhere around 6 or 7 points, meaning that the BMG and ICM polls that show the smallest leads are in an area where an overall majority would be uncertain. All the other polls point towards a Conservative majority.

We should have two more sets of polls before election day – the typical rush of Sunday polls (Opinium, Deltapoll, YouGov, BMG and ComRes all usually release polls on Sundays), and then the pollsters’ final call polls on Tuesday and Wednesday next week.

General election campaigns provoke a lot of attention and criticism of opinion polls. Some of that is sensible and well-informed… and some of it is not. This is about the latter – a response to some of the more common criticisms that I see on social media. Polling methodology is not necessarily easy to understand and, given many people only take an interest in it at around election time, most people have no good reason to know much about it. This will hopefully address some of the more common misapprehensions (or in those cases where they aren’t entirely wrong, add some useful context).

This Twitter poll has 20,000 responses, TEN TIMES BIGGER than so-called professional polls!

Criticisms about sample size are the oldest and most persistent of polling criticisms. This is unsurprising given that it is rather counter-intuitive that just 1,000 interviews should be enough to get a good steer on what 40,000,000 people think. The response that George Gallup, the founding father of modern polling, used to give is still a good one: “You don’t need to eat a whole bowl of soup to know if it’s too salty; providing it’s properly stirred, a single spoonful will suffice.”
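To put a rough number on the soup-tasting intuition: under textbook simple-random-sampling assumptions (which real polls only approximate), the margin of error depends on the sample size, not on the size of the population being measured. A minimal sketch, with a hypothetical helper function:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a simple random sample.

    p: observed proportion (e.g. 0.40 for a party on 40%)
    n: sample size
    z: critical value (1.96 for a 95% confidence interval)
    """
    return z * math.sqrt(p * (1 - p) / n)

# A party on 40% in a 1,000-person sample carries roughly a
# +/- 3 point margin of error...
moe_1000 = margin_of_error(0.40, 1_000)
# ...and quadrupling the sample to 4,000 only halves it.
moe_4000 = margin_of_error(0.40, 4_000)
print(f"n=1000: +/-{moe_1000:.1%}, n=4000: +/-{moe_4000:.1%}")
```

Note the diminishing returns: the error shrinks with the square root of the sample size, which is one reason pollsters settle around 1,000–2,000 interviews rather than chasing ever-bigger samples. None of this helps, of course, if the sample is unrepresentative in the first place.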

The thing that makes a poll meaningful isn’t so much the sample size, it is whether it is representative or not. That is, does it have the right proportions of men and women, old and young, rich and poor and so on. If it is representative of the wider population in all those ways, then one hopes it will also be representative in terms of opinion. If not, then it won’t be. If you took a sample of 100,000 middle-class homeowners in Surrey then it would be overwhelmingly Tory, regardless of the large sample size. If you took a sample of 100,000 working class people on Merseyside it would be overwhelmingly Labour, regardless of the large sample size. What counts is not the size, it’s whether it’s representative or not. The classic example of this is the 1936 Presidential Election where Gallup made his name – correctly predicting the election using a representative sample when the Literary Digest’s sample of 2.4 million(!) called it wrongly.

Professional polling companies will sample and weight polls to ensure they are representative. However well intended, Twitter polls will not (indeed, there is no way of doing so, and no way of measuring the demographics of those who have participated).

Who are these pollsters talking to? Everyone I know is voting for party X!

Political support is not evenly distributed across the country. If you live in Liverpool Walton, then the overwhelming majority of other people in your area will be Labour voters. If you live in Christchurch, then the overwhelming majority of your neighbours will likely be Tory. This is further entrenched by our tendency to be friends with people like us – most of your friends will probably be of a roughly similar age and background and, very likely, have similar outlooks and things in common with you, so they are probably more likely to share your political views (plus, unless you make pretty odd conversation with people, you probably don’t know how everyone you know will vote).

An opinion poll will have sought to include a representative sample of people from all parts of the country, with a demographic make-up that matches the country as a whole. Your friendship group probably doesn’t look like that. Besides, unless you think that literally *everyone* is voting for party X, you need to accept that there probably are voters of the other parties out there. You’re just not friends with them.

Polls are done on landlines so don’t include young people

I am not sure why this criticism has resurfaced, but I’ve seen it several times over recent weeks, often widely retweeted. These days the overwhelming majority of opinion polls in Britain are conducted online rather than by telephone. The only companies who regularly conduct GB voting intention polls by phone are Ipsos MORI and Survation. Both of them conduct a large proportion of their interviews using mobile phones.

Polls of single constituencies are still normally conducted by telephone but, again, will conduct a large proportion of their calls on mobile phones. I don’t think anyone has done a voting intention poll on landlines only for well over a decade.

Who takes part in these polls? No one has ever asked me

For the reason above, your chances of being invited to take part in a telephone poll that asks about voting intention are vanishingly small. You could be waiting many, many years for your phone number to be randomly dialled. If you are the sort of person who doesn’t pick up unknown numbers, they’ll never be able to reach you.

Most polls these days are conducted using internet panels (that is, panels of people who have given pollsters permission to email them and ask them to take part in opinion polls). Some companies like YouGov and Kantar have their own panels, other companies may buy in sample from providers like Dynata or Toluna. If you are a member of such panels you’ll inevitably be invited to take part in opinion polls. Though of course, remember that the vast majority of surveys tend to be stuff about consumer brands and so on… politics is only a tiny part of the market research world.

The polls only show a lead because pollsters are “Weighting” them, you should look at the raw figures

Weighting is a standard part of polling that everyone does. Standard weighting by demographics is unobjectionable – but is sometimes presented as something suspicious or dodgy. At this election, this has sometimes been because it has been confused with how pollsters account for turnout, which is a more controversial and complicated issue which I’ll return to below.

Ordinary demographic weighting, though, is simply there to ensure that the sample is representative. So for example – we know that the adult British population is about 51% female, 49% male. If the raw sample a poll obtained was 48% female and 52% male then it would have too many men and too few women, and weighting would be used to correct it. Every female respondent would be given a weight of 1.06 (that is, 51/48) and would count as 1.06 of a person in the final results. Every male respondent would be given a weight of 0.94 (that is, 49/52) and would count as 0.94 of a person in the final results. Once weighted, the sample would be 51% female and 49% male.
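The arithmetic of that worked example can be sketched in a few lines of Python (hypothetical code for illustration, not any pollster's actual implementation):

```python
# Reproducing the worked example: a raw sample that is 48% female
# and 52% male, weighted to population targets of 51% / 49%.
sample = {"female": 0.48, "male": 0.52}   # raw sample shares
target = {"female": 0.51, "male": 0.49}   # known population shares

# Each respondent's weight is simply target share / sample share.
weights = {g: target[g] / sample[g] for g in sample}
# weights["female"] ~= 1.06, weights["male"] ~= 0.94

# After weighting, each group's effective share matches the target.
weighted = {g: sample[g] * weights[g] for g in sample}
print(weights, weighted)
```

The weighted sample now counts as 51% female and 49% male, exactly matching the known population figures.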

Actual weighting is more complicated than this because samples are weighted by multiple factors – age, gender, region, social class, education, past vote and so on. The principle however is the same – it is just a way of correcting a sample that has the wrong proportions of people compared to the known demographics of the British population.
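With several weighting variables at once, the standard technique is rim weighting (also called raking, or iterative proportional fitting): adjust the weights for one variable at a time, cycling round until all the margins match. A minimal sketch with toy data – the function and the sample are my own invention, not any polling company's code:

```python
def rake(respondents, targets, iterations=50):
    """Rim weighting / iterative proportional fitting.

    respondents: list of dicts, e.g. {"gender": "f", "age": "18-54"}
    targets: {"gender": {"f": 0.5, ...}, "age": {...}} - each
             variable's target shares should sum to 1.
    Returns one weight per respondent such that every variable's
    weighted margins match the targets. Adjusting one variable
    disturbs the others slightly, hence the repeated passes.
    """
    weights = [1.0] * len(respondents)
    for _ in range(iterations):
        for var, target in targets.items():
            # Current weighted total of each category of this variable.
            totals = {cat: 0.0 for cat in target}
            for w, r in zip(weights, respondents):
                totals[r[var]] += w
            grand = sum(totals.values())
            # Scale each respondent so this variable hits its target.
            for i, r in enumerate(respondents):
                cat = r[var]
                weights[i] *= target[cat] * grand / totals[cat]
    return weights

# Toy sample with too many younger women relative to 50/50 targets.
people = ([{"gender": "f", "age": "18-54"}] * 3 +
          [{"gender": "f", "age": "55+"}] +
          [{"gender": "m", "age": "18-54"}] +
          [{"gender": "m", "age": "55+"}])
targets = {"gender": {"f": 0.5, "m": 0.5},
           "age": {"18-54": 0.5, "55+": 0.5}}
w = rake(people, targets)
f_share = sum(wi for wi, p in zip(w, people) if p["gender"] == "f") / sum(w)
print(f"weighted female share: {f_share:.3f}")
```

After raking, both the gender and age margins of the toy sample match their targets, even though no single scaling of one variable could have achieved that on its own.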

Polls assume young people won’t vote

This is a far more understandable criticism, but one that is probably wrong.

It’s understandable because it is part of what went wrong with the polls in 2017. Many polling companies adopted new turnout models that did indeed make assumptions about whether people would vote or not based upon their age. While it wasn’t the case across the board, in 2017 companies like ComRes, ICM and MORI did assume that young people were less likely to vote and weighted them down. The way they did this contributed to those polls understating Labour support (I’ve written about it in more depth here).

Naturally people looking for explanations for the difference between polls this time round have jumped to this problem as a possible explanation. This is where it goes wrong. Almost all the companies who were using age-based turnout models dumped those models straight after the 2017 election and went back to basing their turnout models primarily on how likely respondents say they are to vote. Put simply, polls are not making assumptions about whether different age groups will vote or not – differences in likelihood to vote between age groups will be down to people in some age groups telling pollsters they are less likely to vote than people in other age groups.
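As a toy illustration of that approach – entirely hypothetical numbers and names – a self-reported turnout model can be as simple as scaling each response by the respondent's own 0–10 likelihood-to-vote score, with no assumptions made about age at all:

```python
# Hypothetical sketch of turnout weighting driven purely by
# respondents' own 0-10 likelihood-to-vote answers. Real pollsters'
# models differ in detail (some only count 10/10s, for instance).
respondents = [
    {"vote": "Con", "likelihood": 10},
    {"vote": "Lab", "likelihood": 8},
    {"vote": "Lab", "likelihood": 5},
    {"vote": "LD",  "likelihood": 10},
]

def turnout_weighted_shares(respondents):
    totals = {}
    for r in respondents:
        w = r["likelihood"] / 10   # 10/10 counts fully, 5/10 counts half
        totals[r["vote"]] = totals.get(r["vote"], 0.0) + w
    grand = sum(totals.values())
    return {party: t / grand for party, t in totals.items()}

print(turnout_weighted_shares(respondents))
```

The point is that any down-weighting of a group falls out of what its members *say* about their likelihood to vote, not out of an assumption the pollster has imposed.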

The main exception to this is Kantar, who do still include age in their turnout model, so can fairly be said to be assuming that young people are less likely to vote than old people. They kept the method because, for them, it worked well (they were one of the more accurate companies at the 2017 election).

Some of the criticism of Kantar’s turnout model (and of the relative turnout levels in other companies’ polls) is based on comparing the implied turnout in their polls with turnout estimates published straight after the 2017 election, based on polls done during the 2017 campaign. Compared to those figures, the turnout for young people may look a bit low. However there are much better estimates of turnout in 2017 from the British Election Study, which has validated turnout data (that is, rather than just asking if people voted, they look their respondents up on the marked electoral register and see if they actually voted) – these figures are available here, and this is the data Kantar uses in their model. Compared to these figures the levels of turnout in Kantar and other companies’ polls look perfectly reasonable.

Pollster X is biased!

Another extremely common criticism. It is true that some pollsters show figures that are consistently better or worse for a party. These are known as “house effects” and can be explained by methodological differences (such as what weights they use, or how they deal with turnout), rather than some sort of bias. It is in the strong commercial interests of all polling companies to be as accurate as possible, so it would be self-defeating for them to be biased.

The frequency of this criticism has always baffled me, given that to anyone in the industry it is quite absurd. The leading market research companies are large, multi-million pound corporations. Ipsos, YouGov and WPP (Kantar’s parent company) are publicly listed companies – they are owned largely by institutional shareholders and the vast bulk of their profits are based upon non-political commercial research. They are not the personal playthings of the political whims of their CEOs, and the idea that people like Didier Truchot ring up their UK political team and ask them to shove a bit on the figures to make the party they support look better is tin-foil hat territory.

Market research companies sell themselves on their accuracy, not on telling people what they want to hear. Political polling is done as a shop window, a way of getting name recognition and (all being well) a reputation for solid, accurate research. They have extremely strong commercial and financial reasons to strive for accuracy, and pretty much nothing to be gained by being deliberately wrong.

Polls are always wrong

There have indeed been several instances of the polls being wrong of late, though the scale of this is somewhat overegged. The common perception is that the polls were wrong in 2015 (indeed, they nearly all were), at the 2016 referendum (some of them were wrong, some of them were correct – but the media paid more attention to the wrong ones), at Donald Trump’s election (the national polls were actually correct, but some key state polls were wrong, so Trump’s victory in the electoral college wasn’t predicted), and in the 2017 election (most were wrong, a few were right).

You should not take polls as gospel. It is obviously possible for them to be wrong – recent history demonstrates that all too well. However, they are probably the best way we have of measuring public opinion, so if you want a steer on how Britain is likely to vote it would be foolish to dismiss them totally.

What I would advise against is assuming that they are likely to be wrong in the same direction as last time, or in the direction you would like them to be. As discussed above – the methods that caused the understatement of Labour support in 2017 have largely been abandoned, so the specific error that happened in 2017 is extremely unlikely to reoccur. That does not mean polls couldn’t be wrong in different ways, but it is worth considering that the vast majority of previous errors have been in the opposite direction, and that polls in the UK have tended to overstate Labour support. Do not assume that polls being wrong automatically means understating Labour.


We have the usual glut of polls in the Sunday papers, with new figures from YouGov, ComRes, Opinium, BMG and Deltapoll. Topline figures are below:

Deltapoll/Mail on Sunday – CON 45%(+2), LAB 32%(+2), LDEM 15%(-2), BREX 3%(nc). Fieldwork was Thursday to Saturday, and changes are from last week. (tabs)
YouGov/Sunday Times – CON 43%(nc), LAB 34%(+2), LDEM 13%(nc), BREX 2%(-2). Fieldwork was Thursday and Friday, and changes are from the start of the week (tabs)
Opinium/Observer – CON 46%(-1), LAB 31%(+3), LDEM 13%(+1), BREX 2%(-1). Fieldwork was Wednesday to Friday, changes are from last week (tabs)
SavantaComRes/Sunday Telegraph – CON 43%(+2), LAB 33%(-1), LDEM 13%(nc), BREX 4%(-1). Fieldwork was Wednesday and Thursday, changes are from their mid-week poll (tabs)
BMG/Independent – CON 39%(-2), LAB 33%(+5), LDEM 13%(-5), BREX 4%(-1). Fieldwork was Tuesday to Wednesday, changes are from last week.

Polls in the last week had been consistent in showing a small decrease in Conservative support and a small increase in Labour support, marginally reducing the Tory lead. While these polls aren’t quite so uniform (Deltapoll show the lead steady, ComRes shows movement in the other direction… though if you consider ComRes carry out two polls a week, their lead compared to a week ago is actually unchanged), most still show movement in Labour’s favour.

There remains a significant difference in the overall size of the lead, varying from just six points in the BMG poll to fifteen from Opinium. It is hard to put a specific figure on what sort of lead the Conservatives need to secure an overall majority – it obviously depends on exactly where they gain or lose votes – but as a rough rule of thumb it is probably somewhere around six or seven points. That means at the moment the vast majority of polls are still indicating a Tory majority, but there is no longer that much room for further tightening.

The mid-week polls so far are below:

SavantaComRes (25th-26th) – CON 41%(-1), LAB 34%(+2), LDEM 13%(-2), BREX 5%(nc)
YouGov/Sky/Times (25th-26th) – CON 43%(-1), LAB 32%(+2), LDEM 13%(-3), BREX 4%(+1)
ICM/Reuters (22nd-25th) – CON 41%(-1), LAB 34%(+2), LDEM 13%(nc), BREX 4%(-1)
Kantar (21st-25th) – CON 43%(-2), LAB 32%(+5), LDEM 14%(-2), BREX 3%(+1)
Survation/GMB (20th-23rd) – CON 41%(-1), LAB 30%(+2), LDEM 15%(+2), BREX 5%(nc)

Taken individually, almost all the changes in these polls are within the margin of error (Kantar is the only exception). However, looking at them as a group there is a clear trend, with every poll showing a slight drop in Tory support and a slight increase for Labour. Taken together it’s clear there’s been a slight narrowing of the race though, of course, that still leaves a Conservative lead between 7 and 11 points. As usual, it is almost impossible to ascribe specific causes to this.

As well as the standard polls this week, YouGov published their MRP model. MRP is a method of using a large national sample to project shares at smaller geographical areas – in this case Parliamentary constituencies. By modelling how different demographics vote in seats with different characteristics, and then applying that model to each constituency, the MRP model produces vote shares for each individual constituency and, via that, projects seat totals for each party. Famously the YouGov MRP model projected a hung Parliament in 2017 when most people expected a Conservative majority.
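To illustrate just the post-stratification half of the idea (in real MRP the cell-level vote probabilities come from a multilevel regression fitted on the large national sample; the numbers and names below are made up purely for illustration):

```python
# The post-stratification step of MRP, with made-up numbers: given
# modelled vote probabilities for each demographic "cell", project a
# constituency's result from how many people of each cell live there
# (in practice derived from census data).
cell_vote_prob = {   # modelled P(vote for party) by cell - hypothetical
    ("18-34", "degree"):    {"Con": 0.25, "Lab": 0.55, "LD": 0.20},
    ("18-34", "no degree"): {"Con": 0.40, "Lab": 0.45, "LD": 0.15},
    ("55+",   "degree"):    {"Con": 0.45, "Lab": 0.30, "LD": 0.25},
    ("55+",   "no degree"): {"Con": 0.60, "Lab": 0.30, "LD": 0.10},
}

def project_constituency(cell_counts):
    """Average the modelled cell vote shares, weighted by how many
    people in each cell live in this particular constituency."""
    total = sum(cell_counts.values())
    shares = {}
    for cell, n in cell_counts.items():
        for party, p in cell_vote_prob[cell].items():
            shares[party] = shares.get(party, 0.0) + p * n / total
    return shares

# A hypothetical older, less-graduate seat leans Conservative:
seat = {("18-34", "degree"): 5_000, ("18-34", "no degree"): 10_000,
        ("55+", "degree"): 10_000, ("55+", "no degree"): 25_000}
print(project_constituency(seat))
```

The same national model thus yields a different projection for every seat, driven entirely by each seat's demographic make-up – which is how one big national sample can produce 632 constituency-level estimates.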

The model this time is less surprising – it projected national vote shares of CON 43%, LAB 32%, LDEM 14%, BREX 3% (so very much in line with YouGov’s traditional polling), and seat numbers of Conservative 359, Labour 211, SNP 43, Liberal Democrat 13. This represents a Conservative majority of 68, much what we would expect to find on those shares of the vote (though the detailed projection is interesting, with the Conservative gains coming largely in the North and the urban West Midlands, with notable gains in West Bromwich, Wolverhampton and Stoke). Full details of the MRP model are here.

Finally this week, we’ve seen what is only the second Scottish poll of the campaign, this time from Ipsos MORI. Topline figures with changes from the 2017 election are CON 26%(-3), LAB 16%(-11), LDEM 11%(+4), SNP 44%(+7). Tabs for that are here.

There were five GB voting intention polls in the Sunday papers (and the latest Panelbase poll appeared on Friday).

BMG/Independent – CON 41%(+4), LAB 28%(-1), LDEM 18%(+2), BREX 3%(-6). Fieldwork Tuesday to Thursday, with changes from last week. (tabs)
YouGov/Sunday Times – CON 42%(nc), LAB 30%(nc), LDEM 16%(+1), BREX 3%(-1). Fieldwork Thursday and Friday, with changes from mid-week. (tabs)
Opinium/Observer – CON 47%(+3), LAB 28%(nc), LDEM 12%(-2), BREX 3%(-3). Fieldwork Wednesday to Friday, with changes from last week. (tabs)
Deltapoll/Mail on Sunday – CON 43%(-2), LAB 30%(nc), LDEM 16%(+5), BREX 3%(-3). Fieldwork Thursday and Friday, with changes from last week. (tabs)
SavantaComRes/Sunday Express – CON 42%(nc), LAB 32%(+1), LDEM 15%(nc), BREX 5%(nc). Fieldwork Wednesday and Thursday, with changes from midweek. (tabs)
Panelbase – CON 42%(-1), LAB 32%(+2), LDEM 14%(-1), BREX 3%(-2). Fieldwork was Wednesday to Friday, changes from last week.

Five of these were conducted wholly after the first leaders’ debate and two of them were conducted after the Labour manifesto had been released, so it is the first opportunity to see any impact from these events.

There does not appear to be any consistent trend or impact from the debate. The four point increase for the Conservatives in the BMG poll is likely the impact of starting to prompt by candidate names and, therefore, removing the Brexit Party option for half of respondents (so far as I can tell, all polling companies apart from ComRes are now doing this). Setting BMG aside, the average change across the polls is no change for the Tories and less than a point for Labour and the Liberal Democrats. Neither of the two polls that were conducted wholly after the publication of the Labour manifesto (YouGov and Deltapoll) show any sign of a manifesto boost for Labour. Both the debate and the manifesto launch were events that could potentially have had an impact on the race… thus far, neither appears to have done so.

Moving on, Scottish polling has been almost completely absent during the campaign so far: while ITV Wales have commissioned specific Welsh polling and Queen Mary University of London have done a specific London poll, there have been no Scottish polls until now. The Sunday Times today have a Scottish poll from Panelbase, with topline figures (with changes from the general election) of CON 28%(-1), LAB 20%(-7), LDEM 11%(+4), SNP 40%(+3), BREX 1%(-4). On these figures the Conservatives would hold all but one of their current Scottish seats – rather a turnaround from assumptions at the start of the campaign that the Tories were set to lose many of their Scottish seats and would need to make up the deficit elsewhere.