I’ve been catching up on sleep after the election, but this is just to add a brief post-election round-up of how the polls performed. In 2015 and 2017 the equivalent posts were all about how the polls had got it wrong, and what might have caused it (even in 2010, when the polls got the gap between Labour and the Conservatives pretty much spot on, there were questions about the overstatement of the Liberal Democrats). It’s therefore rather a relief to be able to write up an election when the polls were pretty much correct.

The majority of the final polls had all the main parties within two points, with Ipsos MORI and Opinium almost spot on – well done both of them. The only companies that really missed the mark were ICM and ComRes, who understated the Tories and overstated Labour, meaning they had Conservative leads of only 6 and 5 points in their final polls.

My perception during the campaign was that much of the difference between polling companies showing small Conservative leads and those showing bigger leads was down to whether and how they accounted for false recall when weighting by past vote – I suspect this may well explain the spread in the final polls. Those companies that came closest were those who either do not weight by past vote (MORI & NCPolitics), adjusted for it (Kantar), or used data collected in 2017 (Opinium & YouGov). ComRes and ICM were, as far as I know, both just weighting recalled 2017 past vote to actual 2017 vote shares, something that would risk overstating Labour support if people disproportionately failed to recall voting Labour in 2017.
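To make that mechanism concrete, here is a deliberately simplified sketch – invented numbers, not any pollster’s actual data or method – of how weighting recalled 2017 vote back to the real 2017 result can overstate current Labour support when former Labour voters disproportionately fail to recall that vote:

```python
# Toy illustration of false recall biasing past-vote weighting.
# All figures are invented; Labour's 2017 share is taken as 40% for simplicity.

TRUE_2017_LAB = 0.40

# A perfectly representative sample, split by 2017 vote and current support.
loyal_lab   = 0.30   # voted Labour in 2017, still Labour now
lab_leavers = 0.10   # voted Labour in 2017, support another party now
oth_to_lab  = 0.03   # voted another party in 2017, Labour now
oth_rest    = 0.57   # voted another party in 2017, still not Labour (sums to 100%)

true_lab_now = loyal_lab + oth_to_lab                    # 33% true current support

# False recall: suppose 40% of the Labour leavers no longer say they voted
# Labour in 2017, while everyone else recalls their vote correctly.
recalled_lab = loyal_lab + lab_leavers * 0.6             # only 36% now recall voting Labour
recalled_oth = 1.0 - recalled_lab

# Weighting recalled 2017 vote back up to the actual 2017 result...
w_lab = TRUE_2017_LAB / recalled_lab                     # ~1.11
w_oth = (1 - TRUE_2017_LAB) / recalled_oth               # ~0.94

# ...inflates the weight of people who still recall (and still back) Labour.
weighted_lab_now = loyal_lab * w_lab + oth_to_lab * w_oth

print(f"True current Labour support: {true_lab_now:.1%}")       # 33.0%
print(f"After past-vote weighting:   {weighted_lab_now:.1%}")   # ~36.1%
```

On these made-up numbers the poll would overstate Labour by around three points; the direction of the bias, rather than its size, is the point.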

The YouGov MRP performed less well than in 2017. The final vote shares it produced were all within 2 points of the actual shares, but the seat predictions showed a smaller Tory majority than happened in reality. Ben Lauderdale, who designed the model, has already posted his thoughts on what happened here. Part of it is simply a function of vote share (a small difference in vote share makes a big difference to seat numbers), and part of it was an overstatement of Brexit party support in the key Conservative target seats. Whether that was down to having too many Brexit party supporters in the sample, or to Brexit party supporters swinging back to the Tories in the last 48 hours, will be clearer once we’ve got some recontact data.

Finally, the 2019 election saw a resurgence of individual constituency polling, primarily from Survation and Deltapoll. Constituency polling is difficult (and I understand has become even more so since the advent of GDPR, which has reduced the availability of purchasable databases of mobile phone numbers for specific areas), and with sample sizes of 400 or 500 it will inevitably be imprecise. Overall it performed well this time though, particularly given that many of the constituency polls were conducted in seats you would expect to be hard to poll: unusual seats, or places with independents or high-profile defectors standing. There were misses – David Gauke’s support was understated, for example, and in Putney the constituency polling overstated Lib Dem support at the expense of Labour – but in many places it was close to the mark, particularly the Chelsea & Fulham, Wimbledon, Finchley and Esher & Walton polls.

And with that, I’m off for a nice Christmas break. Have a good Christmas and happy new year.


It is the eve of the election and I’ll be rounding up the final call polls here as they come in.

YouGov already released their final call prediction last night in the form of their updated MRP projection. The voting intentions in the model were CON 43%, LAB 34%, LDEM 12%, BREX 3%, GRN 3%. As an MRP, it also included projected numbers of seats, with the Conservatives winning 339, Labour 231, SNP 41, Liberal Democrats 15, Plaid 4 and the Greens 1. Fieldwork was the 4th to the 10th, but the model gives more weight to the more recent data. The full details of the model are here.

ICM also released their final poll yesterday, with topline figures of CON 42%, LAB 36%, LDEM 12%, BREX 3%. Fieldwork was conducted Sunday to Monday, and full tables are here.

Opinium’s final voting intention figures are CON 45%, LAB 33%, LDEM 12%, BREX 2%, GRN 2%. The Conservatives have a twelve point lead (though in their write up Opinium point out that this is because the Tory share has been rounded up and Labour’s share rounded down, so before rounding it was actually an 11 point lead). In recent weeks Opinium have tended to show the biggest leads for the Conservatives, so this reflects a slight narrowing since their previous poll. Fieldwork was Tuesday and Wednesday, so would have been wholly after the Leeds NHS story on Monday. Full tables are here.

BMG’s final figures are CON 41%, LAB 32%, LDEM 14%. Fieldwork was between Friday and today, and the poll doesn’t show any change from BMG’s figures last week.

Panelbase’s final poll has topline figures of CON 43%, LAB 34%, LDEM 11%, BREX 4%, GRN 3%. Fieldwork was Tuesday and Wednesday so, like Opinium, would have been wholly after the Leeds NHS story (though unlike Opinium, Panelbase don’t show any tightening since their previous poll). Full tables are here.

Matt Singh’s NCPolitics have conducted a final poll on behalf of Bloomberg. That has final figures of CON 43%, LAB 33%, LDEM 12%, BREX 3%, GRN 3%. Their full tables are here.

There was also a poll by Qriously (a company that conducts polls via smartphone adverts and is a member of the BPC). Fieldwork was conducted Thursday to Sunday, and the topline figures were CON 43%, LAB 30%, LDEM 12%, BREX 3%, GRN 4%. Details are here.

SavantaComRes have final figures of CON 41%, LAB 36%, LDEM 12%. Fieldwork was Monday and Tuesday. The five point lead is the lowest any company has given the Conservatives during the campaign, and would likely be in hung Parliament territory (though ComRes have typically given some of the lower Tory leads). Full tables are here.

Kantar’s final poll has topline figures of CON 44%, LAB 32%, LDEM 13%, BREX 3%. Fieldwork was Monday to Wednesday. The twelve point lead is unchanged from Kantar’s last poll, though the Lib Dems have fallen a little. Full results are here.

Deltapoll’s final poll has topline figures of CON 45%, LAB 35%, LDEM 10%, BREX 3%. Fieldwork was also Monday to Wednesday. Full results are here.

Survation published their final call overnight. Topline figures there are CON 44.5%, LAB 33.7%, LDEM 9.3%, BREX 3.1%. Their poll also included an oversized sample for Scotland, to provide separate Scottish figures – these were SNP 43.2%, CON 27.9%, LAB 19.8%, LDEM 7.3%. Full details are here.

Finally, Ipsos MORI published their final call in this morning’s Standard. Their final figures are CON 44%, LAB 33%, LDEM 12%, GRN 3%, BREX 2%. Full tables are here. (And, since people always ask – Ipsos MORI publish on election day because they partner with the Evening Standard, who publish at lunchtime. As you’ll know, it’s illegal to publish an exit poll until after voting stops at 10pm. However, it’s perfectly legal to publish a poll that was conducted before voting began.)



The final Sunday before the election. There should be plenty of polls out tonight (certainly we should see ComRes, YouGov, Deltapoll and Opinium – and perhaps others). I will update this post as they appear, and then round up at the end.

The first to appear is SavantaComRes. Slightly confusingly they have two polls out tonight, conducted using slightly different methods, over different timescales, and showing slightly different results.

The first was conducted for RemainUnited, Gina Miller’s anti-Brexit campaign, and was conducted between Monday and Thursday. It has topline figures of CON 42%, LAB 36%, LDEM 11%, BREX 4%. The second was conducted for the Sunday Telegraph, with fieldwork between Wednesday and Thursday. Topline figures there are CON 41%, LAB 33%, LDEM 12%, BREX 3%. Tables for the SavantaComRes/Sunday Telegraph poll are already available here.

The previous ComRes poll was conducted for the Daily Telegraph with fieldwork on Monday and Tuesday, so the RemainUnited poll actually straddles the fieldwork period of both polls. It was also asked a little differently. The most recent two ComRes polls for the Telegraph have prompted people with the specific candidates standing in their constituency (i.e. someone would be asked if they will vote for Bob Smith – Labour, Fred Jones – Conservative, and so on, and not be given the option of voting for any party that is not standing in their area). In contrast, it appears that the ComRes poll for RemainUnited was conducted using their previous method, where respondents were just prompted with a list of parties – Conservative, Labour, Liberal Democrat and so on. For some reason, ComRes seem to find a higher level of support for “other others” when they prompt using candidate names.

Putting that aside, the SavantaComRes poll for the Telegraph earlier in the week had a 10 point Conservative lead. Comparing the two SavantaComRes/Telegraph polls that used the same methodology shows the Tories down 1, Labour up 1. A small narrowing in the lead, but nothing that couldn’t just be noise. I’m expecting a fair number of polls tonight, so we should be in a position to see if there is a consistent trend across the polling companies, rather than getting too excited about any movement in individual polls.

UPDATE 1: Secondly, we have Opinium for the Observer. Topline voting intention figures there are CON 46%(nc), LAB 31%(nc), LDEM 13%(nc), BREX 2%(nc). Fieldwork was conducted between Wednesday and Friday and the changes are from a week ago. There is obviously no movement at all in support for the main parties here. The fifteen point Tory lead looks daunting, but it’s worth bearing in mind that Opinium have tended to show the largest Conservative leads during the campaign.

UPDATE 2: The weekly YouGov poll for the Sunday Times has topline figures of CON 43%(+1), LAB 33%(nc), LDEM 13%(+1), BREX 3%(-1). Fieldwork was Thursday and Friday, and changes are from their midweek poll for the Times and Sky. Again, no significant change here. YouGov’s last four polls have had the Tory lead at 11, 9, 9 and 10 points, so pretty steady.

Finally (at least, as far as I’m aware) there is Deltapoll in the Mail on Sunday. Their topline figures are CON 44%(-1), LAB 33%(+1), LDEM 11%(-4), BREX 3%(nc), with changes from last week. A slight narrowing there, leaving the Conservative lead at 11 points, but again, nothing that couldn’t just be noise.

Looking at the four companies who’ve released GB opinion polls for the Sunday papers, we’ve got ComRes and Deltapoll showing things narrowing a little, YouGov showing the lead growing by a point, and Opinium showing no movement. The clear trend towards Labour we were seeing earlier in the campaign appears to have petered out. The average across the four is a Conservative lead of 11 points, though of course this group is tilted towards the pollsters who show bigger Conservative leads. Taking an average of the most recent poll from all ten pollsters producing regular figures gives a lead of around 10 points.


Below are the polls that have come out since the weekend.

SavantaComRes/Telegraph (2nd-3rd Dec) – CON 42%(-1), LAB 32%(-1), LDEM 12%(-1), BREX 3%(-1) (tabs)
YouGov/Times/Sky (2nd-3rd Dec) – CON 42%(-1), LAB 33%(-1), LDEM 12%(-1), BREX 4%(+2) (tabs)
ICM/Reuters (29th Nov-2nd Dec) – CON 42%(+1), LAB 35%(+1), LDEM 13%(nc), BREX 3%(-1) (tabs)
Kantar (28th Nov-2nd Dec) – CON 44%(+1), LAB 32%(nc), LDEM 15%(+1), BREX 2%(-1) (tabs)
Survation/GMB (26th-30th Nov) – CON 42%(+1), LAB 33%(+3), LDEM 11%(-4), BREX 3%(-2) (tabs)

Last week there appeared to be a consistent narrowing of the Conservative lead across all the polls. That now appears to have come to a halt or, at least, there is no obvious sign of it continuing. Four of the polls published this week have shown no sign of the lead narrowing (and the exception – the Survation poll for Good Morning Britain – was actually conducted last week, at a time when other polls were showing the lead falling). Note that the ComRes poll reflects a change in methodology to prompt for candidate names, something that somewhat unusually led to all the parties falling and “other others” going up by four.

As things stand the polls show a consistent Conservative lead, varying from 6 points (BMG) to 15 points (Opinium), with the average around 10 points. It is hard to be certain what sort of lead the Conservatives need for a majority (it depends on swings in different areas and how they do in the different battlegrounds), but a reasonable assumption is somewhere around 6 or 7 points, meaning that the BMG and ICM polls that show the smallest leads are in territory where an overall majority would be uncertain. All the other polls point towards a Conservative majority.

We should have two more sets of polls before election day – the typical rush of Sunday polls (Opinium, Deltapoll, YouGov, BMG and ComRes all usually release polls on Sundays), and then the pollsters’ final call polls on Tuesday and Wednesday next week.


General election campaigns provoke a lot of attention and criticism of opinion polls. Some of that is sensible and well-informed… and some of it is not. This is about the latter – a response to some of the more common criticisms that I see on social media. Polling methodology is not necessarily easy to understand and, given that many people only take an interest in it around election time, most people have no good reason to know much about it. This will hopefully address some of the more common misapprehensions (or, in those cases where they aren’t entirely wrong, add some useful context).

This Twitter poll has 20,000 responses, TEN TIMES BIGGER than so-called professional polls!

Criticisms about sample size are the oldest and most persistent of polling criticisms. This is unsurprising, given that it is rather counter-intuitive that interviewing only 1,000 people should be enough to get a good steer on what 40,000,000 people think. The response that George Gallup, the founding father of modern polling, used to give is still a good one: “You don’t need to eat a whole bowl of soup to know if it’s too salty; providing it’s properly stirred, a single spoonful will suffice.”

The thing that makes a poll meaningful isn’t so much the sample size, it is whether it is representative or not. That is, does it have the right proportions of men and women, old and young, rich and poor and so on. If it is representative of the wider population in all those ways, then one hopes it will also be representative in terms of opinion. If not, then it won’t be. If you took a sample of 100,000 middle-class homeowners in Surrey then it would be overwhelmingly Tory, regardless of the large sample size. If you took a sample of 100,000 working class people on Merseyside it would be overwhelmingly Labour, regardless of the large sample size. What counts is not the size, it’s whether it’s representative or not. The classic example of this is the 1936 Presidential Election where Gallup made his name – correctly predicting the election using a representative sample when the Literary Digest’s sample of 2.4 million(!) called it wrongly.
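If you prefer to see that numerically, here is a minimal simulation – the response rates are made up purely for illustration – contrasting an enormous self-selecting sample with a small random one:

```python
# Size vs representativeness: a huge biased sample versus a small random one.
import random

random.seed(1)
# "True" population: 45% Con, 35% Lab, 20% other (one million voters).
population = (["Con"] * 45 + ["Lab"] * 35 + ["Oth"] * 20) * 10_000

def biased_sample(n):
    """Self-selecting sample in which Con voters are twice as likely to respond."""
    out = []
    while len(out) < n:
        person = random.choice(population)
        if person != "Con" and random.random() < 0.5:
            continue   # half of non-Con voters never take part
        out.append(person)
    return out

def con_share(sample):
    return sample.count("Con") / len(sample)

print(f"Biased sample of 100,000: Con {con_share(biased_sample(100_000)):.1%}")
print(f"Random sample of 1,000:   Con {con_share(random.sample(population, 1_000)):.1%}")
# The biased sample lands around 62% Con however big it gets; the random
# sample of 1,000 is typically within about three points of the true 45%.
```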

Professional polling companies will sample and weight polls to ensure they are representative. However well intended, Twitter polls will not (indeed, there is no way of doing so, and no way of measuring the demographics of those who have participated).

Who are these pollsters talking to? Everyone I know is voting for party X!

Political support is not evenly distributed across the country. If you live in Liverpool Walton, then the overwhelming majority of other people in your area will be Labour voters. If you live in Christchurch, then the overwhelming majority of your neighbours will likely be Tory. This is further entrenched by our tendency to be friends with people like us – most of your friends will probably be of a roughly similar age and background and, very likely, have similar outlooks and things in common with you, so they are probably more likely to share your political views (plus, unless you make pretty odd conversation with people, you probably don’t know how everyone you know will vote).

An opinion poll will have sought to include a representative sample of people from all parts of the country, with a demographic make-up that matches the country as a whole. Your friendship group probably doesn’t look like that. Besides, unless you think that literally *everyone* is voting for party X, you need to accept that there probably are voters of the other parties out there. You’re just not friends with them.

Polls are done on landlines so don’t include young people

I am not sure why this criticism has resurfaced, but I’ve seen it several times over recent weeks, often widely retweeted. These days the overwhelming majority of opinion polls in Britain are conducted online rather than by telephone. The only companies who regularly conduct GB voting intention polls by phone are Ipsos MORI and Survation. Both of them conduct a large proportion of their interviews using mobile phones.

Polls of single constituencies are still normally conducted by telephone but, again, will conduct a large proportion of their calls on mobile phones. I don’t think anyone has done a voting intention poll on landlines only for well over a decade.

Who takes part in these polls? No one has ever asked me

For the reason above, your chances of being invited to take part in a telephone poll that asks about voting intention are vanishingly small. You could be waiting many, many years for your phone number to be randomly dialled. If you are the sort of person who doesn’t pick up unknown numbers, they’ll never be able to reach you.

Most polls these days are conducted using internet panels (that is, panels of people who have given pollsters permission to email them and ask them to take part in opinion polls). Some companies, like YouGov and Kantar, have their own panels; other companies may buy in sample from providers like Dynata or Toluna. If you are a member of such panels you’ll inevitably be invited to take part in opinion polls. Though of course, remember that the vast majority of surveys tend to be stuff about consumer brands and so on… politics is only a tiny part of the market research world.

The polls only show a lead because pollsters are “Weighting” them, you should look at the raw figures

Weighting is a standard part of polling that everyone does. Standard weighting by demographics is unobjectionable – but is sometimes presented as something suspicious or dodgy. At this election, this has sometimes been because it has been confused with how pollsters account for turnout, which is a more controversial and complicated issue that I’ll return to below.

Ordinary demographic weighting, though, is simply there to ensure that the sample is representative. So for example – we know that the adult British population is about 51% female, 49% male. If the raw sample a poll obtained was 48% female and 52% male then it would have too many men and too few women, and weighting would be used to correct it. Every female respondent would be given a weight of 1.06 (that is, 51/48) and would count as 1.06 of a person in the final results. Every male respondent would be given a weight of 0.94 (that is, 49/52) and would count as 0.94 of a person in the final results. Once weighted, the sample would be 51% female and 49% male.

Actual weighting is more complicated than this, because samples are weighted by multiple factors – age, gender, region, social class, education, past vote and so on. The principle, however, is the same – it is just a way of correcting a sample that has the wrong proportions of people compared to the known demographics of the British population.
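As a worked version of the example above, the gender weighting looks like this in code. (Real polls weight on several variables at once, usually through iterative “rim weighting” or raking, but the single-factor principle is the same.)

```python
# Single-factor demographic weighting, using the gender example above.

target = {"female": 0.51, "male": 0.49}   # known population shares
sample = {"female": 0.48, "male": 0.52}   # shares in the raw sample

# Weight = target share / sample share for each group.
weights = {group: target[group] / sample[group] for group in target}
print(weights)   # {'female': 1.0625, 'male': 0.9423...}

# Each female respondent counts as ~1.06 people and each male as ~0.94,
# so the weighted sample matches the population: 51% female, 49% male.
weighted_female_share = sample["female"] * weights["female"]
print(f"Weighted female share: {weighted_female_share:.0%}")   # 51%
```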

Polls assume young people won’t vote

This is a far more understandable criticism, but one that is probably wrong.

It’s understandable because it is part of what went wrong with the polls in 2017. Many polling companies adopted new turnout models that did indeed make assumptions about whether people would vote or not based upon their age. While it wasn’t the case across the board, in 2017 companies like ComRes, ICM and MORI did assume that young people were less likely to vote and weighted them down. The way they did this contributed to those polls understating Labour support (I’ve written about it in more depth here).

Naturally, people looking for explanations for the differences between polls this time round have jumped on this as a possible cause. This is where it goes wrong. Almost all the companies who were using age-based turnout models dumped those models straight after the 2017 election and went back to basing their turnout models primarily on how likely respondents say they are to vote. Put simply, polls are not making assumptions about whether different age groups will vote or not – differences in likelihood to vote between age groups will be down to people in some age groups telling pollsters they are less likely to vote than people in other age groups.
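For illustration only – this is a generic sketch, not any particular pollster’s model – a turnout adjustment based on self-reported likelihood to vote might simply count each respondent in proportion to the 0-10 score they give themselves:

```python
# Minimal sketch of a self-reported turnout weight: each respondent's vote
# counts in proportion to their stated 0-10 likelihood of voting.

respondents = [
    # (stated likelihood to vote 0-10, current vote intention) - invented data
    (10, "Con"), (10, "Lab"), (9, "Con"), (7, "Lab"),
    (5, "Lab"), (10, "Con"), (3, "Lab"), (8, "LD"),
]

weighted = {}
total = 0.0
for likelihood, party in respondents:
    w = likelihood / 10          # turnout weight taken from the respondent's own answer
    weighted[party] = weighted.get(party, 0.0) + w
    total += w

for party, score in sorted(weighted.items(), key=lambda kv: -kv[1]):
    print(f"{party}: {score / total:.0%}")
# Unweighted, this toy sample splits Con 38 / Lab 50 / LD 13; because the
# Labour respondents here report lower likelihoods, the weighted figures
# come out around Con 47 / Lab 40 / LD 13.
```

The point is that any age differences in turnout that emerge from this kind of model come from what respondents themselves say, not from an assumption imposed by the pollster.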

The main exception to this is Kantar, who do still include age in their turnout model, so can fairly be said to be assuming that young people are less likely to vote than old people. They kept the method because, for them, it worked well (they were one of the more accurate companies at the 2017 election).

Some of the criticism of Kantar’s turnout model (and of the relative turnout levels in other companies’ polls) is based on comparing the implied turnout in their polls with turnout estimates published straight after the 2017 election, which were themselves based on polls done during the 2017 campaign. Compared to those figures, the turnout for young people may look a bit low. However, there are much better estimates of turnout in 2017 from the British Election Study, which has validated turnout data (that is, rather than just asking if people voted, they look their respondents up on the marked electoral register and see whether they actually voted) – these figures are available here, and this is the data Kantar uses in their model. Compared to these figures, the levels of turnout in Kantar’s and other companies’ polls look perfectly reasonable.

Pollster X is biased!

Another extremely common criticism. It is true that some pollsters show figures that are consistently better or worse for a party. These are known as “house effects” and can be explained by methodological differences (such as what weights they use, or how they deal with turnout), rather than some sort of bias. It is in the strong commercial interests of all polling companies to be as accurate as possible, so it would be self-defeating for them to be biased.

The frequency of this criticism has always baffled me, given how absurd it seems to anyone in the industry. The leading market research companies are large, multi-million pound corporations. Ipsos, YouGov and WPP (Kantar’s parent company) are publicly listed companies – they are owned largely by institutional shareholders, and the vast bulk of their profits come from non-political commercial research. They are not the personal playthings of the political whims of their CEOs, and the idea that people like Didier Truchot ring up their UK political team and ask them to shove a bit on the figures to make the party they support look better is tin-foil hat territory.

Market research companies sell themselves on their accuracy, not on telling people what they want to hear. Political polling is done as a shop window, a way of getting name recognition and (all being well) a reputation for solid, accurate research. They have extremely strong commercial and financial reasons to strive for accuracy, and pretty much nothing to be gained by being deliberately wrong.

Polls are always wrong

And yet there have been several instances of the polls being wrong of late, though this is somewhat overegged. The common perception is that the polls were wrong in 2015 (indeed, they nearly all were), at the 2016 referendum (some of them were wrong, some of them were correct – but the media paid more attention to the wrong ones), at Donald Trump’s election (the national polls were actually correct, but some key state polls were wrong, so Trump’s victory in the electoral college wasn’t predicted), and in the 2017 election (most were wrong, a few were right).

You should not take polls as gospel. It is obviously possible for them to be wrong – recent history demonstrates that all too well. However, they are probably the best way we have of measuring public opinion, so if you want a steer on how Britain is likely to vote it would be foolish to dismiss them totally.

What I would advise against is assuming that they are likely to be wrong in the same direction as last time, or in the direction you would like them to be. As discussed above, the methods that caused the understatement of Labour support in 2017 have largely been abandoned, so the specific error that happened in 2017 is extremely unlikely to reoccur. That does not mean polls couldn’t be wrong in different ways, but it is worth considering that the vast majority of previous errors have been in the opposite direction, and that polls in the UK have tended to overstate Labour support. Do not assume that polls being wrong automatically means understating Labour.