The tabs for the weekly YouGov poll for the Sunday Times are now up here. Topline figures are CON 32%, LAB 36%, LDEM 8%, UKIP 13%, GRN 6%. The poll was conducted on Friday and Saturday, and the four point Labour lead equals the highest this year, so it looks like it could be an impact from the Paxman interviews. Then again, YouGov spat out a single four point Labour lead in one of their daily polls earlier this month that turned out to be just random noise, so this is nothing that couldn’t just be random error. To have any confidence about whether anything actually has changed in terms of voting intention, we need more polling.

In the meantime, what does the rest of the poll show? Well, leadership ratings do also suggest an improvement for Miliband. Asked if they are doing well or badly David Cameron’s net rating is up from minus 5 last week to minus 2 this week. Ed Miliband though is up from minus 39 to minus 29, so a solid jump (that said, Nick Clegg is up from minus 47 to minus 40 without being in the interviews at all…). Miliband also rose in the Best PM question – up four points since YouGov last asked this version of the question in November last year, but still 12 points behind Cameron (when YouGov ask the question for the Sun it’s Cameron v Miliband v Clegg, for the Sunday Times Farage is also an option – don’t compare the two, they give different results).

On the debate question itself, amongst people who watched the debates 49% of people thought Miliband came across better, 34% thought David Cameron did. This is of course very much in line with a movement to Labour in the headline voting intention figures… but why so different from the ICM poll after the debate? Part of the answer may well be that people have had longer to digest it, think about it and be influenced by discussing it with other people. First reactions are extremely important, but they aren’t everything.

Another factor though is who watched the debate – the ICM poll was weighted to be politically representative (though even weighted, the poll still ended with a sample showing an 11 point Labour lead rather than Con and Lab neck and neck), but a debate doesn’t necessarily get watched by a representative sample of the public. People from one party may be more likely than those from another to watch it. Looking at the YouGov data, 31% of people who voted Labour in 2010 watched the debate, but only 15% of people who voted Tory… so the sub-sample of people who watched the debate was a very Laboury group of people to begin with. This highlights a methodological challenge for pollsters in doing things like debate polls: how do you weight the sample? Do you try to make it politically and/or demographically representative of the country as a whole, regardless of who is actually watching? Or do you try to make it representative of the people who are actually watching, regardless of the political skews that brings? The second is probably the methodologically purer option – all you can *really* measure is what the people who watched think, but given that what the media want is just a crude “who won” verdict, would it be fair to start out with a sample that was strongly biased one way or the other?
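To make the two weighting choices concrete, here is a minimal sketch. The only figures taken from the poll are the differential viewing rates (31% of 2010 Labour voters watched against 15% of Tories); the sub-sample counts and the target vote shares are invented for illustration, and this simple cell-weighting approach is a generic sketch, not YouGov’s actual weighting scheme.

```python
# Hypothetical unweighted sub-sample of debate watchers, by 2010 vote
# (counts are made up, reflecting the Labour-heavy skew described above).
watchers = {"Con": 150, "Lab": 310, "Other": 140}
total = sum(watchers.values())

# Option 1: weight back to a politically representative target.
# These target shares are assumed for illustration only.
target = {"Con": 0.42, "Lab": 0.34, "Other": 0.24}

# Each group's weight is its target share divided by its share of the sample.
weights = {k: target[k] / (watchers[k] / total) for k in watchers}

# Option 2 would simply leave every respondent at weight 1.0 and report
# only what the (skewed) audience of actual watchers thought.

for k, w in weights.items():
    print(f"{k}: weight {w:.2f}")
```

Under option 1 the under-represented Tory watchers get weighted up (here to 1.68) and the over-represented Labour watchers weighted down (to 0.66), which is exactly the choice between representing the country and representing the audience.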

Anyway, time will tell if the Paxman interviews actually did or did not make any difference. On other matters, YouGov found 11% of people said they were voting tactically at the election. Amongst that (obviously very small) sample people were pretty evenly split between voting tactically against the Tories (40%) and voting tactically against Labour (37%).

In my weekly round up I mentioned some YouGov polling about which taxes would rise under a Labour or Conservative government, conducted before Prime Minister’s Questions, where David Cameron ruled out a VAT rise and Ed Balls ruled out an NI rise. YouGov repeated those questions in this poll to see if the answers had changed. At the start of the week, 31% of people thought VAT would rise if the Conservatives won. Following David Cameron ruling out a rise in VAT, this is now… 32%. At the start of the week 39% of people expected national insurance to rise if Labour won, but since Ed Balls ruled it out, that has changed to… 40%. A lovely illustration of how many of the politicians’ arguments, exchanges and pledges make not the slightest difference to public opinion.


303 Responses to “More from today’s Sunday Times poll”

  1. @unicorn

    I have used TheySay for a customer project – and recommend it highly. Their technology is genuinely state of the art.

    But without meaning to write an essay about automated sentiment analysis, it’s very much governed by the garbage-in, garbage-out principle – and as input, Twitter data isn’t of brilliant quality.

    Think of it this way – at the sort of scale at which this type of analysis operates, there is no way of creating a representative sample (even setting aside that limiting the sample to comments on Twitter compromises representativeness to begin with).

    I would say that you should feel confident that the technology is delivering a sensible read on the sentiment itself (this wasn’t always the case – I have worked with technology barely more accurate than a coin toss in the past).

    Beyond that point, it’s all about the sample – and while for example Facebook is a lot more representative across age and social class than people sometimes think, Twitter has a distinct skew in the UK – professional adults in higher income bands.

    Unhelpfully, it’s quite difficult to map that skew definitively. Most knowledge of the composition of Twitter users comes from primary audience research. The network itself exposes quite limited information about its users – perhaps 10–15% of tweets are geographically identifiable, and any additional insights about, for example, age or gender are only really accessible through still more text analytics technology (e.g. software that scrapes real names from profiles and matches them against databases of male and female names).
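The name-matching technique mentioned there can be sketched very simply. This is a toy illustration only: the name sets below are tiny invented examples, where real systems match against large curated name databases and handle ambiguity far more carefully.

```python
# Toy sketch of name-based gender inference from a profile display name.
# Real systems use large name databases; these sets are illustrative only.
MALE_NAMES = {"david", "ed", "nick", "nigel"}
FEMALE_NAMES = {"nicola", "theresa", "harriet"}

def infer_gender(profile_name: str) -> str:
    """Guess gender from the first token of a display name, else 'unknown'."""
    tokens = profile_name.strip().split()
    if not tokens:
        return "unknown"
    first = tokens[0].lower()
    if first in MALE_NAMES:
        return "male"
    if first in FEMALE_NAMES:
        return "female"
    return "unknown"

print(infer_gender("Ed Miliband"))   # male
print(infer_gender("PollWatcher99"))  # unknown
```

The large "unknown" bucket in the second case is the practical problem: most Twitter handles don’t carry a matchable real name, which is why the demographic skew is so hard to pin down.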

    That’s not to say that I’d argue there is a political bias in a particular direction – if there is, I have no idea which way it skews. The problem is that no one else really does either.

    Rather – with the knowledge that we have of the demographics and behaviours of individuals that are likely to participate in event chats on Twitter, we can be quite sure that it isn’t representative in the same way a balanced poll would be, and it would consequently be brave to use this type of approach to try to predict voting intention.

    As long as you define victory in a very prescriptive way – a count of the ‘cheers or boos’ received for either party – TheySay gives a solid reflection of who ‘won’ the event, in the sense that it does a good job of (i) identifying whether EM or DC is being talked about and (ii) attributing sentiment to that mention (even when the language is ambiguous).

    Another positive is that it works in virtually real time – meaning that the impact of the post-event analysis is greatly reduced (people are getting better at exerting influence in real time – but I’d argue you’d still expect the reading to be a bit more reflective of a commenter’s own authentic opinion having watched the event than if you were to aggregate social media sentiment a week later).

    I therefore recommend that you take what TheySay say at face value, but don’t attempt to stretch things too much further.

  2. @Politicianado: “Very odd how different ComRes and today’s YouGov are. Could just be MoE but an 8 point difference is big”

    It’s not 8 though – when people quote the margin of error, it applies to each party’s score separately. In other words, both YouGov and ComRes have both Lab and Con well within the normal margin of error of Con 34, Lab 34.
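To put rough numbers on that, here is a sketch of the standard 95% margin-of-error calculation for a single percentage score. The sample size of 1,700 is an assumed, illustrative figure for a typical online poll, not the actual size of either the YouGov or ComRes sample, and this simple-random-sampling formula ignores design effects from weighting.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a single proportion p (0-1) on sample size n."""
    return z * math.sqrt(p * (1 - p) / n)

n = 1700  # assumed, illustrative sample size
for p in (0.32, 0.36):
    print(f"score {p:.0%}: +/- {margin_of_error(p, n):.1%}")
```

With each score carrying a margin of roughly ±2.3 points, a Con 32 / Lab 36 and a Con 34 / Lab 34 result are both consistent with the same underlying position.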

  3. AW
    “both YouGov and ComRes have both Lab and Con well within the normal margin of error of Con 34, Lab 34”

    But surely the test of whether the YouGov 36 is an outlier or an indication of a trend is made more accurately by reference to previous and subsequent YouGov polls than to the findings of other pollsters, including ComRes?
