The Boris bandwagon rolls on, and an ICM poll for the Sunday Telegraph tonight apparently has another question trying to measure whether the Conservatives would do better with Boris Johnson as leader. There are two things to consider with hypothetical “who would you vote for if X was leader” questions.

The first is that they need to be exactly comparable. The difference between voting intention with two different people as leader of a party is often only a few points. However, adjustments like weighting by likelihood to vote or reallocating don’t knows can also make a couple of points’ difference, so if you want to be confident the difference is due to the leader, they need to be done in exactly the same way. If the main figures are weighted or filtered by likelihood to vote, the hypothetical questions need to be weighted by likelihood to vote too (ideally asked separately); if there is a squeeze question or don’t knows are reallocated in the main question, the same needs to happen in the hypothetical questions.

Trickier to control for is the question itself. Normal voting intention questions don’t mention the party leaders, so if asking how people would vote with Boris as Tory leader increases the Tory vote by 2 points, we can’t conclude that he’d do better than Cameron without checking that mentioning David Cameron as Tory leader in the question wouldn’t do the same. This is why when YouGov run the questions they ask a control question including the names of the current party leaders.

The second thing to consider is quite how hypothetical these questions are! In many cases we are asking about politicians who the general public know very little about – apart from very well-known politicians like party leaders and Chancellors of the Exchequer, many other ministers – even cabinet ministers – are almost complete unknowns to the majority of people. Even when a politician is relatively well known, like Gordon Brown pre-2007 or Boris Johnson now, people answering questions like this don’t know what they would do as party leader, what sort of mission and narrative they’d set out, what policy priorities they’d follow, and all these things could change how they are viewed.

However, that doesn’t necessarily mean questions like this are never useful. Back before Gordon Brown became Labour leader, polls like this consistently showed him doing less well than Tony Blair. At the time I made all the same caveats as above, but said that in the specific context of Gordon Brown it probably was showing that Brown would do badly, because of why people gave him negative ratings. The polls said people saw him as competent and efficient and capable… but they didn’t like him. If people had seen Brown as incompetent or inexperienced he could have changed those impressions in office, but impressions of his competence were already positive. The polls were telling us that his problem was a negative that was difficult to change: simply not being likeable.

So to Boris. What can we tell from hypothetical polls about him? Well, I haven’t seen the ICM poll yet, but YouGov have done two hypothetical polls about him. The first, in May, showed Boris doing basically the same as David Cameron. The second, a week or so ago, had Boris doing 5 points better than Cameron, presumably because of the effect the Olympics has had on how Boris is seen. We shall see if ICM shows the same sort of pattern.

Is this really meaningful? Well, just as Gordon Brown seemed to do badly simply because people didn’t warm to him personally, Boris Johnson seems to be the opposite case – he seems to do well because he is likeable and eccentric. It’s an open question to what extent that would transfer were he to become Prime Minister or Conservative leader – a politician’s ability to come across as likeable and to connect with people seems to be innate to some degree, so would probably benefit Boris in any role. On the other hand, being seen as a bit of a buffoon is not necessarily on the job description of PM. Would something that seems like a wizard prank in a hypothetical opinion poll seem rather less funny in an actual election? We don’t know.

A more concrete caveat to keep in mind is that all these Boris questions are being asked in the midst of the London Olympics, Boris’s big moment in the sun. Before the Olympics the polls didn’t suggest Boris would do any better than Cameron. I’d wait until the publicity around the Olympics fades before drawing any long-term conclusions…


A month ago I wrote a guide to How Not to Report Opinion Polls. I have a history of starting regular features and then failing miserably to deliver them, but I am at least going to try to come back to it at the end of every month and highlight particularly poor reporting in the weeks just gone by.

For July I’m going to start with this report from the Independent, claiming that a ComRes poll shows “Dramatic change as two-thirds now support GM crop testing” and that “Public opinion appears to be shifting in favour of the development of genetically-modified crops, according to a ComRes survey for The Independent”.

ComRes found 64% of people agreed with a statement that “Experiments to develop genetically-modified crops should be encouraged by the Government so that farmers can reduce the amount of pesticides they use”. However, the article doesn’t mention any past results that it can be compared to in order to justify the claim that there has been a dramatic turnaround in support for GM crops.

Historical polling on the issue by MORI here does indeed suggest a much lower level of support for GM crops, but these differences could easily be explained by the question wording – MORI was asking things like “How strongly, if at all, would you say you support or oppose genetically modified food?” while ComRes’s statement specifically links the development of GM crops to a positive outcome of reducing the use of pesticides.

To see the impact asking different questions can make, look at this more extensive polling on the issue by Populus. Asked a generic question on whether or not GM food should be encouraged, 27% of people say yes, 30% no. However, if you look down the survey to page 38 it asks specifically about whether people are supportive of using GM wheat to repel aphids and reduce the need for pesticides, and finds 58% of people are supportive of this specific use. It seems plausible that the reason ComRes found such high support is not because of some great shift in support, but because their question specifically mentioned a popular potential outcome from GM.

If we want to see whether or not public support for GM actually is growing, we need a question that has been asked consistently over time. This is surprisingly difficult to find – MORI don’t seem to have asked the question above again since 2004. The best I can track down is the Eurobarometer polling here, which every 3-5 years has asked if people agree that GM food should be encouraged. As you can see from the table on the first page, there is no obvious trend in the UK’s answers: support for encouraging GM food has moved from 45% to 25% to 35% over the years. Certainly the picture it shows is not one of a strong trend towards people supporting GM food.

(The Populus poll, incidentally, asked a similar question to the Eurobarometer question, but they can’t be directly compared either, not least because 43% of people told Populus they “neither agree nor disagree”, an option that the Eurobarometer did not offer. This, in turn, was misreported by the Daily Mail back in March.)

Once again, the lesson is to look at the polls in the round, not to take a single finding out of context, especially when that question is one that is likely to put an issue in a particularly good or bad light. If you are looking at trends over time, you should always compare apples with apples. If two significantly different questions give different results it is as likely to be down to different wording as to a change in opinion, especially in cases like this.



Olympic poll boosts

No – don’t get excited – tonight’s poll doesn’t show one. Today’s YouGov poll for the Sun has perfectly normal topline figures of CON 34%, LAB 42%, LDEM 10%, UKIP 6%. Once again it is well within the range of the 9-10 point leads that YouGov have been showing for the past couple of months.

As yet there is no sign of any Olympic effect. I wouldn’t necessarily expect one, but I wouldn’t rule one out either, in the same way we saw a (brief) Jubilee effect straight after the Jubilee weekend. Two things to remember:

First, why these things happen. After the Jubilee I saw several comments saying how absurd it was that it affected the polls. Why would someone think “Oh look, the Queen has been there a long time, better vote Tory”? Well, that would be absurd, but the reasons things like this can affect the polls are more straightforward. First there is a general feel-good factor – if people feel generally more positive about the country and the way things are going they may be more likely to support the incumbent government. Secondly, and in my opinion probably more significant, is the absence of bad news – looking back over recent months, an average month for the government has at least a few party rows, a couple of controversial policies, a smattering of bad news, perhaps a rebellion and, on recent form, a U-turn or two. With the Olympics totally dominating all news coverage the next couple of weeks will have significantly fewer of all those things, so you can imagine how the absence of bad news may have an effect.

Secondly, remember that if there is an Olympic effect on the polls it will probably be temporary. The Jubilee effect, if it ever existed, only lasted a couple of days. A positive feeling from a big national event fades; once the Olympics and silly season are over, the normal news agenda and the trail of bad news stories that governments have to cope with will resume. If the Olympics does have an effect, it is unlikely to be long-lasting.

UPDATE: I haven’t seen it mentioned on Twitter or by the Indy, but the voting intention figures from ComRes’s monthly poll have now appeared on their website here. Topline figures, with changes from their previous telephone poll a month ago, are CON 33%(nc), LAB 44%(+2), LDEM 10%(-3), Others 13%(+1). Certainly no sign of any Olympic boost there either!


Ed Miliband’s rising approval figures in the polls have led to some reassessment in the commentariat. A lot of it is predictably rather coloured by wishful thinking – with some honourable exceptions, there are a lot of right-wing commentators who are still convinced that he is a leader the voters do not see as up to the job and that this is an almost insurmountable obstacle for Labour, and a lot of people on the left who think either that he is now the apple of the public’s eye, or that perceptions of the party leaders are at most a side issue, if not entirely irrelevant to how people vote.

At the extremes both are wrong. People who say that it is impossible for Labour to win with Ed Miliband are mistaken – the leader is but one factor in voting intention and there are clearly many others; it is perfectly possible for a party to win despite having a duff leader. It would be equally wrong to say that leader perceptions are not a factor at all – we can be relatively certain from key driver analysis of recent British Election Studies that perceptions of the party leaders are a major driver of voting intention. It is as much wishful thinking to dismiss the problem for Labour as it is to pretend it is insurmountable.

Let’s first try to identify the problem. It is easy to cherry-pick good and bad results for party leaders; all politicians have strengths and weaknesses. For example, on the up side Ed Miliband is seen as the most in touch of the party leaders, is seen as far more likely than the other leaders to care about the problems of ordinary people, and often leads when people are asked how well the party leaders are currently doing at their jobs. On the downside, he is also seen as weak and not up to the job, and people don’t think he looks like a potential Prime Minister. All of this adds colour and understanding to WHY a party leader is seen positively or negatively, but doesn’t get us to the core question of whether they are a positive or negative for their party. Let’s see if we can find some questions where we really can benchmark a leader against their party.

First there is the comparison between leader ratings and voting intention. Labour have a lead of around about 10 points in the polls, and yet David Cameron has a lead of around about 10 points as best Prime Minister. It is perfectly normal for the governing party to do better in Best PM than in voting intention because the sitting PM has the benefit of incumbency (it’s easier for people to see them as Prime Minister), but this is an unusually large gap. Below is the Conservative poll lead (or deficit) since YouGov started regularly tracking both figures in 2003, put alongside the Conservative leader’s lead (or deficit) on the measure of best PM.

You can see IDS lagged significantly behind his party (on average he was doing 16 points worse than his party). Things improved under Michael Howard, who only lagged 7 points behind his party. That shrank to 5 points when David Cameron took over, and once Gordon Brown replaced Blair the Conservative lead in voting intention was almost identical to the Conservative lead as best Prime Minister. Now look at what happens once Ed Miliband takes over as Labour leader. Labour are the opposition now so the lines are reversed, as we would expect, but look at how far Miliband lags behind his party. The average gap is 18 points.

Let’s take another measure, Ipsos MORI have a great question on whether people like both the party and the leader, like the party but not its leader, like the leader but not his party, or don’t like either of them. Tracking data for it is here. If you look back to the last Parliament people consistently said they liked the Labour party more than Gordon Brown (in April 2010 43% said they liked Labour, but only 37% liked Brown) – he was a drag on his party. With David Cameron it was the other way round, in April 2010 53% said they liked Cameron, but only 38% liked the Tory party. Cameron was a positive for his party (note that even then more people liked the Labour party than the Conservatives!).

MORI have only asked the question once about Ed Miliband, well over a year ago, but ComRes asked an almost identical question this April. They found that David Cameron’s advantage over the Tory party had vanished, now 37% of people liked the Tories, 38% liked Cameron. Ed Miliband’s figures though looked worse than Brown’s – 45% of people said they liked Labour, but only 22% said they liked Miliband, so he trails his party by 23 points. A majority of people who said they liked Labour said they didn’t like Ed Miliband.

Another straw in the wind: back in May YouGov did a hypothetical poll asking how people would vote if Boris was Tory leader at the next election. As a control, they asked how people would vote at the next election if the party leaders remained David Cameron, Ed Miliband and Nick Clegg. The result was that a normal voting intention figure of CON 31%, LAB 43%, LD 9% became CON 32%, LAB 40%, LD 10% once you mentioned Cameron, Miliband and Clegg as the party leaders.

But even if Ed Miliband is less popular than the party he leads, does it matter? The first thing to note is that these opinions are already there, they are already factored into Labour’s price, and yet Labour are ahead by 10 points. Clearly Labour are perfectly capable of getting the most votes with Ed Miliband as their leader.

However, that doesn’t mean Ed isn’t costing the Labour party votes. Just because Labour have a good lead in the polls doesn’t mean they couldn’t have a bigger one. Going back to that ComRes poll, 82% of people who say they like Labour AND like Ed Miliband said they’d vote Labour tomorrow; amongst those who like Labour but do NOT like Ed Miliband that figure falls to 61%.

While Labour have a ten point lead in the polls this doesn’t matter that much. I doubt Prime Minister Miliband would arrive in 10 Downing Street, throw himself on the bed and cry himself to sleep because no one likes him and he only won by 10 percentage points. However, mid-term opposition leads in the opinion polls have a tendency to fade as elections approach, and a nine-or-ten point buffer mid-term may be far less comfortable come polling day.

What should be far more worrying for Labour is if things like leader perception become more important closer to elections. We all know the pattern of mid-term blues, of oppositions doing better in the polls and in mid-term elections in the middle of Parliaments. Well, why is that? Part of it is due to the actions of political parties. Governments do unpopular things early in the Parliament and save nicer, more populist things for the end of the Parliament when they need the votes. Oppositions are policy-lite early in the Parliament and paint themselves as all things to all men; later on in the Parliament they must come off the fence and disappoint some people.

I suspect part of it, though, is how people think about voting intention and answer polls – right now a lot of voting intention is simply disapproval of the government; telling a pollster you’ll vote Labour is the way people indicate their unhappiness with it. As we get closer to an actual general election it becomes more of a choice between alternative governments – which one would I prefer?

The reason that Ed Miliband lags behind the Labour party in polls is that there are a substantial number of people who say they’ll vote Labour, but on other questions say they aren’t sure who would be the best Prime Minister, or which party they’d trust most on the economy. It suggests that support may be pretty shallow and liable to fade once the general election approaches. Of course, there is plenty of time until the election – time to firm up that support, time, as the vernacular used to be in the last Parliament, for Labour to “seal the deal”.

In short: is Ed Miliband a drag on Labour? Yes, he probably is. Can Labour win with him as leader despite that? Yes, it is certainly possible. Will he become even more of a drag as the election approaches and minds are focused on choice of government, rather than anti-government protest? The jury is still out.


Last week, while sharing despair at Twitter throwing itself into another frenzy over a crossbreak of fewer than fifty respondents, Sunder Katwala suggested to me that it might be a good idea to put together a post summarising the things not to do when writing about polls. I thought that was a good idea. This probably isn’t the sort of post Sunder was thinking of – I expect he envisaged something shorter – but nevertheless, here’s how NOT to report opinion polls.

1) Don’t report voodoo polls

For a poll to be useful it needs to be representative. 1000 people represent only themselves; we can only assume their views represent the whole of Britain if the poll is sampled and weighted in a way that reflects the whole of Britain (or whatever other country you are polling). At a crude level, the poll needs to have the right proportions of people in terms of gender, age, social class, region and so on.
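To make that concrete, here is a minimal sketch of the kind of demographic weighting involved – the population targets and raw sample counts are made up for illustration, not taken from any real poll. Respondents in under-represented groups are counted slightly more, and those in over-represented groups slightly less.

```python
# A minimal sketch of demographic weighting. The targets and counts
# below are illustrative assumptions, not real data.
population_share = {"male": 0.49, "female": 0.51}  # assumed population targets
sample_counts = {"male": 560, "female": 440}       # hypothetical raw sample of 1000

n = sum(sample_counts.values())
weights = {group: population_share[group] / (sample_counts[group] / n)
           for group in sample_counts}

# Each male respondent counts for ~0.88 of a response and each female
# respondent for ~1.16, restoring the population balance overall.
print(weights)
```

Real polls weight on several variables at once, but the principle is the same.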

Legitimate polls are conducted in two main ways: random sampling, and quota sampling (where the pollster designs a sample and then recruits respondents to fill it, getting the correct number of Northern working-class women, Midlands pensioners, etc). In practice true random sampling is impossible, so most pollsters’ methods are a bit of a mixture of the two.

Open-access polls (pejoratively called “voodoo polls”) are sometimes mistakenly reported as proper polls. These are the sort of instant polls displayed on newspaper websites or run by pushing the red button on digital TV, where anyone who wishes to can take part. There are no sampling or weighting controls, so a voodoo poll may, for example, have a sample that is far too affluent, or educated, or interested in politics. If the poll was conducted on a campaign website, or a website that appeals to people of a particular viewpoint, it will be skewed attitudinally too.

More importantly there are no controls on who takes part, so people with strong views on the issue are more likely to participate, and partisan campaigns or supporters on Twitter can deliberately direct people towards the poll to skew the results. Polls that do not sample or weight to get a proper sample or that are open-access and allow anyone to take part should never be reported as representing public opinion.

Few people would mistake “instant polls” on newspaper websites for properly conducted polls, but there are many instances of open-access surveys on specialist websites or publications (e.g. Mumsnet, PinkNews, etc) being reported as if they were properly representative polls of mothers, LGBT people, etc, rather than non-representative open-access polls.

Case study: The Observer reporting an open-access poll from the website of a campaign against the government’s NHS reforms as if it was representative of the views of members of the Royal College of Physicians; the Express miraculously finding that 99% of people who bothered to ring up an Express voting line wanted to leave Europe; The Independent reporting an open-access poll of Netmums in 2010.

2) Remember polls have a margin of error

Most polling companies quote a margin of error of around about plus or minus 3 points. Technically this is based on a pure random sample of 1000 and doesn’t account for other factors like design and degree of weighting, but it is generally a good rule of thumb. What it means is that 19 times out of 20 the figure in a poll will be within 3 percentage points of what the “true” figure would be if you’d surveyed the entire population.
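For readers who want the arithmetic behind that rule of thumb, here is a minimal sketch of the standard calculation, assuming a pure random sample and the worst-case 50/50 split – neither of which, as noted above, exactly holds for real weighted polls.

```python
import math

# Standard 95% confidence margin of error for a simple random sample:
# z * sqrt(p * (1 - p) / n), with p = 0.5 as the worst case.
def margin_of_error(n, p=0.5, z=1.96):
    return z * math.sqrt(p * (1 - p) / n) * 100  # in percentage points

print(round(margin_of_error(1000), 1))  # -> 3.1, the familiar "plus or minus 3"
```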

What it means when reporting polls is that a change of a few percentage points doesn’t necessarily mean anything – it could very well just be down to normal sample variation within the margin of error. A poll showing Labour up 2 points, or the Conservatives down 2 points does not by itself indicate any change in public opinion.

Unless there has been some sort of seismic political event, the vast majority of voting intention polls do not show changes outside the margin of error. This means that, taken alone, they are singularly unnewsworthy. The correct way to look at voting intention polls is, therefore, to look at the broad range of ALL the opinion polls and whether there are consistent trends. Another way is to take averages over time to even out the volatility.

One poll showing the Conservatives up 2 points is meaningless. If four or five polls are all showing the Conservatives up by 2 points, then it is likely that there is a genuine increase in their support.
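As a toy illustration of the averaging approach mentioned above, here is a short sketch of a five-poll moving average – the lead figures are invented for the example.

```python
from statistics import mean

# Hypothetical Labour leads from ten successive polls (invented numbers).
leads = [9, 11, 8, 10, 12, 9, 10, 13, 9, 10]

# Average each poll with the four before it to smooth out sampling noise.
window = 5
smoothed = [mean(leads[i - window + 1:i + 1])
            for i in range(window - 1, len(leads))]

print(smoothed)  # the averaged series moves far less than individual polls
```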

Case study: There are almost too many to mention, but I will pick out the Guardian’s reporting of their January 2012 ICM poll, which described the Conservatives as “soaring” in the polls after rising three points. Newspapers do this all the time, of course, and Tom Clark normally does a good job writing up ICM polls… I’m afraid I’m picking this one out because of the hubris the Guardian displayed in their editorial the same day, when they wrote “this is not a rogue result. Rogue polls are very rare. Most polls currently put the Tories ahead. A weekend YouGov poll produced a very similar result to today’s ICM, with another five-point Tory lead. So the polls are broadly right. And today’s poll is right. Better get used to it.”

It was sound advice not to hand-wave away polls that bring unwelcome news, but unfortunately in this case the poll probably was an outlier! Those two polls showing a five point lead were the only ones in the whole of January to show such big Tory leads; the rest of the month’s polls showed the parties basically neck-and-neck – as did ICM’s December poll before, and their February poll afterwards. Naturally the Guardian didn’t write up the February poll as “reversion to mean after wacky sample last month”, but as “Conservative support shrinks as voters turn against NHS bill”. The bigger picture was that party support was pretty much steady throughout January and February 2012, with a slight drift away from the Tories as the European veto effect faded. The rollercoaster ride of public opinion that the Guardian’s reporting of ICM implied never happened.

3) Beware cross breaks and small sample sizes

A poll of 1000 people has a margin of error of about plus or minus three points. However, smaller sample sizes have bigger margins of error. Where this is most important to note is in cross-breaks. A poll of 1000 people in Great Britain as a whole might have fewer than 100 people aged under 25 or living in Scotland. A crossbreak made up of only 100 people has a margin of error of plus or minus ten points. Crossbreaks of under 100 people should be treated with extreme caution; those under 50 should be ignored.
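Plugging different sample sizes into the same margin-of-error formula sketched earlier shows how quickly the uncertainty grows as a crossbreak shrinks (again assuming the worst-case 50/50 split).

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    return z * math.sqrt(p * (1 - p) / n) * 100  # percentage points

# Margin of error balloons as the (sub)sample gets smaller.
for n in (1000, 400, 100, 50):
    print(f"n={n}: +/- {margin_of_error(n):.1f} points")
# n=1000: +/- 3.1, n=400: +/- 4.9, n=100: +/- 9.8, n=50: +/- 13.9
```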

An additional factor is that polls are weighted so that they are representative overall. It does not necessarily follow that cross-breaks will be internally representative. For example, a poll could have the correct number of Labour supporters overall, but have too many in London and too few in Scotland.

You should be very cautious about reading too much into small crossbreaks. Even if two crossbreaks appear to show a large contrast between two social groups, if they are within each other’s margins of error this may be pure sample variation.

Be particularly cautious about national polls that claim to say something about the views of ethnic or religious minorities. In a standard GB poll the number of ethnic minority respondents is too small to provide any meaningful findings. It is possible that a poll has deliberately oversampled these groups to get meaningful findings, but there have been several instances of news articles based on the extremely small religious or ethnic subsamples in normal polls.

Extreme caution should also be applied to crossbreaks on voting intention. With voting intention, small differences of a few percentage points take on great significance, so figures based on small sample sizes that are not internally weighted are virtually useless. Voting intention crossbreaks may reveal interesting trends over time, but in a single poll they are best ignored.

Case study: Again, this is a common failing, but the most extreme examples are reports taking figures for religious minorities. Take, for example, this report of an ICM poll for the BBC in 2005 – the report says that Jews are the least likely to attend religious services, and that 31% of Jews said they knew nothing about their faith. These figures were based on a sample of FIVE Jewish respondents. Here is the Telegraph making a similar error in 2009 claiming that “79 per cent of Muslims say Christianity should have strong role in Britain”, based on a subsample of just 21 Muslims.

4) Don’t cherry pick

In my past post on “Too Frequently Asked Questions” one of the common misconceptions I cite about polls is that pollsters only give the answers that clients want. This is generally not the case – published polling is only a tiny minority of what a polling company produces, the shop window as it were, and the major clients that actually pay the bills want accuracy, not sycophancy.

A much greater problem is readers seeing only the answers they want, and the media reporting only the answers they want (on the latter, this is more a problem with pick-up of polls from other media sources; papers who actually commission a poll will normally report it all). Political opinion polls are a wonderful tool: interpreted properly they allow you to peep into what the electorate see and think and what drives their voting intention. As a pollster it’s depressing to see people interpret them by chucking out and dismissing anything that undermines their prejudices, while trumpeting and waving anything they agree with. It sometimes feels like you’ve invented the iPad, and people insist on using it as a doorstop.

It should almost go without saying, but you should always look at poll findings in the round. Public opinion is complicated and contradictory. For example, people don’t think prison is very effective at reforming criminals, but tend to be strongly opposed to replacing prison sentences with alternative punishments. People tend to support tax cuts if asked, but also oppose the spending cuts they would require. Taking a single poll finding out of context is bad practice, picking poll findings that bolster your argument while ignoring those that might undermine it is downright misleading.

Case study: Almost all of the internet! For a good example of highly selective and partial reporting of opinion polls in the mainstream press, though, take the Telegraph’s coverage of polling on gay marriage. As we have looked at here before, most polling shows the public generally positive towards gay marriage when actually asked about it – polls by ICM, Populus, YouGov and (last year) ComRes have all found pretty positive opinions. The exception is ComRes polling for organisations opposed to gay marriage, which asked a question about “redefining marriage” that didn’t actually mention gay marriage at all, and which has been presented by the campaign against gay marriage as showing that 70% of people are opposed to it.

Leaving aside the merits of the particular questions, the Telegraph stable has dutifully reported all the polling commissioned by organisations campaigning against gay marriage – here, here, here and here. As far as I can tell they have never mentioned any of the polling from Populus or YouGov showing support for gay marriage. The ICM polling was actually commissioned by the Sunday Telegraph, so they could hardly avoid mentioning it, but their report heavily downplayed the finding that people supported gay marriage by 45% to 36% (or as the Telegraph put it, “opinion was finely balanced”, which stretched the definition of balanced somewhat), instead running heavily on a question about whether it should be a priority or not. Anyone relying on the Telegraph for their news will have a very skewed view of what polling says about gay marriage.

5) Don’t make the outlier the story

If 19 times out of 20 a poll is within 3 points of the “true” picture, that means 1 time out of 20 it isn’t – it is what we call a “rogue poll”. This is not an aspersion on or a criticism of the pollster; it is an inevitable and unavoidable part of polling. Sometimes random chance will produce a wacky result. This goes double for cross-breaks, which have a large margin of error to begin with. In the headline figures 1 in 20 polls will be off by more than 3 points; in a crossbreak of 100 people, 1 in 20 of those crossbreaks will be off by more than 10 points!

There are around 30 voting intention polls conducted each month, and each of them will often have 15-20 crossbreaks on them too. It is inevitable that random sample error will spit out some weird rogue results within all that data. These will appear eye-catching, astounding and newsworthy… but they are almost certainly not. They are just random statistical noise.
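A back-of-the-envelope calculation using the post’s own figures shows just how inevitable this is (taking the midpoint of the 15-20 crossbreaks quoted above):

```python
# Expected number of "rogue" crossbreak figures per month from pure chance,
# using the rough numbers quoted above.
polls_per_month = 30
crossbreaks_per_poll = 17   # midpoint of the 15-20 range quoted above
p_outside_moe = 1 / 20      # 1 in 20 falls outside its margin of error

expected_rogues = polls_per_month * crossbreaks_per_poll * p_outside_moe
print(expected_rogues)  # ~25 eye-catching-but-meaningless results a month
```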

Always be cautious about any poll showing a sharp movement. If a poll is completely atypical of other data, then assume it is a rogue unless other polling data backs it up. Remember Twyman’s Law: “any piece of data or evidence that looks interesting or unusual is probably wrong”.

Case study: Here’s the Guardian in February 2012 claiming that the latest YouGov polling showed that the Conservatives had pulled off an amazing turnaround and won back the female vote, based on picking out one day’s polling that showed a six point Tory lead amongst women. Other YouGov polls that week showed Labour leading by 3 to 5 points amongst women, making that day’s data an obvious outlier. See also PoliticalScrapbook’s strange obsession with cherry-picking poor Lib Dem scores in small crossbreaks.

6) Only compare apples with apples

All sorts of things can make a difference to the results a poll finds. Online and telephone polls will sometimes find different results due to things like interviewer effect (people may be more willing to admit socially embarrassing views to a computer screen than to a human interviewer); the way a question is asked may make a difference, as may the exact wording used, or even the question order.

For this reason, if you are looking for change over time, you need to compare apples with apples. You should only compare a question asked now to a question asked using the same methods and the same wording; otherwise any apparent change could actually be down to wording or methodology, rather than reflecting a genuine change in public opinion.

You should never infer changes by comparing voting intention figures from one company’s polls with another’s. There are specific house effects from different companies’ methodologies which render this meaningless. For example, ICM normally show the Lib Dems a couple of points higher than other companies and YouGov normally show them a point or so lower… so it would be wrong to compare a new ICM poll with a YouGov poll from the previous week and conclude that the Lib Dems had gained support.