Tony Twyman, who died last year, was the man behind much of the mechanics of TV and radio viewing figures, most notably as technical advisor for BARB viewing figures. In broader market research he is more widely known for coining Twyman’s Law – “Any figure that looks interesting or different is usually wrong”. The point, of course, is that strange and unusual results in a single poll are more likely to be the product of sample variation or error than of some amazing shift in public opinion, and you should treat them with caution before getting excited (my colleague Joe Twyman likes quoting it without attribution in the hope that people will jump to conclusions… not so fast!).

Anyway, today we have a classic case. Two polls that look interesting when compared to recent averages, but which are both probably no more than the result of normal sample error.

Today’s twice-weekly Populus poll had figures of CON 32%, LAB 37%, LDEM 10%, UKIP 13%, GRN 4% (tabs). The five-point Labour lead is the biggest Populus have shown since November, and the 37% Labour share is the largest any company has shown since November. A Labour resurgence?

Lord Ashcroft’s weekly poll, however, had figures of CON 34%, LAB 28%, LDEM 8%, UKIP 16%, GRN 8% (tabs). A six-point Conservative lead, by far the best poll for the Tories from any company for several years (the largest Tory leads up to now were in the last two MORI polls, which had them three points ahead). A Tory surge?

Of course the actual answer is that there is probably neither a Labour nor a Tory surge, that both of these changes are probably just down to sample error and that people should watch the overall trend across multiple polls, not get overexcited about individual polls. If the figures in one poll look strange or unusual, it’s probably wrong.

In some ways it’s quite nice they come on the same day, as it should stop people getting too excited over an outlier in just one direction. On the other hand, it does tend to produce lots of confused comments about how polls can be accurate when they show both a five-point Labour lead and a six-point Tory lead.

Bottom line for those who are confused: part of it is down to pollsters using slightly different methods (in this case, the way Populus weight their polls tends to produce a bigger share of the vote for the two main parties than Ashcroft’s approach does). A bigger chunk will be simple margin of error. Polls are not precision instruments, and no one who understands them would claim they are – they are random-ish samples of around 1,000 people. The quoted margins of error are about plus or minus 3% (though given response rates, weighting effects and the fact that polls are not pure random samples, that’s a bit of a polite fiction). That means that if the real position were Labour and the Conservatives tied on 33%, you would expect to see the Conservatives ranging from 30% to 36% and Labour from 30% to 36%; and while results would tend to cluster around the middle of that range, random variation could reasonably produce anything between a six-point Tory lead and a six-point Labour lead.

Taken alone and in isolation, that means an individual voting intention poll isn’t that useful… which is why you shouldn’t look at them alone and in isolation – watch the trend.
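For readers who want to see where the quoted “plus or minus 3%” comes from, here is a minimal sketch in Python. It treats a poll as a pure simple random sample, which, as noted above, real polls are not, so it understates the true uncertainty.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a share p (e.g. 0.33 for 33%)
    measured in a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A 33% share in a sample of ~1,000 gives roughly +/-2.9 points, which is
# where the familiar "plus or minus 3%" rule of thumb comes from.
print(round(100 * margin_of_error(0.33, 1000), 1))  # 2.9
```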


200 Responses to “Contrasting Populus and Ashcroft polls”

  1. @BP

    The Tories have been rock solid on 32% on YG for about a year, with the odd oscillation. Wouldn’t it be extraordinary if after all the money spent on the short and long campaigns they actually end up on 32%?

  2. @Alec

    “Guess which poll is reported in the Telegraph?”

    Yes, I saw that. Also saw a Labour candidate making much of the Populus poll:

    https://twitter.com/GeorgeAylett/status/554711219937234944

    Pity the SNP didn’t gain any seats in his perfect election. :))

  3. If Scotland is depressing Labour’s GB vote share by up to 1.5%, it implies a better performance in England & Wales than the headline figures might suggest. So a 1% GB lead under present conditions in Scotland might be the equivalent of a 3–4% lead had things not gone awry up north – ie a Con-to-Lab swing in England & Wales of up to 5.5%, offsetting a negative swing in Scotland.

  4. I’m still sticking to my projection from years ago. It’ll be:

    Con 30, Lab 36, LD 18, OTH (total) 16.

  5. That Labour lead might be small but it’s persistent, albeit with a bit of short-lived wobbling around. And yet we see those prediction sites telling us the Conservatives will be a little bit ahead come election day.

    mmm… now wondering if that presumed swingback will actually happen.

  6. YouGov are teasing something about a young voters poll they have in the pipeline… My guess would be that the Greens have overtaken the Tories for second place in that age group…

    http://www.ncpolitics.uk/2015/01/youth-vote-green-surge-overtaken-tories.html/

  7. RAF,

    I’d be surprised if they do that well.

  8. Graham

    In the Populus poll, for example, removing the 116 respondents from Scotland does move the E&W figures – but perhaps not in the way you suggest.

    E&W “Others” is down by 4% compared with GB, but it doesn’t increase the Lab lead over Con. Instead, Con, Lab and UKIP in E&W all rise by 1% compared with GB.

  9. OLDNAT

    That is not quite my point. Since 2010 Labour’s lead over the Tories in Scotland has dropped sharply as a result of Labour voters switching to the SNP. In Scotland, therefore, there has actually been a significant swing from Labour to the Tories of – say – 8 to 10% approx. The GB polls suggest an overall Con-to-Lab swing of 4% or so. This must imply that in England & Wales alone the Con-to-Lab swing is higher – simply to offset the adverse swing in Scotland!
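A rough sketch of the conventional swing arithmetic being used in this exchange; the input shares below are illustrative round numbers, not figures from any particular poll.

```python
def con_to_lab_swing(con_then, lab_then, con_now, lab_now):
    """Conventional (Butler) two-party swing from Con to Lab, in points:
    the average of Labour's gain and the Conservatives' loss."""
    return ((lab_now - lab_then) + (con_then - con_now)) / 2

# Illustrative GB shares: a 2010-style Con 37 / Lab 30, against current
# polling of roughly Con 32 / Lab 33 -> a Con-to-Lab swing of 4 points.
print(con_to_lab_swing(37, 30, 32, 33))  # 4.0

# If Scotland is swinging the other way (Labour losing share to the SNP),
# the England & Wales swing has to exceed this GB average for the overall
# arithmetic to balance, which is the point being made above.
```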

  10. @ Graham

    Yes, across England and Wales as a whole. But the marginals seem to be behaving more like GB than E&W, ie smaller CON-LAB swings. Part of the reason seems to be fewer LIB-LAB switchers in those seats, as there were fewer Lib Dems there in the first place.

  11. @ Number Cruncher
    That is a possibility – though I have to say that I am not inclined to take Ashcroft’s marginal polling at face value in the way that many appear keen to do. It may be the best evidence available to us but the track record of such polls from earlier elections does not lead me to rely unduly on them.

  12. “That means if the real position was Labour and Conservative tied on 33%, you would expect to see the Conservatives ranging from 30% to 36% and Labour from 30% to 36%”

    It’s a long time since I studied any statistics, but surely this isn’t what a confidence interval means? Surely if the confidence interval is +/-3%, it means that there is a 19 in 20 chance that the population figure is within 3% of the figure given for the sample?
    So in fact for the Populus poll, there is a 95% chance that support for the Conservatives in the population as a whole lies between 29% and 35%, whereas for the Ashcroft poll there is a 95% chance that support for the Conservatives lies between 31% and 37%. But there is a 5% chance (1 in 20), that support within the population as a whole lies outside these ranges.
    The problem is then the figures for Labour, because it implies that for the Populus poll, Labour’s support amongst the population as a whole is somewhere between 34% and 40%, whereas for the Ashcroft poll Labour’s support amongst the population lies somewhere between 25% and 31%. Which probably means both samples are outside the 95% confidence intervals, meaning they are both less accurate than 1 in 20.
    So if I understand this right, there is actually no overlap at all between Ashcroft’s poll, and Populus’s poll for Labour support.
    But as I say, it’s a long time since I studied any statistics, and maybe I am just rusty.
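To make the comparison in comment 12 concrete, here is a minimal sketch of the intervals implied by a flat plus-or-minus-3-point margin of error. It naively treats each poll as a simple random sample, so it understates the real uncertainty, but it shows the overlap (and non-overlap) being described.

```python
def naive_interval(share, moe=3):
    """Interval implied by the quoted +/-3 point margin of error."""
    return (share - moe, share + moe)

polls = {
    "Populus":  {"Con": 32, "Lab": 37},
    "Ashcroft": {"Con": 34, "Lab": 28},
}

for pollster, shares in polls.items():
    for party, share in shares.items():
        print(pollster, party, naive_interval(share))

# Conservative intervals: (29, 35) and (31, 37) -> they overlap comfortably.
# Labour intervals: (34, 40) and (25, 31) -> no overlap at all, so at least
# one of the Labour figures is very likely an unlucky sample (or reflects a
# methodological difference) rather than both being "right".
```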

  13. “Which probably means both samples are outside the 95% confidence intervals,”

    Sorry, I phrased that badly. I meant to say that the population as a whole lies outside the 95% confidence intervals for both polls. In other words, these are not just outliers, they are extreme outliers.

  14. @Alun

    The complication is the different methodologies the pollsters use, and the different samples (they probably fish from different pools of voters).

    To be accurate, if I was performing a MSA (measurement system analysis) I would get the pollsters to test the same sample (ie the same x voters) and then compare the results.

    I imagine this will not happen, so we have to live with certain assumptions and guesstimates that are a little fuzzy.

    As AW often says, comparing one pollster to another is not as important as the trends within each pollster, and you should view trends over multiple polls, not one-offs.

  15. @CATMANJEFF

    My point is that his definition of a confidence interval is wrong, from what I remember of statistics. The confidence interval doesn’t mean that if the “real” or population figure is 33%, then there is a 95% chance that the sample statistic will be between 30% and 36%. It means that if the sample statistic is 33%, then there is a 95% chance that the “real” or population statistic is between 30% and 36%.

    The confidence interval is the likelihood that the “true” statistic is within a range of the measured statistic. Not that the measured statistic is within a range of the “true” statistic.

    That’s what I remember anyway. Maybe someone knows something I’m missing.

  16. Alun – one of those polls is at the top end of MoE whilst the other must be outside the usual confidence interval.

    Taking a small Lab lead as the apparent position (however one chooses to weight and average the various polls), it seems to me that the Populus is at the top end of MoE (although not right at the end of the confidence limit) while the Ashcroft is one of those occasional polls (like the ICM at the referendum) that has an adverse sample.

    The ANPs have a history of being erratic whilst Populus tend these days to favour Lab (as against the average) so neither result is a complete shock.

  17. @Alun –

    “The confidence interval is the likelihood that the “true” statistic is within a range of the measured statistic. Not that the measured statistic is within a range of the “true” statistic.”

    I understand the point you’re making semantically, and suspect that I might be about to say something that statistics professors have been wryly grinning and shaking their heads at for generations, but…

    it seems to me that if A is within a range of B, then B must be within the same range of A? And that therefore the likelihood of the first thing being true, is exactly the same as the likelihood of the other thing being true – is this wrong?

  18. @Wes

    I feel your way of looking at things ought to be right but I cannot persuade myself that it is!

    Suppose the ‘true statistic’ for some distribution is 0 and has a range of + or -10. By unlucky chance I get a measured statistic of 10 with a similar standard deviation. I will then estimate the range of this ‘measured statistic’ as from 0 to 20. So the two things are different as Alun says. Or have I also fallen into some grievous statistical error?
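A quick simulation, offered only as a sketch of the exchange above; the true share, sample size, seed and number of trials are arbitrary choices.

```python
import random

# "The estimate is within 3 points of the truth" and "the truth is within
# 3 points of the estimate" describe the same event, so an interval centred
# on each (random) estimate still captures a fixed true value about 95% of
# the time, even though the true value has no margin of error of its own.
random.seed(1)
true_share, n, trials = 0.33, 1000, 10_000
hits = 0
for _ in range(trials):
    sample = sum(random.random() < true_share for _ in range(n))
    estimate = 100 * sample / n
    hits += (estimate - 3) <= 100 * true_share <= (estimate + 3)
print(hits / trials)  # roughly 0.95
```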

  19. ” random variation could reasonably vary between a 6 point Tory lead and a 6 point Labour lead. ”
    Given the ‘random’ errors you quote, not only ‘could’ so vary, but from time to time must.

  20. YG Scotland crossbreak

    SNP 43% : Lab 28% : Con 16% : UKIP 7% : LD 3% : Grn 2%

  21. @Charles – on the evidence you have to go on, the ‘likelihood’ that the truth is at either end of that range is the same. Of course the ‘truth’ is whatever it really is (without wanting to get too philosophical) – my point is that for our purposes reading these polling runes, I don’t think it makes a difference whether you treat the likelihood of the actual meeting the measured as 95% or the other way around. Or does it?

  22. @Oldnat

    EC on those figures:

    SNP 42
    Lab 17
    Tories, LD, Ukip, Greens 0

  23. First retail sales figures for December out today and it’s not great news. The BRC data shows a 0.4% fall in like for like sales for the month, which is the worst December performance since 2008.

    However, I don’t think it’s possible to draw too many conclusions from this. November’s data was very good, with Black Friday sucking in high levels of trade and distorting the figures. December also saw the lowest growth of online sales for a long time, as shoppers were concerned about delivery schedules in the run up to Christmas.

    However, with the heavy discounting it does show why a number of retailers are now in significant trouble. This is showing through elsewhere, with the household finance data showing retail workers getting more and more pessimistic about employment prospects.

    One other very small bit of news with potentially very major implications. Health bosses are reporting the first signs of sustained decreases in life expectancy in some localized parts of England. This is extremely rare. No obvious cause is noted, but the suspicion is that care service cuts and A&E problems may be the main culprit.

    This is extremely interesting, not so much for the current causal issues, but for long term pension planning. There is a set of assumptions regarding aging that under examination may well turn out to be fundamentally wrong, but the policy making framework seems to follow entirely the unscientific and lazy media assumptions.

    So, for example, we are all ‘certain’ that life expectancy will continue to rise inexorably, and degenerative diseases like Alzheimer’s will become more and more prevalent.

    In fact, there are already signs that the deteriorating health of the 30 – 50 generation means life expectancy will reverse, with obesity and alcohol related conditions acting as a major drag on longevity.

    And we also now have very hard, factual data that the rates of Alzheimer’s are falling sharply, meaning fewer and fewer old people as a proportion are suffering from this condition.

    Not stuff you hear about every day, but important data for future public spending policy.

  24. @Wes – These are clearly murky waters. For the purposes of reading the poll runes, I would have thought:

    a) As far as one poll goes, the truth is most likely to be whatever that poll says (e.g. that the Conservatives are on 32%)

    b) One poll is clearly an unreliable thing on which to estimate this truth and so a better guide is the average of recent polls

    c) The problem arises when it comes to assessing whether one poll or a run of them appears very different from what has recently been assumed to be the ‘true average’

    d) In relation to this the consensus urged on by AW seems to be that we should hold our breath if there is ever an outlier and see if a similar trend appears in other polls, and particularly so if like is being compared with like (i.e. if the trend in YouGov is the same as in Populus)

    e) The debate is over how one decides that there really has been a change. Obviously it is natural to start looking for a trend at the point that is most favourable to one’s own point of view and the more scientific minds on the site deplore this as ‘cherry picking’.

    So there has been much learned discussion of CUSUM charts and pre-specified points for testing changes and so on (a minimal sketch of the CUSUM idea is below). There clearly are statistical gurus on the site, but I am not aware that they have come to an agreement among themselves on what is the best way to resolve this issue.

    I am not sure if this is relevant to your philosophical issues. Hopefully so; if not, apologies! (I am sure most of this is familiar anyway, so I was partly rehearsing it for my own benefit.)
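A minimal sketch of the CUSUM idea referred to above; the target lead, slack value and sequence of poll leads are all invented for illustration.

```python
def one_sided_cusum(leads, target=1.0, slack=0.5):
    """Cumulative sum of poll leads above an assumed steady-state lead.
    Only deviations beyond target + slack accumulate; a persistently large
    value suggests a genuine shift rather than ordinary sampling noise."""
    s, path = 0.0, []
    for lead in leads:
        s = max(0.0, s + (lead - target - slack))
        path.append(round(s, 2))
    return path

# Illustrative sequence of Labour leads (in points) from successive polls.
print(one_sided_cusum([1, 2, 0, 1, 3, -1, 2, 1]))
# A decision rule would only flag a change when the statistic crosses a
# threshold chosen in advance, which avoids the cherry-picking problem.
```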

  25. @ KeithP

    ” And yet we see those prediction sites telling us the Conservatives will be a little bit ahead come election day.”

    As you well know, this is because the models build in historical assumptions about processes we are all speculating about (such as Swingback or regression to mean).

    The size and timing of these effects is one of the big unknowns in the current election. Whatever the trends suggest at the moment, I presume that the Tory VI will soon push up from the 32% +/- MOE where it has been stuck for the last 12 months. What I find difficult to believe is that the change will be as marked as it has been in the past.

    For government formation, the other big imponderable is the future shape of the SNP VI trajectory. Is this now on a plateau, or will it drop away and – if so – how fast? I don’t think unbiased observers have much of a clue what will happen here.

    A more modest imponderable hangs on the unknown predictive accuracy of Lord Ashcroft’s Constituency VI measure, which has never been properly benchmarked as far as I can tell. CVI is now built into the fabric of certain models (Electionforecast, May2015) and if it is misleading then so, too, will be their projections.

    Uncertain, too, is where the Ukip VI is going. However, as I have mentioned before, the ramifications for May 7 are probably quite limited. (Different story for 2020, though, if they do well.)

    So… quite a lot to watch out for, the important thing being to look beyond the confusion spread by individual – and probably rogue – polls.

  26. @RAF

    I get SNP 44, Lab 12, Con 2, Lib 1.

    That site needs to align its different calcs. :))

    @All

    Btw, have you noticed the population weightings for London & Scotland? The former has been downgraded from 12.8% to 10%, while the latter has been upgraded from 8.7% to 9% (a minimal sketch of how such weighting works is below).

    Was there a mass exodus of voters from London recently? Did some of them move to Scotland?
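A minimal sketch of how regional population targets feed into weighting; the raw counts and target shares below are invented for illustration and are not taken from any pollster’s tables.

```python
# Each respondent gets a weight of (target population share) / (raw sample
# share) for their region; weighted party shares then use these weights.
raw_counts = {"London": 160, "Scotland": 80, "Rest of GB": 760}   # invented
targets    = {"London": 0.10, "Scotland": 0.09, "Rest of GB": 0.81}

total = sum(raw_counts.values())
weights = {region: targets[region] / (raw_counts[region] / total)
           for region in raw_counts}
print(weights)
# Cutting London's target (say from ~12.8% to 10%) down-weights every London
# respondent, so a party doing relatively well there loses a little off its
# headline GB share; nudging Scotland's target up does the opposite.
```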

  27. @JIMJAM

    “one of those polls is at the top end of Moe whilst the other must be outside the usual confidence interval.”

    But that’s not right, because the margin of error doesn’t apply to the “true” population statistic. It only applies to the measured sample statistic. The MoE is a property of the ESTIMATE, not a property of the ACTUAL population statistic.

    So if we could conduct a poll of every single person who will vote in 2015, then there will be no margin of error at all, because that would be the statistic for the population. The “true” population statistic doesn’t have a MoE.

    But because we can’t do that, we take a sample of the population, and that sample gives us an estimate of the population statistic. Because it’s an estimate based on a sample, it has a margin of error. But the margin of error is based on the probability of the true population statistic being within our defined margins. In other words, the MoE is a range within which there is a 0.95 probability that the true statistic lies.

    The MoE isn’t applicable to the “true” value of support.

  28. Statgeek

    Perhaps those English SNP voters who pop up in the polls from time to time have finally moved!

  29. In fact there’s been a fair bit of tweaking:

    http://www.statgeek.co.uk/wp-content/uploads/2015/01/weight.png

  30. “Was there a mass exodus of voters from London recently? Did some of them move to Scotland?”

    Judging by the number of whiskery accents I hear round and about, the opposite is the case :p

  31. @WES

    “I understand the point you’re making semantically, and suspect that I might be about to say something that statistics professors have been wryly grinning and shaking their heads at for generations, but… it seems to me that if A is within a range of B, then B must be within the same range of A?”

    But it’s not a semantic point, it’s a point about what statistics mean. And I’d say it’s a fundamentally important point in this case.

    So let’s take the examples here. If the population statistic is 33% support for Labour, well, that doesn’t have a margin of error, because it is a population statistic: it’s not based on a sample but is a measure of the whole population.
    On the other hand, the Populus and Ashcroft polls are both samples of the population, and because they are samples, they are only estimates. If there is a 3% margin of error and we’re using a 95% confidence interval, then what we’re really saying is that there is a 95% chance the true population statistic lies within that range. We could also construct a 99% confidence interval; if we do, we are far more sure that the true population statistic falls within our range, but the margin of error will be larger. Or we could use, say, an 80% confidence interval, in which case we are less sure that the “true” population statistic is within our range, but the interval will be narrower.

    But the true population statistic has no MoE. And that’s a pretty important point, I think.

    So statistically for Ashcroft’s poll, we can say that there is a 95% probability that the true population mean is between 25% and 31%. But for Populus we can say that there is a 95% probability that the “true” population mean is between 34% and 40%.

    I’d have to conclude that, in the absence of any dramatic shift in the polling from other companies, both of these polls are extreme examples of sample bias.

    What we can’t do is fit a MoE to a population statistic, because that makes no sense to me.

    Let me put it another way. Let’s define our population as the height of every child within a class. The mean of this population has no MoE, because it is the mean of our entire population, so there is no doubt that it is correct. But if we sampled, say, 5 children from the class, and took the mean, THEN we can fit a margin of error, because that’s an estimate of the population, and there is some probability that it is not a representative sample.
    So the whole point of a margin of error is to take into account sampling errors. Sampling errors don’t occur within a population statistic.
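To put rough numbers on the 80% / 95% / 99% trade-off described above, a small sketch that again treats the poll as a simple random sample of 1,000:

```python
import math

def moe_points(p, n, z):
    """Margin of error in percentage points for share p and sample size n."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

# Approximate two-sided normal quantiles for 80%, 95% and 99% confidence.
for level, z in [("80%", 1.282), ("95%", 1.960), ("99%", 2.576)]:
    print(level, round(moe_points(0.33, 1000, z), 1))
# Roughly +/-1.9, +/-2.9 and +/-3.8 points respectively: the more confident
# you want to be of capturing the true figure, the wider the interval.
```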

  32. ALEC

    Life expectancy – it has been suggested that those of us whose childhood was in the 1940s had a nationally planned diet based on fresh food, thanks to rationing, and this greatly improved the long-term health of the nation. In contrast, perhaps the 30-to-50s generation were born into a time of plentiful junk and processed food. This is a complex and controversial subject and probably not within UKPR guidelines.

    Thank you to all the statisticians for explaining MoE. I find it interesting and a refreshing change from some of the partisan commenting.

  33. CHARLES
    “it is natural to start looking for a trend at the point that is most favourable to one’s own point of view and the more scientific minds on the site deplore this as ‘cherry picking’. ”

    Also, I suggest, people tend to start looking for voting trends and causes that reflect what we know of wider, observable political or social changes – for example the utility of immigrant populations to the economy or society, and their integration, rather than attitudes to them. Do these trends, knowledge of them in the community, and similar trends in, say, the EU bring about gradual change in VI? Do statements by political leaders and legislation or proposed policy and legislation affect VI? These are concerns which inevitably reflect a political standpoint, which is not necessarily partisan so much as governed by a political philosophy: mine, I willingly affirm, is towards a social market, social democracy and liberalism, but also towards any force in society or politics which opens debate up to information and defends it against obscurantism or against repression by means of propaganda and false reporting.

  34. YouGov

    “In December the Green party was on 22% among 18-24 year-olds – tied with the Conservatives for second place”

    https://yougov.co.uk/news/2015/01/13/greens-tied-conservatives-among-young-people/

  35. FUNTYPIPPIN
    Statgeek
    Perhaps those English SNP voters who pop up in the polls from time to time have finally moved
    __________

    Nope I can confirm that I still reside in good ole Jim Murphy’s constituency where Rouken Glen Park meets lovely Whitecraigs train station.

    All aboard… choo-choo… Crikey, I must stop watching Michael Portillo and his Bradshaw railway guide.

  36. @OldNat

    “””
    https://yougov.co.uk/news/2015/01/13/greens-tied-conservatives-among-young-people/
    “””

    That’s a nice graph — to my eye, it looks a lot like:

    Greens are taking votes from Lab
    UKIP are taking votes from Con

    In particular, since July, the pairs (Lab/Green, Con/UKIP) look nicely symmetrical.

    Of course the sample sizes will be small so a little caution is required ….

  37. Very interesting inflation figures. On the face of it, the record low of 0.5% should be great news for the consumer, but we’re beginning to get perilously close to deflation, which is already kicking in within the Eurozone.

    Electorally, this is probably good news for the government, as it clearly eases price pressures on households. Ideally I suspect they would have preferred to see prices slumping a little earlier, so there was a year or so ahead of the election to generate more of a feel good factor. Interestingly, home loan rates are now tumbling, but again this may be just a bit too close to the election to make a big difference.

    The worry is that these inflation figures begin to completely alter the dynamic of the economy, and in a highly indebted economy deflation becomes an absolute killer. Italy is running a primary budget surplus, yet is seeing its debt ratios rise sharply as deflation shrinks the economy.
    If the low rates prevent investment, then we are in for a long and painful period in the years ahead.

  38. STATGEEK
    @RAF
    I get SNP 44, Lab12, Con 2, Lib 1
    ______

    So on this outcome presumably Ofcom would still class the Lib/Dems as a major party on the Northern isles?

  39. No suggestion from local council by-elections in England of a 5% swing to Labour – nearer 2–3%.

  40. Alec,

    “And we also now have very hard, factual data that the rates of Alzheimer’s are falling sharply, meaning fewer and fewer old people as a proportion are suffering from this condition.”

    Rates may have fallen by a quarter over 25 years due to better medicine and, some feel, education, but the propensity still rises with age. As the over-80s cohort is set to double, and incidence there is higher, even with a fall in rates both the number with the condition and the proportion will rise.

    Even if the rate for 60-80 year olds falls, the huge increase in over-80s, and their higher likelihood of developing the condition, will probably see the 60+ rate rise marginally.

    Peter.

  41. @Tony Cornwall

    It’s the Baby Boomers now entering old age who have the greater-than-expected morbidity and mortality; only the earliest of them would have been very young kids during rationing. Peak junk food came in the 60s and 70s, and combined with tobacco restraints being lifted and increased car ownership, the 60s and 70s were a health time bomb. The 80s are when we started realising that junk food was junk food, that we shouldn’t drive everywhere, and finally admitting that smoking tobacco was actually really, really bad for us.

    However, we’re in a hugely better situation than the US, where decisions made have actually locked in Baby-Boomer lifestyles for generations: the appearance of “food deserts” in urban areas – areas the size of large towns where you cannot buy anything but junk food; urban sprawl planned around driving everywhere; tobacco clinging on more strongly because of politics; decisions about health care provision based on Baby Boomers thinking themselves either immortal or certain to have enough money to cover any issues in the future. And so on…

    I do think that there’s going to be a demographic and generational shock-wave that will only be amplified by a dramatic increase in mortality when the Baby Boomers are well into retirement age. It’ll be big demographic change here, it’ll be massive in the US.

  42. On statistics.

    I understand the discussion above (and earlier), and it is relevant as the models (of the polling companies) are constructed along these rules.

    However, I’m beginning to see VI more as fuzzy sets (non-Bayesian) of the De Morgan flavour.

    I say so because there seem to be thresholds: one at which an individual starts contemplating voting for a particular party, another at which they consider voting for a different party, and then one at which the other party is actually preferred (a toy sketch of the idea is below).

    While these thresholds are probably individual, there would also be group characteristics, but not necessarily along the usual demographic lines (region, age, gender, social status, etc).

    This could be important in the case of churn to Green and UKIP.
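A toy sketch of the threshold idea described above; the thresholds, score and party labels are entirely invented, and this is not how any polling model actually works.

```python
def stated_vi(disaffection, current_party="Lab", alternative="Green"):
    """Map a hypothetical continuous 'disaffection' score (0 to 1) onto a
    stated voting intention via discrete thresholds, so small changes in
    mood can produce sudden jumps in VI rather than smooth ones."""
    if disaffection < 0.4:
        return current_party                    # still voting as before
    elif disaffection < 0.7:
        return "Considering " + alternative     # contemplating a switch
    return alternative                          # switch completed

for score in (0.2, 0.5, 0.8):
    print(score, stated_vi(score))
```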

  43. Wolf

    “No suggestion from local council by-elections in England of a 5% swing to Labour -nearer 2 -3%.”

    Phew! I might just make it then. I need just over 2%. :)

  44. Predicting life expectancy, birth rates and future population levels has always been nigh on impossible because there are so many variables. Just a few years ago we were told to close schools because of the number of surplus places which was forecast to rise into the foreseeable future. It turns out that our foresight is pretty poor.
    The Alzheimer’s stats are interesting, and encouraging, because the number of new cases, as a percentage of the population at any given age, is undoubtedly falling. Perhaps some environmental trigger is fading in significance – e.g. lead pollution.
    Governments are tasked with predicting the future but like the rest of us they often guess wrong and we shouldn’t be surprised. That is why we should build in a degree of flexibility, both in our provision and our expectations.

  45. @RMJ1

    The main problem is political. There’s always a margin of error with capacity predictions. But deficit hawks will tend to view this as a negotiation over funding, and will hear ‘it could be higher or lower’ as ‘it could be lower’.

    It’s the carpet purchase problem. If you buy a patterned carpet that’s too large for the room, there’s wastage. But if you buy a patterned carpet that’s too small for the room, then it’s a nightmare to fix. The answer is of course, to try to buy a bit more than you think you will absolutely need, because being over-capacity is less of a problem than being under-capacity.

    But the rise of “over-capacity is a worse problem than under-capacity” thinking has taken over both in public and private realms. As ever, it relates to taking a concept that works in one situation (“Just In Time” provision for retail) and applying it to situations where it does not, combined with only seeing one metric (“Waste”) vs ignoring or not being able to quantify a metric for loss due to under-capacity.

  46. If you remove the highest and lowest Con/Lab figures from the 11 polls so far this year, Lab have a steady 1% lead throughout, including 3 ties and 6 leads varying from 1% to 3% (a rough sketch of this trimming is below).

    Ashcroft is the only poll with a Con lead, and that is discounted by the high/low removal.

    If it is true that polls tend to tighten back to the opposition in the last weeks of a campaign, and it seems a reasonable hypothesis given that only then do they receive equal coverage, then the Tories [and LDs] need to see swingback and crossover occurring pretty soon I think.
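One way of reading the high/low trimming described in the comment above, sketched with placeholder numbers rather than the actual eleven polls:

```python
def trimmed_average_lead(leads):
    """Drop the single highest and single lowest lead, then average the rest.
    Positive numbers are Labour leads, negative are Conservative leads."""
    trimmed = sorted(leads)[1:-1]
    return sum(trimmed) / len(trimmed)

# Placeholder leads for eleven polls, with a +5 (Populus-style) and a -6
# (Ashcroft-style) at the extremes.
leads = [5, 3, 2, 2, 1, 1, 1, 0, 0, 0, -6]
print(round(trimmed_average_lead(leads), 1))  # about a 1-point Labour lead
```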

  47. @Peter Cairns – I think I agree in broad terms with what you are trying to say, but you may have made a couple of errors that could be significant.

    It’s true that the rate of incidence of Alzheimer’s has fallen very significantly, but this has nothing to do with medicine. Medication becomes relevant only after diagnosis, and while diagnostics are improving, meaning that we should expect to see higher rates of incidence over time, we are actually seeing much lower rates. There are now around 500,000 fewer cases of Alzheimer’s than were predicted even ten years ago, although as you say, because of the increase in the elderly population, this still represents an increase in the total numbers, albeit a much smaller increase than was expected. It’s worth noting that the decrease in rate of incidence is found across all age cohorts.

    You may also be making an error in assuming that the over 80’s age bracket is set to double. As I mentioned, we are now beginning to see some signs of consistent reductions in life expectancy for over 60 and over 80 cohorts in certain specific locations. It’s too early to tell if these are going to become more widespread or more entrenched, but an increasing number of researchers are now predicting life expectancy to start falling, although perhaps not just yet.

    What’s most interesting with regard to Alzheimer’s are the reasons why incidence rates are falling so rapidly. No one knows for certain what the reasons are, but one strong theory that is emerging is that modern work patterns are reducing the number of extremely tedious and repetitive manual jobs.

    Alzheimer’s seems to be kept at bay by mentally stimulating tasks, so even if people have a genetic predisposition for it, the effects can be significantly masked by maintaining good levels of brain activity. Office work and customer-facing service jobs appear to provide better brain stimulation than standing in front of a production line machine repeatedly stamping grommets for a 37-hour working week.

    We also have more and more varied leisure time activities, which continue into retirement, even for lower income categories. Some are therefore theorizing that modern lifestyles are assisting in the battle against Alzheimer’s.

    It certainly seems abundantly clear that relatively modest lifestyle alterations provide the most effective method for protecting society against this, but as with many other diseases, the media agenda tends to be captured by well coordinated campaigns led by drug manufacturers, who wish to convince us either that new diseases exist or that known diseases are far more common and inevitable, so that we pay large sums of money for their drugs, instead of looking for much better solutions.

    One other point that I find slightly amusing. We are constantly bombarded with the notion of the ‘aging time bomb’, but in practice end-of-life care needs for the last two years of life have remained pretty static, regardless of how old people are at that stage. The proportion of over-65s needing residential care is falling, down from 4.5% in 2001 to 3.7% in 2011.

    Meanwhile, approximately 6% of working age people (excluding early retirees and students) are sick and unable to work.

    Perhaps we should be talking more about the crisis in younger people, because old age seems to be getting better and better?

  48. @ Statgeek

    What are those percentages you just linked to?

  49. Well that poll of 18-24s is supremely helpful in Hallam.

    /s

  50. @ Jayblanc

    It’s really a trade-off – you deal with statistical fluctuation through a buffer, then you watch what’s happening to the buffer over time. The questions are: what type of buffer (e.g. schoolrooms, teachers), and where you put the buffer (ideally in front of the bottleneck, but with reciprocal dependencies it’s extremely tricky, though doable through KPIs – i.e. it’s not an optimisation problem).

    The trouble with buffers is manifold: they are considered a waste (a pain in British business) and can be very expensive, so it’s tempting to give up on the goal. A&E is the most obvious topical example.
