Randomised monkey trials

“What, you humans never tried calorie restriction?”

Following a series of animal studies showing benefits to health, cognition and longevity, the practice of calorie restriction has been getting a lot of attention over the past few years. While the mechanisms were never well understood, limiting caloric intake by around 30% was seen as a shortcut to a longer lifespan. I’ve always been a bit sceptical and hesitant to join the calorie restriction camp, in part because a life in which my hummus intake falls by a third is not a life I would find worth living, but also because the most pertinent results have been based on a single randomised controlled trial of rhesus monkeys, conducted by the University of Wisconsin.

Those results were challenged recently by a similar trial of rhesus monkeys conducted by the National Institute on Aging, which found that treatment monkeys were healthier, but didn’t actually live any longer than those without the restricted diet.

Why am I writing about monkey trials on a development blog? The results of the two trials offer some important lessons for interpreting and relying on randomised controlled trials (RCTs), which are quickly becoming the standard method for identifying development impacts.

The first thing to take away is that while RCTs can allow us to accurately identify treatment effects, we need to consider carefully what treatment we are measuring. Ideally, a control group should look identical to how the treatment group would have looked had it not been treated. In the University of Wisconsin study, while the treatment group was subject to calorie restriction, monkeys in the control group were allowed to eat as much as they wanted. While we’d really like to know the “impact of restriction above and beyond a normal diet,” the treatment effect measured in the Wisconsin study was something closer to “the impact of restriction above and beyond an all-you-can-eat buffet at the Golden Corral.” It is hardly surprising, then, that the treatment group fared better, and this is a good reason to be suspicious of the results. We cannot always be certain that a study has no effect on the control group – imagine a job-training programme which allows treated individuals to access jobs at the expense of untreated individuals – so we should pay especially careful attention to what happens to control groups, not just to those who receive the treatment.
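To see how much the choice of control condition matters, here’s a minimal simulation sketch in Python (every number is invented for illustration, not taken from either trial): the “true” benefit of restriction over a normal diet is small by construction, but comparing against an overfed ad libitum control makes the same treatment look far more impressive.

```python
# A hypothetical simulation (numbers invented, not from either trial) of
# how the control condition defines the estimated treatment effect.
import numpy as np

rng = np.random.default_rng(42)
n = 500  # simulated monkeys per arm

# Assumed mean lifespans in years: overfeeding is harmful, while
# restriction adds only a little beyond a normal diet.
ad_libitum = rng.normal(24.0, 3.0, n)   # eat-as-much-as-you-want control
normal     = rng.normal(26.0, 3.0, n)   # an ordinary, sensible diet
restricted = rng.normal(26.5, 3.0, n)   # 30% calorie restriction

print(f"vs ad libitum control:  +{restricted.mean() - ad_libitum.mean():.1f} years")
print(f"vs normal-diet control: +{restricted.mean() - normal.mean():.1f} years")
# Same treatment arm, very different "treatment effects": the estimate is
# only ever defined relative to what the control group actually received.
```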

The second thing to note is that restricting your analysis to a particular subgroup of individuals or a limited set of outcomes can be tricky. The University of Wisconsin study limited its measure of mortality to “age-related deaths.” According to the New York Times, there was no difference in total mortality between the two groups, meaning that the reduction in age-related mortality (if it is to be believed) must have been offset by an increase in other types of mortality. Be wary of studies which subdivide outcomes like this without reporting aggregates, as gains on one indicator of success might easily be offset by losses on another. In general, be sceptical when results don’t hold in the aggregate, but do for some magically defined subgroup.
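A toy simulation makes the arithmetic plain (again, entirely made-up numbers): total mortality is identical in both arms by construction, yet once deaths are split into “age-related” and “other” causes, the treatment arm shows a flattering drop in the reported subcategory that is cancelled out elsewhere.

```python
# Made-up numbers mirroring the pattern above: identical total mortality,
# but a difference appears once deaths are subdivided by cause.
import numpy as np

rng = np.random.default_rng(0)
n = 76             # roughly the size of each arm in the Wisconsin study
deaths_per_arm = 38  # identical total mortality in both arms, by design

died_control   = rng.permutation([1] * deaths_per_arm + [0] * (n - deaths_per_arm))
died_treatment = rng.permutation([1] * deaths_per_arm + [0] * (n - deaths_per_arm))

# Label each death "age-related" with arm-specific probabilities: the
# subdivision, not the total, is where the apparent effect lives.
age_related_control   = died_control * rng.binomial(1, 0.7, n)
age_related_treatment = died_treatment * rng.binomial(1, 0.4, n)

print("total deaths:       control =", died_control.sum(),
      "| treatment =", died_treatment.sum())
print("age-related deaths: control =", age_related_control.sum(),
      "| treatment =", age_related_treatment.sum())
# Any headline "reduction in age-related mortality" here is offset,
# one for one, by extra deaths attributed to other causes.
```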

Finally, we need to keep going back to check up on our treatment and control groups for as long as possible. Both of these monkey trials ran for over two decades, and while positive health results have been apparent for quite some time, it is only recently that mortality rates have been high enough to detect (or rule out) a difference. It’s possible that we’re measuring a lot of positive impacts today which aren’t going to amount to much in the long run.
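For a feel of why the wait is so long, here’s a back-of-the-envelope power simulation (the arm size and annual death hazards are assumptions for illustration): with small arms and low mortality, a simple two-proportion test on cumulative deaths only gains reasonable power after decades of follow-up.

```python
# Back-of-the-envelope power simulation with assumed annual death hazards:
# a survival difference only becomes statistically detectable once enough
# deaths have accumulated in both arms.
import numpy as np

rng = np.random.default_rng(1)
n = 76                                        # monkeys per arm (assumed)
hazard_control, hazard_treated = 0.06, 0.04   # assumed annual hazards
sims = 2000

for years in (5, 10, 20, 30):
    p_c = 1 - (1 - hazard_control) ** years   # cumulative death probability
    p_t = 1 - (1 - hazard_treated) ** years
    rejections = 0
    for _ in range(sims):
        d_c = rng.binomial(n, p_c)            # deaths observed so far
        d_t = rng.binomial(n, p_t)
        pooled = (d_c + d_t) / (2 * n)        # two-proportion z-test
        se = np.sqrt(pooled * (1 - pooled) * 2 / n)
        if se > 0 and abs(d_c / n - d_t / n) / se > 1.96:
            rejections += 1
    print(f"{years:2d} years of follow-up: power ≈ {rejections / sims:.0%}")
```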

Now, if you’ll excuse me, I’m going to break for elevenses.