I’ll take an evaluation please, but hold the scientists

You need an econometrician, dear, not a doctor

Following the massive kerfuffle over the Lancet article on the child mortality impact of the Millennium Villages Project, both the authors of the paper and the journal itself have finally responded.

The first response, by Paul Pronyk of the Earth Institute, is reassuringly humble: the authors accept all of the mistakes highlighted by Gabriel Demombynes and Espen Prydz, and even claim that subsequent results will be analysed in a more transparent manner:

The project will invite an independent panel of experts, including critics of the project, to participate in scrutinising the vital events and survey data and in assessing their validity.

The second response, by the editors of the Lancet, is more defensive, arguing that even after failing to show that the fall in infant mortality in Millennium Villages was due to the MVP intervention, the study still had merit – pointing to several other results which were not the focus of the study (and which, in two instances, were not significantly different from the ‘control’ villages). I was perturbed by the final statement, which suggests that more independent oversight by medical science professionals is the solution to our concerns:

To ensure that all future data from the project are fully and fairly evaluated, Prof Jeffrey Sachs, the Principal Investigator of the Millennium Villages project, is establishing new internal and external oversight procedures, including the creation of an International Scientific Expert Advisory Group, chaired by Prof Robert Black, Chairman of the Department of International Health, Johns Hopkins Bloomberg School of Public Health, which will report to the Principal Investigator and also communicate its findings to The Lancet. The goal is to provide a further independent means of verifying the quality of the project’s design and analysis. It is important that this work, which is of considerable significance for understanding how countries scale up multiple complex interventions across sectors, receives _proper scientific evaluation_ before, during, and after publication.

Emphasis is mine. This suggested solution misses the point: the problem with the evaluation of the MVP isn’t that it needs more scientists (narrowly defined as researchers from the health community) paying attention. The problem with the evaluation of the MVP is that it has too many scientists paying attention. Let me be clear: researchers in these fields are amazing at what they do well (especially randomized controlled trials), but they are not as adept at the careful statistical analysis needed to evaluate non-random, complex interventions. This is why, sadly all too frequently, excellent journals like The Lancet publish research which would be laughed out of a graduate-level applied economics seminar.
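To make the statistical point concrete, consider the comparison an econometrician would insist on. A simple before/after change within the villages is uninformative if child mortality was falling everywhere over the same period; the relevant quantity is a difference-in-differences against a comparison trend. Here is a minimal sketch of that logic in Python, using entirely made-up numbers (nothing below comes from the actual study):

```python
# Hypothetical child mortality rates (deaths per 1,000 live births).
# All figures are invented for illustration -- they are NOT the MVP data.
mv_before, mv_after = 120.0, 95.0        # Millennium Village sites
natl_before, natl_after = 118.0, 100.0   # comparison (national) trend

# Naive before/after estimate: attributes the entire decline to the project.
naive = mv_after - mv_before             # -25.0

# Difference-in-differences: nets out the secular decline that would have
# happened anyway, using the comparison trend as the counterfactual.
did = (mv_after - mv_before) - (natl_after - natl_before)  # -7.0

print(f"Naive before/after change: {naive:+.1f} per 1,000")
print(f"Difference-in-differences: {did:+.1f} per 1,000")
```

Even the difference-in-differences only identifies an impact under a parallel-trends assumption, and the choice of comparison group matters enormously; these are exactly the issues that get picked apart in an applied economics seminar.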

Now, to be fair, economists and other social scientists probably do enough injustice to the health literature to give your average epidemiologist an aneurysm, but there’s a difference between wallowing in within-discipline ignorance (economists or health researchers choosing not to know any better) and knowing better but choosing the path of least resistance. If one wanted to be overly cynical, one might suggest that the precise reason the MVP is publishing in top medical journals has less to do with seeking the most appropriate audience for assessing impact and more to do with choosing a less critical one.

If they want to convince the world that the Millennium Villages are a big deal, they need at least to bring in some social scientists with the statistical know-how to properly evaluate the evidence. Let’s hope that Dr. Pronyk’s independent panel of experts will include an econometrician or two, rather than relying solely on those with a solid record of publishing in The Lancet.

4 thoughts on “I’ll take an evaluation please, but hold the scientists”

  1. Maxime Gasteen

    May 21, 2012 at 9:06am

    Isn’t Jeffrey Sachs an economist and not a health researcher? So perhaps one might expect him to have skills similar to those of economics grad students, which would enable him to laugh the MVP study out of the seminar?

    And doesn’t your claim that social scientists have the statistical know-how that natural scientists lack just reinforce the disciplinary silo you criticise in this article?

    If one were a cynic, one could suggest there’s possibly a more basic reason that the Lancet publishes high-profile but low-quality studies on occasion, one that has more to do with generating headlines and debate…

  2. Matt

    May 21, 2012 at 10:58am

    Hi Maxime,

    Thanks for the thoughts – a few in response:

    1) On Sachs himself – probably, but I think your question sidesteps the enormous incentive he has to publish research showing his scheme works. Sachs’s transition from empiricist to preacher is pretty much recognized by all in the field, including Sachs himself.

    2) Good point – but I’m not suggesting that econs have all the answers, only that there are comparative advantages across disciplines: neither innate nor immutable, but certainly present. I’m not arguing that evaluations like the MVP should be turned over entirely to the econs, just that a little inter-disciplinary oversight might be a good thing.

    3) Journals are welcome to think of themselves as headline generators, and the quality of papers is always going to fluctuate, but the emphasis should always be on results we have good reason to believe are “true” (in this instance, that MVPs have an impact on child mortality). This study does not show a consistent, additive impact. If the authors and The Lancet want to publicise the paper as “Look, we haven’t quite identified anything here, but these results are still interesting, let’s talk about them”, that’s an approach I would find quite reasonable. Publishing a paper and claiming “we have shown an impact, MVPs definitely work” is quite another approach, one which I’d argue is dangerously misleading.

  3. Søren

    May 21, 2012 at 11:40pm

    Empiricist, when?

  4. Calum Davey

    May 22, 2012 at 8:40am

    Matt

    The observation that there is comparative advantage in the two fields (epidemiology and economics) is a good one, and I think that there is a lot to learn there. However, epidemiologists have only recently sunk back into the statistically comfortable armchair of the RCT; before the 80s there was a lot more interest in piecing together the picture from more than one study, each taking a different angle, than in going for the knockout with an RCT. Of course, RCTs are invaluable, but this area of impact evaluation struggles because there aren’t good _theories_ linking the observations in different settings. Perhaps this is too difficult to solve. Whereas inference about lung cancer could be drawn from any number of studies of smokers, in social (economic) interventions/outcomes the effect modification of unmeasurable social factors is (probably!) too great. There are a lot of advantages to an RCT, but what has not been explored enough (to my knowledge) is the process of knowledge creation from the body of empirical evidence. We all know about the systematic review, but knowledge is created in many other ways. David Deutsch may offer some insight, I don’t know.

    We are currently working on a book on impact evaluation of HIV prevention interventions for the WB, bringing together epidemiologists, economists, and modellers. I’ll let you know when it’s available to review!
