The first response, by Paul Pronyk of the Earth Institute, is reassuringly humble: the authors accept all of the mistakes highlighted by Gabriel Demombynes and Espen Prydz, and even claim that subsequent results will be analysed in a more transparent manner:
The project will invite an independent panel of experts, including critics of the project, to participate in scrutinising the vital events and survey data and in assessing their validity.
The second response, by the editors of the Lancet, is more defensive, arguing that even after failing to show that the fall in infant mortality in Millennium Villages was due to the MVP intervention, the study still had merit – pointing to several other results which were not the focus of the study (and in two instances, were not significantly different from the 'control' villages). I was perturbed by the final statement, which suggests that more independent oversight by medical science professionals is the solution to our concerns:
To ensure that all future data from the project are fully and fairly evaluated, Prof Jeffrey Sachs, the Principal Investigator of the Millennium Villages project, is establishing new internal and external oversight procedures, including the creation of an International Scientific Expert Advisory Group, chaired by Prof Robert Black, Chairman of the Department of International Health, Johns Hopkins Bloomberg School of Public Health, which will report to the Principal Investigator and also communicate its findings to The Lancet. The goal is to provide a further independent means of verifying the quality of the project's design and analysis. It is important that this work, which is of considerable significance for understanding how countries scale up multiple complex interventions across sectors, receives proper scientific evaluation before, during, and after publication.
Emphasis is mine. This suggested solution misses the point: the problem with the evaluation of the MVP isn't that it needs more scientists (narrowly defined as researchers from the health community) paying attention. The problem with the evaluation of the MVP is that it has too many such scientists paying attention. Let me be clear: while researchers from these fields are amazing at what they do well (especially randomized controlled trials) – they are not as adept at the careful statistical analysis needed to evaluate non-randomized, complex interventions. This is why – sadly all too frequently – incredible journals like The Lancet publish research which would be laughed out of a graduate-level applied economics seminar.
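To see why this matters for the MVP specifically, consider the core of the Demombynes–Prydz critique: infant mortality was falling everywhere, so a simple before/after comparison in project villages attributes the national trend to the intervention. A minimal difference-in-differences sketch makes the point (the numbers below are entirely made up for illustration, not MVP data):

```python
# Hypothetical infant mortality rates (deaths per 1,000 live births).
# These figures are invented to illustrate the method, not taken from the MVP.
treated_before, treated_after = 100.0, 70.0   # project villages
control_before, control_after = 100.0, 85.0   # comparison villages

# Naive before/after estimate: conflates the project's effect
# with the secular decline happening everywhere.
naive_effect = treated_after - treated_before   # -30

# The comparison villages reveal the background trend.
trend = control_after - control_before          # -15

# Difference-in-differences nets out that trend.
did_effect = naive_effect - trend               # -15

print(f"naive before/after estimate:   {naive_effect:+.0f}")
print(f"difference-in-differences est: {did_effect:+.0f}")
```

In this toy example, half of the apparent 30-point decline is just the national trend – precisely the kind of inferential trap that applied economists are trained to worry about, and that the original MVP analysis fell into.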
Now, to be fair, economists and other social scientists probably do enough injustice to the health literature to give your average epidemiologist an aneurysm, but there's a difference between wallowing in within-discipline ignorance (economists or health researchers choosing not to know any better) and knowing better and choosing the path of least resistance. If one wanted to be overly cynical, the precise reason why the MVP is publishing in top medical journals has less to do with seeking the most appropriate audience for assessing impact and more to do with choosing a less critical one.
If they want to convince the world that the Millennium Villages are a big deal, they need to at least bring in some social scientists with the statistical know-how to properly evaluate the evidence. Let's hope that Dr. Pronyk's independent panel of experts will include an econometrician or two, rather than relying solely on those who have a solid record of publishing in The Lancet.