Sachs the rainmaker

"But kemosabe, this would not stand up to a diff-in-diff"

Many of you will already be familiar with the ongoing debate over the efficacy and evaluation of the Millennium Village Project, the brainchild of the Earth Institute’s Jeffrey Sachs. Due primarily to the work of Michael Clemens at the CGD and Gabriel Demombynes at the World Bank, the MVP’s claims of development impact have finally faced substantial scrutiny, although frequently the debate has felt more like a war of attrition than productive discourse.

Enter the Lancet, a reputable medical journal with a worrying tendency to publish thoroughly disreputable social science research, which has just published a study by Sachs et al. showing that, over three years, child mortality (under the age of five) fell by roughly 25% across nine Millennium Villages. When compared with `control’ villages (which were chosen later and differ from the MVs in many substantial ways), the drop was even larger – close to 31%.

Suddenly the bells started ringing: after all the doubt, the MVP is being hailed as a success in reducing child mortality, with the editor-in-chief of the Lancet rallying behind the paper and the Guardian reporting the results with an astonishing lack of scrutiny. Only in the twitterverse/blogosphere has the response been largely negative (Lee Crawfurd disassembles the results of the Lancet article here).

However undeserved, this might have been a good opportunity for the Earth Institute to bask in its momentary glory. Yet the results may already have been undermined by awful timing: the Lancet study arrived just days after another study, by the World Bank’s Gabriel Demombynes and Karina Trommlerová, showing absolutely massive decreases in child mortality across most of sub-Saharan Africa in the past few years.

To understand why this is a problem for the Lancet study, consider the table below, which I’ve assembled from results from that study and some figures from the World Bank one (admittedly swiped from Michael Clemens’s post on it).

From the WB study I’ve taken the same nine countries used in the Lancet article, listed their declines in mortality and (assuming a linear trend) calculated the average decline in under-5 mortality per year. One caveat: the years considered in the World Bank study do not necessarily coincide with the timing of the Millennium Villages in their respective countries, so we may be comparing trends from different periods. Even so – these figures still provide a rough idea of the relative magnitude of the mortality decline.

Per-country figures are not available in the Sachs et al. study (which is a bit worrying in itself), so I can only compare the average declines in these countries to the average decline across all Millennium Villages. What do the results suggest? While child mortality in the Millennium Villages dropped by 24.6 (fewer children dying per thousand births) over a 3-year period, the average decline for all countries in the study is broadly similar: 22.5.
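To make the annualisation explicit, here is a minimal Python sketch of the calculation described above – assuming, as in the table, a simple linear trend between two survey points. The two headline figures are the averages quoted in the post; everything else is illustrative:

```python
def annual_decline(total_decline, years):
    """Average annual decline in under-5 mortality (deaths per 1,000
    live births), assuming a linear trend between the two survey points."""
    return total_decline / years

# Figures quoted in the post: MV decline vs. average national decline,
# both expressed per 1,000 live births over roughly three years.
mv_decline = 24.6
national_decline = 22.5
years = 3

print(annual_decline(mv_decline, years))        # roughly 8.2 per 1,000 per year
print(annual_decline(national_decline, years))  # roughly 7.5 per 1,000 per year
```

The point of the sketch is just how close those two annualised figures are – the comparison in the post turns on that gap being small.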

The first and most important thing to take from these results is that the Millennium Villages aren’t vastly outperforming aggregate gains in the same countries. This makes it very difficult for the MVP to claim it is making an impact – it’s a bit like claiming credit for rain in Oxford, when it has been raining all over the UK.

The second thing worth noting: if you look at the above table, taken from the Lancet study, you’ll see that under-five mortality is actually increasing in the control villages. This strongly suggests that the control villages are quite different from the rest of the country at large. The Earth Institute has argued that Millennium Villages (and their control counterparts) were selected because they were different – but even if these odd trends in the control villages don’t disqualify them as a counterfactual (which I still think they do), the differences seen here certainly prevent the MVP from making any claim to external validity.

The argument that the Millennium Villages aren’t outperforming the rest of their host countries is not new: Clemens and Demombynes made it over a year ago, when they found that many other claims of `impact’ by the MVP were reflected in national statistics. Let’s hope the hype from this study is similarly deflated.

12 thoughts on “Sachs the rainmaker”

  1. Staffan

    May 9, 2012 at 10:33pm

    First of all, when you’re not even comparing the same time period, you can’t draw the bombastic conclusions you do. In fact, only one country matches the timeframe measured in the article (Rwanda).

    Second, the data you are referring to is not new; only the WB paper itself, on the progress in Kenya, is new. The data comes from two DHS surveys per country from the years 1998-2010. Why not use the available data from e.g. childmortality.org, where you have good estimates on a yearly basis up to 2010 (or later)? They have more sources, and should probably be considered more reliable, no?

    Thirdly, why don’t you stick with the Average Annual Rate of Reduction (AARR) that http://blogs.cgdev.org does? That would be more honest when comparing, and it probably would not harm your purpose of sinking Sachs.

    The national AARR for the nine countries in 2007-10 is 4.14% – not bad, just under the MDG target. The Millennium Village sites, according to the Lancet study, show a rate of 7.84% per year, which is higher.

    I can’t say anything about the academic quality of the Lancet study, but your calculations here prove nothing. Your argument is not valid.

  2. Matt

    May 10, 2012 at 12:50am

    Hi Staffan,

    Thanks for your comment.

    1) I don’t feel that I’m drawing `bombastic conclusions’, especially not relative to the grand conclusions of the Millennium Village Project. I’m merely pointing out that there have been large improvements in child mortality in the host countries where the villages reside, so the `impact’ that the MVP claims is not clear. If you look back at the post, I pointed out the time-frame issue already. Yes, it’s an issue – I wouldn’t use different time periods as a basis for, say, an academic paper, but the general trends still seem to be at odds. I think others have pointed out that we have to do the best with the data we have, given that the MV project has failed to allow for a proper evaluation.

    2) I’m a busy PhD student – if you’d like to crunch new child mortality figures for comparison with the MVP, then go for it. I question the assumption that “more sources” = higher quality data. More sources usually equals noisier data, no? For these sort of direct comparisons, I’d prefer to use a single source over time (you’re welcome to argue that the DHS might not be the best source).

    3) It’s not clear to me why the AARR is any `more honest’: we’re calculating all these figures from just two observations, and we’re making assumptions about a linear decrease over time both when using absolute changes in percentage points and when looking at percentage changes across time. Using your method, even if both MVP and overall child mortality fell at the same absolute rate (the same percentage-point decrease – say 5 children per year), the MVP would have a higher AARR just because it starts with lower levels of child mortality – I fail to see how that is more robust!
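    This point can be checked with a quick sketch. The compound AARR formula below is one common definition, not necessarily the exact one either commenter has in mind, and the baseline numbers are purely illustrative:

```python
def aarr(start, end, years):
    """Average annual rate of reduction (%), compound form: the constant
    yearly percentage fall that takes `start` down to `end` over `years`."""
    return (1 - (end / start) ** (1 / years)) * 100

# Identical absolute decline (15 per 1,000 over 3 years), different baselines.
high_baseline = aarr(150, 135, 3)  # higher starting mortality
low_baseline = aarr(60, 45, 3)     # lower starting mortality

print(round(high_baseline, 2))  # roughly 3.45% per year
print(round(low_baseline, 2))   # roughly 9.14% per year
```

    With equal absolute progress, the series that starts lower shows a much larger AARR – which is exactly why the choice of metric matters for the MVP comparison.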

    Last time I checked, my mission wasn’t to “sink Sachs” – it’s to point out deeply flawed claims about efficacy. The MVP has to do a lot better than this if it’s going to convince everyone that this intervention is worthwhile.

  3. Matt

    May 10, 2012 at 1:04am

    Just realised that “percentage point” might be confusing – I’m stuck in a regression world. It is easier to just say “absolute change” (it is technically a percentage-point change because it is also the probability that a child will die before 5).

  4. Staffan

    May 10, 2012 at 12:08pm

    No, you are right, sorry. I might have confused the message of this post with the twitter references that shared it, without reflection, for a different purpose. And the conclusions are perhaps not that bombastic, but a little far-reaching, I think.

    2) Well, more studies = better data is probably not altogether true, but I would assume that the data collected by the CME (on childmortality.org) would be much better to use than a simple linear trend from two single DHS surveys. They are also easily accessible and have estimates on a yearly basis, up to 2010. From that data the AARR for the nine countries is just over 4%, which is impressive enough to me. Sweden did not have that high a reduction rate for many periods in our development.

    However, you are probably also right that we can question whether the MVs really had almost 8% AARR over the last three years, but I feel pretty sure that the average of these nine countries was not that high. IF the numbers in the Lancet are correct, the villages are in fact outperforming their hosts – that’s my point.

  5. Staffan

    May 10, 2012 at 12:21pm

    I think one has to use both absolute change and the rate of change, for different purposes. But to compare the impact of interventions and to reflect on the speed of the reduction, AARR is generally better, right?

    Compare with economic growth: to capture the growth rate, %-growth is the measure we use, but to comprehend the scope of the improvements one has to take the actual increase into consideration.

    To use “actual change” instead of the growth rate in child mortality would obviously show greater progress if you have a bad starting point: if you go from 300 to 150 in U5MR you have -50%, but 150 saved children per 1,000; to go from 150 to 50 is a decrease of two-thirds, but only 100 per 1,000.

    The opposite goes for economic growth (Tanzania grew GDP/capita PPP by 5% per year, but only $50 a year; Sweden grew 1%, but $500).

    In this case, comparing the success in the MVs with the success at the national level, it seems as if the national averages had a higher starting point, which makes a comparison in actual numbers favour them, even if AARR better reflects the impact of the interventions and other policies.
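    The worked example above can be reproduced in a few lines – the numbers are the hypothetical ones from the comment, used only to separate relative from absolute change:

```python
def changes(start, end):
    """Return (relative fall in %, absolute fall per 1,000 live births)
    between two under-5 mortality rates."""
    relative = (start - end) / start * 100
    absolute = start - end
    return relative, absolute

# Bad starting point: smaller relative fall, but bigger absolute fall.
print(changes(300, 150))
# Better starting point: larger relative fall (~66.7%), smaller absolute fall.
print(changes(150, 50))
```

    The two metrics rank the same pair of improvements in opposite orders, which is the crux of the disagreement in this thread.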

  6. Staffan

    May 10, 2012 at 12:23pm

    But to know for sure, I guess we should have the actual numbers per site and compare them to the numbers of the respective countries, of course. Or at least more sites.

  7. Esa

    May 10, 2012 at 12:40pm

    Matt,

    I agree with your analysis. One question, though, as you point to national averages and the overall progress in SSA. Sachs et al. do say that “the pace of mortality reduction in Millennium Village sites was three-times greater than the most recent 10-year national rural trends”, and provide a table to go with this claim in the appendix, page 9.

    Aren’t rural trends a valid point of reference for the MVs, given that they are rural?

  8. Matt

    May 10, 2012 at 3:32pm

    Hi Staffan,

    It is, as your posts suggest, rather complicated to determine which comparison is the most appropriate. You’re right in pointing out that the MVP estimates come out on top anyway (in all specifications). Yet we don’t know if they come out on top in a statistically significant way (we can reject the hypothesis that the decline in child mortality in the MVs was zero, but we haven’t yet seen whether we can reject the hypothesis that Decline(MVP) = Decline(National)).

    I think the main point is just that the relative decline is still much less than what the MVP is claiming (remember that the control villages are, on average, seeing worse outcomes over time).

  9. Matt

    May 10, 2012 at 3:38pm

    Esa,

    Good point – this is something I glossed over, mainly because I was using the tables in Clemens’s post. You’re absolutely right – rural trends would be more appropriate here; they might be lower (although you could argue that the expansion of medical services to rural areas is the main driver behind these reductions). This would be relatively easy to investigate using the DHS data.

    What you’re getting at, though, is that we need a reasonable `control’ group. I think most are doubtful that the MVP’s control villages are the most appropriate one.

  10. Michael Clemens

    May 10, 2012 at 10:20pm

    Matt: Rural declines in child mortality in the nine countries in question have been *even faster* (slightly) than the nationwide declines. For any readers who care to check, this fact is documented in the printed DHS final reports for six of the nine countries in question. (The other three don’t separate rural and urban in the printed report.)

    That pattern is predictable: rural areas are where the low-hanging fruit is. The cheapest and quickest things one can do to get child mortality down—such as providing a waterpoint and vaccinating kids—are much easier to do in urban areas and have already been done in many of them. The margin for fast expansion of those efforts is in the countryside.

    Thanks for your perspicacious, data-driven, independent analysis of this issue.

Comments are closed.