J-PAL’s Christmas shopping list

The Abdul Latif Jameel Poverty Action Lab has put out a new list of “best buys” for reaching the MDGs. The suggested interventions are mostly intuitive and reasonable: providing free bednets, deworming, basic education, and empowering women through political quotas.

In the past few years, J-PAL has drastically altered the way we validate our policies, mainly by raising the bar of empirical skepticism. However, I question the specific-to-general recommendations they are making: while each individual study is rigorous and convincing, there is an implicit assumption that what works in a few isolated settings will work in every setting.

Statements like “time-limited offers to purchase fertilizers in the harvesting season, with free delivery in the planting season, can massively increase uptake and usage of fertilizers” should have qualifiers like “in Western Kenya” added to them.

UPDATE: It looks like these are pretty common criticisms.

3 thoughts on “J-PAL’s Christmas shopping list”

  1. KVA

    December 1, 2009 at 3:18am

    This is a great point, Matt. For instance, re: #4 on the list, quotas for women in politics: it looks like the research behind this recommendation was primarily done in India, which has a unique political history and very particular experiences/traditions with democracy. These and other factors may make quotas more feasible/more effective there than in many other places without a similar historical experience.

    Another point that I wanted to raise is the issue of feasibility. Going back to the example of quotas, demonstrating that they are good for X, Y, and Z is one thing, but how do you actually implement them? They would likely have to be made law (as in India). While bednet and deworming programs are relatively straightforward to implement, quotas and other interventions are more complicated and may require much more sustained effort to carry out. It’s not as easy as demonstrating a positive impact and then implementing on a larger scale (which in itself isn’t so easy). This is where other perspectives (political economy?) could really contribute to our understanding of how to design and implement development programs.

  2. RFG

    December 23, 2009 at 6:46pm

    There are two types of validity for any study: internal and external. J-PAL’s approach to impact evaluation (using randomized controlled trials) is the best way we can ensure internal validity, i.e., the ability to make statements about causality, such as “deworming pills CAUSE children to go to school more.” External validity is trickier: it refers to the capacity to generalize from specific projects. A big part of external validity is representativeness, but there are other factors (think, for instance, of programs that only work when a relatively small fraction of the population gets them, but not if everyone did). This problem of external validity is present in all types of impact evaluations, not only the kind that J-PAL does.

    However, in general, researchers (at least J-PAL affiliates) do try to make some kind of statement about the external validity of the evaluation. This can be done in several ways: for instance, by not only saying that X has an impact on Y, but also understanding the mechanisms through which it has an impact. This way, we can see whether these mechanisms are also present in other countries or areas. Another way is by replicating projects; IPA (Innovations for Poverty Action) has done a lot of work re-checking the original evaluations in different settings before any big policy recommendations are made. The point being that researchers are usually aware of this problem.

  3. Matt

    December 24, 2009 at 5:03pm

    RFG -

    All good points, but I don’t believe J-PAL has done enough checking on external validity to warrant their strong endorsement of the interventions listed above, especially since those recommendations are not location-specific. Many of them are based on just a single study, which just isn’t enough.

    There’s very little incentive to do so: research dollars flow towards testing out new ideas, not re-running experiments, and publication incentives point the same way.
