In a new paper, Benjamin Olken, Junko Onishi and Susan Wong examine the effects of a cash-grant programme in Indonesia, randomly allocated across sub-districts, with some sub-districts receiving a cash-incentive bonus conditional on performance:
This paper reports an experiment in over 3,000 Indonesian villages designed to test the role of performance incentives in improving the efficacy of aid programs. Villages in a randomly-chosen one-third of subdistricts received a block grant to improve 12 maternal and child health and education indicators, with the size of the subsequent year's block grant depending on performance relative to other villages in the subdistrict. Villages in remaining subdistricts were randomly assigned to either an otherwise identical block grant program with no financial link to performance, or to a pure control group. We find that the incentivized villages performed better on health than the non-incentivized villages, particularly in less developed provinces, but found no impact of incentives on education. We find no evidence of negative spillovers from the incentives on untargeted outcomes. Incentives led to what appear to be more efficient use of block grants, and led to an increase in labor from health providers, who are partially paid fee-for-service, but not teachers. On net, between 50-75% of the total impact of the block grant program on health indicators can be attributed to the performance incentives.
Critics and proponents of cash-on-delivery aid should give this a good read. Of course, it won't settle any arguments, but it provides some interesting evidence. A couple of thoughts:
The authors find that there are no Milgrom & Holmström-type external spillovers (i.e. you start paying me to publish more, so I start publishing in worse journals) and deduce that non-targeted indicators might have benefited from the extra effort. One caveat: they only look at non-targeted indicators within the domains of health and education, so we can't say what happened to indicators outside these domains that went unobserved or unreported.
Prior to the intervention, the optimal allocation of funds was unknown, but it seems that the villages solved it: they extracted inefficient spending on school materials and re-allocated it to health. As far as black-box solutions go, this is great – but note that the only thing that might have kept educational performance constant was the addition of the performance measures.
The performance measures were relative to other sub-districts, so there was no set 'target' that villages had to meet, and thus less scope for threshold effects (slacking off because you are confident you will meet the target, or because the target can never really be met).
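That relative design can be sketched as a toy tournament rule (my own illustration – the function, village names and proportional formula are assumptions, not the paper's actual mechanism):

```python
def relative_grants(improvements, total_pool):
    """Toy relative-performance allocation: each village's share of next
    year's block-grant pool depends on its measured improvement relative
    to the other villages in the subdistrict, so there is no fixed target
    to hit (or give up on) -- only one's neighbours' results.
    `improvements` maps village -> measured improvement on the indicators.
    """
    # ignore negative improvements for this toy rule
    clamped = {v: max(0.0, i) for v, i in improvements.items()}
    total = sum(clamped.values())
    if total == 0:
        # no measured improvement anywhere: split the pool equally
        n = len(clamped)
        return {v: total_pool / n for v in clamped}
    return {v: total_pool * i / total for v, i in clamped.items()}

# village A improved three times as much as B, so gets three times the grant
shares = relative_grants({"A": 3.0, "B": 1.0, "C": 0.0}, total_pool=100_000)
```

The point of the tournament structure is visible in the code: a village's payout moves with its neighbours' performance, so there is no absolute threshold to coast past or despair of.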
One question we should be asking ourselves when we try and tie this to cash-on-delivery aid: how might a village respond differently to incentives than a national government?
The UK’s Conservative Party has just released a policy paper on international development. This ‘green paper’ is basically a discussion of the party’s agenda for international development – the sort of reforms they would enact if they took control of the government. Since the Conservatives are widely expected to take power sometime next year, the green paper has received a large amount of scrutiny.
One of the policies embraced in the green paper is the new(ish) aid modality Cash on Delivery, which was thought up by the folks at the Center for Global Development a few years ago. The basic description of COD aid (yes, it already has its own acronym) can be found on their website:
Under "cash on delivery" aid, donors would commit ex ante to pay a specific amount for a specific measure of progress. In education, for example, donors could promise to pay $100 for each additional child who completes primary school and takes a standardized competency test.
A credible baseline survey would be conducted, the country would publish completion numbers and test scores, and then the donor would pay for an independent audit to verify the numbers. The payment would be made upon a successful audit. Payments would be "cash on delivery" – made only after measurable progress, only for as much as is verifiably achieved, and without prescribing the policy or means to achieve progress.
The payment for the results would then be fully fungible – the recipient government would be allowed to do anything they wanted with it (although the reality is that there will likely be some limits on this). COD aid is initially being targeted at the education sector, likely because the outputs are, relative to most outcomes in this field, easier to measure. The CGD has been working on this concept for several years, writing discussion papers and concept notes – the background information can be found here, along with a full-fledged FAQ section.
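The mechanics described above can be sketched in a few lines (a toy illustration: the $100 rate comes from CGD's example, but the function and its parameters are my own assumptions, not their actual formula):

```python
def cod_payout(baseline_completers, current_completers,
               rate_per_child=100, audit_passed=True):
    """Toy cash-on-delivery payout: the donor pays a fixed rate for each
    additional child completing school beyond the baseline, but only
    after an independent audit verifies the numbers."""
    if not audit_passed:
        return 0  # no verified progress, no payment
    additional = max(0, current_completers - baseline_completers)
    return additional * rate_per_child

# e.g. a baseline of 10,000 completers and 10,500 verified this year:
# 500 additional children x $100, paid out as fully fungible funds
print(cod_payout(10_000, 10_500))  # 50000
```

Note that the whole contract hangs on two things the sketch makes explicit: a credible baseline and a pass/fail audit – which is exactly where the concerns below come in.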
There are a couple of things about COD aid that I find quite promising:
The donors don’t get involved in policymaking; they just pay for the results = no more development training wheels!
There would be no attempt to tell the government what they should do with the payout – again, another point in favour of country ownership.
It would represent a way of thinking about aid that, for a change, is impact-centred.
However, there are a number of things about the scheme I’m not as confident about – some of my concerns are purely theoretical (and so are likely wrong) – some are observational:
The burden of the task – Much of the literature on incentives and the public sector has come to the same conclusion: designing incentive contracts for public institutions is not easy, and most of the time low-powered incentives prevail. Part of the reason is that outputs are usually hard to measure. One could argue that COD, as it's currently being presented, avoids this problem by very carefully measuring output. However, as Duncan Green pointed out in his recent post on the Green Paper, there are plenty of reasons school results and attendance could worsen (or get better) that are totally out of the control of the education authority. The less control they have, the greater the risk burden they carry (Nancy Birdsall responded to that concern here). Agents that are forced to face too much risk might opt to just not play the game – they'll make little or no effort to affect the outcome. While incentives might be useful in the short run, a distant, difficult target that requires unprecedented effort might just be too much for the average ministry. For good examples of public incentive schemes falling short of their desired impact, see Heckman's work on the JTPA or Burgess on Jobcentre Plus.
A numbers game – Development is a tricky business – on one hand, we want to know that our intervention has a measurable impact. On the other hand, we should always be concerned about turning the business into a stats game (readers familiar with The Wire will know the pitfalls of the pursuit of stats). One always worries if quality is being abandoned for the sake of quantity. To be fair, CGD has repeatedly addressed this issue – their hope is also that very strict evaluation will deter attempts to game the numbers.
Donors are still playing with sticks and carrots – COD still carries an uneasy premise that makes me wince: it is our job (as donors) to incentivise recipient governments to do the right things for their people – i.e. we know the way to salvation, if only these bloody governments would listen to us. Again, to be fair, this is no worse than the way aid has historically been handled – it's just a bit patronising. It could be the right way to approach things, but I tend to believe that we should be less concerned with getting governments to treat their people properly because we'll give them money for it, and more concerned with getting governments to treat their people properly because they have a natural, endogenous incentive to do so.
All these things said, I'm certain that the Center is just as worried about these same issues. They aren't blindly pushing this new modality as an instant cure to the woes of ineffective aid – they're approaching it cautiously, slowly building on the discussion each year, and rolling out a pilot programme to see how successful it really can be. That's the right approach – yet sometimes great-sounding but untested ideas can be quickly adopted and converted into policy. My worry is that the Conservative Party, eager to distinguish its new development policy, will take up the idea and run with it before the Center finishes making up its mind about whether it's really a good idea!