Duncan Green has a good pair of guest posts on the results / value for money agenda in aid and development. The first post, by Rosalind Eyben, suggests that the agenda runs the real risk of warping the way we conceive development support, and not in a good way:
… donor governments … can show how many kilometres of roads they have built or numbers of babies vaccinated as compared with before they started the projects. But such facts reveal little about how the change was achieved and what can be learnt for future policy and practice… Donors are ignoring lessons long since learnt: without local people empowering themselves to change those less tangible factors that cannot be counted, once donor money stops the roads will crumble away and the next generation of babies will not be vaccinated…
Experienced staff and consultants know it… They have to work with complex problems, such as why maternal mortality rates refuse to go down, as if they were bounded problems… In a largely unpredictable and dynamic environment, rather than choosing a single "best option", better value for money might be achieved by financing two or more different approaches to solving a complex problem, facilitating variously-positioned actors to implement an intervention according to their different theories of change and diagnoses and consequent purposes.
Claire Melamed from the ODI disagrees, sort of:
If you don't define in advance what the objectives of an aid programme are, you leave it up to the managers who make the decisions and the politicians who guide them to impose their own values and prejudices onto the aid programme. Of course if they could all be trusted to make the right decision, there's no problem. But evidence suggests that might be over-optimistic… [and] without measurement, there can be no accountability.
The real question is what results we are looking for, and how to measure them. Of course if donors want to do the wrong things, and measure the wrong things, they won't get good results. But pointing to examples of the wrong way of using results and saying, "so let's not measure results", seems to me [a big folly]…
The two positions aren't as far apart as they may think. Eyben's post is basically arguing that in complex situations, systems strengthening and supporting a range of possible approaches may be the best way of improving outcomes, rather than rigid results monitoring, because what matters is not how a variable changes so much as how the process for changing the variable changes. Melamed's argument is that what we measure is open for debate, but that we measure is not, and we could measure changes to the processes as well as their outcomes, to capture some of the issues that Eyben raises. The positions are actually quite compatible.
I'm not sure, however, that this should really even be the focus of our debate. It seems to me that we should be looking more at some central problems of how results management is undertaken and how it distorts our incentives and actions. The central issues here are the conflation of assessment with quantitative measurement, the bias towards measurable impacts, and the bias towards time-specific measurement.