Result!

Duncan Green has a good pair of guest posts on the results / value for money agenda in aid and development. The first post, by Rosalind Eyben, suggests that the agenda runs a real risk of warping the way we conceive of development support, and not in a good way:

… donor governments … can show how many kilometres of roads they have built or numbers of babies vaccinated as compared with before they started the projects. But such facts reveal little about how the change was achieved and what can be learnt for future policy and practice… Donors are ignoring lessons long since learnt: without local people empowering themselves to change those less tangible factors that cannot be counted, once donor money stops the roads will crumble away and the next generation of babies will not be vaccinated…

Experienced staff and consultants know it…They have to work with complex problems – such as why maternal mortality rates refuse to go down – as if they were bounded problems… In a largely unpredictable and dynamic environment, rather than choosing a single ‘best option’, more value-for-money might be achieved by financing two or more different approaches to solving a complex problem, facilitating variously-positioned actors to implement an intervention according to their different theories of change and diagnoses and consequent purposes.

Claire Melamed from the ODI disagrees, sort of:

If you don’t define in advance what the objectives of an aid programme are, you leave it up to the managers who make the decisions and the politicians who guide them to impose their own values and prejudices onto the aid programme. Of course if they could all be trusted to make the right decision, there’s no problem. But evidence suggests that might be over-optimistic… [and] without measurement, there can be no accountability.

The real question is what results we are looking for, and how to measure them. Of course if donors want to do the wrong things, and measure the wrong things, they won’t get good results. But pointing to examples of the wrong way of using results and saying, ‘so let’s not measure results’, seems to me [a big folly]…

The two positions aren’t as far apart as the authors may think. Eyben’s post essentially argues that in complex situations, strengthening systems and supporting a range of possible approaches may be a better way of improving outcomes than rigid results monitoring, because what matters is not so much how a variable changes as how the process for changing that variable changes. Melamed’s argument is that what we measure is open for debate, but that we measure is not, and that we could measure changes to processes as well as to their outcomes, capturing some of the issues Eyben raises. The positions are actually quite compatible.

I’m not sure, however, that this should really be the focus of the debate. It seems to me that we should be looking more closely at some central problems with how results management is undertaken and how it distorts our incentives and actions. The central issues here are the conflation of assessment with quantitative measurement, the bias towards measurable impacts, and the bias towards time-specific measurement.
