Result!

Duncan Green has a good pair of guest posts on the results / value for money agenda in aid and development. The first post, by Rosalind Eyben, suggests that the agenda runs a real risk of warping the way we conceive of development support, and not in a good way:

… donor governments … can show how many kilometres of roads they have built or numbers of babies vaccinated as compared with before they started the projects. But such facts reveal little about how the change was achieved and what can be learnt for future policy and practice… Donors are ignoring lessons long since learnt: without local people empowering themselves to change those less tangible factors that cannot be counted, once donor money stops the roads will crumble away and the next generation of babies will not be vaccinated…

Experienced staff and consultants know it…They have to work with complex problems – such as why maternal mortality rates refuse to go down – as if they were bounded problems… In a largely unpredictable and dynamic environment, rather than choosing a single ‘best option’, more value-for-money might be achieved by financing two or more different approaches to solving a complex problem, facilitating variously-positioned actors to implement an intervention according to their different theories of change and diagnoses and consequent purposes.

Claire Melamed from the ODI disagrees, sort of:

If you don’t define in advance what the objectives of an aid programme are, you leave it up to the managers who make the decisions and the politicians who guide them to impose their own values and prejudices onto the aid programme. Of course if they could all be trusted to make the right decision, there’s no problem. But evidence suggests that might be over-optimistic… [and] without measurement, there can be no accountability.

The real question is what results we are looking for, and how to measure them. Of course if donors want to do the wrong things, and measure the wrong things, they won’t get good results. But pointing to examples of the wrong way of using results and saying, ‘so let’s not measure results’, seems to me [a big folly]…

The two positions aren’t as far apart as their authors may think. Eyben’s post essentially argues that in complex situations, strengthening systems and supporting a range of possible approaches may be a better way of improving outcomes than rigid results monitoring, because what matters is not so much how a variable changes as how the process for changing that variable changes. Melamed’s argument is that what we measure is open for debate, but whether we measure is not, and that we could measure changes to processes as well as their outcomes, capturing some of the issues that Eyben raises. The positions are actually quite compatible.

I’m not sure, however, that this should really even be the focus of our debate. It seems to me that we should be looking more at some central problems of how results management is undertaken and how it distorts our incentives and actions. The central issues here are the conflation of assessment with quantitative measurement, the bias towards measurable impacts, and the bias towards time-specific measurement.

Firstly, there’s the conflation of assessment with measurement. This is a big problem across the public sector in the UK, so it’s no surprise to see it spread to the international development sphere as well. The idea that a result must be quantifiable to be objectively verified has infected virtually every aspect of development work. Targets are not considered appropriate for strategies or project documents until they are quantified. It’s a complete fallacy. There are plenty of positive outcomes that simply do not lend themselves to measurement but can be objectively assessed qualitatively. Take capacity building. How do you measure the improvement in a Government’s ability to think critically about the budget process and produce a budget that is the product of better economic planning? If the budget has always been delivered on time and executed accurately, it’s virtually impossible to quantify any change in the budget itself. Yet it’s equally difficult to quantify the change through its ultimate impact on the economic and social wellbeing of the state’s citizens, because there is too much noise in the relationship.

Difficulties like this tend to lead in one of two directions. The first is trying to find a proxy measurement for progress. This can sometimes work, but more often it results in ridiculous measures that tell us almost nothing about the activity undertaken. It’s extremely common to see capacity building assessed by the number of training workshops held. This is crazy: training workshops can be completely ineffectual, and the best way of building capacity is nearly always teamwork between a trainer / TA and local staff over a prolonged period, during which staff can learn at their own pace and the trainer can modify his methods and approaches based on feedback. Yet this is very difficult to measure, as is the outcome. In an aid effectiveness department, the outcome of a better trained staff body may paradoxically be that fewer aid agreements are signed in a given year, as staff become more discerning about what should and should not be agreed to and take to rejecting poorly designed proposals more regularly.

If this is recognized without a corresponding increase in the use of qualitative assessment, it may just lead to the second problem I identified: a movement away from supporting difficult-to-measure issues and into those which lend themselves to quantification. So, instead of focusing on areas where results are less countable or tangible, donors tend to push their support to areas where it is much easier to measure outcomes, such as vaccination or disease eradication. Noise still comes into play here, but there’s no doubt that it’s easier to measure what needs to be achieved, subject to some caveats covered below. This herding of support into the measurable areas creates donor overcrowding in sectors such as health and education, where in every country I’ve worked in, the volume of aid and the number of donors involved far outweigh those in other sectors. The incentives created by a rigid conception of what results are and how they can be measured make it harder for Governments to secure funding across the full breadth of their funding needs.

Yet even within those areas where outcomes can be measured with some reliability, such as vaccination, time scales matter. The vast majority of donor-funded activities are completed within five years, and it’s rare for a project to do any significant monitoring after its lifespan is over. In other words, the current regime of results measurement measures the value of an intervention over a short time period. As Eyben points out, the biggest challenge in development is not simply creating change, but sustaining change. If we know that the success of our programme (and, inevitably, individuals’ jobs) will be measured after five years, our time horizon shortens. The fact of measurement ensures that our actions focus on achieving results within that period, which in turn encourages neglect of the longer-term, fundamental changes that will be needed to sustain improvements.

I do believe that a level of results monitoring and management is important. I’m not going to dispute that, and I don’t think anyone will seriously argue otherwise. The terms of the debate must therefore be shifted to the real issues. Firstly, what constitutes a result, and what constitutes acceptable evidence for it? The current balance is weighted far too strongly towards quantification and scientific measurement, but this is not a pure science, or even close to one. Secondly, how do we ensure that the process of defining targets does not in itself distort the allocation of aid? Thirdly, how do we ensure that the time period for measuring results does not shorten time horizons, even given that attribution of success becomes harder as time horizons extend? It seems to me that these should be the focus of our attention, not a debate on whether results management is needed at all.

One thought on “Result!”

  1. MJ

    March 21, 2011 at 7:05am

    I think the time horizons issue is a critical one, and it brings us back to the old problem of measuring intervention outputs rather than assessing development outcomes.

    Critical thinking (in any process) is, of course, just about the hardest thing to assess, and yet it is critically important. I also completely agree with you about how to and how not to stimulate it. But even if you’ve got the method right, one can easily see how this could become a black hole for donor support, sucking up endless cash. It’s all very well to ask for subjective assessments from those involved, but they might have strong incentives for talking up their level of success.

    I wonder whether this is something that COD aid could help with? Under COD it’s up to the developing country whether to hire any TAs, and ultimately, if they don’t deliver, then they don’t get the aid money. My guess is that, initially at least, TA employment might go down, but then it might come back up when recipient govts learn for themselves the true value of critical thinking.

    MJ
