Duncan Green has a good pair of guest posts on the results / value for money agenda in aid and development. The first post, by Rosalind Eyben, suggests that the agenda runs the real risk of warping the way we conceive development support, and not in a good way:
… donor governments … can show how many kilometres of roads they have built or numbers of babies vaccinated as compared with before they started the projects. But such facts reveal little about how the change was achieved and what can be learnt for future policy and practice… Donors are ignoring lessons long since learnt: without local people empowering themselves to change those less tangible factors that cannot be counted, once donor money stops the roads will crumble away and the next generation of babies will not be vaccinated…
Experienced staff and consultants know it… They have to work with complex problems – such as why maternal mortality rates refuse to go down – as if they were bounded problems… In a largely unpredictable and dynamic environment, rather than choosing a single 'best option', more value for money might be achieved by financing two or more different approaches to solving a complex problem, facilitating variously-positioned actors to implement an intervention according to their different theories of change and diagnoses and consequent purposes.
Claire Melamed from the ODI disagrees, sort of:
If you don't define in advance what the objectives of an aid programme are, you leave it up to the managers who make the decisions and the politicians who guide them to impose their own values and prejudices onto the aid programme. Of course if they could all be trusted to make the right decision, there's no problem. But evidence suggests that might be over-optimistic… [and] without measurement, there can be no accountability.
The real question is what results we are looking for, and how to measure them. Of course if donors want to do the wrong things, and measure the wrong things, they won't get good results. But pointing to examples of the wrong way of using results and saying, 'so let's not measure results', seems to me [a big folly]…
The two positions aren't as far apart as they may think. Eyben's post is essentially arguing that in complex situations, systems strengthening and supporting a range of possible approaches may be the best way of improving outcomes, rather than rigid results monitoring, because what matters is not so much how a variable changes as how the process for changing that variable changes. Melamed's argument is that what we measure is open for debate, but that we measure is not, and we could measure changes to processes as well as their outcomes, capturing some of the issues that Eyben raises. The positions are actually quite compatible.
I'm not sure, however, that this should really even be the focus of our debate. It seems to me that we should be looking more at some central problems of how results management is undertaken and how it distorts our incentives and actions. The central issues here are the conflation of assessment with quantitative measurement, the bias towards measurable impacts, and the bias towards time-specific measurement.
Firstly, there's the conflation of assessment with measurement. This is a big problem across the public sector in the UK, so it's no surprise to see it spread to the international development sphere as well. The idea that a result must be quantifiable to be objectively verified has infected virtually every aspect of development work. Targets are not considered appropriate for strategies or project documents until they are quantified. It's a complete fallacy. There are plenty of positive outcomes that simply do not lend themselves to measurement, but can be objectively assessed qualitatively. Take capacity building. How do you measure the improvement in a Government's ability to think critically about the budget process, and to produce a budget that is the product of better economic planning? If the budget has always been delivered on time and executed accurately, it's virtually impossible to quantify any change in the budget itself. Yet it's equally difficult to quantify the change through its ultimate impact on the economic and social wellbeing of the state's citizens, because there is too much noise in the relationship.
Difficulties like this tend to lead in one of two directions. The first is trying to find a proxy measurement for progress. This can sometimes work, but more often results in ridiculous measures that tell us almost nothing about the activity undertaken. It's extremely common to see capacity building assessed on the number of training workshops held. This is crazy – training workshops can be completely ineffectual, and the best way of building capacity is nearly always teamwork between a trainer / TA and local staff over a prolonged period, during which staff can learn at their own pace and the trainer can modify their methods and approaches based on feedback. Yet this is very difficult to measure – as is the outcome. In an aid effectiveness department, the outcome of a better trained staff body may paradoxically be that fewer aid agreements are signed in a year, as staff become more discerning about what should and should not be agreed to and take to rejecting poorly designed proposals more regularly.
If this is recognized without a corresponding increase in the use of qualitative assessment, it may just lead to the second problem I identified: a movement away from supporting difficult-to-measure issues and towards those which lend themselves to quantification. So, instead of focusing on areas where results are less countable or tangible, donors tend to push their support to areas where it is much easier to measure outcomes, such as vaccination or disease eradication. Noise still comes into play here, but there's no doubt that it's easier to measure what needs to be achieved, subject to some caveats covered below. This herding of support towards the measurable creates donor overcrowding in areas such as health and education – where, in every country I've worked in, the volume of aid and number of donors involved far outweighs those in other sectors. The incentives created by a rigid conception of what results are and how they can be measured make it harder for Governments to secure funding across the full breadth of their needs.
Yet even within those areas where outcomes can be measured with some reliability, such as vaccination, as Eyben points out – time scales matter. The vast majority of donor-funded activities are completed within five years. It's rare for a project to do any significant monitoring after the lifespan of the project is over. In other words, the current regime of results measurement measures the value of an intervention over a short time period. As Eyben points out, the biggest challenge in development is not simply creating change, but sustaining change. If we know that the success of our programme (and inevitably, individuals' jobs) will be measured after five years, our time horizon shortens. The fact of measurement ensures that our actions focus on achieving results within that period, which in turn encourages neglect of the longer-term, fundamental changes that will be needed to sustain improvements.
I do believe that a level of results monitoring and management is important. I'm not going to dispute that, and I don't think anyone will seriously argue otherwise. The terms of the debate must therefore be shifted to the real issues. Firstly, what constitutes a result, and what constitutes acceptable evidence for it? The current balance is far too strongly weighted towards quantification and scientific measurement, but this is not a pure science, or even approaching one. Secondly, how do we ensure that the process of defining targets does not in itself distort the process of allocating aid? Thirdly, how do we ensure that the time period for measuring results does not shorten time horizons, even given the fact that as time horizons extend, attribution of success becomes harder? It seems to me that these should be the focus of our attention, not a debate on the need for results management altogether.