If only you knew the power of nagging

Give quiche a chance

I’m naturally a bit skeptical of ground-level interventions that don’t involve cash, needles or textbooks. Anything that involves dubiously titled training or “empowerment” sets off my very cynical alarm bells. However, I’m beginning to be persuaded by the evidence that targeted information campaigns work.

First there was Pedro Vicente and Paul Collier’s study of a randomised anti-violence campaign staged prior to the 2007 Nigerian elections, which showed significant reductions in violence in the treated districts. Then there was the “Heckle and Chide” study of minibuses in Kenya: a random treatment group was given posters advising passengers to speak up if the minibus drivers drove dangerously (which is pretty much what minibus drivers are born to do). The treatment group saw sizable declines in insurance claims, including those for injury and death.

Now there is a soon-to-be-published paper by Martina Björkman and Jakob Svensson, offering a unique randomised intervention:

  1. Assess local health providers and inform the communities of their relative performance using ‘report cards’.
  2. Encourage these communities to form groups to monitor local health performance.
  3. Sit back and see what happens.

A year after the intervention, a repeat study revealed that the treated communities had harder-working health providers, higher rates of immunisation, and significantly reduced rates of child mortality and underweight children, all with the same levels of funding.

The best part of the study is what it doesn’t investigate: exactly what the communities were doing to force these changes (there is some rough evidence that they became more active in electing and dissolving the local provider management committees). My guess is that a fair amount of nagging was involved.

I’ve come to believe that a crucial part of development is strengthening the accountability link between citizens and their government (not to be confused with enforcing accountability externally), especially when enforcing that accountability is costly to citizens (in this case, the cost is time spent hassling health workers).

A few questions remain: Is the effect persistent, or would health workers become more resistant to this informal accountability over time? Is it scalable? Which part of the intervention was key: the information transfer allowing for yardstick comparisons between districts, or the “empowerment” workshops? My hunch is the former.

(Bonus points to those who got the Red Dwarf reference).

4 thoughts on “If only you knew the power of nagging”

  1. Paul Revill

    July 6, 2009 at 5:54pm

    Matt, just a quick comment on this: this is exactly the intervention Care trialled in 2 districts in Malawi way back in 2004; I think it even commenced 2-3 years earlier. I’m pretty sure people from the World Bank went to view the project also … funny how ideas get recycled with little mention of the humble fellows who were first working on this stuff.

  2. Ranil Dissanayake

    July 7, 2009 at 8:43am

    Interesting stuff. A few further points to raise, though.

    1) The accountability question is more complex than we’re allowing for here. The micro-evidence on specific interventions may be compelling, but these interventions are characterised by defined parameters within which accountability is being increased (e.g. service provision) and by being locally restricted.

    But the effects of certain kinds of citizenry-Government accountability might be different. What happens when a Government is accountable to two regions of equal power and has to decide on a resource split between them for healthcare provision? Or when a land reform with long-term economic benefits is put to referendum? The presence of an over-developed form of accountability at a macro level may hamper the ability to make the difficult decisions and trade-offs that development policy normally requires.

    On the other hand, accountability for resource use can reduce corruption and wastage of resources. And micro interventions, as you mention, might have a positive impact.

    So basically, what I’m saying is that structures of accountability are just as important as the principle. It might not be a popular sentiment (though at least Collier might agree with me!), but too much accountability can be a bad thing.

    2) The second issue to consider is the type of information accountability is based on (for accountability cannot exist without information). A problem increasingly emerging in the UK, for example, is one legacy of Thatcherism: the counting and quantifying of absolutely everything. This was born of a legitimate desire to improve public resource use, but it has descended over time into a proliferation of indicators and targets, many of which don’t even measure what they claim to, and which cause perverse behaviours ranging from target-chasing to outright cheating. The balance between quantitative and qualitative information has still not been struck.

    3) Finally, I imagine that the interventions in the paper you quote are complementary, especially among poorly educated populations. Information must be produced; that’s necessary. But it is not sufficient if people don’t know how to interpret it; hence training unlocks the potential of the system.

    That said, we also need to consider how long training will ‘last’ for. Think of your experiences in Malawi: how many times had people been trained on something, only to have to go to training again six months later? I was flicking through (I think) Chris Blattman’s blog yesterday, and he mentioned a similar experience with villagers rather than Government officials. Clearly some interventions work and others don’t, and we need to understand why.

    I’m generally a cynic about these kinds of intervention, not on fundamental principle but because these nuances are rarely explored or considered in the planning stages.

  3. Matt

    July 7, 2009 at 10:57am

    Paul – Do you have a citation for that study?

    Ranil

    1) Of course it is, but do we need to delve into the specifics to say it works? Some of the points you raise are classic issues in the public incentives literature: drawing effort or resources away from other tasks, focusing on easy-to-measure or easy-to-observe metrics (i.e. the stats game in The Wire).

    Yes, structures matter, and context matters – I’m not suggesting a California-style referendum gang-bang. In fact, I’m not suggesting anything. I’m just showing a concrete example where enhanced, intangible and unobservable accountability has a tangible result.

    2) Of course. The information set given to the villagers was quite large, but it was reduced to a simple set of cards. The targets were set by groups of villagers, not by bureaucrats. There are many possibilities for target bias, but this tends to be an issue for more developed countries: target bias matters more when the NHS is trying to figure out the best way to get a positive stat out of you. When the medical issues are very basic (a limited set of problems such as malaria and nutrition), target bias is going to be less of a problem.

    3) You’ll find a link to the Blattman post on training already in my post.

    You should be a cynic, but do you disagree that the intervention had a result? Sure, you can reject the concept based on your theoretical objections, but are you rejecting the empirical result?

  4. Ranil Dissanayake

    July 8, 2009 at 10:50am

    Good points. I wasn’t rejecting the specific intervention. My point was that the policy generalisations we can draw from it are much more restricted than they may first appear. Unfortunately, the way aid programmes work tends to be ‘if it works somewhere, extend the principle’. That’s what I’m cautioning against.
