If you are a fan of podcasts then you really should be listening to NPR’s Hidden Brain. This week’s episode is a fascinating look at the impact of “checklists” or “to-do” lists used across a number of different professions to offset human error. It recounts how, during the development of the Boeing B-17 “Flying Fortress” bomber in the late 1930s, a fatal crash of a prototype plane led the US Air Force to mandate the practice of running through checklists prior to future flights. While experts in high-skill professions like piloting or surgery typically feel confident in their abilities to get the job done, the addition of routine (and mundane) checks forces them to guard against unlikely but high-cost events.
In another recent podcast the journalist Sarah Kliff frames this as treating mistakes as plane crashes rather than car crashes, the former requiring a full rethink of how procedures are performed. She recounts how hospitals in the US began using checklists to reduce the incidence of central line infections. Even though medical staff are highly trained professionals who should know the correct procedures to reduce the chance of contamination, infection rates plummeted once they were forced to use checklists prior to a procedure (there were of course other complementary changes in policy).
This led me to wonder whether ‘checklist culture’ is reflected in modern development policy. One might argue that in some ways policy has become too checklist oriented. Many reform agendas – ranging from anti-money laundering standards to private sector reforms – rely on a simple list of indicators or policy changes that a country needs to check off in order to be compliant. While many of these agendas are centered around outcomes or processes worth achieving anyway, many are shallow in nature. This results in brittle institutions that look good on paper, but are incapable of doing much beyond that, a point laid out a long time ago by Lant Pritchett when he first spoke of isomorphic mimicry.
But there may be some ways in which the checklist mentality might be useful for decision makers in the development space. We know from psychology and behavioral economics that people often exhibit cognitive biases in their decision-making. This leads them to make decisions that are bad for them in the long term, but while the ramifications can be substantial, they are largely confined to the individual level.
The stakes are potentially a lot higher when those cognitive biases and errors are being made by those who make decisions that affect hundreds, thousands or millions of other people. It would then seem important that development professionals be able to act as impartial, rational decision makers; alas, there is evidence that we’re just as flawed as the rest of humankind. A recent working paper by researchers from the World Bank and the Universities of East Anglia and Oxford put development professionals from DFID and the World Bank to the test, aaaaaaand the results ain’t too pretty. From the paper’s abstract:
“Experiments conducted on a novel subject pool of development policy professionals (public servants of the World Bank and the Department for International Development in the United Kingdom) show that policy professionals are indeed subject to decision making traps, including sunk cost bias, the framing of losses and gains, frame-dependent risk-aversion, and, most strikingly, confirmation bias correlated with ideological priors, despite having an explicit mission to promote evidence-informed and impartial decision making. These findings should worry policy professionals and their principals in governments and large organizations, as well as citizens themselves.”
Thankfully, development professionals are not unchecked autocrats; our decisions are constrained by the structures of the institutions we work for. But what we don’t know is whether those institutions mitigate or amplify our biases or priors – I think cases can be made in either direction. Development bureaucrats certainly have to clear a lot of hurdles to get their projects off the ground – but those ‘checklists’ are largely about mitigating risk, ensuring a proposal has been properly vetted and that it is likely development-friendly. There is some evidence from the above paper that deliberation is effective in reducing these biases, but one wonders whether the type of deliberation that the subjects (in this case DFID economists) participated in mirrors at all the kind of peer-review or administrative checks that the average bureaucrat at DFID or the World Bank goes through.
So maybe we need checklists specifically to offset our biases. I’ll start with a few ideas for both bureaucrats and researchy-economists working on a proposal, note or paper, but would be interested to hear what yours would be.
- Is there any rigorous evidence supporting the argument I am making?
- Have I sat down and examined whether my beliefs are based on emotion or reasoning?
- Would an ordinary person who doesn’t study development or economics understand what I am saying?
- Have I made the case that my proposal addresses a development/poverty question, rather than justifying its existence through internal or external politics or momentum around some issue?
- Have I listed, at least in my own head, the reasons why I might be wrong about this?
- Would someone in another team/department/institution make better use of these resources that I control?
- Have I written down a contingency plan for when things go wrong?
- Have I thought about how I will know if something has gone wrong?
By the way, you can find the podcasts I mentioned here: