“Many real-world problems are not easily described with the kind of precision that professional mathematicians insist upon. This is due to the limitations of data, the costs of collecting and analyzing data, and the inherent difficulties of giving mathematical expression to the complexity of human behavior.” This strikes me as very true. At what point are we expecting too much from our impact assessments?
While the more rigorous impact assessments certainly require some statistical know-how and reliable data, they don’t necessarily require giving “mathematical expression” to human behaviour. Even though the resulting academic publications might have some calculus window-dressing, impact measurements are generally about as atheoretical as they come: what was the impact of X on Y? When academics move on and start asking why X impacts Y, they often retreat to the black box of mathematical modelling (usually in a desperate attempt to avoid qualitative methods, about which Chris Blattman has written an excellent post here).
Alanna also discusses a point made by Andrew Natsios:
Natsios points out that USAID has begun to favor health programs over democracy strengthening or governance programs because health programs can be more easily measured for impact. Rule of law efforts, on the other hand, are vital to development but hard to measure and therefore get less funding.
I think this is the most important criticism of over-reliance on empirical assessment: donors will prefer to fund causes that can easily signal an impact that can be touted back home. A reasonable counter is that those who swim in murkier waters just have to work harder to show their impacts, but in reality they are more likely either to let the effort collapse or to migrate over to the programs that do get the funding.
While I’m partially sympathetic to doubts about impact analysis, I think that much of the criticism (though not Alanna’s) is self-serving: let us continue using the same methods we’ve always used, which conveniently always show an impact despite the never-ending micro-macro paradox.
That’s fine if you choose to reject statistical rigour, but please don’t pop up five years later claiming that your project or aid flow is responsible for all sorts of wonderful things you can’t really prove. Some may be content with photos, anecdotes and correlations, but don’t be surprised if the rest of us aren’t.
This doesn’t mean that everything should (or could) be judged by a hardcore RCT starting tomorrow, but when the evidence is less direct, the onus is on the presenter to be more modest and careful with their assertions.