Every intervention is unique, perhaps unintentionally so.
Recently, Dan Honig of Johns Hopkins forwarded Ranil and me some thoughts he had in reaction to a passage from Albert Hirschman on development projects that he felt was pertinent to the discussion on the pros and cons of RCTs. What followed is a discussion (rant) between Dan, Ranil and me. I've edited out the e-maily bits for clarity:
Matt, just after I hit send on this I realized I should have included you on this – I generally think you're right on RCTs and the staleness of the conversation (and discussed this with Ranil a few months back some 2 days after you had dinner with him, hence the cc to Ranil) but feel like I've never seen this Hirschman frame and wondering if it struck you as interesting. And yes, basically I'm trying to catalyze you writing something cool on this so I can quote/reference it down the road
Reading Hirschman's Development Projects Observed for the first time, and as I read it he's with [Lant Pritchett and Michael Woolcock] on RCTs and causal density in international development projects. The quote below is from page 186 of the 1967 edition; italics are his, brackets mine. Just before this he suggests we may not be able to identify good indicators of effects ex ante, and thus presumably they couldn't be pre-specified in a trial, meaning we would be ill served by an RCT on a particular intervention even if we ignored external validity concerns.
"The indirect effects [of development projects] are so varied as to escape detection by one or even several criteria uniformly applied to all projects. Upon inspection, each project turns out to represent a unique constellation of experiences and consequences, of direct and indirect effects."
Hey Dan, that's a really interesting quote by Hirschman. If my interpretation is correct, it seems to be more damning for empirical evaluation in general than for RCTs in particular.
I'm not sure how I feel about this. Even if you move away from a simple, reduced-form causal framework, Hirschman's critique seems like it would apply. Even if development is a messy, complex thing that can't really be boiled down in an impact evaluation framework, we still rely on measurement when we talk about development, and any given set of measurements is going to leave out unmeasured things that might matter. We can point at improving test scores but leave out student stress, etc., and the set of important things we leave out will change depending on the context. I guess I see this as a problem of measurement rather than as a problem for RCTs.
I also wonder what this means for how an empirical researcher operates. Over the last few years, I have become incredibly suspicious of surprising, counter-intuitive results, where a researcher measures something outside of the standard set of outcomes and finds an effect. In a world of multiple hypothesis tests, expanding the set of outcomes to include as much of Hirschman's unique constellation as possible will open the door to a lot of false positives, which will end up getting written up and published.
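(A quick simulation makes the multiple-testing worry concrete. This is a generic sketch rather than anything from the actual exchange: the outcome counts, sample sizes, and the `any_false_positive_rate` helper are all illustrative assumptions. With no true effect on any outcome, the chance that a trial turns up at least one "significant" finding at the 5% level grows quickly with the number of outcomes tested.)

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def any_false_positive_rate(n_outcomes, n_sims=1000, n_per_arm=100, alpha=0.05):
    """Simulate RCTs with NO true treatment effect on any outcome and
    return the share of trials where at least one outcome tests 'significant'."""
    hits = 0
    for _ in range(n_sims):
        # Both arms drawn from the same distribution: any "effect" is noise.
        treat = rng.normal(size=(n_per_arm, n_outcomes))
        control = rng.normal(size=(n_per_arm, n_outcomes))
        _, pvals = stats.ttest_ind(treat, control, axis=0)
        if (pvals < alpha).any():
            hits += 1
    return hits / n_sims

# With one pre-specified outcome, roughly alpha (~5%) of null trials "find" something;
# with 20 outcomes, roughly 1 - 0.95**20, i.e. around 64%, do.
print(any_false_positive_rate(1))
print(any_false_positive_rate(20))
```

The exact numbers depend on the simulation settings, but the qualitative point stands: every outcome added to capture more of the "unique constellation" is another draw from the false-positive lottery unless the analysis corrects for it.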
So that was a rant. Um, what do you think, Ranil?