In a piece for Project Syndicate released today, Ricardo Hausmann makes a grand case against evidence-based policies, specifically the rise of randomized controlled trials:
> My main problem with RCTs is that they make us think about interventions, policies, and organizations in the wrong way. As opposed to the two or three designs that get tested slowly by RCTs (like putting tablets or flipcharts in schools), most social interventions have millions of design possibilities and outcomes depend on complex combinations between them. This leads to what the complexity scientist Stuart Kauffman calls a “rugged fitness landscape.”
After presenting a hypothetical case of an RCT that tests for, and fails to find, an impact of tablets on learning in schools, he offers an alternative approach, one that relies on rapid experimentation and adaptation:
> Consider the following thought experiment: We include some mechanism in the tablet to inform the teacher in real time about how well his or her pupils are absorbing the material being taught. We free all teachers to experiment with different software, different strategies, and different ways of using the new tool. The rapid feedback loop will make teachers adjust their strategies to maximize performance.
>
> Over time, we will observe some teachers who have stumbled onto highly effective strategies. We then share what they have done with other teachers.
>
> Notice how radically different this method is. Instead of testing the validity of one design by having 150 out of 300 schools implement the identical program, this method is “crawling” the design space by having each teacher search for results. Instead of having a baseline survey and then a final survey, it is constantly providing feedback about performance. Instead of having an econometrician do the learning in a centralized manner and inform everybody about the results of the experiment, it is the teachers who are doing the learning in a decentralized manner and informing the center of what they found.
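The contrast Hausmann draws – testing one fixed design versus "crawling" a rugged design space – can be sketched as a toy search problem. Everything below is an illustrative assumption, not Hausmann's model: a made-up, many-peaked `outcome` function over ten binary design choices, a head-to-head test of two pre-chosen designs, and thirty simulated "teachers" each hill-climbing locally on the same landscape.

```python
import random

random.seed(0)

# Hypothetical rugged objective: a learning outcome built from
# interactions between neighboring design choices, so it has many
# local peaks (loosely in the spirit of Kauffman's NK landscapes).
def outcome(design):
    n = len(design)
    return sum((design[i] ^ design[(i + 1) % n]) + design[i] * design[i - 1]
               for i in range(n))

def random_design(n=10):
    return [random.randint(0, 1) for _ in range(n)]

# "RCT-style": compare one pre-chosen candidate against the status quo.
status_quo = random_design()
candidate = random_design()
rct_best = max(outcome(status_quo), outcome(candidate))

# "Crawling" the design space: each teacher flips one design choice
# at a time and keeps any change that does not hurt the outcome.
def hill_climb(design, steps=50):
    best = list(design)
    for _ in range(steps):
        trial = list(best)
        trial[random.randrange(len(trial))] ^= 1  # tweak one choice
        if outcome(trial) >= outcome(best):
            best = trial
    return best

teachers = [hill_climb(random_design()) for _ in range(30)]
crawl_best = max(outcome(t) for t in teachers)

print("best of two tested designs:", rct_best)
print("best found by 30 local searchers:", crawl_best)
```

With a noiseless, instantly observable outcome, the decentralized searchers reliably match or beat the single head-to-head test – which is precisely the premise the rest of this post questions.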
Hausmann makes a compelling argument here, but it all hinges on an exceptional premise: that teachers have access to a magical device that gives them *real time* feedback on student learning. Iteration and adaptation make a lot of sense… if you are in an environment where you can actually observe the immediate effects of your decisions and be sure that those decisions are having a causal impact.
But most of us are not in those environments. Many teachers might have a sense of how well their particular method works, but in the absence of a technology that can provide high-quality real-time feedback, it would be very hard to be sure. Most of us operate in environments where we have little idea whether what we are doing is effective at all. Even after 32 years of direct observation and some experimentation, I still can’t figure out whether spicy food gives me indigestion.
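How badly noise slows this kind of low-level learning can be made concrete with a small Monte Carlo sketch. The numbers here are invented for illustration: two teaching methods whose true difference is one point, and we ask how many paired observations it takes before the genuinely better method *looks* better 95% of the time.

```python
import random

random.seed(1)

# Illustrative only: one binary choice is truly worth +1;
# everything else a teacher observes is day-to-day noise.
def observed(choice, noise_sd):
    return (1.0 if choice else 0.0) + random.gauss(0, noise_sd)

def trials_needed(noise_sd, threshold=0.95, max_n=10_000):
    # Rough Monte Carlo: smallest number of paired observations
    # for which the better choice wins the comparison 95% of the time.
    for n in range(1, max_n):
        wins = 0
        for _ in range(200):
            a = sum(observed(True, noise_sd) for _ in range(n)) / n
            b = sum(observed(False, noise_sd) for _ in range(n)) / n
            wins += a > b
        if wins / 200 >= threshold:
            return n
    return max_n

low = trials_needed(noise_sd=0.5)
high = trials_needed(noise_sd=5.0)
print("observations needed, low-noise classroom:", low)
print("observations needed, noisy classroom:   ", high)
```

In the low-noise setting a handful of observations suffice; once the noise swamps the signal, the required number of observations grows with the square of the noise, which is exactly the regime where day-to-day tinkering stops being informative.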
Even when we can successfully parse the noise of life and match an action to a reaction, low-level experimentation still opens the door to all sorts of internal biases. Human beings are fantastic at creating narratives (I feel good today, so it must have been that thing I did yesterday) that would wither under larger-scale experimentation.
Of course, there are clear examples of low-level, rapid experimentation succeeding when we have access to technologies that give us good, quick feedback. Bridge Academies, now one of the largest private school providers in the world, succeeded largely due to a very high degree of internal experimentation. But to accomplish this, Bridge had to have access to a wealth of real-time data on student achievement and attendance, as well as enough centralized control to experiment across classrooms and schools.
But in reality, these kinds of feedback technologies just don’t exist in many contexts, at least not yet. If I am working in a Ministry of Health in a developing country and want to discern whether a given health intervention has had an impact, I won’t necessarily have access to real-time data on hospital admissions. Instead, I would have to rely on costly household surveys that take time to collect. This slows iteration and adaptation to the point where a randomized controlled trial combined with some qualitative fieldwork actually looks pretty attractive.
RCTs are far from a perfect solution, and Hausmann is right that they can be slow, blunt tools for figuring out exactly how an intervention should be implemented. But that is a reason to complement them with other methods – not to chuck them out the door. If a teacher comes up with a new way of using a tablet through rapid experimentation and it is rolled out to the entire school, that method should be rigorously tested. If an RCT of some new intervention finds no effect, we should turn to more rapid experimentation to find a better way.
We’ve been arguing about RCTs for years now – it is disheartening that this debate still feels very black and white.