Simply put, if you're attracted to ideas that have a good chance of being wrong, and if you're motivated to prove them right, and if you have a little wiggle room in how you assemble the evidence, you'll probably succeed in proving wrong theories right.
Following the same train of thought, Alex Tabarrok points out that by pure statistical chance, about 5% of all false hypotheses that are tested will give statistically significant results. If you believe that most hypotheses are false, and that we only succeed in identifying true hypotheses part of the time, then our collection of statistically significant results could be heavily contaminated with false positives (Tabarrok gives an admittedly arbitrary but alarming figure of 25%).
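The arithmetic behind that contamination figure is easy to sketch. The values below (the share of tested hypotheses that are true, and the statistical power of the tests) are illustrative assumptions of mine, not numbers from Tabarrok; the point is just that plausible inputs yield an uncomfortably large false-positive share:

```python
# Toy false-discovery calculation. All three inputs are illustrative
# assumptions, not figures taken from Tabarrok's post.
alpha = 0.05        # chance a false hypothesis passes the significance test
power = 0.60        # chance a true hypothesis is correctly detected
true_share = 0.20   # fraction of tested hypotheses that are actually true

false_share = 1 - true_share
true_positives = true_share * power    # significant results that are real
false_positives = false_share * alpha  # significant results that are flukes
contamination = false_positives / (true_positives + false_positives)

print(f"{contamination:.0%} of significant results are false positives")
```

With these particular inputs the contamination works out to 25%, but the lesson is how sensitive that share is: lower the fraction of true hypotheses, or the power of the tests, and the contamination climbs quickly.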
This makes it terribly important to treat these results with care, and not to take any individual study as the final word on a subject. It is also a very, very good reason to be distrustful of big, conclusive reports; the sort that are often produced by international NGOs and many of the bilateral donors and think tanks.
Researchers are often called upon to make expansive, sometimes global statements about tenuous, uncertain relationships (like the relationship between climate change and HIV/AIDS). The tendency is for these researchers to mine results for useful 'impacts', then use those as underpinning assumptions for bigger leaps of logic: a researcher takes person A's results and person B's results, proclaims them true, and uses them to produce result C.
Even if report-writers do this very carefully, they are still bound by the limitations of the original studies, and the probability of error goes up. If there is a 25% chance that person A's results are a fluke, and a 25% chance that person B's results are a fluke, then there is only a 0.75 x 0.75 = 56% chance that result C isn't constructed from some false result. This is less of a problem if the researcher is considering many results bearing on a single hypothesis, but if the researcher cherry-picks different hypotheses (say, an assumption about the impact of X on Y and another about the impact of Z on Q) and strings them together, such flaws compound.
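The compounding is worth seeing numerically. Keeping the 25% fluke rate per underlying result, the chance that a chained conclusion rests on no flukes shrinks geometrically with each added assumption:

```python
# Chance that a conclusion built on n independent results involves no fluke,
# assuming (as in the text) each result has a 25% chance of being a fluke.
p_sound = 0.75
chances = {n: p_sound ** n for n in range(1, 5)}

for n, p in chances.items():
    print(f"{n} assumption(s): {p:.0%} chance no fluke is involved")
```

Two assumptions already leave you at the 56% from the text; by four, the chance that everything underneath is sound has fallen below a third. The independence assumption is generous, too: correlated errors (studies sharing data or methods) can make matters worse.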
Tabarrok has a list of things everyone should consider in a world where most research is false. I’ll add a few more, pertinent to the uncertain world of development reports and policy briefs:
- Be extremely wary of reports touting specific numbers. A report which says that climate change will cost us exactly $50 billion in the next ten years probably has many, many, many assumptions behind it. For each additional assumption, consider the collective probability that the whole estimate is wrong.
- Read the footnotes and references behind assumptions, and follow up with the source literature. Be wary if a number is taken from a study that has never been published, or that shows no clear evidence of having inspired debate. Be wary if the author does not acknowledge the potential problems with those assumptions, or the reference's place in the broader literature.
- Please, please, put on your causality cap before you start touting any numbers.