David McKenzie goes to town on those who complain about the lack of external validity in experimental methods. For one, the standard seems to be applied more often to research in developing countries:
So let’s look at the April 2011 AER. It contains among other papers (i) a lab experiment in which University of Bonn students were asked to count the number of zeros in tables that consisted of 150 randomly ordered zeros and ones; (ii) a paper on contracts as reference points using students of the University of Zurich or the Swiss Federal Institute of Technology Zurich; (iii) an eye-tracking experiment to see consumer choices done with 41 Caltech undergraduates; (iv) a paper in which 62 students from an unnamed university were presented prospects for three sources of uncertainty with unknown probabilities; and (v) a paper on backward induction among world class chess players.
And then, a swipe against those within development who argue that experimental methods aren’t externally valid:
Consider some of the most cited and well-known non-experimental empirical development research papers: Robert Townsend’s Econometrica paper on risk and insurance in India has over 1200 cites in Google Scholar, and is based on 120 households in 3 villages in rural India; Mark Rosenzweig and Oded Stark’s JPE paper on migration and marriage is based on the same Indian ICRISAT sample; Tim Conley and Chris Udry’s AER paper on technology adoption and pineapples is based on 180 households from 3 villages in southern Ghana; on a somewhat larger scale, Shankar Subramanian and Angus Deaton’s work on the demand for calories comes from 5630 households from one state in India in 1983.
From the perspective of a researcher (and one currently working on an experiment in a developing country), I completely agree with McKenzie here. Micro-empirical evidence is always useful, whether or not it is immediately generalizable – as long as we update our priors with care every time we read a new study.
From the perspective of a blogger who has taken swipes at the randomistas over external validity a few times, I think much of the pushback on the external validity front has less to do with the research itself, and more to do with how the research is being trumpeted outside the academic sphere – there haven’t been any NYT articles about how eye-tracking experiments herald the end of poverty.