A randomista for hire is a dangerous thing


Our research shows that the treated (caged) group was 30% more likely to return home than the control (non-caged) group.

The Behavioural Insights Team is a research unit made up of randomistas who prefer to rely on behavioural economics and psychology to develop and test ‘nudges’ to achieve certain policy goals. The team originally grew out of the Cabinet Office, but eventually went private (the Cabinet Office has retained a stake in the BIT).

I was always excited by the mere existence of the Behavioural Insights Team – this was the first clear example of government investing in rigorous randomisation to test some of its policies.

That said, while the BIT likely comprises a group of people who want to make the world a better place, they are beholden to their clients. One of these clients is the Home Office, which is currently paying the BIT to find ways to convince illegal migrants to voluntarily leave the UK. From the BIT’s update report:

Increasing voluntary departures of illegal migrants

BIT has been working with the Home Office to consider new measures to help illegal migrants to voluntarily return home, focusing initially on engagement at reporting centres. Reporting centres are seen as an important but underutilised opportunity to prompt illegal migrants to consider whether leaving the UK voluntarily would be a preferable option in their circumstances.

Starting in December 2014, BIT undertook a short piece of ethnographic research at reporting centres across London, reviewing current procedures and interaction points to gain an understanding of the reporting centre experience from the perspective of a member of the reporting population and the reporting agent.

Informed by this, BIT developed several options for Home Office consideration to employ behaviourally informed trials in reporting centres that could encourage higher numbers of voluntary departures from the UK.

At this stage, the precise scope of a trial is still being finalised, with the aim to combine a number of behavioural elements to create a distinct reporting centre experience that encourages members of the reporting population to consider voluntary departure as an alternative to their current situation.

Note that many of the people who end up in reporting centres are asylum seekers, not just illegal ‘economic’ migrants. The BIT has another project in the pipeline aimed at targeting businesses that hire illegal migrants, with a similar end goal of convincing the migrants to voluntarily go home. The Home Office got a lot of pushback when it tried this before, in the not-too-subtle form of a van driving around telling migrants to go home:


So now the UK government has turned to more insidious methods, aided by a team of randomistas. It’s a useful reminder that rigorous, evidence-based methods can be put in the service of stupid, short-sighted policy as well.


*Disclaimer: I once applied to work at the BIT, but dropped out midway through the selection process to work on a project in Oxford.

How I Learned to Stop Worrying and Love the Bomb


Orbital Mechanics has created an incredibly creepy, yet riveting video showing every single nuclear detonation from 1945 onward.


You would be excused if your first reaction is “holy shit, we’re absolutely bonkers.” That was my first reaction.

My second thought was: “aside from the obvious first two bombs, we were relatively lucky in that nothing ever went wrong.” But I was wrong to think that – nuclear testing led to long-lasting, negative effects on the cognition of people who were exposed to the radiation while in utero. Sandra Black and co. have a really interesting paper examining the effect on Norwegians (apparently, due to atmospheric conditions, a large amount of fallout from nuclear testing landed in Norway):

Research increasingly shows that differences in endowments at birth need not be genetic but instead are influenced by environmental factors while the fetus is in the womb. In addition, these differences may persist well beyond childhood. In this paper, we study one such environmental factor – exposure to radiation – that affects individuals across the socio-economic spectrum. We use variation in radioactive exposure throughout Norway in the 1950s and early 60s, resulting from the abundance of nuclear weapon testing during that time period, to examine the effect of nuclear exposure in utero on outcomes such as IQ scores, education, earnings, and adult height. Importantly, we are able to examine the effects of exposure each month in utero to determine the periods when exposure is most harmful. We find that exposure to low-dose nuclear radiation, specifically during months 3 and 4 in utero, leads to a decline in IQ scores of men aged 18. Moreover, radiation exposure leads to declines in education attainment, high school completion, and earnings among men and women. We are also able to examine whether these effects persist across a second generation – we find that the children of persons affected in utero also have lower cognitive scores, suggesting a persistent effect of the shock to endowments. Given the lack of awareness about nuclear testing in Norway at this time, our estimates are likely to be unaffected by avoidance behavior or stress effects. These results are robust to the choice of specification and the inclusion of sibling fixed effects.

Hat tip to Kottke.


The IMF, inequality and the trickle-down of empirical research

"It took so many assumptions to put you together!"

“It took so many assumptions to put you together!”

By Nicolas Van de Sijpe

A recent IMF staff discussion note has received a lot of attention for claiming that a smaller income share of the poor lowers economic growth (see also here and here). This piece in the FT is fairly typical, arguing that the paper “establishes a direct link between how income is distributed and national growth.”

It quotes Nicolas Mombrial, head of Oxfam International’s office in Washington DC, saying that (my emphasis): “the IMF proves that making the rich richer does not work for growth, while focusing on the poor and the middle class does” and that “the IMF has shown that ‘trickle down’ economics is dead; you cannot rely on the spoils of the extremely wealthy to benefit the rest of us.”

The aim of this blog post is to clarify that the results in Table 1 of the paper, which are based on system GMM estimation, rely on assumptions that are not spelled out explicitly and whose validity is therefore very difficult to assess. In not reporting this and other relevant information, the paper’s application of system GMM falls short of current best practices. As a result, without this additional information, I would be wary of updating my prior on the effect of inequality on growth based on the new results reported in this paper.

The paper attempts to establish the causal effect of various income quintiles (the share of income accruing to the bottom 20%, the next 20% etc.) on economic growth. It finds that a country will grow faster if the share of income held by the bottom three quintiles increases. In contrast, a higher income share for the richest 20% reduces growth. As you can imagine, establishing such a causal effect is difficult: growth might affect how income is distributed, and numerous other variables (openness to trade, institutions, policy choices…) might affect both growth and the distribution of income. Clearly, this implies that any association found between the income distribution and growth might reflect things other than just the causal effect of the former on the latter.

To try to get around this problem, the authors use a system GMM estimator. This estimator consists of (i) differenced equations where the changes in the variables are instrumented by their lagged levels and (ii) equations in levels where the levels of variables are instrumented by their lagged differences (Bond, 2002, is an excellent introduction). Roughly speaking, the hope is that these lagged levels and differences isolate bits of variation in income share quintiles that are not affected by growth or any of the omitted variables. These bits of variation can then be used to identify the causal effect of the income distribution on growth. The problem with the IMF paper is that it does not tell you exactly which lagged levels and differences it uses as instruments, making it hard for readers to assess how plausible it is that the paper has identified a causal effect.
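To see what sort of assumptions are doing the work, here is a minimal sketch of the moment conditions a system GMM estimator typically exploits, written for a stripped-down AR(1) growth panel (following Bond, 2002) rather than the paper’s full specification, precisely because the paper does not report its actual instrument set:

% Sketch of the textbook system GMM moment conditions for the simple
% AR(1) panel y_{it} = alpha * y_{i,t-1} + eta_i + epsilon_{it}
% (standard conditions from Bond, 2002, not the IMF paper's own set).
\begin{align*}
  \mathbb{E}\big[\, y_{i,t-s}\, \Delta\varepsilon_{it} \big] &= 0 \quad \text{for } s \ge 2
    && \text{(differenced equations, instrumented by lagged levels)} \\
  \mathbb{E}\big[\, \Delta y_{i,t-1}\, (\eta_i + \varepsilon_{it}) \big] &= 0
    && \text{(level equations, instrumented by lagged differences)}
\end{align*}

The first set of conditions requires serially uncorrelated errors; the second holds only under an additional mean-stationarity assumption about initial conditions. Which lags are actually used, and whether these restrictions are plausible for income shares and growth, is exactly what readers cannot check when the instrument set goes unreported.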


Guns don’t kill people


Think of all the cultural reasons I am wielding this rocket launcher.

Another shooting happens. This time in Charleston. I grew up about two hours away and often visited the town with my parents. It’s a lovely place, although like most places in South Carolina it has a difficult, disturbing past.

There are two opposing views which typically surface after a mass shooting. The first is that gun violence is driven by gun ownership, and that effective gun control will reduce the number of people killed by firearms every year. A simple mathematical way of describing this relationship would be to say that gun violence is a function of the number of guns in a country:

V = F(G)

The opposing view is that there are all sorts of other things that determine gun violence. Proponents look to countries with high levels of gun ownership but low levels of violence, such as Canada. Holders of this view assert a relationship that looks like this:

V = F(S)

Where S is “other stuff” which influences gun violence. This is somewhat consistent with the “guns don’t kill people, people kill people” argument, which includes the unstated third statement: “and there are lots of things that determine whether people want to kill each other.”

Setting aside any preoccupations with the Second Amendment, the gun control debate can be characterized as a fight over whether V = F(G) or V = F(S). But this is a mischaracterization, and one which gives more legitimacy to those opposed to gun control. In reality, gun violence is a function of both the number of guns in circulation and all the “other stuff,” and, by construction, fewer guns make it more difficult to commit gun violence, so that:

V = F(G, S), and V = 0 if G = 0 or S = 0

That is: it doesn’t matter if Canada can have its cake and eat it. If there is some special ingredient to having guns without the violence (S = 0), we don’t know what it is, and won’t know any time soon. But that doesn’t mean that reducing G will not reduce violence. Whether it is a cost-effective way to reduce violence is another question, but unless someone identifies what goes into S, the best bet is for the US to focus on G.

I drink your milkshake


The Ethiopians appear to be close to finalizing construction of a large hydroelectric dam on the Omo river, primarily to generate power but also to support local irrigation efforts.  Over the past five years the project has received substantial foreign financing and investment by China and indirectly by the World Bank. However, there appears to have been little consideration of the potential downstream impacts: the Omo river feeds Lake Turkana, which is a source of livelihood for a large number of communities in northern Kenya. The possibility that the lake may be partially drained is obviously upsetting a lot of people, although it does not seem that the Kenyan government is making a big fuss over the project.

This is a typical problem of negative externalities: the Ethiopians aren’t factoring in the welfare of Kenyan Turkana residents in the decision to build the dam. There’s actually some research showing that this is a common problem. From a recent World Bank paper by Sheila Olmstead and Hilary Sigman:

This paper examines whether countries consider the welfare of other nations when they make water development decisions. The paper estimates econometric models of the location of major dams around the world as a function of the degree of international sharing of rivers. The analysis finds that dams are more prevalent in areas of river basins upstream of foreign countries, supporting the view that countries free ride in exploiting water resources. There is weak evidence that international water management institutions reduce the extent of such free-riding.

By their very nature, dams generate inequality in the flow of water between upstream and downstream areas. It is easier to pay the cost of hurting downstream communities when they are in a different country (hey, they don’t vote for you). Ergo, countries are more likely to build dams when the costs are external.

It would be interesting to see what mitigates these effects – it is possible that Kenya’s relative indifference is due to lack of political power on the part of the northern tribes. Are dams with substantial cross-border costs less likely in areas where the proximate ethnic group is quite powerful?


LaTeX Wars Episode V: The Word Users Strike Back


Of course I edit all my documents using the original Nintendo Power Glove

Throughout the mid-90s, my father used a DOS-based typesetting program called PC-Write to produce his books and journal articles. In stark contrast to more popular word processing programs, PC-Write relied on a what-you-see-is-what-you-mean approach to typesetting: dad would indicate his formatting preferences as he wrote, but he would be forced to print out a page in order to see that formatting actually applied. By contrast, I grew up working with Microsoft Word, and so with each passing year I found my father’s system to be increasingly archaic. Eventually, after a substantial amount of healthy mockery from his son, he migrated over to Word and hasn’t looked back since.

However, by the time I arrived in grad school, an increasing number of other (economics) students were using LaTeX, a typesetting language much closer in design to the old-fashioned PC-Write than to the what-you-see-is-what-you-get format of Word. Although I suspected that LaTeX was another manifestation of the academic economist’s tendency to choose overly complex methods and technical mastery over user-friendliness, I eventually became a convert. Somehow, my preferences had begun to mirror Dad’s original love of PC-Write.

If you ever feel like experiencing a wonderfully arbitrary argument, ask a group of economists whether they prefer LaTeX or Word. Within the profession there is a pretty serious division between those who prefer the look and workflow of the former and those who prefer the accessibility of the latter. While some of us are comfortable working in both formats, each camp has its stalwarts who find members of the other camp to be bizarrely inefficient.

The two sides appeared to be in a stable stalemate until recently, when a new study comparing the efficiency and error rates of LaTeX and Word users appeared in PLOS One. The headline result: Word users work faster AND make fewer errors than LaTeX users.


Ooof – I hear the sound of a thousand co-authors crying out with righteous indignation. The Word camp was quick to seize upon this study as clear evidence that LaTeX users were probably deluding themselves and that now would be a good time for everyone to get off their high horse. The authors of the report even went so far as to suggest that LaTeX users were wasting public resources and that journals should consider not accepting manuscripts written up using LaTeX:

Given these numbers it remains an open question to determine the amount of taxpayer money that is spent worldwide for researchers to use LaTeX over a more efficient document preparation system, which would free up their time to advance their respective field. Some publishers may save a significant amount of money by requesting or allowing LaTeX submissions because a well-formed LaTeX document complying with a well-designed class file (template) is much easier to bring into their publication workflow. However, this is at the expense of the researchers’ labor time and effort. We therefore suggest that leading scientific journals should consider accepting submissions in LaTeX only if this is justified by the level of mathematics presented in the paper.

Pretty damning, eh? Not so fast! There are several reasons we should doubt the headline result.

For one, rather than randomly assigning participants to Word or LaTeX, the researchers decided to allow participants to self-select into their respective groups. On the one hand, this makes the result even more damning: even basic Word users outperformed expert LaTeX users. On the other hand, the authors themselves admit that preference for the two typesetting programs varied wildly across disciplines (e.g. computer scientists love LaTeX and health researchers prefer Word). It’s perfectly possible that the types of people who select into more math-based disciplines are inherently less efficient at performing the sort of formatting tasks set by the researchers. Indeed, the researchers found that LaTeX users actually outperformed Word users when it came to more complex operations such as formatting equations.

Furthermore, the researchers only evaluated these typesetting programs along two basic dimensions: formatting speed and error-rates, ignoring other advantages that LaTeX might have over Word. As an empirical researcher, I find it enormously easier to link LaTeX documents to automated data output from programs like Stata, making it simple to update results in a document without having to copy and paste all the time. Word can also do this, but it has always been far clunkier.
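As a minimal sketch of what that workflow looks like (the file name results.tex is hypothetical, and the Stata side assumes the user-written esttab command from the estout package):

% In the Stata do-file, a line like
%   esttab using results.tex, replace
% rewrites results.tex every time the analysis is re-run.
\documentclass{article}
\begin{document}
\begin{table}[htbp]
  \centering
  \caption{Main results}
  % The table updates automatically at the next compile;
  % no copying and pasting numbers from Stata output.
  \input{results.tex}
\end{table}
\end{document}

Recompile the document and the latest estimates flow straight into the table.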

So, in short, the jury is still out. Feel free to return to your respective camps and let the war continue.

Troubling aspirations

"On second thought, I think I'll keep the ring and become a lawyer."

“On second thought, I think I’ll keep the ring and become a lawyer.”

From a new paper in the Journal of Development Economics:

This paper sheds light on the relationship between oil rent and the allocation of talent, toward rent-seeking versus more productive activities, conditional on the quality of institutions. Using a sample of 69 developing countries, we demonstrate that oil resources orient university students toward specializations that provide better future access to rents when institutions are weak. The results are robust to various specifications, datasets on governance quality and estimation methods. Oil affects the demand for each profession through a technological effect, indicating complementarity between oil and engineering, manufacturing and construction; however, it also increases the ‘size of the cake’. Therefore, when institutions are weak, oil increases the incentive to opt for professions with better access to rents (law, business, and the social sciences), rather than careers in engineering, creating a deviation from the optimal allocation between the two types of specialization.

In plain speak, the authors posit that when there are large windfalls from natural resources, people will choose careers (and the necessary education) which will allow them to reap the benefits of those windfalls. Normally this involves choosing careers associated with oil extraction, like engineering. However, in weak states where it’s possible to gain access to oil rents in a less-than-legitimate manner, people choose to go into careers which better allow them to get access to those rents, like law or business. Hence talent is ‘misallocated’ in developing countries with weak institutions and oil booms, as the possibility of getting access to oil rents sends people into careers for which they are less fit.

I would not despair so quickly – the empirical results in the paper are more suggestive than definitive, dependent on a handful of mainly cross-country regressions. Still, the results are disconcerting – the authors do not investigate further, but the prospect of societies re-orienting themselves into a structure better suited for rent-seeking likely means that true institutional reform becomes all the more difficult.

Protected by randomness


From fusion.net:

The Random Darknet Shopper, an automated online shopping bot with a budget of $100 a week in Bitcoin, is programmed to do a very specific task: go to one particular marketplace on the Deep Web and make one random purchase a week with the provided allowance. The purchases have all been compiled for an art show in Zurich, Switzerland titled The Darknet: From Memes to Onionland, which runs through January 11.
The concept would be all gravy if not for one thing: the programmers came home one day to find a shipment of 10 ecstasy pills, followed by an apparently very legit falsified Hungarian passport – developments which have left some observers of the bot’s blog a little uneasy.

The title of the piece (Robots are starting to break the law and nobody knows what to do about it) elicits worries of AIs gone amok, but the basic conundrum of this piece and others about the Random Darknet Shopper is more complex: if I design an AI which takes a random, blind action in a space which is largely – but not uniformly – illicit, am I legally culpable?

Take this thought experiment: imagine going around your office with a ten-dollar bill, offering to buy whatever your colleagues are willing to sell to you at that price, but under the condition that you do not see the item until the transaction has taken place. If one of your colleagues slipped you some cocaine, who would be at fault? What if you chose to repeat the experiment in an area of town infamous for drug deals? Would you suddenly be more culpable?

When I was young, I used to order what they called “Grab Bag” comic packs, where I would pay a set amount of money for an unknown, random assortment of comic books. If someone had slipped a pornographic comic into my grab bag, it’s hard to see how I would have been at fault. But where I choose to make my blind transactions seems to shape how we perceive culpability.

Several years ago I wrote a piece about how randomness can complicate our standard notions of guilt. The intersection of randomness, culpability and the law sounds like an area that – if someone hasn’t written about it a lot already – is ripe for further work.