The MPI brings together 10 indicators of health (child mortality and nutrition), education (years of schooling and child enrolment) and standard of living (access to electricity, drinking water, sanitation, flooring, cooking fuel, and basic assets like a radio or bicycle). It's thus a logical extension of its predecessor, UNDP's pioneering Human Development Index, launched in the first Human Development Report back in 1990, which combined life expectancy, education (literacy and enrolment rates) and GDP per capita.
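To make the aggregation concrete, here is a rough sketch of how an MPI-style (Alkire-Foster) index combines those 10 indicators with nested weights: each of the three dimensions counts for a third, split equally among its indicators, and a person counts as poor if their weighted deprivation score reaches a one-third cutoff. The indicator names and the three "people" below are invented for illustration; consult OPHI's published methodology for the exact definitions.

```python
from fractions import Fraction

# Nested weights: health and education have 2 indicators (1/6 each),
# standard of living has 6 (1/18 each). Fractions keep the cutoff
# comparison exact.
WEIGHTS = {
    # health
    "child_mortality": Fraction(1, 6), "nutrition": Fraction(1, 6),
    # education
    "years_of_schooling": Fraction(1, 6), "child_enrolment": Fraction(1, 6),
    # standard of living
    "electricity": Fraction(1, 18), "drinking_water": Fraction(1, 18),
    "sanitation": Fraction(1, 18), "flooring": Fraction(1, 18),
    "cooking_fuel": Fraction(1, 18), "assets": Fraction(1, 18),
}
POVERTY_CUTOFF = Fraction(1, 3)

def deprivation_score(deprivations):
    """Weighted sum of the indicators a person is deprived in."""
    return sum(WEIGHTS[d] for d in deprivations)

def mpi(people):
    """MPI = H (headcount ratio) x A (average intensity among the poor)."""
    scores = [deprivation_score(p) for p in people]
    poor = [s for s in scores if s >= POVERTY_CUTOFF]
    if not poor:
        return 0.0
    H = len(poor) / len(people)       # share of people who are MPI-poor
    A = sum(poor) / len(poor)         # average deprivation score of the poor
    return H * A

# Three hypothetical people, described by the indicators they lack.
people = [
    {"nutrition", "electricity", "sanitation", "cooking_fuel"},   # score 1/3: poor
    {"assets"},                                                   # score 1/18: not poor
    {"child_mortality", "years_of_schooling", "drinking_water"},  # score 7/18: poor
]
print(mpi(people))
```

Note how the weights do all the work: swapping a health deprivation for three living-standard deprivations leaves a person's score unchanged, which is exactly the kind of implicit trade-off discussed below.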
The measure, like the HDI, is part of an attempt to get a “better measure” of poverty, by including many non-income indicators. While I think most would agree that policymakers and researchers should always consider non-income indicators of welfare, does it make sense to average them out into a single index?
What precisely are we measuring when the HDI for a given country increases by 0.01? These questions always seem to lead back to the original indicators: "Country A advanced in rank because of education improvements" or "Country B is lower than Country C, despite being richer, because life expectancy in B is much lower." Given that we need to unpack these indices to figure out what's going on, why do we bother to pack them in the first place?
Duncan, always open to a healthy debate, has already posted a criticism of the MPI by Martin Ravallion of the World Bank, which questions the implicit values placed on different indicators when they are weighted:
The index is essentially adding up “apples and oranges” without knowing their relative price. When one measures aggregate consumption from household-survey data for the purpose of measuring poverty, as in the World Bank’s “$1 a day” measures, one relies on economic theory, which says that (under certain conditions) market prices provide the correct weights for aggregation. We have no such theory for an index like the MPI. A decision has to be taken, and no consensus exists on how the multiple dimensions should be weighted to form the composite index.
On closer scrutiny, the embedded trade-offs (stemming from the weights chosen by the analyst) can be questioned, and may be unacceptable to many people. In the context of the HDI, I pointed out 15 years ago that by aggregating GDP per capita with life expectancy the HDI implicitly put a value on an extra year of life, and I showed that this value rises from a very low level in poor countries to a remarkably high level in rich ones (4-5 times GDP per capita). If it was made clearer to users, I expect that they would question this trade-off embedded in the HDI.
The MPI index faces the same problem. How can one contend (as the MPI does implicitly) that the death of a child is equivalent to having a dirt floor, cooking with wood, and not having a radio, TV, telephone, bike or car? Or that attaining these material conditions is equivalent to an extra year of schooling (such that someone has at least 5 years) or to not having any malnourished family member? These are highly questionable value judgments. Sometimes such judgments are needed in policy making at country level, but we would not want to have them buried in some aggregate index. Rather, they should be brought out explicitly in the specific country and policy context, which will determine what trade off is considered appropriate; any given dimension of poverty will have higher priority in some countries and for some policy problems than others.
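Ravallion's trade-off point can be illustrated numerically. The stylized index below averages a life-expectancy index with a log-income index; the goalposts (life expectancy 25-85 years, income $100-$40,000 PPP) and the equal weights are assumptions of this sketch, not the current official HDI formula. Because the income component is logarithmic, the income change needed to offset one lost year of life expectancy is proportional to income, so the implied dollar value of a year of life rises sharply with affluence, which is the pattern Ravallion describes.

```python
import math

# Stylized two-component "HDI": average of a life-expectancy index and a
# log-income index. Goalposts below are assumptions for illustration.
LE_MIN, LE_MAX = 25.0, 85.0        # years
Y_MIN, Y_MAX = 100.0, 40_000.0     # income per capita, PPP $

def hdi(life_expectancy, income):
    life_idx = (life_expectancy - LE_MIN) / (LE_MAX - LE_MIN)
    income_idx = (math.log(income) - math.log(Y_MIN)) / math.log(Y_MAX / Y_MIN)
    return (life_idx + income_idx) / 2

def implied_value_of_a_year(income):
    """Income change that exactly offsets one year of life expectancy
    in the index above, i.e. the trade-off the aggregation imposes.

    Setting d(index)/dLE = d(index)/dy * dy and solving:
        dy = y * ln(Y_MAX / Y_MIN) / (LE_MAX - LE_MIN)
    which is proportional to income y."""
    return income * math.log(Y_MAX / Y_MIN) / (LE_MAX - LE_MIN)

print(round(implied_value_of_a_year(500)))     # a poor country
print(round(implied_value_of_a_year(30_000)))  # a rich country
```

The exact ratios depend entirely on the chosen goalposts and weights, which is the point: the trade-off is an artefact of the analyst's choices, and it sits invisibly inside the single number.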
One could continue to argue about the weights, but Ravallion's argument would still stand. I fail to see how these indices amount to anything more than intellectual exercises. While the HDI has got us all thinking about things other than income, has it really been useful as a method of actually measuring development? Is the MPI likely to do any better with poverty?