21 September 2010

Why Works?

A guru has spoken. Angus Deaton at Princeton has a newish (I'm catching up) paper out called “Instruments, Randomization and Learning about Development”. Ok, it’s not the snappiest title, but it is authoritative and by turns sobering, refreshing and taxing.

The whole message of the paper is that a focus on “what works” in the absence of why something works will be nothing more than the generation of isolated bits of knowledge--knowledge that has no transferability or relevance outside of its context.

Deaton argues that randomized controlled trials (RCTs) have accentuated the shift from why to what. But before making this point he notes that RCTs—except under ideal circumstances—are not even a magic bullet for establishing “what works” in a given context. And even under ideal circumstances, he points out, RCTs only generate impact estimates at the mean of the population (i.e. they cannot distinguish an intervention that generates very large effects for a few and negative effects for many from one that generates modest positive effects for the vast majority).
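That mean-effect point can be made concrete with a toy simulation (illustrative only; the two hypothetical interventions and their numbers are mine, not Deaton's): two programmes with the same average treatment effect, one of which leaves most participants worse off.

```python
# Toy illustration: an RCT's average treatment effect (ATE) cannot
# distinguish two interventions with the same mean effect but very
# different distributions of gains and losses.
import random
import statistics

random.seed(0)
n = 10_000

# Intervention A: a modest positive effect (+2) for everyone.
effects_a = [2.0 for _ in range(n)]

# Intervention B: large gains (+40) for roughly 10% of people, small
# losses (-2.2) for the other 90%. Expected mean: 0.1*40 + 0.9*(-2.2) = 2.02.
effects_b = [40.0 if random.random() < 0.1 else -2.2 for _ in range(n)]

print(statistics.mean(effects_a))  # exactly 2.0
print(statistics.mean(effects_b))  # close to 2.0, up to sampling noise

# Both ATEs are ~2, yet under B most participants are harmed:
share_harmed = sum(e < 0 for e in effects_b) / n
print(share_harmed)  # roughly 0.9
```

The mean alone tells us the two programmes are indistinguishable; only a theory of who gains and why would prompt us to look at the distribution.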

He argues that the application of experimental and non-experimental methods (such as econometrics) in the absence of theory generates highly localized but decontextualised knowledge—a double whammy. Experimental methods such as randomized controlled trials may even encourage an atheoretical approach.

RCTs will be most useful, as will any analytical method, when they test assumptions and results generated by a theory of change. He says we need to focus on “mechanisms” rather than “projects”—this will increase our chances of learning about what might work outside of a given context.

He runs through all the drawbacks and limitations of RCTs (he is equally harsh with econometrics and the terrible job most of us do with “identification” – i.e. isolating the independent effects of explanatory variables): conceptual, practical, and ethical. My favourite example of the fallibility of randomization is his critique of the use of alphabetization of school names to allocate schools to control and treatment groups (he cites several papers that show that alphabetization is not the same as randomization).

In his conclusions he says “for an RCT to produce ‘useful knowledge’ beyond its local context it must illustrate some general tendency, some effect that is the result of mechanisms that are likely to apply more broadly”. The proponents of RCTs, by design, devote a huge amount of rigour to establishing internal validity (does the method give a credible estimate of a mean effect in this context?) but much less to external validity (is the effect portable?). In fact, they often assert external validity precisely by ignoring the unobservable effects that the pursuit of internal validity is so keen to control for.

So what are these “mechanisms”? Things like loss aversion, procrastination, risk taking and the way we discount the future. But it’s not clear to me that these have a great deal of portability outside of specific contexts either (many of these studies use US graduate students as their subjects!).

The conclusion, with which it is hard to disagree, is that there is no substitute for specifying the causal nature of the processes of change we are investigating.

As Deaton says “I believe that we are unlikely to banish poverty in the modern world by trials alone unless those trials are guided by and contribute to theoretical understanding.” This is actually a pretty mainstream view--we have to have ideas about why and how things work to generate hypotheses to be tested. Perhaps some randomistas and econometricians have forgotten this along the way.

The paper is good, but it takes too long to point out the fallacy of thinking that we can dispense with "why works?" in favour of "what works?".

As development becomes more complex in an increasingly uncertain context, the "why" questions will become more important than ever.


Rob van den Berg said...

Lawrence, I fully agree, and both Angus Deaton's paper and your comments are confirmed by our own evaluation findings. We feel quasi-experimental methods help us identify what happens, not why it happens. We studied income trends at the borders of protected areas and found that income had gone up - and this was confirmed through strict control methods, comparing to similar areas elsewhere. Thinking through what this means, however, we can be very sure there is no natural law stipulating that protected areas directly (and magically) cause the incomes of people living at their borders to rise. No theoretical assumption was tested, because there was no theory - what we have is an authoritative calculation of what happened, not why it happened. We can now develop hypotheses about why it happened and do research on them - and randomization is probably not the best way to test those hypotheses.
Rob van den Berg

Lawrence said...

Rob, a good point--sometimes tracking what happened in a rigorous way can raise important why questions. But I think this is only efficient if we really do not have any theory to generate why hypotheses we can then test.

HowMatters said...

Thanks for sharing this paper. Over the years I've witnessed the development sector as a whole demonstrate an increasing desperation to “know” what is often inherently beyond logic and induction. It is certainly time to examine our belief that there are technocratic, precise ways of measuring progress in order to make consequential judgments based on these measures. Increasing obsession with abstract metrics and experimental design, stemming from a reductive, managerial approach in development, is quite far from the intimate, difficult, and complex factors at play.

My hope is that the dominance of quantitative statistical information as the sole, authoritative source of knowledge can be challenged so that we embrace much richer ways of thinking about development and of assessing the realities of what is happening closer to the ground.

Thus, I'll be paying attention to the WHY? and the HOW?

Lawrence said...

Dear HowMatters, yes "how" is also crucial of course...

Agree that the blend of approaches is what counts and that this must be driven by the issue at hand. Too often evaluators are driven by a method and not by the complex nature of the issues.