01 December 2009

Is "power to the people" a panacea?

Two new papers have come to my attention recently. They are unusual in that they bring the tools of randomised controlled trials (RCTs) to the issue of participation in development interventions. An RCT randomly allocates the intervention to treatment and control groups, with baseline and follow-up surveys of both.
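
To make the mechanics concrete, here is a minimal sketch of that logic in Python. The communities, the outcome measure and the five-point effect size are all simulated for illustration; none of it comes from either paper.

```python
import random

random.seed(1)

# Randomly allocate 100 hypothetical communities, half to treatment.
communities = list(range(100))
random.shuffle(communities)
treatment = set(communities[:50])

def follow_up_score(community):
    # Simulated follow-up survey outcome: a common level plus noise,
    # plus a made-up +5 point effect for treated communities.
    return 50 + random.gauss(0, 10) + (5 if community in treatment else 0)

scores = {c: follow_up_score(c) for c in communities}
treated = [scores[c] for c in communities if c in treatment]
controls = [scores[c] for c in communities if c not in treatment]

# With random allocation, the difference in mean outcomes estimates the
# treatment effect; baseline surveys would additionally let us difference
# out any pre-existing gap between the groups.
effect = sum(treated) / len(treated) - sum(controls) / len(controls)
print(f"Estimated treatment effect: {effect:.1f} points")
```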

The first, by Abhijit Banerjee and colleagues from MIT, investigates the effectiveness of six types of participatory intervention on community involvement in schools, teacher effort and learning outcomes in Uttar Pradesh, India. One of the six interventions has a large, positive effect on reading; the others have no effect. The interventions were designed as follows:

1. Information on school norms and provisions is made available, and large groups of school officials and community leaders are organised to discuss it.
2. As 1, but with small groups.
3. As 1, but adding the design of a report card comparing the school and community with other schools, followed by community discussion.
4. As 3, but with smaller groups.
5. As 3, but with training and encouragement of volunteers to show how children's reading skills can be encouraged.
6. As 5, but with smaller groups.

Intervention 6 was the successful one, but only because of the added “direct control small group component”. In other words, giving villagers information about the state of their schools was not enough; it took encouragement and training, delivered in small groups, to turn that information into improved learning outcomes (intervention children were 60% more likely to decipher words than control children one year on).

They conclude that “it seems clear that the current faith in participation as a panacea for the problems of service delivery is unwarranted”.

The second paper, by Martina Björkman and Jakob Svensson (Quarterly Journal of Economics, May 2009) of Bocconi University and the Centre for Economic Policy Research, focuses on community-based monitoring of public primary health care providers in Uganda. Here the intervention consists of a report card (designed by the community for its own treatment facility) and the development of a community contract between patients and medical staff; the community then uses the cards to monitor facility performance. The intervention has large impacts on under-five mortality rates and on weight-for-age scores for infants. The authors estimate that it costs $300 to avert a child death using this intervention, well below the average cost of $887 across 23 other child survival interventions. They conclude that “future research should address long term effects (of the intervention), identify which mechanisms or combination of mechanisms are important, and study the extent to which the results generalise to other social sectors.”
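
A cost-per-death-averted figure like that $300 is simply programme cost divided by estimated deaths averted. Here is the arithmetic as a toy sketch; every input number below is a made-up placeholder, not taken from the paper.

```python
# All figures are hypothetical placeholders chosen so the ratio comes out
# at $300; they are not the paper's actual inputs.
programme_cost = 9000.0      # total cost of the intervention (USD)
children_covered = 1000      # under-fives covered
control_mortality = 0.10     # under-5 deaths per child, without intervention
treated_mortality = 0.07     # under-5 deaths per child, with intervention

deaths_averted = (control_mortality - treated_mortality) * children_covered
cost_per_death_averted = programme_cost / deaths_averted

print(f"Deaths averted: {deaths_averted:.0f}")                   # 30
print(f"Cost per death averted: ${cost_per_death_averted:.0f}")  # 300
```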

What do I take away from these two studies?

1. Using experimental methods to test participatory interventions seems possible, provided researchers work closely with local groups to design the interventions.

2. Continuing the cross-method investigation theme, can participatory research methods look at RCTs as a development intervention? Do RCTs introduce healthy or unhealthy dynamics and under which conditions?

3. Both studies are strong on internal validity (did the intervention have an effect?) but struggle valiantly to look at external validity (can we say anything about the range of contexts within which these can work?). The first study uses variation in intervention design to do this and the second uses regressions on sub-samples.

4. Neither study looks at empowerment of the community per se, independent of the learning or health outcomes. If confidence and capability have been built up at the individual and collective level, might the true effects of the intervention on outcomes emerge at a later stage, and outside structures that the community did not design? (Note that the successful UP intervention was the one not constrained by the school system.)

5. Do the concluding sentences of the papers reveal the authors’ inherent biases? The first paper says that participation is not a panacea. Who said it was? Of course mindless application of participation is not going to work. We want to know when and what types of participatory intervention work. The second paper makes this point in a much more thoughtful way.

6. Finally, the cupboard for this kind of research is bare. We need more work of this kind to help us understand the conditions under which participation makes a difference to people’s lives—both the people directly affected and those such as infants who are indirectly affected.

2 comments:

Alan M Jackson said...

I think your 4th point is particularly important and interesting, where you say, "Neither study looks at empowerment of the community per se, regardless of learning or health outcome."

Improving literacy and lowering infant mortality rates are obviously good things. But is there a level of these that we would deem "developed"? Globally we all want improvement, and there is no foreseeable end to that desire. While we can argue for certain standards with respect to literacy and infant mortality, it's not so clear cut for other types of intervention. For instance, in the sector I work in, ICT4D, is there a particular technical capability that should be the goal of any project? Not necessarily.

What I'm trying to say is that participation is an end in itself. What is development if it is not the ability to choose how to use resources available to you?

Daniel said...

Concerning another cupboard being bare, described in People-centred M&E: Aligning Incentives So Agriculture Does More to Reduce Hunger, don't you think that Casley and Kumar's series on the M&E of agricultural projects back in the 1980s defines some content? Whilst perhaps not placing beneficiaries centre stage, I always got the gist of what they wanted to convey as:

1) How hopelessly naive yet politically driven demands by donors, caved in to by M&E units and researchers, to 'validate' impact sought to demonstrate causal relationships between outputs and impacts that were analytically impossible to establish within the period required (the naivety of those who reported such information was surpassed only by those who believed it!); and

2) Impressing upon the World Bank's then OED the better returns monitoring processes could generate by providing space for beneficiaries to voice their subjective opinions on the relevance and quality of the services aid made available. Trying to assess 'profound' and lasting developmental impact, in the absence of effective feedback loops that focus on learning about the preferences, responses and behaviours of beneficiaries, and how these are differentiated, makes for a rather academic exercise. This is particularly so given how the sustainable success of interventions depends on the performance of the partner institutions, who are in the development process for the long run, in being able (and continuing) to offer a quality service as perceived by beneficiaries (Salmen, 1994). Cue outcome mapping six-odd years later, and When Will We Ever Learn in 2006, which predictably led, in turn, to a return to the vanities surrounding RCTs and pseudo-experimental design so well peddled by JPAL and CGD.