Two new papers have come to my attention recently. They are unusual in that they bring the tools of randomised controlled trials (RCTs) to the issue of participation in development interventions. In an RCT, the intervention is randomly allocated to treatment and control groups, with baseline and follow-up surveys.
The first, by Abhijit Banerjee and colleagues at MIT, investigates the effectiveness of 6 types of participatory intervention on community involvement in schools, teacher effort and learning outcomes in Uttar Pradesh, India. One of the 6 interventions has a large positive effect on reading; the others have no effect. The interventions were designed as follows:

1. Information on school norms and provisions is made available, and large groups of school officials and community leaders are organised to discuss it.
2. As 1, but with small discussion groups.
3. As 1, but adding the design of a report card comparing the school and community with other schools, plus community discussion.
4. As 3, but with smaller groups.
5. As 3, but with training and encouragement of volunteers to show how children's reading skills can be fostered.
6. As 5, but with smaller groups.
Intervention 6 was the successful one, but only because of the added “direct control small group component”. In other words, giving villagers information about the state of their schools was not enough: it took encouragement and training, in small groups, to turn that information into an intervention that improved learning outcomes (children in the intervention group were 60% more likely to decipher words than children in the control group one year on).
They conclude that “it seems clear that the current faith in participation as a panacea for the problems of service delivery is unwarranted”.
The second paper, by Martina Björkman and Jakob Svensson (Quarterly Journal of Economics, May 2009) of Bocconi University and the Centre for Economic Policy Research, focuses on community-based monitoring of public primary health care providers in Uganda. Here the intervention consists of a report card (designed by the community for its own treatment facility) and the development of a community contract between patients and medical staff. The community then used the cards to monitor facility performance. The intervention has large impacts on under-5 mortality rates and on weight-for-age scores for infants (a measure of underweight). The authors estimate that it costs $300 to avert a child death using this intervention, well below the average cost of $887 for 23 other child survival interventions. They conclude by stating that “future research should address long term effects (of the intervention), identify which mechanisms or combination of mechanisms are important, and study the extent to which the results generalise to other social sectors.”
What do I take away from these 2 studies?
1. Using experimental methods to test participatory interventions seems possible, provided researchers work closely with local groups to design the interventions.
2. Continuing the cross-method investigation theme, can participatory research methods look at RCTs as a development intervention? Do RCTs introduce healthy or unhealthy dynamics and under which conditions?
3. Both studies are strong on internal validity (did the intervention have an effect?) but struggle valiantly to look at external validity (can we say anything about the range of contexts within which these can work?). The first study uses variation in intervention design to do this and the second uses regressions on sub-samples.
4. Neither study looks at empowerment of the community per se, independent of learning or health outcomes. If confidence and capability have been built up at the individual and collective levels, might the true effects of the intervention on outcomes come at a later stage, rather than within structures that the community did not design? (Note the success of the UP intervention that was not constrained by the school system.)
5. Do the concluding sentences of the papers reveal the authors’ inherent biases? The first paper says that participation is not a panacea. Who said it was? Of course mindless application of participation is not going to work. We want to know when and what types of participatory intervention work. The second paper makes this point in a much more thoughtful way.
6. Finally, the cupboard for this kind of research is bare. We need more work of this kind to help us understand the conditions under which participation makes a difference to people’s lives—both the people directly affected and those such as infants who are indirectly affected.