26 May 2011

Ravallion on Impact, Interactions and Selectivity

I don't really like blogging about blogs, but Martin Ravallion's blogs are like mini articles, so it does not count. In his guest blog, the Director of the World Bank's Development Research Group flags two things that we should be worried about in the current impact surge.

1. Interactions. We are packaging change into neat interventions which are then evaluated. Second-round effects over time, macro effects at the economy level, unplanned interactions between interventions, and unanticipated effects on behaviour are being missed.

2. Selectivity in what gets evaluated. Whether it is the low-hanging fruit, or interventions that lend themselves to particular impact methods, or interventions in areas where it is politically expedient to do evaluations, what is being evaluated is not necessarily what needs to be evaluated.

So he calls for some kind of mechanism to systematically scan what is and is not being evaluated and then prioritise what needs to be. And he calls for a more eclectic blend of methods to be brought to the table.

I agree with all of this, which is why I would like to see a mapping of what is being evaluated against what our community thinks should be evaluated (can 3ie organise this please?).

We also need to go beyond blending different economics tools. Blending across disciplines will require extraordinary openness, respect and flexibility, because we are effectively drilling down into world views and very different ideas about how change happens. But I believe it can, and must, be done.

1 comment:

Uma Lele said...

Hi Lawrence: I rarely miss reading your blog. It is like the good old E. F. Hutton ad, which used to say that when he speaks, everyone listens. That being also the case with Ravallion, it is not what he has said--that is old hat among evaluators--but that HE has said it which seems to matter, Martin being a measurement guru par excellence. But do we only consider evaluable that which is measurable? How do we measure the role of institutions and the effect of changes in them on outcomes? How do we assess the impact of information and knowledge on behavioural change? How do we deal with attribution in the case of multiple sources of information and multiple partners--some not even formal partners?
Methodological challenges in evaluations abound once what is important to evaluate is agreed upon. In the donor frenzy for assessing results and impacts--and the demand for results has increased precisely when the role of aid has become puny in the total public expenditures of developing countries in all but African countries--an important question is how we combine the discipline of the old project approaches to the design of interventions with government programs. When funded largely by governments, programs lack discipline in the definition of expected results and how precisely they will be achieved--in short, a logical framework--and of the risks to their not being realized. Numerous donors with varied ideas on what to evaluate, and insufficient coordination among them, have compounded the evaluation challenge. In short, there are even more fundamental challenges to evaluations which are rarely discussed. Cheers. Uma Lele