26 June 2011

Evaluation Literacy: Reading the Systematic Review Tea Leaves

Lots of systematic reviews are emerging from the various 3ie/DFID initiatives. As a researcher, I find them a fantastic resource. As a policymaker, I'm not so sure. It seems to me they require a huge amount of evaluation literacy.

Why? Well, it's not as if we have 20 studies on microfinance that have a similar design and similar outcome indicators and were all run in South Asia, or 20 agricultural interventions that tried to improve nutrition in the same way using the same indicator of nutrition. Finding the centre of gravity of such a review is not easy, because the reviews are comparing African apples with Asian oranges, and some of the fruits are bigger and more nourishing than others.

In any case, one of my colleagues at IDS, Emilie Wilson, attended the big 3ie conference last week in Mexico, and this is her brief report, with links, on evaluation literacy.

Evaluation Literacy by Emilie Wilson, IDS

Donors want to see value for money. Researchers want to apply credible approaches to measuring the impact of development interventions. Politicians want to be re-elected on the back of successful social programmes. A match made in heaven?

Last week, I had the privilege of attending a 3ie conference in Cuernavaca, Mexico, on impact evaluation, entitled “Mind the Gap: from evidence to policy impact”. At IDS, I am lucky to be both working at the coal-face of policy influence using research communication and engaged with action research on how communication of research brings about change.

Wearing both those hats, I engaged in the conference wanting to learn more about the truism that “policy influence is a complex and nonlinear process”. And the beauty of attending these events is that faceless “policymakers” become Felipe Kast, Planning Minister for Chile, Gonzalo Hernandez-Licona, Executive Secretary of the National Council for Evaluation of Social Development Policy (CONEVAL) in Mexico, and Ruth Levine from the Hewlett Foundation. Real people with real problems (to resolve).

The conference pre-clinics, thematic and special sessions broadly divided into three areas:

1. how to do impact evaluations (methods)

2. what have impact evaluations already told us (case studies and findings on a wide range of issues including agriculture, health, and social protection)

3. how to share with those in decision-making positions the news about what works and what doesn’t in development interventions

I focused on this last area, attending two excellent sessions on “Donor priorities for evaluations” and “Perspectives from policymakers”.

Presentations were clear and insightful (see especially “How to influence policy” and “Factors that Help or Hinder the Use of Evidence in Policymaking” by Iqbal Dhaliwal, global head of Policy for the Jameel Poverty Action Lab (J-PAL)), donors and policymakers were frank and humble, and the audience did not shy from asking challenging questions.

Some take-aways for me include:

· building the bridge between evidence and policy is a two-way process: researchers should ensure that evidence is substantial, timely and policy-relevant; policymakers need to be ‘evaluation literate’ and understand the value of using evidence in policy

· there is an important role to be played by ‘intermediaries’ – those who synthesise, repackage, ‘translate’ – making research and evidence relevant and usable beyond academia

We are often told that policymakers are constrained by time and money. Surely this assumption was challenged by the presence of so many at this conference, which required both time (including 30 hour journeys across the world) and money. Perhaps Esther Duflo, who spoke at the opening plenary, was right to talk of “fake urgency” and warn that rushing to gain time would eventually waste time. If we don’t learn lessons now, we’ll make the same mistakes again in the future.

2 comments:

  1. As an author of one of the DFID systematic reviews, I just wanted to pick up on your comments, Lawrence.

    I agree that a considerable amount of evaluation literacy is required to conduct and make sense of reviews (although I'd also argue that clear, simple writing and good networking can break down some of the barriers to accessing systematically reviewed evidence).

    I also recognise the challenge of trying to combine apples and oranges, and I think there is methodological work to be done on how reviews collate and synthesise evidence in development - I fear we over-simplify, leading to meaningless findings that lack credibility amongst those for whom context and complexity are the everyday realities of development.

    I don't have the answers, but I suspect that finding them will require more multi-disciplinary working, flexibility in order to advance methods, and possibly more focussed reviews (whether regionally or conceptually).

    Lots of work still to do on this.

    Ruth

  2. Ruth, completely agree with you. I believe it can be partially solved, but it seemed to me that even for the systematic review I was a co-author of (on agriculture and nutrition), the conclusion could be interpreted in many ways (even with a clear writing style - which of course I am not claiming to possess!)
