05 September 2009

Grading Research for Development

I just returned from Coleraine and the Development Studies Association Annual Conference. Lots of interesting papers and presentations: Charles Gore on new global paradigms (knowledge-dominated), Santosh Mehrotra from the Indian Planning Commission on the impacts of the downturn on India's growth (not too bad) and poverty (not clear, but likely not good), Mayra Buvinic from the World Bank on what to do to protect women in the downturn (not too many new interventions, it seemed to me, mainly intensification of existing ones), the new Director of Research at DFID, Chris Whitty, on a quality-graded research evidence base, and DFID DG Andrew Steer reporting on the new DFID White Paper. There were also many interesting papers in the parallel sessions attended by the 200 participants, including some good IDS sessions on the impacts of the downturn and the implications for development policy. Catch some of the clips at The Broker website.

Chris Whitty's session generated the most heat, and perhaps a little light. Chris shares my concern that research is not fulfilling its potential to reduce poverty. It's hard to prove this. But we do know that an accelerating amount of research is being generated--much of it funded by DFID--and that its sheer volume makes it tough to keep up with. Just think how hard it is to stay on top of developments in one's own field, and then imagine how much harder it is for generalists in policy or frontline decision-making positions. So how do we systematically organise the material around questions and contexts, separate the careful from the not so careful, and then communicate that in an accessible way? Outside of the development social sciences this is fairly routine--there is the Cochrane database and the Campbell Collaboration. Inside the development social sciences it is not unknown (see this example from the World Bank) but it is fairly rare.

The debate at the session revolved, it seemed to me, around which research questions one applies such a mechanism to and what that mechanism looks like, especially who does the grading. On the first issue--which questions--one needs questions that decision makers want answered and that lend themselves to comparisons across contexts. One example: when does conditionality on participant behaviour improve social protection programmes and when does it not? Even this question is challenging to build an evidence base for--what qualifies as a social protection programme? What does "improve" mean?--but other questions, such as "can pro-poor growth be pro-environment?", will be more difficult, and more open-ended questions, such as "how do politics shape the use of knowledge?", more difficult still, and potentially counterproductive to even attempt. On the second issue--grading--it would be good to have peers reviewing, but perhaps in an open, wiki-style way. I will keep you posted on this debate as it plays out.

2 comments:

Dominic Furlong said...

I think the issues you raise around how to, as well as who should, grade research outputs are very interesting. This may be an obvious comment to make, but to me these issues are part of a broader debate on how to strengthen research-to-policy processes, of which the grading of research outputs is only one part.

How to, and who should, grade research outputs needs to be considered alongside how to improve the monitoring and evaluation of research uptake and impact. DFID published a report in December 2008 on the lessons learnt in research communication within the context of its RPCs that touches on these matters (http://www.research4development.info/PDF/Publications/DFID_ResComm_WSReport3_22Jul08.pdf).

In focusing on the quality of research outputs solely in terms of their traditional form as academic papers and reports — which I’m sure you don’t do, but I think it useful to make the point — we run the risk of not framing them as knowledge products that form part of the research-to-policy cycle. The two issues are inter-linked: one is about how to grade the quality of research outputs; the other is about how to monitor and evaluate the quality of research-to-policy uptake and impact. They need to be considered in tandem.

Lawrence Haddad said...

Dominic, completely agree that quality grading is only one part of the story. The M&E of research is a very understudied area, and impact should be a part of quality assessment. My colleague Andy Sumner has just finished a useful review of assessing the impact of research, which I am sure he will share--he can be reached at a.sumner@ids.ac.uk.