Yesterday IDS and ODI hosted a meeting with the Secretary of State Andrew Mitchell on the Results for Change agenda, chaired by IDS Board Chair Richard Manning. In all, 25 people were present from a range of research and development partners.
The Results for Change agenda has two key features (a) intensifying the focus on generating development outcomes, and (b) putting more scrutiny on whether this is being done cost-effectively. The agenda is aimed at reassuring the UK electorate that their aid is well spent. But it is also about wanting to support “good change” to make the world a better place. By “good change” I mean change where it is most needed and for whom it is most needed, change that is transforming and enduring and, vitally, change that does no harm.
The meeting focused on how the results agenda could be shaped to support the delivery of this good change. The Secretary of State’s speech at the Royal College of Pathologists demonstrated that he and his team are well aware of the potential disconnects between results and good change. For example, a narrow interpretation of “results” could mean that we evaluate things that are easy to evaluate, diverting attention away from things that are potentially more transforming and enduring but more awkward to evaluate.
My takeaways from the meeting:
1. Innovation. There is a desperate need for innovation in this space. First, on issues. For example, what is the best way of evaluating conflict prevention efforts or the efforts of businesses to achieve development outcomes as well as profits? In other words how do we push and pull good evaluations into the more difficult spaces? Second, how do we get more stakeholder voice into the evaluation process? Will this incentivise learning, make failure harder to ignore and force more listening? Third, how do we communicate results to policymakers, to the UK public (referred to as “Mrs Jones” by several participants) and to citizens in places where DFID works (ditto, “Mrs Banda”). The gap between the technical findings and how they are communicated to these different stakeholders is clearly large.
2. Simplification. Owen Barder noted that the outcomes focus, while a much needed emphasis, was in danger of overburdening organisations using aid. The introduction of a new reporting structure without a paring down of the existing structures risks diverting creative energy away from the achievement of the outcomes. Can the input and output tracking systems be simplified? If not, then outcomes will be at risk.
3. Accountability. How can we track where the evaluations are landing? Despite best intentions, is too much of the evaluative effort going into short-term service delivery activities and not enough into activities that try to improve systems and rebalance power? Ben Ramalingam shared a useful graph that mapped the results context by (1) the nature of the intervention (simple to complex) and (2) the political context (pro-poor to not pro-poor). If all the evaluations and results are accumulating in the simple intervention/pro-poor space then we are not focusing on the portfolio of potential actions in a sufficiently balanced way.
4. Evidence and decision making. Political opportunity trumps evidence, up to a point. From the practitioners in the room we had some good insights on (a) how evidence is used to make the best decisions within the political space that exists (e.g. if political space exists for vaccinations, use evidence to make sure the right vaccinations are delivered to those most in need in ways that promote greatest spillovers) and (b) on how evidence can shape the political space in the medium run (e.g. does the evidence justify the political space that HIV/AIDS commands?).
5. More Evidence. We agreed that the evidence base in development is weak in many areas (don't forget, we are researchers), at least for the more RCT-type evidence. But we skirted the issue of what constitutes credible evidence, noting that in some cases RCTs will be the "gold standard", but in many other cases there will be different methods and blends that will be labelled as gold standard. We heard that DFID has commissioned a review of methods that can offer levels of rigour in their context similar to the potential rigour RCTs offer in theirs.
6. Need for country-led accountability. We did not spend enough time on this. Who are the results for? And who generates them? These are key questions that will shape which policies and actions are evaluated, the definitions of the quality and weight of evidence and, crucially, the incentives to learn from the findings.
It’s clear to me that the “results for change” agenda has great potential. Those of us in the research community must play our part if this potential is to be realised. We must be at the forefront of this agenda, working with aid agencies and governments to make the results work for “good change”. In doing this, we may well have to change ourselves: learning from evaluation methods outside of development, trying to evaluate the seemingly unevaluable, and investing more in understanding the political processes within which decisions are made and communicated.