19 November 2016

Evidence in nutrition: have we set the bar too high?


I’m going to be honest here: I don’t know the answer to this question, but I think it is worth asking. Is our quest for evidence enabling or inhibiting us when it comes to acting to end malnutrition?

I raise this now because over the past few weeks I have met senior people in various development organisations who think the quest for evidence about nutrition interventions may have gone too far.  

The arguments they put forward include:

* Randomised controlled trials are not the only credible source of evidence and in many cases are not the most appropriate.  Think of the recent BMJ evaluation of the soda tax in Mexico—this was done with good old-fashioned economic modelling.

* Does an intervention really have to show positive impacts in multiple geographies before we are convinced? Are we trying to find design-proof interventions?

* If an intervention implemented in the absence of cast-iron evidence could do harm, then caution is of course warranted. But where there is no potential for harm, are the downsides of taking calculated gambles on interventions really that large?

* Do the tools we consider to be gold standard lead us away from exploring certain approaches?  Are we only exploring what is amenable to randomisation and not what is meaningful?

This is a familiar debate in development, but not one I have heard voiced in nutrition. Make no mistake about it: the Lancet series of 2008 was a massively positive game changer, offering a set of proven interventions (“proven” being largely determined by randomised baseline-endline evaluations) that policymakers could latch onto in their search for a response to the food price crisis of 2007-8.

Nevertheless, I do have some sympathy for the above views. I am a staunch advocate of rigorous evaluations, but this does not mean RCTs only (and I have been involved in the design of at least two RCTs). Moreover, only certain interventions, namely new ones or ones never tried in a given context before, need the highest level of rigour. “Rigorous enough” should be the guiding concept. We are trying to figure this out for GAIN too: when do we go deep in our evaluations, and when is something less heavy “rigorous enough”?

But does the quest for purity on evidence really matter? Does it really stop action? I feel it might, particularly by stifling creative thinking and experimentation on how to reduce adolescent malnutrition. A recent meta-review cycles through the current options: micronutrient supplementation, delaying age of first birth, increasing birth spacing, and educating adolescents about healthy diets. OK, but we could have written this list 20 years ago. Where is the creativity in this space?

Evidence is the ideal driver of action, but if it is the only driver then we are stuck in its absence. We can’t be hamstrung by the lack of evidence; we must be driven by it. Driven to imagine, innovate, design, pilot and evaluate. Lives depend on it.

3 comments:

Unknown said...

This is a critical discussion that needs to be had and NOW. I believe that using the typical medical model for assessing nutrition interventions is not the way forward. GAIN is ideally placed to host this discussion.

Anonymous said...

Yes, we may gain more by triangulating with ad hoc reviews of qualitative assessments of the same types of interventions.

Unknown said...

How much evidence is enough evidence? Let us not forget that even RCTs are not impervious to malpractice. For each independently funded study, industry can fund ten endorsing the opposite thesis; remember Big Tobacco. Evidence is only useful when you’re debating with people of integrity.