A new paper by James Manley, Seth Gitter (both of Towson University in Maryland, US) and Vanya Slavchevska (American University in Washington, DC) asks: "How effective are cash transfer programmes at improving nutrition status?".
The paper is a rapid evidence assessment of all the studies that evaluate the impact of conditional and unconditional cash transfers on various measures of nutrition status.
(It is not a systematic review: the restricted time available for finding studies probably led to some of the hard-to-obtain and foreign-language material being excluded, and this might bias the findings, because easier-to-find studies--i.e. published ones--are more likely to report statistically significant results. It still seemed to me a very careful study.)
Their search uncovers 24 papers on 18 programmes in 11 countries.
The authors focus most of their energy on the analysis of the impacts of cash transfer programmes on height for age, as this is the outcome for which they have most data (18 studies looking at 15 programmes in 10 countries, which generate 117 estimates).
The multiple estimates are averaged per study per outcome indicator and then used in statistical meta-analyses (simple regressions or analysis of variance) to see whether the impacts varied by study features (e.g. quality, use of an RCT, sample size), programme features (e.g. conditionality, size of transfer), child characteristics (e.g. sex, age) and country-level features (e.g. infant mortality rates and health service provision). For the height-for-age outcome this generated 18 observations, which (I think--they don't actually say) form the basis for the regressions (n=18) in the meta-analyses.
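The two-step procedure described above can be sketched in a few lines of code. This is only an illustration with made-up numbers--the study identifiers, estimates and the "is_rct" feature below are all hypothetical, not taken from the paper--but it shows the mechanics: average the multiple estimates within each study, then run a simple meta-regression of the study-level averages on a study feature.

```python
# Sketch of a two-step meta-analysis (all data below is hypothetical):
# 1) average the multiple estimates within each study,
# 2) regress the study-level averages on a study feature via simple OLS.
from statistics import mean

# Hypothetical (study_id, estimate) pairs -- several estimates per study.
estimates = [
    ("A", 0.10), ("A", 0.14), ("A", 0.06),
    ("B", -0.02), ("B", 0.02),
    ("C", 0.20), ("C", 0.16),
]
is_rct = {"A": 1, "B": 0, "C": 1}  # hypothetical study-level feature

# Step 1: one averaged effect size per study
# (the analogue of the paper's n=18 height-for-age observations).
by_study = {}
for sid, est in estimates:
    by_study.setdefault(sid, []).append(est)
avg = {sid: mean(vals) for sid, vals in by_study.items()}

# Step 2: closed-form simple OLS of averaged effect on the study feature.
x = [is_rct[s] for s in avg]
y = [avg[s] for s in avg]
xbar, ybar = mean(x), mean(y)
slope = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
        sum((xi - xbar) ** 2 for xi in x)
intercept = ybar - slope * xbar
print(round(intercept, 3), round(slope, 3))
```

With only one observation per study entering step 2, the regression's sample size equals the number of studies--which is why the paper's n=18 is so small despite the 117 underlying estimates.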
The paper is thorough and, for the most part, well done and generates some interesting results.
I liked the fact that the authors attempted to do meta analyses on the estimates of the impact of the programmes on height for age.
My main problem is that the authors used all 117 estimates, stating that "Each estimate contains useful information so we want to include all of them, but at the same time we must control for the correlation between estimated impacts for different estimators or treatment groups in the …"
Well, they may all contain interesting information, but that does not mean they should all be included. Some estimates are reported in the original studies only to test whether more sophisticated estimators are needed; if they are, then the more basic estimates should be discarded by the meta-analysis. In other words, only the preferred estimates from the original papers should be included. From the review, I could not tell whether the non-preferred estimates had been discarded from the meta-analysis.
If known biased estimates are included in the meta-analysis, then this is obviously a big problem for all the conclusions of the paper.
If--and it is a big if--the conclusions are drawn from meta-analyses that exclude the known biased estimates, then they are really interesting:
* "The average effects of the programmes on height for age are positive but statistically indistinguishable from zero, and the conditions, including the country characteristics, recipient population characteristics, and the programme characteristics all matter." (In other words, these interventions are not proven in all contexts, and design matters--this echoes the 2008 Lancet conclusions from Bhutta (Table 1).)
* Which programme characteristics matter? The only one found to be significant was conditionality not tied to health or education (it is not terribly clear what these non-health-and-education conditionalities were--I presume they had something to do with employment), and it had a negative effect. There was no statistical difference between programmes that imposed no conditions and those that conditioned on health and education (I was surprised at this--my priors were that the latter would have a bigger effect). Wisely, the authors state that there are probably many other programme features more important than conditionality and that we should avoid over-focusing on this design feature, although they recognise the political as well as technical rationale for it (but, then again, many of the other features are political too--think of transfer size!).
* Which study characteristics matter? The analysis could not find any significant correlations between outcome and study characteristics. So use of RCTs, study quality, peer review--no significant differences.
* Which child characteristics matter? The impacts tend to be higher for girls. No significant age effects found. The girl effect is not discussed in the report. Nor is the programme characteristic of who the transfer is given to within the household explored--perhaps there is some link there?
* Which country characteristics matter? When infant mortality is high and hospital infrastructure is poorest, impacts are most positive. As the authors note, this provides some supportive evidence for the UNICEF Narrowing the Gaps to Meet the Goals approach of reaching the most marginalised.
But the meta-analysis is a tease: only 18 observations, and these might include known biased estimates of impact. We need clarification on the latter point and a multiplication of the data points.
Nevertheless this is a fascinating paper and perhaps its greatest contribution is the creation of a universe of studies with a coding of some of their key features.
As the paper says, we need more studies. I agree, and they should be informed by this report.