Save the Children and Oxfam released a joint report today entitled "A Dangerous Delay: The Cost of Late Response to Early Warnings in the 2011 Drought in the Horn of Africa". The report argues that the early warning systems worked, but that decision makers failed to respond to them. In an interview, Justin Forsyth, the head of Save the Children UK, likened the situation to an alarm bell with a very delayed effect. Many people were harmed by the delayed response, and the cost of dealing with hunger and malnutrition was much higher than it would have been had the problem been addressed earlier.
So why the delay? Using Forsyth's analogy, either policymakers did not hear the alarm, did not trust it, or, despite hearing and believing it, simply could not respond quickly enough. The report makes some good recommendations about amplifying the alarm (via the media, and by building the capacity of those tasked with communicating its significance up the decision chain). It also makes good recommendations about helping decision makers respond more quickly once they believe it (emergency response funds, insurance, and greater joint programming between development and humanitarian groups).
The one area on which the report is relatively silent is whether policymakers believe the signals in the first place. The report focuses on the case where the signals were right and outlines the cost of ignoring them, but policymakers might counter: what about the cost of acting when the signals are wrong?
As researchers, we should be analysing, ex post, how often the early warning signals get it right. If we can attach a probability to the predictive power of the signals, then when the next alarm goes off policymakers can better weigh the risk of responding to a non-crisis against the risk of responding late to an emerging crisis.
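To make the idea concrete, here is a minimal sketch of that ex-post calculation. All of the numbers are invented for illustration (they do not come from the report): we score a hypothetical history of signals against whether a crisis actually followed, estimate a hit rate, and then compare the expected cost of acting on the next alarm against the expected cost of waiting.

```python
# Hypothetical history of past seasons: (signal fired, crisis followed).
# These records are illustrative assumptions, not real data.
history = [
    (True, True), (True, False), (True, True), (False, False),
    (True, True), (False, True), (True, False), (True, True),
]

# Estimate P(crisis | signal) from the seasons in which the alarm fired.
outcomes_when_fired = [crisis for signal, crisis in history if signal]
hit_rate = sum(outcomes_when_fired) / len(outcomes_when_fired)

# Illustrative costs (again, assumptions): early response is cheap even
# when the alarm is false; a late response to a real crisis is expensive.
cost_early_response = 10
cost_late_response = 100

expected_cost_act = cost_early_response            # paid whether or not a crisis follows
expected_cost_wait = hit_rate * cost_late_response # paid only if the crisis materialises

print(f"estimated hit rate:      {hit_rate:.2f}")
print(f"expected cost of acting: {expected_cost_act}")
print(f"expected cost of waiting: {expected_cost_wait:.1f}")
```

With these made-up numbers, acting on the alarm is the cheaper bet even though a third of the signals were false alarms; the point is only that the comparison becomes possible once the hit rate is measured.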
It's not pretty, but I would not be surprised if this is the kind of calculation that is often made.