20 January 2011

Science, Uncertainty and Failure

Perhaps it is because we are in what scientists tell us is the most depressing period of the year in the Northern Hemisphere (the third Monday in January), but in the past few weeks we have heard about four initiatives that positively embrace uncertainty, failure and sheer randomness.

First up, the December 13 New Yorker has an essay by Jonah Lehrer, "The Truth Wears Off", which describes how initially significant and large findings tend to diminish in magnitude upon replication. Part of this is due to selective reporting: journals want only confirming data while a new idea is at its most exciting. Once the idea becomes less novel, the non-confirming studies make it back into the journals. In other words, the initial study does not give us an accurate summary of things.
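The selective-reporting mechanism can be illustrated with a toy simulation (all numbers here are hypothetical, chosen only to show the effect): if early journals publish only the studies whose observed effect clears a high bar, the average published effect starts out inflated and then "wears off" once everything gets published.

```python
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.2   # hypothetical true effect size
NOISE_SD = 0.5      # sampling noise in each individual study
THRESHOLD = 0.5     # early journals only accept "large" findings

def run_studies(n):
    """Each study observes the true effect plus sampling noise."""
    return [random.gauss(TRUE_EFFECT, NOISE_SD) for _ in range(n)]

# Early phase: selective reporting filters out small or negative results.
early = [e for e in run_studies(10_000) if e > THRESHOLD]
# Later phase: replications are published regardless of outcome.
late = run_studies(10_000)

print(f"true effect:             {TRUE_EFFECT}")
print(f"mean early published:    {statistics.mean(early):.2f}")
print(f"mean later replications: {statistics.mean(late):.2f}")
```

The filter alone makes the early published mean several times the true effect, with no fraud and no change in the underlying phenomenon; replication simply removes the filter.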

Edge, a web magazine for scientists, invited a group of leading thinkers to answer the question "What scientific concept would improve everybody's cognitive toolkit?" There are 164 responses from people such as Daniel Kahneman (Nothing in Life is as Important as You Think it is While You are Thinking About it), Brian Eno (Ecology), Richard Dawkins (Double-Blind Randomised Trials) and Nassim Taleb (Antifragility). Many of the entries are about sheer randomness and our addiction to certainty. The Taleb article was especially interesting. What is the opposite of fragility? Not robustness or resilience, but antifragility. If fragility means losing more than you gain from variation, antifragility means gaining more than you lose from it. Experimentation is a good example of antifragility: the gains from successes are likely to outweigh the costs of failures.

Elizabeth Pisani, the epidemiologist and author of "The Wisdom of Whores", is an advocate of data sharing in research. Making research data widely available would reduce the likelihood of "the truth wearing off", and would help replication to highlight randomness and uncertainty. She reports on a new initiative from health research funders, led by Wellcome, that will accelerate the sustainable sharing of research data in health.

Finally, a new website, admittingfailure.com, has been mentioned in several blogs. There are not many failures to browse as yet (I'm thinking of some to submit), but there is an interesting one from GlobalGiving: a failure uncovered by reports from community members yet hidden by the self-reports of their grantees. We have a remarkable ability to brush off failure in development, and mechanisms like these are important for highlighting failures so we can learn from them.

So we need to take a lead from the scientists: watch out for selective reporting, embrace experimentation, look for ways in which variation can lead to positive outcomes, and implement mechanisms to uncover failures. We also need to make our data more shareable, and more widely shared.

Of course, I find none of this depressing, despite being in deep midwinter. I find it refreshing and energising. I guess those scientific models of the most depressing time of the year have more variability in them than we might think!


Anonymous said...

The comment about selective reporting opens the door to a wide debate about what comprises 'international development.'

Since the mid-1970s in Britain, and more widely since the U.N. promulgated its MDGs, 'international development' has become almost synonymous with poverty relief. The latest IDS publication, purporting to assess public opinion on 'international development' aid but actually asking respondents about poverty relief, starts from the same superficial conception. It is disappointing that a respected institution permits terminology to be used in such an inexact way, and lacks objectivity in its approach. There is more to development than a 'bottom-up' approach through alleviating poverty, and it is time the development community took a broader view of what is needed and of the impact that activities other than rural development can have.

Lawrence said...

Hi there... can you let me know exactly which publication you are citing, and I promise a response... best, Lawrence