11 September 2011

Bad Social Science?

I'm a fan of "Bad Science" by Ben Goldacre, published in the Guardian. Focusing mostly on common flaws in the scientific process, this week's article was on errors committed in the neuroscience research literature. He reports on a common statistical error. When, for example, the nerves of normal mice are treated with a drug, does the rate of nerve firing change? And what happens when you use mutant mice? If the response of the firing rate to the drug in the normal mice is statistically different from zero, while the response in the mutant mice is not, have you found a difference in response between the two groups? You have found a difference, but to claim it is a real one you must test whether the two responses are statistically different from each other.

Apparently half of the 157 studies published in 5 prestigious neuroscience journals failed to perform this "difference in difference" test. This is all the more amazing because it is an easy test. For economists, the test is equivalent to seeing whether an estimated slope coefficient in a regression equation varies by population subgroup. We usually do this test by interacting a dummy variable tagged to the subgroup (e.g. mutant mice) with the independent variable of interest (e.g. the drug) and seeing if the estimated slope coefficient on the interacted variable is significantly different from zero. If it is, then the slope (i.e. the response) is statistically different for the two subgroups.
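The interaction-term test described above can be sketched in a few lines of code. The data below are entirely simulated (hypothetical mice, hypothetical drug responses, made-up sample sizes and coefficients), purely to show the mechanics: regress the outcome on the drug variable, the subgroup dummy, and their interaction, then look at the t-statistic on the interaction coefficient.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # hypothetical observations per subgroup

# Simulated data: outcome y (firing-rate change) as a function of drug dose,
# for normal (mutant = 0) and mutant (mutant = 1) mice.
mutant = np.repeat([0.0, 1.0], n)
drug = rng.uniform(0, 1, 2 * n)
# True slopes: 1.0 for normal mice, 3.0 for mutant mice,
# so the true interaction coefficient is 2.0.
y = 0.5 + 1.0 * drug + 2.0 * mutant * drug + rng.normal(0, 0.5, 2 * n)

# Design matrix: intercept, subgroup dummy, drug, and the interaction term.
X = np.column_stack([np.ones(2 * n), mutant, drug, mutant * drug])

# OLS estimates and classical standard errors.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
dof = X.shape[0] - X.shape[1]
sigma2 = resid @ resid / dof
se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))

# The "difference in difference" test: is the interaction coefficient
# (the gap between the two subgroup slopes) significantly nonzero?
t_interaction = beta[3] / se[3]
print(f"interaction coefficient: {beta[3]:.2f}, t-statistic: {t_interaction:.1f}")
```

A large t-statistic on the interaction term is what licenses the claim that the two subgroups respond differently; finding one slope significant and the other insignificant, on its own, does not.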

Economists do not typically make this mistake. In fact, I wish they were more interested in sub-groups in the population, but they are usually happy with average effects across a diverse group of observations, the usual excuse (sometimes valid) being small sample sizes.

But this made me wonder: what similarly egregious statistical mistakes do we economists make?

I would be interested in your answers.


Michael Clemens said...

Thanks for this interesting post and I think your overall point is correct. But since you ask: This paper describes an example of bad social science: The evaluation of the Millennium Villages Project, ostensibly headed by a prominent economist, which failed to use even the basic differences-in-differences approach you mention. This led to inaccurate conclusions that have still not been retracted.

Lawrence Haddad said...

Michael, thanks. I had read your paper and found the graphs comparing simple differences and differences-in-differences fascinating. It is amazing that the mid-term evaluation of the MVP thought it could make do with only a "before and after", and that the initial funders and designers thought they did not need to worry about trying to construct control groups. As you know, this is not uncommon, although it is surprising in an intervention so sure to attract scrutiny.

One other comment I received on the post (the author wishes to remain anonymous) was the point that economists often trawl through our data until we find the result we want to find. I would agree that this is a problem. Do we need to require authors to tell us about all the regressions they have run?