Apparently half of the 157 studies published in five prestigious neuroscience journals failed to perform this "difference in difference" test. This is all the more amazing because it is an easy test. For economists, the test is equivalent to checking whether an estimated slope coefficient in a regression equation varies by population subgroup. We usually do this by interacting a dummy variable for the subgroup (e.g. mutant mice) with the independent variable of interest (e.g. the drug) and seeing whether the estimated coefficient on the interaction term is significantly different from zero. If it is, then the slope (i.e. the response) is statistically different for the two subgroups.
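To make the mechanics concrete, here is a minimal sketch of that interaction test. It assumes Python with pandas and statsmodels, and the variable names (dose, mutant, response) and the simulated data are invented purely for illustration:

```python
# Hypothetical illustration: test whether a treatment slope differs across
# two subgroups by interacting a subgroup dummy with the treatment variable.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200

# Simulated data in which mutant mice respond more strongly to the drug dose.
dose = rng.uniform(0, 10, n)        # independent variable of interest
mutant = rng.integers(0, 2, n)      # subgroup dummy (1 = mutant)
response = 1.0 + 0.5 * dose + 0.4 * mutant * dose + rng.normal(0, 1, n)
df = pd.DataFrame({"dose": dose, "mutant": mutant, "response": response})

# 'dose * mutant' expands to dose + mutant + dose:mutant.
# The coefficient on dose:mutant is the difference in slopes between subgroups.
fit = smf.ols("response ~ dose * mutant", data=df).fit()
print(fit.summary())
print("Interaction p-value:", fit.pvalues["dose:mutant"])
```

If the p-value on the interaction term is small, the estimated response to the drug differs significantly between the two subgroups. Reporting that the effect is significant in one group and not in the other, as in the flagged neuroscience studies, does not test this.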
Economists do not typically make this mistake. In fact, I wish they were more interested in sub-groups in the population, but they are usually happy with average effects across a diverse group of observations, the usual excuse (sometimes valid) being small sample sizes.
But this made me wonder: what similar statistical blunders do we economists make?
I would be interested in your answers.
2 comments:
Thanks for this interesting post, and I think your overall point is correct. But since you ask, this paper describes an example of bad social science: the evaluation of the Millennium Villages Project, ostensibly headed by a prominent economist, which failed to use even the basic differences-in-differences approach you mention. This led to inaccurate conclusions that have still not been retracted.
Michael, thanks. I had read your paper and found the graphs comparing simple differences and differences-in-differences fascinating. It is amazing that the mid-term evaluation of the MVP thought it could make do with only a "before and after", and that the initial funders and designers thought they did not need to worry about constructing control groups. As you know, this is not uncommon, although it is surprising in an intervention so sure to attract scrutiny.
One other comment I received on the post (the author wishes to remain anonymous) was the point that we economists often trawl through our data until we find the result we want to find. I would agree that this is a problem. Do we need to require authors to tell us about all the regressions they have run?