Apparently half of the 157 studies published in 5 prestigious neuroscience journals failed to perform this "difference in difference" test. This is all the more striking because the test is easy to run. For economists, it is equivalent to asking whether an estimated slope coefficient in a regression varies by population subgroup. We usually test this by interacting a dummy variable for the subgroup (e.g. mutant mice) with the independent variable of interest (e.g. the drug) and checking whether the estimated coefficient on the interaction term is significantly different from zero. If it is, the slope (i.e. the response) is statistically different for the two subgroups.
Economists do not typically make this mistake. In fact, I wish they were more interested in subgroups in the population, but they are usually happy with average effects across a diverse group of observations, the usual excuse (sometimes valid) being small sample sizes.
But this made me wonder: what similarly blunderous statistical mistakes do we economists make?
I would be interested in your answers.