Monday, 28 April 2003

Lott-a-go-go

Tim Lambert has a Sunday update that links here. I agree with Tim that there were coding errors; however, having worked with large CSTS (cross-sectional time-series) data sets myself, I know it can be hard to get the coding right, particularly when you’re dealing with time-varying covariates (for example: event X happened in 1991; do I change the dummy variable in 1991 or in 1992?). One’s judgment of the maliciousness will probably depend on one’s overall assessment of Lott; I’m not going to go there.
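To make the coding question concrete, here’s a minimal sketch of the two conventions using entirely fabricated adoption years (the states, years, and column names are my own illustration, not anything from Lott’s data):

```python
import pandas as pd

# Hypothetical panel: one row per state-year; adoption years are invented.
adoption_year = {"PA": 1991, "TX": 1989}
rows = [(s, y) for s in adoption_year for y in range(1988, 1994)]
panel = pd.DataFrame(rows, columns=["state", "year"])

# Convention A: the dummy switches on in the adoption year itself.
panel["law_same_year"] = (
    panel["year"] >= panel["state"].map(adoption_year)
).astype(int)

# Convention B: the dummy switches on the following year (e.g., a law
# passed mid-1991 means 1992 is the first full year of exposure).
panel["law_next_year"] = (
    panel["year"] >= panel["state"].map(adoption_year) + 1
).astype(int)
```

For the hypothetical Pennsylvania law passed in 1991, the 1991 row is treated (dummy = 1) under convention A but untreated under convention B; with dozens of states and laws, applying the two conventions inconsistently is exactly the kind of coding error that’s easy to make and hard to spot.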

The larger question: has Lott been discredited? I don’t know. Ayres and Donohue say yes, but the potential problems I identified with the econometrics apply to them as much as to Lott; until someone does a proper analysis—dealing properly with missing data, justifying fixed effects (instead of, for example, random effects or regional or state dummies), etc.—we just don’t know who is right. But again, that’s a job for someone who either (a) has tenure or (b) cares; the topic’s too politicized for someone who doesn’t even have a Ph.D. yet, much less a job. I’ll just go with the default Calvin Trillin response for now: it’s too soon to tell.
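For readers unfamiliar with what the fixed-effects choice amounts to: estimating a coefficient with a full set of unit dummies is algebraically identical to the “within” transformation (demeaning each unit’s data). A minimal sketch on fabricated data (all numbers invented; nothing here comes from the crime data sets):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy panel: 5 units, 10 periods, with unobserved unit-specific intercepts.
n_units, n_periods = 5, 10
unit = np.repeat(np.arange(n_units), n_periods)
x = rng.normal(size=n_units * n_periods)
alpha = rng.normal(size=n_units)                  # unit effects
y = 2.0 * x + alpha[unit] + rng.normal(scale=0.1, size=x.size)

# Fixed effects, version 1: least squares with a full set of unit dummies.
D = (unit[:, None] == np.arange(n_units)).astype(float)
beta_dummies = np.linalg.lstsq(np.column_stack([x, D]), y, rcond=None)[0][0]

# Fixed effects, version 2: demean x and y within each unit, then regress.
def demean(v):
    means = np.bincount(unit, weights=v) / np.bincount(unit)
    return v - means[unit]

xd, yd = demean(x), demean(y)
beta_within = (xd @ yd) / (xd @ xd)
```

Both versions recover the same slope (about 2 here). The substantive debate is whether soaking up all cross-state variation this way is justified, or whether random effects or coarser regional dummies would use the data better; the code only shows what the fixed-effects choice mechanically does.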

Tim Lambert has another post today arguing that there’s a systematic problem with Lott’s coding that favors his results; since I haven’t read Lott & Mustard (I have a copy of More Guns, Less Crime, but time constraints kept me from getting past the first few pages and a skim of the tables), I can’t speak to that, though it does seem suspicious at first glance.

And, regrettably, picking and choosing one’s analyses is endemic to the social sciences: you present the models that work. Of course, if the model doesn’t work (at least in terms of the relationship you care about; who cares whether the SOUTH dummy is significant or not), and you can’t fix it without doing fraudulent things with the data or the specification, then you’d better throw out your research or revise your hypotheses...