Not having analyzed the data (a big caveat for a social scientist, mind you), I’ll agree with the critics who aren’t buying the evidence from a Heritage report suggesting that “abstinence pledge” programs work. Not that the story makes much sense, since it’s clear the author doesn’t actually know anything about social scientific research and simply relies on an outside expert and the authors of the original study to rebut the paper.
But Matthew Yglesias’ critique really goes off the rails. First he complains, “the study was not peer-reviewed, is unpublishable in real academic journals, uses an unreliable data source, and only supports the conclusion when you use a non-standard test for statistical significance.”
The first two critiques are bizarre: (a) the paper has never been submitted for peer review, and (b) we don’t know whether it’s publishable precisely because that submission hasn’t happened yet; the claim that it’s unpublishable is one person’s opinion quoted in the article, not a factual statement. Nor do the authors use any “non-standard test.” They use a p-value of 0.10 as their cutoff rather than the traditional 0.05, which makes the results somewhat less convincing but isn’t inherently invalid, and in any case a significance cutoff isn’t a test at all: tests are things like t tests and Wald tests, and p-values are the results those tests produce.
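To make the distinction concrete, here is a minimal sketch on entirely hypothetical data (nothing from the Heritage paper, and the group names and rates are made up): the t test produces the same statistic and p-value regardless of which cutoff you decide on afterward; the 0.05-versus-0.10 choice only affects how you label the result.

    # Hypothetical example: two-sample t test, then apply two different
    # significance cutoffs to the SAME p-value.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    # Made-up binary outcomes (1 = reported the behavior) for two groups.
    pledgers = rng.binomial(1, 0.25, size=200)
    non_pledgers = rng.binomial(1, 0.32, size=200)

    t_stat, p_value = stats.ttest_ind(pledgers, non_pledgers)
    print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
    print("significant at 0.05?", p_value < 0.05)  # stricter, traditional cutoff
    print("significant at 0.10?", p_value < 0.10)  # looser cutoff, same test

The test is the t test; 0.05 and 0.10 are just thresholds applied after the fact.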
The only critique that’s even vaguely valid is that the data source is unreliable, as it relies on self-reporting by respondents of their behavior. This is a problem, to the extent you believe that people who have signed abstinence pledges are more likely to lie about their sexual activity than those who haven’t. I’ll concede that it’s possible that that’s the case. Mind you, Heritage didn’t come up with the data—HHS did—and trying to get people to accurately self-report anything is harder than it looks.
Then Yglesias turns and goes completely bizarro:
The only newsworthy information in the story is that the Bush Department of Health and Human Services has decided for some reason to start contracting out research on controversial questions to an ideological think tank that is non-partisan in name only, rather than to proper independent analysts.
There is no evidence in the story that Heritage was working under any sort of HHS contract. On the contrary, Heritage appears to have analyzed data produced under an HHS/CDC contract, data that is in the public domain.* They then presented their results at a government-sponsored conference. The next step would be to fix any problems in the paper (and the article suggests there were some), and then submit the paper to a peer-reviewed journal. That’s how social science is done.
Now, mind you, it might be premature for the New York Times to be calling attention to this story, but given public interest in the issue—and the Times’ possible interest in discrediting this evidence, not that I’d suspect the paper of having an ideological bias in its reporting decisions—I’m not sure I can fault them for covering preliminary results that (potentially) rebut a serious critique of administration policy.
* If the CDC had helped fund either analysis, it would be traditional for the studies to acknowledge the funding at the beginning of the paper in a note. I think it’s more likely that the Times meant to say that the CDC helped fund the HHS survey, not the Heritage study.
5 comments:
Also, I’ve read that abortions are up since Bush was elected. Apparently the way it works is this: the more abstinence is the focus, the less they can talk about birth control. Everyone still has sex anyway. Go figure.
Geez, if we had that to worry about, I’d guess that 70 percent of all social scientific research would be invalid!
Heh… true enough, although false reporting by respondents is particularly acute when it comes to sexual activity and similar topics.
Yeah. Remember those “anonymous” drug questionnaires we all had to fill out in high school for the ONDCP?
But this is also a notorious problem in any sort of behavioral self-reporting. For instance, researchers at Ball State found that people underreported their media use by as much as four hours (!) per day. The researchers first gathered self-report diaries and then followed a sample of those people around for a week.
Ah, yes, the drug questionnaire. I believe my responses indicated I was a crack whore or something with a 4.0.