Sunday, 26 October 2008

Inside baseball

I’ve been remiss recently in pointing my readers to Zachary Schrag’s Institutional Review Blog, which patiently documents how federal regulations for the protection of human subjects are overreached and misapplied to the social and behavioral sciences, including to research that federal regulation itself exempts from IRB review.

While Schrag is cautiously optimistic that the new head of the Office for Human Research Protections will be an improvement, the continued domination of the process at most levels by biomedical researchers—along with the general sense that, as Schrag notes, “researchers cannot be trusted to apply the exemptions themselves”—is still troubling to those of us who want to conduct human subjects research, particularly secondary data analysis. Technically speaking (even though I dare say most social scientists observe this requirement in the breach), even the analysis of secondary data collected by others and fully anonymized before we see it (e.g. the General Social Survey, the American National Election Studies series, the Eurobarometer series) requires IRB oversight and approval beforehand.

Thursday, 23 October 2008

Bust a move

I enjoyed last night’s episode of Mythbusters for a variety of reasons. For starters, I now have Smart Board Envy™; projectiles and explosions are always fun; and seeing Adam, Jamie, and Kari drunk was a hoot.

The social scientist in me, though, really enjoyed the “beer goggles” experiment. In fact, the show, edited down to just include that section, would make a great primer on “how social scientific experiments work” for my undergraduate methods course when I teach it again, presumably next fall. On the other hand, I was less thrilled with the “sobering up” experiment, but the comedy factor of drunken Adam trying to run on a treadmill without a handrail, with all-too-predictable results, made up for the scientific shortcomings therein.

Wednesday, 23 January 2008

What I said

Timothy Burke articulates, in a far better way, an idea that Frequent Commenter Scott and I discussed (probably loudly enough to maximally annoy the other patrons) at a bar in Chicago sometime last year, although my pitch was more for Mythbusters gone social science. Then again, maybe just using straight Mythbusters is a better idea; I doubt very many people would tune in to watch Kari Byron analyze data in SPSS.

Wednesday, 15 June 2005

He said, she said journalism

Not having analyzed the data (a big caveat for a social scientist, mind you), I’ll agree with the critics who aren’t buying the evidence from a Heritage report that suggests that “abstinence pledge” programs work. Not that the story makes that much sense, since it’s clear the author doesn’t actually know anything about social scientific research and just relies on an expert and the authors of the original study to rebut the paper.

But Matthew Yglesias’ critique really goes off the rails. First he complains, “the study was not peer-reviewed, is unpublishable in real academic journals, uses an unreliable data source, and only supports the conclusion when you use a non-standard test for statistical significance.”

The first two critiques are bizarre, since (a) the paper has never been submitted for peer review and (b) we therefore don’t know whether or not it’s publishable; the claim that it isn’t is an opinion expressed by someone in the article, not a factual statement. Nor do the authors use any “non-standard test”: they use a p-value cutoff of 0.10 rather than the traditional 0.05, which makes the result somewhat less convincing but isn’t inherently invalid, and in any event a significance cutoff isn’t a test to begin with (t tests and Wald tests are tests; p-values are the results those tests produce).
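
To make the distinction concrete, here’s a minimal sketch in Python using invented data (this is not the HHS survey or Heritage’s analysis, just two made-up groups): the t test is the statistical test and produces the p-value; the 0.05 or 0.10 cutoff is a convention the analyst applies to that p-value afterward.

```python
# Illustrative only: made-up outcome scores for two hypothetical groups.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pledgers = rng.normal(loc=0.45, scale=1.0, size=200)      # hypothetical scores
non_pledgers = rng.normal(loc=0.25, scale=1.0, size=200)

# The *test* is the two-sample t test; it produces a statistic and a p-value.
t_stat, p_value = stats.ttest_ind(pledgers, non_pledgers)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# The significance level (alpha) is a cutoff the analyst chooses; it is not a test.
for alpha in (0.05, 0.10):
    verdict = "significant" if p_value < alpha else "not significant"
    print(f"at alpha = {alpha:.2f}: {verdict}")
```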

The only critique that’s even vaguely valid is that the data source is unreliable, as it relies on self-reporting by respondents of their behavior. This is a problem, to the extent you believe that people who have signed abstinence pledges are more likely to lie about their sexual activity than those who haven’t. I’ll concede that it’s possible that that’s the case. Mind you, Heritage didn’t come up with the data—HHS did—and trying to get people to accurately self-report anything is harder than it looks.
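
For what it’s worth, the mechanism the critics worry about is easy to simulate. In the hypothetical sketch below (every number is invented for illustration; this is not the HHS data), both groups have the same true rate of activity, but because one group is assumed to deny it more often, the self-reported rates show a gap anyway.

```python
# Hypothetical illustration of differential under-reporting; all rates invented.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
true_rate = 0.50  # assume the *true* activity rate is identical in both groups

active_pledgers = rng.random(n) < true_rate
active_controls = rng.random(n) < true_rate

# Suppose active pledgers deny their activity 20% of the time, active controls only 5%.
reported_pledgers = active_pledgers & (rng.random(n) > 0.20)
reported_controls = active_controls & (rng.random(n) > 0.05)

print(f"true rates:     {active_pledgers.mean():.3f} vs {active_controls.mean():.3f}")
print(f"reported rates: {reported_pledgers.mean():.3f} vs {reported_controls.mean():.3f}")
# The reported gap (roughly 7 to 8 points) is purely an artifact of the assumed
# differential denial rates, not of any real behavioral difference.
```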

Then Yglesias turns and goes completely bizarro:

The only newsworthy information in the story is that the Bush Department of Health and Human Services has decided for some reason to start contracting out research on controversial questions to an ideological think tank that is non-partisan in name only, rather than to proper independent analysts.

There is no evidence in the story that Heritage was working under any sort of HHS contract. On the contrary, Heritage appears to have analyzed data that were produced under HHS and CDC contract and are in the public domain.* They then presented their results at a government-sponsored conference. The next step would be to fix any problems in the paper (and the article suggests there were some), and then submit the paper to a peer-reviewed journal. That’s how social science is done.

Now, mind you, it might be premature for the New York Times to be calling attention to this story, but given public interest in the issue—and the Times’ possible interest in discrediting this evidence, not that I’d suspect the paper of having an ideological bias in its reporting decisions—I’m not sure I can fault them for covering preliminary results that (potentially) rebut a serious critique of administration policy.

* If the CDC had helped fund either analysis, it would be traditional for the studies to acknowledge the funding at the beginning of the paper in a note. I think it’s more likely that the Times meant to say that the CDC helped fund the HHS survey, not the Heritage study.