David Adesnik has a response to the critiques of his earlier posts at OxBlog and the Volokh Conspiracy. He first notes that he’s just as annoyed by new data sets as by old ones:
Actually, I’m far more frustrated by the new data sets than the rehashing of the old ones. Just three days ago I was at a presentation in which a colleague described the data set she assembled on over 120 civil wars that have taken place since 1945. Since Latin America is the region I know best, I pulled the Latin American cases out of the data set to look at them.
What I found was that a very large proportion of the cases were “coded” in a misleading or flat-out wrong manner. Why? Because no one can study 120 civil wars. But pressure to come up with data sets leads scholars to do this anyway and do it poorly. Of course, since their work is evaluated mostly by other scholars who lack the historical knowledge to criticize their work, they get away with it. And so the academic merry-go-round spins merrily along.
That’s a fair and reasonable critique—of that particular dataset. There’s always a tradeoff between parsimony on the one hand and depth on the other. You can collect data on 120 civil wars, and try to explain with parsimony why—in general—civil wars occur, or you can soak and poke in one civil war and try to figure out all the myriad causes for that particular one. Each has its pitfalls; figuring out why Cambodia had a civil war in 1970 (my years are probably off, me not being an IR scholar) through a “soak and poke” really doesn’t help explain why Pakistan had one in 1973. On the other hand, oversimplifying the causes can be problematic too.
But that strikes me as more of a coding problem in a particular dataset than a problem endemic to social science research; ultimately, you have to simplify the real world to make scientific explanations of it. And this isn’t a problem unique to “soft” sciences like political science: physicists don’t really think light is composed of photons that are both a particle and a wave (for example), but the only way for humans to currently understand light is to model it that way, and chemists don’t think that nuclei are indivisible (but, for their purposes 99.9% of the time, they might as well be).
David does take me to task for my admittedly flip remark that Hamas was comparable to the Sierra Club:
With apologies to Chris, his comment summarizes everything that is wrong with political science. Who but a political scientist could think that ideology is not a good explanation for the differences between the Sierra Club and Hamas?
Both groups have fairly revolutionary ideologies, yet they pursue their ends through different means. The Sierra Club operates in an environment where at least some of its goals can be accomplished from within the existing political system, while Hamas’ goal is the obliteration of the existing political system in Israel and the Palestinian territories. One need not resort to ideology to see that the Sierra Club doesn’t need to engage in violence to pursue its goals while it’s pretty clear that for Hamas to produce revolutionary change in the former Palestinian mandate, it does.
That the goal has something to do with Hamas’ ideology is rather beside the point; they can’t accomplish it without obliterating the Israeli state through violent action. The Sierra Club, on the other hand, has a sympathetic political party, a regulatory agency whose civil service employees (if not its politically-appointed overseers) share its goals, and other sources of active support that mean that they can achieve their goal of reducing pollution and other environmental impacts without resorting to violence. Ideology may define the goal, but the goal itself will be pursued through means that are shaped by the political environment.
Of course, in some cases, ideology may affect the means chosen. But a theory of how Osama Bin Laden operates isn’t very generalizable; it only explains how Bin Laden behaves, without explaining how ETA, the Tamil Tigers, or the Real IRA operate. That’s the tradeoff—you can spend a lot of time trying to explain how one actor will behave, and nail that, or you can spend a lot of time explaining how multiple actors will behave, and maybe get close. Maybe Bin Laden deserves case study attention. But most political actors don’t; they’re frankly not that interesting.
For example, in-depth case study of how my neighbor across the street makes his voting decisions tells me next to nothing about how my next-door neighbors vote, much less how people vote in general. My resources are probably better spent trying to explain how most people vote from large-scale survey data, and getting close, rather than studying one person so I can predict precisely how he’ll vote in 2032.
Around Harvard, all one hears is that incorporating statistics into one’s work significantly increases one’s marketability (and I don’t just mean at the p<.05 level—we’re talking p<.01 on a one-tailed test.)
I will grant that the use of statistics—or more accurately, the demonstrated ability to use statistics—helps the marketability of political scientists. For one thing, this is because of hiring practices in political science—your primary or major field defines the sort of job you will get. Unless you are looking for a job at a small liberal arts college, no school that is hiring in IR will care if your second (minor) field is comparative, theory, or American, since you’ll never teach or do research in those fields. The exception is in political methodology: you can get a job in methods with a substantive major and a minor in methods. The downside (if you don’t like methods) is that you will be expected to teach methods. The upside is that you aren’t tied to a particular substantive field.
More to the point, in some fields it is difficult to do meaningful research without statistics. In mass political behavior and political psychology—my areas of substantive research—at least a modicum of statistical knowledge is de rigueur. Which brings me to Dan’s point:
I’d argue that the greater danger is the proliferation of sophisticated regression analysis software like STATA to people who don’t have the faintest friggin’ clue whether their econometric model corresponds to their theoretical model.
For every political scientist who knows what the hell they’re doing with statistics, there are at least two who think typing “logit depvar ind1 ind2 ind3” at a Stata prompt is the be-all and end-all of statistical analysis. Frankly, a lot of the stats you see in top-flight journals are flaming crap—among the sins: misspecified models, attempts to make inferences that aren’t supported by the actual econometric model, acceptance of key hypotheses based on marginally significant p values, use of absurdly small samples, and failure to engage in any post-estimation diagnostics. And, of course, “people who don’t have the faintest friggin’ clue whether their econometric model corresponds to their theoretical model.” Several thousand political scientists receive Ph.D.s a year in the United States, and I doubt 20% of them have more than two graduate courses in quantitative research methods—yet an appreciable percentage of the other 80% will pass themselves off as quantitatively competent, which, unless they went to a Top 20 institution, they almost certainly are not.
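To make the point concrete: the logit command itself is the easy part. Here’s a minimal sketch (in Python rather than Stata, on entirely made-up simulated data—no real model or dataset is being reproduced here) of a one-predictor logistic regression fit by gradient ascent, followed by the crudest possible post-estimation check: does the fitted model classify the outcome any better than just predicting the modal category? Even this trivial diagnostic is more than the “type the command and report the stars” approach delivers.

```python
import math
import random

def fit_logit(xs, ys, lr=0.1, steps=5000):
    """Fit a one-predictor logistic regression by gradient ascent
    on the log-likelihood. Returns (intercept, slope)."""
    b0, b1 = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += (y - p)          # gradient w.r.t. intercept
            g1 += (y - p) * x      # gradient w.r.t. slope
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    return b0, b1

# Hypothetical data: the outcome is genuinely more likely at higher x.
random.seed(42)
xs = [random.uniform(-2, 2) for _ in range(200)]
ys = [1 if random.random() < 1.0 / (1.0 + math.exp(-(0.5 + 1.5 * x))) else 0
      for x in xs]

b0, b1 = fit_logit(xs, ys)

# A bare-minimum post-estimation diagnostic: compare in-sample
# classification accuracy against the "always predict the modal
# outcome" baseline.
preds = [1 if 1.0 / (1.0 + math.exp(-(b0 + b1 * x))) > 0.5 else 0 for x in xs]
accuracy = sum(p == y for p, y in zip(preds, ys)) / len(ys)
baseline = max(sum(ys), len(ys) - sum(ys)) / len(ys)
```

Real post-estimation work (specification tests, goodness-of-fit statistics, influence diagnostics) goes well beyond this, but the point stands: if you can’t even beat the modal category, your stars are decoration.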
David then trots out the flawed “APSR is full of quant shit” study, which conflates empirical quantitative research with positive political theory (game theory and other “rat choice” pursuits), which, as I’ve pointed out here before, are completely different beasts. Of course, the study relies on statistics (apparently, they’re only valid when making inferences about our own discipline), but let’s put that aside for the moment. The result of all this posturing is our new journal, Perspectives on Politics. Just in case our discipline wasn’t generating enough landfill material…
He then turns back to the civil war dataset his colleague is assembling:
Take, for example, the flaws in the civil war data set mentioned above. I’m hardly a Latin America specialist, but even some knowledge of the region’s history made it apparent that the data set was flawed. If political scientists had greater expertise in a given region, they would appreciate just how often in-depth study is necessary to get even the basic facts right. Thus, when putting together a global data set, no political scientist would even consider coding the data before consulting colleagues who are experts in the relevant regional subfields.
Undoubtedly, this particular political scientist should have consulted with colleagues. What David seems to fail to understand is that she did: that is why your colleague presented this research to you and your fellow graduate students—to get feedback! Everything political scientists do, outside of job talks and their actual publications, is an effort to get feedback on what they’re doing, so as to improve it. This isn’t undergraduate political science, where you are expected to sit still and soak in the brilliance of your betters while trying not to drool or snore. You’re now a grad student, expected to contribute to the body of knowledge that we’ve been assembling—that’s the entire point of the exercise, even if it gets lost in the shuffle of “publish or perish” and the conference circuit.
And one way to do that is to say, “Yo, I think you have some coding errors here!” If this political scientist is worth her salt, instead of treating you like a snot-nosed twit, she’ll say, “Gee, thanks for pointing out that the Colombian civil war had N participants instead of M” or “Cuba’s civil war was a Soviet-supported insurgency, not an indigenous movement? Thanks!” (Again, these are hypotheticals; I’m not an expert on Latin American history.)
As for the lag time in Pape’s piece, well, that’s the peril of how the publication process works. If it’s anything like any other academic paper, it’s been through various iterations over several years; you don’t simply wake up one morning, write a journal article, and send it off to Bill Jacoby or Jennifer Hochschild. At least, not if you don’t want them to say nasty things about you to your colleagues. Anyway, you can fault the publication process to a point, but I think it’s a safe bet that Pape’s thesis predates 9/11, and that people were aware of it before his APSR piece hit the presses.