Monday, 25 June 2007

Methods meeting paper, day one

So far, I’ve attacked my data to extract my most important variable: first-dimension ideology scores for the 110th House, generated with the CJR model from roll calls through this morning. pscl makes this incredibly easy, although it would be nice if the readKH function supported the CSV files from Jeff Lewis’ site in addition to the traditional Poole-Rosenthal data dictionaries (if I get bored later in the week, I may hack something together).
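If I do hack that together, the core of it is just reshaping a long-format roll-call file into a legislator-by-vote matrix. A toy Python sketch of the idea (the column names here are my invention, not Lewis’ actual layout; the 1=yea, 6=nay codes follow the Poole-Rosenthal convention):

```python
# Toy sketch of the reshaping a CSV-aware readKH would do: turn a
# long-format roll-call file into a legislator-by-roll-call matrix.
# Column names are hypothetical, not the actual file layout.
import csv
import io

SAMPLE = """legislator,rollcall,vote
Smith,1,1
Smith,2,6
Jones,1,6
Jones,2,1
"""

def rollcall_matrix(text):
    """Build {legislator: {rollcall: code}} with 1=yea, 6=nay (KH-style codes)."""
    matrix = {}
    for row in csv.DictReader(io.StringIO(text)):
        matrix.setdefault(row["legislator"], {})[int(row["rollcall"])] = int(row["vote"])
    return matrix

m = rollcall_matrix(SAMPLE)
print(m["Smith"][1], m["Jones"][2])  # 1 1
```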

The drudge work to come: marrying these data with some member and constituency demographics so I can slap together some models (some based on real data, some fake based on a known data generating process) with the ideology scores as independent and dependent variables to see if incorporating error really helps anything or not. If not, this may turn out to be the dullest paper in methods meeting history.

Monday, 18 June 2007

Much Ado About Mosquito Bites

A few of my fellow EITMers and I went to see the last performance of Much Ado About Nothing in Forest Park last night. I’d never seen Shakespeare set in the Wild West before, but it worked somehow. However, my reward for seeing Shakespeare was a mosquito bite on my scalp which has been bothering me all day.

I’d planned on taking some antihistamine, but if tomorrow is anything like today I’d be out like a light. It probably didn’t help that the afternoon included a 90-minute lecture that could basically be summed up in one sentence: “econometric models that are misspecified and have omitted variables are incorrect in really bad ways, so don’t do that.” Perhaps it was a useful refresher for those who have forgotten the Gauss-Markov assumptions, but it did less for me in my insufficiently-caffeinated state.

Wednesday, 9 May 2007

Degrading grading

I think I’ve become lenient on grading in my young age. Maybe it’s just the non-tenure-track faculty member’s equivalent of senioritis (perhaps visitoritis?), but I’m pretty sure I’m a softer touch in the spring than in the fall. I’m just waiting on a few stragglers and my Congress class’ final exams before I can officially put a nice bow on this semester, except for the bits where I dress in fancy regalia.

Tomorrow’s project: figure out what to submit for my useR! proposal. It’s scheduled at a positively icky time for me, as I expect to be moving right around August 1st, but if I can squeeze it in it’d be both a good experience and nice CV fodder. Ideally I’d figure out a way to repurpose my methods meeting proposal, but I’m not sure it’ll work for useR! (boy that punctuation is annoying) very well, so plan B is to get my R package with epcp and friends into working order and write a paper on that.

I also owe a 900-word encyclopedia entry to Ken Warren by next Tuesday.

Monday, 23 April 2007

Teaching moments

Free Exchange notes New York Times reporter Erik Eckholm playing fast and loose with infant mortality statistics to extrapolate ominous trends from what appear to be random year-to-year fluctuations in infant death rates (and a downward-sloping overall trend, to boot).

One might also wonder what effect—if any—Hurricane Katrina had on the 2005 mortality rate. That is, if one weren’t Eckholm, who doesn’t even mention the possibility of a relationship with the largest natural disaster in Mississippi history.

Wednesday, 21 March 2007

Stats stuff

A couple of items from the Harvard Social Science Statistics Blog are worth mentioning.

First, Sebastian Bauhoff plugs a number of summer quantitative methods programs. My overall review of ICPSR would be more positive than his, but as he mentions much depends on the courses you choose: Charles Franklin’s MLE class is generally a subject of rave reviews, and I can personally vouch for Bill Jacoby’s class in scaling and Doug Baer’s class in latent variable structural equation modeling (LISREL models). I’ve also heard that the advanced MLE course has vastly improved since I took it in 2001 (when it batted around .500 while rotating four instructors). Other advanced classes that seem to get good reviews include Jeff Gill’s Bayesian class and the simultaneous equations class. Historically I know time series and categorical data analysis were somewhat hit-and-miss; the latter was regarded as excellent when taught by Jeremy Freese, but I’m told it has gone downhill since.

Second, James Greiner expresses concern that people may start applying statistical models willy-nilly to explain lower-court decision-making, on the grounds that decisions are not iid but instead controlled largely by precedent. Certainly sticking circuit court opinions in as the dependent variable in a logit would be stupid without paying some serious attention to the error structure. But that hardly forecloses interesting analysis.

Also, my vague applied understanding of the ideal-point model is that items (decisions) are not actually assumed to be iid in the first place: at least one latent variable explains them, so by definition they are not truly independent of each other. So I don’t think using an item-response theory model would be problematic, although along with the ideology dimension(s) you recover from the Supreme Court you would certainly end up recovering a “respect for stare decisis” dimension, which might actually contribute to some interesting substantive debates.
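For what it’s worth, the independence point is easy to see in a tiny simulation (mine, in Python rather than R, with made-up parameters): in a one-dimensional item-response model, responses are independent only *conditional* on the latent trait, and sharing that trait makes the items correlated marginally.

```python
# Toy two-parameter logistic IRT simulation: items are conditionally
# independent given theta, but marginally correlated because they share it.
import math
import random

random.seed(42)

def p_yes(theta, difficulty, discrimination=1.0):
    """Two-parameter logistic item response probability."""
    return 1.0 / (1.0 + math.exp(-discrimination * (theta - difficulty)))

thetas = [random.gauss(0, 2) for _ in range(5000)]
item1 = [1 if random.random() < p_yes(t, -0.5) else 0 for t in thetas]
item2 = [1 if random.random() < p_yes(t, 0.5) else 0 for t in thetas]

def corr(x, y):
    """Pearson correlation, stdlib only."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in x) / n)
    sy = math.sqrt(sum((b - my) ** 2 for b in y) / n)
    return cov / (sx * sy)

print(corr(item1, item2) > 0)  # True: the shared latent trait induces correlation
```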

Tuesday, 12 December 2006

Cutting through the BS of multilevel models

Jeff Gill looks at the plethora of terminology surrounding multilevel models:

There is a plethora of names for multilevel models. Sociologists seem to prefer “hierarchical,” many statisticians say “mixed effects,” and there is heterogeneity about usage in economics. It seems reasonable to standardize, but this is unlikely to happen. ...

Some prefer “random intercepts” for “fixed effects” and perhaps we can consider these all to be members of a larger family where indices are turned-on turned-off systematically. On the other hand maybe it’s just terminology and not worth worrying about too much. Thoughts?

Silly me thought the plethora of terminology was a deliberate obfuscation effort by methodologists to make them look like they know more stuff than they actually do. For example, smarty-pants methodologists could say in casual conversation, “I know hierarchical models and mixed effects!” And unless you knew that they were the same thing, the smarty-pants methodologist would look like s/he was two things smarter than the non-smarty-pants methodologist who didn’t know either.

I may try this myself in interviews… “I know logistic regression and logit!” “I know dummy variables and fixed effects!” I feel smarter already…

Monday, 4 December 2006

Power outages are good for my productivity

The power outage at least had one silver lining for me: it forced me to spend some time in my office with minimal distractions, which allowed me to wrap up most of the textual revisions of the strategic voting paper.

I’m also continuing to fiddle with the data analysis. I’m still not happy about the 2000 results, and I’m not sure there’s anything to be done about that (beyond getting a time machine, increasing the NES sample size, and figuring out some way to get more people to fess up to voting for Nader), but the 1996 results turn out to be stronger with the IRT measure of sophistication than they were with the interviewer evaluation. Plus I got the multiple imputation stuff to work.

So hopefully during the black hole between now and student paper grading time I can get this thing polished and ready for submission to a decent journal… and have time to spare to hack together about 8 bits of my dissertation and my job talk into a SPSA paper.

In other “I actually get work done, believe it or not” business in recent days, I took care of a paper review for a journal… I wish I could say it was punctual, but in fairness the first time they sent me the paper for review it got bounced from my SLU account because I was either over my mail quota or the mail system was mid-meltdown. I also wrote two recommendation letters.

Sunday, 5 November 2006

R moves in mysterious ways

Oddly enough, the graphics package code that I was using to add error bars to my dotcharts has mysteriously stopped working since I upgraded to R 2.4.0. I can still make the dotcharts using dotchart, but the error bars don’t show up after adding them using segments. This clearly worked last month; otherwise I wouldn’t have had a presentation to show at Mizzou.

Luckily enough I found another solution using dotplot in lattice instead in an article by Bill Jacoby in the most recent edition of The Political Methodologist… which I probably should have read before hacking together the code the first time around. So now it works… at least until R 2.5.0 comes out, at which point all bets are off.
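Whichever route the plotting takes, the arithmetic behind the bars stays the same: each one is just the estimate plus or minus a critical value times the standard error. A stdlib-only Python sketch (the data are made up) of the endpoints I’d hand to segments or dotplot:

```python
# Compute the endpoints of a ~95% confidence-interval "error bar" for a
# group mean; these are the segment coordinates a plotting call would draw.
import math

def ci_segment(values, z=1.96):
    """Return (mean, lower, upper) for an approximate 95% CI on a mean."""
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
    se = sd / math.sqrt(n)
    return mean, mean - z * se, mean + z * se

# Hypothetical group of thermometer-style scores.
mean, lo, hi = ci_segment([52, 55, 49, 61, 58, 50, 57, 54])
print(round(mean, 2), round(lo, 2), round(hi, 2))  # 54.5 51.65 57.35
```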

Tuesday, 17 October 2006

Lancing the Lancet

Like Dan Drezner, I’m a little late to the discussion of the latest study of postwar casualties in Iraq that was recently published in the British medical journal The Lancet, following up an earlier study published in October 2004.

Setting aside the “October surprise” approach that this journal appears to be taking to these studies, some methodological questions are being raised about the authors’ approach; see Andrew Gelman and David Kane, the latter of whom is skeptical of the reported nonresponse rates—which do seem abnormally high, although Iraqis may be much more interested in responding to surveys than the typical citizen in developed (or even developing) countries, perhaps due to novelty effects. As David Adesnik notes, the folks at Iraq Body Count (an anti-war outfit) believe the numbers are seriously inflated as well, although this could just be a turf war among researchers rather than a legitimate grievance.

I think the thing that jumps to mind for me in this discussion is “garbage in, garbage out”: your statistical inferences about a population are only as good as your ability to get a true random sample and minimize response bias. This is Stats 101. These issues are hard enough to address in developed countries, let alone in countries undergoing civil upheaval, and solving them is not easy (look at the work of Leslie Kish if you don’t believe me). Does that mean that the numbers are wrong? No, not necessarily. But my spidey sense tingles nonetheless.
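A toy illustration of the point (my own made-up numbers, nothing to do with the actual survey): if response propensity is correlated with the outcome itself, the naive sample mean is biased no matter how many interviews you conduct.

```python
# Toy nonresponse-bias simulation: units with higher outcome values are
# more likely to respond, so the respondent mean overstates the truth.
import random

random.seed(7)

population = [random.choice([0, 0, 0, 0, 1, 2]) for _ in range(100000)]
true_mean = sum(population) / len(population)

# Response propensity rises with the outcome (0.5, 0.7, 0.9 respectively).
respondents = [y for y in population if random.random() < 0.5 + 0.2 * y]
naive_mean = sum(respondents) / len(respondents)

print(naive_mean > true_mean)  # True: the self-selected sample is inflated
```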

Wednesday, 11 October 2006

Mizzou presentation

My presentation on measuring political sophistication with item-response theory models is here; it’s something of a work in progress, as I haven’t put together the pretty graphs for the American NES data yet.

Monday, 25 September 2006

Life as a Method(olog)ist

Jeff Gill perceives some salutary changes in the labor market for political methodologists:

Last Fall I counted 51 faculty methods jobs posted in political science. I paid close attention because I was on a relevant search committee. This was particularly interesting because equilibrium in past years was about five or so. Right now there are 39 methods jobs posted (subtracting non-tenure/tenure track positions). Now some of these are listed as multiple fields, but one has to presume that listing the ad on the methods page is a signal.

Apparently we have US News and World Report to thank for fundamentally changing the labor market by making methodology the fifth “official” field of the discipline. A number of (non-methodologist) colleagues believed that I must be exaggerating since an order of magnitude difference seems ridiculous. Actually, it turns out that I was underestimating as Jan Box-Steffensmeier (president of the Society for Political Methodology and the APSA methods section) recently got a count of 61 from the APSA. I think their definition was a little broader than mine (perhaps including formal theory and research methods jobs at undergraduate-only institutions).

So an interesting question is how quickly does supply catch up to demand here? My theory is that it will occur rather slowly since the lead time for methods training seems to be longer than the lead time for other subfields. This is obviously good news for graduate students going on the market soon in this area. I’m curious about other opinions, but I think that this is a real change for the subfield.

I concur in part and dissent in part.

I am less convinced that we can attribute this change to US News (although I’m not one of those academic US News haters) than simply to the broader market: people with superior methods training are more likely to get jobs than those who don’t have it, which means that methods training is more important at the graduate level—and increasingly the undergraduate level too. The booming enrollments at the ICPSR Summer Program, including from top-ranked schools that traditionally considered their own methods training sufficient for graduate students, are indicative of this trend as well.

As far as the supply-demand equilibrium works, I think there is a perception out there (perhaps unfair) of the existence of a methods clique—one, that if it exists, I am decidedly not a part of. Thus far, in-clique supply seems to have been sufficient to satisfy demand; we—and perhaps during this hiring season I—shall see whether this continues to be the case. My perception is that high demand is somewhat illusory; several unfilled methods jobs in the past two years have not reappeared, suggesting that filling these jobs is less of a priority than one might think.

The broader issue is a question of definition: what is a “methodologist”? As someone who generally doesn’t live to maximize my own likelihood functions, I’d self-identify as an applied methodologist at best—and I certainly don’t consider methodology my primary field of inquiry; tools are great, but I gravitate toward more substantive questions.

As for why Gill thinks “research methods jobs at undergraduate-only institutions” shouldn’t count, I really wouldn’t hazard a comment. But I do think that if he wants to increase the supply of methodologists, getting more undergraduates (particularly at BA-granting institutions like liberal arts colleges) in the pipeline early so they can do advanced work out of the gate at the graduate level would seem to be a key part of the strategy.

Thursday, 31 August 2006

If you don't submit an SAT score, one will be imputed for you

This New York Times article (þ: Margaret Soltan) probably makes more of the vague trend towards deemphasizing the SAT in college admissions than is justified. Then again, I’m one of those weird social scientists who thinks that psychometric tests are reliable and valid measures of student abilities, albeit—like all measures—subject to error. The real issue with the SAT is not its psychometric foundations or learning effects from “test prep,” but rather its wide error bounds, which make it too advantageous for students to repeat the exam. The scale of the numbers probably psychologically amplifies this effect; put the score on a range from 1.0 to 4.0, and I suspect you’d see retake rates plummet with absolutely no other changes to the exam.

Even though most admissions committees probably don’t do this in a very sophisticated way (at least, not yet, although one suspects that some of the SAT-optional trend can be attributed more to admissions-committee innumeracy or hostility towards numeric measures than to any real problem with the SAT), the lack of SAT scores can be worked around with some fancy stats: you can simply impute the missing data from the information you do have (mean SAT scores, likely available at the school or school-district level; GPA; some measure of school quality; grades in math and English classes). You do, however, need an adjustment to account for an important selection effect: the SAT score, which is presumably known to the student, is more likely to have been reported if it is above the mean imputation (my gut suspicion is that the probability of reporting follows a complementary log-log curve around the mean SAT score).
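A quick simulation of that selection effect (my own toy numbers, with a hard cutoff standing in for the smoother complementary log-log curve I actually suspect): students report their score mainly when it beats what imputation would assign them, so the scores you don’t see sit systematically below a naive imputation.

```python
# Toy selection-effect simulation: non-reported scores fall below the
# naive (mean) imputation, so mean imputation flatters non-reporters.
import random

random.seed(1)

# Hypothetical true scores on a roughly 200-800 scale; covariates like GPA
# are omitted for simplicity, so the naive imputation is the overall mean.
scores = [random.gauss(500, 100) for _ in range(20000)]
naive_imputation = sum(scores) / len(scores)

# Report only when the known score exceeds what imputation would assign.
missing = [s for s in scores if s <= naive_imputation]
mean_missing = sum(missing) / len(missing)

print(mean_missing < naive_imputation)  # True: the unseen scores are lower
```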

Monday, 21 August 2006

Not a conspiracy theory

Every time I feel like I’m making progress in turning “the damn strategic voting chapter” into a final paper worthy of submission, I stumble across a new bug in Zelig. I’d theorize that Gary King doesn’t want me to publish anything, but I’m afraid I’m far too insignificant a microbe in the whole political science universe to be squashed so deliberately.

If I were better organized, I’d spend the time I’m waiting for the bugs to be fixed writing up the changes I’ve made already—most notably, tossing the interviewer measure of sophistication in favor of an item-response theory model. That would probably cover the real reason I don’t seem to be able to publish anything—well, besides my lack of a research budget, RAs, and course releases for research, and a computer on my desk at work that probably was the cheapest thing Dell marketed to its education customers three years ago.

Thursday, 8 June 2006


Well, after doing all the recoding I needed to produce binary “correct/incorrect” scores for all the respondents, I ran the IRT model on the 1992 NES, and my computer at work (not exactly shabby – a 1.15 GHz AMD Athlon XP with 1 GB of RAM) ran out of memory when it tried to save the respondent abilities after about 30 minutes of pegging the CPU and eating up my memory and swap. I guess I had more respondents this time than when I did the Dutch model for my dissertation.

The moral of this story: rerun the model with a bit more thinning on my faster AMD64 box at home.
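“More thinning” just means keeping every k-th posterior draw, trading a little Monte Carlo efficiency for a k-fold smaller object to store. A trivial Python sketch of the idea (the chain here is a stand-in list, not real MCMC output):

```python
# Thinning an MCMC chain: keep every k-th draw to shrink the stored output.
def thin(chain, k):
    """Keep every k-th draw from a chain (here, a plain list of samples)."""
    return chain[::k]

chain = list(range(10000))  # stand-in for 10,000 posterior draws
thinned = thin(chain, 10)
print(len(thinned))  # 1000
```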

Update: It works much faster (and without killing my computer) when the data matrix is actually set up correctly. Go figure.

Tuesday, 6 June 2006

Thought of the day

Poring through the 1992, 1996, and 2000 NES codebooks looking for any variable that might possibly be perverted into a measure of political sophistication is not exactly fun. On the other hand, now that I’m done doing my penance, I get to go play with the IRT models in MCMCpack for a while, which is.

Saturday, 6 May 2006


I wrapped up my semester this afternoon with a marathon grading session of methods finals—most turned out to be quite good, although several students got tripped up by the last question on the exam, which called on them to fix a hypothetical (and horrifically bad) regression model of the sort typically generated by a naïve student who just randomly picks variables out of the raw 2000 NES data set and dumps them into a linear regression model.

Wednesday, 1 March 2006

Fun with stats, Supreme Court edition

Stephen Jessee and Alexander Tahk, two Ph.D. candidates at Stanford, have put together a website that attempts to estimate the ideological positions of Samuel Alito and John Roberts from their votes on the Supreme Court this term.

Perhaps the most interesting result thus far is that Roberts’ estimated ideal point (position in the unidimensional ideological space) is virtually indistinguishable from that of his predecessor as Chief Justice, William Rehnquist, although that is of course subject to change as more cases come along. (The Alito estimates seem to solely reflect the uninformative prior that Jessee and Tahk have placed on him thus far.)

Wednesday, 8 February 2006

I am officially a geek

While doing some work on a small project for the Director of Undergraduate Studies of our department, I stumbled across this course in the undergraduate bulletin and my first thought was “this would be a really cool class to teach.”

Friday, 3 February 2006

My name in PDF (if not in print)

The piece that Dirk and I wrote for The Political Methodologist on Quantian is now out in the Fall 2005 issue, along with a mostly-glowing review of Stata 9 by Neal Beck that no doubt will annoy the R purists, as he suggests he will be ditching R in favor of Stata in his graduate methods courses; a review of a new book on event-history analysis by Kwang Teo, whose apartment floor I once slept on in Nashville; and an interesting piece on doing 3-D graphics in R.

In other methods news, I had the privilege (along with a packed house) of hearing Andrew Gelman of Columbia speak this afternoon on his joint research on the relationship between vote choice and income in the states, which uses some fancy multi-level modeling stuff that I have yet to play much with.

Incidentally, it was fun to see someone else who uses latex-beamer for their presentations; I could tell the typeface was the standard TeX sf (sans-serif) face, but I wasn’t sure which beamer theme Andrew was using off-hand.

Monday, 30 January 2006

R wastes my time

I have just wasted about two hours of my life trying to figure out how to make R draw a line graph (all I want to do is plot the conditional mean of a variable on the Y axis for certain categories of another variable) to stick in my undergraduate methods lecture for tomorrow—a graph I could have constructed trivially in Stata, Excel, or SPSS in about 15 seconds. This is patently ridiculous.

I am not an idiot; this should not be so hard to figure out. I like R, but it is actively user-hostile (even with Rcmdr and other packages loaded), and until it ceases to be such I will not foist it on my students.

Tuesday, 13 December 2005


I’m unsure whether to chalk it up to extreme diligence or just paranoia on their parts, but my students here seem to be atypically obsessed with their final papers and the (open book, open notes) final in my research methods class. I had at least 20 (of 33) students in a review session Monday night, I met with about a half-dozen today, and I expect to meet with at least another half-dozen tomorrow. It’s not a bad thing, just not what I really expected.

Friday, 28 October 2005

Fun with data mining

I’ve been doing some SPSS labs with my methods class this semester, and I stumbled upon a mildly interesting finding: in the 2000 National Election Study, the mean feeling thermometer rating of gays and lesbians is higher among respondents with cable or satellite TV than among those without. It’s marginally significant (p = .057 or so in a two-tailed independent-samples t test). I’m not sure whether the cable/satellite variable is standing in for a “boonies versus suburbs/urban areas” effect or something else.

It’s also fun because the test is significant at the .05 level if you do a one-tailed test (though, since I have no a priori theory as to why cable/satellite households would like gay people more than non-cable households, I’m not sure a one-tailed test is legitimate), but not significant at .05 if you do a two-tailed test, so it’s useful in illustrating that marginal case.
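The arithmetic of that marginal case is simple enough to sketch. A stdlib-only Python version using a normal approximation to the t distribution (fine at NES-sized samples); the means and standard error below are made up to land near p ≈ .057, not the actual NES numbers:

```python
# One- vs. two-tailed p-values for a difference in means: when the observed
# difference is in the hypothesized direction, the one-tailed p is exactly
# half the two-tailed p.
import math

def z_pvalues(mean1, mean2, se_diff):
    """Two-sided and one-sided p-values via a normal approximation."""
    z = (mean1 - mean2) / se_diff
    p_two = math.erfc(abs(z) / math.sqrt(2))  # 2 * P(Z > |z|)
    p_one = p_two / 2                          # difference in hypothesized direction
    return p_two, p_one

p_two, p_one = z_pvalues(61.0, 56.0, 2.63)  # hypothetical thermometer means
print(p_two > 0.05, p_one < 0.05)  # True True: the marginal case in action
```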

Thursday, 27 October 2005

Social desirability in action

Colby Cosh points out a poll showing that nearly 40% of Canadians would never vote for a candidate for public office with a history of alcoholism. Is it the prudes or the pollsters? Colby suspects the latter, and I am inclined to agree.

Friday, 23 September 2005

My name in print

My first real publication (broadly defined) in political science is now officially “forthcoming”; while it’s only a short piece in The Political Methodologist, the biannual newsletter of the Society for Political Methodology, I figure you have to start somewhere. It’s a brief overview of Quantian, a “Live Linux” DVD that’s geared toward use by social, behavioral, and natural scientists.

My co-author and Quantian’s developer, Dirk Eddelbuettel, has the current version of the piece up at his website, for the morbidly curious. The article probably will appear in the Fall 2005 issue, whenever that emerges.

Thursday, 8 September 2005

What I should have had my methods class do

A columnist for the Cornell Daily Sun rips on ESPN and brings some statistics to the table:

I recorded a normal hour-long SportsCenter and watched it, stopwatch and notepad in hand. I took record of how many of the 60 minutes were spent actually showing highlights. I defined highlights as any game footage, any top plays, any actual sports — no talking, no analyzing, just the visuals. This excludes time well spent on post-game interviews and relevant statistics, and the necessary evil that is the commercial — so I accept that the entire hour will not be used for highlights and highlights alone.

The results weren’t pretty…

þ: The Road From Bristol, who are now conducting an NIT of non-ESPN personalities that seems to comprise mostly baseball people I’ve never heard of.