As it’s Monday, it’s time for the weekly howls of outrage to erupt at the latest BCS standings. Unfortunately for fans of college football, however, the outrage is largely manufactured and misplaced. Why?
- Controversy sells. Getting people to watch the 6 p.m. EST SportsCenter is like pulling teeth, which is why ESPN has pulled Dan Patrick back into full-time duty in Bristol, and why the BCS standings are a prominent part of the Monday show, to the point that they receive nearly 48 hours of pre-hype beginning with “College GameDay Final.”
- The controversy is manufactured by the entities that have the most to lose from an independent evaluation of college football: the media. 65 of America’s leading college football writers and broadcasters have a vested interest in their ratings being the sole indicators of quality in college football. The regional and other biases of both the writers and the coaches are notorious. Nothing like some diversionary controversy to deflect attention away from the gorilla sitting in the corner.
There are legitimate reasons to critique the BCS standings. The fundamental problem is that they’re an ad hoc amalgamation of polls, an arbitrary selection of computer rankings, and fudge factors, necessitated by the false legitimacy that the Associated Press and ESPN/USA Today polls have among college football fans. From an econometric standpoint, there are serious problems with the BCS.
The most basic of these problems is that truly ordinal data is treated as metric in the formula. Your age, height, and weight are metric data: differences in age have real meaning. If I’m 27 and my cousin is 3, the difference in our ages, 24 years, is a meaningful quantity. By contrast, poll rankings aren’t metric. LSU is #3 in the AP poll and Ole Miss is #15; 15 - 3 = 12. Twelve doesn’t tell us much of anything about the gap between LSU and Ole Miss; it just tells us that there’s a difference. Missouri is #27; 27 - 15 = 12. Treating these differences as metric makes an invalid assertion: that the difference in quality between LSU and Ole Miss is the same as the difference between Ole Miss and Missouri.
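To make the point concrete, here’s a minimal Python sketch with entirely made-up “true quality” ratings; the numbers are assumptions for illustration, not real measurements of these teams:

```python
# Hypothetical "true quality" ratings; the numbers are made up for
# illustration and are consistent with the ranks below.
quality = {"LSU": 94.0, "Ole Miss": 80.5, "Missouri": 79.8}
rank = {"LSU": 3, "Ole Miss": 15, "Missouri": 27}

# The rank differences are identical...
print(rank["Ole Miss"] - rank["LSU"])        # 12
print(rank["Missouri"] - rank["Ole Miss"])   # 12

# ...but the (hypothetical) quality differences are wildly unequal.
print(round(quality["LSU"] - quality["Ole Miss"], 1))       # 13.5
print(round(quality["Ole Miss"] - quality["Missouri"], 1))  # 0.7
```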
This problem repeats itself throughout the BCS formula. Means of the rankings in the polls and the computer rankings are taken, and these means are added together. The strength-of-schedule component, itself a key ingredient of many computer rankings, starts as metric data, is converted to a ranking and arbitrarily scaled, and is then added to the means. Losses, which are metric, are then subtracted. Finally, an ad hoc adjustment is made for so-called “quality wins,” an adjustment that one would hope is incorporated in the polls and computer rankings anyway. The rankings are then reported with these bizarre totals attached, apparently because totals look cool (I guess they got the idea from the AP and ESPN polls, which report the sorta-kinda metric Borda count in addition to the rankings).
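To see the shape of this computation, here’s a minimal Python sketch; every weight, input, and name below is an illustrative assumption, not the official formula, and lower totals are better:

```python
from statistics import mean

def bcs_total(poll_ranks, computer_ranks, sos_rank, losses, quality_win_bonus):
    """Illustrative sketch of the formula's shape, not the official spec.

    The point is the type confusion: ordinal ranks are averaged as if
    metric, a metric strength-of-schedule figure is re-ranked and
    rescaled, and metric losses plus an ad hoc "quality win" deduction
    are mixed into the same total.
    """
    poll_component = mean(poll_ranks)          # mean of ordinal poll ranks
    computer_component = mean(computer_ranks)  # mean of ordinal computer ranks
    schedule_component = sos_rank / 25.0       # a rank, arbitrarily rescaled
    return (poll_component + computer_component + schedule_component
            + losses - quality_win_bonus)

# Made-up inputs for one team:
print(bcs_total(poll_ranks=[3, 4],
                computer_ranks=[2, 3, 5, 4, 3, 6, 2],
                sos_rank=11, losses=1, quality_win_bonus=0.4))
```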
Nonetheless, the fundamental idea of the BCS rankings is sound, even if there are too many compromises and too many ad hoc adjustments. So what would I do?
- Include more computer rankings.
- Use averaging methods appropriate for ordinal data. Or at least, recognize that taking the mean of a bunch of ordinal data doesn’t make it metric, so convert the result back into a proper ordinal ranking (see the sketch after this list).
- Eliminate the silly restriction that computer rankings cannot incorporate margin-of-victory as a factor in their formulas. (I’ll explain why this restriction is silly in another post.)
- Eliminate the ad hoc adjustments.
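On the second bullet, here’s a minimal sketch of what I have in mind, using made-up ranks: the median is a defensible summary of ordinal data, and if you insist on taking means, re-ranking the means restores a properly ordinal result:

```python
from statistics import mean, median

# Made-up ranks for four teams across five hypothetical rankings.
ranks = {
    "USC":      [1, 3, 2, 2, 1],
    "LSU":      [3, 2, 4, 3, 5],
    "Ole Miss": [15, 12, 18, 14, 16],
    "Missouri": [27, 22, 25, 30, 24],
}

# The median is a defensible summary of ordinal data.
medians = {team: median(r) for team, r in ranks.items()}

# If you take means anyway, the totals aren't metric; re-ranking the
# means makes the published result properly ordinal again.
means = {team: mean(r) for team, r in ranks.items()}
final_rank = {team: i + 1
              for i, team in enumerate(sorted(means, key=means.get))}

print(medians)     # {'USC': 2, 'LSU': 3, 'Ole Miss': 15, 'Missouri': 25}
print(final_rank)  # {'USC': 1, 'LSU': 2, 'Ole Miss': 3, 'Missouri': 4}
```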
Next time (which I had intended to be this time; sigh), I’ll talk about “computer rankings” in more detail. It turns out that they can be thought of as an application of the oft-maligned statistical technique known as factor analysis.