Wednesday, 14 April 2004

Misery loves company

Dan Drezner takes a look at John Kerry’s “new and improved!” misery index:

Every index can be challenged on the quality of the data that goes into it, and the weights that are assigned to the various components that make up the overall figure. A lack of transparency about methodology is also a valid criticism. For example, in my previous post on the competitiveness of different regions in the global information economy, the company responsible for the rankings provides little (free) information on how the index was computed. That’s a fair critique.

Even when the methodology is transparent, there can still be problems.
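To see how much the weights matter, here’s a toy version of such an index in Python. The components and figures below are invented for illustration; they’re not the actual numbers or weights behind Kerry’s index:

```python
# Toy weighted index. All component values and weights are made up;
# the point is only that what the index "says" depends heavily on
# choices the index-builder makes.

components = {"unemployment": 5.6, "inflation": 1.7, "gas_prices": 1.8}

def index_value(weights):
    """Weighted sum of the components."""
    return sum(weights[k] * components[k] for k in components)

equal_weights = {k: 1.0 for k in components}
tilted_weights = {"unemployment": 2.0, "inflation": 0.5, "gas_prices": 1.0}

print(index_value(equal_weights))   # 9.1
print(index_value(tilted_weights))  # 13.85 -- same data, different story
```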

This is a subject near and dear to my heart. In quantitative social science, your econometric model is only as useful as your indicators; a crappy indicator renders the whole model essentially useless.
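If you want to see the damage a crappy indicator does, here’s a minimal simulation (assuming classical measurement error on the indicator; the numbers are made up for illustration):

```python
import numpy as np

# Minimal sketch of attenuation bias: classical measurement error in an
# indicator drags the estimated effect toward zero.
rng = np.random.default_rng(0)
n = 10_000
true_x = rng.normal(size=n)              # the concept we care about
y = 2.0 * true_x + rng.normal(size=n)    # true effect of x on y is 2.0

noisy_x = true_x + rng.normal(scale=2.0, size=n)  # a crappy indicator

# Bivariate OLS slope = cov(x, y) / var(x)
slope_true = np.cov(true_x, y, ddof=0)[0, 1] / true_x.var()
slope_noisy = np.cov(noisy_x, y, ddof=0)[0, 1] / noisy_x.var()
print(f"slope with the true concept:    {slope_true:.2f}")   # ~2.0
print(f"slope with the noisy indicator: {slope_noisy:.2f}")  # ~0.4
```

The model isn’t just imprecise with the bad indicator; it systematically understates the effect, and nothing in the output warns you that it’s happening.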

Unfortunately, our ways of dealing with the problem of how well an indicator reflects a concept leave a lot to be desired; “face validity” (which boils down to “I think the indicator reflects the concept, so we’ll assume a priori that it does”) is relied on, even by good scholars, to an extent that will make you blanch. Even seemingly obvious indicators, like responses to survey questions, are often woefully inadequate for measuring the “true” underlying concepts (in the case of public opinion research, attitudes and predispositions).

Building an index helps with some of these problems (if the measurement errors in the individual items are random, averaging across them tends to cancel the errors out) but introduces others, like ascribing valid weights to the items, as Dan points out. A few cool tools, like factor analysis and its cousin principal components analysis, are designed to help in finding weights, but even they have problems and limitations, most of which boil down to the fact that human judgment is still involved in the process.
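Here’s roughly what the principal-components approach looks like, sketched on simulated data (the items and their noise levels are invented, and I’m using plain numpy rather than any particular stats package):

```python
import numpy as np

# Sketch: take the first principal component of the standardized items
# and use its loadings as index weights. Data are simulated: three items
# that all track one latent factor, with increasing amounts of noise.
rng = np.random.default_rng(1)
latent = rng.normal(size=500)
items = np.column_stack(
    [latent + rng.normal(scale=s, size=500) for s in (0.5, 1.0, 2.0)]
)

z = (items - items.mean(axis=0)) / items.std(axis=0)   # standardize
eigvals, eigvecs = np.linalg.eigh(np.corrcoef(z, rowvar=False))
weights = eigvecs[:, -1]        # loadings on the largest component
weights /= weights.sum()        # rescale to sum to one -- a judgment call

print(weights)       # the noisier the item, the smaller its weight
index = z @ weights  # the resulting composite index
```

Note the judgment calls even in this “automatic” procedure: whether to standardize the items, how many components to keep, and how to rescale the loadings are all decisions the analyst makes, not the math.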