This New York Times article (þ: Margaret Soltan) makes more of the vague trend toward deemphasizing the SAT in college admissions than is probably justified. Then again, I’m one of those weird social scientists who thinks that psychometric tests are reliable and valid measures of student ability, albeit, like all measures, subject to error. The real issue with the SAT is not its psychometric foundations or learning effects from “test prep,” but rather its wide error bounds, which make it too advantageous for students to repeat the exam. The scale of the numbers probably amplifies this effect psychologically; put the score on a range from 1.0 to 4.0, and I suspect retake rates would plummet with absolutely no other changes to the exam.
Even though most admissions committees probably don’t do this in a very sophisticated way (at least, not yet, although one suspects that some of the SAT-optional trend owes more to admissions committees’ innumeracy or hostility toward numeric measures than to any real problem with the SAT), the lack of SAT scores can be worked around with some fancy stats: you can just impute the missing data from the information you do have (mean SAT scores, likely available at the school or school-district level; GPA; some measure of school quality; grades in math and English classes). The imputation does need an adjustment to account for an important selection effect: the SAT score, which is presumably known to the student, is more likely to have been reported if it is above the mean imputation (my gut suspicion is that reporting follows a complementary log-log curve around the mean SAT score).
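To make the selection effect concrete, here’s a minimal simulation sketch, under assumptions of my own (the numbers, the GPA-to-SAT relationship, and the complementary log-log reporting curve are all hypothetical, not from any real admissions data). It shows why naive imputation from reported scores alone overstates the scores of non-reporters:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical population: GPA drives the true SAT score, plus noise.
gpa = rng.normal(3.0, 0.5, n)
sat = 400 + 200 * gpa + rng.normal(0, 80, n)

# Selection: probability of reporting rises with the standardized score
# via a complementary log-log curve (the link conjectured in the text;
# the coefficients 0.3 and 1.2 are made up for illustration).
z = (sat - sat.mean()) / sat.std()
p_report = 1 - np.exp(-np.exp(0.3 + 1.2 * z))
reported = rng.random(n) < p_report

# Reported scores are drawn from the upper part of the distribution.
assert sat[reported].mean() > sat.mean()

# Naive imputation: regress SAT on GPA among reporters only,
# then predict the scores of non-reporters.
X = np.column_stack([np.ones(reported.sum()), gpa[reported]])
beta, *_ = np.linalg.lstsq(X, sat[reported], rcond=None)
imputed = beta[0] + beta[1] * gpa[~reported]

# Because reporters are positively selected even conditional on GPA,
# the naive imputation overstates non-reporters' true scores; this is
# the gap the selection adjustment has to remove.
bias = imputed.mean() - sat[~reported].mean()
print(f"naive imputation bias: {bias:+.1f} points")
```

The positive bias is the whole point of the adjustment: a student who doesn’t report is, conditional on everything else you observe, likely to have scored below what the reporters-only regression predicts.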