Alex Tabarrok endorses an econometrics text that makes two rather bold simplifying assumptions:
1. Stock and Watson use a “robust” estimator of standard errors right from the beginning. This means that they can dump an entire chapter on heteroskedasticity and methods of “correcting” for heteroskedasticity (these rarely worked in any case).
2. They do not waste time discussing the difference between the t-distribution and the normal distribution. Instead, they assume reasonably large datasets from the get-go and base their theorems on large-sample theory.
I can sort of see the value of always using heteroskedasticity-consistent standard errors (although I think it’s better to model the heteroskedasticity if you can), but dispensing with the t distribution seems a bridge too far. Large-sample theory is nice, but (a) common econometrics software (e.g. Stata, LIMDEP, and R) uses the t distribution even at sample sizes in the hundreds, so you need to discuss it anyway, and (b) there are plenty of hypotheses that can only be tested with small samples due to data limitations. Now, these concerns may be less problematic in the large-n world that economists inhabit, but I’d have real trouble justifying such a text for a graduate seminar in political science methods (undergrads rarely get beyond bivariate regression).
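For the curious, here’s a rough sketch of both points. The tooling is my own choice (Python with statsmodels and scipy, not anything endorsed by the post or the text, which points readers toward Stata, LIMDEP, and R): it simulates heteroskedastic data to compare classical and HC-robust standard errors, and then prints the t-versus-normal critical values at n = 100 that point (a) is about.

```python
# Sketch only: simulated data and Python tooling are assumptions for illustration.
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(42)
n = 100

# Simulate a regression with heteroskedastic errors: the error variance
# grows with x, so classical standard errors are misspecified.
x = rng.uniform(0, 10, n)
e = rng.normal(0, 0.5 + 0.3 * x)   # error scale depends on x
y = 1.0 + 2.0 * x + e

X = sm.add_constant(x)
classical = sm.OLS(y, X).fit()              # conventional (homoskedastic) SEs
robust = sm.OLS(y, X).fit(cov_type="HC1")   # heteroskedasticity-consistent SEs

print("classical SE(b1):", classical.bse[1])
print("HC1 robust SE(b1):", robust.bse[1])

# Point (a): software keeps using the t distribution even at n in the hundreds.
# At df = n - 2 = 98 the t and normal critical values are close but not equal,
# so the distinction still has to be taught.
alpha = 0.05
t_crit = stats.t.ppf(1 - alpha / 2, df=n - 2)
z_crit = stats.norm.ppf(1 - alpha / 2)
print(f"t critical value (df=98): {t_crit:.4f}")   # about 1.984
print(f"normal critical value:    {z_crit:.4f}")   # about 1.960
```

The gap between the two critical values is small at n = 100 and shrinks further as n grows, which is the large-sample argument; the point is simply that the software output you’d put in front of students is still t-based, so you can’t avoid explaining it.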