Herein I present a rant on one-tailed tests in the social sciences; feedback welcome:
See also: this FAQ from UCLA, which is a little more lenient, but not much.

Unless you have a directional hypothesis for every coefficient before your model ever makes contact with the data, you have no business doing a one-tailed statistical test. Besides, if your hypotheses are solid and you have a decent n, the tailedness shouldn't determine significance or the lack thereof.
Thought experiment: assume you present a test in a paper that comes out p=.06, one-tailed. That means you have a hypothesis that doesn't really work to begin with (sorry, "approaches conventional levels of statistical significance"). More importantly, if you just made up the directional hypothesis post facto to put a little dagger (or heaven forbid a star) next to the coefficient, what you really did was a two-tailed test with p=.12, justified post hoc to make the finding sound better than it really was.
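To make the arithmetic concrete, here is a minimal sketch (assuming a symmetric reference distribution such as z or t; the test statistic is made up to match the numbers above) showing that a one-tailed p of .06 is just a two-tailed p of .12 wearing a disguise:

```python
from scipy.stats import norm

# Hypothetical z statistic chosen so the one-tailed p-value is exactly .06.
z = norm.ppf(1 - 0.06)                 # ~1.55

p_one_tailed = norm.sf(z)              # P(Z >= z)      -> 0.06
p_two_tailed = 2 * norm.sf(abs(z))     # P(|Z| >= |z|)  -> 0.12

print(round(p_one_tailed, 3), round(p_two_tailed, 3))   # 0.06 0.12
```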
Now here’s the center of the rant: I really don’t believe you actually knew the directionality of your hypothesis before you ran the test and were willing to stick with it through thick and thin. I know that if the “sign was not as expected” and the result came out p=.003 two-tailed (p=.0015 one-tailed, opposite directionality), you’d be figuratively jumping up and down with excitement and reporting a significant result, rather than lamenting that your original one-tailed test came out p=.9985. I dare say nobody has ever published an article claiming the latter (although I might give it a positive review just for kicks).
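The flip side, under the same assumptions: the p=.0015 only exists if you quietly reverse the hypothesis after seeing the data; under the original directional hypothesis, the one-tailed test gives p=.9985.

```python
from scipy.stats import norm

# Hypothetical result: two-tailed p = .003, but the coefficient is negative
# while the pre-registered directional hypothesis was H1: effect > 0.
z = -norm.ppf(1 - 0.003 / 2)           # ~ -2.97

p_two_tailed = 2 * norm.sf(abs(z))     # 0.003
p_flipped_h1 = norm.cdf(z)             # 0.0015 -- only if you rewrite H1 as "effect < 0" post hoc
p_original_h1 = norm.sf(z)             # 0.9985 -- under the original H1: effect > 0

print(round(p_two_tailed, 4), round(p_flipped_h1, 4), round(p_original_h1, 4))
```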
And I really don’t want to have these discussions with sophomores and juniors, which is why I prefer textbooks that stick to two-tailed tests (aka “not Pollock” [a textbook I really like otherwise]), so I don’t feel the need to rant.