I apparently have a love-hate relationship with my students; in my mailbox at work today were a Christmas card from a student and my abysmal (at least by Millsaps standards) course evaluations. Four students in my intro class apparently thought it would be amusing to give me the lowest possible ranking on all 19 questions, even such procedural items as “gives clear directions” and “presents [material] in a clear sequence.” Ah well, at least I “demonstrate knowledge” of what I’m teaching…
My response to all this, of course, was to finish my SPSA paper on voting in recent presidential elections and continue getting organized for my trip to Florida tomorrow.
My students are apparently laboring under the delusion that I am “hot.” Oy vey. I could buy that rating for Ms. Mueller or Dr. Galicki, to say nothing of the legendary Dr. Tegtmeier-Oertel, but not for me.
Elsewhere: Dr. Huffmon’s students love him (except the student who fails to properly recognize that he is the Messiah), but inexplicably fail to award the coveted chili pepper. Mass delusion, I tell you. (þ sorta-kinda: Mungowitz End)
Well, the real evaluations—rather than the fake ones here—are in, and they’re much better than those from last semester, by well over a standard deviation. (I’d sit down and do the independent samples t test, but I’m not that bored. Update: t test below the fold…)
I’m just chalking this one up as yet another in a series of little ironies that have been running around for the past couple of months.
The result of the t test: p(I am not a better teacher now than I was in the fall) < 0.00062. I feel so much better now.
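For anyone who wants to replicate the exercise, a minimal sketch in Python follows. The ratings are invented for illustration (the real fall and spring numbers aren’t reproduced in this post); `scipy.stats.ttest_ind` with `alternative="greater"` gives the one-tailed test of the null hypothesis that the spring evaluations are no better than the fall ones.

```python
# Minimal sketch of a one-tailed independent samples t test, as described
# above. The per-question mean ratings (1-5 scale) below are hypothetical;
# the actual fall and spring evaluation numbers are not reproduced here.
from scipy import stats

fall   = [3.1, 2.8, 3.4, 3.0, 2.9, 3.2, 3.3, 2.7, 3.0, 3.1]  # hypothetical
spring = [4.2, 4.0, 4.5, 4.1, 3.9, 4.3, 4.4, 4.0, 4.2, 4.1]  # hypothetical

# H0: spring ratings are no better than fall ratings.
# The `alternative` argument requires scipy >= 1.6.
t_stat, p_value = stats.ttest_ind(spring, fall, alternative="greater")
print(f"t = {t_stat:.3f}, one-tailed p = {p_value:.5f}")
```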
As someone broadly sympathetic to the idea that students should have full disclosure about the courses they take, I’d be remiss if I didn’t point out this effort at student-run evaluations launched by Duke sophomore Elliott Wolf. He further articulates his motivations in this op-ed in today’s Duke Chronicle.
And, in the interests of full disclosure, he’s my one lonely rating thus far.
Econ prof James D. Miller thinks colleges need to “fight” RateMyProfessors.com. I don’t know if it needs fighting, per se, but I’d say it’s only marginally valuable. For example, here at Duke I’m allegedly easy, yet at Millsaps I was considered tough (and had the grade distribution to prove it: my classes were consistently below the college’s mean GPA).
That said, I don’t mind student-centered evaluations and have even lauded one effort to compile such things here at Duke, where the “official” evals for last semester are apparently so shrouded in secrecy that I still haven’t seen them 6 weeks after turning in grades. And I don’t even mind student evals in general, although they almost certainly were a factor in my failing upward in the academic universe.
Though, as a political scientist looking for a job, I find the mentality noted by this commenter (allegedly a faculty member in my field) somewhat disturbing:
I have been to two academic conferences within the year (academic year 2005–06) where colleagues were running tenure-track job searches (political science), and when I made recommendations regarding two individuals who I thought might be a good fit for both jobs, I received subsequent emails that, “after having checked RMP” (talk about unprofessional behavior!!!), there were “concerns” whether either of the recommended colleagues could teach in a liberal arts environment. Clearly RMP is being looked at by folks on search committees. Don’t believe for a minute that after having looked at RMP folks are not influenced by what they read. And don’t believe that search committee members are not going directly to RMP to get, as I was told, “a snapshot” of job candidates. AAUP and the national associations for the various disciplines ought to step in on this debate and come down clearly on RMP and its use in job searches etc.