By now, a lot of academics (or at least academic economists) have heard about Scott Carrell and James West's paper on professor quality. They use data from the Air Force Academy (where students are randomly assigned to core courses and take common exams) and find that a professor's 'value-added' in intro courses is positively correlated with student evaluations but negatively correlated with 'value-added' in follow-on courses (which the authors interpret as a measure of 'deep learning'). Basically, professors who seem better at inducing 'deep learning' in their intro students are also more likely to get lower evaluations from those students.
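To make that correlation pattern concrete, here is a minimal simulation sketch. This is not the authors' estimation strategy; the two-trait setup and all parameter values are my own illustrative assumptions (professors good at boosting intro exam scores are assumed to be somewhat worse at producing lasting learning, and students are assumed to reward intro results in their evaluations):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 200  # hypothetical number of professors

    # Each professor gets two latent traits, negatively correlated by assumption:
    # 'test_prep' raises intro exam scores, 'deep' raises follow-on scores.
    traits = rng.multivariate_normal([0.0, 0.0], [[1.0, -0.5], [-0.5, 1.0]], n)
    test_prep, deep = traits[:, 0], traits[:, 1]

    # Section-level outcomes with classroom noise
    intro_va = test_prep + 0.3 * rng.standard_normal(n)   # intro 'value-added'
    follow_va = deep + 0.3 * rng.standard_normal(n)       # follow-on 'value-added'
    evals = intro_va + 0.5 * rng.standard_normal(n)       # students reward intro results

    print(np.corrcoef(intro_va, evals)[0, 1])      # positive
    print(np.corrcoef(intro_va, follow_va)[0, 1])  # negative

Under those assumptions, intro value-added comes out positively correlated with evaluations and negatively correlated with follow-on value-added, which is the qualitative pattern the paper reports.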
On the one hand, I have to say that this feels kind of validating for people like me. That is, I care a lot about helping my students learn to think critically, and I think I put a lot of effort into fostering deep learning rather than letting my students just memorize stuff, but I rarely get stellar evaluations (at least in my Principles course), and I've often told myself that my focus on deeper learning is one of the reasons my evaluations aren't as high as some of my colleagues'. Certainly, some of the open-ended comments from students on the anonymous department evaluations could be interpreted as resentment that I ask them to actually think.
But on the other hand, I can't be quite so cynical as to blame it all on my students. I think about teachers like those in Ken Bain's What the Best College Teachers Do, and I know that they are able not only to promote deep learning but to do so in a way that students appreciate and respond to. So clearly, I have work to do...
But I have to say, one of the best things about tenure is that I don't really have to care about my teaching evaluations. I know that is exactly the kind of thing that makes a lot of people think tenure is a bad thing, and I do realize that for some people, the 'threat' of bad evaluations is the only thing motivating them to care about their teaching at all. But a) I don't think anyone could seriously argue that my not caring about my evaluations is equivalent to not caring about teaching, and b) since I'm going to keep working on my teaching regardless, the anonymous student evaluations done for my department tend to just stress me out without giving me much useful feedback. I get much more useful information from the end-of-course surveys I have students do, which are tailored to the individual courses; I'll be talking more about those soon...
Words like 'deep thinking' and 'critical thinking' are good, but not enough. What we need to understand is that we must start from basic principles: teachers must know how students think, and build from there using principles and logic. See "Teaching and Helping Students Think and Do Better" on Amazon.
What struck me most about the Carrell/West study were the unique conditions under which the Air Force Academy operates:
1. Very small classes.
2. A large number of faculty.
3. Random assignment of students to sections.
4. What appears to be close to random assignment of faculty to sections.
5. Very clear sequencing of courses in many programs.
I seriously doubt whether their results could be replicated in any system in which a program has a small number of sections, or a small number of faculty, or student choice of faculty in introductory (and follow-on) courses, or....
So while I'm sympathetic with the effort, I think a lot of the commentary on the study overstates the conclusions that can be drawn from it.
@Doc: There was an InsideHigherEd article that made a similar point (i.e., it's hard to see how to implement an evaluation system based on the results). I agree it would be unrealistic to try to evaluate faculty based on how their students do in follow-on classes. But what was nice about the Carrell and West study is that, because it didn't have any of those issues, one can say their results DO suggest that traditional teaching evaluations may be 'rewarding' the wrong things. So maybe the implication isn't that other schools should try to replicate the results, or do away with evaluations, but rather that they should think about how to put student evaluations in the proper context?
Well, I'd agree with that. I've thought for decades that standard CTEs are very limited instruments. One thing I have advocated (unsuccessfully) is, in fact, doing retrospective assessments: asking students about to graduate which courses contributed the most to their ability to learn and work with the material. I don't know of any place that does that...