
How much weight do you give evaluations?

By now, a lot of academics (or at least academic economists) have heard about Scott Carrell and James West's paper on professor quality. They use data from the Air Force Academy, where students are randomly assigned to core courses and take common exams, and find that professors' 'value-added' in intro courses is positively correlated with student evaluations but negatively correlated with their 'value-added' in follow-on courses (which the authors interpret as evidence of 'deep learning'). Basically, professors who seem to be better at inducing 'deep learning' in their intro students are also more likely to get lower evaluations from those students.

On the one hand, I have to say that this feels kind of validating for people like me. That is, I care a lot about helping my students learn to think critically, and I think I put a lot of effort into trying to foster deep learning rather than allowing my students to just memorize stuff, but I rarely get stellar evaluations (at least in my Principles course), and I've often told myself that my focus on deeper learning is one of the reasons why my evaluations aren't as high as some of my colleagues'. Certainly, some of the open-ended comments from students on the anonymous department evaluations could be interpreted as them resenting that I ask them to actually think.

But on the other hand, I can't be quite so cynical as to blame it all on my students. I think about teachers like those in Ken Bain's What the Best College Teachers Do, and I know that they are able not only to promote deep learning but to do so in a way that students appreciate and respond to. So clearly, I have work to do...

But I have to say, one of the best things about tenure is that I don't really have to care about my teaching evaluations. I know that is exactly the kind of thing that makes a lot of people think tenure is a bad thing, and I do realize that for some people, the 'threat' of bad evaluations is the only thing motivating them to care about their teaching at all. But a) I don't think anyone could seriously argue that my not caring about my evaluations is equivalent to my not caring about teaching, and b) since I'm going to keep working on my teaching regardless, the anonymous student evaluations done for my department tend to just stress me out without giving me much useful feedback. I get much more useful information from the end-of-course surveys I have students do that are tailored to the individual courses, which I will be talking more about soon...

Comments

  1. Dr. Sanford Aranoff, June 17, 2010 at 5:29 AM

    Words like 'deep thinking' and 'critical thinking' are good, but not enough. What we need to understand is that we must start from basic principles. Teachers must know how students think, and build from there using the principles and logic. See "Teaching and Helping Students Think and Do Better" on Amazon.

  2. What struck me most about the Carrell/West study were the unique conditions under which the USAF operates:

    1. Very small classes.

    2. A large number of faculty.

    3. Random assignment of students to sections.

    4. What appears to be close to random assignment of faculty to sections.

    5. Very clear sequencing of courses in many programs.

    I seriously doubt whether their results could be replicated in any system in which a program has a small number of sections, or a small number of faculty, or student choice of faculty in introductory (and follow-on) courses, or....

    So while I'm sympathetic with the effort, I think a lot of the comment on the study overstates the conclusions that can be drawn from it.

  3. @Doc: There was an InsideHigherEd article that made a similar point (i.e., it's hard to see how to implement a system based on the results). I agree it would be unrealistic to try to evaluate faculty based on how their students do in follow-on classes, but what was nice about the Carrell and West study is that, since they didn't have any of those issues, one could say their results DO confirm that traditional teaching evaluations may be 'rewarding' the wrong things. So maybe the implication isn't that any other school should try to replicate the results, or do away with evaluations, but rather that schools should be thinking about how to put student evaluations in the proper context?

  4. Well, I'd agree with that. I've thought for decades that standard CTEs are very limited instruments. One thing I have advocated (unsuccessfully) is, in fact, doing retrospective assessments: asking people about to graduate which courses contributed the most to their ability to learn and work with the material. I don't know of any place that does that...


