Monday, July 21, 2008

Why do I read my evaluations?

[warning: this post doesn't really have a point, I just need to vent!]

Sigh. Because I've been on leave since January, I never read my evaluations from Fall 2007, the first semester I taught the 500-seat Principles of Micro course. But now I'm back in San Diego and was foolish enough to look at them. They aren't all bad - a number of my students are very sweet and made comments such as, "Professor Imazeki really did the best she could with a class this size," and "I don't think she needs to do anything to improve except not have so many slacker whiners in her class" (I may have to frame that one and re-read it every semester when I'm going through my evaluations!). Of course, there are the typical contradictions that I try to ignore (e.g., a bunch of students HATE Aplia and just as many LOVE it). There are also the perennial (for me) complaints: "she goes way too fast" and "her exams are crazy hard". I can accept those because they are probably accurate, but I also know that the larger-than-usual number of such comments reflects this being my first time teaching the super-sized class, and I'm pretty sure I can improve the next time around.

But after all these years, I still find it hard not to be completely annoyed by the students who blame me for things that are just flat-out wrong - wrong because they didn't pay attention - like saying that I never told them when assignments were due (apparently they missed the weekly postings on the course website), or that it was unfair they couldn't make up points when they missed class for a good reason (they apparently didn't understand that when I said I'd drop their 8 lowest scores, that meant they didn't lose any points for those missed classes). Maybe even more frustrating are the students who say they want the class to be more interactive but also say they were confused by all the technology. On the one hand, I'll take that as a suggestion to do a better job explaining how Aplia works; on the other hand, it makes me want to scream because, really, what am I supposed to do with that?!

Sigh. OK, back to the grind, trying to take all this 'constructive criticism' and use it to make the class better the next time...

6 comments:

  1. I can't imagine reading evals. from a 500-student course. Very brave.

    Just out of curiosity, how do you feel about the evaluation instrument? Were the students responding to valid questions about learning goals as well as rating your teaching effectiveness?

  2. Read the evals, but keep them in perspective. Most faculty I know who take teaching evaluations seriously focus on the negative ones and downplay the positives. Don't do that. When teaching any new course, or a course in a new format, I would expect the evals to show lots of room for improvement. That's pretty normal. Comments about the difficulty of economics exams are also the norm. Comments about teaching too fast are worth thinking about: is it a small group of respondents who think that, or the majority? Each is a valid response, but with dramatically different implications. Are you trying to cover too much content? Remember, it's not covering the content but mastery that should be the goal. If you decide you need to move that quickly, you might want to warn students from the beginning that this is what's involved. Let them know up front that they'll need to reallocate work effort toward your course if they want to be successful.

    Bottom line: Your students are fortunate to have someone who cares as much as you do.

  3. dispersemos: The evals I was reading are from a university-wide instrument that really is about students' reactions to instruction, not about learning. There are a bunch of Likert-scale questions, which are only marginally useful, mostly because I don't think students differentiate much (that is, many students will fill in all 1's, all 4's, etc., which makes me think they aren't reading the actual questions all that carefully). The last three questions are open-ended: "Please specify those things which you believe your instructor has done well and which you have especially liked"; "Please specify the ways in which you think this course and your instructor's teaching might be improved"; and "Please specify those things which you believe were the main obstacles to learning in this course (e.g., pace, textbook, exams, class size, hour of class meetings, etc.)".

    I find that only a small handful of students respond to the open-ended questions with comments that are actually about learning. I also do my own surveys throughout the semester, asking for student feedback on specific aspects of the course (like how helpful they found problem sets or clicker questions).

    Steve: you're right, I do tend to fixate on the negative comments. I'm in the process of re-designing the course more for depth than breadth so I already was aware of the need to drop a lot of stuff, but you're dead on about the need to keep it all in perspective. I appreciate the pep talk!

  4. This comment has been removed by the author.

  5. I think I have mentioned this before, but I have found that when the material was something I saw value in, in terms of its relation to the real world - like cap and trade, or elastic/inelastic demand - that is when I was interested and felt I did the best, or at least felt the best about learning new things. It was something I could see people on TV talking about and say, "Hey, I know what they are talking about!" Even better is when I explain it to other people.

    I personally wasn't a fan of the in-class demonstrations when you brought students up (I think yours was the tennis ball thing, but I'm not sure). I saw it as a gimmick to make the class a bit more fun without really contributing to any learning of the material, though I'm sure you didn't see it or intend it that way - or maybe you just wanted to throw some fun into a class that big. I have found that in-class demonstrations are usually never beneficial on tests, and that is my criterion for judging them. I ended up coming to class because of my interest in learning more about econ, not as a typical business major who just wants to get the class over with. I am not writing this in a negative fashion, but wanted to share my thoughts on the matter.

  6. Matt: thanks for your comments. Ironically, the example you use (the tennis ball activity) is something that a lot of other students specifically mentioned as something they remembered and enjoyed - these sorts of directly contradictory comments are one reason why evaluations can be frustrating. Teachers learn quickly that it is impossible to please all of the students all of the time, so we are always having to figure out what will work for the majority.


Comments that contribute to the discussion are always welcome! Please note that spammy comments whose only purpose seems to be to direct traffic to a commercial site will be deleted.