Wednesday, June 27, 2012


Last year, I wrote a lot about my experience with SWoRD, a site that facilitates peer review of writing (including generating grades from peer review scores). Although I think SWoRD has a lot of neat features, it also gave me enough problems that I decided not to use it for the writing class this past spring. Instead, I used Turnitin's PeerMark tool, which is integrated into my school's Blackboard system.

Compared to last year, I made a few adjustments to the writing and reviewing process. The general pattern was that students submitted first drafts by class time on Mondays; those papers were released to reviewers at the end of a two-hour grace period (class started at 3:30pm and papers went to reviewers at 5:30pm, so slightly late papers could still get reviewed without disrupting the assignment); and reviews were due by class time on Wednesday (again with a two-hour grace period). Depending on the assignment, students reviewed three to five papers (the longest assignment was five pages, and most were much shorter, so this should not have been too heavy a burden). Final drafts were due the following Monday and 'reflective memos' were due Wednesday. The reflective memos included students' reflections on their own work (e.g., 'What did you learn?') as well as their back evaluations of the peer reviews they received. For the back evaluations, I asked them to provide both a score from 1 to 5 and an explanation of that score. I did not have students peer review the final drafts (I graded those myself).

Some big benefits of PeerMark, particularly compared to SWoRD, include:
  • I have lots of control over things like when the assignment is due, when the reviews become available, and how many reviews each student must complete. Reviewers can be assigned randomly, I can assign specific students to review specific authors, or some combination of the two;
  • The reviewing interface allows students to highlight things in the paper and make notes at the exact spot they've highlighted;
  • Students can give both quantitative scores (on a 1-5 scale) and qualitative explanations;
  • Reviews can be anonymous to the authors while still letting me easily see who the reviewers are (you could also choose to let students see who their reviewers are).
One of the obvious downsides, compared to SWoRD, is that grading is less 'automated' and therefore more work for me. With PeerMark, it's possible to get the average of the reviewing scores (overall and for individual items on the review), which could serve as at least part of the authors' draft grade, but there's no fancy weighting system like SWoRD has. It's also possible to let authors score each review within the PeerMark interface, but only numeric scores between 1 and 10 are allowed and there's no place to explain those scores. So I ended up reading all of the first drafts and reviews much more carefully and assigning my own grades (part of each review's score was based on how closely the reviewer's score for the draft matched my own). I also used my own weighting system for the draft grades: the first-draft grade was a weighted average of my own assessment and the peer review scores, with each peer's score weighted by the score their review received (so if someone did a bad job as a reviewer, their score did not count as much). This was all a bit of a pain, but on the plus side, you can create whatever kind of weighting system you want and make it completely transparent.
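To make that weighting concrete, here's a rough sketch of the kind of calculation involved (a simplified Python illustration with made-up numbers; the function name and the 50/50 split between my score and the peer average are just placeholders, not my exact scheme):

```python
def draft_grade(my_score, peer_reviews, my_weight=0.5):
    """Blend my assessment of a draft with a quality-weighted peer average.

    peer_reviews is a list of (draft_score, review_quality) pairs, where
    review_quality is the 1-5 score the review itself earned.
    """
    # Weight each peer's draft score by how good their review was,
    # so a weak reviewer's opinion counts for less.
    total_quality = sum(q for _, q in peer_reviews)
    peer_avg = sum(s * q for s, q in peer_reviews) / total_quality

    # Blend my own assessment with the quality-weighted peer average.
    return my_weight * my_score + (1 - my_weight) * peer_avg

# Example: I gave a draft an 88; three peers scored it 90, 70, and 85,
# and their reviews earned quality scores of 5, 2, and 4 respectively.
print(round(draft_grade(88, [(90, 5), (70, 2), (85, 4)]), 1))  # 86.3
```

The nice part is that every number in a calculation like this is something you can show students, which is what I mean by the weighting system being completely transparent.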

There are a few things that are still not ideal and that I'm working on fixing for next time. One is that there's no easy way to give students feedback on their reviewing skills, explaining exactly why they got the reviewing score they did and how to improve. The best workaround I could come up with was adding comments to the reviewing grades in the Blackboard gradebook, which is pretty clunky. There was also some confusion in the back evaluations about which review students were evaluating. Since the reviewers were all anonymous, when students went into PeerMark to view their reviews, all they saw was 'Reviewer 1', 'Reviewer 2', etc. But in some cases, the order of those reviewers did not match what I was seeing, so I could not be 100% sure who Reviewer 1 or 2 really was. Next time, I may ask students to include some sort of identifier in their reviews, like a pseudonym or the last four digits of their student ID number.

But overall, things definitely went more smoothly this year. That was partly due to lessons I learned last year about the reviewing process in general (for example, I think my review prompts were clearer). But there was also much less confusion about what was due when, since everything lived in Blackboard and deadlines could be set to match class times, and I think reviewers gave better feedback because the in-text comment tool let them pinpoint the specific spots that needed work. All in all, I think PeerMark is an excellent tool for facilitating peer review.
