PeerMark

Last year, I wrote a lot about my experience with SWoRD, a site that facilitates peer review of writing (including generating grades from peer review scores). Although I think there are a lot of neat things about SWoRD, there were also a lot of problems and I decided not to use it for the writing class this past spring. Instead, I used Turnitin's PeerMark tool, which is integrated into my school's Blackboard system.

Compared to last year, I made a few adjustments to the writing and reviewing process. The general pattern was that students would submit first drafts on Mondays, by class time; those papers would be made available to reviewers at the end of a two-hour grace period (i.e., class started at 3:30pm and papers went to reviewers at 5:30pm, so slightly late papers could still get reviewed without messing up any of the assignments), and reviews were due by class time on Wednesday (again with a two-hour grace period). Depending on the assignment, students reviewed three to five papers (the longest assignment was 5 pages, and most were a lot shorter, so this should not have been too huge a burden). Final drafts were due the following Monday and 'reflective memos' were due Wednesday. The reflective memos included students' reflections on their own work (e.g., 'what did you learn') as well as their back evaluations of the peer reviews they received. For the back evaluations, I asked them to provide both a score of 1 to 5 and an explanation for their score. I did not have students do a peer review of the final draft (I just graded those myself).

Some big benefits of PeerMark, particularly compared to SWoRD, include:
  • I have lots of control over things like when the assignment is due, when the reviews are available, and how many reviews students must complete. Reviewers can either be randomly assigned or I can assign specific students to review specific authors, or some combination;
  • The reviewing interface allows students to highlight things in the paper and make notes at the exact spot they've highlighted;
  • Students can give both quantitative scores (on a 1-5 scale) and qualitative explanations;
  • Reviews can be anonymous to the authors but I can easily see who they are (you could also allow students to see who their reviewers are).
One of the obvious downsides, compared to SWoRD, is that grading is less 'automated' and therefore more work for me. With PeerMark, it's possible to get the average of the reviewing scores (overall and for individual items on the review), which could be used as at least part of the draft grade for the authors, but there's no fancy weighting system like SWoRD has. It's also possible to let authors give each review a score within the PeerMark interface, but only numeric scores between 1 and 10 are allowed and there's no place for an explanation of those scores. So I ended up reading all of the first drafts and the reviews much more carefully and giving them my own grades (part of each reviewing score was based on whether the reviewer gave the draft a score similar to the one I came up with myself). I also used my own weighting system for the draft grades; that is, the grade for the first draft was a weighted average of my own assessment and the peer review scores, with each peer score weighted by the score given to that review (so if someone did a bad job as a reviewer, their score did not count as much). This was all a bit of a pain, but on the plus side, you can create whatever kind of weighting system you want and make it completely transparent.
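To make the arithmetic concrete, here is a minimal sketch of a weighting scheme like the one described: the draft grade blends the instructor's score with the peer scores, and each peer score counts in proportion to the quality score its reviewer earned. The function name, the 50/50 instructor weight, and the example numbers are all illustrative assumptions, not PeerMark's or my exact formula.

```python
def draft_grade(instructor_score, peer_scores, review_quality,
                instructor_weight=0.5):
    """Blend the instructor's score with quality-weighted peer scores.

    instructor_score: the instructor's 0-100 grade for the draft.
    peer_scores: 0-100 scores from the peer reviewers.
    review_quality: one weight per reviewer (e.g., a 1-5 back-evaluation
        score), so weaker reviews count less toward the draft grade.
    instructor_weight: share of the final grade taken from the
        instructor's score (hypothetical default of 0.5).
    """
    # Quality-weighted average of the peer scores.
    peer_avg = (sum(s * q for s, q in zip(peer_scores, review_quality))
                / sum(review_quality))
    return (instructor_weight * instructor_score
            + (1 - instructor_weight) * peer_avg)

# Example: instructor gives 85; three peers give 90, 70, and 80, and
# their reviews earned quality scores of 5, 2, and 4 respectively.
grade = draft_grade(85, [90, 70, 80], [5, 2, 4])
```

The nice thing about doing this outside PeerMark is exactly the transparency mentioned above: the spreadsheet (or script) spells out how each review influenced the grade.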

There are a few things that are still not ideal and that I'm hoping to fix next time around. One is that there is no easy way to give students feedback on their reviewing skills, explaining exactly why they got the reviewing score they did and how to improve. The only thing I could come up with was adding comments to the reviewing grades in the Blackboard gradebook, which is a pretty clunky workaround. There was also some confusion in the back evaluations about which review students were evaluating. Since the reviewers were all anonymous, when students went into PeerMark to view their reviews, all they saw was 'Reviewer 1', 'Reviewer 2', etc. But in some cases, the order of those reviewers was not the same as what I was seeing, so I could not be 100% sure who Reviewer 1 or 2 really was. Next time, I may ask students to include some sort of identifier in their reviews, like a pseudonym or the last four digits of their student ID number.

But overall, things definitely were smoother this year. That was partly due to lessons I learned last year about the reviewing process in general (for example, I think my prompts for the reviews were clearer), but there was definitely less confusion about what was due when, thanks to everything being in Blackboard and being able to set deadlines that corresponded to class times, and I think reviewers gave better feedback because they could use the in-text comment tool to pinpoint specific spots that needed work. All in all, I think PeerMark is an excellent tool for facilitating peer review.
