
Wednesday, March 16, 2011

More about SWoRD reviewing

[If you missed them, I discussed the basics of SWoRD and whether SWoRD can replace instructor grading in earlier posts. I also have a follow-up post in August]

In addition to generating a writing grade from students' numeric rubric scores, SWoRD also generates a reviewing grade. My understanding is that a student's reviewing grade is based half on that student's 'consistency' and half on the 'helpfulness' of that student's comments. The consistency score accounts for whether a student is differentiating among papers: if a student gives all high scores or all low scores, or if a student's scores differ from the other scores for the same papers, then the consistency score is reduced (this also affects how much weight that student's scores are given in the writing score of the reviewed papers). The helpfulness score comes from 'back evaluations' that reviewees complete. The back evaluations are numeric scores on a five-point scale; the instructor can create specific rubrics for the back evaluation scale, but I simply told students to rate 1-5 in terms of whether they felt the comments were specific enough to be helpful in improving their paper, with 5 being the most helpful. The back evaluations are done before students can see the numeric scores that the reviewers gave, so the helpfulness score is based entirely on the open-ended comments.
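To make the half-and-half weighting concrete, here is a minimal sketch of a reviewing grade computed half from consistency and half from helpfulness, as described above. This is not SWoRD's actual algorithm (which I haven't seen): the function names, the gap-from-the-class-average measure of consistency, and the 7-point rubric scale are all my own assumptions.

```python
# Illustrative sketch only, not SWoRD's real formulas.
from statistics import mean

def consistency_score(my_scores, class_avg_scores):
    """Rough consistency: 1 minus the average (normalized) gap between
    this reviewer's rubric scores and the class-average scores for the
    same papers. A reviewer who gives everyone the same score, or who
    diverges from the other reviewers, ends up lower."""
    max_gap = 6.0  # assumes a 7-point rubric, so the largest gap is 6
    gaps = [abs(m - a) / max_gap for m, a in zip(my_scores, class_avg_scores)]
    return 1.0 - mean(gaps)

def helpfulness_score(back_evals):
    """Back evaluations are on a 1-5 scale; rescale to 0-1."""
    return (mean(back_evals) - 1) / 4

def reviewing_grade(my_scores, class_avg_scores, back_evals):
    """Half consistency, half helpfulness, on a 0-1 scale."""
    return (0.5 * consistency_score(my_scores, class_avg_scores)
            + 0.5 * helpfulness_score(back_evals))
```

So a reviewer whose scores track the class averages closely and whose comments earn high back evaluations would land near 1, while an undiscriminating reviewer with vague comments would fall well below that.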

When I have done peer reviewing in the past, I have not graded the reviews, in part because I simply didn't know how I would assess them and in part because the peer reviewing was not anonymous (so I did not really trust asking students to 'rate' their reviewers). I think students do take things more seriously when there is a grade attached, so from that perspective, I like that SWoRD generates a reviewing grade. But I'm not entirely convinced it's really a good measure of the students' work; as with the peer reviews themselves, I'm not sure how much of that is SWoRD and how much is my inexperience with the whole set-up.

One problem is 'training' students how to do back evaluations (this goes for the reviewing process as well). At least on the first assignment, students did not seem to discriminate much in their back evaluations. In particular, when reviewers gave vague, not-very-useful comments like, "Everything looks fine to me," many reviewees would still give back evaluation scores of 5. At the same time, I saw at least a few cases where the reviewee simply didn't agree with a comment and so gave a lower back evaluation score, even if the comment was something the writer should have considered. For the second assignment, I gave the students a bit more guidance on what the different numeric ratings should mean; we'll see what happens with that.

Another issue that arose on the first assignment was students not taking the reviews of the final draft very seriously. A few students did not complete the second reviews, and several did not complete the back evaluations. I think this is partly because they did not think it would affect their own grade, but also because they did not see any point. I guess I should have known that would happen; although some of my students really are interested in improving their writing, I forget that many of them are only really interested in their grade. So I clarified the impact on their grade (see below), and for the second assignment, I also changed the reviewers for the second draft (rather than having them review the same paper again) and told them that their final assignment of the semester will be to choose one of their assignments and re-write it. Now the comments on the final drafts may actually be incorporated into another draft and, hopefully, that will give students an incentive to take them a bit more seriously.

I think part of the confusion about the impact of skipping reviews or back evaluations stemmed from the way SWoRD does the reviewing scores. If a student does some, but not all, of the assigned reviews, the reviewing score is simply averaged over the reviews that were completed. SWoRD has a separate category (the "Task" score) that tells you what percentage of reviews and back evaluations a student completed (so, for example, a student who does the reviews but skips the back evaluations will have a Task score of 50%). This Task grade can be given its own weight in the calculation of the final grade, but I would much prefer to interact it with (i.e., multiply it by) the Writing or Reviewing scores. It's possible to do this by hand (i.e., export the data to Excel and do the calculations there), but that sort of defeats the purpose of doing everything in SWoRD since students have to go somewhere else to find out their final grade. What I ended up doing was weighting the Reviewing score by the Task score so, for example, if a student only did half their reviews, they only got half the credit they would have otherwise gotten for reviewing. That definitely got the attention of the students who skipped the back evaluations for the final draft!
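The adjustment described above can be sketched in a few lines. The names here are my own (this is the by-hand Excel arithmetic, not anything built into SWoRD): the Task fraction is completed tasks over assigned tasks, and it scales the reviewing grade directly.

```python
# Sketch of the by-hand adjustment: scale the reviewing grade by the
# fraction of assigned reviewing tasks (reviews + back evaluations)
# the student actually completed. Names are mine, not SWoRD's.

def task_fraction(reviews_done, reviews_assigned,
                  backevals_done, backevals_assigned):
    """E.g., all reviews done but no back evaluations -> 0.5 (a 50% Task
    score, assuming equal numbers of reviews and back evaluations)."""
    return ((reviews_done + backevals_done)
            / (reviews_assigned + backevals_assigned))

def weighted_reviewing_grade(reviewing_grade, task_frac):
    """A student who did half their tasks gets half the reviewing credit."""
    return reviewing_grade * task_frac
```

So a student with a reviewing grade of 90 who did all four reviews but skipped all four back evaluations would have a Task fraction of 0.5 and end up with 90 × 0.5 = 45 for reviewing.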

So far, I have to say that my experience with SWoRD hasn't been as smooth as I had hoped but I guess that's why it's called a pilot...


Also related: Other peer reviewing tools
