
More about SWoRD reviewing

[If you missed them, I discussed the basics of SWoRD and whether SWoRD can replace instructor grading in earlier posts. I also have a follow-up post in August]

In addition to generating a writing grade from students' numeric rubric scores, SWoRD also generates a reviewing grade. My understanding is that a student's reviewing grade is based half on that student's 'consistency' and half on the 'helpfulness' of that student's comments. The consistency score accounts for whether a student is differentiating among papers - if a student gives all high scores or all low scores, or if a student's scores differ from the other scores for the same papers, then the consistency score is reduced (this also affects how much weight that student's scores are given in the writing score of the reviewed papers). The helpfulness score comes from 'back evaluations' that reviewees complete. The back evaluations are numeric scores on a five-point scale; the instructor can create specific rubrics for the back evaluation scale, but I simply told students to rate comments from 1 to 5 based on whether they felt the comments were specific enough to help them improve their paper, with 5 being the most helpful. The back evaluations are done before students can see the numeric scores that the reviewers gave, so the helpfulness score is based entirely on the open-ended comments.
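SWoRD doesn't publish its exact formulas, so here is a minimal sketch of the calculation as I understand it; the correlation-based consistency measure, the 0-100 scaling, and the function names are my assumptions, not SWoRD's actual implementation:

```python
from statistics import StatisticsError, correlation, mean  # correlation needs Python 3.10+

def reviewing_grade(reviewer_scores, class_avg_scores, back_evals):
    """Sketch: reviewing grade = half consistency + half helpfulness (0-100).

    reviewer_scores  - this reviewer's rubric scores, one per paper reviewed
    class_avg_scores - average of everyone's scores for those same papers
    back_evals       - the 1-5 back-evaluation ratings this reviewer received
    """
    # Consistency (assumed): how well the reviewer's scores track the class
    # averages. If the reviewer gave every paper the same score, correlation
    # is undefined, which I treat here as zero consistency (no differentiation).
    try:
        r = correlation(reviewer_scores, class_avg_scores)
    except StatisticsError:
        r = 0.0
    consistency = max(r, 0.0) * 100  # clamp negative correlations, scale to 0-100

    # Helpfulness: mean back evaluation rescaled from the 1-5 scale to 0-100.
    helpfulness = (mean(back_evals) - 1) / 4 * 100

    return 0.5 * consistency + 0.5 * helpfulness

# Example: a reviewer whose scores track the class closely (high consistency)
# but whose comments were rated mediocre: roughly (98.5 + 41.7) / 2, or about 70.
print(reviewing_grade([75, 88, 62], [78, 85, 65], [3, 2, 3]))
```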

When I have done peer reviewing in the past, I have not graded the reviews, in part because I simply didn't know how I would assess them and in part because the peer reviewing was not anonymous (so I did not really trust asking students to 'rate' their reviewers). But I think students do take things more seriously when there is a grade attached, so from that perspective, I like that SWoRD generates a reviewing grade. Still, I'm not entirely convinced it's really a good measure of the students' work; as with the peer reviews themselves, I'm not sure how much of that is SWoRD and how much is my inexperience with the whole set-up.

One problem is 'training' students to do back evaluations (the same goes for the reviewing process itself). At least on the first assignment, students did not seem to discriminate much in their back evaluations. In particular, when reviewers gave vague, not-useful comments like, "Everything looks fine to me," many reviewees would still give back evaluation scores of 5. At the same time, I saw at least a few cases where a reviewee simply disagreed with a comment and gave a lower back evaluation score, even when the comment raised something the writer should have considered. For the second assignment, I gave the students a bit more guidance on what the different numeric ratings should mean; we'll see what happens with that.

Another issue on the first assignment was students not taking the reviews of the final draft very seriously. A few students did not complete the second round of reviews, and several did not complete the back evaluations. I think this is partly because they did not think it would affect their own grade, but also because they did not see any point. I should have known that would happen; although some of my students really are interested in improving their writing, I forget that many are only really interested in their grade. So I clarified the impact on their grade (see below), and for the second assignment, I also assigned different reviewers for the second draft (rather than having students review the same paper again) and told the class that their final assignment of the semester will be to choose one of their assignments and re-write it. Now the comments on the final drafts may actually be incorporated into another draft, which will hopefully give students an incentive to take them a bit more seriously.

I think part of the confusion about the impact of skipping reviews or back evaluations stemmed from the way SWoRD calculates the reviewing scores. If a student does some, but not all, of the assigned reviews, the reviewing score is simply averaged over the reviews that were completed. SWoRD has a separate category (the "Task" score) that tells you what percentage of reviews and back evaluations a student completed (so, for example, a student who does the reviews but skips the back evaluations will have a Task score of 50%). This Task grade can be given its own weight in the calculation of the final grade, but I would much prefer to interact it with the Writing or Reviewing scores. It's possible to do this by hand (i.e., export the data to Excel and do the calculations there), but that sort of defeats the purpose of doing everything in SWoRD, since students have to go somewhere else to find out their final grade. What I ended up doing was weighting the Reviewing score by the Task score; so, for example, if a student only did half their reviews, they only got half the credit they would have otherwise gotten for reviewing. That definitely got the attention of the students who skipped the back evaluations for the final draft!
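To make the weighting concrete, here is a small sketch of the adjustment I did by hand in Excel; the function names are mine, and the assumption that SWoRD pools reviews and back evaluations equally in the Task percentage is based only on the 50% example above:

```python
def task_score(reviews_done, reviews_assigned, backevals_done, backevals_assigned):
    # Fraction of all assigned reviewing tasks completed. Assumes reviews and
    # back evaluations are pooled into one percentage, which matches the
    # example above: all reviews done, no back evaluations -> 50%.
    return (reviews_done + backevals_done) / (reviews_assigned + backevals_assigned)

def adjusted_reviewing_score(reviewing_score, task):
    # Weight the reviewing score by task completion, so skipped reviews or
    # back evaluations cost real points instead of just being averaged away.
    return reviewing_score * task

# Example: a student with a 90 reviewing score who did all 4 reviews but
# none of the 4 back evaluations ends up with 90 * 0.5 = 45.
print(adjusted_reviewing_score(90, task_score(4, 4, 0, 4)))
```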

So far, I have to say that my experience with SWoRD hasn't been as smooth as I had hoped, but I guess that's why it's called a pilot...


Also related: Other peer reviewing tools
