The short answer, for me, right now, is NO. However, I am not sure if that is because of an inherent problem with peer reviewing in SWoRD or if it is because of something about my assignments or my rubric. And to be fair, I have only completed one full cycle (two drafts of a paper) in SWoRD, so students are also still getting used to the system (if you missed it, I discussed the basics of SWoRD in my last post).
[Update: for my post-semester thoughts, and clarification of the grading, see my August follow-up post]
Some background: the first assignment (copy can be found here) was for students to write a very short (300 words + graph) data summary, based on the latest BLS Employment Situation report. I had comment prompts and rubrics related to three general categories: economic content, the graphic, and the writing. One thing I realized is that I probably had too many prompts (there were 9 total to go with the 3 rubrics), so for future assignments, I will condense them. I only skimmed through the first drafts, just to make sure that students were on the right track, and did not give individual feedback; instead, I made some comments in class about a few common problems I saw. I did grade the second drafts, both marking up the papers and completing the same 7-point numeric rubrics that the students did.
What I saw in those final drafts were a lot of problems with writing about content (for example, trying to explain how an increase in discouraged workers is related to a drop in the unemployment rate but not doing it well at all, partly because I think the students weren't super-clear on the concept to begin with). Those things should have been caught by their peers (one of the comment prompts specifically asked if there were any economic concepts that the author may not be using correctly) but when I looked at the comments, references to content issues were quite rare. I am not sure if students did not feel comfortable enough with the content themselves to correct their classmates, or they simply did not catch the errors.
Another issue was that student assessments of the graphics were pretty superficial. I stressed to the students that graphics should be self-sufficient, i.e., that labels, title, etc., should be descriptive enough that the reader doesn't have to hunt through the text to figure out what's going on. At the same time, the graphic should be integrated with, and complement, the text, not just be tacked on at the end. Students were not very good at assessing either of those issues, though I may have to put some of the blame on comment prompts that may not have been specific enough.
I also realized that there is nothing in the comment prompts or rubrics to address 'administrative' issues like whether the student had a title/headline (let alone a good one), had stayed close to the assigned length (or included the required word count at the end), or formatted the paper correctly, including citations. These are not huge issues but when I grade papers, I tend to knock a few points off if students don't actually follow directions. I am not sure how to incorporate this into SWoRD, other than maybe try to add it to the writing rubric (I don't feel that making a separate rubric would be appropriate because SWoRD weights all the different rubrics equally and this really doesn't seem as important as the other types of issues).
For 8 out of 26 students, my grade was higher than the writing grade assigned by SWoRD but for the majority, my grade was lower and in some cases, by a significant amount (the biggest difference was 19 points). Part of that is due to the fact that the SWoRD scores already incorporate a curve (the instructor can set the mean and standard deviation). I really wish the raw scores were also reported but since I will ultimately curve grades anyway, I guess it's not that big a deal. But I definitely do not feel comfortable using the SWoRD scores by themselves as the final grade for the assignment. My solution for the first assignment was to average my grade in with the SWoRD writing scores.
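To make the curving concrete: SWoRD reports scores already rescaled to an instructor-chosen mean and standard deviation, which is why the raw scores and my grades can diverge. A minimal sketch of how such a curve might work (the actual SWoRD formula isn't documented here, so the z-score rescaling and the averaging helper below are my assumptions, purely for illustration):

```python
from statistics import mean, stdev

def curve(raw_scores, target_mean=80.0, target_sd=10.0):
    """Rescale raw peer scores to a chosen mean and standard deviation
    (a standard z-score curve; assumed, not SWoRD's documented method)."""
    m, s = mean(raw_scores), stdev(raw_scores)
    return [target_mean + target_sd * (x - m) / s for x in raw_scores]

def blended_grade(instructor_grade, sword_score):
    """My workaround for the first assignment: average my grade
    with the SWoRD writing score."""
    return (instructor_grade + sword_score) / 2

# Hypothetical averages from the 7-point rubrics
raw = [5.2, 6.1, 4.8, 6.8, 5.5]
curved = curve(raw)
```

The point the sketch makes is that once scores pass through a curve like this, the reported number reflects a student's position in the distribution, not the raw rubric average — which is exactly why seeing the raw scores would be useful.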
For the next assignment, I will have to re-vamp the comment prompts and rubrics. I also will be reading the first drafts more closely, and plan to fully grade the final drafts again. The total assignment grade also incorporates the reviewing scores, which I left alone. I have some issues with those scores as well, which I'll detail in another post.
Related posts:
Peer review with SWoRD
More about SWoRD reviewing
SWoRD follow-up
Also related: Other peer reviewing tools
Seems like "calibration" - i.e. getting students closer to providing good and accurate scores & feedback - is really important. Do you know if the SWoRD folks recommend any kind of process for that? I can see pedagogical value in taking the students through some exercises that would help - but it is more time devoted to that rather than something else.