As I mentioned at the end of my last post, SWoRD does provide an infrastructure that makes it easier to have students do peer review - students can submit papers electronically, the system can randomly assign multiple anonymous reviewers, I can create specific comment prompts so reviewers must give both numeric ratings and open-ended comments, and students can back-evaluate the reviews to indicate how helpful they were (or weren't). Given that I am a firm believer in the value of the peer review process overall, I might continue to use SWoRD if there were no other options that could serve the same function. But if I'm not going to use the grades generated by SWoRD (or if I need to do a lot of work to make those grades work for me), then I do have other options. Each would require some tweaking to do exactly what I want, but from what I can tell, they all provide some advantages over SWoRD as well. Please note that I have not yet actually used any of the three tools I mention below; what follows are my impressions, based largely on what I've seen while playing around with them and what I've heard from other people.
*** Note: If you don't care so much about having students do peer review (for example, if your writing assignments emphasize the content more than the writing itself), Bill Goffe pointed out a site called SAGrader, also mentioned in a recent Chronicle article, that can automate the grading of essays. It doesn't look like they have any economics assignments in their library, so you'd have to work with them to create the assignments and rubrics, but it looks like it could be pretty great for those teaching large classes.
Option 1: Turnitin's PeerMark
If your campus uses Turnitin, this seems like the best option for peer review of writing assignments. Most faculty are probably already familiar with (or have heard of) Turnitin's plagiarism detection tools, now called OriginalityCheck. The PeerMark tool is integrated with OriginalityCheck and with another tool called GradeMark (I'm not sure if a campus can subscribe to one of these tools and not the others, or if they are always integrated; we have all three at my school, integrated through Blackboard). Even if you aren't interested in peer review, the GradeMark tool is kind of neat too - you can set up grading rubrics and it's pretty easy to insert your comments. With PeerMark, students submit their files online, and reviewers can be assigned randomly by the system, assigned manually by you, or chosen by the students themselves (or you can use a mix of all three). You can decide how many reviews each student must do and can make the papers and reviews anonymous (or not); there is also an option to allow (or not) students who did not submit a paper to still review. You can set up both open-ended comment prompts and scale-response questions (i.e., discrete categories, like a 1 to 5 scale) for the reviewers. The interface also allows comments to be inserted at specific points in the paper, similar to comments in Microsoft Word if you're familiar with those (so, for example, you can insert "This sentence needs a comma" right at the sentence instead of writing "The second sentence of the third paragraph on the first page needs a comma"). The instructor sets the dates and times when papers are due, when they become available to reviewers, and when reviews become available to authors.
PeerMark has a way to simply give students full credit for reviews that meet certain criteria (i.e., all review questions are answered, and you can set minimum length requirements for open-ended questions), or you can grade the reviews manually. You could also have students 'back evaluate' the reviews separately (I'd probably use a survey in Blackboard) and use those evaluations to grade the reviews, or have them count as part of the grade. It would also technically be possible to use the reviewers' scores on the scale questions as part of a writing grade for the authors (that is, similar to what SWoRD does, though you'd have to do the calculations yourself), but from what I can tell, you'd have to do some cutting and pasting to get the scores out and into a spreadsheet.
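If you did want to turn those scale scores into a writing grade yourself, the arithmetic is simple once the scores are in a spreadsheet or a file. Here is a rough sketch in Python of the kind of calculation I have in mind; the CSV layout (author, reviewer, score columns) and the 0-100 rescaling are my own assumptions about how you might organize the pasted data, not anything PeerMark produces for you.

# Rough sketch: average each author's scale scores into a writing grade.
# Assumes a hand-built CSV with columns "author", "reviewer", "score" (1-5);
# the column names and 0-100 rescaling are my assumptions, not PeerMark output.
import csv
from collections import defaultdict

scores = defaultdict(list)
with open("peermark_scores.csv", newline="") as f:
    for row in csv.DictReader(f):
        scores[row["author"]].append(float(row["score"]))

for author, ratings in scores.items():
    avg = sum(ratings) / len(ratings)   # mean of all reviewers' ratings for this author
    grade = (avg - 1) / 4 * 100         # rescale the 1-5 scale to 0-100
    print(f"{author}: average rating {avg:.2f}, writing grade {grade:.1f}")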
Pros: Like SWoRD, PeerMark automates a lot of the process (assigning reviewers, etc.) so students can give and get feedback from multiple peers, but in contrast to SWoRD, the system offers a lot of flexibility in setting options, and students can easily insert comments and mark up the papers directly.
Cons: Only available if your campus already uses Turnitin.
Option 2: Google Docs
Profhacker has a post about using Google Docs Forms to run a peer-review writing workshop. Although that post is talking about an in-class workshop, I think everything would apply to out-of-class reviewing as well. The basic gist is that students submit their papers via Google Docs, then use a Google Docs Form to complete their reviews. Forms allow for open-ended comment prompts as well as questions with discrete-choice responses, and Form responses are recorded in a Google Docs spreadsheet. Things wouldn't be quite as automated as with PeerMark - the instructor would have to match up papers with reviewers, and you'd have to manually keep track of whether students met deadlines - but on the other hand, the review comments are already collected for you, so if you want to use the scores for grading in some way, that should be easier.
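Since the reviewer matching isn't automated, here is one simple way you might do it yourself before sharing the papers. This is just a sketch of a round-robin scheme (with a made-up roster), not anything built into Google Docs: each student reviews the next few papers in a shuffled list, so nobody reviews their own paper and every paper gets the same number of reviewers.

# Rough sketch of a round-robin reviewer assignment (not a Google Docs feature).
# Each student reviews the next NUM_REVIEWS papers in a shuffled list, so no one
# reviews their own paper and every paper gets the same number of reviewers.
import random

students = ["Alice", "Bob", "Carmen", "Dai", "Elena", "Farid"]  # hypothetical roster
NUM_REVIEWS = 3

random.shuffle(students)
n = len(students)
for i, reviewer in enumerate(students):
    assigned = [students[(i + k) % n] for k in range(1, NUM_REVIEWS + 1)]
    print(f"{reviewer} reviews papers by: {', '.join(assigned)}")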
Pros: Google Docs is free to both the instructor and students, and review comments and scores are recorded in a spreadsheet so you can manipulate them relatively easily. If you want to use the reviewer comments and scores to create grades, you'd have to create your own algorithm, but you'd have a lot more flexibility with things like deadlines than with SWoRD.
Cons: The process is not as automated as other options; for example, if you want student papers to be anonymous, you'll have to figure out a way to do that outside the system (maybe have students create pseudonyms they use all semester?).
Option 3: Calibrated Peer Review
This is the option I am least familiar with, but the general idea is that before reviewing their peers' work, students must first evaluate some sample essays, and they get feedback on how good a job they did with those evaluations. From what I can tell, the calibration exercises require students to respond to discrete-choice questions (e.g., 'Does the essay have a clear thesis statement? Yes or no'). The feedback they get is a score for how many questions they answered 'correctly' (i.e., with the same answer as the professor), along with any additional comments the instructor wants to add about specific questions. Once students pass the calibration exercises, they review three of their peers' papers (I'm not sure if you can set it to be more or fewer than three), and they must review their own paper as well. I don't think it's possible to have students respond to open-ended review questions; it looks like all the review prompts require a discrete response. The system does generate writing and reviewing scores that could be used toward grades. To get the writing scores, the reviewers' responses to the review questions are weighted based on how students did on the calibrations (the higher a student's calibration score, the more weight given to that student's actual reviews). The system also generates a reviewing score for each student by comparing their responses to the weighted average of the other reviewers of the same paper, plus it generates a self-assessment score that compares a student's self-evaluation to the other reviewers' assessments. Because I haven't used CPR myself, I don't know if the scoring has any of the same issues as SWoRD, but my assumption is that the calibration stage means there is more consistency across reviewers, so scores should be more consistent as well.
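To make the weighting idea concrete, here is a small sketch of how a calibration-weighted writing score could be computed. I don't know CPR's exact formula, so the 0-1 calibration weights, the made-up numbers, and the deviation-based reviewing comparison below are my own guesses at the general approach, not CPR's actual algorithm.

# Rough sketch of calibration-weighted scoring, in the spirit of CPR (not its actual formula).
# Each reviewer's rating of a paper is weighted by that reviewer's calibration score.
reviews = {  # reviewer -> (calibration score 0-1, rating of this paper 1-10); made-up numbers
    "reviewer_A": (0.9, 8),
    "reviewer_B": (0.5, 6),
    "reviewer_C": (0.7, 9),
}

total_weight = sum(cal for cal, _ in reviews.values())
writing_score = sum(cal * rating for cal, rating in reviews.values()) / total_weight
print(f"writing score: {writing_score:.2f}")

# A reviewing score could then reward agreement: compare each reviewer's rating
# to the weighted average of the *other* reviewers of the same paper.
for name, (_, rating) in reviews.items():
    others = [(c, r) for other, (c, r) in reviews.items() if other != name]
    others_avg = sum(c * r for c, r in others) / sum(c for c, _ in others)
    print(f"{name}: deviation from others' weighted average = {abs(rating - others_avg):.2f}")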
Pros: CPR gives students lots of guidance for being good reviewers (which, ultimately, should mean more useful feedback for the writers). I should say that feeling ill-equipped to give useful reviews was one of my students' biggest complaints, so this aspect of CPR is really appealing. The way the writing and reviewing scores are generated seems more transparent than in SWoRD.
Cons: No open-ended comments from reviewers; major prep cost to set up the calibration examples (though presumably a one-time fixed cost).
Personally, I will probably use PeerMark in the spring when I teach the writing class again, but I may try to replicate some aspects of CPR by giving students more examples of 'good' and 'bad' papers and reviews.