Welcome new readers!

The "New to the blog? Start here" page will give you an overview of the blog and point you to some posts you might be interested in. You can also subscribe to receive future posts via RSS, Facebook or Twitter using the links on the right-hand side of the page, or via email by entering your address in the box. Thanks for reading!

Tuesday, August 23, 2011

Back-to-school ideas

  • A recent New York Times article points out that many children's books teach economic concepts (hat tip to Alex Tabarrok). If that article piques your interest, the Council for Economic Education has a whole book that provides examples of children's stories that can be used to teach economics, including questions for students and follow-up activities. There's also a 2007 article by Yana V. Rodgers, Shelby Hawthorne and Ronald C. Wheeler, "Teaching Economics Through Children's Literature in the Primary Grades," in The Reading Teacher 61(1), pp. 46-55. That article lists the 'top five' books for a number of specific concepts; the full list of several hundred titles can be found at http://econkids.rutgers.edu/, an entire website devoted to using children's literature to teach economics (also mentioned in a follow-up NYT post on Economix). I should point out that although the obvious audience for these sorts of lessons is younger children, I can also imagine using children's books as the basis for an assignment for older students (for example, give them a list of the books and have them identify the key economic concepts associated with each, thus reinforcing the idea that economics is everywhere).
  • Tutor2u describes a first-day "golf" game to see if students are familiar with current events, using an included PowerPoint file (note: the file provided with that post focuses on the U.K. and European Union, but the questions could easily be adapted for American students). The general set-up would work well for any team contest: each question has four answer choices, and students can choose to submit a single response (an 'eagle' if they get it correct), two possible answers (a 'birdie' if one of the two is correct), or three answers (for 'par'). If none of their answers are correct, they get a 'bogey'. Looks like a neat approach!
  • If you aren't on the tch-econ mailing list (why aren't you?), you missed Bill Goffe's message about a set of videos by Dr. Stephen Chew (a cognitive psychologist) on how to study effectively. Chew uses cognitive science (what we know about how people learn) to explain not only what students should do but why. These would be great to show and discuss with college freshmen, though I think they'd be useful to students at any level. The next time a student asks you how to do well in your class, I'd suggest pointing them to these videos.

Friday, August 19, 2011

Other peer reviewing tools

As I mentioned at the end of my last post, SWoRD does provide an infrastructure that makes it easier to have students do peer review - students can submit papers electronically, the system can randomly assign multiple anonymous reviewers, I can create specific comment prompts so reviewers must give both numeric and open-ended comments, and students can back evaluate the reviews to indicate how helpful they were (or weren't). Given that I am a firm believer in the value of the peer review process overall, I would probably continue to use SWoRD if there were no other options that could serve the same function. But if I'm not going to use the grades generated by SWoRD (or if I have to do a lot of work to make those grades work for me), then I do have other options. Each would require some tweaking to do exactly what I want, but from what I can tell, they all offer some advantages over SWoRD as well. Please note that I have not yet actually used any of the three tools I mention below; what follows are my impressions, based largely on what I've seen while playing around with them and what I've heard from other people.

*** Note: If you don't care so much about having students do peer review (for example, if your writing assignments emphasize content rather than the writing itself), Bill Goffe pointed out a site called SAGrader, also mentioned in a recent Chronicle article, that can automate the grading of essays. It doesn't look like they have any economics assignments in their library, so you'd have to work with them to create the assignments and rubrics, but it looks like it could be pretty great for those teaching large classes.

Option 1: Turnitin's PeerMark
If your campus uses Turnitin, this seems like the best option for peer review of writing assignments. Most faculty are probably already familiar with (or have heard of) Turnitin's plagiarism detection tool, now called OriginalityCheck. The PeerMark tool is integrated with OriginalityCheck and another tool called GradeMark (I'm not sure if a campus can subscribe to one of these tools and not the others, or if they are always bundled; we have all three at my school, integrated through Blackboard). Even if you aren't interested in peer reviewing, the GradeMark tool is kind of neat too - you can set up grading rubrics and it's pretty easy to insert your comments. With PeerMark, students submit their files online and the system can assign reviewers randomly, you can assign them manually, or students can select which papers they review (or you can use a mix of all three). You can decide how many reviews each student must do and can make the papers and reviews anonymous (or not); there is also an option to allow (or not) students who did not submit a paper to still review. You can set up both open-ended comment prompts and scale-response questions (i.e., discrete categories, like a 1-to-5 scale) for the reviewers. The interface also allows comments to be inserted at specific points in the paper, similar to comments in Microsoft Word (so, for example, you can insert "This sentence needs a comma" right at the sentence instead of writing "The second sentence of the third paragraph on the first page needs a comma"). The instructor sets the dates and times when papers are due, when they become available to reviewers, and when reviews become available to authors.

PeerMark has a way to simply give students full credit for reviews that meet certain criteria (i.e., all review questions are answered, and you can set minimum length requirements for open-ended responses), or you can grade the reviews manually. You could also have students 'back evaluate' the reviews separately (I'd probably use a survey in Blackboard) and use those evaluations to grade the reviews, or have them count toward the review grade. It would also technically be possible to use the reviewers' scores on the scale questions as part of a writing grade for the authors (that is, similar to what SWoRD does, though you'd have to do the calculations yourself), but from what I can tell, you'd have to do some cutting and pasting to get the scores out and into a spreadsheet.
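If you do go the cut-and-paste route, the calculation itself is simple once the scale scores are in a spreadsheet. Just to illustrate, here's a quick sketch in Python; the file and column names are placeholders I made up, not anything PeerMark actually exports:

```python
import pandas as pd

# Hypothetical export: one row per (author, reviewer, question) with the scale
# score, assuming a 1-to-5 scale. These column names are made up for illustration.
scores = pd.read_csv("peermark_scores.csv")

# Average each reviewer's ratings of a given paper first, then average across
# reviewers, so that no single reviewer dominates an author's writing score.
per_reviewer = scores.groupby(["author", "reviewer"])["score"].mean()
writing_scores = per_reviewer.groupby(level="author").mean()

# Rescale to a percentage of the 5-point maximum for the gradebook.
print((writing_scores / 5 * 100).round(1))
```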
Pros: Like SWoRD, PeerMark automates a lot of the process (assigning reviewers, etc.) so students can get/give feedback from multiple peers, but in contrast to SWoRD, the system has a lot of flexibility in terms of setting options, and students can easily insert comments and mark up the papers directly.
Cons: Only available if your campus already uses Turnitin.

Option 2: Google Docs
Profhacker has a post about using Google Docs Forms to run a peer-review writing workshop. Although that post is about an in-class workshop, I think everything would apply to out-of-class reviewing as well. The basic gist is that students submit their papers via Google Docs, then use a Google Docs Form to complete their reviews. Forms allow for open-ended comment prompts as well as questions with discrete-choice responses, and Form responses are recorded in a Google Docs spreadsheet. Things wouldn't be quite as automated as with PeerMark - the instructor would have to match up papers with reviewers and manually keep track of whether students met deadlines - but on the other hand, the review comments will already be collected for you, so if you want to use the scores for grading in some way, that should be easier.
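For the matching step, a random shuffle followed by a round-robin assignment is enough to guarantee that every paper gets the same number of reviewers and that nobody reviews their own paper. A rough sketch (the student names are obviously made up):

```python
import random

students = ["Ana", "Ben", "Carla", "Deshawn", "Elena", "Farid"]
k = 3  # reviews per student; must be less than the number of students

# Shuffle once, then have each student review the next k classmates in the
# (circular) shuffled order - no self-reviews, and every paper gets k reviewers.
random.shuffle(students)
assignments = {
    reviewer: [students[(i + offset) % len(students)] for offset in range(1, k + 1)]
    for i, reviewer in enumerate(students)
}

for reviewer, papers in assignments.items():
    print(f"{reviewer} reviews: {', '.join(papers)}")
```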
Pros: Google Docs is free to both the instructor and students, and review comments and scores are recorded in a spreadsheet, so you can manipulate them relatively easily. If you want to use the reviewer comments and scores to create grades, you can create your own algorithm, and you have a lot more flexibility with things like deadlines than with SWoRD.
Cons: The process is not as automated as other options; for example, if you want student papers to be anonymous, you'll have to figure out a way to do that outside the system (maybe have students create pseudonyms they use all semester?).

Option 3: Calibrated Peer Review
This is the option I am least familiar with, but the general idea is that before reviewing their peers' work, students must first evaluate some sample essays, and they get feedback on how good a job they do with those evaluations. From what I can tell, the calibration exercises require students to respond to discrete-choice questions (e.g., 'Does the essay have a clear thesis statement? Yes or no'). The feedback they get is a score for how many questions they answered 'correctly' (i.e., with the same answer as the professor), along with any additional comments the instructor wants to add about specific questions. Once students pass the calibration exercises, they review three of their peers' papers (I'm not sure if you can set it to be more or fewer than three), and they must review their own paper as well. I don't think it's possible to have students respond to open-ended review questions; it looks like all the review prompts require a discrete response. The system does generate writing and reviewing scores that could be used toward grades. To get the writing scores, the reviewers' responses to the reviewing questions are weighted based on how students did on the calibrations (the higher a student's calibration score, the more weight given to that student's actual reviews). The system also generates a reviewing score for each student by comparing their responses to the weighted average of the other reviewers of the same paper, plus it generates a self-assessment score that compares a student's self-evaluation to the other reviews. Because I haven't used CPR myself, I don't know if the scoring has any of the same issues as SWoRD, but my assumption is that the calibration stage means there is more consistency across reviewers, so the scores should be more consistent as well.
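To make the weighting idea concrete, here is roughly what I understand it to mean, sketched with made-up numbers (this is just my reading of how CPR works, not their actual formula):

```python
# One paper, three reviewers. Each reviewer's rating is paired with that
# reviewer's calibration score (how well they evaluated the sample essays).
# All numbers are invented for illustration; this is NOT CPR's real algorithm.
reviews = [
    {"rating": 8, "calibration": 0.9},   # did well on the calibration essays
    {"rating": 4, "calibration": 0.4},   # struggled with the calibration essays
    {"rating": 7, "calibration": 0.8},
]

# Weight each rating by the reviewer's calibration score, so better-calibrated
# reviewers count for more in the author's writing score.
total_weight = sum(r["calibration"] for r in reviews)
writing_score = sum(r["rating"] * r["calibration"] for r in reviews) / total_weight

print(round(writing_score, 2))  # 6.86, versus an unweighted average of 6.33
```

The weighted score ends up closer to the ratings from the well-calibrated reviewers, which is the whole point of the calibration stage.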
Pros: CPR gives students lots of guidance for being good reviewers (which, ultimately, should mean more useful feedback for the writers). I should say that feeling ill-equipped to give useful reviews was one of my students' biggest complaints, so this aspect of CPR is really appealing. The way the writing and reviewing scores are generated also seems more transparent than in SWoRD.
Cons: No open-ended comments from reviewers; major prep cost to set up the calibration examples (though presumably a one-time fixed cost).

Personally, I will probably use PeerMark in the spring when I teach the writing class again, but I may try to replicate some aspects of CPR by giving students more examples of 'good' and 'bad' papers and reviews.

Wednesday, August 17, 2011

SWoRD follow-up

I really should have gotten back to this sooner, but for those wondering how things went with SWoRD (the peer review writing site I used with my writing class in the spring), my overall reaction is that while it might be useful for some people, I probably won't use it the next time around. For those who missed my earlier posts, I discussed the basics of SWoRD, whether SWoRD can replace instructor grading, and some first reactions to SWoRD's reviewing process (after the first assignment) back in March. I made some tweaks as the semester progressed, but overall, I have to say the experience was still pretty rough.

To briefly recap, SWoRD is an online peer review system where 1) students upload their papers, 2) the system randomly assigns other students to anonymously review those papers, 3) peer reviewers give both open-ended comments and numeric ratings in response to instructor-generated prompts, 4) authors 'back evaluate' their reviews, meaning they give a numeric rating of how helpful the open-ended comments were, and 5) the system uses the numeric ratings from the reviewers to generate a writing score for the authors, and uses the back evaluation ratings from the authors to generate a reviewing score for the reviewers. That last step - having the writing and reviewing scores generated entirely by the students themselves - is the main benefit of SWoRD relative to other online peer review options like Calibrated Peer Review or Turnitin's PeerMark. But my opinion is that the system has some problems that make those grades somewhat suspect. Unfortunately, I'm not sure there really is any satisfactory way to automate that process.
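Just to make step 5 concrete: at its simplest, the writing score amounts to averaging the reviewers' numeric ratings of a paper, and (part of) the reviewing score amounts to averaging the back evaluation ratings a reviewer's comments received. The sketch below is only that bare-bones idea, with made-up numbers; SWoRD's actual algorithm does more than this (it weights the ratings and adds an 'accuracy' component, which is where my problems started).

```python
# Bare-bones illustration of step 5 with made-up numbers (NOT SWoRD's actual
# algorithm, which weights ratings and includes an 'accuracy' component).

# Numeric ratings one author's paper received from three reviewers (1-7 scale).
ratings_received = [6, 5, 4]

# Back evaluation ratings one reviewer's comments received from the authors
# they reviewed (how helpful the authors found the comments).
back_evaluations = [5, 4, 5]

writing_score = sum(ratings_received) / len(ratings_received)
helpfulness_score = sum(back_evaluations) / len(back_evaluations)

print(f"writing score: {writing_score:.2f} out of 7")
print(f"reviewing (helpfulness) score: {helpfulness_score:.2f}")
```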

"Bad" reviewers may not be penalized
For starters, my original understanding of how the SWoRD grading system works was incorrect. I relied on some research papers posted on the SWoRD site (papers published a few years ago), but the system has since been changed, and that change is not explained anywhere on the site. The earlier papers said that the writing grades were weighted in such a way that if the score from one reviewer was substantially different from the scores from other reviewers, that score would be given less weight. However, that is not actually the case, which I discovered when one of my better students kept bugging me about his grade on one particular assignment. When I looked at the scores, there was one reviewer who gave 1's and 2's (out of 7) to all the papers he reviewed. Since that reviewer also did not provide very helpful comments, my guess is that he was either confused about the scoring or just lazy and not taking it seriously. Based on my original understanding, I thought the fact that his scores were so much lower than the other reviewers' should have lowered that reviewer's 'accuracy' reviewing grade, and his scores should have been given a lot less weight for the students he reviewed. Neither of those things happened (his reviewing grade was actually somewhat higher than the class average, and his scores definitely reduced the writing scores for those papers).

When I asked the SWoRD team about this, the response was that the "accuracy" part of the reviewing grade is based on rank orderings, not a comparison to the other ratings; that is, as long as the reviewer is giving higher ratings to 'better' papers and lower ratings to 'worse' papers, the system considers the ratings to be 'accurate'. The message from the SWoRD team said that they had "decided it wasn't valid to penalize someone for using a different range of the scale because often they were actually the most valid rater, with other students rating too high overall. If the instructor decides [a student] was unreasonably harsh, the thing to do is give [that student] a lower reviewing grade." On the one hand, I understand why they made that change, since I definitely noticed that my better students tended to give somewhat lower scores, on average (along with better comments justifying their scores), than their classmates. On the other hand, if I have to go through and scrutinize all the scores to see whether students are scoring appropriately, that seems to defeat the whole purpose of having a scoring algorithm in the first place.
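To see why a rank-order criterion lets a harsh reviewer off the hook, here's a little illustration with made-up numbers (I don't know exactly which statistic SWoRD uses, so this just uses a plain Spearman rank correlation as a stand-in):

```python
from scipy.stats import spearmanr

# Average rating each of four papers received from the other reviewers (1-7 scale).
other_reviewers = [6.5, 5.0, 4.0, 2.5]

# A harsh reviewer who scores everything low but preserves the same ordering...
harsh_reviewer = [2.0, 1.5, 1.0, 1.0]

# ...versus a reviewer whose scores are in a reasonable range but scramble the ordering.
scrambled_reviewer = [4.0, 6.0, 3.0, 5.5]

rho_harsh, _ = spearmanr(other_reviewers, harsh_reviewer)
rho_scrambled, _ = spearmanr(other_reviewers, scrambled_reviewer)

print(round(rho_harsh, 2))      # about 0.95: 'accurate' despite the rock-bottom scores
print(round(rho_scrambled, 2))  # 0.0: 'inaccurate' despite the reasonable-looking scores
```

The harsh reviewer's numbers are far below everyone else's but preserve the ordering, so any accuracy measure based purely on rank order will rate him as nearly perfect, which seems to be what happened in my case.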

Incomplete information for back evaluations
Based on my reading of the research papers, in the earlier versions of the system, students could not submit back evaluations until after they turned in their second draft, but they did see both the comments and the numeric scores from the reviewers (requiring them to turn in the second draft before doing the back evaluations was a way to make sure students actually had to process the comments before evaluating them). In the current version, students do not get to see the numeric reviewing scores until after they have submitted their back evaluations. Again, I can understand why this change was made; I can certainly imagine that some students would 'retaliate' for low reviewing scores by giving low back evaluation scores. But on the other hand, I saw many instances where reviewers gave scores that were not consistent with, or explained by, their open-ended comments (for example, a vague comment that 'everything looks fine' followed by a score of 3 or 4 out of 7). In my opinion, those reviewers should get lower reviewing scores, but the only way to accomplish that would be for the instructor to go in and manually review all the scores and comments, again defeating the purpose of having the scoring automated.

Reviewing itself is useful (but I'm still learning)
Given the problems with the scoring, I was expecting more negative comments from the students at the end of the semester, but evaluations of the system were actually relatively positive, though fewer than half thought I should continue to use it in the future. Many of the critical comments were about the reviewing process itself (e.g., wanting more guidance on how to do good reviews, feeling like classmates didn't take it seriously enough or didn't give useful feedback, saying they should only have to review three papers instead of four or five, etc.), rather than about the SWoRD system. The SWoRD-specific comments had to do with things like the deadlines being at 9pm, which was hard for students to remember (this isn't something the instructor can change), or the files being converted to PDFs, which made it hard to refer to specific points in the papers (versus hard copies or Word docs that could be marked up). But students did seem to see the value in the reviewing, with several commenting that doing the reviews helped them see where their own papers needed improvement.

So to sum up, I do think that the SWoRD system can still be useful for some instructors; if nothing else, it provides an infrastructure for students to submit papers, have reviewers randomly and anonymously assigned, and give/get feedback from multiple readers. You don't have to use the scores that the system generates. I particularly think SWoRD could be good for shorter assignments, where the evaluation criteria are relatively objective (and thus reviews might be more consistent). But if you aren't going to use the grades generated by the system, I think there may be other, better tools that could be used to facilitate peer reviewing; I'll talk about some of those options in my next post...

Sunday, August 14, 2011

Getting off-course

It's a frustrating time to be an economist, though I can't decide if it's worse to be a micro- or macro-economist these days. I have to assume that many macro folks are tearing their hair out over the stupid things Washington is doing and the even stupider things the media is often saying, but at least when someone asks a macro person what they think of all this, they are presumably in a much better position to talk about it than most micro people (not that that stops me from talking anyway; I'm just sayin' that as a micro person, I don't spend my life studying these things and really, my understanding is only slightly better than what we teach in Econ 101). I've almost entirely stopped reading anything about the economy from regular news outlets because I kept seeing things that made me wonder if I had some basic economic concepts totally wrong, only to realize that my understanding is fine but reporters apparently didn't learn anything in Econ 101. So I've largely kept up with things this summer through blogs written by economists, though sadly, the news isn't any less depressing when analyzed accurately...

But regardless of my own somewhat basic grasp of macro policy, one thing that has crossed my mind a few times this summer is that I hope economists are talking with their classes about what's happening, even if it isn't directly related to the course material. It seems to me that the issues the country has been grappling with - how important debt reduction is, why the unemployment picture hasn't been improving and what needs to be done about it, etc. - are things our majors should be aware of, even if they don't happen to be enrolled in a macro class. Perhaps even more important, they should be thinking critically about what is going on and what the media is saying about it. For example, does it make any sense to them that the stock market plunge was 'a result' of the S&P downgrade, as many news analysts have been saying? Does it make any sense to them that huge cuts in government spending will somehow reduce unemployment, as some politicians have been claiming?

When I was teaching Principles, it wasn't a big deal to bring in current events, even if they were macro issues and the class was micro (since usually, macro issues can still be discussed in terms of core principles like incentives or supply and demand). But when teaching more narrowly focused upper-division classes, where the course subject may have nothing to do with the events that are happening, it seems harder to justify taking class time to talk about things that are not directly course-related. Still, it seems to me that we should, at least given the historic magnitude of what's going on right now. I'm not sure how I'll fit it into my data analysis course in the fall, but when the 'super-committee' comes out with its recommendations later this year, I will probably try to spend at least part of a class talking about them. What about you? Do you ever take class time to discuss 'off-topic' current events?

Tuesday, August 9, 2011

Do you give credit for participation?

[Dilbert comic strip, via Dilbert.com]

This morning's Dilbert was perfectly timed, as I was in the middle of trying to figure out the grade weights for my fall Econ for Teachers class and, as usual, having a huge mental debate over how much weight to give 'participation'. A couple of Teaching Professor posts this summer hit on the same issue, so it's already been at the back of my brain. In my data analysis course, participation is rolled into the team grades and that takes care of it; I've found that students have a strong tendency to 'punish' their peers for low participation by giving them low peer evaluation scores. But with the Econ for Teachers class, I do a lot of formative-type assessments that I'm not going to "grade" for content (e.g., student reactions to readings, where I ask them to relate the reading to something in their own experience), so I have to decide how much credit to give students simply for completion. I want students to take those assignments seriously, and the economist in me believes in incentives, but at the same time, I don't want students doing things just for the points; ideally, I want them to be intrinsically motivated. To a certain extent, I think carefully crafted assignments can go a long way toward that - the intrinsic motivation comes when students see the purpose of what I'm asking them to do. But I don't feel like I can just ask them to do it and not give them any points at all (though really, why not?).

And then there's participation in the form of class discussion - how in the world does anyone ever assign credit for that? I don't usually try; I just rely on well-formed questions (which can often mean silence, when the questions aren't as well-formed as I had thought!). Again, group work helps; even in classes where I don't use formal teams, I try to have students talk in small groups when I really want them to discuss something. Given my class sizes, full-class discussion is simply never going to involve everyone. But should I give students credit for trying? For showing up? Shouldn't that be a basic expectation for all students? That is, why give points for doing what is expected (participating in your own education)? I'm guessing I'll still be asking these questions for years...