Welcome new readers!

The "New to the blog? Start here" page will give you an overview of the blog and point you to some posts you might be interested in. You can also subscribe to receive future posts via RSS, Facebook or Twitter using the links on the right-hand side of the page, or via email by entering your address in the box. Thanks for reading!

Thursday, March 31, 2011

Where are the female economists?

I teach in a department where the full-time tenure/tenure-track faculty is 43 percent female (6 out of 14); it's an even 50 percent among tenured associate (2 out of 4) and full (4 out of 8) professors (for anyone following the math, that leaves 2 assistant professors, both male). For any non-economists reading this, those percentages are highly unusual, even for a non-Ph.D.-granting institution. According to the 2010 Report of the Committee on the Status of Women in the Economics Profession (full disclosure: I'm on the Board), at Ph.D.-granting institutions, only 10.7% of full professors and 21.8% of tenured associate professors are women; at liberal arts institutions, those numbers are 25% and 32.7%, respectively. I also work in a sub-field (economics of education) that tends to have a lot of women, both economists and non-economists. And I spend a lot of time thinking about, and talking to people about, teaching economics, which tends to be an area that attracts relatively more women.

My point with all this is that it is easy for me to forget that economics is an incredibly male-dominated field. But then I read things like the CSWEP report and Matthew Kahn's recent article asking where the female economics bloggers are, and I am reminded a) why I stick to my blissfully gender-balanced little sub-fields and b) that I should give thanks every day for my awesome department.

Kahn's question stems from his observation that while there are 52 women among the "top 1000" economists, none of them blog. In contrast, at least 14 men among only the "top 200" economists are bloggers. His definition of "top" comes from the rankings on RePEc (Research Papers in Economics). The article has prompted at least a couple of good responses - Jodi Beggs points out that a lot of economics blogging is argumentative, which women may prefer to avoid, and EconomistMom Diane Lim Rogers seems to suggest that the self-involved nature of blogging appeals more to men. Rogers also hits on something broader, about why there is a lack of women in economics overall (and also captures part of why I personally tend to dislike a lot of economists). She says, "It could be because we women often find disciplines that assume everything can be objectively, precisely, formulaically valued, very limiting at best and maybe downright wrong at worst." And commenters on both those posts point out that there are women economist bloggers out there and that Kahn's focus on 'excellent' economists is problematic (though I have to say, regardless of where they rank on publications, there are still relatively few female economics bloggers).

For me, the question raised in Kahn's headline is really the least of my concerns - forget 'where are the female economics bloggers', how about simply 'where are the female economists'? While Kahn does suggest that the absence of female bloggers might be a problem, I would argue that it is a far bigger problem that only 52 out of the "top 1000" economists are women! Of course, the low number is partly a function of the metric he's using (basically, publication in mainstream econ journals). A commenter on Beggs' post notes that a lot of women economists focus on topics that are less likely to be found in those journals (that certainly describes me, considering only about 20% of the items on my C.V. are listed on RePEc). But that just raises the question of why women economists are more likely to choose paths that are less 'mainstream'.

It seems to me that the sorts of issues that lead female economists to be less likely to blog are similar to why women are less likely to choose economics in the first place. But although I could go on (or perhaps 'go off' is a better way to put it) for days about why econ is so male-dominated, and why there are so few women at the 'top' of the profession (and why this is a problem for everyone), I always dread that conversation because I find that there is simply no way to discuss the issue without getting into all kinds of gender stereotypes/generalities that are sure to annoy someone. Kahn does it when he suggests that women economists are busy with 'home production' (which is offensive on many levels), but Beggs and Rogers do it too; I just happen to agree with their stereotypes, partly because they seem more flattering to women. Still, it is an issue worth discussing, and a particularly important one for those of us who care about economics education. Whatever the reasons for economics being such a male-dominated field currently, certainly what we do in our classrooms has a role in changing that in the future. So I guess that's one more thing to add to my list of 'stuff to write more about when I have time'...

Tuesday, March 22, 2011

Hot off the presses...

For those who were not able to participate personally in the AEA's Teaching Innovations Program (TIP), you can still get a taste of the project through a new book, Teaching Innovations in Economics: Strategies and Applications for Interactive Instruction. The first few chapters talk about the program itself and then there is a chapter on each of the interactive strategies that TIP focused on, i.e., cooperative learning, classroom experiments, interpretive discussion, formative assessment, context-rich problem solving, teaching with cases, and active learning in large-enrollment courses (full disclosure: I'm a contributor to one of the chapters - take a wild guess which one!). There are tons of good ideas, with solid advice from people who have implemented the techniques themselves.

I also just got a notice that the paperback version of The Invisible Hook: The Hidden Economics of Pirates is coming out in May. I haven't read it yet but just have to give props to a book that can (apparently in complete seriousness) claim, "Leeson argues that the pirate customs we know and love resulted from pirates responding rationally to prevailing economic conditions in the pursuit of profits."

And while I'm sharing cool resources, someone sent me a link to a list of 15 Fascinating TED Talks for Econ Geeks, most of which actually are pretty fascinating. I'm a big fan of TED talks - they are short enough that I can justify watching them as a break from whatever else I should be doing but usually educational/insightful enough that I still feel like I'm doing something productive while I'm procrastinating...


Wednesday, March 16, 2011

More about SWoRD reviewing

[If you missed them, I discussed the basics of SWoRD and whether SWoRD can replace instructor grading in earlier posts. I also have a follow-up post in August.]

In addition to generating a writing grade from students' numeric rubric scores, SWoRD also generates a reviewing grade. My understanding is that a student's reviewing grade is based half on that student's 'consistency' and half on the 'helpfulness' of that student's comments. The consistency score accounts for whether a student is differentiating among papers - if a student gives all high scores or all low scores, or if a student's scores differ from the other scores for the same papers, then the consistency score will be reduced (this also affects how much weight that student's scores are given in the writing score of the reviewed papers). The helpfulness score comes from 'back evaluations' that reviewees complete. The back evaluations are numeric scores on a five-point scale; the instructor can create specific rubrics for the back evaluation scale, but I simply told students to rate 1-5 in terms of whether they felt the comments were specific enough to be helpful in improving their paper, with 5 being the most helpful. The back evaluations are done before students can see the numeric scores that the reviewers gave, so the helpfulness score is based entirely on the open-ended comments.
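Just to make the mechanics concrete, here's a back-of-the-envelope sketch of how a grade like that could be computed. To be clear, this is my mental model of the general idea, not SWoRD's actual formula - the function names and scaling are my own assumptions, and my toy 'consistency' measure only captures deviation from the other reviewers, not the all-high/all-low issue:

```python
# Back-of-the-envelope sketch of a reviewing grade that is half
# 'consistency' and half 'helpfulness'. These formulas are my own
# assumptions about the general idea, not SWoRD's actual math.
from statistics import mean

def consistency(my_scores, class_means):
    """Penalize scores that deviate from the other reviewers' means.
    my_scores / class_means: parallel lists, one entry per reviewed
    paper, all on the 1-7 rubric scale. Returns a 0-1 score."""
    max_dev = 6.0  # largest possible gap on a 1-7 scale
    devs = [abs(m - c) for m, c in zip(my_scores, class_means)]
    return 1.0 - mean(devs) / max_dev

def helpfulness(back_evals):
    """Average the 1-5 back-evaluation ratings, rescaled to 0-1."""
    return (mean(back_evals) - 1) / 4

def reviewing_grade(my_scores, class_means, back_evals):
    return 100 * (0.5 * consistency(my_scores, class_means)
                  + 0.5 * helpfulness(back_evals))

# A reviewer who tracks the class fairly closely and gets decent
# back evaluations lands in the mid-80s:
print(reviewing_grade([6, 5, 3], [5.5, 5.0, 3.5], [4, 5, 3]))
```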

When I have done peer reviewing in the past, I have not graded the reviews, in part because I simply didn't know how I would assess them and in part because the peer reviewing was not anonymous (so I did not really trust asking students to 'rate' their reviewers). But I think students do take things more seriously when there is a grade attached, so from one perspective, I like that SWoRD generates a reviewing grade. I'm just not entirely convinced it's really a good measure of the students' work; as with the peer reviews themselves, I'm not sure how much of that is SWoRD and how much is my inexperience with the whole set-up.

One problem is 'training' students how to do back evaluations (this goes for the reviewing process as well). At least on the first assignment, students did not seem to discriminate much in their back evaluations. In particular, when reviewers would give vague and not-useful comments like, "Everything looks fine to me," many reviewees would give back evaluation scores of 5. At the same time, I saw at least a few cases where the reviewees just didn't agree with the comment and so the back evaluation score was lower, even if the comment was something the writer should have considered. For the second assignment, I gave the students a bit more guidance on what the different numeric ratings should mean; we'll see what happens with that.

Another issue that arose on the first assignment was students not taking the reviews of the final draft very seriously. A few students did not complete the second reviews, and several did not complete the back evaluations. I think this is partly because they did not think it would affect their own grade, but also because they did not see any point. I guess I should have known that would happen; although some of my students really are interested in improving their writing, I forget that many of them are only really interested in their grade. So I clarified the impact on their grade (see below), and for the second assignment, I also changed the reviewers for the second draft (rather than having them review the same paper again) and told them that their final assignment of the semester will be to choose one of their assignments and re-write it. So now, the comments on the final drafts may actually be incorporated into another draft and, hopefully, that will give students an incentive to take them a bit more seriously.

I think part of the confusion about the impact of skipping reviews or back evaluations stemmed from the way SWoRD computes the reviewing scores. If a student does some, but not all, of the assigned reviews, the reviewing score is simply averaged over the reviews that were completed. SWoRD has a separate category (the "Task" score) that tells you what percentage of reviews and back evaluations a student completed (so, for example, a student who does the reviews but skips the back evaluations will have a Task score of 50%). This Task grade can be given its own weight in the calculation of the final grade, but I would much prefer to interact it with the Writing or Reviewing scores. It's possible to do this by hand (i.e., export the data to Excel and do the calculations there), but that sort of defeats the purpose of doing everything in SWoRD since students have to go somewhere else to find out their final grade. What I ended up doing was weighting the Reviewing score by the Task score so, for example, if a student only did half their reviews, they only got half the credit they would have otherwise gotten for reviewing. That definitely got the attention of the students who skipped the back evaluations for the final draft!
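For concreteness, here's the spreadsheet-style adjustment I ended up doing, with made-up numbers (the function is just my own illustration of the arithmetic, not anything built into SWoRD):

```python
# Sketch of my workaround: scale the Reviewing score by the Task
# completion percentage, so skipping half the reviews and back
# evaluations costs half the reviewing credit. Numbers are made up.
def adjusted_reviewing(reviewing_score, task_pct):
    """task_pct: fraction of assigned reviews + back evals completed."""
    return reviewing_score * task_pct

# A student who did all the reviews but skipped the back evaluations
# (a Task score of 50%) loses half the reviewing credit:
print(adjusted_reviewing(90.0, 0.5))   # 45.0
print(adjusted_reviewing(90.0, 1.0))   # 90.0 for full completion
```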

So far, I have to say that my experience with SWoRD hasn't been as smooth as I had hoped but I guess that's why it's called a pilot...


Also related: Other peer reviewing tools

Sunday, March 6, 2011

Can SWoRD really replace instructor grading?

The short answer, for me, right now, is NO. However, I am not sure if that is because of an inherent problem with peer reviewing in SWoRD or if it is because of something about my assignments or my rubric. And to be fair, I have only completed one full cycle (two drafts of a paper) in SWoRD, so students are also still getting used to the system (if you missed it, I discussed the basics of SWoRD in my last post).
[Update: for my post-semester thoughts, and clarification of the grading, see my August follow-up post]

Some background: the first assignment (copy can be found here) was for students to write a very short (300 words + graph) data summary, based on the latest BLS Employment Situation report. I had comment prompts and rubrics related to three general categories: economic content, the graphic and the writing. One thing I realized is that I probably had too many prompts (there were 9 total to go with the 3 rubrics), so for future assignments, I will condense them. I only skimmed through the first drafts, just to make sure that students were on the right track, and did not give individual feedback; instead, I made some comments in class about a few common problems I saw. I did grade the second drafts, both marking up the papers and completing the same 7-point numeric rubrics that the students did.

What I saw in those final drafts was a lot of problems with writing about content (for example, trying to explain how an increase in discouraged workers is related to a drop in the unemployment rate but not doing it well at all, partly because I think the students weren't super-clear on the concept to begin with). Those things should have been caught by their peers (one of the comment prompts specifically asked if there were any economic concepts that the author may not be using correctly) but when I looked at the comments, references to content issues were quite rare. I am not sure if students did not feel comfortable enough with the content themselves to correct their classmates, or if they simply did not catch the errors.

Another issue was that student assessments of the graphics were pretty superficial. I stressed to the students that graphics should be self-sufficient, i.e., that labels, title, etc., should be descriptive enough that the reader doesn't have to hunt through the text to figure out what's going on. At the same time, the graphic should be integrated with, and complement, the text, not just be tacked on at the end. Students were not very good at assessing either of those issues, though I may have to put some of the blame on comment prompts that may not have been specific enough.

I also realized that there is nothing in the comment prompts or rubrics to address 'administrative' issues like whether the student had a title/headline (let alone a good one), had stayed close to the assigned length (or included the required word count at the end), or formatted the paper correctly, including citations. These are not huge issues, but when I grade papers, I tend to knock a few points off if students don't actually follow directions. I am not sure how to incorporate this into SWoRD, other than maybe trying to add it to the writing rubric (I don't feel that making a separate rubric would be appropriate because SWoRD weights all the different rubrics equally and this really doesn't seem as important as the other types of issues).

For 8 out of 26 students, my grade was higher than the writing grade assigned by SWoRD, but for the majority, my grade was lower - in some cases by a significant amount (the biggest difference was 19 points). Part of that is because the SWoRD scores already incorporate a curve (the instructor can set the mean and standard deviation). I really wish the raw scores were also reported, but since I will ultimately curve grades anyway, I guess it's not that big a deal. Still, I definitely do not feel comfortable using the SWoRD scores by themselves as the final grade for the assignment. My solution for the first assignment was to average my grade in with the SWoRD writing scores.
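For anyone curious what that curving might look like, here's a minimal sketch; the z-score rescaling (and the target values) are my guess at the general mechanics, not SWoRD's documented method:

```python
# Hypothetical sketch of curving raw scores to an instructor-set
# mean and standard deviation via z-score rescaling. This is a
# guess at the mechanics, not SWoRD's actual implementation.
from statistics import mean, stdev

def curve_scores(raw_scores, target_mean=85.0, target_sd=5.0):
    """Rescale raw scores so the class mean and SD match the targets."""
    mu, sd = mean(raw_scores), stdev(raw_scores)
    if sd == 0:  # everyone identical: just shift to the target mean
        return [target_mean for _ in raw_scores]
    return [target_mean + target_sd * (x - mu) / sd for x in raw_scores]

raw = [4.1, 5.6, 6.2, 3.8, 5.0]    # average rubric scores, 1-7 scale
print(curve_scores(raw))           # curved to mean 85, SD 5
```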

For the next assignment, I will have to re-vamp the comment prompts and rubrics. I also will be reading the first drafts more closely, and plan to fully grade the final drafts again. The total assignment grade also incorporates the reviewing scores, which I left alone. I have some issues with those scores as well, which I'll detail in another post.

Related posts:
Peer review with SWoRD
More about SWoRD reviewing
SWoRD follow-up

Also related: Other peer reviewing tools

Tuesday, March 1, 2011

Peer review with SWoRD

As I mentioned, I'm using SWoRD in my writing class for econ majors. SWoRD is a site that not only facilitates peer review but also allows student grades to actually be determined by their classmates' reviews. For each assignment, the instructor creates both open-ended comment prompts and a numeric rubric (the SWoRD template requires a 1 to 7 scale, though you can sort of get around that by skipping some of the numbers). Students submit their papers to SWoRD and once the deadline has passed, papers are assigned to peer reviewers (minimum of three, maximum of six; the creators of SWoRD strongly recommend at least five reviews if the scores will be used for grading). Everything is anonymous, as each student creates a pseudonym within the system (you just have to make sure students don't put their names in the text of their files!). I can either assign specific reviewers or have the system assign them randomly. After the reviews are completed, the authors have the opportunity to 'back evaluate' the open-ended comments, indicating how helpful the comments were, or weren't; this is done before the authors see the numeric scores assigned by reviewers, so the back evaluation is based purely on the open-ended comments.
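To give a sense of how the assignment step could work, here's a toy sketch of random, load-balanced reviewer assignment. This is purely illustrative (the names and the round-robin approach are mine; I don't know SWoRD's actual algorithm):

```python
# Illustrative round-robin reviewer assignment: each paper gets k
# reviewers, no one reviews their own paper, and every student
# reviews exactly k papers. Hypothetical names, not SWoRD's code.
import random

def assign_reviewers(authors, k=5):
    """Return {author: [reviewers]} with k reviewers per paper."""
    n = len(authors)
    if not 1 <= k < n:
        raise ValueError("need 1 <= k < number of students")
    order = authors[:]
    random.shuffle(order)
    # Pair each author with k distinct cyclic shifts of the shuffled
    # list; offset 0 is excluded, so no one gets their own paper.
    offsets = random.sample(range(1, n), k)
    return {order[i]: [order[(i + off) % n] for off in offsets]
            for i in range(n)}

pseudonyms = ["Kestrel", "Osprey", "Finch", "Heron",
              "Wren", "Plover", "Sandpiper", "Tern"]
for author, reviewers in assign_reviewers(pseudonyms, k=3).items():
    print(author, "->", reviewers)
```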

One of the coolest things about the SWoRD system is how it calculates grades. Students receive a grade both for reviewing and for writing. The reviewing grades are based half on 'consistency', which takes into account things like whether a student just gives all high scores or all low scores, or scores that are really different from those of the other reviewers of the same papers, and half on the back evaluation 'helpfulness' scores. The writing grades are based on the numeric rubric scores from the reviewers, but adjusted for the consistency of the reviewers - so, for example, if a paper has four reviewers who give high scores and one reviewer who gives low scores, the low scores from that one reviewer will be given less weight. The instructor can also adjust how much weight is given to the reviewing and the writing scores for each assignment.
[Update: I was mistaken about the grading - see my follow-up post for clarification]
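With that caveat in mind, here's a rough sketch of the down-weighting idea as I originally understood it; the inverse-distance weighting is my own stand-in, not SWoRD's actual formula:

```python
# Rough sketch of consistency-weighted averaging: reviewers whose
# scores deviate most from the other reviewers of the same paper
# get less weight. The weighting scheme is my own assumption.
from statistics import mean

def writing_score(reviews):
    """reviews: {reviewer: rubric score on the 1-7 scale} for one paper."""
    avg = mean(reviews.values())
    # Weight each reviewer inversely to their distance from the mean.
    weights = {r: 1.0 / (1.0 + abs(s - avg)) for r, s in reviews.items()}
    total = sum(weights.values())
    return sum(weights[r] * s for r, s in reviews.items()) / total

reviews = {"Kestrel": 6, "Osprey": 6, "Finch": 5, "Heron": 6, "Wren": 2}
print(round(writing_score(reviews), 2))  # 5.27: the outlier 2 counts less
```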

Part of the reason I agreed to do this pilot is that I have always had students do peer review for this course anyway. So I already have many of the comment prompts and rubrics created (though they need some revising for SWoRD), and the fixed costs of getting things set up in the system seemed like they would be rather low while the benefits are potentially huge. In particular, in the past, I had students simply swap papers with one other classmate, in class, and there has always been huge variance in the quality of feedback that students give/receive; I always felt bad for the students who did not get very good feedback. With SWoRD, each student gets feedback from several classmates, so even if the comments from one or two are not that great, the combination should mean that they get something useful. I was also hoping that with the grading system, I might be able to focus much more on just giving students comments and not have to worry about grading as much. Next time, I'll talk about how that's working out...

Follow-up posts
Can SWoRD really replace instructor grading?
More about SWoRD reviewing
SWoRD follow-up

Also related: Other peer reviewing tools