
Teaching thinking

I mentioned in an earlier post that I had to explain to my writing class why economists do specification checks (we've been reading some empirical journal articles) and that many of the students are not really familiar with the process that economists go through when we write empirical papers. I think that part of what is hard for students to grasp is the constant questioning we do (and this is true not only for economists, but anyone who does research). As an empirical economist, I'm always asking: does my model make sense? What else, other than the variables I'm focusing on, could be driving my results? Can I get data on those other variables? Is this really causation, or just correlation? Given that I will be teaching a data and statistics course next semester, I've been thinking about how I am going to teach students to ask similar questions.

With that in mind, I thought a recent Freakonomics post was a great example of this process. Eric Morris has had a bunch of posts about gender differences in driving, first pointing out that men are more likely to drive when they get in the car with women and then following up with why that might be and whether men or women are actually better drivers. I think the latest post in this series would be a great example for teaching about how economists think for several reasons. The article begins with this:
We’ve established that men are more likely to take the wheel when a couple rides together, but should we care? I say we should. Aside from the cultural, sociological and psychological implications, the gender driving disparity might be costing us lives and treasure. If women are more skilled drivers than men, perhaps we’d all be better off if they were behind the wheel and men were in the passenger seat knitting. What do the data say?
This is my favorite question about any research project: why do we care? What's hard to explain to students is that while this can sound normative, the way the justification is phrased is actually positive. Here, Morris isn't saying women should always drive just because it's his opinion; he's saying that IF women are more skilled drivers, THEN having more women drive might save lives (the normative part is assuming that people want to save lives but that seems fairly uncontroversial). And then we turn to the data to find out if women really are more skilled.

Next, Morris points to data showing that women have fewer accidents than men. Many people would simply take that information and conclude that women are safer drivers. But economists know that such simple statistics do not hold "all else equal," and Morris goes on to discuss some of the other issues one should consider: men spend more time driving, so you'd want to control for miles driven (men actually have fewer accidents per mile), and not all accidents are the same, so you might want to look at fatal accidents (more likely for men) separately from crashes involving only injuries (more likely for women). We also don't have data on who is at fault in these accidents or what time of day accidents happened (for example, men may be more likely to be driving at night). And there might be selection bias: when a couple gets in the car, even if the man isn't a great driver, he might still be better than his partner. In other words, we still don't know what is correlation versus causation.
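The "control for miles driven" point is one I'd want students to see with actual numbers, so here's a tiny sketch using invented figures (these are made-up counts for illustration only, not real crash data) showing how a group can have more accidents in total but fewer accidents per mile once you account for exposure:

```python
# Hypothetical illustration of raw counts vs. exposure-adjusted rates.
# All numbers below are invented for teaching purposes, not real crash data.

# (total accidents, miles driven in billions) for each group
groups = {
    "women": (400_000, 10.0),
    "men":   (500_000, 16.0),
}

for group, (accidents, miles_bn) in groups.items():
    rate = accidents / miles_bn  # accidents per billion miles driven
    print(f"{group}: {accidents:,} accidents, {rate:,.0f} per billion miles")
```

With these made-up numbers, men have more accidents in total (500,000 vs. 400,000) but a lower rate per mile (31,250 vs. 40,000 per billion miles), so the answer to "who has more accidents?" flips depending on whether you adjust for exposure.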

By the end, Morris still hasn't definitively answered the question of whether women are 'better' drivers than men, but the questions he raises are exactly the kinds of questions that one would want to have answered before making any kind of policy decision based on the empirical results.

Comments

  1. This is a great example of showing how one thinks through a problem. I like the application of it to a stats setting (and have forwarded the link to the people here who teach stats), but the general point is applicable to almost any analytical situation. When we're telling a story about why something happens, are we sure we have identified a causal relationship?


    My favorite personal example of this comes from a party I attended a long time ago. I got into a conversation with someone who worked for the Beef Council, and the conversation turned (as it will when economists converse) to the decline in consumption of beef. She asked me how I taught that in principles, and I said that when I dealt with it, I talked about the change in tastes that occurred when people got more information. She then asked, "If that's the cause, what would we expect to happen to the price of chicken?" Well, it'd rise.

    But, in fact, the price of chicken had declined relative to the price of beef. So her point was that people had not (necessarily) come to prefer beef less; they might just have substituted relatively less-expensive chicken for beef.

    The moral was clear. Don't talk to industry association economists.

    No, that's not it. It's: make sure you've thought about the implications of the story you're telling and, insofar as possible, find out whether those implications are valid.


