Monday, December 26, 2011

Education sessions at the ASSA

It's that time of year again, so for everyone going to Chicago, here are the econ ed sessions I could find (if I missed any, please let me know!). I'll also claim a blogger's right to self-promotion and note that I'll be talking about this very blog in a session on Saturday morning (first in the list below) with a bunch of people much more famous than myself (nope, not intimidating at all...). Hope to see you there!

Jan 07, 2012 10:15 am, Hyatt Regency, Regency
Panel Discussion: Using Blogs to Teach Undergraduate Economics  (A2)
Presiding:  GAIL HOYT (University of Kentucky)
Economics for Teachers
JENNIFER IMAZEKI (San Diego State University)
Grasping Reality with a Sharp Beak: The Semi-Daily Journal of Economist J. Bradford DeLong
J. BRADFORD DELONG (University of California-Berkeley)
Freakonomics
STEVEN LEVITT (University of Chicago)
Marginal Revolution
ALEX TABARROK (George Mason University)
Economists Do It with Models
JODI BEGGS (Harvard University)

Jan 06, 2012 8:00 am, Hyatt Regency, Toronto 
American Economic Association
K-12 Economic and Financial Literacy Education  (A2)
Presiding:  WILLIAM BOSSHARDT (Florida Atlantic University)
A Research Design for Measuring Student Engagement: An Evaluation of the International Economic Summit
JODY HOFF (Federal Reserve Bank of San Francisco)
JANE LOPUS (California State University-East Bay)
Impact of a High School Personal Finance Course on Student Attitudes
ANDREW T. HILL (Federal Reserve Bank of Philadelphia)
BONNIE MESZAROS (University of Delaware)
ERIN YETTER (University of Delaware)
Evaluation of Cumulative Impacts of the Student Enterprise Program on Academic Achievement
JENNIFER PITZER (University of Cincinnati)
Disparities in Advanced Course-Taking: The Case of AP Economics
JOHN SWINTON (Georgia College & State University)
CHRISTOPHER CLARK (Georgia College & State University)
BENJAMIN SCAFIDI (Georgia College & State University)

Discussants: WILLIAM BOSSHARDT (Florida Atlantic University)
GEORGE VREDEVELD (University of Cincinnati)
JAMES O'NEILL (University of Delaware)
PAUL GRIMES (Pittsburg State University)

Jan 06, 2012 10:15 am, Hyatt Regency, Regency A 
American Economic Association
The Effects of Financial Education and Financial Literacy  (A2)
Presiding:  B. DOUGLAS BERNHEIM (Stanford University)
How Financial Literacy Affects Household Wealth Accumulation
JERE BEHRMAN (University of Pennsylvania)
OLIVIA MITCHELL (University of Pennsylvania)
CINDY SOO (University of Pennsylvania)
DAVID BRAVO (Universidad de Chile)
Financial Education and Timely Decision Support: Lessons from Junior Achievement
BRUCE CARLIN (University of California-Los Angeles)
DAVID ROBINSON (Duke University)
Financial Knowledge and Financial Literacy at the Household Level
ALAN GUSTMAN (Dartmouth College)
THOMAS STEINMEIER (Texas Tech University)
NAHID TABATABAI (Dartmouth College)
Economic Implications of Financial Education in the Workplace
ROBERT CLARK (North Carolina State University-Raleigh)
MELINDA MORRILL (North Carolina State University-Raleigh)
STEVEN ALLEN (North Carolina State University-Raleigh)

Discussants: JUSTINE HASTINGS (Brown University)
WILLIAM WALSTAD (University of Nebraska-Lincoln)
URVI NEELAKANTAN (Federal Reserve Bank of Richmond)
ANNAMARIA LUSARDI (George Washington University)

Jan 06, 2012 12:30 pm, Hyatt Regency, Atlanta 
National Association of Economic Educators
High School through College: Economic Education and Observed Outcomes  (A2)
Presiding:  JOHN SWINTON (Georgia College)
High School Grade Inflation and College Outcomes
CHRISTOPHER CLARK (Georgia College)
BENJAMIN SCAFIDI (Georgia College)
JOHN R. SWINTON (Georgia College)
Is it Live or is it Internet? Experimental Estimates of the Effects of Online Instruction on Student Learning
DAVID N. FIGLIO (Northwestern University and NBER)
MARK RUSH (University of Florida)
LU YIN (University of Florida)
Economic Education and Consumer Experience During the Financial Crisis
PAUL W. GRIMES (Pittsburg State University)
KEVIN E. ROGERS (Mississippi State University)
WILLIAM D. BOSSHARDT (Florida Atlantic University)
Exploring the Gender Gap in High School Math and Economics: Does the Gap Widen as Students Matriculate from Geometry and Algebra to Economics?
BRENT A. EVANS (Mississippi State University)

Discussants: ROGER B. BUTTERS (University of Nebraska-Lincoln)
KIMMARIE MCGOLDRICK (University of Richmond)
BRIAN W. SLOBODA (University of Phoenix)
SHELBY FROST (Georgia State University)

Jan 06, 2012 2:30 pm, Hyatt Regency, Columbus KL 
American Economic Association
Research on College and University Economic Education  (A2)
Presiding:  SAM ALLGOOD (University of Nebraska-Lincoln)
Why Don't Women Pursue a Major in Economics?
TISHA L.N. EMERSON (Baylor University)
KIMMARIE MCGOLDRICK (University of Richmond)
KEVIN J. MUMFORD (Purdue University)
Does Adding Intermediate Algebra as a Prerequisite for Economics Principles Courses Improve Student Success?
STEVEN BALASSI (Saint Mary's College of California)
RICHARD H. COURTNEY (Saint Mary's College of California)
WILLIAM LEE (Saint Mary's College of California)
Revisiting How Departments of Economics Evaluate Teaching
WILLIAM E. BECKER (Indiana University and University of South Australia)
WILLIAM BOSSHARDT (Florida Atlantic University)
MICHAEL WATTS (Purdue University)
"Dude, Who's Your CL Leader?" Characteristics of Effective Collaborative Learning Leaders
KIM P. HUYNH (Indiana University)
JAMES K. SELF (Indiana University)

Discussants: WENDY STOCK (Montana State University)
GEORG SCHAUR (University of Tennessee)
MICHAEL SALEMI (University of North Carolina)
JOHN SWINTON (Georgia College & State University)

Jan 07, 2012 8:00 am, Hyatt Regency, Columbus IJ 
American Economic Association
What Economics Should We Teach Before Students Enter College: The Voluntary National Content Standards in Economics and the AP?  (A2) (Panel Discussion)
Panel Moderator:  DAVID COLANDER (Middlebury College)
JAMES GWARTNEY (Florida State University)
RICHARD MACDONALD (St. Cloud State University)
STEPHEN MARGLIN (Harvard University)
DEIRDRE MCCLOSKEY (University of Illinois-Chicago)
JOHN SIEGFRIED (Vanderbilt University)
HELEN ROBERTS (University of Illinois-Chicago)

Jan 07, 2012 10:15 am, Hyatt Regency, Atlanta 
National Association of Economic Educators
Financial Education for Undergraduates: Just in Time? Too Late?  (A2) (Panel Discussion)
Panel Moderator:  BRENDA CUDE (University of Georgia)
BRENDA J. CUDE (University of Georgia)
THOMAS L. HARNISCH (American Association of State Colleges and Universities)
ANNAMARIA LUSARDI (George Washington University)
CLIFF ROBB (University of Alabama)
WILLIAM B. WALSTAD (University of Nebraska-Lincoln)

Jan 07, 2012 2:30 pm, Hyatt Regency, Columbus AB 
American Economic Association
Annual Poster Session on Active Learning Strategies  (A2)
Presiding:  WILLIAM BOSSHARDT (Florida Atlantic University)
Just in Time Teaching in Intermediate Microeconomics
DAVID ROSS (Bryn Mawr College)
The Economics of The Office
DIRK MATEER (Pennsylvania State University)
DANIEL KUESTER (Kansas State University)
Auctions
SHIZUKA NISHIKAWA (St. Mary's College of Maryland)
The Use of Popular Music to Teach Introductory Economics in a Live and Online Environment
ROD RAEHSLER (Clarion University)
Improving Community College Economics Instruction: What Can We Learn from Other Disciplines?
MARK MAIER (Glendale Community College)
Classroom Market Simulation using a Computer Network
DOUGLAS DOWNING (Seattle Pacific University)
Two Dollar Challenge: Beginning to Learn How the Other Half Lives
SHAWN HUMPHREY (University of Mary Washington)
JAREN SEID (Front Range Community College)
Teaching Dynamic Aggregate Supply-Aggregate Demand Model in an Intermediate Macroeconomics Class Using Interactive Spreadsheets
SARAH GHOSH (University of Scranton)
SATYAJIT GHOSH (University of Scranton)
Starting Point: Pedagogical Resources for Teaching and Learning Economics
JOE CALHOUN (Florida State University)
Issues in Economic Freedom: How a Topics Course Can Popularize the Dismal Science
HOWARD COCHRAN (Belmont University)
MARIETA VELIKOVA (Belmont University)
Don't Touch My Medicare
ELIZABETH PETERSON (Eastern Washington University)
CHARLOTTA EVANS (University of Utah)
Using Peer Assessment to Improve Literature Reviews in Economics
JULIE SMITH (Lafayette College)
Promoting Active Learning through Online Undergraduate Economics Journals
STEPHANIE DAVIS-KAHL (Illinois Wesleyan University)
ROBERT LEEKLEY (Illinois Wesleyan University)
MICHAEL SEEBORG (Illinois Wesleyan University)
How Does Economic Instruction Vary in Community Colleges across Four Regions?
JOHN MIN (Northern Virginia Community College)
AMBER CASOLARI (Riverside City College)
The Realm of Imperfection: Monopoly, Oligopoly and Monopolistic Competition
MARILYN COTTRELL (Brock University, Canada)
Is Your CFO Smarter Than a Sophomore? Applying Economics to University Managerial Decisions
MONICA HARTMANN (University of St. Thomas)
TeachingWithData.org: Online Resources for Bringing Data into the Classroom
LYNETTE HOELTER (University of Michigan)
GEORGE ALTER (University of Michigan)
WILLIAM FREY (University of Michigan)
JOHN DEWITT (University of Michigan)
SUZANNE HODGE (University of Michigan)
Using Macroeconomic Indicators in Managerial Decision-Making
BRIAN SLOBODA (University of Phoenix)
A Web-Based Interactive Macro-Econ Learning System
JINZHUO ZHAO (Hampden-Sydney College)
Socratic Discussion
AREERAT KICHKHA (West Kentucky Community and Technical College)
Using Collective Action Experiments to Teach the Economics of Social Issues
JAMES BRUEHLER (Eastern Illinois University)
LINDA GHENT (Eastern Illinois University)
ALAN GRANT (Baker University)
Community Based Design Approach to Computer Aided Teaching and Learning Applications
CARSTEN LANGE (California State Polytechnic University)
Utilizing Economic Modules for Undergraduate Teaching through Internship Program
MAUREEN DUNNE (Framingham State University)
MARTHA MEANEY (Framingham State University)
FAHLINO SJUIB (Framingham State University)
Big Think: A Model for Critical Inquiry in Economics Courses
ROBERT GARNETT (Texas Christian University)
KIMMARIE MCGOLDRICK (University of Richmond)


Jan 07, 2012 2:30 pm, Hyatt Regency, Atlanta
National Association of Economic Educators
Every Little Bit Counts: Factors that Influence Student Learning in Economics  (A1)
Presiding:  CHRISTOPHER CLARK (Georgia College)
The Effects of Community Social Capital on School Performance: A Spatial Approach
KAUSTAV MISRA (Saginaw Valley State University)
PAUL W. GRIMES (Pittsburg State University)
KEVIN E. ROGERS (Mississippi State University)
Making Extra Credit Count: Program Design, Drivers of Participation, and Impact on Student Performance
JODI N. BEGGS (Harvard University)
Does a Mandate Matter? The Impact of State Level Mandates on Economic Literacy
ROGER B. BUTTERS (University of Nebraska-Lincoln)
CARLOS J. ASARTA (University of Nebraska-Lincoln)
Economic Effects of Absenteeism on Exam Performance: Empirical Evidence and Implications
TIN-CHUN LIN (Indiana University-Northwest)

Discussants: JOHN R. SWINTON (Georgia College)
PAUL W. GRIMES (Pittsburg State University)
DOUG WALKER (College of Charleston)
BRENT A. EVANS (Mississippi State University)

Saturday, December 24, 2011

Happy holidays!

I realize that I've been particularly bad about posting the last few months but have high aspirations to get back on a more regular schedule in the New Year. In the meantime, I hope that everyone is enjoying a happy and safe holiday season!

Tuesday, November 29, 2011

Ways to avoid grading...

As we all recover from turkey overload, here are some sites to check out when you feel like escaping the end-of-semester madness...
  • PNC's 2011 Christmas Price Index is out! The site has gotten a lot more complex and makes you click through a bunch of screens sequentially to get to the total. That's great if you want to waste some time, but I couldn't find any way to jump straight to the punchline, which is actually kind of annoying. So for those who are as impatient as I am: the total cost is $24,263.18, up 3.5% from last year.
  • The always-awesome Dirk Mateer has a new website. In Dirk's words: "I created this site to act as a 'virtual personal assistant' for all Econ professors and TAs, allowing you to easily find engaging pop culture clips and real world examples that will help bring the material to life for your students in a way they can relate to!" You can also follow Dirk on Twitter (@dirkmateer) or Facebook.
  • My colleague, and founding editor of the Review of Economics of the Household, Shoshana Grossbard, posts lots of interesting links on her Facebook page, Economics of Love.
  • Tutor2u has put together a list of Economics teachers on Twitter (lots outside the U.S.).
  • And if you must think about coursework, ProfHacker had a very useful round-up of posts about evaluations.


Friday, November 18, 2011

Two more PollEverywhere suggestions

In the course of conversations with people about PollEverywhere over the last few months, some additional uses of the service have come up. One is to use it as a backchannel during class, since it allows for open-ended responses. That is, you could create a poll that simply asks students to submit any questions or relevant comments that arise for them while the class is meeting. The one catch is that the instructor needs a way to monitor those comments throughout class. If you are already using the computer for something else (like your PowerPoint slides), you could either use a second device (smartphone or tablet) to keep an eye on the website or stop every so often to check it. Derek Bruff has a nice post on backchannels in education if you're interested in more about the how and why of backchannels.

Another way to use PollEverywhere is as a replacement for clickers in distance-learning. I don't teach online classes so I'm not entirely sure what the options are if you wanted to ask clicker-type questions during a synchronous class meeting but PollEverywhere would be a way for students to submit responses from anywhere, in real time, and for everyone to see the responses immediately. You could also use it for asynchronous classes but there are probably better options (e.g., most course management systems have a way to administer polls or quizzes).

I haven't used either of these personally so I can't speak to how well they would work, but I thought others might be interested...

Friday, November 11, 2011

Thank you to all Veterans, and their families

When I teach about public goods, the one clear example of a pure public good (perhaps the only one that no student has ever wanted to argue about) is national defense. Even the most hard-core of Libertarians will accept that there is at least one arena where government is needed, in the provision of a strong military. But what we don't usually talk about in economics classes is how lucky all of us civilians are that there are thousands of men and women who are willing to serve in our military. God knows, I would never want to do it, so I am exceedingly thankful for those that do.

On a personal note, my sister married a Navy man a couple of years ago. For me, this has been great so far, since he was stationed in San Diego and I have had the opportunity to spend much more time with my sister and my nephew. But my brother-in-law's next assignment is overseas and they will be moving next year. Thankfully, it is not an active war zone, but for the next few years, my sister will be raising a toddler in a foreign country, thousands of miles from friends and family, and by herself for months at a time while her husband is at sea. Thinking about it, and worrying about her, has driven home for me the immense sacrifices made not only by our soldiers but by their families. So to all who are serving, and who have served, in our armed forces, and to your families, my humble thanks...

Thursday, October 20, 2011

Analogies are to teaching as...

My boyfriend's daughter is applying to colleges so there has been a whole lot of SAT and ACT talk going on lately. She's actually focusing more on the ACT, which I know nothing about since I only took the SAT, so we've talked a little about the differences. In general when interacting with this young lady, I have tried very hard to avoid using the phrase "When I was younger..." (since apparently nothing makes you sound older to a teenager than referring to your own childhood) but when I found out that the SAT no longer has analogy questions, I couldn't stop myself. When did the analogies get dropped?!?* Of course, back in high school, I had no idea why the analogy questions were even on the test (not that I spent a lot of time thinking about it back then either but I do remember thinking that they were sort of weird). But as a teacher, I've found that a well-constructed analogy can often make a world of difference in my students' understanding. I was reminded of this the other day by Dr. Goose who uses what struck me as an awesome analogy related to the standard conservative argument for cutting taxes:
"Writing in the Wall Street Journal on the "Three Policies That Gave Us the [Steve] Jobs Economy," Amity Shlaes cites the slashing of the capital gains rate from a confiscatory 49% to 25% in 1978. Building on this evidence, she reaches the silly conclusion that "taxes on capital should always be lowered, and dramatically." One might just as easily conclude that, because a diet improved one's physique, that mealtime portions should always be dramatically lowered, too."
On the other hand, a poorly-constructed analogy can create even more confusion than you started with. Case in point: I had read about Herman Cain's 'apples and oranges' comments during the Republican debate the other night and like a lot of people, I thought he was making no sense (if you didn't hear about it, you can read the transcript and see the clip here). But somewhere in the third time I watched/read Cain's exchange with Romney, I finally realized that Cain's point is that his plan is intended to replace the federal tax structure (that's the oranges) and regardless of what one thinks we should do about that federal structure, no one is talking about any changes to state tax structures (that's the apples). The problem is that because his plan includes a sales tax, which everyone associates with state tax structures, it's not unreasonable for people to wonder how his plan would interact with the state taxes. So Cain's oranges and apples analogy was referring to who collects the tax (which is different: federal or state) but everyone else was talking about the type of tax (which is the same: sales) so his analogy seemed to make no sense.

One of the principles in How Learning Works: Seven Research-Based Principles for Smart Teaching is about how students' prior knowledge can help (or hinder) their learning. Analogies in teaching are all about connecting new ideas to other ideas that students already understand. Do you have favorite analogies you use when teaching?

* If anyone reading this doesn't know what I'm talking about when I refer to the SAT analogy questions, well, go ask your parents...

Wednesday, October 5, 2011

Supply and demand without the curves?

We discussed supply and demand in my Econ for Teachers class this week. This is usually one of my favorite weeks in this class because we do an in-class double-oral auction, which I don't get to do in principles anymore (since I'm not brave enough to try it with 500 students) - I use Aplia for that class instead and while it's better than not doing it at all, it's just not the same. I love watching the students get into their buyer and seller roles. There are always a few who surprise me: students I think of as relatively quiet who end up being enthusiastic negotiators. And students always tell me at the end of the semester how memorable the auction is for them.

But as I was preparing the materials for class, it dawned on me that nowhere in the California content standards, or in the national standards, are supply and demand curves mentioned. That is, standard 12.2.2 of the California standards says, "Discuss the effects of changes in supply and/or demand on the relative scarcity, price, and quantity of particular products" and standard 12.2.5 says "Understand the process by which competition among buyers and sellers determines a market price." National standards 7 and 8 say "(Students will understand) Markets exist when buyers and sellers interact. This interaction determines market prices and thereby allocates scarce goods and services" and "Prices send signals and provide incentives to buyers and sellers. When supply or demand changes, market prices adjust, affecting incentives." On the one hand, it certainly is possible to just talk intuitively about market prices and quantities adjusting to changes in supply and demand, without using a graph of the curves; but on the other hand, it seems like the intuition is so much easier when paired with the visual of the graph. But maybe it just seems that way to me because that's how I've always done it?

Partly, I'm thinking about how difficult it seems to be for students to go from the experiment data (giving the number of buyers with different buyer values and sellers with different seller costs) to graphing the curves. It feels like just a graphing problem because when I go datapoint by datapoint (for example, asking, "At a price of $5, how many sellers will be willing to sell?"), they have no trouble answering, and they intuitively understand the relationship between price, buyer value or seller cost, and willingness to buy or sell. But when left to finish drawing the curve, they often get lost again, as if they just can't connect the intuition they understand to the grid in front of them.
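In case it helps anyone picture the exercise, here is the datapoint-by-datapoint logic in code form (a minimal sketch with made-up buyer values and seller costs, not the actual numbers from my class experiment):

    # A minimal sketch (made-up numbers, not my actual experiment data):
    # tabulating willingness to buy and sell at each price, the same
    # datapoint-by-datapoint questions I walk through in class.

    buyer_values = [10, 9, 7, 6, 4, 3]   # hypothetical max willingness to pay
    seller_costs = [2, 3, 5, 6, 8, 9]    # hypothetical min acceptable price

    for price in range(1, 11):
        demanded = sum(1 for v in buyer_values if v >= price)  # buyers still in
        supplied = sum(1 for c in seller_costs if c <= price)  # sellers willing
        print(f"At ${price}: {demanded} buyers, {supplied} sellers")

Each line of output is a question students can answer easily; the hard part seems to be seeing that the whole table is the two curves.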

So I'm wondering how weird it would be to try to discuss supply and demand and equilibrium price and quantity without the curves. I just don't know if it would 'work'. I'm particularly thinking about student confusion about what I think of as the 'never-ending story' problem: something happens that causes demand/supply to change (shift) and that leads to a change in price, which students then think causes demand/supply to change as they confuse shifting the curve with moving along the curve. Of course, if they actually follow that argument through, that shift should cause price to change again, which causes demand/supply to change again, etc. Since that process could just continue forever (and yet we know it doesn't in reality), there must be a flaw in that argument, and I think the curves can help clarify that flaw. I can't really figure out how I would explain what's wrong with that argument without using the curves. Yet, I think there are a lot of high school economics courses that do not introduce supply and demand curves. Do they just not do supply and demand at all (that would be my own high school experience)? Or are they talking about supply and demand without the curves somehow? If anyone has taught it that way, I'd love to hear from you in the comments!
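For what it's worth, here is the cleanest way I've found to see the flaw without drawing anything: write down explicit (toy) linear curves and the shift and the price change get solved simultaneously, so there is exactly one new equilibrium rather than step one of an infinite chain. This is my own illustration, not anything from the standards:

    # Toy linear market (my own made-up example): Qd = a - b*P, Qs = c + d*P.
    # Setting Qd = Qs pins down a single equilibrium price -- a demand
    # shift changes the answer once; it doesn't start an endless loop.

    def equilibrium(a, b, c, d):
        price = (a - c) / (b + d)
        return price, a - b * price   # (equilibrium price, quantity)

    print(equilibrium(a=100, b=2, c=10, d=1))  # before: P* = 30.0, Q* = 40.0
    print(equilibrium(a=130, b=2, c=10, d=1))  # demand shifts right: P* = 40.0, Q* = 50.0

Of course, that's just the algebra version of the graph, which may be even harder for students who get lost on the grid...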

Monday, September 12, 2011

Professional Development PSA

A few opportunities that folks should know about...

- Before the ASSA meetings in Chicago, on Thursday, January 5, there will be a workshop at Roosevelt University in Chicago on Advanced Pedagogy and Course Design: Cutting Edge Teaching Techniques and Strategies for Pluralistic Economists, run by Geoffrey Schneider, Bucknell University. The overview: "Most heterodox economists today end up working at teaching-oriented institutions. Thus, our success in the academy depends significantly on our ability to teach successfully. This workshop is structured for heterodox graduate students and younger faculty to give them a comprehensive background in advanced pedagogical techniques and strategies that will help them succeed in the classroom. Drawing on the latest pedagogical research, the workshop will cover constructing and meeting learning objectives, syllabus design, models for pluralistic teaching, active and collaborative learning techniques, and teaching controversial topics."
- The 2012 AEA Continuing Education Program includes sessions on Advanced Interactive Teaching Methods in Economics, with Patrick Conway, University of North Carolina; Tisha Emerson, Baylor University; KimMarie McGoldrick, University of Richmond; Michael Salemi, University of North Carolina; and William Walstad, University of Nebraska. The program runs from 1:00pm on Sunday, January 8 to 5:00pm on Monday, January 9, right after the ASSA meetings in Chicago.
- The second Annual AEA Conference on Teaching Economics and Research in Economic Education will be May 30-June 1, 2012 in Boston. Proposals are due by December 1, 2011; full info here. Last year's conference was awesome - even if you don't submit anything to present, I highly recommend attending!

Wednesday, September 7, 2011

What was your high school economics experience like?

As I mentioned in my last post, I am asking my Econ for Teachers students to reflect on their reading by responding to discussion prompts. It occurred to me that it wouldn't be a bad idea for me to share my thoughts on those issues here and see if anyone wants to chime in. For this week, the students were asked to read the California and national content standards, an article by Mark Schug and others about why social science teachers dread teaching economics and how to overcome the dread, an article by William Walstad on the importance of economics for understanding the world around us and making better personal decisions (with some evidence on the dismal state of economic literacy in this country), and another article by Walstad on the status of economic education in high schools (full citations below). The reflection prompt asks the students to then answer the following questions:
What was your high school econ experience like? What do you remember most from that class? How does that relate to the readings (does it relate)? If you did not take economics in high school, why not (e.g., was it an elective that you chose not to take (why not?) or was it simply not offered? what state did you attend high school in)? Do you recall any economic content specifically incorporated into other classes (such as government or history)? Also note here any questions or thoughts that came up as you read through the content standards and articles.
For me, the only things I really remember about my high school economics class are learning about the different types of businesses (e.g., that corporations are "like people") and playing a stock market game, which seems to be a common experience, judging from my students' responses. I believe the requirement that all California students must take a semester of economics was adopted right before I started high school (please, no one needs to do the math); at least, I don't remember it being an elective. I really liked my teacher, who had also been my freshman history teacher; Mr. Goudy even wrote one of my college recommendation letters. I have the impression that he knew what he was doing but I just wasn't that interested in economics at the time, maybe because the emphasis seemed to be on business. I did have a strong interest in public policy (I was planning to major in International Relations) and I wonder if that economics class would have made more of an impression on me if there had been more discussion of social policy. I don't think there was a lot of math because I liked math - one of the things that drew me to economics in college was the math so I have to think my high school class wasn't quantitative at all.

When I first saw the California content standards for 12th grade economics, my first thought was, "I don't think I saw much of this in high school." My second thought was, "There is no way this is being taught in most high schools - if it were, my Principles class would be way different." That's a big part of what led me to create this Econ for Teachers class, by the way. I was also really surprised to see that economics is mentioned so explicitly in the standards for some of the early grades. For example, in Grade 1, one of the standards says, "Students understand basic economic concepts and the role of individual choice in a free-market economy," with standards about money and the work that goes into making and marketing goods and services. I'm sure we learned about money but no one mentioned the word 'economics' until many years later. Economics is mixed into the standards for later grades as well but more in the form of economic history (e.g., students learn about the economies of ancient civilizations or economic events like the Industrial Revolution) and there doesn't seem to be any connection to economics as a discipline. So I suppose it's not surprising that many students get to 12th grade and don't even realize that they already know a lot about economics.

Although California requires students take a one-semester course in government AND a one-semester course in economics, it appears that many high schools combine them into one semester, or give students the option of taking one or the other (that may reflect confusion over the state requirements, which are written sort of oddly). But I suppose half a semester is still better than in a lot of states where economics is not required at all. I assume that means that there are lots of schools that do not even offer it but that also makes me wonder: in the schools that do offer an economics course, are the teachers better? I'd be curious to hear what the high school economics experience is like in other states. Wherever you went to high school, feel free to share your experience in the comments...

Schug, Mark C., Jane S. Lopus, John S. Morton, Robert Reinke, Donald R. Wentworth, and Richard D. Western. 2003. "Is Economics Your Worst Nightmare?" Social Education, 67(2), 73-78.
Walstad, William. 1998. "Why It's Important to Understand Economics," The Region, 12(4), 22-26.
Walstad, William. 2001. "Economic Education in U.S. High Schools," The Journal of Economic Perspectives, 15(3), 195-210.

Monday, September 5, 2011

Scheduling issues

This fall, my Economics for Teachers class is meeting once a week, in the evening. It wasn't my first choice for scheduling but as I re-vamped the class over the summer, I began to think that maybe this will actually be a good format. When I originally created the class, one thought I kicked around was that I would "teach" a lesson and then we would de-brief that lesson as a group (i.e., discuss why I chose to present the material in that way, what worked and what didn't, what might be stumbling blocks for students learning the material for the first time, etc.). In practice, I haven't done as much of that debriefing as I would like, for various reasons. One of those reasons was the timing of 75-minute class meetings - 75 minutes is really too short to teach a lesson AND do a thorough debriefing (plus all the administrative odds and ends that seem to take a few minutes at the beginning and end of each class) so either the debriefing would have to be cut short or I'd have to try to carry it over to the next class, which tends to be a real momentum-killer. If the discussion did get carried over, then I'd have to figure out how to fill the remainder of that class meeting with something useful. With one 160-minute meeting each week, I've cut down on the number of topics but we should have a lot more time to really discuss the content and the pedagogy in more depth. I'm planning to give the class lots of breaks but the timing of those will be dictated by how the discussion is going. And I can 'fill in' the odd batches of time with some economics of education topics that are relevant to the course but more flexible in when/how long we discuss them.

I've also realized that a benefit of the class meeting once a week is that I don't feel bad about assigning a lot of reading, since students will obviously have plenty of time to complete the reading before the next class. Most weeks they will be asked to reflect on that reading by responding to a specific discussion prompt. I had thought my campus would be upgrading over the summer to Blackboard 9.1, which has blogging features that I thought would be good for the reflections, but since the upgrade was pushed back, they are just posting them in a Discussion Board. Ironically, I don't think that's ideal for generating discussion, but I didn't get my act together soon enough to work out having them create their own blogs through an outside site.

Now that the semester is under way, I'm more concerned about the fact that the class is in the evening than that it meets once a week. It's been a long time since I taught an evening course and I'm worried about keeping my energy up at a point in the day when I'm used to winding down (not to mention that I'll be getting home around the time I'm usually heading to bed). I assume the students will be fine but I'm curious to see if there's any noticeable difference in their energy. If anyone has advice about how to stay energized for an evening course like this, please share!

Tuesday, August 23, 2011

Back-to-school ideas

  • A recent New York Times article points out that many children's books teach economic concepts (hat tip to Alex Tabarrok). If that article piques your interest, the Council for Economic Education has a whole book that provides examples of children's stories that can be used to teach economics, including questions for students and follow-up activities. There's also a 2007 article by Yana V. Rodgers, Shelby Hawthorne and Ronald C. Wheeler, "Teaching Economics Through Children's Literature in the Primary Grades," in The Reading Teacher 61(1), pp. 46-55. That article lists the 'top five' books for a number of specific concepts; the full list of several hundred titles can be found at http://econkids.rutgers.edu/, which is an entire website devoted to using children's literature to teach economics (also mentioned in a follow-up NYT post on Economix). I should point out that although the obvious audience for these sorts of lessons is younger children, I can also imagine using children's books as the basis for an assignment for older students (for example, give them a list of the books and have them identify the key economic concepts associated with each, thus reinforcing the idea that economics is everywhere).
  • Tutor2u describes a first-day "golf" game to see if students are familiar with current events, using an included PowerPoint file (note: the file provided with that post focuses on the U.K. and European Union but the questions could easily be adapted for American students). The general set-up for the game would work well for any team contest: each question has four answer choices and students can choose to submit only one response (an 'eagle' if they get it correct), or two possible answers (a 'birdie' if one of the two is correct), or three answers (for 'par'). If none of their answers are correct, they get a 'bogey'. Looks like a neat approach! (A quick sketch of the scoring logic appears after this list.)
  • If you aren't on the tch-econ mail-list (why aren't you?), you missed Bill Goffe's message about a set of videos by Dr. Stephen Chew (a cognitive psychologist) on how to study effectively. Chew uses cognitive science, what we know about how people learn, to explain not only what students should do but why. These would be great to show and discuss with college freshmen, though I also think they'd be useful to students at any level. The next time a student asks you how to do well in your class, I'd suggest pointing them to these videos.
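As promised above, here is my reading of the golf scoring in code form (a sketch; the score names are from the Tutor2u post but the implementation is my own guess):

    # My reading of the Tutor2u golf rules (a sketch, not their materials):
    # teams hedge by submitting more answers, at the cost of a worse score.

    def golf_score(answers_submitted, correct_answer):
        if correct_answer not in answers_submitted:
            return "bogey"                                # missed entirely
        return {1: "eagle", 2: "birdie", 3: "par"}[len(answers_submitted)]

    print(golf_score({"b"}, "b"))       # eagle: committed to one answer, got it
    print(golf_score({"a", "b"}, "b"))  # birdie: hedged across two answers
    print(golf_score({"a", "c"}, "b"))  # bogey: wrong either way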

Friday, August 19, 2011

Other peer reviewing tools

As I mentioned at the end of my last post, SWoRD does provide an infrastructure that makes it easier to have students do peer review - students can submit papers electronically, the system can randomly assign multiple anonymous reviewers, I can create specific comment prompts so reviewers must give both numeric and open-ended comments, and students can back evaluate the reviews to indicate how helpful they were (or weren't). Given that I am a firm believer in the value of the peer review process overall, I would perhaps continue to use SWoRD if there were no other options that could serve the same function. But if I'm not going to use the grades generated by SWoRD (or if I need to do a lot of work to make those grades work for me), then I do have other options. Each would require some tweaking to do exactly what I want to do but from what I can tell, they all provide some advantages over SWoRD as well. Please note that I have not yet actually used any of the three tools I mention below; what follows are my impressions based largely on what I've seen while playing around with them, and what I've heard from other people.

*** Note: If you don't care so much about having students do peer review (like if you are having students do writing assignments where the emphasis is more on the content and not so much the writing itself), Bill Goffe pointed out a site called SAGrader, also mentioned in a recent Chronicle article, that can automate grading of essays. It doesn't look like they have any economics assignments in their library so you'd have to work with them to create the assignments and rubrics but it looks like it could be pretty great for those teaching large classes.

Option 1: Turnitin's PeerMark
If your campus uses Turnitin, this seems like the best option for peer review of writing assignments. Most faculty are probably already familiar with (or have heard of) Turnitin's plagiarism detection tools, now called OriginalityCheck. The PeerMark tool is integrated with OriginalityCheck, and another tool called GradeMark (I'm not sure if a campus can subscribe to one of these tools and not the others, or if they are always integrated; we have all three at my school, integrated through Blackboard). Even if you aren't interested in peer reviewing, the GradeMark tool is kind of neat too - you can set up grading rubrics and it's pretty easy to insert your comments. With PeerMark, the students submit their files online and the system can either assign reviewers randomly or you can assign them manually, or students can even select which papers they review (or you can use a mix of all three). You can decide how many reviews each student must do and can make the papers and reviews anonymous (or not); there is also an option to allow (or not) students who did not submit a paper to still review. You can set up both open-ended comment prompts and scale response questions (i.e., discrete categories, like a 1 to 5 scale) for the reviewers. The interface also allows comments to be inserted at specific points in the paper, similar to comments in Microsoft Word if you're familiar with those (so, for example, you can just insert "This sentence needs a comma" right at the sentence instead of "The second sentence of the third paragraph on the first page needs a comma"). The instructor sets the dates and times when papers are due, when they become available to reviewers and when reviews become available to authors.

PeerMark has a way to just give students full credit for reviews that meet certain criteria (i.e., all review questions are answered and you can set minimum length requirements for open-ended questions), or you can grade them manually.  You could also have students 'back evaluate' the reviews separately (I'd probably use a survey in Blackboard) and use those to grade reviews, or be part of the grade. It would also technically be possible to use the reviewers' scores on scale questions as part of a writing grade for the authors (that is, similar to what SWoRD does, though you'd have to do the calculations yourself) but from what I can tell, you'd have to do some cutting and pasting to get the scores out and into a spreadsheet.
Pros: Like SWoRD, PeerMark automates a lot of the process (assigning reviewers, etc.) so students can get/give feedback from multiple peers but in contrast to SWoRD, the system has a lot of flexibility in terms of setting options and students can easily insert comments and mark up the papers directly.
Cons: Only available if your campus already uses Turnitin.

Option 2: Google Docs
ProfHacker has a post about using Google Docs Forms to run a peer-review writing workshop. Although that post is talking about an in-class workshop, I think everything would apply to out-of-class reviewing as well. The basic gist is that students submit their papers via Google Docs, then use a Google Docs Form to complete their reviews. Forms allow for open-ended comment prompts as well as questions with discrete choice responses, and Form responses are recorded in a Google Docs spreadsheet. Things wouldn't be quite as automated as with PeerMark: the instructor would have to match up papers with reviewers (a simple script could do this; see the sketch after the pros and cons below) and manually keep track of whether students met deadlines. On the other hand, the review comments will already be collected for you, so if you want to use the scores for grading in some way, that should be easier.
Pros: Google Docs is free to both the instructor and students, and review comments and scores are recorded in a spreadsheet so you can manipulate them relatively easily. If you want to use the reviewer comments and scores to create grades, you could create your own algorithm but have a lot more flexibility with things like deadlines than with SWoRD.
Cons: The process is not as automated as other options; for example, if you want student papers to be anonymous, you'll have to figure out a way to do that outside the system (maybe have students create pseudonyms they use all semester?).
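Here is one minimal way the matching step could be scripted (my own sketch, not anything from the ProfHacker post): shuffle the roster and have each student review the next few papers in the rotated order, which guarantees no one reviews their own paper.

    # A sketch of the matching step (mine, not from the ProfHacker post):
    # each author gets the next k students in a shuffled roster as
    # reviewers, so no one is ever assigned their own paper.

    import random

    def assign_reviewers(students, reviews_per_paper=2):
        order = students[:]        # copy so the original roster is untouched
        random.shuffle(order)
        n = len(order)
        return {order[i]: [order[(i + k) % n] for k in range(1, reviews_per_paper + 1)]
                for i in range(n)}  # maps each author to their reviewers

    print(assign_reviewers(["Ana", "Ben", "Chiara", "Dev"]))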

Option 3: Calibrated Peer Review
This is the option I am least familiar with but the general idea is that before reviewing their peers' work, students must first evaluate some sample essays and they get feedback on how good a job they do with those evaluations. From what I can tell, the calibration exercises require students to respond to discrete-choice questions (e.g., 'Does the essay have a clear thesis statement? yes or no'). The feedback they get is then a score of how many questions they answered 'correctly' (i.e., with the same answer as the professor), along with any additional comments the instructor wants to add about specific questions. Once students pass the calibration exercises, they review three of their peers' papers (I'm not sure if you can set it to be more or less than three) and they must review their own paper as well. I don't think it's possible to have students respond to open-ended review questions; it looks like all the review prompts require a discrete response. The system does generate writing and reviewing scores that could be used toward grades. To get the writing scores, the reviewers' responses to the reviewing questions are weighted based on how students did on the calibrations (higher calibration score, more weight given to that student's actual reviews); a toy version of this weighting is sketched after the pros and cons below. The system also generates a reviewing score for students by comparing their responses to the weighted average of the other reviewers of the same paper, plus it generates a self-assessment score that compares a student's self-evaluation to the other reviewers' scores. Because I haven't used CPR myself, I don't know if the scoring has any of the same issues as SWoRD but my assumption is that the calibration stage means there is more consistency across reviewers so scores should be more consistent as well.
Pros: CPR gives students lots of guidance for being good reviewers (which ultimately should mean more useful feedback for the writers). I should say that feeling ill-equipped to give useful reviews was one of my students' biggest complaints, so this aspect of CPR is really appealing. The way the writing and reviewing scores are generated seems more transparent than in SWoRD.
Cons: No open-ended comments from reviewers; major prep cost to set up the calibration examples (though presumably a one-time fixed cost).
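And here is the toy version of the weighting I mentioned above (hedged: I haven't seen CPR's actual formula, so the function and numbers are my own invention):

    # A toy version of calibration-weighted scoring (my guess at the
    # flavor of CPR's approach; I have not seen the actual algorithm).

    def writing_score(reviews):
        # reviews: list of (calibration_score, rating) pairs, one per reviewer
        total_weight = sum(cal for cal, _ in reviews)
        return sum(cal * rating for cal, rating in reviews) / total_weight

    # The well-calibrated reviewer (0.9) pulls the score toward her rating of 6.
    print(writing_score([(0.9, 6), (0.5, 4), (0.4, 4)]))  # 5.0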

Personally, I will probably use PeerMark in the spring when I teach the writing class again, but I may try to replicate some aspects of CPR by giving students more examples of 'good' and 'bad' papers and reviews.

Wednesday, August 17, 2011

SWoRD follow-up

I really should have gotten back to this sooner but for those who are wondering how things went with SWoRD, the peer review writing site I used with my writing class in the spring, my overall reaction is that while it might be useful for some people, I probably won't use it the next time around. For those who missed my earlier posts, I discussed the basics of SWoRD, whether SWoRD can replace instructor grading, and some first reactions to SWoRD's reviewing process (after the first assignment) back in March. I made some tweaks as the semester progressed but overall, I have to say the experience was still pretty rough.

To briefly recap, SWoRD is an online peer review system where 1) students upload their papers, 2) the system randomly assigns other students to anonymously review those papers, 3) peer reviewers give both open-ended comments and numeric ratings in response to instructor-generated prompts, 4) authors 'back evaluate' their reviews, which means they give a numeric rating of how helpful the open-ended comments were, and 5) the system uses the numeric ratings from the reviewers to generate a writing score for the authors and uses the back evaluation ratings from the authors to generate a reviewing score for the reviewers. That last step, having the writing and reviewing scores generated entirely from the students themselves, is the main benefit of SWoRD, relative to other online peer review options like Calibrated Peer Review or Turnitin's PeerMark. My opinion is that the system has some problems that make those grades somewhat suspect. Unfortunately, I'm not sure there really is any satisfactory way to automate that process.

"Bad" reviewers may not be penalized
For starters, my original understanding of how the SWoRD grading system works was incorrect. I relied on some research papers that are posted on the SWoRD site (papers published a few years ago) and the system has since been changed but that is not explained anywhere on the site. The earlier papers said that the writing grades were weighted in such a way that if the score from one reviewer was substantially different from the scores from other reviewers, that score would be given less weight. However, that is not actually the case, which I discovered when one of my better students kept bugging me about his grade on one particular assignment. When I looked at the scores, there was one reviewer who gave 1's and 2's (out of 7) to all the papers he reviewed. Since that reviewer also did not provide very helpful comments, my guess is that he was either confused about the scoring or just lazy and not taking it seriously. Based on my original understanding, I thought the fact that his scores were so much lower than the other reviewers should have lowered that student's 'accuracy' reviewing grade and his scores should have been given a lot less weight for the students he reviewed. Neither of those things happened (his reviewing grade was actually somewhat higher than the class average and his scores definitely reduced the writing score for those papers). When I asked the SWoRD team about this, the response was that the "accuracy" part of the reviewing grade is based on rank orderings, not a comparison to the other ratings; that is, as long as the reviewer is giving higher ratings to 'better' papers and lower ratings to 'worse' papers, the system considers the ratings to be 'accurate'. The message from the SWoRD team said that they had "decided it wasn't valid to penalize someone for using a different range of the scale because often they were actually the most valid rater, with other students rating too high overall. If the instructor decides [a student] was unreasonably harsh, the thing to do is give [that student] a lower reviewing grade." On the one hand, I understand why they made that change, since I definitely noticed that my better students tended to give somewhat lower scores, on average (along with better comments justifying their scores), than their classmates. On the other hand, if I have to go through and scrutinize all the scores to see if students are scoring appropriately, that seems to defeat the whole purpose in having the scoring algorithm in the first place.
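To make the rank-ordering point concrete, here is a toy agreement measure of my own devising (SWoRD's actual formula isn't spelled out anywhere I could find): it counts the fraction of paper pairs that two reviewers order the same way, so a uniformly harsh reviewer can look quite 'accurate' even while his raw scores drag every paper's grade down.

    # A toy rank-agreement measure (my own, not SWoRD's actual formula):
    # the fraction of paper pairs that two reviewers order the same way.

    from itertools import combinations

    def same_ordering(a, b):
        pairs = list(combinations(range(len(a)), 2))
        agree = sum(1 for i, j in pairs
                    if (a[i] - a[j]) * (b[i] - b[j]) > 0      # same strict order
                    or (a[i] == a[j] and b[i] == b[j]))       # tied in both
        return agree / len(pairs)

    class_average  = [6, 5, 4, 3]   # typical ratings of four papers (1-7 scale)
    harsh_reviewer = [2, 2, 1, 1]   # the 1s-and-2s reviewer described above

    print(same_ordering(class_average, harsh_reviewer))    # ~0.67: mostly "accurate"
    print(sum(harsh_reviewer) / 4, sum(class_average) / 4) # 1.5 vs 4.5 raw average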

Incomplete information for back evaluations
Based on my reading of the research papers, in the earlier versions of the system, students could not submit back evaluations until after they turned in their second draft but they did see both the comments and the numeric scores from the reviewers (requiring them to turn in the second draft before doing the back evaluations was a way to make sure students actually had to process the comments before evaluating them). In the current version, students do not get to see the numeric reviewing scores until after they have submitted their back evaluations. Again, I can understand why this change was made; I can certainly imagine that some students would 'retaliate' for low reviewing scores by giving low back evaluation scores. But on the other hand, I saw many instances where reviewers gave scores that were not consistent with, or explained by, their open-ended comments (for example, a vague comment that 'everything looks fine' followed by a score of 3 or 4 out of 7). In my opinion, those reviewers should be given lower reviewing scores but the only way to accomplish this would be if the instructor goes in and manually reviews all the scores and comments, again defeating the purpose of having the scoring automated.

Reviewing itself is useful (but I'm still learning)
Given the problems with the scoring, I was expecting more negative comments from the students at the end of the semester but evaluations of the system were actually relatively positive, though less than half thought I should continue to use it in the future. Many of the critical comments were about the reviewing process itself (e.g., wanting more guidance for how to do good reviews, feeling like classmates didn't take it seriously enough or didn't give useful feedback, saying they should only review three papers instead of four or five, etc.), rather than the SWoRD system. The SWoRD-specific comments had to do with things like the deadlines being 9pm which was hard for students to remember (this isn't something the instructor can change), or the files being converted to PDFs so it was hard to refer to specific points in the papers (versus hard copies or Word docs that could be marked up). But students did seem to see the value in the reviewing, with several students commenting that doing the reviews helped them see where their own papers needed improvement.

So to sum up, I do think that the SWoRD system can still be useful for some instructors; if nothing else, it provides an infrastructure for students to submit papers, have reviewers randomly and anonymously assigned, and give/get feedback from multiple readers. You don't have to use the scores that the system generates. I particularly think SWoRD could be good for shorter assignments, where the evaluation criteria are relatively objective (and thus reviews might be more consistent). But if you aren't going to use the grades generated by the system, I think there may be other, better tools that could be used to facilitate peer reviewing; I'll talk about some of those options in my next post...

Sunday, August 14, 2011

Getting off-course

It's a frustrating time to be an economist, though I can't decide if it's worse to be a micro- or macro-economist these days. I have to assume that many macro folks are tearing their hair out over the stupid things Washington is doing and the even stupider things the media is often saying, but at least when someone asks a macro person what they think of all this stuff going on, they supposedly are in a much better position to talk about it than most micro people (I'm not saying that stops me from talking anyway; I'm just sayin' that as a micro person, I don't spend my life studying these things and really, my understanding of things is only slightly better than what we teach in Econ 101). I've almost entirely stopped reading anything about the economy from regular news outlets because I kept seeing things that made me wonder if I had some basic economic concepts totally wrong, only to realize that my understanding is fine but reporters apparently didn't learn anything in Econ 101. So I've largely kept up with things this summer through blogs written by economists, though sadly, the news isn't any less depressing when analyzed accurately...

But regardless of my somewhat basic grasp on macro policy, one thing that has crossed my mind a few times this summer is that I hope economists are talking with their classes about what's happening, even if it isn't directly related to the course material. It seems to me that the issues the country has been grappling with - how important is debt reduction, why hasn't the unemployment picture been improving and what needs to be done about that, etc. - are things that our majors should be aware of, even if they don't happen to be enrolled in a macro class. Perhaps even more important, they should be thinking critically about what is going on and what the media is saying about it. For example, does it make any sense to them that the stock market plunge was 'a result' of the S&P downgrade, as many news analysts have been saying? Does it make any sense to them that huge cuts in government spending will somehow reduce unemployment, as some politicians have been claiming?

When I was teaching Principles, it wasn't a big deal to bring in current events, even if they were macro issues and the class was micro (since usually, macro issues can still be discussed in terms of core principles like incentives or supply and demand). But when teaching more narrowly-focused upper-division classes where the course subject may have nothing to do with the events that are happening, it seems harder to justify taking class time to talk about things that are not directly course-related. Still, it seems to me that we should, at least given the historic magnitude of what's going on right now. I'm not sure how I'll fit it into my data analysis course in the fall but probably when the 'super-committee' comes out with its recommendation later this year, I will try to spend at least part of a class talking about it. What about you? Do you ever take class time to discuss 'off-topic' current events?

Tuesday, August 9, 2011

Do you give credit for participation?

[Dilbert comic via Dilbert.com]

This morning's Dilbert was perfectly timed as I was in the middle of trying to figure out the grade weights for my fall Econ for Teachers class and as usual, having a huge mental debate over how much weight to give 'participation'. A couple of Teaching Professor posts this summer hit on the same issue so it's already been at the back of my brain. In my data analysis course, participation is rolled into the team grades and that takes care of it; I've found that students have a strong tendency to 'punish' their peers for low participation by giving them low peer evaluation scores. But with the Econ for Teachers class, I do a lot of formative-type assessments that I'm not going to "grade" for content (e.g., student reactions to readings where I ask them to relate the reading to something in their own experience), so I have to decide how much credit to give students simply for completion. I want students to take those assignments seriously and the economist in me believes in incentives but at the same time, I don't want students to be doing things just for the points; ideally, I want them to be intrinsically motivated. To a certain extent, I think that carefully crafted assignments can go a long way with that - the intrinsic motivation comes when students see the purpose of what I'm asking them to do. But I don't feel like I can just ask them to do it and not give them any points at all (though really, why not?).

And then there's participation in the form of class discussion - how in the world does anyone ever assess credit for that? I don't usually try; I just rely on well-formed questions (which can often mean silence, when the questions aren't as well-formed as I had thought!). Again, groupwork helps; even in classes where I don't do formal teams, I try to have students talk in small groups when I really want them to discuss something. Given my class sizes, full-class discussion is simply never going to fully involve everyone. But should I give students credit for trying? For showing up? Shouldn't that be a basic expectation for all students? That is, why give points for doing what is expected (participating in your own education)? I'm guessing I'll still be asking these questions for years...

Wednesday, July 6, 2011

Why NOT have cell phones in the classroom?

If you've read my last few posts about PollEverywhere with skepticism (or skipped them entirely) because you just can't imagine ever letting students use their cell phones in class, my question for you is WHY NOT? Or more specifically, is your aversion to cell phones driven by concerns about helping good students or reining in bad students? By 'good' and 'bad', I'm not talking about those who get good and bad grades; I'm talking about those who care about learning, who want to be there, and those who don't. I've been thinking a lot lately about how much I tend to focus on the latter group, and how often I tend to forget about the former group, and how backward that is...

"Students will cheat"
For example, I know that for some teachers, the biggest problem with a service like PollEverywhere is the concern that students will use their phones to cheat. At a meeting to discuss options for a new clicker vendor for our campus, a few faculty flat out said, "I don't want to use PollEverywhere because I think students will cheat (either by texting their friends for the answers or perhaps by submitting answers when they aren't in class)". There were some student representatives at the meeting who were understandably offended. One of those students responded (in a somewhat indignant tone) that she would be more focused on getting her answer submitted on time than on trying to text friends for the answer. Her answer made me think about how much energy we faculty sometimes expend to prevent cheating, and how much of a disservice we may be doing to all the other students. I'm not saying there aren't plenty of students who cheat, and of course we don't want to make it too easy or tempting for them, but if students really want to cheat, they are going to find a way. I don't think I should compromise pedagogy just to try to stop them.

"Students will pay less attention in class"
A related concern is that if cell phones are out, students will use them for non-class-related things. This is similar to concerns about allowing laptops. Over the last few years, I have become less sympathetic to faculty who ban cell phones or laptops from their classrooms entirely in the hope of retaining their students' attention, partly because I believe it's a futile tactic and partly because I think it's misplaced blame. With regard to the futility, I was recently talking to a couple of my students and I asked them what they thought about PollEverywhere. They both thought it was better than clickers because they preferred using their phones. I specifically asked them if they used their phones more, or noticed people texting a lot during class (I should add that these were two of my better students). One said that she did text her friends in class, and knew others did as well, but a) instead of doing it while I was lecturing or there was class discussion going on, she did it when they were working in their teams, and b) she didn't text any more or less than in any of her other classes, but she appreciated that she didn't feel like she needed to hide the phone. Her next words really struck me: "If there's one thing you learn in high school, it's how to text without getting caught." Her classmate nodded vigorously in agreement. So if you're banning cell phones because you think it means students will pay closer attention, you may be fooling yourself.

Where does it end?
I have to say that the comment about texting during groupwork instead of other times made me kind of happy; I interpreted it to mean at least that particular student found my lectures and the class discussion to be interesting and useful. But others might see it as an indictment of team-based learning - after all, I can't ensure that they spend every single minute of team discussion time actually discussing the assigned problem. But students who are engaged and want to learn will be engaged, even if they also still try to multi-task (just as I can still contribute in a meeting where I'm also surreptitiously checking my email periodically). If students are totally uninterested, taking away their cell phone or simply lecturing all the time isn't going to force them to engage - they'll just pass notes, doodle in their notebooks, fall asleep, or not come to class at all. While I do know that many college students are still immature enough that we need to give them some extrinsic incentives to do what we know is 'good for them' (especially in lower-level classes with mostly freshmen and sophomores), I guess I'm wondering how far we really should go. If we avoid pedagogical innovations that could improve engagement because they also give students more freedom, and we are worried about the handful of students who may not be mature enough to handle that freedom, isn't that a disservice to the students who are mature enough to handle it? At what point is being paternalistic just counter-productive?

Thursday, June 30, 2011

PollEverywhere: Summing up

[This post wraps up my reflections on my pilot of PollEverywhere this past spring. If you missed my last two posts, I discussed how I used PollEverywhere in my data analysis course, and student reaction to it.]

So here are my general thoughts on PollEverywhere, particularly relative to standard clickers:

Advantages
  • Convenience for students. They all have cell phones so they never 'forget' their device the way they do with clickers.
  • Easy to ask open-ended questions. Even on clicker systems that have this feature, it is generally easier with cell phones/laptops.
  • Relatively low-cost. If you have fewer than 30 students, the service is totally free; if you need to track more responses, there is a cost for a PE account that someone (you, your institution, or the students) will have to bear. For students who do not have unlimited texting, there may be costs related to sending/receiving messages; the total cost will depend on how many questions you ask (in a previous post, I calculated that for my class, assuming 10 cents per text, it would still be cheaper for students than buying a clicker; see the back-of-the-envelope sketch just after this list).
  • Relatively low-commitment. Even if clickers are used a lot on your campus (as is the case at SDSU), PE can be a great complement for faculty who only want to use the technology occasionally. For example, in my spring writing course, there was one class meeting where I wanted to survey the class about some plagiarism issues, but since it was only that one day, it really didn't seem worth having them buy and register clickers. In the past, I probably would have either skipped the surveying or I would have asked students to answer the questions before class; instead, I used PE (note that in this situation, I did not need to track responses so students did not need to have accounts or register their phones).
  • Few problems with software. This is a relative thing - many faculty on my campus have had major issues with the clicker software we've been using. In comparison, PE was really easy and almost always worked well. Not perfect but glitches were pretty rare.
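To put rough numbers on that cost comparison, here's a quick back-of-the-envelope sketch. The 10-cents-per-text figure is from my earlier post, but the question count and clicker price below are purely illustrative assumptions, not my actual numbers:

```python
# Illustrative cost comparison. Only the per-text price comes from my
# earlier post; the question count and clicker price are assumptions.
per_text = 0.10        # assumed cost per text message, in dollars
questions = 150        # assumed number of poll questions in a semester
clicker_price = 40.00  # assumed one-time price of a clicker

texting_cost = per_text * questions
print(texting_cost, clicker_price)  # 15.0 vs. 40.0: texting is cheaper
```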
Disadvantages [with the caveat that some of these things seem like a pain to me just because I'm used to clickers; if you've never used clickers, you may not find most of these issues particularly problematic. Also, according to a comment on my first post, the PollEverywhere folks are working on several of these concerns!]
  • Lack of integration with University systems. Students have to register on a separate site and getting grades into the course management system requires more work than most clicker systems.
  • Multiple submissions from same person recorded as separate responses. If you allow multiple responses (which allows students to change their answer after submitting something), you'll have to deal with sorting those out afterwards.
  • Integration with PowerPoint is kind of clunky. I do everything in PowerPoint and prefer not to switch to a separate site to ask questions, so I embedded all my PE questions into PowerPoint. To do this, you create all the questions on the PE site and then export a PowerPoint file. The questions themselves are Shockwave Flash objects embedded in the slides, which I would then cut and paste into my class PowerPoint presentation (where I also created a timer, which I'm used to having for clicker questions). It's not difficult; it's just more time-consuming and not nearly as seamless as with clicker systems.
  • Difficult to ask questions on the fly. Many clicker systems have a way to get responses to questions that have not been prepared ahead of time (e.g., asking a question verbally or creating a new question on the fly). It isn't hard to create questions in PollEverywhere but in order to generate the codes for responses, you do have to go into the system and create the question with answer responses. I think trying to ask something that you didn't prepare ahead of time would be too time-consuming to do spontaneously in class. It occurs to me as I write this that one possible workaround (if you anticipate needing this option) is to create a generic question so the codes are already created...
  • Need consistent cell service. Most of my students do not have laptops so if cell service doesn't work for someone, there is no alternative way for them to submit responses to the system. I really am not sure what I would do if I had more than two or three of these students. In my opinion, this is the biggest reason not to have PollEverywhere completely replace clickers (e.g., if a campus is trying to decide on one standard, as SDSU does).
Other issues [Not necessarily good or bad but stuff you should think about if you're considering using PE]
  • Students have cell phones out during class. Duh. This is not a problem for me but I know it is a huge issue for other people. Aside from issues related to students cheating (which I didn't worry about because all my questions were low-stakes) and students texting their friends (which I don't worry about in general), the one thing I was a bit concerned about was whether students would remember to turn off their ringers. I put a reminder on the first slide to silence phones (there's an image in my previous post) and I have to say, I don't think I had any more ringers go off than in a typical semester; if anything, I think there may have been fewer.
  • Monitoring answer distribution is kind of clunky. This is true with the eInstruction clicker system as well - if you want to see what the answer distribution looks like but don't want students to see it (e.g., if you think the distribution is likely to be mixed and you plan to re-ask the question), the system itself doesn't really let you do this since whatever you see on the computer is what is shown on the projector screen. My workaround is that I usually 'freeze' the projector (so the image on the screen doesn't change even if what I see on the computer monitor changes). This is only possible in some smart classrooms on my campus, not in others.
  • No technology is 100% perfect. You still need to have a plan for how you will handle things like the system being slow, or students who insist they submitted responses but the system didn't register them. As with any technology, it's best if you go in with a flexible attitude. One of my students commented, "I loved polleverywhere... my only concern would be if other professors would be able to utilize the program as efficiently... Sometimes when professors who are not tech savvy attempt to use such programs, it sometimes eats up more class time just for them to figure out how to use it." PollEverywhere is pretty user-friendly but if you're the type of person who gets flustered easily when things don't go entirely according to plan, you may want to do a lot of practice runs before integrating it into a live class.
Overall, I think PollEverywhere is a great service, particularly for faculty who want to incorporate clicker-type activities but don't want to make their students buy/register a clicker, or if you really want to ask open-ended questions. I do think it is best for low-stakes activities, particularly given the issues I encountered with cell reception, but it would be great as a back-channel or in classes where the technology is only needed once in a while.

Monday, June 27, 2011

Student response to PollEverywhere

In my last post, I described how I used PollEverywhere in my data analysis course this spring. In this post, I'll discuss student reaction to PollEverywhere; in my next post, I'll wrap up with my own impressions and thoughts/suggestions for others who are considering using it in their classes.

At the end of the semester, I surveyed my students about a number of aspects of using PollEverywhere. SDSU's Instructional Technology Services asks all clicker-using faculty to administer the same survey every semester and I adapted most of those questions for PE. Many of those questions aren't really about the specific technology but are about using any kind of response system (e.g., "Clickers/PollEverywhere usage helps me to remember course content"). I discussed student responses to clickers a couple years ago and reactions haven't changed much (if anything, the percentages of students agreeing with most of the statements have increased); reactions to PE on those questions look really similar. More relevant to PE specifically, I did ask a couple questions about PollEverywhere versus other options, and some questions about technical problems (percentages based on both sections combined, n=122 for most questions):
  • In the future, I would select a course section which uses PollEverywhere over another section of the same course which did not use PollEverywhere (note that the comparison here is really another course that does not use any response technology, not comparing to a course that uses clickers instead of PE): 60% strongly or somewhat agree; 6% strongly or somewhat disagree (that's 33% who neither agree nor disagree)
  • In the future, I would select a course section which uses PollEverywhere over another section of the same course which uses traditional clickers: 75% strongly or somewhat agree; 6% strongly or somewhat disagree (19% who neither agree nor disagree)
  • I would like more professors to use PollEverywhere in their courses: 75% strongly or somewhat agree; 6% strongly or somewhat disagree (19% who neither agree nor disagree)
  • I was able to set up my PollEverywhere account and register my phone (if applicable) with few or no problems: 92% strongly or somewhat agree; 6% strongly or somewhat disagree
  • Once everything was set up and registered, PollEverywhere has worked well for me: 90% strongly or somewhat agree; 5% somewhat disagree
Preferred over clickers: I should point out that for the first and third questions (about choosing a section that uses the technology over a section that does not, and wanting more professors to use the technology), the percentages agreeing about PollEverywhere are much higher, and the percentages disagreeing much lower, than when I asked the same questions about clickers in my fall data classes, where roughly 37% agreed and about 20% disagreed (remember, I pretty much just replaced clickers with PollEverywhere for the spring classes; the questions themselves, and how the technology was used, were the same). Students also seemed to prefer PE over clickers specifically (about 80% of the students had used clickers in other classes). Of course, I could be really cynical and suggest that maybe students prefer PE because it meant they could have their cell phones out in class, and I don't doubt that actually does make students more receptive to PE. But in their open-ended comments, students pointed out that it was more convenient because they always have their phones with them (versus having to remember to bring clickers), they liked having multiple options for submitting responses, and those with unlimited texting liked that it was free. On the other hand, a couple students complained that it wasn't free for them because they pay per text. Other student complaints were similar to complaints I've heard about clickers: their phone died and they had to miss points (I drop 10% but that doesn't stop students from complaining), they needed more time to submit responses, and sometimes responses wouldn't go through.

Reception issues: That last comment deserves more discussion. One uncontrollable factor that has a huge impact on the success of PollEverywhere is cell phone service. The vast majority of my students (89%) submitted their responses via text message; only a handful submitted via web browser either using their smartphone (3%) or laptop (5%). But there were also a couple of students in each class (3% total) who were never able to submit via phone because of bad cell reception and they did not have laptops; my understanding is that pre-paid phones and Nextel phones were the worst. I allowed those students to submit their responses on paper and I manually recorded their scores. Note that this policy only made sense for me because a) almost all the PE questions were low-stakes and b) I only had a couple in each section (out of 75). There were also a number of students who could submit via text most of the time but would have a problem every once in a while; I told those students they could either write their responses on a piece of paper or just let that be part of the 10% of points that would be dropped.

There were a few times when the glitches did seem to be with the PollEverywhere site (e.g., it took a long time to get a confirmation text, or some students didn't get a confirmation text at all but the system did record their response) but I don't expect perfection from any technology (and the problems were definitely rarer than the last time I used eInstruction clickers in the 500-seater). Just as with clickers, I think it's imperative to have a flexible policy (like dropping some points or having alternative ways to get the points) so students don't freak out. More about that in my next post...

Friday, June 24, 2011

Using PollEverywhere instead of clickers

Months ago, I mentioned that I was part of an ITS pilot of PollEverywhere this past spring. Quick reminder: PollEverywhere is a web-based service where anyone can create a multiple-choice or open-ended question and people can respond via text, Twitter or website. I first used PollEverywhere in the fall when I wanted a way for my teams to submit open-ended responses. The free version only allows up to 30 responses per poll, which was fine for 13 team responses but wouldn't work for individual responses (since I have 75 students in each of my sections), so I used clickers for any individual responses. In the spring, the University bought a PE account subscription so there could be unlimited responses. It also meant that students could register and their responses were recorded, so I could use PollEverywhere as a replacement for clickers. In this post, I'll explain the mechanics of how I used PollEverywhere and some of the associated pluses and minuses. In my next couple posts, I'll talk about what the students thought and my overall impressions.

Low-stakes assessments: I mostly used PE to have students submit their individual responses to multiple-choice application problems before they discussed those same problems in their teams. My main concern was making sure that students had to think about the problem on their own, at least a little, and commit to an answer before discussion. I didn't care so much which specific answer they chose, so students received credit just for answering anything (i.e., participating); there were only a couple times when I made credit dependent on selecting the 'right' answer. PE allows you to embed polls in PowerPoint and that is what I did, rather than switching over to the website each time. One downside of PE, relative to clickers, is that there is no timer, so I created one using animation in PowerPoint. It's a clunky workaround but if you want to give students a visual indication of how much time they have to answer a question, I'm not sure what the alternatives are.

Grouping questions: One thing I had to decide was whether I wanted to 'group' my questions together or not. The way PE normally works, each answer choice has a randomly-generated unique keyword; for example, if you want to choose answer A, then you send in '70101' and if you want to choose answer B, then you send in '70103', etc. With a paid account, you can also create your own keywords to replace the random numbers (but they still have to be unique since the keyword identifies both the question and the specific answer choice). An alternative is grouping multiple questions together and assigning a keyword to the group. Once you do that, respondents send in the group keyword before any questions are asked; they get a response that says they are enrolled in that 'event'. Then, the codes for individual answer choices within the group are numbered 1-9 and then alphabetically. That is, say the first question in the group has five answer choices; they would be numbered 1, 2, 3, 4, and 5, so students who wanted to select the fifth answer would only have to text in the number '5'. If the next question in the group also has five answer choices, they would be numbered 6, 7, 8, 9 and A, so students who wanted to select the fifth answer there would text in the letter 'A'. This can get a little bit confusing since most of us are used to talking about answer choices as A through E, but I got used to it (there's a little sketch of the numbering logic below the screenshots). For me, grouping questions together made it easier to keep my poll questions organized. I created one group for each class session, for each section, and named the group with the date and section time; for example, May11AM for my 11am class and May11PM for my 2pm class. As soon as students got to class, they knew they should text in the day's keyword so they would be ready to go when the first question came up. Here's what the first slide of every class looked like (this was up as students walked in):

[Image: the opening slide, showing the day's group keyword and a reminder to silence phones]
And here's what a typical question looked like (note that the strip along the side was my 'timer'):

[Image: a typical multiple-choice poll slide, with the animated timer strip along the side]
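If the grouped numbering still seems abstract, here is a minimal sketch of the logic as I understand it - just my own reconstruction for illustration, not anything from PollEverywhere itself:

```python
import string

# Codes run 1-9 and then A, B, C, ... across the whole group of questions.
CODES = [str(d) for d in range(1, 10)] + list(string.ascii_uppercase)

def group_codes(choices_per_question):
    """Return each question's answer codes, numbered across the group."""
    out, i = [], 0
    for n in choices_per_question:
        out.append(CODES[i:i + n])
        i += n
    return out

# Two five-choice questions: 1-5 for the first, then 6-9 and 'A'.
print(group_codes([5, 5]))  # [['1','2','3','4','5'], ['6','7','8','9','A']]
```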
No integration with Blackboard: In order to give students credit for their PE responses, they first have to create accounts in PE and if they want, they can register their cell phones (so any response sent from that phone number is automatically connected to their account). If they don't register their phones, they have to log in each time and submit responses using a browser. Another drawback of PE, relative to clickers, is that it is not integrated with Blackboard, the course management system. This means there are extra steps for students (registering on a separate site) and extra steps for me. To get their daily points into Blackboard, I had to create a 'report' in PE, download that to Excel, make any necessary adjustments in Excel (such as giving credit for right answers versus just participation, or just summing up the points for the day), then upload to Blackboard. A colleague in the business school who also piloted PE this spring has apparently developed an Excel macro that can take care of some of the Excel manipulations, but I just did things manually. For me, the extra work wasn't a huge issue but one thing that was frustrating was that in order to upload to Blackboard easily, I asked students to change their identifier to their University ID number (the default when they create their accounts is their email address). By the third week (and after multiple reminders), almost all students had done this but I had two students (one in each section) who never made the change; since I stopped making the adjustment for them after Week 3, this meant that their PE points were zero for every single class and they STILL didn't figure it out! [Note: if PE were used a lot more across campus so this happened in all their classes, I have to assume they would eventually fix it but I'm still amazed...]
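For what it's worth, the spreadsheet step could probably be scripted. Here's a minimal sketch of the participation-credit case, assuming made-up column and file names ('StudentID', 'Points', etc.) - your actual PE report will almost certainly be laid out differently:

```python
import pandas as pd

# Assumed layout: one row per submitted response (column names made up).
report = pd.read_excel("pe_report_May11AM.xlsx")

# Participation credit: any response earns one point; sum per student.
report["Points"] = 1
daily = report.groupby("StudentID", as_index=False)["Points"].sum()

# Save in a format that can be uploaded to Blackboard's grade center.
daily.to_csv("blackboard_upload_May11AM.csv", index=False)
```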

Dealing with multiple responses: Another issue I had to consider was how to handle multiple responses. With most clicker systems, students can change their responses as long as the question is open and the system will simply retain the last answer submitted. With PE, you can set an option to allow only up to X responses per person or unlimited responses; if you choose to allow multiple responses, PE records every response separately (every response is time-stamped). PE also can send a confirmation text so students can verify their response was received (this is an option you can turn on or off). In my case, since it usually didn't matter which specific answer a student selected, I set things up so they could only submit one response; on the few occasions where their specific choice 'mattered', I made sure to tell students that they needed to be extra careful before sending in their responses since they would only get one shot at it. My colleague in the business school allowed multiple responses and then used his Excel macro to only count the last submission for each student.
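I haven't seen my colleague's macro, but the 'keep only the last submission' step is simple enough to sketch, again with assumed file and column names, since every response comes out of PE as a separate time-stamped row:

```python
import pandas as pd

responses = pd.read_excel("pe_report.xlsx")  # assumed file/column names

# Sort by time stamp, then keep only each student's final submission.
last_only = responses.sort_values("Timestamp").groupby("StudentID").tail(1)
```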

Next time, I'll share some of the feedback I got from students...

p.s. While I was working on writing this post, InsideHigherEd had an interesting article on standardization of clickers that mentions cell phones replacing clickers. And if you're more old-school, ProfHacker just posted an article about low-tech alternatives to clickers.

Related posts:
Texting in response to open-ended questions
Student response to PollEverywhere

Thursday, June 23, 2011

Econ Ed sessions at the Westerns

For anyone attending the Westerns next week, it looks like there are only a few econ ed sessions; here's what I could find. Also note that CSWEP is sponsoring a panel on Thursday morning (8:15-10am) called 'Striking a Balance: Getting Tenure and Having a Life' - grad students and junior faculty are particularly encouraged to attend!

Friday, July 1, 8:15-10:00am
TEACHING ECONOMICS I
Chair: Robert L. Sexton, Pepperdine University
Papers: Satyajit Ghosh, University of Scranton, and Sarah Ghosh, University of Scranton
Beyond ‘Chalk and Talk’: Teaching Macroeconomic Policy with Spreadsheet Simulation
Denise L. Stanley, California State University, Fullerton, and Morteza Rahmatian, California State University, Fullerton
Can Technology Make Large Classrooms Neutral for Learning? The Case of an Upper-Division CBE Core Class
David M. Switzer, St. Cloud State University, and Kenneth Rebeck, St. Cloud State University
Using Online Tools to Improve the Quantity and Quality of Student Evaluations
Gandhi Veluri, Andhra University
Usage of Computer Techniques in Understanding Economics
Discussants: Denise L. Stanley, California State University, Fullerton
Frank M. Machovec, Wofford College
Gandhi Veluri, Andhra University
Sheena L. Murray, University of Colorado, Boulder

Saturday, July 2, 8:15-10:00am
TEACHING ECONOMICS II
Chair: A. Wahhab Khandker, University of Wisconsin, La Crosse
Papers: Manfred Gaertner, University of St. Gallen, Bjorn Griesbach, University of St. Gallen, and Florian Jung, University of St. Gallen
The Financial Crisis and the Macroeconomics Curriculum: A Survey among Undergraduate Instructors in Europe and the United States
Susan Jacobson, Regis College
Community-Based Learning—Making It Stick
David F. Kauper, University of the Pacific
Cost-Minimizing Students
A. Wahhab Khandker, University of Wisconsin, La Crosse, and Amena Khandker, University of Wisconsin, La Crosse
What Should We Teach Our Students About Interest Rate Determination
Discussants: Denise A. Robson, University of Wisconsin, Oshkosh
David F. Kauper, University of the Pacific
David E. Chaplin, Northwest Nazarene University
Manfred Gaertner, University of St. Gallen

Saturday, July 2, 2:30-4:15pm
STRATEGIES FOR TEACHING UNDERGRADUATE ECONOMETRICS
Organizer and Moderator: Elia Kacapyr, Ithaca College
Panelists: Pierangelo De Pace, Pomona College
Lesley Chiou, Occidental College
Christiana E. Hilmer, San Diego State University
Nicholas Shunda, University of Redlands

Monday, June 20, 2011

Early adoption

I've always been a semi-early adopter of technology. I'm not a fanatic about it; I'm just sort of fascinated by the internet, by the ability we now have to reach people we never could in the past, and I like playing around with stuff. Back in college, I would 'chat' with friends via the VAX (I think that's what it was called), and in grad school I learned some HTML so I could create a personal webpage with lots of random stuff on it. I was actually excited when our campus started using Blackboard because it was easier to post my class stuff there than on the webpages I created on my own. And as the number of tech and web-based communication tools has exploded, I've explored a bunch of them, as I've written about here a lot.

But even though I think technology is a wonderful thing, when it comes to teaching, I don't think I use technology just for technology's sake. Rather, I'd say that when I'm faced with a problem, I tend to look to technology as part of the solution. Lisa Lane points out that many faculty are OK with using technology for non-pedagogical problems, like recording grades, and technology is clearly great for simplifying things like distributing course materials. But my interest in using technology for teaching really kicked into high gear when I started teaching the 500-seat class. I certainly can't imagine teaching that class without clickers but once I started using them, I quickly realized the opportunities they create for student engagement so that now, I wouldn't teach a class of any size without them. And that experience has led me to look for other ways that technology can increase interaction both inside and outside the classroom.

Given my own inclinations, I have to admit that I find it a bit odd when I encounter people who seem to be anti-technology. On the one hand, I do understand why some people think Twitter, Facebook, blogs, etc. are a waste of time (because goodness knows they can be!), and I certainly understand the frustration many teachers have with their students' texting all the time and all the associated issues that we could blame on the 'net gen' connection to technology. But on the other hand, I can't help but think that people who make those kinds of comments are, well, big fuddy-duddies, particularly since these comments often come from people who don't actually know anything about the technologies they are disparaging. To me, it sounds a lot like the latest version of, "Eh, kids these days!" And when I hear those comments from teachers, I can't help but wonder: do they not understand that at least some of these tools have the potential to help them reach students, to increase student interaction and engagement? Or is it that they don't care about reaching students? Or, to put that more nicely: why is it that some people perceive the costs of learning about technology to be so much greater than the benefits?