Monday, August 11, 2014

Conversation pitfalls in peer instruction

Dave Patterson and I have long used Eric Mazur's brand of Peer Instruction in CS169 lectures:

  • We display a multiple-choice question on the screen, intended to test students' understanding of the last 5-10 minutes of lecture material;

  • Everyone votes on their preferred answer individually (some people use clickers; we use colored index cards);

  • Each student turns to a neighbor and discusses the answer for about a minute;

  • Everyone votes again.

Students we surveyed say they enjoy this activity; we find it keeps students more engaged during lecture (and forces us to organize our material better); and a substantial body of literature documents studies showing that it improves retention and comprehension.

But what are students discussing when they talk to each other?  In a 2010 paper, James & Willoughby qualitatively analyzed a sample of 361 conversations held among 147 students discussing 45 such questions in introductory physics and astronomy.

They found two fascinating things.  One is that instructors' "idealized" imagination of what mistakes students might make (as embodied by the "distractor" incorrect answers) and what kinds of conversations they'll have is often woefully wrong.  The other is the discovery of several "nonstandard conversation" types, that is, conversations in which students do not, in general, discuss and discard wrong answers in order to converge on the right one:

  1. Unanticipated student ideas about prerequisite knowledge (12.5% of conversations): students may share incorrect prerequisite knowledge the instructor did not anticipate (including misunderstandings of very basic material), apply prerequisite knowledge in not quite the right way, or naively match "keywords" in the question against those heard in lecture to decide which prerequisite knowledge to apply.

  2. Guessing: Students may use cues such as keywords to select a clicker response, or may simply defer to a student they think is more knowledgeable.  So the statistical feedback provided by clickers is not necessarily representative of student comprehension.

  3. Pitfalls (37.7% of conversations): Some discussions never surface statements about the specific reasons an answer is correct. Three variants are: no connection to the question stem ("I think it's (C), do you agree?"  "Yeah, makes sense to me"), especially when everyone agrees that the (wrong) answer is self-evident (30%); passive deference to another student (5%); and inability to converse because, as the transcripts show, all discussants lack the knowledge to even attempt the question (2%).  Deference was more pronounced in "high stakes" settings, where getting the right answer as a result of peer instruction counted relatively more toward students' grades.

The pitfalls occur in roughly the same proportions for recall questions as for higher-level cognition questions.

We're working on some research to allow peer learning to happen in online settings such as MOOCs.  A nice side effect of moving such discussions online is that not only can we instrument them more closely to get a better sense of when these pitfalls happen, but we can even move students among (virtual) groups if certain elements of student demographics or prior performance turn out to predict how they'll behave in a given group type.
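To make the regrouping idea concrete, here is a minimal sketch of one way virtual groups could be formed, assuming we had a per-student score predicting how likely each student is to explain rather than guess or defer. The function name, the roster, and the scoring heuristic are all hypothetical illustrations, not the actual research system:

```python
# Hypothetical sketch: deal students into virtual discussion groups so each
# group mixes predicted "explainers" (high score) with students predicted to
# guess or defer, rather than letting all-guesser groups form by chance.

def assign_groups(students, group_size=3):
    """students: list of (student_id, predicted_score) pairs.
    Returns a list of groups, each a list of student_ids.
    Sorting by score and dealing round-robin spreads strong
    students across groups instead of clustering them."""
    ranked = sorted(students, key=lambda s: s[1], reverse=True)
    n_groups = max(1, len(ranked) // group_size)
    groups = [[] for _ in range(n_groups)]
    for i, (sid, _score) in enumerate(ranked):
        groups[i % n_groups].append(sid)
    return groups

# Illustrative roster with made-up predicted scores.
roster = [("s1", 0.9), ("s2", 0.2), ("s3", 0.5),
          ("s4", 0.8), ("s5", 0.3), ("s6", 0.6)]
print(assign_groups(roster, group_size=3))
# → [['s1', 's6', 's5'], ['s4', 's3', 's2']]
```

In an online setting, the predicted score could be recomputed after each question from the instrumented discussion logs, so groups that fall into the pitfall patterns above can be reshuffled over time.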



Comments are disabled because the only commenters are spammers, despite Google's best efforts. But I welcome actual comments: Google my name and you can easily direct an email to me, and I'll publish your comment here.
