Monday, August 11, 2014

Conversation pitfalls in peer instruction

Dave Patterson and I have long used Eric Mazur's brand of Peer Instruction in CS169 lectures:

  • We display a multiple-choice question on the screen, intended to test students' understanding of the last 5-10 minutes of lecture material;

  • Everyone votes on their preferred answer individually (some people use clickers; we use colored index cards);

  • Each student turns to a neighbor and discusses the answer for about a minute;

  • Everyone votes again.


Students we surveyed have said they enjoy this activity; we find that it keeps students more engaged during lecture (and forces us to organize our material better); and there is a substantial literature of real studies showing that it improves retention and comprehension.

But what are students discussing when they talk to each other?  In a 2010 paper, James & Willoughby qualitatively analyzed a sample of 361 conversations held among 147 students discussing 45 such questions in introductory physics and astronomy.

They found two fascinating things.  One is that instructors' "idealized" notions of what mistakes students will make (as embodied by the "distractor" incorrect answers) and what kinds of conversations they'll have are often woefully wrong.  The other is the discovery of several "nonstandard conversation" types, that is, conversations in which students are not, in general, discussing and discarding wrong answers to converge on the right one:

  1. Unanticipated student ideas about prerequisite knowledge (12.5% of conversations): students may share incorrect prerequisite knowledge that the instructor did not anticipate (including misunderstandings of very basic material), apply prerequisite knowledge in not quite the right way, or naively try to match "keywords" in the question against those heard in lecture to decide which prerequisite knowledge to apply.

  2. Guessing: Students may use cues such as keywords to select a clicker response, or may simply defer to a student they think is more knowledgeable.  So the statistical feedback provided by clickers is not necessarily representative of student comprehension.

  3. Pitfalls (37.7% of conversations): Some discussions never surface statements about the specific reasons an answer is correct. Three variants are: no connection to the question stem ("I think it's (C), do you agree?" "Yeah, makes sense to me"), especially when everyone agrees that the (wrong) answer is self-evident (30%); passive deference to another student (5%); and inability to converse because all discussants lack the knowledge to even attempt the question, as they themselves state in the transcripts (2%).  Deference was more pronounced in "high stakes" settings, where getting the right answer as a result of peer instruction counted relatively more toward the students' grade.


The pitfalls occur in roughly the same proportions for recall questions as for higher-level cognition questions.

We're working on some research to allow peer learning to happen in online settings such as MOOCs.  A nice side effect of moving such discussions online is that not only can we instrument them more closely to get a better sense of when these pitfalls happen, but we can even move students among (virtual) groups if it turns out that certain elements of student demographics or previous performance predict how they'll behave in a given group type.
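
As a purely hypothetical sketch of what that regrouping might look like (the feature names and the "deference risk" score below are invented for illustration; they are not from the James & Willoughby paper or from our system), one could rank students by a predicted tendency to defer and then deal them out across groups so that no group consists entirely of likely-deferrers:

    # Hypothetical sketch only: fields and the "deference risk" model are
    # illustrative stand-ins, not anything from the paper or our actual system.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Student:
        student_id: str
        prior_quiz_score: float      # normalized to 0..1
        prior_participation: float   # fraction of past discussions with substantive posts

    def deference_risk(s: Student) -> float:
        # Toy stand-in for a learned predictor of how likely a student is to
        # passively defer to a partner instead of arguing for an answer.
        return 0.7 * (1 - s.prior_participation) + 0.3 * (1 - s.prior_quiz_score)

    def form_groups(students: List[Student], group_size: int = 3) -> List[List[Student]]:
        # Rank by predicted risk, then deal students out round-robin so that
        # each virtual group gets a mix of likely-deferrers and likely-arguers.
        ranked = sorted(students, key=deference_risk, reverse=True)
        n_groups = max(1, len(ranked) // group_size)
        groups: List[List[Student]] = [[] for _ in range(n_groups)]
        for i, s in enumerate(ranked):
            groups[i % n_groups].append(s)
        return groups

In practice the predictor would be whatever demographic or performance features the data actually supports, and the regrouping could be re-run after every question.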


Wednesday, August 6, 2014

Learning from examples: how to do it right

This survey of the learning-from-worked-examples literature highlights some best practices for using worked examples as a learning aid.

Learning from examples is most effective in stages 1 and 2 of the four-stage ACT-R cognitive framework:

  1. learners solve problems by analogy

  2. learners develop abstract declarative “rules” to guide problem solving (some generalization from step 1)

  3. learners no longer need to consciously invoke the “rules script” to solve problems

  4. learners have practiced many types of problems, so can instantly “retrieve a solution template”


Throughout the survey, "A is more effective than B" is generally established by pre/post testing that measures transfer in controlled experiments. In some cases a hypothesis is proposed to explain the result in terms of one or another theoretical cognitive framework; in other cases no interpretation of the result is offered.

A key finding is that students who engage in "self-explanation" [Chi et al., many many cites], in which a learner pauses while inspecting an example to construct the omitted rationale for a particular step, outperform those who don't.  Here are several ways to stimulate this behavior (marked with * below), along with other best practices for creating and using worked examples:

  1. * Identify subgoals within the task.

  2. * Several partially-worked examples of varying complexity, illustrating various strategies/approaches and with enough "missing" to stimulate some self-explanation, are more effective than fewer but more thoroughly worked examples (see the sketch after this list for what this might look like in code).

  3. * Don’t mix formats in one example; e.g., use either a labeled diagram showing some concepts or a textual explanation of those concepts, but not both: the “split attention” cost actually retards learning.

  4. Don’t assign an “explainer” role to stimulate self-explanation: it actually hinders learning, possibly because of increased stress and reduced intrinsic motivation for the learners.

  5. Visuals accompanied or immediately followed by aural comments are more effective than either visuals or comments alone.

  6. Alternate worked examples with practice problems, rather than showing N examples followed by N problems.

  7. Novices tend to overfocus on problem context rather than underlying conceptual structure; to compensate, use the same context/background for a set of different problem types.
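
To make practices 1 and 2 concrete in a programming context, here is a hypothetical partially-worked example (the task and code are invented for this post, not taken from the survey): subgoals are labeled explicitly, and the final step is deliberately left blank to prompt self-explanation.

    # Hypothetical partially-worked example (invented for illustration):
    # compute the average rating across a list of review dictionaries.

    def average_rating(reviews):
        # Subgoal 1: extract the numeric rating from each review
        ratings = [review["rating"] for review in reviews]

        # Subgoal 2: guard against an empty list so we never divide by zero
        if not ratings:
            return None

        # Subgoal 3 (left for the learner to complete and explain): combine the
        # ratings into a single summary value. Why is the arithmetic mean the
        # right summary here, and what would change if we wanted the median?
        raise NotImplementedError("fill in subgoal 3")

The point of the omission is that filling in subgoal 3, and explaining why that answer is right, is exactly the self-explanation the survey finds so effective.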