Thursday, March 6, 2014

A few high-order bits from Learning@Scale

I tried to gather some notes from the excellent presentations at Learning@Scale, a new conference publishing scholarly research on large-scale online learning.  The conference was co-chaired by Marti Hearst and me from UC Berkeley and by Micki Chi, who directs the Learning Sciences Institute at Arizona State University (which has a long track record of innovating in online and hybrid education).

Many researchers presented great ideas and insights—based on analyzing actual data—about how learners use MOOCs, how they interact with the material, and how we might make improvements.

Here are a few highlights; complete information is available on the conference website:

Philip Guo (MIT, now going to the University of Rochester as faculty) talked about how learners in different demographics navigate MOOCs.  He examined ~40M events across several edX courses, segmented by country and (self-reported) age, and tried to draw some design recommendations from the results:

  • Most learners (>1/2) jump backwards in the course at some point, usually from an assignment to a previous lecture => opportunistic learners => rethink the linear structure of the course (see the sketch after this list)

  • Learners have specific subgoals that fit poorly with the "pass/fail" of the overall certificate: they care about specific skills, and beyond that, just try to get the minimum points to pass.  => get away from a single "pass/fail" and move towards something like individual skill badges?
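
Aside: to make the "backward jump" measurement concrete, here's a minimal sketch of how one might detect such jumps in a clickstream.  The event format (user id, timestamp, sequential index of the course unit visited) is my own assumption, not edX's actual schema.

    # Hypothetical event format: (user_id, timestamp, unit_index), where
    # unit_index is the unit's position in the course's linear order.
    from collections import defaultdict

    def backward_jumpers(events):
        """Return the set of users who ever navigate to an earlier unit."""
        by_user = defaultdict(list)
        for user, ts, unit in events:
            by_user[user].append((ts, unit))

        jumpers = set()
        for user, visits in by_user.items():
            visits.sort()  # chronological order
            for (_, prev_unit), (_, unit) in zip(visits, visits[1:]):
                if unit < prev_unit:  # jumped back to an earlier unit
                    jumpers.add(user)
                    break
        return jumpers

    # e.g., the ">1/2" statistic above would be
    # len(backward_jumpers(events)) / total_learners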


In another talk, Guo described the properties of engaging and affordable video segments:

  • Preproduction planning for ~6 min segments results in more engaging videos than when the professor records "straight through" and expects postproduction to decide the segmenting.

  • A talking head in videos is more engaging than slides-only, as measured by the video drop-out rate over the length of a video (a sketch of this metric appears after this list).

  • Informal shots can beat expensive studio production!  Drop-off is WORSE for an expensive 3-camera/studio setup.  (The comparison spans different instructors and courses, but it shows that an expensive studio doesn't trump other factors.)

  • Khan-style ("tablet drawing") tutorials beat "slides + code" tutorials.  => Use hand-drawn motion, which promotes extemporaneous/casual speaking (vs. a rehearsed script); this in turn "humanizes" the presentation and makes it feel more informal and 1-on-1.
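
The engagement metric behind these findings is essentially a per-video retention curve.  Here's a hedged sketch of how one might compute it; the input format (one "last position watched," in seconds, per viewing session) is an assumption of mine, not the paper's actual pipeline.

    import numpy as np

    def retention_curve(last_positions, video_length_s):
        """Fraction of viewing sessions still playing at each second."""
        stops = np.asarray(last_positions, dtype=float)
        t = np.arange(video_length_s + 1)
        # For each second t, the fraction of sessions that stopped at or after t.
        still_watching = (stops[None, :] >= t[:, None]).mean(axis=1)
        return t, still_watching

A steeper curve means higher drop-off; comparing curves across production styles (studio vs. informal shots, slides vs. talking head) is the idea behind the comparisons above.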


SUMMARY RECOMMENDATIONS: keep videos short (<6 min); pre-plan for short segments; a talking head contributes to a personal, 1-on-1 feel; Khan-style informal drawing + extemporaneous speaking beats slides + a tightly scripted presentation.

Juho Kim talked about analyzing video drop-outs, i.e., people who don't watch all the way to the end of a video segment:

  • Tutorial videos have more drop-outs than lecture videos, but also show more "local peaks" of dense interaction events, especially around "step boundaries" in step-by-step tutorials and video "transitions" (e.g., talking head => drawing on screen) in lectures.

  • Re-watching videos exhibits more "local peaks" of interaction events than first-time watching.  => Learners come back to specific points in a video rather than watching linearly.  (A peak-finding sketch appears after this list.)
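
Here's a rough sketch (mine, not Kim et al.'s code) of how one might locate such "local peaks" by binning interaction-event timestamps (pauses, seeks, replays) along the video timeline:

    import numpy as np

    def interaction_peaks(event_times_s, video_length_s, bin_s=5, k=2.0):
        """Return start times of bins that are local maxima and stand out
        from the mean event count by k standard deviations."""
        edges = np.arange(0, video_length_s + bin_s, bin_s)
        counts, _ = np.histogram(event_times_s, bins=edges)
        threshold = counts.mean() + k * counts.std()
        peaks = []
        for i in range(1, len(counts) - 1):
            local_max = counts[i] >= counts[i - 1] and counts[i] >= counts[i + 1]
            if local_max and counts[i] > threshold:
                peaks.append(int(edges[i]))  # start time of the peaky bin
        return peaks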


Jonathan Huang from Stanford compared Superposters (MOOC students who disproportionately participate in forums) to non-superposters:

  • Superposters tend to be older, take more courses, and are 3x more likely to also be superposters in other courses.

  • They perform better (~1 stdev) in the course, comparing within students who watched >90% of lectures, although the margin is highly course-dependent.  (A crude version of this comparison is sketched below.)

  • They don't "squelch" non-superposters: the ratio of superposter to non-superposter responses doesn't change significantly with the number of superposters.
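
As a back-of-the-envelope illustration (not Huang's actual methodology, which presumably controls more carefully), here's how one might estimate that effect size among students who watched >90% of lectures; the field names are hypothetical.

    import numpy as np

    def superposter_effect(students):
        """students: dicts with 'grade', 'lecture_frac', 'is_superposter'."""
        engaged = [s for s in students if s["lecture_frac"] > 0.9]
        grades = np.array([s["grade"] for s in engaged], dtype=float)
        is_sp = np.array([s["is_superposter"] for s in engaged], dtype=bool)
        # Effect size in standard-deviation units (~1 in the results above).
        return (grades[is_sp].mean() - grades[~is_sp].mean()) / grades.std()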

Berkeley was well represented with two full papers and several short/work-in-progress papers.  Derrick Coetzee described how incorporating chatrooms into MOOCs did not improve learning outcomes or increase the sense of community, though it did seem to engage students who don't post in the forums, and it didn't hurt any learning outcomes.  This was one of several interesting examples of running a live A/B test ("between-subjects experiment") in a MOOC; a sketch of that pattern appears below.

Kristin Stephens reported results of surveying over 90 MOOC instructors at various schools to learn which sources of information they value for understanding what's going on in their courses, and how they might want those information sources visualized.

A special-topics course taught in Fall 2013 by Profs. John Canny and Armando Fox yielded several work-in-progress papers on adaptive learning, automatic evaluation of students' coding style in Python, best practices for affordably producing MOOC video, and more.  (Drafts of all these papers are linked from the MOOCLab Recent Publications page, and the archival versions will soon be available in the ACM Digital Library.)
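
For the curious, here's a minimal sketch of the between-subjects pattern mentioned above: deterministic random assignment at enrollment plus a two-sample comparison of outcomes.  All names are illustrative; this is not Coetzee's actual code.

    import hashlib
    from scipy import stats

    def assign_variant(user_id, experiment="chatroom"):
        """Deterministic 50/50 split: a student always sees the same arm."""
        h = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
        return "treatment" if int(h, 16) % 2 == 0 else "control"

    def compare_outcomes(treatment_scores, control_scores):
        """Welch's two-sample t-test on an outcome (e.g., quiz scores)."""
        return stats.ttest_ind(treatment_scores, control_scores,
                               equal_var=False)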

Eliana Feasley of Khan Academy gave a hands-on tutorial on using their open-source Python-based tools to do item response analysis of MOOC data.
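
To give a flavor of what item response analysis does, here's a toy one-parameter (Rasch) model fit by gradient ascent.  This is NOT Khan Academy's tool, just an illustration of the underlying idea; real IRT software is considerably more sophisticated.

    import numpy as np

    def fit_rasch(responses, lr=0.05, steps=5000):
        """responses[i, j] = 1 if learner i answered item j correctly, else 0.
        Returns per-learner ability and per-item difficulty estimates."""
        n_users, n_items = responses.shape
        ability = np.zeros(n_users)      # theta_i
        difficulty = np.zeros(n_items)   # b_j
        for _ in range(steps):
            logits = ability[:, None] - difficulty[None, :]
            p = 1.0 / (1.0 + np.exp(-logits))   # P(correct answer)
            resid = responses - p               # log-likelihood gradient terms
            ability += lr * resid.mean(axis=1)
            difficulty -= lr * resid.mean(axis=0)
            difficulty -= difficulty.mean()     # pin the scale (identifiability)
        return ability, difficulty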

More summary notes coming soon.
