At the recent Educause Learning Initiative meeting in Austin, Texas, I came across some very interesting randomized controlled trial results from the University of Wisconsin-Milwaukee. They're testing a combination of mastery learning plus "amplified assistance" (data-driven suggestions for faculty about whom to intervene with, and how) in an introductory psychology course, and, with thousands of students (!!) having run through controlled trials, they're showing significant improvements in pass rates and long-term retention.
It’s great to see this kind of work taking place: a learning institution at scale, using their scale to test out instructional approaches that fit in with learning science in careful ways. And getting good results, creating a runway for more work and impact.
They’ve tackled this by taking advantage of the data stored in the standard course management system used at UW-M, and by changing the instructional model for students:
- Mastery-based learning: Instead of simply marking students’ quiz results and using them to determine a final grade, the U-Pace model does not allow a student to proceed until he or she scores at least 90% on a randomly generated quiz covering approximately half a chapter’s worth of content. The system uses the local learning management system (LMS) to record quiz scores, number of quiz attempts, time since the last quiz, and so on, so that there’s a record of how the student is using the system – 10 quiz attempts in 20 minutes would be captured and visible to the faculty.
- “Amplified assistance”: The data captured in the LMS is (after some training) used by instructors to identify students who might be struggling. Depending on the type of problem the LMS data shows, instructors can pull up a template of a suggested e-mail and edit it to give constructive, proactive help. Instead of waiting for students to visit them, instructors see that students are getting in deep and offer them a paddle before they sink. The templates speed up a faculty member’s ability to give consistent, supportive feedback, making it feasible to cover a whole class in the same amount of time as conventional instruction. Each message includes a motivational component as well as specific places for content feedback, ensuring that both of these critical components reach the student.
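The two pieces above can be sketched in a few lines of Python. This is a minimal illustration of the idea, not UW-M's actual system: the class and function names are invented, and the "10 attempts in 20 minutes" pattern and the 90% threshold are the only signals taken from the article itself; the idle-days rule and the template labels are hypothetical placeholders.

```python
from datetime import datetime, timedelta

MASTERY_THRESHOLD = 0.90            # U-Pace requires 90% before a student advances
RAPID_WINDOW = timedelta(minutes=20)
RAPID_ATTEMPTS = 10                 # the "10 attempts in 20 minutes" pattern from the article

class UnitProgress:
    """Tracks one student's attempts on one half-chapter quiz unit."""
    def __init__(self):
        self.attempts = []  # list of (timestamp, score) pairs pulled from the LMS

    def record_attempt(self, timestamp, score):
        self.attempts.append((timestamp, score))

    def has_mastered(self):
        # The student may proceed only after scoring at least 90% on this unit.
        return any(score >= MASTERY_THRESHOLD for _, score in self.attempts)

    def looks_like_guessing(self):
        # Detect rapid retakes: RAPID_ATTEMPTS attempts inside RAPID_WINDOW.
        times = sorted(t for t, _ in self.attempts)
        for i in range(len(times) - RAPID_ATTEMPTS + 1):
            if times[i + RAPID_ATTEMPTS - 1] - times[i] <= RAPID_WINDOW:
                return True
        return False

def suggest_template(progress, days_idle):
    """Map LMS signals to a suggested e-mail template (hypothetical labels)."""
    if progress.looks_like_guessing():
        return "slow-down-and-study"   # retaking quizzes without study time in between
    if days_idle > 7 and not progress.has_mastered():
        return "re-engage"             # student has stalled on a unit
    return None                        # no intervention suggested
```

The point of the sketch is that the instructor never composes a diagnosis from scratch: the LMS data is reduced to a small set of patterns, each mapped to an editable template, which is what makes covering a whole class feasible.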
In an initial controlled trial (done with 1700+ students!) reported in the Educause Quarterly, U-Pace instruction performed very well (vertical axis is percentage of A’s and B’s):
Notice that even with underprepared students (either low ACT scores or a college GPA below 2.0), U-Pace gave significant benefits, lifting them up to match the outcomes of prepared students in the conventional course. Remarkable!
In addition, giving a conventional cumulative exam either at the end of the course, or six months later, showed significantly better learning for the U-Pace group than the conventional group.
They now have funding from the US Department of Education’s Institute of Education Sciences to roll this out to more courses and more institutions, to scale their results up. As part of this, they’ve begun to take apart the intervention to understand which parts are effective:
- They’re seeing that each piece, the mastery approach and the amplified assistance, separately adds to the total effect – neither one alone works as well.
- Students feel more of a sense of achievement, and more of a sense that they “can do this work” (self-efficacy), after the U-Pace course than after the conventional course. In fact, the conventional course unfortunately lowers students’ sense of achievement and self-efficacy, while each component of U-Pace, and U-Pace as a whole, lifts both measures.
This is exactly the sort of thing that is worth testing at scale. It fits what learning science might predict:
- Focusing students on what mastery takes, and insisting that students actually master the content, leads students who are not doing well to spend more time on the materials, working harder until they get “it” rather than settling for a low grade on more difficult material.
- By focusing on mastery, and providing individualized, proactive assistance, the U-Pace structure adapts the feedback (and the need for it) to a student’s own situation. If students are doing fine, they can go faster; if they are struggling, they get more guidance, help, and time to work.
- The templates include motivational guidance as well, encouraging students that they really can get this work done, along with very specific strategies for what a student should do, all customized and finished by the instructor who knows the course. Motivation accounts for a large share of learning success (30-40%), so explicitly guiding interventions along this dimension makes sense.
It is very likely that the most effective, efficient, and engaging learning environments, especially at an introductory level for students, are going to be integrated combinations of evidence, technology, information, practice activities, feedback, and people, all working to personalize results for each student. U-Pace shows a practical, scaled-up example with very promising data.
Finally, they’ve tackled this the way many university researchers would tackle their own research domains, but rarely do for learning: with carefully constructed experiments designed to eliminate extraneous explanations for why one group of students does better than another.
More of us with a lot of students need to do this kind of work: dive in and try approaches that match what evidence says about learning, testing them in ways that let us really measure the impact compared with other things we might do with students’ and faculty members’ time. The more we learn about what really lifts learning, either on its own or in combination, the better off our learners – and our institutions – will be.