Paul Tough is publishing an inspiring article, "Who Gets to Graduate?", in the New York Times Magazine this weekend. It describes work at the University of Texas at Austin to combat failure to graduate among lower-income students who have few natural external support structures for success. It showcases the use of cognitive science at scale, along with randomized controlled trials "in the wild," to objectively improve student outcomes.
All of us in our industry need to get busy doing more of this kind of "learning engineering"!
- Most importantly for students, applying findings from smaller-scale cognitive science studies at scale in practical environments can make a material difference to student success. This is especially important for students like those Paul Tough's article focuses on - students with very limited external support as they enter higher education. It's worth pointing out the beneficial cycle here: helping more learners succeed has a material positive impact on the image (and economics and alumni pool!) of higher education institutions.
- Contrary to some views, you really can do randomized controlled studies at scale "in the wild" to replicate effects seen "in the small." Such studies make it difficult to interpret success any other way than that the intervention actually worked, and when done well they make it feasible to "follow the evidence" rather than our intuitions. (People often instinctively deploy their intuitions to pick apart poorly-designed studies that contradict them, while embracing the results of poorly-designed studies that confirm them. A well-designed study, especially one consistent with others like it, makes that kind of motivated reasoning much harder.) A small code sketch at the end of this post illustrates the basic mechanics of such a trial.
- This work on motivation is a promising area to explore. There are other learning science areas with good evidence that they make a difference for learners - I've often referred to the book E-Learning and the Science of Instruction by Clark and Mayer for a range of evidence-based guidance on how to optimize the design of individual learning experiences to reduce their cognitive load, among other things. This work on motivation, which is different from (and possibly independent of) these other kinds of interventions, is also an area we're pursuing: we're discussing multiple studies with researchers affiliated with the work of folks like Angela Duckworth and Carol Dweck (who joined a recent panel we put together at AERA to talk about research opportunities at scale from online learning environments). Fingers crossed we can find similar benefits as we apply this type of work in our environments as well.
- Could it be that using randomized controlled trials to nail down ways to make our learners successful is gaining traction? The work on U-Pace at the University of Wisconsin is another example at a brick-and-mortar institution. The PERTS network, centered at Stanford, is starting to create networks of researchers and at-scale learning institutions to accelerate this kind of work as well. Virtual education providers may have a chance to work faster than brick-and-mortar institutions over time (our own Kaplan University has a number of courses with hundreds of students or more starting every month, and we are now literally running dozens of controlled trials at once). It would be terrific if more institutions start to see (and, more importantly, learners themselves start to demand!) the benefits of applying learning science at scale, and of using a variety of data-analytic methods to unpack what's happening for different subgroups of students.
- This work is tricky to do well. One of the key points Tough relates in the article is that these sorts of interventions don't work the same way for all students. Just as in medical studies, where different treatments work better or worse for different subgroups of patients, the same is likely to be true for some learning interventions. In the case of the motivation interventions reported here, students with strong family experience of higher education gained nothing, while those without it gained a lot. It's very important to discover such patterns: a significant benefit for a smaller subgroup can be buried in the noise from the majority if that subgroup isn't specifically tagged ahead of time. The second sketch at the end of this post shows how easily that burying happens.
- On a very inspiring and practical note, the article also shows that this kind of "learning engineering" work (applying learning science at scale to solve practical problems) can generate very supportive mainstream press, not just research papers in journals.
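
To make the "randomized controlled trials in the wild" point more concrete, here is a minimal sketch in Python of the two mechanics that make such a trial trustworthy: random assignment to conditions, and asking how often chance alone would produce the observed gap. Every name and number below is hypothetical, invented purely for illustration - a real study would use its own pre-specified design and analysis.

```python
# A minimal, self-contained sketch of a randomized controlled trial:
# (1) random assignment to conditions, and (2) a permutation test asking
# how often chance alone would produce the observed treatment-control gap.
# All student counts and persistence rates are hypothetical.
import random

random.seed(42)  # fixed seed so the sketch is reproducible

def randomly_assign(student_ids):
    """Shuffle the cohort and split it into treatment and control arms."""
    shuffled = list(student_ids)
    random.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

def mean(values):
    return sum(values) / len(values)

def permutation_p_value(treated, control, n_permutations=10_000):
    """One-sided p-value: how often does randomly relabeling the pooled
    outcomes produce a gap at least as large as the one observed?"""
    observed_gap = mean(treated) - mean(control)
    pooled = list(treated) + list(control)
    n_treated = len(treated)
    at_least_as_large = 0
    for _ in range(n_permutations):
        random.shuffle(pooled)
        gap = mean(pooled[:n_treated]) - mean(pooled[n_treated:])
        if gap >= observed_gap:
            at_least_as_large += 1
    return at_least_as_large / n_permutations

# Hypothetical cohort and outcomes: 1 = persisted to the next term, 0 = not.
treatment_ids, control_ids = randomly_assign(range(200))
treated_outcomes = [1 if random.random() < 0.70 else 0 for _ in treatment_ids]
control_outcomes = [1 if random.random() < 0.60 else 0 for _ in control_ids]

print(f"observed gap: {mean(treated_outcomes) - mean(control_outcomes):+.3f}")
print(f"one-sided permutation p-value: "
      f"{permutation_p_value(treated_outcomes, control_outcomes):.3f}")
```

The permutation test is just one convenient way to frame "could this be chance?"; what matters is that the randomization itself rules out the usual alternative explanations for a gap between the arms.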
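And to illustrate the subgroup point: this second sketch, again with invented numbers (a 15-point persistence lift for first-generation students and no lift for anyone else), shows how a pooled analysis can nearly erase an effect that is large and real within a smaller subgroup.

```python
# A minimal sketch of why pre-specified subgroup analysis matters.
# Hypothetical assumption: the intervention helps first-generation students
# but does nothing for students with strong family college experience.
import random

random.seed(7)

def simulate_student(first_gen, treated):
    """Return 1 if the student persists, 0 otherwise (hypothetical rates)."""
    base_rate = 0.55 if first_gen else 0.80
    lift = 0.15 if (first_gen and treated) else 0.0  # effect only in subgroup
    return 1 if random.random() < base_rate + lift else 0

def persistence_rate(students):
    return sum(s["outcome"] for s in students) / len(students)

def gap(students):
    """Treatment-minus-control persistence gap within a set of students."""
    treated = [s for s in students if s["treated"]]
    control = [s for s in students if not s["treated"]]
    return persistence_rate(treated) - persistence_rate(control)

# Simulate a cohort that is mostly continuing-generation students,
# randomized 50/50 into treatment and control.
cohort = []
for _ in range(10_000):
    first_gen = random.random() < 0.25  # the smaller subgroup
    treated = random.random() < 0.5
    cohort.append({
        "first_gen": first_gen,
        "treated": treated,
        "outcome": simulate_student(first_gen, treated),
    })

print(f"pooled treatment effect:        {gap(cohort):+.3f}")
print(f"first-generation subgroup:      "
      f"{gap([s for s in cohort if s['first_gen']]):+.3f}")
print(f"continuing-generation subgroup: "
      f"{gap([s for s in cohort if not s['first_gen']]):+.3f}")
```

Because first-generation students are only a quarter of this hypothetical cohort, the pooled estimate dilutes their 15-point gain to a few points - exactly the pattern that gets missed when subgroups aren't tagged ahead of time.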