Last month there were two events in fairly close proximity that are worth reviewing together. At both AERA and the ASU Education Innovation Summit (EIS), there were sessions on why evidence-based approaches continue to get short shrift in education technology and education at scale. It’s clear (at least to researchers) that there’s a tremendous opportunity to improve learning, and researchers are interested in being involved – if they can figure out how to be helpful, not merely be asked to rubber-stamp projects at the end. It seems to be a mix of things to tackle: more informal communication among investors, decision-makers, and learning scientists about what’s possible; more practical evidence about how to do the work at scale; more examples of success to draw on; and more support for better decision-making by buyers. It’s not simple, clearly, but not impossible either.
The AERA session took place a couple of weeks later, at the association’s big annual meeting in San Francisco. We had a terrific cast of discussants: Richard Clark from USC again, Ken Koedinger from Carnegie Mellon, Michael Moe from GSV Advisors, Michael Horn from Innosight Institute, Stacey Childress from the Gates Foundation, and Nadya Dabby from the Department of Education. You can find the full video of the 90-minute session here. Again, a wide range of issues seems to lie behind the problem, including the lack of “weak networks,” to use Nadya Dabby’s term, that would link large numbers of researchers and investors – the two groups don’t stumble across each other often enough for investors to see the value of the research. Michael Moe suggested setting up some kind of informal networking approach to get this going, which might well be worthwhile.
At both sessions, one of the problems raised is that all of us have had our own extended education experiences, and we tend to think we “know” how education should work based on them. Almost always, this distracts from what will actually work at scale, because that “we” is typically not representative of the learners a solution must serve. (I remember one gentleman who was sure that long sets of multiple-choice questions would be a perfect way to do virtual instruction, based on his own success in such a course in college.)
It’s great to hear this topic getting discussed. As you can hear from these conversations, what we at Kaplan are trying to do is move this forward in practical ways: training our internal folks, setting up to run rapid controlled trials for engineering purposes in the way that Amazon or Facebook does (a sketch of what one such trial’s analysis can look like follows below), and connecting with some of the best researchers in these areas to scale up what seems to work in labs.
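To make “rapid controlled trials” concrete, here is a minimal sketch of the kind of analysis such a trial can come down to: learners are randomly assigned to the current experience or a revised one, and a simple two-proportion z-test checks whether the observed difference in some outcome (say, module completion) is larger than chance would explain. Everything below – the function, the scenario, the numbers – is a hypothetical illustration, not Kaplan’s actual data or tooling.

```python
# Minimal sketch of the analysis behind a rapid controlled (A/B) trial:
# a two-proportion z-test on module completion rates.
# All names and numbers are hypothetical illustrations.
from statistics import NormalDist


def two_proportion_z_test(successes_a, n_a, successes_b, n_b):
    """Return (z, two-sided p-value) for H0: the two completion rates are equal."""
    p_a = successes_a / n_a
    p_b = successes_b / n_b
    # Pooled rate under the null hypothesis that both groups share one rate.
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = (pooled * (1 - pooled) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value


# Hypothetical trial: 2,000 learners randomly split between the current
# lesson flow (control) and a revised flow (variant).
z, p = two_proportion_z_test(
    successes_a=412, n_a=1000,  # control: 41.2% completed
    successes_b=470, n_b=1000,  # variant: 47.0% completed
)
print(f"z = {z:.2f}, p = {p:.4f}")  # small p: difference unlikely to be chance
```

In practice, teams running trials this way do many such comparisons continuously and have to guard against multiple testing and novelty effects, but the core engineering loop – randomize, measure, test, keep what wins – really is about this simple.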