I recently came across a very nice posting from Bob Slavin titled “Who Opposes Evidence-Based Reform?” Although he's addressing the K-12 arena, his wise words are applicable across the entire spectrum of teaching, learning, and training.
Dr. Slavin has been a proponent of the intelligent, systematic use of learning science and evidence about learning for decades. He is director of the Center for Research and Reform in Education at Johns Hopkins University, and is well known as one of the creators of Success for All, a well-regarded literacy program for schools. He is also known for a smart approach to making meta-analyses of learning research practically useful – essentially “learning engineering” guidance – that he calls “best evidence synthesis.” Finally, he has done fascinating work (including best-evidence synthesis) on the impact of and best practices for cooperative learning, where he makes a crucial point: learners' individual accountability for their learning “is the essential element most often left out of cooperative learning—and when it is, teachers lose a lot of cooperative learning's potential.”
So this posting about why folks oppose evidence-based practices in learning comes from someone who has worked practically in schools for decades, as well as being at the leading edge of learning research. He has the experience (and the scars?).
What gets in the way, according to him?
- People's intuition about how education should work, based on misreading their own, or their family's, experience. He quotes university faculty who had misgivings about applying evidence-based methods to schools because it might change their own children's schools, and “the system was working very well for [them], thank you very much.” Most people who control learning environments at any level are, indeed, the selected survivors of all the varied learning environments they've been through. It's very human to assume “it must be good – it worked for me,” but that's not a good way to select at-scale treatments, whether in medicine or in learning.
- People's natural aversion to being wrong, leading them to reject evidence that contradicts their own practices and assumptions. This is confirmation bias, one of the long list of cognitive biases that behavioral economists point out mess up our decision-making. Other fields of human endeavor have made progress against this tide – completely different areas, like medicine and direct-mail marketing – with real impact. (Ahem.) Surely we need to get on the right side of this, using evidence rather than intuition to drive better results for learning? (While watching out for misuses of the evidence gathered, of course.)
- Career issues within research itself. There's a risk that established funding mechanisms that don't require or celebrate careful collection of learning data end up, unintentionally, building inertia against this kind of work. It is quite hard to get right. Good people who start their careers without wrestling with evidence, and are rewarded without needing to pay attention to it, may in turn steer colleagues away from that difficult work. I remember once congratulating a senior learning researcher who had just finished a multi-year, scaled-up RCT funded by IES. The researcher's response: “I'm never doing that again. Completely thankless activity – and my research productivity went way down. I'm sticking with small-scale pilot studies from now on, so I can get publications and doctoral students out.” There's nothing wrong with qualitative research or pilot studies, but how can we accelerate and reward the use of better evidence, and the people who do that hard work? (Technology may help here, by streamlining certain kinds of studies and assessment tasks.)
- Teachers' concern that they'll lose control over their classrooms, and that evidence will be used to target them, not help them. (In university settings there's a different but mostly unspoken problem: younger faculty may feel they cannot spend time on hard-to-win student outcomes the way they must on research.) As Dr. Slavin points out, it's critical to involve teachers in the brainstorming and decision-making about using data to help students. That also means better training for them: they have the same need in their craft as any other learners for well-designed cognitive support (taking account of what they have and haven't already mastered, providing sufficient practice and feedback on new skills, and, yes, evidence about their own work) and well-designed motivational support (“Why is this valuable? Can I actually do this? What's in my way?” and even, “Why do I hate my life?” ;-) ).
As Slavin puts it, the key is to rephrase the issue as “How can we use evidence to make sure that students get the best possible outcomes from their education?”
Within Kaplan, we're doing our best to evolve toward more “learning engineering” across the board. We've trained our instructional designers to apply learning science results at scale (and now comes the hard bit: actually applying that in practice), we're running randomized controlled trials in a number of our units to investigate what helps students succeed, and we're altering development processes to be more nimble, to use technology in better ways, and to take into account what's known about how learning actually works.
It is challenging work, with many institutional-change aspects to it, but this transformation to a more evidence-based approach to practice is likely the only way forward if we seek to transform our skills, at scale, to match the accelerating pace of change all our careers are experiencing.