The 2013 AERA meeting was a huge (13,000-plus attendees), complicated (2,400+ sessions), long (five full days in San Francisco) education research conference, with some unusual sessions. Yet, if you dig a little, you can find real gems, including research from David Feldon, Briana Timmerman, and colleagues showing that a simple intervention based on cognitive task analysis can lift both learning and retention in an early college science course.
The full story is even more interesting!
However, if you have the patience to dig through the catalog, you can find useful sessions for “learning engineering” – at least 5-10% of the sessions, which adds up to potentially hundreds. Some of the best learning researchers in the world come with their students and colleagues, including people like Richard Clark, Ken Koedinger, Roy Pea, Kurt VanLehn, Valerie Shute, Richard Mayer, John Sweller from Australia, and Jeroen van Merriënboer from the Netherlands. (And, yes, those are rock stars in the evidence-based learning science world, if some of those names don’t ring a bell.)
I’ll write more later about a panel Kaplan helped put together on why there isn’t more learning science applied at scale (spoiler alert: Richard Clark from USC, Ken Koedinger from Carnegie Mellon, Michael Moe from GSV Advisors, Michael Horn from Innosight Institute, Stacey Childress from the Gates Foundation, and Nadya Dabby from the Department of Education were on the panel, just to whet your appetite).
While we wait for the video, I thought I’d wax on about this one great presentation by David Feldon.
The short story: David Feldon and colleagues modified instructional video used to “flip” a college biology laboratory course, and then tested to see if things got better. They did. Sounds fairly simple, right? Isn’t this what should be happening every day in education research?
It’s the details that matter.
First, the setting: a “flipped” classroom, where students are expected to log in and view information ahead of class and then do other kinds of work in class. If you try to improve a conventional classroom, you can find yourself wrestling with instructors or TAs to modify what they do every day, and using videotape or other complex observation procedures to check whether they actually do the new things they’ve been asked to do. Feldon and colleagues avoided all this – they modified only the videos students view up front. (Yes, this leaves open what benefits could come from doing more – see below.) One of the benefits of technology-delivered instruction: every time the changed video was delivered to a learner, the learner got exactly the intended treatment.
Doing this had other benefits. Because the only change was to the video, seen out of class by students, no one could tell which students received the intervention and which received the control treatment. This meant Feldon and colleagues could execute a true double-blind study: the faculty, the TAs, the students, and the experimenters themselves had no way of knowing, during the trial, who was in which condition!
Related to this, a subtle point: there’s a lot of (justified) concern that when you run experiments on human beings, you must take great care to respect their rights, inform them of what is going to happen, let them opt out, and so on. There is, however, an exception for “normal” education interventions. As all of us have experienced, faculty can change much of what they do on a whim – no evidence, no permission needed. (I do love the funny hats and swinging on a pendulum, I must say.) For better or worse, that’s how the field has evolved – and, indeed, the federal government provides an exemption from “informed consent” requirements for normal education interventions.
However, universities are often unwilling to accept this. You have the curious circumstance that if a professor wishes to completely change her class, she can do so without permission or notice, but if she wants to change half her class, collect evidence, and share what she has learned, she has to go through a variety of Institutional Review Board (IRB) procedures, give notice, let students opt out, etc. Feldon and colleagues managed to get rationality to prevail: the research team was doing things that are completely “normal” for education, just changing the scripts for a professor in a video, and so “Participants did not need to provide consent for data collection, because the study was granted exempt status by the university’s institutional review board.”
This is more important for long-term progress than you might think. It opens the door to more frequent and inexpensive testing of small changes, more like the continuous way Amazon, Facebook, and Google test changes to their user interfaces to help customers. Especially with technology-delivered instruction and assessments, this opens up a potentially very rapid channel of improvement and innovation. (Which, of course, has to be used responsibly.)
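To make the “continuous testing” idea concrete, here is a minimal sketch in Python of how a learning platform could blindly and deterministically assign each student to one of two otherwise-identical videos, the way web companies assign interface variants. Everything here – the file names, the salt, the platform itself – is a hypothetical illustration, not the actual delivery system used in the study.

```python
import hashlib

# Hypothetical sketch: blindly and deterministically assign each student to one
# of two otherwise-identical videos (control script vs. CTA-revised script).
# File names and the salt are invented for illustration.

VIDEO_VARIANTS = {
    "A": "bio_lab_intro_control.mp4",  # original script
    "B": "bio_lab_intro_cta.mp4",      # CTA-revised script
}

SALT = "bio-lab-pilot-2013"  # fixed per experiment so assignments never change


def assign_variant(student_id: str) -> str:
    """Hash the student ID so no human-readable roster of assignments exists,
    which helps keep faculty, TAs, students, and graders blind."""
    digest = hashlib.sha256(f"{SALT}:{student_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"


def video_for(student_id: str) -> str:
    """The platform serves the file; nobody sees the variant label."""
    return VIDEO_VARIANTS[assign_variant(student_id)]


if __name__ == "__main__":
    for sid in ("s001", "s002", "s003"):
        print(sid, "->", video_for(sid))
```

Because the assignment is a deterministic hash, a student always sees the same variant, and the analyst can reconstruct who got what after the trial without ever exposing the assignments while it is running.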
A bit more about the careful way Feldon and colleagues set this up. Unlike the usual gathering of “the data we need to show we’re right,” the team thought ahead of time about alternative explanations for positive or negative results, and designed the study to address them:
- The videos of the professor doing what he normally does were recorded before he was given the new scripts, to make sure he was doing what came naturally.
- The professor chosen to do the videos was an award-winning teacher – you should be so lucky to have such a professor doing your MOOC off the cuff – so we’re not just “fixing” a lousy professor here.
- The experimental video was designed to be as close to the control video as possible, changing just one thing, the script: the professor literally wears the same clothes for the experimental taping as for the control, and the videos are the same length, with the same backgrounds.
- The double-blind nature of the trial handles all kinds of other issues about bias in the results and in the grading that could come up.
This attitude – “instrument for failure, but celebrate success” – is not just for publishing papers. It’s the critical “learning engineering” attitude to a pilot – all kinds of things can go wrong, or can explain what’s happening in the small, but we want to make sure we’re going to get the benefit in the large. The stakes are too high to get it wrong – or, worse, to get it “right,” but not for scalable or useful reasons. You design the pilot and data to force your hypothesis to be the last thing standing.
OK, as you can tell, I find this work exciting, and I haven’t even talked about what they actually tested! (Yeah, I know, my kids think I should get out more too. . .)
They worked on a hard problem: mastering the scientific method. Students often come out of introductory science courses with lists of memorized facts and processes, but the real essence of science – the careful search of the literature to find where the field is, the design of experiments (as above) that really get at the hypotheses of interest, the analysis and discussion that tease out what actually worked, what didn’t, and why it matters, and, of course, the write-up that allows others to replicate the work – is complex, challenging work for students.
What’s remarkable is that they made a simple media-based change – altering the script of the videos – to improve a complex skill. That such a simple change could make a difference flies in the face of conventional thinking about learning rich, complex tasks: you need complex, human-mediated work and dialogue for students to improve, right? They need to be guided and coached by expert, sympathetic humans, yes? How else will they learn to do complex things, if they don’t have a complex learning environment?
There’s no question that human coaching is critical. However, much research on learning (see Clark and Mayer’s E-Learning and the Science of Instruction) suggests that it is precisely when the task or decision is most complex that most students benefit from a very careful breakdown and reassembly of its parts, with practice and feedback at each stage. So the intervention by Feldon and colleagues to change the video, if it structured the information for students in ways that align with how experts actually think about the task, had good support from learning science.
Feldon and colleagues picked a technique especially suited to a better breakdown and structure for learning: they performed a cognitive task analysis (CTA) to get at what expert biologists actually decide and do when they’re designing an experiment.
As the article describes, CTA is based on evidence that experts can no longer reliably describe what their own minds do – too much of the processing has become non-conscious, or tacit. To get “under the hood,” you conduct careful, independent interviews with several experts and reassemble, explicitly, more of what their expert minds actually decide and do. The technique has many variants, but the approach yields significant improvements in learning and decreases in time to mastery. It is another example of a learning science technique with significant empirical support – important enough to figure in NSF’s future plans for research in higher education – yet it is not in widespread use at scale.
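To give a flavor of what a CTA’s output might look like before it gets turned into a revised script, here is a hypothetical sketch in Python. The step names, cues, and decisions below are invented placeholders for illustration; they are not the content of Feldon and colleagues’ actual analysis.

```python
from dataclasses import dataclass, field

# Hypothetical sketch only: one way to record the output of a cognitive task
# analysis (steps, cues, and tacit decisions elicited from expert interviews)
# before rewriting a lecture script around it. All content below is invented.


@dataclass
class CTAStep:
    name: str                                            # what the expert is doing
    cues: list[str] = field(default_factory=list)        # what triggers the step
    decisions: list[str] = field(default_factory=list)   # tacit judgments made explicit
    actions: list[str] = field(default_factory=list)     # observable behavior


experiment_design_cta = [
    CTAStep(
        name="Frame the research question",
        cues=["a gap identified in the literature"],
        decisions=["Is the question answerable with available methods?"],
        actions=["state the hypothesis and the predicted outcome"],
    ),
    CTAStep(
        name="Choose controls",
        cues=["a candidate manipulation has been identified"],
        decisions=["Which alternative explanations must the design rule out?"],
        actions=["hold every other variable constant across conditions"],
    ),
]

for step in experiment_design_cta:
    print(step.name, "->", step.decisions)
```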
Feldon and co. used this technique in a very parsimonious way to test their hypothesis: they used the CTA to change only the scripts of the videos, nothing else. This let them isolate the impact of CTA-informed video on instruction, but leaves for later the potentially even larger effect of redesigning all the instruction around the CTA – so be it.
Yes, I know, this is getting long, but there’s yet another interesting thing they did in this study, maybe the most practically important. They not only looked at learning gains (using properly validated instruments to test whether students were objectively better at constructing a complex performance artifact, the lab report – OK, I’ll stop), but also at retention of students, a measure of motivation.
This seems like such a simple and obvious thing, but learning science research, often done in laboratory rather than “live” settings, tends to focus either on learning gains or on motivation improvements, not both. For practical use at scale, both are critical: it does little good to have a technique that works in the laboratory but leaves students cold and unmotivated to persist in the real world – students drop out of science in the early years of college in droves, even those who left high school intending to pursue science or technology. In the same way, a merely “exciting” learning environment that doesn’t lift mastery of hard, important objectives is not helpful either (with all due respect to AERA-reported research on comic books). We need better insights into what does both.
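For readers who like to see the two outcomes side by side, here is a hypothetical analysis sketch in Python: rubric scores stand in for learning gains, enrollment counts for retention. The numbers are invented for illustration and are not the study’s data, and the specific tests (Welch’s t-test, Fisher’s exact) are my assumptions, not necessarily the analyses the authors used.

```python
import numpy as np
from scipy import stats

# Hypothetical sketch: check both outcomes, learning and retention, in one place.
# All numbers below are invented for illustration; they are not the study's data.

rng = np.random.default_rng(0)
control_scores = rng.normal(70, 10, 120)    # lab-report rubric scores, control videos
treatment_scores = rng.normal(75, 10, 118)  # rubric scores, CTA-revised videos

# Learning gains: Welch's two-sample t-test (no equal-variance assumption).
t_stat, p_learning = stats.ttest_ind(treatment_scores, control_scores, equal_var=False)

# Retention: 2x2 table of [stayed, dropped] per condition, Fisher's exact test.
retention_table = [[110, 8],    # treatment: stayed, dropped
                   [98, 22]]    # control: stayed, dropped
odds_ratio, p_retention = stats.fisher_exact(retention_table)

print(f"learning:  t = {t_stat:.2f}, p = {p_learning:.3f}")
print(f"retention: odds ratio = {odds_ratio:.2f}, p = {p_retention:.3f}")
```

The point of the sketch is simply that both checks belong in the same report: a lift on one measure with silence on the other is exactly the gap the study avoided.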
So, let’s add it up: Feldon and colleagues executed a very careful experimental design of a highly scalable intervention for a critical and complex learning task, one that takes no more time or effort from students or faculty. They designed it to objectively improve the match between what students do and what experts do, and it did, indeed, lead to better learning and higher retention in an introductory science course. And the design gives us confidence that only the specific change made could have caused the improvement – not changes in personnel or calendar, a volunteer effect, researcher bias, or other factors.
A seriously good study!
And delivered at AERA, to boot! Some folks think AERA is nearly valueless for the real world. However, I beg to differ – this, at least, is one study (there are more) that shows there really are gems buried in the sand.
Think about it, you who are venture capitalists or entrepreneurs or educators at scale: a report of results that lift learning and improve motivation, grounded in long-standing, long-published learning science, done in a scalable way – aren’t you sorry you weren’t there in San Francisco digging near the water with me? ;-)