On the heels of a recent blog I did on evidence-based thinking in education worldwide, I thought it worth saying a bit more about another article that appeared recently in EdSurge. That article makes some great points about folks in education looking more carefully at the evidence claimed to support the efficacy of different interventions. It’s clearly not easy to make real-world decisions, given how tricky it can be to get good-enough real-world data. Hopefully, ed-tech itself, leavened with expertise, can help improve the situation.
For educators:
- Look carefully at the conditions of the study and the comments about the study’s conclusions. Many efforts end up being correlation studies, e.g., a study showing that more time spent with an intervention lifts performance. It’s possible that time with the intervention is responsible for the benefit, but it’s also possible you’ve just found a very expensive way to identify very diligent students and teachers! Similarly, teachers or students who volunteer to try something new are different from “most” teachers or students, and those differences can confound what’s really going on with an intervention (the small simulation after this list makes the point concrete).
- Distinguish between the different kinds of evidence underpinning an intervention. It’s great (and still somewhat rare) if an intervention explicitly takes into account results from learning science, e.g., is compatible with things like E-Learning and the Science of Instruction by Clark and Mayer. It’s also great if a company has done systematic usability testing in an environment “like yours,” so that they know (and you can find out) how it really works at scale – and that’s different from just building something compatible with learning science. (It may be compatible with learning science, but not be usable in the field – yikes!) It’s perhaps best (and still exceedingly rare) if a company has actually done a controlled trial with students and teachers like yours – that can give real confidence that it works at scale as intended, and has the intended effect.
- Look for, ask for (and participate in!) real experiments, especially when there are claims of very large gains (which, if they are real, should have a good chance of showing up in a properly designed experiment). Sure, these are harder to set up, but controlled trials are the key way to really nail down what was responsible for a gain. Real-world randomized pilots are helpful in many other industries, from consumer goods to health care – we should be looking to get better at this over time.
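To make the correlation-versus-causation point concrete, here is a minimal sketch in Python. All of the numbers and the hidden “diligence” trait are made up for illustration, not taken from any study: a naive observational comparison overstates the intervention’s effect because diligent students both opt in more and score higher, while random assignment recovers something close to the true effect.

```python
# Minimal sketch with made-up numbers: a hidden "diligence" trait drives both
# time spent on an intervention and test scores, so the naive observational
# comparison overstates the intervention's true (small) effect. Randomly
# assigning students removes that confounding.
import numpy as np

rng = np.random.default_rng(42)
n = 10_000
true_effect = 2.0          # points gained from the intervention itself

diligence = rng.normal(size=n)                       # unobserved trait
baseline_score = 70 + 5 * diligence + rng.normal(scale=5, size=n)

# Observational setting: diligent students opt into the intervention more often.
used_intervention = (diligence + rng.normal(scale=0.5, size=n)) > 0
obs_score = baseline_score + true_effect * used_intervention
naive_gap = obs_score[used_intervention].mean() - obs_score[~used_intervention].mean()

# Randomized setting: a coin flip decides who gets the intervention.
assigned = rng.random(n) < 0.5
rct_score = baseline_score + true_effect * assigned
rct_gap = rct_score[assigned].mean() - rct_score[~assigned].mean()

print(f"true effect:              {true_effect:.1f} points")
print(f"naive observational gap:  {naive_gap:.1f} points (inflated by diligence)")
print(f"randomized-trial gap:     {rct_gap:.1f} points (close to the truth)")
```

The particular numbers don’t matter; the point is that the observational gap bundles the intervention’s effect together with the pre-existing differences between students who chose to use it and those who didn’t.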
For companies:
- Do get your study design reviewed, ideally before you start the study. You want to generate the best, most convincing data you can, and analyze it to get the most information out of it. This is not so simple, especially once you start thinking about subgroups of students – it’s well worth getting some expertise applied early (the rough power-calculation sketch after this list gives a feel for the sample sizes involved). Nothing is worse than having a potential client apply more expertise to the analysis of your data than you did – and more and more clients are becoming aware of the need to be analytically careful.
- Keep collecting evidence. This shouldn’t be “one and done” – if you’ve got something that really works, the evidence should keep showing that it really works – and you’ll be able to show what can be expected in different environments. Ultimately, this gives you opportunities to test out improvements, too.
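As one concrete example of why early design review pays off, here is a rough power-calculation sketch using statsmodels. The effect sizes and the 25% subgroup share are illustrative assumptions, not figures from the article: even a modest effect needs more students than many pilots recruit, and any subgroup analysis multiplies the requirement.

```python
# Rough power-calculation sketch using statsmodels; the effect sizes and the
# subgroup share below are illustrative assumptions, not figures from any study.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Students per group needed to detect a given standardized effect (Cohen's d)
# with 80% power at alpha = 0.05 in a two-group comparison.
for d in (0.5, 0.2):
    n_per_group = analysis.solve_power(effect_size=d, alpha=0.05, power=0.8)
    print(f"d = {d}: about {n_per_group:.0f} students per group")

# If the subgroup you care about (say, a quarter of the sample) is where the
# question lives, overall enrollment has to be roughly 4x larger to answer it.
subgroup_share = 0.25
n_subgroup = analysis.solve_power(effect_size=0.2, alpha=0.05, power=0.8)
print(f"enrollment needed per arm to power that subgroup analysis: "
      f"about {n_subgroup / subgroup_share:.0f}")
```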
This work is not easy to do, of course, and it’s still rare to see it done. It’s great to see resources like this article from EdSurge and the Rapid Cycle Technology Evaluation project from the Department of Education, among other efforts, working to bring more visibility to better evidence-gathering and evidence use in education.
I also think that as more learning environments become hybrid, mixing technology with teachers and students, and adding learning management systems and online collection of interaction and evaluation data, we can start to see an acceleration of quality evidence-gathering. (Assuming our learning management systems make the right kind of data available to us! ;-) )
It’s not easy, though. There’s a lot written about the promise of “learning analytics,” but what folks will find out very soon is that if the assessments of competencies are not, in fact, valid and reliable probes of what we intended, all the amazing analytic pyrotechnics will be for nought: we’ll be studying the impact on the wrong measures.
As more and more data is captured online, however, I think the quality of our learning measures can be made increasingly visible: the techniques for examining the reliability and validity of evidence have been around a long time, and they can be applied to all that captured data to sort out which measures really do “hang together” as measures of what we intend. (Is that set of assessment items really a set of science mastery items, e.g., correlated with later related-concept science success, or are they, in fact, mostly reading tests? Tricky stuff – but critical to get right if you want to build on and gain from your learning data, e.g., for adaptive learning.)
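For readers who want to see what that kind of check looks like in practice, here is a minimal sketch on simulated data in plain NumPy; the item loadings, the “later science outcome,” and all the numbers are assumptions for illustration. It computes Cronbach’s alpha and item-rest correlations to see whether the items hang together, then correlates the total score with a later related outcome as a rough validity check.

```python
# A minimal sketch (hypothetical item data, plain NumPy) of two classic checks:
# do the items "hang together" (Cronbach's alpha, item-rest correlations),
# and does the scale track a later outcome we care about?
import numpy as np

rng = np.random.default_rng(0)

# Simulate 500 students: a latent "science mastery" trait and a reading trait.
n = 500
science = rng.normal(size=n)
reading = rng.normal(size=n)

# Ten assessment items: eight load mostly on science, two are really reading-heavy.
loadings = np.array([[1.0, 0.2]] * 8 + [[0.2, 1.0]] * 2)   # (items x traits)
noise = rng.normal(scale=0.8, size=(n, 10))
items = np.column_stack([science, reading]) @ loadings.T + noise

def cronbach_alpha(x):
    """Internal-consistency estimate for an (n_students x n_items) score matrix."""
    k = x.shape[1]
    item_vars = x.var(axis=0, ddof=1).sum()
    total_var = x.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

print("Cronbach's alpha:", round(cronbach_alpha(items), 2))

# Item-rest correlations: items that barely correlate with the rest of the scale
# are candidates for a closer look (are they measuring something else?).
total = items.sum(axis=1)
for j in range(items.shape[1]):
    rest = total - items[:, j]
    r = np.corrcoef(items[:, j], rest)[0, 1]
    print(f"item {j:2d}  item-rest r = {r:.2f}")

# Rough validity check: does the scale predict a later related-concept outcome
# (here simulated as driven by the science trait)?
later_outcome = science + rng.normal(scale=0.8, size=n)
print("scale vs later science outcome r =",
      round(np.corrcoef(total, later_outcome)[0, 1], 2))
```

In real data the “reading-heavy” items would not be labeled for you, which is exactly why these checks are worth running before trusting the analytics built on top of the measures.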
It’s never been more exciting to work at the intersection of ed-tech, learning science, evidence-gathering, and learner success. At the same time, we’ve got to marshal the right expertise and critical insights to make real progress. It’s too important to be distracted by the wrong kind of data or conclusions whose support we don’t understand.