Kurt VanLehn recently published a meta-analysis of nearly a hundred well-constructed studies of computer systems used to tutor learners. It's an eye-opening survey of a huge amount of work done over the last few decades, showing real promise in using these systems to help students – and suggesting, indeed, that the better systems may be much closer to human tutoring performance than most of us have realized.
If Kurt's analysis is right, more of us need to get busy figuring out how to scale more of these ideas – and test them at scale. Hard work, mistakes to come – but an average effect size of 0.75 standard deviation units for the better types of tutoring systems ain't chump change!
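For readers who don't live in stats-land: an effect size of 0.75 here means the tutored group's average outcome sits about three-quarters of a standard deviation above the control group's. The usual way to compute it is Cohen's d – difference in group means divided by the pooled standard deviation. Here's a minimal sketch; the scores are made up purely for illustration, not drawn from VanLehn's data:

```python
import statistics

def cohens_d(treatment, control):
    """Effect size: difference in means divided by the pooled standard deviation."""
    n1, n2 = len(treatment), len(control)
    v1 = statistics.variance(treatment)   # sample variance (n-1 denominator)
    v2 = statistics.variance(control)
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return (statistics.mean(treatment) - statistics.mean(control)) / pooled_sd

# Hypothetical post-test scores for tutored vs. untutored students:
tutored   = [78, 85, 82, 90, 75, 88, 81, 84]
untutored = [77, 81, 75, 87, 79, 74, 83, 79]
print(round(cohens_d(tutored, untutored), 2))  # prints 0.76 for these made-up scores
```

An effect in that range is large by education-research standards – which is exactly why the number in Kurt's analysis should make us sit up.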