You know the cocktail party cliché: "Hey, doc, I've got this nagging pain in my hip..." I did have a nagging pain in my hip, and finally went to see somebody about it. It gave me a front-row seat to what you might call "post-science professionalism" – how a huge array of science, engineering, and technology can be deployed (or not) to solve a human problem.
Wouldn't it be cool if learning worked like this?
I'm quite healthy ("Oh, hey, doc, that back surgery in my late 20s – that doesn't count, does it?" Yeah, I know – nightmare patient, it's true.), so I connected with our general family health group. The scheduler was perfectly happy to connect me with my usual nurse practitioner, but on hearing the issue, suggested tying me in directly with the practice's sports MD. Made sense to me, and we set a date and time.
In I come. A nurse asks a few questions, takes a few basic measurements, and then asks, "Do you mind if the doctor brings along a medical student this morning?" Gotta learn somehow, eh?
The doc arrived, medical student in tow, and, after glancing at the notes, asked a targeted series of questions about when my hip hurt, what made it worse, what made it better, how many days ago this first started, etc. When I told him "a few months ago," his eyebrows went up, but he smiled and said, "Well, glad you're here now..."
He had me stretch and push and lean in various directions, all the while talking to both me and the medical student, alternating technical musculoskeletal descriptions and rationales for the student with reassuring and explanatory comments directed at me: "Does this make sense?" and "Let's do this one slowly, 'cause I bet it'll be sore." All of this clearly communicated that he had this under control and was making progress nailing down what the issue was.
After a few more pushes, prods, and pokes, he leaned back and said, "I think it's most likely you've got a strain in the smaller muscles that move the upper part of your leg sideways. There's two of them – I can't tell which one is strained, but it doesn't matter, 'cause what's most likely to make this better to start with is the same either way." He then said, "I can either suggest some exercises and stretches to get you started, or I can get you into physical therapy – which do you think you can do?" I indicated I could probably try things myself. We spent a few minutes with him directing me and modeling what the squats, stretches, and lifts should look like. He wrote 'em out on a yellow sticky note, reassured me that if this didn't handle it, there was plenty more we could do, and off I went.
I tell this everyday medical story in part for what did not happen. You don't have to be an M.D. these days to find out (via the Internet if nothing else) that there's a huge array of incredibly challenging, difficult things that can cause pain in your joints. There's also a huge armamentarium of really cool technologies and diagnostic tools poised over our heads, ready to plunge into action with the least excuse. We can find out how thin the joint linings are – we can figure out exactly which muscle might have some issues – we can rule out hundreds of rare but horrible diagnostic possibilities with a wide array of tests! And all of this is backed up by rigorous testing, giving confidence that this is, indeed, "what works."
Isn't it great to have so much science and technology at our fingertips?
How odd, then, that my experience was so non-technological. The guy didn't even use an iPad, for heaven's sake. What kind of training are they giving M.D.s these days anyway, if they don't even use iPads? Is this malpractice, since he obviously was not flourishing the full weaponry of modern science and technology against the evils that beset my hip?
In fact, something much better happened. With the full backing of 100 years of medical science and technology, and a decade or more of experience resolving real-world problems with patients, he could tell he didn't need any of that – yet.
My problem was not a lack of science and technology. My problem was with a muscle (or two? We may never know) in my hip. What he did was apply his deep understanding of that problem to my history and circumstances, to come up with the most efficient and effective starting point for me, with full awareness of the huge number of tools available.
His goal was to solve my hip problem, not to apply a specific technology. (Not even an iPad! What is this world coming to?) And he approached it as any good engineer would: "What's the least expensive, most-likely-to-work way to diagnose this? And, given the evidence from that, what's the least expensive, most likely way to start solving the problem it reveals?"
This, to me, is the essence of a post-science professionalism. A professionalism that is not blinded by the amazing array of data-gathering, visualization, and treatment tools available. Rather, it starts, first and foremost, from the real problem at hand – in this case, something not quite right with my hip. And then musters the right intensity (and expense) of tools that fit the problem and my circumstances.
By "post-science," I don't mean dismissive of science. What I mean is that the science is so deeply woven into the profession that the (good) practitioners are not blinded by the science or the technology. Instead, their vision of what is happening is deeply altered by the science they've mastered, allowing them to choose, with confidence, the simplest tools and answers that fit. And scale up from there, when needed.
What if our learning environments evolved into this kind of post-science professionalism, a kind of "learning engineering," as Rick Hess and I write about in our recent book? We're seeing glimmers of this now: many Response to Intervention (RTI) environments in K-12 have that scaling-of-solutions feel to them – start with simple interventions, and work up if the (high-quality) evidence shows more is needed.
However, we also still have an unhealthy blindness around technology and learning: schools, districts, superintendents, funders, universities, workplaces – all very well-meaning – often confuse "solving learning challenges" with "using a lot of technology." Students with learning challenges aren't suffering from a lack of technology – that's not a diagnosis. They've got a problem with their learning system or learning environment – a lack of the appropriate prerequisite skills, motivation challenges, a lack of understanding of how necessary challenging practice is, difficulty getting a home environment that supports practice, other language and organic issues – the array of potential problems for a learner goes on and on. But "not having an iPad" is not a learning problem, any more than "not having had an MRI scan" was my hip problem.
What we need is deep understanding of how learning works, and what goes wrong with the learning system and environment itself, coupled with diagnostic tools (from simple to complex) and treatment options (from simple to complex) that can be fitted to the circumstances of an individual patient – oops, learner.
A few more parallels in the health care setting: when I knew I had a problem but no idea what to do about it, a para-professional (a scheduler) could tell I would benefit from going straight to a sports MD. The nurse did a variety of basic data-gathering tasks before the M.D. came in. There's a division of decision-making and tasks, all tied in with an infrastructure of understanding of problems and solutions, to ensure "stuff was ready" for the final, specialized conversation with the most skilled and focused professional about the problem itself.
We've got a long way to go, in learning, to get this kind of systematic splitting of decisions and roles to be common practice across a district, state, or university. (Technology has a role here too: "Get the state's best teenage ratio-and-proportion problem diagnostician into a Skype call with that student, stat!") And to have confidence that every specialist really is deeply tied in to current evidence on what works for learning – and what doesn't.
Finally, the M.D. was coaching another generation of practitioners. The student was already being exposed to the full array of science and technology for health care in medical school. The practitioner was modeling, instead, how to use the simplest of diagnostic tools to arrive at an evidence-based conclusion incredibly quickly – and to provide options for interventions as low-tech as would fit a patient's circumstances. This was science and technology backing up the right call for a given "biological engineering" problem – avoiding the heavy artillery in this case.
Won't it be a great day when a learner has the same confidence that a low-tech learning solution suggested by a teaching professional is backed up by the same accumulated science and wisdom? And that, if more is needed, evidence-based big guns fitted to your problem are just outside your door?