A recent article in the New York Times talks about new ways to rank colleges based on salary potential after graduation, giving very different rankings and results for colleges than, e.g., US News and World Report. It's worth thinking for a moment about all of this – how would any of us decide which learning environments are “better,” either generically or for a specific individual?
Mind you, that's true in life generally: What's the best apartment to rent in a city? The best house? The best car? None of these turns out to be an easy decision, and insight into how we end up making those choices gives some direction for what we might do with learning environments.
With complex decisions, most of us try to collect information about a number of outcomes that are important to us, and then we triangulate on our choices. We know there's noise in the information we're getting, and know that our own specific weighting factors are not going to be the same as someone else's.
Triangulating with multiple lines of evidence is exactly how more advanced specialists in learning assessment like Bob Mislevy think about judging whether learning has been successful or not. You can't actually see directly whether a student's mind has been changed – they sit staring and smiling at you as they always do. The best you can do is use evidence thrown off by tasks you ask them to complete (including fluency and how they solved the problem, not just whether they "got it right"), to see if their minds are behaving "as if" they've mastered what you hoped they'd mastered. Noisy indeed. (It's possible that some form of fMRI scanning could someday give us more direct insight into expertise development in the brain. There's some evidence for that already in reading expertise – but it's not likely to be practical for a long time.)
The same principles should apply when thinking about a learning environment as a whole: you can't just look at one measure to decide if a learning environment is “good.” The best you can do is try to triangulate across a range of measures, and then determine whether a learning environment is “good” for the purposes you have in mind. Note that, just like an apartment or a car, “your mileage may vary” - different people may weight different factors in different ways, so that finding “the best car” in some absolute sense doesn't seem sensible.
Let's think about some important purposes for a learning environment:
- Becoming more aware of the historical context in which people live, in your own culture, across cultures, or both.
- Exploring your own curiosity about a field or topic in increasing depth.
- Maximizing the skill/decision-making gains for the resources you invest (time and money).
- Discovering a passion that will carry you through life as a career, or at least for a decade or two (since careers are starting to have shelf lives shorter than a lifespan!).
- Developing enough expertise in one or several areas to be able to get work to support yourself and others you wish to support in the years to come.
- Meeting a range of compatible people who you hope to stay connected to, learn from, and grow with, for the rest of your life.
There are many more such purposes, but even this brief list shows two challenges:
- Successful outcomes in each of these won't necessarily line up, but also won't necessarily contradict each other – it's messy.
- We have very little useful, objective information from learning environments to help us sort them all out.
The former challenge is the usual one, whether for housing, cars, relationships, careers, etc.: many different beneficial outcomes to choose from, but they won't all line up. That just means we have to decide, either for a group or as an individual, what things are most important, and then get multiple lines of information together to sort out (fuzzy) priorities. In the final analysis, a choice of learning environment, like many other things in life, is a personal choice fraught with noise, and we do the best we can.
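The point that different people weight different factors differently can be made concrete with a toy calculation. Everything below – the criteria, the scores, and the weights – is hypothetical, invented purely to illustrate why "the best" learning environment in some absolute sense isn't a sensible idea:

```python
# A minimal sketch of weighted multi-criteria scoring. All criteria,
# scores, and weights here are hypothetical, not drawn from any real
# college ranking or dataset.

def weighted_score(scores, weights):
    """Combine per-criterion scores (0-10) into one number using
    personal weights that sum to 1.0."""
    return sum(scores[criterion] * w for criterion, w in weights.items())

# Hypothetical measurements for two learning environments.
environments = {
    "College A": {"salary_outcomes": 8, "peer_network": 6, "depth_of_field": 7},
    "College B": {"salary_outcomes": 5, "peer_network": 9, "depth_of_field": 8},
}

# Two learners weight the same noisy evidence very differently.
career_focused = {"salary_outcomes": 0.6, "peer_network": 0.2, "depth_of_field": 0.2}
community_focused = {"salary_outcomes": 0.2, "peer_network": 0.5, "depth_of_field": 0.3}

for name, scores in environments.items():
    print(name,
          round(weighted_score(scores, career_focused), 2),
          round(weighted_score(scores, community_focused), 2))
```

With these made-up numbers, the career-focused learner ranks College A higher, while the community-focused learner ranks College B higher – the "best" choice flips entirely with the weights, which is exactly why a single absolute ranking can't serve every learner.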
However, the second problem is more vexing: we don't have much real information to help us, especially when you overlay the issue of what a learner herself brings to the learning environment. A learning environment that appears terrific at providing long-term economic success and personal satisfaction for extremely well-prepared students may not be as successful for students with more varied academic backgrounds. And, of course, the social settings and backgrounds of "most" students, how the college works, and what an individual student brings in have their own dynamics, separate from any academic background. (There's fascinating recent work on stresses within Harvard Business School caused by the very different economic and social circumstances of students accepted at that elite institution.)
It will never be easy to judge learning environments for any of these purposes. However, a few things would help:
- Institutions could work out a systematic way to describe what learners bring to the table, and then describe success for those subgroups. That would help separate the effects of selectivity from learning effectiveness for different categories of students. Obvious choices are demographic measures, but better choices (though harder to collect) are likely specific measures of competencies, personal traits, and the social environment a student comes from, all of which have a more causal connection to learning outcomes than mere demographics.
- Institutions should expand post-college success measures. As the New York Times article points out, focusing only on average post-college salaries is wildly misleading for an individual student: if most of a college's graduates are engineers and you're thinking about marketing or social services, the raw dollar figure is meaningless. The website PayScale.com makes a start by providing post-college salary information by major for each college, but more should be in place. What about career advancement? Student satisfaction on the job after college? Indeed, did students with a specific major actually wind up in a related field?
Part of the problem is that our learning environments are not fully engineered with an eye towards the long-run success of learners, whether you look at high school environments and college or work success, or college environments and work success. There has been little focus on, and no reward for, academics following students systematically past their time in school to see which parts of their learning actually helped and which were irrelevant to future success or satisfaction. It's been difficult enough for most of these harried professionals to track regulatory changes, state assessments, and (in higher education) research obligations in their domains, leaving long-term student success as mostly unexplored terrain.
There's clearly more to be done. Whether it is tracking what, exactly, the most expert performers in a career decide and do (vs. what academics think they should cover), or more detailed information about how different backgrounds and types of students gain in different learning environments, the evidence about what works for learning needs a lot more refining to help individual learners determine what learning environments will be a fit for them.
For example: under what conditions is any given MOOC on a topic the right choice for you to build real expertise to use at work? How would a learner know before trying? (The cash price seems great – but . . . )