I just completed a course in education research. Well, technically I haven’t completed it, because my main project, if it’s accepted by the instructor, can’t even take place until this summer, but in any case the course is over.
Here’s what I’ve learned: all data is bogus.
I know you’ll find this difficult to believe, but scientific research can’t seem to pin down what works and doesn’t work in our schools. “Smaller class size,” says the Kentucky study. “Not really,” says a study from London. “Accelerated Reader,” says Renaissance Learning and all its “research institute” fronts. “Not likely,” say other studies.
“Read to your kids,” say all kinds of studies. “Nope,” says a study released today by the feds, which says nothing parents do makes as much difference as how much money they make and how much education they got before having children.
What’s the deal here? Big Pharma does this all the time: control group, test group, crunch the numbers, and hey presto! reliable data. And Vioxx.
So why can’t education do the same thing? This is an easy one: educational researchers can’t control the variables. Ever. In any way. Sure, you can “take them into account using statistical methods,” like chicken feathers and eye of newt, I suppose, but the problem there is garbage in, garbage out.
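A toy simulation makes the garbage-in, garbage-out point concrete. Everything below is invented for illustration, not taken from any real study: a pretend “intervention” with zero actual effect on scores, handed out more often in wealthier districts, and a confounder (family income) that the statistical adjustment can only see through measurement noise.

```python
# Toy illustration: "controlling for" a confounder only works if the
# confounder is measured well. All numbers here are made up.
import math
import random
import statistics

random.seed(0)

def simulate(noise_in_confounder, n=20000):
    rows = []
    for _ in range(n):
        income = random.gauss(0, 1)  # the true confounder
        # Wealthier districts are more likely to get the "intervention"
        treated = random.random() < 1 / (1 + math.exp(-2 * income))
        # The intervention itself has ZERO true effect on the score
        score = 5 * income + random.gauss(0, 1)
        # The researcher only sees a noisy proxy for income
        measured_income = income + random.gauss(0, noise_in_confounder)
        rows.append((treated, measured_income, score))
    return rows

def adjusted_effect(rows, n_bins=10):
    # Crude statistical "control": bin on measured income, then compare
    # treated vs. untreated average scores within each bin.
    rows = sorted(rows, key=lambda r: r[1])
    size = len(rows) // n_bins
    diffs = []
    for b in range(n_bins):
        chunk = rows[b * size:(b + 1) * size]
        t = [s for tr, _, s in chunk if tr]
        c = [s for tr, _, s in chunk if not tr]
        if t and c:
            diffs.append(statistics.mean(t) - statistics.mean(c))
    return statistics.mean(diffs)

# Confounder measured cleanly: the estimated effect comes out near the
# true value (zero).
print(adjusted_effect(simulate(0.0)))
# Confounder measured badly (garbage in): the "controlled" estimate is
# still badly inflated (garbage out), crediting the intervention with
# the effect of income.
print(adjusted_effect(simulate(3.0)))
```

With a clean measurement, the within-bin comparison roughly recovers the true effect of zero; with a noisy one, the same adjustment happily reports a large effect that belongs entirely to income. The same arithmetic, fed different garbage.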
However, there is a bigger problem with educational research, and that is measuring results. Scratch any of these studies and you’ll find they’re all after the same thing: increasing student achievement.
Quick: what is “achievement”?
You see the problem. Even if we all agreed that “student achievement” was properly measured by the standardized tests we have or might develop (we don’t, by the way), the problem remains that the variables going into the results of standardized tests are just as squirrelly and uncontrollable as those skewing the study itself.
Here’s a direct quote from the horrible, horrible textbook from the course which just ended: “Of course, if the mechanisms underlying the creation of academic achievement were understood completely, and if each of the variables was measured well, then a longitudinal survey… could provide adequate information on causal effects.” [Haertel, G. D., & Means, B. (Eds.). (2003). Evaluating educational technologies: Effective research designs for improving learning. pp. 196-197]
This of course is the classic Ham & Egg routine from vaudeville: “If we had any ham, we could have ham and eggs, if we had any eggs.” But nobody’s laughing, somehow.
Until we all agree on what achievement is, and until we have a universal standard for it and reliable ways to measure it, all educational research must be regarded with suspicion.