If you have always done it that way, it is probably wrong.
I realise I must have come as something of a disappointment for all those expecting the curly-headed medical mischief-maker, Ben Goldacre, but it was wonderful to have the opportunity to try to explain where my thinking currently is on the thorny matter of education research. Really I have no right to a place on the big stage at a conference like ResearchED; I’ve never done any proper research; I have no qualifications beyond my PGCE. I’m just a very geeky chancer with a big gob and a certain way with words. But, for those who want ’em, here are my slides:
- What works is a lot better than what doesn’t
- Intuition vs evidence: the power of prediction
- Some tentative thoughts about evidence in education
- Further thoughts about evidence in education
If you read them in sequence you may get a sense of how my thinking is evolving – please be aware that this is a work in progress…
So, I started by asking how many education studies get published annually – the internet doesn’t seem to know, but the consensus is tens of thousands. This being the case, why is it that we seem to have made so little headway on solving the problems we keep researching? And why does research seem to have so little impact on teaching? Earlier in the day, Dylan Wiliam suggested that the problem with research is that it only tells us what was the case, not what might be possible. I’m of the opinion that maybe we can do a little better than that.
As I’ve discussed before, part of the problem is that there’s very little agreement on what education is for. As Willingham, Biesta and Egan have all said (I’m pretty sure Egan said it first), education is “values saturated”. No matter what the evidence tells us, we’ll ignore it if it clashes with what we hold most dear. Until we address this pressing concern, researching how to improve education seems somewhat pointless.
At this point I ran through some of the compelling reasons there might be to indicate that we’re all wrong, all the time. We considered various physiological and psychological blind spots, all of which prevent us from perceiving reality as it really is and from spotting where we’ve gone wrong. As Henri Bergson said, “The eyes see only what the brain is prepared to comprehend.” The most alarming of these intellectual confounds is the bias blind spot: the fact that even when we understand our limitations we still fail to spot the flaws in our thinking.
But possibly, this lack of certainty isn’t as bad as we might think:
I can live with doubt and uncertainty and not knowing. I think it is much more interesting to live not knowing than to have answers that might be wrong.
– Richard Feynman
The growth of our knowledge is the result of a process closely resembling what Darwin called ‘natural selection’; that is, the natural selection of hypotheses: our knowledge consists, at every moment, of those hypotheses which have shown their (comparative) fitness by surviving so far in their struggle for existence; a competitive struggle which eliminates those hypotheses which are unfit.
– Karl Popper
Ideas are, perhaps, no less random than biology and equally unlikely to lead to inexorable progress. The second law of thermodynamics suggests entropy is our natural state and any apparent sense of progress is merely a temporary delusion. I was quite pleased with this insight until @turnfordblog pointed out that some fellow called Kuhn got there some 50 years earlier! Hey ho.
We then thought about some of the problems with evidence as it is conducted and consumed in the field of education. Evidence is all too often misrepresented as proof: it isn’t. You can, as sundry loons often declaim, prove anything with facts. I explored the idea that the applicability of classroom research is limited by the context in which it is undertaken. Dylan Wiliam made the same point much better earlier in the day, but essentially I was suggesting that regardless of how large and well-controlled our samples are, the one variable that’s rarely accounted for is the biases of the research team. We revisited the old idea that correlation is by no means the same thing as causation (thanks to Glen Gilchrist for these slides) and that if we look hard enough for a link we’ll more than likely find one.
Wittgenstein observed that, “The existence of the experimental method makes us think we have the means of solving the problems which trouble us; though problems and methods pass one another by.” This is an issue at work in all too much research. Consider the example of the How People Learn project, which set out to establish how we should teach by using such principles as “To develop competence in an area of inquiry, students must a) have a deep foundation of factual knowledge, b) understand facts and ideas in the context of a conceptual framework, and c) organize knowledge in ways that facilitate retrieval and application”. The trouble is that a), b) and c) are simply definitions of ‘competence in an area of inquiry’. No amount of empirical research could ever demonstrate that these things are not connected!
I also raised the issue of measurability – in order to measure a thing we have to agree a scale – if you’re using miles and I’m using kilometres there’s going to be some confusion. But there’s no such agreement in education: what is the unit of education? The effect size would have us believe it’s about time, or progress, but I’m just not sure this is either true or reliable. And then there’s the burden of proof – extraordinary claims require extraordinary evidence, but intuitive or common-sense findings require little if any evidence. This is where RCTs come into their own; as Popper said, “Good tests kill flawed theories; we remain alive to guess again.” All we need do is ask whether such tests are testing a thesis which is falsifiable, that the test is replicable, well-controlled, large enough, and, crucially, published. (Less than 1% of published research is replication, and journals and researchers routinely conspire not to publish negative findings.)
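To make the measurability worry concrete, here’s a minimal sketch of one common effect-size measure, Cohen’s d: the difference between two group means divided by their pooled standard deviation. The scores below are invented purely for illustration:

```python
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Cohen's d: difference in means divided by the pooled standard deviation."""
    na, nb = len(group_a), len(group_b)
    # Pool the two sample variances, weighted by degrees of freedom.
    pooled_var = ((na - 1) * stdev(group_a) ** 2
                  + (nb - 1) * stdev(group_b) ** 2) / (na + nb - 2)
    return (mean(group_a) - mean(group_b)) / pooled_var ** 0.5

# Invented test scores for an 'intervention' class and a 'control' class.
intervention = [62, 58, 71, 65, 60, 68]
control = [55, 60, 52, 58, 54, 57]
print(round(cohens_d(intervention, control), 2))  # 1.98
```

Notice that the same raw gain in marks yields wildly different values of d depending on how spread out the scores happen to be – which is part of why treating the effect size as ‘the unit of education’ makes me uneasy.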
So what should schools do? My argument is that we can and should look to research that allows us to make meaningful and measurable predictions. Carl Wieman draws a parallel between physics and education and points out that a physicist has no need to examine every atom in every context to be able to make predictions about the behaviour of most atoms in most contexts. This brings us to what we believe about how we learn. Do we believe children are broadly similar or different? Can we make generalisations about how we learn? Well, maybe.
I briefly mentioned Bayes’ Theorem, which I barely understand but which seems awesome. Basically, back in the eighteenth century the Reverend Bayes came up with an equation for updating the probability that a theory is correct in the light of new evidence.
- P(A), the prior probability – the initial degree of belief in A.
- P(A|B), the conditional probability – the degree of belief in A having accounted for B.
- The quotient P(B|A)/P(B) represents the support B provides for A.
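The terms above slot into the theorem itself, P(A|B) = P(B|A) × P(A) / P(B). Here’s a toy update in Python – every number is an invented assumption, purely to show the mechanics:

```python
def bayes_update(prior, p_b_given_a, p_b_given_not_a):
    """Return P(A|B): belief in theory A after observing evidence B."""
    # Total probability of the evidence, P(B), via both routes (A true, A false).
    p_b = p_b_given_a * prior + p_b_given_not_a * (1 - prior)
    return p_b_given_a * prior / p_b

# Invented numbers: we're 50/50 on whether an intervention works (the prior);
# a trial shows a gain 80% of the time if it works, 30% of the time if it doesn't.
posterior = bayes_update(prior=0.5, p_b_given_a=0.8, p_b_given_not_a=0.3)
print(round(posterior, 3))  # 0.727 – the evidence strengthens, but doesn't prove, the theory
```

The appeal, as far as I can see, is exactly this: new evidence shifts belief by degrees rather than delivering the false certainty of ‘proof’.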
Now, I think I’ve got a long way to go before I’m able to apply this, but Old Andrew helpfully pointed me in the direction of Reckoning with Risk: Learning to Live with Uncertainty by Gerd Gigerenzer, which I promptly ordered. I’ve discovered a whole community of folk engaged in what they call Bayescraft, stripping away the nonsense from what we believe in order to live a rational life. I’ve no idea where this might lead, but watch this space…
As yet I haven’t applied the theorem, but I’m under the impression that various things unearthed in the highly controlled conditions of psychology laboratories seem likely to be robust. These include the spacing effect, the testing effect and cognitive load theory. If these things are, broadly speaking, correct, then I can use them to make accurate predictions about how children are likely to respond in a classroom. After all, one has to trust something.
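For what it’s worth, here’s a toy sketch of the kind of concrete prediction the spacing effect is often taken to license: reviewing material at expanding intervals rather than all at once. The starting gap and multiplier here are arbitrary assumptions of mine, not research findings:

```python
def review_schedule(first_gap_days=1, multiplier=2, reviews=5):
    """Days after first study on which to revisit the material,
    with each gap growing by the given multiplier."""
    days, gap = [], first_gap_days
    for _ in range(reviews):
        # Each review lands one (growing) gap after the previous one.
        days.append(gap if not days else days[-1] + gap)
        gap *= multiplier
    return days

print(review_schedule())  # [1, 3, 7, 15, 31]
```

The testable prediction is simply that children revisiting material on something like this schedule should retain more than children who mass the same total practice into one session.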
Too much openness and you accept every notion, idea, and hypothesis — which is tantamount to knowing nothing. Too much skepticism — especially rejection of new ideas before they are adequately tested — and you’re not only unpleasantly grumpy, but also closed to the advance of science. A judicious mix is what we need.
– Carl Sagan
Hopefully this helps you make sense of the slides. If you have any constructive critique to offer on where I might be wrong or what I might be missing, I’d be terribly grateful.
There’s also a video of me speaking here (it starts about 5 mins in).