If you have always done it that way, it is probably wrong.
I realise I must have come as something of a disappointment for all those expecting the curly-headed medical mischief-maker, Ben Goldacre, but it was wonderful to have the opportunity to try to explain where my thinking currently is on the thorny matter of education research. Really I have no right to a place on the big stage at a conference like ResearchED; I’ve never done any proper research; I have no qualifications beyond my PGCE. I’m just a very geeky chancer with a big gob and a certain way with words. But, for those who want ’em, here are my slides:
- What works is a lot better than what doesn’t
- Intuition vs evidence: the power of prediction
- Some tentative thoughts about evidence in education
- Further thoughts about evidence in education
If you read them in sequence you may get a sense of how my thinking is evolving – please be aware that this is a work in progress…
So, I started by asking how many education studies get published annually – the internet doesn’t seem to know but the consensus is tens of thousands. This being the case, why is it that we seem to have made so little headway on solving the problems we keep researching? And why is it that research seems to have so little impact on teaching? Earlier in the day, Dylan Wiliam suggested that the problem with research is that it only tells us what was the case, not what might be possible. I’m of the opinion that maybe we can do a little better than that.
As I’ve discussed before, part of the problem is that there’s very little agreement on what education is for. As Willingham, Biesta and Egan have all said (I’m pretty sure Egan said it first), education is “values saturated”. No matter what evidence tells us, we’ll ignore it if it clashes with what we hold most dear. Until we address this pressing concern, researching how to improve education seems somewhat pointless.
At this point I ran through some of the compelling reasons there might be to indicate that we’re all wrong, all the time. We considered various physiological and psychological blind spots, all of which prevent us from perceiving reality as it really is and from spotting where we’ve gone wrong. As Henri Bergson said, “The eyes see only what the brain is prepared to comprehend.” The most alarming of these intellectual confounds is the bias blind spot: the fact that even when we understand our limitations we still fail to spot the flaws in our thinking.
But possibly, this lack of certainty isn’t as bad as we might think:
I can live with doubt and uncertainty and not knowing. I think it is much more interesting to live not knowing than to have answers that might be wrong.
Richard Feynman
The growth of our knowledge is the result of a process closely resembling what Darwin called ‘natural selection’; that is, the natural selection of hypotheses: our knowledge consists, at every moment, of those hypotheses which have shown their (comparative) fitness by surviving so far in their struggle for existence; a competitive struggle which eliminates those hypotheses which are unfit.
Karl Popper
Ideas are, perhaps, no less random than biology and equally unlikely to lead to inexorable progress. The second law of thermodynamics suggests entropy is our natural state and any apparent sense of progress is merely a temporary delusion. I was quite pleased with this insight until @turnfordblog pointed out that some fellow called Kuhn got there some 50 years earlier! Hey ho.
We then thought about some of the problems with evidence as it is produced and consumed in the field of education. Evidence is all too often misrepresented as proof: it isn’t. You can, as sundry loons often declaim, prove anything with facts. I explored the idea that classroom research is limited by the context in which it is undertaken. Dylan Wiliam made the same point much better earlier in the day, but essentially I was suggesting that regardless of how large and well-controlled our samples are, the one variable that’s rarely accounted for is the biases of the research team. We revisited the old idea that correlation is by no means the same thing as causation (thanks to Glen Gilchrist for these slides) and that if we look hard enough for a link we’ll more than likely find one.
Wittgenstein observed that, “The existence of the experimental method makes us think we have the means of solving the problems which trouble us; though problem and method pass one another by.” This is an issue at work in all too much research. Consider the example of the How People Learn project, which set out to establish how we should teach by using such principles as “To develop competence in an area of inquiry, students must a) have a deep foundation of factual knowledge, b) understand facts and ideas in the context of a conceptual framework, and c) organize knowledge in ways that facilitate retrieval and application”. He points out that a), b) and c) are definitions of ‘competence in an area of inquiry’. No amount of empirical research could ever demonstrate that these things are not connected!
I also raised the issue of measurability – in order to measure a thing we have to agree a scale – if you’re using miles and I’m using kilometres there’s going to be some confusion. But there’s no such agreement in education: what is the unit of education? The effect size would have us believe it’s about time, or progress, but I’m just not sure this is either true or reliable. And then there’s the burden of proof – extraordinary claims require extraordinary evidence, but intuitive or common-sense findings require little if any. This is where RCTs come into their own; as Popper said, “Good tests kill flawed theories; we remain alive to guess again.” All we need do is ask whether such tests are testing a thesis which is falsifiable, and whether the test is replicable, well-controlled, large enough and, crucially, published. (Less than 1% of published research is replication, and journals and researchers routinely conspire not to publish negative findings.)
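For what it’s worth, the effect size in question is usually a standardised mean difference such as Cohen’s d. Here is a minimal sketch of how such a figure is computed – all the scores below are invented for illustration – which also shows why it depends entirely on the spread of whatever scale you happened to measure on:

```python
# A minimal sketch of Cohen's d, the statistic behind claims like
# "this intervention adds X months of progress". Scores are invented.
from statistics import mean, stdev

def cohens_d(treatment, control):
    """Standardised mean difference between two groups of scores."""
    pooled_sd = (((len(treatment) - 1) * stdev(treatment) ** 2 +
                  (len(control) - 1) * stdev(control) ** 2) /
                 (len(treatment) + len(control) - 2)) ** 0.5
    return (mean(treatment) - mean(control)) / pooled_sd

treatment = [62, 58, 71, 66, 60, 69]   # hypothetical test scores
control = [55, 60, 52, 58, 54, 57]
print(round(cohens_d(treatment, control), 2))  # prints 1.99
```

Notice that the raw difference in means is divided by the pooled standard deviation, so the “unit” is not time or progress at all but however spread out your particular sample’s scores happened to be.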
So what should schools do? My argument is that we can and should look to research that allows us to make meaningful and measurable predictions. Carl Wieman draws a parallel between physics and education and points out that a physicist has no need to examine all atoms in every context to be able to make predictions about the behaviour of most atoms in most contexts. This brings us to what we believe about how we learn. Do we believe children are broadly similar or different? Can we make generalisations about how we learn? Well, maybe.
I briefly mentioned Bayes’ Theorem, which I barely understand but which seems awesome. Basically, back in the eighteenth century the Reverend Bayes came up with an equation to test the probability that a theory was correct in the light of new evidence.
- P(A), the prior probability – the initial degree of belief in A.
- P(A|B), the conditional probability – the degree of belief in A having accounted for B.
- The quotient P(B|A)/P(B) represents the support B provides for A.
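Put together, the three quantities above give the full update rule: P(A|B) = P(A) × P(B|A) / P(B). Here is a minimal sketch of a single update, using the kind of screening-test numbers Gigerenzer is fond of – all the figures are invented for illustration, not taken from anywhere:

```python
# A minimal sketch of one Bayesian update: P(A|B) = P(A) * P(B|A) / P(B).
# All numbers below are invented for illustration.

def bayes_update(prior_a, likelihood_b_given_a, prob_b):
    """Degree of belief in A after accounting for evidence B."""
    return prior_a * likelihood_b_given_a / prob_b

# Suppose 1% of pupils hold misconception A (the prior), a quick quiz
# flags 90% of those who hold it, but also 10% of those who don't.
p_a = 0.01                        # P(A): prior belief
p_b_given_a = 0.90                # P(B|A): quiz flags a pupil with A
p_b_given_not_a = 0.10            # false-positive rate
p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)   # P(B)

posterior = bayes_update(p_a, p_b_given_a, p_b)
print(round(posterior, 3))        # prints 0.083
```

The counter-intuitive bit – and Gigerenzer’s point – is that even with a “90% accurate” quiz, a flagged pupil still only has about an 8% chance of actually holding the misconception, because the prior is so low.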
Now, I think I’ve got a long way to go before I’m able to apply this but Old Andrew helpfully pointed me in the direction of Reckoning with Risk: Learning to Live with Uncertainty by Gerd Gigerenzer, which I promptly ordered. I’ve discovered a whole community of folk engaged in what they call Bayescraft in order to strip away the nonsense of what we believe in order to live a rational life. I’ve no idea where this might lead, but watch this space…
As yet I haven’t applied the theorem, but I’m under the impression that there are various findings unearthed in the highly controlled conditions of psychology laboratories which seem likely to hold. These include the spacing effect, the testing effect and cognitive load theory. If these things are, broadly speaking, correct, then I can use them to make accurate predictions about how children are likely to respond in a classroom. After all, one has to trust something.
Too much openness and you accept every notion, idea, and hypothesis — which is tantamount to knowing nothing. Too much skepticism — especially rejection of new ideas before they are adequately tested — and you’re not only unpleasantly grumpy, but also closed to the advance of science. A judicious mix is what we need.
Carl Sagan
Hopefully this helps you make sense of the slides. If you have any constructive critique to offer on where I might be wrong or what I might be missing, I’d be terribly grateful.
There’s also a video of me speaking here (it starts about 5 mins in.)
I think one of the key issues with educational research is that only a minority of teachers are in a position to do something about the research. The reasons below go some way towards explaining this, but are in no way exhaustive.
1 – a large minority of teachers do not care about research evidence. They want to go into work, control their classes and then go home with a pile of marking and a minimum of extra fuss.
2 – teacher training is very behind the times AND it is not long enough for student teachers to learn all that there is to know.
3 – many teachers have the perception that just by being a teacher they are developing expertise. This isn’t true and is not supported by the evidence on the development of expertise. However, this doesn’t stop teachers who have been in the job for twenty years speaking as if they have expertise. Has anyone researched how well teachers develop their expertise? I have read very little research from this area and think it needs serious consideration if we are going to tackle the paucity of evidence-based education.
And perhaps most importantly
4 – Whilst researchers like Hattie (and authors like yourself) have made efforts to bring educational research to the masses, there is still a chasm between what is written and what teachers understand. I have been training teachers in my locality for five years. I am always revising the language I use due to the experiences I have through this training. I conducted a survey during one training session on a list of words commonly found in educational material. The words included ‘metacognition’, ‘cognition’, ‘intrinsic’, ‘extrinsic’, ‘motivation’ and ‘volition’ (among many others). The survey was completed by 40 teachers (yes, I know, a small sample – however, the sample was representative of a normal school). The results were surprising and a little shocking: only 5% of the teachers could define and use the majority of the words in the way educational journal articles do. What impact does this impoverished research vocabulary have on teacher education? The words and terms used are a barrier to teacher development. AND without having a good grasp of these terms, developing expertise is problematic as many texts are unfathomable.
It is not all bad news, and my experiences certainly support the main thrust of ResearchED 14: over the last two years I have noticed an upshift in understanding of research language in secondary schools. No longer does ‘metacognitive’ raise eyebrows, and many more staff are happy to use the term ‘intrinsic motivation’. Perhaps what every school needs is an evidence-based education champion. Perhaps the government should look for a small pot of cash to entice more research-trained Doctoral educators into the state school sector. Doctoral teachers who can translate and drip-feed the vocabulary and skills needed to promote evidence-based education. Maybe raising the profile and standing of educators along the way? Maybe helping, in a small way, to turn teaching into a lifelong educational experience?
Have a look at Lakatos and Musgrave’s work in the 1960s too – post Karl Popper and, like Bayes’, looking at serious models for how we construct truth.
Thank you – will do!
I’ll lend you some maths books if you want 😉 Although Proofs and Refutations by Lakatos is a classic.
You might also find interesting McGrayne, S. B. (2011) The Theory That Would Not Die. London: Yale UP, on Bayesian ideas and their ructions; I wrote about it at http://bit.ly/bayesBlog.
Thank you James – wonderful! I’ve just added that to my growing pile of books to read up about Bayes.