No, I’m not using evidence, but I’m not using prejudice either. I am exercising my professional judgement.

Sue Cowley

It doesn’t make a difference how beautiful your guess is. It doesn’t make a difference how smart you are, who made the guess, or what his name is. If it disagrees with experiment, it’s wrong.

Richard Feynman

A few days ago I wrote about why we shouldn’t credulously accept evidence, and why it isn’t as simple as suggesting that teachers use either evidence or prejudice to inform their decisions. We are all guilty of using prejudice, whether or not we use evidence. I proposed that we should ask two questions when reviewing evidence: first, what is the evidence? And second, what is my prejudice?

One reader left this comment in favour of an anti-evidence stance: “I think maybe when Sue Cowley and others say they know that certain kinds of activity are beneficial, perhaps they know this … from long and deep experience?” This is a position I’ve addressed a number of times, but particularly here. The problem is that teaching may be a ‘wicked domain’ in which expert judgement doesn’t routinely develop as a result of ‘long and deep experience’. There are two main barriers to teacher improvement. One is that we often fail to notice whether what we do actually helps children make good progress. The other is the way we are held to account. We are asked to justify and explain why students failed to make the grade; we are under pressure to make excuses and conceal mistakes to avoid being blamed. Instead of admitting that what we’re doing doesn’t appear to be effective, we shrug and say, “These things happen” and “What can you expect with kids like these?”

Instead, if we want to improve as teachers, we have to acknowledge our errors. If school leaders want this to happen, they must create a culture in which doing so is both safe and normal. We should change the norm from using evidence to confirm our prejudices to using it to explore how we might think and act differently.

After asserting her professional judgement in the comment thread, the educationalist Sue Cowley then wrote this blog post. Exercising the principle of charity, I should make clear that there’s quite a lot we agree on. She points out – as I have done – that there are problems with evidence in education. Yes, of course. I’m not aware of anyone who would dispute this.

Then she argues – as I have done – that part of the problem is that we don’t agree on what education is actually for. No matter how much empirical evidence we could come up with proving the effectiveness of rote learning, corporal punishment, circle time or group hugs, if it conflicts with your moral and ethical beliefs about the world, you will ignore it. Again, I don’t think anyone disputes this.

From here, Sue then wants to claim that schools are not like labs and that research conducted in labs is therefore unhelpful. Well, it’s certainly true that classrooms are very different environments from psychology labs, but – as I argued here – that doesn’t mean we should dismiss the findings of psychologists. Good science has the power to make useful predictions; if research can be used to inform our actions then it is useful. We don’t need to control and predict exactly how every student in every context will behave or learn, just as a physicist has no need to control or predict how every single atom will behave in an experiment. All that’s necessary is that we can predict an outcome that is both meaningful and measurable. The insights of cognitive science – gleaned over more than a century and predicated on well-designed, repeatable tests that build on prior research and produce broadly consensual, meaningful and measurable outcomes – should not be dismissed as unlikely to work in the classroom. Of course a specific intervention may not be effective with a particular student, but that doesn’t mean it won’t be effective in the majority of cases.

She then suggests it’s ridiculous to assess the efficacy of an intervention such as Philosophy for Children by whether children’s SATs results increase. She would have a point if the claim underlying P4C weren’t that it “aims to improve children’s reasoning, social skills, and overall academic performance.” If an increase in overall academic performance doesn’t show up in SATs results, then we can reasonably conclude that this claim isn’t correct. Of course, the P4Cers can still say they improved children’s social skills, as no one would expect that to show up in SATs. To test that claim – as I explained here – we’d need to design an assessment of social skills.

Her next sally is to suggest that trials are expensive and that the DfE is wasting money that could be spent elsewhere. We should be clear that the trials the DfE wants to fund are completely different from the laboratory trials Sue railed against a few paragraphs earlier: these are randomised controlled trials conducted in real classrooms with real teachers and real children. She’s right to say that the rhetoric around ‘closing the gap’ is misguided; the best we can probably do is seek to move the entire bell curve to the right. I also agree that there is good reason to be sceptical about the EEF. She also makes a fair point about the hypocrisy of the DfE rolling out new grammar schools in the face of all the very clear evidence that this is a bad idea. But does that mean it’s a waste of money to fund trials into the efficacy of classroom interventions?

Sue points out that every intervention is likely to come with negative side effects. That’s true. But it’s true of interventions whether they’re researched or not. It’s well established that feedback can have a powerfully negative impact, but no one’s suggesting that we should never give children feedback. In fact, wouldn’t it be better if these side effects were better understood through well-designed tests? As Sue says, “history tells us a story about the issues this has caused in the field of medicine.” It does indeed. Consider the woeful tale of Ignaz Semmelweis. His clear evidence that doctors washing their hands reduced infection rates was ignored by the learned profession, which saw the idea as demeaning and pointlessly trivial. Doctors’ professional judgement cost lives. Ben Goldacre catalogues many other instances where medical trials contradicted professional judgement here.

Sue concludes by saying she’s not happy for her children to be guinea pigs in classroom trials. But weirdly, she’s “fine for my children’s class teachers to try out new approaches that they think will suit my child.” I may be missing something, but this seems a very right-wing, capitalist approach to education. She seems to be saying that she’s fine for her children to be experimented on as long as the experiment doesn’t involve any reputable protocols. Because that’s what happens when we ignore evidence: we just footle about with what we reckon is a good idea without ever finding out whether we’re wrong. And if we are wrong, we’re gambling with children’s life chances. We create a closed circle in which we put our vanity, our prejudices, and our misplaced sense of professional pride ahead of what’s best for children.

Other professions worthy of the name have set aside such naive notions of ‘professional judgement’ in favour of becoming critical consumers of research. If we really care about children’s life chances, we should set great store by that.

Let’s give the last word to Douglas Carnine:

Until education becomes the kind of profession that reveres evidence, we should not be surprised to find its experts dispensing unproven methods, endlessly flitting from one fad to another. The greatest victims of these fads are the very students who are most at risk.