I wrote earlier in the week about why, despite its limitations, research is better than a hunch. Since then, I’ve been reading Daniel Willingham’s article on Real Clear Education; he says that it’s not that people are stupid but that science is hard. He refers to the Nobel Prize-winning physicist Carl Wieman, whose interest in science education came from many years of working closely with physics undergraduates and observing that “their success in physics courses was such a poor predictor of a student’s ultimate success as a physicist.” Or, in other words, performance was not a useful indication of learning.

Wieman argues that rigorous education research is not so very different to ‘hard’ science as some might want to suggest. Good science has the power to make useful predictions; if research can be used to inform our actions then it is useful. It’s unnecessary to accurately control and predict how every student in every context will behave or learn, just as a physicist has no need to control or predict how every single atom will behave in a physics experiment. All that’s necessary is that we can predict an outcome that is both meaningful and measurable.

This tells us that the insights of cognitive science — gleaned over more than a century and predicated on well-designed, repeatable tests that build on prior research and produce broadly consensual, meaningful and measurable outcomes — should not be dismissed as unlikely to work in the classroom.

Of course it’s the case that a specific intervention may not be effective with a particular student, but that doesn’t mean it will not be effective in the majority of cases. As an example, I conducted a mini investigation into the effects of praise on feedback last year. I’ll confess that I didn’t put any effort into designing a fair test; I just wanted to see what would happen if I stopped praising my students and concentrated instead on giving them instructional and corrective feedback. I explained to them what I was doing and why, and after a six-week period I asked them to evaluate how they thought the experiment had gone. 29 out of 30 students said they had found it easier to act on feedback and felt that they had improved as a result. One student said that she had struggled with not being praised for ‘what went well’ and had failed to make progress as a result. My conclusions? I would continue not giving praise to the majority of the class, but the student who felt she needed praise would get it.

Now I’m not claiming my ‘research’ has any value as science: it doesn’t. But I decided to make this change after reading some research which predicted that my students might find praise counter-productive. The research I’d read was useful because it allowed me to test a theory that was underpinned by empirical evidence. I’ll allow (as Carol Webb has pointed out in the comments below) that I’m as susceptible to ‘observer effects’ as anyone else and that my students may well have been experiencing the Hawthorne Effect. Maybe so, but even if my trial hadn’t ‘worked’, the research still allowed me to make a useful prediction. Obviously, if I’d found that the prediction didn’t hold for my particular students, that would have been a useful finding too.

So, yes: applied sciences are different from ‘natural’ sciences and education research is, as Willingham puts it, “saturated with values”, but any research that offers meaningful and measurable outcomes is worth at least considering.

But does this trump the intuition of an experienced and thoughtful practitioner? Yes, I think it does. We don’t know what we don’t know, and without research to make these predictions we will inevitably rely on what we’ve always done. We’re very good at jumping to conclusions about what we feel, intuitively, must be effective. But as Daniel Simons and Christopher Chabris have shown in The Invisible Gorilla, our intuition is not to be trusted.

As an example, I found this on Richard Wiseman’s blog yesterday:

[Image: chess piece shade illusion, from Richard Wiseman’s blog]

The ‘black’ pieces are exactly the same shade and colour as the ‘white’ pieces. I know, right? It seems obviously untrue. I had to print it out and line up the pieces before I could accept it. Our brains refuse to allow us to accept the evidence of our eyes. We cannot trust our intuition because it is demonstrably wrong. And if it’s wrong about little things, maybe it’s wrong about some of the bigger ones too?

Related posts

The Cult of Outstanding
Everything we’ve been told about teaching is wrong and what to do about it!
Testing & assessment: have we been doing the right things for the wrong reasons?