We used to think that if we knew one, we knew two, because one and one are two. We are finding that we must learn a great deal more about ‘and’.

Arthur Stanley Eddington

After my presentation on Slow Writing at the researchED Primary Literacy Conference in Leeds, I was asked a very good question by Alex Wetherall. Basically – and I hope he forgives my paraphrase – he asked whether it would be worth conducting some ‘proper’ research on my good idea.

I said no.

It seemed as though this came as something of a surprise to the research-literate audience.

I’ve had a good think since and I stand by the justification I gave.

Let’s imagine we want to conduct some research on the effectiveness of a new teaching strategy (strategy X). How would we go about it? Well, we’d probably want to test its effectiveness across a range of different groups of pupils and we’d probably consider getting several different teachers to try it out. We’d also want to have a control group which didn’t get the intervention so that we could try to establish what sorts of things happen to children deprived of the wonders of strategy X (and some would argue that it’s unethical to deny children something that might benefit their education). Any particularly reputable research might also want to be a double-blind experiment to try to avoid such confounds as the Hawthorne effect, but it’s pretty tricky to keep teachers in the dark about how they’re teaching their pupils, so in practice this is something that very rarely happens. We’d then need to decide on our success criteria – how will we know if strategy X works? For that we need something to measure, but what? Test results maybe?

OK, so we’ve set up our study and we’ve done the statistics and, guess what? It turns out strategy X is effective! It works! Probably because the teachers participating in the study were enthusiastic about taking part – it’s likely they were pretty darn keen to make it work. (If I were to choose teachers who thought the idea was a bit silly, I’m sure they’d find it didn’t work.) The overwhelming majority of studies show the successful implementation of ideas, frameworks, teaching materials, methods, technological innovations and so on. It doesn’t seem to matter whether studies are well-funded, small-scale or synthesising analyses of other studies: almost everything studied by education researchers seems effective. Of course, there are some studies that report failures, but because of the perennial problem of under-reporting negative results they’re rare. We have acres of information on how to improve pupils’ learning, such that it seems inconceivable that learning would not improve. But almost every one of these successful studies has absolutely no impact on system-wide improvement. Why is this?

One of the problems we have in believing what’s effective in education is that so many of the most valuable findings are counter-intuitive and deeply troubling. Teachers often exhibit mimicry – copying what they see others doing – but without trying to develop the understanding of the expert teacher. Education is so saturated with values, and so contested in its aims, that it cannot really be dealt with in the same way as physical sciences. We may pay lip service to this fact, but we still assume that all learning is part of the natural world and therefore conforms to the same rules that govern the rest of nature. I’m not sure that’s true. Rather, learning is shaped by a combination of evolution, culture, history, technology and development, and as such it’s a slippery devil.

Subjecting good ideas to the rigours of science is a bit bogus. If that’s what we mean by research then I struggle to see the point. But maybe we could think differently about research. The research I consider myself to be involved in goes a little like this:

Step 1 – have a good idea

Step 2 – try it out with some students

Step 3 – think about what happened – is it worth doing again?

Step 4 – think about why it worked. Maybe dig into the reams of existing research to find out what others think. Come up with a theory which provides a post hoc justification for your good idea’s success.

Step 5 – share your idea with other teachers. Ask them to tell you what they liked and what they didn’t like.

Step 6 – improve the idea.

Step 7 – resist, with all your might, the temptation to slap numbers on to your idea in an attempt to justify why it’s good; this is cargo cult science.

And that, in my opinion, is the sort of research every teacher could try, and get a little bit better at. Let’s not call it research, let’s call it being a professional. And just to be absolutely clear, this categorically does not mean research should not be done. I just think there are better ways for teachers to spend their time.
