We used to think that if we knew one, we knew two, because one and one are two. We are finding that we must learn a great deal more about ‘and’.
Arthur Stanley Eddington
After my presentation on Slow Writing at the researchED Primary Literacy Conference in Leeds, I was asked a very good question by Alex Wetherall. Basically – and I hope he forgives my paraphrase – he asked whether it would be worth conducting some ‘proper’ research on my good idea.
I said no.
It seemed as though this came as something of a surprise to the research-literate audience.
I’ve had a good think since and I stand by the justification I gave.
Let’s imagine we want to conduct some research on the effectiveness of a new teaching strategy (strategy X). How would we go about it? Well, we’d probably want to test its effectiveness across a range of different groups of pupils and we’d probably consider getting several different teachers to try it out. We’d also want to have a control group which didn’t get the intervention so that we could try to establish what sorts of things happen to children deprived of the wonders of strategy X (and some would argue that it’s unethical to deny children something that might benefit their education). Any particularly reputable study might also aim to be a double-blind experiment to try to avoid such confounds as the Hawthorne effect, but it’s pretty tricky to keep teachers in the dark about how they’re teaching their pupils, so in practice this is something that very rarely happens. We’d then need to decide on our success criteria – how will we know if strategy X works? For that we need something to measure, but what? Test results maybe?
OK, so we’ve set up our study and we’ve done the statistics and, guess what? It turns out strategy X is effective! It works! Probably because the teachers participating in the study were enthusiastic about taking part – it’s likely they were pretty darn keen to make it work. (If I were to choose teachers who thought the idea was a bit silly, I’m sure they’d find it didn’t work.) The overwhelming majority of studies show the successful implementation of ideas, frameworks, teaching materials, methods, technological innovations and so on. It doesn’t seem to matter whether studies are well-funded, small-scale or synthesising analyses of other studies: almost everything studied by education researchers seems effective. Of course, there are some studies that report failures, but because of the perennial problem of under-reporting negative results they’re rare. We have acres of information on how to improve pupils’ learning, such that it seems inconceivable that learning would not improve. But almost every one of these successful studies has absolutely no impact on system-wide improvement. Why is this?
One of the problems we have in believing what’s effective in education is that so many of the most valuable findings are counter-intuitive and deeply troubling. Teachers often exhibit mimicry – copying what they see others doing – but without trying to develop the understanding of the expert teacher. Education is so saturated with values, and so contested in its aims, that it cannot really be dealt with in the same way as physical sciences. We may pay lip service to this fact, but we still assume that all learning is part of the natural world and therefore conforms to the same rules that govern the rest of nature. I’m not sure that’s true. Rather, learning is shaped by a combination of evolution, culture, history, technology and development, and as such it’s a slippery devil.
Subjecting good ideas to the rigours of science is a bit bogus. If that’s what we mean by research then I struggle to see the point. But maybe we could think differently about research. The research I consider myself to be involved in goes a little like this:
Step 1 – have a good idea
Step 2 – try it out with some students
Step 3 – think about what happened – is it worth doing again?
Step 4 – think about why it worked. Maybe dig into the reams of existing research to find out what others think. Come up with a theory which provides a post hoc justification for your good idea’s success.
Step 5 – share your idea with other teachers. Ask them to tell you what they liked and what they didn’t like.
Step 6 – improve the idea.
Step 7 – resist, with all your might, the temptation to slap numbers on to your idea in an attempt to justify why it’s good; this is cargo cult science.
And that, in my opinion, is the sort of research every teacher could try and be a little bit better at. Let’s not call it research, let’s call it being a professional. And just to be absolutely clear, this categorically does not mean research should not be done. I just think there are better ways for teachers to spend their time.
‘Step 4 – think about why it worked. Maybe dig into the reams of existing research to find out what others think. Come up with a theory which provides a post hoc justification for your good idea’s success.’
Some interesting points and I mostly agree. You do clearly value other people doing research, however, otherwise there wouldn’t be any to ‘dig into’ and you wouldn’t have the language to present your revised theory. Is it just laziness (!) not to test your ideas formally, or good sense as it keeps things moving?
I think we need a balance of researched and unresearched ideas, which is what we have, and I think what you are arguing. It’s a good job the effectiveness of systematically teaching phonics has numbers attached, otherwise a lot of people would never have tried it: and now they love it. Well, I do!
I definitely value research. I really like it when the research is scientific. I, like most teachers, am incapable of actually testing my ideas scientifically. I’d need so much training and domain-specific knowledge that it hardly seems an efficient use of my time. Someone else is, of course, welcome to apply scientific principles to my (or anyone else’s) good ideas, but I’ll not be doing it.
But it seems unwarranted to call me lazy.
Just teasing, hence the exclamation mark. I love your blogs as they make me think and most people I know don’t want to mull things over this much. You are clearly far from lazy!
As a classroom teacher I have done action research and it taught me loads but meant nothing in the real world. It does need to be done by the right people in the right way.
Interestingly, I have a theory I’d love to test with proper research (maths focused, not English: I’m a primary teacher). I don’t know how I could do that properly because it’s so different from the usual way we do things that it would be seriously frowned upon if I just went for it in the classroom. But who can you put your ideas to? Not sure.
I’d get in touch with the EEF – they’re pretty good at helping people put proposals together. And they provide funding! https://educationendowmentfoundation.org.uk/
Thank you. Could be interesting!
Hi David,
I disagreed with you yesterday and I still do now. As I am briefly procrastinating from marking my year 13 essays burning a hole in my living room floor, I thought I’d reply.
One, I think you pose heightened scientific conditions with your imagined research. Does research have to have all the conditions you describe to be better than simple trial and error – no, I don’t think so. Or perhaps we disagree on terms: research, enquiry, disciplined enquiry, scientific research, numbers/data, evidence – there are pot-holes and misunderstandings everywhere. Perhaps ‘proper research’ (I’m not exactly sure what you mean there) and the steps you describe are irreconcilable and there is no useful middle ground.
Let’s take your imagined research. What if the good ideas we try could be supported by those who are experts? What if teachers could be involved in research with such supports so that they better understand the ideas in the ‘reams of existing research’ in a real, more meaningful way? What if some of the core principles you describe – selecting groups, controls, use and analysis of data, psychological effects that attend ‘interventions’/teaching strategies – could be better understood by teachers and school leaders to guide the teaching strategies and interventions they will try regardless? I think that would improve our judgment and make us critical of messages that are given to us through research evidence. I think the notion that teachers should read what you and I may say about topics like feedback, or the cognitive science principles that underlie many of the ideas that you or I may recommend, without ever looking to apply any of the principles of disciplined research, is a bit odd.
You appear to recommend simple trial and error – with all the flaws and biases inherent in that approach. At the other end of the scale are the controlled, scientific conditions of many of the Bjork experiments. I recognise there is ample distance between the two. Can we recreate the Bjork conditions in the complex realm of the classroom – no. Can we have a much more disciplined approach to teacher enquiry and evaluation – yes, I think we can. Is it to be dismissed as ‘cargo cult’? Well, I reckon it may prove considerably better than a school leader taking a research finding off the shelf and unquestioningly asking their teachers to apply it without consideration of piloting it, analysing the support factors, or undertaking some disciplined evidence-seeking in that unique school context.
I am of the opinion that we should raise the profession to the heights of better evaluating what we do, with all the supports required, both within school and without. Do you need a double-blind experiment to have a better understanding of your practice or to evaluate the effectiveness of a teaching strategy in the classroom – no. Do you need some better controls – yes, I think you do.
One such example for me would be a control group. In every school, school leaders and teachers are choosing interventions to improve GCSE results. People are putting a vast amount of energy into extra sessions and expensive resources and the like. Would not piloting an intervention with a control group and some matched pairings, perhaps some stealth elements to try to avoid the Hawthorne effect, prove useful in deciding whether the energy and school resources should be expended in future? I think we are largely poor at properly evaluating what we do. Let’s attempt to use some of the disciplined principles to do it better. If anything that gets near the complexity of the classroom is dismissed as ‘cargo cult’ science, then what hope do we have of ever making better judgments beyond our intuitions and educated guesses?
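To make the matched-pairs point concrete: here is a minimal, purely illustrative sketch – with entirely hypothetical pupil scores, assuming Python and scipy are to hand – of how the end of such a pilot might be analysed. It shows the principle, not a recipe:

```python
# A purely illustrative sketch (hypothetical scores) of the matched-pairs pilot
# described above: pupils are paired on prior attainment, one of each pair gets
# the intervention, and end-of-pilot scores are compared within pairs.
from scipy import stats

# Hypothetical end-of-pilot scores for ten matched pairs of pupils.
intervention_scores = [54, 61, 47, 70, 58, 65, 52, 49, 73, 60]
control_scores      = [50, 59, 48, 66, 55, 60, 53, 45, 69, 57]

# Average within-pair gain for the intervention group.
mean_gain = sum(i - c for i, c in zip(intervention_scores, control_scores)) / len(control_scores)

# Paired t-test: is the within-pair difference bigger than chance would suggest?
t_stat, p_value = stats.ttest_rel(intervention_scores, control_scores)

print(f"Mean within-pair gain: {mean_gain:.1f} marks")
print(f"Paired t-test: t = {t_stat:.2f}, p = {p_value:.3f}")
# A small p-value suggests the difference is unlikely to be chance alone, but it
# says nothing about confounds like the Hawthorne effect or teacher enthusiasm.
```

Even that small amount of discipline – pairing on prior attainment, a pre-agreed outcome measure, a simple significance check – is a step up from eyeballing the results after the fact.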
Subjecting our ideas to the rigours of good evidential principles is surely better than hit-and-hope? I think it will give us a better bet than the steps you describe. Does every teacher have the time or support to do more all the time – of course not – but we should aim for better. By being more active agents in the process – not just receiving pronouncements from the leaders of the Lab – we will better understand the principles of good learning (counterintuitive and all) and have a hope of changing our habits for the better. I happen to think that if teachers have no involvement, however small, then we can write about the evidence as much as we like but little will ever change in terms of what happens in the classroom.
It seems rather counterintuitive to me that we would accept the principles of Robert Bjork, for example, without ever trying them out in the classroom with at least some of the discipline that would be applied in the original conditions. The research steps you describe happen every day – I undertake them in my classroom too, and will do after some training on Monday afternoon – but I don’t think that means I cannot have more disciplined approaches to seeking out evidence of my effectiveness.
Alex
Thanks for such a considered response. I think it’s fine for us to disagree – the world needs more than one opinion. As long as we all understand that they’re just opinions.
But just to be clear, in my view teachers lack the time and expertise to do ‘proper’ research. For that matter, so do a lot of education researchers! Training someone to be able to do all you say is maybe not the most profitable way to expend limited resources. And all it gets us is some very dubious stuff which cannot seriously be considered evidence. That’s not to say no one should do it. And it’s not to say teachers shouldn’t do it. It’s just a very costly – and largely wasteful – enterprise.
Thoughtful, critical analysis is an honourable approach. Maybe, rather than being so quick to say, ‘the research shows…’ we might be better to formulate our thinking with ‘analysis has concluded…’? Of course, we would still have to contend with just as much nonsense and dogma, but we’d waste a lot less cash.
Finally, submitting my manuscript to Bjork was pretty nerve-wracking. I was sure he’d pick holes in my analysis. But he didn’t. And that makes me feel somewhat vindicated.
I think considering school leaders, as well as classroom teachers, is important here too. We conduct ‘interventions’ and undertake policies daily. I think engagement with research, including disciplined evaluative thinking about our methods, and support for it are essential.
I haven’t written on Bjork as comprehensively as you – and I can imagine he sees the practical applications you describe as being powerful – but I’d still want to undertake research on the testing effect etc. and have it evaluated in classroom contexts, whether there are flaws or not. I hope, in future, to do just that.
I recognise the premium on teacher time, but I envision a future where that is reconciled as part of what we do – from ITT onwards. Perhaps it is blind, ignorant hope!
In terms of the wasted money, I reckon we can count in the billions the waste on untested decisions (pilot proving a dirty word for many!) against the millions spent on doing independently evaluated research. I am sure you have school experience of wasted money where better evidence may have helped!
Of course, our different opinions and the disagreement in this case are fine. Indeed, having my opinions and beliefs challenged is pretty much essential. Without such challenge, hubris and folly lie in wait!
Now, I MUST get marking!
Best,
Alex
You know there have been extensive classroom trials of the testing effect, right?
Yup – but I reckon English teachers want to see that work in English classrooms. Those Yanks shouldn’t have all the fun.
Well, who am I to stand in the way of fun! I can see it now: “Research – the teachers absolutely love it!” 😉
I think we could do with replicating the findings of cog. sci. in an English context more fully if we want to convince people to change their practice (especially with the more counterintuitive stuff).
As ever, a thought-provoking blog entry! I am assuming you don’t have much time for qualitative (qual) research, although I think many serious researchers would say that “qual” yields richer, more nuanced and possibly more valid findings than the quantitative approach, which often offers a very narrow statistical viewpoint – something I think you suggest above. Coming up with a decent, clear methodology and implementing a good piece of small-scale qualitative research (which is what I “attempted” for my PhD educational research) might be a more valid way of measuring the success of something like “Slow Writing”. I recommend Hammersley, who is good:
http://oro.open.ac.uk/36138/
and Denzin: http://www.amazon.co.uk/SAGE-Handbook-Qualitative-Research-Handbooks/dp/1412974178
No, you assume wrongly 😉 I love qualitative research. I’m a huge fan of science. I’m an avid reader of psychological research which can, I think, give us meaningful, measurable outcomes gleaned from well-controlled test conditions which we can use to predict what should happen. I’m just sceptical that genuine science can be done by teachers in classrooms.
I am a Scientist, and, though I rarely generate data anymore, I do have to analyse research and opinions/analysis of that research all the time, and work out how it can/should be applied. And it’s very, very tough. There’s almost always not enough data, and almost always a couple of ‘outliers’ in the data. And this is the physical sciences. Problem is, my area applies research into the real world, and the real world is complex. I’m seeking to take specific findings and generalise them. Difficult, and fraught with uncertainty. Research in the education sphere will be even more difficult. More variables. More real world. More scenarios. More uncertainty.
David is right, imo, with respect to how far research can get the edu-sphere. Schools are too complex for broad theories to be robustly proven. On the other hand, Alex is right that useful research can be carried out…but, the research question must be very carefully drawn, and the success criteria very narrowly set. All research should be carried out with a sense of scepticism. Vague questions like ‘Students learn better if X’ should be avoided. What is the intervention/process/approach designed to actually achieve? Is it measurable?
Research (sceptically) augmented education, if you will.
I’m more than happy to agree with Chem. Bravo sir!
Come on, we can’t all end up agreeing. That is not how Twitter works!
Step 7 – it’s the tough one – without it comes the anxiety of ‘why should people listen to me’.
The strength for me here is that we all come up with ‘common sense’ hypotheses about approaches that might be an excellent way forward in our classrooms, but which don’t seem to feature in the literature.
I agree that the ordinary teacher can’t manage to turn their classroom into a highly controlled double-blind cognitive psychology experiment without an unusual amount of expertise, time and effort.
Therefore, if we’re not going to abandon our (possibly brilliant) context-specific hunch because ‘it’s not been proven out there’, then we should have a reflectively considered way of giving our idea a go and cautiously watching where it leads.
What’s wrong with that…?
It is not unethical to have a control group, as no-one is being denied anything that is ‘known’ to be beneficial. The control group gets current best practice and the test group gets something that might be better, as good, or maybe worse. This is a widespread misconception in many Edu circles. How does the Edu profession think ethical RCTs are done in medicine?
In a recent post I seem to remember you arguing for a Darwinian pruning of bad ideas. So the problem is we don’t know if the idea is a bad idea. That’s where proper research is needed, where Edu researchers try to disprove, not prove, their hypotheses. Sceptic–proponent collaborations get round some of the false positives you describe as excuses for not researching whether your ideas are good, but they can be tricky. Here is a simple acid test: if you are not prepared to properly test your hypothesis to destruction, and see if it still stands afterwards, I’m not prepared to take it seriously. Don’t take it personally (I don’t know what your idea is); I apply this test to all new ideas that are proposed to me and I expect the same treatment from my peers. Karl Popper would approve.
I understand how RCTs work in medicine and about the ethical arguments – I did say, “some would argue…”
I’m happy for you not to take my ideas seriously – they’re just ideas. I’m not trying to prove or sell anything and what I’m proposing is logically plausible. I identified a problem: children tend to default to boring writing, and then set out to see a) why and b) how I might make it better.
The reason why is that children get a lot of practice in school at writing badly. Most teachers ask students to write in order to show their understanding of subject content but are largely uninterested in the quality of the writing.
My hypothesis is that if some emphasis is placed on how as well as what, children’s writing might improve. This is both testable and falsifiable, but only on a purely subjective scale. Maybe I could commit time to designing a scale to measure improvements in writing – or I could just use existing GCSE criteria (which I did) – but these are obviously still subjective because something like ‘quality of writing’ doesn’t readily lend itself to data analysis. So instead I rely on my subjective opinion and see that, yes – students’ writing has improved.
I then think, I wonder why? So I start thinking about scaffolding and metacognition and develop a theory which explains the phenomenon. I then tell others and invite them to see if it helps their students’ writing improve. And it seems to. Would doing some calculations and drawing some graphs really help? (That’s a genuine question by the way.)
I appreciate you said ‘some would argue’, but the wording had an element of support. I would have said ‘some erroneously argue’.
I commented after thinking about what you said in the recent post about how crap ideas hang around, and I had Hattie’s (and a possibly misquoted Dylan Wiliam’s?) views ringing in my ear about how teachers shouldn’t try to do proper edu research. You are sort of supporting that point in a way.
No, drawing graphs and statistics wouldn’t necessarily help, and that’s not what I’m getting at. Qualitative and/or subjective will do. You know the expected endpoint measure of success and can relay that. Doing your experiment from the point of trying to disprove your hypothesis might help. Is the intervention the only variable? Can something else be causing the observed effect – some other variable, a bias (you, maybe)? Once other confounders are excluded, and you still have an effect, you might have something. However, as you allude to, teachers (and student cohorts) are great experimental variables in teaching. Not like my simple and reproducible analytical laboratory instrumentation. Maybe that’s why I get so grumpy about edu research.
Most ideas that teachers want to put into practice will never be able to be tested to the extreme – they are personalised to how that teacher does it in their own situation.
Better dissemination of findings which HAVE been established by full-blown research, plus better training of teachers in understanding research principles, will allow individuals to better evaluate – in a purely unique situation – whether what they are doing has real value or not.
I could accuse you of being likely to miss out on the educationally best thing since sliced bread with your acid test – but actually I agree – life’s too short – we all need a way of simply evaluating what others tell us.
Nevertheless – I can still take some certainty regarding what works for ME in MY situation using this approach, as the mainstream will never be able to tell me for sure…
A fair point and well put. I suppose that is the relative joy of blogging vs trying to publish the next best thing in peer reviewed journals. You can have an idea, everyone can have a pop at it, and you get opinions in the form of constructive feedback. You can discount the ‘obviously wrong’ and have a go at reproducing something that seems plausible.
There you go
I think the main thing that the kind of research you are proposing needs is honesty!! Too many in education espouse the next best thing, no one knows how (or wants) to think about evaluating it, and voilà – a fad is born!! I don’t blame you for not wanting to take it further; there are plenty of researchers who could, and this is where the link between academia and schools could actually work.
My take on this is that there are two levels of research: primary and applied. So in medicine, testing a drug treatment is applied research, but it is normally founded on a solid basis of primary research-based knowledge about human physiology and biochemistry. This is quite different from a doctor having a bright idea based on their “philosophy” of medicine, and then believing it “worked” due to confirmation bias (as was the norm 150 years ago).
So it seems to me that if a teaching method is based on a solid understanding of the full range of well-replicated primary research into cognition and learning, then it is unlikely to go too far wrong.
A fascinating post, though I would like to make the following observations:
1 Rather than research – you appear to be advocating a form of evidence-informed practice
2 The steps you propose seem eminently sensible, though are full of assumptions and potential biases
3 How do you know you have had a good idea? Good for whom?
4 How do you know whether an idea is worth doing again – how do you distinguish between merit and worth?
5 Failures can often generate more learning than success – indeed, a source of cognitive bias comes from only looking at the successes
6 Sharing with other teachers – how do you ensure that there are not social pressures to agree that the idea is a good one?
7 There is little mention of the needs of learners
Personally, I believe the approach of Timperley, Kaser and Halbert and their spiral of inquiry is a far more useful model, which consists of the following steps:
1 Scan what’s going on for our learners
2 Focus on where our energies will potentially make the most difference
3 Develop and check out your hunches of what might be a good idea
4 How and where can we learn more about what to do?
5 What can we do differently to make enough of a difference?
6 Have we made ‘enough’ of a difference?
Although neither model is perfect, at least the ‘spiral of inquiry’ puts the needs of the learners first and acknowledges the role of personal biases.
I trust you find these comments useful.