The TES reports today that Professor Hattie, the crown prince of education research, isn’t much keen on teachers conducting research in their classrooms. Apparently he thinks we should leave education research in the hands of academics. Because, I assume, they know best.
Now I’m certain TES journos have rubbed this particular story vigorously on the crotch of their cricket whites, the better to produce a savage topspin in the hope of enraging the new breed of research-literate teachers, but Hattie is quoted as saying,
Researching is a particular skill. Some of us took years to gain that skill. Asking teachers to be researchers? They are not… I want to put the emphasis on teachers as evaluators of their impact. Be skilled at that. Whereas the whole research side, leave that to the academics… [Teachers] are more obsessed about how they ride a bike than whether they can ride a bike well… I don’t have any time for making teachers researchers. We have got no evidence that action researchers make any difference to the quality of teaching.
Well, that’s clear, isn’t it? We should put away our pretensions to anything other than enacting whatever Hattie and his ilk tell us works.
Although I’m not sure why he’s set his face against action research, I’d be the first to agree that teachers conducting small-scale inquiries in their classes tells us diddly squat about what might work in anyone else’s classroom. The idea that you can draw meaningful, measurable conclusions from trying out stuff yourself is of course ludicrous. But then, most classroom research conducted by academics is equally unlikely to find the Holy Grail. The idea that pouring a load of correlational confusion and bias into the meta-analytic blender will distil magic fairy dust for teachers to sprinkle on their lessons is just as fatuous. All classroom research can tell us is what worked in one particular context – dressing this up as science is like a toddler parading round in mummy’s high heels pretending to be a grown-up: cute but ridiculous.
For my money, the most useful research is distilled from the clean, cold, controlled conditions of psychology laboratories. This at least has the advantage of being subject to double blinds and other rather important scientific principles which just aren’t possible to implement in the hurly-burly of the classroom.
If teachers are discouraged from testing out what works in their particular classroom with their children, everyone is the poorer. We end up with the kind of uncritical consumption of research summaries that tells us giving feedback is ace, and then with teachers being forced to mark more and more despite it having very little in the way of positive impact on students. If instead teachers test stuff out and think hard about what students do in response, we’re massively more likely to spot that the Emperor Hattie is running round in the buff.
Unfortunately Hattie has bypassed one of the key elements of action research, namely that it encourages reflection and a deeper understanding of the teaching process. I have worked as a university researcher and I am now running action research projects in schools. While action research is by design not a scientific endeavour (although it does try to adopt some aspects of science), it has great value as a form of professional development.
Reflection on teaching has been consistently shown to be a main factor in improving teaching. The development of any expertise requires reflection in order to identify issues and seek ways to improve one’s practice.
Hattie has such a limited view that perhaps he does not understand professional learning and the development of expertise.
One issue is that action research is risky when there is a lack of scientific supervision. Results can then suggest something – a way of teaching, say – that is completely invalid (a Type I error: thinking you have found something when you haven’t). However, this does not negate the powerful, reflective, transformative process of conducting action research.
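To illustrate that risk, here is a minimal sketch – not from the original discussion, and using entirely made-up marks and thresholds – of how an unsupervised comparison of two small classes can appear to “find” an effect by chance alone, even when no real difference exists:

```python
import random
from statistics import mean

random.seed(1)

def fake_trial(n_pupils=25):
    """Simulate two classes drawn from the SAME ability distribution,
    i.e. the 'new method' has no real effect at all."""
    control = [random.gauss(50, 10) for _ in range(n_pupils)]
    intervention = [random.gauss(50, 10) for _ in range(n_pupils)]
    # A naive comparison might declare success if the intervention
    # class scores, say, 3 marks higher on average.
    return mean(intervention) - mean(control) > 3

# Run many such informal 'studies' and count the apparent successes.
trials = 10_000
false_positives = sum(fake_trial() for _ in range(trials))
print(f"Apparent 'successes' with no real effect: {false_positives / trials:.1%}")
```

With samples this small, a noticeable fraction of such informal studies will “work” purely by chance – which is exactly the Type I error being warned about.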
Action research shouldn’t guide policy and it shouldn’t be used as a replacement for fully scientific studies. What it can do is point the way for researchers; it can act as a pilot study.
I say bring on the action research; yes, recognise that it has its limitations (just as large-scale controlled studies lack real-world validity), but let’s not accept academic snobbery.
Doug
I’ve spent several years supervising students doing classroom research (and parents doing ‘home’ research), and yes, Doug, agreed. It is not only or simply about the ‘what’ – i.e. what the students find out – nor about whether it is immediately applicable to anyone else, though it may be. One key aspect of it is the development of that teacher. However, on the matter of whether it’s applicable or not… why shouldn’t it be? Many teachers find themselves in similar situations: e.g. inner-city classroom, Year 5s, over half the children EAL, etc. Why shouldn’t one teacher in that situation make a discovery that might well be applicable to the other teacher’s classroom? The two teachers concerned might be allowed (!) to talk to each other, to share their research and observations, and it might help them both.
As for controlled studies: it’s not the only criterion of usefulness. In life and society we share ideas and observations in many different ways: we read fiction and poetry, we watch films and plays. We read newspapers. We talk to each other. The controlled experiment is one method amongst many. And there are problems with it, largely due to the difficulty of holding all variables constant bar the one you’re looking at. Human beings are not inanimate objects being subjected to one intervention.
And what if you invert the colonialist model of research and encourage the researched to do the researching? So, in the case of action research in education you discuss with the pupils what it is you want to find out, and as the project develops, you invite them to alter it, adapt it and even take it over? Alternatively, you go to a class, explain you’re going to a school somewhere else and with the pupils devise a study that you think will benefit everyone.
If and when you publish this (or just circulate it, share it etc) who’s to say that this won’t be a) reproducible (one of the criteria for valid experimentation) or b) applicable elsewhere?
What’s more the method itself holds within it a critique of the colonialist model of research, which carries over into analysis of other people’s projects.
I was at the Hattie event that the TES refers to here. One of their quotes seems to me like a subtle but significant misquote. The TES says that Hattie said:
“We have got no evidence that action researchers make any difference to the quality of teaching.”
Whereas I heard:
“We have got no evidence that action research makes any difference to the quality of teaching.”
I’m confident that action researchers make a significant difference to the quality of teaching, but I’m open to the possibility that the research itself does not – i.e. the person makes a difference in their own space and their own class, but the research, as you point out, is only relevant in their context.
If this public comment has been misquoted, I wonder what has been changed in the private interview.
There is no doubt that John Hattie’s work is well worth teacher scrutiny, particularly in identifying changes to pedagogy that are likely to be effective. I’m surprised by his apparently “excluding” approach to teachers as researchers. Clearly they can’t be researchers in the full academic sense of the term, but they can and should be reflective practitioners in the Schön sense of reflecting on their practice, in their practice, preferably taking into account the findings of academic research and meta-studies of research. Such reflection might not be research in the accepted sense but it does give considerable insight and contributes significantly to teachers’ professionalism.
If there is spin going on in the TES article then I think you too are guilty of spinning. You have championed long and hard in your blog the importance of subject knowledge – are you denying that there are skills involved in being an educational researcher? We might debate what these are, but I would have thought that a higher degree which included training in research methodology would be one good indicator.
There are of course teachers in schools who have these qualifications and expertise, and I personally would want all teachers to have them, as I am a keen supporter of teaching as a Master’s-level profession which would include teachers developing these skills. However, the current preferred routes into teaching, bypassing as they do HEIs in many circumstances, do not promote the development of these skills in new teachers.
Also, as the responsibility for CPD and ITE is being transferred to the teaching schools, they have research and development as one of their “big six” – yet when one reads the guidelines from the DfE on this, there is little in the way of developing or gaining these research skills, and the new support website (researchrichschools) says that you should “start with willing volunteers and natural enthusiasts”, which is great but is hardly advocating the acquisition of qualifications in the area.
The way forward here is to use the expertise and experience of university-trained educational researchers in partnership with those in schools who are keen both to use research findings in their classrooms (what ResearchED and others are advocating) and to get involved in classroom research which has good methodology – ideally, I would argue, by doing some structured and supported Master’s- or doctoral-level research, thus blurring the “us and them” boundaries between teachers and academics.
Also, can we get away from the bipolar “positivist research good”, “all other research bad”? This is not what Goldacre said in his influential paper, and as Doug says above there is a place for action research, practitioner research, interpretative studies etc… as long as we are honest about the claims, the validity and the reliability of such studies.
Oh, I am unapologetic about spinning out my prejudices and biases. I wasn’t castigating TES, just musing as to whether they were mischief-making. I think they were 🙂
And I’m not denying the skills of being an ‘education researcher’, I’m just questioning their worth. Lots are excellent, but many more just play at science in quite the most appallingly misleading way.
Do you think this blog is in some way advocating bipolarism? I can assure you that’s not my belief or intention.
Another David, David James, the director of educational enterprises at Wellington, talks of having a research lead in schools and says that,
“Ideally this person will have a background in research, having completed a further degree in research methods. Failing that, a current member of staff should be appointed and encouraged to apply to do such a course: gaining a deep understanding of pedagogy and methodologies is fundamental to the role. The post holder should also be a teacher and one who knows how schools work, how teachers teach and how children learn.”
This is the collaboration that already does exist in goodly amounts but I would want to encourage drawing on the skills, experience, expertise and interests of both groups.
You do seem to favour a positivist approach, from my understanding of your writings and the language that you use – above you use the phrase “playing at science”. I agree that some research purports to be positivist and quantitative when it is not, but whilst there is a place for this kind of research, it is not the only valid form of research in education. There is still significant debate about the reliability and validity of positivist approaches in classroom-based research and the transferability of psychology lab studies into the field – and I speak as a physicist who is keen on and well versed in the positivist framework.
I have nothing against David’s brief – seems fair enough. I wonder if he said this to wind up @C_Hendrick? 😉
Let me summarise my views thus: positivist research is (or at least attempts to be) scientific. Other types of education research are narrative. They may be none the worse for that, but they dress up in the clothes of science and as such are, often, inherently dishonest.
Thanks for this summary – I guess we are not going to fully agree (wouldn’t the world be dull if we all agreed ;-)) – I would just want to amend your last sentence to “those narrative (interpretive) studies which do dress themselves in the clothes of science, and thus purport to be what they are not, are dishonest”.
OK – I’ll take that
I think that there is a truth that the experience of teachers is often overlooked or pushed aside in favour of what education researchers want to find and believe. It is telling that many of the problems facing us at the coalface do not seem to be the subject of research at all.
Take the whole area of nurture groups – the research that I have seen is biased, not particularly comprehensive and based on small sample sizes, and the results are not reliable. Yet the approach and belief is promoted by some respectable universities despite all of this.
I think there is a bias among education researchers toward progressive, idealistic, ‘child-centred’ notions of teaching and education, which inform the subject matter of their research.
Action research can at least throw up some interesting ideas and questions which can be further researched and are based on a reality rather than an adherence to a particular theoretical approach.
Hi, teachwell – I wish I could say that this is not true, but I agree that there are those in all fields of research who are driven by their theoretical approach and then influenced by this – it is hard to free oneself of one’s biases, especially, by definition, the subconscious ones.
I also agree that there has been a lack of partnership between the academic in the university (who may well have been a teacher but has probably been away from the ‘chalkface’ for a while, or who may never have been there at all) and those still at the chalkface, and it is for that reason that I would like to see more teachers having the chance to do structured and supported research in their classrooms.
I also think that doing a Masters or Doctorate rooted in research in your classroom, your school or a group of schools is the very best form of professional development especially if this is done as part of the school’s own reflective development.
Completely agree. Many education departments have a bad rep even in the better universities as they are seen as overly theoretical and less respected in terms of their real-life application and rigour. My other half works as a lecturer and he avoids pedagogical lectures from their education department for that precise reason!!! He’d rather talk to other lecturers about how to deal with any issues he might face. I think collaboration would benefit both!!
Absolutely spot on, David. My favourite blog of yours – nicely done. If we can take this quote as a fair representation of what he said, a number of questions arise. First – how is ‘evaluating impact’ separate from research? How are teachers supposed to evaluate the impact of their practice if they don’t inquire, or take baseline measures, or triangulate qualitative and quantitative data? Teachers developing the ability to do this in systematic ways, through small-scale research inquiry, as you say doesn’t tell us anything about other contexts. But neither do eye-wateringly expensive RCTs, or mega-meta-analyses based on dodgy stats. But I do strongly believe that small-scale teacher inquiry is the best model we have for professional development: testing the findings of the controlled psychological studies you mention, and studying their utility in the field. Also, this unending obsession with impact is alarming. I associate the word impact with car crashes; with a dull thud, followed by internal bleeding. If a teacher does action research and finds that something in their context does *not* work in the way you might assume by uncritically accepting the accepted wisdom of education research – is this somehow not deemed to be worthwhile, simply because it doesn’t cause Hattie’s barometer to stir?
I too believe teachers need to evaluate psychological studies in real classrooms – the advantage of good lab studies is that because they are properly controlled (and often less ideologically loaded) it’s possible for teachers to make meaningful, measurable predictions about what *should* happen.
Absolutely. It should go without saying that all approaches have their strengths and limitations, and thus each have a role to play. He should know this, which is why his comments seem so odd. Was it an attempt to suggest that teachers will never replace ed res perhaps – a defence against those who dismiss ed res as a Marxist blob?
In that context his comments might make some sort of sense, I suppose. But as you say, to pick on AR is weird. Unless teachers are critically engaged in an ongoing process of finding what works *for them* – questioning not only what pedagogy works within existing constraints but whether those constraints should continue to exist – we are mere technicians of the regime: just educated enough to perform, to quote the Stereophonics, but not to ponder the machine at large.
Surely David, you are having your cake and eating it:
“If instead teachers test stuff out and think hard about what students do in response, we’re massively more likely to spot that the Emperor Hattie is running round in the buff.”
It is impossible to reconcile this with: “The idea that pouring a load of correlational confusion and bias into the meta-analytic blender and distilling magic fairy dust for teachers to sprinkle on their lessons is just as fatuous.”
So the correlation that the teacher draws is more valuable than the correlation a researcher draws from hundreds of studies, using thousands of students?
No way. You may point out that there are flaws with both, but the latter has to be a far superior measure to the former. Just because it does not reach a truth that can be categorically proven does not mean that it is wrong.
After all, each of your blogs is filled with your own personal opinions. Just because they can’t be proven in a psychology lab doesn’t mean that they are all wrong.
I enjoy your blog. But you’re wrong this time.
The cake: a teacher testing a theory on his own students can draw meaningful conclusions about how these ideas impact on them: valid.
Eating it: a researcher drawing conclusions from (let’s charitably assume) well-designed studies on thousands of students about what will work in that teacher’s context: bollocks.
My opinions don’t need to be proved for me to hold them; they are my opinions and need not be held by anyone else unless they find them persuasive.
I enjoy your comments, but you’re wrong this time.
Thanks for your reply. I would be interested to know if your own experience corroborates your belief. It seems to me you are arguing that context is king. You’ve taught in a range of schools. Did you find that everything that worked in your teaching in one context simply failed to work in the next?
More likely, most things still worked, even if slightly differently. Otherwise, how could you want to write books on teaching? What would be the point if context is so powerful a factor? What has your experience been?
I am sorely tempted to write a book about how to use Hattie’s and Coe’s research to improve classroom teaching, and then ask you to write the foreword denouncing the whole premise as ridiculous.
I would love to denounce that book 🙂
No, you’re right – what I find works then tends to become part of my lexicon of successful teaching. What I like is, usually, just what I like. My observations are then subject to confirmation bias and impervious to legitimate criticism. As such it’s probably worth sharing with others as a set of maybes and coulds. The minute my (or anyone else’s) subjective experience is translated into shoulds and oughts it becomes worthless pap.
But also, what has seemed to work with one class will, sometimes, fail with another. Is that not your experience?
I’ll take you up on the denouncement one day!
Yes, I have found that the class or school can change whether things work, but this comes as a shock, as it is not the norm. This shock itself suggests confirmation bias, I feel.
I completely agree that advice is best as ‘could’ rather than ‘should’.
The problem I have with the Hattie approach is this: I teach primarily GCSE & A level mathematics and would be interested in the effect of (say) feedback. What Hattie gives me is an average effect size for feedback across all subjects and all ages. I see no reason why something that may have worked in Year 8 French or Year 4 literacy will be of any use to me.
Thanks Phil, your objections make perfect sense if you are looking at feedback about a subject. But really, that is not the best way to think about feedback. You want to feed back to students about what they are not learning, and it has to be ‘just in time’.
So in literacy, it might be: write me three sentences, each requiring a comma and two verbs. This responds to the mistakes the student is making with the comma splice. What you would take from this as a maths teacher is:
1. Focus on the student’s main problem.
2. Make them respond straight after you have marked their work.
3. Mark early so you can correct what they can’t do.
4. Make the feedback task they respond with short, and target only the problem you want them to solve.
So, these would be directly relevant research findings for you in maths (assuming 1-4 were the findings). Even if the research was based on learning basket weaving and tennis it would have just as much validity.
You might legitimately argue that there may be even better research based on maths, which may be true. But maths dominates the research anyway, because answers are so easy to measure in maths. You’ll probably find that the average is biased in favour of maths in the meta-studies as things are.
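As an aside on the point about averages: here is a minimal sketch, with entirely invented numbers (not drawn from Hattie’s data), of how pooling effect sizes across subjects can hide the very variation a subject teacher cares about:

```python
# Hypothetical per-study effect sizes (Cohen's d) for 'feedback',
# invented purely to illustrate the point about averaging.
studies = [
    {"subject": "Year 8 French",   "d": 0.9, "n": 60},
    {"subject": "Year 4 literacy", "d": 0.8, "n": 80},
    {"subject": "GCSE maths",      "d": 0.1, "n": 200},
    {"subject": "A level maths",   "d": 0.0, "n": 150},
]

# A simple sample-size-weighted average, standing in for a meta-analytic pool.
total_n = sum(s["n"] for s in studies)
pooled_d = sum(s["d"] * s["n"] for s in studies) / total_n

print(f"Pooled effect size: {pooled_d:.2f}")   # one headline number
for s in studies:
    print(f'{s["subject"]}: d = {s["d"]}')     # but maths sees almost nothing
```

The single pooled number tells a GCSE maths teacher very little about what to expect in a GCSE maths classroom, which is the objection above – though real meta-analyses weight and moderate far more carefully than this toy example.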
The Visible Learning programme, which is based on Hattie’s research and developed in collaboration with him, provides a framework and process for teachers to run ongoing impact cycles in their classrooms. The cycles focus on teachers collecting evidence of their practice around one of the VL strands, then examining that evidence with an impact partner and planning changes in practice based on that evidence. The key is for teachers to track the impact of the changes in terms of outcomes for learners. The Impact Cycles include a laser-like focus on outcomes – this is the nuanced version, and the Hattie truth on how teachers can be very focused researchers of the IMPACT of their practice. Kind of like action research on steroids!
David I really like your posts because they show a thoughtful and reflective mindset. I would ask you to look at your first paragraph and think about whether you were at your best when you wrote this.
For a moment, put aside anything else you might know about Hattie and look at the quote again. Is his argument really invalid (not right or wrong, just valid or invalid)?
Action research might be useful and reflective, but there is an argument that it is not quantitative or reliable enough to draw useful general conclusions from (which you argue to a degree yourself in this piece). He doesn’t seem to be saying that teachers are useless in research, just that we should focus on evaluation and leave the methodology to specialists, as it is a very difficult skill to learn correctly.
In summary, please ask yourself if this quote really justifies your response. And if there is other information that you are factoring in about Hattie, could you direct me to it? I am not averse to some critical analysis of his work, as it is so central to some of my beliefs.
P.S. This is my first post on any blog – how did I do?
I am planning to use this blogpost – and some of the subsequent comments – to stimulate discussion with our PGCE trainee teachers in our professional studies session this week. I’m really looking forward to hearing about their Action Research assignments (which are part & parcel of the Cambridge Uni PGCE, and can be used to “replace” the first year of a 2-year MEd) and their views generally on teacher-led research.