I wrote earlier in the week about why, despite its limitations, research is better than a hunch. Since then, I’ve been reading Daniel Willingham’s article on Real Clear Education; he says that it’s not that people are stupid but that science is hard. He refers to the Nobel Prize-winning physicist Carl Wieman, whose interest in science education came from many years of working closely with physics undergraduates and observing that “their success in physics courses was such a poor predictor of a student’s ultimate success as a physicist.” In other words, performance was not a useful indication of learning.
Wieman argues that rigorous education research is not so different from ‘hard’ science as some might suggest. Good science has the power to make useful predictions; if research can be used to inform our actions then it is useful. It’s unnecessary to control and predict exactly how every student in every context will behave or learn, just as a physicist has no need to control or predict how every single atom will behave in an experiment. All that’s necessary is that we can predict an outcome that is both meaningful and measurable.
This tells us that the insights of cognitive science, gleaned over more than a century and predicated on well-designed, repeatable tests that build on prior research and produce broadly consensual, meaningful and measurable outcomes, should not be dismissed as unlikely to work in the classroom.
Of course a specific intervention may not be effective with a particular student, but that doesn’t mean it will not be effective in the majority of cases. As an example, I conducted a mini investigation into the effects of praise on feedback last year. I’ll confess that I didn’t put any effort into designing a fair test; I just wanted to see what would happen if I stopped praising my students and concentrated instead on giving them instructional and corrective feedback. I explained to them what I was doing and why, and after a six-week period I asked them to evaluate how they thought the experiment had gone. 29 out of 30 students said they had found it easier to act on feedback and felt that they had improved as a result. One student said that she had struggled with not being praised for ‘what went well’ and had failed to make progress as a result. My conclusions? I would continue not giving praise to the majority of the class, but the student who felt she needed praise would get it.
Now I’m not claiming my ‘research’ has any value as science: it doesn’t. But I decided to make this change after reading some research which predicted that my students might find praise counter-productive. The research I’d read was useful because it allowed me to test a theory that was underpinned by empirical evidence. I’ll allow (as Carol Webb has pointed out in the comments below) that I’m as susceptible to ‘observer effects’ as anyone else and that my students may well have been experiencing the Hawthorne Effect. Maybe so, but even if my trial hadn’t ‘worked’, it still allowed me to make a useful prediction. Obviously, if I’d found the prediction to be ineffective for my particular students, that would have been a useful finding too.
So, yes: applied sciences are different from ‘natural’ sciences, and education research is, as Willingham puts it, “saturated with values”, but any research that offers meaningful and measurable outcomes is worth at least considering.
But does this trump the intuition of an experienced and thoughtful practitioner? Yes, I think it does. We don’t know what we don’t know, and without research to make these predictions we will inevitably rely on what we’ve always done. We’re very good at jumping to conclusions about what we feel, intuitively, must be effective. But as Daniel Simons and Christopher Chabris have shown in The Invisible Gorilla, our intuition is not to be trusted.
As an example, I found this on Richard Wiseman’s blog yesterday:
The ‘black’ pieces are an identical shade and colour to the ‘white’ pieces. I know, right? That obviously can’t be true. I had to print it out and line up the pieces before I could accept it. Our brains refuse to let us accept the evidence of our eyes. We cannot trust our intuition because it is demonstrably wrong. And if it’s wrong about little things, maybe it’s wrong about some of the bigger ones too?
Related posts
The Cult of Outstanding
Everything we’ve been told about teaching is wrong and what to do about it!
Testing & assessment: have we been doing the right things for the wrong reasons?
Hi David, I liked your experimental approach. Have you heard of the “observer effect”? See http://youtu.be/DfPeprQ7oGc which puts this in wowing quantum terms, and then compare with the Hawthorne studies. Maybe you’re familiar with that already too? See http://en.m.wikipedia.org/wiki/Hawthorne_effect for more fascinating info :-)))
Thanks Carol – yes I’m familiar with these issues; do you think I collapsed the wave pattern of my students’ normal behaviour? And does it matter? Have thought in the past that it might be interesting to subject classes to constant experimentation to try to harness some sort of permanent Hawthorne Effect. But that’s crazy talk, right?
Actually… That sounds pretty damn good!!!
Thanks Carol for posting the link to the double-slit experiment. I find it fascinating, as explained in the clip, that the single electron goes through both slits, through neither slit, through just one slit and also through just the other slit, yet when the electron is under observation its behaviour changes significantly. I wonder if quantum terms could be used to help explain the outcomes of good educational research: classroom practice sometimes works for an individual, for a different individual, for the whole group and also for nobody in the group, and the moment we step in to try to observe what is happening we change the dynamics considerably!
I’d love to think so: quantum learning? 🙂
Perhaps viewing learning from a quantum perspective might help us understand more about why different teachers using differing approaches can achieve similar results, and more about why a teacher’s approach in the classroom can be highly successful one year but be nowhere near as successful the following year with similar students. The reality of teaching and learning really is full of conundrums and contradictions!
Thanks for another thought-provoking piece, David. Dare I ask if it was in any way connected with my post yesterday which might have been construed as saying the opposite (especially as I commented on your previous piece)?
It was helpful of you to elaborate on your interpretation of research, and in fact I think it is very similar to things I do all the time – but I suspect that this is not the only thing that people understand from the word.
What is the difference between this kind of rather informal research (conducted over an extended period of time) and what I call ‘working awareness’ – a.k.a. focused, reflective Experience? I assume you will assimilate your experiment on praise into your working knowledge and deploy it as appropriate; that’s what I call experience! The key thing is not then to use it to lay down the law to others, for all that it may contain some grains of more universal truth.
It would indeed be madness to do things in class based entirely on spur-of-the-moment inclination, but I don’t think that this is what working from experience means. It simply refers to using one’s own collected observations and approaches that seem to have worked in the past – but always moderated through the application of moment-by-moment empathy to the specific classroom situation facing one. That surely is not such a bad way to proceed, even when it can’t be more universally evidenced?
As for trusting our intuition, surely it is just as flawed to assume it will always be wrong as to assume it will always be right? I’ve had plenty of times when intuition achieved the right outcome – but intuition is, in the end, only the projection of subconscious awareness of past experiences.
The only way forward is tentatively and reflectively, with full awareness of all the pitfalls that beset us such as confirmation bias, Hawthorne effect and more. But the one thing that gets left out in this is the empathy needed to ‘read’ a given situation to help one determine what to do when. And I don’t think that can be developed by anything other than doing the time and being in possession of the necessary sensitivities.
p.s. I spotted the gorilla unassisted first time 😉
It wasn’t in response to your post (although I did read it) – as I said, it was prompted by reading the Wieman paper. I think maybe I’ve not explained myself well: what I described was not what I would consider research. I was able to test out an idea because of some actual research. If I’d just decided not to praise kids on a hunch, that would have been different and, I think, wrong. This could lead us down a rabbit hole of asking whether the end justifies the means, but I really don’t think it does. We certainly shouldn’t celebrate teachers ‘getting lucky’.
And you’re in a majority: about 60% of people spot the gorilla. But they tend to miscount the passes 😉
Thanks for the clarification. Just to stir the pot a little, though, lots of great discoveries came about as a result of people getting lucky or following (informed) hunches! On the issue of praise, there’s a lot that can be pondered simply from first principles – not too difficult to wonder whether praise *might* be counter-productive – just needs some insight into the human condition, surely?
That, I think, is the problem: people doing stuff because it’s ‘obviously right’ and because they believe they have insight into the human condition. Better, I think, to test these insights in well-designed trials which we can then use to predict how our pupils will behave. If there isn’t a well-designed trial we can use to make these predictions then that’s a problem: we may well be left with our hunches. But if there is, surely it’s irresponsible to ignore it?
David, rather than occupy your comments board, I’ve posted a reply on my own blog here http://ijstock.wordpress.com/2014/05/09/teachers-and-the-art-of-surfing/ should you care to read it.
David, what has prompted you to change the title of the post?
I liked this one better.
Malcolm Gladwell argues the case for intuition in Blink, but I have issues with his ideas because the people who ‘blink’ and get it right often have many, many years in their profession. Their intuition is bound to be, shall we say, good within that field.
“We don’t know what we don’t know” is the best argument for research. Our common sense only suggests what we already know.
Loving the counterintuitive, the alternative, even the nonsensical at the moment. Just finishing Nassim Nicholas Taleb’s Black Swan (for the second time) and can recommend it for discussion of research and enquiry.
Yes, Black Swan is a great source of ‘obvious’ truths which no one predicted. The problem with not knowing what we don’t know is that we might not be researching the right stuff?
Antifragile is even better! I strongly recommend it for teachers, and it’s also particularly relevant to this post. Regarding your point, I think Taleb would argue that since Black Swans can never be predicted (except in hindsight), far from worrying about whether you’re researching the right stuff, you would be better off insulating yourself as much as is ever possible against any possible negative impacts.
Antifragile develops this idea to come up with the concept of a system which gets stronger when exposed to stresses (like the immune system).
The way science is full of accidental discoveries like penicillin is a tremendous example of how that works!
I think you’re right to be sceptical of Gladwell there. As you’ve realised those people are only making an intuitive judgement on a surface level whereas in reality they are probably subconsciously accessing similar situations they’ve been exposed to repeatedly over years. Expertise is far more domain specific than we openly acknowledge in secondary education.
It’s the only Gladwell book which doesn’t really work for me. I wasn’t convinced by the way his 10,000 hours argument in Outliers suggested genius for everybody, or by the way it seems to have been appropriated as meaning you have to do 10,000 hours before anything is worthwhile, but Blink just seemed a bit off the cuff. Still a fan though!
You’re right not to be convinced. The 10,000 hours thing he butchered from Ericsson’s very important paper, which you can read online here: http://graphics8.nytimes.com/images/blogs/freakonomics/pdf/DeliberatePractice(PsychologicalReview).pdf If you read Ericsson’s original (in fact, you really only have to read the title) you can see he is referring to 10,000 hours of deliberate practice, which is very different indeed. Far from recommending that 10,000 hours of practice makes you an expert, Ericsson is saying that practice won’t get you to elite performance level UNLESS it’s deliberate. Ericsson has also publicly distanced himself from Gladwell. http://www.bbc.co.uk/news/magazine-26384712
Sadly, more teachers have read Gladwell than Ericsson! If you’re interested in the power of deliberate practice, follow the link to the “Dan Plan” at the side of the BBC article above. It’s very inspiring!
Chess players are apparently an example of that – their supposed intuition relies heavily on a mental back-catalogue of previously experienced situations, and there is only one way to acquire that.
I think you’re right that ‘experience’ means the wide ability to draw on memories of prior situations and their outcomes. I don’t see why it is a dirty word – apart from the fact that you can’t acquire it simply by stamping your foot or as easily as acquiring a fancy job title – and it also presents a good case for the value of internalised knowledge!
Agree on all fronts. The chess research is pretty well known in a variety of settings. In terms of domain specificity, chess masters are no stronger than others when given situations to analyse outside their area of expertise, and in terms of the power of deliberate practice, the Polgar sisters are covered in the Ericsson article I referenced in a previous post.
One of the issues that arises out of not knowing what we don’t know is that we sometimes believe what we know because we don’t know otherwise. Good educational research has the potential to introduce us to new insights that prompt us to question our understandings and enable us to distinguish between rhetoric and reality. Credible educational research brings to our attention dubious claims about things which we may never query had the research not opened our minds. Many examples can be provided of instances where educational leaders and teachers have become unwittingly caught up in the latest fad, such as “learning styles”, “brain hemispherical dominance” and “generational differences”, to name just a few. High-quality educational research can help us take a firmer stance when choosing what we do in our classrooms.
What I don’t think even the best educational research is capable of doing is providing us with a clear-cut theory of teaching and learning which will reveal the essence of what works well, always or indeed most of the time. Noteworthy research in education begins with a specific focus on the issue at hand. All educational research therefore takes account of only the factors believed to be relevant to its focus, which means it is limited in what it can reveal. There will always be variables that could have been included in the design of the research, and we will never know if these would have made a significant difference to the findings. When viewed from this standpoint, the outcomes of educational research can be seen to be nothing more than provisional. The highest-quality educational research cannot produce complete and final understandings or answers to problems, and consequently, perhaps, there should be no circumstances in which we would expect it to do so.
What you’re describing in the latter half of your post are the standard issues with using any model to represent reality, the theory being the model in this case. However, what might serve your purpose of finding the essence of what works well would be to use theory and practice to develop some heuristic guidelines which work effectively in most situations for most students most of the time, and then use research to refine them over time. Pareto guidelines for teachers, if you will (80/20). I think Doug Lemov’s Teach Like a Champion has some of these heuristics in it already, such as “stronger teachers have slicker and more time-efficient starts to the lesson.” I’ve paraphrased him, but you get the idea.