The effects of feedback are more complex than we often realise. While expertise and mastery are unlikely to develop without feedback, it’s certainly not true to say that giving feedback results in expertise and mastery. There are few teachers who do not prioritise giving feedback, and yet not all teachers’ feedback is equally effective.
My understanding of the effects of feedback has grown as I’ve come to accept and internalise the profound differences between ‘performance’ and ‘learning’. If you’re not clear on these, I’ve summarised them here.
Hattie and Timperley point out that, “Feedback is one of the most powerful influences on learning and achievement, but this impact can be either positive or negative.” We know from Kluger & DeNisi’s meta-analysis that in 38% of the most robust studies they were able to find, giving feedback had a negative impact on outcomes. So what goes wrong?
It’s interesting to consider the view from cognitive psychology. As Soderstrom and Bjork point out, there is empirical evidence that “delaying, reducing, and summarizing feedback can be better for long-term learning than providing immediate, trial-by-trial feedback.” Further, they point out that, “Numerous studies—some of them dating back decades—have shown that frequent and immediate feedback can, contrary to intuition, degrade learning.”
This might seem on first reading to contradict your lived experience. After all, as every teacher knows, if you give students feedback on how to improve their tennis backhand, essay writing or the process by which to solve quadratic equations, they will then make these improvements. There is no doubt whatsoever that giving feedback will improve students’ performance, but sadly this does not mean that these improvements will be retained or transferred. In fact, there’s compelling evidence that giving students cues and prompts to improve performance in the short term actually reduces the likelihood of retention and transfer.
In the past I’ve used the analogy of navigation to explain this. Using a map is effortful and it’s easier to memorise routes than to have to map read your way to every destination, especially if you intend to go there more than once. A SatNav, on the other hand, is the perfect machine for giving feedback: its GPS knows exactly where you are, you tell it exactly where you want to go and it provides immediate, trial-by-trial feedback on your progress. If you make a mistake it adapts and provides new instructions to compensate for the error. Navigation becomes effortless and memorising routes is hardly worth the trouble.
When we compare this to the way feedback is given in schools, it’s no very great stretch to see how students might become dependent on their teachers for feedback. If teachers give too much feedback too quickly and don’t encourage their students to struggle, it’s hardly surprising that students would avoid taking the trouble to memorise procedures and processes.
So, does this mean that the only feedback we should give is of the map reading variety? To answer this question we need to understand that feedback has two potential effects: it can promote learning and it can also promote confidence. The trouble is, these effects are often at odds. In order to promote learning, feedback should induce struggle and be designed to get students to think hard about subject content, but to promote confidence it needs to be designed to encode success and give students the belief that they can successfully tackle a problem. If there is too much struggle involved in attempting a task, we may end up encoding failure, with the result that students might believe that they ‘can’t do maths’ or that they’re ‘crap at French’.
My suggestion, therefore, is to adapt the type of feedback we give depending on where students are in their studies. If they’re at the beginning of a course, they will lack the knowledge to perform a task successfully without carefully scaffolded feedback. Being shown how to perform well and being given ‘SatNav feedback’ will help motivate them to see that they can be successful. However, such feedback is unlikely to promote learning. Therefore, as time goes by and students become increasingly confident, teachers ought to reduce the amount of feedback they give and raise their expectations of how much struggle students can reasonably cope with. At this stage, the most effective kind of feedback is of the ‘map reading’ variety. Having to struggle helps students recognise that it’s worth the effort to memorise how to solve specific types of problems and to internalise certain procedures. Once these things have been internalised, students are no longer dependent on teachers’ feedback and performance becomes increasingly effortless. By the time the end of a course approaches, there should be little need for teachers to give feedback at all, as students ought to have learned everything they need to be successful.
This, then, is my suggestion for a feedback continuum. Lots of SatNav feedback at the beginning to promote confidence and encode success. Then we should increasingly reduce, delay and summarise feedback to promote the retention and transfer of the concepts and procedures we need students to master. The more effortful it is to follow feedback, the greater the likelihood that students will find ways to make do without it. Finally, once students have internalised what needs to be learned, teachers should see giving feedback as a last resort and as evidence that teaching has been ineffective.
My feedback:
+You have described almost perfectly what happens in a series of Reading Recovery lessons as a child moves along a continuum from ‘not known at all’ to ‘known in all circumstances’. It really works, doesn’t it?!
– and + Only 4 SPAG errors today.
Well done. Gold star. Please aim for Platinum next time.
The effectiveness of ‘Reading Recovery’ is a matter for debate…
You don’t say?
Well, I left out the whole language mumbo jumbo, but thanks all the same.
While I think it is okay to start your argument with research that is over 20 years old, new research showing whether this is still the case is really needed to convince the reader that it is still valid. Surely there have been changes in how we teach over this time. I know my teaching has changed and, in discussions with my peers, I know this to be the case.
There may have been changes in teaching (many nugatory) but it’s curious to think the way people learn has changed over time. Is there any evidence you can offer in support of this idea? Of any vintage?
I’m not entirely sure which research I cited offends you. I began with Soderstrom & Bjork’s 2013 review, then Hattie & Timperley’s 2007 paper on feedback. I then referred to Kluger & DeNisi’s 1996 meta-analysis of feedback – is this the paper you take issue with? If so, you need to do better than complain it’s 20 years old. It is still probably the most rigorous and comprehensive meta-analysis ever conducted in the sphere of education and, while there are of course more recent studies, it has never been bettered or refuted. Later studies have only built on this seminal piece of work and it continues to be widely recognised as the gold standard in research into feedback.
If you can offer anything that contradicts Kluger & DeNisi’s findings then I’d be very interested in reading it. In the meantime, you might like to make your way through these studies: https://educationendowmentfoundation.org.uk/public/files/Toolkit/Technical_Appendix/EEF_Feedback_Technical_Appendix.pdf
Best of luck.
Not offended by any of the research – thanks for the link and I will endeavour to make sense of it all. Unfortunately, I am unable to offer anything past the anecdote option, which is a poor, poor cousin of evidence.
I was referring to the Kluger & DeNisi research.
As a staff, we are being flooded with the need to further develop our use of formative assessment (Dylan Wiliam), without any critical analysis of this approach.
Reducing the amount and type of feedback at the right time makes perfect sense. Therein lies the value of knowing your students.
Thanks
“We know from Kluger & DeNisi’s meta analysis that in 38% of the most robust studies they were able to find, teachers giving feedback to students had a negative impact on outcomes.”
Is that what the research actually states? I couldn’t see it.
Here you go https://www.tamu.edu/faculty/payne/PA/Kluger%20&%20DeNisi%201996.pdf
I have seen the paper, but where does it state/claim: “that in 38% of the most robust studies they were able to find, teachers giving feedback to students had a negative impact on outcomes”?
My understanding is that Kluger and DeNisi’s research involved many studies not related to schools, so to draw your numbers from that isn’t quite right.
Warren is correct. Kluger and DeNisi’s review covered all feedback studies, and most of them were laboratory-based, so applying a figure like 38% to feedback in schools would not be warranted. The most important thing about Kluger and DeNisi’s review is that, on the final page, they say that we should stop asking whether feedback works or not. It might be successful in the short term but lower achievement over the long term. As they say, and as I keep repeating to people, the important thing is what students do with the feedback. And, again as Warren notes, that’s all down to relationships.
Excellent, thank you for the clarification Dylan.