Over the past year or so, I’ve been doing some very informal research into students’ attitudes and opinions at some of the schools I work with on an ongoing basis.
Two years ago I wrote two posts summarising the problems with marking and suggesting an alternative way forward:
Since then I’ve been recommending that one way schools can reduce teachers’ workload is to move away from the expectation that teachers must write extended comments in response to children’s written work, and instead move to a system of giving whole-class feedback.
The best counter-argument to this suggestion comes when teachers ask: but what about motivation? Surely, by writing extended comments, we’re showing that we value the work children have done, and if we don’t do this they’ll become demotivated and start to put in less effort.
Whilst I hadn’t seen any evidence that this was true, it seemed like a good idea to find out what students at the schools I was working with actually thought, so I arranged to speak to a series of representative panels of students ranging in age from Year 2 to Year 13.
One of the things that came out of this was that students really love written comments. Almost universally they related that finding a written comment made them feel their work was valued, and the longer the comment, the more valued they felt. However, almost all the students I spoke to said that they were unlikely to read what the teacher had written, and that the longer the comment, the less likely they were to read it. Kids love comments, but they don’t read them! It seems fair to say that written feedback may be having a motivational effect without actually having any effect on children’s progress. This was a consistent finding from five different schools.
Having said that, children really appreciate their teachers spending time talking to them about their work. There wasn’t any consistency about whether they preferred face-to-face time over the proxy of time spent marking, but all were pleased to have their teachers’ attention in lessons. And, more importantly, all reported finding verbal explanations much easier to understand than written comments.
I also asked students to show me their books and talk about what else teachers do that they either liked or didn’t like. Interestingly, kids love ticks! And they really love double ticks! They don’t love them as much as written comments, but they like them almost as much. I found it fascinating that children interpret ticks – and double ticks – very differently: some said that they allowed them to see precisely what their teacher had liked, others thought they were a sign that the teacher had seen and approved of the quality of their work. When I asked which children found more helpful, ticks or comments, there was very little agreement. Some thought that the comments might be useful if they ever decided to go back and read them; others guiltily admitted that ticks were easier to understand. I should say, the only group that consistently valued comments over ticks were the A level students, but even they were conflicted.
What can we learn from all this? Well, I need to acknowledge that this process was far from scientific and doesn’t provide anything like robust evidence on which to make decisions. I would really encourage you to go through a similar process with students in your schools to see if their views are similar or different. Having said that, I think it’s safe to say that children really do find teachers marking their books motivational. I hadn’t appreciated just how much they valued the quantity of red pen in their books. (Oh, and literally none of the children I spoke to cared about pen colour!)
If I were to offer tentative advice, it would be that reducing written feedback in order to spend more time on whole-class feedback still seems like the best bet. What I’d add is that when reading children’s work in order to provide whole-class feedback, make sure you’re liberally ticking anything you like.
I’m reading Lemov’s Practice Perfect at the moment, and he talks about how the immediacy of feedback is vital to its effect. The longer the delay, the less it works. Effective strategies I have seen are mini whiteboards (we call them show-me boards) and getting the class to work silently on a paragraph answer while the teacher calls each student up in turn for personal feedback – ideally every student gets a minute or so of direct feedback in one lesson.
The problem with this idea about feedback is that it’s wrong. Immediate feedback certainly improves current performance but seems to have a negative effect on storage strength in the longer term. This review from Soderstrom & Bjork is useful: http://mrbartonmaths.com/resourcesnew/8.%20Research/Memory%20and%20Revision/Learning%20versus%20Performance%20-%20Soderstrom%20and%20Bjork.pdf
“One common assumption has been that providing feedback from an external source (i.e., augmented feedback) during an acquisition phase fosters long-term learning to the extent that that feedback is given immediately, accurately, and frequently. However, a number of studies in the motor and verbal domains have challenged this assumption. Empirical evidence suggests that delaying, reducing, and summarizing feedback can be better for long-term learning than providing immediate, trial-by-trial feedback. However, the very feedback schedules that facilitate learning can have negligible (or even detrimental) performance effects during the acquisition phase.” p. 23
Fascinating to learn. Thanks for the reference.
Surely immediate feedback is negative if it is not corrective. We don’t want a misconception embedded. Now, feedback that isn’t essential would be more of a dilemma.
A colleague and I were discussing this entry, and at the risk of stating the obvious, we agreed that you need to set aside lesson time for children to read marking to make it worthwhile (not that you can prove that’s what they use the time to do, unless feedback is required from them, which can then become the dreaded double/triple marking, but at least the expectation is there). We also agreed that our takeaway was to try double ticking!
Hi Jody – of course I agree that students are more likely to read teachers’ feedback if you set aside lesson time for them to do so. But what then? The usual assumption is that if they then correct their work they will internalise the feedback, but this is contradicted by the evidence. Instead, what seems to happen is that they improve their performance in the here and now, which reduces the likelihood that the feedback will be retained and transferred.
The other problem that routinely occurs is that students don’t understand or don’t know how to act upon teachers’ feedback and end up asking for verbal feedback on the written feedback!
I don’t have the second problem, but I’m now concerned that I’m getting children to act on my feedback in that time and it’s not going to have the positive effect I intend, so what is a better way to do it? And if you have any references to the evidence for me to follow up, I’d gladly do so.
I was internally cheering as I read this until reached “precisely what they’re teacher had liked”.
Please correct the typo so that I can share.
Thanks. Corrected.
Been rereading the Bjork article and thought about David’s response to Derek. I think there is a confusion over immediacy. When we say students need immediate feedback, we are saying that writing a comment they read next week, or giving students advice about last week’s work, is not as effective as responding within the lesson or a short period of time. This seemingly contrasts with Bjork, who argues that a pause is necessary, but in some cases this can be only a few seconds. These are very different timescales.
It seems obvious that constant feedback could introduce excessive cognitive load and potentially prevent schema formation, but allowing errors to become embedded clearly cannot be helpful. I did notice the benefits of regular testing (which is a form of feedback) and Bjork’s description of desirable difficulties (which is quite specific).
The type of knowledge may matter as well. I was reading recently how non-faded prompts seemed to help (over faded prompts) in some circumstances (maths), while those same prompts could be unhelpful for writing. This reminded me of Greg Ashman talking about worked examples (which work well in maths) and David on writing proformas (which can cause more problems than they solve). The difference might be that a writing proforma forces you to engage with it (you have to use X), while a worked example can be ignored if the student does not require it.
Going to have a longer reread of Bjork again next week.
Interesting points, Michael. I, too, have been pondering this after reading Bjork etc., and I have serious doubts about whether their findings can be applied to the more complex learning contexts we face in the classroom. As a music teacher, I think of teaching an instrument. When students start out they need instant feedback on how to hold their instrument, where to place their fingers and so on. If they do not get that constant correction, they start to develop bad habits which are very hard to fix. Guidance is definitely faded as the student becomes expert. Also, in real learning situations there is a whole series of lessons in which students are ‘trained’ in a procedure over weeks and months. This is probably very different to the experimental contexts of the research Bjork summarises.
If there are boundary conditions which define when different types of feedback can be used, then that would make using the research harder, not impossible. In my experience, this is normal when applying research.