Question: How important is it for teachers to provide written feedback on students’ work?
Answer: No one knows.
This is essentially the substance of the Education Endowment Foundation’s long-awaited review on written marking.
The review begins with the following admission:
…the review found a striking disparity between the enormous amount of effort invested in marking books, and the very small number of robust studies that have been completed to date. While the evidence contains useful findings, it is simply not possible to provide definitive answers to all the questions teachers are rightly asking. [my emphasis]
But then they go and spoil it all by saying something stupid like:
Some findings do, however, emerge from the evidence that could aid school leaders and teachers aiming to create an effective, sustainable and time-efficient marking policy. These include that:
- Careless mistakes should be marked differently to errors resulting from misunderstanding. The latter may be best addressed by providing hints or questions which lead pupils to underlying principles; the former by simply marking the mistake as incorrect, without giving the right answer
- Awarding grades for every piece of work may reduce the impact of marking, particularly if pupils become preoccupied with grades at the expense of a consideration of teachers’ formative comments
- The use of targets to make marking as specific and actionable as possible is likely to increase pupil progress
- Pupils are unlikely to benefit from marking unless some time is set aside to enable pupils to consider and respond to marking
- Some forms of marking, including acknowledgement marking, are unlikely to enhance pupil progress. A mantra might be that schools should mark less in terms of the number of pieces of work marked, but mark better.
The only one of these statements that can reasonably be concluded from the flimsy research base the review’s authors unearthed is the finding that awarding grades seems to undermine the effects of written feedback. All the rest is speculation at best and unexamined, biased assumption at worst.
Let’s consider each claim in turn.
1. “Careless mistakes should be marked differently to errors resulting from misunderstanding.”
This, in and of itself, is probably correct. I find the distinction between ‘errors’ (misconceptions) and ‘mistakes’ (typos & slip-ups) pleasing. Clearly, giving detailed written feedback on something students already know is a waste of time. The problem is how to distinguish between something a student doesn’t know and something a student doesn’t do. I’ve seen reams of work in which capital letters are missing but have encountered almost no students in mainstream secondary schools who do not conceptually understand the use and purpose of a capital letter. The fact they don’t use them isn’t down to ignorance, but habit. They have practised writing without capital letters and have, consequently, become superb at it: they do it effortlessly. The advice offered in the review is that teachers should “simply mark the mistake as incorrect, without giving the right answer.” There’s just no evidence for this. The only way to undo this habit is to make it more onerous for students to continue making the same mistake than not. I’ve found it useful to refuse to mark work which students haven’t proofread: failure to spot mistakes which I know they know needs to result in some sort of consequence.
2. “Awarding grades for every piece of work may reduce the impact of marking”
Lots of people are aware of Ruth Butler’s small-scale studies demonstrating the nugatory effects of grading, but these wouldn’t count for much on their own. Much more interesting is the research conducted in Sweden by Klapp et al.
During 12 years (1969 to 1982) Swedish municipalities decided themselves whether or not to grade their students and this natural setting makes it possible to investigate how grading affected students’ subsequent achievement. This natural setting caused some students in the 6th Grade in Sweden to obtain grades while others did not. This circumstance, in combination with the fact that a longitudinal cohort study included a large sample of students both with and without grades offers an opportunity to use a quasi-experimental longitudinal design in order to investigate how grades affect students’ later achievement.
The results are still somewhat equivocal, but it seems pretty clear that although grades might be useful (or even essential) for some purposes, they do seem to undermine many children’s academic performance.
My advice would be that if you really need to grade a piece of work, don’t then undermine your efforts by also writing feedback. Conversely, if you’ve spent time writing feedback, it’s probably not a good idea to also grade the piece of work.
3. “The use of targets to make marking as specific and actionable as possible is likely to increase pupil progress”
As the report says, “Very few studies appear to focus specifically on the impact of writing targets on work.” Unfortunately, instead of simply acknowledging this deficit and moving on, the review’s authors decide to extrapolate from research on other forms of feedback to draw their conclusions. In a review of the evidence on written marking this is odd, to say the least. It’s certainly the case that findings from other areas of research suggest that further research is desirable, but how can we reasonably conclude anything beyond suggesting that setting specific targets might be a good idea? Or it might not. This is just guesswork.
4. “Pupils are unlikely to benefit from marking unless some time is set aside to enable pupils to consider and respond to marking”
This is, I think, the most controversial of the review’s assertions. The only evidence which currently exists is some surveys of whether students like responding to feedback. Apparently they do, at least in Higher Education settings. Well, so what? Students like Calypso ice pops, watching The Next Step and Snapchatting each other inappropriate pictures. What students like is hardly a qualification for making education policy. And what HE students like tells us precious little about what school students need. Again, the conclusion drawn by the review ought to have been that it might be a good idea to encourage students to respond to feedback, but equally, it might not.
5. “Some forms of marking, including acknowledgement marking, are unlikely to enhance pupil progress”
Well, maybe. It might be the case that tick’n’flick has little impact on students’ progress, but there’s a possibility that it could provide much-needed motivation. Also, teachers receiving feedback from students may actually be more important than students receiving feedback from teachers. This marks a powerful change of perspective. John Hattie says in Visible Learning, “It was only when I discovered that feedback was most powerful when it is from the students to the teachers that I started to understand it better.” When we read students’ work we take feedback from them. We find out something about what they’re thinking. We shouldn’t be deceived into thinking that this is evidence of learning, but we should see it as useful information which gives us some indication of whether our teaching is having the effects we intend. Having taken feedback from our students, we are then in a better position to fine-tune our instruction, give whole-class feedback on common errors and misconceptions, and talk to individuals about their work at quiet points in a lesson.
The only really useful finding the report has to offer is that “The quality of existing evidence focused specifically on written marking is low.” Without proper research we’re operating in the dark with guesswork and intuition. It could be that all the review’s recommendations are spot on. It could be the equivalent of encouraging teachers to use bloodletting to balance the humours in their patients. We just don’t know enough to make reliable recommendations or draw meaningful conclusions. The authors are right to point this out as both surprising and concerning, and the call for further study is welcome. The pages of speculation and guesswork are not.
Richard Farrow sums the situation up here:
This report is a living, breathing, example of why you should NEVER only read the executive summary. But that aside, the report has no evidence about anything useful (to do with written marking) and should never have been published. In fact, it could have been one paragraph saying the following: “we can’t find anything to look at so we are saying to the research community that they MUST research this. In the meantime we will not be publishing a report on this, just in case school leaders take it out of context.”
IF this report is quoted at you in your workplace to make you do something you feel is daft, say the following: “there is no evidence to back up any conclusions you draw from the report. We are still waiting for decent studies on this area of research.”
Re numbers 3 and 4, surely they go together. The effectiveness of a written target lies in how it is used formatively in subsequent tasks and lessons to aid meta-cognitive development and understanding. This report seems to make the mistake of seeing marking as a singular event rather than an ongoing learning process.
I remember one lesson, one of those that generated a few ‘light bulb’ moments for students, where the big leap of progress made wouldn’t have been reflected in the visible marking in their books at all.
I’d checked all their attempts at an analytical paragraph but written little in the form of targets as writing ‘you just don’t get it do you?’ in every book didn’t seem worth it. Instead I tried an activity that involved me rewriting the paragraph they were analysing and, for once, one of my ideas worked! I could see the pennies dropping and they all wrote much better responses the next lesson.
If anyone looked in their books they would have seen the students’ poor attempts at analysis, with a cursory red tick after it to show it had been seen by me, followed by a much improved paragraph (seemingly magical progress despite no helpful www or ebi). My ‘marking’ was invisible as the feedback was built into the next lesson.
That’s the danger of just looking in books in order to assess marking and its impact – not all the progress and learning will be there in black, white, red, green or purple.
I use books for student feedback, so I tick and flick in lessons, only adding a WWW/EBI or comment to action a change in real time. As mentioned in a previous comment, light bulb moments showing progress aren’t necessarily written ones, but the improvement in written work will be obvious. Secondly, I only grade assessment work: one piece done in class and one piece of homework each half term. I mark these thoroughly, adding feedback on a front sheet for the students to act upon in dedicated improvement time. They make the corrections first, and only AFTER this is done do they get their grades. Corrections and improvement in craft have become more embedded over time as a result. The grade is only given when all corrections are made. This has reduced the dismissal my marking got when kids only wanted to know the grade. Correcting and improving is surely what marking/feedback is for?
Hi David,
I’ve included your latest posting in my comments about the criticism of the Education Endowment Foundation. I suggest that there is enough material about people’s worries to warrant some investigation into the work of the EEF:
http://www.iferi.org/iferi_forum/viewtopic.php?f=2&t=591