Opportunity makes a thief. – Francis Bacon
I wrote recently about the differences between marking and feedback. In brief, and contrary to popular wisdom, they are not the same thing: feedback is universally agreed to be a good bet in teachers’ efforts to improve student outcomes, whereas marking appears to be almost entirely unsupported by evidence and neglected by researchers.
Marking takes time
Although there are some who dislike the use of the term opportunity cost being applied to education, there’s no getting away from the fact that whilst we may be able to renew all sorts of resources, time is always finite. Once it’s gone, it’s gone forever. Time spent marking cannot also be spent doing something else. The cost of our decisions is measured not only in the effectiveness of what we have done, but in terms of the value of the alternative forgone.
Up until recently, it’s been assumed that the time teachers spend marking is time well spent. The assumption is that it will result in students being given feedback, which will, in turn, help them improve whatever it is they’ve been practising. (There are all sorts of flaws in this theory, many of which I describe here.) But, if marking does not necessarily lead to children receiving feedback, maybe it’s a poor investment. What other, more profitable, activity might teachers have been engaged in?
Here are a few of the reasons why we might decide to spend time marking:
- To grade and summatively assess students’ performance
- To correct students’ mistakes
- To help students to improve their current level of performance
- For teachers to receive feedback from students about how well they appear to be understanding the content being taught
- To motivate students to work harder
- Because parents like it and students have come to expect it
- To prevent students from having to struggle or think
- For accountability purposes (as a proxy for convincing managers that you are a good teacher)
Some of these are legitimate reasons for marking; some are not. I definitely think reading students’ work might be very important, but the process of making or giving marks may not be.
To that end, I had a fascinating conversation last week with Dr Chris Wheadon of No More Marking. As far as I can tell, it appears to be an exciting development and might end up saving teachers precious time to give students valuable feedback on their work. Although still in the pilot phase of development, the system asks teachers to upload essays, which are compared and placed into a rank order. The system doesn’t rely on computer programmes or complex algorithms; instead, teams of subject experts (PhD students working for the sheer love of it) read a couple of essays and decide which one they like best. Each essay is judged by a number of different experts and their subjective opinions are aggregated. There are no vague or over-complicated mark schemes to interpret, and teachers can select any scale they wish – 1–20, 1–100 – and the system will record the aggregate of the experts’ judgements accordingly. This then allows teachers and students to have meaningful discussions about why an essay has scored a particular mark, driving precise, generalisable feedback on how performance might be improved.
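To make the idea of aggregating pairwise judgements concrete, here is a minimal, purely illustrative sketch in Python. It assumes a simple win-proportion model and invented essay IDs; it is not the statistical model No More Marking actually uses (which, for instance, might fit something more sophisticated such as a Bradley–Terry model), just a way of seeing how many subjective comparisons can be turned into a rank order on a chosen scale.

```python
# Illustrative sketch only: aggregate pairwise comparative judgements into
# a ranked, scaled score. The win-proportion model and the essay IDs are
# assumptions for the example, not No More Marking's actual method.

from collections import defaultdict

def aggregate_judgements(judgements, scale_max=100):
    """judgements: list of (winner_id, loser_id) pairs, one per expert comparison."""
    wins = defaultdict(int)
    appearances = defaultdict(int)

    for winner, loser in judgements:
        wins[winner] += 1
        appearances[winner] += 1
        appearances[loser] += 1

    # Proportion of comparisons each essay won, mapped onto the chosen scale.
    scores = {
        essay: round(scale_max * wins[essay] / appearances[essay])
        for essay in appearances
    }
    # Rank order: highest-scoring essay first.
    ranking = sorted(scores, key=scores.get, reverse=True)
    return scores, ranking

# Hypothetical example: three essays, each judged several times by different experts.
example = [("A", "B"), ("A", "C"), ("B", "C"), ("A", "B"), ("C", "B")]
scores, ranking = aggregate_judgements(example)
print(scores)   # e.g. {'A': 100, 'B': 25, 'C': 33}
print(ranking)  # ['A', 'C', 'B']
```

The point of the sketch is simply that no mark scheme is involved: each judge only says which of two essays is better, and the scale (1–20, 1–100, or anything else) is applied afterwards to the aggregated results.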
Imagine it: all a busy teacher would need to do is photograph and upload their students’ essays and wait for the marked results to drop into their inbox overnight ready for analysis and debate the following lesson. Sounds almost magical, doesn’t it? And the best news is, it’s completely free! Chris is currently keen to hear from schools and teachers who would like to participate in further trials.
The other question is, how many of the bullet points above might this system cover? And does it matter if all bullets aren’t covered? I’d love to know your thoughts.
Against the reasons you offer, I have marked with * the ones I value, and with ^ the ones I think marking/the programme could tackle:
To grade and summatively assess students’ performance*^
To correct students’ mistakes* (^eventually, if not already)
To help students to improve their current level of performance*^
For teachers to receive feedback from students about how well they appear to be understanding the content being taught* (^if provides feedback)
To motivate students to work harder* (^in hitting criteria for marks)
Because parents like it and students have come to expect it^
To prevent students from having to struggle or think^ (any ineffective marking system can do this)
For accountability purposes (as a proxy for convincing managers that you are a good teacher)^
My limited understanding is that No More Marking offers:
– no more marking by the teacher, who still provides verbal feedback to students
– standardised marking through comparative marking (as I understood it, using algorithms)
– comparative marking, which is the most reliable method of assessment
I’m not clear:
– if the comparative marking/scoring is based on the class, the school’s year group, or a national sample?
– is it free ‘at the moment’ because it is a trial? [i.e. if it’s free, then you are the product]
– post-trials, what are the likely costs? If no more expensive than single marking, is that based on in-school or outsourced cost?
– how many assessments would be scored this way, or is it cost-effective for all work to be scored this way?
– is there a time-and-motion study comparing individually photographing and then analysing scored essays vs class teachers simply analysing and marking essays?
– who sets the criteria? Is it currently based on essays tailored to meet the requirements of one exam board, so that teachers and students re-tailor essays with each exam board change?
– the discussion sounds like it may result in more class time explaining how students can score marks against criteria (even in primary?); does that involve more time than currently?
– does feedback time detract from subject knowledge and practice time during class time?
My thoughts at this point:
On the information available, No More Marking offers reliable, standardised scoring, can filter out class-teacher bias, and allows the teacher and student to learn exactly how to meet the testing criteria – which addresses some of the issues.
However, if the teacher still needs to read the papers, analyse them and give verbal feedback to the student, I can’t see there being a time saving for the teacher, so the selling point is simply marking reliability, and there is then an outsourcing cost. But the teacher still engages with students’ work.
However, if the scoring comes with assessment analysis already carried out, I can see it saving the teacher time. Personally, I think this further divorces the teacher from knowing the student, and I suspect it de-skills future generations of teachers; perhaps another step along the road of virtual teaching and a nail in the coffin of actual teachers in classrooms.
Thanks.
In response to your points of clarification:
– the comparisons are based solely on the class, but it would be relatively straightforward to scale this up to compare and rank all exam scripts in, say, a national, externally marked exam. Ofqual are, apparently, investigating the possibilities.
– My understanding is that the service will continue to be free – at least, that’s what Chris Wheadon indicated.
– That would depend on the teacher or the school.
– No. But photographing and pressing send is pretty quick. Chris said it could be done at the rate of a few seconds per essay.
– There are no criteria. This is an aggregation of purely subjective comparative judgements.
– You could spend time discussing how marks are scored but I can’t see why you’d bother. Much better to explore why one essay is better than another and consider how to emulate these features.
– Again, that depends; only if you feel this is valuable.
Reading students’ work is valuable; making marks on it is, possibly, not. I understand your fears but feel they are unjustified.
Sorry if my response seems dense, but surely reading a class’ books is the best way of understanding whether they have understood a concept or area taught.
So I have a Yr11 class preparing for a Lit controlled assessment on Romeo and Juliet. I set them the task of writing a paragraph in response to a question on a particular scene. By reading their responses I get to understand where individuals are in their understanding and ability to respond analytically but also the level of understanding across the class – whether I have taught a concept well (or not) and where the next lesson needs to go. As I move around the class I can have an informed conversation as necessary about their work.
Having an independent marking service sounds wonderful, but surely this depersonalises the feedback process if you aren’t engaging with work that a student has invested their time and effort in? I know many students who would respond negatively, along the lines of “Why should I write this if you aren’t going to bother reading it?” If I have taught a particular concept or skill, my marking will be tuned to gauging the level of understanding for the individuals and the class – the service described does not sound like it would work best for marking class work.
This could, however, be a service that is useful for formal assessment of students’ work. This is particularly true with the move to terminal examinations.
An interesting read! Thank you.