Men are more apt to be mistaken in their generalizations than in their particular observations.
Machiavelli
In a recent blog post, the children’s author Michael Rosen suggests how teachers should teach, assess and share students’ writing. He has helpfully broken his thoughts into three areas: teaching and assessment, editing, and sharing. In this post, I’m going to consider his ideas on the teaching and assessment of ‘good writing’.
Rosen points out that schools teach children to write for exams and that writing for exams is not the same thing as writing well. This is, of course, true; we teach what’s assessed, and if the wrong things are assessed it follows that the wrong things are likely to be taught.
If we use mark schemes to work out what or how to teach, we end up with cargo cult teaching and learning. The answer begins with modelling:
Good modelling requires that we share not just the content of a piece of writing but the thinking which underpins it. A few years ago I decided to take some tennis lessons to improve my game after realising I was never going to improve by watching Wimbledon every year. The coach didn’t just show me how to play; he told me how to think. We don’t learn well from watching experts perform; we need to have their performance broken down and analysed. Although I could replicate the movements required to return a serve, I had no idea what I was doing until my coach taught me to watch my opponents’ shoulders instead of the ball. I’m still not much good at tennis because I don’t practise enough, but I’m a lot better at watching Wimbledon because I have an idea about how a tennis player thinks.
Deconstructing exemplars can be very useful, but possibly the most effective way to share both thinking and outcome is to write a live model in front of a class and speak your thoughts aloud as you go.
When students see expertise in action they can mimic it. With scaffolding and plenty of practice they become increasingly expert.
Once we’ve got students to do some writing, how best should we assess it? Typically, we rely on rubrics, but rubrics only contain the superficial, the easily described and the obvious. This is probably why Rosen is affronted by the horrible soup of fronted adverbials, embedded relative clauses, and noun phrases which permeates teachers’ understanding of what constitutes good writing. But when we actually judge ‘real’ writing we don’t tend to apply any kind of rubric: we either like it or we don’t.
To remedy this, Rosen suggests writing ought to be assessed on a three-point scale: very good, good and not so good. This scale should, he reckons, be applied to four areas: first impressions, surprise, sustaining interest and the transformation of source material. These seem pretty reasonable criteria on which to judge writing, and the attempt to move away from limiting assessment rubrics is laudable.
But Rosen’s rubric will be as limiting as any other. Using mark schemes to assess students’ work narrows the validity of what constitutes good writing to what’s on the mark scheme. If it’s on the rubric, we credit it; if it’s not, we don’t.
Expert performance depends on huge amounts of tacit knowledge. Because it’s tacit, it’s very hard to articulate – even (maybe especially) for experts. As Michael Polanyi said, “So long as we use a certain language, all questions that we can ask will have to be formulated in it and will thereby confirm the theory of the universe which is implied in the vocabulary and structure of the language.”

The answer is to do away with rubrics and, instead, use the aggregated comparative judgement of experts, which allows for much greater reliability and validity in our assessments of students’ work. (See this post for details.)
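For readers wondering how a pile of quick, holistic preferences can become a reliable scale, here is a minimal sketch of one standard way to aggregate pairwise judgements: a simple Bradley-Terry iteration. To be clear, this is my illustration, not necessarily the model used by any particular system; the script names and judgements are invented for the example.

```python
# A minimal sketch of aggregating pairwise comparative judgements into a
# quality scale using the Bradley-Terry model. Script names and judgements
# below are hypothetical.

from collections import defaultdict

# Each judgement records that a judge preferred one script over another.
judgements = [
    ("amy", "ben"), ("amy", "cal"), ("ben", "cal"),
    ("cal", "dee"), ("amy", "dee"), ("ben", "dee"),
    ("dee", "ben"),  # judges won't always agree
]

scripts = {s for pair in judgements for s in pair}
wins = defaultdict(int)          # how often each script was preferred
pair_counts = defaultdict(int)   # how often each pair was compared
for winner, loser in judgements:
    wins[winner] += 1
    pair_counts[frozenset((winner, loser))] += 1

# Bradley-Terry assumes P(i beats j) = p_i / (p_i + p_j); the loop below
# is the standard maximum-likelihood update, iterated until it settles.
strength = {s: 1.0 for s in scripts}
for _ in range(100):
    updated = {}
    for i in scripts:
        denom = sum(
            pair_counts[frozenset((i, j))] / (strength[i] + strength[j])
            for j in scripts if j != i
        )
        updated[i] = wins[i] / denom if denom else strength[i]
    total = sum(updated.values())  # normalise each round
    strength = {s: v / total for s, v in updated.items()}

# The output is a shared scale: each script's position relative to the rest.
for s in sorted(strength, key=strength.get, reverse=True):
    print(f"{s}: {strength[s]:.3f}")
```

The design point is that no judge ever articulates *why* one script beats another; the scale emerges from many fast, tacit decisions, which is exactly why this approach sidesteps the rubric problem described above.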
I’m not totally sure that modelling is the answer, and certainly not the ‘only answer’, when it comes to a child developing his or her own writing. Surely we run the same risk that the rubrics cause – that we model our expert perception/understanding of what ‘good writing’ looks like, but the child can only hope to mirror this and will not necessarily embed it. My feeling is that getting to ‘good writing’ is like a series of steps, and that children need to take each step in turn. The idea that we can help them jump ahead by teaching them grammatical constructions is clearly creating some very odd pieces of ‘good writing’, if my experiences are anything to go by. To a great extent, you will only ever get better at writing by doing it yourself, in a manner that looks an awful lot like the much maligned ‘learning through discovery’.
If you were to ask writers how they work at improving their writing, the answer would probably include finding an effective inspiration, having a very clear sense of audience, and then doing lots of playful experimentation. That’s where I would start, if it were up to me. I still don’t really ‘get’ how the aggregated thing works (I mean, I get the theory, but I don’t get the practicalities). The teacher needs to know his or her children’s writing very thoroughly, and to understand which ‘next steps’ need to be taken to improve it. Aggregating a score by outsourcing the marking to a larger group of assessors would take away the opportunity to do this when you mark a child’s books. You don’t really need to know where each child stands in relation to the other children; you need to know where each child stands in relation to him or herself, because that’s how you plan for learning.
I see you’ve commented on an early draft of my post 🙂 Modelling leads to mimicry and must be followed by scaffolding and practice.
I’m sorry you don’t get the “aggregated thing” – maybe you should see it in action? I’m not sure why you think an assessment system which allows for great reliability and validity might prevent a teacher from giving individual feedback, but just to be clear, you can most certainly do both.
And I certainly haven’t claimed that knowing “where each child stands in relation to the other children” is important, but if planning for learning is contingent on the sort of hit and hope you seem to be suggesting, it will be hugely time-consuming, unhelpful and inefficient. Much better to anticipate errors and plan recursively from the outset.
It’s not ‘hit and hope’ to have a deep knowledge of the individual child’s learning, gained via marking the work. An assessment can be as accurate as you like but it is the individual teacher who needs to know what his or her children can and can’t do, because that feeds into planning, not just into feedback.
I really think AfL *is* based on hit and hope. Inaccurate assessment helps no one – it just generates misinformation. Whose interest is that in?
And is it really the teacher that most needs to know what to do? Planning that depends on assessment is bad planning.
Planning that relies on your professional assessment of a child’s next steps is the best kind of planning there is, surely? I don’t really understand what other kind of planning there might be, unless you don’t plan to differentiate at all for different needs, amounts of knowledge and levels of ability.
Do you really think the only way to teach children with different needs or abilities is to adjust planning in light of assessment?
No, of course not. But perhaps you are assuming that by ‘planning’ I mean ‘writing a lesson plan’, and that’s not how planning for next steps works. Surely one of the main values of assessment is to understand what you need to plan to do next to ensure the children keep learning?
‘Aggregating a score by outsourcing the marking to a larger group of assessors would take away the opportunity to do this when you mark a child’s books.’
Possibly, but if you take part in judging with colleagues you have a great chance to see a large amount of work quickly, and you soon learn as a group where issues lie.
I can see that this might be useful for a group of teachers who want to identify where their teaching isn’t working as well as it might, but ‘next steps’ in writing will still tend to be fairly specific to the individual child.
Just for you Sue, I will write a post in the near future where I show how comparative judgement can be used to provide very precise feedback to individuals.
It might be as well to differentiate what is required at different stages of development. For example, primary vs secondary, laggards/strugglers vs kites. Students struggling with the mechanics might need a different emphasis/assessment/feedback from those needing to sharpen the focus of their writing?
I think if a teacher gets a kick out of assigning a score/grade to students’ writing, then the teacher just needs to express to the student why scoring/grading it will be of use, without pressuring them to say yes. If the student says ‘yes please’ then the teacher can enjoy scoring/grading that student’s work. If the student is not bothered, the teacher either needs to express the benefit to the student with more empathy, or find another way to help. I would have loved to have known as a student that real writing is about creating something you care enough about to edit, get advice on and share with others off your own bat; not as part of a course, not because a teacher wants your work in a competition… but because you need to say what you’ve said and you hope it’ll influence someone else in the positive way you imagined when you created the work.
“It may well be that a good writer is able to use fronted adverbials and embedded relative clauses, but they would never set out with this as their goal. By teaching these proxies we limit both our own understanding and children’s ability.” True-ish. It seems to me that those are the sorts of things we teach when we analyse language and we may well use them to reflect on our own writing in a constructive way. I’m not sure how useful they are at primary school, though. Having waded my way through a third of my pupils’ books, in order to make (spurious) judgements about ‘meeting’ etc., I would welcome a real focus on the development of technical accuracy.
And that comment segues neatly into Part 2 🙂
If only all English teachers could write sufficiently well so as to give students maximum benefit from the shared writing/active modelling process…
Rosen’s little rubric is all well and good but appears to rely upon the assumption that some grasp of basic grammatical structures and reasonable syntax is already in place. Unless some consideration of technical accuracy (which often requires some teaching of terminology) features in how the quality of writing is judged, we do our neediest students of literacy a grave disservice.
Chemistry poet is spot on – kites who are using adverbial sub-clauses to great effect may not require them to be explicitly taught in order to generate ‘good’ writing. A pupil at the other end of the spectrum who fails to employ any adverbial sub-clauses at all, however, may require them to be formally taught in order to raise the quality of their description. Of course, the second student is unlikely ever to possess the flair or sophistication of the first, but that’s no reason not to identify the kinds of repetitive structural flaws or omissions in their work and help them to address these areas accordingly.
Rosen’s post is fine if one presupposes that all students have the ability to become competent (if not ‘good’) writers, but reality tells us otherwise. It also ignores the fact that, for all but a tiny handful of us, writing is as much a functional, communicative tool as it is an expressive, creative one, and the functional element should not be neglected: much as I would love for all of my students to be able to produce evocative descriptive works, for many, I am pleased if I can help them produce literate emails of enquiry or job applications that are not likely to be instantly binned on account of their grammatical incoherence and comedic informality.
As for the supposed mutual-exclusivity of teaching students to ‘write well for exams’ and ‘write well’… Yawn. Perhaps Rosen hasn’t encountered many English teachers worth their salt.
[…] Rubrics are inherently opaque and rarely provide anything meaningful to pupils. In subjects such as English this is especially pronounced: mark schemes ask us to draw a distinction between such descriptions as ‘confident’ and ‘sophisticated’. One is apparently better than the other, but any difference is arbitrary. (Daisy Christodoulou calls this “adverb soup”.) Exam boards are forced to provide exemplar work for teachers to understand the process by which they arrive at their decisions. Instead of wasting students’ time with vague, unhelpful success criteria, why not spend time deconstructing exemplars and modelling the expert processes we would use to complete a task? […]