Men are more apt to be mistaken in their generalizations than in their particular observations.


In a recent blog post, children’s author Michael Rosen has suggested how teachers should teach, assess and share students’ writing. He has helpfully broken his thoughts into three areas: teaching & assessment, editing, and sharing. In this post, I’m going to consider his ideas on the teaching and assessment of ‘good writing’.

Rosen points out that schools teach children to write for exams and that writing for exams is not the same thing as writing well. This is, of course, true; we teach what’s assessed, and if the wrong things are assessed it follows that the wrong things are likely to be taught.

If we use mark schemes to work out what or how to teach, we end up with cargo cult teaching and learning. The answer begins with modelling:

Good modelling requires that we share not just the content of a piece of writing but the thinking which underpins it. A few years ago I decided to take some tennis lessons to improve my game after realising I was never going to improve by watching Wimbledon every year. The coach didn’t just show me how to play, he told me how to think. We don’t learn well from watching experts perform; we need to have their performance broken down and analysed. Although I could replicate the movements required to return a serve, I had no idea what I was doing until my coach taught me to watch my opponents’ shoulders instead of the ball. I’m still not much good at tennis because I don’t practise enough, but I’m a lot better at watching Wimbledon because I have an idea about how a tennis player thinks.

Deconstructing exemplars can be very useful, but possibly the most effective way to share both thinking and outcome is to write a live model in front of a class and speak your thoughts aloud as you go.

When students see expertise in action they can mimic it. With scaffolding and plenty of practice they become increasingly expert.

Once we’ve got students to do some writing, how best should we assess it? Typically, we rely on rubrics. But rubrics only contain the superficial, the easily described and the obvious. This is probably why Rosen is affronted by the horrible soup of fronted adverbials, embedded relative clauses and noun phrases which permeates teachers’ understanding of what constitutes good writing. When we actually judge ‘real’ writing we don’t tend to apply any kind of rubric: we either like it or we don’t.

To remedy this, Rosen suggests writing ought to be assessed on a three-point scale: very good, good and not so good. This scale should, he reckons, be applied to four areas: first impressions, surprise, sustaining interest and the transformation of source material. These seem pretty reasonable criteria on which to judge writing, and, as an attempt to move away from limiting assessment rubrics, it is laudable.

But Rosen’s rubric will be as limiting as any other. Using mark schemes to assess students’ work narrows the validity of what constitutes good writing to what’s on the mark scheme. If it’s on the rubric, we credit it; if it’s not, we don’t.

Expert performance depends on huge amounts of tacit knowledge. Because it’s tacit it’s very hard to articulate – even (maybe, especially) for experts. As Michael Polanyi said, “So long as we use a certain language, all questions that we can ask will have to be formulated in it and will thereby confirm the theory of the universe which is implied in the vocabulary and structure of the language.”

In our attempts to break down what experts do, we spot superficial features of their performance and make these proxies for quality. It may well be that a good writer is able to use fronted adverbials and embedded relative clauses, but they would never set out with these as their goal. By teaching these proxies we limit both our own understanding and children’s ability.

The answer is to do away with rubrics and, instead, use the aggregated comparative judgement of experts, which allows for much greater reliability and validity in our assessments of students’ work. (See this post for details.)