If we are going to use the curriculum as a progression model, it’s useful to build in checkpoints to ensure students are meeting curriculum related expectations. So far I have written about replacing age related expectations with curriculum related expectations, and, more generally, about replacing grades with curriculum related expectations.
But how specific do these expectations have to be in order to be useful? If they’re too specific we risk generating endless tick box checklists, but if they’re too broad there’s the risk they become meaninglessly bland and tell us nothing about how students are progressing. It seems tempting to suggest there should be a perfect midpoint, a Goldilocks zone where specificity is ‘just right’, but this is far easier to say than do.
I’m going to argue a contrary position: that our curriculum related expectations should be, at once, very specific and very broad. This might seem mad, but it depends on the audience for the information. When teachers assess whether children are making progress through the curriculum, we need very specific, granular information about what it is that students can do and what they struggle with. Unless our expectations are highly specific, we’re likely to end up with lists of “I can…” statements. For instance, “I can analyse the effects of language.” This statement has the appearance of containing information, but what does it actually mean? Does it mean that a student has successfully analysed the effects of a single, straightforward example, or that they can analyse a range of sophisticated examples? In order to tell whether children are mastering the curriculum, teachers need to know whether children can successfully answer specific questions. Here are some examples of questions an English teacher might use in order to establish whether students are making progress:
– How does Dickens show that Scrooge has changed in Stave 5 of A Christmas Carol?
– How does Macbeth respond to the news that Malcolm has been named as Duncan’s heir?
– What does the phrase “wrings with wrong” suggest about the relationship in ‘Neutral Tones’?
In order to answer these questions, students have to know something about each of the texts, but they also have to apply this knowledge to make meaning. By reading (or listening to) students’ responses, teachers are able to judge how well each student is meeting curriculum related expectations. If students have met an expectation then we can be satisfied and move to the next checkpoint. If a majority of students have not met our expectation then we might infer that there is probably a problem with the quality of instruction. Maybe we need to go back to the drawing board and think through an alternative means of helping students to make progress. If a minority of students have failed to meet the expectation then we might infer that instruction is broadly acceptable, but we also need a plan to intervene with those for whom it has been insufficient. Practically speaking, we might log the difficulty and move on, or we might provide some form of appropriate intervention.
The other important audience for this highly specific information is students themselves. They need to know precisely what it is they’re struggling with. There’s no point telling students that they need to ‘analyse in more depth’ or ‘explore alternative explanations’. For the most part, if students can do these things, they will. What they need is specific information relating to the texts they’re studying. The more detailed this information is, the more likely they are to make expected progress.
So far, so good: the high degree of specificity has enabled us to make some important and nuanced decisions about our students. But what should we record and report?
At this point, specificity becomes cumbersome and pointless. Whether or not students are meeting our very specific curricular expectations, there’s little point in recording the detail for another audience. All anyone else needs to know is an answer to the binary question of whether students have met a minimum threshold for the part of the curriculum that has been taught. Here are three different levels of specificity with which this could be done:
– Understands and can comment on how Dickens achieves effects in Stave 5 of A Christmas Carol.
– Understands Stave 5 of A Christmas Carol.
– Understands A Christmas Carol.
Which of these would be most useful to a school leader trying to check which students have met expectations across all the subjects they’re studying? How much information do parents need to have a sense of whether their child is making sufficient progress? My suggestion is that the third example, “Understands A Christmas Carol”, provides more than enough information for these purposes. It’s specific enough in that it doesn’t try to make some global statement about ability and is focussed on students’ understanding of a particular area of the curriculum, but it’s broad enough not to be overwhelming. A senior leader can see at a glance whether an individual student is considered to have met this particular expectation, and a parent is provided with information related to a specific part of the course. If either of these audiences wants or needs more specific information then their best bet is to have a conversation with the teacher rather than asking for more detailed recording or reporting.
And that is my proposed solution to the specificity problem: as much as possible for teachers and students; as little as necessary for recording and reporting.