One of the barriers to using the curriculum as a progression model is that there is too little understanding of what this might mean. It sounds great but a bit mysterious. I’ve spoken to a number of people who are happy to agree that the curriculum provides a map of the quality of education a school provides, and who even approvingly use the phrase ‘curriculum as a progression model’, yet nevertheless continue to attempt to measure progress using ‘age-related expectations’ or some other meaningless confection.
Let’s deal first with why age-related expectations are unhelpful. To begin with, they are guesswork. We look at what some children can do at a particular age and then label this as an expectation for what all children should achieve. These expectations are then given levels or descriptors. Children are then described as ‘working towards,’ ‘working at,’ or ‘working above’ these standards. Alternatively they might be given labels such as ’emerging,’ ‘meeting,’ or ‘mastering.’ This might be a convenient way to compare children – it is easy to see that some appear to be making more progress than others – but it’s a poor way to understand what progress an individual student is making in learning a specified body of knowledge.
Second, an age-related expectation provides little sense of why it is or isn’t being met. Teachers assess students’ work in an effort to fit what they have produced to the levels or descriptors they’re working with. It’s perfectly possible for a school to decide that a child is ‘working towards’ a particular expectation without anyone being any the wiser about what, specifically, the child needs to do in order to meet our expectations of what similarly aged children are able to do. There’s often just a general sense that some children are ‘less able’ than others.
Over the years I’ve come to understand that almost all the assessments conducted in schools do not actually test ability. Instead, they test knowledge. This is more obvious for some tests than for others. But consider reading comprehension tests: children are given an unseen passage of text and asked to answer a series of questions designed to reveal how good their understanding of that text is. Surely this is a reasonable test of reading ability? Well, imagine two children turn up at your school on Monday with no prior attainment data. Both are given a reading comprehension test and one scores really highly whilst the other scores poorly.
It would seem reasonable to conclude that the child with the higher score is more able, but let’s flesh out these fictitious children. Let’s call the high scorer Jemima. She was born to parents who place great value on reading and who have read to her every night of her life. When she started school she moved quickly from learning to read to reading to learn. If asked she would say she ‘loves reading.’ The other child, let’s call him Jarred, has been less fortunate. His mum is raising him on her own and, although she’d like to read to her son more often, she is often exhausted from working two jobs. Further, when Jarred started school he was suffering from undiagnosed glue ear (as do an estimated 80% of children between the ages of 4 and 10) and therefore struggled to map spellings to combinations of sounds when being taught phonics. Because he struggled to do something most of his peers found simple, he learnt to avoid reading wherever possible. If asked he’d admit that he ‘hates reading’.
Is Jarred less able than Jemima? In practical terms, yes he is, but that’s because he knows less. To say that she is ‘mastering’ and he is ’emerging’ tells us nothing of use. All too often, the Jemimas of this world are put into ‘more able’ groups and given more challenging curriculum material at a faster rate than the Jarreds, ensuring that the gap between them widens over time. But, if the curriculum is our progression model, instead of fooling ourselves that we are assessing ability, we can try to directly assess what each student knows. Such a test would make it clear that Jarred doesn’t know – at least, not with sufficient fluency – the phoneme-grapheme correspondences that make up the ability to decode English. Once we understand that, steps can be taken to fill the gap.
Using the curriculum as a progression model simply means that we make judgements of progress based on how much of the curriculum a child has learned. The more carefully we have specified what we intend to teach, the more easily we can assess whether they’ve learned it. Unlike an age-related expectation (which is just something a child of a particular age is assumed to be able to do, regardless of what they’ve been taught), having curriculum-related expectations helps us to specify, teach and assess the knowledge we expect children to acquire. It becomes reasonable to expect children to have met these expectations because they are – or should be – directly connected to what has been taught. If many students have failed to meet our expectations, we should assume that the fault lies with either the curriculum or its teaching. If a few students have failed to meet some of our expectations, we can assume that they might need extra support and begin the process of establishing what knowledge they are missing.
This is the essence of what it means to use the curriculum as a progression model. If you’re using age-related expectations, you’re doing something else.
I’m currently looking at our faculty (Computing & Business) development plan and trying to “Ensure that the progression model is a coherent journey for students in Key Stage 3”. I’m still stuck on the problem of how you measure whether the students have learnt the curriculum or not. Many students who demonstrate understanding in their verbal responses fail to express that knowledge when giving a written response to an exam question.
I find the mantra, ‘don’t practise until you can do it, practise until you can’t not do it’, very helpful. If you design an assessment to sample whether students have learned the curriculum, inevitably some will do better than others. I think it’s OK to report this as a percentage.
Thank you David. Your blogs are really helpful.
I found that message on another article of yours…
https://learningspy.co.uk/assessment/replacing-grades-with-curriculum-related-expectations/
“The message should always be that it doesn’t matter if you know/can do these things today, but that you can do them later. And, don’t stop asking when students get the answer right, keep asking until they can no longer get it wrong.”
So I’m going to try low stakes quizzes based on facts building to application of facts in a new context.