I’ve recently argued that one way to ensure schools are explicitly using the curriculum as a progression model is to assess children against curriculum related expectations. Briefly, this means that if your curriculum specifies that students have been taught x, they are then assessed as to whether they have met a minimum threshold in their understanding of x. So, for instance, if I’ve taught you about, say, the different types of metrical feet and their effects, are you now able to demonstrate this knowledge? If you can then you have met a curriculum related expectation; if you cannot then you haven’t.
In my last post I wrote about the potentially harmful effects of grading. One of the dangers of grading is that students begin to associate themselves with the grade. Rather than viewing a particular piece of work as demonstrating ’emerging’ competence or ‘mastery’ of a topic, they apply these adjectives to themselves. Rather than understanding that they achieved a 5 in their latest assessment, they think of themselves as embodying the qualities of a 5. And it seems increasingly common for students to rate themselves in these ways.
However, if internal assessment is precisely aligned with the curriculum we are teaching and students are assessed as knowing or not knowing facts – as being able or not being able to demonstrate applications of knowledge – then not only will we be able to compose a clear picture of what individual students can and cannot do, we are likely to mitigate the risk that children internalise the judgements we make. It’s harder to see how not knowing enough about, say, plate tectonics, can be viewed as an inherent quality. For the vast majority of students it seems clear that were they to put the effort into learning more about a subject, then they would both know more and be able to do more with that knowledge.
One simple way to assess whether the curriculum has been learned is to include regular low-stakes quizzing in lessons. The message should always be that it doesn’t matter whether you know or can do these things today; what matters is that you can do them later. And, don’t stop asking when students get the answer right: keep asking until they can no longer get it wrong. Think of the difference between two different athletics events: the high jump and the hurdles. A hurdler has to race between hurdles and successfully leap over every one to continue the race. Any hurdles they knock over result in a penalty. The difference with the high jump is that the bar is, at first, set relatively low and is gradually raised. All competitors will probably get over the lowest bar, with progressively fewer able to make it over the bar as the competition continues. This kind of approach, exemplified by low-stakes quizzing, ensures that we’re not setting up one-off hurdles which students have to periodically leap over but are instead gradually raising the bar of the high jump.
Assessing students against curriculum related expectations might have a range of benefits.
Benefits for teachers:
- The lack of emphasis on raising students’ ranks means that the curriculum is less likely to be warped by proxies
- Less need to reify and quantify aspects of the curriculum
- Clearer sense of what has been taught effectively (if a clear majority of students have understood an element of the curriculum it can be assumed that instruction is effective)
- Precise knowledge of what each student has and has not learned
- Clearer sense of what needs to be retaught
- Greater clarity of which students require intervention in which areas
- No need to assign individual students global grades of competence
Benefits to students:
- Clearer sense of what they have and have not learned
- Clearer sense of what aspects of the curriculum need further attention
- Less risk that students who have not met a curriculum related expectation will view this as an inherent quality
- Less risk of seeking the external validation of a grade
I should be clear at this point that I’m at an early stage in my thinking about all this. I’m open to the idea that there may be unforeseen pitfalls with this approach, just as there may be benefits I haven’t considered. Over the next few weeks I intend to blog about what this might look like in English. In the meantime, if you have any thoughts about how to use assessment to keep our focus on using the curriculum as a progression model I’d be keen to hear them.
UPDATE: It’s been pointed out to me that this could easily end up with a huge list of tickbox “I can…” statements. I appreciate that the impulse to construct such lists exists but would argue that rather than making any such thing we already have (or should have) a curriculum which specifies what we want students to learn. Creating an additional layer of administration is to be resisted.
This kind of laddering can be very natural with a well ordered curriculum and a knowledgeable teacher. It can quickly become that tick box list that you mention in your update if we lack either of the above.
How do you feel about the idea of explicitly focusing this process at the lesson level rather than at assessment points throughout the year? That is, students raise the bar largely from scratch each session rather than trying to start where they last left off. I have argued with people on Twitter who dislike this as they feel it is not stretching students, but if we accept the large variability of performance (at an individual level), and the necessity to memorise foundational ideas, this seems like a more sensible idea. Apologies if this is already what you are thinking about.
I don’t know if you have been continuing your karate during lockdown, but this idea of re-progressing through the drills is very common in the martial arts. Obviously you would expect that progression to speed up on average over time, and accommodating that is a key lesson planning consideration.
How do you see these ideas linking with the idea of zone of proximal development or with Engelmann’s DI’s focus on limiting the introduction of new material?
I agree with your idea of assessing against the curriculum. As Michael mentions, this works great when you have knowledgeable teachers who understand and dig out misconceptions and build students’ knowledge from the ground level up.
I have been using this model of low-stakes quizzing, but at the start of each lesson, for a good few years. I start with the basics, ensuring students learn and retain the foundation knowledge through fact-based questions. Once I know that, as you say, students have stopped getting it wrong, I change the quizzing to short, low-stakes application-based questions. My expectation is that students will recognise the ‘fact’ they learnt over time, and will now need to recall and apply it.
I repeat this process multiple times, with increasing time spacing, and interleave new facts with application of prior knowledge of different topics of the curriculum.
It is a clear way for me to know exactly which parts of the curriculum students have learnt and can begin to apply, and I can immediately, there and then, use direct instruction to reteach any parts they still haven’t learnt or understood. Students also appreciate knowing clearly which parts of the curriculum they have distinct gaps in, rather than worrying that they simply didn’t understand an application question correctly. There is no grading involved and the only success criterion is whether a student has demonstrated they have mastered a part of the curriculum.
[…] If we are going to use the curriculum as a progression model, it’s useful to build in checkpoints to ensure students are meeting curriculum related expectations. So far I’ve written about replacing age related expectations with curriculum related expectations, and another post on replacing grades more generally with curriculum related expectations. […]