When I first encountered the idea that the curriculum should be the model of progression on Michael Fordham’s blog, I immediately and instinctively felt that this was right. Of course, I said to myself, we will know whether students are making progress if they are learning more of the curriculum. Voila! And, like many others, I left the notion as a self-evident truth that required no further explanation. Once it is understood to be true, the scales will fall from the eyes of those espousing flightpaths, Age Related Expectations and incoherent statements of progress, and all will be well. (See here and here for a discussion of why these approaches are unhelpful.)
Well, after five years or so of wrestling with the concept, I can honestly say that it has been a lot trickier to get my head around than I cheerfully assumed. The problem, as far as I can see, is three-fold. First, what if learning the curriculum doesn’t seem to result in students being better at a subject? Second, how can we establish whether or not students actually know, remember and can do more of the curriculum? And third, what should we do with the information once we’ve managed to establish it?
1. What if progressing through the curriculum doesn’t result in students making progress?
The most pressing obstacle to using the curriculum as a progression model is that the curriculum isn’t good enough. There are, I think, two main reasons why a curriculum might not lend itself to effectively assessing students’ progress. In some cases, the curriculum is not specific enough about what students need to know, remember and be able to do. This inevitably leads to students not being taught crucial knowledge and then, to compound the error, to their being assessed on things they haven’t been taught. This is far more widespread than you might imagine. In fact, I’d suggest that in some subjects (such as English) it is the norm.
The second major failing of the curriculum is that it’s not coherently sequenced to ensure students are able to make progress. Again, the inevitable result is that students don’t possess the requisite knowledge to be able to do what is being assessed. A good question to ask of your curriculum is whether it would make a difference to move a unit from one term or year to another. If the answer is that it wouldn’t make a difference then that might indicate that the sequencing of your curriculum is not coherent. While the order of precedence in which areas of the curriculum are taught varies widely and matters more in some subjects than in others, it should be the case that what is taught in Term 2 depends in some way on what was taught in Term 1. If it doesn’t, maybe what you intend students to learn in Term 1 isn’t worth learning?
In both cases, some students – usually the more advantaged – will be successful despite the poor curriculum. They come to school with enough background knowledge to make sense of our vague explanations and assumptions, and have enough support outside of school to cope with any deficits. The students who suffer will be the most disadvantaged, who possess neither the required background knowledge we’ve failed to teach nor the external support to recover from our failures. If these students are getting good outcomes, schools can be reasonably sure it’s because they’ve done a good job.
Curriculum Related Expectations
If we want to use the curriculum as the progression model, one of the key things to get right is to clearly specify our Curriculum Related Expectations (CREs). And getting this right will disproportionately benefit the most disadvantaged students we teach. This requires us to state what specifically we want students to know and be able to do.
Here’s an example of a bad CRE in English:
Students will learn how to write an analytic essay.
Here it is unclear how students will learn to achieve the curricular expectation and, consequently, it’s very easy to miss out important steps and make unwarranted assumptions about what is required to be successful. Then, at some point down the line students are assessed on their ability to do something they haven’t been taught how to do.
Here’s an example of a curricular expectation that’s a bit better:
Students will learn how to scaffold an analytic essay using PEE [or one of its many variants].
This is more precise and the likelihood that teachers will spend lessons showing students how to craft points, select evidence and explain their choices is greatly increased. However, there are still likely to be lots of gaps between what is assumed, what gets taught and what is learned. Sarah Barker’s blog on effective modelling does a great job of explaining this breakdown.
If we are genuinely determined that all students can be successful we need to do better. Here’s an example of a CRE that is more likely to have the required effect:
Stage 1: Students will learn how to write a thesis statement beginning with a subordinating conjunction and using a list of triple/quadruple adjectives to frame the arguments they will develop.
Without going into enormous detail, this curricular expectation makes it clear that teachers must teach students what a subordinating conjunction is and how to use it, as well as necessitating the teaching of the link between the adjectives used in a thesis statement and what then gets written in the rest of the essay. It’s probably a long way from perfect, but it’s getting a lot closer to the kind of specificity needed for the curriculum to be used as a progression model.
When specifying CREs, remember these must be reasonable expectations of what students should learn. This means you need to have thought carefully about order of precedence: is there anything students need to know first? Could the CRE be usefully broken down into smaller, hierarchical ‘chunks’? The more specific you are, the more likely you are to teach and assess what has been specified.
Here’s my attempt to map out the essential concepts in English (many thanks to Oliver Caviglioli for helping me with the design). If you’re struggling to read it, here is a more legible version.
As you can see, insofar as there’s an order of precedence to knowledge in English, it is very wide and shallow.
This is completely different to a subject like maths where the order of precedence is very tall and thin. (Thanks to Jemma Sherwood* for sharing this precedence diagram for maths.)
In either case, there might be several plausible routes to take in order to teach this knowledge effectively. For instance, Jemma offers this sample pathway for her maths curriculum:
In English, the essential concepts I’ve suggested above need to be placed against curricular experiences. These would include the texts we want students to have read and the writers we want them to encounter. Here’s what this might look like for Year 7:
The content – the curricular experiences students will have – is open to different interpretations. The particular curricular story being told here is about the development of English as a language and a body of literature. Students will encounter various texts and writers that I consider important in a way that will allow them to see how the earliest writers they experience influence later writers, and how texts have conversations back and forth through time. I’m completely open to the idea that all of these experiences could be replaced with other, completely valid experiences and that students would learn a very different story of English. However, what, I think, is less open to substitution are the concepts for which the experiences are the delivery vehicle. As with Jemma’s maths example, what I’ve designed is one possible pathway through these concepts, but the essential aspect is that each concept is specified, taught and assessed. (More on this here.)
Potential benefits of such an approach might include:
- The lack of emphasis on raising students’ rank means that the curriculum is less likely to be warped by proxies
- Less need to reify and quantify aspects of the curriculum
- Clearer sense of what has been taught effectively
- More precise knowledge of what each student has and has not learned
- Clearer sense of what needs to be retaught
- Greater clarity of which students require intervention in which areas
- No need to assign global grades of competence to individual students
- Less risk that students who have not met a CRE will view this as an inherent quality
- Less risk of students seeking the external validation of a grade.
2. How do we know if students know, remember and can do more of the curriculum?
Once we have specified the curriculum we should be in a position to start using it as a progression model. But how will we know if students are meeting our Curriculum Related Expectations? The obvious answer is assessment.
By assessing what students know, remember and can do, we can adapt not only our instruction but also our curriculum. It’s important to remember that in most schools, assessments will take two different forms: in-class formative assessment and more formal summative assessments. In-class assessment is accomplished by simply checking whether students know, remember and can do the things you have been teaching them, and then acting on the outcomes of this assessment. Too often, this second stage is missed out. For instance, a teacher might start a lesson with a retrieval quiz which demonstrates that students do not know essential curriculum content but then not adjust future lessons in response. The following flowchart is an attempt to illustrate what might happen in response to students’ performance on in-class assessment:
A useful rule of thumb should be that if sufficient students are unable to meet our expectations, then we ought to assume the fault lies with either our teaching or the curriculum. Wherever possible, it’s important to have subject-level discussions with colleagues teaching the same curriculum to see if they are experiencing similar difficulties or have useful advice to offer. If it’s a curriculum you haven’t planned yourself, it’s probably worth speaking to whoever designed it to get a sense of whether their expectations match yours.
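To make this concrete, here is a minimal sketch in Python of the kind of decision logic such a flowchart might capture. The thresholds, student names and suggested responses are my own illustrative assumptions, not a prescription:

```python
# A minimal sketch of responding to in-class assessment results.
# Thresholds and names are illustrative assumptions, not recommendations.

def respond_to_quiz(results: dict, secure_threshold: float = 0.6) -> str:
    """Decide how to respond to a retrieval quiz.

    `results` maps each student to the proportion of quiz questions
    they answered correctly (0.0 to 1.0).
    """
    secure = {name for name, score in results.items() if score >= secure_threshold}
    proportion_secure = len(secure) / len(results)

    if proportion_secure < 0.5:
        # Most of the class is insecure: assume the fault lies with the
        # teaching or the curriculum, and plan to reteach.
        return "Reteach this content and review how it was explained and sequenced."
    if proportion_secure < 0.9:
        # A minority are insecure: move on, but target support at them.
        strugglers = ", ".join(sorted(set(results) - secure))
        return f"Move on, but plan targeted support for: {strugglers}."
    return "Move on to the next part of the curriculum."

quiz = {"Aisha": 0.9, "Ben": 0.4, "Chloe": 0.8, "Dan": 0.55}
print(respond_to_quiz(quiz))  # -> Move on, but plan targeted support for: Ben, Dan.
```

The point of the sketch is simply that the results of in-class assessment should always trigger a decision, rather than being collected and ignored.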
For more formal summative assessments, we need to set tests which all students following the same curriculum should sit. Following the advice of Stuart Lock and Sally Stanton of Advantage Schools, I’d recommend restricting these types of assessments to no more than two per year. Using their model, the examples below show how such assessments might be composed in order to demonstrate whether students are progressively able to know, remember and do more of the curriculum:
It should be clear that the assessment towards the end of Year 9 is not intended to be six times longer than the first test taken in Year 7. Instead, each test would sample from the most important concepts and areas of skill covered during earlier stages of the curriculum and cumulatively assess students’ progression. I’m currently working with colleagues on just what such tests will look like and will share the results as they become available. The key thing here is that these tests should not simply be junior versions of GCSE assessments. Instead, they should seek to assess students’ facility with the core components that compose skilled performance later on. As an example, here’s a first attempt at producing an English test for Year 7.
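To illustrate the principle of cumulative sampling, here is a rough Python sketch. The curriculum map, concept names and item bank are entirely hypothetical:

```python
import random

# A hypothetical map of the concepts taught in each term so far.
curriculum_to_date = {
    "Term 1": ["subordinating conjunctions", "thesis statements"],
    "Term 2": ["selecting evidence", "analytic paragraphs"],
    "Term 3": ["comparing texts"],
}

def build_test_blueprint(curriculum, items_per_concept=2, seed=None):
    """Sample items for every concept taught so far, so that each
    successive test cumulatively covers the whole curriculum to date
    rather than only the most recent unit."""
    rng = random.Random(seed)
    blueprint = []
    for term, concepts in curriculum.items():
        for concept in concepts:
            # Assume a bank of ten items per concept to draw from.
            chosen = rng.sample(range(1, 11), k=items_per_concept)
            blueprint.extend((term, concept, f"item {n}") for n in chosen)
    return blueprint

for term, concept, item in build_test_blueprint(curriculum_to_date, seed=42):
    print(f"{term} | {concept} | {item}")
```

In practice you would cap the total number of items and weight the sampling towards the most important concepts, which is how later tests can grow in coverage without growing proportionally in length.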
I should note that the key assumption underpinning this assessment model is not that tests should discriminate between students so we can place them in rank orders and assign summative statements of progression. Instead, in order to ensure progression through the curriculum, these tests should primarily be seen as statements of competence: that students have mastered the content sufficiently well to progress.**
These assessments will produce raw scores which can then be turned into percentages which data managers can use to populate spreadsheets. And herein lies the danger: numbers are seductive. They are derived from inherently unreliable proxies which then take on the appearance of objective reality. As we’ll see, there is a role for numbers, but it should be a limited one.
3. What should we do with assessment data?
So, your students have sat an assessment. What now? The first thing is to head off any attempt to steer us back to filling in APP grids, with teachers ticking off lists of Curriculum Related Expectations. Not only is this burdensome, it leads us away from using the curriculum itself as the progression model and instead reifies curricular objectives. That is to say, what starts as a breakdown of what should be taught – only ever a proxy for the experience itself – becomes a measure against which students and teachers are judged, with the inevitable result that objectives take on an independent life of their own.
This tendency to want to use CREs as an assessment tick list is contained within what I’ve called the specificity problem: if CREs are too specific, we risk generating endless tick-box checklists. If they’re too broad, they risk becoming meaninglessly bland. Essentially, the way out of this thicket is to think carefully about what level of specificity is needed by different audiences. CREs need to be both very specific and very broad depending on the audience.
Students
What students need to know are the answers to Dylan Wiliam’s AfL questions: Where am I now? Where am I going? How am I going to get there? The first question is answered by students being aware of their performance in assessments. How well do they know what they have been taught? How fluently can they perform the procedures on which this knowledge depends? The second question is the province of the curriculum; where they are going should have been carefully and coherently sequenced so that they can build on prior knowledge. The third question concerns instruction. It is contained in the feedback students are given about what they need to do to improve, and in the teaching which introduces new concepts and provides opportunities to practise increasingly complex skills. This will in all likelihood need to be detailed and specific.
Teachers
Teachers need to know what gaps they need to fill, both at the level of individual students and of the class. Teachers need to know what to teach next in order that all students progress through the curriculum. And because the process of assessing students’ curricular knowledge will reveal inadequacies both in instruction and in the curriculum, teachers need to be prepared and able to be responsive at a curricular as well as an instructional level. As before, this will require detail and specificity.
Curriculum leaders
When curriculum leaders look at the performance data deriving from the twice-yearly assessment cycle, their needs are different. They will need to be alert for gaps at the level of individual teachers, and for this they need to be able to compare the performance of different teaching groups so they can investigate causes and resolve issues. They also need to use assessment to work out how well the department as a whole is teaching particular aspects of the curriculum. It’s probably not useful for curriculum leaders to make summative statements of progress. There’s very little benefit to claiming that students have achieved an arbitrary benchmark when they are part way through their curricular journey. Instead, they need to know how much of the curriculum has been learned, what to do if students are not learning it, and to constantly monitor whether the curriculum requires greater levels of specificity or alterations to its sequencing.
To that end, they need to be able to conduct question-level analysis of students’ performance in tests. (See here for a detailed – and fairly technical – discussion of how this can work.) In all probability, this task will be made easier if students’ performance is converted into numbers. The key question when collecting data should always be: how will this information be used?
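As a minimal sketch of what question-level analysis might involve, here is some Python working on invented marks; the students, questions and data structure are hypothetical, not a real school data system:

```python
import statistics

# Maximum marks available for each question on a (made-up) test.
max_marks = {"Q1": 3, "Q2": 4, "Q3": 5}

# Each student's marks per question.
scores = {
    "Student A": {"Q1": 3, "Q2": 1, "Q3": 4},
    "Student B": {"Q1": 2, "Q2": 0, "Q3": 4},
    "Student C": {"Q1": 3, "Q2": 1, "Q3": 2},
}

# Average percentage of available marks achieved on each question.
for question, available in max_marks.items():
    achieved = [marks[question] / available * 100 for marks in scores.values()]
    print(f"{question}: {statistics.mean(achieved):.0f}% of available marks")

# Here Q2 stands out: the whole class scored poorly on it, which points
# to content that may need reteaching or resequencing, whereas a single
# student's low mark would instead point to individual intervention.
```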
SLT/Governors
At the level of Senior Leadership or Governance, the detail needed is much less. At this point, what is needed are summaries. Leaders and governors need to know which teachers need development and whether the cohort is on track. Again, when it comes to summaries of cohorts being on track, numbers help to make comparisons and spot trends. Whilst it’s vital to remember that grades are holistic and can ONLY be applied summatively after a course is complete, average percentages have far less narrative power to deceive us and still allow for comparison and pattern spotting.
Parents
Reporting to parents is where a lot of otherwise good practice combusts. Far too many school leaders mistakenly believe they need to report how likely students are to get particular GCSE grades and this then warps everything else they do. Whilst there’s a statutory obligation to regularly report students’ progress to parents, what form this takes is up to us. Parents may think they want to know GCSE grades but, in large part, that’s because they have been trained to expect them. In actual fact, parents generally want to know three things:
- How is my child performing (relative to other members of the group)?
- Are they working hard? (What, specifically, do they need to improve?)
- Are they happy?
None of these questions can be adequately answered by grades or numbers (although numbers can play a part) and they should obviously never be based on the hocus pocus of forecasting students’ future performance.
The role of numbers
I’ve come to think that numbers can play a useful part in overseeing whether students know more, remember more and can do more of the curriculum, but we should proceed cautiously.
Think about this statement: A student’s performance in an assessment is 64%. Is this good or bad?
At this stage we don’t have enough information to know. So, what if we also knew that the average performance in the class was 57% and that the average performance in the year group was 79%? Now we have a clearer sense of how well an individual student is progressing through the curriculum: information is meaningful if it is comparable. But a system that just communicates numerical aggregates doesn’t answer Wiliam’s questions (Where am I? Where am I going? How am I going to get there?)
What we can’t do is compare the percentage a student gets in Term 1 with one achieved in Term 6 and attempt to draw a line between them indicating progress. This would assume that the second test was objectively more difficult and that, if the numbers go up, progress is being made. We may believe this to be true, but it’s very rare to find schools that have put in the effort of calculating the difficulty of test questions required to make this a defensible claim.
The key to using numbers sensibly in this model is only to compare across space (laterally) and not across time (longitudinally). What I mean by this is that it makes perfect sense to compare how different students, classes or cohorts have performed on the same assessment, but not to compare the results of different assessments.
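Here is a toy Python sketch of that kind of lateral comparison, using invented scores:

```python
import statistics

# Invented percentages for a single assessment sat by two classes
# in the same cohort.
class_7a = [64, 71, 58, 80, 67]
class_7b = [72, 69, 75, 81, 78]
cohort = class_7a + class_7b

student_score = 64
print(f"Student: {student_score}%")
print(f"Class average: {statistics.mean(class_7a):.0f}%")  # same test
print(f"Cohort average: {statistics.mean(cohort):.0f}%")   # same test

# These comparisons are meaningful because everyone sat the same paper.
# Comparing this 64% with a percentage from a different test next term
# would not be, unless the two tests' difficulty had been calibrated.
```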
It may be helpful to see an example of what all this might look like in practice:
Students: “The areas you need to practise more are rounding to significant figures and ordering decimals. We have assigned you tasks on these topics to complete by …”
Teachers: “My class average was 60% and the parallel set got 72%. I can see that my group did much worse on ordering decimals, so I’ll talk to the department and see what ideas they have.”
Curriculum leaders: “Across the board, rounding to significant figures wasn’t as good as it needs to be. We will revisit it soon and spend some department time on it in the meantime. Teacher C needs some support with their class as the results aren’t where they ought to be.”
SLT/Governors: “Year 7 are mostly attaining well on the maths curriculum this year, so will be ready for Year 8. Teacher C’s group is a slight concern. We will spend time with the HoD to check what’s happening and how we might support.”
Parents: “Your child achieved 65%. The class average was 72% and the cohort average was 70%. She has spent 50% less time on Dr Frost Maths than the rest of the class. She will now be assigned tasks on rounding to significant figures and ordering decimals in order that she does not fall behind.”
*
In conclusion, when it comes to using the curriculum as a progression model, I think the five most important factors are:
- No one is asking for internal data – if you are going to produce it you need to have a very clear rationale
- Only assess what students have been taught
- Data can help to make lateral comparisons but is unlikely to help students make progress
- Students are making progress if they know more, remember more and are able to do more of the curriculum
- To be used as a progression model the curriculum must be highly specific and carefully sequenced.
Here are the slides I used to illustrate all this at my researchED talk: