Throughout my career, the de facto approach to assessing English at KS3 has been to use extended writing. After all, this is what students will face in their GCSEs, so it made a certain amount of sense that this was what we should get them used to as early as possible. To take this approach, we need a markscheme. Most markschemes attempt to identify the different skill areas students should be demonstrating and then award marks based on how well these skills are demonstrated. The weakness of using markschemes – or rubrics, if you prefer – is that it comes down to an individual marker’s judgement to determine how well a student has demonstrated a skill. The trouble is, human beings just aren’t very good at doing this. If that were the only problem we might be able to overcome it by using comparative judgement, but, unfortunately, it’s not.

It’s also important to remember that English at GCSE is very different to the subject at KS3. Whereas GCSE literature requires students to learn specific texts, this is an unhelpful approach to take at KS3. The texts studied at KS3 must be vehicles for important curricular concepts that will continue to matter in KS4. In language, GCSE is essentially test prep, with students being taught to work out what examiners are looking for in opaquely worded questions. It should be abundantly clear that transferring this to KS3 would be a catastrophic error. Because of this, making KS3 assessments similar to GCSE assessments is not only unnecessary, it’s actively harmful.

The bigger issue with the essay approach to assessment is that it inevitably ends up assessing students on things they haven’t yet been taught to do. This is unfair and will significantly disadvantage those students who are already most disadvantaged. Not only that, it’s not very useful for identifying what students know and can do. That’s fine if what we’re primarily interested in doing is discriminating between students and finding out which ones are good at English and which are not, but usually, teachers have a fairly clear idea about this before students sit an assessment. If we want to use assessment to assess how well students are mastering the curriculum, then we need to think carefully about the constructs – the hypothetical concepts that a test is meant to measure – we want students to learn, and how best we can ascertain whether they have been mastered. This is why, as I argued here, we need to use a mastery assessment model. The main point of such assessment is to find out how successful the curriculum is at teaching the constructs we’ve identified, and how well individual teachers are implementing the curriculum. The starting assumption should be that if students cannot demonstrate mastery of a concept then either the curriculum or the teacher is at fault.*

So, the first step is to determine the Curriculum Related Expectations (CREs) students should be able to meet. As I argued here, it’s crucial that the curriculum should specify exactly what students need to know and be able to do to increase the likelihood that teachers will explicitly teach these things and provide opportunities for practice.

For the OAT KS3 curriculum, we’ve divided these into recall of key subject terminology, what students should know, and what they should be able to do. Here are some examples:

[Example tables: sonnets; KJV]

Teachers know that these are the expectations students must meet, and that if students can’t meet them, then that’s on them as teachers. (See footnote below.)

Then, assessments need to test whether students actually know and can do these things. To make it clear precisely what students do and don’t know, and what they can and can’t do, we’ve designed our assessments to assess only those things that students have been taught and given multiple opportunities to practise. It should go without saying – although, as you can see, I’m going to spell it out – that these CREs must also be the subject of regular and frequent formative assessment.

Here’s an example from our Art of Rhetoric module:

  • The first question asks students to read an extract they will have studied in class and is testing the constructs of summarising and sentence combining.
  • Q2 is testing the recall of topic vocabulary (key characters/themes are taught alongside 3/4 ‘excellent epithets’** which are then used to create thesis statements).
  • Q3 is assessing the ability to write a thesis statement (which we specify very precisely in our curriculum) and to identify supporting evidence from the extract.
  • Q4 asks students to see character as a construct and performance as malleable.
  • Q5 asks students to locate examples of rhetorical techniques which they have been taught about (and should have practised).
  • Q6 expects students to demonstrate an understanding of the parts of metaphor in order to discuss the effect of language (again, this is precisely specified).
  • Q7 provides another opportunity for students to write a thesis statement and then to use the following two steps from our deconstructed essay structure.
  • Q8 is simple recall of learned definitions which will crop up repeatedly across the curriculum.
  • Q9–13 are assessing the recall of broader ‘hinterland’ knowledge. These questions are the least essential for students to get right as, for the most part, nothing later in the curriculum depends on them.
  • Q14 asks students to recall topic-specific vocabulary that will recur throughout the curriculum.
  • Q15 is slightly messy in that it’s testing both whether the creative sentence types students have been taught have become embedded and how much slow writing practice they’ve had. It also requires functional knowledge of the Aristotelian Triad (for which students have already given a definition).

I don’t claim that this is in any way perfect or finished. However, we do think this form of assessment is a significant improvement on what has gone before and, as an ancillary benefit, these assessments are much less time-consuming to mark. We’re learning a lot from schools using these assessments – mainly about how well we’ve specified the various constructs – but we don’t yet have clear information about whether we’re assessing the right concepts. We think the CREs we’ve chosen are the right ones, and we think our assessments do a reasonable job of assessing whether students are mastering the curriculum. However, this is only likely to become clear once students embark on GCSE. As students make their way through KS3, their responses become increasingly extended as they are taught and practise each of the components specified in the curriculum. When we find something that lots of students struggle with, we go back to the drawing board and try to specify it more carefully and in more detail, to support teachers in teaching what students actually need to know to be successful. And, as an added bonus, it’s now much easier to report to parents exactly what it is their child is struggling to master.

For anyone seeking to follow in our footsteps, I must make clear that working through Evidence Based Education’s Assessment Lead Programme as a Lead Practitioner team has been invaluable, if not essential. I can’t recommend it highly enough.

As always, constructive feedback is welcomed.

* Obviously this will not always be true: students might have been absent or otherwise have chosen not to engage with the curriculum.

** Excellent Epithets are a wonderful idea we pinched from Ormiston Horizon Academy in Stoke-on-Trent.