Throughout my career, the de facto approach to assessing English at KS3 has been to use extended writing. After all, this is what students will be faced with at GCSE, so it made a kind of sense that we should get them used to it as early as possible. In order to take this approach, we need a markscheme. Most markschemes attempt to identify the different skill areas students should be demonstrating and then award marks based on how well these skills are demonstrated. The weakness of using markschemes – or rubrics, if you prefer – is that it comes down to an individual marker’s judgement to determine how well a student has demonstrated a skill. The trouble is, human beings just aren’t very good at doing this. If that were the only problem we might be able to overcome it by using comparative judgement, but, unfortunately, it’s not.
It’s also important to remember that English at GCSE is very different to the subject at KS3. Whereas GCSE literature requires students to learn specific texts, this is an unhelpful approach to take at KS3. The texts studied at KS3 must be vehicles for important curricular concepts that will continue to matter in KS4. In language, GCSE is essentially test prep, with students being taught to work out what examiners are looking for in opaquely worded questions. It should be abundantly clear that transferring this to KS3 would be a catastrophic error. Because of this, making KS3 assessments similar to GCSE assessments is not only unnecessary, it’s actively harmful.
The bigger issue with the essay approach to assessment is that it inevitably ends up assessing students on things they haven’t yet been taught to do. This is unfair and will significantly disadvantage those students who are already most disadvantaged. Not only that, it’s not very useful for identifying what students know and can do. That’s fine if what we’re primarily interested in is discriminating between students and finding out which ones are good at English and which are not, but usually teachers have a fairly clear idea about this before students sit an assessment. If we want to use assessment to find out how well students are mastering the curriculum, then we need to think carefully about the constructs – the hypothetical concepts that a test is meant to measure – we want students to learn and how best we can ascertain whether they have been mastered. This is why, as I argued here, we need to use a mastery assessment model. The main point of such assessment is to find out how successful the curriculum is at teaching the constructs we’ve identified, and how well individual teachers are implementing the curriculum. The starting assumption should be that if students cannot demonstrate mastery of a concept then either the curriculum or the teaching is at fault.*
So, the first step is to determine the Curriculum Related Expectations (CREs) students should be able to meet. As I argued here, it’s crucial that the curriculum should specify exactly what students need to know and be able to do to increase the likelihood that teachers will explicitly teach these things and provide opportunities for practice.
For the OAT KS3 curriculum, we’ve divided these into recall of key subject terminology, what students should know, and what they should be able to do. Here are some examples:
Teachers know that these are the expectations students are expected to meet, and that if students can’t meet them, that’s on the teachers. (See footnote below.)
Then, assessments need to test whether students actually know and can do these things. In order to make it clear precisely what students do and don’t know and what they can and can’t do, we’ve designed our assessments to only assess those things that students have been taught and given multiple opportunities to practise. It should go without saying – although, as you can see, I’m going to spell it out anyway – that these CREs must also be the subject of regular and frequent formative assessment.
Here’s an example from our Art of Rhetoric module:
- The first question asks students to read an extract they will have studied in class and is testing the constructs of summarising and sentence combining
- Q2 is testing the recall of topic vocabulary (key characters/themes are taught alongside 3/4 ‘excellent epithets’** which are then used to create thesis statements)
- Q3 is assessing the ability to write a thesis statement (which we specify very precisely in our curriculum) and to identify supporting evidence from the extract.
- Q4 asks students to see character as a construct and performance as malleable
- Q5 asks students to locate examples of rhetorical techniques which they have been taught about (and should have practised)
- Q6 expects students to demonstrate an understanding of the parts of metaphor to discuss the effect of language (again, this is precisely specified)
- Q7 provides another opportunity for students to write a thesis statement and then to use the following two steps from our deconstructed essay structure.
- Q8 is simple recall of learned definitions which will crop up repeatedly across the curriculum
- Q9-13 are assessing the recall of broader ‘hinterland’ knowledge. These questions are the least essential for students to get right as, for the most part, nothing later in the curriculum is dependent on them.
- Q14 asks students to recall topic-specific vocabulary that will recur throughout the curriculum
- Q15 is slightly messy in that it’s testing whether the creative sentence types students have been taught have been embedded, and it will also show how much slow writing practice they’ve had. It also requires functional knowledge of the Aristotelian Triad (for which students have already given a definition).
I don’t claim that this is in any way perfect or finished. However, we do think this form of assessment is a significant improvement on what has gone before and, as an ancillary benefit, these assessments are much less time-consuming to mark. We’re learning a lot from schools using these assessments – mainly about how well we’ve specified the various constructs – but we don’t yet have clear information about whether we’re assessing the right concepts. We think the CREs we’ve chosen are the right ones and we think our assessments do a reasonable job of assessing whether students are mastering the curriculum. However, this is only likely to become clear once students embark on GCSE. As students make their way through KS3, responses become increasingly extended as they are taught, and practise, each of the components specified in the curriculum. When we find something that lots of students struggle with, we go back to the drawing board and try to specify more carefully and in more detail to support teachers in teaching what students actually need to know to be successful. And, as an added bonus, it’s now much easier to report to parents exactly what it is their child is struggling to master.
For anyone seeking to follow in our footsteps, I must make clear that working through Evidence Based Education’s Assessment Lead Programme as a Lead Practitioner team has been invaluable, if not essential. I can’t recommend it highly enough.
As always, constructive feedback is welcomed.
* Obviously this will not always be true: students might have been absent or otherwise have chosen not to engage with the curriculum.
** Excellent Epithets are a wonderful idea we pinched from Ormiston Horizon Academy in Stoke-on-Trent.
Are you able to provide an example of the Excellent Epithets? Very helpful article, thank you
Hi there,
An interesting read. How do you then use the mark that a student earns to create a level or grade using this assessment method in KS3?
Why would you want to create a level or grade? Who would this be for? I appreciate you might have a backward SLT asking you to do this, but there is no external audience for doing such a thing. It’s certainly not going to help students.
Thank you for your response. Maybe I should clarify my question. I guess, what I am asking is how do you gauge the progress of your students? If this assessment is out of 40, is every assessment out of 40 to see if students improve?
Because the curriculum is our progression model, we gauge progress based on whether children are mastering our curriculum related expectations. I try to explain all this here: https://learningspy.co.uk/assessment/why-using-the-curriculum-as-a-progression-model-is-harder-than-you-think/
Really interesting and useful article, thank you. Can I ask how often you conduct these summative assessments across the year, how the marks get recorded on the internal system/what this looks like to parents and how does it compare to the way these things are done by other departments across the school?
Thank you so much!
Hi Lindsay – this very much varies from school to school (we have 32 secondaries!)
We design an assessment for each unit and we have 4 units per year in years 7 and 8, and 5 in year 9. How schools decide to record and report students’ performance varies, but the advice we’ve given is summarised here: https://learningspy.co.uk/assessment/why-using-the-curriculum-as-a-progression-model-is-harder-than-you-think/
Hi David, I’m currently planning on reshaping our curriculum/assessment model after a lot of reading around. When I was having a look at your example assessment for Y7, I was just curious as to how you would grade the final writing part? I can see it has 3 bullet points for criteria but it says 5 marks. I know this may sound like a pedantic question but I’m really struggling to find a way to mark a piece of writing on a summative assessment without resorting to rubrics and descriptors etc. Is there a more specific way?
Hi Prufrock (love that poem!). The 2 additional marks are awarded for accuracy and general competence. This final category is discretionary: no rubric, just a teacher’s instinct.
This curriculum looks amazing! I am curious to know how/if you link the Y7 curriculum to what children have learned in primary. I am also fascinated to hear from secondary teachers if they have noticed any difference in the subject knowledge of children coming up from primary since the changes in 2014?
Hi David, these look great! We as a school have transitioned away from teaching GCSE English in year 9 and were looking for a different way to track progress! Is there a way of accessing more of the CRE dropbox sheets? My HOD loved the look of them!
Hi,
Thanks for this great article.
How do you report student progress back to parents, and how do you measure whether a child is on target against their GCSE grade?
We absolutely never do this! Using the curriculum as a progression model is inimical to using GCSE grades as a progression model.
Hi David. Thanks for a great read. We have implemented ‘Progress Checks’ throughout our curriculum as a way of determining whether or not our pupils are mastering the curriculum.
Our next step is looking at department-wide trends in these Progress Checks to address in Curriculum Team meetings, looking at pedagogical approaches to address misconceptions and at how to improve the curriculum as we move forward. I recently read ‘Leverage Leadership’ and love the idea of ‘Data Driven Instruction’, where meeting time is used to spot the gap and then address it.
I was wondering whether or not you do anything similar, and if so, do you record your assessment marks centrally? It’s easy enough to record ‘Knowledge Checks’ as raw scores, but we are looking at how to record trends in qualitative writing/analytical tasks. We are going to trial having a centralised ‘Whole Class Feedback’ where teachers record their class’ WWW/Work On and if there are common themes, look at how best to address these in ‘responsive teaching’ episodes.
I’d be keen to talk to anyone who has tried anything like this.
Thank you,
Ben