Curriculum related expectations: the specificity problem

November 21st, 2020 | assessment, curriculum

If we are going to use the curriculum as a progression model, it's useful to build in checkpoints to ensure students are meeting curriculum related expectations. So far I've written about replacing age related expectations with curriculum related expectations, and about replacing grades more generally with curriculum related expectations. But how specific do these expectations have to be in order to be useful? If they're too specific we risk generating endless tick-box checklists, but if they're too broad they risk becoming meaninglessly bland and telling us nothing about how students are progressing. It seems tempting to suggest [...]

High jump vs hurdles: Replacing grades with curriculum related expectations

November 18th, 2020 | assessment, curriculum

I've recently argued that one way to ensure schools are explicitly using the curriculum as a progression model is to assess children against curriculum related expectations. Briefly, this means that if your curriculum specifies that students have been taught x, they are then assessed as to whether they have met a minimum threshold in their understanding of x. So, for instance, if I've taught you about, say, the different types of metrical feet and their effects, are you now able to demonstrate this knowledge? If you can, you have met a curriculum related expectation; if you cannot, you haven't. In [...]

Curriculum related expectations: using the curriculum as a progression model

October 27th, 2020 | assessment, curriculum

One of the barriers to using the curriculum as a progression model is that there is too little understanding of what this might mean. It sounds great but remains a bit mysterious. I've spoken to a number of people who are happy to agree that the curriculum provides a map of the quality of education a school provides, and who even approvingly use the phrase 'curriculum as a progression model', but who nevertheless continue to attempt to measure progress using 'age related expectations' or some other meaningless confection. Let's first deal with why age related expectations are unhelpful. First, they are guesswork. [...]

GCSE reform: a modest proposal

May 30th, 2020 | assessment

The pandemic has thrown many assumptions about how education could or should unfold into sharp relief. Like many others, I've been wondering about the positives we might find in our current situation and how - or whether - we can salvage anything when schools eventually return to normal. One area that seems to beg for reform is the way the exam season currently plays out. Here are some of the factors to consider: Accountability creates huge pressures on teachers which are, inevitably, passed on to students. Is there a way to break this chain? Along with these pressures, the quantity of [...]

Should we scrap SATs? Cautiously, yes

April 20th, 2019 | assessment

Earlier this week, Labour leader Jeremy Corbyn turned up at the NEU annual conference with some crowd-pleasing ideas. The most eye-catching of these was that he would, if elected, scrap SATs, saying, "We need to prepare children for life, not just exams". Cue rapturous applause from the assembled trade unionists. None of this is particularly surprising, but what does intrigue me is why Corbyn and the NEU want to get rid of SATs. For Corbyn's part, he says, "SATs and the regime of extreme pressure testing are giving young children nightmares and leaving them in floods of tears." Of course, [...]

How do we know pupils are making progress? Part 4: Instruction

April 7th, 2019 | assessment

This is the final post in a series looking at how we can be sure that students are making progress through the curriculum. The whole purpose of knowing whether students are making progress is to be able to design appropriate instructional sequences. We may believe children are motoring through our wonderfully constructed curriculum, but if empirical data reveals this not to be the case, we need to know. In my last post I discussed the importance of being able to glean meaningful data on item difficulty by seeing how well students do on particular assessment tasks. If all students are getting [...]

How do we know pupils are making progress? Part 3: Assessment

March 26th, 2019 | assessment

In Part 1 of this series I set out the problems with making predictions about students’ progress by drawing a ‘flight path’ between KS2 and KS4, then, in Part 2, I explained how thinking about the curriculum as a progression model is essential in making judgments about whether students are making progress. In this post we will turn our attention to issues of assessment. NB. This might feel a bit technical at times, but please know that I'm trying hard to explain complex ideas as simply as I'm able.  It's important to note that assessment can have a range of purposes. You [...]

How do we know pupils are making progress? Part 1: The madness of flight paths

March 23rd, 2019 | assessment, curriculum

Schools are desperate to find ways to predict students' progress from year to year and between key stages. Seemingly, the most common approach to solving this problem is to produce some sort of 'flight path'. The internet is full of such misguided attempts to do the impossible. Predicting a student's progress is a mug's game. It can't be done. At the level of a nationally representative population sample, we can estimate the likelihood of someone measured as performing at one level going on to attain another level, but this is meaningless at the level of the individual. It should therefore be obvious that using [...]

Garbage in, garbage out

October 16th, 2018 | assessment

This is my latest article for the rather wonderful Teach Secondary magazine. Schools are awash with data, but do we know any more about how children are performing? Are we clear how likely they are to achieve particular targets? Can we diagnose what's preventing them from making progress? All too often the answer is no. The problem can be simply summed up as data ≠ knowledge. Here's a lovely video of celebrity chef Jamie Oliver showing a group of youngsters what goes into chicken nuggets. After whizzing up a mixture of skin, bone and "horrible bits", he explains that manufacturers squeeze [...]

Put down your crystal balls

July 3rd, 2017 | assessment, leadership

Many of the schools I visit and work with feel under enormous pressure to predict what their students are likely to achieve in their next set of GCSEs. In the past, this approach sort of made sense. Of course there was always a margin for error, but most experienced teachers just knew what a C grade looked like in their subject. Also, when at least half of students' results were based on 'banked' modular results, the prospect of prediction became ever more enticing. Sadly, the certainties we may have relied on have gone. Not only has Ofqual worked hard to [...]

Why feedback fails

January 10th, 2017 | assessment

Feedback is one of the few things in education that pretty much everyone agrees is important and worthwhile. The need for feedback is obvious: if you were expected to learn how to reverse park a car whilst wearing a blindfold, you would be very unlikely to learn how to go about this without causing damage either to your car or to the environment. In order to learn you would need to see where you were going and what happened when you turned the wheel. We get this sort of trial-and-error feedback all the time; we act and then observe the effects of [...]

Making a mockery of marking: The new GCSE English Language mocks

December 5th, 2016 | assessment

The following is a guest post from the mastermind of Comparative Judgement, Dr Chris Wheadon. The marking of English Language is likely to be extremely challenging this year. English Language has long-form answer questions, typically with 8, 16 and 24 mark responses. Ofqual's research suggests the following range of precision is normal across GCSE and A level:

- 8 mark items: +/- 3 marks
- 16 mark items: +/- 4 marks
- 24 mark items: +/- 6 marks

So, when an 8 mark item is marked, it is normal for one marker to give the same response 4 marks while another gives it 7 [...]