assessment

How do we know pupils are making progress? Part 3: Assessment

March 26th, 2019 | assessment

In Part 1 of this series I set out the problems with making predictions about students’ progress by drawing a ‘flight path’ between KS2 and KS4, then, in Part 2, I explained how thinking about the curriculum as a progression model is essential in making judgements about whether students are making progress. In this post we will turn our attention to issues of assessment. NB. This might feel a bit technical at times, but please know that I'm trying hard to explain complex ideas as simply as I'm able.  It's important to note that assessment can have a range of purposes. You [...]

How do we know pupils are making progress? Part 1: The madness of flight paths

March 23rd, 2019 | assessment, curriculum

Schools are desperate to find ways to predict students' progress from year to year and between key stages. Seemingly, the most common approach to solving this problem is to produce some sort of 'flight path'. The internet is full of such misguided attempts to do the impossible. Predicting a student's progress is a mug's game. It can't be done. At the level of a nationally representative population sample we can estimate the likelihood that someone measured as performing at one level will attain another level, but this is meaningless at the level of individuals. It should therefore be obvious that [...]
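The gap between cohort-level estimates and individual prediction can be made concrete with a quick simulation. This is a hypothetical sketch with invented numbers (starting score, average gain, and spread are all made up), not real attainment data:

```python
import random

random.seed(42)

# Hypothetical sketch: 10,000 pupils who all start with the same KS2 score.
# Give the cohort an average gain of 10 points, but with the kind of
# individual-level spread real pupils show (standard deviation of 15).
KS2_SCORE = 100
N_PUPILS = 10_000
ks4_scores = [KS2_SCORE + random.gauss(10, 15) for _ in range(N_PUPILS)]

# The cohort average is predictable...
mean_gain = sum(s - KS2_SCORE for s in ks4_scores) / N_PUPILS

# ...but most individual pupils land nowhere near the 'flight path'.
on_path = sum(abs(s - (KS2_SCORE + 10)) <= 5 for s in ks4_scores) / N_PUPILS

print(f"Cohort average gain: {mean_gain:.1f}")
print(f"Pupils within 5 points of the flight path: {on_path:.0%}")
```

With these made-up parameters the cohort mean sits reliably near 10, yet only around a quarter of pupils finish within 5 points of the line drawn through it: the population estimate is sound, the individual prediction is not.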

Garbage in, garbage out

October 16th, 2018 | assessment

This is my latest article for the rather wonderful Teach Secondary magazine. Schools are awash with data but do we know any more about how children are performing? Are we clear how likely they are to achieve particular targets? Can we diagnose what’s preventing them from making progress? All too often the answer is no. The problem can be simply summed up as data ≠ knowledge. Here’s a lovely video of celebrity chef Jamie Oliver showing a group of youngsters what goes into chicken nuggets. https://www.youtube.com/watch?v=mKwL5G5HbGA After whizzing up a mixture of skin, bone and “horrible bits” he explains that manufacturers squeeze [...]

Put down your crystal balls

July 3rd, 2017 | assessment, leadership

Many of the schools I visit and work with feel under enormous pressure to predict what their students are likely to achieve in their next set of GCSEs. In the past, this approach sort of made sense. Of course there was always a margin for error, but most experienced teachers just knew what a C grade looked like in their subject. Also, when at least half of students' results were based on 'banked' modular results, predicting became ever more enticing. Sadly, the certainties we may have relied on have gone. Not only has Ofqual worked hard to [...]

Why feedback fails

January 10th, 2017 | assessment

Feedback is one of the few things in education that pretty much everyone agrees is important and worthwhile. The need for feedback is obvious: if you were expected to learn how to reverse park a car whilst wearing a blindfold you would be very unlikely to learn how to go about this without causing damage either to your car or to the environment. In order to learn you would need to see where you were going and what happened when you turned the wheel. We get this sort of trial and error feedback all the time; we act and then observe the effects of [...]

Making a mockery of marking: The new GCSE English Language mocks

December 5th, 2016 | assessment

The following is a guest post from the mastermind of Comparative Judgement, Dr Chris Wheadon. The marking of English Language is likely to be extremely challenging this year. English Language has long-form answer questions, typically with 8, 16 and 24 mark responses. Ofqual’s research suggests the following range of precision is normal across GCSE and A level: 8-mark items: ±3 marks; 16-mark items: ±4 marks; 24-mark items: ±6 marks. So, when an 8-mark item is marked, for the same response, it is normal for one marker to give 4 marks, while another will give 7 [...]
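The arithmetic behind those tolerances is worth spelling out: if each of two markers independently lands within tolerance of the 'true' mark, their marks can legitimately sit at opposite ends of the band and so differ by twice the tolerance. A small sketch (the item sizes and ranges are the ones quoted above; the worst-case calculation is mine):

```python
# Ofqual's reported precision per item, as quoted in the post:
# maximum mark -> normal marking tolerance (+/-).
tolerances = {8: 3, 16: 4, 24: 6}

for max_mark, tol in tolerances.items():
    # Two markers each within tolerance of the true mark can sit at
    # opposite ends of the band, so the worst-case disagreement is 2 * tol.
    worst_gap = 2 * tol
    print(f"{max_mark}-mark item: +/-{tol} -> up to {worst_gap} marks apart "
          f"({worst_gap / max_mark:.0%} of the total)")
```

On this reading, the 4 vs 7 example above (a gap of 3 marks on an 8-mark item) sits comfortably inside the normal range.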

Go Compare!

September 9th, 2016 | assessment

Another one from Teach Secondary, this one from their assessment special. This time it's an overview of Comparative Judgement. Human beings are exceptionally poor at judging the quality of a thing on its own. We generally know whether we like something but we struggle to accurately evaluate just how good or bad a thing is. It’s much easier for us to compare two things and weigh up the similarities and differences. This means we are often unaware of what a ‘correct’ judgement might be and are easily influenced by extraneous suggestions. This is compounded by the fact that we aren’t [...]
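The statistics behind Comparative Judgement can be sketched with a toy Bradley-Terry model. Everything here is invented for illustration (the scripts, their 'true' qualities, and the number of judgements): judges only ever say which of two scripts is better, yet a quality scale can be recovered from the pattern of wins.

```python
import math
import random

random.seed(0)

# Four hypothetical scripts with invented 'true' qualities on a logit scale.
true_quality = {"A": 2.0, "B": 1.0, "C": 0.0, "D": -1.0}
scripts = list(true_quality)

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

# Simulate paired judgements: under the Bradley-Terry model the chance
# that script i beats script j is sigmoid(quality_i - quality_j).
judgements = []
for _ in range(2000):
    i, j = random.sample(scripts, 2)
    if random.random() < sigmoid(true_quality[i] - true_quality[j]):
        judgements.append((i, j))   # i won, j lost
    else:
        judgements.append((j, i))

# Recover quality estimates by gradient ascent on the log-likelihood.
est = {s: 0.0 for s in scripts}
for _ in range(400):
    grad = {s: 0.0 for s in scripts}
    for winner, loser in judgements:
        p = sigmoid(est[winner] - est[loser])
        grad[winner] += 1 - p   # winner's quality pushed up
        grad[loser] -= 1 - p    # loser's quality pushed down
    for s in scripts:
        est[s] += 0.002 * grad[s]

ranking = sorted(scripts, key=est.get, reverse=True)
print("Recovered ranking:", ranking)
```

The point of the sketch is that many quick relative judgements combine into a stable quality scale, which is why our weakness at absolute judgement matters less than it might.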

When assessment fails

July 12th, 2016 | assessment

I wrote yesterday about the distinctions between assessment and feedback. This sparked some interesting comment which I want to explore here. I posted a diagram which Nick Rose and I designed for our forthcoming book. The original version of the figure looked like this: We decided to do away with B - 'Unreliable but valid?' in the interests of clarity and simplicity. Sadly though, the world is rarely clear or simple. Clearly D is the most desirable outcome - the assessment provides reliable measurements which result in valid inferences about what students know and can do. It's equally clear that A is [...]

Feedback and assessment are not the same

July 11th, 2016 | assessment

"You don't figure out how fat a pig is by feeding it." (Greg Ashman) At the sharp end of education, assessment and feedback are often, unhelpfully, conflated. This has been compounded by the language we use: terms like 'assessment for learning' and 'formative assessment' are used interchangeably and for many teachers both are essentially the same thing as providing feedback. Clearly, these processes are connected - giving feedback without having made some kind of assessment is probably impossible in any meaningful sense and most assessment will result in some form of feedback being given or received - but they are not the same. [...]

10 Misconceptions about Comparative Judgement

July 7th, 2016 | assessment

I've been writing enthusiastically about Comparative Judgement to assess children's performance for some months now. Some people, though, are understandably suspicious of the idea. That's pretty normal. As a species we tend to be suspicious of anything unfamiliar and like stuff we've seen before. When something new comes along there will always be those who get over excited and curmudgeons who suck their teeth and shake their heads. Scepticism is healthy. Here are a few of the criticisms I've seen of comparative judgement: It's not accurate. Ranking children is cruel and unfair. It produces data which says whether a child has passed or [...]

Proof of progress Part 3

July 6th, 2016 | assessment

Who's better at judging? PhDs or teachers? In Part 1 of this series I described how Comparative Judgement works and the process of designing an assessment to test Year 5 students' writing ability. Then in Part 2 I outlined the process of judging these scripts and the results they generated. In this post I'm going to draw some tentative conclusions about the differences between the ways teachers approach students' work and the way other experts might do so. After taking part in judging scripts with teachers, my suspicion was that teachers’ judgements might be warped by the long habit of relying on rubrics [...]

A marked decline? The EEF’s review of the evidence on written marking

May 18th, 2016 | assessment

Question: How important is it for teachers to provide written feedback on students' work? Answer: No one knows. This is essentially the substance of the Education Endowment Foundation's long-awaited review on written marking. The review begins with the following admission: ...the review found a striking disparity between the enormous amount of effort invested in marking books, and the very small number of robust studies that have been completed to date. While the evidence contains useful findings, it is simply not possible to provide definitive answers to all the questions teachers are rightly asking. [my emphasis] But then they go and spoil it all by [...]
