Put down your crystal balls


Many of the schools I visit and work with feel under enormous pressure to predict what their students are likely to achieve in their next set of GCSEs. In the past, this approach sort of made sense. Of course there was always a margin for error, but most experienced teachers just knew what a C grade looked like in their subject. And when at least half of students' results were based on 'banked' modular results, the temptation to predict became ever stronger. Sadly, the certainties we may have relied on have gone. Not only have Ofqual worked hard to [...]

Put down your crystal balls 2017-07-04T09:32:36+00:00

Why feedback fails


Feedback is one of the few things in education that pretty much everyone agrees is important and worthwhile. The need for feedback is obvious: if you were expected to learn how to reverse park a car whilst wearing a blindfold, you would be very unlikely to learn how to go about this without causing damage either to your car or to the environment. In order to learn you would need to see where you were going and what happened when you turned the wheel. We get this sort of trial and error feedback all the time; we act and then observe the effects of [...]

Why feedback fails 2017-05-28T13:40:02+00:00

Making a mockery of marking: The new GCSE English Language mocks


The following is a guest post from the mastermind of Comparative Judgement, Dr Chris Wheadon. The marking of English Language is likely to be extremely challenging this year. English Language has long-form answer questions, typically with 8, 16 and 24 mark responses. Ofqual's research suggests the following range of precision is normal across GCSE and A level:

- 8 mark items: +/- 3 marks
- 16 mark items: +/- 4 marks
- 24 mark items: +/- 6 marks

So, when an 8 mark item is marked, it is normal for one marker to give the same response 4 marks, while another will give 7 [...]
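A quick sketch of what those tolerances imply in practice. This is just an illustration of the arithmetic, not Ofqual's actual methodology; the `within_normal_range` helper and its name are my own invention.

```python
# Tolerances quoted above, keyed by item tariff (maximum mark).
TOLERANCE = {8: 3, 16: 4, 24: 6}

def within_normal_range(tariff: int, mark_a: int, mark_b: int) -> bool:
    """Return True if two markers' scores for the same response differ
    by no more than the quoted tolerance for that item tariff."""
    return abs(mark_a - mark_b) <= TOLERANCE[tariff]

# The example from the post: on an 8-mark item, one marker gives 4
# and another gives 7. A difference of 3 sits exactly on the +/- 3 band,
# so this counts as 'normal' disagreement.
print(within_normal_range(8, 4, 7))
```

In other words, two markers can disagree by nearly half the available marks on an 8-mark item and still both be marking within the expected range.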

Making a mockery of marking: The new GCSE English Language mocks 2016-12-05T13:38:59+00:00

Go Compare!


Another one from Teach Secondary, this one from their assessment special. This time it's an overview of Comparative Judgement. Human beings are exceptionally poor at judging the quality of a thing on its own. We generally know whether we like something, but we struggle to accurately evaluate just how good or bad a thing is. It's much easier for us to compare two things and weigh up the similarities and differences. This means we are often unaware of what a 'correct' judgement might be and are easily influenced by extraneous suggestions. This is compounded by the fact that we aren't [...]

Go Compare! 2016-09-09T20:54:35+00:00

When assessment fails


I wrote yesterday about the distinctions between assessment and feedback. This sparked some interesting comment which I want to explore here. I posted a diagram which Nick Rose and I designed for our forthcoming book. The original version of the figure looked like this: We decided to do away with B - 'Unreliable but valid?' in the interests of clarity and simplicity. Sadly though, the world is rarely clear or simple. Clearly D is the most desirable outcome - the assessment provides reliable measurements which result in valid inferences about what students know and can do. It's equally clear that A is [...]

When assessment fails 2017-07-27T18:29:04+00:00

Feedback and assessment are not the same


"You don't figure out how fat a pig is by feeding it." – Greg Ashman

At the sharp end of education, assessment and feedback are often, unhelpfully, conflated. This has been compounded by the language we use: terms like 'assessment for learning' and 'formative assessment' are used interchangeably and for many teachers both are essentially the same thing as providing feedback. Clearly, these processes are connected - giving feedback without having made some kind of assessment is probably impossible in any meaningful sense and most assessment will result in some form of feedback being given or received - but they are not the same. [...]

Feedback and assessment are not the same 2016-07-11T21:18:46+00:00

10 Misconceptions about Comparative Judgement


I've been writing enthusiastically about Comparative Judgement to assess children's performance for some months now. Some people though are understandably suspicious of the idea. That's pretty normal. As a species we tend to be suspicious of anything unfamiliar and like stuff we've seen before. When something new comes along there will always be those who get over excited and curmudgeons who suck their teeth and shake their heads. Scepticism is healthy. Here are a few of the criticisms I've seen of comparative judgement:

- It's not accurate.
- Ranking children is cruel and unfair.
- It produces data which says whether a child has passed or [...]

10 Misconceptions about Comparative Judgement 2016-07-07T17:05:02+00:00

Proof of progress Part 3


Who's better at judging? PhDs or teachers? In Part 1 of this series I described how Comparative Judgement works and the process of designing an assessment to test Year 5 students' writing ability. Then in Part 2 I outlined the process of judging these scripts and the results they generated. In this post I'm going to draw some tentative conclusions about the differences between the ways teachers approach students' work and the way other experts might do so. After taking part in judging scripts with teachers, my suspicion was that teachers’ judgements might be warped by the long habit of relying on rubrics [...]

Proof of progress Part 3 2016-12-06T09:34:06+00:00

A marked decline? The EEF’s review of the evidence on written marking


Question: How important is it for teachers to provide written feedback on students' work? Answer: No one knows. This is essentially the substance of the Education Endowment Foundation's long-awaited review on written marking. The review begins with the following admission: ...the review found a striking disparity between the enormous amount of effort invested in marking books, and the very small number of robust studies that have been completed to date. While the evidence contains useful findings, it is simply not possible to provide definitive answers to all the questions teachers are rightly asking. [my emphasis] But then they go and spoil it all by [...]

A marked decline? The EEF’s review of the evidence on written marking 2016-05-19T10:45:32+00:00

Testing, testing… why one test can’t do everything


The thing which most seems to rile people about testing is the fact that it puts children under stress. A certain amount of stress is probably a good thing - there's nothing as motivating as a looming deadline - but too much is obviously a bad thing. Martin Robinson writes here that ... a teacher needn’t pass undue exam stress onto her pupils, and a Headteacher needn’t pass undue stress onto her teachers. People work less well under a lot of stress; by passing it down the chain, each link ceases to function so well. Therefore if a school wants to [...]

Testing, testing… why one test can’t do everything 2016-05-17T19:16:16+00:00

Workload Challenge: Marking


The three areas identified by teachers' responses to the Workload Challenge as particularly burdensome were marking, planning and data, and a separate report has been prepared on each. One of the problems encountered in preparing these reports is the lack of a robust evidence base. Too often those involved in compiling the reports were forced to rely on professional judgement and 'common sense' interpretations of what little evidence there was. One of the themes which ran through all our work was the belief that marking, planning and data are proxies for teacher performance. On its own, this might be fine - proxies are often the [...]

Workload Challenge: Marking 2016-11-02T18:04:15+00:00

Proof of progress Part 2


Back in January I described the comparative judgement trial that we were undertaking at Swindon Academy in collaboration with Chris Wheadon and his shiny new Proof of Progress system. Today, Chris met with our KS2 team and several brave volunteers from the secondary English faculty to judge the completed scripts our Year 5 students had written. Chris began proceedings by briefly describing the process and explaining that we should aim to make a judgement every 20 seconds or so. The process really couldn't be simpler: the system displays two scripts at a time and you just have to judge which one you think is [...]
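For the curious, here's a minimal sketch of how a rank order can be recovered from lots of quick pairwise judgements. This uses a simple Bradley-Terry-style iterative fit; it's an illustration of the general idea, not the actual algorithm inside Proof of Progress, and the function names are mine.

```python
from collections import defaultdict

def bradley_terry_rank(judgements, n_iter=200):
    """Estimate a 'quality' strength for each script from (winner, loser)
    pairs, using the classic iterative Bradley-Terry update, and return
    the scripts sorted best-first."""
    wins = defaultdict(int)          # total wins per script
    pair_counts = defaultdict(int)   # comparisons per unordered pair
    scripts = set()
    for winner, loser in judgements:
        wins[winner] += 1
        pair_counts[frozenset((winner, loser))] += 1
        scripts.update((winner, loser))

    strength = {s: 1.0 for s in scripts}
    for _ in range(n_iter):
        new = {}
        for s in scripts:
            # Sum expected comparisons against every opponent s was paired with.
            denom = 0.0
            for pair, n in pair_counts.items():
                if s in pair:
                    (other,) = pair - {s}
                    denom += n / (strength[s] + strength[other])
            new[s] = wins[s] / denom if denom else strength[s]
        # Rescale so strengths don't drift (the model is scale-invariant).
        total = sum(new.values())
        strength = {s: v * len(scripts) / total for s, v in new.items()}

    return sorted(scripts, key=strength.get, reverse=True)

# Toy example: script A beats B twice, B beats C once, A beats C once.
order = bradley_terry_rank([("A", "B"), ("A", "B"), ("B", "C"), ("A", "C")])
print(order)  # A ranked above B, B above C
```

The appeal is that no judge ever has to decide what a 'good' script is in the abstract; they only ever say which of two is better, and the statistics do the rest.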

Proof of progress Part 2 2016-07-06T22:04:47+00:00