Many of the schools I visit and work with feel under enormous pressure to predict what their students are likely to achieve in their next set of GCSEs. In the past, this approach sort of made sense. Of course there was always a margin for error, but most experienced teachers just knew what a C grade looked like in their subject. And when at least half of students’ results were based on ‘banked’ modular results, predicting became ever more tempting. Sadly, the certainties we may have relied on have gone. Ofqual has worked hard to ensure that it’s more or less impossible to predict what grade a particular pupil is likely to get, and, for the new maths and English exams in particular, we don’t have even the vaguest idea of what the standard might be like at any of the new grades. There is also a natural volatility in results: they go up and down and it’s got nothing to do with you. In the past, for subjects like English language, this volatility has been as high as 19% up or down from previous years for individual schools. The expectation is that this year it may be even higher.
If you are making these predictions in this climate, you’re just guessing. Although your guess might turn out to be lucky, it’s based on a house of cards assembled on the sandiest of shores. There is no assessment that accurately allows us to predict future performance, so when we say things like, ‘He’s a strong B’ or, ‘She’ll definitely make her target grade,’ we might as well be reading tea leaves or trying to make sense of chicken guts. The problem is that, because you’ve input some data, your guesses feel a bit more ‘mathsy’ than that. But, because the data is based on made-up numbers, it can’t tell you much at all.
Think of it like this: if I give a class an assessment and find that 75% get a passing grade, and then use this information to predict what grades the students will get in a few months’ time, it might feel like I’m doing something robust and sensible, but I’m not. It would be essentially the same as finding out what each student watched on telly last night and attempting to predict what channel they’re likely to be watching in a few months’ time.
The error will be compounded by how many times you repeat the exercise. In Antifragile, Nassim Nicholas Taleb says this:
Assume… that for what you are observing, at a yearly frequency, the ratio of signal to noise is about one to one (half noise, half signal)—this means that about half the changes are real improvements or degradations, the other half come from randomness. This ratio is what you get from yearly observations. But if you look at the very same data on a daily basis, the composition would change to 95 percent noise, 5 percent signal. And if you observe data on an hourly basis, as people immersed in the news and market price variations do, the split becomes 99.5 percent noise to 0.5 percent signal. (p.126)
The more data you collect and the more you try to analyse it, the less you are likely to perceive. Looking at the past leads us into believing we can control the present and predict the future.
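Taleb’s ratios can be reproduced with a back-of-an-envelope calculation. The numbers below are made-up assumptions purely for illustration: suppose a pupil genuinely improves by one ‘unit’ a year (the signal), while any single assessment carries a random measurement error with a standard deviation of half a unit (the noise). The more often you measure, the smaller the slice of real progress captured between two measurements, while the noise stays the same size:

```python
# Illustrative sketch with made-up numbers: why frequent data collection
# drowns signal in noise, in the spirit of Taleb's ratios.

signal_per_year = 1.0  # assumed real improvement per year (the signal)
noise_sd = 0.5         # assumed error sd on any single assessment (the noise)

for label, obs_per_year in [("yearly", 1), ("termly", 3), ("weekly", 39)]:
    # Real progress captured between two consecutive observations:
    signal_per_obs = signal_per_year / obs_per_year
    # The observed change subtracts two independent errors,
    # so its noise has sd = noise_sd * sqrt(2):
    noise_of_change = noise_sd * 2 ** 0.5
    ratio = signal_per_obs / noise_of_change
    print(f"{label:7s} signal-to-noise per observed change = {ratio:.3f}")
```

On these assumptions, a yearly comparison has a signal-to-noise ratio above one, while a weekly comparison is almost entirely noise: most week-to-week wobbles in the data tell you nothing about real progress.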
So, what should we do? It’s all very well to accept that predicting grades isn’t possible, but if you’ve got an Ofsted inspector breathing down your neck asking for the impossible, what can you do? Fear not, help is at hand. In a recent speech, Amanda Spielman, Her Majesty’s Chief Inspector, said this:
At Ofsted, we are all too aware of the challenge of interpreting data wisely and placing it in its proper context. And we are particularly conscious of the changing exam landscape and all the increased volatility of results in periods of transition. We know, for example, that it is particularly difficult to predict outcomes this year in the new English and maths GCSEs … no one in schools – however good – can predict Progress 8 this accurately. So … I have been really clear that our inspectors aren’t expecting these predictions. Instead, we will be looking at whether schools know that pupils are making progress and, if they are not, whether the management team is taking effective action.
No one is expecting schools to predict results; all you’re expected to do is know how your students have performed in the past, and what you need to do to ensure they make progress. The difference might not seem immediately obvious, but this is a much more sensible approach to data. Let’s re-run the scenario from earlier where I found that 75% of students achieved a ‘passing grade’ in an assessment. Instead of using this information to predict the future, I should instead look at the elements that students struggled with. If I find, for instance, that over half struggled with topic x but pretty much everyone did OK with topic y, then a solution presents itself: we need to spend less time on topic y and more time on topic x. Compare that with our ‘real world’ example: I find out what students watched on telly last night and then use this to introduce them to things they might be better off watching.
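The shift from predicting to diagnosing can be sketched in a few lines. Everything below (the pupils, the scores, the topics, the 5-out-of-10 ‘struggled’ threshold) is invented purely for illustration; the point is that the same assessment data answers the question ‘what do they need next?’ rather than ‘what grade will they get?’:

```python
# Hypothetical data: each pupil's score per topic, out of 10.
results = {
    "Anna":  {"topic x": 3, "topic y": 8},
    "Ben":   {"topic x": 4, "topic y": 9},
    "Chloe": {"topic x": 7, "topic y": 8},
    "Dev":   {"topic x": 2, "topic y": 7},
}

threshold = 5  # below this counts as "struggled" (an arbitrary cut-off)

for topic in ["topic x", "topic y"]:
    strugglers = sum(1 for scores in results.values() if scores[topic] < threshold)
    share = strugglers / len(results)
    flag = "needs more teaching time" if share > 0.5 else "secure"
    print(f"{topic}: {share:.0%} struggled -> {flag}")
```

With these made-up numbers, topic x gets flagged (three of four pupils below the threshold) while topic y comes out secure, which is exactly the ‘spend less time on y, more on x’ decision described above, and no prediction of future grades is required.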
This is what using data to demonstrate students’ progress looks like: find out what children can actually do, and then show how you have tried to meet their needs.