This is my latest article for the rather wonderful Teach Secondary magazine.
Schools are awash with data, but do we know any more about how children are performing? Are we clear how likely they are to achieve particular targets? Can we diagnose what’s preventing them from making progress? All too often the answer is no. The problem can be simply summed up as data ≠ knowledge.
Here’s a lovely video of celebrity chef Jamie Oliver showing a group of youngsters what goes into chicken nuggets.
After whizzing up a mixture of skin, bone and “horrible bits” he explains that manufacturers squeeze this revolting mixture into a machine that removes all the crunchy bits and leaves you with a smooth paste. At the end of the process he shows them some nuggets and asks, “Who would still eat this?” All their hands shoot up.
The data machine is similar. We make up numbers based on a combination of hunch, bias and ill-conceived assessments, feed this slop into a machine which removes anything inconvenient, and at the other end the slop is attractively repackaged as flight paths, progression models, age-related expectations and all sorts of other gibberish. Who would still eat this?
To know something about children and what their performance tells us about how likely they are to achieve, we need meaningful information. We need to know what happens to this information and we need to know what to do as a result of collecting and manipulating it. Sadly, it’s much easier to simply chow down on reheated, reconstituted giblets than to do the hard work of cooking up some delicious, nutritious data from scratch.
Way back at the very dawn of the information era, Victorian engineer Charles Babbage, inventor of the earliest mechanical computer, recounted being asked by various influential individuals, some of them members of parliament, variations on the following question: “Pray, Mr Babbage, if you put into the machine wrong figures, will the right answers come out?” He confessed, “I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.” The same confusion of ideas is alive and well a century and a half later. In some ways it’s got worse, as the modern data consumer doesn’t even realise wrong figures are routinely fed into the machine.
So, what’s the solution? First, and possibly most importantly, we need to abandon the notion that data and numbers are synonymous. We must get into the habit of asking what numbers actually represent. If a student is predicted to get a grade 5 in their Spanish GCSE, what does this mean? What are they predicted to know? What will they be able to do, and what will they struggle with? Sadly, in most cases, being predicted a 5 simply means better than a 4 but not as good as a 6. The numbers act as black holes, sucking meaning into themselves and warping our perceptions of reality.
Substituting labels for numbers is no better; we need to be clear about what such labels mean. If a parent is told their child is meeting ‘age-related expectations’, what does this tell them? Most parents might interpret this as shorthand for ‘Don’t worry, everything’s fine,’ but is it? There’s no such thing as a universal age-related expectation. You might expect a woman in her forties to be a mother, but there’s absolutely no guarantee that this will be true. How many of our expectations are similarly based on bias and stereotype? This is an essential part of the human condition. We find it hard to juggle too many variables, so it’s much easier to say this is like that. All we’re really saying is that this child is broadly able to do what an average of other children can also do. But what specifically is it they can – or can’t – do?
Then, what happens as a result of data having been collected and manipulated? Does it help anyone to know a child is performing above an age-related expectation or that they’re predicted to get a grade 4 in GCSE maths? The acid test for any form of data collection is to ask whether having it will lead to better teaching. How will this data help a teacher to understand what it is a pupil can’t do or doesn’t know? Too often, relying on the data machine only obscures this essential question.
Crucially, teachers and school leaders need to know that no one is asking them to collect garbage data. Both the DfE and Ofsted have stated, unequivocally, that there is no requirement for, in Amanda Spielman’s words, “byzantine number systems”. If schools choose to waste time in this way, they have only themselves to blame.
Dear David
The collection and use of data is a minefield for schools! It is easy to fixate on flight paths and 1 to 9 grades, but schools’ accountability systems do tend to rely on being able to say whether a cohort – typically a year group – is ‘on track’ at regular reporting points, and whether individual students are ‘on track’. Statements like these are mostly shorthand for ‘doing okay’ or ‘not doing okay’ in English/maths/science/geography and so on. Isn’t that what most parents want to know at parents’ evening, in the measly few minutes we have to talk to them?
If we don’t use 1 to 9 grades (which definitely feel wrong at KS3), with flight paths based on prior attainment to measure the progress of both individual students and cohorts, and if we shy away from reporting ‘on track’ or ‘making expected progress’, what do we do about tracking? We ARE accountable to our governors, who expect us to be able to report on our students’ progress in-year at various points, and we are very accountable when we face Ofsted, too. They ask, ‘How are your students doing? And how do you know?’
At the school where I work, we have recently introduced Cumulative Knowledge Tests (thank you, Michaela School!) that result in a percentage score, so students and their parents can see what they know and don’t know yet in terms of ‘key learning’, but we also still have flight paths and 1 to 9 grades. I don’t like this, at KS3 in particular. What do we do instead for tracking?