Knowledge, n.: The small part of ignorance that we arrange and classify.
Ambrose Bierce
Advanced Learning has commissioned me to write a piece about the uses and abuses of data in schools. My thesis, if that’s not too grand a term, is that while data can be extraordinarily useful in helping us make good decisions, too much data leads, inexorably, to overload. When we have too much data we start doing silly things with it, just because we have it. The cost of bad data is the conviction that we have figured out all the possible permutations and know exactly what we’re doing and why we’re doing it. This is an illusion.
Here’s the introduction:
The more data schools collect, the better they will understand their students, right? Well, maybe not. Much of the data schools collect distorts how we think and warps the decisions we make about what to teach. The illusion of knowing is comforting, but maybe we’d be better off confronting the uncomfortable truth that we know far less than we suppose. As we find stuff out, we reveal new boundaries to what we know. As the island of knowledge grows, so does its shoreline, and beyond that shore lies a vast ocean of ignorance. As Nate Silver put it: “Even if the amount of knowledge in the world is increasing, the gap between what we know and what we think we know may be widening.”
And, if you’re interested in reading more, here’s the paper. See what you think.
Acknowledgements and thanks must go to Jack Marwood, Dylan Wiliam, David Thomas, Chris Wheadon, Rob Coe and Colin Hegarty who have all helped expand the coastline of my ignorance in different ways.
I think you are overly critical of external assessment. Yes, there are limitations to what it can say, and improvements are possible. However, compared to any other external measure, external assessment is streets ahead in terms of providing meaningful insight into school and pupil achievement.
An idea that your essay has helped to crystallise for me is that the most wasteful assessment is the assessment in the middle ground – things like end-of-unit tests. These are the lifeblood of school data systems that want neat half-termly inputs, but they are the tests with the least value – they have all the validity problems you describe and they come too late to have an impact on teaching/learning.
I agree that external assessment data is better than teacher assessment (which is, for the most part, veridically worthless). I’ve written about that here: https://www.learningspy.co.uk/assessment/tests-dont-kill-people/
The problem with the way students are assessed externally is that we trade off validity against reliability. This might be the answer: https://www.learningspy.co.uk/assessment/can-we-assess-progress-2/
I don’t completely agree about the reliability/validity trade-off. Much as I would love to see improvements, GCSEs and A-levels are largely fit for purpose, and your essay does seem to lump them in with the much less rigorous/useful teacher assessments.
I’m also not convinced by the potential for comparative judgement. I taught science, where ‘good’ has much less to do with flair and much more to do with technical accuracy and arriving at valid conclusions. This means it is easier to state objectively what raises an essay/long answer to a certain level.
My own pet solution is to assess certainty – an approach that I think has merit both for summative assessment (due to the increased accuracy and reliability) and formative assessment (due to the better insight). Whilst the method is limited to closed/objective questions, I think it can give a level of insight into things that, currently, essays are used to assess: http://www.whatdoyoureallyknow.com/blog/2015/06/breadth-and-depth/.
For my thoughts on it see http://www.whatdoyoureallyknow.com/ and for some crunchy research on it see http://www.ucl.ac.uk/lapt/ .
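If it helps to see how the scoring works, here is a minimal sketch of certainty-based marking in Python. The mark scheme follows the one used on the UCL LAPT site linked above (correct answers score 1, 2 or 3 according to the confidence claimed; wrong answers score 0, −2 or −6), but the function names are mine and the exact values should be read as illustrative rather than definitive:

```python
# A minimal sketch of certainty-based marking (CBM). The penalty for a wrong
# answer grows with the confidence claimed, so the expected mark is maximised
# by reporting your certainty honestly. Values follow the UCL LAPT scheme.

CBM_MARKS = {
    1: (1, 0),    # low confidence:  +1 if correct,  0 if wrong
    2: (2, -2),   # mid confidence:  +2 if correct, -2 if wrong
    3: (3, -6),   # high confidence: +3 if correct, -6 if wrong
}

def cbm_score(correct: bool, confidence: int) -> int:
    """Mark a single answer, given the confidence (1-3) the student claimed."""
    right, wrong = CBM_MARKS[confidence]
    return right if correct else wrong

def expected_mark(p: float, confidence: int) -> float:
    """Expected mark if the answer is right with probability p."""
    right, wrong = CBM_MARKS[confidence]
    return p * right + (1 - p) * wrong

# Honest reporting is optimal: confidence 1 is best when p < 2/3,
# confidence 2 when 2/3 < p < 4/5, and confidence 3 when p > 4/5.
for p in (0.5, 0.7, 0.9):
    best = max((1, 2, 3), key=lambda c: expected_mark(p, c))
    print(f"p = {p}: best confidence = {best}")
```

The asymmetry is the whole point of the design: overclaiming is punished far more heavily than underclaiming, which is what makes the resulting scores more informative than a simple right/wrong tally.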
David
Thank you so much for the full article.
My school is still getting us to use old levels (which will then be magically converted to new GCSE grades). I have seen two depts already cobbling together some present levels,
e.g. on a test out of 60, 56 is a 7c, 52 is a 7b, or something like that.
One dept had a bold 90% is 7c, 80% is…, 70% is…
The linear nonsense continues.
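To see how arbitrary this is, here is a minimal sketch of the kind of linear conversion being described – in Python, with entirely made-up cut-offs (neither department’s actual boundaries):

```python
# A hypothetical linear score-to-level conversion of the kind described above.
# The percentage bands are invented for illustration; nothing justifies drawing
# the boundaries at these particular points, which is precisely the problem.

def score_to_level(score: int, max_score: int = 60) -> str:
    """Map a raw mark to an old-style sub-level via fixed percentage bands."""
    percent = 100 * score / max_score
    bands = [(90, "7a"), (80, "7b"), (70, "7c"), (60, "6a"), (50, "6b")]
    for cut_off, level in bands:
        if percent >= cut_off:
            return level
    return "6c or below"

for mark in (56, 52, 43):
    print(f"{mark}/60 -> {score_to_level(mark)}")
```

Shift any boundary by a couple of marks, or change the length of the test, and the same pupil lands on a different sub-level – which is why second-guessing the tracking system is so futile.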
I have nothing against rank order, but trying to guess whether the tracking system wants a 5a or a 6c is a complete waste of everyone’s time…
Have to pick up the kids…
Will add to this reply soon [something sinister to report…]
…You also have to predict what level they’ll be on according to an interim target calculated all the way back from an FFT-derived (I assume) GCSE grade. This applies even to Year 7…
Here’s the sinister bit… I get the impression that any member of staff who guesses the correct predicted end-of-year level spot on (i.e. everyone making perfect ‘ON target’ progress) will be in trouble. Managers will work out a difference column and automatically spot someone who is making the data up (ha! it’s all made up anyway!). So the game seems to be: put in the levels they want, but make sure some are a bit above and some are a bit below. My subject is one of the easier ones to measure, but even this has difficulties. I have to feel sorry for PE and drama teachers (no offence to them, of course, but I imagine their subjects are less measurable). Do we really want teachers agonising and spending so much time on this? I also find it hard to believe how oblivious managers seem to be regarding the present developments.
Great article. Thanks.
I hope your work on the workload group is going well.
I had a very nice reply from workload solutions today and believe they are listening to ordinary teachers.
https://www.tes.com/news/school-news/breaking-news/exclusive-workload-would-be-easier-if-you-understood-benefits-dfe
Is it just me, or is this article ambiguous? Are the DfE saying data collection should change/reduce so that we only input what is useful… or are they saying it IS useful, so stop complaining and get on with it? I am starting to lose hope.
Reminds me of my favourite Feynman quote about the difference between “knowing the name of something and knowing something”.
Cheers
Glen
[…] November The Illusion of Knowing – the most I’ve ever been paid to write anything. […]