You don’t figure out how fat a pig is by feeding it.
Greg Ashman
At the sharp end of education, assessment and feedback are often, unhelpfully, conflated. This has been compounded by the language we use: terms like ‘assessment for learning’ and ‘formative assessment’ are used interchangeably, and for many teachers both are essentially the same thing as providing feedback. Clearly, these processes are connected – giving feedback without having made some kind of assessment is probably impossible in any meaningful sense, and most assessment will result in some form of feedback being given or received – but they are not the same.
It’s worth giving some simple definitions:
Assessment is the process of judging or deciding the amount, value, quality, or importance of something.
Feedback is information given on the amount, value, quality, or importance of the thing being judged or measured.
Assessing students’ performance is a complex business. It might seem obvious that we could simply ask students questions to find out what they’ve learned, but how do we know we’re asking the right questions? Our questions often prompt students to give particular answers and are unlikely to reveal the full extent of what they know. Any inferences we make about what or whether students have learned are likely to be flawed unless we have a decent working knowledge of reliability and validity.
Validity asks us to consider whether we are measuring the things we claim to be measuring and whether the interpretations we make of students’ test scores and the decisions we subsequently make are reasonable.
Reliability represents the extent to which a measure stays the same when different students are assessed by different teachers, or when the same students are given the same assessment on different occasions.
NB – This is a massive oversimplification: there’s a lot more to it than that!
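If it helps to make ‘reliability’ concrete, here’s a minimal sketch in Python – with invented scores, purely for illustration – of one common way of estimating it: correlating the scores the same students get on two sittings of the same test.

```python
# Toy illustration of test-retest reliability: the correlation between
# scores awarded to the same students on two sittings of the same test.
# All numbers are invented for illustration.

from math import sqrt
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists of scores."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical scores for six students on two sittings.
sitting_1 = [54, 61, 72, 48, 66, 80]
sitting_2 = [57, 59, 75, 45, 70, 78]

print(f"Test-retest reliability estimate: {pearson_r(sitting_1, sitting_2):.2f}")
# A value near 1 suggests scores are consistent across occasions; a low value
# suggests they depend heavily on the occasion, the marker, or other noise.
# Note this says nothing about validity: a test can yield beautifully
# consistent scores while measuring the wrong thing entirely.
```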
Feedback tends to be much better understood than assessment, but still, there’s a lot to be gained from understanding the differences. Assuming the assessment we’ve done is reliable and the inferences we’ve made are valid, we’re in a position to give meaningful feedback. Of course, just because we’ve got some useful feedback doesn’t mean we’ll communicate it in a way students understand how to use, or that they’ll choose to use it if they do understand it. However, giving feedback based on unreliable assessments and invalid inferences might be disastrous. At best it will be ignored, but if students do take such feedback seriously they might try to improve something which doesn’t need changing or, more likely, leave unchanged an aspect of their work which does need to be improved.
Three posts on assessment which might be useful are here, here and here. I’ve also written extensively about feedback; maybe the two most useful posts are here and here. Also, there are separate chapters on both assessment and feedback in the new book What Every Teacher Needs To Know About Psychology.
One aspect of the feedback process that Dylan Wiliam highlighted is feedback from the pupil to the teacher about what the pupil needs, with a process in place whereby the teacher(s) can respond to that request.
Yes – although this aspect, identified in Hattie & Timperley (2007), is not normally confused with assessment. Please feel free to read the linked posts above for more details.
I read about it in Dylan’s booklet on assessment for learning. That must explain my confusion.
Oh, it’s been well known for decades. I shouldn’t worry about it. As you can see, Dylan’s left a comment on this post. Why not ask him where research on feedback from students to teachers originates?
As David implies, an assessment cannot be valid and unreliable. Reliability is therefore a prerequisite for validity. But this creates a conceptual difficulty, because reliability is often in tension with validity, with attempts to increase reliability having the effect of reducing validity. Here’s the way I have found most useful in thinking about this.
Validity is a property of inferences based on assessment results. The two main threats to valid inferences are that the assessment assesses things it shouldn’t (e.g., a history exam assesses handwriting speed and neatness as well as historical thinking) and that the assessment doesn’t assess things it should (e.g., assessing English by focusing only on reading and writing, and ignoring speaking and listening). When an assessment assesses something it shouldn’t, this effect can be systematic or random. If bad handwriting lowers your score in a history examination, this is a systematic effect because it will affect all students with bad handwriting. If, however, the score is lowered because of the particular choice of which topics students have revised, then this is a random effect. The choice of topics in any particular exam will help some students and not others. As another random effect, some students get their work marked by someone who gives the benefit of the doubt, and others get their work marked by someone who does not. We call the random component of assessments assessing things they shouldn’t an issue of reliability.
Thanks Dylan. That’s a very helpful distinction.
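To picture the distinction Dylan draws, here’s a toy simulation in Python – every number invented, purely illustrative – in which a systematic effect (a handwriting penalty) depresses the scores of every untidy writer, while random effects (topic luck, marker leniency) help or harm students unpredictably.

```python
# Toy simulation of systematic vs random components of an assessment
# measuring things it shouldn't. All parameters are invented.

import random

random.seed(1)  # reproducible example

# (name, true historical-thinking ability out of 100, untidy handwriting?)
students = [("A", 70, True), ("B", 70, False), ("C", 55, True), ("D", 55, False)]

HANDWRITING_PENALTY = 8  # systematic: hits every untidy writer identically

def exam_score(ability, untidy):
    score = ability
    if untidy:
        score -= HANDWRITING_PENALTY       # systematic effect: never averages out
    score += random.choice([-5, 0, 5])     # random effect: luck of topics revised
    score += random.choice([-3, 3])        # random effect: lenient or harsh marker
    return score

for name, ability, untidy in students:
    print(f"{name}: true ability {ability}, exam score {exam_score(ability, untidy)}")
# Over many sittings the random components average out (a reliability issue),
# but the untidy writers are penalised every single time (a validity issue),
# however consistent the scores look.
```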