Back in March I wrote a post called Why AfL might be wrong, and what to do about it, based largely on Dylan Wiliam’s book Embedded Formative Assessment (if you haven’t already read it, I encourage you to do so, as it specifically addresses many of the common misconceptions about AfL). I’m pleased to report that Dylan has taken time out of his hectic schedule to comment on the post and defend the essentials of formative assessment. What follows is, in its entirety, the comment left on the original post.
In his post on “Why AfL might be wrong, and what to do about it” David Didau points out (correctly) that it is impossible to assess what students have learned in an individual lesson. As John Mason once said, “teaching takes place in time, but learning takes place over time” (Griffin, 1989). The ultimate test of any teaching is long-term changes in what students can do (or avoid doing, such as getting pregnant or taking drugs). The problem with such an approach to teaching is that if we wait until we see the long-term evidence, it will be too late. An analogy with automobile manufacturing may be helpful here.
In the 1970s and 1980s, the major American and European car-makers had smart people designing production processes through which vehicles progressed, and then, at the end of the process, the finished product would be inspected to see if it worked properly. The Japanese, building on the work of W. Edwards Deming (an American), realized that it would be far better to build quality into the manufacturing process. If something was wrong with a car, any worker on the production line had the authority to stop the line to make sure that the problem was fixed before the vehicle moved any further along the production process. This approach is often described as “building quality in” to the production process, rather than “inspecting quality in” at the final stage. Another way of describing the difference is as a move from quality control to quality assurance.
Similarly, in teaching, while we are always interested in the long-run outcomes, the question is whether attending to some of the shorter-term outcomes can help us improve learning for young people.
This is an extraordinarily complex task, because we are trying to construct models of what is happening in a student’s mind when this is not directly observable. Ernst von Glasersfeld described the problem thus:
“Inevitably, that model will be constructed, not out of the child’s conceptual elements, but out of the conceptual elements that are the interviewer’s own. It is in this context that the epistemological principle of fit, rather than match is of crucial importance. Just as cognitive organisms can never compare their conceptual organisations of experience with the structure of an independent objective reality, so the interviewer, experimenter, or teacher can never compare the model he or she has constructed of a child’s conceptualisations with what actually goes on in the child’s head. In the one case as in the other, the best that can be achieved is a model that remains viable within the range of available experience.” (von Glasersfeld, 1987 p. 13)
So I agree with David that we can never be sure that the conclusions we draw about what our students have learned are the right conclusions. This is why my definition of formative assessment does not require that the inferences we make from the evidence of student achievement actually improve student learning—learning is too complex and messy for this ever to be certain. What we can do is increase the odds that we are making the right decision on the basis of evidence rather than hunch.
In terms of the five strategies, I was surprised that David’s post specifically focused on critiquing success criteria. In fact, I went back to read what I had written on this in 2011 to check what I had said. Much of chapter 3 of my book Embedded Formative Assessment (where I discuss learning intentions) is spent railing against success criteria, and arguing that a shared construct of quality (what Guy Claxton calls a “nose for quality”) is what we should be aiming for, although on those rare occasions when we can spell out the rules for success, we should, of course, do so. Michael Polanyi’s work on “Personal Knowledge”, now over 50 years old, is still the definitive work on this, in my opinion.
In terms of eliciting evidence (which is definitely not just questioning, by the way, as I go to considerable lengths to show), of course we never really know what is happening in the student’s head, but I am confident that teaching will be better if the teacher bases their decisions about what to do next on a reasonably accurate model of the students’ thinking. There will also, I suspect, be strong differences across disciplines here. Asking a well-framed question in science may reveal that a student has what Jim Minstrell calls a facet of thinking (DeBarger et al., 2009) that is different from the intended learning: for example, a student may think that a piece of metal left outside on a winter’s night is colder than the wooden table on which it rests, when in fact the temperatures of the two are the same (the metal feels colder because it conducts heat away from the hand faster). You may not get rid of the facet of thinking quickly, but knowing that the student thinks this has to be better than not knowing it.
As for feedback, there really is a lot of rot talked about how feedback should and should not be given. People say that feedback should be descriptive, and maybe a lot of the time it should be, but people forget that the only good feedback is that which is acted upon, which is why the only thing that matters is the relationship between the teacher and the student. Every teacher knows that the same feedback given to one student will improve that student’s learning but, given to another student of similar achievement, will make that student give up. Teachers need to know their students, so that they know when to push and when to back off. There will be times when it is perfectly appropriate to say to a student that a piece of work really isn’t worth marking because they have “phoned it in”, and other times when this would be completely inappropriate. Just as importantly, students need to trust their teachers. If they don’t think the teacher knows what he or she is talking about, or doesn’t have the student’s best interests at heart, the student is unlikely to invest the effort required to take the feedback on board. That is why almost all of the research on feedback is a waste of time: hardly any studies look at the responses cued in the individual recipient by the feedback.
The quote about collaborative/co-operative learning being one of the success stories of educational research comes from Robert Slavin, who has probably done more high-quality work in this area than anyone (Slavin et al., 2003). The problem is that few teachers ensure that the two criteria for collaborative learning are in place: group goals (so that students are working as a group rather than just in a group) and individual accountability (so that any student falling down on the job harms the entire group’s work). [I wrote a post last year on Effective Group Work which makes these points.] And if a teacher chooses to use such techniques, the teacher is still responsible for the quality of teaching provided by peers. As David notes, too often, peer tutoring is just confident students providing poor-quality advice to their less confident peers.
Finally, in terms of self-assessment, it is, of course, tragic that in many schools self-assessment consists entirely of students making judgments about their own confidence that they have learned the intended material. We have over 50 years of research showing that such self-reports cannot be trusted. But there is a huge amount of well-grounded research showing that helping students improve their self-assessment skills increases achievement. David specifically mentions error-checking, which is obviously important, and my thinking here has been greatly advanced by working (in Scotland) with instrumental music teachers. Most teachers of academic subjects seem to believe that most of the progress made by their students is made when the teacher is present. Instrumental music teachers know this can’t work. The amount of progress a child can make on the violin during a 20- or 30-minute lesson is very small. The real progress comes through practice, and what I have been impressed to see is how much time and care instrumental music teachers take to ensure that their pupils can practise effectively.
So in conclusion, David has certainly provided an effective critique of “assessment for learning” as enacted in government policy, and in many schools, but I don’t see anything here that forces me to reconsider how I think about what I call formative assessment. I remain convinced that as long as teachers reflect on the activities in which they have engaged their students, and what their students have learned as a result, then good things will happen.
References
Claxton, G. L. (1995). What kind of learning does self-assessment drive? Developing a ‘nose’ for quality: comments on Klenowski. Assessment in Education: principles, policy and practice, 2(3), 339-343. [This is behind a pay wall – I’ve been unable to find a pdf]
DeBarger, A. H., Ayala, C. C., Minstrell, J., Kraus, P., & Stanford, T. (2009). Facet-based progressions of student understanding in chemistry. Menlo Park, CA: SRI International.
Griffin, P. (1989). Teaching takes place in time, learning takes place over time. Mathematics Teaching, 12-13.
Polanyi, M. (1958). Personal Knowledge: Towards a Post-Critical Philosophy. London, UK: Routledge & Kegan Paul.
Slavin, R. E., Hurley, E. A., & Chamberlain, A. M. (2003). Cooperative learning and achievement. In W. M. Reynolds & G. J. Miller (Eds.), Handbook of psychology, volume 7: Educational psychology (pp. 177-198). Hoboken, NJ: Wiley. [This is behind a pay wall, but this paper, Co-operative Learning: What makes groupwork work?, also by Slavin, is available as a pdf.]
von Glasersfeld, E. (1987). Learning as a constructive activity. In C. Janvier (Ed.), Problems of representation in the teaching and learning of mathematics (pp. 3-17). Hillsdale, NJ: Lawrence Erlbaum Associates.
Wiliam, D. (2011). Embedded Formative Assessment. Bloomington, IN: Solution Tree.
Really enjoyable read, thanks David.
Thanks for posting Dylan’s comment in full. He has given us plenty to think about and it is great that he has made clear references to credible material. Well done Dylan!
So, education is complicated. There is evidence and research, but interpreting it can be problematic, and implementing findings even more difficult. It is the latter point that is more significant, I think, because this is what impacts on teachers in the classroom. It must surely be a no-brainer that formative assessment is a good thing: looking at where a student is and seeking to tailor future input so that progress can happen. The difficult bit is the implementation, and this is where it goes wrong. It is evident from Dylan’s response that the level of nuance required to implement the ideas effectively is formidable, and not very amenable to simplification and reduction to a few implementable bullet points. Important things are worth taking time over… ITT? (I am guessing that experienced teachers have probably already found out the truth of the situation, and if left to get on with it would probably implement effectively…?)
Yes, education is *so* complicated that it’s largely a waste of time spending so much time navel-gazing about it. Wiliam says that leaving evaluation to the long run risks bad outcomes, but there is absolutely nothing anyone can do about that, unless you somehow believe in Fate and think that all actions are somehow pre-ordained. (And even then your interventions wouldn’t really be in your control anyway!) There is no way that the outcomes of lives can be future-proofed by interventions in the present.
Dylan’s remedy is to attend to short-term issues. Fair enough – what else can one do, other than give up altogether? But there is so little confidence to be had about cause and effect here that it really is tantamount to Canute commanding the waves.
So, you stop someone from taking up smoking, and the outcome is that they live long enough to be run over by a bus instead. And maybe if you stop lots of pupils from smoking, someone somewhere loses their livelihood. That is meant tongue-in-cheek, but the point stands – you *cannot* know what short-term actions are for the best, as you simply cannot know the future.
The best we can do is make broad judgments about things that *in general* tend to promote better life outcomes, such as promoting education or health, but they are so general as to fall far outside the realm of a teacher’s specific decision-making with respect to a particular class or individual, whatever we choose to call it.
Wiliam also makes the classic error of choosing an inanimate situation for his analogy, and it doesn’t work – there is insufficient similarity between what you can know of the materials science needed to anticipate materials’ and machines’ integrity, and the complexity and *un*certainty that goes to make up a human life. This is nothing more than classic social engineer’s angst about the things they wish they could control but know they can’t.
Then he builds so many caveats around his observations that he effectively concedes that we can’t in any case know what he wants us to know in order to act as he stipulates. So it all boils down to wishful thinking. We would do better just to accept the fact that we can never have the certainty he craves.
I do agree with him, however, that these existential problems are vastly compounded by over-simplistic interpretations at school level.
Hits nail on the head. Put simply, formative assessment means different things to different people. Done well it works well, done badly it’s a waste of time and effort. No different from interpreting direct instruction as purely a teacher lecturing from the front of a class for an hour. The really interesting stuff is how much of which technique when and in what context.
A really interesting original post, a large number of comments and discussions, and then this outstanding response from Dylan Wiliam.
Thanks to everyone who contributed; my view of AfL is now much clearer.
For me, this has been one of the most useful exchanges I have seen anywhere, bringing together expertise from both practice and academic research.
I also feel that Ian Lynch summarised the situation perfectly above when he said:
“Hits nail on the head. Put simply, formative assessment means different things to different people. Done well it works well, done badly it’s a waste of time and effort. No different from interpreting direct instruction as purely a teacher lecturing from the front of a class for an hour. The really interesting stuff is how much of which technique when and in what context.”
The only thing I would humbly add is to append “and by whom.”
Thanks David and all that have contributed, I would love to see more posts like this one.
Dylan’s work is our philosophy for building software to engage learners. Every time we create code we check back against his principles. He is a genius.