If you’re not already aware of my critique and Dylan Wiliam’s defence of formative assessment, I do recommend getting up to speed before reading this post.
Dylan’s defence rests on the idea that although we can never be sure what’s going on in a child’s mind, “teaching will be better if the teacher bases their decisions about what to do next on a reasonably accurate model of the students’ thinking.”
He makes a rather interesting and surprising point: it doesn’t matter that we can’t know what’s going on in our students’ minds because his “definition of formative assessment does not require that the inferences we make from the evidence of student achievement actually improve student learning”.
Hang on, what’s formative assessment then?
Here’s what Wikipedia has to say:
Formative assessment or diagnostic testing is a range of formal and informal assessment procedures employed by teachers during the learning process in order to modify teaching and learning activities to improve student attainment. It typically involves qualitative feedback (rather than scores) for both student and teacher that focuses on the details of content and performance. It is commonly contrasted with summative assessment, which seeks to monitor educational outcomes, often for purposes of external accountability.
And while we could pick a few holes in this definition this is, by and large, what I was under the impression most people believed formative assessment to be.
Well, it turns out that maybe this isn’t what Dylan Wiliam believes. From his comment I think we can infer that formative assessment is about “increase[ing] the odds that we are making the right [teaching] decision on the basis of evidence rather than hunch”, and that “as long as teachers reflect on the activities in which they have engaged their students, and what their students have learned as a result, then good things will happen.”
The essential elements of ‘formative assessment’ would appear to be these:
- We should make teaching decisions based on evidence.
- We should reflect on the activities students have engaged in and what they have learned as a result.
On the face of it this advice is both sound and wise. No one is seriously arguing that we should avoid making teaching decisions based on evidence, or that it’s a bad idea to reflect on the process of teaching and learning. But, and it’s a big but, what evidence? And what learning?
If we are to accept that it is best to “use evidence about learning to adapt teaching and learning to meet student needs”, we need to be pretty clear about what we’re doing.
Maybe the best evidence is our understanding of cognitive science and the limits of working memory? Or possibly our knowledge of the role of retrieval-induced forgetting? If we were to accept, for instance, Bjork’s concept of ‘desirable difficulties’ and his finding that current performance is a very poor indicator of future learning, then perhaps the very worst thing we could do is imagine that this evidence should be overthrown just because our students can respond to cues and prompts in the classroom.
If we were to accept these ideas about learning and memory, then possibly the most sensible approach is to amass evidence on the most effective ways to teach before embarking on a teaching sequence, and then to reflect on how successfully we believe we stuck to these principles. To refute this, Dylan cites the example of car manufacture and the Japanese practice of “building in quality” to the manufacturing process. He says, “Similarly, in teaching, while we are always interested in the long-run outcomes, the question is whether attending to some of the shorter term outcomes can help us improve the learning for young people.”
Well, I’d contend that it is overwhelmingly more complex to see whether a student has learned something during a lesson than it is to check whether a car has been sufficiently well constructed. Any attempt to elicit evidence from students during the teaching process is fraught with difficulty, and the only really helpful response is one along the lines of the example Dylan gives about the properties of wood and metal. As he says, knowing a student is labouring under a misapprehension is better than not knowing it. Definitely. But what about when students are able to answer your questions? If we take this as a measure of our teaching’s effectiveness, then we could be sadly and spectacularly mistaken. Finding out that students know the right answer during a lesson is the most useless piece of feedback we can get. Who cares? It is only in ascertaining whether a change has taken place over the long term that we will get any useful feedback on the effectiveness of our teaching.
So in conclusion, all that formative assessment within lessons can tell us is what students haven’t learned, never what they have learned. That’s not to say that it isn’t extremely useful to probe misconceptions and reveal areas of ignorance, but it might be incredibly damaging to use formative assessment in lessons as justification for ‘moving on’ in the belief that we know with any certainty whether students are ‘making progress’.
Does any of this matter? Yes, I think it does. If Dylan’s right, then we should carry on with his ‘5 strategies and one big idea’. But if I’m right, then formative assessment in lessons (a long-winded way of saying AfL) could be counterproductive and prevent us from doing what is best. Maybe the ‘nose for quality’ we need is for searching out the most effective ways to teach.