Over my last couple of posts I’ve suggested that you can’t see learning in lessons; you can only infer it from students’ performance. This means that as a teacher, when you get students to respond to exit passes, signal with traffic lights and otherwise engage in formative assessment, what you see are merely cued responses to stimuli. What I mean by that is that the tasks we set students to check whether they’ve learned what we’ve taught only tell us how they are performing at that particular time and in those particular circumstances; they offer no indication of whether the feckless buggers will remember any of it, or be able to apply it in new and interesting ways. I’m not saying that assessing performance is without value; by monitoring students’ performance we can get some idea of what they may have learnt. But that’s all it is: inference.
The problem comes when well-meaning teachers, having been blessed with some training on AfL, believe that they can ever know what their students have actually learnt rather than what they were able to do, and then, congratulating themselves on their clear-sightedness, adjust their plans for tomorrow’s lesson accordingly. At a massively simplified level, the process might look like this:
- Teacher teaches students how to multiply fractions.
- Teacher sets some problems which require students to multiply fractions.
- Students successfully complete these problems.
- Teacher marks their books and thinks, ah good; they’ve all learnt how to multiply fractions. Let’s move on to some harder sums.
Sound familiar? I’ve certainly operated in this way (although not with fractions!). The problem is that I’ve really no idea whether the students have actually learned what I wanted to teach them. All I know is that they were able to do (or not do) the task I set. If I assume that knowledge has successfully been transmitted and move on, many of the students may not have retained anything from the wonderful lesson I so lovingly crafted. And I won’t realise this until they sit some terminal exam and fail.
Of course, this is a caricature and I’m sure no teacher would actually behave this way. Would they?
Well, maybe in a culture obsessed with demonstrating progress in lessons they might. If they’re under pressure to design lessons so that an observer can tick a box and nod wisely at what the students have magically learned, they might start to think that this is in fact the right thing to do.
Learning is messy. Sometimes students look like they’re making no progress at all, but actually they just have a problem with the performance task you’ve designed. Sometimes students seem to fly; they jump through all our hoops and then, for reasons we can’t work out, don’t appear to know anything when they’re asked to apply their knowledge in a different setting. And sometimes students look like they’re learning loads, but actually they were able to do all the stuff you just taught ’em back in Year 6!
Or maybe we prepare students so thoroughly to perform in exams that that’s all they can do.
So, if that’s the problem, what’s the solution?
Well, step one is to organise your programme of study to introduce what Robert Bjork calls ‘desirable difficulties’. (See links at the end of the post.)
And step two is to plan lessons which take account of how children actually learn rather than how we’d like them to. Here are a few principles to help us do precisely that:
1. Build progress into learning outcomes
This is pretty obvious really. Use your spaced and interleaved curriculum to add to and connect the knowledge students are (you hope) acquiring. Nuthall’s research tells us we should assume that students will only retain new concepts in long-term memory once we have taught them for the third time. Until then we need to remind students of the context in which they previously came across the information in order to bring it back into working memory. I’m going a bit off-script here, but I’ve become increasingly convinced that SOLO taxonomy is most effectively used to plan learning outcomes; many of the tricks and gimmicks involved in explicitly teaching students about the taxonomy should, perhaps, be bypassed to concentrate on expanding students’ domain knowledge.
There, I’ve said it. I find SOLO valuable in helping to plan and organise a curriculum, but much of the time I was previously putting into teaching the taxonomy itself was based on the flawed belief that it would help students demonstrate progress. And make no mistake, it is great for getting students to demonstrate progress; but of what? If I accept that learning takes time and needs to build on a firm foundation of knowledge then there really isn’t any value in prompting students to show they’re able to move from multi-structural to extended abstract in a single lesson. All this demonstrates is the progress they’ve made in their ability to perform a particular task at a particular time. True extended abstract thinking develops over time. This is of course something we should plan for, and it seems a sensible use of time within a spaced, interleaved curriculum that we should plan to take students on a journey from knowing very little, to knowing a lot, to being able to apply this knowledge in new and interesting ways.
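As a playful illustration of the spacing idea, here’s what giving each topic three spaced, interleaved encounters might look like in a few lines of Python. The topic names and the week gaps are invented for the example; they aren’t taken from Nuthall or Bjork:

```python
# Hypothetical sketch: give each topic three spaced encounters across
# a term, with topics interleaved. The gaps (weeks 0, 2 and 5 after
# first teaching) are illustrative assumptions, not research findings.

def schedule(topics, gaps=(0, 2, 5)):
    """Return {week: [topics]} so each topic gets three spaced encounters."""
    plan = {}
    for start, topic in enumerate(topics):
        for gap in gaps:
            plan.setdefault(start + gap, []).append(topic)
    return plan

plan = schedule(["fractions", "decimals", "percentages"])
# fractions turns up in weeks 0, 2 and 5; decimals in 1, 3 and 6;
# by week 2 students are revisiting fractions while meeting percentages.
```

The point of the sketch is simply that the third encounter is planned in from the start, rather than bolted on when a test reveals everything has been forgotten.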
An alternative approach to helping students to think more deeply about the knowledge they’re acquiring is contained in this guide on Visible Thinking from the Harvard Graduate School of Education.
2. Test for prior knowledge
If we’re to have any chance whatsoever of tracking what students learn we need to know what they have already learned. This means that before we teach a topic we should use diagnostic assessment to see what we might need to teach and who we might need to teach it to. Nuthall tells us that 50% of what teachers teach is already known to their students. And to further complicate matters, students will all know a different 50%! This sounds impossibly complex; how are we ever to know what to teach?
Since the acquisition of new knowledge and skills depends on establishing pre-existing knowledge and skill, knowing what students know and can do when they come into the classroom, or before they begin a new topic of study, will help us design lessons that build on student strengths and acknowledge and address their weaknesses. Cognitive psychologist Daniel Willingham says, “students come to understand new ideas by relating them to old ideas. If their knowledge is shallow, the process stops there.”
Diagnostic assessment doesn’t have to just be a test. Direct measures like tests, concept maps, interviews etc. are all useful but so, sometimes, are more indirect methods like student self-assessment, reports and inventories of topics that have already been studied.
A quick word on concept maps. Novak and Canas describe concept maps as “graphical tools for organizing and representing knowledge. They include concepts, usually enclosed in circles or boxes of some type, and relationships between concepts indicated by a connecting line linking two concepts. Words on the line, referred to as linking words or linking phrases, specify the relationship between the two concepts.” So, essentially, they’re a more structured mind map (and not under copyright by Tony Buzan!) Concepts are represented hierarchically according to the structure of a particular domain of knowledge and the context in which that knowledge is being applied or considered. I’d advise constructing concept maps with reference to a particular question you want students to focus on. This ‘focus question’ can be broad or specific depending on what you’re planning on teaching – the key is to keep students focussed on identifying concepts that answer the question and then ranking them in order of importance. These concepts are then used to construct the map, which, students need to realise, is never finished but will expand to fit in new concepts they learn along the way. And that’s the point: all students will have constructed their own map which they can use to help them decide what they need to learn. (More on concept maps here.)
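Novak and Canas’s description translates neatly into a tiny data structure: concepts are nodes, and the linking phrases label the lines between them. This is just a minimal sketch to make the structure concrete; the class name and the photosynthesis example are my own invention:

```python
# Minimal sketch of a concept map as labelled links between concepts,
# following Novak and Canas's description. The focus question and the
# example content are illustrative, not from any published map.

class ConceptMap:
    def __init__(self, focus_question):
        self.focus_question = focus_question
        self.links = []  # (concept, linking phrase, concept) triples

    def link(self, a, phrase, b):
        """Connect two concepts with a linking phrase, e.g. 'requires'."""
        self.links.append((a, phrase, b))

    def concepts(self):
        """All concepts mentioned so far, in no particular order."""
        return {c for a, _, b in self.links for c in (a, b)}

cmap = ConceptMap("How do plants make food?")
cmap.link("plants", "carry out", "photosynthesis")
cmap.link("photosynthesis", "requires", "sunlight")
cmap.link("photosynthesis", "produces", "glucose")
# The map is never 'finished': new links get added as learning expands.
```

Notice that the structure is richer than a mind map precisely because of the linking phrases: ‘requires’ and ‘produces’ tell you how the student thinks two concepts relate, which is exactly the information diagnostic assessment is after.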
Too trendy? If co-construction’s not your bag, you can always stick with a test; as long as you have a reliable means of assessing what students know before you start teaching, you’ll have a benchmark against which to assess how much they’ve learned when you finish. Anyone who doesn’t do this is effectively fumbling in the dark and using a combination of guesswork and fortune-telling to work out their students’ progress.
3. ‘Take the temperature’ of lessons
It’s important to gauge whether students are ‘getting it’. I’ve written before about how I use my ‘flow chart’ to work out whether the levels of stress vs challenge in a lesson are sufficient for students to move from what they comfortably know to knowledge that is ‘just out of reach’. But the king of the in-lesson thermometers is the hinge question. For the uninitiated, here are the essentials:
– A hinge question is based on an important concept in a lesson that is critical for students to understand before you can move on.
– The question should fall about midway during the lesson.
– Every student must respond to the question within two minutes, so go with multiple-choice, factual questions.
– You must be able to collect and interpret the responses from all students in 30 seconds. This is a great use for all those mini whiteboards gathering dust in the stock cupboard.
– You need to decide in advance how many students need to get the right answer – 20-80% depending on how important the question is.
4. Use dialogic questioning to explore misconceptions
Dialogic teaching is very different from the classic question-and-answer sessions we’ve all either suffered or perpetrated in which students compete to offer brief answers to closed questions. In contrast, dialogic teaching is characterised by comparatively lengthy interactions between a teacher and a student or students in a climate of collaboration and mutual support.
Alexander (2005) describes dialogic teaching as:
… collective, supportive and genuinely reciprocal; it uses carefully-structured extended exchanges to build understanding through cumulation; and throughout, children’s own words, ideas, speculations and arguments feature much more prominently.
A teacher’s stock in trade is the question. Acres of forests have been felled to provide the paper needed for all that’s been written about questioning but in the interests of the environment and convenience I’ll point you in the direction of Alex Quigley’s Top 10 questioning strategies.
The point is that we need to have a battery of questions ready to explore what students currently understand and to use this information to guide them away from amusing misconceptions. This can be done during whole-class direct instruction, or with individuals while students get on with the tasks you’ve designed for them to demonstrate their current performance. If you have students’ concept maps to hand, you’ll be able to establish who might need particular support with this particular topic, as well as who requires stretching to make relational links with the stuff they don’t know so well.
The Rolls Royce approach might be to design specific questions for key students, but who’s got time? I rely on using a range of clarify, probe & recommend questions in order to drill into why students think what they think and to get them to establish connections with the other aspects of the course which are being interleaved.
Over to you
I’m sure there are many more wonderful strategies which promote the long term retention and transfer of knowledge rather than the whizz bang showmanship of teaching for short term performance, but these will be enough to get you started. I’d be delighted if you felt able to suggest your own top tips for designing lessons for learning.