You may remember that over the past few weeks I’ve been trying to refine my thinking about how we can improve the way we give feedback. If you haven’t already read the previous instalments, you might find it helpful to go over Part 1 (which discusses the different purposes for giving feedback), Part 2 (which looks at how to increase pupils’ understanding) and Part 3 (which considers how to get pupils to expend greater effort).
In this post I want to explore how feedback can be used to encourage pupils to aim higher, want more and go beyond their current performance. Many high achieving pupils will be naturally hungry and will want to take every opportunity to improve even further, but some won’t. What do we do about those pupils who meet our expectations but are satisfied with doing just enough to get by?
So, there are two issues to deal with here:
- How can we formulate feedback which has the effect of raising aspiration?
- What do we do about those pupils who, when they meet or exceed expectations, decide either to exert less effort or that the goal is too easy to be worth pursuing?
On the first question, the EEF reports that, “On average, interventions which aim to raise aspirations appear to have little to no positive impact on educational attainment.” This is bad news. They go on to provide the following explanations:
First, evidence suggests that most young people actually have high aspirations, implying that much underachievement results not from low aspiration itself but from a gap between the aspirations that do exist and the knowledge and skills which are required to achieve them. As a result, it may be more helpful to focus on raising attainment more directly in the first instance.
Second, where pupils do have lower aspirations it is not clear that any targeted interventions consistently succeed in raising their aspirations. Third, where aspirations begin low and are successfully raised by an intervention, it is not clear that an improvement in learning necessarily follows. In programmes which do raise attainment, it is unclear whether learning gains can be credited for raising aspirations rather than the additional academic support or increased parental involvement.
The clear message is that we are better off spending our time on increasing attainment rather than worrying ourselves about imponderables like aspiration. So is trying to design feedback aimed at raising aspirations doomed to fail? And if it is, what do we do with those pupils who are making the grade? Just leave them to it?
Dylan Wiliam has the following to say:
When the feedback tells an individual that he has already surpassed the goal, one of four things can happen. Obviously one hopes that the individual would seek to change the goal to one that is more demanding, but it might also be taken as a signal to ease off and exert less effort. Also, when success comes too easily, the individual may decide that the goal itself is worthless and abandon it entirely or reject the feedback as being irrelevant.
If we’re not careful, any feedback we give may well have a detrimental effect. Our focus must be on providing feedback that raises pupils’ aspirations. Ringing in my ears is this message from John Hattie: “A teacher’s job is not to make work easy. It is to make it difficult. If you are not challenged, you do not make mistakes. If you do not make mistakes, feedback is useless.” This implies that if pupils are not making mistakes, this is the teacher’s fault. And if it’s our fault, the solution is to consider how to design assessments without a ceiling on achievement.
Recently, I was involved in an extremely unscientific project which looked at how we add value to high-attaining pupils. A group of Year 10 pupils who were achieving A* grades across a range of subjects were put forward and, following a conversation with them, we realised that almost all of them felt that their success was despite, not because of, their teachers’ efforts. One said, “I’ve never had any feedback which helped me improve.” Maybe this is understandable: busy teachers who are being held accountable for the progress of their pupils are not going to prioritise those who are already achieving at the top of the scale. But surely someone has to?
We explained to the pupils that we were going to give them a series of challenges designed to get them to make mistakes so that we could give them meaningful feedback on how to improve their performance.
Firstly, we tried getting them to complete tasks in limited time: if we deemed that a task should take 30 minutes to complete, we gave them 20 minutes to complete it. The thinking was that one condition for mastery is that tasks can be completed more automatically. Also, by rushing they would be more likely to make mistakes. This had some success.
Next, we gave the pupils tasks in which they had to meet certain demanding conditions and criteria for success. These were difficult to set up and always felt somewhat arbitrary in nature. For instance, in a writing task we made it a condition that pupils could not use any word which contained the letter E. This kind of constraint led to some very interesting responses, but ultimately, the feedback we were able to give felt superficial and unlikely to result in improvement once the conditions were removed.
Finally, we decided that we would try marking work using ‘A level’ rubrics. This had a galvanising effect. Suddenly, pupils who were used to receiving A* grades as a matter of routine, were getting Bs and Cs. The feedback we were able to give was of immediate benefit and had a lasting impact. When interviewed subsequently, one pupil said, “For the first time I can remember, [the teacher’s] marking was useful – I had a clear idea of how I could get better.”
Now this is of course highly anecdotal and not worth a hill of beans in terms of academic research: there were no controls, and our findings cannot be claimed to be in any way reliable or valid. But they’re interesting. Perhaps the most powerful aspect for the pupils who took part was the novelty of teachers being interested in exploring how to add value to them.
Designing assessments that allow pupils to aspire beyond the limits is no mean feat. Tom Sherrington has written about ‘lifting the lid’ so that we don’t place artificial glass ceilings on what pupils might achieve. The notion of Performances of Understanding from Y Harpaz, quoted in Creating Outstanding Classrooms, suggests a potentially useful model:
Although these performances are not intended to be seen as hierarchical, it’s possible to trace potential progression both within each category of performance, and across the categories. Interestingly, most assessments tend to be capped at some point within the ‘operate on and with’ category. Very few assessments are interested in exploring pupils’ ability to ‘criticise and create knowledge’. As ever, we teach what we assess, and if it’s not assessed, it’s not valued. How much scope would the dialectic process of questioning, exposing assumptions and formulating counter-knowledge give to pupils stuck at the top of the assessment tree? How much more productive might our feedback be if it were to encourage pupils to criticise what they have been taught?
I’m not certain that I’ve answered the questions I set out to explore, but I hope at least some of my musing might have been thought provoking. Once again, I’ve condensed this thinking into a flowchart for your convenience:
I had intended this series to conclude with this instalment, but on reflection I think there’s need for one further post on the role of feedback from pupils to teachers. Not sure when I’ll get around to it, but until then…
Interesting. When I worked in a high achieving grammar school, we were almost discouraged from giving A* grades in order to avoid a plateau effect.
Perhaps we need to draw a distinction between the aspiration to improve and ‘master’ a subject and the aspiration to achieve the highest grade possible.
I like the idea of using A-level mark schemes for extremely able students. Perhaps once a student is attaining A* grades on a consistent basis we should shift to a more demanding set of criteria and introduce them to increasingly sophisticated models.
Either that or use the Harpaz model to construct an assessment model that goes beyond where most assessments end.
An interesting post.
Best part for me was “The notion of Performances of Understanding from Y Harpaz, quoted in Creating Outstanding Classrooms suggests a potentially useful model:”
Well I never, it’s Bloom’s/Anderson and Krathwohl Taxonomy rearranged slightly. Anyone who actually understands Bloom’s taxonomy but more importantly the Anderson and Krathwohl revised version will see what I mean I am sure.
I have been using a model similar to this for the last 10 years based upon A&K.
I think they say “what goes around comes around” or something similar.
Really interesting the way in which Harpaz’s model is arranged. I will get hold of a copy. Thanks.
The most interesting thing about Harpaz’s model is that it isn’t a hierarchical taxonomy: it values knowledge as well as the dialectical process of questioning knowledge in order to create new knowledge: thesis, antithesis, synthesis.
David, sorry for hijacking the comments section, but I am really interested in the SOLO taxonomy but I’m struggling to completely understand it. I get the general gist of it, but I would prefer an idiot’s guide if at all possible?
Further to your ‘extremely unscientific project’, you might find interesting the experiment that was done on university students that Malcolm Gladwell discusses in his book ‘David and Goliath’. The aim was to compare CRT (Cognitive Reflection Test) scores across different schools in an attempt to get students to move past impulsive answers to more analytical judgements. The test is hard, and when administered the usual way MIT students scored an average of 2.18/3, whereas students from Princeton scored 1.9/3. The test was then repeated on Princeton students but this time with the test questions printed so that they were really, really hard to read: 10% grey, 10-point italic Myriad Pro font (you have to squint, really concentrate and read each sentence more than once to get its meaning). This time the Princeton students had an average score of 2.45/3. This does not particularly follow your theme of feedback, but it does pose an interesting question of how to challenge A* students.
Yes, I’ve read Gladwell’s book and found this interesting – thanks for the reminder.
Regarding the font – wasn’t this same experiment trialled with a greater number of test subjects, with differing results? I.e. the font made little difference to the CRT scores. Does anyone have a link to the original research?
I wonder why so little attention in education research is given to (learner) goal setting. There’s quite a substantial body of research work on it, but it seems to be applied more to work settings than to education settings. http://med.stanford.edu/content/dam/sm/s-spire/documents/PD.locke-and-latham-retrospective_Paper.pdf