It is the duty of the human understanding to understand that there are things which it cannot understand, and what those things are.

Søren Kierkegaard

With the freedom to replace National Curriculum Levels with whatever we want, there’s a wonderful opportunity to assess what students can actually do rather than simply slap vague, ill-defined criteria over students’ work and then pluck out arbitrary numbers as a poor proxy for progress. But there’s also an almost irresistible temptation to panic, follow the herd and get things badly wrong. Levels are by no means the worst we could do – in fact there was much to like about them – but if we’re not careful we’ll replace them with something truly awful.

We need always to remember that any system of assessment is an attempt to map a mystery with a metaphor. There’s no way we can ever really know everything about what students are learning. All we get to measure is their performance on a given day. Because we can’t see learning, we come up with metaphors to make it easier to conceptualise. Levels, ladders, thermometers and graphs are all metaphors. They’re meant to help us think about something so complex and mysterious it makes the mind boggle. Unfortunately, they often end up concealing the truth that learning is messy and unpredictable. My favourite metaphor for learning is Robert Siegler’s ‘overlapping waves’ model: the tide may be coming in, but individual waves roll in and recede unpredictably. Siegler suggests we make the following assumptions:

  1. At any one time children think in a variety of ways about most phenomena;
  2. These varied ways of thinking compete with each other, not just during brief transition periods but rather over prolonged periods of time;
  3. Cognitive development involves gradual changes in the frequency of these ways of thinking, as well as the introduction of more advanced ways of thinking.

How can you show that on a spreadsheet? Obviously it’s much easier to just ignore all this complexity and pretend learning is predictable.
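To make the point concrete, here’s a toy simulation of the ‘overlapping waves’ idea (my own illustration, not Siegler’s data – the strategy names and weights are invented for the sketch). Several ways of thinking compete at once; the mix drifts towards more advanced strategies over time, but any single observation is a noisy sample:

```python
import random

random.seed(42)

# Hypothetical competing strategies a child might use for, say, simple addition.
STRATEGIES = ["count-all", "count-on", "retrieval"]

def strategy_weights(week):
    """Probability of each strategy at a given week: mass gradually shifts
    from the least advanced strategy towards the most advanced one."""
    raw = [max(0.0, 10 - week), 5.0, 1.0 + week]
    total = sum(raw)
    return [w / total for w in raw]

def one_assessment(week):
    """What a test actually captures: a single noisy draw from the mix,
    not the underlying distribution."""
    return random.choices(STRATEGIES, weights=strategy_weights(week))[0]

# Ten observations per fortnight: the trend is there, but individual
# 'waves' roll in and recede unpredictably.
for week in range(0, 11, 2):
    sample = [one_assessment(week) for _ in range(10)]
    print(week, sample)
```

The underlying weights change smoothly, yet any small sample of performances jumps around – which is exactly why a single data-drop on a spreadsheet tells you so little.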

Here then are a few simple principles for getting things wrong.

Assessment and tracking systems should (not):

  1. display an ignorance of how students actually learn
  2. assume progress is linear and quantifiable, with learning divided into neat units of equal value
  3. predefine the expected rate of progress
  4. limit students to Age Related Expectations (ARE) that have more to do with the limits of the curriculum than any real understanding of cognitive development
  5. invent baseless, arbitrary, ill-defined thresholds (secure, emerging etc.) and then claim students have met them
  6. use a RAG rating (Red, Amber, Green) based on multiples of 3 to assess these thresholds
  7. apply numbers to these thresholds to make them easy to manipulate
  8. provide an incentive to make things up

Picking holes in other people’s work is easy, but in all seriousness, if you want to work out whether your Levels replacement is fit for purpose, try asking Michael Tidd’s 7 questions. Also, do your best to resist the myths of progress: the best we can do is approximate what we think students are learning by looking at what they can actually do, here and now.

Many thanks to @jpembroke for helping come up with the silly ideas.

And here are the slides I used at The Key’s conference on Life After Levels.