Following a recommendation from Sam Freedman, I’ve recently devoured Dan Davies’s The Unaccountability Machine. It’s an attempt to analyse ‘what’s gone wrong’ in what we might call The West over the past decade or so through the lens of cybernetics. I know, right?

If your first thought is to assume that this must have something to do with tech (or Dr Who) you can be forgiven, as the term has been thoroughly hijacked since it was coined by the American mathematician Norbert Wiener in 1948. The word derives from the Greek kybernetes, meaning 'steersman'; the French cybernétique is the 'art of governing.' In this original context, cybernetics – or, in its applied form, 'management cybernetics' – was concerned with the design and implementation of effective and efficient systems.

The figure most closely associated with the development of management cybernetics was the improbably bearded Stafford Beer, now probably most famous for his axiom POSIWID: 'The purpose of a system is what it does.'

At first glance, this may seem banal, but the point Beer makes is that the purpose of a system is not what you intend it to do or what you hope it will do but what it actually does.

To give an example, last weekend I had cause to look into making two train journeys. In the morning I had to travel up to Birmingham from Bristol and then, in the afternoon, I needed to go to Bath. When I tried to book a train to Birmingham I found that the trip, which would normally take under two hours, was going to take over three and a half hours because the journey was being routed through Wales. Needless to say, I decided to drive. My plan was to do a round trip back to my local train station and get on the train to Bath with the friend I was travelling with, in order to avoid the horror that is attempting to park in Bath on a Saturday. But, much to my annoyance, he phoned to say all the trains had been cancelled. The unavoidable conclusion is that train journeys are more expensive, less convenient and less reliable than alternative means of travel. Applying POSIWID, we should conclude that the purpose of the rail network is to disincentivise people from making train journeys.

The point of all this is not to say that systems are bad, but that they're powerful. Systems steer our behaviour in ways that can be beneficial or detrimental. From this observation, it shouldn't be too great a leap to think about the various systems we rely on to manage schools. Whenever we see teachers and school leaders doing something that appears, on the face of it, a bit odd, we should look at the systems that might be channelling their behaviour.

For instance, I regularly encounter the notion that students need more practice at extended writing. This leads to putting in place systems to ensure teachers give students more opportunities to write extended responses. While this may sound innocuous and well-intentioned, we need to look at what happens when such systems are enacted. What I overwhelmingly find is that students produce pages and pages of poor-quality writing. We know that we get better at whatever we practise, but practising something badly does not lead to the kind of improvement we want. If students fill up books with poor-quality writing, they get better at writing badly. The purpose of the system is to make students worse at writing. Obviously enough, this outcome is in no one's interest.

The tighter and more prescriptive a system is, the greater the danger it may lead to perverse incentives. The looser and less prescribed a system is, the greater the risk of lethal mutations. Either way, systems always run the risk of causing unintended outcomes. There's a lot of detail in Davies's book about how management cybernetics should build in safeguards and controls to mitigate these risks but, to be absolutely honest, I'm still struggling to get my head around the detail and – certainly at the moment – can't recommend that anyone should head off down this particular rabbit hole.

However, we may not need to. By being mindful of the potential problems and thoughtful about designing sensible safeguards, we can probably do a lot. With that in mind, these simple steps seem like a good starting place:

  1. Assume the system is the problem

Look carefully at what staff and students are actually doing and compare this with your intentions. Why is your beautifully designed whole-class feedback policy resulting in students' work getting worse? Why is your carefully considered uniform policy resulting in teachers constantly battling with students over untucked shirts and undone top buttons? Why don't detentions seem to be acting as a deterrent? Why are teachers leaving the profession in droves? Instead of blaming people, consider the possibility that their behaviour is a rational response to the system someone has put in place.

  2. Adapt to individuals

Maybe you can avoid some of the undesirable outcomes by tweaking existing systems. Systems that are too prescriptive try to anticipate too many possibilities and end up removing agency. This leads, at best, to compliance and, at worst, to lip service, conflict and resentment. Loose systems overestimate people's capacity and expertise, leading to chaos, inconsistency and confusion.

In either case, systems should be tightened for those that need it and loosened for those that don't. If you know a teacher to be effective and hardworking, why would you want to force them into complying with a policy designed to meet the needs of less effective staff? Similarly, if you know a teacher to be inexperienced or untrustworthy, why would you allow them the same agency as their more competent colleagues? One of the most consistent flaws of system design in schools is the assumption that all staff are the same. Treating everyone equally is fundamentally unfair.

  3. Run a pre-mortem

Before changing anything, run the following thought experiment: you launched your policy a year ago, it failed utterly, and you have gone back to the drawing board. What went wrong? You're unlikely to catch everything, but by gathering everybody involved in implementing a strategy to suggest possible reasons for its failure you're likely to avoid the most obvious and egregious errors.

  4. Systems need maintenance

Even the best-designed systems can go wrong and stop working as intended. It's daft to wait for the wheels to come off, so we need to build in plans – additional systems – for checking the health of our systems. When and how you review should be decided in advance. How will you know if your system is working? What will you look for as signs of success?

  5. Beware sunk costs and leadership bubbles

We all have a vested interest in believing that anything we've invested effort, resources or credibility into must be good. If we've got step 4 right, we have suitable checks and balances in place, but we need to be aware of the tendency to systematically discount negative feedback and over-inflate the value of positives.

The more senior you are, the more likely people are to tell you what they think you want to hear. It's very hard to see what's going on around you through the leadership bubble you occupy. Your interactions with systems are prone to being stage-managed; those involved in the day-to-day operation of your system will make every effort to shield you from reality. Added to this, your very presence can warp reality: things change around you because you're there. It's very easy for senior leaders to convince themselves that, for instance, behaviour is great because they never see any poor behaviour, but that's because the people most likely to experience this behaviour are the most junior staff.

Knowing how hard it is to stay objective and murder our darlings, we should aim to take those most invested in the success of a project out of the evaluation process. By tying ourselves to the mast we increase the chance we will resist the siren song of the sunk cost fallacy.

None of this may seem especially original, but the clarity of POSIWID can be brutally enlightening. If we reformulate the purpose of our systems by observing what they actually produce, we might be able to sidestep some of the many pitfalls of running schools.

The purpose of your systems should be for students (and staff) to be happier and more successful. If this isn’t the outcome you’re getting, there’s probably something wrong with the systems.