In the past, school improvement was easy. You could push pupils into taking BTECs or Diplomas (sometimes with 100% coursework) equivalent to multiple GCSEs; you could organise your curriculum to allow for early entry and multiple resits; you could bend the rules on controlled assessment and a whole host of other little tricks and cons intended to flatter and deceive. Now what have we got? PiXL Club?

As Rob Coe laid bare in Improving Education: A Triumph of Hope over Experience, school improvement has been a tawdry illusion. Evidence from international comparisons, independent studies and national exams tells a conflicting and unsavoury tale.


As Coe said, “The question … is not whether there has been grade inflation, but how much.”

Scurrilously, he suggested various ways we can make our efforts at school improvement look like they’ve worked:

  1. Wait for a bad year or choose underperforming schools to start with. Most things self-correct or revert to expectations (you can claim the credit for this).
  2. Take on any initiative, and ask everyone who put effort into it whether they feel it worked. No-one wants to feel their effort was wasted.
  3. Define ‘improvement’ in terms of perceptions and ratings of teachers. DO NOT conduct any proper assessments – they may disappoint.
  4. Only study schools or teachers that recognise a problem and are prepared to take on an initiative. They’ll probably improve whatever you do.
  5. Conduct some kind of evaluation, but don’t let the design be too good – poor quality evaluations are much more likely to show positive results.
  6. If any improvement occurs in any aspect of performance, focus attention on that rather than on any areas or schools that have not improved or got worse (don’t mention them!).
  7. Put some effort into marketing and presentation of the school. Once you start to recruit better students, things will improve.

In other news, Amanda Spielman – ex-chair of Ofqual and incoming head of Ofsted – said at the 2014 researchED conference that school-level volatility is both well known and predictably unpredictable:


Results can be expected to go up or down by between 9% and 19% every year! Jack Marwood explains this phenomenon in his contribution to What if everything you knew about education was wrong?

Here is Seaside Primary School in North Yorkshire*, a fairly typical two-form entry school. These are the percentages of children achieving level 4 or above in reading, writing and maths:

2013: 77%

2012: 70%

2011: 58%

2010: 69%

2009: 77%

2008: 76%

There is no pattern. Unless a school consistently records 100%, there never is a pattern for any school, in any historical data. This is because the data is based on children’s results, and children are complicated and individual, and the school population in any given school is statistically too small to make meaningful generalisations.

…Long-term trends in something as complex as educational outcomes are – unless you mess with the data by, you know, making the tests easier, selecting by ability or dis-applying certain children from assessment, or simply not reporting stuff – always random.

*Name changed
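Marwood's point about small cohorts can be sketched with a toy simulation (all the numbers below are assumptions, not Seaside Primary's real data): give a two-form-entry cohort of 60 pupils an unchanging 70% chance of reaching the threshold, and the headline percentage still lurches about from year to year.

```python
import random

random.seed(1)  # fixed seed so the illustration is reproducible

# Assumed numbers: a ~60-pupil cohort, each pupil with the same fixed
# 70% chance of reaching the threshold. The school's "true" quality
# never changes; only sampling noise does.
COHORT = 60
P_PASS = 0.70

for year in range(2008, 2014):
    passes = sum(random.random() < P_PASS for _ in range(COHORT))
    print(year, f"{100 * passes / COHORT:.0f}%")
```

Run it a few times with different seeds and the "trend" reshuffles itself – exactly the no-pattern pattern in the Seaside figures above.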

All this is more pressing now than it ever was. Quite rightly, the DfE have moved to reduce the opportunities for gaming the system, and Ofsted are getting better and better at looking beyond single measures of success. GCSE grades will be applied on a curve, with the top 4.75% always getting a grade 9 and the bottom 4.75% always getting a grade 1. Roughly 40% of students will get grade 4 or below, and there is nothing schools can do about it.*
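A toy sketch of what pure norm-referencing means (an illustration of the tariff described above, not Ofqual's actual awarding process, and the raw marks are invented): grades attach to ranks, so if every candidate improves by the same amount, the grade boundaries simply rise and the same pupils get the same grades.

```python
import random

random.seed(0)

# Invented raw marks for an illustrative national cohort. Under pure
# norm-referencing only a candidate's rank matters, not the mark itself.
marks = [random.gauss(50, 15) for _ in range(10_000)]
ranked = sorted(marks, reverse=True)

n = len(ranked)
top_n = round(n * 0.0475)  # top 4.75% -> grade 9 (figure from the post)

grade9_boundary = ranked[top_n - 1]

# Suppose teaching improves everywhere and every mark rises by 10.
improved = sorted((m + 10 for m in marks), reverse=True)

# The boundary rises by exactly 10, and the same 4.75% get grade 9:
print(round(improved[top_n - 1] - grade9_boundary, 6))  # 10.0
```

The curve simply absorbs any across-the-board improvement – which is the sense in which "there is nothing schools can do about it".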

Stupidly though, the government is still insisting that schools need to be above average to avoid being labelled as failing. Schools will tear themselves apart looking for the latest silver bullets but there are none. If a school does especially well in one year – or even two – results will inevitably regress to the mean. No amount of grit or growth mindset can resist this mathematical bulldozer.
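The "mathematical bulldozer" is easy to demonstrate with a toy model (assumed numbers: 1,000 schools of identical underlying quality whose results differ only by cohort luck). Select the year-one high-flyers and their year-two results slump back towards the average with no change in anything real:

```python
import random

random.seed(42)

SCHOOLS = 1000
TRUE_LEVEL = 60.0   # every school's true pass rate (assumption)
NOISE_SD = 8.0      # year-to-year cohort luck (assumption)

def annual_results():
    return [TRUE_LEVEL + random.gauss(0, NOISE_SD) for _ in range(SCHOOLS)]

year1, year2 = annual_results(), annual_results()

# The 100 apparent star performers of year one...
stars = sorted(range(SCHOOLS), key=lambda i: year1[i], reverse=True)[:100]

avg_y1 = sum(year1[i] for i in stars) / len(stars)
avg_y2 = sum(year2[i] for i in stars) / len(stars)
print(f"'Outstanding' schools, year 1: {avg_y1:.1f}")
print(f"Same schools, year 2:         {avg_y2:.1f}")
```

Year one's "best" schools sit well above 60 purely by luck; by year two they are back around 60, looking for all the world like a decline.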

As Tom Sherrington says in this post: “rapid improvement is only likely to happen where a) the school was performing significantly poorly in the first place b) the cohort has changed significantly or c) something dodgy is going on.”

Of course, all this may end up meaning that English schools start to do better in real terms. International comparisons and independent studies might tell us over the next few years that the system is improving, but this will be small comfort if foolish accountability measures punish schools for falling foul of statistical inevitabilities.

Broadly, I’m in favour of the changes to exams, and Progress 8 is definitely a better model for school-level accountability than focussing everything on the C/D borderline in two subjects. But unless we want headteachers to spend all their time looking for ways to game perverse incentives, the DfE must change the way GCSE results are viewed and schools are treated. The agenda for school improvement has to move away from endlessly poring over data looking for patterns that don’t exist. We need to find new – better – ways to hold schools to account and come up with new definitions of what school improvement means.

At least – Thank Christ! – we’ve got someone taking the helm at Ofsted who understands the complexity of all this.

*Apparently this may change after the first year. It says here that, “we propose to carry forward the grade standard established in the first award in subsequent years. This will be done through largely the same approach as is in place for pre-reform GCSEs, which is based on a mixture of statistics and examiner judgement.” According to @Yorkshire_Steve this means that if KS2 results go up then the boundaries could change to allow more than 60% of students to get Grade 5 and above. There is, as far as I can tell, no detail beyond that.