On March 3, 1969 the United States Navy established an elite school for the top one percent of its pilots. Its purpose was to teach the lost art of aerial combat and to insure that the handful of men who graduated were the best fighter pilots in the world.

They succeeded.

Today, the Navy calls it Fighter Weapons School. The flyers call it: TOP GUN.

As I’m sure you know, these are the opening credits of the 1986 movie starring Tom Cruise and Val Kilmer as pilots graduating from the elite Navy fighter school. What you may not know is the background to the real school. The impetus to train pilots in aerial combat came from the appalling attrition rates experienced in the Vietnam War where, at its worst, US pilots were shooting down enemy fighters at a cost of 1 for 1. But winning one dogfight meant you were increasingly likely to win the next. By the time a pilot had survived about 20 aerial encounters they had an almost 100% probability of winning subsequent ones. The question was: how do you ensure a pilot survives long enough to become expert enough to reliably win dogfights?

The US Navy’s response was to recruit their best pilots as instructors and their next best pilots as students. The instructors set up mock combats designed to push their students to the limits of their ability and then systematically helped them to identify and eliminate errors and refine their performance. As pilots graduated, they returned to the front to train other pilots and the instructors remained to become ever more expert in training others to become fighter aces. The results were dramatic. By 1972, Navy pilots were shooting down an average of 12.5 enemy planes for every plane they lost. A remarkable improvement.

So, what, if anything, can we learn from this in education? In his new book, Peak, Anders Ericsson suggests the Top Gun model could provide a recipe for implementing the kind of deliberate practice regimen which leads to expertise. The trick, as Ericsson sees it, is to find out what the very best performers do and emulate them. In fields where there are clear, objective measures of success this is relatively straightforward. Identifying the best teachers is trickier.

Of course there’s Doug Lemov’s approach: hunt down teachers whose students produce the best test scores compared to other teachers in similar schools, watch what they do, see what commonalities there are and design a taxonomy which enables other teachers to isolate, practise and perfect their skills. This is probably as good as we’re likely to get in teaching, but as Dylan Wiliam explains in his new book, Leadership for Teacher Learning, whilst data allows us to say with certainty that some teachers are better than others, for all sorts of complex reasons it doesn’t allow us to reliably identify who those teachers are. The best measures of teachers’ performance we have are like a scale which gives an individual’s weight to within +/- 50 pounds. This is enough to tell us that men are, on average, heavier than women, but tells us nothing useful about the weight of an individual. Does this matter? Well, as Wiliam explains, it depends what you do with the information. The very best we can probably manage is to say, with a probability of between 0.6 and 0.8, whether a teacher is performing well. This is not good enough for high-stakes decisions about pay or employment, but it should be sufficient to design effective training.

So, what are the ingredients of a Top Gun model and how might it look in education? Ericsson says the US Navy wasted little time on trying to quantify the expertise of the best pilots. Instead they “just set up a program that mimicked the situations pilots would face in real dogfights and that allowed the pilots to practice their skills over and over again with plenty of feedback and without the usual costs of failure.”

Practically, this is tricky to do in education: we’d have to hire lots of child actors. But, as the costs of failure are so much lower for teachers than for fighter pilots, maybe we can risk using real live students in actual classrooms for teachers to practise on. After all, this is effectively how most current teacher training models already work. But once teachers qualify, observation becomes increasingly rare. We assume that meeting minimum qualification plus experience will result in expertise. This is probably wrong.

There’s a lot of evidence that people don’t get better just by doing their jobs. In order to improve we need to engage in purposeful, conscious practice. Ericsson trots out many examples of professions where, because of the poor quality of the practice, people actually get worse over time. Radiologists are a case in point. In most cases, radiologists are sent X-rays to examine without ever finding out the consequences of their diagnoses. They rarely get useful feedback on their judgements. In such cases experience may result in increasing confidence, but the judgement of experienced radiologists is, if anything, slightly worse than that of colleagues with about three years’ experience.

This echoes the findings of Rivkin, Hanushek & Kain, and of Kraft & Papay, in regard to teachers: teachers improve dramatically in the first three years of practice, then plateau before starting to dip after about ten years’ experience. Teaching is an interesting case in that teachers get a mix of excellent and very poor feedback. We get great feedback on aspects of teaching like behaviour management: either children behave or they don’t. But other aspects, like how well students retain information, are badly neglected. Teachers tend to get feedback only on how well children perform within an individual lesson and not on how well content from previous lessons has been applied. This can lead teachers to believe that what results in improved short-term performance will also result in better learning, but this belief is contradicted by the evidence. Thankfully, this is a relatively easy problem to solve: all we have to do is rethink what we mean by learning.

None of this is really about Top Gun teachers. We don’t actually need to set up an elite teacher training school in which participants are required to play a lot of topless beach volleyball. But if we want to commit to maintaining and improving teachers’ performance we need to consider the following:

  • Frequent, low-stakes lesson observations. Ideally, teachers would get regular intensive sessions – maybe a week at a time – where observation was followed by feedback and then further practice.
  • Much better feedback on learning. This would require teachers to teach lessons which allowed students to demonstrate the retention and application of content covered weeks, months, maybe even years before.
  • Guided, purposeful practice. Teachers will make more progress with a mentor or coach to help them focus and pay attention to the skills they are trying to develop.
  • A codified body of knowledge (ideally phase and subject specific) which would give teachers a means to objectively assess where they need to make progress, isolate skills and practise – this could result in much better mental representations of what effective teaching looks and feels like. Lemov’s Teach Like a Champion taxonomy might provide a useful starting point in designing such a body.
  • All this needs to be voluntary. You can’t force someone to practise purposefully.

You might not like the sound of any of this, but what’s the alternative? If we accept that the status quo is a recipe for mediocrity and decline, what would you do instead? If you think everything’s fine, do you have any evidence beyond the anecdotal?

Now I’m not gonna sit here and blow sunshine up your ass, Lieutenant. A good pilot is compelled to evaluate what’s happened, so he can apply what he’s learned. Up there, we gotta push it. That’s our job. – Viper, Top Gun