In Part 1 I explored the concept of vested interest and how it could lead us to make decisions and react in ways which might, to others, appear irrational. This post addresses another predictable way we make mistakes: confirmation bias.
Confirmation bias, the tendency to overvalue data which supports a pre-existing belief, is something to which we all routinely fall victim. We see the world as we want it to be, not how it really is. Contrary to some of the accusations levelled at me, I don’t hate technology. Far from it. I’m just sceptical about unbridled enthusiasm. Technology might help in certain circumstances and in others it might not. Because of the costs involved I’m cautious about recommending technological solutions, but two examples of tech in education which I think are worth a punt are Colin Hegarty’s maths site and Chris Wheadon’s comparative judgement assessment system.
But if you’re already a true believer in the benefits of edtech – or anything else for that matter – you will tend to be uncritical of evidence which says edtech is great, and scornful of any suggestion that it’s not all that. All too often, technology enthusiasts get excited by the product and only then start thinking about how to use it in the classroom. The most common, and most worrying, effect of confirmation bias is when enthusiastic teachers draw the conclusion, “Well, it works for me and my students!”
Does it? How do you actually know? Often we’re guilty of taking feedback from dubious sources. We notice that it feels good. We notice that students seem to enjoy lessons more and we conclude, erroneously, that this means what we’re doing is working. In order to know whether an intervention is working we’d have to design a fair test with a control group and find a way to reliably measure the progress of both groups, to see which one actually made more progress as opposed to which one appeared to make more progress.
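To make that concrete, here’s a minimal sketch – not drawn from any real study, with entirely invented numbers – of why the comparison matters. Both simulated classes improve over the term, so a teacher looking only at their own class would see ‘evidence’ that the intervention works, even though it adds nothing over normal teaching:

```python
import random
import statistics

random.seed(1)

def progress(n, mean_gain):
    """Simulated gain scores (post-test minus pre-test) for n pupils."""
    return [random.gauss(mean_gain, 5) for _ in range(n)]

# Entirely hypothetical numbers: both groups make genuine progress over the term...
intervention = progress(30, mean_gain=6)   # class taught with the shiny new intervention
control = progress(30, mean_gain=6)        # class taught as normal

# ...so the intervention class 'looks like it's working' when viewed on its own,
# but only the comparison with the control group tells us whether it added anything.
print(f"Intervention mean gain: {statistics.mean(intervention):.1f}")
print(f"Control mean gain:      {statistics.mean(control):.1f}")
print(f"Difference attributable to the intervention: "
      f"{statistics.mean(intervention) - statistics.mean(control):.1f}")
```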
This isn’t a hypothetical problem. Much of what scientists have discovered about how we really learn as opposed to how we think we learn is counter-intuitive. In the foreword to What If Everything You Knew About Education Was Wrong? Robert Bjork wrote:
That we tend to have a faulty mental model of how we learn and remember has been a source of continuing fascination to me. Why are we misled? I have speculated that one factor is that the functional architecture of how we learn, remember, and forget is unlike the corresponding processes in man-made devices. We tend not, of course, to understand the engineering details of how information is stored, added, lost, or over-written in man-made devices, such as a video recorder or the memory in a computer, but the functional architecture of such devices is simpler and easier to understand than is the complex architecture of human learning and memory. If we do think of ourselves as working like such devices, we become susceptible to thinking, explicitly or implicitly, that exposing ourselves to information and procedures will lead to their being stored in our memories—that they will write themselves on our brains, so to speak—which could not be further from the truth.
He goes on:
What we can observe and measure during instruction is performance; whereas learning, as reflected by the long-term retention and transfer of skills and knowledge, must be inferred, and, importantly, current performance can be a highly unreliable guide to whether learning has happened. In short, we are at risk of being fooled by current performance, which can lead us, as teachers or instructors, to choose less-effective conditions of learning over more-effective conditions, and can lead us, as learners ourselves, to prefer poorer conditions of instruction over better conditions of instruction.
It might well appear that our interventions ‘work’ but what effect are they actually having? If we’re content to merely raise pupils’ current performance then we’ll see plenty of evidence to support our beliefs. And when we don’t see the evidence we expect, we’re happy to ignore these occasions as an ‘off day’ or worse, evidence that another teacher isn’t up to snuff if they can’t teach using the latest gimmickry. When a teacher’s practice is held up as a model of ‘best practice’ we tend to ‘cherry-pick’ those bits we’re comfortable with, or already know something about, and to ignore anything unfamiliar, difficult or strange.
The disconnect between what we can observe in our lessons and what occurs inside students’ minds is why we need well-designed research. There’s been a fair bit of research into the effects of technology on learning and, as far as I can see, the jury’s still out. The Education Endowment Foundation suggest digital technology provides moderate gains for moderate cost. That looks promising, so let’s see what it means in practice.
Evidence suggests that technology should be used to supplement other teaching, rather than replace more traditional approaches. It is unlikely that particular technologies bring about changes in learning directly, but different technology has the potential to enable changes in teaching and learning interactions, such as by providing more effective feedback for example, or enabling more helpful representations to be used or simply by motivating students to practise more.
OK, so maybe if technology “has the potential to enable changes in teaching and learning interactions” we should dive right in? Well, first, let’s consider the costs:
The costs of investing in new technologies are high, but they are already part of the society we live in and most schools are already equipped with computers and interactive whiteboards. The evidence suggests that schools rarely take into account or budget for the additional training and support costs which are likely to make the difference to how well the technology is used. Expenditure is estimated at £300 per pupil for equipment and technical support and a further £500 per class (£20 per pupil) for professional development and support. Costs are therefore estimated as moderate.
This estimate of £300 per pupil aggregates all the different things which might be meant by digital technology: clearly, the more extravagant the kit, the greater the expense. Are there any other costs? Well, there’s also the cost in teachers’ time. The EEF allude to the problems here when they say, “schools rarely take into account or budget for the additional training and support costs which are likely to make the difference to how well the technology is used.” We should consider whether the costs of buying in some new kit and investing in the effort to train everyone in how to use it would be a better use of resources than spending that time and money on something else. This is the principle of opportunity cost: what is the likely impact of the best foregone choice and how does that compare against the costs of implementing the choice you actually make?
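To put the opportunity-cost question in concrete terms, here’s a rough back-of-the-envelope calculation using the EEF figures quoted above. The school size and the cost of the alternative spend are invented purely for illustration:

```python
# Back-of-the-envelope sketch using the EEF figures quoted above.
# The school size and the alternative spend are hypothetical illustrations.
pupils = 1000                 # hypothetical secondary school
equipment_per_pupil = 300     # EEF: equipment and technical support
cpd_per_pupil = 20            # EEF: £500 per class of 25 for training and support

edtech_outlay = pupils * (equipment_per_pupil + cpd_per_pupil)
print(f"Estimated edtech outlay: £{edtech_outlay:,}")        # £320,000

# Opportunity cost: what might the same money have bought instead?
# E.g. extra teaching staff at a (purely illustrative) £40,000 per teacher-year.
teacher_year_cost = 40_000
print(f"Roughly {edtech_outlay // teacher_year_cost} teacher-years foregone")
```

Whatever the right numbers are for a given school, the point is that the comparison with the best alternative spend has to be made explicitly rather than assumed away.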
There are also some other considerations. Technology is not an end in itself – what is it you think edtech can do? New technology does not automatically lead to increased attainment, so are you clear on how and why your investment will improve learning? If technology is not supporting students to work harder, for longer or more efficiently to improve their learning, what is it doing? Sure, it’s motivating to have something new and shiny to play with, but this is not going to automatically translate into better outcomes. In this report on The Impact of Digital Technology on Learning the authors conclude that “it is clear technology alone does not make a difference to learning.”
These are all complex issues and it’s really not good enough to take the position that edtech is de facto good. Before we implement any new system or strategy, especially one which will affect every teacher and student in a school (such as a 1:1 device policy), we should ask these five questions:
- What evidence is there to suggest the intervention will work as expected?
- What problem is being solved and what is supposed to improve?
- How will we know if things are getting better?
- When is this improvement expected?
- What will happen if the goal is, or isn’t, met?
In Part 3 I will address another reason for the reactions of edtech enthusiasts: the sunk cost fallacy.
I am in the de facto ‘edtech doesn’t really make much difference’ camp, having read something a few years back which measured the impact of tech as quite high to start with but found everything back to normal after a few months.
It’s, I guess, a little bit ’emperor’s new clothes’. One of the difficulties schools have is that the new tech might just be the magic bullet, so you don’t want to miss out on it, or you wait and see and the technology is dead when you finally do invest – so it’s a lose-lose. Obviously, companies make money from this, and there is an attitude best summed up by one of our SLT in a training session: ‘better to do something than nothing’ – unfortunately, not recognising that every SLT or tech ‘something’ is a new initiative that drags from the finite pot of teacher resource (time, energy, experience, skill).
I’ve gone out on a limb, a little, and barely use tech at all – apart from the odd DVD or simple PowerPoint – and am focusing on getting my classes to actually do some work and practise what they need to. Whilst there’s no official RCT, comparatively it hasn’t affected my classes, and I’m seeing slow and steady improvements in their work, as I’d want to. (Of course, I would say that – confirmation bias, and all…)
Some of the tech looks quite nice, though, and my class do like to stroke the covers on the iPads should I ever pry them from the desperate hands of my colleagues!
This ties in very well with the SAMR model of Ruben Puentedura, which basically says that edtech should only be used to transform teaching and learning, not simply to replace existing practices (e.g. typing instead of handwriting). See https://www.commonsensemedia.org/videos/ruben-puentedura-on-applying-the-samr-model or http://www.hippasus.com/rrpweblog/archives/2014/06/29/LearningTechnologySAMRModel.pdf
Donald Clark has some interesting things to say about technology in the classroom, including iPads: you can read his ideas if you Google “Too cool for school: 7 reasons why tablets should NOT be used in education”. Sorry, I can’t seem to create a hyperlink on my iPad!
Thanks Frank, I’m familiar with Donald’s output and agree with much of what he writes.
Hey, where are all the comments? There were heaps responding to your last post on this subject. I guess when you are presented with evidence you either shut up or walk away! Or maybe the vested interest argument touched a nerve more than this idea. People don’t like being accused of building empires, whether consciously for personal gain (hey, I’m a distinguished apple educator, I know shit about technology, get me in on the educational decision-making head honcho club in your school) or as puppets within a mega-corporation’s global domination strategy.