Lying to ourselves is more deeply ingrained than lying to others.
Fyodor Dostoevsky
A closed circle argument is one where there is no possibility of convincing an opponent that they might be wrong. They are right because they’re right.
Imagine you wake to find yourself in a psychiatric ward, deemed by all and sundry to be mad. Any attempt to argue that you are not, in point of fact, mad, is evidence that you are ‘in denial’. Any evidence you cite in support of your sanity is dismissed as an elaborate attempt to buttress your denial. There is no way out of this predicament; no way to demonstrate your sanity that will be accepted by those who have decided they are right because there is no way that they can conceive of being wrong.
If there’s no way in which you can be wrong, then you have created an unfalsifiable argument.
I’ve written a couple of recent posts about falsifiability which might be worth reading as background before getting stuck into this one. Firstly, there’s “Works for me!” The problem with teachers’ judgement, in which I hold up falsifiability as an antidote to the argument that personal experience trumps empirical data. The crux of my argument is this:
If, in the face of contradictory evidence, we make the claim that a particular practice ‘works for me and my students’, then we are in danger of adopting an unfalsifiable position. We are free to define ‘works’ however we please. If we’re told that students’ exam results might improve if we changed our practice we can say things like, “There’s more to education than exam results” and claim that our students are happier, better rounded, or have an excess of some other vague, unmeasurable trait. We can laugh at the idea of measurement and say, “Just because you can’t measure it, doesn’t mean it isn’t important.” We can insulate ourselves from logic and reason and instead trust to faith that we know what’s best for our students and who can prove us wrong?
Then there’s my last post, Is growth mindset pseudoscience?, in which I explore Carol Dweck’s attempts to resist the falsification of her theories and question whether her claims are, as a result, “veridically worthless”. She seems to be saying that if research into mindsets doesn’t produce the expected results, then either the teachers, the students, or perhaps the researchers didn’t actually have a growth mindset.
I’m dredging all this up because of a couple of interesting comments on that post attacking the worth of falsifiability. Firstly this:
… I’m not sure the use of falsifiability helps or just invites unnecessary questions about the philosophy of science. Hence, I’m all for questioning Dweck’s position but to do so by invoking a contentious and highly problematic theory about the boundary between science and pseudo-science seems unnecessary and distracting. As many great minds have argued we cannot reduce science to the test for falsifiability and to do so is disingenuous, and misleading. Which I’m sure is not your intention, but for a take down on Dweck maybe the philosophy of science should be left alone.
This was a bit of a surprise because I didn’t realise falsifiability was contentious. The ‘great minds’ who’ve argued against it include Thomas Kuhn, Imre Lakatos, Paul Feyerabend, Alan Sokal and Jean Bricmont. You can read a brief summary of their various critiques here. Now, the great thing about all these arguments is that they neatly sidestep any attempt to say they might be wrong because, you guessed it, they’re unfalsifiable.
Another commenter suggested, “There are areas in which falsifiability is not a viable proposition. We then need to rely on replicable confirmation.” I agree that trying to replicate a test which purports to prove a claim is very useful, but if falsifiability is “not a viable proposition” all we’re left with is uncritically trying to prove things right. And you can prove anything right if you don’t look hard enough. This is crucial because, as physicist Richard Feynman said, “Science is a way of trying not to fool yourself. The first principle is that you must not fool yourself, and you are the easiest person to fool.”
And then there was this comment:
Falsifiability might be irrelevant if it is regarding something that can vary rather than is always false or always right. In my understanding of Carol Dweck’s (earlier) work some people have a mixture of fixed and growth mindsets or they might feel that some skills are malleable while others aren’t. A theory about learning seems pretty malleable rather than something that can be proven true or false.
Hang on a minute, if a theory about learning is “malleable”, doesn’t that essentially mean that it can mean whatever it wants to mean at any given point? Isn’t it saying that it’s all just a question of interpretation? If you make an empirical claim, then it should be falsifiable. There must come a point when twisting your ideas to fit the facts becomes pseudoscience, otherwise we can all believe whatever the hell we like and damn the evidence. And that would never happen in education, would it!
Would it?
Yes, it would. Education continues to be a closed circle in which it’s possible to write something like this with absolutely no sense of irony:
Their methods might work in some limited way to be able to ‘pass the test’ and they can pat each other on the back telling everyone about the fab job they are doing. The point they’re missing in their rants about so called ‘new ideas’ is that just passing the ‘tests’ isn’t an education!
We can argue that what we like ‘works’ because we like it. And if it’s unsuccessful on verifiable metrics then the metrics are worthless. This is the apotheosis of a closed circle: you can explain away any amount of disconfirming evidence as not fitting your paradigm. You’ve given yourself permission to ignore reality and anyone who suggests you might not be wearing any clothes can safely be dismissed as having a fixed mindset.
Back to Feynman:
It doesn’t make a difference how beautiful your guess is. It doesn’t make a difference how smart you are, who made the guess, or what his name is. If it disagrees with experiment, it’s wrong.
I’d like to propose an acid test for opinions in education: if you cannot accept that there are conditions in which you might be wrong, then we should feel free to dismiss your ideas as guff.
Rather pleasingly, isn’t your falsifiability proposition incapable of being falsified?
Damn! Yes, it is! Mea culpa.
And pleasingly, the fact that I can admit I’m wrong and move on is pretty much the point 🙂
I wonder sometimes, David, if you have a web cam in my office – a number of times things that have cropped up as live examples for us have resulted in a considered blog article from you – that, or the fact that you are asking “those questions” again – Emperor’s New Clothes and all that.
Recently visited a school that had invested £1500 in getting professionally designed “praise cards” made. The school distributed about 1000 of these cards to each department with the instruction to “recognise learners doing things well”. By this half term, some 600 cards had been posted home (in an envelope) to parents using second-class mail (£320 + envelopes + processing time).
Not a fortune, granted – but not a trivial amount either.
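For a rough sense of scale, here’s a back-of-the-envelope sketch in Python using only the figures quoted above; envelopes and staff time are left out simply because no costs were given for them.

```python
# Back-of-the-envelope cost per praise card actually posted home.
# Figures come from the anecdote above; envelopes and staff processing
# time are excluded because no costs were quoted for them.
design_and_print = 1500   # £ spent on professionally designed cards
postage = 320             # £ second-class postage so far
cards_sent = 600          # cards posted home by half term

cost_per_card_sent = (design_and_print + postage) / cards_sent
print(f"~£{cost_per_card_sent:.2f} per card posted")  # ~£3.03
```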
Because so many cards had been sent home, the initiative was deemed a success.
When meeting the headteacher, I asked about the evidence upon which this initiative was based. Well, it turns out that “it works for RE, so we’re going whole school”. Undeterred, I asked exactly what “works” for RE meant – and it was like we’d entered a parallel universe in which I’d challenged everything that was “obviously true”. It turns out there was no evidence that this works and no measure of impact other than “the kids like getting them” – and even that evidence was qualitative and very ephemeral in nature.
“Everyone likes getting praise” was the crux of the argument in favour of the cards. Game, set and match – they were right because they were right.
What worries me in situations like this is that, by not treating the idea as falsifiable, the school has created an unsustainable intervention which, as soon as the next “shiny thing” comes along, will be abandoned with equal haste (see SOLO taxonomy, growth mindsets, dare I say VAK, etc.).
Cheers
David – I’ll gift you “Pedagogical Shiny Things – in search of the new Growth Mindset” as a blog piece 😉
Glen
Excellent! Thanks Glen – I look forward to the exchange
Always enjoy reading your work and just wanted to share my views. I think it’s important to have one foot in the past and the other in the future… by this I mean that everything we do tends to come full circle, and old knowledge, methods and approaches shouldn’t be cast aside.
A Head said to me: “I was wrong once – I thought I was wrong and I wasn’t…”
Speaking as a social scientist, I agree we can use the methodology of the natural sciences to study learning, but as you have demonstrated, David, we don’t fit neatly into boxes. That leads to the next series of questions, such as why people react to learning in different ways: is it their cognitive processes, their environment, or that individuals develop at different stages and the education system we currently have is merely a broken toy wrapped in shiny paper? Thanks for reading the rant.
Three points to get the ball rolling. 1) For Popper, falsifiability doesn’t separate meaning from nonsense, but science from pseudo-science. Thus, unlike the Logical Positivists, Popper was not dismissive of anything which isn’t open to empirical investigation. As such, Popper can evade the ‘but isn’t falsifiability unfalsifiable’ charge by simply stating that the theory of falsifiability isn’t science, it’s philosophy. This defense can also be used to protect Kuhn, Lakatos, Feyerabend et al. Of course this points to a huge debate about the nature, meaning and value of philosophy, but best leave that for another day.
2) ‘There must come a point when twisting your ideas to fit the facts becomes pseudoscience.’ There she blows! This is the crux of the debate. For what is that point? When has a theory jumped the shark? How do we know? Is there an objective test, or is it subjective and/or sociological? This is what brings us to Kuhn, Lakatos, Feyerabend et al.
3) Please don’t construe my skepticism as an endorsement of an ‘anything goes’, pro-homeopathy, pro-learning styles, if-it-feels-good-it-must-be-ok, radical relativism. I’m just concerned by the ease with which you seem to have reduced the boundary between science and pseudo-science to falsifiability. For me, it’s much more complicated than that.
Thanks Klaus
1) So, I know everyone hates logical positivism but I’m not so sure… http://slatestarcodex.com/2013/02/21/a-defense-of-logical-positivism-yes-really/
But I take your point about philosophy and science.
2) I don’t know when that point comes; I’m just posing the question. Certainly the popular embrace of GM seems like shark-jumping territory, but that’s not my objection. I’m writing here about the way some people dismiss metrics as irrelevant or wrong and then justify their theory using circular logic: it’s right because it’s right.
3) Most things are more complicated than a single index so I’m pretty sure you’re right about that but I’m yet to be convinced that a theory can be unfalsifiable and veridically worthwhile. Happy to be educated on that point.
Interestingly, I’m now chuckling to myself as the comment I’ve just posted on the Blog about Concept Mapping may be symptomatic of exactly the point of this article. Oh the joys of unfalsifiability!
I’m now academically scolding myself!
[…] – the false growth mindset – might make her theory unfalsifiable. 30th October The closed circle: Why being wrong is so useful – Critique is how we make progress. When someone points out our mistakes we can adjust and […]
I think your lunatic example originally comes from the philosopher of religion RM Hare, who wrote the Parable of the Dons to illustrate his concept of a ‘blik’ (a basic unfalsifiable assumption about how the world works). The story is basically of a paranoid ‘lunatic’ convinced that a series of dons are out to murder him – all evidence to the contrary (e.g. the dons are friendly and charming) is reframed by the lunatic as evidence of their evil intent (“they are behaving in such a way as to get me to lower my guard”).
Hare was a Logical Positivist, along with AJ Ayer. I only mention it because I thought I detected hints of Ayer in your post about Growth Mindset from the other day (Ayer coined the phrase ‘death by a thousand qualifications’ – saying that those with Growth Mindsets who fail to perform actually have a *false* Growth Mindset would certainly contribute to such a death). All the Logical Positivists were concerned with whether we could make meaningful statements about things that cannot be verified or falsified. I wonder if Carol Dweck has read any?
When reading your recent posts I understand the point you are making about claims of evidence-based practice. It seems to me that some of the difficulties to overcome are just how hard it is to neutralise all the other factors that might influence students’ results, how difficult it is to get large enough sample sizes and to find enough students to repeat the tests on, and how to manage the educational input that control groups are receiving. I often hear of successful research with a few classes, but that is never going to be a big enough sample size. Then there is the question of the time available and the ethics of control groups. So, whilst it is not acceptable for teachers or scientists to say something is evidence based without rigorous results, the next step of the argument is: is this achievable, and if so, how do we build up portfolios of evidence-based practice? Or do we have to wait until neuroscience has developed further before we can isolate the impact of one particular teaching method or intervention?
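To give a rough sense of why “a few classes” is rarely enough, here’s a minimal sample-size sketch in Python. It assumes a simple two-arm comparison and a small standardised effect (Cohen’s d = 0.2) – illustrative figures, not taken from any particular study – and uses statsmodels’ power calculator.

```python
# Minimal sketch: participants per group needed for a simple two-group trial
# to detect a small effect (Cohen's d = 0.2) with 80% power at alpha = 0.05.
# The effect size, alpha and power values here are illustrative assumptions.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.2,   # a "small" standardised difference between groups
    alpha=0.05,        # conventional significance threshold
    power=0.8,         # 80% chance of detecting the effect if it exists
)
print(round(n_per_group))  # roughly 390-400 students per group
```

And that’s before accounting for the fact that interventions are usually delivered to whole classes rather than to individual students, which pushes the required numbers up further still.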
One sensible strategy would be to adopt horse-race-style research, as Greg Ashman explains here: https://gregashman.wordpress.com/2017/02/12/conducting-sound-education-research-should-be-as-easy-as-abc%EF%BB%BF/
Also, well-controlled lab studies provide meaningful, measurable input on what should happen when we try an intervention. This often provides a better guide than ed research.
Horse race approach appears to have value if sample size and voice are rigorous. Interesting idea to get research going in different countries too. That could widen sample size and reduce cultural impact.
Any links on the lab-based studies would be interesting. Is that to determine how much difference would be significant? I guess computer modelling could be used in the future for this too.
By ‘lab based’ I mean psychology labs. Here’s a series of posts about some of the evidence from psychology: https://www.learningspy.co.uk/psychology/top-20-principles-from-psychology-for-teaching-and-learning/
Interestingly, cog psych has produced robust, replicable evidence on the impact of retrieval practice on learning, but it’s entirely and shamefully absent from the EEF toolkit. “+2 months for Learning Styles” Pfft!
Thanks, I think I have some reading to do on research practice and findings. None of this seems to be influencing any useful CPD, so I need to work through it myself. I feel more comfortable working with research now there is wider acceptance of the need to be inclusive of those outside the middle range, having so often fallen out of the bell curve myself or observed that as an issue for students and others.
[…] And if we’re wrong we’re gambling with children’s life chances. We create a closed circle in which we put our vanity and misplaced sense of professional pride ahead of what’s best […]
[…] if you argue against this narrative, you merely reveal the extent of your delusion. This sort of closed circle thinking prevents us from learning from mistakes. The seventeenth-century philosopher Thomas Hobbes argued […]
[…] Under what circumstances could the claim be shown to be false? For a claim to qualify as scientific, it must be falsifiable. […]
[…] test your claims you have to try to disprove, or falsify them. Gardner’s argument is a classic closed circle: I’m right because I have a lot of data which says I’m right. How does he know the data’s […]