Facts as facts do not always create a spirit of reality, because reality is a spirit.
Meaning and reality were not hidden somewhere behind things, they were in them, in all of them.
I reached some tentative conclusions about evidence in education in my last post. One of the criticisms I keep coming up against is that my thinking is ‘positivist’ and therefore either limited or bad, depending on the biases of the critic. To understand this criticism we need to briefly explore some conceptions of reality, or ontology. Ontology is the philosophical study of the nature of being and reality, and if you stare at it for long enough it will melt your eyes! And any thinking about ontology also butts up against epistemology: the study of what constitutes valid knowledge and how we might go about obtaining it. If your nose has started to bleed in response to all this arcane vocabulary, don’t worry – you are not alone. For my own benefit as much as anyone else’s I’m going to attempt to simplify matters by contrasting the methods of the scientist (positivist) with those of the detective (interpretive).
If we want to enquire into some aspect of education (or any other social science) then, whether we’re aware of it or not, we’ll make decisions about the following:
- Methods (What research tools will we use?)
- Methodology (How do we plan to conduct our research?)
- Theoretical perspectives (What assumptions about reality underlie the question we are asking and the kinds of answers we are looking for?)
- Beliefs about epistemology and ontology (What do we believe reality is, and how can we find out about it?)
The scientific (positivist) approach uses the physical sciences as a model for investigation and experimentation. Positivists will likely believe that there is some objective truth which is discoverable through a deductive, theory-testing approach. This is the ‘scientific method’. They will believe that facts are facts and can tell us something objective about the world which can help us explain how and why things happen. As a result they will choose research tools like surveys, random sampling, blind tests and the manipulation of variables. The advantages of such an approach are that it provides easily comparable data which is verifiable and replicable. The disadvantage is that it fails to account for social processes and ignores as irrelevant the meanings subjects ascribe to their own behaviour.
The approach of the detective (interpretive) is to start by critiquing the natural sciences as a model for investigating the social sciences. Interpretivists are likely to believe that reality is subject to the context in which it is perceived, and they may even take the relativist view that there is no such thing as objective truth at all; instead of seeking to establish facts, they conclude that people are people and as such we must attempt to understand why they behave as they do. Society is, obviously, socially constructed. Their methods will be ethnographic studies, interviews, observation and analysis. Although this approach accounts for complex contextual issues, the evidence collected is often so complex as to be resistant to clear meaning and can be shaped to mean whatever the researcher says it means.
This leads us to ask whether we’ve established a false dichotomy or a real one. Can we do a ‘bit of both’? Or are we left with dismissing interpretivism as less credible and the positivist approach as inflexible? The problem with positivism, for all its hard data and emphatic conclusions, is that if it flies in the face of our values, it isn’t worth a damn. No matter how much empirical evidence we could come up with proving the effectiveness of rote learning, corporal punishment, circle time or group hugs, if it comes into conflict with your moral and ethical beliefs about the world you will ignore it. If you believe rote learning is “vicious” and boring, who cares how effective it is as a tool for learning? Interpretivism attempts to square this circle by thinking about meaning instead of facts. But if reality is entirely subjective, doesn’t the concept of evidence become meaningless? If you can never reliably control for all the variables of a classroom (time of day, time of year, weather, motivation, dispositions of teachers and students) then context overrides any ‘objective’ truth and we can argue, “Well, it works for me.”
For my money, I agree with the interpretivists that we cannot and will not find objective truth by investigating classrooms with the tools of the physical sciences. Context and values will make even the most robustly controlled trial meaningless. But, my problem with their alternative is that ‘evidence’ means whatever anyone says it means and the person who shouts the loudest and the most authoritatively wins; it becomes a matter of persuasion and rhetoric.
So, what’s the alternative? Well, I wrote in this post that
Good science has the power to make useful predictions; if research can be used to inform our actions then it is useful. It’s unnecessary to accurately control and predict how every student in every context will behave or learn, just as a physicist has no need to control or predict how every single atom will behave in a physics experiment. All that’s necessary is that we can predict an outcome that is both meaningful and measurable.
Could this be a workable third way?
Education research is, I think, on the whole a waste of everyone’s time and effort. Instead we should focus on the more controllable science of psychology and use the empirical evidence produced in laboratories to help us make educated guesses, predictions if you will, in order to guide our values and beliefs with data that at least points us in the right general direction. Never mind that the social sciences are different from the natural sciences and that education research is so saturated with values; laboratory research that offers meaningful and measurable outcomes is far more worthy of our consideration than a classroom study, no matter how randomised or controlled.
That said, teachers conducting research into their own classrooms, whether it’s positivist or interpretivist in approach, can only be a good thing as long as no one seeks to generalise from such findings. While it may be very useful to experiment and test ‘what works’ in your own classroom, we can never discover what will work in anyone else’s classroom beyond certain testable hypotheses.
I could of course be entirely wrong and, as always, I would welcome any thoughtful critique.
I like this. I’d concur that classrooms are too messy for reliable RCTs, because even if you ask the same teacher to teach differently in different classes, the individual students and their group dynamics will differ, before we’ve even thought about grouping, time of day and the rest. Once we get on to imagining that different teachers are faithfully following their script in the same way each time, we’re beyond what I believe is credible.
So truly robust psychological findings will remain the domain of laboratory research – where the conditions can be controlled. Once sufficient proof exists to put a concept beyond reasonable doubt (like much of what we know about memory perhaps), it is up to teachers to work out how to put this to use in the classroom. This morning I was learning about chunking – if I design something in a lesson which puts this to use, I don’t need to research whether it works from scratch if it follows the principles of existing studies faithfully… this is the point I was trying, perhaps unsuccessfully, to make at Pedagoo London which you picked me up on!
That said, I wouldn’t close things down quite this much. I think there is a place for researching classrooms – although focused more on qualitative work. As we try to use what we know about learning in schools, I value a process that examines how students interact with it. I think Nuthall’s work is a good example of classroom research providing useful insights…
I too think there is a place for teachers to research their own classrooms but I don’t think we should generalise from these findings.
Nuthall’s methods were unprecedented and haven’t (as far as I know) ever been replicated. That said, I’m amazed at how many of Nuthall’s findings are predicted by psychological research. And even with Nuthall – I’m happy with his work being cited to explain what happens in classrooms but not to be used to say what should happen.
Does a biologist look at a particular ecology and conclude, ‘I can’t control the variables so I can’t do science’? Or a physicist decide that, since he can’t detect all of the particles that a particular cascade will generate, and anyway the particle being searched for is so rare as to be hidden in the noise, there’s no point thinking about it? Does a meteorologist say the environment is chaotic and we don’t know all of the parameters perfectly, so our models are only 95% accurate, there’s no point forecasting? Or maybe a cosmologist will think that telescopes aren’t up to imaging exoplanets so nothing can be found out about them until the telescope technology catches up? Or a medic: ‘I can’t control for the genes, the chemical exposure, the environment of every person who might take my drug, so I can’t test it’?
Obviously the answer to those questions is no. They design experiments that are large enough to be statistically valid despite the limitations, or they design innovative new ways to extract useful information from necessarily incomplete data.
Education researchers, on the other hand – or at least some of the teachers who like to express opinions on education research – see it as perfectly valid to say: the system is too chaotic for anything to be achieved; or we are humans, we have minds separate from the biology, so science doesn’t apply; or no one in the trial will do as they were asked so there’s no point starting; or some other justification for working on the next grand theory based on videoing twenty kids instead of doing science.
I know that isn’t quite what you are saying, David, but the ‘let’s apply ideas from a lab to the classroom because we can’t study the classroom’ paradigm is where Brain Gym and its like came from.
Are you claiming Brain Gym came from psychology labs? Cos it really didn’t. You seem to be conflating pseudoscience with actual science – if you’re not, apologies.
But your points about biologists, cosmologists and medics don’t hold up. It might be stretching a point but, generally speaking, all biologists, cosmologists and medics agree on the purposes of biology, cosmology and medicine. There is no such agreement in education. You and I might believe in surprisingly different purposes – I write about this here: https://www.learningspy.co.uk/education/disagree-purpose-education/
So in medicine you’re less likely to get evidence which doctors refuse to engage with because it doesn’t fit with their ideological stance on medicine. (I realise you might well get this in certain fields like psychiatry – electro-shock therapy is a case in point.)
I’m not sure if you read the preceding post https://www.learningspy.co.uk/featured/evidence-education/? You might find answers to some of your concerns there. My contention is not at all that “the system is too chaotic for anything to be achieved, or we are humans, we have minds separate from the biology so science doesn’t apply, or no one in the trial will do as they were asked so there’s no point starting”; it’s that even when these things are controlled for, it doesn’t help. I’m also mindful that if I just dismiss an interpretivist approach to research I’m picking a fight with all the fields of social science, which seems somewhat counter-productive.
No, Brain Gym really does have lab work behind it: useful brain-signalling chemicals are shown to peak after exercise. Given this good science, it is no great leap to arrive at Brain Gym – but if you don’t do the empirical tests in a real environment…
Most of the rest of your arguments just fall into the category of more special pleading. You can define an objective for your study, you can do the study. Does it matter that your particular study didn’t address every aim an educationalist might choose to debate?
I would swear I saw “mind” & “society” on your Twitter feed this past couple of days as reasons against scientific methods. Sorry, I must have misinterpreted.
“Even when these things are controlled it doesn’t help” is to dismiss science.
I know some of what science has achieved since the turn of the twentieth century. I’m afraid that I am ignorant of social science’s great leaps forward in knowledge in that time.
I was thinking some more about climate & weather forecasting. It is clearly a tough area to approach because of the complexity. The 95% I quoted is the accuracy of next-day forecasts; it falls off very rapidly at longer timescales. And yet companies pay the Met Office for seasonal forecasts that are only a few percent better than guessing. When there’s a lot of money at stake a few percent is better than nothing. In education there’s a great deal more than just money at stake.
I do think that sometimes these terms and ideas are skewed by an incorrect understanding of the believed infallibility of the physical sciences vis-à-vis evidence proving something/objective truth. Everything generally operates in tolerances and relative probabilities. Evidence does not prove something; it merely does not disprove it. A theory is never 100% proven, but survives the test of time and repeated testing. In ed research there are virtually no replicated studies, so maybe that is part of the reason we cannot talk about ‘generalising’. Even if general principles were seen to have a high probability of being correct, we would still need to recognise that they will not be 100% successful, and this might lead us into more of a moral/ethical dilemma: trying to balance the ‘general good’ vs individual rights to education. Just thoughts at this stage.
Very interesting, though there is another ‘middle’ way, that of Realistic Pragmatism (CS Peirce/Rescher/Haack/Biesta) as opposed to the Relativistic Pragmatism of James/Rorty, which is really more of a Post Modern Relativism than a workable methodological Pragmatism. Such a Realistic Pragmatism can overcome the troublesome dualities of objective/subjective and quantitative/qualitative, and allow researchers to use a plurality of methods covering a range of inductive, abductive and deductive reasoning.
The practical and theoretical retrovalidation loops in such a pragmatism, linked with its concept of fallibilism, ensure that evaluation of means, ends and values lies at the heart of this philosophy.
Thanks Karl – I will explore this Realistic Pragmatism you speak of… Any key texts you’d recommend?
Let’s imagine that all students with brown hair outperform students with other hair colours. Now let’s suppose that there are all kinds of variables we don’t understand that underlie this, and that it is some of these, not the brown hair at all, that make the students succeed so well. Does this matter? Not if I can persuade my students to dye their hair brown.
Isn’t this what educational research does? We measure outcomes and we control for likely causes. The fact that these are not specific enough does not negate the conclusions we can draw – increasing X is likely to improve progress, so we will increase X.
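The brown-hair thought experiment above can be sketched as a toy simulation (purely illustrative – the hidden factor, numbers and group sizes are all invented): a single unobserved cause drives both the visible trait and the outcome, so the trait predicts the outcome in observational data even though it does no causal work itself.

```python
import random

random.seed(0)

# Hypothetical model: an unobserved factor (call it 'support at home')
# raises test scores AND makes brown hair more likely in this toy world.
# Hair colour itself contributes nothing to the score.
students = []
for _ in range(10_000):
    hidden = random.random()                        # unobserved cause, 0..1
    brown_hair = hidden > 0.5                       # trait merely correlates with it
    score = 50 + 40 * hidden + random.gauss(0, 5)   # outcome driven by hidden cause only
    students.append((brown_hair, score))

brown = [s for b, s in students if b]
other = [s for b, s in students if not b]
gap = sum(brown) / len(brown) - sum(other) / len(other)
print(f"observed brown-hair advantage: {gap:.1f} points")
```

In this sketch the observed advantage is large and perfectly replicable, yet it comes entirely from the hidden variable – which is exactly the situation the comment describes when it says the correlates we measure may not be "specific enough" to pin down the real cause.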
This clarifies much that I’ve thought about since Twitter alerted me to some of the deeply held and conflicting ideas that seem to buzz around in today’s teaching profession. This para of yours, for example, neatly summarises why I worry about any move to make a particular teaching approach rigidly compulsory.
‘For my money, I agree with the interpretivists that we cannot and will not find objective truth by investigating classrooms with the tools of the physical sciences. Context and values will make even the most robustly controlled trial meaningless’.
Of course, as the para goes on to add, the opposite stance is equally worrying. At the very least, though, the ‘context and values’ factor, which I’m sure is often underestimated by the evidence-seekers, should convince everyone who debates education that we need to be wary of over-confidence in any approach or solution. We need to accept that while evidence can be strong and convincing, it’s never sufficiently so that alternatives, adjustments and doubts can be rejected out of hand. The comment from SteveTeachPhys suggests (to me, I can’t speak for him) that while Brain Gym, as described and purveyed, is undoubted nonsense, that should not stop us from considering whether it’s worth at least looking at the possible effects of exercise on learning.
By the way, didn’t someone recently write about the almost complete absence of replication of research studies in education? Yes, here’s one report about it:
https://www.insidehighered.com/news/2014/08/14/almost-no-education-research-replicated-new-article-shows (Sorry if you’ve already blogged about that)
Thinking about it, my stance in the previous tweet might be put to the test. There’s talk (hotly denied, but then it would be) that the govt might make setting compulsory. My reaction is one of outrage, as I am convinced that setting is wrong in all sorts of ways. But by my own admission I must be prepared to listen, acknowledge, and respect other points of view. At the very least I must not be outraged and dismissive.