Science is a way of trying not to fool yourself. The first principle is that you must not fool yourself, and you are the easiest person to fool. – Richard Feynman
Yesterday we were told that the much-vaunted testing effect (which I’ve written about here) has been effectively shown to be useless in improving the learning of ‘complex’ material. Tamara van Gog and John Sweller’s provocatively titled paper, Not New, but Nearly Forgotten: The Testing Effect Decreases or Even Disappears as the Complexity of Learning Materials Increases, explored the ‘boundary conditions’ of the effect. The abstract of the paper says,
[One] potential boundary condition concerns the complexity of learning materials, that is, the number of interacting information elements a learning task contains. This insight is not new, as research from a century ago already had indicated that the testing effect decreases as the complexity of learning materials increases, but that finding seems to have been nearly forgotten. Studies presented in this special issue suggest that the effect may even disappear when the complexity of learning material is very high. Since many learning tasks in schools are high in element interactivity, a failure to find the effect under these conditions is relevant for education. Therefore, this special issue hopes to put this potential boundary condition back on the radar and provide a starting point for discussion and future research on this topic.
So, what is meant by ‘complexity’ and ‘interactivity’? Many studies of the testing effect have concentrated on the retention of simple nuggets of information which can be learned in isolation. As the paper puts it,
For example, when learning a list of new Spanish words with their English translation, one item, such as “gato–cat,” can be memorized without reference to another item, such as “perro–dog.” Or in history, the fact that “World War I began in 1914 and ended in 1918” can be learned without reference to another fact, like “The Netherlands was neutral in World War I.”
But when the items to be learned interact with each other, with one piece of information depending on an understanding of another, the topic can be said to be more complex.
For instance, when learning about the mechanics of a hydraulic car brake system in engineering, it is not only necessary to learn the individual components in the system (e.g., pistons, cylinders), but also how these components interact with each other (e.g., principles of hydraulic multiplication and friction). Moreover, the aim is usually not just to learn how the system works, but also to be able to apply that knowledge to real-world tasks (e.g., being able to diagnose and repair faults in a system).
Van Gog and Sweller “suggest that the complexity of learning materials may reduce or even eliminate … the testing effect.” They conclude their account by saying,
The studies collected in this special issue suggest that the complexity of learning materials might constitute another boundary condition of the testing effect, by showing that the effect decreases or even disappears with complex learning tasks that are high in element interactivity, which are plentiful in education.
Damn! Honestly, I hadn’t even considered that this boundary condition might exist. Does this mean we should forget about the testing effect? Is it something which can only usefully be relied upon in psychology labs where psychology undergraduates are tested on simple, non-interactive facts? Van Gog and Sweller are careful to point out that we shouldn’t be too hasty:
… the studies presented in this special issue suggest that it would be worthwhile to conduct further research on the complexity of learning and test tasks as a potential boundary condition of the testing effect. It would help teachers and instructional designers to know for which learning tasks they can and cannot expect benefits of having their students take practice tests instead of engage in further study. Therefore, we hope that this special issue encourages debate about and future research on the question of how the complexity of learning materials affects the testing effect.
And before you know it – the very next day in fact – Karpicke and Aue drop this little bombshell: The Testing Effect Is Alive and Well with Complex Materials. I haven’t actually been able to read it as it’s behind a paywall, but the abstract says,
Van Gog and Sweller (2015) claim that there is no testing effect—no benefit of practicing retrieval—for complex materials. We show that this claim is incorrect on several grounds. First, Van Gog and Sweller’s idea of “element interactivity” is not defined in a quantitative, measurable way. As a consequence, the idea is applied inconsistently in their literature review. Second, none of the experiments on retrieval practice with worked-example materials manipulated element interactivity. Third, Van Gog and Sweller’s literature review omitted several studies that have shown retrieval practice effects with complex materials, including studies that directly manipulated the complexity of the materials. Fourth, the experiments that did not show retrieval practice effects, which were emphasized by Van Gog and Sweller, either involved retrieval of isolated words in individual sentences or required immediate, massed retrieval practice. The experiments failed to observe retrieval practice effects because of the retrieval tasks, not because of the complexity of the materials. Finally, even though the worked-example experiments emphasized by Van Gog and Sweller have methodological problems, they do not show strong evidence favoring the null. Instead, the data provide evidence that there is indeed a small positive effect of retrieval practice with worked examples. Retrieval practice remains an effective way to improve meaningful learning of complex materials. [my emphasis]
So there you go. Make of this what you will. The lesson – if there is one – is that it pays to be skeptical and to withhold judgment rather than leaping to confirm our biases. Because seriously, what if you’re wrong?