The evidence for popular teaching methods is shaky

Specifying teaching methods as either broad guidelines or narrow 'recipes' or 'scripts' leads to problems in measuring their effectiveness. Andrew Davis argues teachers should be sceptical
10th November 2017, 12:00am

The Department for Education, with support from some educators, claims to adore research-based policies. Any kind of opposition to this is easily ridiculed. “Oh - so you don’t bother with evidence in your practice? Finally we’re getting some rigour and rationality into teaching and you are trying to ban it!”

In their 2012 book Evidence-Based Policy: A Practical Guide to Doing It Better, Nancy Cartwright and Jeremy Hardie address the question, “It worked there. Will it work here?” “It” could be many things in education. Popular candidates include group work, direct instruction, mastery mathematics, inquiry-based teaching, synthetic phonics and whole-language approaches.

Yet what counts as the "it"? For instance, what is it for a teacher to employ direct instruction (DI) today and to do the "same thing" next week? Or for several teachers to adopt it in different classrooms, with different age groups, subjects, contexts, cultures and even countries? When researchers report that DI has a useful effect size, what exactly has been researched? Is the "it" the same, regardless of context? How would anyone know? Without satisfactory answers, robust empirical research into "effectiveness" is not feasible.

Lesson objectives

Versions of DI range from tight scripts, or recipes that incorporate obligatory pupil responses, to much broader guidelines.

Consider one from Clayton R Cook, Elizabeth Holland and Tal Slemrod: "Lesson objectives that are clear and communicated in language students are able to understand. An instructional sequence that begins with a description of the skill to be learned, followed by modeling of examples and non-examples of the skill…shared practice…and independent demonstration of the skill…Instructional activities that are differentiated and matched to students' skill levels to the maximum extent practicable. Choral response methods in which all students respond in unison to strategic teacher-delivered questions" (Cook, C, Holland, E and Slemrod, T (2014), "Evidence-Based Reading Decoding Instruction", in Little, S and Akin-Little, A, Academic Assessment, p202).

What does “clear” mean? How clear to the students? Must all of them find it clear? How does the teacher know what they can “understand”? Is she making a very general judgement about what pupils of that age and stage are likely to understand? Or are her decisions about her instruction drawing on some kind of assessment of this particular group of students? Or some combination? What counts as a lesson objective? Is it the same kind of thing regardless of the curriculum area concerned?

What happens to the "it" that is supposedly being researched when DI is credited with a decent "effect size"? What can class teachers possibly take from such a finding to improve their practice? It leaves them entirely free to make all the detailed professional decisions they have always made. The putative "effect size" of DI would have no implications for what should happen in classrooms, unless the prevailing culture had been that teachers should never, ever stand at the front and tell pupils things. Have any schools or teachers ever thought that?

Effect sizes

Teaching methods supposedly suitable for research into their effect sizes face a dilemma: either they are a kind of script or recipe, or they are abstract and open to multiple interpretations. The motivation for the recipe option is that the method becomes straightforwardly researchable. It is clear what counts as implementing it, and observers can agree on whether it is present in any given lesson or series of lessons. Yet script-following is hardly true teaching, and it is doubtful whether claims about effect sizes draw on research into strict implementations of educational recipes. What school would ever agree to trial such scripts?

Alternatively, if the method is researched as something flexibly interpretable, teachers can take little from any effect size thereby "proved". They must still make day-to-day choices about which interpretation of the "method" is appropriate in their context, and their choices may vary, with good reason.
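
A brief gloss may help here. An "effect size" in this literature is usually a standardised mean difference, such as Cohen's d: the average outcome score of pupils taught with the method, minus the average score of those taught without it, divided by a pooled standard deviation. Roughly:

effect size = (mean with method − mean without method) ÷ pooled standard deviation

The arithmetic presupposes a single, identifiable "method" delivered to one group and withheld from the other. If the method's interpretation varies from classroom to classroom, the numerator averages over different things, and the dilemma above bites.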

These points should encourage all teachers to cast a sceptical eye on a good deal of educational research by people such as John Hattie, who conduct large-scale systematic reviews and make much of the "effect size" of various approaches. The problems I have outlined also apply to the synthetic phonics teaching method. So much the worse, then, for the government's attempts to impose it by means of the phonics screening check…

Andrew Davis is assistant professor emeritus at the School of Education, Durham University. His book, Critique of Pure Teaching Methods and the Case of Synthetic Phonics, is published on 16 November. It will be discussed at an associated research seminar at the UCL Institute of Education on 22 November
