The battle over Direct Instruction

Forty years ago, the most expensive educational research project in history claimed to find Direct Instruction to be the most effective teaching method, but its results were buried and have had little impact on classrooms since. John Morgan asks whether we have been getting teaching wrong as a result, or if the project’s findings were not as clear cut as many believe ...
30th October 2020, 12:01am
The story of Project Follow Through, often described as the biggest and most expensive educational research project in history, started on 8 January 1964, in President Lyndon Johnson’s first State of the Union Address.

In the time-honoured method of TV documentaries, transport yourself back to the era by having the number one single of the time play in your head: There! I’ve Said It Again by Bobby Vinton.

Hmm … that’s not much help.

Try the subsequent US number one: I Want to Hold Your Hand by the Beatles, which zapped to the top a month later via the band’s appearance on The Ed Sullivan Show.

In his address, Johnson, a Democrat, presented a radical vision for a “war on poverty” waged by the US government. Poverty, he argued, was rooted in society’s “failure to give our fellow citizens a fair chance to develop their own capacities, in a lack of education and training”, among other failures.

His ensuing Great Society domestic policy programme included Head Start, an educational scheme (still running today) that aims to support the emotional, social, health and psychological needs of preschool-aged children from low-income families.

But Head Start’s success failed to endure for children when they started elementary school. So, in his 1967 State of the Union Address (Billboard number one: I’m a Believer by The Monkees), Johnson pledged to maintain Head Start’s “educational momentum by following through in the early years”.

This was the point when Project Follow Through was conceived. After a somewhat confused repurposing of the plans following budget cuts, it became an “educational experiment” to try to find what worked in education. It would compare the effectiveness of 22 different teaching methods, gathering yearly data on around 10,000 students in around 180 schools in low-income communities.

It eventually ran for eight years, between 1968 and 1976, at a cost of $500 million (£383 million) at the time.

And yet, today - despite its immense scale - Project Follow Through is largely forgotten. That, some argue, is because its shock results - finding the Direct Instruction method to be the most effective - were silenced by the “educational establishment”.

However, the findings of that study, which ended 44 years ago, have recently tiptoed back into the spotlight, not just in the US but here in the UK. Schools minister Nick Gibb has referred to Project Follow Through, noting that it found “Direct Instruction, a teacher-led programme, comprehensively outperformed a multitude of ‘child-centred’ approaches”. Proponents of more “traditional” methods of pedagogy cite it frequently as evidence that their method works best. And a growing number of UK schools are experimenting with Direct Instruction, prompted by the evidence base around it.

But is Project Follow Through really the conclusive evidence for traditionalist teaching that many claim?

‘It wasn’t an experiment at all’

The history of Project Follow Through is a curious one.

For example, the study doesn’t resemble what we would call an experiment today, according to Dylan Wiliam, emeritus professor at the UCL Institute of Education.

“[Project Follow Through] wasn’t really an experiment at all - it didn’t have randomisation,” he says, adding that the schools selected were “a convenient sample”.

Those schools were offered 22 different teaching methods grouped into three broad categories: a cognitive-conceptual category emphasising problem-solving skills; an affective-cognitive category emphasising positive attitudes to learning and “learning to learn” skills; and a basic skills category emphasising fundamental skills in reading, maths, spelling and language. Each of the models was proposed by a different sponsor - mostly university education colleges or education research organisations.

The schools were not assigned a method, rather they just picked the one they wanted to adopt (or were already using).

Direct Instruction was included in the basic skills category. The method had been pioneered by Siegfried “Zig” Engelmann, a senior educational specialist at the University of Illinois when Project Follow Through began (he moved, with the Direct Instruction research project, to the University of Oregon in 1970), and Wesley Becker, a professor of clinical psychology at Illinois.

Engelmann, who died in 2019, had started off working in advertising. But after devising a teaching method through testing on his children, he had put his theories into practice at a preschool at the University of Illinois, aiming to accelerate the learning of disadvantaged children.

Direct Instruction had gathered enough attention by 1968 to be featured in an episode of the Walter Cronkite-hosted CBS documentary series The 21st Century, in which Engelmann explained how his method worked “not in terms of magic, not in terms of hoping that somehow by learning word games or listening to records [the child will] learn to read, but in identifying the skills that a kid has to know in order to read, and then systematically programming those”.

It’s worth pointing out at this point that there’s a distinction between two strands of the method. “Upper-case” Direct Instruction (used in Project Follow Through), or DI, involves carefully planned and scripted lessons, an incremental step design intended to avoid overloading pupils, and a “faultless communication” technique that aims to use examples to induce wider concepts and eliminate the chance of misinterpretation, says Kurt Engelmann, Zig Engelmann’s son and president of the National Institute for Direct Instruction (NIFDI), which provides course materials for schools implementing the method.

Direct Instruction is a method that involves specific teaching practices, curriculum materials, monitoring and training systems - basically a “full-system approach”, explains Doug Carnine, emeritus professor in the University of Oregon’s Department of Education Studies, who was integral to the development of DI as a collaborator with Zig Engelmann.

Meanwhile, “lower-case” direct instruction brings in the instructional delivery techniques of DI - fast pace, high interaction, modelling and feedback for students - without the same full-system approach, says Kurt Engelmann.

Lower-case direct instruction was, and remains, a common approach to teaching; DI was at the time, and is even now, something quite different.

So, how exactly did DI fare in the Project Follow Through results?

Project outcomes were judged by tests of basic skills, cognitive skills (measured through Metropolitan Achievement Tests, or MATs) and affective strengths. Early results became available in 1974, before final results were formulated in 1977.

There were a variety of analyses of the data, but all converged on judging that Direct Instruction and the University of Kansas’ Behavioural Analysis model were the strongest performers on the MAT tests.

A book published this year, All Students Can Succeed: a half century of research on the effectiveness of Direct Instruction, describes the results in an even more positive light for DI: “Students from the DI sites significantly outperformed students in the comparison schools in all three of the areas that were measured: basic skills, cognitive skills and affective measures. No other program had positive results in all three areas, nor did any other program have as many positive results as DI.”

Criticism and conspiracy

However, critics at the time soon raised concerns about Project Follow Through’s methodology. A 1978 report, funded by the Ford Foundation and written by four education professors, critiqued the early data analysis. It argued that the MAT tests naturally favoured the basic skills models, and suggested the project should have focused on how to make each model work better, rather than identifying which was most effective.

For advocates of DI, that report was central to a conspiracy around the Project Follow Through results - one intended to erase the method’s victory over the approaches favoured by an educational establishment wedded to progressive, child-centred teaching.

The decision not to publish the project’s final report lent weight to that theory.

“The federal government refused to release the final report,” says Carnine. “So, to start with, the results weren’t even out there, which makes it even easier for the educational community to ignore.”

When questioned by a US senator on the failure of the results to emerge in March 1978 (number one US single: Night Fever by the Bee Gees), Ernest Boyer, then commissioner of education, the equivalent of today’s US secretary of education, wrote: “Since only one of the models, and therefore only one of the sponsors, was found to produce positive results more consistently than any of the others, it would be inappropriate and irresponsible to disseminate information on all the models which carried the implication that such models could be expected to produce generally positive outcomes.”

“Very powerful people - the people who were behind the projects that weren’t succeeding - were able to bury the results,” says Jean Stockard, emerita professor in the University of Oregon’s School of Planning, Public Policy and Management, and one of the authors of All Students Can Succeed.

But didn’t the official verdict of the project judge that there was no overall effect of the interventions?

That’s because the authors put “all the [different methods] together and said overall there wasn’t an effect - but that’s simply because only one [DI] had an effect”, Stockard argues.

Zig Engelmann was clearly unhappy. “The truth about Follow Through was silently drowned like an unwanted kitten, and nobody protested,” he later wrote.

Yet despite such assurances about the validity of the findings, criticisms of the Project Follow Through results have not fallen away.

The original failure to allocate models randomly to schools “potentially introduced a host of alternate explanations for observed differences” in how each model performed, beyond the effectiveness of each model, says Jack Schneider, assistant professor in the College of Education at the University of Massachusetts Lowell and author of From the Ivory Tower to the Schoolhouse: how scholarship becomes common knowledge in education.

There was also “significant variation” in how the same models performed at different sites in the study, “indicating that the results may be more ‘noise’ than ‘signal’,” Schneider adds.

And it is not surprising that “curricular programmes oriented around instruction in basic skills are better at raising basic competencies measured by basic assessments,” he continues. “The key word here is ‘basic’, and that’s what all the ensuing argument was about. Can we conclude that a programme is better if it achieves its results by narrowing its aims?”

Advocates of DI can fire back with a response: that Project Follow Through prompted a huge amount of separate research on DI that offers further support.

“If those results [from Project Follow Through] were flawed, then they wouldn’t have shown up in other studies,” says Stockard, whose book painstakingly examines the data in 549 reports on DI across 50 years. “No matter how we sliced the data, we got the same kinds of results: DI is incredibly effective.”

Sceptical researchers think they can fire that one back over the net. Karen Eppley, associate professor of curriculum instruction at Penn State University, has published a paper evaluating 40 studies of Direct Instruction. “DI research is plagued by a wide range of methodological problems that severely limit the veracity of claims that can be made about the efficacy of DI,” she says. In her study, she concluded that the positive advantage of DI is often tiny and that positive results “faded over time”.

Moving across the Atlantic to the UK today (number one single at the time of writing: Mood by 24kGoldn featuring Iann Dior), we find some compromise: perhaps what Project Follow Through’s results show is not the supremacy of DI for all objectives, but a useful nudge about what might work for certain objectives.

“The legacy of Project Follow Through is a prima facie case that Direct Instruction was helpful for low-achieving students … on a fairly limited range of measures, but ones we still think are important today,” argues Wiliam.

Adam Boxer, head of science at The Totteridge Academy in North London, says DI’s performance in Project Follow Through showed “incredibly strong outcomes across a barrage of measures”, including on student “affect” - how much they enjoyed their learning.

The project’s “sheer scale and cost” and the fact that it “accords really nicely with the findings of cognitive science, despite being from a different psychological paradigm” are other factors making it worthy of attention today, argues Boxer.

He observes that the use of DI is “becoming bigger” within the trust of which his school is a part, United Learning, particularly as rapid catch-up for Year 7s when they enter secondary school.

Meanwhile, the Education Endowment Foundation funded a pilot to implement a maths DI programme in the autumn term last year. The pilot - a collaboration between the Midland Academies Trust and NIFDI, involving students in Years 7-9 struggling to make progress in maths across eight schools in the trust - aimed to test whether there is merit in further research on DI (see box, page 22).

The Midland Academies Trust had already been using DI to help students arriving below age-related expectations to catch up, says Robin Shakespeare, the trust’s director of education. When starting to use DI, the trust had noted the “considerable volume of research from the US - because the research is pretty compelling”, says Shakespeare. But the key question, he adds, was: “Could something that had volume of educational evidence in another context be brought over to ours? The short answer was yes, it absolutely could be.”

Already, in some of the schools involved in the pilot, DI groups have outperformed higher-attaining groups not taught with Direct Instruction, Shakespeare says. And these were students who had been “working two to three years below age-related expectations”.

“One of the things we find with Direct Instruction is that it’s got that stickability”, which reflects cognitive science that suggests repetition is necessary for learning to be transferred to long-term memory, adds Shakespeare.

An ideological battle

But, even here, questions remain. For example, is Direct Instruction just for students who have fallen behind in specific subjects?

“It can work with any students that place into the programme, at their skill level. It works with higher-performing students, too,” says Kurt Engelmann, who points to a private school network in North Carolina that uses the method.

Is Direct Instruction too “traditional” for modern schools, too out of step with more progressive methods, as some argue? Is it Bobby Vinton teaching for a 24kGoldn age?

“The progressive ones you’re talking about are those other [models] in Follow Through,” replies Kurt Engelmann. “Progressive in rhetoric, progressive in goals, but not necessarily progressive in outcomes. Direct Instruction is progressive in outcomes.”

And it’s “not old-fashioned in the least - it’s absolutely revolutionary”, he argues.

Wiliam takes a slightly more cautious approach to these questions: “I’m perfectly open to the idea that if kids discover things for themselves that their learning is more stable, secure, deeper and longer lasting. But the trouble is that a lot of kids haven’t got that time.”

He adds: “I think anyone who rejects Direct Instruction because they don’t like the label is making a foolish decision that potentially disadvantages kids, especially the kids who find learning more difficult, who learn at a slower rate.”

However, Eppley counters by saying that the method is most often used in the US by “for-profit ‘no-excuse’ charter schools that disproportionately serve children of colour in economically disenfranchised urban communities”.

In her argument, “DI’s focus on the lowest-level skills is an equity issue. DI limits children’s opportunities to learn high-value literacy skills that are at the heart of affluent schools and classrooms”.

Schneider, meanwhile, sees a big question being raised by the use of DI: “How much are we willing to settle for, when it comes to the education of the less advantaged? If the answer is, ‘We’ll settle for the basics,’ then a programme like Direct Instruction has a great deal of evidence in support of it. If the answer is, ‘We want for them everything we want for all children,’ the solution is far less clear.”

For Carnine, it’s the exact opposite on equity and, in particular, racial equity: DI offers a way out of a situation in which “hundreds of thousands of poor kids of colour are being miseducated in vast numbers”.

These arguments could go on ad infinitum. As the above makes clear, everything about DI is still fiercely debated, despite what advocates think Project Follow Through shows, and the debate spotlights the enduring (probably never-ending) battle between educational “progressives” and “traditionalists”, between advocates of child-centred and teacher-centred approaches.

Ultimately, you don’t have to take a side on Direct Instruction to recognise that the history of Project Follow Through raises important questions about educational research: is it possible to directly compare the effectiveness of different teaching methods in a meaningful “scientific” way? Or is any such attempt doomed to descend into an ideological battle?

Everyone desperately wants an answer to the question of what the best method of teaching is, but first you have to agree on what the best “outcome” for any given child might be. Despite the US spending a vast amount of money on an eight-year project, we did not get an indisputable answer to that question - nor did we get agreement on outcomes. Perhaps that’s a sign we never will?

John Morgan is a freelance journalist

This article originally appeared in the 30 October 2020 issue under the headline “(Mis)directed learning”
