Reviews of Direct Instruction
While decades of well-designed, scientific research show that Direct Instruction programs are highly effective, the programs have faced criticism. Some of these criticisms concern how DI affects students. For instance, some suggest that DI is less effective than other types of instruction, such as the “constructivist” or “discovery” approaches, or that it has no long-lasting impact on students’ achievement. Others suggest that it is appropriate only for disadvantaged students or those with learning difficulties. Some even claim that exposure to Direct Instruction results in poor self-image, behavior problems, or other problems for students. The accumulated evidence counters each of these claims. The research shows that Direct Instruction is more effective than other curricular programs and that the positive effects persist through high school. The positive effects occur with students of all ability levels and social backgrounds. Students exposed to Direct Instruction also have greater self-esteem and self-confidence than students in other programs, primarily because they are learning more material and understand that they can be successful students.
Other criticisms focus on the Direct Instruction programs and their use by teachers. Some suggest that Direct Instruction is only “rote and drill” and that teachers don’t like it because it hampers their creativity. Again, the research evidence counters these claims. Rather than involving a “rote and drill” approach, DI programs are designed to accelerate students’ learning and allow them to learn more material in a shorter amount of time. The programs are technical and prescriptive, designed to make teachers more effective, much as the prescribed techniques for surgeons and pilots ensure optimal results for their patients and passengers. Yet, just as surgeons’ and pilots’ personalities create the atmosphere of an operating room or plane, the individual personalities and creativity of DI teachers permeate their classrooms and interactions with students. The research shows that teachers like DI programs because the programs help their students learn more and make them more effective teachers.
Others suggest that using separate elements of the programs will result in outcomes that are just as good as using the full DI programs. Yet, the research shows that using only some of the elements of DI programs, what is sometimes called “direct instruction” or “little di,” is far less effective than using the true Direct Instruction programs developed by Engelmann and associates.
The claim that Direct Instruction programs are not effective has been promulgated in recent years by the What Works Clearinghouse (WWC), which is funded by the United States Department of Education to provide reviews of curricular programs. Careful analyses of the WWC reports show that they can be very misleading and provide inaccurate summaries of the research. As a result, some WWC reports give positive ratings to programs that researchers have found to be ineffective and negative ratings to programs that the research has found to be highly effective.
The scientific literature emphasizes the importance of multiple tests, or replications, of studies to ensure that conclusions are accurate. Over the last five decades, there have been many studies of Direct Instruction’s efficacy, and researchers have reviewed and summarized this vast literature. They have found strong and consistent evidence of DI’s effectiveness.
Two approaches are typically used in such analyses: systematic literature reviews and meta-analyses. Both approaches begin with a delineation of the topic to be covered. For instance, some have looked only at studies of reading or of mathematics. Some have focused on studies of whole school reform. Some may look only at special populations, such as students with disabilities. Systematic literature reviews and meta-analyses may also use methodological criteria to limit the range of studies examined, such as sample size or the nature of the research design. Once the researchers have determined the topic and criteria to be used, they try to amass all the relevant studies and then carefully examine their findings.
The procedures used to summarize the findings differ slightly for the two approaches. Systematic literature reviews usually involve narrative summaries of the results. They describe the nature of each study and compare and contrast conclusions. These reviews usually include simple tallies of the outcomes, noting the proportion of results that are positive, negative, or indeterminate. Meta-analyses use a more statistical approach. They translate results into a common numerical metric, usually an effect size, and statistically analyze variations in the metric and factors that might influence it. All of the literature reviews and meta-analyses of Direct Instruction materials have found strong evidence of their effectiveness.
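To make the effect-size metric concrete, the sketch below computes Cohen's d, a common standardized mean difference used in meta-analyses: the gap between two group means divided by their pooled standard deviation. The function name and the test scores are invented for illustration only and do not come from any DI study.

```python
import statistics

def cohens_d(group_a, group_b):
    """Cohen's d: standardized mean difference between two groups,
    computed with the pooled sample standard deviation."""
    n1, n2 = len(group_a), len(group_b)
    m1, m2 = statistics.mean(group_a), statistics.mean(group_b)
    v1, v2 = statistics.variance(group_a), statistics.variance(group_b)
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return (m1 - m2) / pooled_sd

# Hypothetical post-test scores for a program group and a comparison group
program_scores = [78, 85, 82, 90, 88, 84]
comparison_scores = [70, 75, 72, 80, 74, 77]
print(round(cohens_d(program_scores, comparison_scores), 2))
```

Because every study's results are converted to this same metric, a meta-analysis can average effect sizes across studies and examine how factors such as grade level or study design influence them.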
For more, see the work of: