
The What Works Clearinghouse (WWC) is a federally funded program established in 2002 that evaluates educational interventions and publishes reports and summary ratings. The reports have received extensive criticism, including concerns such as examining only a small proportion of the available evidence, errors in the review process, and a lack of peer review and comparisons of results to related literature. Two WWC reports issued in July 2013 illustrate the severe problems that can permeate the process and result in the dissemination of erroneous conclusions. In one case, the WWC’s errors resulted in a positive rating for a program that has been determined, by more inclusive and careful reviews, to be ineffective and inefficient. In the other case, the WWC’s errors resulted in a negative conclusion regarding a program that has been judged, by more inclusive and careful reviews, to be highly effective. This document describes these errors and the preliminary steps needed to prevent their recurrence.

The What Works Clearinghouse (WWC) is a federally funded program established in 2002 to evaluate educational interventions and provide reliable, trustworthy summary ratings and reports of their effectiveness. Yet some of its reports directly contradict the conclusions of the research literature, giving a positive rating to a program that scholars have found to be ineffective (Reading Recovery, or RR) and failing to give positive ratings to programs the research literature has found to be highly effective (Direct Instruction, or DI). This article uses a comparative case study approach to examine how these contradictory conclusions developed. It contrasts the methods the scholarly world and the WWC use to summarize literature and their conclusions about the two curricula. It then examines errors in the three major steps of the WWC review process: 1) compiling lists of studies to examine, 2) applying WWC criteria to select studies for further analysis, and 3) interpreting and reporting the results of the studies. Extensive problems are documented at each step, systematically favoring RR and disfavoring DI. Implications of the results are briefly discussed.

A report posted by the What Works Clearinghouse (2012) in July 2012 examined two studies of the use of Reading Mastery with students with learning disabilities and concluded that the program had “no discernible effects on reading comprehension and potentially negative effects on alphabetics, reading fluency, and writing.” This conclusion stands in stark contrast to dozens of studies of Reading Mastery and other elements of the Direct Instruction (DI) corpus of materials. This technical report documents significant errors in the WWC report. The WWC analysis was based on only two articles. The first compared two very similar Direct Instruction programs, Reading Mastery and Horizons, and found that students in both programs made gains over the academic year that were significantly greater than those of national and state-level populations. Because gains in the two programs were similar, the WWC concluded that Reading Mastery was no better than its comparison program, ignoring both that students performed significantly better than national and state norms and that the comparison program differed on only a very few characteristics. The second article involved two groups of students, both of which received Reading Mastery as part of the schools’ “usual and customary school day curriculum.” One group also received 45 minutes of supplemental phonemics-related instruction from their regular classroom teachers. Not surprisingly, the group receiving the additional instruction made significantly larger gains than the group without the additional learning time. Despite this difference in exposure, and the fact that both groups appear to have had Reading Mastery as their usual reading curriculum, the WWC used these results to suggest that Reading Mastery could have potentially negative effects.

The educational research community, including the What Works Clearinghouse, has defined “scientifically based research” in ways that emphasize small randomized controlled trials within schools or artificial settings. In contrast, the classic social science literature clearly notes the limits of randomized assignment in field settings such as schools. More importantly, it offers alternative designs that promote internal validity, can utilize the type of data schools routinely collect, and can answer the questions that most often concern school officials and parents. This paper shows how the data schools obtain as part of their routine state and federally mandated assessment programs can be used to examine the effectiveness of educational curricula. More importantly, based on the logic developed in the classic research design literature, it suggests that appropriate analysis of these data can approximate the quality of results that could be obtained through randomized controlled trials of the same curricula. The procedure is illustrated with published data regarding the Reading Mastery curriculum. Empirical results are comparable to those obtained in meta-analyses of the curriculum, with average effect sizes far surpassing the usual criterion for educational importance. The analysis could easily be used by local school officials wanting to assess the impact of curricular changes.
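The effect-size comparisons described above can be illustrated with a short calculation. The sketch below is not from the article: the scores and the helper function are hypothetical, and it simply computes a standardized mean difference (Cohen's d with a pooled standard deviation), the kind of effect size reported in meta-analyses of curricula. A value of 0.25 is a commonly cited threshold for educational importance.

```python
# Illustrative sketch (hypothetical data, not from the article): computing a
# standardized mean difference from two groups' assessment scores.
import math

def cohens_d(treatment, control):
    """Cohen's d: difference in means divided by the pooled standard deviation."""
    n1, n2 = len(treatment), len(control)
    m1 = sum(treatment) / n1
    m2 = sum(control) / n2
    # Sample variances (n - 1 denominator)
    v1 = sum((x - m1) ** 2 for x in treatment) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in control) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Hypothetical end-of-year test scores for a curriculum group and a comparison group
curriculum_scores = [52, 58, 61, 55, 60, 57]
comparison_scores = [48, 50, 53, 47, 51, 49]
d = cohens_d(curriculum_scores, comparison_scores)
```

With these invented scores, d works out to roughly 2.68, well above the 0.25 criterion; real curricular comparisons typically yield far smaller values.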

Implementing Direct Instruction Successfully

When implemented fully, Direct Instruction (DI) is unparalleled in its ability to improve student performance and enhance students’ self-esteem. Implementing DI effectively requires much more than simply purchasing instructional materials. The following two-part tutorial guides administrators, teachers, and coaches through the key features of a successful DI implementation. Part I covers the steps schools need to take to prepare for a DI implementation before school starts, and Part II covers the steps schools need to take after school has started.

IMPORTANT: This tutorial is an intensive video series composed of 18 segments, each followed by a series of questions. Users should allow approximately three hours to watch the videos and complete the questions. NIFDI recognizes the high demands placed on school officials’ time and, for this reason, has structured the tutorial so users may stop at any time and later resume where they left off.

Enroll in the tutorial here

New to Direct Instruction? Watch the Introduction to Direct Instruction Video Series before taking the online tutorial.
