Most evaluators have embraced the goal of evidence-based practice (EBP). Yet many have criticized EBP review systems that prioritize randomized controlled trials and use various criteria to limit the studies examined. They suggest this approach could produce policy recommendations based on small, unrepresentative segments of the literature, and they recommend a more traditional, inclusive approach. This article reports two empirical studies assessing this criticism, focusing on the What Works Clearinghouse (WWC). An examination of outcomes of 252 WWC reports on literacy interventions found that 6% or fewer of the available studies were selected for review. Half of all intervention reports were based on only one study of a program. Data from 131 studies of a reading curriculum were used to compare conclusions reached using WWC procedures with those reached using more inclusive procedures. Effect estimates from the inclusive approach were more precise and closer to those of other reviews. Implications are discussed.
This report reviews criticisms of What Works Clearinghouse (WWC) publications and policies, errors identified in its publications, and issues regarding the transparency and accountability of the WWC. The review is based on material obtained from a series of Freedom of Information Act (FOIA) requests submitted by the National Institute for Direct Instruction and subsequent appeals. From the information provided through the FOIA requests and appeals, together with the publicly available information on the WWC, three conclusions appear clear: 1) the WWC suffers from a lack of transparency in its policies and guidelines, 2) the conclusions it draws in its reports can be misleading, and 3) the reports are potentially damaging to program developers and, ultimately, to the success of students. The validity and practicality of the WWC policies and reports are discussed.
Reviewing documentation related to 62 Quality Reviews of What Works Clearinghouse (WWC) publications, this report summarizes the reasons for the reviews, the revisions made and not made, and the inconsistent application of WWC policies during the publication of these reports and during the Quality Reviews. This information was obtained in the fall of 2013 via the Freedom of Information Act (FOIA). These Quality Reviews were performed in response to the concerns of 54 organizations, study authors, program developers, teachers, and education researchers. Forty-one different instructional programs and study reviews were examined in these Quality Reviews. The documentation revealed a wide range of concerns, particularly the misinterpretation of study findings. This issue received specific attention, especially in relation to how the WWC accounts for fidelity of implementation when determining its rating of effectiveness for specific programs. From the information provided through the FOIA request and the publicly available information, three conclusions appear clear: 1) the WWC suffers from a lack of transparency in its policies and guidelines, 2) the conclusions it draws in its reports can be misleading, and 3) the reports are potentially damaging to program developers and, ultimately, to the success of students.
A November 2013 report of the What Works Clearinghouse stated that it could find “no studies of Reading Mastery that fall within the scope of the Beginning Reading review protocol [and] meet What Works Clearinghouse (WWC) evidence standards” (WWC, 2013b, p. 1). This NIFDI technical report documents substantial errors in the WWC’s compilation of studies to examine and in its interpretations of individual studies. Effect sizes (Cohen’s d) were computed for the results of more than three dozen studies identified by the WWC but rejected for analysis. All of these studies conformed to standard methodological criteria regarding valid research designs and would have been accepted in scholarly reviews of the literature. The average effect size associated with Reading Mastery (RM) was .57, more than twice the .25 level traditionally used to denote educational significance. These results replicate meta-analyses that have found strong evidence of the efficacy of Reading Mastery. Given the high rate of error in this and other WWC reports, consumers are advised to consult reviews of studies in the standard research literature rather than WWC summaries.
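For readers unfamiliar with the effect-size metric used in the report above, the following is a minimal sketch of Cohen's d computed with a pooled standard deviation, the standard formulation for two-group comparisons. The numeric values are purely hypothetical for illustration, not data from any study in the review.

```python
import math

def cohens_d(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Cohen's d: standardized mean difference between a treatment
    group (t) and a control group (c), using the pooled SD."""
    pooled_var = (((n_t - 1) * sd_t ** 2 + (n_c - 1) * sd_c ** 2)
                  / (n_t + n_c - 2))
    return (mean_t - mean_c) / math.sqrt(pooled_var)

# Hypothetical example: treatment mean 55, control mean 50,
# both SDs 10, 30 students per group.
d = cohens_d(mean_t=55.0, mean_c=50.0, sd_t=10.0, sd_c=10.0,
             n_t=30, n_c=30)
print(round(d, 2))  # 0.5 -- twice the .25 educational-significance benchmark
```

Under this convention, the report's average effect size of .57 for Reading Mastery indicates treatment-group outcomes roughly 0.57 pooled standard deviations above control-group outcomes.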
Meta-analyses and reviews of the educational research literature have identified hundreds of efficacy studies. Yet the What Works Clearinghouse reports that very few of these analyses meet its selection criteria and standards of evidence. This report examines why these differences occur. It finds that WWC procedures differ markedly from standard practices within the social sciences, and that the WWC offers no academic or scholarly justification for its policies. Moreover, an empirical, quantitative analysis of the utility of the WWC approach indicates that it provides no “value added” to estimates of a curriculum’s impact. The costs of applying the WWC standards are far from minimal and result in highly selective and potentially biased summaries of the literature. It is suggested that the public would be better served if the WWC adopted the standard methodological practices of the social sciences.