The educational research community, including the What Works Clearinghouse, has defined “scientifically based research” in ways that emphasize small, randomized controlled trials conducted within schools or in artificial settings. In contrast, the classic social science literature clearly notes the limits of randomized assignment in field settings such as schools. More importantly, it offers alternative designs that promote internal validity, can use the type of data schools routinely collect, and can answer the questions that most often concern school officials and parents. This paper shows how the data schools obtain through their routine state- and federally mandated assessment programs can be used to examine the effectiveness of educational curricula. Based on the logic developed in the classic research-design literature, it further suggests that appropriate analysis of these data can approximate the quality of results obtainable through randomized controlled trials of the same curricula. The procedure is illustrated with published data on the Reading Mastery curriculum. The empirical results are comparable to those obtained in meta-analyses of the curriculum, with average effect sizes far surpassing the usual criterion for educational importance. The analysis could easily be used by local school officials who want to assess the impact of curricular changes.
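The effect-size comparison described above can be sketched concretely. The following is a minimal illustration, not the paper's actual analysis: it computes a standardized mean difference (Cohen's d, using the pooled standard deviation) between a curriculum group and a comparison group, as a district analyst might do with routinely collected assessment scores. The score lists are hypothetical placeholders, not data from the Reading Mastery study.

```python
import math

def cohens_d(treatment, comparison):
    """Standardized mean difference using the pooled standard deviation."""
    n1, n2 = len(treatment), len(comparison)
    m1 = sum(treatment) / n1
    m2 = sum(comparison) / n2
    # Sample variances (n - 1 in the denominator)
    v1 = sum((x - m1) ** 2 for x in treatment) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in comparison) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Hypothetical assessment scores for illustration only
rm_scores = [52, 58, 61, 55, 60, 57]    # curriculum group
cmp_scores = [48, 50, 53, 47, 51, 49]   # comparison group
d = cohens_d(rm_scores, cmp_scores)
```

An effect size of 0.25 standard deviations is a commonly cited benchmark for educational importance, so a computed d well above that level would correspond to the "far surpassing" result the paper reports.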