Project Follow Through
Project Follow Through was the most extensive educational experiment ever conducted. Beginning in 1968 under the sponsorship of the federal government, it was charged with determining the best way of teaching at-risk children from kindergarten through grade 3. Over 200,000 children in 178 communities were included in the study, and 22 different models of instruction were compared. The communities that implemented the different approaches spanned the full range of demographic variables (geographic distribution and community size), ethnic composition (white, black, Hispanic, Native American) and poverty level (economically disadvantaged and economically advantaged). Parent groups in participating communities selected one approach that they wanted to have implemented, and each school district agreed to implement the approach the parent group selected.
Follow Through had strong safeguards to ensure that the participating districts actually implemented the approaches they adopted. The government provided stipends to supplement local budgets and support the implementations, and it also provided comprehensive health services, including a nutritional component and medical-dental care.
Evaluation of the project occurred in 1977, nine years after the project began. The results were strong and clear. Students who received Direct Instruction had significantly higher academic achievement than students in any of the other programs. They also had higher self-esteem and self-confidence. No other program had results that approached the positive impact of Direct Instruction. Subsequent research found that the DI students continued to outperform their peers and were more likely to finish high school and pursue higher education.
Although the evaluation clearly favored the Direct Instruction Model, the results of the evaluation were suppressed by the U.S. Office of Education. Here is a pdf from the U.S. Commissioner at the time (equivalent to today's Secretary of Education) explaining why the results of the evaluation were not disseminated.
Siegfried Engelmann, Senior Author of the DI programs, examines the Commissioner's rationale for not disseminating the evaluation results (from pp. 250-51 of Teaching Needy Kids in our Backward System, ADI Press, 2007):
The first sentence of point 1 in Boyer’s letter contradicts the assertion by Wilson, House, and Glass about whether Follow Through was designed to find successful models or to evaluate the aggregate of models. “Since the beginning of Follow Through in 1968, the central emphasis has been on models.”
Boyer freely admits that policy makers accepted the data as valid. Several references in his letter indicate that he had no doubt that only one model was highly successful, which means that he was aware of facts that had never been shared with states and school districts.
The ultimate conclusion Boyer drew was that if there was only one successful model, it should be treated like all the other models. In response to the question about funding selected models, Boyer's logic seems to be that somehow such funding would be irresponsible because there were not selected models, only one selected model. So rather than fund that model, the Office of Education assumed it was equitable to treat all models the same and simply promote selected sites. Imagine spending half a billion dollars to draw this conclusion.
The effect Boyer presumed would happen is naïve: “ ... we are funding 21 of the successful sites as demonstration sites this year so that other schools and educators will learn about, understand, and hopefully adopt the successful activities and procedures taking place in these effective sites.”
Boyer had data that the effective non-DI schools were aberrations and that their results were so elusive that the sponsors could not even train their other schools to do what the successful school did. If there was any validity to the notion that people could visit a dissemination model for High Scope and then implement it as well as the school they visited, the sponsor would have been the first to know about this excellent site and therefore the first to try to disseminate its practices to his other sites. This dissemination failed. The successful school remained an outlier. Therefore, there would be no hope of visiting schools being able to replicate the procedures of this school. In fact, the National Diffusion Network (NDN) did not create more than a handful of success stories for failed schools.
Schools from High Scope and other failed models were disseminated for one reason: to preserve at least a modicum of credibility to all the favored ideas and practices of mainline educational thought. If everybody failed, at least Stallings, Piaget, and the rationale that drove at least 19 of 22 models would not be shown to be grossly inferior to the ideas and practices that innervated DI.
In terms of morality, Boyer’s decision not to permit sponsors to disseminate was brutal. Why wouldn’t it have been possible to fund us as a model and fund sites from other models? The consistent performance of our model affirmed that our techniques and programs were replicable and that with proper training teachers in failed schools could succeed. Why wouldn’t that information be important enough to disseminate? Why did the government feel that it had to initiate some form of affirmative action to keep failed models floating?
Boyer admits that the results didn’t come out the way experts predicted. Policy makers didn’t have the vision of only one program excelling in basic skills and cognitive skills, or the same program excelling in reading, spelling, and math. They were not prepared for the possibility that this program would also have children with the strongest self-image.