
Numerous myths about DI circulate in education circles, usually spread by people who have never taught the program or never seen it used by teachers who have received proper training and support. These myths concern the supposed rigidity of DI, its inappropriateness for certain populations, and its restrictions on creativity. They are just that: myths.

After you’ve taught the program for a few months, you will see that the program has the flexibility to accommodate the needs of lower and higher-performing students, and it allows teacher creativity within the confines of the script, much as a play script allows an actor to be creative within its confines. Most importantly, you will see that all students succeed in ways you hadn’t thought possible before. The DI programs provide teachers with a powerful tool for presenting an instructional sequence that has been verified to be highly effective with the full range of learners. By providing effective wording and examples, the scripts allow teachers to focus on students’ responses.

Teachers don’t need to worry about how to present critical skills and concepts. Instead, they can concentrate on what students know, what they don’t understand, and where they need additional practice or support. Your interaction with students will increase with DI because the DI programs elicit high rates of student responses in each lesson. With DI, you will have a much better understanding of your students’ skill levels than ever before. DI is effective with all students as long as they are placed and grouped at their skill levels and taught to mastery every day.

Groups should be homogeneous with respect to students’ current performance level, and these groups should be flexible in order to accommodate different rates of student learning. Some students master skills and concepts quickly and may be ready to move to a higher group. Other students need additional practice and might need to be moved to a lower group. These adjustments to student placement are made on a weekly basis through the analysis of student performance data. To the largest extent possible given the school’s resources, instruction is individualized for students through flexible grouping.
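As a rough illustration, the weekly regrouping decision described above can be sketched as a simple threshold rule. The 90% and 70% cut-offs here are invented for the example, not published DI placement criteria:

```python
# Hypothetical sketch of a weekly regrouping decision based on mastery data.
# The mastery/struggle thresholds are illustrative assumptions only.

def weekly_regrouping(scores, mastery=0.9, struggle=0.7):
    """Classify each student by this week's mastery rate.

    scores: dict mapping student name -> fraction of items mastered this week.
    Returns a dict mapping each name to a placement suggestion.
    """
    decisions = {}
    for student, rate in scores.items():
        if rate >= mastery:
            decisions[student] = "consider moving up"
        elif rate < struggle:
            decisions[student] = "consider moving down"
        else:
            decisions[student] = "stay"
    return decisions

print(weekly_regrouping({"Ana": 0.95, "Ben": 0.80, "Cal": 0.60}))
```

In practice these decisions would be made by the leadership team from richer data, but the sketch captures the idea that placement changes are driven by explicit performance rules rather than impressions.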

Senior Direct Instruction Author Siegfried Engelmann addresses many of the myths related to DI in a series of videos recorded in 2017. In the video segments, you will hear Zig's thoughts on everything from "Drill and Kill" to multi-sensory learning to teachers' creativity in a DI classroom.


For more, see the work of:

Professor Sara Tarver, University of Wisconsin
DI and Higher Order Thinking, compiled by Bill Sower (PDF)
Myths and Truths About Direct Instruction (Tarver, 1998) (PDF)
Research on Direct Instruction: 25 Years Beyond DISTAR, Chapter 3: “Myths about Direct Instruction” (Adams & Engelmann, 1996) (PDF)

Project Follow Through was the most extensive educational experiment ever conducted. Beginning in 1968 under the sponsorship of the federal government, it was charged with determining the best way of teaching at-risk children from kindergarten through grade 3. Over 200,000 children in 178 communities were included in the study, and 22 different models of instruction were compared. The communities that implemented the different approaches spanned the full range of demographic variables (geographic distribution and community size), ethnic composition (white, black, Hispanic, Native American) and poverty level (economically disadvantaged and economically advantaged). Parent groups in participating communities selected one approach that they wanted to have implemented, and each school district agreed to implement the approach the parent group selected.

Follow Through had strong safeguards to ensure that the participating districts actually implemented the approaches they adopted. The government provided stipends to supplement local budgets and support the implementations and also provided comprehensive health services, including a nutritional component, plus medical-dental care.

[Chart: Project Follow Through results]

Evaluation of the project occurred in 1977, nine years after the project began. The results were strong and clear. Students who received Direct Instruction had significantly higher academic achievement than students in any of the other programs. They also had higher self-esteem and self-confidence. No other program had results that approached the positive impact of Direct Instruction. Subsequent research found that the DI students continued to outperform their peers and were more likely to finish high school and pursue higher education.

Although the evaluation clearly favored the Direct Instruction Model, the results of the evaluation were suppressed by the U.S. Office of Education. Here is a letter (PDF) from the U.S. Commissioner of Education at the time (equivalent to today’s Secretary of Education) explaining why the results of the evaluation were not disseminated:
[Images: Boyer letter, pages 1 and 2]
Siegfried Engelmann, Senior Author of the DI programs, examines the Commissioner’s rationale for not disseminating the evaluation results in the following excerpt (pp. 250–51) from Teaching Needy Kids in our Backward System (ADI Press, 2007):

The first sentence of point 1 in Boyer’s letter contradicts the assertion by Wilson, House, and Glass about whether Follow Through was designed to find successful models or to evaluate the aggregate of models. “Since the beginning of Follow Through in 1968, the central emphasis has been on models.”

Boyer freely admits that policymakers accepted the data as valid. Several references in his letter indicate that he had no doubt that only one model was highly successful, which means that he was aware of facts that had never been shared with states and school districts.

The ultimate conclusion Boyer drew was that if there was only one successful model, it should be treated like all the other models. In response to the question about funding selected models, Boyer’s logic seems to be that somehow such funding would be irresponsible because there were not selected models, only one selected model. So rather than fund that model, the Office of Education assumed it was equitable to treat all models the same and simply promote selected sites. Imagine spending half a billion dollars to draw this conclusion.

The effect Boyer presumed would happen is naïve: “ ... we are funding 21 of the successful sites as demonstration sites this year so that other schools and educators will learn about, understand, and hopefully adopt the successful activities and procedures taking place in these effective sites.”

Boyer had data that the effective non-DI schools were aberrations and that they were so elusive that the sponsors could not even train their other schools to do what the successful school did. If there were any validity to the notion that people would visit a dissemination model for High Scope and be able to implement as well as the school visited, the sponsor would have been the first to know about this excellent site and, therefore, the first to try to disseminate in his other sites. This dissemination failed. The successful school remained an outlier. Therefore, there would be no hope of visiting schools being able to replicate the procedures of this school. In fact, the National Diffusion Network (NDN) did not create more than a handful of success stories for failed schools.

Schools from High Scope and other failed models were disseminated for one reason: to preserve at least a modicum of credibility to all the favored ideas and practices of mainline educational thought. If everybody failed, at least Stallings, Piaget, and the rationale that drove at least 19 of 22 models would not be shown to be grossly inferior to the ideas and practices that innervated DI.

In terms of morality, Boyer’s decision not to permit sponsors to disseminate was brutal. Why wouldn’t it have been possible to fund us as a model and fund sites from other models? The consistent performance of our model affirmed that our techniques and programs were replicable and that, with proper training, teachers in failed schools could succeed. Why wouldn’t that information be important enough to disseminate? Why did the government feel that it had to initiate some form of affirmative action to keep failed models floating?

Boyer admits that the results didn’t come out the way experts predicted. Policymakers didn’t have the vision of only one program excelling in basic skills and cognitive skills or the same program excelling in reading, spelling, and math. They were not prepared for the possibility that this program would also have children with the strongest self-image.

Read the full description of the Follow Through experiment, the results, and the aftermath in a chapter (PDF) from Engelmann's book, Teaching Needy Kids in our Backward System (ADI Press, 2007).

Follow the links below to get additional information on Project Follow Through, including its design, the findings, and what happened with the results:

Athabasca University online module on Direct Instruction Evidence: Project Follow Through.

A special issue of Effective School Practices (PDF), published in 1995–96, describes Project Follow Through and its implications for current generations of students.

Shepard Barbash describes the design and outcomes of Project Follow Through in his book Clear Teaching.

Staff of NIFDI's Department of Research and Evaluation have prepared a comprehensive bibliography (PDF) of writings related to Direct Instruction and Project Follow Through.

Veteran Direct Instruction author, researcher, and implementer Bonnie Grossen presented this webinar, providing insight into the outcomes of the project and the response to the findings.

Linda Carnine, Susie Andrist, and Jerry Silbert discuss Project Follow Through with Dr. Zach Groshell on The Direct Instruction Podcast.

All Direct Instruction programs are developed with extensive field testing. Most traditional programs are routinely published without first being subjected to extensive trials in actual learning situations. In contrast, Direct Instruction programs are field tested to determine the extent to which students actually master the material that is presented in a program and the extent to which teachers are able to follow the program’s presentation specifications.

Field testing begins as the Direct Instruction programs are being developed. The authors select several classrooms in which to field test the first version of the curriculum. These classrooms are selected to include a variety of students and teachers. At least some of these classes must include the lowest performing students who will be placed in the program. This is important because the lower performers make all the mistakes that higher performers make, plus additional mistakes that higher performers tend not to make.

Every activity in the field-test version of the program is assessed. Data are kept on the number of students who miss particular items and the incorrect responses that were made by the students. Rules are established about what student performance levels indicate a need to revise the teaching sequence. Student errors are analyzed to determine how the sequence of instruction likely led to the student problems. Revisions may include:

    • making the initial explanations more clear
    • including more thorough teaching of component skills that were not adequately taught
    • providing for more scaffolding to prompt students to apply strategies
    • providing more practice to help students discriminate when to apply strategies
    • other elements that will improve the instruction
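The item-level revision rules described above can be sketched as a simple error-rate check. The 15% threshold and data layout here are assumptions for illustration, not an actual DI field-test criterion:

```python
# Illustrative sketch of field-test item analysis: flag items whose error
# rate suggests the teaching sequence needs revision. The 15% threshold is
# an assumed example value, not a published DI rule.

def flag_items_for_revision(item_results, error_threshold=0.15):
    """item_results: dict mapping item id -> list of bools (True = correct).

    Returns the ids of items whose error rate exceeds the threshold.
    """
    flagged = []
    for item, responses in item_results.items():
        error_rate = responses.count(False) / len(responses)
        if error_rate > error_threshold:
            flagged.append(item)
    return flagged

results = {
    "item_1": [True, True, True, True, False],   # 20% errors -> flagged
    "item_2": [True, True, True, True, True],    # 0% errors -> passes
}
print(flag_items_for_revision(results))  # -> ['item_1']
```

The actual analysis also examines which incorrect responses students gave, since the pattern of errors (not just their rate) indicates how the instructional sequence should be revised.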

The revised version of the curriculum is then field tested and revised again. Data are kept on student performance on each item. In addition to the data on student performance, information is collected on how long tasks take to present, along with feedback from teachers about the clarity of directions.

Each element of this field testing procedure is essential to providing teachers with a program capable of being an effective instructional tool for all students. The extensive testing that underlies the development of Direct Instruction programs is the primary reason that they are so effective.


For more information on the theory and process that underlies the field testing of Direct Instruction programs, see:

pdf Research on Direct Instruction: 25 Years Beyond DISTAR, Chapter 2: “Features of Direct Instruction Programs” (Adams & Engelmann, 1996)

Proactive administrative support and strong commitment to success are prerequisites for developing outstanding school-wide Direct Instruction (DI) implementations. When the school's principal and leadership team demonstrate that they are committed to implementing DI with high fidelity every day, and they communicate this commitment to the rest of the staff through words and deeds, the prospect of success with DI increases substantially throughout the school. Isolated teachers may achieve considerable success implementing DI alone in an uncoordinated effort, but the effect of individual teachers implementing DI by themselves is usually far less than the effect of a schoolwide, coordinated implementation of DI led by an actively involved administrator. Success with DI depends on many factors—schedules, assignment of paraprofessionals, professional development, data analysis—that cannot be controlled by individual teachers. These factors are most effectively implemented through a coordinated and systematic effort, which requires consistent and forward-thinking leadership.

Successful school leaders take decisive action through all stages of a DI implementation. They:

  1. ensure initial support of all staff members for the DI implementation;
  2. understand the major factors that lead to success with DI, including the purpose and function of NIFDI support services;
  3. set up the structural components of a successful implementation, such as the schedule and assignment of paraprofessionals, before instruction begins;
  4. arrange for initial program training and other professional development sessions;
  5. ensure that staff members attend training and practice sessions;
  6. identify student problems through data analysis and direct observation of instruction;
  7. take appropriate actions to resolve student problems;
  8. recognize and celebrate student achievement!

Observing Classroom Instruction
Regular classroom observations conducted by the administrator with a focus on student performance are critical for a successful DI implementation. The administrator's presence in the classroom communicates to staff the school's strong commitment to implementing the model with fidelity. Direct observation by administrators also provides another set of eyes to identify possible instructional problems and assess the status of past problems. NIFDI trains principals and other administrators to conduct 5-minute observations that provide quick, comprehensive and powerful assessments of classroom instruction.

Ensuring Accountability for Student Success
The principal is the school's leader in the NIFDI accountability system. In addition to regular classroom observations, the principal can help ensure a successful implementation and quality instruction through active participation in weekly conference calls conducted with NIFDI. Each week, the principal attends a conference call between NIFDI and the school's leadership team, during which the progress of each instructional group is discussed. A summary of the call is provided to the school and describes the actions to be taken before the next call and designates who will take which actions. Principals can greatly facilitate the implementation by ensuring that the actions described are in fact taken before the next conference call.

IMPORTANT: A long-lasting commitment to implementing Direct Instruction (DI) with fidelity is a prerequisite to maximizing student achievement with DI. Student achievement may surpass historical levels after just a couple of years of DI, especially in the lower grades. Maximizing student achievement—especially in the upper grades—requires years of implementing DI with fidelity. Teachers usually require thorough program training and several years of expert in-class coaching and professional development before they become highly effective with DI. It takes several years for student performance in kindergarten to reach its peak as kindergarten teachers master DI techniques. It takes several more years for student performance in the upper grades to reach its peak as cohorts of students work their way up through the grades. If an elementary school contains grades K-5, it can take more than six years before an implementation reaches its full potential in the upper grades. Strong leadership must be in place throughout this time to maintain the school's commitment to implement the program with fidelity and maximize student performance for all students.

Implementing Direct Instruction Successfully

When implemented fully, Direct Instruction (DI) is unparalleled in its ability to improve student performance and enhance students’ self-esteem. In order to implement DI effectively, much more is required than simply purchasing instructional materials. The following two-part tutorial guides administrators, teachers, and coaches through the key features of a successful DI implementation. Part I provides an overview of the steps schools need to take in preparation for a DI implementation before school starts, while Part II provides an overview of the steps schools need to take after school has started.

IMPORTANT: This tutorial is an intensive video series comprising 18 segments, each followed by a series of questions. Users should allow approximately three hours to watch the videos and complete the questions. NIFDI recognizes the demands placed on school officials' time and, for this reason, has structured the tutorial so users may stop at any time and later resume where they left off.

Enroll in the tutorial here


New to Direct Instruction? Watch the Introduction to Direct Instruction Video Series before taking the online tutorial.
