
 Dr Kerry Hempenstall, Senior Industry Fellow, School of Education, RMIT University, Melbourne, Australia.

First published Nov 11, 2012; updated Jan 14, 2018

All my blogs can be viewed on-line or downloaded as a Word file or PDF at https://www.dropbox.com/sh/olxpifutwcgvg8j/AABU8YNr4ZxiXPXzvHrrirR8a?dl=0


 

How does one make judgements about which educational programs/approaches deserve respect and implementation? One can go to the primary sources (original research), although this may be very time-consuming or one may feel unable to critically evaluate research merit. An alternative is to examine reviews of evidence performed by respected sources.

One focus involves whether particular programs incorporate the components considered crucial by relevant authorities. That is, is the approach in question theoretically plausible? Does it have the recommended elements to enable it to succeed?

How does Direct Instruction stack up theoretically?

The National Reading Panel (2000) issued a now famous report consequent upon a Congressional mandate to identify skills and methods crucial in reading development. The Panel reviewed more than 100,000 studies focusing on the K-3 research in reading instruction to identify which elements lead to reading success.

From a theoretical perspective, each of the National Reading Panel (2000) recommended foci for reading instruction (phonemic awareness, phonics, fluency, vocabulary, comprehension) is clearly set out and taught in Direct Instruction literacy programs. An examination of the program teaching sequences in, for example, the Reading Mastery and Corrective Reading texts attests to their comprehensive nature.

However, these necessary elements are only the ingredients for success. Having all the right culinary ingredients doesn’t guarantee a perfect soufflé. There are other issues: what proportion of each ingredient is optimal? When should each be added? How much stirring, heating, and cooling is necessary? Errors in any of these requirements lead to sub-optimal outcomes. For some examples of these important elements, see Direct Instruction: Explicit, systematic, detailed, and complex.

So it is with literacy programs. “Yet there is a big difference between a program based on such elements and a program that has itself been compared with matched or randomly assigned control groups” (Slavin, 2003). Just because a program has all the elements doesn’t necessarily mean that it will be effective. Engelmann (2003) points to the logical error of inferring a whole based upon the presence of some or all of its elements: if a dog is a Dalmatian, it has spots; therefore, if a dog has spots, it is a Dalmatian. In this analogy, the Dalmatian represents programs known to be effective with students. It is possible to analyse these programs, determine their characteristics, and then assume incorrectly that the mere presence of those characteristics is sufficient to ensure effectiveness. Engelmann is thus critical of merely “research-based” programs, that is, programs constructed only to ensure each respected component is somewhere represented. He points out that this does not guarantee effectiveness.

So, for a true measure, we must also look for empirical studies showing that a particular combination of theoretically important elements is indeed effective in practice.

The vital question then becomes: Has a particular program demonstrated replicated effectiveness? In what settings, and for what populations?

Below is a collection of the outcomes of analyses of the DI approach.

A valuable resource is: A Bibliography of the DI Curriculum and Studies Examining its Efficacy at http://www.nifdi.org/15/news/126-a-bibliography-of-the-di-curriculum-and-studies-examining-its-efficacy

And also: Shep Barbash’s book Clear Teaching at education-consumers.org/pdf/CT_111811.pdf

And also:

Stockard, J. (2015). A brief summary of research on Direct Instruction. pp. 1-26. Retrieved from https://www.nifdi.org/research/recent-research/whitepapers/1352-a-brief-summary-of-research-on-direct-instruction-january-2015/file

And in 2018, a new meta-analysis was published in the Review of Educational Research:

“Quantitative mixed models were used to examine literature published from 1966 through 2016 on the effectiveness of Direct Instruction. Analyses were based on 328 studies involving 413 study designs and almost 4,000 effects. Results are reported for the total set and subareas regarding reading, math, language, spelling, and multiple or other academic subjects; ability measures; affective outcomes; teacher and parent views; and single-subject designs. All of the estimated effects were positive and all were statistically significant except results from metaregressions involving affective outcomes. Characteristics of the publications, methodology, and sample were not systematically related to effect estimates. Effects showed little decline during maintenance, and effects for academic subjects were greater when students had more exposure to the programs. Estimated effects were educationally significant, moderate to large when using the traditional psychological benchmarks, and similar in magnitude to effect sizes that reflect performance gaps between more and less advantaged students.”

Stockard, J., Wood, T.W., Coughlin, C., & Khoury, C.R. (2018). The effectiveness of Direct Instruction curricula: A meta-analysis of a half century of research. Review of Educational Research. Online First. Retrieved from http://journals.sagepub.com/doi/pdf/10.3102/0034654317751919


In his book Visible learning: A synthesis of over 800 meta-analyses relating to achievement, John Hattie of Melbourne University examines meta-analyses of research studies relating to student achievement and concludes that Direct Instruction is highly effective. No other curricular program showed such consistently strong effects with students of different ability levels, of different ages, and with different subject matters.

“One of the common criticisms is that Direct Instruction works with very low-level or specific skills, and with lower ability and the youngest students. These are not the findings from the meta-analyses. The effects of Direct Instruction are similar for regular (d=0.99), and special education and lower ability students (d=0.86), higher for reading (d=0.89) than for mathematics (d=0.50), similar for the more low-level word attack (d=0.64) and also for high-level comprehension (d=0.54), and similar for elementary and high school students. The messages of these meta-analyses on Direct Instruction underline the power of stating the learning intentions and success criteria, and then engaging students in moving towards these. The teacher needs to invite the students to learn, provide much deliberative practice and modeling, and provide appropriate feedback and multiple opportunities to learn. Students need opportunities for independent practice, and then there need to be opportunities to learn the skill or knowledge implicit in the learning intention in contexts other than those directly taught” (pp. 206-7).

Hattie, J. A.C. (2009). Visible learning: A synthesis of over 800 meta-analyses relating to achievement. London and New York: Routledge.


“In the category of “strong evidence of effectiveness” were several programs. Success for All, with an effect size of +0.52 in 9 studies, had more evidence of strong effects than any other program. Direct Instruction, a whole-class instructional process approach (ES=+0.37 in 2 small studies) and Corrective Reading, a remedial small group form of Direct Instruction (ES=+0.71 in 2 studies) were considered together as having strong evidence (ES=+.56 in 4 studies)” (p.112).

Slavin, R.E., Lake, C., Davis, S., & Madden, N. (2009, June) Effective programs for struggling readers: A best evidence synthesis. Baltimore, MD: Johns Hopkins. Retrieved from www.bestevidence.org/word/strug_read_Jun_02_2010.pdf


“The saga of Direct Instruction (DI) is remarkably similar to the story of Lancaster’s cure for scurvy. Invented nearly 50 years ago, DI is a scripted, step-by-step approach to teaching that is among the most thoroughly tested and proven in the history of education. It works equally well for general education, gifted students, and the disabled, but surprisingly remains little used." (p.1)

Stone, J. (2012). Foreword. In S. Barbash, Clear teaching. Education Consumers Foundation. http://www.education-consumers.org/CT_111811.pdf


Stockard and Wood (2016) reported an effect size of 0.79 in a meta-analysis of 131 studies of Reading Mastery.

Stockard, J., & Wood, T.W. (2016). The threshold and inclusive approaches to determining “best available evidence”: An empirical analysis. American Journal of Evaluation, 1–22. First published online August 19, 2016.


“A consistent pattern identified in our review points to the effectiveness of Direct Instruction (DI), a specific teaching program, and of specific explicit instructional practices underpinning the program (e.g., guided practice, worked examples) in maximizing student academic achievement. Collectively, studies, reviews, and encompassing meta-analyses (e.g., Borman et al., 2003; Hattie, 2009) show that DI has significantly large effects on achievement” (p.368).

Liem, A., & Martin, A. (2013). Direct Instruction. In John Hattie and Eric M. Anderman (Eds.), International guide to student achievement (pp. 366-368). New York: Routledge.


“What does the research say? What is the evidence for its efficacy? There is a large body of research evidence stretching back over four decades testifying to the efficacy of explicit/direct instruction methods including the specific DI programs. Possibly the largest educational experiment ever conducted, in the 1970s, comparing many different forms of instructional practice, found that the gains made by students undertaking the DI programs designed by Engelmann and colleagues were far greater than for any other program. This has been confirmed by recent meta-analyses. Research has also confirmed the superiority of explicit/direct instruction more generally compared with minimally guided instruction, as currently advocated. The MUSEC verdict: Explicit/direct instruction is recommended.”

Wheldall, K., Stephenson, J., & Carter, M. (2014). What is Direct Instruction? MUSEC Briefings, 39. Macquarie University Special Education Centre.


“The DI model has enjoyed a more than 30-year history of framing successful learning experiences. The model has evolved to address current understandings about learners and learning, but maintains the central purpose of promoting student on-task behavior through explicit instruction, ongoing support, and student engagement in successful practice. The DI model is well suited to the design of technology-enhanced and technology-based instruction because of its clear structure and potential for providing learners with opportunities for practice and immediate feedback, especially in asynchronous learning environments. … DI continues to hold potential as an effective teaching method, particularly in technology mediated learning environments. Computer-based programs have been designed to model instructor-led DI approaches while leveraging the technological ability to provide feedback, remediation, and guided practice, all essential components of the DI process and all of which contribute to its effectiveness.” (p.51).

Magliaro, S.G., Lockee, B., & Burton, J. (2005). Direct instruction revisited: A key model for instructional technology. Educational Technology Research & Development, 53(4), 41-55. Retrieved from https://pdfs.semanticscholar.org/f535/8dec17e81c0d8b931178904c460db76c0a6c.pdf


There are variables beyond simply writing a curriculum that influence the effectiveness of a model or program:

“This report uses data from different studies and settings to examine two general factors that make DI implementations more effective: 1) administrative decisions and practices and 2) experience with the program. The data show that DI students make significantly more progress at mastery and have significantly higher achievement when:
  • teachers implement the programs with greater fidelity
  • teachers have been trained for the specific programs they are teaching
  • teachers are given time and support to prepare lessons
  • teachers have more experience in teaching the programs
  • DI has been implemented for a longer period of time
  • students are taught for the recommended time each week
  • at-risk students are given extra instructional time ("double dosing")
  • students start learning with DI in kindergarten”.

Cupit, A. (2016). Technical Report 2016-1: Effective Direct Instruction implementations: The impact of administrative decisions and time. Retrieved from https://www.nifdi.org/research/recent-research/technical-reports



The three research syntheses below offer strong support for Direct Instruction programs for beginning readers, struggling readers, and secondary school struggling readers.

Slavin, R.E., Lake, C., Chambers, B., Cheung, A., & Davis, S. (2009, June). Effective beginning reading programs. Baltimore, MD: Johns Hopkins University, Center for Data-Driven Reform in Education. http://www.bestevidence.org/reading/begin_read/begin_read.htm

Slavin, R.E., Lake, C., Davis, S., & Madden, N. (2009, June) Effective programs for struggling readers: A best evidence synthesis. Baltimore, MD: Johns Hopkins University, Center for Data-Driven Reform in Education. http://www.bestevidence.org/word/strug_read_Jul_07_2009.pdf

Slavin, R.E., Cheung, A., Groff, C., & Lake, C. (2008). Effective reading programs for middle and high schools: A best evidence synthesis. Reading Research Quarterly, 43(3), 290-322. www.bestevidence.org/word/mhs_read_Feb_2008_RRQ.pdf


Florida Center on Reading Research: "Direct instruction is appropriate instruction for all learners, all five components of reading, and in all settings (whole group, small group, and one-on-one)." http://www.fcrr.org/Curriculum/curriculumInstructionFaq1.shtm


“Corrective Reading, a remedial small group form of Direct Instruction, has strong evidence of effectiveness” (Slavin, 2009, Best Evidence Encyclopedia).

Slavin, R.E., Lake, C., Davis, S., & Madden, N. (2009, June) Effective programs for struggling readers: A best evidence synthesis. Baltimore, MD: Johns Hopkins University, Center for Data-Driven Reform in Education. http://www.bestevidence.org/word/strug_read_Jul_07_2009.pdf


“Reading First focuses on core reading programs in grades K-3. There are only two programs widely acknowledged to have strong evidence of effectiveness in this area: Success for All and Direct Instruction”.

Slavin, R.E. (2007). Statement of Robert E. Slavin, Director Center for Data-Driven Reform in Education. Committee on Appropriations Subcommittee on Labor, Health and Human Services, Education, and Related Activities. Hearings on Implementation of No Child Left Behind. March 14, 2007. Retrieved March 16, 2007, from http://www.ednews.org/articles/8996/1/Statement-of-Robert-E-Slavin-Director-Center-for-Data-Driven-Reform-in-Education/Page1.html


"The evidence is pretty much overwhelming," said Prof Steve Dinham, the Australian Council for Educational Research director for teaching, learning and leadership. "Direct instruction and explicit teaching is two to three times more effective than inquiry-based learning or problem-based learning."

Smith, B. (2008). Results back principal's return to instruction. The Age, 10 May, p.8.


"For example, Direct Instruction (DI), a behaviorally oriented teaching procedure based on an explicit step-by-step strategy (ES=.93) is six-and-one-half times more effective than the intuitively appealing modality matched instruction (ES=.14) that attempts to capitalize on learning style differences. Students with Specific Learning Disabilities who are instructed with DI would be better off than 87% of students not receiving DI and would gain over 11 months credit on an achievement measure compared to about one month for modality matched instruction."

Kavale, K. (2005). Effective intervention for students with specific learning disability: The nature of special education. Learning Disabilities, 13(4), 127-138.
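For readers unfamiliar with how an effect size converts into a percentile claim of this kind, the standard conversion (Cohen’s U3, offered here as an illustrative reconstruction rather than Kavale’s own working) locates the average treated student on the control group’s normal distribution:

\[ U_3 = \Phi(d) \]

where \( \Phi \) is the standard normal cumulative distribution function. For d = 0.93, \( \Phi(0.93) \approx 0.82 \), so the average DI-taught student would outscore roughly four-fifths of comparison students; the precise percentile reported depends on the distributional assumptions and rounding adopted.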


“Across varying contexts, Direct Instruction, the Comer School Development Program, and Success for All have shown robust results and have shown that, in general, they can be expected to improve students’ test scores. These three models stand out from other available comprehensive school reform (CSR) designs by the quantity and generalizability of their outcomes, the reliable positive effects on student achievement, and the overall quality of the evidence. … These clear, focused, and well-supported school-based models of improvement are in stark contrast to top-down direction and flexibility for educational reform”.

Borman, G. (2007). Taking reform to scale. Wisconsin Center for Educational Research. Retrieved February 4, 2007, from http://www.wcer.wisc.edu/


The American Institutes for Research (2006) reviewed 800 studies of student achievement; of the 22 reform models examined, Direct Instruction and Success for All received the highest ratings for quality and effectiveness. http://www.air.org/files/csrq.pdf

Additionally, Direct Instruction was one of only three programs with adequate evidence for effectiveness in reading instruction. http://www.aasa.org/issues_and_insights/district_organization/Reform/Approach/direct.htm


“There is ample empirical evidence that the Direct Instruction programs have succeeded with a wide range of learners. This has been recognised by diverse groups, for example, the US Government’s acceptance of the Direct Instruction model as one eligible for funding. The US Department of Education allocates enormous amounts for the implementation of replicable, research based school reform models. Its approved list includes Direct Instruction programs. Direct Instruction programs have also been acknowledged as having the exemplary research base required under the recent USA Reading First Act, 2001 (Manzo & Robelen, 2002)”.

Manzo, K., & Robelen, E. (2002, May 1). States unclear on ESEA rules about reading. Education Week online. Retrieved February 14, 2003, from http://www.edweek.org


“In a world without DAP, effective, research-based approaches to preK-3 teaching would be welcomed—especially those that have been used for decades by special educators and teachers in high poverty schools. Engelmann’s Direct Instruction (DI) and Slavin’s Success for All (SFA) are two well established, empirically documented examples. Both are comfortably able to produce the 1.5 to 2.0 years of achievement growth per school year needed to bring delayed students to grade level by third grade and yet both have been widely defamed as “drill and kill” and “push-down curricula”. In truth, both are highly engaging and well received by students and teachers who are trained and supported. The Engelmann program, in particular, was found to be the most effective teaching model in the massive federal Follow Through project of the 1960s and 1970s. Direct instruction was shown to be both the most effective approach to teaching basic skills and the most effective in boosting student self-esteem. Children taught by DI like going to school. Despite their documented success with children, the use of both DI and SFA by schools has suffered because they are systematic, results-focused, and teacher-led; and therefore considered “developmentally inappropriate”” (p.9).

Stone, J.E. (2015). Misdirected teacher training has crippled education reform. Education Consumers Foundation. Retrieved from education-consumers.org/pdf/Misdirected-teacher-training.pdf


Major reviews of the primary research can provide additional surety of program value. In a US Department of Education meta-analysis, Comprehensive School Reform and Student Achievement (2002, Nov), Direct Instruction was assigned the highest classification, Strongest Evidence of Effectiveness, as ascertained by quality of the evidence, quantity of the evidence, and statistically significant and positive results.

“Its effects are relatively robust and the model can be expected to improve students’ test scores. The model certainly deserves continued dissemination and federal support.”

Borman, G.D., Hewes, G.M., Overman, L.T., & Brown, S. (2002). Comprehensive school reform and student achievement. Retrieved from http://www.csos.jhu.edu./crespar/techReports/report59.pdf


"Reading First focuses on core reading programs in grades K-3. There are only two programs widely acknowledged to have strong evidence of effectiveness in this area: Success for All and Direct Instruction."

Slavin, R.E. (2007). Statement of Robert E. Slavin, Director Center for Data-Driven Reform in Education. Committee on Appropriations Subcommittee on Labor, Health and Human Services, Education, and Related Activities. Hearings on Implementation of No Child Left Behind. March 14, 2007. Retrieved from http://www.ednews.org/articles/8996/1/Statement-of-Robert-E-Slavin-Director-Center-for-Data-Driven-Reform-in-Education/Page1.html


"By using a Direct Instruction approach to teaching, more children with learning disabilities, who were thought to be unable to improve in any academic area, can make incredible gains in their schooling."

"Special Needs Education: Direct Instruction and Special Needs" Department of Psychology, University of Michigan http://sitemaker.umich.edu/delicata.356/direct_instruction_and_special_needs.


"Following the successful models of rigorous medical science, the Power4Kids reading study will be a landmark in education ~ a large-scale, randomized, controlled, longitudinal field trial. It is the second largest study of its kind ever to be conducted in public schools. It is designed to provide conclusive evidence of the effectiveness of quality remedial reading programs, along with determining common learning profiles of students and the best targeted-intervention for each profile. Regardless of the reason a child struggles to learn to read, Power4Kids will provide the information and winning models of how to close the reading gap in our schools. Four (4) highly effective remedial reading programs have been awarded a position in the study by virtue of their scientifically-based evidence of effectiveness. The programs are: Corrective Reading, Failure Free Reading, Spell Read P.A.T., Wilson Learning Program"

Power4Kids. Retrieved from http://www.haan4kids.org/power4kids/


The Council for Exceptional Children provides informed judgements regarding professional practices in the field. The Direct Instruction model was judged to be well validated and reliably used. http://s3.amazonaws.com/cmi-teaching-ld/alerts/17/uploaded_files/original_Alert2.pdf?1301001903

See also under Current Practice Alerts: Espin, C., Shin, J., & Busch, T. (2000). Formative evaluation. Current Practice Alerts, 3, 1-4. Retrieved from http://TeachingLD.org/alerts


Direct Instruction is the only model to be recommended by the American Federation of Teachers in each of its reviews. Of Direct Instruction, Seven Promising Reading and English Language Arts Programs notes: "When this program is faithfully implemented, the results are stunning..." (p. 9). Direct Instruction is also lauded in Three Promising High School Remedial Reading Programs and Five Promising Remedial Reading Intervention Programs (http://www.aft.org/pubs-reports/downloads/teachers/remedial.pdf). http://www.aft.org/edissues/Reading/Resources.htm

American Federation of Teachers (1999). Five promising remedial reading intervention programs. Building on the best: Learning from what works. Retrieved from http://www.aft.org/pubs-reports/downloads/teachers/remedial.pdf


The report Bringing Evidence Driven Progress to Education: A Recommended Strategy for the U.S. Department of Education (2002) nominates Direct Instruction as having strong evidence for effectiveness. http://www.excelgov.org/displayContent.asp?Keyword=prppcEvidence


The Center for Education Reform (2003) nominated DI among its “Best Bets”.

“Strong, proven education programs for kids - programs that demonstrate success for more than just a handful of students”

McCluskey, N. (2003). Best bets: Education curricula that work. Center for Education Reform. Retrieved from http://www.edreform.com/pubs/bestbets.pdf


Better by design: A consumers’ guide to schoolwide reform, a report from the Thomas B. Fordham Foundation, supports the Direct Instruction model as a viable approach to schoolwide reform. http://www.edexcellence.net/library/bbd/better_by_design.html


Reading Programs that Work: A Review of Programs for Pre-Kindergarten to 4th Grade

This independent review included Direct Instruction among six school-wide effective reading models (Schacter, 1999). http://www.mff.org/edtech/publication.taf?_function=detail&Content_uid1=279


Corrective Reading: Decoding and Corrective Reading: Comprehension are among the programs adopted by the California State Board of Education in 1999, after it abandoned the Whole Language model. http://www.cde.ca.gov/cdepress/lang_arts.pdf


The Task Force on Improving Low-Performing Schools (American Federation of Teachers, 1999) named Corrective Reading as one of five effective remedial reading interventions.


Marilyn Jager Adams, author of a major text on reading, Beginning to read: Thinking and learning about print, commented on Direct Instruction thus: "The research is irrefutable."


"The two best known examples of sound research-based practices coming to scale are Direct Instruction (Carnine, Silbert, & Kameenui, 1997) and Success for All (Slavin, Madden, Dolan, & Wasik, 1996)."

Foorman, B.R., & Moats, L.C. (2004). Conditions for sustaining research-based practices in early reading instruction. Remedial and Special Education, 25, 51-60.


From renowned researcher on effective teaching, Barak Rosenshine:

“Reading Mastery is an extremely effective program for teaching decoding to all children. The mean score for 171 students across six DI schools, who began the program in kindergarten and who remained in the program for four years was at the 49th percentile. I think this is a wonderful finding” (Rosenshine, 2002).


“Direct Instruction is based on over 5 decades of work. The curricular programs are based on extensively formulated and carefully tested theoretical insights and are developed through a painstaking process of research and testing. A great deal of research has shown that they are highly effective in helping all students to increase their levels of achievement. Research also shows that the programs are most effective when they are implemented as designed. This brief report summarizes some of that work. It has three major sections. The first gives a brief overview of the development of Direct Instruction and its theoretical basis. The second section gives examples of results from a variety of efficacy studies that document the impact that DI has on students’ learning, and the third section discusses studies of the implementation of the program and factors that can make it more or less effective. The literature on Direct Instruction is very large. While this summary is believed to be representative of the body of work, interested readers are urged to consult the entire literature.” (p.1)

Stockard, J. (2015). A brief summary of research on Direct Instruction. pp. 1-26. Retrieved from https://www.nifdi.org/research/recent-research/whitepapers/1352-a-brief-summary-of-research-on-direct-instruction-january-2015/file


“For more than one third of the models, the CSRQ Center identified only 10 or fewer studies that seemed to be relevant for our review of the overall evidence of positive effects of the models on student achievement. In contrast, one model (Direct Instruction) had more than 50 … For Category 1, the CSRQ Center rated the models as follows:

■ Two models as moderately strong: Direct Instruction and Success for All

■ Seven models as moderate: Accelerated Schools Plus, America’s Choice School Design, Core Knowledge, Literacy Collaborative, National Writing Project, School Development Program, and School Renaissance

■ Six models as limited: ATLAS Learning Communities, Different Ways of Knowing, Integrated Thematic Instruction, Modern Red SchoolHouse, Pearson Achievement Solutions (formerly Co-nect), and Ventures Initiative and Focus System

■ Seven models as zero: Breakthrough to Literacy, Coalition of Essential Schools, Community for Learning, Comprehensive Early Literacy Learning, Expeditionary Learning, First Steps, and Onward to Excellence II”.

The Comprehensive School Reform Quality Center (2006). CSRQ Center Report on Elementary School CSR Models. Retrieved from http://www.csrq.org/CSRQreportselementaryschoolreport.asp


A study conducted by researchers at the Florida Center for Reading Research and Florida State University compared Reading Mastery and several other core reading programs (Open Court, Harcourt, Houghton Mifflin, Scott Foresman, Success for All). In the study, Examining the core: Relations among reading curricula, poverty, and first through third grade reading achievement (2009), the authors tracked the performance of 30,000 Florida students in first through third grades.

“Overall, students in the Reading Mastery curriculum demonstrated generally greater overall ORF growth than students in other curricula. Also, they more frequently met or exceeded benchmarks for adequate achievement in first, second, and third grade. In first grade, regardless of SES status, students generally met adequate achievement benchmarks. Among second graders, on average, only students using Reading Mastery and Success for All met benchmarks, while the lowest scores for students were among those using Houghton Mifflin. In third grade, on average, students did not reach the adequate achievement benchmark. However, Reading Mastery students came closest to the benchmarks because scores among these students were the highest across curricula” (p. 209).

Crowe, E.C., Connor, C.M., & Petscher, Y. (2009). Examining the core: Relations among reading curricula, poverty, and first through third grade reading achievement. Journal of School Psychology, 47(3), 187–214.


Adams and Engelmann’s (1996) meta-analysis resulted in an effect size of 0.69 for the 44 acceptable comparisons involving the Direct Instruction program Reading Mastery. Across DI programs, the average effect size for 173 comparisons was 0.87. White’s (1988) DI meta-analysis, which involved learning disabled, intellectually disabled, and reading disabled students, found an average effect size for Direct Instruction programs of 0.84. A similar meta-analysis of the effectiveness of the whole language approach to reading found an effect size of only 0.09 (Stahl & Miller, 1989). An effect size of 1 means a gain of 1 standard deviation, the equivalent of a year’s progress (0.8 is a large effect size, 0.5-0.8 is a medium effect size, and less than 0.5 is a small effect size).
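As a point of reference for these figures, the effect sizes reported throughout this collection are standardized mean differences (Cohen’s d); individual meta-analyses may use slight variants such as Hedges’ g, so this is the generic formula rather than the exact computation of any one study:

\[ d = \frac{\bar{X}_{\text{treatment}} - \bar{X}_{\text{control}}}{SD_{\text{pooled}}}, \qquad SD_{\text{pooled}} = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}} \]

On this metric, the 0.87 average across DI programs means the average DI-taught student scored 0.87 of a standard deviation above the control group mean, nearly ten times the 0.09 reported for whole language.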


The Florida Center for Reading Research (2004) aims to disseminate information about research-based practices related to literacy instruction and assessment for children in pre-school through 12th grade. Its Director is the well-known researcher Joe Torgesen.

“The instructional content and design of Corrective Reading is consistent with scientifically based reading research” (p.4).

Torgesen, J. (2004). SRA Corrective Reading. Florida Center for Reading Research. Retrieved 16/1/2005 from http://www.fcrr.org/FCRRReports/PDF/corrective_reading_final.pdf


Sally Shaywitz recommends the REACH System (Corrective Reading, Spelling Through Morphographs, and R&W) for "dyslexic" children in her much publicised book Overcoming Dyslexia.


In the Oregon Reading First Center’s 2004 review of nine comprehensive programs, Reading Mastery was ranked number one.

http://reading.uoregon.edu/curricula/core_report_amended_3-04.pdf

To be considered comprehensive, a program had to (a) include materials for all grades from K through 3; and (b) comprehensively address the five essential components of the Reading First legislation.

Program Title

1. Reading Mastery Plus (2002)

2. Houghton Mifflin, The Nation’s Choice (2003)

3. Open Court (2002)

Others: Harcourt School Publishers, Trophies (2003); Macmillan/McGraw-Hill Reading (2003); Scott Foresman Reading (2004); Success For All Foundation, Success for All; Wright Group Literacy (2002); Rigby Literacy (2000)

Curriculum Review Panel. (2004). Review of Comprehensive Programs. Oregon Reading First Center. Retrieved 16/1/2005 from http://reading.uoregon.edu/curricula/core_report_amended_3-04.pdf


DI and explicit teaching

“Explicit Instruction: Essential to Close the Gap. Being an effective teacher requires use of instructional momentum techniques and the functions of explicit instructional lessons. The functions of explicit instruction should be used whether staff are teaching tier 1, tier 2, or tier 3 prevention within the MTSS model. Based on our experience, with few exceptions (e.g., Direct Instruction programs from SRA/McGraw-Hill; http://www.sra.com/), lessons in most core curriculum programs used by schools do not incorporate directly and consistently the functions of explicit instruction. In contrast, most evidence-based supplemental interventions designed to be delivered at the tier 2 and/or 3 levels include the functions of explicit instruction” (p.23).

Benner, G. J., Kutash, K., Nelson, J. R., & Fisher, M. B. (2013). Closing the achievement gap of youth with emotional and behavioral disorders through multi-tiered systems of support. Education & Treatment of Children, 36(3), 15-29.


“Skills-based instruction here means instruction reflecting an intent to strengthen academic skills (e.g., letter-sound correspondence and math problem solving) and to enhance knowledge in areas such as social studies and science. We also use the term to signify an approach inspired by Direct Instruction (DI; e.g., Becker, Engelmann, Carnine, & Rhine, 1981). According to Gersten, Woodward, and Darch (1986), the key to DI is that "materials and teacher presentation of [these] materials must be clear and unambiguous" (p. 18), "much more detailed and precisely crafted" (p. 19) than the norm, for successful use with students with academic challenges. Moreover, wrote Gersten et al. (1986), this instruction "must contain clearly articulated [learning] strategies" (p. 19): a step-by-step process involving teaching to mastery, a procedure for error correction, a deliberate progression from teacher-directed to student-directed work, systematic practice, and cumulative review (cf. Gersten et al., 1986). A belief in the efficacy of skills-based instruction seems well founded. When implemented with fidelity, carefully scripted programs in reading, writing, and math - often involving learning strategies similar to DI - have been shown to benefit numerous at-risk students (e.g., Graham & Perin, 2007; Kroesbergen & Van Luit, 2003; Stuebing, Barth, Cirino, Francis, & Fletcher, 2008)” (p.263).

Kearns, D. M., & Fuchs, D. (2013). Does cognitively focused instruction improve the academic performance of low-achieving students? Exceptional Children, 79(3), 263-290.


“Spelling Mastery represents a third example of an explicit, whole-word approach to spelling instruction. For high frequency, irregular words that cannot be spelled by applying phonemic rules, Spelling Mastery uses an explicit whole-word approach to spelling instruction. A typical whole-word lesson in Spelling Mastery begins by introducing students to a sentence that contains irregular words (e.g., I thought he was through.). At first the unpredictable letters or letter combinations are provided and students must fill in the missing letters (e.g., _ _ _ ough _ _ _ _ a _ _ _ _ ough). Presenting the irregular words in this way teaches the students that even irregular words have some predictable elements. Gradually, the number of provided letters is decreased until students are able to spell all the words without visual prompts. Once the sentence is learned, variations are presented so that students can apply the spelling of irregular words to various sentence contexts (e.g., She thought about her homework throughout the night.). This explicit approach to whole-word spelling instruction leads students through gradual steps toward the ultimate goal of accurate spelling performance” (p.100).

Simonsen, F., & Gunter, L. (2001). Best practices in spelling instruction: A research summary. Journal of Direct Instruction, 1(2), 97–105.


“An analysis of these 2 approaches suggests that direct instruction principles are effective in supporting students with varied achievement levels and that these principles can be used to enhance comprehension among students at very different points in reading development. These evidence-based approaches also illustrate that direct instruction can be designed to support complex learning and the development of higher order cognitive strategies.” (pp. 221-222)

Coyne, M.D., Zipoli, R.P., Jr., Chard, D.J., Faggella-Luby, M., Ruby, M., Santoro, L.E., & Baker, S. (2009). Direct instruction of comprehension: Instructional examples from intervention research on listening and reading comprehension. Reading & Writing Quarterly, 25(2-3), 221-245.


Some recent research:

“In recent years, major initiatives in the U. S. and U. K. have added greatly to the amount and quality of research on the effectiveness of secondary reading programs, especially targeted programs for struggling readers. This review of the experimental research on secondary reading programs focuses on 64 studies that used random assignment (n=55) or high-quality quasi-experiments (n=9) to evaluate outcomes of 49 programs on widely accepted measures of reading. Programs using one-to-one and small-group tutoring (ES=+0.23) and cooperative learning programs (mean ES=+0.16), showed positive outcomes, on average. Among technology programs, direct instruction, metacognitive approaches, mixed-model programs, and programs for English learners, there were individual examples of promising approaches. Except for tutoring, targeted extra-time programs were no more effective than programs provided to entire classes and schools without adding instructional time. The findings suggest that secondary readers benefit more from engaging and personalized instruction than from remedial services.”

Baye, A., Lake, C., Inns, A. & Slavin, R. E. (2016, December). Effective reading programs for secondary students. Baltimore, MD: Johns Hopkins University, Center for Research and Reform in Education. Retrieved from http://www.bestevidence.org/reading/mhs/mhs_read.htm


“One of the central conclusions that emerged from the evaluation of the literature was that, in order for children to become proficient readers, it is necessary for them to master the alphabetic principle, the idea that in the written form of the language, different letters correspond to different sounds. A second important conclusion that the authors formed in the paper was that direct instruction in phonics is an effective technique to allow children to understand the alphabetic principle, while other techniques such as whole word, or whole language approaches that do not adopt this direct approach are less effective. Although they provoked reactions from many individuals with strong views on how reading should be taught, the Rayner et al. (2001, 2002) articles provide excellent examples of scientifically based, translational writing, and they provide a model of how researchers can use findings from basic science to inform discussion and motivate evidence based practice.” (p.6)

Clifton, C. E., Ferreira, F., Henderson, J. M., Inhoff, A. W., Liversedge, S., Reichle, E. D., & Schotter, E. R. (2016). Eye movements in reading and information processing: Keith Rayner’s 40 year legacy. Journal of Memory and Language, 86, 1-19.


“In three direct instruction studies, researchers investigated effects of a commercially available program, Expressive Writing (Engelmann & Silbert, 2005), on the written expression of high school-age students with disabilities (Viel-Ruma et al., 2010; Walker et al., 2005; White et al., 2014). In the studies, instruction began with constructing simple sentences to picture-word prompts before progressing to closely related writing skills, such as complex sentences and paragraph composition. Instruction included between 25 lessons to 50 lessons, lasting 30 min to 50 min each (i.e., a total duration of 750 min to 2,500 min). In two of the three Expressive Writing studies, a multiple baseline across participants design was used to investigate effects of direct instruction on students with learning disabilities (Walker et al., 2005) and English language learners with learning disabilities (Viel-Ruma et al., 2010). In both studies, students showed increases in correct word sequences (CWS) and the percentage of CWS on narrative writing probes. CWS is a curriculum-based measure that provides a global indicator of writing: It is the number of words written with correct capitalization, punctuation, spelling, and syntax (Ritchey et al., 2016). One study used a quasi-experimental design, comparing two Expressive Writing treatment groups of high school–age students with emotional behavioral disorders (White et al., 2014). Results suggest both treatment groups improved their percentage of CWS. Two studies investigated effects of a combined direct instruction and precision teaching intervention on the simple sentence construction of elementary and high school students with writing difficulties (Datchuk, 2016; Datchuk et al., 2015). The two studies were smaller in scope than the direct instruction only studies. The two studies only addressed skills specific to simple sentence construction, such as capitalization, punctuation, and simple sentence structure. Intervention lasted 13 to 18 lessons with a total duration of 135 min to 195 min.” (p. 2-3)

Datchuk, S.M. (2016). A direct instruction and precision teaching intervention to improve the sentence construction of middle school students with writing difficulties. The Journal of Special Education, 1–10. Online First. DOI: 10.1177/0022466916665588


“Two researchers [Diane August and Timothy Shanahan] reviewed many of the same studies as the National Literacy Panel on Language-Minority Children and Youth and concluded that “the programs with the strongest evidence of effectiveness in this review are all programs that have also been found to be effective with students in general” and modified for ELs. … These programs include various versions of Success for All (a school-wide program that involves far more than classroom instruction), Direct Instruction, and phonics instruction programs.” (p. 5-6)

Goldenberg, C. (2013). Unlocking the research on English learners: What we know—and don’t yet know—about effective instruction. American Educator, 37(2), 4–11, 38.


“Vocabulary interventions in preschool and early elementary school can improve vocabulary skills. In a meta-analysis of vocabulary interventions in preschool and kindergarten settings (Marulis & Neuman, 2010), interventions for vocabulary skills improved vocabulary knowledge, especially those implemented by trained teachers or researchers (effects were largest for researchers), as opposed to child care providers or parents. In addition, intervention that used explicit (i.e., direct instruction) strategies or the combination of explicit and implicit strategies was more impactful, as opposed to implicit strategies alone. However, children from middle- or high-SES households benefited more than children from low-SES households, and the interventions were not powerful enough to close vocabulary gaps for children who needed it the most. The amount of vocabulary learned in some interventions (e.g., 8-10 words per week) is not enough to close vocabulary knowledge gaps with peers (Nagy, 2007).” (p.4)

Clemens, N.H., Ragan, K., & Widales-Benitez, O. (2016). Reading difficulties in young children: Beyond basic early literacy skills. Policy Insights from the Behavioral and Brain Sciences, 1–8. Online First. DOI: 10.1177/2372732216656640


“Efficacy studies such as the one described in this article are urgently needed (Hmelo-Silver et al., 2007). For example, there is an ongoing debate about whether students are better served by direct instruction or constructivist approaches to learning (Kirschner, Sweller, & Clark, 2006; Tobias & Duffy, 2009). Klahr (2010) asserts “the burden of proof is on constructivists to define a set of instructional goals, an unambiguous description of instructional processes, a clear way to ensure implementation fidelity, and then to perform a rigorous assessment of effects” (p. 4). Some constructivists have expressed resistance to direct rigorous comparisons of these different instructional approaches, arguing that due to fundamental differences between constructivist pedagogies and direct instruction, no common research method can evaluate the two (Jonassen, 2009). Alternatively, Klahr states, “Constructivists cannot use complexity of treatments or assessments as an excuse to avoid rigorous evaluations of the effectiveness of an instructional process” (p. 3). Similarly, Mayer (2004) recommends that we “move educational reform efforts from the fuzzy and unproductive world of ideology—which sometimes hides under the various banners of constructivism—to the sharp and productive world of theory-based research on how people learn” (p. 18).” (p.1011)

Taylor, J., Getty, S., Kowalski, S., Wilson, C., Carlson, J., & Van Scotter, P. (2015). An efficacy trial of research-based curriculum materials with curriculum-based professional development. American Educational Research Journal, 52, 984-1017.


“We found not only that many more children learned from direct instruction than from discovery learning, but also that when asked to make broader, richer scientific judgments, the many children who learned about experimental design from direct instruction performed as well as those few children who discovered the method on their own. These results challenge predictions derived from the presumed superiority of discovery approaches in teaching young children basic procedures for early scientific investigations.” (p. 661)

Klahr, D., & Nigam, M. (2004). The equivalence of learning paths in early science instruction: Effects of direct instruction and discovery learning. Psychological Science, 15(10), 661-667.


DI for English language learners

The beginning reading programs with the strongest evidence of effectiveness in this review made use of systematic phonics, such as Success for All, Direct Instruction, and Jolly Phonics (Slavin & Cheung, 2003).

Slavin, R.E., & Cheung, A. (2003). Effective reading programs for English language learners: A best-evidence synthesis. Center for Research on the Education of Students Placed at Risk. www.csos.jhu.edu/crespar/techReports/Report66.pdf


"This review synthesizes research on English reading outcomes of all types of programs for Spanish-dominant English language learners (ELLs) in elementary schools. … the largest and longest term evaluations, including the only multiyear randomized evaluation of transitional bilingual education, did not find any differences in outcomes by the end of elementary school for children who were either taught in Spanish and transitioned to English or taught only in English. The review also identified whole-school and whole-class interventions with good evidence of effectiveness for ELLs, including Success for All, cooperative learning, Direct Instruction, and ELLA. Programs that use phonetic small group or one-to-one tutoring have also shown positive effects for struggling ELL readers. What is in common across the most promising interventions is their use of extensive professional development, coaching, and cooperative learning. The findings support a conclusion increasingly being made by researchers and policymakers concerned with optimal outcomes for ELLs and other language minority students: Quality of instruction is more important than language of instruction."

Cheung, A.C.K., & Slavin, R.E. (2012). Effective reading programs for Spanish-dominant English language learners (ELLs) in the elementary grades: A synthesis of research. Review of Educational Research, 82(4), 351-395.


Torgesen (2003) suggests there is now a consensus on the most important instructional features for interventions:

  • Provide ample opportunities for guided practice of new skills
  • Provide a significant increase in intensity of instruction
  • Provide systematic cueing of appropriate strategies in context
  • Interventions are more effective when they provide appropriate levels of scaffolding as children learn to apply new skills
  • Provide systematic and explicit instruction on whatever component skills are deficient: e.g., in reading - phonemic awareness, phonics, fluency, vocabulary, reading comprehension strategies (Torgesen, 2003)

Torgesen, J. (2003). Using science, energy, patience, consistency, and leadership to reduce the number of children left behind in reading. Barksdale Reading Institute, Florida. Retrieved 3/5/2004 from http://www.fcrr.org/staffpresentations/Joe/NA/mississippi_03.ppt

The 2000 report to the Department for Education and Employment in Great Britain (McBer: A model of teacher effectiveness) reached similar conclusions about the value of this approach.

DI was originally designed to assist disadvantaged students

But its emphasis on analysing task characteristics and effective teaching principles transcends learner characteristics.

“A substantial body of NIFDI research has examined the effectiveness of the DI curricula. These studies have confirmed the accumulated findings of decades of other studies showing that students studying with DI have higher achievement scores and stronger growth rates than students studying with other curricula. These results have appeared with reading and math; in urban, rural and suburban settings; with middle class high achieving students, with high risk students, general education students and special education students; with schools that are predominantly African American, those with substantial numbers of Hispanic students and those with large numbers of non-Hispanic whites; and with children from pre-school age through middle school. The strong positive results appear in studies examining state test scores, curriculum-based measures and norm-referenced tests; in the United States as well as in other countries; and with randomized control trials as well as quasi-experimental designs”.

The National Institute for Direct Instruction. (2012). NIFDI Research Office. Retrieved from http://www.nifdi.org/15/nifdis-research-office

“The results of this study also confirm the findings of previous research which has indicated that DI is an effective methodology for diverse groups of students, including those from low socioeconomic backgrounds (Goldman, 2000; Torgesen, Alexander, Wagner, Rashotte, Voeller, & Conway, 2001), students at-risk for academic failure (Carlson & Francis, 2002; Foorman, Francis, Fletcher, & Schatschneider, 1998; Frederick, Keel, & Neel, 2002; Grossen, 2004; Shippen, Houchins, Steventon, & Sartor, 2005), students with learning disabilities (Swanson, 1998; Torgesen et al.), and students with cognitive deficits (Bradford, Shippen, Alberto, Houchins, & Flores, 2006; Flores, Shippen, Alberto & Crowe, 2004; Gersten & Maggs, 1982).” (p. 94)

Head, C. (2016). The effects of Direct Instruction on reading comprehension for individuals with autism or intellectual disability. Doctoral dissertation, Auburn University, Auburn, Alabama. Retrieved from https://etd.auburn.edu/bitstream/handle/10415/5272/FINAL_DISS.pdf?sequence=2&isAllowed=y


 “In addition to the evidence from successful schools, there are teaching methodologies that have been carefully researched and shown to be effective in bringing disadvantaged students to grade level by the 3rd grade. Although often used with special education students, they are demonstrably effective in preventing almost all reading failure when implemented in kindergarten as part of the general education curriculum. Direct Instruction, a program originally developed and tested in the 1960s and 70s, and Success for All, developed in the late 1980s, are two well known examples” (p.7).

Stone, J.E. (2013). Reversing American decline by reducing education’s casualties: First, we need to recapture our school boards. Education Consumers Foundation. Retrieved from http://www.education-consumers.org/rad.htm


DI programs have been shown to be effective for:

Slow learners - Disadvantaged students - Intellectual disability - Gifted - Learning disability - Indigenous - Acquired brain injury -  Language disability - Hearing Impaired - Behavioural disorder - Autism Spectrum Disorder - ADHD - English language learners. See NIFDI (2012) above for details.


 

See also:

Above average students:

“ … the design strategy of using DI programs with above-average student populations has much to recommend it because in such settings, implementations with fidelity can be accomplished more readily, and under such circumstances the academic gains resulting from DI programs can be magnified.” (p.25)

Vitale, M.R., Medland, M.B., & Kaniuka, T.S. (2010). Implementing Spelling with Morphographs with above-average students in Grade 2: Implications for DI of comparisons with demographically similar control students in Grades 2-3-4-5. Journal of Direct Instruction, 10(1), 17-28.


 

Children with hearing impairment:

“There has been a surge in research with young deaf children using Visual Phonics and Direct Instruction. The results to date have been promising, but with the caution that most of the work has been done with children who are second graders or younger” (p.101).

Moores, D. F. (2013). One size does not fit all: Individualized instruction in a standardized educational system. American Annals of the Deaf, 158(1), 98-103.


“Phonics during preschool will build a foundation for instruction of reading programs, such as Reading Mastery (2008) in elementary school (see Trezek & Wang, 2006). … While traditionally alphabetic knowledge is not taught until kindergarten, even for children with typical hearing, recent research suggests such instruction in prekindergarten can have long-term positive effects on later reading skills, including reading achievement and spelling (Kirk & Gillon, 2007; Korkman & Peltomaa, 1993). The current study suggests that children who are DHH, even those who have delays in language, are able to learn the foundation for the alphabetic principle during prekindergarten.” (pp. 113-114)

Bergeron, J.P., Lederberg, A.R., Easterbrooks, S.R., Miller, E.M., & Connor, C. (2009). Building the alphabetic principle in young children who are deaf or hard of hearing. The Volta Review, 109(2–3), 87–119. Retrieved from http://clad.education.gsu.edu/files/2016/05/Bergeron-et-al-2009-Alphabetic-Principle.pdf

“Based on a previous investigation implementing remedial phonics-based reading instruction for DHH students at the middle school level (Trezek & Malmgren, 2005), this study also utilized the first 20 lessons of the Direct Instruction Corrective Reading-Decoding A curriculum (Engelmann, Carnine, & Johnson, 2008) for instruction. This remedial reading curriculum, the first in a series of four levels, focuses on teaching the fundamental code-related skills necessary to develop the alphabetic principle. Research findings have documented the effectiveness of the Corrective Reading Decoding series for a variety of remedial readers, including noncategorical poor readers and special education students (see Przychodzin-Havis, Marchand Martella, & Martella, 2005, for review)” (p.394)

“The purpose of this study was to examine the results of implementing remedial instruction in the alphabetic principle with DHH students in the 2nd grade or higher and educated in a setting employing a sign bilingual model. More specifically, the goal of this inquiry was to explore participants’ acquisition and generalization of skills as a result of remedial instruction. As hypothesized, the intervention of the Decoding A curriculum supplemented by Visual Phonics resulted in growth in identifying phonemes in isolation, phoneme blending, and word reading. Results of the analysis indicated that there was a statistically significant difference between pre- and posttest scores and Cohen’s d estimates revealed effect sizes of 1.75, 2.18, and 2.39 for the three dependent measures, respectively. … These results are supported by the findings indicating that a direct, explicit, and systematic phonics instructional approach, which includes teaching phoneme blending and segmenting, produces more favorable results than curricula that do not include these features, benefitting even older students experiencing difficulty learning to read (National Reading Panel, 2000; Scammacca et al., 2007).” (p.403-4)

“It is time to bring to an end the reading wars that are polarizing researchers and practitioners in the field of deafness. Although we contend that code related skills are a necessary element of the reading instruction, we also recognize that the development of these skills alone is not sufficient to support overall reading achievement. As members of a field, we must resolve the either/or dichotomy, acknowledge the importance of developing skills in both the language- and code-related domains, and collaborate to explore and evaluate pedagogical practices that support the development of overall reading proficiency among DHH learners.” (p.406)

Trezek, B. J., & Hancock, G. R. (2013). Implementing instruction in the alphabetic principle within a sign bilingual setting. Journal of Deaf Studies and Deaf Education, 18(3), 391–408. Retrieved from https://www.researchgate.net/publication/236252529_Implementing_Instruction_in_the_Alphabetic_Principle_Within_a_Sign_Bilingual_Setting
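
For readers unfamiliar with the metric: Cohen’s d expresses the difference between two means in pooled standard deviation units. As a rough guide, the standard definition is shown below (Trezek and Hancock may have computed a variant better suited to pre-post designs):

d = (M_posttest - M_pretest) / SD_pooled

By Cohen’s conventional benchmarks, d of about 0.2 is a small effect, 0.5 medium, and 0.8 large, so the values of 1.75 to 2.39 reported above represent very large gains.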


“Elements from the Spelling through Morphographs curriculum were chosen to develop lesson plans for the present study because DI curriculums have been effective for teaching discrete literacy skills to DHH students (Trezek & Malmgren, 2005; Trezek, Wang, Woods, Gampp, & Paul, 2007) in the past. (p.231) … What effect does morphographic instruction have on the morphographic analysis skills of DHH students who are reading below grade level? We answered this question through repeated assessment of morphographic analysis skill. We found that morphographic instruction does positively change the student participants’ morphographic analysis skills. … A functional relation between the morphographic intervention and the students’ morphographic analysis skills was established. These findings support previous findings that DHH students can improve literacy skills through DI programs (Trezek & Hancock, 2013; Trezek & Malmgren, 2005; Trezek & Wang, 2006) paired with a visual organizer (Easterbrooks & Stoner, 2006).” (p.237-8)

Trussell, J.W., & Easterbrooks, S.R. (2015). Effects of morphographic instruction on the morphographic analysis skills of deaf and hard-of-hearing students. The Journal of Deaf Studies and Deaf Education, 20(3), 229–241.


“After receiving instruction from the Reading Mastery I curriculum supplemented by Visual Phonics, the mean score of each cohort of students was higher at posttest when compared to pretest measures. In addition, the results of a paired-sample t test revealed that the findings obtained on the Word Reading subtest were considered statistically significant and the effect size was large.” (p. 211)

Trezek, B. J., & Wang, Y. (2006). Implications of utilizing a phonics-based reading curriculum with children who are deaf or hard of hearing. Journal of Deaf Studies and Deaf Education, 11(2), 202-213.


“Direct Instruction programs in comprehension, spelling, and writing have been shown to produce considerable test-score gains for deaf and hard-of-hearing high school students in self-contained classrooms. To make these programs work efficiently with deaf and hard-of-hearing students, adaptations must be made in how the programs are taught and how to most effectively combine usage of ASL and CASE. Teacher training and widespread consistent usage of the programs are necessary to obtain the greatest impact. Although the high school student gains reported in this study are impressive, earlier and more consistent use of these programs and techniques has the potential of producing students who can attain much higher levels of performance.” (p.29)

Kraemer, J., Kramer, S., Koch, H., Madigan, K., & Steely, D. (2001). Using Direct Instruction programs to teach comprehension and language skills to deaf and hard-of-hearing students: A six-year study. Direct Instruction News, 1, 23–31. Retrieved from http://files.eric.ed.gov/fulltext/ED467298.pdf


“When beginning the present study, the researchers hypothesized that a deaf first-grade struggling reader would increase his phonological decoding skills by means of instruction from the Direct Instruction reading curriculum Teach Your Child to Read in 100 Easy Lessons. The results support this hypothesis. The participant was essentially a nonreader at the onset of the intervention. On a curriculum-based measure of oral reading from the story in Lesson 24 of the Teach Your Child to Read in 100 Easy Lessons curriculum, he was able to correctly read only the words a and in. In addition, he was able to identify only one nonsense word, nazz. At the end of the 8 weeks of the intervention, the participant correctly read short stories and 9 of 10 nonsense words. The present study was the first investigation to use Teach Your Child to Read in 100 Easy Lessons with a deaf student. Findings from this case study align with previous research investigating other use of the Direct Instruction curricula with deaf or hard of hearing students—that is, the studies by Trezek and Malmgren (2005) and Trezek and Wang (2006). Both of these previous studies, like the present study, emphasized the need to accompany phonological instruction with accommodations such as visual cues.” (p.386)

Syverud, S.M., Guardino, C., & Selznick, D.N. (2009). Teaching phonological skills to a deaf first grader: A promising strategy. American Annals of the Deaf, 154(4), 382-388.


 

Children with intellectual disability:

"Studies of the use of Direct Instruction materials and procedures have shown that general language can be improved among mentally retarded pupils (e.g., Maggs & Morath, 1976), as well as disadvantaged children eligible for Head Start (e.g., Engelmann, 1968) and those in Follow- Through (Becker, 1977)." (p.70)

Lloyd, J., Cullinan, D., Heins, E.D., & Epstein, M.H. (1980). Direct instruction: Effects on oral and written language comprehension. Learning Disability Quarterly, 3, 70-76.


 

Preschool children:

“The results obtained in this study largely replicate findings obtained in other studies of Funnix Beginning Reading. Like Parlange (2004) and Watson and Hempenstall (2008), these results indicate significantly stronger gains in pre- and beginning reading skills among Funnix students than among the control students. The results are especially notable given the random assignment of students to conditions and the use of high school-age tutors, rather than college students or parents, as employed in other studies.” (p.45)

Stockard, J. (2010). Promoting early literacy of preschool children: A study of the effectiveness of Funnix Beginning Reading. Journal of Direct Instruction, 10, 29-48.


 

Spelling:

"In this study, Spelling Mastery was shown to have a significant effect on trained spelling regular words, morphological words, and words that followed spelling rules and generalized to untaught regularly spelled word and untaught words that followed the spelling rules. Moreover, these 8- to 12-year-old students who had a learning disability in spelling maintained their progress on words that followed spelling rules, suggesting that the Spelling Mastery was effective in teaching students to pay attention to the patterns that occur in words. However, there was a lack of significant findings and smaller effects for irregular words at both posttest and maintenance” (p. 8, 9).

Squires, K.E., & Wolter, J.A. (2016). The effects of orthographic pattern intervention on spelling performance of students with reading disabilities: A best evidence synthesis. Remedial and Special Education, 37(6), 357-369.


Students with autism spectrum disorder (ASD):

“The Reading Mastery curriculum in particular has a powerful evidence base for its effectiveness with disadvantaged children, English Language Learners, and children with disabilities (Engelmann 1997; Gersten et al. 1987; Kamps and Greenwood 2005; Kamps et al. 2008), but limited studies specifically targeting children with ASD.” (Fuller findings from this study are quoted below.)

Kamps, D., Heitzman-Powell, L., Rosenberg, N., Schwartz, I., Mason, R., & Swinburne Romine, R. (2016). Effects of Reading Mastery as a small group intervention for young children with ASD. Journal of Developmental and Physical Disabilities, 28(5), 703-722.



“There is a limited body of reading research that includes students with ASD and DD who have significant cognitive and language deficits (Bradford et al., 2006; Flores & Ganz, 2007; Flores & Ganz, 2009; Flores et al., 2004). This study extended the research for that population by showing that students made progress after participating in comprehensive implementation of DI programs. … students in the current study successfully participated in DI which required sustained attention, frequent responding, and choral responses in a group format. This is significant since group instruction may provide for greater efficiency in meeting students’ needs in diverse classrooms. In addition, providing instruction to students with ASD and DD in a group format may also better prepare them for participation in group situations within general education classrooms” (p.46-7).

Flores, M. M., Nelson, C., Hinton, V., Franklin, T. M., Strozier, S. D., Terry, L., & Franklin, S. (2013). Teaching reading comprehension and language skills to students with autism spectrum disorders and developmental disabilities using direct instruction. Education and Training in Autism and Developmental Disabilities, 48(1), 41-48.


“Flores and Ganz (2007) investigated the effects of Corrective Reading (a DI program) on the reading comprehension skills of four individuals with developmental delays, including autism. Results indicated that a functional relationship existed between DI and reading comprehension. Furthermore, DI was effective in teaching students statement inferences, using facts, and analogies, as all students met criteria in each area.

In another study, Ganz and Flores (2009) investigated the effects of Language for Learning (a DI program) on the oral language skills, specifically the identification of materials of which objects are made, for three participants with ASD by utilizing a single subject changing criterion design. Results indicated a functional relationship existed between the program and language as students met criterion with replications over three criterion changes with three students.

Flores and Ganz (2009) investigated the effects of Corrective Reading (a DI program) on the reading comprehension skills of three individuals with autism. Using a multiprobe design across behaviors (picture analogies, deductions, and inductions), the authors demonstrated a functional relationship between the DI program and reading comprehension as all participants met the criterion in each of the three areas. Ganz and Flores (2009) found that a DI language intervention program (Language for Learning, Engelmann and Osborn, 1999) was a highly effective intervention for increasing expressive language skills for elementary children with autism. This study employed the use of a single subject changing criterion design. An additional study conducted by these authors in 2007 indicated similar results of a DI program for reading comprehension (Corrective Reading Thinking Basics: Comprehension Level A) of students with autism and developmental disabilities.” (p.51-52)

Head, C. (2016). The effects of Direct Instruction on reading comprehension for individuals with autism or intellectual disability (Doctoral dissertation, Auburn University, Auburn, AL). Retrieved from https://etd.auburn.edu/bitstream/handle/10415/5272/FINAL_DISS.pdf?sequence=2&isAllowed=y


“DI has been shown to be effective in teaching diverse groups of students; however, no studies have examined the effects of complex comprehension skills for students with autism and developmental delay or for students who can decode but have deficits in comprehension. … The results of this study indicate that students with disabilities, specifically those with autism, can make significant academic gains when provided with appropriate instruction. … the findings of this study support the efficacy of DI for students with autism and will eventually help establish DI as an evidence-based practice for this population.” (p. 177, 190)

Head, C.N., Flores, M.M., & Shippen, M.E. (2018). Effects of Direct Instruction on reading comprehension for individuals with autism or developmental disabilities. Education and Training in Autism and Developmental Disabilities, 53(2), 176–191.


“This research demonstrates that DI is a promising practice for students with ASD. … there appears to be a gap in the extant literature on using DI to teach language skills to high school students with ASD in a group format. … The purpose of this study was to determine the effectiveness of DI, and specifically the SRA Reading Mastery Signature Edition language program, in teaching high school students with ASD in a small group setting to answer “who,” “where,” and “what” questions. An additional purpose of this study was to determine whether any effects demonstrated during the intervention would maintain after instruction was removed. The results indicated that the DI curriculum, as modified, was effective in teaching all participants to answer “who” and “what” questions to mastery. In addition, the data revealed that the participants maintained these improvements at both the 2- and 4-week post-intervention follow-up assessments.” (p. 2969, 2976)

Cadette, J.N., Wilson C.L., Brady, M.P., Dukes, C., & Bennett, K.D. (2016). The effectiveness of Direct Instruction in teaching students with autism spectrum disorder to answer "wh-" questions. Journal of Autism and Developmental Disorders, 46(9), 2968-78.



“The primary purpose of this study was to test the effects of the Reading Mastery curriculum on early literacy skills for students with ASD in Kindergarten and First grade. The curriculum was delivered in small groups with peers and compared to “business-as-usual” reading instruction. Analysis showed that children in the treatment group showed significantly more growth on the Reading Mastery curriculum-based word list, letter sound knowledge (DIBELS nonsense word fluency), and on the Word Identification test on the Woodcock Reading Mastery Test. Findings support the use of explicit and Direct Instruction curricula for high risk children who are struggling academically (Kame’enui and Simmons 2001; Kamps et al. 2008); and more specifically children with ASD at risk for learning problems (El Zein et al. 2014; Flores and Ganz 2009; Ganz and Flores 2009; Plavnick et al. 2014, 2016). Findings also support the use of the Reading Mastery curriculum to teach children with ASD basic phonemic awareness, decoding skills and word reading (Plavnick et al. 2016; Spector and Cavanaugh 2015).”

Kamps, D., Heitzman-Powell, L., Rosenberg, N., Schwartz, I., Mason, R., & Swinburne Romine, R. (2016). Effects of Reading Mastery as a small group intervention for young children with ASD. Journal of Developmental and Physical Disabilities, 28(5), 703-722. doi:10.1007/s10882-016-9503-3


“Many children diagnosed with autism spectrum disorder (ASD) exhibit difficulties with complex language and social communication. Direct Instruction (DI) is an empirically supported curriculum designed to teach complex language skills to children with and at risk of learning disabilities. Only recently, the effectiveness of DI has been evaluated among children with autism. The present study evaluated the effectiveness of the DI Language for Learning curriculum among 18 children diagnosed with ASD. Immediate post-intervention language scores on curriculum post-tests were significantly higher than pre-intervention scores and remained significantly higher than pre-intervention scores up to 6 to 8 months following the intervention. Comparing language skills across groups, children already exposed to the intervention exhibited significantly higher language skills than their non-exposed waitlist counterparts.” (p. 44)

Shillingsburg, M. A., Bowen, C. N., Peterman, R. K., & Gayman, M. D. (2015). Effectiveness of the Direct Instruction Language for Learning curriculum among children diagnosed with autism spectrum disorder. Focus on Autism and Other Developmental Disabilities, 30(1), 44–56.  


What about the research into instructional components (e.g., scripts, choral reading, response frequency, spaced repetition, frequent progress monitoring) built into DI programs?

“Published instructional programs that incorporate explicit and systematic procedures in a scripted manner allow consistent implementation across instructors of varying skill levels. Scripted programs control instructional delivery, increasing fidelity of implementation (Cooke et al. 2011). According to Watkins and Slocum (2004), scripts accomplish two goals: 1. To assure that students access instruction that is extremely well designed from the analysis of the content to the specific wording of explanations, and 2. To relieve teachers of the responsibility for designing, field-testing, and refining instruction in every subject that they teach. (p. 42) Importantly, Cooke et al. (2011) compared scripted to nonscripted explicit instruction and found increased rates of on-task instructional opportunities during scripted instruction. Additionally, students indicated they enjoyed answering together (i.e., in unison) and instructors shared positive outcomes including greater student attention, consistent routine, and reduced likelihood of leaving out crucial concepts.” (p.56)

Plavnick, J. B., Marchand-Martella, N. E., Martella, R. C., Thompson, J. L., & Wood, A. L. (2015). A review of explicit and systematic scripted instructional programs for students with autism spectrum disorder. Review Journal of Autism and Developmental Disorders, 2(1), 55-66. doi:10.1007/s40489-014-0036-3


 

“Peer-reviewed research reporting positive effects of CR (choral reading) on ASR (active student responding), learning outcomes, and deportment has been published since the late 1970s (e.g., McKenzie & Henry, 1979; Pratton & Hales, 1986; Sindelar, Bursuck, & Halle, 1986; and see Haydon, Marsicano, & Scott, 2013). CR has been used successfully with students from preschool through secondary grades (Rose & Rose, 2001; Sainato et al., 1987), with general education students (Kretlow, Cooke, & Wood, 2012; Maheady, Michielli-Pendl, Mallette, & Harper, 2002), and with special education students with various disabilities (Alberto, Waugh, Fredrick, & Davis, 2013; Cihak, Alberto, Taber-Doughty, & Gama, 2006; Flores & Ganz, 2009; Sterling, Barbetta, Heward, & Heron, 1997). ... one of the most consistent and important findings in recent educational research: Students who make frequent, relevant responses during a lesson (ASR) learn more than students who are passive observers.” (p.6)

Twyman, J. S., & Heward, W. L. (2016). How to improve student learning in every classroom now. International Journal of Educational Research. doi:10.1016/j.ijer.2016.05.007


“Hundreds of studies in cognitive and educational psychology have demonstrated that spacing out repeated encounters with the material over time produces superior long-term learning, compared with repetitions that are massed together. Also, incorporating tests into spaced practice amplifies the benefits. Spaced review or practice enhances diverse forms of learning, including memory, problem solving, and generalization to new situations. Spaced practice is a feasible and cost-effective way to improve the effectiveness and efficiency of learning, and has tremendous potential to improve educational outcomes.” (p.12)

Kang, S.H.K. (2016). Spaced repetition promotes efficient and effective learning: Policy implications for instruction. Policy Insights from the Behavioral and Brain Sciences, 3(1), 12-19. 


“Taking a test improves memory for the material, and it also decreases the rate at which we forget that material. What this means is that the benefits of testing are even greater when looking at longer-term retention. … Testing also increases the effectiveness of the way in which we choose to access and organize the tested information. … When taken together, these results help us understand why students who take more tests in the classroom tend to perform better on later exams (Bangert-Drowns, Kulik, & Kulik, 1991). The fact that testing decreases the rate of forgetting can be leveraged to start thinking about how tests can be efficiently sequenced. Because the material will be forgotten a little more slowly after each test, then if all tests were equally difficult from an objective standpoint, each test would actually be subjectively a little easier than the last. To render each test more similar in difficulty from the test taker’s perspective requires each test to be a little more objectively difficult than the last. One way in which this can be done is by using an expanding test schedule, in which each quiz is administered at a slightly longer interval than the last one. Expanding schedules have been shown to enhance memory for names (Landauer & Bjork, 1978) and text (Storm, Bjork, & Storm, 2010). … One concern that people have with testing is that test takers will make errors and that the process that leads to those errors will become engrained and will prevent the learner from acquiring the correct solution. Interestingly, this does not appear to be the case; in fact, making errors may even have tangible benefits for learners.” (p. 15-18)

Benjamin, A.S., & Pashler, H. (2015). The value of standardized testing: A perspective from cognitive psychology. Policy Insights from the Behavioral and Brain Sciences, 2(1), 13–23.
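
The “expanding test schedule” described above is straightforward to operationalise. The sketch below is purely illustrative: the function name and the doubling factor are my own assumptions, not taken from Benjamin and Pashler or Kang. It simply generates quiz dates in which each interval is longer than the last, consistent with the spaced-practice findings reported above.

# A minimal sketch of an expanding review/test schedule.
# The growth factor of 2.0 is an illustrative assumption; the cited
# studies do not prescribe a particular growth rate.
def expanding_schedule(first_gap: float, n_tests: int, growth: float = 2.0) -> list[float]:
    """Return day offsets (from the initial study session) for each quiz."""
    days, day, gap = [], 0.0, first_gap
    for _ in range(n_tests):
        day += gap          # schedule the next quiz after the current gap
        days.append(day)
        gap *= growth       # each interval is longer than the one before
    return days

print(expanding_schedule(1, 5))  # [1.0, 3.0, 7.0, 15.0, 31.0]

With a first gap of one day, the quizzes fall on days 1, 3, 7, 15, and 31: each test arrives after a longer delay, so it stays roughly constant in subjective difficulty as forgetting slows.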


So, DI programs have been shown effective in:

Basic skills: reading, spelling, maths, language, writing

Higher order skills: literary analysis, logic, chemistry, critical reading, geometry, history and social studies

Computer-assisted instruction: Funnix Beginning Reading program, videodisc courseware in science and maths.

The combination of effectiveness across learner types and across curriculum areas lends credibility to the claim that the model itself is well founded. Further, it suggests that effective instruction transcends learner characteristics.


“Hard evidence (rather than self-reported intentions) indicates that students who went through a Direct Instruction program significantly exceeded controls in the percentage completing high school, the percentage applying for college admission, and the percentage being accepted. … the extensive and well documented case for the educational benefits of Direct Instruction” (p. 289).

Bereiter, C. (1986). Does Direct Instruction cause delinquency? Response to Schweinhart and Weikart. Early Childhood Research Quarterly, 1(3), 289-292.


Perceptions of teachers and paraprofessionals with regard to a Direct Instruction program:

“Gersten et al. (1986) evaluated perceptions of teachers and paraprofessionals with regard to a Direct Instruction program. Teachers were interviewed toward the end of the first and second year of implementation. Initially, teachers were concerned with the high degree of structure leaving little room for fun activities and felt that scripted lessons were overly mechanical. At least half of the teachers believed that their teaching philosophy conflicted with that of Direct Instruction. By mid year, Gersten et al. found that teachers and paraprofessionals generally came to accept the program. By the end of the first year, attitudes had improved along with student achievement. Gersten et al. found that by the end of the second year of implementation, all but one teacher agreed with the main objectives of Direct Instruction as a program for educationally disadvantaged students (p.26-27).

In the final discussion, Proctor concludes that 89% of all subjects agreed that regular use of Direct Instruction had increased their appreciation of the method. Also, the results show evidence supporting the relationship between the amount of supervised experience and positive attitudes towards Direct Instruction (p.28).

Results from the pre and post internship evaluation show that responses in favor of Direct Instruction increased. Differences in responses regarding attention signals, response signals, and feedback were statistically significant. Cossairt et al. conclude, “After completion of an internship where they work directly with educationally handicapped students, students felt even more strongly about the usefulness and values of these techniques” (p. 170)” (p.28-9).

“It is evident that the majority of the responses favored Reading Mastery. Overall, the teachers surveyed seemed to have mostly positive attitudes and perceptions towards the program. In general, it appears that the majority of participants believe that Reading Mastery aids learning and that they have seen positive results with the program” (p.58).

Cossairt, A., Jacobs, J., & Shade, R. (1990). Incorporating direct instruction skills throughout the undergraduate teacher training process: A training and research direction for the future. Teacher Education and Special Education, 13, 167-171.

Gersten, R., Carnine, D., & Cronin, D. (1986). A multifaceted study of change in seven inner-city schools. Elementary School Journal, 86, 257-276.

Proctor, T. J. (1989). Attitudes toward direct instruction. Teacher Education and Special Education, 12, 40-45.

Gervase, S. (2005). Reading Mastery: A descriptive study of teachers’ attitudes and perceptions towards Direct Instruction. (Electronic Thesis or Dissertation). Retrieved from http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1120001760


Use by paraeducators and tutors

“Although the interventions used in this study, Reading Mastery and Corrective Reading, were not specifically designed for use by non-teachers, the study demonstrated that instructional assistants – when provided with training – were able to implement these scripted reading programs effectively, with benefits for students across reading skill subsets. Both of these programs, which had been previously evaluated for small-group and whole-class use (Stahl & Miller, 1989), include critical alphabetic, decoding, and fluency components identified for inclusion in effective early reading programs, and incorporate features of effective instructional design (e.g., explicit skills instruction, teacher modelling, review cycles)” (p.310).

Vadasy, P.F. (2011). Supplemental reading instruction by paraeducators and tutors: Recent research and applications. In Rollanda E. O'Connor & Patricia F. Vadasy (Eds.), Handbook of reading interventions (p. 310). New York: Guilford.


STUDY RATES 22 WIDELY USED COMPREHENSIVE SCHOOL REFORM MODELS

http://www.air.org/search/site/2005%20STUDY%20RATES%2022%20WIDELY%20USED%20COMPREHENSIVE%20SCHOOL%20REFORM%20MODELS

WASHINGTON, D.C. - A new guide using strict scientific criteria to evaluate the quality and effectiveness of 22 widely adopted comprehensive elementary school reform models rates 15 as “limited” to “moderately strong” in demonstrating positive effects on student achievement.

The American Institutes for Research (AIR) report was produced by AIR’s Comprehensive School Reform Quality (CSRQ) Center, a multi-year project funded by a grant from the U.S. Department of Education. The CSRQ Center Report on Elementary School CSR Models builds on AIR’s pioneering work in conducting consumer-friendly research reviews, including An Educators’ Guide to Schoolwide Reform issued in 1999, and its current work for the What Works Clearinghouse.

“Our purpose in providing ratings is not to pick winners and losers but rather to clarify options for decision-makers,” said Steve Fleischman, a managing director for AIR who oversaw the study. “This report is being issued in the hopes that the information and analysis it provides contributes to making research relevant in improving education.”

Collectively, the reform models reviewed serve thousands of mostly high-poverty, low-performing schools nationwide. The review includes such well known models as Success for All, Accelerated Schools, Core Knowledge, America’s Choice, Direct Instruction, School Renaissance, and the School Development Program.

AIR researchers conducted extensive reviews of about 800 studies and other publicly available information to rate the models in five categories of quality and effectiveness, including their ability to improve student achievement and to provide support to schools that allowed the model to be fully implemented. The CSRQ Center review framework was developed in consultation with an Advisory Group composed of leading education experts and researchers, and is closely aligned with the requirement for scientifically based evidence that is part of the federal No Child Left Behind Act.

Of the 22 reform models examined, Direct Instruction (Full Immersion Model), based in Eugene, Ore., and Success for All, located in Baltimore, Md., received a “moderately strong” rating in “Category 1: Evidence of Positive Effects on Student Achievement.”

Five models met the standards for the “moderate” rating in Category 1: Accelerated Schools PLUS, in Storrs, Conn.; America’s Choice School Design, based in Washington, D.C.; Core Knowledge, located in Charlottesville, Va.; School Renaissance in Madison, Wis.; and the School Development Program, based in New Haven, Conn. Models receiving a “moderate” rating may still show notable evidence of positive outcomes, but this evidence is not as strong as those models receiving a “moderately strong” or “very strong” rating.

Eight models earned a “limited” rating in Category 1: ATLAS Communities and Co-nect, both in Cambridge, Mass.; Different Ways of Knowing, located in Santa Monica, Calif.; Integrated Thematic Instruction, based in Covington, Wash.; Literacy Collaborative, from Columbus, Ohio; National Writing Project, in Berkeley, Calif.; Modern Red Schoolhouse, based in Nashville, Tenn.; and Ventures Initiative Focus System, located in New York, N.Y. The “limited” rating indicates that while the CSRQ Center found some evidence of positive effects on student achievement, much more rigorous research and evidence needs to be presented on the model to fully support its effectiveness.

Seven CSR models received a “zero” rating in Category 1: Breakthrough to Literacy, from Coralville, Iowa; Comprehensive Early Literacy Learning, in Redlands, Calif.; Community for Learning, based in Philadelphia, Pa.; Coalition of Essential Schools, located in Oakland, Calif.; Expeditionary Learning, based in Garrison, N.Y.; First Steps, in Salem, Mass.; and Onward to Excellence II, located in Portland, Ore. A rating of “zero” means that evidence was found to provide a rating for this category, but none was of sufficient quality to be counted as reliable evidence.

None of the 22 models earned a “no” or “negative” rating, which indicate that a model has no evidence available for review, or strong evidence demonstrating negative effects in a given category or subcategory, respectively.

Consumers can visit the CSRQ Center’s Web site (http://www.csrq.org/reports.asp) to download the entire report, individual model profiles, or to search the online database to perform side-by-side comparisons of the models reviewed by the CSRQ Center.

About CSRQ Center

The Comprehensive School Reform Quality Center (CSRQ Center, www.csrq.org) is funded by the U.S. Department of Education’s Office of Elementary and Secondary Education, through a Comprehensive School Reform Quality Initiative Grant (S222B030012), and is operated by the American Institutes for Research (AIR, www.air.org).


Could such teaching be harmful somehow, as Schweinhart and Weikart proposed?

“There have been no consistent findings that reveal the depression of esteem, social development, ethical development, critical thinking, cognitive ability, or cultural participation through Direct Instruction. Stein et al. (1998) argue that many of the assertions against Direct Instruction contain a fundamental confusion between rote instruction and explicit instruction. Scripted Direct Instruction lessons are not based on the mass memorization of arbitrary facts. Instead, a fundamental design principle within the Direct Instruction curriculum is the conveyance of generalizable strategies and concepts, though this is done in an explicit and sequenced manner with constant review and assessment to ensure mastery.” (p.117)

Kim, T., & Axelrod, S. (2005). Direct Instruction: An educators’ guide and a plea for action. The Behavior Analyst Today, 6(2), 111-120.

 

Certainly there have been many criticisms of DI for various reasons. For an overview of those, see:

Why does Direct Instruction evoke such rancour?

Some have discussed why DI has received so little attention both by researchers and by education systems.

For example:

“Given the established findings in support of the effectiveness of Direct Instruction programs in general, and the cumulative evaluative findings initiated with Project Follow Through, a possible expectation may be that the use of Direct Instruction programs has been a major focus of school reform initiatives. However, this has not been the case. Despite established evidence of effectiveness, implementation capacity (see Engelmann & Engelmann, 2004), and increased cumulative use with at-risk students (see National Institute for Direct Instruction, 2010a, 2010b), Direct Instruction, for the most part, has been excluded from the school reform movement. As Engelmann (2008) detailed, what evolved into a systemic avoidance of using Direct Instruction programs that began with Project Follow Through has been detrimental to both school reform and the potential achievement of at-risk students.” (p.27)

Vitale, M. R., & Kaniuka, T. S. (2012). Adapting a multiple-baseline design rationale for evaluating instructional interventions: Implications for the adoption of Direct Instruction reading curricula for evidence-based reform. Journal of Direct Instruction, 12, 25-36.

 

“Direct Instruction has been the subject of empirical research since its inception in the 1960s and has garnered a strong research base to support it. Despite its proven efficacy, Direct Instruction is not widely implemented and draws much criticism from some educators. This literature review details the components of Direct Instruction, research to support it and reported attitudes towards it. The aspects of Direct Instruction that attract the most criticism are broken down to determine just what it is that educators do not like about it. In addition, this review attempts to outline possible ways to improve the landscape for Direct Instruction by reviewing research on how best to achieve a shift in beliefs when adopting change in schools. This includes pre-service teacher education and professional development and support for practising teachers as a means of improving rates of implementation of Direct Instruction.” (p.137)

McMullen, F., & Madelaine, A. (2014). Why is there so much resistance to Direct Instruction? Australian Journal of Learning Difficulties, 19(2), 137-151.


 

Perhaps the whole explicit teaching model is inferior to self-directed approaches?

 

“Efficacy studies such as the one described in this article are urgently needed (Hmelo-Silver et al., 2007). For example, there is an ongoing debate about whether students are better served by direct instruction or constructivist approaches to learning (Kirschner, Sweller, & Clark, 2006; Tobias & Duffy, 2009). Klahr (2010) asserts “the burden of proof is on constructivists to define a set of instructional goals, an unambiguous description of instructional processes, a clear way to ensure implementation fidelity, and then to perform a rigorous assessment of effects” (p. 4). Some constructivists have expressed resistance to direct rigorous comparisons of these different instructional approaches, arguing that due to fundamental differences between constructivist pedagogies and direct instruction, no common research method can evaluate the two (Jonassen, 2009). Alternatively, Klahr states, “Constructivists cannot use complexity of treatments or assessments as an excuse to avoid rigorous evaluations of the effectiveness of an instructional process” (p. 3). Similarly, Mayer (2004) recommends that we “move educational reform efforts from the fuzzy and unproductive world of ideology—which sometimes hides under the various banners of constructivism—to the sharp and productive world of theory-based research on how people learn” (p. 18).” (p.1011)

Taylor, J., Getty, S., Kowalski, S., Wilson, C., Carlson, J., & Van Scotter, P. (2015). An efficacy trial of research-based curriculum materials with curriculum-based professional development. American Educational Research Journal, 52, 984-1017.

 

“We found not only that many more children learned from direct instruction than from discovery learning, but also that when asked to make broader, richer scientific judgments, the many children who learned about experimental design from direct instruction performed as well as those few children who discovered the method on their own. These results challenge predictions derived from the presumed superiority of discovery approaches in teaching young children basic procedures for early scientific investigations.” (p. 661)

Klahr, D., & Nigam, M. (2004). The equivalence of learning paths in early science instruction: Effects of direct instruction and discovery learning. Psychological Science, 15(10), 661-667.

 

"Research almost universally supports explicit instructional practices (Archer & Hughes, 2011; Kirschner, Sweller, & Clark, 2006; Klahr & Nigam, 2004; Marchand-Martella, Slocum, & Martella, 2004). Explicit instructional approaches are considered more effective and efficient as compared to discovery-based approaches (Alfieri, Brooks, Aldrich, & Tenenbaum, 2010; Ryder, Tunmer, & Greaney, 2008), particularly when students are naïve or struggling learners. Vaughn and Linan-Thompson (2003) answered the question, "So what is special about special education for students with LD?" Their answer, again based on a thorough review of the research literature, noted "students with LD benefit from explicit and systematic instruction that is closely related to their area of instructional need" (p. 145). Burns and Ysseldyke (2009) examined the frequency with which evidence- based practices were used with students with disabilities. They found explicit instruction was the most frequently used instructional methodology in their survey of special education teachers and school psychologists. No matter what research synthesis was reviewed, "the conclusions were clear: Explicit instruction should be a consistent mainstay of working with students both with and without learning difficulties" (Archer & Hughes, 2011, p. 17)” (p. 166-7).

Marchand-Martella, N., Martella, R. C., Modderman, S. L., Petersen, H. M., & Pan, S. (2013). Key areas of effective adolescent literacy programs. Education & Treatment of Children, 36(1), 161-184.

 

 “After half a century of advocacy associated with instruction using minimal guidance, it appears that there is no body of sound research that supports using the technique with anyone other than the most expert students. Evidence from controlled experimental (a.k.a. “gold standard”) studies almost uniformly supports full and explicit instructional guidance rather than partial or minimal guidance for novice to intermediate learners. These findings and their associated theories suggest teachers should provide their students with clear, explicit instruction rather than merely assisting students in attempting to discover knowledge themselves” (p.11).

Clark, R.E., Kirschner, P.A., & Sweller, J. (2012). Putting students on the path to learning: The case for fully guided instruction. American Educator, Spring 2012. Retrieved from http://www.aft.org/pdfs/americaneducator/spring2012/Clark.pdf


The What Works Clearinghouse hasn't rated Direct Instruction programs as highly as have other reviews:

"The What Works Clearinghouse is the most comprehensive of the systematic review sites, and because it is heavily funded by the US government, a great deal is invested in its being considered sufficiently trustworthy to enable it to achieve its stated goals. A great deal of money can be involved subsequent to a WWC review of a program, as schools and districts may make decisions about purchase based upon such information. Additionally, potentially huge numbers of students can be advantaged or disadvantaged by the decision to implement one program over another.

The WWC methodology and some individual program ratings have been criticised over the last 5 years (Briggs, 2008; Carter & Wheldall, 2008; Greene, 2010; Engelmann, 2008; McArthur, 2008; Reynolds, Wheldall, & Madelaine, 2009; Slavin, 2008; Stockard, 2008, 2010; Stockard & Wood, 2012, 2013). It is important that issues of validity and reliability of these systematic reviews are continuously examined, and this process has been gathering momentum. The criticisms have been several: of the criteria for what constitutes acceptable research; of slowness in producing evaluations; of inconsistency in applying standards for what constitutes acceptable research; of the inclusion of studies that have not been peer reviewed; and of a failure to attend to fidelity of implementation issues in the WWC analyses. This latter criticism can be subsumed under a broader criticism of ignoring external validity or generalisation in the reviews.

The focus of syntheses must be on what has worked, that is, programs for which there is evidence of an aggregate effect that is internally valid. I would argue that such evidence, although certainly important, is necessary but not sufficient for those stakeholders enacting educational policies. What the superintendent of a school district wants to know is not so much what has worked but what will work. To be relevant, a good synthesis should give policy makers explicit guidance about program effectiveness that can be tailored to specific educational contexts: When and where will a given program work? For whom will it work? Under what conditions will it work the best? For causal inferences to be truly valid, both causal estimation and generalization should at the very least be given equal weight (Briggs, 2008, p.20).

Their point is that whilst Randomised Control Trials (RCTs) may provide the best bulwark against threats to internal validity, the acceptance of small scale and brief RCTs creates a strong threat to external validity. Thus, the large scale reviews have their own issues to deal with before they can be unquestioningly accepted. It may also be quite some time before gold-standard research reaches critical mass to make decisions about practice easier."

Hempenstall, K. (2014). What works? Evidence-based practice in education is complex. Australian Journal of Learning Difficulties, 19(2), 113-127.

 

Criticisms of What Works Clearinghouse

“The major concerns documented in these reports included the misinterpretation of study findings, inclusion of studies where programs were not fully implemented, exclusion of relevant studies from review, inappropriate inclusion of studies, concerns over WWC policies and procedures, incorrect information about a program developer and/or publisher, and the classification of programs. Multiple inquirers documented how the WWC made conclusions about study findings that did not align with the authors’ conclusions, and in some instances reported totally different conclusions. Over 80 percent of the requests for Quality Reviews involved concerns with misinterpretations of study findings. Misinterpretation of study findings appeared to result from both procedural errors of individual reviewers, but also from WWC policies, often including the WWC’s refusal to consider fidelity of implementation when determining the effectiveness rating of an intervention.” (Wood, 2017, p. iii)

Wood, T. (2017). Does the What Works Clearinghouse really work?: Investigations into issues of policy, practice, and transparency. Retrieved from https://www.nifdi.org/research/recent-research/whitepapers


What’s Wrong with What Works?

Kevin Wheldall http://kevinwheldall.blogspot.com.au

Evidence-based practice has become all but a cliché in educational discourse. Perhaps finally tiring of talking about ‘learnings’, ‘privileging’ and verbing any other noun they can get their hands on, educationists have decided to “sing from the same songsheet” of evidence-based practice. That’s got to be a good thing, right? Well, yes, it would be if they were singing the same tune and the same words. Unfortunately, evidence-based practice means different things to different people. This is why I personally prefer the term scientific evidence-based practice. But how are we to know what constitutes (scientific) evidence-based practice?

The Education Minister for New South Wales has recently (August 2012) launched the Centre for Education Statistics and Evaluation which “undertakes in depth analysis of education programs and outcomes across early childhood, school, training and higher education to inform whole-of-government, evidence based decision making.” (See http://tinyurl.com/d53f2y2 and http://tinyurl.com/c6uh3y4). Moreover, we are told,

“The Centre turns data into knowledge, providing information about the effectiveness of different programs and strategies in different contexts – put simply, it seeks to find out what works”.

Ah, ‘what works’, that rings a bell. It is too early to tell whether this new centre will deliver on its promises but what about the original ‘What Works Clearinghouse’ (WWC), the US based repository of reports on educational program efficacy that originally promised so much? As Daniel Willingham has pointed out:

“The U.S. Department of Education has, in the past, tried to bring some scientific rigor to teaching. The What Works Clearinghouse, created in 2002 by the DOE's Institute of Education Sciences, evaluates classroom curricula, programs and materials, but its standards of evidence are overly stringent, and teachers play no role in the vetting process.” (See http://tinyurl.com/bn8mvdt)

My colleagues and I have also been critical of WWC. And not just for being too stringent. Far from being too rigorous, the WWC boffins frequently make, to us, egregious mistakes; mistakes that, far too often for comfort, seem to support a particular approach to teaching and learning.

I first became a little wary of WWC when I found that our own truly experimental study on the efficacy of Reading Recovery (RR) had been omitted from their analyses underlying their report on RR. Too bad, you might think, that’s just sour grapes. But, according to Google Scholar, the article has been cited 160 times since publication in 1995 and was described by eminent American reading researchers Shanahan and Barr as one of the “more sophisticated studies”. Interestingly enough, it is frequently cited by proponents of RR (we did find it to be effective) as well as by its critics (but effective only for one in three children who received it). So why was it not included by WWC? It was considered for inclusion but was rejected on the following grounds:

“Incomparable groups: this study was a quasi-experimental design that used achievement pre-tests but it did not establish that the comparison group was comparable to the treatment group prior to the start of the intervention.”

You can read the details of why this is just plain wrong, as well as other criticisms of WWC, in Carter and Wheldall (2008) (http://tinyurl.com/c6jcknl). Suffice to say that participants were randomly allocated to treatment groups and that we did establish that the control group (as well as the comparison group) was comparable to the (experimental) treatment group who received RR prior to the start of the intervention. This example also highlights another problem with WWC’s approach. Because they are supposedly so ‘rigorous’, they discard the vast majority of studies from the research literature on any given topic as not meeting their criteria for inclusion or ‘evidence standards’. In the case of RR, 78 studies of RR were considered and all but five were excluded from further consideration. Our many other criticisms of what we regard as a seriously flawed WWC evaluation report on RR are detailed in Reynolds, Wheldall, and Madelaine (2009) (http://tinyurl.com/cuj8sqm).

Advocates of Direct Instruction (DI) seem to have been particularly ill-served by the methodological ‘rigour’ of WWC, for not only are most more recent studies of the efficacy of DI programs excluded because they do not meet the WWC evidence standards but they also impose a blanket ban on including any study (regardless of technical adequacy) published before 1985; an interesting if somewhat idiosyncratic approach to science. Philip Larkin told us that sex only began in 1963 but who would have thought that there was no educational research worth considering before 1985? (Insert your own favourite examples here of important scientific research in other areas that would fall foul of this criterion. Relativity anyone? Gravity?) Zig Engelmann, the godfather of DI, has written scathingly about the practices of WWC (http://tinyurl.com/c5pjm9d and http://tinyurl.com/85t2vpt), concluding:

“I consider WWC a very dangerous organization. It is not fulfilling its role of providing the field with honest information about what works, but rather seems bent on finding evidence for programs it would like to believe are effective (like Reading Recovery and Everyday Mathematics).”

Engelmann can be forgiven for having his doubts given that for the 2008 WWC evaluation report on the DI program Reading Mastery (RM) (http://tinyurl.com/d8kawf7), WWC could not find a single study that met their evidence standards out of the 61 studies they were able to retrieve. (Engelmann claims that there were over 90 such studies, mostly peer reviewed.)

The most recent WWC report on RM in 2012 (http://tinyurl.com/7bdobxv), specifically concerned with its efficacy for students with learning disabilities, determined that only two of the 17 studies it identified as relevant met evidence standards and concluded:

“Reading Mastery was found to have no discernible effects on reading comprehension and potentially negative effects on alphabetics, reading fluency, and writing for students with learning disabilities.”

In response to this judgement, the Institute for Direct Instruction pointed out, not unreasonably, that, of the two studies considered:

“One actually showed that students studying with RM had significantly greater gains than students in national and state norming populations. Because the gains were equal to students in Horizons (another DI program), the WWC concluded that RM had no effect. The other study involved giving an extra 45 minutes of phonics related instruction to students studying RM. The WWC interpreted the better results of the students with the extra time as indicating potentially negative effects of RM.” (http://tinyurl.com/9oewdlo)

In other words when Reading Mastery was compared with another very similar DI program (in each case), and the results were no different from or slightly better than the standard Reading Mastery program, it was concluded that Reading Mastery was therefore ineffective for students with learning disabilities and possibly even detrimental to their progress. It is conclusions such as these that have led some experts in the field to wonder whether this is the result of incompetence or bias: cock up or conspiracy.

If we needed any further proof of the unreliability of WWC reports, we now have their August 2012 report on whether Open Court Reading© improves adolescent literacy (http://tinyurl.com/9nzv5wj). True to form, they discarded 57 out of 58 studies as not meeting evidence standards. On the basis of this one study they concluded that Open Court “was found to have potentially positive effects on comprehension for adolescent readers”. There are at least three problems with this conclusion. First, this is a bold claim based on the results for just one study, the large sample size and their ‘potentially positive’ caveat notwithstanding. Second, the effect size was trivial at 0.16, not even ‘small’, and well below WWC’s own usual threshold of 0.25. Third, and most important of all, this study was not even carried out with adolescents! The study sample comprised “more than 900 first-grade through fifth-grade students who attended five schools across the United States”. As Private Eye magazine would have it “shorely shome mishtake” …

There is, then, good reason for serious concern regarding the reliability of the judgments offered by WWC. The egregious errors noted above apart, there is the more pressing problem that truly experimental trials are still relatively rare in educational research and those that have been carried out may often be methodologically flawed. In its early years, What Works was renamed ‘Nothing Works’ by some because there was little or no acceptable evidence available on many programs. Clearly, teachers cannot just stop using almost all programs and interventions until there are sufficient RCTs testifying to their efficacy to warrant adopting them. Hattie, for example, in his seminal 2009 work ‘Visible Learning’ has synthesized over 800 meta-analyses relating to achievement in order to be able to offer evidence-based advice to teachers (http://tinyurl.com/3h9jssl). (Very few of the studies on which the meta-analyses were based were randomized control trials, however, as Hattie makes clear.)

Until we have a large evidence base of methodologically sound randomized control trials on a wide variety of educational programs, methods and procedures, we need a more sophisticated and pragmatic analysis of the evidence we currently have available. It is not a question of accepting any evidence in the absence of good evidence, but rather of assessing the existing research findings and carefully explaining the limitations and caveats.

As I have attempted to show, the spurious rigour of WWC, whereby the vast majority of studies on any topic are simply discarded as being too old or too weak methodologically, coupled with their unfortunate habit of making alarming mistakes, makes it hard to trust their judgments. If the suggestions of bias regarding their pedagogical preferences have any substance, we have even more cause for concern. As it stands, What Works simply won’t wash.

 



“Substantively Important” Isn’t Substantive. It Also Isn’t Important (01/04/2018) by Robert Slavin, Director of the Center for Research and Reform in Education at Johns Hopkins University School of Education

https://www.huffingtonpost.com/entry/substantively-important-isnt-substantive-it-also_us_5a4cf921e4b06cd2bd03e3cc
Since it began in 2002, the What Works Clearinghouse has played an important role in finding, rating, and publicizing findings of evaluations of educational programs. It performs a crucial function for evidence-based reform. For this very reason, it needs to be right. But in several important ways, it uses procedures that are indefensible and have a big impact on its conclusions.

One of these relates to a study rating called “substantively important-positive.” This refers to study outcomes with an effect size of at least +0.25, but that are not statistically significant. I’ve written about this before, but the WWC has recently released a database of information on its studies that makes it easy to analyze WWC data on a large scale, and we have learned a lot more about this topic.

Study outcomes rated as “substantively important-positive” can qualify a study as “potentially positive,” the second-highest WWC rating. “Substantively important-negative” findings (non-significant effect sizes less than -0.25) can cause a study to be rated as “potentially negative,” which can keep a program from ever receiving a positive rating: under current rules, a single “potentially negative” rating ensures that a program can never be rated better than “mixed,” even if other studies found hundreds of significant positive effects.
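To make the rule concrete, here is a minimal sketch of the outcome classification just described (Python, purely illustrative; the function name and the simplified handling of significant outcomes are my own, while the ±0.25 thresholds and the non-significance condition come from the text):

```python
# Minimal sketch of the WWC outcome-rating rule described above.
# Only the +/-0.25 thresholds and the significance condition come from the
# text; the naming and the simplified significant branches are assumptions.

SUBSTANTIVE_THRESHOLD = 0.25

def rate_outcome(effect_size: float, significant: bool) -> str:
    if significant:
        return "positive" if effect_size > 0 else "negative"
    # Non-significant outcomes can still drive a program's rating
    # if the effect size crosses the "substantively important" line.
    if effect_size >= SUBSTANTIVE_THRESHOLD:
        return "substantively important-positive"   # feeds "potentially positive"
    if effect_size <= -SUBSTANTIVE_THRESHOLD:
        return "substantively important-negative"   # feeds "potentially negative"
    return "indeterminate"

print(rate_outcome(0.30, significant=False))   # substantively important-positive
print(rate_outcome(-0.40, significant=False))  # substantively important-negative
print(rate_outcome(0.16, significant=False))   # indeterminate
```

The point to notice is that the two “substantively important” branches are reached without any statistical test being passed.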

People who follow the WWC and know about “substantively important” may assume that, strange as the rule is, it is relatively rare in practice. But that is not true.


My graduate student, Amanda Inns, has just done an analysis of WWC data from their own database, and if you are a big fan of the WWC, this is going to be a shock. Amanda has looked at all WWC-accepted reading and math studies. Among these, she found a total of 339 individual outcomes rated “positive” or “potentially positive.” Of these, 155 (46%) reached the “potentially positive” level only because they had effect sizes over +0.25, but were not statistically significant.

Another 36 outcomes were rated “negative” or “potentially negative”; 26 of these (72%) were categorized as “potentially negative” only because they had effect sizes less than -0.25 and were not significant. I’m sure patterns would be similar for subjects other than reading and math.
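These counts also yield the combined figure quoted in the next paragraph. A quick arithmetic check (Python, purely illustrative of the stated counts):

```python
# Checking the percentages quoted above against the stated counts.
pos_total, pos_nonsig = 339, 155   # positive/potentially positive outcomes
neg_total, neg_nonsig = 36, 26     # negative/potentially negative outcomes

print(round(100 * pos_nonsig / pos_total))  # 46
print(round(100 * neg_nonsig / neg_total))  # 72
# Combined: 181 of 375 rated outcomes rested on non-significant effects.
print(round(100 * (pos_nonsig + neg_nonsig) / (pos_total + neg_total)))  # 48
```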

Put another way, almost half (48%) of outcomes rated positive/potentially positive or negative/potentially negative by the WWC were not statistically significant. As one example of what I’m talking about, consider a program called The Expert Mathematician. It had just one study with only 70 students in 4 classrooms (2 experimental and 2 control). The WWC re-analyzed the data to account for clustering, and the outcomes were nowhere near statistically significant, though they were greater than +0.25. This tiny study, and this study alone, caused The Expert Mathematician to receive the WWC “potentially positive” rating and to be ranked seventh among all middle school math programs. Similarly, Waterford Early Learning received a “potentially positive” rating based on a single tiny study with only 70 kindergarteners in 6 schools. The outcomes ranged from -0.71 to +1.11, and though the mean was more than +0.25, the outcome was far from significant. Yet this study alone put Waterford on the WWC list of proven kindergarten programs.

I’m not taking any position on whether these particular programs are in fact effective. All I am saying is that these very small studies with non-significant outcomes say absolutely nothing of value about that question.

I’m sure that some of you nerdier readers who have followed me this far are saying to yourselves, “well, sure, these substantively important studies may not be statistically significant, but they are probably unbiased estimates of the true effect.”

More bad news. They are not. Not even close.

The problem, also revealed in Amanda Inns’ data, is that studies with large effect sizes but without statistical significance tend to have very small sample sizes (otherwise, they would have been significant). Across WWC reading and math studies that used individual-level assignment, median sample sizes were 48, 74, and 86 for substantively important, significant, and indeterminate (non-significant with ES < +0.25) outcomes, respectively. For cluster studies, they were 10, 17, and 33 clusters, respectively. In other words, “substantively important” outcomes came from studies with markedly smaller samples than other outcomes.

And small-sample studies greatly overstate effect sizes. Among all factors that bias effect sizes, small sample size is the most important (only use of researcher/developer-made measures comes close). So a non-significant positive finding in a small study is not an unbiased point estimate that just needs a larger sample to show its significance. It is probably biased, in a consistent, positive direction. Studies with sample sizes less than 100 have about three times the mean effect sizes of studies with sample sizes over 1000, for example.
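A small simulation makes the mechanism concrete. The sketch below is my own illustration under assumed parameters (a true effect of 0.10 and 48 students per study), not Slavin’s or Inns’ analysis; it shows that averaging only those small studies whose observed effect clears +0.25 inflates the apparent effect several-fold:

```python
# A minimal Monte Carlo sketch (my own illustration with assumed numbers)
# of why small studies selected for large observed effects overstate the
# true effect.
import random
import statistics

random.seed(1)
TRUE_EFFECT = 0.10      # assumed modest true effect, in SD units
N_PER_GROUP = 24        # ~48 students per study, matching the medians above
THRESHOLD = 0.25        # the "substantively important" cut-off

selected = []
for _ in range(20_000):
    treatment = [random.gauss(TRUE_EFFECT, 1) for _ in range(N_PER_GROUP)]
    control = [random.gauss(0, 1) for _ in range(N_PER_GROUP)]
    d = statistics.mean(treatment) - statistics.mean(control)  # effect in SD units
    if d >= THRESHOLD:  # keep only studies that "look substantively important"
        selected.append(d)

print(f"true effect: {TRUE_EFFECT:+.2f}")
print(f"mean observed effect among selected studies: {statistics.mean(selected):+.2f}")
# With these parameters the selected studies report roughly +0.43 on average,
# about four times the true effect: conditioning on a large observed effect
# in a noisy, small sample guarantees an upward-biased estimate.
```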

But “substantively important” ratings can throw a monkey wrench into current policy. The ESSA evidence standards require statistically significant effects for all of its top three levels (strong, moderate, and promising). Yet many educational leaders are using the What Works Clearinghouse as a guide to which programs will meet ESSA evidence standards. They may logically assume that if the WWC says a program is effective, then the federal government stands behind it, regardless of what the ESSA evidence standards actually say. Yet in fact, based on the data analyzed by Amanda Inns for reading and math, 46% of the outcomes rated as positive/potentially positive by WWC (taken to correspond to “strong” or “moderate,” respectively, under ESSA evidence standards) are non-significant, and therefore do not qualify under ESSA.

The WWC needs to remove “substantively important” from its ratings as soon as possible, to avoid a collision with ESSA evidence standards, and to avoid misleading educators any further. Doing so would help make the WWC’s impact on ESSA substantive. And important.

See also: The Mystery of the Chinese Dragon: Why Isn’t the WWC Up to Date? (11/30/2017) by Robert Slavin, Director of the Center for Research and Reform in Education at Johns Hopkins University School of Education

https://www.huffingtonpost.com/entry/the-mystery-of-the-chinese-dragon-why-isnt-the-wwc_us_5a1f11a3e4b039242f8c8151


Further criticism

“According to the USDOE/IES, the intent of the What Works Clearinghouse is to provide education practitioners with ‘a central and trusted source of scientific evidence for what works in education’ (IES, 2010b, para. 2). The evidence presented by the Clearinghouse for Direct Instruction programs in reading (e.g., Reading Mastery, Corrective Reading, Horizons) categorizes them as ‘minimally effective’ at best, based on the few studies that met Clearinghouse standards. In contrast, Stockard (2008) found the Clearinghouse rejected close to 100 studies as being methodologically flawed despite being cited in well-established literature reviews (see also Stockard, 2010). Considering Stockard’s comprehensive analysis in conjunction with the Clearinghouse’s elimination of Project Follow Through, and other research prior to 1985, Stockard’s recommendation that educators use extreme caution in using the evaluations reported by the Clearinghouse is understandable. Coupling the preceding deficiencies with a general lack of response by the Clearinghouse to detailed methodological and policy issues (see Engelmann, 2008) raises significant questions about whether the operational practices of the Clearinghouse are consistent with its stated goal.” (p.32)

Vitale, M. R., & Kaniuka, T. S. (2012). Adapting a multiple-baseline design rationale for evaluating instructional interventions: Implications for the adoption of Direct Instruction reading curricula for evidence-based reform. Journal of Direct Instruction, 12, 25-36.


So, we see that different reviews produce differing accounts of the evidence in support of Direct Instruction. There is something of a Catch-22 about this issue. It is important that there be thorough, large-scale, randomised controlled trials to provide the strongest evidential support. In education, however, the gold standard is less easily achieved than in, say, medicine. Many educational settings do not readily support studies of such scale and methodological purity.

“It has also been queried whether educational research can ever have randomised controlled trials as the norm, however desirable that may appear to be. One issue is that the high cost of such research is not matched by the available funding. For example, the USDOE spends about $80 million annually on educational research, whereas the US Department of Health and Human Services provides about $33 billion for health research. If studies are to be true experiments and also of large scale, the cost is enormous.” (Hempenstall, 2014, p.7)

“Most quantitative researchers would agree that large-scale randomized studies are preferable, but in the real world such studies done well can cost a lot – more than $10 million per study in some cases. That may be chump change in medicine, but in education, we can’t afford many such studies.” (Slavin, 2013)

Hempenstall, K. (2014). What works? Evidence-based practice in education is complex. Australian Journal of Learning Difficulties, 19(2), 113-127.

For DI, this expectation is doubly problematic: not only are such trials exceptionally expensive, but the educational research community has shown little preparedness to evaluate DI programs, preferring to examine more recent models.

So, where to? Obviously, we would benefit from study methodologies that are able to glean valid and reliable findings from less stringent approaches than RCTs. There are currently attempts to do so, including Vitale and Kaniuka’s use of multiple-baseline designs. Slocum et al. (2012) propose an alternative standard:

“Slocum, Spencer, and Detrich (2012) address this issue by suggesting the term best available evidence to apply to those practices yet to have sufficient high-quality studies to deserve a recommendation from a systematic review. They argue that relying solely on WWC style reviews (which they call empirically supported treatment reviews) will be counter-productive: ‘However, if this is the only source of evidence that is considered to be legitimate, most educational decisions will be made without the benefit of evidence’ (p. 135). Thus, they see a continued role for narrative reviews, meta-analyses and practice guides to offer best available evidence in those situations in which systematic reviews are unhelpful or non-existent.” (Hempenstall, 2014, p.5)

Other options:

“A single study involving a small number of schools or classes may not be conclusive in itself, but many such studies, preferably done by many researchers in a variety of locations, can add some confidence that a programme’s effects are valid (Slavin, 2003). If one obtains similar positive benefits from an intervention across different settings and personnel, there is added reason to prioritise the intervention for a large gold-standard study. There is a huge body of data out there that is no longer considered fit for human consumption. It seems a waste that there are not currently analysis methods capable of making use of these studies. There is merit in investigating further the recommendations of Spencer et al. (2012) and Slocum et al. (2012) to add the best available evidence classification to apply to those practices with apparently promising effects demonstrated in narrative and meta-analytic reviews, but which are yet to have sufficient high-quality studies to deserve a recommendation from a systematic review. O’Keeffe et al. (2012) recognise that the current system needs improvement, but have a sense of optimism: ‘Empirically supported treatment is still a relatively new innovation in education and the methods for conducting effective reviews to identify ESTs are in their infancy. Based on our experience in comparing these review systems we believe that review methods can and should continue to develop. There is still a great need to build review methods that can take advantage of a larger range of studies yet can provide recommendations in which practitioners can place a high degree of confidence.’ (p. 362)” (Hempenstall, 2014, p.11)

"As a partial response to this dilemma, attention is now being paid to single case research as a possible valid and workable adjunct to RCTs in attempting to document what works in education settings. “The addition of statistical models for analysis of single-case research, especially measurement of effect size, offers significant potential for increasing the use of single-case research in documentation of empirically-supported treatments (Parker et al., 2007; Van den Noortagate & Onghena, 2003).” (Horner, Swaminathan, Sugai, & Smolkowski, 2012, p. 271). In recognition of this potential, WWC released a document on single-case designs, providing initial WWC standards for assessing any such single case studies (Kratochwill et al., 2010). In a generally supportive response Wolery (2013) offered a number of suggestions for improvement on this initial attempt.” (Hempenstall, 2014, p.11)

So, watch this space!


 
