
Dr Kerry Hempenstall, Senior Industry Fellow, School of Education, RMIT University, Melbourne, Australia.

Each of my articles can be downloaded as a PDF file at https://tinyurl.com/y6vat4ut


Response to Intervention (RTI) and its subsequent close relative, the Multitier System of Supports (MTSS), are popular, if controversial, initiatives developed primarily in the USA. As yet, they are less well known in general education in Australia. MTSS broadened the focus of RTI beyond academics to include other elements, such as social and emotional supports for struggling students.

In Australia, RTI and MTSS are gradually becoming better known, perhaps due to the increased attention to accountability in recent times. Unsatisfactory results on the national NAPLAN and international PISA assessments are placing pressure on schools to lift student performance. RTI may be adopted by school systems as a means of generating data that demonstrate levels of effectiveness, and as a means of providing direction for schools’ responses to that data. The expectation across the education community that general education teachers provide evidence-based instruction and regular progress monitoring is increasingly evident in policy documents, though classroom practice has yet to reflect this change to any significant degree.

Response to Intervention is a term that first came to prominence in special education circles in the USA around 2004. Its initial focus was on finding a better means of identifying learning disabilities (LD), an issue that had been problematic for many years. The core concern was that the various definitions of learning disabilities were exclusionary: they implied that a person must have a specific internal learning problem once all other likely causes of low achievement had been ruled out.

Before examining Response to Intervention, it is useful to consider the special education context in which it arose.

The term learning disabilities is attributed to Kirk (1963), who wrote:

“I have used the term "learning disabilities" to describe a group of children who have disorders in the development of language, speech, reading, and associated communication skills needed for social interaction. In this group, I do not include children who have sensory handicaps such as blindness, because we have methods of managing and training the deaf and blind. I also excluded from this group children who have generalized mental retardation” (p. 2–3).

Subsequently, other variables to be excluded included serious emotional disturbance, low English language proficiency, low intelligence, deprivation during the pre-school years, and inadequate teaching. Exclusionary definitions are usually unsatisfactory for a number of reasons, and there was an obvious need for an inclusionary definition that could specify the presence (rather than simply the presumption) of an inner learning disability, beyond a student simply being a low achiever. This need was seemingly met in the 1970s by the introduction of the discrepancy model, which highlighted unexpected underachievement as the characteristic separating the run-of-the-mill low achiever from those with these mooted learning disabilities. The diagnosis of a learning disability required a discrepancy between an individual’s measured IQ and their achievement in a given academic domain. Surely, if a child is of at least average intelligence, their achievements should be commensurate with that intelligence? The presumption was that the low achievement was due to a modular central nervous system dysfunction affecting only one or a few academic areas. This contrasts with the low achievement expected right across the curriculum for struggling students with below-average intelligence. The discrepancy notion and its related assessment tools had intuitive appeal and appeared straightforward to implement, and the approach remained the main method of diagnosing LD until recently. Because a diagnosis of learning disability provides access to additional funding in the US and in some states of Australia, quite an industry developed around providing assessments for diagnosis.

“SLD is the most prevalent disability category in IDEA. In the 2017–2018 academic year, 43% of all children and youth who received special education and related services in the public school system—or 2.3 million students—had SLD as their primary disability (National Center for Education Statistics 2018).” (p.86)

Kranzler, J. H., Yaraghchi, M., Matthews, K., & Otero-Valles, L. (2020). Does the response-to-intervention model fundamentally alter the traditional conceptualization of specific learning disability? Contemporary School Psychology, 24, 80-88.

There have, however, been many criticisms of the discrepancy definition of LD (Carnine, 2003; Siegel, 1989, 1992; Stanovich, 1991). It has been called a wait-to-fail model, as a child will have experienced several years of struggle at school before a discrepancy can be detected with this approach. Detection does not usually occur before Year 3 or 4, by which time a range of secondary difficulties has been added to the original problem (Fletcher et al., 1998). Given the appreciation that the best opportunity to prevent later problems is to intervene early, children’s progress was jeopardized by the very process designed to assist them.

Further, when comparisons were made between IQ-discrepant and nondiscrepant struggling readers, no differences were found in their likely educational outcomes, in the characteristic underdeveloped skills related to the reading process, or in the outcomes of interventions (Stuebing et al., 2002).

“ … the finding that nondiscrepancy-defined (i.e., low IQ) poor readers and discrepancy-defined poor readers (i.e., those with IQs in the average to above average range) do not acquire reading skills in a fundamentally different manner suggest that IQ is largely irrelevant to defining dyslexia (Aaron, 1997), other than in applying exclusionary criteria concerning intellectual impairment.”

Tunmer, W., & Greaney, K. (2010). Defining dyslexia. Journal of Learning Disabilities, 43(3), 229–243.

“In summary, cognitive differences between children with reading disabilities who do or do not also have an aptitude achievement discrepancy, all seem to reside outside of the word-recognition module. These differences are consistently revealed on memory tasks and in academic domains other than reading, and they are present but somewhat attenuated in language processing tasks. With regard to word recognition processes themselves, children with and without a discrepancy show performance patterns that are remarkably similar. Both show pseudoword reading performance below that expected on the basis of their WRAT-R Reading levels. Both show performance on phonological coding tasks not involving production of a spelling or pronunciation (phonological choice task and pseudoword recognition) that is commensurate with their reading levels but inferior to the reading level of chronological age controls.” (p. 47-48)

Stanovich, K. E., & Siegel, L. S. (1994). Phenotypic performance profile of children with reading disabilities: A regression-based test of the phonological-core variable-difference model. Journal of Educational Psychology, 86(1), 24-53.

Other problems with the concept included the wide variety of definitions of LD, which made assessment and identification impossible to standardise. Many professions devised their own idiosyncratic methods: for example, some optometrists labelled as LD any child with a visual tracking problem, whereas a speech pathologist might emphasise deficient language processes.

“The long-entertained theory that LD could be measured psychometrically via an aptitude–achievement discrepancy has been soundly disputed as inadequate theory (Büttner & Hasselhorn, 2011) as well as empirically discredited (Aaron, 1997; Fletcher et al., 2002; cf. Johnson et al., 2010; Swanson, 2008)” (p.27).

Scanlon, D. (2013). Specific learning disability and its newest definition: Which is comprehensive? And which is insufficient? Journal of Learning Disabilities, 46(1), 26–33.

Differing definitions also make research findings non-comparable across studies. It is often proposed that LD is a heterogeneous category, implicating one or more of: receptive language, expressive language, reading skills, reading comprehension, written expression, maths calculation, and maths reasoning. Thus it is too broad a category to be useful educationally (Gresham, 2001). Lyon described the state of the LD field scathingly: "Learning disabilities have become a sociological sponge to wipe up the spills of general education … It's where children who weren't taught well go" (Lyon, 1999).

Consider, as an example, dyslexia. In the discrepancy approach, dyslexia is assessed by the presence of a gap between a child’s intelligence and his reading attainment. However, it is now increasingly recognized that intelligence is far from perfectly correlated with reading. Stanovich (1992) calculated a median correlation of 0.34 across 14 studies involving 26 measures, whose correlations ranged from 0.10 to 0.66. The range of correlations relates to the choice of intellectual assessment instruments and reading tests: the lower figures are more likely when the reading measure has a strong word-decoding emphasis, and the higher figures when comprehension is the major focus. Given this only moderate correlation, any intelligence-reading discrepancy may be more reasonably considered a normal statistical variation than evidence of a specific neurological deficit. In other words, there is no rule that intelligent people must find reading easy. The reason, of course, is that phonemic awareness is more strongly correlated with reading than is intelligence, and phonemic awareness does not necessarily parallel intelligence (Morris et al., 2012; Tunmer & Greaney, 2010).

Further, it is noted that the development of literacy is closely intertwined with the development of intelligence (Stanovich, 1993). That is, the continued normal development of intelligence may rely on an adequate volume of reading. Vocabulary development and higher-order comprehension skills are best advanced through reading (Nagy & Anderson, 1984) once the beginning stages are passed. Thus, as children with reading difficulties grow older, their lack of reading could be expected to reduce the initial gap between measured intelligence and attainment. Over time dyslexic students' measured intelligence may more closely resemble that of their garden-variety colleagues, as problems additional to the phonological core develop (Stanovich, 1988). Sadly, the intelligent under-achiever may appear to become less intelligent because of our educational system's failure to adequately address his needs at the critical early stage. In a bizarre twist, the discrepancy is likely to diminish over time such that the child may lose his LD status, and thereby, any funding allocated for learning disabilities.

The other major problem with discrepancy-defined dyslexia is that a different group (between 2% and 35% of the population) is identified depending on which intelligence tests and which subtest analyses are employed. For example, there was debate over which specific IQ test, and whether verbal or performance (or both) scales, should be used; the use of one over the other certainly defines a different group as dyslexic. There is also disagreement over how large a discrepancy (e.g., 1, 1.66, or 2 SD) is needed for a diagnosis of dyslexia, over the minimum general intelligence level needed for a dyslexia classification, and over the type of reading test chosen to define the reading deficit. Each of these decisions leads to a different population being defined. Given the slippery nature of such assessment choices, it is unsurprising that the model is falling from favour, although it still has currency in some special education circles (Hale et al., 2010). For parents seeking funding assistance for their child in the USA, the advice has been to see as many professionals as you can afford: someone, somewhere will be prepared to classify your child as LD, with the attendant additional support that classification entails.


The decline of the discrepancy model

“Originally, the U.S. Individuals with Disabilities Education Act required the use of the discrepancy model to identify those students who needed assistance for a learning disability. In the 1990s, studies showed that children who had difficulty learning to read had difficulty with phonological awareness — matching printed letters of the alphabet to the speech sounds that those letters represented. Based on these findings, the reauthorization of the Act dropped the requirement that school systems use the discrepancy model. Many school systems, however, retained the discrepancy model as a means to classify students needing special educational services in reading.”

National Institute of Child Health and Human Development (2011). NIH-funded study finds dyslexia not tied to IQ. Research on brain activity fails to support widely used approach to identify dyslexic students. https://www.nih.gov/news-events/news-releases/nih-funded-study-finds-dyslexia-not-tied-iq

The discrepancy definition has declined in use over the past 10 years, as research findings found their way into educational policies at national and state levels. However, the relative ease of discrepancy assessment for those charged with identifying learning disabilities has made the model’s demise difficult to complete. For example, some psychologists continue to ignore the evidence when requested to assess students’ LD status.

“Taken as a whole, results of this study indicate that school psychologists differ widely in their approach to intelligence test interpretation, particularly for the identification of SLD, and that these differences are only modestly related to personal characteristics, level and accreditation/approval, status of professional training, and state regulations for SLD eligibility determination. Results of our study, however, also revealed the presence of a large research-to-practice gap, particularly as it concerns the use of ipsative analysis and the PSW methods for the interpretation of intelligence tests.” (p.9-10)

Kranzler, J.H., Maki, K.E., Benson, N.F., Floyd, R.G., & Fefer, S.A. (2020). How do school psychologists interpret intelligence tests for the identification of specific learning disabilities? Contemporary School Psychology, 24, 445-456.


Where to now?

The term LD itself is now often considered of little benefit because it does not lead to interventions specific to the cause of the student’s problem. As for younger students yet to confront the requisite school skills, social justice demands that best educational practice be supplied to all at-risk (or currently failing) students, intelligent or otherwise.

For failing students, strong evidence supports systematic, intensive teaching (Gersten et al., 2008; Swanson & Hoskyn, 1998; Torgesen, 2003): avoiding ambiguity in communication, employing carefully designed and trialled sequences of instruction, supplying ample massed and spaced opportunities for practice, ensuring careful monitoring and feedback until mastery is achieved, and providing further extended independent practice to obtain fluency, incorporation, and generalisation. Such programs tend to be effective for all classes of learners, not simply for those with some presumed idiosyncratic learning style (Goyen, 1992).

Years of research on learning disability emphasised within-person factors to explain the unexpected difficulty that academic skill development poses for students with LD. The impact of the quality of initial and subsequent instruction in ameliorating or exacerbating the outcomes of such a disability received rather less attention until more recently, as the criticisms of the discrepancy model were increasingly acknowledged as valid.

“This problem persists to this day. In short, there is little scientific or professional support for the continued use of an IQ-achievement discrepancy in identifying children with SLD. ... To this we would add that IQ does not predict how well students with SLD learn to read or what their educational prognosis might eventually be (Vellutino et al., 2000). The fact is that we have better and more direct measures of reading achievement and individual differences in the ability to learn to read that are more closely related to the key phonological core constructs which have been shown to underlie reading ability. Such measures are also more time efficient than most measures of intelligence and they are more highly predictive of response to reading interventions than are measures of intelligence (Gresham, 2002; Vellutino et al., 2001; Wagner, Torgesen, & Rashotte, 1999)” (Gresham & Vellutino, 2010, p. 204 - 205).

Additionally, the 2020 What’s Hot in Literacy survey noted that 51% of teachers considered they had inadequate strategies for academic intervention, and 48% noted inadequate or incorrect diagnosis of reading disabilities. Further, 71% believed that variability in teacher knowledge and effectiveness is one of the greatest barriers to equity in literacy.


Response to Intervention

Increasingly, a different approach known as Response to Intervention (RTI) is supplanting the discrepancy approach, although there are also criticisms of this new emphasis (Baskette, Ulmer, & Bender, 2006). For example, some have argued that a hybrid of the RTI and discrepancy approaches offers the optimal solution for assessment and intervention in LD (Hale et al., 2010).

While this debate over special education continues, the RTI model has found another, much broader, niche in education – as a framework for providing early identification of potential problems, and better instruction to students in general education, thereby reducing the demand for expensive special education services.

“Response to Intervention integrates assessment and intervention within a multi-level prevention system to maximize student achievement and to reduce behavioural problems. With RTI, schools use data to identify students at risk for poor learning outcomes, monitor student progress, provide evidence-based interventions and adjust the intensity and nature of those interventions depending on a student’s responsiveness, and identify students with learning disabilities or other disabilities” (National Center on Response to Intervention, 2010).

RTI derives from the application of the same scientific method used to study natural phenomena. The approach proceeds from a description of the problem, followed by the development of a hypothesis as to cause. A procedure is selected based upon the hypothesis and the intervention is commenced; data are then collected regularly, leading to a conclusion about the intervention’s effectiveness. This is a cyclical process that continues until the objective is attained for a given student.
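Purely as an illustration, that cycle can be sketched as a short loop. Everything in the sketch below (the intervention labels, the weekly scores, and the goal of 25) is an invented assumption; it simply shows the describe, hypothesise, intervene, monitor, and evaluate sequence repeating until the objective is met.

```python
# A hypothetical sketch only: the RTI problem-solving cycle expressed as a
# simple loop. The intervention names, weekly scores, and goal value are
# invented; the point is the hypothesise-intervene-monitor-evaluate
# structure, not any prescribed procedure.

def recent_mean(scores, window=3):
    tail = scores[-window:]
    return sum(tail) / len(tail)

def problem_solving_cycle(goal, interventions, collect_weekly_data):
    """Try hypothesis-driven interventions in turn until the goal is attained."""
    for intervention in interventions:
        scores = collect_weekly_data(intervention)     # regular data collection
        attained = recent_mean(scores) >= goal         # judge effectiveness from the data
        print(f"{intervention}: recent mean {recent_mean(scores):.1f} "
              f"({'goal met' if attained else 'goal not met'})")
        if attained:
            return intervention                        # objective attained; stop cycling
    return None                                        # unresolved; rethink the hypothesis

# Illustrative use with fabricated progress-monitoring data.
fake_data = {
    "extra guided practice": [12, 13, 14, 15, 16, 17],
    "small-group decoding":  [12, 15, 19, 23, 27, 31],
}
chosen = problem_solving_cycle(goal=25,
                               interventions=list(fake_data),
                               collect_weekly_data=lambda name: fake_data[name])
print("Intervention judged effective:", chosen)
```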

There are a number of assumptions underlying the RTI framework. All students can learn, and the learning is strongly influenced by the quality of instruction. In fact, it is argued that there is a predictable relationship between instructional quality and learning outcomes. It is expected that both general classroom programs and additional specific interventions will be evidence-based to provide greater instructional quality. Assuming the curriculum content is evidence-based, another manipulable causal variable will be intensity of the intervention. This includes varying academic engaged time, lesson frequency, program duration, group size, engagement, lesson pacing, mastery criteria, number of response opportunities, correction procedures, goal specificity, and instructor skill.

RTI advocates argue that the model is useful for both beginning and remedial instruction. The sequence for a school or class involves all beginning students being screened for the pre-skills that evidence highlights as necessary for success in the domain in question. Appropriate universal screening tools are available at the National Center for RTI website (rti4success.org) and on numerous other sites. The derived data allow judgements of students' current performance by comparing it to a criterion-referenced benchmark. If scores are at or beyond the benchmark, students are judged to be satisfactorily managed within the general classroom program. If a student’s scores fall below the benchmark, general classroom instruction is considered insufficient for the child’s needs and requires supplementation.
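A minimal sketch of that screening decision is given below, assuming invented scores and an invented benchmark of 8 correct per minute; in practice the criterion-referenced benchmark would come from the chosen screening instrument.

```python
# Hypothetical sketch: compare universal screening scores to a
# criterion-referenced benchmark and flag students whose scores fall
# below it as needing supplementary support.

def screen_class(scores, benchmark):
    """Return students at or above the benchmark, and students below it."""
    on_track = {name: s for name, s in scores.items() if s >= benchmark}
    needs_support = {name: s for name, s in scores.items() if s < benchmark}
    return on_track, needs_support

# Invented data: letter-sound fluency scores (correct per minute).
scores = {"Ava": 14, "Ben": 3, "Chloe": 9, "Dev": 1}
on_track, needs_support = screen_class(scores, benchmark=8)
print("Managed within the general classroom program:", sorted(on_track))
print("Flagged for supplementary intervention:", sorted(needs_support))
```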

Most of the studies involving RTI have focused upon reading, but the breadth of application is increasing. All students are provided with research-validated instruction from the beginning, and are regularly re-assessed, at least three times per year. Additionally, student behaviour is assessed because of the close links (and possible reciprocal causation) between early academic success and student behaviour (Algozzine, McCart, & Goodman, 2011). This widening of RTI emphasis has led to the introduction of another descriptor – Multi-tier System of Supports (MTSS).

“More recently, multitier system of supports (MTSS) has become influential in educational policy. It provides an overarching framework that usually includes the three levels of RTI for struggling students. Extending beyond academics, its reach includes social and emotional supports, such as behavior intervention plans, and is intended to be applicable to all students.”

Greenwood, C. R., Carta, J. J., Schnitz, A. G., Irvin, D. W., Jia, F., & Atwater, J. (2019). Filling an information gap in preschool MTSS and RTI decision making. Exceptional Children, 85(3), 271–290. https://doi.org/10.1177/0014402918812473

Any student making slow progress is provided one or more research-validated interventions additional to the regular class program. For this group, academic progress is monitored more frequently to detect change using Curriculum Based Measures (CBM).

CBM is a means of assessing students’ basic skills. The intent is to provide a cheap, simple system that can be used regularly to measure students' initial skills and also their growth in performance, which serves as a proxy for the effectiveness of the instructional program. The tests have norms, so they can be used to judge which students are at risk. As an example, reading may be assessed using reading accuracy and speed on grade-level text over one minute, several times a year for average students and weekly or fortnightly for struggling students. Graphing of progress aids decision making. The reliability and validity of this type of assessment are well established (Reschly, Busch, Betts, Deno, & Long, 2009; Seungsoo, Dong-Il, Lee Branum-Martin, Wayman, & Espin, 2012). There are numerous examples available freely at Intervention Central (http://www.interventioncentral.org/cbm_warehouse). One caveat about their use is raised by Ball and Christ (2012), who warn against making decisions about student attainment growth based on too few data points. For this reason, decisions concerning any change of intervention are delayed until at least six data points are collected (Stecker & Lembke, 2007).
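That decision rule (graph the weekly probes and withhold judgement until at least six data points exist) might be sketched roughly as follows. The least-squares trend line and the example scores are illustrative assumptions rather than a prescribed CBM formula.

```python
# Rough sketch: fit a simple least-squares trend line to weekly CBM scores
# (e.g., words read correctly per minute) and only report a growth estimate
# once at least six data points have been collected (Stecker & Lembke, 2007).

def cbm_growth(scores, min_points=6):
    """Return estimated growth per week, or None if there are too few probes."""
    n = len(scores)
    if n < min_points:
        return None                     # delay any change-of-intervention decision
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(scores) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, scores))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope

weekly_wcpm = [18, 20, 19, 23, 25, 27, 30]   # invented weekly probe scores
growth = cbm_growth(weekly_wcpm)
if growth is None:
    print("Too few data points - keep monitoring before changing the intervention.")
else:
    print(f"Estimated growth: {growth:.1f} words correct per minute per week")
```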

CBM probes have been particularly used to measure phonemic awareness, oral reading fluency, maths computation, writing, and spelling skills. In adding a behavioural component to RTI, two major components have been emphasized. One involves daily behaviour report cards: teacher rating forms for evaluating a student’s behaviour. The other, which the Solomon, Klein, Hintze, Cressey, and Peller (2012) meta-analysis highlighted, is direct observation of target students’ behaviour as a means of data collection. Often, an external observer visits the classroom to observe and rate a student’s rates of on-task and academically engaged behaviour.


Tiers not tears?

Figure 1. Response to Intervention tiers (Sugai, Simonsen, Coyne, & Faggella-Luby, 2007)

RTI/MTSS is often characterized as a three-tier approach, applicable to both academics and behaviour, that aims at the prevention and amelioration of educational problems.

Tier I: Evidence-supported instruction is provided to all students within the classroom, and may be sufficient for 80-90% of the class. This means that all teachers of beginning reading in the school employ methods chosen in conjunction with the school’s RTI team, the choice being based on an investigation of what has been shown to be effective. For example, a school elects to introduce a specific whole-class program, such as Jolly Phonics or Reading Mastery. This doesn’t necessarily mean that all teaching occurs in a whole-class format; there may also be flexible grouping and differentiation based upon collected data. A behavioural example would be a teacher commencing a classwide behaviour strategy, such as the Good Behaviour Game.

Tier II: An individualized plan is designed for students who are deemed to need further additional support (perhaps 15% of the class). An example would be to offer supplemental peer tutoring in reading to increase some students’ low reading fluency. The interventions would normally be in small groups and occur in addition to the regular program. In the behaviour domain, an example might be a teacher systematically acknowledging displays of appropriate behaviour, including contingent and specific praise, to better engage a group of frequently off-task students.

Tier III: Intensive intervention is supplied to students whose needs are greater still (maybe 5% of the class). Typically, these interventions are made available when careful monitoring of a student’s Tier II intervention indicates less progress than expected. This may include more in-depth assessment leading to more intensive small-group or individual instruction. In the behaviour domain, an example might be the development of a home-school contract, following a functional behaviour assessment, for an individual with significantly challenging behaviour. A simple sketch of this escalation logic follows.
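In the sketch, the screening benchmark and the expected weekly growth figure are invented assumptions; real escalation decisions would rest on the school’s chosen screening benchmarks and CBM norms.

```python
# Hypothetical sketch of the tier-escalation logic described above.
# The benchmark and "expected weekly growth" figures are invented; real
# decisions would rest on the school's chosen screening and CBM benchmarks.

def recommend_tier(screening_score, benchmark, weekly_growth=None, expected_growth=1.0):
    """Suggest a support tier from screening and (optional) progress-monitoring data."""
    if screening_score >= benchmark:
        return "Tier I: core classroom instruction only"
    if weekly_growth is None or weekly_growth >= expected_growth:
        return "Tier II: supplementary small-group intervention, monitored frequently"
    return "Tier III: intensive individual or very small-group intervention"

print(recommend_tier(screening_score=12, benchmark=8))
print(recommend_tier(screening_score=4, benchmark=8))                      # no Tier II data yet
print(recommend_tier(screening_score=4, benchmark=8, weekly_growth=0.3))   # poor response at Tier II
```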

“The components of a multi-level prevention system include universal screening, progress monitoring, and data-based decision making (American Institutes for Research, 2013).”

American Institutes for Research. (2013). Using a Response to Intervention framework to improve student learning: A pocket guide for state and district leaders. https://www.air.org/resource/using-response-intervention-rti-frameworkimprove-student-learning

If, despite several well-designed and well-implemented interventions, a student continues to fail to respond, this may be viewed as evidence of an underlying learning disability, should such a diagnosis be sought. Hence a major difference from the discrepancy approach to diagnosing LD is that the RTI model exhausts the best teaching practices before an LD classification is considered. Only a student who is still struggling at Tier III is a candidate for diagnosis.

There has been criticism of RTI as the replacement diagnostic tool for identifying learning disabilities. For example, Kranzler et al. (2020) and Reynolds (2009) found that RTI over-identifies low-ability students as LD.

“Thus, the prevalence of a weakness in general cognitive ability in students with SLD who have been identified in the RTI model is almost twice as high as that in the general population of same-age peers.” (Kranzler et al. 2020, p.84)

“Because RtI is a relatively new method to identify SLDs, there is far less research examining this method compared to AAD. Arguably, there are several advantages to RtI, including reliance on low-inference decisions because of the direct link between assessment data and treatment (Christ & Arañas, 2015; Salvia, Ysseldyke, & Bolt, 2012) and increased reliability due to the use of multiple datapoints in decision making (Fletcher, 2012). However, some argue that RtI does not truly identify SLDs due to the change in conceptualization of unexpected underachievement, because there is no comparison to cognitive ability (Kavale & Spaulding, 2008). Moreover, differences in nonresponse thresholds can result in different SLD identification decisions (Barth et al., 2008) that may not be stable over time (Brown Waesche, Schatschneider, Maner, Ahmed, & Wagner, 2011), and there is no convincing evidence to support commonly implemented decision rules regarding student progress (e.g., three data points above/ below expected growth line; Ardoin, Williams, Christ, Klubnik, & Wellborn, 2010; Burns, Scholin, Kosciolek, & Livingston, 2010). There are also further challenges associated with monitoring student progress and implementing interventions within RtI. Administration of two curriculum-based measures (CBMs) per week for 10 weeks may also be needed to accurately and reliably estimate slope and to guide even low-stakes decisions (Ardoin & Christ, 2009), which may be unrealistic to implement in authentic school contexts.” (p.344)

Maki, K.E., Barrett, C.A., Hajovsky, D.B., & Burns, M.K. (2020). An examination of the relationships between specific learning disabilities identification and growth rate, achievement, cognitive ability, and student demographics. School Psychology, 5, 343-352.

Be that as it may, the major benefit of RTI is likely to lie not so much in its use as a tool for diagnosis, but rather in its capacity to quickly identify educational hurdles regardless of cause, and to specify, monitor, and adapt interventions that address them.

“The Response to Intervention (RtI) model is sweeping the country, changing the way children’s educational needs are recognized and met. RtI was introduced through special education legislation as part of the Individuals with Disabilities Education Improvement Act (IDEA, 2004) and offered an alternative approach for identifying students with learning disabilities (Bender & Shores, 2007). Its impact today, however, has moved well beyond this initial goal (Council for Exceptional Children, 2007). RtI is designed to bring together information about each child’s strengths and needs with evidence-based instructional approaches that support the child’s success (Kirk, Gallagher, Coleman, & Anastasiow, 2009). Although RtI is still an emerging practice, it hinges on a collaborative approach to recognizing and responding to the needs of each child. This collaborative approach requires educators to think about the child first and match the supports and services to his or her strengths and needs. The allocation of resources follows the supports and services, promoting synergy rather than increasing fragmentation, as the needs of the child increase. In other words, within the RtI model, when the child’s needs are the most intense, educational resources can be combined to provide greater support. This use of resources differs significantly from traditional approaches where, as the needs of the child intensify, the supports and services become more separate and rigidly codified with clear boundaries delineating the allocation of resources.” (Hughes et al., 2011, p. 1)

In summary, there are three core concepts underpinning RTI. Scientific, evidence-based interventions are to be used in the general education classroom; brief data-based measurement of student reactions to these interventions is regularly performed; and this RTI data is used to determine and evaluate future instruction (Hazelkorn, Bucholz, Goodman, Duffy, & Brady, 2011).


The response to RTI

The experience in the USA was that initially general education journals published very little on RTI, whilst special education journals published a great deal (Hazelkorn, Bucholz, Goodman, Duffy, & Brady, 2011). In more recent times, as RTI has become more accepted, generalist journals and various websites are now publishing a plethora of articles on RTI. Increasingly, so too are education policy makers.

In 2012, the International Reading Association included Response to Intervention in its annual list of What’s Hot and What’s Not in Education, rating it as Very Hot (Cassidy, Ortlieb, & Shettel, 2012). Hot means that the topic has received a great deal of attention by education researchers and practitioners during the preceding year. The Reading Today surveys of 25 prominent literacy leaders have considered RTI to be Hot since 2007. A simple Google search revealed 117,000,000 hits in 2012 and 442,000,000 in 2021.

“Many professional organizations and advocacy groups like the National Association of State Directors of Special Education, National Association of School Psychologists, and the National Center for Learning Disabilities (NCLD) support the Response-to-Intervention Model (RTI). They embrace RTI as a science-based practice and have made RTI knowledge and practice part of their professional expectations and advocacy (Charles and Judith, 2011).” (p. 2033)

Eissa, M.A. (2020). Effects of RTI on letter naming and spelling among kindergarteners at risk for reading failure. Elementary Education Online, 19(4), 2032-2041. doi:10.17051/ilkonline.2020.763216

“PBIS [Positive Behavioral Interventions and Supports] has been consistently correlated with reductions in student exclusion including suspensions, expulsions, poor attendance, and high school dropout rates. However, school-wide strategies that do not specifically involve effective instruction in academic areas are unlikely to result in increased academic achievement. To address this reality, multi-tiered systems of support (MTSS) involving tiered intervention for both academic and behavior have become commonplace. The Academic and Behavior Response to Intervention School Assessment (ASA) was developed to assess the fidelity with which schools are implementing MTSS for Reading, Mathematics, and behavior. Using the ASA to assess MTSS fidelity across 29 schools and four years, analyses were conducted to determine the predictive validity of sub-group domain scores. The question was whether ASA scores were predictive of student outcomes in terms of suspension and of state academic achievement scores in the areas of reading, math, and language. Results show that schools with higher fidelity in the behavior domain had significantly fewer suspension events than matched comparison schools. In comparison, higher fidelity in the reading domain was associated with more students at or above proficient on both the Language Mechanics measure and the Mathematics measure, but not in Reading; and higher fidelity in the math domain was also associated with more students at proficient or above on the Language Mechanics, but not in math or reading. Results are discussed in terms of implications for the further development of fidelity assessments and future research.” (p.308)

Scott, T.M., Gage, N.A., Hirn, R.G., Lingo, A.S., & Burt, J. (2019). An examination of the association between MTSS implementation fidelity measures and student outcomes. Preventing School Failure: Alternative Education for Children and Youth, 63(4), 308-316.

In 2012, the journal Psychology in the Schools published a special issue, “Addressing response to intervention implementation”. The articles pointed to the increasing breadth of application and popularity of the approach, and also highlighted the many issues still to be resolved, including “conceptual, procedural, and logistical questions related to RtI implementation” (Jones & Ball, 2012, p.207). There have been many implementation issues requiring attention from schools and districts, as the model is not prescriptive about many of the details of RTI. It was also noted in the USA that a great deal of pre-service and in-service training was required to inform the various education practitioners, including teachers, administrators, reading coaches, school psychologists, speech therapists, special education teachers, and paraprofessionals.

The impact of RTI in education has shifted from a focus on special education to much broader applications. It is being employed in general education in areas such as preschool programs (Fox, Carta, Strain, Dunlap, & Hemmeter, 2010; Koutsoftas, Harmon, & Gray, 2009; VanDerHeyden, Snyder, Broussard, & Ramsdell, 2007), secondary grades (Pyle & Vaughn, 2012; Vaughn et al., 2008; Vaughn, Cerino, et al., 2010; Vaughn, Wanzek, et al, 2010), English Language Learners (Haager, 2007; Hernández Finch, 2012; McMaster, Kung, Han, & Cao, 2008; Orosco & Klingner, 2010; Xu & Drame, 2008), for challenging behaviour (Lane, Oakes, & Menzies, 2010; Mitchell, Stormont, & Gage, 2011; Solomon, Klein, Hintze, Cressey, & Peller, 2012), and for mathematics (Koellner, Colsman, & Risley, 2011; Lembke, Hampton, & Beyers, 2012).

The growth in acceptance has been considerable. In a 2008 USA survey, all 44 responding states reported that they either had or planned to introduce an RTI model (Hoover, Baca, Wexler-Love, & Saenz, 2008). In 2012, O’Connor and Freeman reported that “implementation efforts have been occurring at some level in most school districts across the country” (p. 297). The 2011 RTI Adoption Survey (Spectrum K12, 2012) revealed that 94 percent of respondents reported being at some stage of RTI implementation.

There has been a similar interest in the UK, although the term waves of intervention is more commonly used there. In the Primary National Strategy (2006), the waves were described as:

“Wave 1: High-quality inclusive teaching supported by effective whole-school policies

Wave 2: Wave 1 plus intervention designed to increase rates of progress and put children back on course to meet or exceed national expectations

Wave 3: Wave 1 plus increasingly personalised intervention to maximise progress and minimise gaps in achievement.” (p.7).

“Changes in RTI/MTSS implementation over the past decade reveal growing commitment among SEAs to support LEAs in implementing tiered systems of support. Although the terminology varied across states, our data were consistent with Bailey’s (2018) findings that in 2017 all states supported at least one initiative or provided guidance related to implementation of tiered systems of support in some capacity. This increased attention over the last decade is likely due to a number of factors, including the inclusion of multitier system of supports in federal and state legislation subsequent to IDEA 2004 and increased availability of federal funding and technical assistance through national centers.”

Berkeley, S., Scanlon, D., Bailey, T.R., Sutton, J.C., & Sacco, D.M. (2020). A snapshot of RTI implementation a decade later: New picture, same story. Journal of Learning Disabilities, 53(5), 332-342. doi:10.1177/0022219420915867

In 2020, the Journal of Learning Disabilities published a special issue in two parts: Special Series: Identifying and Serving Students with Learning Disabilities, including Dyslexia, in the Context of Multi-tiered Supports and Response to Intervention.


Criticism of RTI effectiveness

An evaluation by Balu et al. in 2015 cast doubt on the effectiveness of RTI on reading. However, there were design features, such as a lack of both random assignment and a control group, that limited the study’s validity.

“Because numerous well‐designed studies have documented the positive effects of high‐quality Tier 2 and 3 reading interventions (Gersten, Newman‐Gonchar, Haymond, & Dimino, 2017), critics have argued that the results of Balu et al.’s (2015) national evaluation speak more to widespread problems with RtI implementation than to the efficacy of the tiered interventions themselves (Arden, et al., 2017; Fuchs & Fuchs, 2017; Gersten et al., 2017). As reiterated by Arden et al. (2017) and others (e.g., Fixsen, Naoom, Blasé, Friedman, & Wallace, 2005), “how implementation occurs matters just as much as what is being implemented” (p. 271). Ultimately, high‐quality implementation can only occur when school systems are prepared to engage in comprehensive systems change. This process involves gradually fostering school readiness and building capacity for full implementation. … Gersten, Jayanthi, and Dimino (2017) suggested that more field evaluations of RtI are needed to address questions left unanswered by the IES national evaluation. In particular, these authors contended that smaller field evaluations should include both treatment and control groups, or what they referred to as “intervention and “business‐as‐usual” conditions (p. 252). Designs that incorporate both types of conditions would allow researchers to better understand and trace the impact of RtI interventions on student achievement outcomes.” (p.244)

Grapin, S.L., Waldron, N., Joyce-Beaulieu, D. (2019). Longitudinal effects of RtI implementation on reading achievement outcomes. Psychology in the Schools, 56(2), 242– 254. https://doi.org/10.1002/pits.22222


What are the advantages of RTI for struggling learners?

It allows schools to intervene early to meet the needs of struggling learners. This has the effect of avoiding, or at least ameliorating, the cascading deficits that can occur as time passes and a student’s failure becomes entrenched and his progress falls further and further behind that of his peers. These have been described as Matthew Effects (Stanovich, 1986). “For unto everyone that hath shall be given, and he shall have abundance; but from him that hath not shall be taken away even that which he hath” (Matthew, 80-100 A.D., XXV: 29).

The Matthew Effects are not only about the progressive decline of slow starters, but also about the widening gap between slow starters and fast starters. There is ample evidence that students who do not make good initial progress in learning to read find it increasingly difficult to ever master the process. Stanovich (1986, 1988, 1993) outlines a model in which problems with early phonological skills can lead to a downward spiral where even higher cognitive skills are affected by slow reading development. Subsequently, this finding has been often supported by other researchers. A special issue of the Journal of Learning Disabilities in 2011 concluded:

“Across studies, the generalized findings are that Matthew effects are present in LD and that disadvantaged students continue to be at a great disadvantage in the future. This finding was evident particularly with regard to the relationship between vocabulary and reading comprehension (Oakhill & Cain; Sideridis et al.) as well as with regard to other reading skills such as phonological awareness (McNamara et al.) or math abilities (e.g., Morgan et al.; Niemi et al.). When looking at the framework of responsiveness to instruction implemented in the United States and various parts of the world, the message from the present studies is clear: Students with LD are likely to be classified as nonresponders as their trajectories of growth suggest. We need to switch our attention from assessing the difficulties of students with LD to how to intervene to solve their problems” (Sideridis, 2011, p.401).

While it may provide a safety net for students, it simultaneously increases the accountability of schools. At a time when the national NAPLAN and international PISA assessments are placing pressure on schools to lift student performance, RTI challenges schools to provide data demonstrating their effectiveness, but also provides a direction for a school’s response to that data. Whether it would be viewed by schools as an attractive approach or as another imposition on a stressed system is moot. The expectation that general education teachers provide evidence-based instruction and regular progress monitoring represents a significant change compared to current practice in Australia.

Its introduction in the USA was expected to lead to a significant reduction in referrals to special education “by ensuring that all children in the general education setting have access to high-quality curriculum and instruction that are provided in a cascade of intensity” (Fox, Carta, Strain, Dunlap, & Hemmeter, 2010, p.3). In other words, it should remove from the LD category the group that Vellutino et al. (1996) referred to as instructionally disabled. This group comprises those children who were identified as having LD, but the cause of their struggle was inadequate instruction. This change is important, as the number of individuals identified with LD had increased by 150% to 200% since 1975. Though prevalence varied significantly from state to state, it was by far the largest category of special education. Additionally, it was noted that minority students were over-represented in the category, and that, once referred to special education, relatively few returned to the regular system (Bradley, Danielson, & Doolittle, 2005). It has been asserted for some time that the percentage of students who fail to make adequate progress can be reduced from its currently unacceptable level to something around 6% by employing early screening followed by evidence-based literacy programs (Lyon & Fletcher, 2003; Torgesen, 1998). If indeed there is a decline in the number of students labelled as disabled, then the negative consequences of labelling are avoided. It is also a hard tag to shake off once applied.


So, has there been a decline in referrals for special education?

“Much of the success of the sound MTSS implementation is evidenced by a decrease in special education referrals related to SLD given that the pre‐referral process associated with MTSS implementation allows for the elimination of unnecessary special education evaluations. As aforementioned this is reflected in an estimated reduction of SLD identification of up to 25% over the past several years.” (p.10)

Frank Webb, A., & Michalopoulou, L. E. (2021). School psychologists as agents of change: Implementing MTSS in a rural school district. Psychology in the Schools, 1–13. https://doi.org/10.1002/pits.22521

“A recent study by Albritton, et al. (2017) was aimed at determining the effectiveness of RTI in identifying preschool children in need of interventions in the areas of emergent language and literacy. This study focused on 274 students enrolled in a Head Start program, 92.3% of the participants were African American. The children were four years old and were considered at risk for educational difficulty due to their socioeconomic status. Students were assessed in the fall and again in the spring of the same school year. The initial assessment indicated that 29.9% of the students were in need of tier two supports and 2.6% were in need of tier three supports. After receiving interventions in the areas of print knowledge, phonological awareness, and receptive vocabulary, 76.8% of the tier two group was able to transition to tier one and all of the tier three students were able to move to tier two, with one student moving to tier one. Progress monitoring in RTI can help to identify students who are culturally different but do not have a disability, as well as to identify those who are culturally different and are in need of special education (Castro-Villarreal, 2016).” (p. 10-11)

Savino, K. (2019). The effects of Response to Intervention on reducing the numbers of African American students in special education. Theses and Dissertations, 2701. https://rdw.rowan.edu/etd/2701


Other subgroups?

Although the RTI framework is intended to apply to all students, there have been examples of its successful use with various subgroups, including gifted students (Hughes et al., 2011), adolescents (de Haan, 2021), and early childhood settings (Hinson, 2021).

“The implementation of the RtI Tier 2 model had a significant effect on the overall improvement in reading performance. These results are in line with previous studies that found significant changes once the intervention effects were analyzed on composite measures (Al Otaiba et al., 2014; Baker et al., 2015; Gilbert et al., 2013). … Within this context, direct instruction in small groups has proven beneficial for at-risk readers. In fact, previous studies found that Tier 2 direct instruction offered to small groups seems to be beneficial for students at-risk of reading failure (Agodini & Harris, 2010; Archer & Hughes, 2011; Carnine et al., 2004; Kamps et al., 2008; Richards-Tutor et al., 2016). … Another indicator we have used is the extent to which Tier 2 reduces the risk incidence of presenting reading and math difficulties in the early grades. Overall, the present findings indicate that the earlier the intervention, the greater the percentage of students who leave the situation of risk of LD in reading and math. This result coincides with previous studies (e.g., in reading, Gersten et al., 2020; in math, Bryant et al., 2011). Overall, these results are attributable to the fact that the intervention was carried out with adequate fidelity and had a significant positive impact on all grades. In fact, at the beginning of the intervention, the minimum requirements necessary to accurately carry out the implementation of the model were established (i.e., materials contained specific instructions on their implementation, a training and implementation schedule was designed, necessary materials for the evaluation of the fidelity of the implementation were designed, external observers were trained to do so, etc.) (Century et al., 2010; Johnson et al., 2006; Mellard & Johnson, 2008; O’Donnell, 2008)” (p. 14-15)

Jiménez, J., De León, S., & Gutiérrez, N. (2021). Piloting the Response to Intervention model in the Canary Islands: Prevention of reading and math learning disabilities. The Spanish Journal of Psychology, 24, E30. doi:10.1017/SJP.2021.25

“Since its implementation, RTI has received extensive attention at the elementary school level (RTI Action Network, n.d.). The same cannot be said about the application of RTI in preschool settings. With inclusive preschool programs available, there are an increasing number of students with special needs in early childhood programs (Lawrence et al., 2016). It is important to understand the benefits of early identification and intervention of children who exhibit challenging behaviors and basic skill concerns at the early childhood level. In this paper, the term “early childhood” is used to refer to children in their preschool years. Preschool age was chosen due to the lack of research regarding RTI with that particular age group. … a variety of behaviors are examined within the RTI process. At the early childhood level, academic behaviors such as alphabet knowledge, early listening skills, comprehension, language skills, verbal counting, and recognizing number symbols are considered important and critical to developing later skills. Social emotional skills such as following directions, transitioning between tasks, and communicating needs are skills needed to be successful in the school settings. Due to the importance of these skills, an emphasis is placed on helping students who have yet to develop these skills or who are considered behind their peers. Implementing RTI at this level allows children who may have not had prior exposure to learning opportunities the ability to receive additional instruction and assist in keeping them from falling behind. … several articles mentioned and used the progress monitoring tool, Individual Growth and Development Indicators of Early Literacy (myIGDIs).This can be administered by teachers to measure progress towards early literacy goals. This tool was used with multiple interventions, indicating that it is useful in monitoring progress of students in early childhood settings. Other progress monitoring tools that were mentioned in numerous academic articles include Dynamic Indicators of Basic Early Literacy Skills (DIBELS) and Clinical Evaluation of Language Fundamentals Preschool (CELF-P).” (p. 1-2, 42-43)

Hinson, K.Y. (2021). Response to Intervention in early childhood education. Masters Theses & Specialist Projects, Paper 3475. https://digitalcommons.wku.edu/theses/3475


RTI in practice

So, how might a school proceed to implement RTI for beginners? The first step would be to have all the early-years teaching staff and support personnel become familiar with, and accepting of, this new framework. The demands of this familiarization and acceptance process vary from setting to setting, but should not be underestimated. School leadership (especially from the principal) has been found to be essential. Nellis (2012) provides a thoughtful coverage of the issues involved in establishing and maintaining an effective RTI team to enable implementation and monitoring decisions at the whole-school and individual-student levels.


An early screen protocol

The next phase may involve universal screening of all students or, as a pilot, only those in their first year of schooling, in an attempt to predict which students may struggle with literacy in particular. There are various means of accomplishing such prediction, but it is generally accepted that a simple test of letter names (or sounds) and a measure of phonemic awareness together provide sufficient information to make reasonable predictions. Longitudinal research has made it apparent that the strongest predictors of success in beginning reading are knowledge of letter-sound/letter-name correspondences (Chall, 1967; Stage, Sheppard, Davidson, & Browning, 2001) and phonemic awareness (Scarborough, 1998; Torgesen, 1998), a protocol more recently supported by Manolitsis and Tafa (2011). This provides a rationale for focussing assessment on these areas initially.

Torgesen (1998) suggests the commencing screening procedure should comprise a test of letter names, because letter knowledge continues to be the best single predictor of reading difficulties, and a test of phonemic awareness. Torgesen’s research also indicated that knowledge of letter-sounds is a stronger predictor for first graders. Additionally, the single strongest predictor of a beginning child’s end-of-year spelling ability was a one minute letter-sound fluency test at the year’s beginning in the Al Otaiba et al. (2010) study. Measuring fluency of letter-sound knowledge is a worthwhile extension of simply assessing untimed knowledge. In a study with first grade students (Speece & Ritchey, 2005), letter-sound fluency was a unique predictor of subsequent oral reading fluency levels. Further useful correlations are reported by Stage, Sheppard, Davidson, and Browning (2001). Letter-naming fluency and letter-sound fluency predicted oral reading fluency and reading growth generally during the following year. Struggling first graders typically could produce only eight letter names per minute in their first months of schooling.

As phonemic awareness is thought to involve a developmental sequence, the decision as to which form of test to employ for a student cohort assumes importance. For example, it is recognised that blending, segmenting, and deletion are quite difficult tasks for children before and during their first year of school (Schatschneider, Francis, Foorman, Fletcher, & Mehta, 1999). Tests in which few students can achieve success or tests in which most students are near ceiling are of little use as screening devices.
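One rough way a school might check a piloted screening measure for these floor and ceiling problems is sketched below; the 20% cut-off and the pilot scores are arbitrary assumptions used only for illustration.

```python
# Illustrative check for floor/ceiling effects in piloted screening data.
# A measure where most children score zero (floor) or near the maximum
# (ceiling) discriminates poorly and is of little use as a screen.
# The 20% cut-off below is an arbitrary, illustrative assumption.

def screening_usefulness(scores, max_score, cutoff=0.20):
    n = len(scores)
    at_floor = sum(s == 0 for s in scores) / n
    at_ceiling = sum(s >= max_score - 1 for s in scores) / n
    if at_floor > cutoff:
        return f"Floor effect: {at_floor:.0%} scored zero - task likely too hard for this cohort"
    if at_ceiling > cutoff:
        return f"Ceiling effect: {at_ceiling:.0%} near maximum - task likely too easy"
    return "Scores are spread - potentially useful as a screening device"

# Invented pilot data for a phoneme deletion task given early in the first year.
pilot = [0, 0, 0, 1, 0, 2, 0, 3, 1, 0]
print(screening_usefulness(pilot, max_score=10))
```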

In a longitudinal study of 499 children from kindergarten through Grade 3 (Vervaeke, McNamara, & Scissons, 2007), an accuracy figure of 80% was obtained when kindergarten assessment of phonological awareness and letter-sound correspondence was compared to their Grade 3 reading achievement. The false negative and false positive rates were each 12%, representing encouraging predictive capacity over a significant period of time.

Good and his colleagues (Good & Kaminski, 2002) have established performance-based benchmarks using the freely available Dynamic Indicators of Basic Early Literacy Skills (DIBELS). The tests relevant to this screening task are Letter Naming Fluency and Initial Sound Fluency. Note that these tests are timed, so they add a component of speed along with power: efficiency along with knowledge. Employing fluency in the measurement of subword skills (e.g., letter names/sounds) has become of increasing interest (Speece, Mills, Ritchey, & Hillman, 2003) because of the significance of automaticity as a quality beyond mastery. In fact, Initial Sound Fluency assessed at school commencement has been shown to significantly predict end-of-year performance in word identification, nonsense word reading, and reading comprehension for both regular students and those for whom English is a second language (Linklater, O'Connor, & Palardy, 2009).

The DIBELS measures are also very brief and easy to administer. Letter Naming Fluency involves a sheet of upper- and lower-case letters, and students name as many letters as possible in one minute. Fewer than two letters in one minute at preschool or early in the first year at school is considered at risk, between two and seven constitutes some risk, and eight or more is classed as low risk.

In the DIBELS Initial Sound Fluency task, students are shown (for one minute) a series of pages, each containing four pictures. The examiner points to the pictures and says, for example: “This is: tomato, cub, plate, doughnut. Which picture begins with /d/?” Fewer than four initial sounds correct in one minute at preschool or in the early first year at school is considered at risk, between four and seven constitutes some risk, and eight or more is classed as low risk.
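
To make these cut-points concrete, here is a minimal sketch (Python, invented student scores) of how the beginning-of-year thresholds just described might be applied during universal screening. The threshold values follow the text above; the function and data names are simply illustrative.

```python
# Beginning-of-year DIBELS cut-points as described in the text:
# LNF: 0-1 at risk, 2-7 some risk, 8+ low risk
# ISF: 0-3 at risk, 4-7 some risk, 8+ low risk
BEGINNING_OF_YEAR_CUTS = {
    "letter_naming_fluency": (2, 8),   # (some-risk floor, low-risk floor)
    "initial_sound_fluency": (4, 8),
}

def risk_status(measure: str, score: int) -> str:
    """Classify a one-minute fluency score against the relevant cut-points."""
    some_risk_floor, low_risk_floor = BEGINNING_OF_YEAR_CUTS[measure]
    if score < some_risk_floor:
        return "at risk"
    if score < low_risk_floor:
        return "some risk"
    return "low risk"

# Invented screening results for three students
for name, lnf, isf in [("Student A", 1, 3), ("Student B", 5, 9), ("Student C", 12, 10)]:
    print(name,
          risk_status("letter_naming_fluency", lnf),
          risk_status("initial_sound_fluency", isf))
```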

A similar system called AIMSweb is available from The Psychological Corporation (http://www.aimsweb.com). It includes subtests for Phoneme Segmentation, Letter Naming Fluency, Letter Sound Fluency, Oral Reading Fluency, Maths, and Behaviour. Other resources include Edcheckup (http://www.edcheckup.com), Easycbm (http://www.easycbm.com/), Project AIM (Alternative Identification Models) (http://terpconnect.umd.edu/~dlspeece/cbmreading/index.html), and Yearly Progress Pro (http://www.mhdigitallearning.com).

Another option, one used in the UK for reviewing the progress of students in their second year of schooling, is the Year 1 phonics screening check (Department for Education, 2012). It is completed in mid Year 1, so it does not replace the initial phonological screen described above; it could, however, be useful as an aid in detecting those students in need of Tier 2 support. It comprises a list of 40 words and non-words that the student reads aloud to a teacher.

The school’s decision about how to proceed with instruction may depend upon the proportion of students defined as at “some risk” and/or “at risk”. The RTI framework is not prescriptive about this decision, but if the class average is low, then a red flag is raised not only about the quality of initial instruction, but also about the need for forward planning for Tier 2 interventions.

Figure 2. Data indicative of a classwide problem (Ikeda et al., 2006)

If only the “at risk” group is selected, there will be fewer students to manage, and they are the most likely to experience difficulty. However, there are likely to be students in the “some risk” group who also encounter hurdles. Some studies have suggested focusing on the lowest 10% of students, some on those below the 25th percentile, and others suggest that including all those below the 40th percentile offers the fewest false negative predictions. To some degree this decision will be driven by the school resources allocated to the RTI program. Some systems employ benchmarks to discern who is likely to struggle. There are many practical decisions for which the research into RTI has yet to report optimal recommendations (O'Connor & Freeman, 2012).
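
As an illustration of the percentile-based option, the following sketch (Python, invented class scores) selects a Tier 2 group as those students scoring below a chosen percentile. The cut-point and the simple percentile calculation are illustrative only, not a prescribed RTI procedure.

```python
# A minimal sketch of selecting a Tier 2 group by percentile cut-point.
# Scores and the chosen cut-point are invented for illustration.

def select_for_tier2(scores: dict, percentile: float) -> list:
    """Return the names of students whose score falls below the given percentile."""
    ranked = sorted(scores.values())
    cut_index = int(len(ranked) * percentile / 100)
    cut_score = ranked[cut_index]                 # score at the chosen percentile
    return [name for name, score in scores.items() if score < cut_score]

class_scores = {"Ava": 3, "Ben": 12, "Chloe": 7, "Dev": 25, "Ella": 9,
                "Finn": 2, "Grace": 18, "Hugo": 5, "Isla": 30, "Jack": 14}
print(select_for_tier2(class_scores, percentile=25))   # the lowest quarter of the class
```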

Table 1

An example from the DIBELS system of benchmark goals for the first year of school.

Initial Sound Fluency
Beginning of Year (Months 1-3): 0-3 At Risk; 4-7 Some Risk; 8 and above Low Risk
Middle of Year (Months 4-6): 0-9 Deficit; 10-24 Emerging; 25 and above Established
End of Year (Months 7-10): Not administered during this assessment period.

Letter Naming Fluency
Beginning of Year (Months 1-3): 0-1 At Risk; 2-7 Some Risk; 8 and above Low Risk
Middle of Year (Months 4-6): 0-14 At Risk; 15-26 Some Risk; 27 and above Low Risk
End of Year (Months 7-10): 0-28 At Risk; 29-39 Some Risk; 40 and above Low Risk

Phoneme Segmentation Fluency
Beginning of Year (Months 1-3): Not administered during this assessment period.
Middle of Year (Months 4-6): 0-6 At Risk; 7-17 Some Risk; 18 and above Low Risk
End of Year (Months 7-10): 0-9 Deficit; 10-34 Emerging; 35 and above Established

Nonsense Word Fluency
Beginning of Year (Months 1-3): Not administered during this assessment period.
Middle of Year (Months 4-6): 0-4 At Risk; 5-12 Some Risk; 13 and above Low Risk
End of Year (Months 7-10): 0-14 At Risk; 15-24 Some Risk; 25 and above Low Risk


Norms in standardised tests

There are always concerns about the applicability to Australian students of norms derived from populations in other countries. Obviously, it would be an advantage to have local norms for all the tests we wish to use; however, the huge cost of properly norming tests is prohibitive for many local developers. There are some grounds for defending US normed tests of reading. We speak and write the same language, and, in most Australian states, we commence school at about the same age. In international comparisons (e.g., OECD, 2004; UNICEF, 2002), our average reading attainment exceeds that in the USA, perhaps because of our lower proportion of disadvantaged and non-English speaking students. The implication of this disparity is that tests using US norms may slightly flatter our students. When students do not do well on such a test, it is likely that they would actually be lower on that test using local norms than is indicated by the test manual. So, if a student, for example, scores in the at-risk category on any of the DIBELS measures, any error caused by the US norms is likely to represent an underestimate of their level of difficulty. Further, according to the Galletly and Knight (2006) study, the current DIBELS norms are appropriate to states in Australia that begin teaching reading in their students’ first year. Certainly, further studies involving the testing of norms would add confidence to the vital decisions about how many students should be included in Tier 2 and Tier 3 programs.

The DIBELS system has become extremely popular as a means of ascertaining risk. Having completed the universal screening, one option is to present the evidence-based reading program to all students as normal, but to increase monitoring of the potential strugglers to weekly or fortnightly (rather than the three or four times per year for the low risk students). Another is to intervene immediately and place the identified group on a schedule of additional instruction, perhaps emphasising Letter Name/Sound Fluency and Initial Sound Fluency. In the VanDerHeyden, Snyder, Broussard, and Ramsdell (2007) study, the researchers found increased accuracy of prediction when a test-teach-test protocol was employed: a brief intervention was added to determine whether a lack of opportunity to learn was all that was holding back a subgroup of the potentially struggling cohort.

The progress checks are to ensure that subsequent progress (or lack of it) is quickly noted and responded to. The scores are charted to enable analysis using decision-making protocols, such as that seen below. A significant difference between actual and expected attainment is evident from score comparisons against either the class average or the benchmarks, while the rate of learning over several data points can be seen on a progress graph. Low attainment with a high rate of growth suggests the problem is probably a lack of prior opportunity, and gives some confidence that the existing instruction may be sufficient for the student to catch up. Low attainment and a flatter slope suggest the likely need for a Tier 2 intervention.
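
The two quantities being examined here, level and slope, are easily computed from a handful of progress-monitoring scores. The following minimal sketch (Python) estimates the growth rate with an ordinary least-squares slope and compares the latest score against a benchmark; the weekly scores and the benchmark value are invented for illustration.

```python
# A minimal sketch of level (latest score vs benchmark) and slope (rate of
# growth across progress-monitoring points). All numbers are invented.

def slope(weeks, scores):
    """Least-squares slope: average gain in score per week."""
    n = len(weeks)
    mean_w = sum(weeks) / n
    mean_s = sum(scores) / n
    numerator = sum((w - mean_w) * (s - mean_s) for w, s in zip(weeks, scores))
    denominator = sum((w - mean_w) ** 2 for w in weeks)
    return numerator / denominator

weeks  = [1, 2, 3, 4, 5, 6]
scores = [8, 9, 11, 12, 14, 15]          # e.g., letter-sounds correct per minute
benchmark = 27                            # hypothetical mid-year benchmark

growth = slope(weeks, scores)
print(f"Latest score {scores[-1]} vs benchmark {benchmark}; growth {growth:.1f} per week")
# Low level with a healthy slope: continue and monitor.
# Low level with a flat slope: consider a Tier 2 intervention.
```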

In this figure, the progress of a student in several phonological skills is plotted for the three test occasions of his first school year. The slopes of the graph indicate the extent of improvement over time – the steeper the slope, the more rapid was the progress. According to the DIBELS norms, Letter Naming Fluency scores of 27, 36, 53 correct in one minute are each in the low risk range. His Initial Sound Fluency scores of 18, 26, 34 correct in one minute each place him at “low risk” with this skill described as “established” by year’s end. Phoneme Segmentation Fluency is not usually assessed until mid-year; his score of 14 correct in one minute places him at “some risk”, and end-of-year 28 correct in one minute indicates an “emerging” skill. Nonsense Word Fluency is not usually assessed until mid-year; his score of 12 correct in one minute is just below the “low risk” threshold of 13, and thus classified as being at “some risk”. His end of year score of 22 correct in one minute is also within the “some risk” category with the threshold score for “low risk” being 25. Overall, the slopes indicate reasonable progress, with a need to monitor his Phoneme Segmentation Fluency and Nonsense Word Fluency monthly in the new year with a view to increasing intensity of instruction in phonemic awareness and decoding if necessary. There are excellent resources for planning, assessment norms, and decision rules available freely at the Oregon Reading First website (https://dibels.uoregon.edu/). Graphing software is available at several sites, including GRAPS (http://www.rtigraphs.com/home) and Chartdog (http://www.interventioncentral.org/tools/chart_dog_graph_maker).

Figure 3. DIBELS Progress Monitoring (McDougal, Clark, & Wilson, n.d.).

Students whose initial progress line has a flatter slope than that of the average student must subsequently make progress at an even faster pace than does the average student if they are to catch up. That is why early intervention is so important, and why program quality and intensity assume so much importance (Al Otaiba et al., 2008; Vaughn & Dammann, 2001; Vaughn, Denton, & Fletcher, 2010).

“Students who are behind do not learn more in the same amount of time as students who are ahead. Catch-up growth is driven by proportional increases in direct instructional time. Catch-up growth is so difficult to achieve that it can be the product only of quality instruction in great quantity” (Fielding, Kerr, & Rosier, 2007, p. 62).

Usually graphs used in a Tier 2 or Tier 3 intervention will include a final goal, and this allows a goal line to form part of the graph. The goal line is drawn from the baseline score to the projected score. The intent is for the data to be at or above the goal line most of the time, an indication that the intervention is having a positive effect. It is usual to make decisions about intervention change only after at least three weeks of instruction and at least six data points. If four of these points fall below the goal line, a decision is made to alter the intervention (Stecker & Lembke, 2007).
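
A minimal sketch of this decision logic appears below (Python). It draws a straight goal line from the baseline to the target score and applies the rule just described: four recent points below the line suggest changing the intervention, while four above it suggest raising the goal. All scores, the baseline, and the target are invented; real CBM systems apply these rules graphically rather than in code.

```python
# A minimal sketch of the goal-line decision rule. All numbers are invented.

def goal_line_value(week, baseline, goal, total_weeks):
    """Expected score at a given week along a straight goal line."""
    return baseline + (goal - baseline) * week / total_weeks

def review(scores_by_week, baseline, goal, total_weeks, window=6):
    """Apply the four-point rule to the most recent data points."""
    recent = scores_by_week[-window:]
    below = sum(1 for week, score in recent
                if score < goal_line_value(week, baseline, goal, total_weeks))
    above = sum(1 for week, score in recent
                if score > goal_line_value(week, baseline, goal, total_weeks))
    if below >= 4:
        return "change the intervention"
    if above >= 4:
        return "raise the goal"
    return "continue and keep monitoring"

weekly_scores = [(1, 11), (2, 12), (3, 12), (4, 13), (5, 13), (6, 14)]
print(review(weekly_scores, baseline=10, goal=30, total_weeks=10))
```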

In the example below, this four-point rule was invoked after the Week 6 test, and so a change was made to the intervention. The teacher has several options; for example, to increase instructional time, change the teaching presentation technique, increase opportunities to respond (e.g., a double dose), alter correction procedures, or change the grouping arrangement (individual instruction instead of small-group instruction). The regular weekly scores readily demonstrate to any observer whether the change was helpful. This type of data is particularly helpful to parents involved in Individual Educational Plan meetings. Sometimes the content of a school’s reporting of a child’s progress to these meetings can be a source of confusion to parents. Weak monitoring and reporting in schools and school systems have been cited by Rorris and colleagues, in their 2011 ACER report to The Review of Funding for Schooling Panel, as a major impediment to the adoption of effective educational programs.

In the example, the intervention intensity might be lifted by adding two weekly spelling sessions with a peer tutor to the existing four teacher-led sessions per week. This increase may be sufficient to have the child back on track to reach her target. As there were also four data points above the goal line, the goal line is elevated to ensure that high expectations are maintained.

Figure 4. CBM data for a Tier 2 intervention in writing (McMaster, Ritchey, & Lembke, 2011).

If success in an intervention is not occurring, it is important to ensure that the program is being implemented in the prescribed manner. This is a common problem when teachers are unaware of the critical importance of program fidelity, and decide to modify the intervention according to their own judgement. Supervision and technical assistance have been shown to be essential, as there is a strong relationship between fidelity of implementation and student outcomes (Vadasy, Jenkins, & Pool, 2000). Thus, the teacher needs to be retrained: watching the project coach perform the sequence correctly, being observed during a new attempt at faithful implementation, receiving feedback on that attempt, and being observed again after training (Mellard, 2010).


More on Tier 3

If, despite several faithfully implemented Tier 2 interventions employing recognised evidence-based programs over at least a 20 week period (Wedl, 2005), a student continues to make unsatisfactory progress in a given domain, then an RTI team may further manipulate some of the intensity variables highlighted by Mellard (2010). Most think of only the frequency and duration of lessons and programs as intensity variables, but Mellard, McKnight, and Jordan (2010) also include other features, such as broader instructional design principles that attempt to reduce any ambiguity in the instructional communications occurring within lessons.

“Mellard suggests that schools evaluate 10 distinct variables that may be adjusted to increase instructional intensity. These variables include three dosage-related elements (minutes of instruction, frequency, and duration), as well as instructional group size, immediacy of corrective feedback, the mastery requirements of the content, the number of response opportunities, the number of transitions among contents or classes, the specificity and focus of curricular goals, and instructor specialty and skills” (p.219).

If even these modifications are insufficient, one may arrange for a full psychological and educational assessment, along with any other relevant professional assessment, for example a paediatric or a speech pathology assessment. The expectation of working through the tier system is that many fewer students receive this resource-intensive referral, as most are adequately supported by the RTI system. Of course, the quality of RTI implementations may differ markedly from place to place, and over time. It should not be seen as a panacea, but rather as a useful framework for reducing the serious long-term effects of extended student failure.


So, RTI has been employed increasingly for reading instruction. What about writing?

“To date, RTI approaches to instruction have focussed almost exclusively on reading and mathematics. Within these domains, they have been widely adopted, particularly in English-speaking countries (Berkeley et al., 2009), and there is evidence that they are successful in reducing the percentage of students requiring special education (see meta-analysis by Burns et al., 2005). Hattie (2012, 2015) estimated a standardised effect size of 1.07 for the RTI approach. Teachers’ attitudes towards RTI also tend to be positive. They find it valuable in supporting students’ learning (Greenfeld et al., 2010; Rinaldi et al., 2011; Stuart et al., 2011) and believe it has a positive impact on their teaching practices, autonomy and self-efficacy (Greenfeld et al., 2010; Stuart et al., 2011). We argue, in line with suggestions by previous authors (Dunn, 2019; Saddler & Asaro-Saddler, 2013), that the RTI framework has considerable potential value in teaching writing. This may be particularly the case in early primary school, where students have to contend with developing basic skills in spelling, handwriting and sentence construction alongside the skills necessary to generate and structure content. Transcription skills in first grade are not automatized and children who particularly struggle with these will then not gain the practice they require to develop composition skills. There is, therefore, potential for some children to fall behind their peers from an early stage, unless they are provided with additional support. Equally, the RTI principle of progress monitoring seems important in the context of learning to write. Single-task, occasional writing assessments provide a poor estimate of a child’s writing ability and progress (Van den Bergh et al., 2012).” (p. 3-4)

Arrimada, M., Torrance, M. & Fidalgo, R. (2021). Response to Intervention in first-grade writing instruction: A large-scale feasibility study. Reading & Writing. https://doi.org/10.1007/s11145-021-10211-z


One to one or small group?

The currently most popular intervention in Australia is Reading Recovery, an expensive one to one program offered in the second year of a child’s education. It is probably best considered as a Tier 2 one to one intervention, but at present it represents the totality of structured literacy interventions in many Australian schools. In the UK too, it has been described (perhaps surprisingly) as the most intensive Tier 3 program available (Department for Children, Schools, and Families, 2009). As noted above, there have been criticisms of the lack of benefit analysis, let alone cost-benefit analysis, of this intervention.

The question of cost benefit has become more significant with the increasing number of studies in which small homogeneous group instruction (perhaps three to five students) can, in many cases, be as successful as one to one interventions, whether involving Reading Recovery or other beginning reading or early intervention programs (Elbaum, Vaughn, Hughes, & Moody, 2000; Wanzek & Vaughn, 2008). At a Tier 2 level, there are obvious savings to be made in offering evidence-based programs in small group over one to one format. The cost savings of doing so could be channeled into more intensive assistance (with smaller groups or one to one) for Tier 3 interventions.


Can RTI address the needs of older students?

The major interest in RTI is about the prevention of learning problems from the beginning, partly because of the difficulty of altering the educational trajectory of students who struggle for long periods. The Matthew Effect described earlier highlights the cascading deficits arising from a failure to thrive educationally. The research literature with secondary school students is sparse, and the applicability of RTI requires considerably more work before efficacy can be assured with this cohort.

However, RTI does have an intervention focus and provides a framework for intervention with older students. A stronger effort has been devoted to the middle primary grades than to secondary schools, as the range of needs along the continuum of service provision becomes extreme by the time children arrive at secondary school. Vocabulary, domain knowledge, reading fluency, a sense of helplessness, and an increasingly complex curriculum all conspire to thwart efforts at retrieving these students.

Roberts, Torgesen, Boardman, and Scammacca (2008) note that the instructional intensity and duration that might help close the achievement abyss for older students need to be far greater than is typically provided currently.

Vaughn, Wanzek, et al. (2010) warn that the impact of Tier 2 and Tier 3 interventions with Year 7 or 8 students will not have much effect if they are restricted to one school period per day.

“Instead, the findings indicate, achieving this outcome will require more comprehensive models including more extensive intervention (e.g., more time, even smaller groups), interventions that are longer in duration (multiple years), and interventions that vary in emphasis based on specific students’ needs (e.g., increased focus on comprehension or word study)” (p. 931).

Abbott et al. (2010) conclude that at least two and a half hours per school day needs to be devoted to literacy in a mix of large and small group instruction.

The pre-intervention CBM assessment for older students shifts from an emphasis on letter names/sounds and phonemic awareness to measures such as nonsense word fluency, oral reading fluency, story retell fluency, word identification fluency, and maze completion tests.

For many students, the struggle endured in developing their phonological skills remains evident in that their reading stays slow, thereby hindering their comprehension even when their reading accuracy has reached acceptable levels (Torgesen et al., 2006). In fact, Fuchs, Fuchs, Hosp, and Jenkins (2001) found that a short oral fluency measure predicted reading comprehension more precisely than did another brief reading comprehension test; the correlation was 0.91. Clearly, for older students fluency is very strongly associated with reading comprehension. Similar results have been obtained by O’Connor et al. (2002) and by Swanson and O’Connor (2009), hence the increased interest in fluency measures of students’ reading of connected text.

Research has also highlighted the value of extended fluency practice for such mid-primary students and secondary students (Joseph & Schisler, 2009; Kuhn & Stahl, 2003; Swanson, 2001). At a time when attempts to assist older readers focus exclusively on comprehension strategies, Mastropieri, Leinart, and Scruggs (1999) sound a warning that unless fluency is also addressed, comprehension strategy training will have little impact.

The emphasis on fluency of various skills and across grades is not one to which education in Australia has paid much attention. Fluency requires practice, and the main approach to teaching over the past 30 years, whole language, eschewed the necessity for practice.

Harn, Jamgochian, and Parisi (2009) argue that educators should pay much greater attention to the fluency of skills and knowledge, rather than solely to accuracy, as a truer measure of mastery. Their recommendation supports the work of Carl Binder (Binder, Haughton, & Bateman, 2002). Binder asserted that, as educators, we have failed to bring our students to the stage of fluency in the various skills we teach. A student may read a sentence without error, but take 30 seconds to do so. Another student does so in 6 seconds. There is a difference between the two that is not evident in traditional untimed assessment tools. Fluency is the sum of effective teaching and practice. He considers four levels on the way to fluency: 1. Incompetence (no measurable performance). 2. Beginner's level (inaccurate and slow). 3. 100% accuracy (traditional "mastery"). 4. Fluency (true mastery = accuracy + speed).
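
Binder’s distinction between accuracy-only mastery and true fluency can be illustrated with the two hypothetical readers above. In the sketch below (Python), the sentence is assumed to be 12 words long, and the fluency aim of 100 correct words per minute is an illustrative figure, not one drawn from Binder’s published aims.

```python
# A minimal sketch contrasting accuracy-only "mastery" with fluency
# (accuracy plus speed). The 100 words-correct-per-minute aim and the
# 12-word sentence length are assumptions for illustration only.

def performance_level(words_correct, words_attempted, seconds):
    """Describe a reading performance in terms of accuracy and rate."""
    if words_correct == 0:
        return "no measurable performance"
    accuracy = words_correct / words_attempted
    rate = words_correct / (seconds / 60)          # correct words per minute
    if accuracy < 1.0:
        return f"inaccurate and/or slow ({accuracy:.0%} accurate, {rate:.0f} wcpm)"
    if rate < 100:                                  # assumed illustrative fluency aim
        return f"accurate but not yet fluent ({rate:.0f} wcpm)"
    return f"fluent: accurate and fast ({rate:.0f} wcpm)"

# The two students in the example above: the same error-free sentence, 30 s vs 6 s.
print(performance_level(words_correct=12, words_attempted=12, seconds=30))  # 24 wcpm
print(performance_level(words_correct=12, words_attempted=12, seconds=6))   # 120 wcpm
```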

Binder argues that there are real educational advantages in being able to perform fluently. He points to increased retention and maintenance: a greater capacity for the knowledge or skill to be recalled and performed long after the teaching has occurred. He also notes that fluency confers resistance to distraction, and a capacity to remain at a task for much longer periods than a student who works slowly can manage. Finally, he points to the capacity to employ fluent skills in novel situations, and to use them as a bridge towards more complex skills. Fluency implies near effortless performance, hence “it frees up attention for higher order application rather than overloading attention with the mechanics of performance” (p.5). Similar conclusions were reached by Lindsley (1990) in his development of Precision Teaching, and Lindsley’s Celeration Charts have received renewed attention in recent times as a means of charting data.

Binder believes that this attention to fluency is relevant to all foundation skills, and he offers a series of estimates of fluent performance on a range of basic skills. For example, for spelling fluency: writing words from dictation at 15-10 words per minute. For basic early arithmetic: counting by 1’s, 2’s, 5’s, and 10’s at a rate of 120-100 per minute.

We have noted earlier that RTI employs the sorts of fluency goals to which Binder referred, and its measurement systems go further, offering age/grade norms for a range of basic skills such as letter names/sounds, phonemic awareness, nonsense word fluency, and oral reading fluency.


Can parents and paraeducators assist in intervention?

“Persampieri et al. (2006) and Gortmaker et al. (2007) demonstrated that parents were able to implement academic interventions accurately and effectively when they were provided with sufficient support. In this study, parents were provided with intervention implementation training and all intervention materials. In addition, a researcher regularly contacted parents through phone calls and written notes to assist with problem‐solving for any difficulties (e.g., audio recorder malfunction). … Results from this study are consistent with previous research which demonstrated that parent‐implemented reading interventions are effective for increasing students’ reading fluency (Gortmaker et al., 2007; Schreder et al., 2012). In particular, Gortmaker et al. and Schreder et al. conducted BEAs of participants’ reading fluency and parent‐implemented BEA‐identified interventions, which led to increases in participants’ reading fluency. However, this study extends the literature by testing BEA‐identified interventions that were implemented by parents in the context of a RtI system.” (1153-1154)

Zhou, Q., Dufrene, B.A., Mercer, S.H., Olmi, D.J., & Tingstrom, D.H. (2019). Parent-implemented reading interventions within a response to intervention framework. Psychology in the Schools, 56, 1139–1156. https://doi.org/10.1002/pits.22251

“However, although in principle RTI appears to fit well with writing instruction, in practice both progress monitoring and additional support for struggling students may over-stretch school resources (Castro-Villarreal et al., 2014; Martinez & Young, 2011). This will be particularly the case where a single teacher has sole responsibility for a large, full-range classroom. In this context, recruiting parents to supervise researcher-designed remedial training may facilitate the implementation of a RTI-based program. Parental involvement has actually been defined as a key component of successful RTI-based programs (Stuart et al., 2011), though, to our knowledge, no detailed guidelines on parents’ role has been provided, and no studies have evaluated RTI implementations where parents supervised additional training. There is evidence that parental involvement benefits students’ learning, with estimated standardised effects of around 0.50 (Hattie, 2012, 2015). In writing, research suggests that instructional programs based on parents and children working together significantly improve spelling (Camacho & Alves, 2017; Karahmadi, et al., 2013) and even compositional quality (Camacho & Alves, 2017; Robledo-Ramón & García-Sánchez, 2012; Saint-Laurent & Giasson, 2005).” (p. 4)

Arrimada, M., Torrance, M., & Fidalgo, R. (2021). Response to Intervention in first-grade writing instruction: A large-scale feasibility study. Reading & Writing. https://doi.org/10.1007/s11145-021-10211-z


There’s evidence of a reduction in referrals for special education, but are there any longer term effects of RTI?

“The implementation of evidence-based practices through Response to Intervention (RtI) has been shown to reduce the prevalence rate of students dropping out from school (Bernardt & Hebert, 2017; Wood, Kimperman, Esch, Leroux, & Truscott, 2017).” (p.11)

Young, N.D., & Johnson, K. (2019). The potency of the Response to Intervention Framework. In N.D. Young (Ed.), Creating compassionate classrooms: Understanding the continuum of disabilities and effective educational interventions (pp. 11-21). Wilmington, Delaware: Vernon Press.

“ … students who experienced the early phases of RtI implementation (i.e., Phases I and II) during Grade 2 generally had higher mean comprehension scores in Grades 4 and 5 than students in the baseline condition. … Several studies have investigated student and systems outcomes associated with full‐scale RtI implementation. Collectively, this study has suggested that RtI implementation is associated with greater accuracy and decreased numbers of special education referrals, improvements in student achievement, and reduced assessment and placement costs for districts (Burns, Appleton, & Stehouwer, 2005; Lembke, Garman, Deno, & Stecker, 2010; VanDerHeyden, Witt, & Gilbertson, 2007).” (p. 242, 243)

Grapin, S.L, Waldron, N., Joyce-Beaulieu, D. (2019). Longitudinal effects of RtI implementation on reading achievement outcomes. Psychology in the Schools, 56(2), 242– 254. https://doi.org/10.1002/pits.22222


Is it doable in Australia?

It would certainly require wholesale changes to the often strongly held belief that education and science are incompatible (Hempenstall, 2006; Lilienfeld, Ammirati, & David, 2012). Some have argued that science has little to offer education, and that teacher initiative, creativity, and intuition provide the best means of meeting the needs of students. For example, Weaver considers that scientific research offers little of value to education: “It seems futile to try to demonstrate superiority of one teaching method over another by empirical research” (Weaver, 1988, p.220).

Some outcomes of a failure to attend to empirical data are the popularity of learning styles, of rigid inclusion of all students at all times, of constructivism, and of student-directed learning (personalised learning); others are the distrust and rejection of instructional protocols, and the belief that obtaining student engagement is the major role of teachers. For example, Smith (1992) wrote that student-teacher relationships are sufficient for effective learning to occur. Further, he rejected instruction in favour of a naturalist perspective: “Learning is continuous, spontaneous, and effortless, requiring no particular attention, conscious motivation, or specific reinforcement” (p.432).

These beliefs with little or no empirical support perhaps reflect what Isaacs and Fitzgerald (1999) referred to as eminence-based practice rather than evidence-based practice.

Carnine (2000) noted that education has been largely impervious to research on effective practices, and he explored differences between education and other professions, such as medicine, that are now strongly wedded to research as the major informant of practice. In psychology during the 1990s, the American Psychological Association (Chambless & Ollendick, 2001) introduced the term empirically supported treatments as a means of highlighting differential psychotherapy effectiveness. Prior to that time, many psychologists saw themselves as developing a craft in which competence arises through a combination of personal qualities, intuition, and experience. The result was extreme variability of effect among practitioners, a characteristic noted among teachers also.

In Australia in 2005, the National Inquiry into the Teaching of Literacy asserted that “teaching, learning, curriculum and assessment need to be more firmly linked to findings from evidence-based research indicating effective practices, including those that are demonstrably effective for the particular learning needs of individual children” (p.9). It recommended a national program to produce evidence-based guides for effective teaching practice, the first of which was to be on reading. In all, the Report used the term evidence-based 48 times.

Carnine (1991) argued that educational leadership has been the first line of resistance to effective practices. He described educational policy-makers as lacking a scientific framework, and thereby inclined to accept proposals based on good intentions and unsupported opinions. Professor Peter Cuttance, director of Melbourne University's Centre for Applied Educational Research, was equally blunt in 2005: “Policy makers generally take little notice of most of the research that is produced, and teachers take even less notice of it … ” (Cuttance, 2005, p.5).

In Australia, pressure for change has been building, and the view of teaching as a purely artisan activity is being challenged. Reports such as that by the National Inquiry into the Teaching of Literacy (2005) have urged education to adopt the demeanour and practice of a research-based profession, though it is not obvious that such exhortations have had a significant effect on classroom practice. However, State and national testing has led to greater transparency of student progress, and, thereby, to increased public awareness. Government budgetary vigilance is greater than in the past, and measurable outcomes are becoming the expectation from a profession that has not previously appeared enthused by formal testing.

There’s also the issue of teacher training to implement interventions.

In Australia, we have seen numerous reports that beginning teachers in particular may not have the knowledge of evidence-based practice needed as a first step to the use of RTI. A number of reports and studies have noted this deficit, and criticized the lack of this emphasis in teacher training courses (Fielding-Barnsley, 2010; Fielding-Barnsley & Purdie, 2005; Fisher, Bruce, & Greive, 2007; Louden, et al., 2005; Rohl & Greaves, 2005; Senate Employment, Workplace Relations and Education Committee, 2007). Additionally, education departments have been remiss in failing to evaluate programs that they introduce into the school system, such as Reading Recovery (Victorian Auditor-General, 2009).

If teacher training does not generally equip teachers with either the evidence base for the initial teaching of reading or the tools to intervene effectively with students who struggle with literacy in particular, what sort of training might be effective? Historically, the majority of in-service training in Australian education has been whatever can be squeezed into a one-off curriculum day. As Joyce and Showers (2002) report, this type of professional development does not typically translate into changes at the classroom level.

Table 2

Percentage of participants achieving desired outcomes at increasing levels of training

Training component: Knowledge / Skill demonstration / Use in classroom
Theory & discussion: 10% / 5% / 0%
Demonstration (modelling): 30% / 20% / 0%
Practice & feedback in training: 60% / 60% / 5%
Coaching in classroom: 95% / 95% / 95%

(Joyce & Showers, 2002).


Knowledge of effective instructional interventions

Given that frameworks are designed to support structures rather than provide the entire finished product, schools usually require knowledge of appropriate evidence-based practices, and of how to fit those into the day-to-day running of the school. Because teachers do not generally receive such knowledge and skill in their pre-service training, many need the services of a trainer to assist with the various training and implementation issues that arise from the decision to adopt the RTI framework. RTI is not a self-contained panacea. The framework stands or falls on the faithful presentation to students of demonstrably effective interventions; otherwise it remains only a shell.

“High-quality intervention is grounded in scientifically proven materials with a strong evidence base; however, it is not enough for students to simply engage with a strong evidence-based intervention. Ongoing development of teacher knowledge to ensure the successful implementation of the high-quality intervention is also essential, as “curricula alone do not teach” (ILA, 2020). The effectiveness of the intervention is contingent upon the effectiveness of the provider. In order to achieve desired student outcomes and maintain the intervention’s evidence basis, program materials must be implemented as intended. Teaching as intended requires fidelity to program structure, including content, materials, duration, and frequency as well as fidelity to program procedures, including delivery, techniques, and student engagement. Effective intervention implementations consider both what is taught and how it is being taught (Sidler Folsom & Schmitz, 2018).” (p. 4)

Forsythe, L., Kohn, A., & Arnett, M. (2021). High-quality interventions to ensure literacy for all students. Center for the Collaborative Classroom. https://public.cdn.ccclearningportal.org/program/resources/field-team/MKT-5147-highqualityinterventions-whitepaper.pdf


Explicit, systematic, and sequential

“For most students, direct instruction in the foundational skills of reading is essential to their literacy success. When students do not make expected progress with the explicit foundational skills instruction provided during core instruction, they require additional instruction, or intervention, that is both explicit (direct and teacher-led) and systematic (methodical, incremental instruction organized into a coordinated instructional routine) (Gersten et al., 2008). Students who are identified as needing foundational skills intervention will benefit from instruction that is carefully sequenced so that skills are developed gradually and deliberately. (Torgesen, 2004). According to the International Literacy Association (2020), explicit and systematic intervention instruction that results in the needed acceleration of progress must also ensure frequent opportunities for student response, provide specific and immediate corrective feedback, support positive approaches to learning and behavioral supports, and teach the transfer of skills. Well-designed, systematic instructional programs ensure that children are taught systematically and are given opportunities to practice the skills before being required to do this work independently (Torgesen, 2004; Swanson, 1999).” (Forsythe, Kohn, & Arnett, 2021, p. 4).


Preparing teachers for RTI

“Successful implementation of intervention strategies for students having difficulties with reading is highly dependent on teachers’ knowledge. Curricula alone do not teach; skilled teachers know how to prioritize learning objectives. For students who are struggling, or who have reading disabilities, including dyslexia, it is vital that teachers know (a) how to identify students who need help, (b) what help to provide them, and (c) how to access appropriate resources for supports within their school and district. Strong teacher preparation programs prepare their candidates with knowledge to guide their practice, but they also provide many types of opportunities for candidates to practice or apply their coursework and receive supportive feedback. Preparation programs vary along a developmental continuum from preservice teacher training to graduate specialized training programs that include job-embedded activities. Preservice teachers can benefit from watching faculty model literacy lessons, watching video exemplars, and practicing teaching with peers. Initially, they may follow relatively scripted lesson plans with struggling or typical learners and eventually develop their own lesson plans for whole-group, small-group, and individualized instruction. More experienced teachers returning for graduate or certification programs may benefit from trying evidence-based practices within their own classroom setting. Ideally, preservice and inservice teachers have supervision from higher education faculty in person or remotely through technologies that allow a supervisor to observe via conferencing or to observe video clips. Within the university setting, teachers benefit from watching videos of their own and their peers’ teaching as a type of a community of practice that supports reflection and rehearsal. Some recent innovations involve virtual reality simulations that allow a preservice teacher to try out lesson plans with a virtual student, receive feedback from peers and faculty, and to reflect and replan prior to delivering a lesson to a student. Some of these innovations also facilitate teachers’ self-efficacy for teaching diverse learners and managing classroom behavior in order to keep students engaged and motivated.” (p.1)

International Literacy Association. (2020). Intensifying literacy instruction in the context of tiered interventions: A view from special educators [Literacy leadership brief], 1-13.

https://literacyworldwide.org/docs/default-source/where-we-stand/ila-intensifying-literacy-instruction.pdf?sfvrsn=5caabc8e_4


From what sources might expert support arise?

Certainly, some schools have teachers who have studied the intervention field and are capable of supporting a school through the introduction of RTI. Additionally, some suggest a role for educational psychologists (Frank Webb & Michalopoulou, 2021), and others envisage speech pathologists being an appropriate resource (de Haan, 2021). Special educators, too, are potential sources of expertise.

“All educators will be affected by the decision to implement RTI. For example, many special education teachers and other learning disability specialists will have to take on the roles of the interventionists, and they will be expected to become proficient in a variety of research-based methods and material in a relatively short amount of time. In addition, general education teachers will have to be become proficient in the new assessment procedures and data collection methods for progress monitoring and be able to interpret the data to inform their instruction. These changing roles are a concern due to the importance of fidelity and consistency within an effective RTI model (NJCLD, 2005).” (p.14)

Wise, C. (2017). The effectiveness of Response-to-Intervention at reducing the over identification of students with specific learning disabilities in the special education population (Doctoral dissertation, Carson-Newman University). https://classic.cn.edu/libraries/tiny_mce/tiny_mce/plugins/filemanager/files/Dissertations/Dissertations2017/Coleman_Wise.pdf

Haager and Mahdavi (2007) summarise some of the other issues that challenge an RTI implementation. There may be policies and practices within schools or across education systems that conflict with the evidence-based nature underpinning RTI. At the local level teacher negativity can derail any new initiative. What they do delineate as essential are the grade level meetings to ensure mutual support and consistency of application, the availability of in-class coaching and supervision mentioned above, and the unwavering support and attention of school administration. Teacher training has been a major issue in the USA involving a great deal of time and expense (Baskette, Ulmer, & Bender, 2006). Additional hurdles identified by Jones, Yssel, and Grant (2012) include how to find sufficient personnel with the time for universal screening, and the pre-testing for, and scheduling of, interventions.

Concerns have been raised by van Kraayenoord (2010) about how RTI might fare in Australia. She makes the point that the level of intensity and duration provided to students during a Tier 3 intervention has rarely been offered in the regular school system, and both the concept and the cost of such extensive and extended interventions would challenge the education system. She also queries the emphasis on reading in the RTI research, whereas in Australia a much broader vision of “multiliteracies” has significant currency. Further, there may be a local reaction against a primary focus on a narrower definition of reading than is present in the popular four resources model of Freebody and Luke (1990). In any case, the major stumbling block, at least for most struggling beginners, is to be able to get the words off the page (Rasinski, Homan, & Biggs, 2009; Stuart, 1995).

Somewhat surprisingly, van Kraayenoord argues that RTI should be aligned with existing teacher education emphases and local policies. At the same time, she argues for an evidence-basis to determine what is effective, but “we must not devalue teachers’ professional knowledge and decision making around curricula and pedagogical practices” (p.371). These would seem to be conflicting expectations; however, if they reflect a likely popular position, it is difficult to see how an effective version of RTI would be possible in Australia without it being mandated.

Esparza Brown and Doolittle (2008) describe what they see as a potential selling point for RTI when they argue that it represents the best of personalised instruction, as each child’s needs are assessed in order to provide instruction appropriate to those needs. However, personalised instruction can have a rather different and incompatible meaning as a recently valued educational concept related to constructivism. If one accepts the following definition, it is difficult to reconcile with RTI: “Personalising learning is the process which empowers the learner to decide what, where, when and how they learn” (National College for School Leadership, 2010).

A further issue involves another educational value, that of inclusive education. It refers to the provision of assistance for students with disabilities in the classroom they would attend if they did not have a disability. Full inclusionists consider that the meaning of this term precludes the use of any withdrawal from the classroom (Stainback & Stainback, 1992) for Tier 2 or Tier 3 assistance, as such actions contravene the spirit of inclusive education. Sometimes the objections to withdrawal emphasise a social justice or anti-discrimination perspective. Others cite the threat to students’ self-esteem of such schedules, as peers observe that the student requires special treatment. The philosophy does tend to polarise:

“Although state-of-the-empirical-data debates have contributed significantly to improving intervention efforts, the problem of effectively translating research findings into practice remains exacerbated by other less visible but powerful tensions materially impacting the field. Most prominent in this regard is the ideology of full inclusion, which has influenced policy and practice disproportionately to its claims of efficacy. The now familiar press for full inclusion has become an ideological on-rushing river, bypassing significant islands of contradictory evidence by now quite substantial, offering viable and more ethical alternatives. Indeed, the notion of empirical evidence seems antithetical to many full inclusionists’ views on research and practice (e.g., D. J. Gallagher, 1998; Smith, 1999). In place of evidence, we are offered trendy and deliberate postmodern convolutions, whose underlying aim, transparent but incoherent, is the dissolution of special education, irrespective of the consequences (see Sasso, 2001)” (Kavale & Mostert, 2003, p.191).

Some supporters of another related philosophy known as differentiated instruction also reject withdrawal from the general classroom for most students (Tomlinson, 1999). However, according to some definitions the RTI model could be viewed as representing a form of differentiated instruction, even though at least some of the interventions (such as Tier 2 and 3) are likely to occur in withdrawal groups or in one to one settings. Given the perspective that a differentiated instruction goal is "to maximize student growth and individual success" (Tomlinson & Allan, 2000, p. 4), then, as Allan & Goddard (2010) assert, the approaches share similar goals. On the other hand, the espousal of teaching according to learning styles (Strong, Silver, & Perini, 2001) and multiple intelligences (Campbell, Campbell, & Dickinson, 1999) by some writers in the differentiated instruction field would appear to put the philosophy at odds with RTI.

Thus, there is a tension between what may be optimal in terms of educational effectiveness and the desire not to single out students in need. There is a view that it is not the setting but the instructional practices within the setting that determine student progress (Epps & Tindall, 1988), and a question arising from that perspective is whether all the necessary evidence-based practices can currently be provided within the general classroom. McLeskey and Waldron (2011) in reviewing the research on full inclusion and withdrawal programs concluded that “full inclusion is not a feasible alternative for meeting the basic academic needs in reading and math for most students with LD” (p.49).

Another concern that has not been fully resolved is what constitutes evidence-based programs suitable at each tier level, and how they are to be discerned from the plethora of published programs or methods that all claim to be effective. It is difficult for teachers to make such determinations, as there is precious little training in research methods in their courses (Lomax, 2004). Recognising this absence of emphasis in the training curriculum, the National Inquiry into the Teaching of Literacy (2005) recommended that teachers-in-training be provided with a solid understanding of research design so that they can adapt to changing educational policy.

A possible component of any reform movement in Australia arises with the establishment and objectives of the Australian Institute for Teaching and School Leadership (2011). Among the standards and procedures of the Initial Teacher Education Program Accreditation document is: “6. Evidence: The credibility of national accreditation is built on evidence-based practice and contributes to the development of evidence through research about what works in quality teacher education. This evidence in turn informs the development of accreditation, allowing it to focus on those things shown to be related to outcomes” (p.1).

In the meantime, are there any immediate shortcuts to discerning the gold from the dross? If so, where can one find information about any areas of consensus? Those governments that have moved toward a pivotal role for research in education policy have usually formed panels of prestigious researchers to peruse the evidence in particular areas, and to report their findings widely (e.g., National Reading Panel, 2000). These panels attempt to assemble all the methodologically acceptable research and synthesise the results, using statistical processes such as meta-analysis, to enable judgements about effectiveness to be made. This involves pooling the results from many studies to produce a large data set that is intended to reduce the statistical uncertainty that accompanies single studies.
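
The pooling step at the heart of a meta-analysis can be shown in a few lines. The sketch below (Python) computes a fixed-effect, inverse-variance weighted average of several studies’ effect sizes; the effect sizes and variances are invented, and real syntheses such as the National Reading Panel’s involve many additional steps (study screening, heterogeneity checks, moderator analyses).

```python
# A minimal sketch of fixed-effect, inverse-variance pooling of effect sizes.
# Each study's effect is weighted by the inverse of its variance, so larger,
# more precise studies count for more. All numbers are invented.

def pooled_effect(effects, variances):
    """Return the inverse-variance weighted mean effect size and its standard error."""
    weights = [1 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    standard_error = (1 / sum(weights)) ** 0.5
    return pooled, standard_error

effects   = [0.45, 0.62, 0.30, 0.55]     # standardised mean differences from four studies
variances = [0.02, 0.05, 0.01, 0.04]     # sampling variance of each effect size

effect, se = pooled_effect(effects, variances)
print(f"Pooled effect {effect:.2f} (95% CI {effect - 1.96*se:.2f} to {effect + 1.96*se:.2f})")
```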

So, the recommendations for practice produced by these bodies can become useful resources in answering the question “what works?” These groups include the National Reading Panel, the American Institutes for Research, the National Institute of Child Health and Human Development, the What Works Clearinghouse, the Florida Center for Reading Research, the Coalition for Evidence-Based Policy, and the Best Evidence Encyclopedia, among others.

There have been some criticisms of these efforts, as the standards they set often exclude huge numbers of studies that do not meet the gold standard of randomised controlled trials. Stockard (2010) accepted the value of this approach, but argued that including only randomised controlled trials omits a large number of studies unnecessarily (the What Works Clearinghouse includes no studies prior to 1985), and may actually produce misleading conclusions. For example, of 106 studies of Reading Recovery reviewed, the What Works Clearinghouse found that only four met its standards, and assigned a medium to large effect on that basis.

So, it is advisable to seek analyses from a number of these review sites. At this time, there is no substitute for being able to analyse research oneself.


Continuing challenges for RTI/MTSS - Its relationship to special education:

“While some states have retained a clear relationship between their tiered approach and special education, others have explicitly stated that RTI or MTSS is a support system for all students and is not a pathway to special education. Both research and practice on RTI, MTSS, and other tiered systems have prioritized the models’ uses for screening-level identification and early intervening. Undoubtedly, students who had historically needed to “wait to fail” before qualifying for special education services (L. Fuchs & Vaughn, 2012) are identified and served sooner under the current models. However, approximately two-thirds of the states do not support a tiered approach for LD identification without still relying on data from additional formal testing. Researchers have speculated that a primary reason for this may be lack of clarity about the psychometric integrity of treatment based diagnoses from tiered models and uncertainty as to how they would satisfy statutory regulations (Hale et al., 2010; cf. Zirkel, 2017). As Hudson and McKenzie (2016a) report, there is wide variation in policy, guidance, and practices across and within states that do allow or require using RTI data for LD identification.” (p.339)

“The evolution in tiered models has also changed the role of the special education teacher (D. Fuchs et al., 2010). In addition to serving students with LD and other disabilities on their caseloads, special educators are also expected to work with their general education colleagues to support any student in need of intervention at any tier. However, many special education teachers in today’s schools are underprepared to meet the needs of the students that they serve (Brownell et al., 2010). A compounding factor is that initiatives such as the inclusion movement and RTI have decreased the numbers of special educators employed in schools, a drop of 17% nationally from 2005 to 2012 (Dewey et al., 2017). One consequence is that fewer specially trained teachers are available to work with their colleagues and serve the unique needs of learners who may have disabilities. When these teachers are then spread thin with the expectation to support all students, it may end up that students with disabilities are not provided the appropriate education they need to receive meaningful educational benefit (Dewey et al., 2017; see Endrew F. v. Douglas County School District, 2017). Hence, students who have disabilities such as LD may be supported in meeting general education curricular demands but not in their individually appropriate education as called for in the IDEA (Calhoon et al., 2019). The debate D. Fuchs et al. (2010) described between those who favor providing supports within or outside of special education systems is unresolved, with the state models indicating both approaches are in practice.” (p.339-340)

Berkeley, S., Scanlon, D., Bailey, T.R., Sutton, J.C., & Sacco, D.M. (2020). A snapshot of RTI implementation a decade later: New picture, same story. Journal of Learning Disabilities, 53(5), 332-342. doi:10.1177/0022219420915867


Inconsistencies across implementation sites are problematic for RTI

“In their review of the implementation science literature, Fixen and colleagues (2005) described implementation as “a process, not an event” (p. 15) that includes the following stages: (a) exploration and adoption, (b) program installation, (c) initial implementation, (d) full operation, (e) innovation, and (f) sustainability. Support for systemic implementation of tiered approaches such as RTI is available to both SEAs and LEAs through the federal Office of Special Education Programs’ (OSEP) technical assistance centers (e.g., National Center on Student Progress Monitoring, National Center on Intensive Intervention [NCII]), state professional development grants, and regional resource centers.… Despite available resources and a growing research base supporting RTI and related models (e.g., Burns et al., 2005; Gersten et al., 2009; Tran et al., 2011), a research-to-practice gap has been documented for on-the-ground implementation. … Several survey studies provide further insight into the breakdown in RTI implementation practices. Bineham and colleagues (2014) surveyed 619 general and special education administrators nationally and found a notable discrepancy between knowing about RTI and knowing how to implement the tiered framework. Al Otaiba and colleagues (2019) surveyed 139 general and special education teachers to understand their knowledge of Tier 1 and preparedness to make data-based decisions. They found teachers reported greater levels of understanding than preparedness to implement. Similar findings have been reported in several other investigations (e.g., Maki et al., 2018; Regan et al., 2015). While teachers and school leaders recognize the importance of multitiered models such as RTI, there is a persistent breakdown in their readiness to implement, which is likely exacerbated by ongoing evolution in thinking around the topic (e.g., Burns et al., 2016; D. Fuchs et al., 2012; L. Fuchs & Vaughn, 2012; Gersten & Dimino, 2006). Despite the enduring lack of clarity surrounding RTI in the field, it has had a significant impact on service delivery models and instructional practices in schools, primarily at the elementary level (Al Otaiba et al., 2014).” (p. 333)

Berkeley, S., Scanlon, D., Bailey, T.R., Sutton, J.C., & Sacco, D.M. (2020). A snapshot of RTI implementation a decade later: New picture, same story. Journal of Learning Disabilities, 53(5), 332-342. doi:10.1177/0022219420915867


Conclusion

This movement represents a change in the way we think about students who do not make good progress in their education. RTI shifts our attention from the characteristics of the learner to those of the teaching process. As educators, our capacity to influence student performance is more fruitfully focused upon what we can contribute rather than solely on what the student brings. The environment and what conditions promote achievement become our tools.

A consequence of this change is that we no longer require struggling students to obtain some form of diagnosis before we act. We can observe and act quickly, rather than forcing children into an extended “wait-to-fail” situation before assistance can be provided.

Effective early intervention can provide sufficient impetus to overcome initial struggles before the debilitating effects of chronic failure become entrenched. It also frees our scarce specialist resources for those students with more severe and persistent difficulties.

“The RTI/MTSS framework is designed as a model for early intervention (Connor et al., 2014), meaning that it is intended to address academic and behavioral difficulties as early as possible in a child’s school career. Multi-tier instructional efforts have the potential to prevent struggling readers from facing long-term impact on their academic success. In the realm of intervention, there is an important emphasis on providing early intervention in order to proactively address academic difficulties (Connor et al., 2014; Gersten et al., 2008).” (p. 3)

Forsythe, L., Kohn, A., & Arnett, M. (2021). High-quality interventions to ensure literacy for all students. Center for the Collaborative Classroom. https://public.cdn.ccclearningportal.org/program/resources/field-team/MKT-5147-highqualityinterventions-whitepaper.pdf

“As is clear in this study, RTI approach in early childhood (specially kindergarten children identified as at risk for the acquisition of beginning reading) presumes use of evidence-based practices, and is an emerging practice with a promise that will lead to greater levels of effectiveness in teaching children reading sub-skills such as those taught in this study (e.g. letter naming knowledge and spelling knowledge). The results from this study were in the same line with those obtained by Kamps et al. (2007) who investigated the effect of three reading programs along with evidence-based direct instruction in small groups of at-risk students, using Tier 2 intervention. The programs employed were found to be strongly effective with at-risk students. Furthermore, differentiating instruction based on approaches such as of response-to-intervention model supported Linan-Thompson, Cirino & Vaughn (2007) suggestion that explicit, systematic, and intensive interventions should be provided to children who are at risk of lagging behind their same-aged peers in reading. As indicated by Charles et al. (2013) "RTI holds the promise of preventing early delays from becoming disabilities later by intervening sooner to meet children's needs" (p. 2).” (p. 2039)

Eissa, M.A. (2020). Effects of RTI on letter naming and spelling among kindergarteners at risk for reading failure. Elementary Education Online, 19(4), 2032-2041. doi:10.17051/ilkonline.2020.763216

In the longer term, we should develop the capacity to screen students accurately well before their first contact with literacy instruction, thereby eliminating, rather than merely reducing, the failure period that is so damaging to children’s educational and personal development.
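For readers who work with screening data, the short Python sketch below illustrates the kind of benchmark-based decision rule that universal screening within an RTI framework implies. It is not drawn from the article or from any particular published system: the measure (words correct per minute on an oral reading fluency probe), the cut-scores, and all names are hypothetical placeholders; operational systems (e.g., DIBELS, AIMSweb) publish norm-referenced benchmarks for each grade and time of year.

# A minimal, illustrative sketch (hypothetical names and cut-scores throughout)
# of a benchmark-based screening decision of the kind an RTI team might use.
from dataclasses import dataclass

@dataclass
class ScreeningResult:
    student_id: str
    words_correct_per_minute: float  # e.g., score on an oral reading fluency probe

# Hypothetical cut-points; real systems publish norm-referenced benchmarks
# for each grade and time of year.
INTENSIVE_CUT = 20.0   # below this: intensive, individualised support indicated
STRATEGIC_CUT = 40.0   # below this: targeted small-group support indicated

def recommend_tier(result: ScreeningResult) -> str:
    """Map a single screening score to an indicative tier of support."""
    if result.words_correct_per_minute < INTENSIVE_CUT:
        return "Tier 3: intensive, individualised intervention"
    if result.words_correct_per_minute < STRATEGIC_CUT:
        return "Tier 2: targeted small-group intervention"
    return "Tier 1: continue core classroom instruction and routine monitoring"

if __name__ == "__main__":
    for student in (ScreeningResult("A", 15), ScreeningResult("B", 35), ScreeningResult("C", 60)):
        print(student.student_id, "->", recommend_tier(student))

In practice, of course, a single score is only the entry point: RTI decisions rest on repeated progress monitoring and on students’ response to instruction over time, not on one benchmark assessment.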

References

Aaron, P. G., Joshi, M., & Williams, K. A. (1999). Not all reading disabilities are alike. Journal of Learning Disabilities, 32, 120–137.

Abbott, M., Wills, H., Greenwood, C.R., Kamps, D., Heitzman-Powell, L., & Selig, J. (2010). The combined effects of grade retention and targeted small-group intervention on students' literacy outcomes. Reading & Writing Quarterly: Overcoming Learning Difficulties, 26(1), 4-25.

Al Otaiba, S., Connor, C., Lane, H., Kosanovich, M.L., Schatschneider, C., Dyrlund, A.K., Miller, M.S., & Wright, T.L. (2008). Reading First kindergarten classroom instruction and students' growth in phonological awareness and letter naming–decoding fluency. Journal of School Psychology, 46(3), 281-314.

Al Otaiba, S., Puranik, C.S., Rouby, D.A., Greulich, L., Sidler, J.F., & Lee, J. (2010). Predicting kindergarteners' end-of-year spelling ability based on their reading, alphabetic, vocabulary, and phonological awareness skills as well as prior literacy experiences. Learning Disability Quarterly, 33(3), 171-183.

Alber-Morgan, S. (2010). Using RTI to teach literacy to diverse learners, K-8: Strategies for the inclusive classroom. Thousand Oaks, CA: Corwin Press.

Algozzine, R., McCart, A., & Goodman, S. (2011, October 27). Teaching academics and behavior: What comes first? [Solving the chicken-or-the-egg dilemma]. Presentation at the National PBIS Leadership Forum, Hyatt Regency O’Hare, Rosemont, IL.

Allan, S. D., & Goddard, Y. L. (2010). Differentiated instruction and RTI: A natural fit. Educational Leadership, 68(2). Retrieved from http://www.ascd.org/publications/educational-leadership/oct10/vol68/num02/Differentiated-Instruction-and-RTI@-A-Natural-Fit.aspx

Australian Institute for Teaching and School Leadership. (2011). Initial Teacher Education Program Accreditation. Retrieved from http://www.aitsl.edu.au/teachers/accreditation-of-initial-teacher-education/initial-teacher-education-program-accreditation.html

Ball, C. R., & Christ, T. J. (2012). Supporting valid decision making: Uses and misuses of assessment data within the context of RTI. Psychology in the Schools, 49(3), 231–244.

Balu, R., Zhu, P., Doolittle, F., Schiller, E., Jenkins, J., & Gersten, R. (2015). Evaluation of response to intervention practices for elementary reading (NCEE 2016‐4000). Washington, DC: U.S. Department of Education, Institute of Education Sciences

Baskette, M.R., Ulmer, L., & Bender, W.N. (2006, Fall). The emperor has no clothes! Unanswered questions and concerns on the response to intervention procedure. The Journal of the American Academy of Special Education Professionals, Fall, 4-24. Retrieved from http://www.naset.org/fileadmin/user_upload/JAASEP/Response_to_Intervention_Procedure.pdf

Binder, C., Haughton, E., & Bateman, B. (2002). Fluency: Achieving true mastery in the learning process. Retrieved from www.fluency.org/Binder_Haughton_Bateman.pdf

Bradley, R., Danielson, L., & Doolittle, J. (2005). Response to intervention. Journal of Learning Disabilities, 38(6), 485-486.

Campbell, L., Campbell, C., & Dickinson, D. (1999). Teaching and learning through the multiple intelligences (2nd ed.). Needham Heights: Allyn and Bacon.

Carnine, D. (1991). Curricular interventions for teaching higher order thinking to all students: Introduction to the special series. Journal of Learning Disabilities, 24, 261-269.

Carnine, D. (2000). Why education experts resist effective practices (and what it would take to make education more like medicine). Washington, DC: Fordham Foundation. Retrieved from http://www.sc-boces.org/english/IMC/Focus/DirectInstruction-carnine_article.pdf

Carnine, D. (2003, Mar 13). IDEA: Focusing on improving results for children with disabilities. Hearing before the Subcommittee on Education Reform Committee on Education and the Workforce United States House of Representatives. Retrieved from http://edworkforce.house.gov/hearings/108th/edr/idea031303/carnine.htm

Cassidy, J., Ortlieb, E., & Shettel, J. (2012). What’s hot and what’s not? 2011 results. International Reading Association. Retrieved from http://www.reading.org/General/Publications/ReadingToday/RTY-decjan-20102011-surveyprimary.aspx

Chall, J. S. (1967). Learning to read: The great debate. New York: McGraw-Hill.

Chambless, D. L. & Ollendick, T. H. (2001). Empirically supported psychological interventions: Controversies and evidence. Annual Review of Psychology, 52, 685-716.

Conversations in literacy (2012). Guiding students to better reading. Retrieved from http://conversationsinliteracy.blogspot.com.au/2012/01/rti-moving-thru-tiers-without-moving.html

Cuttance, P. (2005). Education research 'irrelevant.' The Age, July 5, p.5.

de Haan, M. (2021). Supporting struggling adolescent readers through the Response to Intervention (RTI) framework. Australian Journal of Learning Difficulties, 26(1), 47-66. doi:10.1080/19404158.2020.1870512

Department for Children, Schools, and Families. (2009). Every child a reader – the layered approach. Retrieved from http://readingrecovery.ioe.ac.uk/documents/Layered_approach.pdf

Department for Education. (2012). Year 1 phonics screening check FAQs. Retrieved from http://www.education.gov.uk/schools/teachingandlearning/pedagogy/a00198207/faqs-year-1-phonics-screening-check#faq1

Elbaum, B., Vaughn, S., Hughes, M.T., & Moody, S.W. (2000). How effective are one-to-one tutoring programs in reading for elementary students at risk for reading failure? A meta-analysis of the intervention research. Journal of Educational Psychology, 92(4), 605-619.

Epps, S., & Tindal, G. (1988). The effectiveness of differential programming in serving students with mild handicaps: Placement options and instructional programming. In M. C. Wang, M. C. Reynolds, & H. J. Walberg (Eds.), Handbook of special education: Research and practice, Volume I (pp. 213–248). New York: Pergamon Press.

Esparza Brown, J., & Doolittle, J. (2008). A cultural, linguistic, and ecological framework for Response to Intervention with English Language Learners. National Center for Culturally Responsive Education Systems (NCCREST). Retrieved from http://www.nccrest.org/Briefs/Framework_for_RTI.pdf

Fielding, L., Kerr, N., & Rosier, P. (2007). Annual growth for all students, catch-up growth for those who are behind. Kennewick, WA: The New Foundation Press, Inc.

Fielding-Barnsley, R. (2010). Australian pre-service teachers' knowledge of phonemic awareness and phonics in the process of learning to read. Australian Journal of Learning Difficulties, 15(1), 99-110.

Fielding-Barnsley, R., & Purdie, N. (2005). Teachers' attitude to and knowledge of metalinguistics in the process of learning to read. Asia-Pacific Journal of Teacher Education, 33(1), 65-76.

Fisher, B.J., Bruce, M.E., & Greive, C. (2007). The entry knowledge of Australian pre-service teachers in the area of phonological awareness and phonics. In A Simpson (Ed.). Future directions in literacy: International conversations 2007. University of Sydney. Retrieved from http://ses.library.usyd.edu.au/bitstream/2123/2330/1/FutureDirections_Ch5.pdf

Fletcher, J. M., Francis, D. J., Shaywitz, S. E., Lyon, G. R., Foorman, B. R., Stuebing, K. K., & Shaywitz, B. A. (1998). Intelligent testing and the discrepancy model for children with learning disabilities. Learning Disabilities Research and Practice, 13(4), 186-203.

Fox, L., Carta, J., Strain, P.S., Dunlap, G., & Hemmeter, M.L. (2010). Response to intervention and the pyramid model. Infants & Young Children, 23(1), 3–13.

Freebody, P., & Luke, A. (1990). Literacies programs: Debates and demands in cultural context. Prospect, 5(3), 7–16.

Fuchs, L.S., Fuchs, D., Hosp, M.K., & Jenkins, J.R. (2001). Oral reading fluency as an indicator of reading competence: A theoretical, empirical, and historical analysis. Scientific Studies of Reading, 5(3), 239-256.

Galletly, S.A., & Knight, B.A. (2006). The Dynamic Indicators of Basic Early Literacy Skills (DIBELS) used in an Australian context. Australian Journal of Learning Disabilities, 11(3), 147-154.

Gersten, R., Compton, D., Connor, C.M., Dimino, J., Santoro, L., Linan-Thompson, S., and Tilly, W.D. (2008). Assisting students struggling with reading: Response to Intervention and multi-tier intervention for reading in the primary grades. A practice guide. (NCEE 2009-4045). Washington, DC: National Center for Education Evaluation and Regional Assistance, Institute of Education Sciences, U.S. Department of Education. Retrieved from http://ies.ed.gov/ncee/wwc/publications/practiceguides/

Good, R. H., & Kaminski, R. A. (2002). Dynamic indicators of basic early literacy skills (6th ed.). Eugene, OR: Institute for Development of Educational Achievement.

Good, R.H., & Kaminski, R.A. (1996). Assessment for instructional decisions: Toward a proactive/prevention model of decision-making for early literacy skills. School Psychology Quarterly, 11(4), 326-336.

Goyen, J. D. (1992). Diagnosis of reading problems: Is there a case? Educational Psychology, 12(3/4), 225-237.

Gresham, F. (2001, August). Response to Intervention: An alternative approach to the identification of learning disabilities. Paper presented at the Learning Disabilities Summit: Building a Foundation for the Future (Washington, DC, August 27-28, 2001).

Gresham, F.M., & Vellutino, F.R. (2010). What is the role of intelligence in the identification of specific learning disabilities? Issues and clarifications. Learning Disabilities Research & Practice, 25(4), 194–206.

Haager, D. (2007). Promises and cautions regarding using response to intervention with English language learners. Learning Disability Quarterly, 30(3), 213-218.

Haager, D., & Mahdavi, J. (2007). Teacher roles in implementing intervention. In D. Haager, J. Klingner, & S. Vaughn (Eds.), Evidence-based reading practices for response to intervention (pp. 245-264). Baltimore, MD: Paul H. Brookes.

Hale, J., Alfonso, V., Berninger, V., Bracken, B., Christo, C., Clark, E., Cohen, M., Davis, A., Decker, S., Denckla, M., Dumont, R., Elliott, C. Feifer, S., Fiorello, C., Flanagan, D., Fletcher-Janzen, E., Geary, D., Gerber, M., Gerner, M., Goldstein, S., Gregg, N., Hagin, R., Jaffe, L., Kaufman, A., Kaufman, N., Keith, T., Kline, F., Kochhar-Bryant, C., Lerner, J., Marshall, G., Mascolo, J., Mather, N., Mazzocco, M., McCloskey, G., McGrew, K., Miller, D., Miller, J., Mostert, M., Naglieri, J., Ortiz, S., Phelps, L., Podhajski, B., Reddy, L., Reynolds, C., Riccio, C., Schrank, F., Schultz, E., Semrud-Clikeman, M., Shaywitz, S., Simon, J., Silver, L., Swanson, L., Urso, A., Wasserman, T., Willis, J., Wodrich, D., Wright, P., & Yalof, J. (2010). Critical issues in Response-To-Intervention, comprehensive evaluation, and specific learning disabilities identification and intervention: An expert white paper consensus. Learning Disability Quarterly, 33, 223-236. Retrieved from http://www.iapsych.com/articles/hale2010.pdf

Harn, B.A., Jamgochian, E., & Parisi, D.M. (2009). Characteristics of students who don’t respond to research-based interventions. Council for Exceptional Children. Retrieved from http://www.cec.sped.org/AM/Template.cfm?Section=CEC_Today1&TEMPLATE=/CM/ContentDisplay.cfm&CONTENTID=10645

Hasbrouck, J., & Tindal, G. (2005). Oral reading fluency: Ninety years of measurement (BRT Technical Report No. 33). Eugene, OR: Behavioral Research and Teaching, University of Oregon. Retrieved from http://brt.uoregon.edu/techreports/

Hazelkorn, M., Bucholz, J.L., Goodman, J.I., Duffy, M.L., & Brady, M.P. (2011). Response to Intervention: General or special education? Who is responsible? The Educational Forum, 75(1), 17-25.

Hempenstall, K. (2006). What does evidence-based practice in education mean? Australian Journal of Learning Disabilities, 11(2), 83-92.

Hernández Finch, M. E. (2012). Special considerations with response to intervention and instruction for students with diverse backgrounds. Psychology in the Schools, 49(3), 285–296.

Hoover, J.J., Baca, L., Wexler-Love, E., & Saenz, L. (2008). National implementation of Response to Intervention (RTI): Research summary. National Association of State Directors of Special Education. Retrieved from http://www.nasdse.org/Portals/0/NationalImplementationofRTI-ResearchSummary.pdf

Hosp, M.K. (2007). The ABCs of progress monitoring in reading. National Center on Student Progress Monitoring. Retrieved from http://www.studentprogress.org/weblibrary.asp#reading

Hughes, C. E., Rollins, K., & Coleman, M. R. (2011). Response to intervention for gifted learners. In M. Coleman & S. Johnsen (Eds.), RtI for gifted students: A CEC-TAG educational resource (pp. 1-20). Waco, TX: Prufrock Press

Ikeda, M.J., Rahn-Blakeslee, A., Niebling, B.C., Allison, R., & Stumme, J. (2006). Evaluating evidence-based practice in response-to-intervention systems. NASP Communiqué, 34(8). Retrieved from http://www.nasponline.org/publications/cq/cq348evaloutcomes.aspx

International Literacy Association. (2020). What’s hot in literacy report. Newark, DE: Author. https://www.literacyworldwide.org/docs/default-source/resource-documents/whatshotreport_2020_final.pdf

Isaacs, D., & Fitzgerald, D. (1999). Seven alternatives to evidence based medicine. British Medical Journal, 319(7225), 1618.

Jones, R. E., & Ball, C. R. (2012). Introduction to the special issue: Addressing response to intervention implementation: Questions from the field. Psychology in the Schools, 49(3), 207–209.

Jones, R. E., Yssel, N., & Grant, C. (2012). Reading instruction in tier 1: Bridging the gaps by nesting evidence-based interventions within differentiated instruction. Psychology in the Schools, 49(3), 210–218.

Joseph, L. M., & Schisler, R. (2009). Should adolescents go back to the basics? A review of teaching word reading skills to middle and high school students. Remedial and Special Education, 30(3), 131-147.

Joyce, B., & Showers, B. (2002). Student achievement through staff development (3rd Ed.). White Plains, NY: Longman Publishers.

Kavale, K. A., & Mostert, M. P. (2003). River of ideology, islands of evidence. Exceptionality, 11(4), 191–208.

Kirk, S. A. (1963). Behavioral diagnosis and remediation of learning disabilities. Proceedings of the annual meeting of the Conference on Exploration into the Problems of the Perceptually Handicapped Child (1), 1-23. Evanston, IL.

Koellner, K., Colsman, M., & Risley, R. (2011). Multidimensional assessment: Guiding response to intervention in mathematics. Teaching Exceptional Children, 44(2), 48-56.

Koutsoftas, A. D., Harmon, M. T., & Gray, S. (2009). The effects of tier 2 intervention for phonemic awareness in a response-to-intervention model in low income preschool classrooms. Language, Speech and Hearing Services in Schools, 40(2), 116-130.

Kranzler, J. H., Yaraghchi, M., Matthews, K., & Otero-Valles, L. (2020). Does the response-to-intervention model fundamentally alter the traditional conceptualization of specific learning disability? Contemporary School Psychology, 24, 80-88.

Kuhn, M.R., & Stahl, S.A. (2003). Fluency: A review of developmental and remedial practices. Journal of Educational Psychology, 95(1), 3-21.

Lane, K. L., Oakes, W. P., & Menzies, H. M. (2010). Systematic screenings to prevent the development of learning and behavior problems: Considerations for practitioners, researchers, and policy makers. Journal of Disabilities Policy Studies, 21, 160-172.

Lembke, E. S., Hampton, D., & Beyers, S. J. (2012). Response to intervention in mathematics: Critical elements. Psychology in the Schools, 49(3), 257–272.

Lilienfeld, S.O., Ammirati, R., & David, M. (2012). Distinguishing science from pseudoscience in school psychology: Science and scientific thinking as safeguards against human error. Journal of School Psychology, 50(1), 7-36.

Lindsley, O. R. (1990). Precision Teaching: By children for teachers. Teaching Exceptional Children, 22(3), 10-15.

Linklater, D.L., O'Connor, R.E., & Palardy, G.J. (2009). Kindergarten literacy assessment of English Only and English language learner students: An examination of the predictive validity of three phonemic awareness measures. Journal of School Psychology, 47(6), 369-394.

Lomax, R.G. (2004). Whither the future of quantitative literacy research? Reading Research Quarterly, 39(1), 107-112.

Louden, W., Rohl, M., Gore, J., Greaves, D., Mcintosh, A., Wright, R., Siemon, D., & House, H. (2005). Prepared to teach: An investigation into the preparation of teachers to teach literacy and numeracy. Canberra, ACT: Australian Government Department of Education, Science and Training.

Lyon, G.R. (1999). Special education a failure on many fronts. LA Times, 12/12/1999. Retrieved from http://articles.latimes.com/1999/dec/12/news/mn-43238

Lyon, G.R., & Fletcher, J.M. (2003). Early identification, prevention, and early intervention for children at-risk for reading failure. Basic Education, Council for Basic Education. Retrieved from http://www.cdl.org/resource-library/articles/early_id.php?type=subject&id=10

Manolitsis, G., & Tafa, E. (2011). Letter-name letter-sound and phonological awareness: Evidence from Greek-speaking kindergarten children. Reading and Writing: An Interdisciplinary Journal, 24(1), 27-53.

Mastropieri, M.A., Leinart, A.W., & Scruggs, T.E. (1999). Strategies to increase reading fluency. Intervention in School and Clinic, 34(5), 278-283, 292.

McDougal, J., Clark, K., & Wilson, J. (n.d.). Graphing made easy: Practical tools for school psychologists. State University of New York at Oswego. Retrieved from http://www.oswego.edu/~mcdougal/web_site_4_11_2005/index.html

McLeskey, J., & Waldron, N. (2011). Educational programs for elementary students with learning disabilities: Can they be both effective and inclusive? Learning Disabilities Research & Practice, 26(1), 48–57.

McMaster, K. L., Kung, S., Han, I., & Cao, M. (2008). Peer-assisted learning strategies: A "tier 1" approach to promoting English learners' response to intervention. Exceptional Children, 74(2), 194-214.

McMaster, K.L., Ritchey, K.D., & Lembke, E. (2011). Curriculum-based measurement for beginning writers: Recent developments and future directions. In T.E. Scruggs & M.A. Mastropieri (Eds.), Assessment and intervention (Advances in Learning and Behavioral Disabilities, Vol. 24, pp. 111-148). Emerald Group Publishing Limited.

Mellard, D. (2010). Fidelity of implementation within a Response to Intervention (RtI) framework. National Center on Response to Intervention. Retrieved from http://www.ped.state.nm.us/rti/dl11/11-Fidelity%20of%20Implementation%20guidev5.pdf

Mellard, D., McKnight, M., & Jordan, J. (2010). RTI tier structures and instructional intensity. Learning Disabilities Research & Practice, 25(4), 217–225.

Mitchell, B.S., Stormont, M., & Gage, N.A. (2011). Tier two interventions implemented within the context of a tiered prevention framework. Behavioral Disorders, 36(4), 241-261.

Morris, R.D., Lovett, M.W., Wolf, M., Sevcik, R.A., Steinbach, K.A., Frijters, J.C., & Shapiro, M.B. (2012). Multiple-component remediation for developmental reading disabilities: IQ, socioeconomic status, and race as factors in remedial outcome. Journal of Learning Disabilities, 45(2), 99-127.

Nagy, W. E., & Anderson, R. C. (1984). How many words are there in printed English? Reading Research Quarterly, 19(3), 304-330.

National Center on Response to Intervention. (2010). Essential components of RTI—A closer look at response to intervention. Washington, DC: U.S. Department of Education, Office of Special Education Programs, National Center on Response to Intervention.

National College for School Leadership (2010). About personalised learning. Retrieved from http://www.nationalcollege.org.uk/index/leadershiplibrary/leadingschools/personalisedlearning/about-personalised-learning.htm

National High School Center, National Center on Response to Intervention, and Center on Instruction. (2010). Tiered interventions in high schools: Using preliminary ‘lessons learned’ to guide ongoing discussion. Washington, DC: American Institutes for Research. Retrieved from www.rti4success.org

National Inquiry into the Teaching of Literacy. (2005). Teaching Reading: National Inquiry into the Teaching of Literacy. Canberra: Department of Education, Science, and Training. Retrieved from www.dest.gov.au/nitl/report.htm

Primary National Strategy. (2006). Leading on Intervention: A resource to support leadership teams and leading teachers. Retrieved from http://www.teachfind.com/national-strategies/waves-intervention-model-0

National Reading Panel. (2000). Teaching children to read: An evidence-based assessment of the scientific research literature on reading and its implications for reading instruction. Washington, DC: U.S. Department of Health and Human Services.

Nellis, L. M. (2012). Maximizing the effectiveness of building teams in response to intervention implementation. Psychology in the Schools, 49(3), 245–256.

O'Connor, E. P., & Freeman, E. W. (2012). District-level considerations in supporting and sustaining RtI implementation. Psychology in the Schools, 49(3), 297–310.

O'Connor, R.E., Bell, K.M., Harty, K.R., Larkin, L.K., Sackor, S.M., & Zigmond, N. (2002). Teaching reading to poor readers in the intermediate grades: A comparison of text difficulty. Journal of Educational Psychology, 94(3), 474-485.

OECD. (2004). Adults at low literacy level (most recent) by country. International Adult Literacy Survey (IALS). Retrieved from http://www.nationmaster.com/graph/edu_lit_adu_at_low_lit_lev-education-literacy-adults-low-level

Orosco, M.J., & Klingner, J. (2010). One school’s implementation of RTI with English Language Learners: “Referring into RTI”. Journal of Learning Disabilities, 43(3) 269–288.

Psychological Corporation (2007). AIMSweb progress monitoring and response to intervention system. San Antonio, TX: Pearson. Retrieved from www.aimsweb.com

Pyle, N., & Vaughn, S. (2012). Remediating reading difficulties in a response to intervention model with secondary students. Psychology in the Schools, 49(3), 273–284.

Rasinski, T., Homan, S., & Biggs, M. (2009). Teaching reading fluency to struggling readers: Method, materials, and evidence. Reading & Writing Quarterly, 25(2), 192-204.

Reschly, A.L., Busch, T.W., Betts, J., Deno, S.L., & Long, J.D. (2009). Curriculum-based measurement oral reading as an indicator of reading achievement: A meta-analysis of the correlational evidence. Journal of School Psychology, 47(6), 427-469.

Roberts, G., Torgesen, J.K., Boardman, A., & Scammacca, N. (2008). Evidence-based strategies for reading instruction of older students with learning disabilities. Learning Disabilities Research & Practice, 23(2), 63–69.

Rohl, M., & Greaves, D. (2005). How are pre-service teachers in Australia being prepared for teaching literacy and numeracy to a diverse range of students? Australian Journal of Learning Disabilities, 10(1), 3-8.

Rorris, A., Weldon, P., Beavis, A., McKenzie, P., Bramich, M., & Deery, A. (2011). Assessment of current process for targeting of schools funding to disadvantaged students. An Australian Council for Educational Research report prepared for The Review of Funding for Schooling Panel.

Scarborough, H. S. (1998). Early identification of children at risk for reading disabilities: Phonological awareness and some other promising predictors. In B. K. Shapiro, P. J. Accardo, & A. J. Capute (Eds.), Specific reading disability: A view of the spectrum (pp. 75-119). Timonium, MD: York Press.

Schatschneider, C., Francis, D. J., Foorman, B. R., Fletcher, J. M., & Mehta, P. (1999). The dimensionality of phonological awareness: An application of item response theory. Journal of Educational Psychology, 91(3), 439-449.

Senate Employment, Workplace Relations and Education Committee. (2007). Quality of school education. Commonwealth of Australia. Retrieved from http://www.aph.gov.au/SEnate/committee/eet_ctte/completed_inquiries/2004-07/academic_standards/index.htm

Yeo, S., Kim, D.-I., Branum-Martin, L., Wayman, M.M., & Espin, C.A. (2012). Assessing the reliability of curriculum-based measurement: An application of latent growth modelling. Journal of School Psychology, 50(2), 275-292.

Sideridis, G.D. (2011). Exploring the presence of Matthew Effects in learning disabilities. Journal of Learning Disabilities, 44(4), 399-401.

Siegel, L. S. (1989). IQ is irrelevant to the definition of learning disabilities. Journal of Learning Disabilities, 22(8), 469-479.

Siegel, L. S. (1992). An evaluation of the discrepancy definition of dyslexia. Journal of Learning Disabilities, 25(10), 618–629.

Smith, F. (1992). Learning to read: The never-ending debate. Phi Delta Kappan, 73(6), 432-441.

Solomon, B. G., Klein, S. A., Hintze, J. M., Cressey, J. M., & Peller, S. L. (2012). A meta-analysis of school-wide positive behavior support: An exploratory study using single-case synthesis. Psychology in the Schools, 49(2), 105–121.

Spectrum K12. (2012). 2011 RTI adoption survey. Retrieved from www.globalscholar.com/2011RTI

Speece, D. L., Mills, C., Ritchey, K. D., & Hillman, E. (2003). Initial evidence that letter fluency tasks are valid indicators of early reading skill. Journal of Special Education, 36, 223-233.

Stage, S.A., Sheppard, J., Davidson, M.M., & Browning, M.M. (2001). Prediction of first-graders' growth in oral reading fluency using kindergarten letter fluency. Journal of School Psychology, 39(3), 225-237.

Stainback, W., & Stainback, S. (Eds.) (1992). Controversial issues confronting special education: Divergent perspectives. Boston: Allyn and Bacon.

Stanovich, K. E. (1986). Matthew effects in reading: Some consequences of individual differences in the acquisition of literacy. Reading Research Quarterly, 21, 360-406. 

Stanovich, K. E. (1988a). Explaining the differences between the dyslexic and the garden-variety poor reader: The phonological-core variable-difference model. Journal of Learning Disabilities, 21(10), 590-612.

Stanovich, K. E. (1988b). The right and wrong places to look for the cognitive locus of reading disability. Annals of Dyslexia, 38, 154-157. 

Stanovich, K. E. (1991). Discrepancy definitions of reading disability: Has intelligence led us astray? Reading Research Quarterly, 26(1), 7–29.

Stanovich, K. E. (1992). Speculation on the causes and consequences of individual differences in early reading acquisition. In Phillip P. Gough, Linnea C Ehri, & Rebecca Treiman (Eds.), Reading acquisition. (pp.307-341). USA: Lawrence Erlbaum.

Stanovich, K. E. (1993). Does reading make you smarter? Literacy and the development of verbal intelligence. In H. Reese (Ed.), Advances in child development and behavior (Vol. 24, pp. 133-180). San Diego, CA: Academic Press.

Stecker, P.M., & Lembke, E.S. (2007). Advanced applications of CBM in reading (K–6): Instructional decision-making strategies manual. National Center on Student Progress Monitoring. Retrieved from http://www.studentprogress.org/summer_institute/2007/Adv%20Reading/AdvRdgManual_2007.doc

Stockard, J. (2010). An analysis of the fidelity implementation policies of the What Works Clearinghouse. Current Issues in Education, 13(4). Retrieved from http://cie.asu.edu/

Strong, R. W., Silver, H. F., & Perini, M. J. (2001). Making students as important as standards. Educational Leadership, 59(3), 56-61.

Stuart, M. (1995). Prediction and qualitative assessment of five and six-year-old children's reading: A longitudinal study. British Journal of Educational Psychology, 65, 287-296.

Stuebing, K. K., Fletcher, J. M., LeDoux, J. M., Lyon, G. R., Shaywitz, S. E., & Shaywitz, B. A. (2002). Validity of IQ-discrepancy classifications of reading disabilities: A meta-analysis. American Educational Research Journal, 39(2), 469–518.

Sugai, G., Simonsen, B., Coyne, M., & Faggella-Luby, M. (2007). A school improvement framework for promoting evidence-based academic and behavior supports. Presentation at Closing the Achievement Gap Conference, University of Connecticut, Storrs. Retrieved from http://www.pbis.org/pbis_resource_detail_page.aspx?Type=1&PBIS_ResourceID=143

Swanson, H.L. (2001). Research on interventions for adolescents with learning disabilities: A meta-analysis of outcomes related to higher-order processing. The Elementary School Journal, 101(3), 331-348.

Swanson, H.L., & Hoskyn, M. (1998). Experimental intervention research on students with learning disabilities: A meta-analysis of treatment outcomes. Review of Educational Research, 68(3), 277-321.

Swanson, H.L., & O’Connor, R.E. (2009). The role of working memory and fluency training on reading comprehension in children who are dysfluent readers. Journal of Learning Disabilities, 42, 548-575.

Tomlinson, C. (1999). The differentiated classroom: Responding to the needs of all learners. Alexandria, VA: ASCD.

Tomlinson, C. A., & Allan, S. (2000). Leadership for differentiating schools and classrooms. Alexandria, VA: ASCD.

Torgesen, J. (2003). Using science, energy, patience, consistency, and leadership to reduce the number of children left behind in reading. Barksdale Reading Institute, Florida. Retrieved from http://www.fcrr.org/staffpresentations/Joe/NA/mississippi_03.ppt

Torgesen, J., Myers, M., Schirm, A., Stuart, E., Vartivarian, S., Mansfield, W., Stancavage, F., Durno, D., Javorsky, R., & Haan, C. (2006). National Assessment of Title I Interim Report to Congress: Volume II: Closing the Reading Gap, First Year Findings from a Randomized Trial of Four Reading Interventions for Striving Readers. Washington, DC: U.S. Department of Education, Institute of Education Sciences.

Torgesen, J.K. (1998, Spring/Summer). Catch them before they fall: Identification and assessment to prevent reading failure in young children. American Educator. Retrieved from http://www.ldonline.org/article/225/

Tunmer, W., & Greaney, K. (2010). Defining dyslexia. Journal of Learning Disabilities, 43(3), 229–243.

UNICEF. (2002). A league table of educational disadvantage in rich nations. Innocenti Report Card No.4, November 2002. Florence, Italy: Innocenti Research Centre.

Vadasy, P.F., Jenkins, J.R., & Pool, K. (2000). Effects of tutoring in phonological and early reading skills on students at risk for reading disabilities. Journal of Learning Disabilities, 33, 579-590.

van Kraayenoord, C.E. (2010). Response to intervention: New ways and wariness. Reading Research Quarterly, 45(3), 363-376.

VanDerHeyden, A. M., Snyder, P. A., Broussard, C., & Ramsdell, K. (2007). Measuring response to early literacy intervention with preschoolers at risk. Topics in Early Childhood Special Education, 27(4), 232-249.

Vaughn, S., & Dammann, J.E. (2001). Science and sanity in special education. Behavioral Disorders, 27(1), 21–29.

Vaughn, S., Cirino, P. T., Wanzek, J., Wexler, J., Fletcher, J. M., Denton, C. A., Barth, A.E., & Francis, D. J. (2010). Response to intervention for middle school students with reading difficulties: Effects of a primary and secondary intervention. School Psychology Review, 39(1), 3-21.

Vaughn, S., Denton, C. A., & Fletcher, J. M. (2010). Why intensive interventions are necessary for students with severe reading difficulties. Psychology in the Schools, 47(5), 432–444.

Vaughn, S., Fletcher, J. M., Francis, D. J., Denton, C. A., Wanzek, J., Wexler, J., Cirino, P.T., Barth, A.E., & Romain, M. A. (2008). Response to intervention with older students with reading difficulties. Learning and Individual Differences, 18(3), 338-345.

Vaughn, S., Wanzek, J., Wexler, J., Barth, A.E., Cirino, P.T., Fletcher, J.M., Romain, M.A., Denton, C.A., Roberts, G., & Francis, D.J. (2010). The relative effects of group size on reading progress of older students with reading difficulties. Reading and Writing: An Interdisciplinary Journal, 23(8), 931-956.

Vellutino, F. R., Scanlon, D. M., Sipay, E. R., Small, S., Chen, R., Pratt, A., & Denckla, M. B. (1996). Cognitive profiles of difficult-to-remediate and readily remediated poor readers: Early intervention as a vehicle for distinguishing between cognitive and experiential deficits as basic causes of specific reading disability. Journal of Educational Psychology, 88(4), 601-638.

Vervaeke, S.-L., McNamara, J. K., & Scissons, M. (2007). Kindergarten screening for reading disabilities. Journal of Applied Research on Learning, 1(1), 1-19.

Victorian Auditor-General. (2009). Literacy and numeracy achievement. Retrieved from http://www.audit.vic.gov.au/reports__publications/reports_by_year/2009/20090204_literacy_numeracy/1_executive_summary.aspx

Wanzek, J., & Vaughn, S. (2008). Response to varying amounts of time in reading intervention for students with low response to intervention. Journal of Learning Disabilities, 41(2), 126-142.

Weaver, C. (1988). Reading process and practice. Portsmouth, NH: Heinemann.

Wedl, R. (2005). Response to intervention: An alternative to traditional eligibility criteria for students with disabilities. Retrieved from http://www.educationevolving.org/pdf/Response_to_Intervention.pdf

Xu, Y., & Drame, E. (2008). Culturally appropriate context: Unlocking the potential of response to intervention for English Language Learners. Early Childhood Education Journal, 35(4), 305-311.
