
Dr Kerry Hempenstall, Senior Industry Fellow, School of Education, RMIT University, Melbourne, Australia.

 Updated version of: Hempenstall, K. (2009). Research-driven reading assessment: Drilling to the core. Australian Journal of Learning Difficulties, 14(1), 17-52.

All my blogs can be viewed on-line or downloaded as a Word file or PDF at https://www.dropbox.com/sh/olxpifutwcgvg8j/AABU8YNr4ZxiXPXzvHrrirR8a?dl=0


 

There is considerable interest and controversy in both the US and Australian communities concerning how our students are faring in the task of mastering reading, and how our education system deals with such a vital skill. There is concern in both countries that national and international comparisons have not been flattering.

 

For decades, the National Assessment of Educational Progress (known as “The Nation’s Report Card”) has shown that 2 out of 3 students are not proficient in reading by the end of the 4th grade, the first year of schooling in which reading is an essential tool skill. Over 1 out of 3 (more than a million students per year) fail to reach even the basic skill level (Stone, 2013, p.3).

There is a current public perception that either educational outcomes for students have been declining or that the education system is increasingly less able to meet rising community and employer expectations (Jones, 2012).

[Figure: Gallup poll data on confidence in US public schools (Jones, 2012).]

Parental concerns about literacy are becoming increasingly evident in Australia, too. In the Parents’ Attitudes to Schooling report (Department of Education, Science and Training, 2007), only 37.5% of the surveyed parents believed that students were leaving school with adequate skills in literacy. There has been an increase in dissatisfaction since the previous Parents' Attitudes to Schooling survey in 2003, when 61% of parents considered primary school education as good or very good, and 51% reported secondary education as good or very good. Recent reports in the press suggest that employers too have concerns about literacy development among young people generally, not simply for those usually considered to comprise an at-risk group (Collier, 2008).


Concerns about public education are not new; however, their focus in recent times has shifted. Concerns that have arisen over the last 20 years include apparent national and state test score declines, unflattering international achievement comparisons, the failure of funding increases to produce discernible results, high school dropout rates, and a perception that schooling and work are insufficiently closely aligned (Levin, 1998). The press has displayed increased interest in highlighting these issues, thus raising community awareness. Moreover, the expanding role of both national and international assessments has brought the issue further into public view.

There appears to be a public perception that there are serious problems in the education system’s capacity to meet community expectations. In the past, some teacher organisations have argued that these issues should be left in the hands of teachers, and that the school performance of students is quite acceptable compared with that of other nations. Numerous surveys and reports have reached quite different conclusions. Further, concern has been expressed about the academic quality of those accepted into education faculties (Leigh & Ryan, 2006) and about the adequacy of the training teachers receive to enable them to deal with the diverse needs of the student population.

In the 2011 NAEP writing results, only about one-quarter of the 8th and 12th graders performed at the proficient level or higher, with much lower proportions for black and Hispanic students (Fleming, 2013). Fewer than one-third of American 8th graders were deemed proficient in science (Sparks, 2013), or could read well enough to comprehend their textbooks (National Center for Educational Statistics, 2011). In reading, results have shown some improvement since the nineties, but it is clearly difficult to elevate scores dramatically (National Center for Education Statistics, 2011).

In the OECD Programme for International Student Assessment (PISA), the US can be seen to be performing below some nations with lesser GDPs (Tucker, 2011).

[Figure: Reading performance on PISA (Tucker, 2011).]

On the 2011 Progress in International Reading Literacy Study (PIRLS), the results (only Grade 4 is assessed) were better than in 2006. The average score for U.S. fourth-grade students (556) was higher than the PIRLS scale average, which is set to 500. Of the 52 other education systems participating, 5 had higher average scores than the United States (Hong Kong-China, Florida (participating as an independent entity), the Russian Federation, Finland, and Singapore).

In a report to the Office of Educational Research and Improvement, Snow (2002) noted that U.S. students are falling behind students in other comparable countries because underdeveloped basic skills limit their attainment in the challenging subject-specific demands of the secondary school curriculum.

In Australia, such broad scale assessment is a more recent phenomenon. Begun initially at a State level, it has now expanded to the national level with the National Assessment Program - Literacy and Numeracy (NAPLAN), and to the international level with the Progress in International Reading Literacy Study (PIRLS) and the Programme for International Student Assessment (PISA).

In the 2008 NAPLAN assessment, 19.6 per cent of Australian students were at or below the national minimum standard in reading, and 18.7 per cent were at or below the standard in numeracy.

Also in 2012, the international PIRLS tests revealed that almost 25 per cent of Year 4 children in Australia failed to meet the standard in reading for their age. The report released by the Australian Council for Educational Research (ACER, 2012) reveals disappointing results for Australia in this latest international study of mathematics and science achievement, and in Australia’s first ever international assessment of reading at primary school level. Australian Year 4 students ranked 27th among the 53 nations involved, outperformed by other English-speaking nations such as England, the US and New Zealand. As this is the first time Australia has been involved in PIRLS, some consternation has followed the results.

Other international data (PISA) indicated a decline in reading (2000–2009) and mathematics (2003–2009) (Australian National Audit Office, 2012).

Although the OECD average for reading literacy has not changed between 2000 and 2009, ten countries have significantly improved their performance over this time, while five countries, including Australia, have declined significantly. … Australia’s reading literacy performance has declined, not only in terms of rankings among other participating countries but also in terms of average student performance. The mean scores for Australian students in PISA 2000 was 528 points, compared to 515 for PISA 2009. A decline in average scores was also noted between PISA 2000 and PISA 2006, when reading literacy was a minor domain (ACER, 2010).

Releasing the results, ACER Chief Executive Professor Geoff Masters said, “To say the results are disappointing is an understatement” (ACER, 2012, p.1).

National and international assessments have the potential to provide some sense of how our children are faring in their education. However, there are limitations to the value of this style of testing, particularly when the only tasks included are those intended to assess reading comprehension. It is unquestionably a major function of reading, but not the only important component. Further, the benchmarks chosen for the various levels of proficiency are not always transparent, and are thus open to manipulation. Such occurrences have been reported in the past (Adkins, Kingsbury, Dahlin, & Cronin, 2007), as have cheating by teachers in the administration and scoring of tests (HuffPost Education, 2011; 2013). There can even be marked differences in results reported nationally and locally: “According to the National Assessment of Educational Progress, the percentage of students who are proficient in basic reading and math are roughly half of the rates reported by the states” (Stone, 2013, p.5).

Over recent years in the USA, the reading and/or maths tests of eight states became significantly easier in at least two grades (Cronin, Dahlin, Adkins, & Kingsbury, 2007). The report, entitled The Proficiency Illusion, also found that recent improvements in proficiency rates on US state tests could be explained largely by declines in the difficulty of those tests.

So, a weakness of such opaque data is the potential for benchmarks to be manipulated to show governments of the day in the best possible light. There are examples in which benchmarks have been so low as to be at the level of chance. For example, when four multiple choice items constitute the response set for students, a 25% mark could be obtained by chance alone. Surely benchmarks would never be so low that chance alone could produce a proficiency level?

"In 2006, the results needed to meet (Australian) national benchmarks for students in Years 3, 5 and 7 ranged from 22% to 44%, with an average of less than 34%. Year 3 students needed to achieve only 22% for reading, 39% for numeracy, and 30% for writing to be classified as meeting the minimum acceptable standard (Strutt, 2007, p.1).”

Recently in Great Britain (Paton, 2008), the Assessment and Qualifications Alliance exam board admitted that standards had been lowered to elevate scores in 2008. In one exam paper, C grades (a good pass) were awarded to pupils who obtained a score of only 20%.

If community interest in literacy has been sparked, and there is some concern about the validity of the national broad scale assessment model, it is important for educators to offer guidance about high quality assessment. Part of the current literacy problem can be attributed to educators themselves, because they have not offered such high quality assessment in their schools to monitor progress. There has been a tendency to rely on informal assessment that often lacks validity and reliability (Watson, 1998), on unhelpful techniques like miscue analysis (Hempenstall, 1998), and on the perusal of student folios (Fehring, 2001).

In a three-year Australian study, “Wyatt-Smith and Castleton investigated how Australian teachers made judgments about student writing using literacy benchmark standards (Department of Employment, Education, Training and Youth Affairs [DEETYA] 1998; Wyatt-Smith and Castleton 2005). … Teachers made judgments based on their own experience; explicit literacy standards were not part of teachers' experience, and teachers accepted that their 'in head' standards varied from year to year and from class to class” (Bolt, 2011, p.158). Studies by Feinberg and Shapiro (2009) and by Bates and Nettlebeck (2001) each noted that informal assessments were significantly less accurate for struggling readers, and significantly overestimated their capabilities.

If every teacher did implement a standard, agreed upon assessment schedule, based upon the current evidence on reading development, then there would be no real need for national assessment. Data would be comparable across the nation, based upon a similar metric.

The assessment of critical literacy components can supply valuable information not available in these broad scale testing programs. For example, assessment can assist in the identification and management of at-risk students even before reading instruction commences. Such assessments can also help identify those making slow progress at any year level. This is especially important given the usually stable learning trajectory from the very early stages. If specific interventions are implemented, appropriate reading assessment can provide on-going information about the effectiveness of the chosen approach. There is an important question implicit in this potentially valuable activity. What sorts of assessment are likely to be most beneficial in precluding reading pitfalls and enhancing reading success? In this paper, the emphasis is directed towards assessment of those aspects of reading that have been identified by research as critical to reading development.

It is recognised that literacy assessment itself has little intrinsic value; rather, it is only the consequences flowing from the assessment process that have the potential to enhance the prospects of those students currently struggling to master reading. Assessment also allows for the monitoring of progress during an intervention, and evaluation of success at the end of the intervention; however, the initial value relates to the question of whether there is a problem, and if so, what should be done. What should be done is inevitably tied to the conception of the reading process, and what can impede its progress. How do educationists tend to view the genesis of reading problems?

Perceptions of literacy problems and causes

Alessi (1988) contacted 50 school psychologists who, between them, produced about 5000 assessment reports in a year. The school psychologists agreed that a lack of academic or behavioural progress could be attributed to one or more of the five factors below. Alessi then examined the reports to see what factors had been assigned as the causes of their students’ educational problems.

1. Curriculum factors? No reports.

2. Inappropriate teaching practices? No reports.

3. School administrative factors? No reports.

4. Parent and home factors? 10-20% of reports.

5. Factors associated with the child? 100%.

In another study, this time surveying classroom teachers, Wade and Moore (1993) noted that, when students failed to learn, 65% of teachers considered that student characteristics were responsible, while a further 32% emphasised home factors. Only the remaining 3% believed that the education system was the most important factor in student achievement, a finding utterly at odds with the research into teacher effects (Cuttance, 1998; Hattie, Clinton, Thompson, & Schmidt-Davies, 1995).

This highlights one of the ways in which assessment can be unnecessarily limiting in its breadth, if the causes of students’ difficulties are presumed to reside solely within the students, rather than within the instructional system. Assessment of students is not a productive use of time unless it is carefully integrated into a plan involving instructional action. What may hold back such intervention is a teacher belief that, since most students in a given class do learn to read, the problem surely cannot originate from the teaching.

When the incidence of failure is unacceptably high, as in the US and Australia, then an appropriate direction for resource allocation is towards the assessment of instruction. It can only be flawed instruction that intensifies the reading problem from a realistic incidence of reading disability of around 5% (Brown & Felton, 1990; Felton, 1993; Marshall & Hynd, 1993; Torgesen, Wagner, Rashotte, Alexander, & Conway, 1997; Vellutino et al., 1996) to the 20-30% (or higher) that we currently find. A tendency can develop for victim blame. "Learning disabilities have become a sociological sponge to wipe up the spills of general education. … It's where children who weren't taught well go" (Lyon, 1999, p.A1).

Though it is not the focus of this paper, there is an increasing recognition that an education system must constantly assess the quality of instruction provided in its schools, and that it should take account of the findings of research in establishing its benchmarks and policies. “Thus the central problem for a scientific approach to the matter is not to find out what is wrong with the children, but what can be done to improve the educational system” (Labov, 2003, p.128). The development of an Australian national English curriculum is an example of this emerging system interest. Up to this time, education systems in Australia have been relatively impervious to such findings (Hempenstall, 1996, 2006), lagging behind significant, if tenuous, changes in the USA with Reading First (Al Otaiba et al., 2008) and in Great Britain with the Primary National Strategy (2006). For further discussion about the role of instruction see Failure to learn: Causes and consequences

Even allowing that the major problem for the education system lies in the realm of instruction, particularly in the initial teaching of reading, individual student assessment remains of value. It is, of course, necessary as a means of evaluating instructional adequacy. Beyond that, there is great potential value in the early identification of potential reading problems, in determining the appropriate focus for instruction, in the monitoring of progress in relevant skill areas, and with the evaluation of reading interventions. It is the assumption in this paper that decisions about assessment should be driven by up-to-date conceptions of the important elements in reading development.

Issues in reading development that could guide assessment

In the largest, most comprehensive evidence-based review ever conducted of research on how children learn to read, the National Reading Panel (NRP; National Institute of Child Health and Human Development, 2000) presented its findings. For its review, the Panel selected methodologically sound research from the approximately 100,000 reading studies that have been published since 1966, and from another 15,000 earlier studies.

The specific areas the NRP noted as crucial for reading instruction were phonemic awareness, phonics, fluency, vocabulary, and comprehension. Students should be explicitly and systematically taught:

  1. Phonemic awareness: The ability to hear and identify individual sounds in spoken words.
  2. Phonics: The relationship between the letters of written language and the sounds of spoken language.
  3. Fluency: The capacity to read text accurately and quickly.
  4. Vocabulary: All the words students must know to communicate effectively.
  5. Comprehension: The ability to understand what has been read.

For children in pre-school and in their first year of formal schooling, the Panel found that early training in phonemic awareness skills, especially blending and segmenting, provided strong subsequent benefits to reading progress. It further recommended that conjoint phonemic awareness and phonics emphases should be taught directly, rather than incidentally, as effective instruction in both skills leads to strong early progress in reading and spelling.

The Panel’s emphasis on these five elements is also consonant with the findings of several other major reports, such as those of the National Research Council (Snow, Burns, & Griffin, 1998), the National Institute for Child Health and Human Development (Grossen, 1997), the British National Literacy Strategy (Department for Education and Employment, 1998), the Rose Report (Rose, 2006) and the Primary National Strategy (2006).

In 2006, the Primary Framework for Literacy and Mathematics (Primary National Strategy, 2006) was released, updating its 1998 predecessor and mandating practice more firmly onto an evidence base. In particular, it withdrew its imprimatur from the 3-cueing system (Hempenstall, 2003), and embraced the Simple View (Hoover & Gough, 1990) of reading that highlights the importance of decoding as the pre-eminent strategy for saying what’s on the page, and comprehension for understanding that which has been decoded. Under the 3-cueing system, making meaning by any method (for example, pictures, syntactic, and semantic cues) was considered worthwhile, and, for many protagonists, took precedence over decoding as the prime strategy (Weaver, 1988).

The new 2006 Strategy mandates a synthetic phonics approach, in which letter–sound correspondences are taught in a clearly defined sequence, and the skills of blending and segmenting phonemes are assigned high priority. This approach contrasts with the less effective analytic phonics, in which the phonemes associated with particular graphemes are not pronounced in isolation (i.e., outside of whole words). In the analytic phonics approach, students are asked to analyse the common phoneme in a set of words in which each word contains the phoneme being introduced (Hempenstall, 2001). The lesser overall effectiveness of analytic phonics instruction may be due to a lack of sufficient systematic practice and feedback usually required by the less able reading student (Adams, 1990).

In Australia, the National Inquiry into the Teaching of Literacy (Department of Education, Science, and Training, 2005) recommendations exhorted the education field to turn towards science for its inspiration. For example, the committee argued strongly for empirical evidence to be used to improve the manner in which reading is taught in Australia.

In sum, the incontrovertible finding from the extensive body of local and international evidence-based literacy research is that for children during the early years of schooling (and subsequently if needed), to be able to link their knowledge of spoken language to their knowledge of written language, they must first master the alphabetic code – the system of grapheme-phoneme correspondences that link written words to their pronunciations. Because these are both foundational and essential skills for the development of competence in reading, writing and spelling, they must be taught explicitly, systematically, early and well (p.37).

Research supporting an early emphasis on the code for both assessment and instruction?

Even though it is comprehension that is the hallmark of skilled reading, it is not comprehension per se that presents the major hurdle for most struggling young readers. There is increasing acknowledgement that the majority of reading problems observed in such students occur primarily at the level of single word decoding (Rack, Snowling, & Olson, 1992; Stanovich, 1988a; Stuart, 1995; Vellutino & Scanlon, 1987), and that in most cases this difficulty reflects an underlying struggle with some aspect of phonological processing (Bradley & Bryant, 1983; Bruck, 1992; Lyon, 1995; Perfetti, 1992; Oakhill & Garnham, 1988; Rack et al., 1992; Share, 1995; Stanovich, 1988a, 1992; Vellutino & Scanlon, 1987; Wagner & Torgesen, 1987). In the Shaywitz (2003) study, 88 percent of the children with reading problems had phonologically-based difficulties. Lovett, Steinbach, and Frijters (2000) summarise this emphasis neatly. “Work over the past 2 decades has yielded overwhelming evidence that a core linguistic deficit implicated in reading acquisition problems involves an area of metalinguistic competence called phonological awareness” (p.334).

Unless resolved, phonological problems predictably impede reading development, and they continue to be evident throughout the school years and beyond (Al Otaiba et al., 2008). A study by Shankweiler, Lundquist, Dreyer, and Dickinson (1996) provided some evidence for the fundamental problem area. Their study of Year 9 and Year 10 learning disabled and low to middle range students found significant deficiencies in decoding across the groups, even among the average students. They argued for a code-based intervention as an important focus. They also noted that differences in comprehension were largely reflecting levels of decoding skill, even among senior students, a point echoed by Simos et al. (2007) in their magnetoencephalographic study, and Scammacca et al. (2008) in their meta-analysis. Shankweiler and colleagues (1999) also found that decoding, assessed by reading aloud a list of non-words (e.g., skirm, bant), correlated very highly with reading comprehension -- accounting for 62% of the variance.
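For readers more accustomed to correlation coefficients than to variance-explained figures, the 62% figure translates directly, since the proportion of variance explained is the square of the correlation:

$$ r^{2} = 0.62 \;\Rightarrow\; r = \sqrt{0.62} \approx 0.79 $$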

A number of similar studies involving adults with reading difficulties have revealed marked deficits in decoding (Bear, Truax, & Barone, 1989; Bruck, 1990, 1992, 1993; Byrne & Letz, 1983; Perin, 1983; Pratt & Brady, 1988; Read & Ruyter, 1985; cited in Greenberg, Ehri, & Perin, 1997). In the Greenberg et al. (1997) study with such adults, performance on phonologically-based tests resembled those of children below Year Three. Even the very bright well-compensated adult readers acknowledged that they had laboriously to remember word shapes (an ineffective strategy), had little or no idea how to spell, and were constantly struggling to decode new words, especially technical terms related to their occupations.

The emphasis on decoding is not to say that difficulties at the level of comprehension do not occur, but rather, that for many students they occur as a consequence of a failure to develop early fluent, context-free decoding ability. The capacity to actively transact with the text develops with reading experience, that is, it is partly developed by the very act of reading. Students who engage in little reading usually struggle to develop the knowledge of the world and the vocabulary necessary as a foundation for comprehension (Nagy & Anderson, 1984; Stanovich, 1986, 1993). “ … the phonological processing problem reduces opportunities to learn from exposure to printed words and, hence, has a powerful effect on the acquisition of knowledge about printed words, including word-specific spellings and orthographic regularities” (Manis, Doi, & Bhadha, 2000, p.325).

Schools often espouse the goal of teaching all students to read. So, they need to know how students are progressing along the way to meet this goal (Kame'enui, Simmons, & Coyne, 2000). This implies the existence of long-term reading goals, and some sort of performance benchmarks for their students. Criterion-based benchmarks supply one form of progress monitoring. They should be aligned with the skills emphasised by the National Reading Panel, and assessed regularly during the primary years at least, to provide schools with information about student progress and the appropriateness (focus, intensity, duration) of the current instruction (Coyne, Kame'enui, & Simmons, 2004).

Effective initial instruction can reduce the need for formal individual assessment, but it is predicated on the provision of regular whole class monitoring. For example, at the beginning of the year all students could be provided with a reading assessment to assess overall literacy competence. A mid-year progress assessment can be used to evaluate instructional adequacy, and inform any revised instructional decisions. For those detected as being at risk, more fine-grained assessment information allows for efficient, pinpoint instruction. This group of students requires programming of elevated intensity, with specific short-term measurable objectives. Those short-term learning goals could be monitored at least monthly. In this system, those who struggle are observed more closely and more frequently.

Given the confluence of the findings of empirical research on reading instruction, it is appropriate for reading assessment to reflect this current understanding. If the five NRP elements are critical to development, then designing assessment around these five offers the best chance of detecting where something goes wrong, rather than solely that something is wrong. Of the five important elements, phonemic awareness, phonics, and fluency are lower order skills related to efficiently getting the words off the page; whereas vocabulary and comprehension are higher order language skills associated with appreciating the meaning of the words obtained through efficient use of these well developed lower order processes.

Reading assessment that reflects the current understanding

At the beginning stage:

Early reading delay is sometimes viewed as indicative of a slow starter who will catch up later; however, this is a dangerous assumption. It is based upon a belief that learning to read is as natural as learning to speak, and that immersion in interesting literature is sufficient to promote the process of development. An associated assumption that delays early identification and intervention is that children have a natural and immutable developmental trajectory that cannot profitably be hurried (Hempenstall, 2005).

Juel (1988) reported a probability of .88 that a student classified as a poor reader at the beginning of Year 1 would remain so when re-tested at Year 4, a finding echoing earlier work by Satz, Fletcher, Clark, and Morris (1981) who found that 93.9% of severely poor readers in Year 2 continued to be poor readers in Year 5. There is now a strong consensus (Al Otaiba et al., 2008; Shaywitz et al., 1999) about this nexus. The Matthew Effect (Stanovich, 1986) describes how relatively minor early deficits often broaden into a cascade of problems that intensify over a student’s career. Early identification and intervention should be paramount issues for the sake of those children who are at present needlessly exposed to crippling, extended failure.

Methods of identification

It is possible to obtain information from a wide array of domains possibly relevant to reading success, for example, perceptuo-motor development, and skills related to vision, balance, speech, handedness, self-help, language, and socialisation. One important issue is to what degree the potential components add to the predictive power of any intended assessment battery.

A second is how accurately the test(s) predict membership of the group of students who will struggle. A test or test battery, when employed with a group of children, will have a false positive rate and a false negative rate. In the former, there are children identified as at-risk who, without intervention, do not subsequently present with literacy difficulties. In the latter, there are children who do not appear as at-risk in the assessment, but who do later develop literacy problems. Scarborough (2003) refers to these occurrences as miss errors. Depending on the test(s) chosen and the cut-offs selected as the risk levels, the test(s) may be overly inclusive, identifying an unreasonably large cohort of students as at-risk. Alternatively, the results may underestimate the number of children at risk. The choice of tests needs to take these issues into account, and a higher than desirable false positive rate is considered more acceptable than a high false negative rate. Although some resources may be wasted on students who did not really require additional assistance, at least few of those who do need help are missed.
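The trade-off between the two error types can be made concrete by tabulating screening decisions against later reading status. The sketch below computes the two rates from such paired records; the records themselves are invented for illustration, and a real evaluation would use follow-up assessment data.

```python
# Each record pairs a screening decision with the student's later reading status.
# The data are hypothetical, for illustration only.
records = [
    {"flagged": True,  "later_difficulty": True},
    {"flagged": True,  "later_difficulty": False},   # false positive
    {"flagged": False, "later_difficulty": True},    # false negative ("miss")
    {"flagged": False, "later_difficulty": False},
    {"flagged": True,  "later_difficulty": True},
    {"flagged": False, "later_difficulty": False},
]

false_pos = sum(r["flagged"] and not r["later_difficulty"] for r in records)
false_neg = sum(not r["flagged"] and r["later_difficulty"] for r in records)
no_difficulty = sum(not r["later_difficulty"] for r in records)
difficulty = sum(r["later_difficulty"] for r in records)

# False positive rate: proportion of students without later difficulty who were flagged.
# False negative (miss) rate: proportion with later difficulty who were not flagged.
print(f"false positive rate: {false_pos / no_difficulty:.2f}")
print(f"false negative (miss) rate: {false_neg / difficulty:.2f}")
```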

Tests are necessary because teacher predictions are highly variable. As a group, teachers tend to predict reasonably well those students who will not experience reading difficulty, but predict much less well those students who will subsequently endure literacy problems (Rathvon, 2004).

The third requirement is that any screening test be relatively brief, and readily administered by teachers (Snow, Burns, & Griffin, 1998). The more (relevant) tests you add, the more accurate will be your predictions; however, the scarce resources available for testing all children demand brief assessment times. Scarborough (1998), in her review of screening tools, suggested that the extra work involved in administering and interpreting large test batteries is not reflected in a commensurate improvement in accuracy of screening.

The choice of tests also involves a consideration of the theoretical relevance of the tests, their soundness as instruments (validity and reliability), and their time demands. The length of some tests and batteries makes their use on large cohorts unworkable. There is usually some trade-off when selecting tests between the higher reliability of longer tests and multiple tests of the same skill, and the feasibility of test use with cohorts of students. However, a great deal of effort is being expended in constructing tests (both standardised and curriculum-based) that are brief, and yet able to provide trustworthy and valuable information. The tests described in this paper are generally considered to be of sound construction, and have adequate validity and reliability. However, a problem with many tests involves floor effects (Rathvon, 2004). When there are insufficient low-complexity items in a test, a very low score may not be adequately interpreted as an at risk score when standardised. Thus, a test with a high floor may fail to detect serious deficits. This problem occurs most frequently with the youngest children in a test’s age range. For example, a raw score of 1 out of 20 items in the Elision subtest of the Comprehensive Test of Phonological Processing (CTOPP) (Wagner, Torgesen, & Rashotte, 1999) produces a standard score of 8 for a five year old, which is less than one standard deviation from the mean (M=10, SD=3). Thus, test users need to be alert to test issues generally, and especially to the risk of even highly regarded tests having weaknesses at their lower age ranges.
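To see how a floor effect of this kind can mask a serious deficit, the standard score in the CTOPP example above can be converted to an approximate percentile. The calculation below is illustrative only: it uses the test's stated mean and standard deviation and assumes standard scores are normally distributed.

```python
from statistics import NormalDist

# Standard scores in the example above are reported with mean 10 and SD 3.
mean, sd = 10, 3
standard_score = 8          # obtained from a raw score of only 1 out of 20 items

z = (standard_score - mean) / sd            # about -0.67
percentile = NormalDist().cdf(z) * 100      # roughly the 25th percentile

print(f"z = {z:.2f}, approximate percentile = {percentile:.0f}")
# A child who managed only 1 of 20 items still lands near the 25th percentile,
# so a cut-off such as "lowest 10%" would not flag this child as at risk.
```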

The advantage of broad-scale screening is that it avoids the backlog, delay, and variable accuracy of teacher judgement when individual referral is the only detection system. Individual referral is a much less reliable system, and often results in reading difficulties lying undiscovered until they are well advanced, resistant to intervention, and have broadened to include many curriculum areas (Coyne, Kame'enui, & Simmons, 2004).

Schedule for assessment

A further question involves when best to schedule these screens. The later the screen, the more accurate the predictions tend to be (Scarborough, 1998). However, screening in mid Year 1, while it may provide a clearer picture of who is in need of intervention, allows more than a year to be wasted and failure to become entrenched. On the other hand, screening during the pre-school year may over-identify those who have simply had little literacy experience. One compromise is to schedule screening on several occasions during the primary years, most frequently in the early years (Rathvon, 2004). Work is still progressing in endeavouring to make accurate predictions at the preschool and beginning reading stage.

Again there are several broad formats. One might initially screen all incoming beginning students, for example. Those who fall below a predetermined criterion are allocated additional or more intensive assistance, and subsequently re-screened. The criterion or cut-off is often the lowest quartile (25%), but some suggest selecting the lowest 10%. The choice depends to some degree on the resources available to meet the resulting demands on the school. If one selects the lowest 25% there will be a larger cohort for whom intervention will need to be supplied. The outcome will entail a larger false positive rate, but a lower false negative rate. A larger group will include students who, as it eventuates, did not really need extra help, but it shouldn't miss many of those who really do. If the resources are available, this more inclusive criterion is a good option. If the lowest 10% are chosen, fewer false positives but more false negatives will arise. So, this system may ignore some students who subsequently display reading difficulties.
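As a rough sketch of how such a cut-off might be applied to a class's screening data, the function below flags the students whose baseline scores fall in the lowest quartile, or alternatively the lowest 10%. The scores and student labels are hypothetical; a school would substitute its own screening measure.

```python
def flag_at_risk(scores: dict[str, float], proportion: float = 0.25) -> list[str]:
    """Return the students whose scores fall in the lowest `proportion` of the group."""
    ranked = sorted(scores, key=scores.get)              # lowest scores first
    n_flagged = max(1, round(len(scores) * proportion))  # always flag at least one student
    return ranked[:n_flagged]

# Hypothetical screening scores for a beginning class.
baseline = {"A": 3, "B": 12, "C": 1, "D": 9, "E": 18, "F": 5, "G": 22, "H": 7}

print(flag_at_risk(baseline, 0.25))   # larger group: more false positives, fewer misses
print(flag_at_risk(baseline, 0.10))   # smaller group: fewer false positives, more misses
```

Running both cut-offs on the same data makes the trade-off visible: the 25% rule flags more children (more false positives, fewer misses) than the 10% rule.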

A variant is to employ a two-tier screen in which those detected in the process above are provided with more detailed assessment, involving a more comprehensive set of measures designed to specify with more precision the optimal area(s) for prophylactic intervention. A child’s low score on a screening test does not describe the nature of the problem, but rather serves as a beacon that there is an issue of concern.

A third model treats all students as the target group. Everyone is screened with, for example, a fluency measure that establishes their baseline attainment in February. All the students participate in an evidence-based literacy program, and their growth is monitored with similar, but time-sensitive, scheduled progress (formative) assessments, usually three times over the year. Those students not displaying acceptable growth curves are assigned additional support.
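One simple way to operationalise 'acceptable growth' from such benchmark data is to fit a slope (gain per week) to each student's scores and compare it with a target rate. In the sketch below, the three benchmark occasions, the student's words-correct-per-minute (WCPM) scores, and the target of 0.8 WCPM per week are all illustrative assumptions rather than published norms.

```python
def weekly_growth(scores: list[float], weeks: list[float]) -> float:
    """Least-squares slope of fluency scores (WCPM) against weeks of instruction."""
    n = len(scores)
    mean_w = sum(weeks) / n
    mean_s = sum(scores) / n
    num = sum((w - mean_w) * (s - mean_s) for w, s in zip(weeks, scores))
    den = sum((w - mean_w) ** 2 for w in weeks)
    return num / den

# Hypothetical benchmark occasions (weeks 0, 20, 40) for one student.
wcpm = [18, 27, 38]
slope = weekly_growth(wcpm, [0, 20, 40])
TARGET = 0.8   # illustrative target gain in WCPM per week
print(f"growth = {slope:.2f} WCPM/week; needs extra support: {slope < TARGET}")
```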

California’s Reading First Plan (California Department of Education, 1999) is well regarded as a model of careful, regular, evidence-based assessment. The sequence they have adopted is summarised in this table.

California’s Reading First Plan

| Component | Schedule | Tasks |
| --- | --- | --- |
| Phoneme Awareness | Mid-year for Prep; end of Grade 1 (if needed); only if needed for Grades 2 and 3 | Deletion (initial and final sounds), phoneme segmentation, counting syllables |
| Beginning Phonics | Late Prep; only if needed for Grades 1, 2, and 3 | Alphabet names, consonant sounds, short vowel sounds |
| Phonics | Every 4 to 6 weeks for Grade 2; only if needed for Grade 3 | Word study, decoding, early spellings |
| Oral Reading Fluency | Early Grade 1, then 3 to 6 times per year for Grades 2 and 3 | Timed fluency: words correct per minute (WCPM) |
| Reading Comprehension | Every 8-10 weeks for Grade 1; every 6-8 weeks for Grades 2 and 3 | Main idea, author's point of view, analysis, and inference |
| Vocabulary | Every 8-10 weeks for Grade 1; every 6-8 weeks for Grades 2 and 3 | Antonyms, synonyms, multiple meanings, context meanings |

This approach is sometimes called dynamic or interactive assessment, and sometimes the test-teach-test model. It is based upon the idea that more may be learned about children's cognitive development through the assessment of what they can achieve after teaching, rather than solely assessing unassisted performance, as in static tests. It is considered by Tzuriel (2000) to more accurately reflect the learning potential of children than do snapshot tests. It also forms the basis for the approach to special education known as Response to Intervention (RTI; Gresham, 2001). Children displaying signs of failing in the early grades receive scientifically based instruction as soon as possible. Special education services focus on those who, even with these services, are not successful. These have been described as treatment resisters (Torgesen, 2000). The focus of RTI is on responding to the instructional challenges caused by the disability, rather than solely giving tests to document the failure of the student.

According to Carnine (2003), the Response to Intervention model includes five major steps:

First, criteria in the beginning grade determine whether a child exhibits a significant difference between actual and expected rate of learning in one or more of the academic domains included in the definition of specific learning disabilities.

Second, develop a plan to provide an evidence-based intervention. Ensure that the teacher receives training sufficient to fully implement the intervention.

Third, monitor and document the child’s progress, and regularly report this information to parents.

Fourth, if the child is not progressing at a desired rate, determine if the intervention is being implemented with fidelity -- and if not, provide additional assistance to the teacher.

Fifth, lack of progress over an agreed limited period of time leads to a full child-centred evaluation conducted by a team. In the USA, this process could lead to identification of the child as having a specific learning disability, and subsequently to the provision of special education services.
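Carnine's five steps amount to a decision loop rather than a single test. A highly simplified rendering of that loop is sketched below; the function name, inputs, and returned recommendations are hypothetical and merely trace the order of decisions described above.

```python
def rti_decision(progress_adequate: bool,
                 implemented_with_fidelity: bool,
                 time_limit_reached: bool) -> str:
    """Trace the RTI decision sequence for one monitoring period (illustrative only)."""
    if progress_adequate:
        return "continue the evidence-based intervention and keep monitoring"
    if not implemented_with_fidelity:
        return "provide additional assistance to the teacher, then re-check progress"
    if time_limit_reached:
        return "refer for a full child-centred evaluation by a team"
    return "progress still below target: continue intervention and monitoring"

print(rti_decision(progress_adequate=False,
                   implemented_with_fidelity=True,
                   time_limit_reached=True))
```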

A crucial and far reaching element in establishing a systematic assessment plan in a school is the likely casualty rate. The figures cited earlier suggest 20-30% of students currently struggle to varying degrees. It is also considered by many that the most common approaches to the initial teaching of reading are suboptimal, and are themselves responsible for a high proportion of the failure rate (National Inquiry into the Teaching of Literacy, 2005). Given the number of students who struggle to master reading, efficiency in the provision of initial teaching and subsequent support becomes very important for education systems (Rayner, Foorman, Perfetti, Pesetsky, & Seidenberg, 2001). There are several components of effective whole-system or whole-school approaches. The first is attention to evidence as a means of determining the most effective approaches. Then, adequate time must be assigned to the task of providing initial reading instruction, although not all students require the same level of direct teacher input. With improvements in intensity and program quality, a reduction in the number of students requiring significant 1:1 teacher time then allows additional time to be provided for the seriously struggling students. This circumstance can eventuate when initial instruction reflects effective, research-supported approaches, thereby producing fewer casualties, and enabling the school costs of providing intensive support to be maintained at realistic levels. The third component entails a recognition that, for some students, the duration of intervention may be markedly extended beyond the average for literacy to develop successfully (Torgesen, 2000).

An early screen protocol

Over time it has become apparent that the strongest predictors of success in beginning reading are a knowledge of letter-sound correspondences (Chall, 1967; Stage, Sheppard, Davidson, & Browning, 2001) and phonemic awareness (Torgesen, 1998). This provides a theoretical rationale for focussing assessment on these areas initially.

Research evidence has shown that two of the most significant predictors of success in alphabetic literacy acquisition are knowledge of alphabet letters and early phonological awareness skills (Adams, 1990; Ball & Blachman, 1991; Bradley & Bryant, 1983; Byrne & Fielding-Barnsley, 1989; Cardoso-Martins, Resende, & Rodrigues, 2002; Stuart & Coltheart, 1988) (Manolitsis & Tafa, 2011, p.27).

Torgesen (1998) suggests a screening procedure involving: 1) a test of knowledge of letter names or sounds, because letter knowledge continues to be the best single predictor of reading difficulties; and 2) a test of phonemic awareness. Torgesen’s research indicates that, individually, knowledge of letter names is the stronger predictor for children in their first year, and knowledge of letter-sounds is stronger for first graders. McBride-Chang (1999) considers letter-sound knowledge to be more closely related to reading skills than is a grasp of letter names, because of the stronger phonological basis for letter-sound knowledge. Thus, assessing letter names has predictive value only because it is a marker for a range of previous useful literacy experiences rather than a direct cause of progress. However, letter-sound knowledge appears to have a causal rather than merely correlational relationship to reading progress (Levin, Shatil-Carmon, & Asif-Rave, 2006).

As phonemic awareness is thought to involve a developmental sequence, the decision as to which form of test to employ for a student cohort assumes importance. For example, it is recognised that blending, segmenting, and deletion are quite difficult tasks for children before and during their first year of school (Schatschneider, Francis, Foorman, Fletcher, & Mehta, 1999). Tests in which few students can achieve success or tests in which most students are near ceiling are of little use as screening devices.

In a longitudinal study of 499 children from kindergarten through Grade 3 (Vervaeke, McNamara, & Scissons, 2007), an accuracy figure of 80% was obtained when kindergarten assessment of phonological awareness and letter-sound correspondence was compared to their Grade 3 reading achievement. The false negative and false positive rates were each 12%, representing encouraging predictive capacity over a significant period of time.

Letter knowledge

One letter knowledge test is the Letter Identification subtest of the Woodcock Reading Mastery Test-Revised NU (Woodcock, 1998). It presents letters in several different fonts for which either the sound or the name is scored as correct. Its use of different fonts appears to be intended to enable the assessment of the concept of sound-symbol relationship, not simply the association between one letter-shape and its name/sound.

The Comprehensive Inventory of Basic Skills-Revised (Brigance, 2000) has several useful subtests: visual discrimination of upper and lower case letters; recitation of the alphabet; reading upper and lower case letters; printing upper and lower case letters in alphabetic sequence; and printing upper and lower case letters as dictated.

The Neale Analysis of Reading Ability (Revised) (Neale, 1988) has a supplementary test that assesses the names and sounds of the alphabet.

Good and his colleagues (Good & Kaminski, 2002) have established performance-based benchmarks using the freely available Dynamic Indicators of Basic Early Literacy Skills (DIBELS). The tests relevant to this screening task are Letter Naming Fluency and Initial Sound Fluency. Note that these tests are timed, so they add a component of speed along with power – efficiency along with knowledge. Employing fluency in the measurement of subword skills (e.g., letter names/sounds) has become of increasing interest (Speece, Mills, Ritchey, & Hillman, 2003) because of the significance of automaticity as a quality beyond mastery.

The DIBELS measures are also very brief, and easy to administer. Letter Naming Fluency involves a sheet with upper and lower-case letters, and students name as many letters as possible in 1 min. Fewer than 2 letters in 1 min at preschool or early first year at school is considered at-risk, between 2 and 7 constitute some risk, and 8 or more is classed as low risk.

Curriculum-based measures (CBM), such as DIBELS, have been well researched, and offer brief, easy-to-administer assessments ideal for screening and progress monitoring purposes.

CBM was designed to provide educators with a set of tasks that were reliable, valid, low-cost, and time-efficient indicators of student achievement in core academic areas. In reading, there is remarkable consistency in the relationship between R-CBM and other standardized measures of reading achievement across decades, samples, and various achievement tests. These results are extraordinary when one considers the brevity, availability, and low-cost of R-CBM. Educators should have great confidence in their use of R-CBM as an indicator of students' overall reading achievement (Reschly, Busch, Betts, Deno, & Long, 2009, p.463).

See also the study by Seungsoo, Dong-Il, Lee Branum-Martin, Wayman, and Espin (2012) supporting the validity and reliability of such CBM measures as a means of assessing academic growth.

Phonemic awareness

In the DIBELS Initial Sound Fluency task, students are shown (for 1 minute) a series of pages, each containing 4 pictures. The examiner points to the pictures and says, for example: "This is tomato, cub, plate, doughnut. Which picture begins with /d/?"

Fewer than 4 initial sounds correct in 1 minute at preschool or early first year at school is considered at-risk, between 4 and 7 constitute some risk, and 8 or more is classed as low risk.
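A minimal sketch of how these published cut-offs could be turned into a screening rule is given below. The band boundaries are those quoted above for Letter Naming Fluency and Initial Sound Fluency at school entry; the function itself is illustrative and is not part of DIBELS.

```python
# Risk bands quoted in the text for beginning-of-school screening (scores per minute).
RISK_BANDS = {
    "letter_naming_fluency": {"at_risk_below": 2, "low_risk_from": 8},
    "initial_sound_fluency": {"at_risk_below": 4, "low_risk_from": 8},
}

def risk_category(measure: str, score: int) -> str:
    """Classify a 1-minute fluency score into at risk / some risk / low risk."""
    bands = RISK_BANDS[measure]
    if score < bands["at_risk_below"]:
        return "at risk"
    if score >= bands["low_risk_from"]:
        return "low risk"
    return "some risk"

print(risk_category("letter_naming_fluency", 5))   # some risk
print(risk_category("initial_sound_fluency", 3))   # at risk
```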

A similar system called AIMSweb is available for download (The Psychological Corporation, 2007). It includes subtests for Phoneme Segmentation, Letter Naming Fluency, and Letter Sound Fluency. However, it is no longer a free service.

Thus, one direction is early screening of all students using a couple of simple tests. When more detailed testing is required, what areas are most fruitfully explored? It seems appropriate to focus on the areas deemed by the National Reading Panel (2000) to be pivotal to reading development.

More detailed testing of each of the NRP five instructional emphases

NRP instructional recommendation 1: Phonological (or phonemic) awareness

There is strong consensus among researchers that phonemic awareness is a robust predictor of future reading progress, markedly better than is intelligence (Stanovich, 1991). As this awareness is also thought by many (Adams, 1990), but not all (Castles & Coltheart, 2004), to be a major causal factor in early reading progress, assessment of current levels allows both a prediction of a child’s likely progress in the absence of appropriate intervention, and a direction for any intervention to take. In a longitudinal study, students assessed (prior to reading instruction) with low phonological awareness developed reading ability at much slower rates (McNamara, Scissons, & Gutknecth, 2011).

Phonological (or phonemic) awareness is an auditory skill enabling the recognition that the spoken word consists of individual sounds. It appears to follow a developmental sequence: from simple (Do cat and comb begin with the same sound?) to complex (blending, and then segmenting). A study by Schatschneider et al. (1999) suggests that phonemic awareness is a unitary construct, but that its development is best charted as a sequence of tasks from simple phonological tasks, such as rhyming, to more complex appreciation and manipulation of phonemes, as in elision, blending, and segmenting.

In a huge study (Hoien, Lundberg, Stanovich, & Bjaarlid, 1995), initial-phoneme and final-phoneme matching tasks (such as assessed by the TOPA: Test of Phonological Awareness (Torgesen & Bryant, 1994)) were by far the most potent phonological predictors of early reading acquisition. There are a number of screening tests available, but fewer with norms, the TOPA being one that has an age range of 5.0 - 8.11 years. Another advantage of this instrument is its facility for group-testing.

Another test is the Phonological Awareness Screening Test (Henty, 1993) developed in Tasmania for which the author has been attempting to obtain normative data. The Sutherland Phonological Awareness Test–Revised (SPAT-R; Neilson, 2003) has norms (Australian) for Years P-3. The Lindamood Auditory Conceptualization Test (Lindamood & Lindamood, 1979) has norms for Years P-12. The Rosner Test of Auditory Analysis Skills (Rosner, 1975) is a 13 item test with norms for Years P-3. The Yopp-Singer Test of Phoneme Segmentation (Yopp, 1995) is a brief test for P-1 students. Informal un-normed tests are available in A Sound Way (Love & Reilly, 1995), Sound Linkage (Hatcher, 1994), Phonemic Awareness Checklist (Lewkowicz, 1980), Phonemic Awareness in Young Children (Adams, Foorman, Lundberg, & Beeler, 1998), among others.

Other phonological processes

There are at least two phonological skills besides phonemic awareness, and they are beginning to assume importance in the research literature because of their capacity to add discrimination power to test batteries (Badian, 1994; Cornwall, 1992; Felton, 1992; Hurford et al., 1993; Hurford, Schauf, Bunce, Blaich, & Moore, 1994; Spector, 1992), and because they may play a role in the development of reading beyond that contributed by phonemic awareness (Savage & Frederickson, 2005). They will often form part of the more comprehensive assessment in the two-tier screen approach.

1. Phonological recoding in lexical access.

Humans store the internal representations of words in sound form known as phonological segments. These representations need to be clearly distinguishable from other stored sound segments, or else the wrong word may be selected when, for example, one is asked to name an object presented in a picture, or a written number, or letter.

Not only must the representations be distinct, but they must be quickly and accurately accessible. Students with reading difficulties often display significant difficulty with rapidly retrieving and accessing names for visual material, even though the relevant names are known to them. The impact on reading development is that a deficit in this area will also adversely impact upon the basic processing necessary for fluent word recognition processes, and thereby reading comprehension (Wolf, Miller, & Donnelly, 2000). Savage and Frederickson (2005) found that alphanumeric naming capacity was particularly strongly associated with reading fluency.

These speed and accuracy problems may be evident even prior to experience with print. Naming speed for pictures or objects may be slow, as may, subsequently, the naming of (known) numbers and letters. A number of researchers have noted the predictive power of naming-speed tasks, using pictures, numbers, and letters. Both naming speed and sight word reading depend on rapid, automatic symbol retrieval. Bowers (1995) argues that slow naming speed is specific to reading disability, and not common to children with either broad-based reading problems, or Attention-Deficit Hyperactivity Disorder.

Wolf and Bowers (2000) discuss the possibility that naming speed is independent of phonemic awareness and represents a second core deficit among some disabled readers (Bowers & Wolf, 1993; Miller & Felton, 2001; Wolf & Bowers, 1999). This issue is important because there may be a group whose phonemic awareness is developing normally, and who would be unidentified by a phonemic awareness screen, but who may subsequently have reading difficulties.

Additionally, there may be a group of students who have deficits in both phonemic awareness and rapid naming. Their dual difficulty may well lead them to be especially resistant to the standard procedures in reading instruction. Wolf and colleagues have described this as the Double Deficit Hypothesis. Identifying this group before the failure process commences is obviously worthwhile, because it enables the marshalling of resources to provide very intense (evidence-based) instruction to this targeted group.

A study by Lovett, Steinbach, and Frijters (2000) underlines the importance of recognising such treatment resisters. They noted that, when intensive phonologically-based instruction was implemented, even the Double Deficit students made progress commensurate with their less disabled single deficit peers. Without such carefully planned intervention, they tend to be the most severely disabled readers, and their difficulties are not relieved by maturation (Lovett et al., 2000; Wiig, Zureich, & Chan, 2000).

Tests: RAN: Rapid Automatized Naming (Denckla & Rudel, 1974); BNT: Boston Naming Test (Kaplan, Goodglass & Weintraub, 1983); SNS: Symbol Naming Speed (Swanson, 1989); Picture Naming Test (Hempenstall, 1995). Wiig, Zureich, and Chan (2000) argue for pictures and colours as more suitable because of the exceptionally automatised nature of letter and number knowledge. However, others have found that, for children with well-established letter-sound recognition, a letter-naming test may be a better predictor (Manis, Doi, & Bhadha, 2000; Savage & Frederickson, 2006).

2. Phonological recoding in working memory.

The beginning reader is required to decode a series of graphemes, and temporarily order them to allow the cognitively expensive task of blending to occur. This skill has been found to be an important determinant of early reading success. It is usually assessed by digit span (oral & visual) and sentence memory tasks.

Tests:

Wechsler Intelligence Scale for Children: 4th Edition (WISC IV) (Wechsler, 2003): Digit Span subtest; Wechsler Pre-School and Primary Scale of Intelligence- III (WPPSI-III) (Wechsler, 2002): Sentences; Stanford-Binet: Fifth Edition (Roid, 2003): Memory subtests; Comprehensive Inventory of Basic Skills-Revised (Brigance, 2000): Sentence Memory.

Assessing all three processes

The Comprehensive Test of Phonological Processing (CTOPP) (Wagner, Torgesen & Rashotte, 1999) assesses all three phonological processes: phonological awareness, rapid naming, and phonological memory. The CTOPP is designed to identify individuals from prep to tertiary level whose reading would benefit from development of their phonological skills. One version, developed for children aged 5 and 6 has seven core subtests and one supplementary test. The second version (ages 7 to 24 years) contains six core subtests and six supplementary tests. Individual administration requires about 30 minutes for the core subtests. The CTOPP authors argue for three potential classroom uses: to provide a screening test for students who may not be developing their phonological abilities; to indicate any student’s areas of strength and weakness among those processes; and, to measure progress in phonological processes when intervention programs are in place. The subtests are: Elision, Blending Words, Sound Matching, Memory for Digits, Nonword Repetition, Rapid Color Naming, Rapid Digit Naming, Rapid Letter Naming, Rapid Object Naming, Blending Nonwords, Phoneme Reversal, Segmenting Words, and Segmenting Nonwords.

Training other phonological processes?

Even though rapid naming tasks assist in the prediction of early reading success, there is as yet little evidence that directly training those tasks improves reading (Spear-Swerling, 1998). That is not to say that such efforts can never be fruitful. Wolf, Miller, & Donnelly (2000) have developed a program (RAVE-O) designed to directly address the processing deficits they consider produce impediments to reading fluency. The RAVE-O program is not a stand-alone approach, but is integrated with a phonological analysis and blending strategy based upon Reading Mastery I/II Fast Cycle (Engelmann & Bruner, 1988). The additions emphasise orthographic pattern recognition, semantic development, and retrieval strategies. Independent evaluations are as yet incomplete.

Several studies have noted improvement in lexical access following phonemic awareness intervention (Beck, Perfetti, & McKeown, 1982; McGregor & Leonard, 1995). Gillam and Van Kleeck (1996) reported a study in which pre-school aged children with speech and language disorders improved both in phonemic awareness and phonological working memory following a phonemic awareness training program. Further, they noted that children with poor initial phonological working memory were as responsive to the intervention as were those with better phonological working memory. No studies thus far have supported the value of directly teaching naming or short term memory skills.

Elbro, Nielsen, and Petersen (1994) argue that poor phonological representations of words form the core deficit in disabled readers. In this view, lexical access and working memory are restricted not because of specific modular deficits in these processes, but rather because what is sought in the lexicon, or to be held in working memory, is lacking in readily distinguishing features. They noted the confusion of similar sounding words, and the less distinct word-naming in such readers. This view also finds support in a study by Eden, Stein, Wood, and Wood (1995a; 1995b). The phonological representation explanation allows for the possibility that improved phonemic awareness may lead to an assessed improvement in one or more of these other phonological processes. In fact, Rubin, Rottella, Schwartz, and Bernstein (1991) found that training Year 3 children in phonemic awareness had a significantly beneficial effect on the picture naming speed of both the good and poor readers.

Interpreting process assessment

Teachers may anticipate that students with difficulties solely in phonological awareness tasks are likely to require additional care in the teaching of decoding skills, while those with problems solely with naming speed may be expected to require assistance in whole word recognition, and careful attention to fluency development. As noted earlier, Wolf and Bowers (2000) argue that students who have difficulty with both phonological awareness tasks and naming speed tasks are very likely to be more resistant to reading instruction than are those with a problem in one area only. Schools can then prepare for intensive assistance over a longer period of time (Torgesen, Wagner, & Rashotte, 1994) with these students -- too often efforts are only irregularly scheduled and prematurely discontinued for those students in greatest need. Progress may be slow and hard earned, but attention to detail in instruction and vastly increased opportunities for practice can make a great difference to the prognosis. The lesson to be learned from assessment of students’ phonological processing is not about identifying learner characteristics to account for lack of progress, but rather about discerning which students demand our best, cutting-edge interventions.
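To make the decision logic in the preceding paragraph concrete, here is a minimal sketch of how a school might flag the three profiles. The 30th-percentile cut-off is an assumption for illustration (borrowed from the TOWRE at-risk cut-off discussed later in this paper), not a published criterion, and the function name is my own.

```python
# A minimal sketch of the single/double deficit decision logic described above.
# The 30th-percentile cut-off is an illustrative assumption, not a published
# CTOPP criterion.
AT_RISK_PERCENTILE = 30

def interpret_profile(pa_percentile: float, naming_percentile: float) -> str:
    """Suggest an instructional emphasis from phonological awareness (PA)
    and rapid naming percentiles, per the single/double deficit view."""
    pa_low = pa_percentile < AT_RISK_PERCENTILE
    naming_low = naming_percentile < AT_RISK_PERCENTILE
    if pa_low and naming_low:
        return ("Double deficit: plan intensive, extended phonologically-based "
                "instruction with greatly increased practice; expect slower progress.")
    if pa_low:
        return "PA deficit: additional care in teaching decoding skills."
    if naming_low:
        return ("Naming-speed deficit: support whole word recognition and "
                "attend carefully to fluency development.")
    return "No phonological processing deficit indicated on these measures."

print(interpret_profile(pa_percentile=21, naming_percentile=1))
```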

Assessment when reading instruction has commenced: Word level processes

NRP instructional recommendation 2: Phonics

Phonemic awareness becomes important when beginners are faced with the challenge of making sense of the English alphabetic system of writing. The phonological skills of blending and segmenting act upon the knowledge of letter-sound correspondences to enable the decoding of the written word. The facility with which students can decode words not previously seen is a necessary step on the way to effortless fluent reading. The decoding of non-words is considered the most appropriate measure of this process (Hoover & Gough, 1990; Siegel, 1993; Wood & Felton, 1994). While it may appear to be a task only obliquely related to reading, the measure ensures that memory for words and contextual cues can be ruled out as explanations when the non-words are read accurately. Non-word decoding also correlates very highly with reading comprehension (Shankweiler, Lundquist, Dreyer, & Dickinson, 1996).

Share (1995) argued that students must achieve a certain level of facility with decoding before a self-teaching mechanism allows them to make continuous independent progress from that stage, eventually employing for the most part the orthographic strategy that enables rapid, accurate, effortless reading. This self-teaching view is supported by several studies (Bowey & Muller, 2005; Nation, Angell, & Castles, 2007; Share, 1999; 2004) highlighting the centrality of decoding to reading development, most particularly at the early stages. These sought-after orthographic strategies can only be developed through multiple examples of success in decoding phonologically (Ehri, 1998; Share & Stanovich, 1995).

Some research using brain imaging techniques (Joseph, Noble, & Eden, 2001; Gaillard et al. 2006; Pugh et al., 2002; Turkeltaub et al., 2004) has added to our understanding of this link. It appears that the left brain’s parieto-temporal region is employed in decoding (sounding-out), and in good readers this area is very active during reading. In struggling readers, there is little activity in the left hemisphere but considerably more in the less helpful right hemisphere (Simos et al., 2002).

“At the level of brain systems, relative to typically developing (TD) readers, RD children and adolescents fail to coherently activate left hemisphere (LH) occipitotemporal (OT) and temporoparietal (TP) regions during reading” (Pugh, & Hagan, 2010, p.22).

When beginning readers have decoded a word correctly a number of times, they develop a neural model that is an exact replica of the printed word, reflecting the word’s pronunciation, spelling, and meaning. This internal representation is maintained in the occipito-temporal region of the left hemisphere. Subsequent recognition of that word becomes automatic, taking less than 150 milliseconds (less than a heartbeat). The development of orthographic processing, the key to fluent reading, depends upon the occipito-temporal region. However, the occipito-temporal region does not assume responsibility for the task without first the parieto-temporal region regularly being engaged (Richards et al., 2006; Shaywitz et al., 2004).

On average, from 4 to 14 accurate sounding-outs (Apel & Swank, 1999) will create the firm links necessary, although some children may require many times that number (Lyon, 2001; Swanson, 2001a) to facilitate the growth of connections between those regions. Not all children have a strong phonological talent, and both genetic and environmental influences may contribute to these individual differences.

The degree to which students are then able to use their developing parieto-temporal region in the reading task can be assessed with the Word Attack subtest, Woodcock Reading Mastery Tests-Revised (WRMT; Woodcock, 1998); with the Pseudoword Decoding subtest from the Wechsler Individual Achievement Test-II (WIAT-II; Wechsler, 2001); DIBELS Nonsense Word Fluency (Good & Kaminski, 2002); the Phonemic Decoding Efficiency (PDE) subtest of the Test of Word Reading Efficiency (TOWRE; Torgesen, Wagner, & Rashotte, 1999); or the Castles Non-Word List (Castles & Coltheart, 1993; Castles et al., 2009).

In the DIBELS Nonsense Word Fluency (NWF), the student in mid prep to end of 1st Year reads aloud a collection of short nonsense words as quickly as possible for one minute. NWF below 5 is considered at risk, between 5 and 12 at some risk, and 13 or more at low risk.
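The NWF bands just quoted amount to a simple threshold rule. A minimal sketch, assuming the cut-points above apply as stated, might look like this (the function name is hypothetical):

```python
# Small sketch of the NWF benchmark bands quoted above (correct responses in
# one minute). Band labels and cut-points are taken from the text; the
# function name is my own.
def nwf_risk_status(score: int) -> str:
    """Classify a DIBELS Nonsense Word Fluency score into the bands
    described above: below 5 at risk, 5-12 some risk, 13 or more low risk."""
    if score < 5:
        return "at risk"
    if score <= 12:
        return "some risk"
    return "low risk"

for s in (3, 9, 20):
    print(s, "->", nwf_risk_status(s))
```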

The TOWRE Phonemic Decoding Efficiency (PDE) subtest measures the number of pronounceable printed non-words that can be accurately decoded within a brief timeframe (45 seconds). It has norms from age 6 to 25 years.

These tests add another quality to the other tests mentioned above -- that of fluency of decoding. Fluency provides information beyond mastery, separating those who are accurate, but slow, from those for whom decoding is effortless and automatic.

Our research highlights the importance of using measures that assess the fluency or automaticity of skill development (i.e., phonological awareness, letter knowledge, connected text). It is not enough for a student to be simply accurate on the component skills of reading; the skills must be so well developed that the accuracy and pace of performance is effortless in order to support continued reading development (Ehri, 2005; Harn, Stoolmiller, & Chard, 2008). Students who do not display this ease early in their reading development are the most in need of intensive instructional supports (Harn, Jamgochian, & Parisi, 2009).

In Stanovich’s (2000) view, the rapid decoding of nonwords is one of the best discriminators of good and struggling readers. Comprehension is disrupted by slow word reading (Perfetti, 1985; Perfetti & Hogaboam, 1975; Perfetti & Lesgold, 1977). Words should be effortlessly identified so that word reading takes up a minimal amount of processing capacity, leaving as much as possible for understanding the text meaning (Bowers & Wolf, 1993; Campton & Carlisle, 1994; Joshi & Aaron, 2002; Metsala & Ehri, 1998).

The TOWRE also contains a Sight Word Efficiency (SWE) subtest that assesses the number of real printed words that can be accurately identified within 45 seconds. The TOWRE is helpful for a number of purposes, such as in monitoring the growth in efficiency of phonemic decoding and sight word reading skills during the primary school years. It also highlights any differences between the two skills in the same student. This has implications for any intervention that may be required by a student. The two subtests can be administered to a child in less than 5 minutes, and there are two parallel forms of each subtest.

It has been suggested that assessment at the level of the single word, as in lists rather than authentic literature, is in some way not a real test of real reading, because it fractionates the reading process (Goodman, 1986). However, two studies by Landi, Perfetti, Bolger, Dunlap, and Foorman (2006) have pointed to the potential for list-type assessment to provide a purer measure of orthographic and phonological skills, because when beginning readers read words in context, they may depend on context to attempt to circumvent their inadequacies in reading unfamiliar words.

Assessment for older students

One question may be: How delayed is this student’s reading development? A general reading assessment will provide some information. It will provide an idea of the length of time it may take for the child to achieve a reasonable level of reading skill (i.e., to be able to adequately comprehend grade-level textbooks as a minimum outcome), given a good program, regularly and competently taught to a motivated student.

Older students demonstrate a broad and complex range of difficulties related to reading. These include problems in recognizing words, understanding word meanings, and understanding and connecting with text; students often lack background knowledge required for reading comprehension (Biancarosa & Snow, 2004). We examined several syntheses on interventions for secondary students with reading difficulties to identify effective interventions to meet this range of reading difficulties. Edmonds et al. (2009) conducted a meta-analysis examining the effects of adolescent reading interventions (Grades 6 through 12) that included instruction in decoding, fluency, vocabulary, or comprehension on reading comprehension outcomes. Analyses revealed a mean weighted effect size in the moderate range in favor of treatment students over comparison students. Promising approaches were those that provided targeted reading intervention in comprehension, multiple reading components, or word-recognition strategies (Vaughn et al., 2011, p.392).

Although it is unlikely that these students will make accelerated progress without intensive interventions, there is evidence that secondary students may experience improved reading outcomes when provided explicit reading intervention with adequate time and intensity for reading instruction (Archer, Gleason, & Vachon, 2003; Torgesen et al., 2001) (Vaughn et al., 2010, p.932).

Normed reading tests may continue to be used for older students, bearing in mind the various problems they have in specifying precise grade levels. In the RMIT Clinic, the most commonly used general tests are the Woodcock Reading Mastery Tests – Revised NU (Woodcock, 1998), the Neale Analysis of Reading Ability-Revised (Neale, 1988), the Spadafore Diagnostic Reading Test (Spadafore, 1983), and various subtests of the Comprehensive Inventory of Basic Skills-Revised (Brigance, 2000).

These tests will usually provide an indication of the student’s ability to read accurately from word lists or connected text (reading accuracy) and the capacity to make sense of that which they read (reading comprehension). Reading accuracy tests do not adequately discriminate between those students who have memorised whole words and those students who additionally have the capacity to decode words not recognised. The Woodcock has a significant advantage over the Neale because of the inclusion of a Word Attack subtest that indicates the degree to which the student can apply his phonemic awareness to the task of reading (sometimes called phonological recoding). Additionally, it is normed to an adult level. The Neale allows for testing of reading rate, an important element in a student’s progress, reflecting the level of automaticity or fluency achieved. Rate also provides information about the attentional capacity a reader has available to commit to the task of reading comprehension.

Assessing reading fluency

NRP instructional recommendation 3: Reading fluency

According to Wolf and Katzir-Cohen (2001):

In its beginnings, reading fluency is the product of the initial development of accuracy and the subsequent development of automaticity in underlying sublexical processes, lexical processes, and their integration in single-word reading and connected text. These include perceptual, phonological, orthographic, and morphological processes at the letter, letter-pattern, and word-level; as well as semantic and syntactic processes at the word-level and connected-text level. After it is fully developed, reading fluency refers to a level of accuracy and rate, where decoding is relatively effortless; where oral reading is smooth and accurate with correct prosody; and where attention can be allocated to comprehension (p. 219).

Oral reading fluency has particular relevance during the alphabetic stage of reading development because this is the phase during which self-teaching begins (Share, 1995). In the early alphabetic stage, simple letter pattern-to-sound conversion begins to provide a means of decoding unknown words, though the process is necessarily laborious (as is any new skill prior to its automatization). As they progress with their understanding of the function of the alphabet, students begin to appreciate that each time they decode an unfamiliar word its recognition subsequently becomes easier and faster. Practising decoding enables them to become adept at storing letter-patterns -- orthographic information that can dramatically hasten word recognition of these and new words (Torgesen, 1998). These are not simply visual images, but alphabetic sequences.

It is in reaching the stage of automaticity that the apparent magic of skilled reading becomes evident – whole words are recognised as quickly as are individual letters. The actual process of reading, of transforming squiggles into language, appears transparent – that is, the words seem to leap off the page and into consciousness without any noticeable effort or strategy (LaBerge & Samuels, 1974). The issue of variation in the effort required to make sense of print has been addressed by employing neuro-imaging techniques when both capable and struggling students are engaged in reading. Richards et al. (1999, 2000) noted that the poor readers used four to five times as much physical energy (oxygen, glucose) as the capable readers to complete the same phonological tasks. This difference was not observed when non-language tasks were presented. It is unsurprising that a lack of motivation to read is a serious secondary obstacle for dysfluent readers.

Oral reading fluency has been found to be strongly related to reading comprehension (Miller & Schwanenflugel, 2006; O'Connor et al., 2002; Roehrig, Petscher, Nettles, Hudson, & Torgesen, 2008; Swanson & O’Connor, 2009). In fact, Shinn, Good, Knutson, Tilly, and Collins (1992) found that oral reading fluency in the early grades was as valid a measure of reading comprehension as of decoding ability. Others have reported correlations as high as .91 for older students (Fuchs, Fuchs, Hosp, & Jenkins, 2001). Oral reading fluency measures correlate even better with other reading comprehension tests than those same tests correlate with each other (Fuchs & Fuchs, 1992).

Both standardised and informal assessments of oral reading accuracy and rate are recommended in the National Reading Panel Report (National Institute of Child Health and Human Development, 2000). The report also recommends guided oral reading as a valuable fluency enhancing activity, yet both fluency assessment and instruction are notably absent from the reading curricula of many schools. This is unsurprising given that reading fluency is not mentioned in the English curriculum standards documents from at least three Australian states: Victoria, South Australia, and Queensland (Department of Education, Employment & Training, 2001). Perhaps the National Curriculum eventually will incorporate such an emphasis, given that the National Inquiry into the Teaching of Literacy (2005) recommended both assessment and structured teaching of reading fluency.

While suggested rates vary among writers, Howell and Nolet (2000) recommend the following benchmarks for appropriately graded text. From early Year 1 to late Year 1, the anticipated progression is from 35–50 words correct per minute; whilst from early Year 2 to late Year 2, the target is from 70–100 correct wpm; and from early Year 3 to late Year 3 the progression is from 120–140 correct wpm. A slightly different trajectory is suggested by Binder, Haughton, and Bateman (2002). They anticipate a more rapid progression throughout Year 1, reaching between 60–100 correct wpm. They also provide additional yearly expectations: Year 2–Year 3, 100–120 correct wpm; Year 4–Year 5, 120–150 correct wpm; Year 6–Year 8, 150–180 correct wpm; and Year 9 and above, 180–200 correct wpm.
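As a rough illustration of how such benchmarks might be used for screening, the sketch below encodes Howell and Nolet’s (2000) figures as year-level ranges and checks a measured rate against the early-year target; the dictionary layout and the simple comparison are my own framing, not part of the published benchmarks.

```python
# Illustrative encoding of the Howell and Nolet (2000) oral reading fluency
# targets quoted above (words correct per minute on appropriately graded text).
HOWELL_NOLET_WCPM = {
    "Year 1": (35, 50),    # early Year 1 target -> late Year 1 target
    "Year 2": (70, 100),
    "Year 3": (120, 140),
}

def meets_early_year_target(year: str, wcpm: int) -> bool:
    """True if the measured words-correct-per-minute is at or above the
    early-year figure for that year level (a deliberately simple check)."""
    early_target, _late_target = HOWELL_NOLET_WCPM[year]
    return wcpm >= early_target

print(meets_early_year_target("Year 2", 65))   # False: below the early Year 2 target of 70
```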

The Gray Oral Reading Test–4 (Wiederholt & Bryant, 2001) is a standardized measure of oral reading that assesses reading accuracy, rate, and passage comprehension for ages 6–19, as does the Neale Analysis of Reading Ability – Revised (Neale, 1988), though only for ages 6–12 years.

An increasingly popular curriculum-based oral reading fluency measure is the DIBELS (Good & Kaminski, 2002), which provides reasonably reliable and valid indicators of skills associated with reading success from the beginning of schooling to Year 6. Numerous graded passages are provided, and students read the appropriate passages orally for one minute each. The median score (words correct per minute) of the cold reading of each of three passages forms the data. Performance-based benchmarks allow the identification of children who are doing well in their reading instruction, and detect those whose response to instruction places them at risk for experiencing later reading difficulties.

They are simple, quick, and cost-effective (free) measures that are more sensitive to small changes over time than are most standardised tests. There are multiple passages for each grade level, making them easily repeatable for continuous progress monitoring. Using DIBELS, all students are expected to be assessed three times a year, while those receiving intervention are typically assessed fortnightly or monthly.
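A minimal sketch of the scoring convention described above (three one-minute cold readings, with the median words-correct-per-minute recorded as the data point for progress monitoring) might look like this:

```python
# Minimal sketch of the DIBELS oral reading fluency scoring convention
# described above: three graded passages read cold for one minute each,
# with the median words correct per minute recorded.
from statistics import median

def orf_score(passage_wcpm: list[int]) -> int:
    """Return the median words correct per minute across three passages."""
    assert len(passage_wcpm) == 3, "One occasion uses three passages"
    return median(passage_wcpm)

print(orf_score([52, 61, 58]))  # -> 58
```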

Another curriculum based measure is known as AIMSweb (The Psychological Corporation, 2007). It follows a similar protocol, and has multiple passages up to a Year 8 level.

Norms in standardised tests

An issue sometimes arises about the appropriateness of tests in Australia employing only US norms. Obviously, it would be an advantage to have local norms for all the tests we wish to use; however, the huge cost of properly norming tests is prohibitive for many local developers. There are some grounds for defending US normed tests of reading. We speak and write the same language, and, in most Australian states, we commence school at about the same age. In international comparisons (e.g., OECD, 2004; UNICEF, 2002), our average reading attainment exceeds that in the USA, perhaps because of our lower proportion of disadvantaged and non-English speaking students. The implication of this disparity is that tests using US norms may slightly flatter our students. When students do not do well on such a test, it is likely that they would actually be lower on that test using local norms than is indicated by the test manual. So, if a student, for example, scores below the 30th percentile on the TOWRE (the cut-off for being classified as at-risk), any error caused by the non-local norms is likely to lead to an underestimate of their level of difficulty.

Assessment of vocabulary

NRP instructional recommendation 4: Vocabulary

The significance of vocabulary in the context of reading development involves its role in underpinning reading comprehension (Beck & McKeown, 1991). Vocabulary produces correlations with reading comprehension of between .6 and .8 (Pearson, Hiebert, & Kamil, 2007), although receptive language assessment when performed at the beginning of school tends to produce lower figures, such as .38 (Scarborough, 1998).

It is acknowledged that early vocabulary development is important for later literacy, and that there are marked differences in the vocabulary levels of children at school entry. Hart and Risley (1995; 2003) observed that (on average) parents with professional jobs spoke about 2,000 words an hour to toddlers. For working-class parents, the rate averaged 1,200 words an hour, and for those receiving welfare only 600 words an hour. Hart and Risley concluded that by age 3, children receiving welfare have heard 30 million fewer words than children of professional families. A year’s preschool experience could not entirely compensate for the experiential deprivation that could occur during the first 3 years of life.

There may, in the near future, be the means for very early assessment of language development (Swingley, 2008). Young toddlers tend to look at images or objects that are named by an adult. Through eye movement tracking while a child observes two objects (e.g., an apple and a dog), it is possible to see if the child’s eyes move to the named object. This enables a measure of a 12-month-old child’s knowledge of the meaning of words that they are not yet able to articulate. Performance at this early phase is quite strongly related to success on later language tasks, and may lead to early identification and support at the optimal period for changing a child’s trajectory.

If schools are to attempt to compensate for these dramatic discrepancies noted by Hart and Risley, then vocabulary assessment needs to be included in the planning. However, there are a number of uses of the term: receptive and expressive vocabulary, oral and reading vocabulary, reading and writing vocabulary.

The National Reading Panel (National Institute of Child Health and Human Development, 2000) described numerous studies that emphasised the way in which vocabulary develops. New words taught directly in a year are typically about 300 to 500; however, the number of new words learned in a year is around 3,000 to 4,000 (Beck & McKeown, 1991). So, much of the development must be dependent upon reading. In fact, beyond Year 3 the amount of free reading is the major determinant of vocabulary growth, and the best readers may read 100 times that of the least adept (Nagy, 1998; Nagy & Anderson, 1984). So, the initial gap is inclined to widen. Why is reading such a source of vocabulary growth? Children’s books contain 50% more "rare" words than does adult television, or even the conversation of college graduates. Even popular magazines have 3 times as many opportunities for new word learning as prime-time television and adult conversation (Stanovich, 1993). Reading stories to children appears not to adequately compensate for a lack of reading experience. Listening to stories has only been shown to increase the vocabulary of above average readers (Nicholson & Whyte, 1992). Students who don’t choose to read regularly fall further behind (Matthew Effects; Stanovich, 1986). So, the extent of vocabulary knowledge is both a cause and a consequence of reading development.

It is fair to say that the field of vocabulary assessment is less well developed than some of the other dimensions of reading. A great deal of the research employed experimenter-designed tests, and hence there has not arisen a clear consensus about which type of vocabulary assessment is most helpful in relation to reading development. According to the NRP, standardized tests should only be used to provide a baseline, as they offer only a more general measure of vocabulary. For evaluating instruction, more than a single measure of vocabulary should be utilised, preferably measures associated with the teaching curriculum.

In standardized tests, one way of assessing vocabulary is to have the student select a definition for a word from a list of alternatives. Another is to ask what various words mean (WISC-IV; Wechsler, 2003). A third is to select the word that doesn’t belong in a list either spoken or written (brown, big, red, green, yellow; Brigance, 2000). In the Woodcock Reading Mastery Tests-Revised/Normative Update (1998), three subtests comprise the Word Comprehension test: Antonyms, Synonyms, and Analogies.

The most commonly employed vocabulary test is one of receptive language, using the Peabody Picture Vocabulary Test (PPVT-3; Dunn & Dunn, 1997). There is no reading involved; the task is to identify the one picture of four that matches the word spoken by the test administrator. A similar protocol is provided in the Wechsler Individual Achievement Test-II (WIAT-II; Wechsler, 2001): Receptive Vocabulary subtest. Another option is the Vocabulary subtest of the various Wechsler scales (WISC-IV, WPPSI-III, WAIS-III: Wechsler, 2004; 2002; 1997). The Wechsler task is to provide definitions for various, progressively more complex words.

Vocabulary deficits may impede reading comprehension, but the reasons for a student performing poorly on a comprehension measure are not immediately obvious from the comprehension measure alone. Was low attainment caused by a decoding problem, or did inattention preclude correct answers? Did the student forget the passage details because of short term memory problems, or might anxiety have interfered? Was it a metacognition failure in which the student has simply never learned strategies to aid comprehension, or was it due to a vocabulary lag? The vocabulary test can assist with this diagnosis, but is insufficient of itself.

Assessment of comprehension

NRP instructional recommendation 5: Comprehension

As the basic decoding and word recognition skills become automatised, comprehension strategies become an area of variability among students. Strategies that were adequate in simple text may become insufficient for the increasingly complex language (semantics and syntax) in the upper primary and secondary grades.

Without the automatisation of basic processes, reading comprehension progress stalls. With automatisation, students are at least able to make use of their existing oral language comprehension skills (Crowley, Shrager, & Siegler, 1997; O’Connor et al., 2002). The growth of these largely oral comprehension skills is partly dependent upon the quality and extent of oral language activities in their curriculum. The student converts print to speech (perhaps subvocally), and comprehends the speech, rather than the text directly – as in the Simple View of reading (Hoover & Gough, 1990).

However, text is not simply transcribed speech. It has its own formats, and additional comprehension strategies assume importance over the longer period of reading sophistication. Those with a history of problems will have had reduced exposure to text, which hampers subsequent progress by impeding their vocabulary development (Nagy, 1998), as discussed earlier.

The research into enhancing comprehension has lagged behind that for the underpinning word-level processes, though there is some agreement about a few promising components. For example, the student who interrogates the text is likely to understand more than one who passively reads it (National Reading Panel, 2000; Pressley, 2000). Useful strategies, including prediction, analyzing stories with respect to story grammar elements, question asking, image construction, and summarizing, may be intuited by some students. However, for others these strategies are highly dependent on teachers’ modelling of the process orally, and their providing multiple practice opportunities (Pressley, 2001; Swanson, 2001b). Unfortunately, many comprehension activities in schools involve only testing students (reading a text and subsequently answering questions) rather than actually providing instruction.

Good readers are aware of why they are reading a text, gain an overview of the text before reading, make predictions about the upcoming text, read selectively based on their overview, associate ideas in text to what they already know, note whether their predictions and expectations about text content are being met, revise their prior knowledge when compelling new ideas conflicting with prior knowledge are encountered, figure out the meanings of unfamiliar vocabulary based on context clues, underline and reread and make notes and paraphrase to remember important points, interpret the text, evaluate its quality, review important points as they conclude reading, and think about how ideas encountered in the text might be used in the future. Young and less skilled readers, in contrast, exhibit a lack of such activity (e.g., Cordón & Day, 1996). (Pressley, 2000, p.548).

Given the under-developed state of research into reading comprehension, it is unsurprising that current testing instruments also have their problems. Much of the intervention research has involved experimenter-devised tests, and these have produced rather larger effect sizes than have standardised tests when evaluating the same instructional method. For the studies on question generation, the average effect size was about 0.90 for experimenter-written tests, which is a large effect; whereas, for standardized tests, the average effect size was small at 0.36. The pattern was similar for the multiple strategy instruction experiments, in which the average effect size was 0.88 for experimenter-written tests and only 0.32 for standardized tests (National Reading Panel, 2000). Clearly some consensus is needed about what forms of comprehension assessment are optimal for a specific given purpose.

Standardised comprehension tests are predicated on the assumption that there is a consensus on what are appropriate, progressively increasing grade levels of comprehension. However, there are many variables to cloud interpretation of results. Grade level materials can be analysed on the basis of their readability, usually utilising one or other algorithm based upon word length, word prevalence, and sentence length. However, difficulty levels of vocabulary and syntax can vary significantly across tests, and are not quantified by readability measures. Are the questions literal or inferential? Inferential questions are usually considered harder than literal questions, but both have difficulty levels along a continuum. To further complicate the issue, domain knowledge about a topic dramatically influences task success (Hirsch, 2006), as can command of English. It has also been observed that speed of comprehension is slower and test scores are lower when unfamiliar topics are read than when familiar topics arise. A weakness, then, of comprehension measures is that the methods chosen are only indirect indicators of whether the reader has got it, and to what extent. And each of the numerous and varied methods tried has had its own set of weaknesses, whether issues of validity (particularly for individual scores), external accountability, reliability, or generalisability (Pearson & Hamm, 2005). Perhaps future brain imaging techniques will provide more insight into the process of comprehension.
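As one example of the kind of readability algorithm referred to above, the sketch below computes the Flesch-Kincaid Grade Level, which relies on sentence length and word length (in syllables) but not word prevalence; the syllable counter is a crude heuristic included only to keep the example self-contained.

```python
# Sketch of one common readability algorithm (Flesch-Kincaid Grade Level).
# The syllable counter is a rough approximation for illustration only.
import re

def count_syllables(word: str) -> int:
    """Very rough syllable estimate: count vowel groups, minimum of one."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    n = max(1, len(words))
    return 0.39 * (n / sentences) + 11.8 * (syllables / n) - 15.59

print(round(flesch_kincaid_grade("The cat sat on the mat. It was a big cat."), 1))
```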

Most comprehension measures occur as subtests within omnibus tests, such as the Woodcock Reading Mastery Tests – Revised NU (Woodcock, 1998), the Neale Analysis of Reading Ability-Revised (Neale, 1988), Gray Oral Reading Test–4 (Wiederholt & Bryant, 2001), Spadafore Diagnostic Reading Test (Spadafore, 1983), WIAT-II (Wechsler, 2001), DIBELS Retell Fluency (Good & Kaminski, 2002), and the Comprehensive Inventory of Basic Skills-Revised (Brigance, 2000).

Of particular interest is a comparison of attainment on a reading comprehension task with that on a listening comprehension task. The Comprehensive Inventory of Basic Skills-Revised (Brigance, 2000) has the capacity to provide such a comparison, with its reading comprehension and listening comprehension subtests (up to Year 9). So too does the Spadafore Diagnostic Reading Test (Spadafore, 1983), and it has an advantage in that it is normed to Year 12.

The interest lies in the degree to which performance differs on the two tasks. For the average child, if reading is fluent, then the two scores should be similar. The listening comprehension task represents the language comprehension aspect in the Simple View of Reading (Hoover & Gough, 1990), and the level of reading comprehension obtained depends also upon the adequacy of the lower level processing in addition to language comprehension.

Comparing the results of listening comprehension to reading comprehension offers the capacity to define those children who have a major problem only at the level of print. They will perform well on the listening comprehension tasks, using their impressive general language skills to answer questions about a story read to them. On the reading comprehension task, however, they will do relatively poorly as their under-developed decoding skills prevent them bringing into play their well-developed general language skills.

When required to decode a passage unassisted, these students struggle, as do their garden-variety peers -- those with a non-modular, broad-based reading problem (Stanovich, 1988b). On the other hand, the garden-variety students would be expected to perform similarly on both tasks. Their reading problems are general rather than specific, and they have more than just one or two reading subskills restricting their development. Their decoding skill is commensurate with their other language skills, such that if they know the meaning of a word (or phrase, or sentence), they can comprehend it whether it is presented orally or in print. The consequence for the high listening comprehension-low reading comprehension child should be intensive assistance at the decoding level. For the low listening comprehension-low reading comprehension child, intensive assistance at both the decoding and comprehension levels is indicated.

Other possible outcomes are high listening comprehension-high reading comprehension, a result predictable from an all-round good reader; and low listening comprehension-high reading comprehension, a rare result, possibly from a student with acute attentional, hearing, or short-term memory problems. In this case, the permanence of text would allow the student to use his intact language comprehension skills; whereas, the ephemeral nature of the spoken story precludes such access. Hyperlexic students (a less common sub-group with excellent word recognition but poor reading comprehension) would not be detected by this discrepancy analysis, because their listening comprehension parallels their reading comprehension (Sparks, 1995). Hyperlexic students should not be confused with the oft seen older struggling reader who may appear to decode adequately, and have only under-developed comprehension skills. These latter students usually have a long history of inadequate decoding skills and fluency -- a history that has compounded across domains. Despite some apparent improvement in decoding, their fluency tends to remain problematic.
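A minimal sketch of the four listening-reading profiles just described, with the judgement of what counts as adequate comprehension deliberately left to the assessor, might look like this:

```python
# Illustrative only: the thresholds that decide "adequate" listening or
# reading comprehension are a clinical judgement, not encoded here.
def comprehension_profile(listening_ok: bool, reading_ok: bool) -> str:
    """Map the listening/reading comprehension comparison to the profiles
    described in the text."""
    if listening_ok and not reading_ok:
        return ("Specific print-level (dyslexic-type) profile: "
                "intensive assistance at the decoding level.")
    if not listening_ok and not reading_ok:
        return ("Garden-variety profile: intensive assistance at both the "
                "decoding and comprehension levels.")
    if listening_ok and reading_ok:
        return "All-round adequate reader."
    return ("Rare profile (high reading, low listening): consider attentional, "
            "hearing, or short-term memory explanations.")

print(comprehension_profile(listening_ok=True, reading_ok=False))
```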

This listening comprehension - reading comprehension discrepancy represents an alternative definition of the group sometimes described as dyslexic; however, as with the IQ discrepancy-defined dyslexic, an issue is how great a discrepancy should be considered significant. Some have considered a two-year discrepancy to be very significant (Anderson, 1991), given the extent of commonality of the tasks. However, this is clearly an arbitrary figure, its significance being greater the younger the child. Nor does the classification carry major practical significance, since the intervention techniques employed include systematic synthetic phonics instruction whether the difficulty is described as dyslexic or garden-variety. The dyslexic classification can, however, sensitise teachers to the possibility that dyslexic students may be more treatment-resistant (Berninger & Abbott, 1994) than garden-variety students, and may also require more intensive or extended phonologically-based instruction if their progress in a systematic synthetic phonics program appears to be unsatisfactory despite it being appropriately taught.

Although not covered here, a full assessment would also include a measure of spelling, and one of written expression. Spelling is closely related to reading – correlations between 0.66 and 0.90 have been reported (Malatesha, Joshi, Treiman, Carreker, & Moats, 2008). Early spelling errors can provide a means of understanding how young students make use of letter/sound correspondence – Ehri (2000) described reading and spelling as two sides of a coin, in that each requires an understanding of the alphabetic principle and spellings of specific words. Unfortunately, few of the studies conducted on reading disabilities include measures of spelling to assist teachers of struggling students with this important aspect of literacy (Joshi, Treiman, Carreker, & Moats, 2009).

Teaching writing has benefits in enhancing reading, and thus assessment of written expression is also relevant (Todd et al., 2011). The relationship is noted in the following quotations:

A large body of data supports the view that movement plays a crucial role in letter representation and suggests that handwriting contributes to the visual recognition of letters. … After training, we found stronger and longer lasting (several weeks) facilitation in recognizing the orientation of characters that had been written by hand compared to those typed. Functional magnetic resonance imaging recordings indicated that the response mode during learning is associated with distinct pathways during recognition of graphic shapes. Greater activity related to handwriting learning and normal letter identification was observed in several brain regions known to be involved in the execution, imagery, and observation of actions, in particular, the left Broca's area and bilateral inferior parietal lobules. Taken together, these results provide strong arguments in favour of the view that the specific movements memorized when learning how to write participate in the visual recognition of graphic shapes and letters (Longcamp, Zerbato-Poudou, & Velay, 2005, p.67).

“The empirical evidence that the writing practices described in this report strengthen reading skills provides additional support for the notion that writing should be taught and emphasized as an integral part of the school curriculum. Previous research has found that teaching the same writing process and skills improved the quality of students’ writing (Graham and Perin, 2007a; see also Graham, in press; Rogers and Graham, 2008) and learning of content (as demonstrated in Graham and Perin [2007a] and Bangert-Drowns, Hurley, and Wilkinson [2004]). Students who do not develop strong writing skills may not be able to take full advantage of the power of writing as a tool to strengthen reading” (Graham & Hebert, 2010, p.29).

In the RMIT Clinic the test employed most frequently is the Test of Written Language - 3rd Edition (TOWL-3; Hammill & Larsen, 1996). It has eight subtests: Vocabulary -- write a sentence including a specified word; Writing style -- writing sentences from dictation using proper punctuation and capitalization; Spelling -- writing sentences from dictation, with spelling also assessed; Logical sentences -- edit an illogical sentence so it makes sense; Sentence combining -- several short sentences are combined to make one grammatically correct sentence; Contextual connections -- write a story about a picture (capitalization, spelling, punctuation and other writing elements are assessed); Contextual language -- the story from the previous subtest is evaluated for vocabulary, sentence construction and grammar; Story construction -- the story is evaluated for quality of plot, prose, development of characters, interest to the reader, and other compositional aspects. There is a newer version (TOWL-4) available.

Assembling the results of the various assessments for a particular student leads to some interpretation of cause and possible interventions. Below is an example of the frameworks used in the RMIT Clinic to assist this judgement. It represents a way of looking at a student’s scores, in this case a student diagnosed with dyslexia, and offering intervention appropriate to the diagnosis, which in turn is based upon the assessment findings.

Figure 1. An example of the framework used in the RMIT Clinic for interpreting a student’s assessment results.

Interventions

One purpose for the fine-grained, domain-specific assessments described in this paper is to enable any intervention to be assigned precisely to the area that is seen to be impeding student progress. This targeting enables a more efficient use of scarce school resources, and increases the likelihood of rapid progress for the student. If identified and addressed early, impediments to progress can be removed before the debilitating Matthew Effects (Stanovich, 1986) are able to force the student into the familiar and depressing downward trajectory.

Consistent with research findings (Adams, 1990; Foorman, 1995; Perfetti, 1992), good results for decoding and fluency intervention at the RMIT Clinic have come from programs with a strong synthetic phonics emphasis that involve explicit, carefully planned instructional sequences, such as Reading Mastery (Engelmann & Bruner, 1988), Teach Your Child to Read in 100 Easy Lessons (Engelmann, Haddox, & Bruner, 1983), and the Corrective Reading Program – Decoding strand (Engelmann, Meyer, Carnine, & Johnson, 1999). Also enhancing reading development are programs focusing on spelling, such as Spelling Mastery (Dixon, Engelmann, Bauer, Steely, & Wells, 1998); and on writing, such as Expressive Writing (Engelmann & Silbert, 1983) and Reasoning and Writing (Engelmann & Silbert, 1991). For comprehension problems, Language for Learning (Engelmann & Osborn, 1999) and the Corrective Reading Program – Comprehension strand (Engelmann, Haddox, Hanner, & Osborne, 1989) have proved to be valuable teaching agents.

The scripted nature of these programs is a great benefit when training parents to work effectively with their children. Although this Clinic role can never hope to change the system that creates/maintains student reading failure, it does provide parents with the tools to help them compensate for the weakness of their school system.

At the system level, parents cannot be expected to be responsible for their children’s literacy development. The results of reports from Australia, the USA, and Great Britain have been remarkably consistent about what is needed. The next major step involves policy reform that takes as its primary source the scientific theory of reading development and empirically validated approaches that incorporate this theory. When combined with wisely used, salient assessment instruments, the potential for the education system to enter a self-sustaining improvement cycle is very exciting. All that is needed is for the education industry and political bureaucracies to see the light. As a first step, they might devise explicit, measurable standards, and insist upon close, transparent progress monitoring to evaluate instructional adequacy. Straightforward really!

So, what might a report based upon this approach look like?

PSYCHO-EDUCATIONAL ASSESSMENT REPORT

Client’s Name:                                      Jacob Smith

Date of Birth:                                        24.01.94

Age at Assessment:                             15 years, 5 months

Parent’s Name:                                     Elly and Jim

School:                                                 Mid West Specialist School

Year Level:                                           9

Teacher’s Name:                                  Mr Jones

Assessments Conducted:                     Wechsler Intelligence Scale for Children – Fourth Edition

(WISC-IV)

- Comprehensive Test of Phonological Processing (CTOPP)

- Test of Word Reading Efficiency (TOWRE)

- Wide Range Achievement Test – Revision 3 (WRAT-3) - Spelling

- Oral Reading Fluency (AIMSweb)

- Brigance Comprehensive Inventory of Basic Skills

- Reading Comprehension, Listening Comprehension & Sentence Writing

- Writing Fluency (Zaner-Bloser Scale)

- Placement Tests for Corrective Reading, Spelling Mastery

Examiners:

Kate Devine B.App.Sci. (Psychology)  Provisional Psychologist (Hons)                    

Assoc. Prof Kerry Hempenstall B.Sc., Dip.Ed., Dip.Ed.Psych., Dip.Soc.Studs., PhD, MAPSs -  Case Supervisor

                                                            

                                                           

Referral Information:

Jacob was referred to the RMIT University Psychology Clinic by his mother, Elly, to determine his current level of cognitive and academic functioning. According to Elly, Jacob’s educational needs and development of life skills would be better met by attending a special school, and she is therefore seeking support for him. Jacob was initially referred to the RMIT University Psychology Clinic in June 2008, at which time an initial interview was conducted with Dr Kerry Hempenstall and Jean Laurent (Provisional Psychologist). At this time, Elly noted that she wanted Jacob re-assessed for an Autism Spectrum Disorder, and the family was referred to the Autism Assessment Team. Elly also noted that she is getting support from Jill Felps, a solicitor from the Disability Discrimination Legal Service, who had recommended the RMIT University Psychology Clinic.

Background Information:

Family

Jacob is 15 years and 2 months old and lives at home with his mother, Elly, his father, Jim, his sister Faye (16 years), and his brother Tony (9 years). Jim runs a bricklaying company and Elly manages the bookkeeping for the business and their rental properties.

Developmental/Medical History

According to Elly, Jacob’s placenta was not expelled naturally at birth. Jacob’s birth, as noted by Elly, was also surrounded by marital stress. Jacob was reported to be walking by 12 months of age. He was described as “quiet”, spoke words at 2 years of age and short sentences at 3 years of age. In terms of dietary intake, Jacob was reported to have been on several diets in the past in an attempt to overcome his behavioural difficulties. According to Elly, Jacob is currently taking Risperidone daily to assist with these problem behaviours. Jacob has also been diagnosed with an Autism Spectrum Disorder and ADHD in the past by Dr M. Berle, a consultant child psychiatrist. Jacob has a long history of bed wetting and has sleep difficulties, according to Elly.

Educational Background

Jacob attended Mid West Primary School from Prep to Grade 6. Jacob had an aide from Prep (he received additional support through the Program for Students with a Disability, under the criterion of Severe Behaviour Disorder) and was reported to have “always” experienced difficulties with the school curriculum. Jacob was described as having “no friends”, and was reported to interact only with his teachers throughout his school years.

Following completion of primary school, Jacob made the transition into Smithfield Secondary College. According to Elly, Jacob was subjected to bullying and harassment, which precipitated her removing Jacob from school after Term Three. Jacob did not attend school for a considerable amount of time (over 12 months), until he was enrolled in the Mid West Specialist School. Elly described a tiresome process of having Jacob accepted into the Specialist School, and emphasised that she wants Jacob to learn basic life skills, develop social skills, and be in an environment where he is not going to “get bullied”. Elly noted that when Jacob made the transition into the Specialist School, he felt as though he “belonged” (e.g., other children would say hello to him), and he also displayed calmer behaviours.

Elly reported that, since beginning at the Specialist School, Jacob has been suspended for extended periods of time (the most recent was reported to be 4 weeks) due to his difficult behaviours. Elly indicated that Jacob has difficulties regulating his emotions and needs one-on-one support in the classroom to assist him with being able to control his frustration. According to Elly, Jacob is treated differently to other children within the Specialist School (in terms of the consequences of his behaviour), and she believes that the antecedents that precipitate Jacob’s challenging behaviours (e.g., what another child may have said) are often overlooked.

Behavioural Observations:

Jacob was assessed in 2009. The WISC-IV and educational assessments were conducted over one day due to the family travelling from afar. Jacob was administered 0.75ml of Risperidone on the morning of the assessment, which is consistent with his usual daily dose. Jacob was generally cooperative during the assessments and applied himself during most activities. Over the course of the day, regular breaks were provided. Jacob showed little eye contact over the course of the assessment, and at times made comments which deviated from the topic (for example, when Jacob was asked to take his sunglasses off and asked about the colour of his eyes he responded by saying, “mouldy coloured” and “toxic dump”).

At times, Jacob became distracted and inattentive (for example, looking around the room); however, he returned to the task after being instructed and praised by the clinician. Jacob often became frustrated by some of the tasks presented to him (for example, silent reading comprehension, and spelling), as demonstrated by him banging his hand on the desk and making comments such as “I hate it when people don’t understand my writing” in an angry and frustrated tone.

Assessment Results:

 Intellectual Assessment

The Wechsler Intelligence Scale for Children (WISC-IV) was used to determine Jacob’s current level of intellectual functioning. The WISC-IV contains 10 individual tests that measure a variety of skills and abilities thought to be important in overall intellectual functioning. The 10 individual tests assess four areas of intellectual functioning: Verbal Comprehension, Perceptual Reasoning, Working Memory, and Processing Speed. Scores from each test are combined to give a total, or Full Scale, IQ score. The Full Scale IQ score is considered to be the best measure of cognitive ability and is a strong predictor of academic achievement. Jacob’s unique set of thinking and reasoning abilities makes his overall level of intellectual functioning difficult to summarize by a single, Full Scale, IQ score. Due to the variability in the four indices that comprise this full-scale score, Jacob’s overall level of intellectual ability cannot be interpreted meaningfully, and it would not be a valid measure of general intelligence. This is a characteristic of intellectual assessment tools that attempt to produce an estimate of general intelligence through a range of multiple subtests assessing different elements thought to comprise intelligence. If the results diverge too widely, then simply averaging them does not produce a valid IQ.

There is also a procedure for obtaining a variant of the IQ, called the General Ability Index (GAI). When the Full Scale IQ cannot be reported, it is recommended that the GAI be used as long as the Verbal Comprehension index and the Perceptual Reasoning index do not differ by 23 points or more. However, this criterion was not met, as the difference between Jacob’s Verbal Comprehension and Perceptual Reasoning index scores exceeded 23 points. Therefore, Jacob’s intellectual ability is best understood in terms of the four WISC-IV indices, namely Verbal Comprehension, Perceptual Reasoning, Working Memory, and Processing Speed.

The Verbal Comprehension Index assessed how well Jacob comprehends verbal information and expresses himself verbally, and also provided information about Jacob’s verbal reasoning, concept formation, learning, and memory. Jacob’s Verbal Comprehension score is equal to or better than 6% of his same age peers. This places his performance in the Borderline range.

The Perceptual Reasoning Index assessed Jacob’s visual perception and visual-motor coordination, and his nonverbal reasoning, learning and memory. Jacob’s Perceptual Reasoning score is equal to or better than 70% of his peers, which places his performance in the upper end of the Average range.

The Working Memory Index assessed how well Jacob is able to attend to, hold and manipulate information in his short-term memory. Jacob’s working memory abilities are equal to or better than 9% of his peers, placing his performance in the lower end of the Low Average range.

The Processing Speed Index assessed how quickly Jacob can process simple visual material without making errors. Jacob’s score on processing speed is below the 1st percentile, placing his performance in the lower end of the Extremely Low range. Jacob’s scores on the WISC-IV are summarized in the table below.

 

Classification bands: Extremely Low, Borderline, Low Average, Average, High Average, Superior, Very Superior.

Verbal Comprehension: Borderline

Perceptual Reasoning: Average (upper end)

Working Memory: Low Average (lower end)

Processing Speed: Extremely Low

Full Scale IQ: Uninterpretable

General Ability Index: Uninterpretable

It is important to remember that full scale scores on cognitive tests such as the WISC-IV reflect various problem solving abilities and retained facts, and are usually reasonably good predictors of learning and academic success.

When results of the intellectual assessment vary as dramatically as do those of Jacob, and an IQ is not able to be determined, it is necessary to consider his academic skills to determine the degree to which intellectual factors, such as his very low processing speed and low verbal comprehension, may have contributed to his current circumstances.

It should be noted that there are also many other qualities in an individual’s profile of potential that have not been investigated as part of this assessment. For instance, factors such as motivation, creative talent, curiosity, work habits, and study skills are not ascertained by this assessment.

As will become clear from the educational assessment, functionally Jacob has the same needs as a student whose low full scale IQ score invites eligibility for assistance from programs for students with disability. As such he should be treated in the same manner as a student who does clearly meet eligibility criteria, despite Jacob’s uninterpretable WISC-IV assessment.

Full scale IQ is not as strong a predictor of literacy success as is often believed. Phonological processing is a much better predictor of reading ability.

Academic Assessment

Phonological Processing

The Comprehensive Test of Phonological Processing (CTOPP) was administered to assess Jacob’s ability to use his knowledge of sounds to process oral and written language. The CTOPP assesses phonological awareness, rapid naming, and phonological memory. A deficit in one or more of these kinds of processing is viewed as the most common cause of learning disabilities in general, and of reading disabilities in particular. Phonological processing refers to the use of phonological information, especially the sound structure of one’s oral language, in processing written language (i.e., reading, writing), and oral language (i.e., listening and speaking). In addition to their role in reading, phonological processing abilities also support writing, spelling, and mathematics.

Phonological Awareness

Phonological awareness refers to an individual's ability to recognize that spoken words are composed of individual sounds. Two subtests from the CTOPP were administered to assess Jacob's phonological awareness: Elision and Blending Words. The Elision test assessed Jacob's ability to break words down into their component sounds. Jacob was required to repeat a word with one phoneme deleted (e.g., say "time"; now say "time" without the /m/). Jacob's performance on this test was at the 25th percentile. The Blending Words test assessed how well Jacob is able to blend individual sounds together to make words. His performance on this test was also at the 25th percentile. Overall, these two results indicate that Jacob's phonological awareness skills are equal to or better than 21% of his same age peers, which is in the Low Average range.

Rapid Naming

Apart from the common difficulty that struggling readers have with phonemic awareness, some are also characterised by another highly specific deficit in speech and language development: difficulty in rapidly naming visually presented material. These individuals have significant difficulty with rapidly retrieving and accessing names for visual material, even though the relevant names are known to them.

A number of researchers have noted the predictive power of naming speed tasks in assessing reading difficulty; such tests use pictures, numbers, and letters as stimuli. Both naming speed and sight word reading depend on rapid, automatic symbol retrieval from one's mental store. Efficient retrieval of phonological information and execution of sequences of operations are required when readers attempt to decode unfamiliar words. A lack of fluency in reading is a likely consequence of problems in this area, as are resultant comprehension problems.

Two subtests from the CTOPP were administered to assess rapid naming: Rapid Digit Naming and Rapid Letter Naming. The Rapid Digit Naming test assessed Jacob's ability to read numerals from a list as quickly as possible. Jacob's score on this test was below the 1st percentile. The Rapid Letter Naming test assessed Jacob's ability to read alphabetic letters from a list as quickly as possible. Jacob's performance on this test was also below the 1st percentile. Together, these results place Jacob's Rapid Naming composite below the 1st percentile, indicating that he has a significant deficit in this important area. This result is consistent with his extremely low score on the Processing Speed index of the WISC-IV.

Phonological Memory

Phonological memory refers to the ability to hold and manipulate phonological information in short-term memory. A deficit in phonological memory can impair one's ability to decode new words and can also impair both listening and reading comprehension. Two subtests from the CTOPP were administered to assess Jacob's phonological memory: Memory for Digits and Nonword Repetition. On Memory for Digits, Jacob was required to repeat a group of digits that were read aloud. Jacob performed at the 2nd percentile. On Nonword Repetition, a series of non-words were read aloud, and Jacob was asked to repeat them. Jacob achieved a score at the 37th percentile. Combined, these results indicate that Jacob's phonological memory is equal to or better than 8% of peers his age, which is in the Borderline range. This result is consistent with Jacob's score on the short-term memory task of the WISC-IV.

Reading Fluency

The Test of Word Reading Efficiency (TOWRE) was used to assess Jacob's word reading abilities. This test allows the assessment of speed and accuracy in word reading, known as reading fluency. The ability to recognize letters or words automatically enables a student to allocate more attention to understanding what they are reading. Thus, reading fluency is strongly associated with reading comprehension, and a timed test provides more information than does the untimed reading of word lists.

The TOWRE contains two subtests to assess two important aspects of reading fluency – the ability to rapidly recognize familiar words, and the ability to sound out unfamiliar words. The Sight Word Efficiency subtest assesses the number of real printed words that can be read correctly within 45 seconds. The Phonemic Decoding Efficiency subtest measures the number of pronounceable printed non-words that can be accurately decoded within 45 seconds.

Jacob's performance on the Sight Word Efficiency subtest is below the 1st percentile, demonstrating that he has extreme difficulty correctly identifying printed words in a timely manner. Given this, it is understandable that Jacob is having difficulties reading, as he is unable to recognize a considerable number of words.

Jacob also achieved a Phonemic Decoding Efficiency score below the 1st percentile. This means that he also struggles with new words (those he has not seen before), because he has not developed the capacity to attack words. His ability to use phonic strategies is under-developed, and this could be an area for intervention, as it is pivotal to his progress in reading and, progressively, in all areas of the curriculum.

Taking these results together, Jacob achieved a Total Word Reading Efficiency score below the 1st percentile. These results suggest that without intensive intervention Jacob will continue to have difficulties decoding both familiar and unfamiliar words as he progresses through schooling. This will limit his ability to comprehend the nature of the texts he is required to read.

The AIMSweb was also used to assess Jacob's oral reading fluency on grade level (Year 8) text. His reading rate of 22 words read correctly per minute (WCPM) is considerably below that of his same age peers, for whom the average is reported to range between 106 and 161 WCPM. The top 10% of Year 8 readers read above 200 WCPM, while the bottom 10% of Year 8 readers read below 97 WCPM. When provided with an easier (Year 3) passage, his fluency was measured at 32 words correct per minute, whereas the average Year 3 range is reported as between 66 and 114 WCPM.

Taken together, these results consistently show extremely low levels of reading attainment. An important foundation of reading success – the capacity to get the words off the page accurately and speedily – has barely begun to develop.

Comprehension

Silent Reading Comprehension

The capacity to comprehend is the ultimate outcome of reading development. It is the ability to understand the meanings of individual printed words and connected text. The Silent Reading Comprehension subtest of the Brigance Comprehensive Inventory of Basic Skills was used to assess Jacob's ability to understand written material. This task requires the respondent to read passages silently, and then answer questions related to the passages, progressing from level to level until fewer than 4 of the 5 questions at a level are answered correctly. Jacob was able to read at no higher than the Lower Third Grade level on this assessment. This result indicates that Jacob is reading well below his expected level, and is having extreme difficulty extracting meaning from written text.

Listening Comprehension

The Listening Comprehension subtest of the Brigance Comprehensive Inventory of Basic Skills was used to assess Jacob's ability to understand orally presented information. This subtest requires the student to listen to a short story, and then answer oral questions directly related to the story. It tests the student's ability to identify the main ideas of a story, remember the story sequence, and understand cause and effect. It gives an indication of the student's ability to comprehend and remember, but without the additional requirement of having first to decode the words. Results indicated that Jacob's listening comprehension was equivalent to that of an Upper Second Grade student.

While the closeness of the results makes interpretation speculative (the difference may simply reflect the tests' standard error of measurement), these latter two results could be viewed as suggesting that Jacob has a slightly better capacity to understand and deal with information when it is presented in text format than when it is delivered orally. This is sometimes observed in students who are distractible. Their listening attention is more vulnerable to distraction than their reading attention, perhaps because the concreteness of print helps maintain focus. Additionally, the student can refer back to the story for the answer to a reading comprehension question, but not for a listening comprehension question. Despite Jacob having a slightly better result when working with information presented to him in text, he has extreme difficulties in doing this, and his ability to read and comprehend a passage is well below that expected of a Year 9 student.

Spelling

Spelling skills are closely related to phonological and reading skills, and are often considered the other side of the same coin. The Spelling subtest from the Wide Range Achievement Test – Revision 3 (WRAT-3) was used to assess Jacob's ability to spell orally presented words. Jacob performed within the 1st percentile for his age, which equates to the extremely low range. It is not unusual for spelling to be in this range when there is below average word reading ability, as these two related skills usually develop closely together. To illustrate the difficulties that Jacob had, the following examples are provided: whach for watch; kichin for kitchen; ejucat for educate.

Sentence Writing

The Sentence Writing subtest of the Brigance Comprehensive Inventory of Basic Skills was used to assess Jacob's capacity to write comprehensible, grammatically correct sentences. Results indicated that Jacob's sentence writing ability was equivalent to that of a Grade Four student. Although not formally assessed, Jacob appears to have difficulties with his fine motor skills, as demonstrated by his handwriting.

Writing Fluency

Unless handwriting is automated, the cognitive load required for the physical act of writing can interfere with more complex processes that require conscious thought, such as developing and sequencing ideas, and monitoring the accuracy and clarity of expression. The Zaner-Bloser Scale: Writing Fluency was used to assess Jacob's ability to write as many letters of the alphabet as possible in order – lower case letters first, followed by the upper case letters – within 60 seconds. Jacob was able to complete 24 letters in 60 seconds. This is well below the average of 80 letters per minute for a student of Jacob's age, and similar to the writing fluency results (an average of 28 letters per minute) reported in a 2005 study of struggling Year 8 students by Christensen.

Summary

Jacob, aged 15 years and 2 months, was referred to the RMIT University Psychology Clinic for a cognitive and academic assessment to support his application for entry to a special school. Consistent with Elly's report, the results of this assessment suggest that Jacob has a range of academic difficulties, particularly with his verbal skills, processing speed, working memory and reading. His overall cognitive ability on the WISC-IV cannot be summarised with one score because of the discrepancy between his nonverbal reasoning abilities (e.g., Picture Concepts, Block Design) and his verbal reasoning skills (e.g., Comprehension). Jacob's non-verbal reasoning abilities are at the upper end of the Average range, while his verbal abilities are in the Borderline range. Jacob's performance on Processing Speed is at the lower end of the Extremely Low range, and his working memory abilities are in the Low Average range. These results are consistent with previous cognitive assessments. Assessment of his academic skills revealed that Jacob has profound deficits in reading comprehension, spelling, and written expression. Taken together, these results indicate that Jacob has exceptional learning needs and requires appropriate educational resources and support. Given the late stage of his educational career and the degree of his educational needs, he should be treated as though he meets the normal eligibility criteria for programs for students with disabilities.

Recommendations

Based upon discussions with Elly, and with consideration of Jacob's current assessment results, the following recommendations are made for your consideration:

  • Given the degree of discrepancy between Jacob's verbal and non-verbal abilities, he is likely to continue to have significant difficulties in his academic functioning without appropriate support. Jacob's educational needs are profound, and he is a student at extreme risk.
  • Based upon the results of the assessments, it is likely that Jacob would benefit from a highly structured and reinforcing literacy program tailored to his various areas of difficulty. Recognising that there are several literacy areas of need, the question arises as to how best to intervene. The research into students with a variety of problems in making progress suggests that all students are capable of learning if the learning environment is sufficiently supportive. It appears from previous assessments that Jacob does not have a strong innate capacity to manage his own learning; thus, student-centred approaches are unlikely to be optimal. Instead, Jacob requires a highly structured environment in which every component sub-skill of a valued curriculum outcome is presented systematically, and practiced assiduously to mastery and beyond, to ensure retention.
  • In addressing the decoding area, it is recommended that Jacob commence the SRA Corrective Reading: Decoding program. This program has achieved considerable success at the RMIT University Psychology Clinic and in schools. It is designed to be presented five times per week, taking approximately 30-40 minutes per session, and is available at the RMIT University Psychology Clinic, where training and support in its use are provided. Placement testing indicates that he should commence at Level A.
  • The Corrective Reading: Comprehension program is also recommended. It is designed to be presented five times per week, taking approximately 30 minutes per session. Placement testing indicates that he should commence at Level A. The training, presentation and follow-up for this program are similar to those for the Corrective Reading: Decoding program.
  • The Spelling Mastery program would also be beneficial in addressing Jacob's spelling difficulties, while simultaneously supporting his reading progress. Placement testing indicates that he should commence at Level A. Again, the training, presentation and follow-up for this program are similar to those for the Corrective Reading programs.
  • When time permits, the appropriate level of Reasoning and Writing should be considered as an aid to Jacob’s capacity to express himself in writing. Placement testing is available to determine the appropriate level.
  • Ideally, this exceptionally intensive intervention would be achieved through negotiation between home and school to spread the load. In some cases parents have provided the program(s) at home, and in other situations they have liaised with school staff to implement the program(s) in the school setting, through daily withdrawal from class. Sometimes, teacher aides or school volunteers have accepted the challenge of addressing a child's literacy requirements. Training can be provided to parents, volunteers and teachers to implement these programs, which can be delivered in either an individual or group format. At least one complete Clinic session is devoted to training the program presenter for each of the programs to be implemented, after which the programs are commenced. Follow-up monitoring is provided, initially weekly and then fortnightly, until the programs are completed. Thus, apart from the initial training, the Clinic monitors the progress of the student, and thereby the skills of the presenter, providing ongoing support and pre- and post-test evaluation. The intensity, scope, and duration of this proposed intervention are unusual, but so is the extent of Jacob's needs.
  • The research on students with high levels of educational need highlights the concepts of intensity and duration. Given Jacob's age, one would anticipate that literacy instruction would need to be provided for more than 2 hours per day, and would be expected to last until the completion of his schooling. Additionally, careful monitoring (at least weekly) is required to ensure that time is being used effectively, and that the chosen programs are being presented faithfully.

How long will it take?

  • In addressing the decoding area, it is recommended that Jacob participate in Corrective Reading: Decoding A daily for about 30 minutes per session. This should take about 6 months, after which Corrective Reading: Decoding B and C should be implemented. The complete Corrective Reading: Decoding program has a total of 315 lessons, and should require about 18-24 months of instruction.
  • The Corrective Reading: Comprehension program (Levels A and B, 185 lessons) is designed to be presented five times per week, taking approximately 30 minutes per session. This requires about 12 months of instruction.
  • At the same time, the Spelling Mastery Levels A, B, and C (300 lessons) program would be beneficial in addressing Jacob's spelling difficulties, while simultaneously supporting his reading progress. This requires about 18-24 months of instruction. The training, presentation and follow-up for this program are similar to those for the Corrective Reading programs.
  • At school, Jacob will require intensive, systematic and individualised teaching if he is to improve his grasp of academic skills. Jacob will also need substantial accommodations to help him meet the demands of the school curriculum. An accommodation is a change that allows students to utilise their learning strengths, precluding or diminishing the limiting effects of their difficulties. For example, Jacob may require alternative arrangements for accessing written material in textbooks, and alternatives to note taking, written composition, and standard ways of taking tests. Accommodations may also include extra time to complete tasks, having instructions repeated or reworded, and receiving instructions both orally and in writing. He may also require modification to curriculum content across most subjects.
  • During the course of the assessment, it became evident that Jacob responded well to positive behaviour support. As such, it is important to provide Jacob with opportunities for success and to use positive reinforcement strategies to promote appropriate behaviours.
  • It is possible that Jacob's behavioural and emotional difficulties are additional hurdles to his progress at school. The heavy emphasis on skill development may produce resistance from him, and the task ahead for the school should not be under-estimated. Given this, Jacob may benefit from further assessment and ongoing support from a healthcare professional closer to the family's area to assist with his reported problem behaviours and difficulties with regulating emotions. The Australian Psychological Society website may be of assistance when trying to identify an appropriate source of support close to the family home (http://www.psychology.org.au/).

Kate Devine
B.App.Sci. (Psychology) (Hons)
Provisional Psychologist

Assoc. Prof. Kerry Hempenstall
B.Sc., Dip.Ed., Dip.Ed.Psych., Dip.Soc.Studs., PhD, MAPS
Case Supervisor

 ---------------------------------------------------------------------------------------------------------------

References

ACER (2010). PISA in Brief. Highlights from the full Australian Report: Challenges for Australian Education: Results from PISA 2009. Retrieved from http://www.acer.edu.au/documents/PISA-2009-In-Brief.pdf

ACER (2012). ACER releases results from latest international studies of student achievement. Australian Council for Educational Research. Retrieved from http://www.acer.edu.au/media/acer-releases-results-from-latest-international-studies-of-student-achievem

Adams, M. J. (1990). Beginning to read: Thinking and learning about print. Cambridge, MA: MIT Press.

Adams, M. J., Foorman, B. R., Lundberg, I., & Beeler, T. (1998). Phonemic awareness in young children. Baltimore, MD: Brookes Publishing.

Adkins, D., Kingsbury, G. G., Dahlin, M., & Cronin, J. (2007). The proficiency illusion. Thomas B. Fordham Institute. Retrieved from http://www.edexcellence.net/publications/theproficiencyillusion.html

Al Otaiba, S., Connor, C., Lane, H., Kosanovich, M. L., Schatschneider, C., Dyrlund, A. K., Miller, M. S., & Wright, T. L. (2008). Reading First kindergarten classroom instruction and students' growth in phonological awareness and letter naming–decoding fluency. Journal of School Psychology, 46(3), 281-314.

Alessi, G. (1988). Diagnosis diagnosed: A systemic reaction. Professional School Psychology, 3, 145-151.

Anderson, V. (1991). The neuropsychology of learning disabilities: Assessment, diagnosis, and treatment. Unpublished doctoral dissertation. University of Melbourne.

Apel, K., & Swank, L. K. (1999). Second chances: Improving decoding skills in the older student. Language, Speech & Hearing Services in Schools, 30, 231-243.

Australian National Audit Office. (2012). National Partnership Agreement on Literacy and Numeracy. Retrieved from http://www.anao.gov.au/Publications/Audit-Reports/2011-2012/National-Partnership-Agreement-on-Literacy-and-Numeracy/Audit-brochure

Badian, N. A. (1994). Preschool prediction: Orthographic and phonological skills, and reading. Annals of Dyslexia, 44, 3-25.

Bates, C., & Nettlebeck, T. (2001). Primary school teachers’ judgements of reading achievement. Educational Psychology, 21(2), 179-189.

Beck, I. L., & McKeown, M. G. (1991). Social studies texts are hard to understand: Mediating some of the difficulties. Language Arts, 68, 482-490.

Beck, I. L., Perfetti, C. A., & McKeown, M. G. (1982). The effects of long-term vocabulary instruction on lexical access and reading comprehension. Journal of Educational Psychology, 74, 506-521.

Berninger, V. W., & Abbott, R. D. (1994). Redefining learning disabilities. Moving beyond aptitude-achievement discrepancies to failure to respond to validated treatment protocols. In G. Reid Lyon (Ed.), Frames of reference for the assessment of learning disabilities. New views on measurement issues (pp. 163-184). Baltimore, MD: Brooks Publishing.

Binder, C., Haughton, E., & Bateman, B. (2002). Fluency: Achieving true mastery in the learning process. Professional Papers in Special Education. VA: University of Virginia Curry School of Education. Retrieved 1/2/2003 from http://curry.edschool.virginia.edu/sped/projects/ose/papers/

Bolt, S. (2011). Making consistent judgments: Assessing student attainment of systemic achievement targets. The Educational Forum, 75(2), 157-172.

Bowers, P. G. (1995). Tracing symbol naming speed's unique contributions to reading disabilities over time. Reading and Writing: An Inter-Disciplinary Journal, 7, 189-216.

Bowers, P. G., & Wolf, M. (1993). Theoretical links among naming speed, precise timing mechanisms and orthographic skill in dyslexia. Reading and Writing: An Interdisciplinary Journal, 5(1), 69-85.

Bowey, J. A., & Muller, D. (2005). Phonological recoding and rapid orthographic learning in third-grade children’s silent reading: a critical test of the self-teaching hypothesis, Journal of Experimental Child Psychology, 92, 203–219.

Bradley, L., & Bryant, P. (1983). Categorizing sounds and learning to read - A causal connection. Nature, 301, 419-421.

Brigance, A. H. (2000). Comprehensive Inventory of Basic Skills-Revised. Australia: Hawker Brownlow.

Brown, I. S. & Felton, R. H. (1990). Effects of instruction on beginning reading skills in children at risk for reading disability. Reading & Writing: An Interdisciplinary Journal, 2, 223-241.

Bruck, M. (1992). Persistence of dyslexics' phonological awareness deficits. Developmental Psychology, 28, 874-886.

California Department of Education. (1999). Reading/language arts framework for California public schools: Kindergarten through Grade Twelve. Retrieved June 3, 2000, from http://www.cde.ca.gov/cdepress/lang_arts.pdf

Campton, D. L., & Carlisle, J. F. (1994). Speed of word recognition as a distinguishing characteristic of reading disabilities. Educational Psychology Review, 6, 115 – 140.

Carnine, D. (2003, Mar 13). IDEA: Focusing on improving results for children with disabilities. Hearing before the Subcommittee on Education Reform Committee on Education and the Workforce United States House of Representatives. Retrieved July 11, 2005, from http://edworkforce.house.gov/hearings/108th/edr/idea031303/carnine.htm

Castles, A., & Coltheart, M. (1993). Varieties of developmental dyslexia. Cognition, 47, 149-180.

Castles, A., & Coltheart, M. (2004). Is there a causal link from phonological awareness to success in learning to read? Cognition, 91, 77-111.

Castles, A., Coltheart, M., Larsen, L., Jones, P., Saunders, S., & McArthur, G. (2009). Assessing the basic components of reading: A revision of the Castles and Coltheart test with new norms. Australian Journal of Learning Difficulties, 14, 67-88.

Chall, J. S. (1967). The great debate. New York: McGraw Hill.

Collier, K. (2008, October 18). The ABC of ignorance. Herald Sun, p.9.

Cornwall, A. (1992). The relationship of phonological awareness, rapid naming and verbal memory to severe reading and disability. Journal of Learning Disabilities, 25, 532-538.

Coyne, M. D., Kame'enui, E. J., & Simmons, D. C. (2004). Improving beginning reading instruction and intervention for students with LD: Reconciling "All" with "Each". Journal of Learning Disabilities, 37(3), 231-239.

Cronin, J., Dahlin, M., Adkins, D., & Gage Kingsbury, G. (2007). The proficiency illusion. Thomas B. Fordham Foundation. Retrieved October 1, 2007, from http://edexcellence.net/template/page.cfm?id=264.

Crowley, K., Shrager, J. & Siegler, R. S. (1997). Strategy discovery as a competitive negotiation between metacognitive and associative mechanisms. Developmental Review, 17, 462-489.

Cuttance, P. (1998). Quality assurance reviews as a catalyst for school improvement in Australia. In A. Hargreaves, A. Lieberman, M. Fullan., & D. Hopkins (Eds.), International handbook of educational change, Part II (pp. 1135-1162). Dordrecht: Kluwer Publishers.

Denckla, M., & Rudel, R. G. (1974). Rapid "automatized" naming of pictured objects, colours, letters and numbers by normal children, Cortex, 10, 186-202.

Department for Education and Employment. (1998). The National Literacy Strategy: Framework for Teaching. London: Crown.

Department of Education, Employment & Training. (2001). Consistency Project: Links between curriculum frameworks in Vic, SA and QLD in English. Retrieved November 11, 2004, from http://www.sofweb.vic.edu.au/assess/consist/englink.htm

Department of Education, Science and Training (DEST). (2007). Parents’ attitudes to schooling. Canberra: Australian Government. Retrieved, 16/10/2008, from http://www.dest.gov.au/NR/rdonlyres/311AA3E6-412E-4FA4-AC01-541F37070529/16736/ParentsAttitudestoSchoolingreporMay073.rtf

Dixon, R., Engelmann, S., Bauer, M. M., Steely, D., & Wells, T. (1998). Spelling Mastery. Chicago: Science Research Associates.

Dunn, L. M., & Dunn, L. M. (1997). Peabody Picture Vocabulary Test. Circle Pines, MN: American Guidance Service.

Eden, G. F., Stein, J. F., Wood, H. M., & Wood, F. B. (1995a). Temporal and spatial processing in reading disabled and normal children. Cortex, 31, 451-468.

Eden, G. F., Stein, J. F., Wood, H. M., & Wood, F. B. (1995b). Verbal and visual problems in reading disability. Journal of Learning Disabilities, 28(5), 272-290.

Ehri, L. C. (2000). Learning to read and learning to spell: Two sides of a coin. Topics in Language Disorders, 20(3), 19-36.

Ehri, L. C. (1998). Presidential address. In Joanna P. Williams (Ed.), Scientific Studies of Reading Vol. 2, (No. 2, pp. 97-114). Mahwah, NJ: Lawrence Erlbaum.

Elbro, C., Nielsen, I., & Petersen, D. K. (1994). Dyslexia in adults: Evidence for deficits in non-word reading and in the phonological representation of lexical items. Annals of Dyslexia, 44, 205-226.

Engelmann, S. & Bruner, E. C. (1988). Reading Mastery. Chicago, Ill: Science Research Associates.

Engelmann, S., Haddox, P., & Bruner, E. (1983). Teach Your Child to Read in 100 Easy Lessons. New York: Simon & Schuster.

Engelmann, S., & Osborn, J. (1999). Language for Learning. Columbus, OH: SRA/McGraw-Hill.

Engelmann, S., & Silbert, J. (1983). Expressive Writing I. Desoto, TX: SRA/McGraw-Hill.

Engelmann, S., & Silbert, J. (1991). Reasoning and Writing. Desoto, TX: SRA/McGraw-Hill.

Engelmann, S., Haddox, P., Hanner, S., & Osborne, J. (1989). Corrective Reading: Comprehension. Chicago: Science Research Associates.

Engelmann, S., Meyer, L., Carnine, L., & Johnson, G. (1999). Corrective Reading: Decoding. Chicago: Science Research Associates.

Feinberg, A. B., & Shapiro, E. S. (2009). Teacher accuracy: An examination of teacher-based judgments of students' reading with differing achievement levels. The Journal of Educational Research, 102(6), 453-462, 480.

Fehring, H. (2001). Literacy assessment and reporting: Changing practices. 12th European Conference on Reading, RAI Dublin, 1st - 4th July. Retrieved November 1, 2006, from http://sece.eu.rmit.edu.au/staff/fehring/irish.htm

Felton, R. H. (1992). Early identification of children at risk for reading disabilities. Topics in Early Childhood Special Education, 12, 212-229.

Felton, R. H. (1993). Effects of instruction on the decoding skills of children with phonological-processing problems. Journal of Learning Disabilities, 26, 583-589.

Fleming, N. (2013). NAEP shows most students lack writing proficiency. Education Week, January 13, 2013. Retrieved from http://www.edweek.org/ew/articles/2012/09/14/04naep.h32.html

Foorman, B. R. (1995). Research on "the great debate" code-oriented versus whole language approaches to reading instruction. School Psychology Review, 24, 376-392.

Fuchs, L. S., & Fuchs, D. (1992). Identifying a measure for monitoring student reading progress. School Psychology Review, 21, 45-58.

Fuchs, L. S., Fuchs, D., Hosp, M. K., and Jenkins, J. R. (2001). Oral reading fluency as an indicator of reading competence: A theoretical, empirical, and historical analysis. Scientific Studies of Reading, 5, 239-256.

Gaillard, R., Naccache, L., Pinel, P., Clemenceau, S., Volle, E., Hasboun, D., Dupont, S., Baulac, M., Dehaene, S., Adam, C., Cohen, L. (2006). Direct intracranial, fMRI and lesion evidence for the causal role of left inferotemporal cortex in reading. Neuron, 50, 191-204.

Gillam, R. B., & Van Kleek, A. (1996). Phonological awareness training and short-term working memory: Clinical implication. Topics in Language Disorders, 17(1), 72-81.

Good, R. H., & Kaminski, R. A. (2002). Dynamic indicators of basic early literacy skills (6th ed.). Eugene, OR: Institute for Development of Educational Achievement.

Goodman, K. S. (1986). What's whole in whole language. Richmond Hill, Ontario: Scholastic.

Graham, S., & Hebert, M. A. (2010). Writing to read: Evidence for how writing can improve reading. A Carnegie Corporation Time to Act Report. Washington, DC: Alliance for Excellent Education. Retrieved from http://carnegie.org/fileadmin/Media/Publications/WritingToRead_01.pdf

Greenberg, D., Ehri, L. C., & Perin, D. (1997). Are word reading processes the same or different in adult literacy students and third-fifth graders matched for reading level? Journal of Educational Psychology, 89, 262-275.

Gresham, F. (2001). Responsiveness to intervention: An alternative approach to the identification of learning disabilities. In R. Bradley, L. Danielson, & D. P. Hallahan (Eds.), Identification of learning disabilities: Research in practice (pp. 467-519). Mahwah, NJ: Erlbaum.

Grossen, B. (1997). A synthesis of research on reading from the National Institute of Child Health and Human Development. Retrieved September 1, 2008, from http://www.nrrf.org/synthesis_research.htm.

Hammill, D. D., & Larsen, S. C. (1996). Test of Written Language - 3rd Edition (TOWL-3). San Antonio, TX: Psychcorp.

Harn, B. A., Jamgochian, E., & Parisi, D. M. (2009). Characteristics of students who don't respond to research-based interventions. Council for Exceptional Children. Retrieved from http://www.cec.sped.org/AM/Template.cfm?Section=CEC_Today1&TEMPLATE=/CM/ContentDisplay.cfm&CONTENTID=10645

Hart, B., & Risley, T. R. (1995). Meaningful differences in the everyday experiences of young American children. Baltimore, MD: Paul H. Brookes.

Hart, B., & Risley, T. R. (2003, Spring). The early catastrophe: The 30 million word gap. American Educator. Retrieved April 11, 2003 from http://www.aft.org/american_educator/spring2003/catastrophe.html

Hatcher, P. J. (1994). Sound Linkage: An integrated programme for overcoming reading difficulties. London: Whurr Publishers.

Hattie, J. A., Clinton, J., Thompson, M., & Schmidt-Davies, H. (1995). Identifying highly accomplished teachers: A validation study. Greensboro, NC: Center for Educational Research and Evaluation, University of North Carolina.

Hempenstall, K. (1995). The Picture Naming Test. Unpublished manuscript. Royal Melbourne Institute of Technology.

Hempenstall, K. (1996). The gulf between educational research and policy: The example of direct instruction and whole language. Behaviour Change, 13, 33-46.

Hempenstall, K. (1998). Miscue analysis: A critique. Australian Journal of Learning Disabilities, 3(4), 32-37.

Hempenstall, K. (2001). School-based reading assessment: Looking for vital signs. Australian Journal of Learning Disabilities, 6, 26-35.

Hempenstall, K. (2003). The three-cueing system: Trojan horse? Australian Journal of Learning Disabilities, 8(3), 15-23.

Hempenstall, K. (2005). The Whole Language-Phonics controversy: An historical perspective. Australian Journal of Learning Disabilities, 10(3 & 4), 19-33.

Hempenstall, K. (2006). What does evidence-based practice in education mean? Australian Journal of Learning Disabilities, 11(2), 83-92.

Henty, A. (1993). Speech Pathology Phonological Screening Awareness Test. Unpublished manuscript.

Hirsch, E. D. (2006, Spring). What do reading comprehension tests measure? Knowledge. American Educator, 30(1). Retrieved October 21, 2007, from www.aft.org/pubsreports/american_educator/issues/spring06/tests.htm

Hoien, T., Lundberg, I., Stanovich, K. E., & Bjaalid, I-K. (1995). Components of phonological awareness. Reading and Writing: An Inter-Disciplinary Journal, 7, 171-188.

Hoover, W. & Gough, P. (1990). The simple view of reading. Reading and Writing: An Inter-Disciplinary Journal, 2, 127-160.

Howell, K. W., & Nolet, V. (2000). Curriculum-based evaluation: Teaching and decision making (3rd ed.). Belmont, CA: Wadsworth.

HuffPost Education. (2011, July 11). Teachers implicated in Atlanta cheating scandal told to resign or get fired. Retrieved from http://www.huffingtonpost.com/2011/07/17/teachers-implicated-in-at_n_900853.html

HuffPost Education. (2013, April 22). Beverly Hall's lawyers deny the schools chief had role in Atlanta cheating scandal. Retrieved from http://www.huffingtonpost.com/2013/04/22/atlanta-cheating-scandal-beverly-hall_n_3132583.html

Hurford, D. P., Darrow, L. J., Edwards, T. L., Howerton, C. J., Mote, C. R., Schauf, J. D., & Coffey, P. (1993). An examination of phonemic processing abilities in children during their first grade year. Journal of Learning Disabilities, 26, 167-177.

Hurford, D. P., Schauf, J. D., Bunce, L., Blaich, T., & Moore, K. (1994). Early identification of children at risk for reading disabilities. Journal of Learning Disabilities, 27, 371-382.

Jones, J.M. (2012). Confidence in U.S. public schools at new low. Gallup Politics. Retrieved from http://www.gallup.com/poll/155258/Confidence-Public-Schools-New-Low.aspx

Joseph, J., Noble, K., & Eden, G. (2001). The neurobiological basis of reading. Journal of Learning Disabilities, 34, 566-579.

Joshi, R. M., & Aaron, P. (2002). Naming speed and word familiarity as confounding factors in decoding. Journal of Research in Reading, 25(2), 160-171.

Joshi, R. M., Treiman, R., Carreker, S., & Moats, L. (2009). How words cast their spell: Spelling is an integral part of learning the language, not a matter of memorization. American Educator, 32(4), 6-43.

Juel, C. (1988). Learning to read & write: A longitudinal study of 54 children from first through fourth grades. Journal of Educational Psychology, 80, 437-447.

Kame'enui, E. J., Simmons, D. C., & Coyne, M. D. (2000). Schools as host environments: Toward a schoolwide reading improvement model. Annals of Dyslexia, 50, 33-51.

Kaplan, E., Goodlass, H., & Weintraub, S. (1983). Boston Naming Test. Philadelphia, PA: Lea and Febiger.

Laberge, D., & Samuels, S. (1974). Toward a theory of automatic information processing in reading. Cognitive Psychology, 6, 293-323.

Labov, W. (2003). When ordinary children fail to read. Reading Research Quarterly, 38, 128-131.

Landi, N., Perfetti, C. A., Bolger, D. G., Dunlap, S. & Foorman, B. R. (2006). The role of discourse context in developing word form representations: A paradoxical relation between reading and learning. Journal of Experimental Child Psychology, 94(2), 114-133.

Leigh, A., & Ryan, C. (2006). Teacher quality: How and why has teacher quality changed in Australia. Teacher, December, pp. 14-19.

Leigh, A., & Ryan, C. (2008). How has school productivity changed in Australia? The Australian National University, Canberra. Retrieved September 12, 2008, from http://econrsss.anu.edu.au/~aleigh/pdf/SchoolProductivity.pdf

Levin, B. (1998). Criticizing the schools: Then and now. Education Policy Analysis Archives, 6(16). Retrieved from http://epaa.asu.edu/epaa/v6n16.html

Levin, I., Shatil-Carmon, S., & Asif-Rave, O. (2006). Learning of letter names and sounds and their contribution to word recognition. Journal of Experimental Child Psychology, 93(2), 139-165.

Lewkowicz, N. K. (1980). Phonemic awareness training: What to teach and how to teach it. Journal of Educational Psychology, 72, 686-700.

Lindamood, C. H., & Lindamood, P. C. (1979). Lindamood Auditory Conceptualization Test. Allen, TX: DLM Teaching Resources.

Longcamp, M., Zerbato-Poudou, M. T., & Velay, J. L. (2005). The influence of writing practice on letter recognition in preschool children: A comparison between handwriting and typing. Acta Psychologica, 119, 67-79.

Love, E., & Reilly, S. (1995). A Sound Way: Phonological Awareness - Activities for Early Literacy. Melbourne: Longman Cheshire.

Lovett, M. W., Steinbach, K. A., & Frijters, J. C. (2000). Remediating the core deficits of developmental reading disability: A double-deficit perspective. Journal of Learning Disabilities, 33, 334-342.

Lyon, G. R. (1995). Toward a definition of dyslexia. Annals of Dyslexia, 45, 3-27.

Lyon, G. R. (1999, December, 12). Special education in state is failing on many fronts. Los Angeles Times, p. A1. Retrieved September 10, 2000, from http://www.latimes.com/news/state/reports/specialeduc/lat_special991212.htm

Lyon, G. R. (2001). Measuring success: Using assessments and accountability to raise student achievement. Subcommittee on Education Reform Committee on Education and the Workforce U.S. House of Representatives Washington, D.C. Retrieved 1/2/2003 from http://www.nrrf.org/lyon_statement3-01.htm

Malatesha, R., Joshi, R. M., Treiman, R., Carreker, S., & Moats, L. C. (2008). How words cast their spell. American Educator, 8(18), 42-43.

Manis, F. R., Doi, L. M., & Bhadha, B. (2000). Naming speed, phonological awareness, and orthographic knowledge in second graders. Journal of Learning Disabilities, 33, 325-333.

Manolitsis, G., & Tafa, E. (2011). Letter-name letter-sound and phonological awareness: Evidence from Greek-speaking kindergarten children. Reading and Writing: An Interdisciplinary Journal, 24(1), 27-53.

Marshall R. M., & Hynd, G. W. (1993). Neurological basis of learning disabilities. In William W. Bender (Ed.) Learning disabilities: Best practices for professionals. Stoneham, MA: Butterworth-Heinemann.

McBride-Chang, C. (1999). The ABCs of the ABCs: the development of letter-name and letter-sound knowledge. Merrill-Palmer Quarterly, 45, 285-296.

McGregor, K. K., & Leonard, L. B. (1995). Intervention for word-finding deficits in children. In M. Fey, J. Windsor, & S. Warren, (Eds.), Language intervention: Preschool through the elementary years, (pp. 85-105). Baltimore, MD: Paul H. Brookes.

McNamara, J.K., Scissons, M., & Gutknecth, N. (2011). A longitudinal study of kindergarten children at risk for reading disabilities: The poor really are getting poorer. Journal of Learning Disabilities, 44(4), 421-430.

Metsala, I. L., & Ehri, L. C. (1998). Word recognition in beginning literacy. Mahwah, NJ: Lawrence Erlbaum Associates.

Miller, J., & Schwanenflugel, P. J. (2006). Prosody of syntactically complex sentences in the oral reading of young children. Journal of Educational Psychology, 98(4), 839-853.

Miller, L. L., & Felton, R. H. (2001). "It's one of them ... I don't know": Case study of a student with phonological, rapid naming, and word-finding deficits. Journal of Special Education, 35, 125-133.

Nagy, W. E. (1998). Increasing students’ reading vocabularies. Presentation at the Commissioner’s Reading Day Conference, Austin, Texas.

Nagy, W. E., & Anderson, R. C. (1984). How many words are there in printed English? Reading Research Quarterly, 19, 304-330.

Nation, K., Angell, P., & Castles, A. (2007). Orthographic learning via self-teaching in children learning to read English: Effects of exposure, durability, and context. Journal of Experimental Child Psychology, 96(1), 71-84.

National Center for Education Statistics (2011). National Assessment of Educational Progress (NAEP). U.S. Department of Education, Institute of Education Sciences. Retrieved from http://nces.ed.gov/nationsreportcard/pdf/main2011/2012457.pdf

National Inquiry into the Teaching of Literacy (2005). Teaching reading - A review of the evidence-based research literature on approaches to the teaching of literacy, particularly those that are effective in assisting students with reading difficulties. Australian Government: Department of Education, Science and Training. Retrieved November 1, 2007, from www.dest.gov.au/nitl/documents/literature_review.pdf.

National Institute of Child Health and Human Development (2000). National Reading Panel: Teaching children to read. Retrieved September 25, 2002, from http://www.nationalreadingpanel.org.

Neale, M. D. (1988). Neale Analysis of Reading Ability-Revised. Melbourne, Australia: ACER.

Neilson, R. (2003). Sutherland Phonological Awareness Test-Revised. NSW, Australia: Language Speech & Literacy Services.

Nicholson, T., & Whyte, B. (1992). Matthew effects in learning new words while listening to stories. In C. K. Kinzer & D. J. Leu (Eds.), Literacy research, theory, and practice: Views from many perspectives: Forty-first Yearbook of the National Reading Conference (pp. 499-503). Chicago, IL: The National Reading Conference.

Oakhill, J. V., & Garnham, A. (1988). Becoming a skilled reader. Oxford: Basil. Blackwell.

O'Connor, R. E., Bell, K. M., Harty, K. R., Larkin, L. K., Sackor, S. M., & Zigmond, N. (2002). Teaching reading to poor readers in the intermediate grades: A comparison of text difficulty. Journal of Educational Psychology, 94, 474-485.

OECD. (2004). Adults at low literacy level (most recent) by country. International Adult Literacy Survey (IALS). Retrieved, 16/10/2008, from http://www.nationmaster.com/graph/edu_lit_adu_at_low_lit_lev-education-literacy-adults-low-level

Paton, G. (2008, 25 Oct). GCSE standards 'deliberately lowered' to make sure pupils pass. Telegraph.co.uk. Retrieved October 25, 2008, from http://www.telegraph.co.uk/news/newstopics/politics/education/3254233/GCSE-standards-deliberately-lowered-to-make-sure-pupils-pass.html

Pearson, P. D., & Hamm, D. N. (2005). The assessment of reading comprehension: A review of practices – past, present, and future. In S. G. Paris & S. A. Stahl (Eds.), Children's reading comprehension and assessment (pp. 13-69). Mahwah, NJ: Lawrence Erlbaum Associates Publishers.

Pearson, P. D., Hiebert, E. H., & Kamil, M. L. (2007). Vocabulary assessment: What we know and what we need to know. Reading Research Quarterly, 42(2), 282-296.

Perfetti, C. A. (1985). Reading ability. Oxford, UK: Oxford University Press.

Perfetti, C. A. (1992). Introduction. In N. N. Singh & I. L. Beale (Eds), Learning disabilities: Nature, theory and treatment (pp.1-22). New York: Springer-Verlag.

Perfetti, C. A., & Hogaboam, T. (1975). Relationship between single word decoding and reading comprehension skill. Journal of Educational Psychology, 67, 461-469.

Perfetti, C. A., & Lesgold, A. M. (1977). Coding and comprehension in skilled reading and implications for reading instruction. In L. B. Resnick & P. A. Weaver (Eds.), Theory and practice of early reading (Vol. 1, pp. 57-84). Hillsdale, NJ: Erlbaum.

Pressley, M. (2000). What should comprehension instruction be the instruction of? In M. L. Kamil, P. B. Mosenthal, P. D. Pearson, & R. Barr (Eds.), Handbook of reading research (vol. 3, pp. 545-561). Mahwah, NJ: Erlbaum.

Pressley, M. (2001, September). Comprehension instruction: What makes sense now, what might make sense soon. Reading Online, 5(2). Retrieved 2 October, 2003, from http://www.readingonline.org/articles/art_index.asp?HREF=/articles/handbook/pressley/index.html

Primary National Strategy (2006). Primary framework for literacy and mathematics. UK: Department of Education and Skills. Retrieved 26 October, 2006, from http://www.standards.dfes.gov.uk/primaryframeworks/

Pugh, K. P., Mencl, W.E., Jenner, A. R., Katz, L., Frost, S. J., Lee, J. R., Shaywitz, S. E., & Shaywitz, B. A. (2002). Neuroimaging studies of reading development and reading disability. Learning Disabilities Research & Practice, 16, 240-249.

Pugh, K., & Hagan, E.C. (2010). New directions in the cognitive neuroscience of reading development and reading disability. Perspectives on Language and Literacy, 36(1), 22-25.

Rack, J. P., Snowling, M., & Olson, R. K. (1992). The nonword reading deficit in developmental dyslexia: A review. Reading Research Quarterly, 27, 39-53.

Rathvon, N. (2004). Early reading assessment: A practitioner’s handbook. New York: Guilford Press.

Rayner, K., Foorman, B. R., Perfetti, C. A., Pesetsky, D., & Seidenberg, M. S. (2001). How psychological science informs the teaching of reading. Psychological Science in the Public Interest, 2, 31-74. Retrieved February 1, 2003, from www.psychologicalscience.org/newsresearch/publications/journals/pspi2_2.html

Reschly, A. L., Busch, T. W., Betts, J., Deno, S. L., & Long, J. D. (2009). Curriculum-Based Measurement Oral Reading as an indicator of reading achievement: A meta-analysis of the correlational evidence. Journal of School Psychology, 47(6), 427-469.

Richards, T. L., Aylward, E. H., Berninger, V. B., Field, K. M., Grimme, A. C., Richards, A. L., & Nagy, W. (2006). Individual fMRI activation in orthographic mapping and morpheme mapping after orthographic or morphological spelling treatment in child dyslexics. Journal of Neurolinguistics, 19(1), 56-86.

Richards, T. L., Corina, D., Serafini, S., Steury, K., Echelard, D. R., Dager, S. R., Marro, K., Abbott, R. D., Maravilla, K. R., & Berninger, V. W. (2000). The effects of a phonologically-driven treatment for dyslexia on lactate levels as measured by Proton MRSI. American Journal of Neuroradiology, 21, 916-922. Retrieved 12/2/03 from http://faculty.washington.edu/toddr/dyslexic2.htm

Richards, T. L., Dager, S. R., Corina, D., Serafini, S., Heide, A. C., Steury, K., Strauss, W., Hayes, C. E., Abbott, R. D., Craft, S., Shaw, D., Posse, S., & Berninger, V. W. (1999). Dyslexic children have abnormal brain lactate response to reading-related language tasks. American Journal of Neuroradiology, 20, 1393-1398.

Roehrig, A. D., Petscher, Y., Nettles, S. M., Hudson, R. F., & Torgesen, J. T. (2008). Accuracy of the DIBELS Oral Reading Fluency measure for predicting third grade reading comprehension outcomes. Journal of School Psychology, 46(3), 343–366.

Roid, G. H. (2003). The Stanford-Binet Intelligence Scale: Fifth Edition. Itasca, Ill: Riverside Publishing Company.

Rose, J. (2006). Independent review of the teaching of early reading. Bristol: Department for Education and Skills. Retrieved April 12, 2006, from www.standards.dfes.gov.uk/rosereview/report.pdf

Rosner, J. (1975). Helping children to overcome learning disabilities. Navato, CA: Academic Therapy.

Rubin, H., Rotella, T., Schwartz, L., & Bernstein, S. (1991). The effect of phonological analysis training on naming performance. Reading & Writing: An Interdisciplinary Journal, 3, 1-10.

Satz, P., Fletcher, J. M., Clark, W., & Morris, R. (1981). Lag, deficit, rate and delay constructs in specific learning disabilities: A re-examination. In A. Ansara, N. Geschwind, A. Galaburda, M. Albert, & N. Gartrell (Eds.), Sex differences in dyslexia (pp. 129-150). Towson, MD: The Orton Dyslexia Society.

Savage, R., & Frederickson, N. (2005). Evidence of a highly specific relationship between rapid automatic naming of digits and text-reading speed. Brain and Language, 93, 152–159.

Savage, R.S., & Frederickson, N. (2006). Beyond phonology: What else is needed to describe the problems of below-average readers and spellers? Journal of Learning Disabilities, 39(5), 399-413.

Scammacca, N., Roberts, G., Vaughn, S., Edmonds, M., Wexler, J., Reutebuch, C. K., & Torgesen, J. K. (2007). Interventions for adolescent struggling readers: A meta-analysis with implications for practice. Portsmouth, NH: RMC Research Corporation, Center on Instruction.

Scarborough, H. (2003). Connecting early language and literacy to later reading (dis)abilities: Evidence, theory, and practice. In S. Newman & D. Dickinson (Eds.), Handbook of early literacy research (pp. 97–110). New York: The Guilford Press.

Scarborough, H. S. (1998). Early identification of children at risk for reading disabilities. In B. K. Shapiro, P. J. Accardo, & A. J. Capute (Eds.), Specific reading disability (pp. 75-119). Timonium, MD: York Press.

Schatschneider, C., Francis, D. J., Foorman, B. R., Fletcher, J. M., & Mehta, P. (1999). The dimensionality of phonological awareness: An application of item response theory. Journal of Educational Psychology, 91(3), 439-449.

Seungsoo, Y., Dong-Il, K., Branum-Martin, L., Wayman, M. M., & Espin, C. A. (2012). Assessing the reliability of curriculum-based measurement: An application of latent growth modelling. Journal of School Psychology, 50(2), 275-292.

Shankweiler, D., Lundquist, E., Dreyer, L. G., & Dickinson, C. C. (1996). Reading and spelling difficulties in high school students: Causes and consequences. Reading and Writing: An Inter-Disciplinary Journal, 8, 267-294.

Shankweiler, D., Lundquist, E., Katz, L., Stuebing, K. K., Fletcher, J. M., Brady, S., Fowler, A., Dreyer, L. G., Marchione, K. E., Shaywitz, S. E., & Shaywitz, B. A. (1999). Comprehension and decoding: Patterns of association in children with reading difficulties. Scientific Studies of Reading, 3, 69-94.

Share, D. L. (1995). Phonological recoding and self-teaching: Sine qua non of reading acquisition. Cognition, 55, 151-218.

Share, D. L. (1999). Phonological recoding and orthographic learning: A direct test of the self-teaching hypothesis. Journal of Experimental Child Psychology, 72(2), 95-129.

Share, D. L. (2004). Orthographic learning at a glance: On the time course and developmental onset of self-teaching. Journal of Experimental Child Psychology 87(4), 267-298.

Share, D. L., & Stanovich, K. E. (1995). Cognitive processes in early reading development: accommodating individual differences into a model of acquisition. Issues in Education, 1, 1-57.

Shaywitz, B. A., Shaywitz, S. E., Blachman, B. A., Pugh K. R., Fulbright, R. K., Skudlarski, P., Mencl, W. E., Constable, R. T., Holahan, J. M., Marchione, K. E., Fletcher, J. M., Lyon, G. R., & Gore, J. C. (2004). Development of left occipitotemporal systems for skilled reading in children after a phonologically- based intervention. Biological Psychiatry, 55, 926-33.

Shaywitz, S. (2003). On the mind of a child: A conversation with Sally Shaywitz. Educational Leadership, 60(7), 6-10.

Shaywitz, S. E., Fletcher, J. M., Holahan, J. M., Shneider, A. E., Marchione, K. E., Stuebing, K. K., Francis, D. J., Pugh, K. R., & Shaywitz, B. A. (1999). Persistence of dyslexia: The Connecticut longitudinal study at adolescence. Pediatrics, 104, 1351-1359.

Shinn, M. R., Good, R. H., Knutson, N., Tilly, W. D., & Collins, V. (1992). Curriculum-based measurement of oral reading fluency: A confirmatory analysis of its relation to reading. School Psychology Review, 21, 459-479.

Siegel, L. S. (1993). The development of reading. Advances in Child Development and Behaviour, 24, 63-97.

Simos, P. G., Fletcher, J. M., Sarkari, S., Billingsley-Marshall, R. L., Denton, C. A., Papanicolaou, A. C. (2007). Intensive instruction affects brain magnetic activity associated with reading fluency in children with dyslexia. Journal of Learning Disabilities, 40(1), 37-48.

Simos, P., Fletcher, J., Bergman, E., Breier, J., Foorman, B., Castillo, E., Davis, R., Fitzgerald, M., & Papanicolaou, A. (2002). Dyslexia-specific brain activation profile becomes normal following successful remedial training. Neurology, 58, 1203-1212.

Snow, C. E. (2002). Reading for understanding: Toward an R&D program in reading comprehension. Santa Monica, CA: RAND.

Snow, C. E., Burns, S., & Griffin, P. (Eds.) (1998). Preventing reading difficulties in young children. Report of the National Research Council. Retrieved September 2, 1999, from http://www.nap.edu/readingroom/books/reading/.

Spadafore, G. J. (1983). Spadafore Diagnostic Reading Test. Novato, CA: Academic Therapy Publications.

Sparks, R. L. (1995). Phonemic awareness in hyperlexic children. Reading and Writing: An Inter-Disciplinary Journal, 7, 217-235.

Sparks, S.D. (2013). Most 8th graders fall short on NAEP science test. Education Week. Jan 27, 2013. Retrieved from http://www.edweek.org/ew/articles/2012/05/10/31naep_ep.h31.html

Spear-Swerling, L. (1998, August). The use and misuse of processing tests. LD In Depth. Retrieved September 20, 1998, from ldonline.org/ld_indepth/assessment/swerling_assessment.html

Spector, J. (1992). Predicting progress in beginning reading: Dynamic assessment of phonemic awareness. Journal of Educational Psychology, 84, 353-363.

Speece, D. L., Mills, C., Ritchey, K. D., & Hillman, E. (2003). Initial evidence that letter fluency tasks are valid indicators of early reading skill. Journal of Special Education, 36, 223-233.

Stage, S. A., Sheppard, J., Davidson, M. M., & Browning, M. M. (2001). Prediction of first-graders' growth in oral reading fluency using kindergarten letter fluency. Journal of School Psychology, 9(3), 225-237.

Stanovich, K. E. (1986). Matthew effects in reading: Some consequences of individual differences in the acquisition of literacy. Reading Research Quarterly, 21, 360-406.

Stanovich, K. E. (1988a). The right & wrong places to look for the cognitive locus of reading disability. Annals of Dyslexia, 38, 154-157.

Stanovich, K. E. (1988b). Explaining the differences between the dyslexic and the garden-variety poor reader: The phonological-core variable-difference model. Journal of Learning Disabilities, 21, 590-612.

Stanovich, K. E. (1991). Discrepancy definitions of reading disability: Has intelligence led us astray? Reading Research Quarterly, 26(1), 7-29.

Stanovich, K. E. (1992). Speculation on the causes and consequences of individual differences in early reading acquisition. In Phillip P. Gough, Linnea C. Ehri, & Rebecca Treiman (Eds.), Reading acquisition. (pp.307-341). Mahwah, NJ: Lawrence Erlbaum.

Stanovich, K. E. (1993). Does reading make you smarter? Literacy and the development of intelligence. Advances in Child Development and Behavior, 24, 133-179.

Stanovich, K. E. (2000). Progress in understanding reading: Scientific foundations and new frontiers. New York: Guilford Press.

Strutt, J. (2007). Students fail on 'three Rs' test. The West Australian, Monday 10 December. Retrieved September 20, 2008, from http://www.platowa.com/Breaking_News/2007/2007_12_10.html

Stuart, M. (1995). Prediction and qualitative assessment of five and six-year-old children's reading: A longitudinal study. British Journal of Educational Psychology, 65, 287-296.

Swanson, H. L. (2001a). Research on interventions for adolescents with learning disabilities: A meta-analysis of outcomes related to higher-order processing. The Elementary School Journal, 101, 331-348.

Swanson, H. L. (2001b). Searching for the best model for instructing students with learning disabilities. Focus on Exceptional Children, 34(2), 1-14.

Swanson, H. L., & O’Connor, R. (2009). The role of working memory and fluency practice on the reading comprehension of students who are dysfluent readers. Journal of Learning Disabilities, 42(6), 548-557.

Swanson, L. B. (1989). Analysing naming speed-reading relationships in children. Unpublished doctoral dissertation, University of Waterloo, Ontario.

Swingley, D. (2008). The roots of the early vocabulary in infants' learning from speech. Current Directions in Psychological Science, 17(5), 308-312.

The Psychological Corporation (2007). AIMSweb progress monitoring and response to intervention system. San Antonio, TX: Pearson. Retrieved October 22, 2008, from www.aimsweb.com

Todd, L. R., Berninger, V. W., Stock, P., Altemeier, L., Trivedi, P., & Maravilla, K. R. (2011). Differences between good and poor child writers on fMRI contrasts for writing newly taught and highly practiced letter forms. Reading and Writing, 24(5), 493-516.

Torgesen, J. K. (1998, Spring/Summer). Catch them before they fall: Identification and assessment to prevent reading failure in young children. American Educator. Retrieved October 2, 1999, from http://www.ldonline.org/ld_indepth/reading/torgeson_catchthem.html

Torgesen, J. K. (2000). Individual differences in response to early interventions in reading: The lingering problem of treatment resisters. Learning Disabilities Research and Practice, 15, 55-64.

Torgesen, J. K., & Bryant, B. R. (1994). Test of Phonological Awareness. Austin, TX: Pro-Ed.

Torgesen, J. K., Wagner, R. K., & Rashotte, C. A. (1994). Longitudinal studies of phonological processing and reading. Journal of Learning Disabilities, 27, 276-286.

Torgesen, J. K., Wagner, R. K., & Rashotte, C. A. (1999). Test of Word Reading Efficiency (TOWRE). Austin, TX: Pro-Ed.

Torgesen, J. K., Wagner, R., Rashotte, C., Alexander, A., & Conway, T. (1997). Preventative and remedial interventions for children with severe reading disabilities. Learning Disabilities: A Multidisciplinary Journal, 8, 51-61.

Tucker, M. S. (2011). Standing on the shoulders of giants: An American agenda for education reform. National Center on Education and the Economy. Retrieved from http://www.ncee.org/wp-content/uploads/2011/05/Standing-on-the-Shoulders-of-Giants-An-American-Agenda-for-Education-Reform.pdf

Turkeltaub, P. E., Flowers, D. L., Verbalis, A., Miranda, M., Gareau, L., & Eden, G. F. (2004). The neural basis of hyperlexic reading: An fMRI case study. Neuron, 41, 1-20.

Tzuriel, D. (2000). Dynamic assessment of young children: Educational and intervention perspectives. Educational Psychology Review, 12, 385-435.

UNICEF. (2002). A league table of educational disadvantage in rich nations. Innocenti Report Card No.4, November 2002. Florence, Italy: Innocenti Research Centre.

Vaughn, S., Wanzek, J., Wexler, J., Barth, A. E., Cirino, P. T., Fletcher, J. M., Romain, M. A., Denton, C. A., Roberts, G., & Francis, D. J. (2010). The relative effects of group size on reading progress of older students with reading difficulties. Reading and Writing: An Interdisciplinary Journal, 23(8), 931-956.

Vaughn, S., Wexler, J., Roberts, G., Barth, A. E., Cirino, P. T., Romain, M. A., Francis, D., Fletcher, J., & Denton, C. A. (2011). Effects of individualized and standardized interventions on middle school students with reading disabilities. Exceptional Children, 77(4), 391-407.

Vellutino, F. R., & Scanlon, D. M. (1987). Phonological coding, phonological awareness and reading ability: Evidence from a longitudinal and experimental study. Merrill-Palmer Quarterly, 33, 321-363.

Vellutino, F. R., Scanlon, D. M., Sipay, E. R., Small, S. G., Pratt, A., Chen, R., & Denckla, M. B. (1996). Cognitive profiles of difficult to remediate and readily remediated poor readers: Early intervention as a vehicle for distinguishing between cognitive and experiential deficits as basic causes of specific reading disability. Journal of Educational Psychology, 88, 601-638.

Vervaeke, S.-L., McNamara, J. K., & Scissons, M. (2007, April). Kindergarten screening for reading disabilities. Journal of Applied Research on Learning, 1(1), 1-19.

Wade, B., & Moore, M. (1993). Experiencing special education. Buckingham: Open University Press.

Wagner, R. K., & Torgesen, J. K. (1987). The nature of phonological processing and its causal role in the acquisition of reading skills. Psychological Bulletin, 101, 192-212.

Wagner, R. K., Torgesen, J. K., & Rashotte, C. A. (1999). Comprehensive Test of Phonological Processing. Austin, TX: Pro-Ed.

Watson, A. (1998). Potential sources of inequity in teachers' informal judgements about pupils' mathematics. Paper presented at Mathematics Education and Society: An International Conference, Nottingham, September. University of Oxford Department of Educational Studies. Retrieved from http://www.nottingham.ac.uk/csme/meas/papers/watson.html

Weaver, C. (1988). Reading process and practice: From socio-psycholinguistics to whole language. Portsmouth, NH: Heinemann.

Wechsler, D. (1997). Wechsler Adult Intelligence Scale (3rd ed.). San Antonio, TX: The Psychological Corporation.

Wechsler, D. (2001). Wechsler Individual Achievement Test-II. San Antonio, TX: Harcourt Brace.

Wechsler, D. (2002). The Wechsler Preschool and Primary Scale of Intelligence-Third Edition: WPPSI-III. San Antonio, TX: The Psychological Corporation.

Wechsler, D. (2003). Wechsler Intelligence Scale for Children – 4th Edition (Australian Adaptation). Australia: Harcourt Brace.

Wiederholt, J. L., & Bryant, B. R. (2001). Gray Oral Reading Tests-4th Edition. Austin, TX: Pro-Ed.

Wiig, E. H., Zureich, P., & Chan, H. H. (2000). A clinical rationale for assessing rapid automatized naming in children with language disorders. Journal of Learning Disabilities, 33, 359-371.

Wolf, M., & Bowers, P. G. (1999). The "double-deficit hypothesis" for the developmental dyslexias. Journal of Educational Psychology, 91, 1-24.

Wolf, M., & Bowers, P. G. (2000). Naming-speed processes and developmental reading disabilities: An introduction to the special issue on the double-deficit hypothesis. Journal of Learning Disabilities, 33, 322-341.

Wolf, M., & Katzir-Cohen, T. (2001). Reading fluency and its intervention. Scientific Studies of Reading (Special issue on fluency, E. Kameenui & D. Simmons, Eds.), 5, 211-238.

Wolf, M., Miller, L., & Donnelly, K. (2000). Retrieval, automaticity, vocabulary elaboration, orthography (RAVE-O): A comprehensive, fluency-based reading intervention program. Journal of Learning Disabilities, 33, 375-386.

Wood, F. B., & Felton, R. H. (1994). Separate linguistic and attentional factors in the development of reading. Topics in Language Disorders, 14(4), 42-57.

Woodcock, R. W. (1998). Woodcock Reading Mastery Tests – Revised NU. Circle Pines, MN: American Guidance Service.

Yopp, H. K. (1995). A test for assessing phonemic awareness in young children. The Reading Teacher, 49(1), 20-29.