Submission 27 to the Senate Inquiry: The Effectiveness of the National Assessment Program - Literacy and Numeracy. Senate Education and Employment Committees, PO Box 6100. Parliament House, Canberra ACT 2600. https://www.aph.gov.au/Parliamentary_Business/Committees/Senate/Education_and_Employment/Naplan13/Report/index
Also published as:
Hempenstall, K. (2013). What is the place for national assessment in the prevention and resolution of reading difficulties? Australian Journal of Learning Difficulties, 18(2), 105–121. https://doi.org/10.1080/19404158.2013.840887
The beginning:
“My intention in this submission supporting national assessment is to examine the context in which NAPLAN has arisen. My focus is largely on literacy, as that has been the major domain of my work. In brief, my position is that national assessment is essential. The current format of NAPLAN may not be ideal, but it is certainly better than no national assessment at all. Self-serving and specious arguments should not be allowed to threaten this vital initiative. Worthwhile additions would include a national early identification scheme for Year One students, and greater transparency concerning the test raw scores that correspond to the various bands.
Some have argued that the literacy test is too limited, focussing as it does only on reading comprehension. While it might be helpful to assess other important components of reading, such as decoding and fluency, I see the main function of NAPLAN as acting as a national thermometer. It is to provide an indication as to the health of the patient – reading development across the nation’s students. Widening the test is both unnecessary and wasteful. It is unnecessary because when teachers are adequately retrained to use reading assessment properly, they can do the diagnostic testing on those students whose NAPLAN results are worrying. Making NAPLAN more extensive would take up additional student time, and would be wasted effort for all those students whose progress is quite adequate.
Before NAPLAN:
There is considerable interest and controversy in the Australian community concerning how our students are faring in the task of mastering reading, and how well our education system deals with teaching such a vital skill. Until state and subsequently national scale assessment commenced, we had largely small scale and disparate findings from researchers employing differing metrics to gauge how our students were progressing. Thus it was very difficult to make any judgements as to how students performed across the nation. Even the state based assessments that preceded NAPLAN were sufficiently different from each other to make comparisons fraught. Different emphases, formats, and scoring protocols were common. Defining where to establish the bands of competence varied significantly across studies. What does constitute an acceptable level of competence for this test or that test? Is it 50% correct? 80%? 25%?
Looking back one can see signs of this variation:
The Australian Government House of Representatives Enquiry (1993) estimated that between 10% and 20% of students finish primary school with literacy problems. In Victoria, about 16% were labelled reading disabled in two studies (Prior, Sanson, Smart, & Oberklaid, 1995; Richdale, Reece, & Lawson, 1996). Victorian Budget estimates (Public Accounts and Estimates Committee, 1999) anticipated that for the year 2000, 20% of Year 1 students would require the Reading Recovery program. Further concern was expressed that, after their Year Three at school, students with reading problems have little prospect of adequate progress (Australian Government House of Representatives Enquiry, 1993). Providing additional foundation for that fear was a Victorian study (Hill, 1995) that noted little discernible progress in literacy for the lowest 10% between Year Four and Year Ten. According to the Australian Council for Educational Research, more than 30% of Australian children entering high school (mainly in government and Catholic schools) could not read or write properly (“Our desperate schools”, 2000).
In more recent times, the Victorian Auditor General (2003) noted that Reading Recovery was being provided to on average 40-50% of Year 1 students, and in 2009 he reported that the budget for that program had climbed even higher each year since then. This gave cause for concern. How could so many students after a year or so of reading instruction require such extensive and costly remedial help? Could it be that initial reading instruction was not adequate? The introduction of state and then national assessment provided larger samples than were available during earlier times, and a troubling picture began to emerge.
National and international assessment
The Australian Bureau of Statistics in 2008 reported that almost half of all Australians aged 15-74 years had literacy skills below the level necessary to participate at even a basic level in our complex society. For example, they may have trouble managing transport timetables, completing basic forms, reading medicine labels, or interpreting weather maps. This indicates that any problems we have with literacy are not entirely new. How did we arrive at this position? One reason is that before broad-scale assessment our community was in the dark about the true state of affairs.
Reinforcing that alarming finding, last year, an OECD study (Programme for the International Assessment of Adult Competencies, PIAAC) revealed that:
Many adult Australians do not possess the literacy and numeracy skills necessary to participate fully in modern life and work. … Results from 2011-12 show that about 7.3 million or 44 per cent of adult Australians achieved in the lowest two bands for literacy, while about 8.9 million or 55 per cent achieved in the lowest two bands for numeracy (ACER, 2013).
There is clearly a concern that both national and international comparisons have not been flattering to us. There is a public perception that either educational outcomes for students have been further declining or that the education system is increasingly less able to meet rising community and employer expectations (Jones, 2012).
Parental concerns about literacy are becoming increasingly evident. In the Parents’ Attitudes to Schooling report (Department of Education, Science and Training, 2007), only 37.5% of the surveyed parents believed that students were leaving school with adequate skills in literacy. There has been an increase in dissatisfaction since the previous Parents' Attitudes to Schooling survey in 2003, when 61% of parents considered primary school education as good or very good, and 51% reported secondary education as good or very good. About 88 per cent of parents supported the use of a standard national process for assessing the skills of students. This level of NAPLAN support is markedly different to that expressed by many employed within the education field.
Press reports suggest that employers too have concerns about literacy development among young people generally, not simply for those usually considered to comprise an at-risk group (Business Council of Australia, 2003; Collier, 2008).
This scrutiny of public education is not new; however, the focus in recent times has shifted as the real state of student performance becomes common knowledge. Concerns that have arisen over recent years include apparent national and state test score declines, unflattering international achievement comparisons (ACER, 2010), the failure of large funding increases to produce discernible results (DEECD, 2012; Leigh & Ryan, 2008a; Nous Group Consortium, 2011), high school-dropout rates, and a perception that employment skill demands are not being adequately met by the education system (Collier, 2008; Levin, 1998). The press has displayed increased interest in highlighting these issues, thus further raising community awareness. In this way, the expanding role for both national and international assessments has brought the issue further to public attention.
There appears now to be a public perception that there are serious problems in the education system’s capacity to meet community expectations. In the past, some teacher organisations have argued that the issues of student progress should be left in the hands of teachers, and that the school performance of students is quite acceptable compared with that of other nations. In my work over a forty-year period - initially as a teacher, then as a school educational psychologist, and eventually in the RMIT Psychology Clinic - I experienced many examples of parents expressing dismay that their early concerns about their child’s progress were deflected by teachers.
Usually, it was not until Year 4 or above that schools reported to parents that their child was struggling. This delay makes effective assistance far more difficult. One finding was that it takes four times as many resources to resolve a literacy problem by Year 4 as it does in Year 1 (Pfeiffer et al., 2001). This problem only intensifies as students approach upper primary and secondary school (Wanzek et al., 2013). Equally troubling was the Lesnick, Goerge, Smithgall, and Gwynne (2010) longitudinal study, which found that third grade students who were struggling with their reading had four times the rate of early school leaving compared with average readers.
In Australia, the broad scale assessment common for many years in the USA and GB is a more recent phenomenon. Begun initially at a State level, it has expanded to a national level - the National Assessment Program Literacy and Numeracy (NAPLAN) - and an international level - the Progress in International Reading Literacy Study (PIRLS) and the Programme for International Student Assessment (PISA).
What have recent analyses demonstrated?
In the 2008 NAPLAN assessment, 19.6 per cent of Australian students were deemed at or below the national minimum standard in reading, and 18.7 per cent were at or below the standard in numeracy (Australian National Audit Office, 2012). This means that those below the national minimum standard require assistance, and so will some of those at the national minimum standard. How those national minimum standards are derived is unclear. The standards are supposed to be a snapshot of typical achievement. But it is not clear how typical achievement is defined. On what basis is a score of, say, 16/40 questions correct on a NAPLAN test considered typical achievement? Against which external criterion? This lack of information makes interpretation of results difficult. Another way of reporting the same results is to provide the figures for students who are at or above the national minimum standard. For 2012, the figure is 92% overall. This sounds much more satisfactory than around 19% at or below the standard, but how does it fit with the figures found in other studies that suggest, for reading, around 20% - 30% of students struggle? Something doesn’t compute.
Since 2008 there has been little change in average results – some national scores for some domains and for some Year levels have improved, while others have declined. Of course, there is also variation across states, and most dramatically, across socio-economic status and the indigenous/non-indigenous divide (Australian Curriculum, Assessment and Reporting Authority, 2012).
In 2012, the international PIRLS tests revealed that almost 25 per cent of Year 4 children in Australia failed to meet the standard in reading for their age. The report released by the Australian Council for Educational Research (ACER, 2012) reveals disappointing results for Australia in this latest international study of mathematics and science achievement, and in Australia’s first ever international assessment of reading at primary school level. Australian Year 4 students ranked 27th among the 53 nations involved, outperformed by other English-speaking nations such as England, the US and New Zealand. As this is the first time Australia has been involved in PIRLS, some consternation has followed the results.
Other international data (PISA) indicated a decline in reading (2000–2009) and mathematics (2003–2009) (Australian National Audit Office, 2012).
Although the OECD average for reading literacy has not changed between 2000 and 2009, ten countries have significantly improved their performance over this time, while five countries, including Australia, have declined significantly. … Australia’s reading literacy performance has declined, not only in terms of rankings among other participating countries but also in terms of average student performance. The mean scores for Australian students in PISA 2000 was 528 points, compared to 515 for PISA 2009. A decline in average scores was also noted between PISA 2000 and PISA 2006, when reading literacy was a minor domain (ACER, 2010).
Releasing the results, ACER Chief Executive Professor Geoff Masters said, “To say the results are disappointing is an understatement” (ACER, 2012, p.1).
Learning to read written English.
The English written language is an amalgam of numerous other languages, such as Old English, Latin, Greek, German, French, and Old Norse. Because the different languages use differing letter combinations for the same spoken sound, reading written English is more difficult than in countries like Spain and Italy where one letter makes one sound (more or less). So, picking up reading unaided is a very difficult task. What then is a reasonable rate of reading difficulty? Is a 20-30% rate inevitable? Well, no. According to research, we should not be content until the reading difficulty rate falls to around 5% (Fuchs & Fuchs, 2005; Torgesen, 1998). Until then, we are not teaching reading well enough; many struggling students do not have an inbuilt impediment to learning how to read, and should instead be considered instructional casualties.
This quote is from an interview with G. Reid-Lyon, Past-Chief of the Child Development and Behavior Branch of the National Institute of Child Health & Human Development, National Institutes of Health, USA.
When we look at the kids who are having a tough time learning to read and we went through the statistics, thirty-eight percent nationally, disaggregate that, seventy percent of kids from poverty and so forth hit the wall. Ninety-five percent of those kids are instructional casualties. About five to six percent of those kids have what we call dyslexia or learning disabilities in reading. Ninety-five percent of the kids hitting the wall in learning to read are what we call NBT: Never Been Taught. They’ve probably been with teachers where the heart was in the right place, they’ve been with teachers who wanted the best for the kids, but they have been with teachers who cannot answer the questions:
1) What goes into reading, what does it take?
2) Why do some kids have difficulty?
3) How can we identify kids early and prevent it?
4) How can we remediate it? (Boulton, 2003a).
This second quote is from an interview with Grover Whitehurst, Ex-Director (2002-08), Institute of Education Sciences, U.S. Department of Education, USA.
So, we have a difficult code, we have a neural system that for some children is not optimal for dealing with this code, and then we throw them an instructional system, a teaching system; teachers who don’t understand what the code really is or how it needs to be conveyed. And so the teacher is suggesting you should do this when in fact the child should be doing that. You can sample first or second grade classrooms around the country and you will still find, despite what we know about the process of reading and have learned over the past twenty years, you will still find that teachers for a first grader who is struggling to sound out a word who will discourage the child from doing that, and encourage the child to look at the pictures in the book and guess what that word means. Good readers don’t guess, good readers sound out almost every word on the page. And so the teacher is saying you solve the task this way when in fact the task has to be solved in an entirely different way. And that can not help but confuse children. So, non-optimal instruction, and in some cases simply misleading instruction, is a significant part of the problem (Boulton, 2003b).
Throwing money in the wrong direction
The Victorian Auditor General in 2009 reported that, despite investing $1.19 billion in education initiatives over the previous six years, there had been little improvement in average literacy and numeracy achievement. Leigh and Ryan (2011) showed that productivity (real expenditure vs student performance) decreased by 73 per cent between 1964 and 2003. The major component of this increase in expenditure has been decreasing class sizes (a strategy shown elsewhere to be ineffective) (Jensen, 2012). It is now becoming clear that at a federal level learning must be shown to be a consequence of expenditure, and that requires a nationally based assessment system.
National and international assessments have the potential to provide some sense of how our children are faring in their education, or alternatively, how well our system teaches our children to read. However, there are limitations to the value of this style of testing, particularly when the only tasks included are those intended to assess reading comprehension. Comprehension is unquestionably a major component of reading, but not the only important one. The science of reading development has emphasised additional areas: decoding, vocabulary, phonemic awareness, and reading fluency. To address this concern, one of the recommendations of the 2005 National Inquiry into the Teaching of Literacy (NITL) was that the NAPLAN “be refocused to make available diagnostic information on individual student performance, to assist teachers to plan for the most effective teaching strategies”.
An example of a different focus for assessment is the Phonics Screening Check (Department of Education Schools, 2013) introduced into GB over the past couple of years (again against wide teacher complaint). It is held in mid-Year One and is designed to confirm whether students have learnt phonic decoding to an appropriate standard. As phonics is now considered an essential component of early reading instruction, this assessment is also an examination of how well schools are teaching reading, in addition to providing vital information to parents at this early stage. As a result, 43 per cent of pilot schools were able to identify pupils with reading problems of which they were not already aware. This means that 235,000 pupils will now receive additional reading support from their schools that might not have eventuated otherwise. Sub-standard results for a student are a red flag to schools to intervene early, before the failure cascade called the Matthew Effect (Stanovich, 1986) becomes entrenched. When formal assessment does not commence until Year 3 (as in the NAPLAN), this opportunity is not available. Of course, this outcome also indicates that assessment alone does not have an impact on student progress. The will and the means to intervene efficiently and effectively must also accompany national assessment.
So, early assessment (assuming it leads to intervention) can have a prophylactic effect on reading failure, and is worthy of support. As to widening the scope of the NAPLAN reading assessment as the NITL recommended, I have a concern that it may make the assessment unwieldy. I agree entirely that diagnostic assessment is crucial for those students shown to be experiencing difficulty, but I am not sure that NAPLAN is the optimal vehicle for doing so.
Benchmark transparency
Because the benchmarks chosen for the various levels of proficiency in national assessments are not transparent, they are open to manipulation. Such occurrences have been reported in GB and the USA in the past (Adkins, Kingsbury, Dahlin, & Cronin, 2007). Also problematic is a slightly different issue - cheating by teachers and principals in the administration and scoring of tests (HuffPost Education, 2011; 2013). In recent times, similar claims of cheating and inappropriate test administration have been made in Australia. “Over the past three years, 164 schools around the country have sought to undermine the annual NAPLAN test by pressuring parents to withdraw their children, assisting students to complete the exam, or storing papers insecurely ahead of testing day” (Tomazin, 2013).
There can even be marked differences in results reported nationally and locally: “According to the National Assessment of Educational Progress (NAEP), the percentages of students who are proficient in basic reading and math are roughly half of the rates reported by the states” (Stone, 2013, p.5). This in itself makes a powerful case for national rather than solely state-based or locally-based assessments.
Over recent years in the USA, eight states had their reading and/or maths tests become significantly easier in at least two grades (Adkins, Kingsbury, Dahlin, & Cronin, 2007). The report, entitled The Proficiency Illusion, also found that recent improvements in proficiency rates on US state tests could be explained largely by declines in the difficulty of those tests.
So, a weakness of such opaque data is the potential for benchmarks to be manipulated to show governments of the day in the best possible light. There are examples in which benchmarks have been so low as to be at the level of chance. For example, when each question offers four multiple choice options, a 25% mark could be obtained by chance alone. Surely benchmarks would never be so low that chance alone could produce a proficiency level?
In 2006, the results needed to meet (Australian) national benchmarks for students in Years 3, 5 and 7 ranged from 22% to 44%, with an average of less than 34%. Year 3 students needed to achieve only 22% for reading, 39% for numeracy, and 30% for writing to be classified as meeting the minimum acceptable standard (Strutt, 2007, p.1).
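To illustrate just how close such cut-scores sit to chance, consider a purely illustrative calculation. The 40-question, four-option test assumed below is a hypothetical example, not an actual NAPLAN specification. A student who guesses every answer expects 25% correct, so the probability of clearing a 22% benchmark by guesswork alone can be computed directly:

```python
# Illustrative sketch only: assumes a hypothetical 40-question test with
# four answer options per question. These are NOT actual NAPLAN parameters.
import math
from scipy.stats import binom

n_questions = 40      # hypothetical test length
p_guess = 0.25        # chance of guessing a four-option question correctly
benchmark = 0.22      # the 22% Year 3 reading benchmark quoted above

# Minimum number of correct answers needed to meet or exceed the benchmark
needed = math.ceil(benchmark * n_questions)   # 9 correct out of 40

# P(X >= needed) where X ~ Binomial(n_questions, p_guess):
# the chance that a student guessing at random meets the benchmark
p_meet_by_chance = binom.sf(needed - 1, n_questions, p_guess)

print(f"Correct answers needed: {needed}")
print(f"Probability of meeting the benchmark by guessing alone: {p_meet_by_chance:.2f}")
```

Under these illustrative assumptions, roughly seven in ten students who guessed every answer would be classified as meeting the minimum standard, which underlines why transparency about how such cut-scores are derived matters.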
Recently in Great Britain (Paton, 2008), the Assessment and Qualifications Alliance exam board admitted that standards had been lowered to elevate scores in 2008. In one exam paper, C grades (a good pass) were awarded to pupils who obtained a score of only 20%. Perhaps, having learned from this event, in the 2012 Phonics Screening Test, mentioned earlier, the pass mark was made public at 32 correct out of 40 questions, which is 80% correct.
Informal assessment
If community interest in literacy has been sparked, and there is public concern about the validity of the national broad scale assessment model, it is important for educators to offer guidance about high quality assessment. Part of the current literacy problem can be attributed to educators because they have not offered this high quality assessment in their schools to monitor progress. There has been a tendency to rely on informal assessment that often lacks validity and reliability (Watson, 1998), and unhelpful techniques like miscue analysis (Hempenstall, 1998), and the perusal of student folios (Fehring, 2001).
In a three-year Australian study:
“Wyatt-Smith and Castleton investigated how Australian teachers made judgments about student writing using literacy benchmark standards (Department of Employment, Education, Training and Youth Affairs [DEETYA] 1998; Wyatt-Smith and Castleton 2005). … Teachers made judgments based on their own experience; explicit literacy standards were not part of teachers' experience, and teachers accepted that their "in head" standards varied from year to year and from class to class” (Bolt, 2011, p.158).
Studies by Feinberg and Shapiro (2009) and by Bates and Nettlebeck (2001) each reported that informal assessments were significantly less accurate for struggling readers when compared with formal assessment, and most teachers’ over-estimation among students with low achievement scores was greater than a year of reading age. In the Madelaine and Wheldall (2003) study, Australian teachers not only failed to identify 10% of the struggling students, but 18% of them also falsely reported as struggling some students who were not doing so. Limbos and Geva (2001) found that teachers tended to ascribe adequate reading skills to struggling readers on the basis of their reasonable oral language skills. That is, without adequate reading assessment, teachers can be fooled into incorrectly assuming that a student’s literacy must be OK because they express their thoughts well in speech.
The Productivity Commission in 2012 expressed concern that:
Because teachers are kept in ignorance of specific learning difficulties, students are under diagnosed and under supported. Teachers are not able to recognise the signs which should lead to testing by a psychologist or specialist in specific learning difficulties. Furthermore, they often don’t know who the student should be referred to. (sub. DR76, p. 2) (Productivity Commission, 2012, p.284).
If every teacher did implement a standard, agreed upon assessment schedule, based upon the current evidence on reading development, then one might argue against the need for national assessment. Data could be collected from schools, and would be comparable across the nation - based upon a similar metric. Of course, the problem of variation in teachers’ test presentation skills would remain. An important aspect of the current system is to provide consistency for the more than 100,000 students who change schools across State/Territory boundaries, sectors and regions (National Inquiry into the Teaching of Literacy, 2005).
The assessment of important literacy components can supply valuable information not available in the current broad scale testing program. For example, assessment can assist in the identification and management of at-risk students even before reading instruction commences. Such assessments can also help identify those making slow progress at any year level. This is especially important given the usually stable learning trajectory from the very early stages. If specific interventions are implemented, appropriate reading assessment can provide ongoing information about the effectiveness of the chosen approach. There is an important question implicit in this potentially valuable activity: what sorts of assessment are likely to be most beneficial in precluding reading pitfalls and enhancing reading success? In this submission, the emphasis is directed towards assessment of those aspects of reading that have been identified by research as critical to reading development.
From assessment to intervention
It is recognised that literacy assessment itself has little intrinsic value; rather, it is only the consequences flowing from the assessment process that have the potential to enhance the prospects of those students currently struggling to master reading. Assessment also allows for the monitoring of progress during an intervention, and evaluation of success at the end of the intervention. However, the initial value relates to the question of whether there is a problem, and if so, what should be done. What should be done is inevitably tied to the conception of the reading process, and what can impede its progress. How do educationists tend to view the genesis of reading problems?
Perceptions of literacy problems and causes
Alessi (1988) contacted 50 school psychologists who, between them, produced about 5000 assessment reports in a year. The school psychologists agreed that a lack of academic or behavioural progress could be attributed to one or more of the five factors below. Alessi then examined the reports to see what factors had been assigned as the causes of their students’ educational problems.
1. Curriculum factors? No reports.
2. Inappropriate teaching practices? No reports.
3. School administrative factors? No reports.
4. Parent and home factors? 10-20% of reports.
5. Factors associated with the child? 100%.
In another study, this time surveying classroom teachers, Wade and Moore (1993) noted that, when students failed to learn, 65% of teachers considered that student characteristics were responsible, while a further 32% emphasised home factors. Only the remaining 3% believed that the education system was the most important factor in student achievement, a finding utterly at odds with the research into teacher effects both in Australia and overseas (Cuttance, 1998; Hattie, 2009; Hattie, Clinton, Thompson, & Schmidt-Davies, 1995).
This highlights one of the ways in which assessment can be unnecessarily limiting in its breadth, if the causes of students’ difficulties are presumed to reside solely within the students, rather than within the instructional system. Assessment of students is not a productive use of time unless it is carefully integrated into a plan involving instructional action.
When the incidence of failure is unacceptably high, as in the USA, GB, and Australia, then an appropriate direction for resource allocation is towards the assessment of instruction. It can only be flawed instruction that intensifies the reading problem from a realistic incidence of reading disability of around 5% (Brown & Felton, 1990; Felton, 1993; Marshall & Hynd, 1993; Torgesen, Wagner, Rashotte, Alexander, & Conway, 1997; Vellutino et al., 1996) to the 20-30% (or higher) that we find. A tendency can develop to blame the victim. "Learning disabilities have become a sociological sponge to wipe up the spills of general education. … It's where children who weren't taught well go" (Lyon, 1999, p.A1).
Evidence-based assessment and practice
There is an increasing recognition that an education system must constantly assess the quality of instruction provided in its schools, and that it should take account of the findings of research in establishing its benchmarks and policies. “Thus the central problem for a scientific approach to the matter is not to find out what is wrong with the children, but what can be done to improve the educational system” (Labov, 2003, p.128). The development of an Australian national English curriculum is an example of this emerging system interest. Up to this time, education systems in Australia have been relatively impervious to such findings (Hempenstall, 1996, 2006), lagging behind significant (if tenuous) changes in the USA with Reading First (Al Otaiba et al., 2008) and in Great Britain with the Primary National Strategy (2006).
Unfortunately, Australia has been lax in evaluating the effectiveness of the instructional approaches it has supported (Rorris et al., 2011).
Regarding students from disadvantaged groups (learning disabilities, indigenous, ESL, low SES, remote areas): “Weak monitoring and reporting inhibits the capacity of school systems to build sector knowledge of the relevance and context of improvement strategies that have demonstrated effectiveness. This means there is a lack of evidence-based links for programs and their effects on learning (254). … There are insufficient data available to establish to what extent existing programs are effective because few have been evaluated, and fewer still have been evaluated with student outcomes as a focus” (p.87).
The Victorian Auditor-General (2009, 2012) found that there “has been no system-wide assessment of the ongoing effectiveness of key elements of the approach, such as the Reading Recovery intervention” (p.5). … Further, “DEECD does not consistently use monitoring, program reviews and evaluations” (p.57).
Clearly, the education system has failed to properly evaluate the impact of the programs it has implemented. One proposed reason was offered by the Productivity Commission (2012):
… in order to ‘save face’ policymakers may continue with programs that are demonstrably poor investments rather than abandoning underperforming policies because - in acknowledging the results of any evaluation - they might be accused of ‘failure’”. … However, changing policies without the benefit of evidence offers no assurance that outcomes will improve. In fact, a cycle of constantly changing policies can be potentially destructive where it fosters instability and reduces confidence (particularly within disadvantaged communities) in the education system. Evaluation is the first step towards greater continuity in the policy and institutional landscape — a necessary, although not sufficient, condition for achieving sustained advances in education outcomes. As such, it is essential that policymakers subject their initiatives to proper analysis, and do not move on to another policy idea before evaluations can be conducted (p.299).
One means of encouraging greater scrutiny is being able to examine the impact programs have on student performance at the national level.
Even allowing that the major problem for the education system lies in the realm of instruction, particularly in the initial teaching of reading, individual student assessment remains of value. It is, of course, necessary as a means of evaluating instructional adequacy. Beyond that, there is great potential value in the early identification of potential reading problems, in determining the appropriate focus for instruction, in the monitoring of progress in relevant skill areas, and in the evaluation of reading interventions. It is the assumption in this paper that decisions about assessment should be driven by up-to-date conceptions of the important elements in reading development.
The extent of the literacy problem has been known to literacy researchers for many years, and a group of 26 Australian researchers prompted the 2005 National Inquiry into the Teaching of Literacy (NITL). The findings included 20 recommendations for reform – none of which has been adopted, either by the government of the day or by any government since.
It is more difficult for governments and education departments to ignore the results of national and international assessments; thus, we are beginning to see strong words and (perhaps) action, but how and to what effect become crucial questions.
In 2012, a larger group of researchers (including many of the original 2004 group) responded to recent findings of national and international assessment with a reminder of what is known about the causes of the literacy problem:
In an open letter to federal, state and territory education ministers and their opposition counterparts, a group of 36 educators, scientists and clinicians call for a “vast shake-up at all levels of teacher training” to ensure children are taught to read properly. The letter was prompted by the results in the international Progress in Reading Literacy Study tests last week that revealed almost 25 per cent of Year 4 children in Australia failed to meet the standard in reading for their age, to the shock of many educators and governments. Reprising the letter they sent to then education minister Brendan Nelson in 2004 that resulted in the independent inquiry, the researchers admonish governments for their collective failure to heed the evidence and advice for almost a decade on the most effective way to teach reading (Ferrari, 2012).
A segment of that letter is reprinted below:
We have significant problems in education from the beginning stages, in that we do not teach reading well. We do not use approaches known to be effective in initial reading instruction. As a nation, we do not routinely screen students entering school for underdeveloped pre-reading skills critical for facilitating learning to read, nor do we monitor student progress in learning to read in a manner that allows for successful early intervention with students failing to progress. We do not redress our early system failure during the middle primary years. In the secondary years, we have a significant group of disillusioned students who have lost contact with the curriculum because of these earlier issues. We tend to focus attention and resources upon compensatory educational options instead of emphasising the resolution of our earlier mistakes. The sequence of initial failure-shame-frustration-disengagement-dropout is predictable and ongoing. Currently, it is being addressed piecemeal, as if they were separate problems.
We need a vast shake-up at all levels of teacher training. By turning our gaze to educational practices supported by empirical research we can make optimum use of our resources to complete the task with effectiveness and efficiency (Coltheart, Hempenstall, Wheldall, et al., 2012).
Inadequate teacher training in evidence-based assessment and instruction.
There is an increasing recognition that teacher training has produced a teaching force that has not been well prepared to provide effective teaching to our diverse student community (Malatesha Joshi et al., 2009). Why this is so is a long story, but a history is available in "The Whole Language-Phonics controversy: An historical perspective".
Evidence for this assertion follows:
All too often Victoria’s teacher training, referred to as pre-service education, falls short of the demands of today’s schools. While there are many providers, quality outcomes are inconsistent. Principals report that in the case of more than one-third of teachers, insufficient pedagogical preparation hinders student instruction (p.10). … At present less than 30 per cent of principals feel new teachers are well prepared to communicate with parents, manage classroom activities well, and provide effective support and feedback to students, which are all largely recognised as important skills for effective teaching and learning. Only around half of graduates report satisfaction with the preparation provided by their courses (DEECD, 2012, p.11).
A survey by Rohl and Greaves (2005) reported that 36% of beginning primary teachers felt unprepared to teach reading and 57% unprepared to teach phonics. Senior staff at their schools were more pessimistic, considering that 49% of these beginning teachers were unprepared to teach reading, and 65% unprepared to teach phonics. These figures on unpreparedness rose dramatically (77% - 89%) when the beginning teachers were confronted with diverse learners (those with disabilities or learning difficulties, indigenous and low SES students, and students whose initial language was not English). Other Australian studies by Maher and Richdale (2008) and by Fielding-Barnsley (2010) noted similar results.
Several other Australian studies also support these findings:
Taken together, these results indicate that for this cohort of pre-service teachers, entry knowledge of graphological/phonological rules and terminology tends to be fragmentary, suggesting that without further instruction in domain-specific knowledge in the area of phonological awareness and phonics, they may have difficulty providing systematic and explicit beginning reading instruction. This supports findings from previous studies which found that many pre-service and in-service teachers have limited knowledge of phonological awareness and phonics (e.g. Fielding-Barnsley & Purdue, 2005; Moats & Foorman, 2003; Rennie & Harper, 2006; Rohl & Greaves, 2005). … The written comments have also highlighted, unintentionally, the fact that a number of the pre-service teachers in the present study, like those reported by Zipin and Brennan (2006), showed deficiencies in personal literacy skills with regard to grammar, punctuation, and sentence structure (Fisher, Bruce, & Greive, 2007, p.82-3, 85).
This situation was recognized in the 2005 recommendations of the National Inquiry into the Teaching of Literacy: “1. The Committee recommends that teachers be equipped with teaching strategies based on findings from rigorous, evidence-based research that are shown to be effective in enhancing the literacy development of all children”. Unfortunately, as with all the other recommendations of this important report, it was ignored, and the report was removed from the Australian Government website. It can still be read at the website of the Australian Council for Educational Research – see http://tinyurl.com/d6v2v9y
One reason proposed for this situation is the failure of education faculties to take the teaching of reading seriously in their course planning. Responses to the national survey indicate that in almost all of the nominated courses, less than 10 per cent of time in compulsory subjects/units is devoted to preparing student teachers to teach reading. They also indicated that in half of all courses less than five per cent of total instructional time is devoted to teaching reading (National Inquiry into the Teaching of Literacy, p.37).
Teacher quality
Creating further concern is the trend towards accepting into teacher training students whose aptitude is substantially lower than in the past: “We find that the aptitude of new teachers has fallen considerably. Between 1983 and 2003, the average percentile rank of those entering teacher education fell from 74 to 61, while the average rank of new teachers fell from 70 to 62. We find that two factors account for much of the decline: a fall in average teacher pay (relative to other occupations) and a rise in pay differentials in non-teaching occupations” (Leigh & Ryan, 2008b, p.141).
Further, “Some of the biggest teaching schools are accepting entry-level students with TER scores so low as to be equivalent to failure in other states” (Senate Employment, Workplace Relations and Education Committee, 2007, p.7).
One consequence has been an apparent decline in student-teacher literacy to the extent that their capacity to teach in an evidence-based manner has been questioned.
The literacy competency of student teachers was raised as an issue in all focus group discussions. Respondents reported that many students lacked the literacy skills required to be effective teachers of reading. These students needed help to develop their foundational literacy skills. They also needed explicit teaching about meta-linguistic concepts, for example, phonemic awareness, phonics, and the alphabetic principle (NITL, p.33).
This led to another recommendation:
Recommendation 14
The Committee recommends that the conditions for teacher registration of all primary and secondary graduates include a demonstrated command of personal literacy skills necessary for effective teaching, and a demonstrated ability to teach literacy within the framework of their employment/teaching program (p.35).
So, there is a problem with teacher training that, unless overcome, will hinder efforts to raise the standards of school education. Stronger accreditation guidelines for training institutions have been requested for a number of years. There is some evidence of that occurring, and attention has also recently begun to be directed at the entry criteria for teacher training courses.
Resistance to NAPLAN
A problem threatening the NAPLAN at present is the resistance from a number of teachers and teacher organisations either to national assessment per se, or to its current format. Some of this resistance has led to unprofessional behaviour, such as attempting to “teach to the test” and worse. The Australian Curriculum, Assessment and Reporting Authority (ACARA) noted cheating and other substantiated breaches of general administration protocols during NAPLAN tests over several years and in all states. These events, though modest considering the breadth of the exercise, have increased from 63 in 2010 to 74 in 2011, and to 98 in 2012 (Australian Curriculum, Assessment and Reporting Authority, 2013). The increases are of concern because they parallel events in both GB and the USA that have become much more widespread.
Last year an investigation into the largest ever cheating scandal in US schools revealed rampant, systematic cheating on test scores in Atlanta’s public schools. It found that 178 teachers and principals in 40 of Atlanta’s 100 public schools cheated on state standardized tests in 2009. It uncovered instances of cheating dating back to 2001. The report said that extreme pressure to boost test scores drove teachers and principals to cheat. An investigation is ongoing into high erasure rates on mathematics and reading tests in more than 100 schools in Washington DC. In England last year, even exam boards were caught cheating by providing test questions to teachers months before the exams were due. In 2009, students from 70 schools had their Sats test results annulled or changed because of cheating by teachers or bungled handling of the exams. Reported cheating incidents are notoriously under-estimates of the real situation. “It’s just the tip of the iceberg, I think,” says US testing expert Professor Tom Haladyna, “The other 80 percent is being hidden” [Atlanta Journal-Constitution, 21 June 2009] (Cobbold, 2012).
In the USA, the biennial National Assessment of Educational Progress (NAEP) is considered the gold standard, first because it is divorced from the decisions about student retention or teacher/principal job security that are consequences of state assessments. Second, the procedures used by the independent contractors employed to oversee the administration and transportation of the assessments make it practically impossible for teachers and principals to change the students’ work on these tests.
I do not intend to counter the many calls for an end to NAPLAN, as I believe they arise out of ignorance, self-interest, or fear. An excellent rebuttal of many of the criticisms (Righting some wrong thinking on NAPLAN) was published recently in The Age by Philip Henseleit, and can be found at http://www.theage.com.au/comment/righting-some-wrong-thinking-on-naplan-20130515-2jmns.html
We can and must teach reading more effectively. Few teachers have been trained in the evidence-based reading instruction espoused in almost every significant report over the past decade (National Early Literacy Panel, 2008; National Inquiry into the Teaching of Literacy, 2005; National Reading Panel, 2000; Primary National Strategy, 2006; Rose Report, 2006). Few teachers know how to assess reading in a scientific manner, or know how and why to read and interpret research and data. Today, in an era of professional accountability, teaching remains a craft rather than a science, a guild rather than a profession. Many teachers have been inculcated with a constructivist philosophy asserting that students will necessarily interpret instruction in their own unique ways. This has arisen in their training, and teachers have been diverted from effective literacy instruction by the rhetoric about constructivism, multi-literacies, inclusion, differentiated instruction, personalised learning, critical literacy, brain-based learning, discovery learning, whole language, and learning styles. To overcome these obstacles to the effective teaching of reading will not be easy, but one pre-condition is that we have regular national data about the progress of our teaching efforts.
It’s hardly a revelation to argue that the adoption of evidence-based practice (EBP) in some other professions is far advanced in comparison to its use in education. Resistance similar to that displayed by some teacher organisations was also evident in the early stages of EBP’s acceptance by those professions, such as medicine and psychology. However, as these principles have been espoused in medicine and psychology since the early nineties, a new generation of practitioners has been exposed to EBP as the normal standard for practice. This has occurred among young practitioners because their training has emphasised the centrality of evidence in competent practice.
In education, unfortunately, there are few signs of this sequence occurring. Most teachers-in-training are not exposed to either the principles of EBP (unless in a dismissive aside) or to the practices that have been shown to be beneficial to student learning, such as the principles of instructional design and effective teaching, explicit phonological instruction, and student management approaches that might be loosely grouped under a cognitive-behavioural banner.
In my view, until educational practice includes EBP as a major determinant of practice, then it will continue to be viewed as an immature profession. It is likely that the low status of teachers in many western countries will continue to be the norm unless and until significant change occurs. A fundamental component of professions is the systematic collection of data to inform decision-making. The acceptance by the teaching profession of this tenet of EBP may be promoted by the gradual acceptance of national assessment.
I conclude by re-affirming my support for national assessment, but with modifications consistent with the recommendations of the National Inquiry into the Teaching of Literacy (2005).
These include:
The Committee recommends that the teaching of literacy throughout schooling be informed by comprehensive, diagnostic and developmentally appropriate assessments of every child, mapped on common scales. Further, it is recommended that:
• nationally consistent assessments on entry to school be undertaken for every child, including regular monitoring of decoding skills and word reading accuracy using objective testing of specific skills, and that these link to future assessments;
• education authorities and schools be responsible for the measurement of individual progress in literacy by regularly monitoring the development of each child and reporting progress twice each year for the first three years of schooling.
References:
ACER (2010). PISA in Brief. Highlights from the full Australian Report: Challenges for Australian Education: Results from PISA 2009. Retrieved from http://www.acer.edu.au/documents/PISA-2009-In-Brief.pdf
ACER (2012). ACER releases results from latest international studies of student achievement. Australian Council for Educational Research. Retrieved from http://www.acer.edu.au/media/acer-releases-results-from-latest-international-studies-of-student-achievem
ACER (2013). International study reveals serious adult literacy and numeracy problems. Retrieved from http://www.acer.edu.au/media/international-study-reveals-serious-literacy-and-numeracy-problems
Adkins, D., Kingsbury, G.G., Dahlin, M., & Cronin, J. (2007). The proficiency illusion. Thomas B. Fordham Institute. Retrieved from http://www.edexcellence.net/publications/theproficiencyillusion.html
Al Otaiba, S., Connor, C., Lane, H., Kosanovich, M. L., Schatschneider, C., Dyrlund, A. K., Miller, M. S., & Wright, T. L. (2008). Reading First kindergarten classroom instruction and students' growth in phonological awareness and letter naming–decoding fluency. Journal of School Psychology, 46(3), 281-314.
Alessi, G. (1988). Diagnosis diagnosed: A systemic reaction. Professional School Psychology, 3, 145-151.
Australian Bureau of Statistics. (2008). Adult Literacy and Life Skills Survey (ALLS 2006-2007). Retrieved from http://www.abs.gov.au/AUSSTATS/abs@.nsf/Lookup/4102.0Chapter6102008
Australian Curriculum, Assessment and Reporting Authority (ACARA) (2013). Report of 2012 NAPLAN test incidents. Retrieved from http://www.acara.edu.au/verve/_resources/2012_NAPLAN_TEST_INCIDENTS_REPORT_website_version.pdf#search=cheating
Australian Curriculum, Assessment and Reporting Authority (ACARA) (2012). National Assessment Program Literacy and Numeracy NAPLAN Summary Report. Preliminary results for achievement in Reading, Writing, Language Conventions and Numeracy. Retrieved from http://www.acara.edu.au/default.asp
Australian Government House of Representatives Enquiry. (1993). The literacy challenge. Canberra: Australian Printing Office.
Australian National Audit Office. (2012). National Partnership Agreement on Literacy and Numeracy.
Bates, C., & Nettlebeck, T. (2001). Primary school teachers’ judgements of reading achievement. Educational Psychology, 21(2), 179-189.
Bolt, S. (2011). Making consistent judgments: Assessing student attainment of systemic achievement targets. The Educational Forum, 75(2), 157-172. Retrieved from http://search.proquest.com/docview/863245835?accountid=13552
Boulton, D. (2003a). Children of the Code Interview with G. Reid-Lyon, Past-Chief of the Child Development and Behavior Branch of the National Institute of Child Health & Human Development, National Institutes of Health. Retrieved from http://www.childrenofthecode.org/interviews/lyon.htm#Instructionalcasualties
Boulton, D. (2003b). Children of the Code Interview with Grover Whitehurst, Ex-Director (2002-08), Institute of Education Sciences, U.S. Department of Education. Retrieved from http://www.childrenofthecode.org/interviews/whitehurst.htm#WhyisReadingsoDifficult
Brown, I. S. & Felton, R. H. (1990). Effects of instruction on beginning reading skills in children at risk for reading disability. Reading & Writing: An Interdisciplinary Journal, 2, 223-241.
Business Council of Australia. (2003). The cost of dropping out: The economic impact of early school leaving. Retrieved from http://www.bca.com.au/upload/The_Cost_of_Dropping_Out.pdf
Cobbold, T. (2012). Fighting for equity in education: Several schools found to be cheating in NAPLAN Tests. Save Our Schools. Wednesday January 18, 2012. Retrieved from http://www.saveourschools.com.au/league-tables/several-schools-found-to-be-cheating-in-naplan-tests
Collier, K. (2008, October 18). The ABC of ignorance. Herald Sun, p.9.
Coltheart, M., Hempenstall, K., Wheldall, K., et al. (2012). An open letter to all Federal and State Ministers of Education. Retrieved from http://tinyurl.com/dytdcyu
Cuttance, P. (1998). Quality assurance reviews as a catalyst for school improvement in Australia. In A. Hargreaves, A. Lieberman, M. Fullan., & D. Hopkins (Eds.), International handbook of educational change, Part II (pp. 1135-1162). Dordrecht: Kluwer Publishers.
DEECD. (2012). New directions for school leadership and the teaching profession: Discussion paper, June 2012
Department of Education Schools. (2013). Phonics screening check materials. Retrieved from http://www.education.gov.uk/schools/teachingandlearning/assessment/keystage1/a00200415/phonics
Department of Education, Science and Training. (2007). Parents’ attitudes to schooling. Canberra: Australian Government. Retrieved from http://www.dest.gov.au/NR/rdonlyres/311AA3E6-412E-4FA4-AC01-541F37070529/16736/ParentsAttitudestoSchoolingreporMay073.rtf
Fehring, H. (2001). Literacy assessment and reporting: Changing practices. 12th European Conference on Reading, RAI Dublin, 1st - 4th July. Retrieved from http://sece.eu.rmit.edu.au/staff/fehring/irish.htm
Feinberg, A. B., & Shapiro, E. S. (2009). Teacher accuracy: An examination of teacher-based judgments of students reading with differing achievement levels. The Journal of Educational Research, 102(6), 453-462, 480.
Felton, R. H. (1993). Effects of instruction on the decoding skills of children with phonological-processing problems. Journal of Learning Disabilities, 26, 583-589.
Ferrari, J. (2012). A decade of lost action on literacy. The Australian, 22 December 2012, p.1.
Fielding-Barnsley, R. (2010). Australian pre-service teachers' knowledge of phonemic awareness and phonics in the process of learning to read. Australian Journal of Learning Difficulties, 15(1), 99-110.
Fisher, B.J., Bruce, M.E., & Greive, C. (2007). The entry knowledge of Australian pre-service teachers in the area of phonological awareness and phonics. In A Simpson (Ed.), Future directions in literacy: International conversations 2007. University of Sydney. Retrieved from http://ses.library.usyd.edu.au/bitstream/2123/2330/1/FutureDirections_Ch5.pdf
Fuchs, D., & Fuchs, L.S. (2005). Peer-assisted learning strategies: Promoting word recognition, fluency, and reading comprehension in young children. The Journal of Special Education 39(1), 34-44.
Hattie J.A. (2009). Visible learning: A synthesis of over 800 meta-analyses relating to achievement. London, UK: Routledge.
Hattie, J. A., Clinton, J., Thompson, M., & Schmidt-Davies, H. (1995). Identifying highly accomplished teachers: A validation study. Greensboro, NC: Center for Educational Research and Evaluation, University of North Carolina.
Hempenstall, K. (1996). The gulf between educational research and policy: The example of direct instruction and whole language. Behaviour Change, 13, 33-46.
Hempenstall, K. (1998). Miscue analysis: A critique. Australian Journal of Learning Disabilities, 3(4), 32-37.
Hempenstall, K. (2006). What does evidence-based practice in education mean? Australian Journal of Learning Disabilities, 11(2), 83-92.
Hill, P. (1995). School effectiveness and improvement: Present realities and future possibilities. Dean's Lecture: Paper presented at Melbourne University, May 24. Retrieved from http://www.edfac.unimelb.edu.au/Seminars/dean_lec/list.html
HuffPost Education. (2013, April 22). Beverly Hall's lawyers deny the schools chief had role in Atlanta cheating scandal. Retrieved from http://www.huffingtonpost.com/2013/04/22/atlanta-cheating-scandal-beverly-hall_n_3132583.html
Jensen, B. (2012). Targeting the things that matter. Grattan Institute, Vic. ACER Conference “School Improvement: What does research tell us about effective strategies?” Retrieved from http://research.acer.edu.au/research_conference/RC2012/28august/8
Jones, J.M. (2012). Confidence in U.S. public schools at new low. Gallup Politics. Retrieved from http://www.gallup.com/poll/155258/Confidence-Public-Schools-New-Low.aspx
Labov, W. (2003). When ordinary children fail to read. Reading Research Quarterly, 38, 128-131.
Leigh, A & Ryan, C. (2008b). How and why has teacher quality changed in Australia? Australian Economic Review, 41(2), 141-159. Retrieved from http://digitalcollections.anu.edu.au/handle/1885/45254
Leigh, A., & Ryan, C. (2008a). How has school productivity changed in Australia? . The Australian National University, Canberra. Retrieved from http://econrsss.anu.edu.au/~aleigh/pdf/SchoolProductivity.pdf
Leigh, A., & Ryan, C. (2011). Long-run trends in school productivity: Evidence from Australia. Education Finance and Policy, 6(1), 105–135.
Lesnick, J., Goerge, R., Smithgall, C. & Gwynne, J. (2010). Reading on grade level in third grade: How is it related to high school performance and college enrollment? Chapin Hall; Consortium on Chicago School Research; The Annie E. Casey Foundation. Retrieved from http://www.aecf.org/KnowledgeCenter/Publications.aspx?pubguid={61221250-BC02-49C9-8BDA-D64C45B1C80C}
Levin, B. (1998). Criticizing the schools: Then and now. Education Policy Analysis Archives, 6(16). Retrieved from http://epaa.asu.edu/epaa/v6n16.html
Limbos, M., & Geva, E. (2001). Accuracy of teacher assessments of second-language students at risk for reading disability Journal of Learning Disabilities, 34, 136-151.
Lyon, G. R. (1999, December, 12). Special education in state is failing on many fronts. Los Angeles Times, p. A1. Retrieved from http://www.latimes.com/news/state/reports/specialeduc/lat_special991212.htm
Madelaine, A., & Wheldall, K. (2003). Can teachers discriminate low-progress readers from average readers in regular classes? Australian Journal of Learning Disabilities, 8(3), 4-7.
Maher, N., & Richdale , A. (2008) Primary teachers’ linguistic knowledge and perceptions of early literacy instruction. Australian Journal of Learning Disabilities, 13(1), 17-37.
Malatesha Joshi, R., Binks, E., Hougen, M., Dahlgren, M. E., Ocker-Dean, E., & Smith, D. L. (2009). Why elementary teachers might be inadequately prepared to teach reading. Journal of Learning Disabilities, 42(5), 392-402
Marshall R. M., & Hynd, G. W. (1993). Neurological basis of learning disabilities. In William W. Bender (Ed.) Learning disabilities: Best practices for professionals. Stoneham, MA: Butterworth-Heinemann.
National Early Literacy Panel. (2008). Developing Early literacy: Report of the National Early Literacy Panel. Washington DC: National Institute of Literacy. Retrieved from http://www.nifl.gov/earlychildhood/NELP/NELPreport.html
National Inquiry into the Teaching of Literacy. (2005). Teaching Reading: National Inquiry into the Teaching of Literacy. Canberra: Department of Education, Science, and Training. Retrieved from http://tinyurl.com/d6v2v9y
National Reading Panel. (2000). National Reading Panel: Teaching children to read. Retrieved from http://www.nationalreadingpanel.org.
Nous Group Consortium. (2011). Schooling challenges and opportunities: A report for the Review of Funding for Schooling Panel. Nous Group Consortium, August 29. Retrieved from http://foi.deewr.gov.au/documents/schooling-challenges-and-opportunities
Office of the Victorian Auditor General. (2003). Improving literacy standards in government schools. Retrieved from http://www.audit.vic.gov.au/reports_par/Literacy_Report.pdf
Office of the Victorian Auditor-General (2012). Programs for students with special learning needs: Audit summary. Retrieved from http://www.audit.vic.gov.au/publications/20120829-Special-Learning-Need/20120829-Special-Learning-Need.rtf
Office of the Victorian Auditor-General. (2009). Literacy and numeracy achievement. Retrieved from http://www.audit.vic.gov.au/reports__publications/reports_by_year/2009/20090204_literacy_numeracy/1_executive_summary.aspx
Our desperate schools. (2000, August 8). The Age, p.11.
Paton, G. (2008, 25 Oct). GCSE standards 'deliberately lowered' to make sure pupils pass. Telegraph.co.uk. Retrieved from http://www.telegraph.co.uk/news/newstopics/politics/education/3254233/GCSE-standards-deliberately-lowered-to-make-sure-pupils-pass.html
Pfeiffer, S., Davis, R., Kellog, E., Hern, C., McLaughlin, T.F., & Curry, G. (2001). The effect of the Davis Learning Strategies on First Grade word recognition and subsequent special education referrals. Reading Improvement, 38(2), 1-19.
Primary National Strategy (2006). Primary framework for literacy and mathematics. UK: Department of Education and Skills. Retrieved from http://www.standards.dfes.gov.uk/primaryframeworks/
Prior, M., Sanson, A., Smart, D., & Oberklaid, F. (1995). Reading disability in an Australian community sample. Australian Journal of Psychology, 47(1), 32-37.
Productivity Commission. (2012). Schools Workforce, Research Report, Canberra. JEL code: I21, I28, J24. Retrieved from http://www.pc.gov.au/projects/study/education-workforce/schools/report
Public Accounts and Estimates Committee (1999). Report on the 1999-2000 Victorian Budget Estimates. Retrieved from http://www.parliament.vic.gov.au/paec/33Report.pdf
Richdale, A. L., Reece, J. E., & Lawson, A. (1996). Teachers, children with reading difficulties, and remedial reading assistance in primary schools. Behaviour Change, 13(1), 47-61.
Rohl, M., & Greaves, D. (2005). How are pre-service teachers in Australia being prepared for teaching literacy and numeracy to a diverse range of students? Australian Journal of Learning Disabilities, 10(1), 3-8.
Rorris, A., Weldon, P., Beavis, A., McKenzie, P., Bramich, M., & Deery, A. (2011). Assessment of current process for targeting of schools funding to disadvantaged students. An Australian Council for Educational Research report prepared for The Review of Funding for Schooling Panel. Retrieved from http://www.deewr.gov.au/Schooling/ReviewofFunding/Pages/PaperCommissionedResearch.aspx
Rose, J. (2006). Independent review of the teaching of early reading. Bristol: Department for Education and Skills. Retrieved from www.standards.dfes.gov.uk/rosereview/report.pdf
Senate Employment, Workplace Relations and Education Committee. (2007). Quality of school education. Commonwealth of Australia. Retrieved from http://www.aph.gov.au/SEnate/committee/eet_ctte/completed_inquiries/2004-07/academic_standards/index.htm
Stanovich, K. E. (1986). Matthew effects in reading: Some consequences of individual differences in the acquisition of literacy. Reading Research Quarterly, 21, 360-406.
Stone, J.E. (2013). Reversing American decline by reducing education’s casualties: First, we need to recapture our school boards. Education Consumers Foundation. Retrieved from http://www.education-consumers.org/rad.htm
Strutt, J. (2007). Students fail on 'three Rs' test. The West Australian, Monday 10 December. Retrieved from http://www.platowa.com/Breaking_News/2007/2007_12_10.html
Tomazin, F. (2013, Feb 17). Schools caught cheating on NAPLAN. The Age. Retrieved from http://www.theage.com.au/victoria/schools-caught-cheating-on-naplan-20130216-2ek6p.html
Torgesen, J. K., Wagner, R., Rashotte, C., Alexander, A., & Conway, T. (1997). Preventative and remedial interventions for children with severe reading disabilities. Learning Disabilities: A Multidisciplinary Journal, 8, 51-61.
Torgesen, J.K. (1998, Spring/Summer). Catch them before they fall: Identification and assessment to prevent reading failure in young children. American Educator. Retrieved from http://www.ldonline.org/article/225/
Vellutino, F. R., Scanlon, D. M., Sipay, E. R., Small, S. G., Pratt, A., Chen, R., & Denckla, M. B. (1996). Cognitive profiles of difficult to remediate and readily remediated poor readers: Early intervention as a vehicle for distinguishing between cognitive and experiential deficits as basic causes of specific reading disability. Journal of Educational Psychology, 88, 601-638.
Wade, B., & Moore, M. (1993). Experiencing special education. Buckingham: Open University Press.
Wanzek, J., Vaughn, S., Scammacca, N.K., Metz, K., Murray, C.S., Roberts, G., & Danielson, L. (2013). Extensive reading interventions for students with reading difficulties after Grade 3. Review of Educational Research, 83(2), 163-195.
Watson, A. (1998). Potential sources of inequity in teachers' informal judgements about pupils' mathematics. Paper presented at Mathematics Education and Society: An International Conference, Nottingham, September. University of Oxford Department of Educational Studies. Retrieved from http://www.nottingham.ac.uk/csme/meas/papers/watson.html
Now for some recent thoughts from practitioners
The Role of Background Knowledge in Reading Comprehension: A Critical Review (2021).
“We included empirical studies published between 1950 and 2020 that either used a knowledge-building intervention or examined correlations between preexisting knowledge and reading performance. Intervention studies were included if they used pre-teaching activities or full teaching sequences designed to increase the relevant background knowledge of children. Assessments of preexisting knowledge were either a measure of general knowledge unrelated to the target text or a specific assessment of knowledge and skills related to the passages used for comprehension. Reviews of the literature were excluded.
Outcome measures
The primary outcome of interest was reading comprehension ability. Therefore, included studies featured at least one form of objective, quantitative reading comprehension measure, such as curriculum-based outcome measures (e.g., Key Stage assessments), standardized tests (e.g., Iowa Test of Basic Skills (Hoover, Dunbar, & Frisbie, 2001) and Gates-MacGinitie Reading Test (MacGinitie & MacGinitie, 1992)) or researcher-designed assessments of reading comprehension. We included studies that used a variety of measures, such as: open-ended recall, cloze, multiple choice questions and cued recall outcomes. Studies were not included if they used assessment items that were explicitly trained in an intervention.
Studies eligible for inclusion in this review needed to include a reading comprehension measure in which the child read an extended text and was required to recall and/or answer questions related to the content of the text. We were interested only in passage-level rather than sentence-level text in order to inform classroom practices that could be useful in promoting comprehension of complex written texts. Studies were also excluded if they used electronic passages or hypertext in order to avoid confounding due to modality effects.”
Smith, R., Snow, P., Serry, T., & Hammond, L. (2021). The Role of Background Knowledge in Reading Comprehension: A Critical Review. Reading Psychology, 42(3), 214–240. https://doi.org/10.1080/02702711.2021.1888348
NAP National Assessment Program (2025)
“Latest findings from the 2025 NAPLAN National Results released by the Australian Curriculum, Assessment and Reporting Authority (ACARA) show that performance is broadly stable at a national level, with 2 out of 3 students at the “Strong” or “Exceeding” proficiency level for their reading, numeracy and writing skills, and one in 10 students at the “Needs additional support” level across all year groups and domains.
The findings also show that participation rates across all years and domains have rebounded to pre-COVID levels, reaching 93.8% – the highest level since 2017.
Commenting on the latest NAPLAN National Results, ACARA CEO, Stephen Gniel, said:
“As the only national assessment that helps teachers, parents and carers see how students are progressing in literacy and numeracy over time, NAPLAN is a key tool in the Australian education landscape.
“It’s encouraging to see higher NAPLAN scores on average across Years 5, 7 and 9 in numeracy, particularly among the stronger students. These may be small percentage changes, but the increases represent an additional 20,000 Australian students performing at the highest proficiency level – “Exceeding” – in 2025 compared to 2024.
“It’s also fantastic to see the national participation rates rebound, with Years 3 and 5 hitting their highest rates in over a decade, the Year 7 national participation rate the highest since 2017, and the Year 9 national participation rate exceeding the 90% mark for the first time since 2019.
“However, latest results also continue to highlight areas that need collective attention, such as supporting students from our regional and remote areas, those from a disadvantaged background, and Indigenous students.”
National protocols for test administration
The nationally agreed protocols can be found in the NAPLAN national protocols for test administration (PDF 3.75 MB). The protocols provide detailed information on all aspects of the administration of the tests.
In cases where individual students with disability require adjustments to access the tests, these adjustments are provided at the school in consultation with the relevant Test administration authority (TAA). Further information about adjustments is available at Adjustments for students with disability.
Principals, NAPLAN coordinators and test administrators should be fully aware of all requirements in the protocols. Schools should contact their TAA if there are problems in meeting these requirements or if they need further guidance.
Key points to note:
• Test security
• Adjustments for students with disability
• Test integrity
To maintain the integrity of the tests and the testing process, the protocols must be followed carefully. To assist TAAs and schools to understand appropriate behaviours and what a breach of protocols entails, a code of conduct is provided in the protocols, along with information on the process following any breach that is reported.
The NAPLAN code of conduct is designed to uphold the integrity of the tests by outlining the fundamental principles upon which the tests are based. In order to provide an accurate assessment of students’ capabilities at the time of testing, at all times educators must ensure that tests are administered in a way that is fair and equitable for all students.
ACARA, in cooperation with states and territories, will continue to review the protocols to ensure that tests are delivered in an appropriate and consistent manner across all states and territories. All schools are required to administer the tests in a professional manner.
Test incidents
Generally, reports of improper conduct are very small in the context of all the students and schools participating in NAPLAN across the country. Breaches of protocol and allegations of cheating or improper behaviour are taken seriously and investigated thoroughly. Substantiated cases of improper behaviour may lead to invalidation of student results and be reported publicly.
ACARA works closely with a range of stakeholders across the country to develop nationally consistent guidelines for the fair and equitable investigation and reporting of alleged test incidents.
The Guidelines for managing test incidents in schools (PDF 1.2 MB) are available to support school principals investigate test incidents, including allegations of cheating.
Schools play a central role in ensuring the smooth running of NAPLAN tests. Each year, ACARA and test administration authorities (TAAs) in each state and territory provide information and support to schools to ensure they understand what is required to support the administration of NAPLAN tests.
From 2023, NAPLAN results are reported against proficiency standards with 4 levels of achievement to give teachers, parents and carers clearer information on how students are performing. Read more at Results and reports.
Administration of NAPLAN
The NAPLAN national protocols for test administration is the high-level guide to the administration of NAPLAN tests. The protocols are developed by ACARA with state, territory and non-government representatives and ensure that test administration is a consistent process across the country. Further details are available at National protocols for test administration.
The Guidelines for managing test incidents in schools (PDF 1 MB) provide instructions to schools on the steps to take if a test breach happens at their school.
Guidance on managing student participation, including information on which students are eligible for exemption from NAPLAN, how to manage withdrawals, and accommodating participation for students absent from school on test dates is available from Student participation.
Adjustments for students with disability
Information on the available adjustments for students with disability, including example scenarios, is set out in the Accessibility section. Further information can be found in section 6 of the protocols, and the NAPLAN FAQs.
Brochures for parents/carers and the community
ACARA produces a ‘NAPLAN information brochure for parents and carers’ which schools can provide to members of their local community. It includes the key information parents/carers need to know about NAPLAN, including dates of NAPLAN tests.
See Results and reports for information on NAPLAN individual student reports (ISRs), including FAQs and example reports for Years 3, 5, 7 and 9.
Support material and advice provided by test administration authorities (TAAs)
The protocols and a NAPLAN operations handbook for principals and NAPLAN coordinators are provided to schools by the relevant test administration authority (TAA) each year. The handbook contains supplementary information applicable to each jurisdiction. Principals are responsible for ensuring that all relevant staff in their school are aware of the testing provisions in the handbook, including the code of conduct.
The NAPLAN test administration handbook for teachers is prepared by ACARA and provided by the relevant TAA. This handbook is designed to lead teachers as test administrators through the precise process for administering the tests.
While TAAs have responsibility for the implementation and administration of NAPLAN tests within their jurisdictions, school principals have ultimate responsibility within their schools for ensuring tests are appropriately administered.”
Australian Curriculum, Assessment and Reporting Authority (ACARA). (2025). National Assessment Program (NAP): National protocols for test administration.
Considerations for choosing and using screeners for students with disabilities (2024)
“Choosing reading assessments in MTSS (May 2024). The Australian Education Research Organisation (AERO) recommends the use of a multi-tiered system of supports (MTSS) to better assist Years 7 to 9 students struggling with foundational literacy and numeracy skills. If you’re unfamiliar with the MTSS framework, we recommend you start with AERO’s Introduction to a Multi-Tiered System of Supports explainer.
This practice guide explains universal and diagnostic student reading assessments, and how to best select them for use in an MTSS framework. It’s the third part of a series of guidance created in partnership with the Dyslexia-SPELD Foundation (DSF).
AERO’s MTSS decision tree covers 3 types of assessment: universal screening, diagnostic, and progress monitoring. This practice guide provides criteria for selecting and assessing assessments, and points to some example assessments for screening and diagnosis.
Progress monitoring is covered separately in AERO’s Choosing, Monitoring and Modifying Reading Interventions in MTSS practice guide because it’s a critical part of the support provided directly to students through intervention. It’s not recommended that teaching staff develop their own progress monitoring tools – not only because the process is time-consuming, but also because it’s difficult to maintain quality and consistency.
Universal screening
Universal screening assessments provide objective data about the reading skills of a student population. They’re usually administered at the beginning of the school year or upon entry to a school as a new student. Universal screening assessments are designed to identify students whose reading attainments fall below a minimum benchmark. The results of an individual student are compared to cohort-wide data collected from a large group of students the same age or grade.
If a student meets the minimum level expected for their age or grade, they don’t need intervention. If they don’t reach this level, they would benefit from a diagnostic assessment to inform targeted intervention. Screening assessments are effective when they’re designed to be administered in a short period of time to students individually or in a group, in-person or online.
Administration and scoring should be easy (possibly automated) and not require advanced qualifications. However, instruction on how to administer, score and interpret a specific screening assessment is needed to ensure validity and reliability. Some universal screening assessments suggest benchmarks (‘cut-off scores’) to identify students who need further diagnostic assessment.
A common benchmark used in practice and research is one standard deviation below the expected mean level for a student’s age or grade (‘-1 SD’). This equates to the 16th percentile. Universal screening with a single test may run the risk of missing a proportion of students who need help. For example, some students who struggle with word reading can correctly answer questions on a reading comprehension test by simply using their verbal reasoning skills (‘logic’).
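That benchmark arithmetic is easy to verify. Below is a minimal sketch in Python, assuming a conventional standard-score scale with a mean of 100 and a standard deviation of 15; the scale is an assumption chosen for illustration only, not something the AERO guide prescribes.

# A minimal sketch, not from the AERO guide: checking that a cut-off of one
# standard deviation below the mean (-1 SD) sits at roughly the 16th percentile
# on a normally distributed measure. The mean-100 / SD-15 scale is assumed.
from statistics import NormalDist

scale = NormalDist(mu=100, sigma=15)   # assumed standard-score scale
cut_off = scale.mean - scale.stdev     # one SD below the mean = 85

proportion_below = scale.cdf(cut_off)  # share of students scoring below the cut-off
print(f"A score of {cut_off:.0f} falls at about the {proportion_below:.1%} point")
# Prints roughly 15.9%, i.e. approximately the 16th percentile quoted above.

The same proportion, roughly 16%, applies to any normally distributed measure, whatever its mean and standard deviation.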
Using a broad universal screener that assesses multiple components of reading (for example, word reading and comprehension) can guard against the collection of unreliable data. Drawing on Category B information (as defined in AERO’s MTSS decision tree – NAPLAN data, school reports, written samples) can also provide further data to inform decision-making.
Diagnostic assessments
Universal screening assessments are designed to provide information about students struggling with reading, but often don’t give much information about which reading-related skills need developing. Diagnostic assessments are so-named because they diagnose the nature of a student’s difficulties with word reading and/or comprehension. They’re designed to provide information about the reading-related skills responsible for a student’s reading difficulties. This informs decisions about which interventions are needed to target those skills.
Diagnostic assessments focus on specific skills (such as word reading and decoding), so a student may need to complete a suite (‘battery’) of diagnostic assessments to accurately profile their reading skills. Diagnostic assessments often take longer to administer than universal screening assessments and are typically administered in a one-to-one or small group setting by someone trained in standardised assessments. Like universal screening assessments, diagnostic assessments compare a student’s results to the average level expected for their age and grade.
Ideally, assessments should be specifically designed or adapted for secondary school students. However, some diagnostic assessments designed for upper primary school students may be suitable. For example, if a secondary student scores below the mean level expected for Year 6 students on a word reading test, then their word reading is below that required for secondary school. It should be noted that diagnostic assessments aren’t diagnostic in the sense of being able to diagnose underlying conditions such as specific learning disorders.
If a student fails to benefit from targeted intervention at school or is suspected to have a learning disorder, they should ideally be referred for evaluation by a trained specialist in reading or spoken language.
Selecting reading assessment tools in MTSS
Many universal screening or diagnostic assessment tools are available in Australia. When selecting a screening assessment tool (either universal or diagnostic) for use in secondary school, there are several factors to consider.
The National Centre on Improving Literacy has created a resource to guide American educators in selecting or assessing a screening assessment for their school context.1 This information is summarised in Table 1, along with additional considerations related to cost, access and the Australian context. See Example Reading Assessment Tools in MTSS for specific examples.

Table 1: Criteria for selecting an assessment tool (universal and diagnostic)

Consideration: What’s the student cohort of interest?
What to look for: The assessment should be designed for students of the same age and grade as the population it will be used with. It’s also important to consider whether the test has been designed with Australian students and diverse populations in mind. A test that hasn’t been evaluated with students from culturally and linguistically diverse populations may over- or under-estimate student performance.

Consideration: What’s the scope of the assessment?
What to look for: Assessments may cover a broad range of reading skills (accuracy, rate, comprehension), or only assess one or 2 skills closely. They may evaluate concepts and knowledge ranging from early to advanced or target a narrow set of skills to pinpoint instructional needs and determine short-term response to intervention (e.g., Curriculum Based Measures). Selection of assessment should be based on a clear understanding of how the assessment has been designed and what it is (and isn’t) intended to measure.

Consideration: Is the assessment reliable?
What to look for: It’s essential that the tool consistently yields accurate and stable results over time. The outcomes should not vary notably when administered by different people. A reliable tool minimises measurement error and provides educators with confidence in the data, enhancing the accuracy of screening decisions and subsequent interventions within the MTSS framework.

Consideration: Is the assessment valid?
What to look for: The assessment tool should be an accurate, or reasonable, measure of the skill/s it claims to evaluate. Valid assessments offer educators confidence that the data generated reflects students’ actual reading proficiency.

Consideration: Is the assessment sensitive and specific when identifying students whose academic skills are less developed than expected for their age and grade?
What to look for: The tool should be sensitive – that is, able to accurately identify students who need intervention, minimising false negatives (such as students who achieve average-range results despite below-average reading abilities). It must also be specific – that is, able to correctly identify students who don’t need intervention, reducing false positives. This balance is crucial for effective decision-making within the MTSS framework, preventing both under-identification and over-identification of students needing support.

Consideration: Is the assessment suitable for the school context in terms of financial, resource and staffing demands, expected reading skills, and the number of students requiring assessment?
What to look for: This will vary between schools. However, assessment tools that are cost-effective, efficient, and easily scalable have obvious advantages. Consider whether the tool is available in Australia and the tool’s alignment with Australian educational standards and curricula to ensure its relevance and suitability for the Australian context.

Consideration: Is the assessment user-friendly and accessible?
What to look for: Ease of administration, scoring and data interpretation is another key criterion. Accessibility to test materials is crucial, including whether the assessments are available in multiple formats, such as digital and paper-based, to accommodate various school settings and student needs.
More information
AERO’s MTSS resources provide further information about using MTSS to support students, including:
• how to support secondary students struggling with reading using an MTSS decision tree
• example reading assessment tools
• how to choose interventions that target reading skills gaps.”
Petscher, Y., & Suhr, M. (2022). Considerations for choosing and using screeners for students with disabilities. In C. J. Lemons, S. R. Powell, K. L. Lane, & T. C. Aceves (Eds.), Handbook of special education research (Vol. 2, pp. 83–96). Routledge. AERO Practice guide – Choosing reading assessments in MTSS
NAPLAN results highlight need for action on reading (2024)
“National NAPLAN scores released yesterday show that one in three Australian school students are still not meeting literacy and numeracy benchmarks, and more than one in ten are so far behind they require additional support.
These literacy results tie in to recent findings from the Grattan Institute that reported one in three Australian students are poor readers, as well as recent ABS findings which report declining rates in leisure reading amongst children and young people.[1]
Research shows that reading for pleasure is four times more influential on intellectual progress in teens than having a parent with a degree.[2] Reading also has proven positive effects on mental health and self-esteem, with 74% of children agreeing that reading is a way to help them understand the world.[3]
The latest NAPLAN results show that too many Australian children are missing out on the life-changing benefits of reading.
Action to both improve literacy skills and embed a long-term love of reading is critical. Evidence shows direct links between positive attitudes towards reading, frequency of reading, and reading attainment at school.[4]
The Australian Library and Information Association (ALIA) highlighted the importance of school libraries in tackling falling reading rates. “Urgent action is needed to ensure that all students have access to a well-resourced school library run by qualified staff. Without this, teachers, parents and the whole community face an uphill battle to encourage a reading culture in our young people,” said ALIA CEO Cathie Warburton in ALIA’s statement.
Australia Reads joins the call seeking urgent action to support reading in our schools. Alongside funding the critical infrastructure that supports reading, we also call for the creation of a national reading engagement policy, as well as investment in targeted reading programs and campaigns to stop this downward trajectory.
Find out more about our ongoing work to get more Australians reading by signing up to our monthly enewsletter.
Want to learn more?
Citations:
[1] Australian Bureau of Statistics, 2021-2022, Cultural and creative activities, ABS, Australia
[2] Sullivan, A., & Brown, M. 2013 ‘Social inequalities in cognitive scores at age 16: The role of reading.’ CLS Working Papers
[3] Scholastic, 2019 Kids and Family Reading Report
[4] Clark, C., and Douglas, J. 2011 ‘Young People’s Reading and Writing An indepth study focusing on enjoyment, behaviour, attitudes and attainment’, National Literacy Trust
Australia Reads
How do we get more people reading? What are the best ways to reach those who rarely read? And what can we practically do to influence reading habits?
Understanding Australian readers: Behavioural insights into recreational reading is a new research report by Australia Reads and Monash University’s BehaviourWorks Australia tackling these questions, and providing valuable insights into how to positively influence reading participation in Australia.
The report outlines the behaviours that constitute recreational reading, and the various drivers and barriers to these behaviours for different audience segments across the Australian population.
This research provides a practical and thought-provoking base to support an industry-wide conversation about how to reach these different readers through targeted campaigns and programs, as well as how to prioritise this work most effectively to shift national reading rates in Australia.”
Australia Reads. (2024). NAPLAN results highlight need for action on reading.
Mata, F., Lennox, A., & Wright, B. (2024). Understanding Australian readers: Behavioural insights into recreational reading. Australia Reads & BehaviourWorks Australia, Monash University.
It is clear that NAPLAN’s circumstances have varied considerably over the years!
____________________________________________________