
Hempenstall, Kerry. (2014). What works? Evidence-based practice in education is complex. Australian Journal of Learning Difficulties. 19(2), 113-127.

DOI:10.1080/19404158.2014.921631

https://www.researchgate.net/publication/272121037_What_works_Evidence-based_practice_in_education_is_complex

First, let’s look at some recent findings about giving every child the best chance of success (covering 2020 to 2025).

 

Then comes my older piece (as titled at the top). The aim is to see how issues in evidence-based practice (EBP) have changed.

_______________________________________________ 

How to give every child the best chance of success (2024)

“A fundamental promise of an education system is that almost every child who goes to school will learn how to read. Yet recent evidence shows about one in three Australian school students are not mastering the reading skills they need. Australia is failing these children. Students from poor families, from regional and rural areas, and Indigenous students tend to face bigger barriers to reading success. But about one in four students from well-off families struggle too. Decades of disagreement about how to teach reading have contributed to many students missing out on best-practice teaching.

For too many students, learning to read well comes down to luck, not design. Every child we fail to teach to read misses out on a core life skill, and Australia misses out on their potential too. For those students in school today who are hardest hit by poor reading performance, the cost to Australia is about $40 billion over their lifetimes. There is no reason our students should perform worse than students in similar countries. England and Ireland, and about 30 US states, have made big policy changes to help schools to teach according to the evidence – with great results. Australia should follow their lead.

The evidence is clear: there should be a strong focus on phonics-based decoding skills in the early years. Students also need a knowledge-rich curriculum to build the vocabulary and background knowledge that are critical for successful reading comprehension all through school. And schools need to track student progress, so they can intervene early to help struggling students to catch-up. But governments can’t leave schools to figure out on their own how to implement these evidence-informed practices. Australia’s governments need to get serious about ensuring best practices are used in all schools, so no student falls through the cracks.

Change will not be easy. Getting this right consistently in every one of the nearly 10,000 schools across the country will involve a big shift. It will require many teachers to stop using familiar but less effective practices, and adopt new, more effective, ones. Australia’s governments, and Catholic and independent school sector leaders, should commit to a 10-year ‘Reading Guarantee’ strategy to meet the reading challenge.

The strategy should include six steps. First, they should commit publicly to ensuring that at least 90 per cent of Australian students learn to read proficiently at school. Second, they should give schools and teachers specific, practical guidelines on the best way to teach reading. Third, they should ensure schools have well-sequenced, knowledge rich curriculum materials and effective assessment tools. Fourth, they should require schools to do universal screening of reading skills and help struggling students to catch-up. Fifth, they should ensure teachers are equipped to teach according to the evidence through training, new quality-assured micro-credentials, and by creating specialist literacy roles. And sixth, they should improve system monitoring and accountability by mandating a nationally consistent Year 1 Phonics Screening Check for all students, and strengthen school and principal reviews. This will require significant investment and political commitment, but the gains will be worth it. If implemented well, Australia would finally deliver on a key promise of schooling: to teach children to read.

Recommendations

Australia’s governments, and Catholic and independent school sector leaders, should commit to a 10-year ‘Reading Guarantee’ strategy, to be reviewed every five years. This should include six key steps:

Step 1: Commit to at least 90 per cent of students becoming proficient readers (a) Commit to a long-term goal of at least 90 per cent of students reaching proficiency in reading, as measured by the proportion of students in the ‘strong’ or ‘exceeding’ categories in NAPLAN, across Years 3, 5, 7, and 9. (b) Commit to a 10-year target of increasing by 15 percentage points the proportion of students across Years 3, 5, 7, and 9 who reach proficiency, based on 2023 state-level NAPLAN data. Averaged across all states, this will require an uplift from 68 per cent in 2023 to 83 per cent in 2033. (c) Report on progress on targets, including progress of high achievers and disadvantaged students, through a stand-alone annual report tabled in all Australian parliaments.*

Step 2: Give teachers and school leaders specific guidelines on how to teach reading according to the evidence (a) Develop national teaching practice guidelines on reading instruction and catch-up supports through a process led by the Australian Education Research Organisation.* (b) Review existing guidance provided to schools and teachers on reading instruction, and ensure advice is consistent and aligned to the evidence. (c) Invest $20 million in education research over five years to strengthen the guidelines by filling research gaps and exploring effective ways to implement best-practice reading instruction in schools.*

Step 3: Ensure schools have the high-quality curriculum materials and assessments teachers need to teach reading well (a) Ensure teachers can get quality-assured whole-class curriculum materials and intervention programs for all year levels. As a priority, governments should invest in primary school knowledge-rich materials for the Humanities and Social Sciences (HaSS), Science, and English, and reading intervention programs and assessment tools for students in secondary school. (b) Invest in decodable readers for government and low-fee non-government primary and secondary schools, for early reading instruction and intervention support for struggling students. (c) Fund disadvantaged low-performing schools, regardless of their sector, through a one-off grant to purchase quality-assured curriculum materials, including evidence-informed reading programs. (d) Commit to phasing-out materials, reading programs, and reading assessment tools that are not aligned with the evidence. (e) Establish a rigorous, independent, quality-assurance mechanism, similar to the US quality-assurance body EdReports, to continually evaluate and report on the quality of comprehensive curriculum materials available to schools. Reading programs should also be validated, as happens in England.* (f) Validate primary and secondary reading assessment tools to ensure schools know which reading assessments are effective.*

Step 4: Require all schools to do universal screening of reading skills and help students falling behind to catch-up (a) Require all schools to use evidence-informed reading assessment tools at least twice a year to screen students from Foundation to Year 2, in the transition to secondary school, and for any new school entrants. (b) Require all primary and secondary schools to embed a ‘response-to-intervention’ model, which includes additional catch-up support for students falling behind in reading, according to the best-practice guidelines (see Step 2a). (c) Ensure key data are built into student records that are attached to a national Universal Student Identifier (USI), so that students’ academic record goes with them when they switch schools.*

Step 5: Ensure teachers have the knowledge and skills they need to teach reading well, through essential training and new quality-assured micro-credentials, and by creating specialist literacy teacher roles (a) Develop and subsidise quality-assured micro-credentials in evidence-informed reading instruction for teachers, teaching assistants, specialists, and other educators, and provide incentive payments to schools that employ teachers and teaching assistants with these certifications.* (b) Require all primary school classroom teachers to spend at least 25 per cent of their professional learning hours for accreditation on quality-assured training on reading instruction. (c) Have an independent body quality-assure training in reading instruction, to ensure it is effective and in line with best-practice guidelines (see Step 2a).* (d) Coach teachers in reading instruction by creating a Literacy Instructional Specialist role in every school, a Literacy Master Teacher role in every region, and a Literacy Principal Master Teacher for every system. (e) Build the specialist supports pipeline by providing university scholarships for specialist roles, such as speech pathology and educational psychology degrees.* (f) Establish exemplar demonstration schools to showcase best practice, drawing on the ‘English Hubs’ model.

Step 6: Encourage best-practice teaching through closer monitoring and strengthened school performance reviews (a) Mandate a nationally consistent Year 1 Phonics Screening Check for all students, as a system ‘health check’ on early reading performance. (b) Commit to more frequent and more comprehensive school reviews. Reviews should be done at least every four years, and include a rigorous examination of student achievement, curriculum implementation, and instructional approaches to reading. (c) Enhance the performance reviews of school principals by including criteria on implementation of evidence-informed reading practices and assessment protocols, according to the best-practice guidelines (see Step 2a).

Summary of where we are and where we should be.

From: situation today: One in three Australian students are poor readers • Reading underperformance is persistent. • Two in three disadvantaged students are not reading proficiently. • Half of regional and remote students are not reading proficiently.

To: Reading Guarantee: Almost all Australian students are proficient readers • Proportion of proficient students increases by at least 15 percentage points over 10 years, and reaches 90 per cent in the longer term. • Gaps between advantaged and disadvantaged students are small.

From: situation today: Huge differences in the way reading is taught in classrooms • How students are taught to read varies significantly across the country, with many students not being taught according to the best evidence. • Many students are not helped to catch-up if they are falling behind, resulting in some students being years behind their year-level.

To: Reading Guarantee: All students are taught reading according to the best evidence • Students are taught how to read using a structured literacy approach, including building vocabulary and comprehension all through school. • Students don’t fall through the cracks. All those at risk of falling behind are helped to stay on track in small groups or one-on-one.

From: situation today: Teachers lack the knowledge and skills to teach reading well • Inadequate pre-service and in-service training on the best way to teach reading. • Not enough teachers with expertise in reading to coach other teachers.

To: Reading Guarantee: Teachers have the knowledge and skills to teach reading well • Teachers get high-quality in-service training on reading instruction. • Literacy Instructional Specialists and Literacy Master Teachers coach teachers to hone their teaching of reading across all year levels

From: situation today: School resources are poorly organised • Different teaching practices between classrooms in a school, undermining development of reading skills as students progress through year-levels. • Lack of access to and knowledge of which reading assessments, curriculum materials, or reading programs to use. • Not enough speech pathologists and educational psychologists. • Lack of accountability for poor instructional practice.

To: Reading Guarantee: Schools have the resources they need to teach reading well • Schools take a whole-school approach to reading instruction. • Access to quality-assured programs, materials, and decodable readers. • Schools have a ‘multi-tiered system of support’ (MTSS), with a robust screening approach and intervention supports for struggling students. • Sufficient speech pathologists and educational psychologists • Accountability for the quality of instructional practice.

From: situation today: System doesn’t provide enough support to schools and teachers • Governments provide inconsistent guidance on reading instruction. • Under-investment in supports such as guidelines, materials, etc. • Lack of accountability for students’ poor reading performance.

To: Reading Guarantee: A highly reliable system that gives every student the best chance • Evidence-based guidelines set clear expectations for instruction. • Investment in system-level supports, including materials and training. • Mandatory Yr 1 Phonics Screening Check and stronger school reviews.”

Jordana Hunter, Anika Stobart, and Amy Haywood. (2024). The Reading Guarantee: How to give every child the best chance of success. https://grattan.edu.au/wp-content/uploads/2024/02/The-Reading-Guarantee-Grattan-Institute-Report.pdf

_______________________________________________

 

Evidence-based teaching practices (2023).

“The quality of teaching delivered to students is the most impactful factor within an education setting which can improve student outcomes, particularly for those who are most marginalised and disadvantaged. As a profession, teachers dedicate their lives and careers to trying to improve outcomes for their students, including in challenging environments where numerous factors that impact achievement are outside of the control of a teacher or school.

Many teachers rely on intuitive expertise gained from years of arduous trial and error, ongoing on-the-job professional learning, and engagement with successful and accomplished colleagues to find the most effective and successful practices, often encountering inconsistent or contradictory advice in this process. The time expended upskilling on fundamental knowledge on-the-job could be more effectively used to further enhance expertise if these foundations were consistently taught through ITE programs.

Evidence-based practices in education are practices backed up by research evidence. This means there is broad consensus from rigorously conducted evaluations that they work in many cases across various contexts, for different subgroups of students and various locations. Additionally, there is extensive rigorous research on approaches to teaching subject-specific content, such as reading and numeracy, that can complement generic evidence-based pedagogical approaches.

Greater gains in positive student outcomes and a significant reduction in teacher workload could be achieved if teachers were armed with a knowledge and understanding of what works best and why. This knowledge should be built up from core foundational content delivered first through ITE, then tried and tested during professional experience, and built on coherently through ongoing professional learning and practice.”

“This model links elements of student learning processes to associated teaching practices in 4 key areas: 

  1. To align with the evidence that learning is a change in long-term memory, teachers develop a teaching and learning plan for the knowledge students will acquire.
  2. To align with the evidence that students process limited amounts of new information, teachers manage the cognitive load of learning tasks.
  3. To align with the evidence on how students develop and demonstrate mastery, teachers maximise retention, consolidation and application of learning.
  4. To align with the evidence that students are actively engaged when learning, teachers foster the conditions of a learning-focused environment.

This fourth essential element wraps around the other elements of the model, recognising that: 

  • engagement and learning have a reciprocal relationship
  • students learn best in safe and supportive learning environments. 

The model recognises that:  

  • all students benefit from evidence-based practices that align with the mechanisms of memory that allow for acquiring, retaining, retrieving and consolidating learning
  • the frequency, intensity and duration of scaffolding and guidance provided may differ to meet students’ needs. 

This model aligns with Australian Professional Standards for Teachers 1.2: Understand how students learn. Teachers at all career stages can use this model and the related overview of how students learn to affirm, extend or improve their current practice by: 

  • developing their understanding and use of research into how students learn
  • reviewing the structure of their teaching programs using research evidence
  • evaluating the effectiveness of teaching practices in their schools to identify opportunities to have a greater impact for all of their students. 

Leaders can also use the learning and teaching model to develop a common language and shared understanding of how students learn, and to ensure policies and programs are aligned with this evidence to maximise learning for all students.” 

Australian Education Research Organisation (AERO). (2023). Evidence-based teaching practices. https://www.edresearch.edu.au/topics/evidence-based-practices

Primary English Teaching Association Australia (PETAA) (2024)

 

“This report presents findings from a comprehensive survey of 500 Australian teachers focused on reading instruction practices. Overall, teachers report strong confidence in their knowledge and pedagogical skills, with over 75% indicating they feel well-equipped to teach reading. They demonstrate a clear understanding of reading development, effectively shifting emphasis from foundational code-based instruction in early years to comprehension in later stages. Instructionally, the majority of teachers (82%) deliver teacher-led reading lessons multiple times per week, aligning with evidence-based practices.

The literacy block remains a foundational structure, with 98% of early years teachers addressing all five pillars of reading during this time, though there is notable variation in how these blocks are implemented. Overall, teachers show a nuanced approach to text selection, balancing decodable and authentic texts in the early years, and progressing appropriately in text complexity as students’ reading skills advance. However, differentiation remains a significant challenge with variability in student ability, time constraints, and limited resources as major hurdles.

Professional learning is largely self-directed, with many teachers turning to online resources and social media over formal training. Few teachers engage in regular professional dialogue about reading instruction, and most (65%) do not use department-provided instructional materials, instead relying on third-party sourced, self-made or school-based resources.

Finally, only half of surveyed teachers report operating within a whole-school reading approach, highlighting inconsistencies in instructional practices both within and between schools.

Key Findings

Survey respondents demonstrate strong professional knowledge in reading instruction, with widespread implementation of evidence-based practices. Teachers effectively adjust instructional focus according to developmental needs, transitioning from code-based emphases in early years to greater comprehension focus in upper primary. However, this does not mean comprehension and meaning-making are neglected in the early years.

Most teachers (82% of respondents) implement teacher-led reading instruction at least 3-4 days a week. Teachers employ effective strategies for differentiating instruction, including flexible and ability-based grouping. However, time constraints, wide ability ranges, and resource limitations represent the primary challenges in differentiating reading instruction. 35% of respondents believe their schools do not have a whole-school approach to the teaching of reading.

Most teachers integrate reading instruction across the curriculum, particularly focusing on vocabulary and comprehension beyond dedicated literacy blocks. The majority of teachers are using a wide range of evidence-informed strategies to support and extend EAL/D students in their classrooms. Most teachers have limited interactions with staff either in their schools or elsewhere in their professional networks around their knowledge of the teaching of reading.

National Teaching of Reading Survey 2024

 In 2024, PETAA conducted its first national survey on the teaching of reading. The topic of reading instruction is extensively researched, debated, and often contentious. Media headlines frequently suggest teachers lack the requisite knowledge, yet public discourse rarely includes teachers' own voices. PETAA sought direct insights from teachers about classroom practices—examining the skills, knowledge, approaches, and strategies used daily.


Methodology and Limitations

This survey was designed to capture the perspectives and practices of teachers regarding reading instruction in Australian schools. The survey was distributed through PETAA email networks and social media, employing a convenience sampling approach. A total of 500 responses were collected from teachers across various educational contexts and systems.

Survey Design

The survey included both closed-ended questions (multiple choice, Likert scale) and open-ended questions allowing for detailed text responses. The survey was modified and expanded from the work of Gawne (2020) in her doctoral study, Principles, Practices and Priorities of Teaching Reading in the Early Years of Schooling. Questions were organised into sections addressing key aspects of reading instruction, including time allocation, instructional approaches, text selection, differentiation strategies, and professional development.

Limitations

While this survey provides valuable insights into teacher practices and perspectives, several limitations should be considered when interpreting the results:

Sample Representation: With 64% of respondents over 50 years of age, the sample skews toward more experienced teachers. The perspectives of early-career teachers may be under-represented.

Self-Selection Bias: As participation was voluntary and distributed through PETAA networks, respondents may not represent the full range of Australian teachers. Those with a strong interest in literacy instruction or an affiliation with PETAA may be over-represented.

Self-Reported Data: The survey relies on self-reported practices rather than observed classroom behaviours. Research suggests self-reports may sometimes differ from actual practice.

Definition Variability: Despite providing a glossary of terms, teachers may interpret certain concepts (e.g., "explicit instruction" or "decodable texts") differently based on their training and experience.

Regional Distribution: The survey did not analyse responses by geographical region, which may mask important differences between states and between metropolitan, rural, and remote educational contexts.

School Context: Limited information was collected about specific school contexts (e.g., socioeconomic status, cultural diversity) which may influence instructional approaches.

What not Why: The data tell us what teachers do, but not why they do it. It was beyond the scope of this survey to gather detailed, systematic information about teachers’ rationale for particular pedagogical choices.

These limitations should be considered when interpreting findings and attempting to generalise to the broader population of Australian teachers.

Key Findings

Instructional Practices

Most teachers (82%) employ teacher-led reading instruction at least 3-4 days per week, indicating strong alignment with evidence-based approaches. The literacy block remains a cornerstone of reading instruction, with 98% of early years teachers addressing all five pillars of reading within this dedicated time. There is significant variety in how literacy blocks are structured and implemented across schools, reflecting teacher autonomy but also, potentially, some inconsistent practices.

Differentiation Challenges

Teachers consistently identify the wide range of student abilities within a single classroom as their greatest challenge in reading instruction. Time constraints and resource limitations create significant barriers to effective differentiation. There is substantial reliance on learning support staff (70%) for differentiation, potentially creating inequities between well-resourced and under-resourced schools.

Text Selection and Use

Teachers demonstrate nuanced understanding of appropriate types of text, with early years teachers appropriately emphasising decodable texts while maintaining exposure to authentic texts. There is evidence of appropriate progression in text complexity as students develop reading proficiency.

Teacher Expertise and Confidence

Teachers show strong differentiation practices and knowledge related to teaching reading. They effectively adjust instructional focus according to developmental needs, transitioning from code-based emphases in early years to greater comprehension focus in upper primary. However, this does not mean comprehension and meaning-making are neglected in the early years.

Key Conclusions

Based on the comprehensive survey of 500 Australian teachers regarding reading instruction practices, and acknowledging the limitations noted above, several significant conclusions emerge:


Whole-school Approaches

Only 50% of teachers reported working within a whole-school approach to reading instruction, with 41% having school-created approaches and 9% using commercial programs. This suggests potential inconsistency in reading instruction approaches both within and across schools, as well as a lack of guidance or structure for early career teachers.

Professional Development and Collaboration

Teachers primarily rely on self-directed learning through online resources, blogs, and social media rather than formal professional development opportunities. Only a small percentage of teachers engage in regular collaborative discussions about reading instruction with colleagues. Most teachers (65%) are not utilising department/government-provided instructional materials, instead creating their own or using school-based resources.


Recommendations

Based on these findings, and informed by current research evidence in reading instruction (Castles et al., 2018; Duke et al., 2021), we recommend the following actions to further strengthen the teaching of reading in Australian schools:

1. Develop and Implement Consistent Whole-School Approaches

Government and systems: Continue to provide foundational training in evidence-informed reading instruction to all school leaders, focusing on how to craft and implement coherent whole-school reading approaches that respect teacher knowledge and maintain autonomy. Support the inclusion of literacy leaders in schools who can employ a model of coaching, thereby supporting early career teachers and embedding a whole-school approach to reading.

Schools: Establish clear whole-school reading frameworks that maintain teacher autonomy while ensuring instructional coherence and continuity across year levels. Ensure schools are committed to professional learning, in line with the whole-school approach to the teaching of reading. Schools should ensure teachers have both a knowledge of the essential elements of teaching reading, including the foundational element of oral language, and the skills to teach reading effectively.

2. Address Time and Resource Constraints

Government and systems: Reduce administrative burdens on teachers to provide additional time for professional development in reading instruction and lesson preparation.

Government: Consider creating freely available, quality phonics programs that extend current Literacy Hub resources, reducing financial burdens on schools. Victoria has produced one such program.

Schools: Allocate protected time for collaborative planning and discussion of reading instruction.

3. Enhance Professional Development and Collaboration

Systems: Create opportunities for cross-school teacher visits and observation of exemplary reading instruction.

Schools: Establish structured professional learning communities, which support teachers to engage in collaborative practice such as peer classroom observations, reflection and feedback, focused specifically on reading instruction.


4. Strengthen Support for Differentiation

Government and systems: Review and potentially increase funding for learning support staff, particularly in schools with high needs.

Schools: Ensure clear support processes and structures between the varied teams within a school, to understand student needs, including those that require extension as well as those that require extra support, aligning the expertise of classroom teachers, learning support staff and pedagogy coaches or leaders, in planning for differentiation.

5. Diversify Reading Materials and Approaches

Schools: Review classroom and library collections to ensure they include diverse authors and perspectives. Support professional learning for teachers which includes how to engage with diverse texts where students see themselves and others represented, developing understanding and empathy through critical reflection.

Government: Provide funding for schools to expand their collections of diverse reading materials.

6. Support Early Career Teachers

Schools: Implement structured mentoring programs pairing experienced and early career teachers, specifically focused on reading instruction.

Systems: Reduce teaching loads for beginning teachers to allow more time for planning and professional learning about reading instruction.

By implementing these recommendations, Australian education systems can build on the strong foundation of teacher expertise evident in this survey while addressing key challenges and inconsistencies in reading instruction approaches. The ultimate goal is to ensure all students receive high-quality, evidence-based reading instruction that enables them to become proficient, engaged readers.

Teacher voices: the final word

“Responsive, differentiated teaching based on teacher judgement works better than commercial programs.”

“The many varied ‘views’ on teaching reading and the discussion of different approaches to teaching reading is vastly different from school to school, system to system and pedagogically between teachers. And that’s ok. We are mandated to teach the syllabus! Not a particular program. Personally, after 20+ years of being a Literacy specialist teacher, I can honestly say there isn’t ONE way to teach reading. There just isn’t. Not every approach, pedagogy, text, grouping, activity is suitable for every child. As the standards say, we have to know the content and how to teach it, and we have to know our students and how they learn.”

“It is amazing to be part of a child’s reading journey, it makes you keep trying new things to help it click!”

“I am sick of the reading wars and political football that reading is in our country. Why can’t evidence be used to show that it is not one approach or another but a combination of approaches that students need to learn to read. We also need to alter approaches according to the individual student’s needs. It is not a one-size-suits-all approach.”

“In implementing SOR [Science of Reading approaches] (which in many ways is returning to a form of teaching from the past), many leaders are failing to see that some staff are experienced in many of the key areas being presented. Many think it is all ‘new’, when it’s not. There is the possibility that many evidence-based teaching practices not identified as SOR are being ignored in this transition phase, and the baby is being thrown out with the bathwater...”

“Reading is not a stand-alone. It is the backbone of everything we do.”

“Perfecting how you teach reading is a continuous process of learning, implementing and reflecting.”

“It is important for the public to know that teachers do and always have taught reading explicitly. The false narrative out there that we don’t is damaging and unfair. The child has to be at the centre of my decisions, so every day I ask myself: what do they want to read and why? How can I help them achieve that?”

“Schools and teachers need to be provided with the tools and resources to do their job consistently across all schools, without the need for schools and teachers themselves to spend money in order to ensure that decent reading instruction is happening.”

PETAA extends sincere thanks to the many teachers who generously contributed their time, experience and professional insights to this landmark survey. Your voices have shaped a clearer national picture of how reading is taught in Australian classrooms today, and how it can be strengthened for the future.

References

Adam, H. (2021). When authenticity goes missing: How monocultural children’s literature is silencing the voices and contributing to invisibility of children from minority backgrounds. Education Sciences, 11(1). https://doi.org/10.3390/educsci11010032

Allington, R. L. (2014). How reading volume affects both reading fluency and reading achievement. International Electronic Journal of Elementary Education, 7(1), 13-26.

Archer, A. L., & Hughes, C. A. (2011). Explicit instruction: Effective and efficient teaching. Guilford Press.

Cabell, S. Q., & Hwang, H. (2020). Building content knowledge to boost comprehension in the primary grades. Reading Research Quarterly, 55(S1), S99-S107.

Castles, A., Rastle, K., & Nation, K. (2018). Ending the reading wars: Reading acquisition from novice to expert. Psychological Science in the Public Interest, 19(1), 5-51.

Cremin, T. (2024). Developing readers who choose to read. Practical Literacy: The Early and Primary Years, 29(3), 10–13. https://doi.org/10.3316/informit.T2024101000003201975691527

Darling-Hammond, L., Hyler, M. E., & Gardner, M. (2017). Effective teacher professional development. Learning Policy Institute.

Deunk, M. I., Smale-Jacobse, A. E., de Boer, H., Doolaard, S., & Bosker, R. J. (2018). Effective differentiation practices: A systematic review and meta-analysis of studies on the cognitive effects of differentiation practices in primary education. Educational Research Review, 24, 31-54.

Duke, N. K., & Martin, N. M. (2015). Best practices for comprehension instruction in the elementary classroom. In S. R. Parris & K. Headley (Eds.), Comprehension instruction: Research-based best practices (3rd ed., pp. 211-228). Guilford Press.

Duke, N. K., & Cartwright, K. B. (2021). The science of reading progresses: Communicating advances beyond the simple view of reading. Reading Research Quarterly, 56(S1), S25-S44.

Duke, N. K., Ward, A. E., & Pearson, P. D. (2021). The science of reading comprehension instruction. The Reading Teacher, 74(6), 663-672.

Elleman, A. M. (2017). Examining the impact of inference instruction on the literal and inferential comprehension of skilled and less skilled readers: A meta-analytic review. Journal of Educational Psychology, 109(6), 761-781.

Foorman, B., Beyler, N., Borradaile, K., Coyne, M., Denton, C. A., Dimino, J., ... & Wissel, S. (2016). Foundational skills to support reading for understanding in kindergarten through 3rd grade (NCEE 2016-4008). National Center for Education Evaluation and Regional Assistance.

Gawne, L. (2020). Principles, practices and priorities of teaching reading in the early years of schooling [Doctoral dissertation, University of Melbourne]. https://minerva-access.unimelb.edu.au/bitstream/11343/274814/1/f2dbf902-4230-eb11-94d0-0050568d7800_LindaGawne_PrinciplesPracticesandPrioritiesofTeachingReading.pdf

Gore, J., & Rosser, B. (2020). Beyond content-focused professional development: Powerful professional learning through genuine teacher inquiry. Professional Development in Education, 46(5), 809-825.

Gough, P. B., & Tunmer, W. E. (1986). Decoding, reading, and reading disability. Remedial and Special Education, 7(1), 6-10.

Graham, L. J., de Bruin, K., Lassig, C., & Spandagou, I. (2020). A scoping review of 20 years of research on differentiation: Investigating conceptualisations, interpretations, and implementations. Review of Education, 8(1), 36-69.

Hargreaves, A., & O'Connor, M. T. (2018). Collaborative professionalism: When teaching together means learning for all. Corwin Press.

Hoover, W. A., & Gough, P. B. (1990). The simple view of reading. Reading and Writing, 2(2), 127-160.

International Literacy Association. (2018). Standards for the preparation of literacy professionals 2017. International Literacy Association.

Kennedy, M. M. (2016). How does professional development improve teaching? Review of Educational Research, 86(4), 945-980.

Konza, D. (2016). Understanding the process of reading: The big six. In J. Scull & B. Raban (Eds.), Growing up literate: Australian literacy research for practice (pp. 149-175). Eleanor Curtain Publishing.

Ladson-Billings, G. (2014). Culturally relevant pedagogy 2.0: a.k.a. the remix. Harvard Educational Review, 84(1), 74-84.

Nation, K. (2019). Children's reading difficulties, language, and reflections on the simple view of reading. Australian Journal of Learning Difficulties, 24(1), 47-73.

National Reading Panel. (2000). Teaching children to read: An evidence-based assessment of the scientific research literature on reading and its implications for reading instruction. National Institute of Child Health and Human Development.

Okkinga, M., van Steensel, R., van Gelderen, A. J., & Sleegers, P. J. (2018). Effects of reciprocal teaching on reading comprehension of low-achieving adolescents: The importance of specific teacher skills. Journal of Research in Reading, 41(1), 20-41.

Paris, D., & Alim, H. S. (2017). Culturally sustaining pedagogies: Teaching and learning for justice in a changing world. Teachers College Press.

Paris, S. G. (2005). Reinterpreting the development of reading skills. Reading Research Quarterly, 40(2), 184-202.

Puzio, K., Colby, G. T., & Algeo-Nichols, D. (2020). Differentiated literacy instruction: Boondoggle or best practice? Review of Educational Research, 90(4), 459-498.

Rosenshine, B. (2012). Principles of instruction: Research-based strategies that all teachers should know. American Educator, 36(1), 12-19.

Scarborough, H. S. (2001). Connecting early language and literacy to later reading (dis)abilities: Evidence, theory, and practice. In S. Neuman & D. Dickinson (Eds.), Handbook of early literacy research (pp. 97-110). Guilford Press.

Shanahan, T. (2019). Reading-writing connections. In S. Graham, C. A. MacArthur, & M. Hebert (Eds.), Best practices in writing instruction (3rd ed., pp. 309-332). Guilford Press.

Shanahan, T. (2019). Which texts for teaching reading: Decodable, predictable, or controlled vocabulary? Shanahan on Literacy. https://www.shanahanonliteracy.com/blog/which-texts-for-teaching-reading-decodable-predictable-or-controlled-vocabulary

Shanahan, T. (2020). What is the best way to organize a classroom for reading instruction? Shanahan on Literacy. https://www.shanahanonliteracy.com/blog/what-is-the-best-way-to-organize-a-classroom-for-reading-instruction

Shanahan, T. (2024). The reading time conundrum. Shanahan on Literacy. https://www.shanahanonliteracy.com/blog/the-reading-time-conundrum

Shanahan, T. (2024). Should we teach with decodable text? Shanahan on Literacy. https://www.shanahanonliteracy.com/blog/should-we-teach-with-decodable-text-1

Slavin, R. E. (2014). Cooperative learning and academic achievement: Why does groupwork work? Anales de Psicología, 30(3), 785-791.

Snow, C. E., Griffin, P., & Burns, M. S. (Eds.). (2005). Knowledge to support the teaching of reading: Preparing teachers for a changing world. Jossey-Bass.

Tomlinson, C. A. (2017). How to differentiate instruction in academically diverse classrooms (3rd ed.). ASCD.

Vaughn, S., Roberts, G., Swanson, E. A., Wanzek, J., Fall, A. M., & Stillman-Spisak, S. J. (2020). Improving reading comprehension and social studies knowledge among middle school students with disabilities. Exceptional Children, 86(1), 7-26.

The Primary English Teaching Association Australia (PETAA) is the national professional association supporting primary educators to deliver best-practice and evidence informed English and literacy instruction. A trusted not-for-profit, PETAA provides professional learning, research-based resources, and national advocacy to ensure every child has access to high-quality teaching in reading and writing. petaa.edu.au info@petaa.edu.au 1300 307 382

We thank you for reading this critical report. © 2025 Primary English Teaching Association Australia. Special thanks to PETAA’s Board of Directors and Dr Bronwyn Parkin for their support in compiling this report.

Primary English Teaching Association Australia (PETAA). (2024). National Teaching of Reading Survey 2024.

https://petaa.edu.au/common/Uploaded%20files/Surveys/PETAASurvey_Summary_KeyFindings.pdf

______________________________________________

Now for my document:

Hempenstall, Kerry. (2014). What works? Evidence-based practice in education is complex. Australian Journal of Learning Difficulties. 19(2), 113-127.

DOI:10.1080/19404158.2014.921631

https://www.researchgate.net/publication/272121037_What_works_Evidence-based_practice_in_education_is_complex

Abstract

There is a nascent movement towards evidence-based practice in education in Australia, evident in Federal and State education documents, if not in classrooms. Such a classroom-level outcome would require a number of conditions to be met. One of the critical requirements is that teachers be provided with knowledge and training in practices that have an acceptable evidence base, in other words to know what works. Many reformers pin their hopes on systematic reviews to provide the information. However, it is becoming increasingly apparent that this expectation may not be easily met, especially in the short term. This paper considers some of the recent issues that have muddied the waters.

What is evidence-based practice (EBP) in education? In EBP, the major criterion for acceptance of a teaching programme is that it has reliable, replicable evaluation research to support it.

Reliable replicable research has been defined as objective, valid, scientific studies that: (a) include rigorously defined samples of subjects that are sufficiently large and representative to support the general conclusions drawn; (b) rely on measurements that meet established standards of reliability and validity; (c) test competing theories, where multiple theories exist; (d) are subjected to peer review before their results are published; and (e) discover effective strategies for improving reading skills. (The 1999 Omnibus Appropriations Bill, 1998)

EBP has influenced many professions in recent years. A simple Google search produces over 90,000,000 hits. Among them, in varying degrees of implementation, are pursuits as diverse as medicine, psychology, agriculture, speech pathology, occupational therapy, transport, library and information practice, management, nursing, pharmacy, dentistry and health care.

Teaching has suffered, both as a profession in search of increased community respect and as a force for improving a nation’s social capital, because of its failure to adopt the results of empirical research as the major determinant of its practice. Educational practice might more accurately be described as ‘experience-based’, ‘eminence-based’ or ‘habit-based’ (Law, 2002). There are a number of reasons why this has occurred, among them a science-aversive culture endemic among education policy-makers and teacher education institutions (Hempenstall, 2006). However, there are signs that major shifts are occurring. There have been strong moves in Great Britain and the USA towards EBP in education in recent years. Many public documents now espouse EBP as a preferred approach to educational decision-making. However, while examples of such practice can be found, education systems have been slow to demonstrate reform at the classroom level.

The movement in the USA is likely to be advanced by the edict from the US Government’s Office of Management and Budget (Zient, 2012) that requests the entire Executive Branch to use every available means to promote the use of rigorous evidence in decision-making, programme administration and planning in all facets of government.

Since taking office, the President has emphasized the need to use evidence and rigorous evaluation in budget, management, and policy decisions to make government work effectively. This need has only grown in the current fiscal environment. Where evidence is strong, we should act on it. Where evidence is suggestive, we should consider it. Where evidence is weak, we should build the knowledge to support better decisions in the future. (p. 1)  

In Australia, the National Inquiry into the Teaching of Literacy (2005) asserted that ‘teaching, learning, curriculum and assessment need to be more firmly linked to findings from evidence-based research indicating effective practices, including those that are demonstrably effective for the particular learning needs of individual children’ (p. 9). It recommends a national programme to produce evidence-based guides for effective teaching practice, the first of which is to be on reading. In all, the Report used the term evidence-based 48 times. However, much to the chagrin of the Minister who introduced it (Brendan Nelson), none of the report’s recommendations has since been implemented. ‘Dr Nelson said anybody with more than a passing interest in teaching reading would regret that governments on both sides of politics had failed to implement the Rowe report’ (Ferrari, 2012, para 21).

Federally, the Council of Australian Governments has established a national evidence base of effective literacy and numeracy practice in Australian schools (Department of Education, Employment and Workplace Relations, n.d.). It provides a Standards of Evidence rating scale (Australian National Audit Office, 2012) that is loose, but a step forward towards EBP. It forms part of the National Partnership Agreement on Improving Literacy and Numeracy (Australian National Audit Office, 2012), offering $577.4 million over four years. Its aim is the ‘Identification and implementation of evidence-based interventions which achieve accelerated and sustained improvements in literacy and numeracy outcomes for students, particularly those falling behind’ (p. 5). It represents a significant step towards reform of the basic mechanisms of curriculum choice in Australia. However, at this early stage (2008–2011), no discernible improvement in student NAPLAN results has been noted in funded compared with unfunded schools (Australian National Audit Office, 2012).

If EBP is now being perceived as important, how readily can it be introduced into the classroom? Slavin (2013a) perceives four barriers threatening its success in education: ‘Too few rigorous evaluations of promising programs; inadequate dissemination of evidence of effectiveness; a lack of incentives for localities to implement proven interventions; and insufficient technical assistance for implementing evidence-based interventions with fidelity’ (p. 1).

Obstacles like these have been noted in the past by many writers (Carnine, 1995; Hempenstall, 1996; Marshall, 1993; Stone, 1996). But is this still true in Australia? The Productivity Commission (2012) believes that education departments provide a bad example, in that they do not typically engage in evaluation of their programmes.  

Unfortunately, very little rigorous evaluation of programs and cost-effectiveness analysis is done within the Australian school system. Decisions are made without the best possible understanding of what works in different contexts, or – critically – which programs achieve results at the lowest cost. (sub. 30, p. 8)

In Victoria, a similar criticism was made by the Victorian Auditor-General: ‘DEECD does not identify and monitor the achievement of educational and broader outcomes of students with special learning needs, and therefore does not know how effectively its policy and resource commitment is working’ (2012, p. 4).

Dissemination of research findings in a usable form is also problematic. Most teachers are not trained to discern sound from unsound research designs, nor to develop effective instructional interventions based upon the content of research papers (DEECD, 2012).

In fact, there is evidence that the level of quantitative research preparation has diminished in teacher education programmes over the past 20 years (Lomax, 2004). Hence, at least until there is reform in teacher education, this research base needs to be converted into a form that is accessible to, and usable by, schools.

There is also a paucity of educational research performed in Australia. Whereas the budgets for the provision of health and education services are roughly similar, the funding provided for health research is about 16 times that for educational research (Australian Bureau of Statistics, 2010). Even allowing for this lack, the research that is available, and for which there is reasonable consensus, has largely been ignored. Professor Peter Cuttance, then director of the University of Melbourne’s Centre for Applied Educational Research, was blunt: ‘Policy makers generally take little notice of most of the research that is produced, and teachers take even less notice of it’ (2005, p. 5).

Deciding which programmes are evidence-based

There are thousands of programmes that promise much but have little or no empirical background. Are there any immediate short cuts to discerning the gold from the dross? Those governments that now have moved towards a pivotal role for research in education policy have usually assembled panels of prestigious researchers to peruse the evidence in particular areas, and to report their findings widely (e.g. National Early Literacy Panel, 2008; National Reading Panel, 2000). These bodies have tended to produce recommendations at the broader policy level; for example, phonics is important. However, they have not provided the level of specificity to enable direct implementation in the classroom.

What Works Clearinghouse (WWC) is an important federally funded initiative established in 2002 by the Institute of Education Sciences, U.S. Department of Education (USDOE). Its brief is to provide systematic reviews of education programmes, and their ratings have appeared since 2007. The WWC is a response to the potential of EBP to revolutionise teaching in the classroom. It emphasises randomised control trials as the major criterion for acceptability as high-quality research. This type of large-scale research removes many of the threats to internal and external validity posed by experimental interventions (Slavin, 2013a).

The WWC begins with a set of methodological criteria, and collates a list of studies that have evaluated particular programmes. In the initial screen, they discard studies that do not meet their design expectations, such as many of those studies that have not employed random assignment. Largely for manageability, they usually exclude older studies (often those more than 20 years old). They consider issues of subject attrition, and matched groups (or at least very similar pretest scores), as important acceptability criteria. They then use their analysis criteria to evaluate the evidence for the surviving high-quality studies to produce ratings of the programmes in question. The ratings are positive, potentially positive, mixed, potentially negative and negative (WWC, 2013a).

We review the research on the different programs, products, practices, and policies in education. Then, by focusing on the results from high-quality research, we try to answer the question ‘What works in education?’ Our goal is to provide educators with the information they need to make evidence-based decisions. (WWC, 2013b, p. 1)

Until teachers become more skilled at discerning sound from unsound practice, the hope has been that bodies such as the WWC can perform a sifting process on published implementations to fill the void between policy and practice – to simplify judgements on which programmes and practices have been reliably demonstrated to be effective. WWC is not the only organisation to engage in this systematic sifting process. Other bodies include the Coalition for Evidence-Based Policy (http://www.ed.gov/rschstat/research/pubs/rigorousevid/rigorousevid.pdf), the Best Evidence Encyclopedia (BEE) (http://www.bestevidence.org/), the Promising Practices Network (http://www.promisingpractices.net) and the Florida Center for Reading Research (http://www.fcrr.org/).

Recommendations for practice produced by these bodies have at least the potential to be valuable resources in answering the question ‘what works?’. These sources can provide assistance, but can also be confusing, as they do not all agree on which studies should be included in their analyses, or on how evidence quality should be interpreted. This can lead to markedly differing interpretations. For example, Briggs (2008) noted that when WWC and BEE reviewed the same set of early maths programmes, their ratings had a correlation of only 0.57. Hence a programme may be considered to have moderate or strong research support by some organisations, but the same programme may be reported by another organisation as having insufficient high-quality evidence for it to be recommended. So, criteria for acceptability are not universally agreed upon, even within the systematic review model.
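To see why a correlation of 0.57 still leaves plenty of room for conflicting verdicts, the sketch below computes a Pearson correlation for two invented sets of ratings (a 1–5 scale; the numbers are illustrative, not the actual WWC or BEE data):

```python
import math

def pearson(xs, ys):
    """Plain Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented ratings of six programmes by two review bodies (1 = negative, 5 = positive).
body_a = [5, 4, 2, 4, 1, 3]
body_b = [4, 5, 3, 2, 1, 4]
print(f"r = {pearson(body_a, body_b):.2f}")  # about 0.63 for these numbers
# Despite the moderate overall correlation, the fourth programme is rated 4
# ('supported') by one body and only 2 by the other -- exactly the practical
# disagreement described above.
```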

Before this systematic review process was instituted, the narrative approach to programme analysis dominated for many years. It involved an expert (or panel of experts) performing a literature search for relevant papers on a topic, interpreting them and making recommendations based upon that interpretation. Bias in study selection is clearly a potential threat to this approach, as are issues of transparency. The use of expert panels (e.g. National Reading Panel, 2000) may reduce the extent of this problem, but transparency of selection remains an issue. Meta-analyses also became popular, employing effect size statistics to estimate the overall effects of treatments. This gave the appearance, at least, of more objectivity. However, effect sizes are now known not to be universally comparable across studies: their values are influenced by study characteristics. For example, they tend to be larger for small sample-size studies, for very brief studies and for those employing experimenter-derived assessment devices. Nor are effect sizes independent of wider study parameters, as was once assumed. They require different interpretations depending on the age levels involved in an educational intervention, and they can even require different interpretations depending on the educational domain under study (Lipsey et al., 2012). Thus, when many different effect sizes are assembled in one meta-analysis there are potential confounds. Of course, this issue also presents challenges to systematic reviews.

Based on detailed information for a set of nationally normed achievement tests, the academic developmental trajectory for average students in the United States appears to be one of rapid growth in the first several grades of elementary school, followed by gradually declining gains in later grades. Expressed as effect sizes, the annual gains in the early years are around 1.00, while those in the final grades of high school are 0.20 or less. ... an intervention effect of a given magnitude represents a much larger proportion of normal annual growth for students in higher grades than it does for students in lower grades. ... With respect to student subgroups, it was demonstrated that the gaps on standardized achievement tests range from less than 0.10 standard deviation for gender differences in math performance to almost a full standard deviation for race/ethnicity differences in math and reading. Any given intervention effect size will therefore ‘look’ very different, depending on the gap (or gaps) with which it is compared. (Bloom, Hill, Black, & Lipsey, 2008, p. 29)
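The arithmetic behind that warning is simple enough to make explicit. A minimal sketch, using only the two approximate growth benchmarks quoted above:

```python
# Benchmark a fixed intervention effect against typical annual growth,
# using the approximate figures quoted from Bloom et al. (2008):
# gains of about 1.00 SD per year early on, 0.20 SD or less late in high school.
annual_growth_sd = {"early elementary": 1.00, "final high school grades": 0.20}
intervention_d = 0.20  # the same effect size applied at both stages

for stage, growth in annual_growth_sd.items():
    share = intervention_d / growth
    print(f"{stage}: d = {intervention_d} is {share:.0%} of a typical year's growth")
# early elementary: 20% of a year's growth
# final high school grades: 100% -- the identical d 'looks' five times larger.
```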

The systematic reviews can also produce findings that differ from those of the two methods (narrative and meta-analysis) described above. One example involves research into repeated reading of passages as an approach to enhancing reading fluency. O’Keeffe, Slocum, Burlingame, Snyder, and Bundock (2012) described how the strong support provided by numerous narrative and meta-analytic reviews of the literature contrasted with the systematic review finding of insufficient quality research to produce a supportive recommendation. For example, the National Reading Panel (2000) noted ‘Guided repeated oral reading and repeated reading provide students with practice that substantially improves word recognition, fluency, and – to a lesser extent – reading comprehension’ (Ch. 3, p. 20). When Chard, Ketterlin-Geller, Baker, Doabler, and Apichatabutra (2009) examined the same corpus of studies for a systematic review, many of the studies accepted in previous narrative and meta-analytic reviews did not meet the chosen methodological standards.

So, a stumbling block is revealed: of the many thousands of educational research papers produced, relatively few studies meet the stringent acceptability criteria of systematic reviews, but many more are accepted under the narrative and meta-analysis methods.

In considering where this contrast may lead, O’Keeffe et al. make the point that the systematic reviews may produce false negatives through their rigorous methodology – failing to recommend programmes that are in fact effective. Meanwhile, the narrative and meta-analytic reviews may produce false positives through their less stringent criteria for accepting studies: they may err on the side of recommending programmes that appear effective in low-quality studies, but subsequently prove not to be effective. Spencer, Detrich, and Slocum (2012) and Slocum, Spencer, and Detrich (2012) address this issue by suggesting that the term best available evidence apply to those practices that do not yet have sufficient high-quality studies behind them to earn a recommendation from a systematic review. They argue that relying solely on WWC-style reviews (which they call empirically supported treatment reviews) will be counter-productive: ‘However, if this is the only source of evidence that is considered to be legitimate, most educational decisions will be made without the benefit of evidence’ (p. 135).

Thus, they see a continued role for narrative reviews, meta-analyses and practice guides to offer best available evidence in those situations in which systematic reviews are unhelpful or non-existent.

The major difference, then, between the three types of research syntheses concerns the criteria employed for pre-screening the body of research. It is thus possible that a meta-analysis of, say, studies evaluating a beginning reading programme will include the identical set of studies as a systematic review of that programme, and presumably reach similar conclusions. What is very important (as noted earlier) is that there is sufficient transparency to allow interested educators and researchers to make judgements about the relative value of the research syntheses.

The WWC is the most comprehensive of the systematic review sites, and because it is heavily funded by the US Government, a great deal is invested in its being considered sufficiently trustworthy to achieve its stated goals. It has a sister site that is intended to make the task of selecting effective programmes even easier. The Doing What Works Clearinghouse (DWW) is a website also sponsored by the USDOE. It was established to provide guidance and resources to assist P-12 teachers, schools, districts, states and teacher support staff to implement research-based instructional practice. It was seen as a user-friendly bridge between the research analyses and the practice details. Its web page has recently been discontinued, reportedly for lack of financial resources (Strauss, 2013).

A great deal of money can hinge on a WWC review of a programme, as schools and districts may base purchasing decisions on such information. Additionally, potentially huge numbers of students can be advantaged or disadvantaged by the decision to implement one programme over another. WWC, having been established by the USDOE, has strong face validity, so it is very important to the growth of EBP that it be perceived within education as trustworthy. However, despite the laudable objectives, the hoped-for lighthouse that would illuminate which approaches deserve the status of EBP (or empirically supported treatments) has not yet materialised. The WWC methodology and some individual programme ratings have been criticised on a number of grounds over the last five years (Briggs, 2008; Carter & Wheldall, 2008; Engelmann, 2008; Greene, 2010; McArthur, 2008; Reynolds, Wheldall, & Madelaine, 2009; Slavin, 2008; Stockard, 2008, 2010; Stockard & Wood, 2012, 2013a, 2013b).

Criticisms

It is important that issues of validity and reliability of these systematic reviews are continuously examined, and this process has been gathering momentum. The criticisms have been several: of the criteria for what constitutes acceptable research; of slowness in producing evaluations; of inconsistency in applying standards for what constitutes acceptable research; of the inclusion of studies that have not been peer reviewed; and of a failure to attend to fidelity of implementation issues in the WWC analyses. This latter criticism can be subsumed under a broader criticism of ignoring external validity or generalisation in the reviews.

The focus of syntheses must be on what has worked, that is, programs for which there is evidence of an aggregate effect that is internally valid. I would argue that such evidence, although certainly important, is necessary but not sufficient for those stakeholders enacting educational policies. What the superintendent of a school district wants to know is not so much what has worked but what will work. To be relevant, a good synthesis should give policymakers explicit guidance about program effectiveness that can be tailored to specific educational contexts: When and where will a given program work? For whom will it work? Under what conditions will it work the best? For causal inferences to be truly valid, both causal estimation and generalization should at the very least be given equal weight. (Briggs, 2008, p. 20)  

The point here is that while randomised controlled trials (RCTs) may provide the best bulwark against threats to internal validity, the acceptance of small-scale and brief RCTs in a systematic review creates a strong threat to external validity. Thus, the systematic reviews have their own issues to deal with before they can be unquestioningly accepted as the royal road to truth. Further, it may be quite some time (if ever) before gold-standard research reaches the critical mass needed to make decisions about practice easier.

It has often been noted by methodologists and authors of systematic reviews of research that studies with small sample sizes tend to have much larger, positive effect sizes than do studies with larger sample sizes. ... Much as an emphasis on randomized experiments in program evaluation syntheses is appropriate, there are other methodological factors that may be as important as random assignment, and need to be taken into account in the same way. Sample size is one of these factors. (Slavin & Smith, 2009, p. 502)
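The small-study pattern Slavin and Smith describe is partly a matter of simple sampling variability, which is easy to demonstrate. In the sketch below (assumed parameters, pure illustration), every simulated study draws from the same population with a true effect of d = 0.2; small studies scatter widely around that value, so the small studies that stand out report badly inflated effects:

```python
import math
import random

random.seed(1)
TRUE_D = 0.2  # assumed true effect, in standard-deviation units

def observed_d(n):
    """One simulated two-group study with n students per arm; returns observed d."""
    treat = [random.gauss(TRUE_D, 1) for _ in range(n)]
    ctrl = [random.gauss(0.0, 1) for _ in range(n)]

    def sd(xs):
        m = sum(xs) / len(xs)
        return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

    pooled = math.sqrt((sd(treat) ** 2 + sd(ctrl) ** 2) / 2)
    return (sum(treat) / n - sum(ctrl) / n) / pooled

for n in (20, 200, 2000):
    effects = sorted(observed_d(n) for _ in range(1000))
    print(f"n = {n:4}: observed d ranges {effects[0]:+.2f} to {effects[-1]:+.2f}; "
          f"mean of largest 10% = {sum(effects[900:]) / 100:+.2f}")
# Small studies produce both the most extreme and the most misleading effects;
# if conspicuous results are the ones that get noticed and cited, small studies
# will dominate them.
```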

It has also been queried whether educational research can ever have randomised controlled trials as the norm, however desirable that may appear to be. One issue is that the high cost of such research is not matched by the available funding. For example, the USDOE spends about $80 million annually on educational research, whereas the US Department of Health and Human Services provides about $33 billion for health research. If studies are to be true experiments and also of large scale, the cost is enormous.

Most quantitative researchers would agree that large-scale randomized studies are preferable, but in the real world such studies done well can cost a lot – more than $10 million per study in some cases. That may be chump change in medicine, but in education, we can’t afford many such studies. (Slavin, 2013b)  

Another issue concerns the limitations on methodological purity in educational research. Students in schools cannot routinely be randomly selected for intervention, as can occur in other settings, such as individual therapies in medicine and psychology. However, with sufficient commitment, studies employing whole classes as the unit of study can be performed. While RCTs may not form the greater part of educational research, that is not to say that they do not have a valuable role to play. Of course, developing an appreciation of the need for RCTs in a profession that has little history of research commitment is problematic. A significant change of perspective would be required among policy-makers and education bureaucrats as a first step.

Some of the criticisms of WWC have been directed at the acceptance or rejection of specific programmes. For example, Briggs (2008) noted that Saxon Math was given the highest rating based upon one small randomised study, while four large matched studies showing negligible effects were ignored. The phonemic awareness software Daisy Quest was evaluated with RCT methodology, but in a brief (less than 5 hours) intervention. It was recommended despite the control groups in the supporting studies receiving no input, and despite the post-tests using material from the curriculum unseen by the controls. McArthur (2008) raised a number of areas of weakness in the rating of Fast ForWord (a reading intervention focussing on auditory skills), including studies that were not peer reviewed, an important peer-reviewed study that was not included, the use of an inappropriate outcome measure and a reporting error. She also called for experts in the various fields to review the WWC evaluations prior to publication, a point also made by Wolery (2013) and Lloyd (2007).

Part of the reason for these outcomes may be the work model that the WWC uses:

The lead researchers conceptualize the question and lead the effort, but most of the actual analysis is conducted by early career research assistants who are to apply a set of procedures. They may faithfully apply those procedures without sufficient contextual knowledge to capture some of the nuance inherent in virtually any body of literature. (Lloyd, 2007)

Small sample sizes, short-duration interventions, neglecting studies outside a set time frame and allowing the sole use of experimenter-developed assessments are all issues that have concerned critics of WWC. As Slocum et al. (2012) pointed out, regardless of the method used to analyse research, it is possible to do it well or poorly. The ‘no approach is foolproof’ caveat also applies to systematic reviews.

Reading Recovery and Reading Mastery

A recent critique by Stockard and Wood (2013a) focussed upon the July 2013 WWC reviews of two interventions for young students: Reading Recovery (RR) and the Direct Instruction programme Reading Mastery (RM). (It should be noted that Stockard and Wood are associated with the National Institute for Direct Instruction.)

RM is a six-level (P-6) Direct Instruction basal reading program, implemented in small-group format for about 40 minutes per day. It is also employed for struggling students as a remedial intervention. It has had five revisions since its first publication (as Distar Reading) in 1968. WWC found that RM had no discernible effects on students with learning disabilities. Stockard and Wood mount several arguments against this finding, one being that it is not consistent with the results of meta-analyses, such as that of Hattie (2009).

He summarized the results of four meta-analyses that included DI, incorporating 304 studies, 597 effects and over 42,000 students. He found that the average effect size associated with DI was .59 and noted that the positive results were ‘similar for regular (d = .99) and special education and lower ability students (d = 0.86) [such as those that would be classified as having learning disabilities], ... [and] similar for the more low-level word-attack (d = .64) and also for high-level comprehension (d = .54)’. (Hattie, 2009, pp. 206–207, as cited in Stockard & Wood, 2013a)

A related concern was the rejection by WWC of 21 of the 22 studies assessed. However, Stockard and Wood assert that there are far more studies available than WWC found, including two large, well-designed, federally funded studies: one ‘included random assignment of students to treatment, a large number of assessments, and follow-up of students for several years. ... [the other] used sophisticated statistical analyses to examine growth in learning over time in a variety of schools’ (Stockard & Wood, 2013a, p. 4).

They accuse WWC of being selective of studies for reasons other than study quality. One of the RM authors (Engelmann, 2008) asserted that there were more than 90 studies available for review: ‘Jean Stockard identified 54 studies that occurred no earlier than 1985, and 38 earlier studies’ (p. 4). Further, WWC established an arbitrary time threshold of 1985, and no studies prior to that time were considered. This makes the review task less onerous, but risks ignoring well-designed studies. The suggestion that either the reading process or student characteristics have changed so much over that period as to render earlier research valueless challenges credulity.

This potential for discrepancy of findings between the different approaches to evidence review has been discussed earlier as a general problem for all education research reviews. Each side can mount an argument for their approach.

However, actual errors in interpretation of results are more egregious.

An article by Cooke, Gibbs, Campbell, and Shalvis (2004) was found by the WWC to meet their evidence standards ‘without reservations’ in both 2012 and 2013. However, in both reports, the analysis of the results is deeply flawed and directly contradicts the authors’ conclusions. Cooke et al. compared the achievement of 30 students using RM with those who used Horizons, a slight modification of the RM programme developed by the author of RM and his associates. They found that students in both RM and Horizons had similar achievement gains over time, and that these gains were significantly greater than those in state and national samples. In other words, they concluded that both of the programmes were effective and that the slight modifications in Horizons had not altered the effectiveness of RM documented by other authors. The WWC ignored the comparison to national norms and instead focused on the lack of differences between the two programmes. They concluded, in both the 2012 and the 2013 analyses, that the lack of difference in results between RM and its modified version, Horizons, indicated that there was no evidence that RM was effective. ‘The summary judgment, included in the body of the report, is “When compared to another Direct Instruction intervention, Horizons, Reading Mastery was found to have no discernible effects on alphabetics and reading comprehension for students with learning disabilities”. The fact that the students had significantly greater gains than the national norms is not mentioned at any point’ (Stockard & Wood, 2013a, p. 3).

RR (a short-term reading intervention for students in Year 1) was evaluated in 2008 and again in 2013 by WWC, with different results. In 2008, of the approximately 100 studies found, 5 met the criteria for selection, whereas in 2013, of the 200 studies located, only 3 met the criteria. On this basis, WWC declared that RR showed positive effects on general reading achievement. This finding was contrary to that of a group of 31 highly qualified reading researchers (Baker et al., 2002), who considered that ‘Reading Recovery is not successful with its targeted student population, the lowest performing students’ (para 1). There have been numerous similarly negative reviews of RR, for example those of Adams et al. (2013), Center, Freeman, and Robertson (2001), Chapman, Greaney, and Tunmer (2007), Chapman and Tunmer (2011), Chapman, Tunmer, and Prochnow (2001), Tunmer, Chapman, Greaney, Prochnow, and Arrow (2013), Reynolds and Wheldall (2007) and Reynolds et al. (2009).

Stockard and Wood assert that none of these three accepted RR studies controlled for the Hawthorne Effect, a flaw that should have rendered the studies unacceptable. In contrast, a study by Iversen and Tunmer (1993) was accepted in 2008, but rejected in 2013, despite it being a study that had controlled for the Hawthorne Effect. In 2008, the WWC findings of a positive effect were inconsistent with those of the study authors. In fact, WWC reported only part of the study – a comparison of traditional RR with a control group receiving no intervention. In the other part of the published study, RR was less effective than the modified version. However, in the Horizons/RM study described earlier, a similar set of circumstances led to a ‘no discernible effects’ verdict. There is an inconsistency that is difficult to fathom, and perhaps that is partly why the Iversen and Tunmer study was rejected in the 2013 evaluation.

Apart from documenting their concerns about the reviews of RM and RR, Stockard and Wood named several general areas of concern with the WWC model: how studies were selected or rejected, mistakes made in the analysis of studies, lack of transparency in the review process and lack of competent quality control in the reviews. They produced a number of recommendations, including: removing the reviews of both RM and RR from the WWC site; making decision-making processes within WWC more transparent, so that such errors and inappropriate conclusions become less likely; improving the training and expertise of staff engaged in the evaluation of different areas of educational research; establishing a cycle for reports to be revisited in the light of new research and criticisms of previous reports; instituting peer review of all reports prior to release; having at least two reviewers independently make recommendations on the inclusion or exclusion of studies; and providing draft copies of reports to the authors of the relevant programmes to ensure accuracy.

Stockard and Wood (2013b) argued along similar lines that WWC made errors in three areas of their evaluations of RM and RR: in establishing which studies to include; in choosing which studies deserved follow-up; and in analysing and reporting the results. Through a comparative case study approach to analysing both WWC reports, they concluded that the errors were significant and systematic in both the RM and RR analyses.

They noted that the WWC trawl for studies of RM (for the initial screen) missed at least 100 relevant research studies, while it included many that were not efficacy studies at all, such as testimonials. This is in sharp contrast to the Hattie (2009) meta-analysis that included 304 studies, 597 effects and over 42,000 students (however, not all were RM studies). The WWC reviewed RR in 2008 (100 studies) and 2013 (200 studies, of which about 50 were published before the 2008 review). Thus, Stockard and Wood argue there are shortcomings in the WWC initial selection of studies for screening.

Stockard and Wood (2013b) also pointed to errors in the methods used to discern design acceptability. For example, a large field study involving thousands of students and hundreds of teachers (Carlson & Francis, 2002) was rejected because teacher training and monitoring in the Direct Instruction teaching techniques and behaviour management strategies were provided. WWC considered such training a confounding influence. Yet implementation fidelity has always been an integral component of DI programmes, and is now widely recognised as a critical element of any evidence-based intervention in many fields.

By contrast, WWC was sanguine about the inability to separate the effects of the RR curriculum from the 1:1 tutoring that is a necessary component of the programme. This is clearly a confounding factor, and to overcome the confound, it would be necessary for the RR programme to be compared with another 1:1 programme as a control. In 2008, the Iversen and Tunmer (1993) study was considered acceptable, but it was rejected in 2013. Its design did remove the 1:1 tutoring confound by having a no-treatment group, a standard RR group and a modified RR group. The modified programme, which included some explicit phonics instruction, was more efficient than the traditional RR programme. The studies that WWC did include were simple experimental comparisons of 1:1 RR with a normal classroom instruction control group.

WWC appears to have been inconsistent in applying the criterion of demanding equivalence of groups at pretest. This is a problematic criterion for much educational research, as group selection is so often necessarily based upon convenience. Most researchers agree on the desirability of such a match, while accepting that some variation is reasonably well managed by statistical procedures. In the Iversen and Tunmer case, the average difference in pretest scores was within the 25% of pooled variation test that WWC has recently incorporated into its criteria (sketched below).
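For readers unfamiliar with that test, the check itself is straightforward: the pretest difference between groups is divided by the pooled standard deviation, and the resulting baseline gap (in SD units) must not exceed 0.25. A minimal sketch with invented summary statistics (not the Iversen and Tunmer data):

```python
import math

def pooled_sd(s1, n1, s2, n2):
    """Pooled standard deviation of two groups (standard two-sample formula)."""
    return math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))

def baseline_gap(m1, s1, n1, m2, s2, n2):
    """Pretest group difference expressed in pooled-SD units."""
    return abs(m1 - m2) / pooled_sd(s1, n1, s2, n2)

# Hypothetical pretest summaries for two groups of beginning readers.
gap = baseline_gap(m1=42.0, s1=8.0, n1=32, m2=40.5, s2=7.5, n2=32)
print(f"baseline gap = {gap:.2f} pooled SDs -> "
      f"{'within' if gap <= 0.25 else 'beyond'} the 0.25 threshold")
```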

Thus, their rejection of the study was erroneous on both grounds. Once WWC has whittled down its screening list, there are relatively few studies on which to make a judgement. For example, only 3 of 200 RR studies, and only 8 of a similar number of RM studies, made the cut in 2013. Making judgements across such a limited number of studies is fraught, and errors at this stage are particularly egregious.

In 2012, the Herrera, Logan, Cooker, Morris, and Lyman (1997) RM study with learning disabled students was accepted for the WWC review. It assigned students to an RM group or to an enhanced RM group (extra instructional time). The enhanced group made greater progress than the standard group, which led WWC to determine that the standard RM programme had negative effects on student reading. Such a finding is simply illogical: if an enhanced version of a programme outperforms the standard version, it does not follow that the standard version has no (or a negative) effect. The study was then removed from the WWC list in 2013.

In both the 2012 and 2013 reviews, the Cooke et al. (2004) RM study with learning disabled students was found to meet WWC standards. The comparison was between two Direct Instruction programmes: RM and Horizons (a variant of RM). Reading gains were similar, and greater than those in state and national samples (which did not receive RM or Horizons). The WWC conclusion was that RM was ineffective, the same error of logic noted in the Herrera et al. study.

 

Stockard and Wood (2013b) conclude:

The errors that cause these flawed conclusions resulted from problems at each stage of the review process. ... The errors we document could result from a variety of sources, such as poor training or inadequate oversight. Another possibility, however, deserves further attention. Our analysis focused on curricula that represent sharply differing approaches to early reading instruction. It is reasonable to ask if the systematic pattern of mistakes we have documented was influenced by these controversies. Certainly the conclusions presented by the WWC with regard to these two curricula have failed to provide ‘accurate information on education research’ (WWC, 2013a), systematically promoting ineffective curricula and denigrating effective approaches. It is students and the society as a whole that are the true losers in this process, and we urge the educational research community to push for changes that would correct the errors and embody more transparent and appropriate procedures. (pp. 15–16)

Interestingly, early in 2013, the WWC agreed to reconsider their policies and procedures, and requested that interested groups/individuals make submissions. No further announcement has been forthcoming.

The future

So, where does that leave us? Several perspectives have been put forward that are worthy of follow-up.

A single study involving a small number of schools or classes may not be conclusive in itself, but many such studies, preferably done by many researchers in a variety of locations, can add some confidence that a programme’s effects are valid (Slavin, 2003). If one obtains similar positive benefits from an intervention across different settings and personnel, there is added reason to prioritise the intervention for a large gold-standard study. There is a huge body of data out there that is no longer considered fit for human consumption, and it seems a waste that there are not currently analysis methods capable of making use of these studies. There is merit in investigating further the recommendations of Spencer et al. (2012) and Slocum et al. (2012) to add a best available evidence classification for those practices with apparently promising effects demonstrated in narrative and meta-analytic reviews, but which do not yet have sufficient high-quality studies to earn a recommendation from a systematic review.

O’Keeffe et al. (2012) recognise that the current system needs improvement, but have a sense of optimism:

Empirically supported treatment is still a relatively new innovation in education and the methods for conducting effective reviews to identify ESTs are in their infancy. Based on our experience in comparing these review systems we believe that review methods can and should continue to develop. There is still a great need to build review methods that can take advantage of a larger range of studies yet can provide recommendations in which practitioners can place a high degree of confidence. (p. 362)

In contrast, Greene (2010) views the whole review enterprise with a more jaundiced eye, and urges consumers to rely upon their own resources:

We have no alternative to sorting through the evidence and trying to figure these things out ourselves. We may rely upon the expertise of others in helping us sort out competing claims, but we should always do so with caution, since those experts may be mistaken or even deceptive. (Greene, 2010, para 15)

As a partial response to this dilemma, attention is now being paid to single case research as a possible valid and workable adjunct to RCTs in attempting to document what works in education settings.

The addition of statistical models for analysis of single-case research, especially measurement of effect size, offers significant potential for increasing the use of single-case research in documentation of empirically-supported treatments (Parker et al., 2007; Van den Noortgate & Onghena, 2003). (Horner, Swaminathan, Sugai, & Smolkowski, 2012, p. 271)
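To give a flavour of what such a statistic looks like, the sketch below computes Non-overlap of All Pairs (NAP), one of the non-overlap effect sizes proposed in this single-case literature, on invented AB-design data:

```python
def nap(baseline, treatment):
    """Non-overlap of All Pairs (NAP): the share of (baseline, treatment) pairs
    in which the treatment observation exceeds the baseline one (ties count 0.5).
    One of several non-overlap effect sizes proposed for single-case designs."""
    pairs = [(b, t) for b in baseline for t in treatment]
    score = sum(1.0 if t > b else 0.5 if t == b else 0.0 for b, t in pairs)
    return score / len(pairs)

# Invented AB-design data: words read correctly per minute for one student.
baseline = [12, 14, 13, 15, 14]
treatment = [18, 21, 19, 24, 23, 25]
print(f"NAP = {nap(baseline, treatment):.2f}")  # 1.00 = complete non-overlap
```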

In recognition of this potential, WWC released a document on single-case designs, providing initial WWC standards for assessing such single-case studies (Kratochwill et al., 2010). In a generally supportive response, Wolery (2013) offered a number of suggestions for improving this initial attempt.

Translating research into classroom practice

A continuing obstacle noted by Slavin (2013a) involves teacher training. How can preservice training be made consistent with the findings of empirical educational research? How can even research-sensitive education policies be translated into effective classroom practice? The inertia of education systems is enormous, and at times has acted as a bulwark against the excesses of some past education policies. Currently, it is acting against educational reform that would aid struggling students in particular. However, resistance to beneficial change has been apparent in other professions that have embraced EBP, such as medicine and psychology. These professions did, for a long time, consider themselves more akin to guilds than to scientific enterprises. Yet in the last 20 years their fundamental perspective and practices have changed (and continue to change). Over time, as new generations of practitioners are trained, EBP has slowly become the norm. Again, preservice training must be addressed, as it currently represents a bottleneck to reform.

Given what has transpired thus far in the move towards EBP, it would seem premature for teachers, schools and education policy-makers to base their decision-making on the results of systematic reviews such as those from WWC. The question ‘Are there any immediate shortcuts to discerning the gold from the dross?’ appears to remain unresolved for the immediate future. This is an unfortunate situation, as there appears to be increasing acknowledgement of EBP as a potential new direction for education in Australia. Will the new enthusiasm within the federal and state governments and by some education departments appear in classrooms around the nation as more effective instruction? Or will it, like a solar storm, simply fade into the background as yet another bright but brief phenomenon?

Writer above:

Hempenstall, Kerry. (2014). What works? Evidence-based practice in education is complex. Australian Journal of Learning Difficulties. 19(2), 113-127.

DOI:10.1080/19404158.2014.921631

https://www.researchgate.net/publication/272121037_What_works_Evidence-based_practice_in_education_is_complex

_________________________________________

References to above

Adams, M. J., Beck, I., Brady, S., Chapman, J., Chard, D., Connor, C. M., ... Williams, J. (2013). Guessing: Why the reading wars won’t end (Letter to The Washington Post from 41 researchers concerning the NCTQ report on teacher training). Retrieved from http://www.washingtonpost.com/blogs/answer-sheet/wp/2013/09/17/another-blast-in-the-reading-wars/

Australian Bureau of Statistics. (2010). ABS research and experimental development, all sector summary, Australia, 2008–09. Retrieved from http://www.abs.gov.au/AUSSTATS/abs@.nsf/DetailsPage/8112.02008-09?OpenDocument

Australian National Audit Office. (2012). National partnership agreement on literacy and numeracy. Retrieved from http://www.anao.gov.au/Publications/Audit-Reports/2011-2012/National-Partnership-Agreement-on-Literacy-and-Numeracy/Audit-brochure

Baker, S., Berninger, V. W., Bruck, M., Chapman, J., Eden, G., Elbaum, B., ... Wolf, M. (2002). Evidence-based research on Reading Recovery (Letter from 31 researchers). Retrieved from http://www.wrightslaw.com/info/read.rr.ltr.experts.htm

Bloom, H. S., Hill, C. J., Black, A. R., & Lipsey, M. W. (2008). Performance trajectories and performance gaps as achievement effect-size benchmarks for educational interventions. MDRC Working Papers on Research Methodology. Retrieved from http://mdrc.org/sites/default/files/full_473.pdf

Briggs, D. C. (2008). Synthesizing causal inferences. Educational Researcher, 37, 15–22.

Carlson, C. D., & Francis, D. J. (2002). Increasing the reading achievement of at-risk children through direct instruction: Evaluation of the Rodeo Institute for Teacher Excellence (RITE). Journal of Education for Students Placed At Risk, 7, 141–166.

Carnine, D. (1995). Trustworthiness, usability, and accessibility of educational research. Journal of Behavioral Education, 5, 251–258.

Carter, M., & Wheldall, K. (2008). Why can’t a teacher be more like a scientist? Science, pseudoscience and the art of teaching. Australasian Journal of Special Education, 32, 5–21.

Center, Y., Freeman, L., & Robertson, G. (2001). The relative effect of a code-oriented and a meaning-oriented early literacy program on regular and low progress Australian students in year 1 classrooms which implement Reading Recovery. International Journal of Disability, Development, and Education, 48, 207–232.

Chapman, J. W., Greaney, K. T., & Tunmer, W. E. (2007). How well is Reading Recovery really working in New Zealand? New Zealand Journal of Educational Studies, 42, 17–29.

Chapman, J. W., & Tunmer, W. E. (2011). Reading Recovery: Does it work? Perspectives on Language and Literacy, 21, 21–24.

Chapman, J. W., Tunmer, W. E., & Prochnow, J. E. (2001). Does success in the Reading Recovery program depend on developing proficiency in phonological skills? A longitudinal study in a meaning-based instructional context. Scientific Studies of Reading, 5, 141–176.

Chard, D. J., Ketterlin-Geller, L. R., Baker, S. K., Doabler, C., & Apichatabutra, C. (2009). Repeated reading interventions for students with learning disabilities: Status of the evidence. Exceptional Children, 75, 263–281.

Cooke, N. L., Gibbs, S. L., Campbell, M. L., & Shalvis, S. L. (2004). A comparison of Reading Mastery Fast Cycle and Horizons Fast Track A-B on the reading achievement of students with mild disabilities. Journal of Direct Instruction, 4, 139–151.

Cuttance, P. (2005, July 5). Education research ‘irrelevant’. The Age, p. 5.

DEECD. (2012, June). New directions for school leadership and the teaching profession: Discussion paper. Department of Education and Early Childhood Development. Retrieved from http://www.eduweb.vic.gov.au/edulibrary/public/commrel/about/teachingprofession.pdf

Department of Education, Employment and Workplace Relations. (n.d.). Standards of evidence for submissions for the teach, learn, share: The national literacy and numeracy evidence base. Retrieved from http://www.teachlearnshare.gov.au/Static/StandardsOfEvidenceForPublicationFinal.pdf

Engelmann, S. (2008). Machinations of What Works Clearinghouse. Retrieved from http://zigsite.com/PDFs/MachinationsWWC(V4).pdf

Ferrari, J. (2012, December 12). Lack of reform slammed: A decade of lost action on literacy. Weekend Australian. Retrieved from http://www.theaustralian.com.au/national-affairs/a-decade-of-lost-action-on-literacy/story-fn59niix-1226542150781

Greene, J. P. (2010). What doesn’t work clearinghouse. Education Next. Retrieved from http://educationnext.org/what-doesnt-work-clearinghouse/

Hattie, J. A. C. (2009). Visible learning: A synthesis of over 800 meta-analyses relating to achievement. London: Routledge.

Hempenstall, K. (1996). The gulf between educational research and policy: The example of Direct Instruction and whole language. Behaviour Change, 13, 33–46.

Hempenstall, K. (2006). What does evidence-based practice in education mean? Australian Journal of Learning Disabilities, 11, 83–92.

Herrera, J. A., Logan, C. H., Cooker, P. G., Morris, D. P., & Lyman, D. E. (1997). Phonological awareness and phonetic-graphic conversion: A study of the effects of two intervention paradigms with learning disabled children. Learning disability or learning difference? Reading Improvement, 34, 71–89.

Horner, R. H., Swaminathan, K., Sugai, G., & Smolkowski, K. (2012). Considerations for the systematic analysis and use of single-case research. Education and Treatment of Children, 35, 269–290.

Iversen, S., & Tunmer, W. E. (1993). Phonological processing skills and the Reading Recovery program. Journal of Educational Psychology, 85, 112–126.

Kratochwill, T. R., Hitchcock, J., Horner, R. H., Levin, J. R., Odom, S. L., Rindskopf, D. M., & Shadish, W. R. (2010). Single-case designs technical documentation. Retrieved from http://ies.ed.gov/ncee/wwc/pdf/wwc_scd.pdf

Law, M. (2002). Evidence-based rehabilitation: A guide to practice. Thorofare, NJ: Slack.

Lipsey, M. W., Puzio, K., Yun, C., Hebert, M. A., Steinka-Fry, K., Cole, M. W., Roberts, M., Anthony, K. S., & Busick, M. D. (2012). Translating the statistical representation of the effects of education interventions into more readily interpretable forms (NCSER 2013-3000). Washington, DC: U.S. Government Printing Office. Retrieved from http://ies.ed.gov/ncser/pubs/20133000/

Lloyd, J. W. (2007, May 29). WWC report on Direct Instruction. Spedpro Forum. Retrieved from spedpro@virginia.edu

Lomax, R. G. (2004). Whither the future of quantitative literacy research? Reading Research Quarterly, 39, 107–112.

Marshall, J. (1993). Why Johnny can’t teach. Reason, 25, 102–106.

McArthur, G. (2008). Does What Works Clearinghouse work? A brief review of Fast ForWord. Australasian Journal of Special Education, 32, 101–107.

National Early Literacy Panel. (2008). Executive summary: Developing early literacy: Report of the National Early Literacy Panel. Jessup, MD: National Institute for Literacy.

National Inquiry into the Teaching of Literacy. (2005). Teaching reading: National inquiry into the teaching of literacy. Canberra: Department of Education, Science, and Training.

National Reading Panel. (2000). Report of the National Reading Panel: Teaching children to read. Bethesda, MD: National Institute of Child Health and Human Development.

O’Keeffe, B. V., Slocum, T. A., Burlingame, C., Snyder, K., & Bundock, K. (2012). Comparing results of systematic reviews: Parallel reviews of research on repeated reading. Education & Treatment of Children, 35, 333–366.

Productivity Commission. (2012). Schools workforce (Research report). Canberra. Retrieved from http://www.pc.gov.au/projects/study/education-workforce/schools/report

Reynolds, M., & Wheldall, K. (2007). Reading Recovery 20 years down the track: Looking forward, looking back. International Journal of Disability, Development and Education, 54, 199–223.

Reynolds, M., Wheldall, K., & Madelaine, A. (2009). The devil is in the detail regarding the efficacy of Reading Recovery: A rejoinder to Schwartz, Hobsbaum, Briggs, and Scull. International Journal of Disability, Development and Education, 56, 17–35.

Slavin, R. E. (2003). A reader’s guide to scientifically based research. Educational Leadership, 60, 12–16. Retrieved from http://www.ascd.org/publications/educational-leadership/feb03/vol60/num05/A-Reader%27s-Guide-to-Scientifically-Based-Research.aspx

Slavin, R. E. (2008). Evidence-based reform in education: Which evidence counts? Educational Researcher, 37, 47–50.

Slavin, R. E. (2013a). Overcoming four barriers to evidence-based education. Education Week, 32, 1–2. Retrieved from http://www.edweek.org/ew/articles/2013/05/01/30slavin.h32.html

Slavin, R. E. (2013b). How to do lots of high-quality educational evaluations for peanuts. Retrieved from http://www.huffingtonpost.com/robert-e-slavin/how-to-do-lots-of-high-qu_b_3953209.html

Slavin, R. E., & Smith, D. (2009). The relationship between sample sizes and effect sizes in systematic reviews in education. Educational Evaluation and Policy Analysis, 31, 500–506. Retrieved from http://www.bestevidence.org/methods/eff_sample_size.htm

Slocum, T. A., Spencer, T. D., & Detrich, R. (2012). Best available evidence: Three complementary approaches. Education and Treatment of Children, 35, 153–181.

Spencer, T. D., Detrich, R., & Slocum, T. A. (2012). Evidence-based practice: A framework for making effective decisions. Education & Treatment of Children, 35, 127–151.

Stockard, J. (2008). The What Works Clearinghouse beginning reading reports and rating of Reading Mastery: An evaluation and comment (Technical Report 2008-04). National Institute for Direct Instruction. Retrieved from http://www.nifdi.org/research/reviews-of-di/what-works-clearinghouse

Stockard, J. (2010). An analysis of the fidelity implementation policies of the What Works Clearinghouse. Current Issues in Education, 13. Retrieved from http://cie.asu.edu/

Stockard, J., & Wood, T. W. (2012). Reading Mastery and learning disabled students: A comment on the What Works Clearinghouse review. National Institute for Direct Instruction. Retrieved from http://www.nifdi.org/documents-library/doc_download/245-response-to-wwc-rm-and-ld-july2012

Stockard, J., & Wood, T. W. (2013a). The WWC review process: An analysis of errors in two recent reports (Technical Report 2013-4). Office of Research and Evaluation, National Institute for Direct Instruction. Retrieved from http://www.nifdi.org/documents-library/doc_download/283-technical-report-2013-4-wwc

Stockard, J., & Wood, T. W. (2013b, September 9). Does the What Works Clearinghouse work? Office of Research and Evaluation, National Institute for Direct Instruction. Retrieved from http://www.nifdi.org/what-works-clearinghouse

Stone, J. E. (1996, April 23). Developmentalism: An obscure but pervasive restriction on educational improvement. Education Policy Analysis Archives.

Strauss, V. (2013, September 24). Education department suspends ‘doing what works’ website. The Washington Post. Retrieved from http://www.washingtonpost.com/blogs/answer-sheet/wp/2013/09/24/education-department-suspends-doing-what-works-website/

The 1999 Omnibus Appropriations Bill. (1998). The reading excellence act, pp. 956–1007. Retrieved from http://www.gpo.gov/fdsys/pkg/BILLS-105s1596is/html/BILLS-105s1596is.htm

Tunmer, W. E., Chapman, J. W., Greaney, K. T., Prochnow, J. E., & Arrow, A. W. (2013). Why the New Zealand national literacy strategy has failed and what can be done about it: Evidence from the Progress in International Reading Literacy Study (PIRLS) 2011 and Reading Recovery monitoring reports. Massey University Institute of Education. Retrieved from http://www.massey.ac.nz/massey/about-massey/news/article.cfm?mnarticle_uuid=F287725F-9E6F-011D-BE0E-F34366B8376A

Victorian Auditor-General. (2012). Programs for students with special learning needs: Audit summary. Retrieved from http://www.audit.vic.gov.au/publications/20120829-Special-Learning-Need/20120829-Special-Learning-Need.rtf

Wolery, M. (2013). A commentary: Single-case design technical document of the What Works Clearinghouse. Remedial and Special Education, 34, 39–43.

WWC (What Works Clearinghouse). (2013a). WWC procedures and standards handbook (Version 3.0). Washington, DC: Institute of Education Sciences. Retrieved from http://ies.ed.gov/ncee/wwc/documentsum.aspx?sid=19

WWC. (2013b). Evidence for what works in education. Washington, DC: Institute of Education Sciences, U.S. Department of Education.

Zient, J. D. (2012). Use of evidence and evaluation in the 2014 budget: Memorandum to the heads of executive departments and agencies. Executive Office of the President, Office of Management and Budget. Retrieved from http://www.whitehouse.gov/sites/default/files/omb/memoranda/2012/m-12-14.pdf

Le Finis!

 
