What does evidence-based practice in education mean? (2006)
Hempenstall, K. (2006). What does evidence-based practice in education mean? Australian Journal of Learning Disabilities, 11(2), 83-92.
This document contains the original What does evidence-based practice in education mean? article. Why then start another document? I decided to take into account more recent work, and I have chosen only findings published in the years 2020 to 2025. The idea was to get some sense of how the acceptance of evidence-based practice may have changed since the original article, which drew on many older documents and practitioners. So, first is my article below, then come selections from recent researchers.
Hempenstall, K. (2006). What does evidence-based practice in education mean? Australian Journal of Learning Disabilities, 11(2), 83-92.
Abstract
“Teaching has suffered both as a profession in search of community respect and as a force for improving the social capital of Australia, because of its failure to adopt the results of empirical research as the major determinant of its practice. There are a number of reasons why this has occurred, among them a science-aversive culture endemic among education policymakers and teacher education faculties. There are signs that change may be afoot.
The National Inquiry into the Teaching of Literacy has pointed to, and urged us to follow, a direction similar to that taken recently in Great Britain and the USA towards evidence-based practice. Acknowledging the importance of teacher education, the National Institute for Quality Teaching and School Leadership began a process for establishing national accreditation of pre-service teacher education. Two problems do require attention. The generally low quality of much educational research in the past has made the process of evaluating the evidence difficult, particularly for those teachers who lack the training to discriminate sound from unsound research designs. Fortunately, there are a number of august bodies that have performed the sifting process to simplify judging the value of research on important educational issues.”
What does evidence-based practice in education mean?

Teachers are coming under increasing media fire lately: Too many students are failing. Current teachers are not sufficiently well trained to teach reading. Our bright young people are not entering the teaching profession. What does that imply about those who are teachers? Are current teachers inadequate to the task entrusted to them? Surely, teaching isn’t rocket science. No, it’s much more important than rocket science. Australia’s future is not critically dependent upon rockets, but is so upon the next generation of students.
So, how should we respond as a nation? Education has a history of regularly adopting new ideas, but it has done so without the wide-scale assessment and scientific research that is necessary to distinguish effective from ineffective reforms. This absence of a scientific perspective has precluded systematic improvement in the education system, and it has impeded growth in the teaching profession for a long time (Carnine, 1995a; Hempenstall, 1996; Marshall, 1993; Stone, 1996). Some years ago in Australia, Maggs and White (1982) wrote despairingly:
"Few professionals are more steeped in mythology and less open to empirical findings than are teachers" (p. 131). Since that time, a consensus has developed among empirical researchers about a number of effectiveness issues in education, and a great deal of attention (Gersten, Chard, & Baker, 2000) is now directed at means by which these research findings can reach fruition in improved outcomes for students in classrooms. Carnine (2000) noted that education continues to be impervious to research on effective practices, and he explored differences between education and other professions, such as medicine, that are strongly wedded to research as the major practice informant.
Evidence-based medicine became well known during the 1990s. It enables practitioners to gain access to knowledge of the effectiveness and risks of different interventions, using reliable estimates of benefit and harm as a guide to practice. There is strong support within the medical profession for this direction, because it offers a constantly improving system that provides better health outcomes for their patients.
Thus, increased attention is being paid to research findings by medical practitioners in their dealings with patients and their medical conditions. Practitioners have organisations, such as Medline (http://medline.cos.com) and the Cochrane Collaboration (www.cochrane.org), that perform the role of examining research, employing criteria for what constitutes methodologically acceptable studies. They then interpret the findings and provide a summary of the current status of various treatments for various medical conditions. Thus, practitioners have the option of accepting pre-digested interpretations of the research or of performing their own examinations. This latter option presumes that they have the time and expertise to discern high quality from lesser research.
Their training becomes a determinant of whether this latter is likely to occur. In a parallel initiative during the 1990s, the American Psychological Association (Chambless & Ollendick, 2001) introduced the term empirically supported treatments as a means of highlighting differential psychotherapy effectiveness. Prior to that time, many psychologists saw themselves as developing a craft in which competence arises through a combination of personal qualities, intuition, and experience. The result was extreme variability of effect among practitioners.
The idea was to devise a means of rating therapies for various psychological problems, and for practitioners to use these ratings as a guide to practice. The criteria for a treatment to be considered well established included efficacy through two controlled clinical outcome studies or a large series of controlled single case design studies, the availability of treatment manuals to ensure treatment fidelity, and the provision of clearly specified client characteristics. A second level involved criteria for probably efficacious treatments. These criteria required fewer studies, and/or a lesser standard of rigor. The third category comprised experimental treatments, those without sufficient evidence to achieve probably efficacious status. The American Psychological Association’s approach to empirically supported treatments could provide a model adaptable to the needs of education. There are great potential advantages to the education system when perennial questions are answered. What reading approach is most likely to evoke strong reading growth?
Should "social promotion" be used or should retentions be increased? Would smaller class sizes make a difference? Should summer school programs be provided to struggling students? Should kindergarten be full day? What are the most effective means of providing remediation to children who are falling behind? Even in psychology and medicine, however, it should be noted that 15 years later there remain pockets of voluble opposition to the evidence-based practice initiatives.
The first significant indication of a similar movement in education occurred with the Reading Excellence Act (The 1999 Omnibus Appropriations Bill, 1998) that was introduced as a response to the unsatisfactory state of reading attainment in the USA. It acknowledged that part of the cause was the prevailing method of reading instruction, and that literacy policies had been insensitive to developments in the understanding of the reading process. The Act, and its successors, attempted to bridge the gulf between research and classroom practice by mandating that only programs in reading that had been shown to be effective according to strict research criteria would receive federal funding. This reversed a trend in which the criterion for adoption of a model was that it met preconceived notions of “rightness” rather than that it was demonstrably effective for students. Federal funding is now available only for programs with demonstrated effectiveness evidenced by reliable replicable research.
Reliable replicable research was defined as objective, valid, scientific studies that: (a) include rigorously defined samples of subjects that are sufficiently large and representative to support the general conclusions drawn; (b) rely on measurements that meet established standards of reliability and validity; (c) test competing theories, where multiple theories exist; (d) are subjected to peer review before their results are published; and (e) discover effective strategies for improving reading skills (The 1999 Omnibus Appropriations Bill, 1998). In Great Britain, similar concerns have produced a National Literacy Strategy (Department for Education and Employment, 1998) that mandates practice based upon research findings.
In Australia, the National Inquiry into the Teaching of Literacy (2005) also reached similar conclusions about the proper role of educational research. Slavin (2002) considers that such initiatives will reduce the pendulum swings that have characterized education thus far, and could produce revolutionary consequences in redressing educational achievement differences within our community. The National Research Council's Center for Education (Towne, 2002) suggests that educators should attend to research that (a) poses significant questions that can be investigated empirically; (b) links research to theory; (c) uses methods that permit direct investigation of the question; (d) provides a coherent chain of rigorous reasoning; (e) replicates and generalizes; and (f) ensures transparency and scholarly debate.
The Council’s message is clearly to improve the quality of educational research, and reaffirm the link between scientific research and educational practice. Ultimately, the outcomes of sound research should inform educational policy decisions, just as a similar set of principles have been espoused for the medical profession. The fields that have displayed unprecedented development over the last century, such as medicine, technology, transportation, and agriculture have been those embracing research as the prime determinant of practice (Shavelson & Towne, 2002).
Similarly, in Australia in 2005, the National Inquiry into the Teaching of Literacy asserted that “teaching, learning, curriculum and assessment need to be more firmly linked to findings from evidence-based research indicating effective practices, including those that are demonstrably effective for the particular learning needs of individual children” (p.9). It recommends a national program to produce evidence-based guides for effective teaching practice, the first of which is to be on reading. In all, the Report used the term evidence-based 48 times. So, the implication is that education and research are not adequately linked in this country. Why has education been so slow to attend to research as a source of practice knowledge? Carnine (1991) argued that the leadership has been the first line of resistance.
He described educational policy-makers as lacking a scientific framework, and thereby inclined to accept proposals based on good intentions and unsupported opinions. Professor Cuttance, director of the Melbourne University's Centre for Applied Educational Research was equally blunt: “Policy makers generally take little notice of most of the research that is produced, and teachers take even less notice of it.” (Cuttance, 2005, p.5). Carnine (1995b) also points to teachers’ lack of training in seeking out and evaluating research for themselves.
Their training institutions have not developed a research culture, and tend to view teaching as an art form, in which experience, personality, intuition, or creativity are the sole determinants of practice. For example, he estimates that fewer than one in two hundred teachers are experienced users of the ERIC educational database. Taking a different perspective, Meyer (1991, cited in Gable & Warren, 1993) blames the research community for being too remote from classrooms.
She argued that teachers will not become interested in research until its credibility is improved. Research is often difficult to understand, and the careful scientific language and cautious claims may not have the same impact as the wondrous claims of ideologues and faddists unconstrained by scientific ethics.
Fister and Kemp (1993) considered several obstacles to research-driven teaching, important among them being the absence of an accountability link between decision-makers and student achievement. Such a link was unlikely until recently, when regular mandated state or national test program results became associated with funding. They also apportion some responsibility to the research community for failing to appreciate the necessity of adequately connecting research with teachers’ concerns.
The specific criticisms included a failure to take responsibility for communicating findings clearly, and with the end-users in mind. Researchers have often validated practices over too brief a time-frame, and in too limited a range of settings to excite general program adoption across settings. Without considering the organizational ramifications (such as staff and personnel costs) adequately, the viability of even the very best intervention cannot be guaranteed. The methods of introduction and staff training in innovative practices can have a marked bearing on their adoption and continuation.
Woodward (1993) pointed out that there is often a culture gulf between researchers and teachers. Researchers may view teachers as unnecessarily conservative and resistant to change; whereas, teachers may consider researchers as unrealistic in their expectations and lacking in understanding of the school system and culture. Teachers may also respond defensively to calls for change because of the implied criticism of their past practices, and the perceived devaluation of the professionalism of teachers. Leach (1987) argued that collaboration between change agents and teachers is a necessary element in the acceptance of novel practice. In his view, teachers need to be invited to make a contribution that extends beyond solely the implementation of the ideas of others.
There are some signs that such a culture may be in the early stages of development. Viadero (2002) reported on a number of initiatives in which teachers have become reflective of their own work, employing both quantitative and qualitative tools. She also noted that the American Educational Research Association has a subdivision devoted to the practice. Some have argued that science has little to offer education, and that teacher initiative, creativity, and intuition provide the best means of meeting the needs of students. For example, Weaver considers scientific research offers little of value to education (Weaver et al., 1997). “It seems futile to try to demonstrate superiority of one teaching method over another by empirical research” (Weaver, 1988, p.220). These writers often emphasise the uniqueness of every child as an argument against instructional designs that presume there is sufficient commonality among children to enable group instruction with the same materials and techniques. Others have argued that teaching itself is ineffectual when compared with the impact of socioeconomic status and social disadvantage (Coleman et al., 1966; Jencks et al., 1972).
Smith (1992) argued that only the relationship between a teacher and a child was important in evoking learning. Further, he downplayed instruction in favour of a naturalist perspective: “Learning is continuous, spontaneous, and effortless, requiring no particular attention, conscious motivation, or specific reinforcement” (p.432). Still others view research as reductionist, and unable to encompass the holistic nature of the learning process (Cimbricz, 2002; Poplin, 1988).
What sorts of consequences have arisen in other fields from failure to incorporate the results of scientific enquiry? Galileo observed moons around Jupiter in 1610. Francesco Sizi’s armchair refutation of such planets was: There are seven windows in the head, two nostrils, two ears, two eyes and a mouth. So in the heavens there are seven - two favourable stars, two unpropitious, two luminaries, and Mercury alone undecided and indifferent. From which and many other similar phenomena of nature such as the seven metals, etc we gather that the number of planets is necessarily seven. We divide the week into seven days, and have named them from the seven planets. Now if we increase the number of planets, this whole system falls to the ground. Moreover, the satellites are invisible to the naked eye and therefore can have no influence on the earth and therefore would be useless and therefore do not exist (Holton & Roller, 1958, as cited in Stanovich, 1996, p.9).
Galileo taught us the value of controlled observation, whilst Sizi highlighted the limitations of armchair theorising. The failure to incorporate empirical findings into practice can have far-reaching consequences. Even medicine has had only a brief history of attending to research. Early in the 20th century, medical practice was at a similar stage to that of education currently. For example, it was well known that bacteria played a critical role in infection, and 50 years earlier Lister had shown the imperative of antiseptic procedures in surgery.
Yet, in this early period of the century, surgeons were still wiping instruments on whatever unsterilised cloth was handy, with dire outcomes for their patients. More recently, advice from paediatrician Dr Benjamin Spock to have infants sleep face down in their cots caused approximately 60,000 deaths from Sudden Infant Death Syndrome in the USA, Great Britain and Australia between 1974 and 1991, according to researchers from the Institute of Child Health in London (Dobson & Elliott, 2005). His advice was not based upon any empirical evidence, but rather armchair analysis. The book, Baby and Child Care (Spock, 1946), was extraordinarily influential, selling more than 50 million copies. Yet, while the book continued to espouse this practice, reviews of risk factors for SIDS by 1970 had noted the risks of infants sleeping face down.
In the 1990s, when public campaigns altered this practice, the incidence of SIDS deaths halved within one year. In recent times, more and more traditional medical practices are being subjected to empirical test as the profession increasingly establishes its credibility. Are there examples in education in which practices based solely upon belief, unfettered by research support, have been shown to be incorrect, but have led to unhelpful teaching?
· Learning to read is as natural as learning to speak (National Council of Teachers of English, 1999).
· Children do not learn to read in order to be able to read a book, they learn to read by reading books (NZ Ministry of Education, as cited in Mooney, 1988).
· Parents reading to children is sufficient to evoke reading (Fox, 2005).
· Good readers skim over words rather than attending to detail (Goodman, 1985).
· Fluent readers identify words as ideograms (Smith, 1973).
· Skilled reading involves prediction from context (Emmitt, 1996).
· English is too irregular for phonics to be helpful (Smith, 1999).
· Accuracy is not necessary for effective reading (Goodman, 1974).
· Good spelling derives simply from the act of writing (Goodman, 1989).
· Attending to students’ learning styles improves educational outcomes (Carbo & Hodges, 1988; DEECD, 2012b; Dunn & Dunn, 1987).
These assertions have influenced educational practice for more than 20 years, yet they have each been shown by research to be either incorrect or unsupported (Hempenstall, 1999). The consequence has been an unnecessary burden upon struggling students to manage the task of learning to read. Not only have they been denied helpful strategies, but they have been encouraged to employ moribund strategies.
Consider this poor advice from a newsletter to parents at a local school. If your child has difficulty with a word:
· Ask your child to look for clues in the pictures.
· Ask your child to read on, or reread the passage, and try to fit in a word that makes sense.
· Ask your child to look at the first letter to help guess what the word might be.
When unsupported belief guides practice, we risk inconsistency at the individual teacher level and disaster at the education system level.
There are three groups with whom researchers need to be able to communicate if their innovations are to be adopted. At the classroom level, teachers are the focal point of such innovations and their competent and enthusiastic participation is required if success is to be achieved. At the school administration level, principals are being given increasing discretion as to how funds are to be disbursed; therefore, time spent in discussing educational priorities, and cost-effective means of achieving them may be time well-spent, bearing in mind Gersten and Guskey's (1985) comment on the importance of strong instructional leadership.
At the broader system level, decision makers presumably require different information, and assurances about the viability of change of practice. Perhaps because of frustration at the problems experienced in ensuring effective practices are employed across the nation, we are beginning to see a top-down approach, in which research based educational practices are either mandated, as in Great Britain (Department for Education and Employment, 1998) or made a pre-requisite for funding, as in the 2001 No Child Left Behind Act (U.S. Department of Education, 2002).
Whether this approach will be successful in changing teachers’ practice remains to be seen. In any case, there remains a desperate need to address teachers’ and parents’ concerns regarding classroom practice in a cooperative and constructive manner. In Australia, pressure for change is building, and the view of teaching as a purely artisan activity is being challenged. Reports such as that by the National Inquiry into the Teaching of Literacy (2005) have urged education to adopt the demeanour and practice of a research-based profession. State and national testing has led to greater transparency of student progress, and, thereby, to increased public awareness. Government budgetary vigilance is greater than in the past, and measurable outcomes are the expectation from a profession that has not previously appeared enthused by formal testing.
A further possible spur occurred when a Melbourne parent successfully sued a private school for a breach of the Trade Practices Act (Rood & Leung, 2006). She argued that it had failed to deliver on its promise to address her son's reading problems. Reacting to these various pressures, in 2005 the National Institute for Quality Teaching and School Leadership began a process for establishing national accreditation of pre-service teacher education.
The Australian Council for Educational Research is currently evaluating policies and practices in pre-service teacher education programs in Australia. The intention is to raise and monitor the quality of teacher education programs around the nation. There is another stumbling block to the adoption of evidence-based practice. Is the standard of educational research generally high enough to enable sufficient confidence in its findings? Broadly speaking, some areas (such as reading) invite confidence; whereas, the quality of research in other areas cannot dispel uncertainty. Partly, this is due to a preponderance of short term, inadequately designed studies. When Slavin (2004) examined the American Educational Research Journal over the period 2000-2003, only 3 out of 112 articles reported experimental/control comparisons in randomized studies with reasonably extended treatments.
The National Reading Panel (2000) selected research from the approximately 100,000 reading research studies that have been published since 1966, and another 15,000 that had been published before that time. The Panel selected only experimental and quasi-experimental studies, and among those considered only studies meeting rigorous scientific standards in reaching its conclusions.
· Phonemic Awareness: of 1,962 studies, 52 met the research methodology criteria.
· Phonics: of 1,373 studies, 38 met the criteria.
· Guided Oral Reading: of 364 studies, 16 met the criteria.
· Vocabulary Instruction: of 20,000 studies, 50 met the criteria.
· Comprehension: of 453 studies, 205 met the criteria.
So, there is certainly a need for educational research to become more rigorous in future. In the areas in which confidence is justified, how might we weigh the outcomes of empirical research?
Stanovich and Stanovich (2003) propose that competing claims to knowledge should be evaluated according to three criteria. First, findings should be published in refereed journals. Second, the findings have been replicated by independent researchers with no particular stake in the outcome. Third, there is a consensus within the appropriate research community about the reliability and validity of the various findings – the converging evidence criterion. Although the use of these criteria does not produce infallibility it does offer better consumer protection against spurious claims to knowledge.
Without research as a guide, education systems are prey to all manner of gurus, publishing house promotions, and ideologically-driven zealots. Gersten (2001) laments that teachers are "deluged with misinformation" (p. 45). Unfortunately, education courses have not provided teachers with sufficient understanding of research design to enable the critical examination of research. In fact, several whole language luminaries (prominent influences in education faculties over the past 20 years) argued that research was unhelpful in determining practice (Hempenstall, 1999).
Teachers-in-training need to be provided with a solid understanding of research design to adapt to the changing policy emphasis (National Inquiry into the Teaching of Literacy, 2005). For example, in medicine, psychology, and numerous other disciplines, randomized controlled trials are considered the gold standard for evaluating an intervention’s effectiveness. Training courses in these professions include a strong emphasis on empirical research design.
There is much to learn about interpreting other forms of research too (U.S. Department of Education, 2003). In education, however, there is evidence that the level of quantitative research preparation has diminished in teacher education programs over the past twenty years (Lomax, 2004). Are there any immediate shortcuts to discerning the gold from the dross? If so, where can one find the information about any areas of consensus? Those governments that have moved toward a pivotal role for research in education policy have usually formed panels of prestigious researchers to peruse the evidence in particular areas, and report their findings widely (e.g., National Reading Panel, 2000). They assemble all the methodologically acceptable research, and synthesise the results, using statistical processes such as meta-analysis, to enable judgements about effectiveness to be made. It involves clumping together the results from many studies to produce a large data set that reduces the statistical uncertainty that inevitably accompanies single studies.
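The pooling step described above can be made concrete with a small sketch. A common approach (one of several used in meta-analysis) is the fixed-effect model, which weights each study's effect size by the inverse of its squared standard error, so that more precise studies count for more. The study figures below are invented for illustration, not drawn from any review cited in this article.

```python
# Illustrative fixed-effect meta-analysis: pool per-study effect sizes
# by inverse-variance weighting. All study values here are hypothetical.
import math

def pooled_effect(effects, std_errors):
    """Return (pooled effect, pooled standard error) under a
    fixed-effect model: each study is weighted by 1/SE^2."""
    weights = [1.0 / se**2 for se in std_errors]
    total_w = sum(weights)
    pooled = sum(w * d for w, d in zip(weights, effects)) / total_w
    pooled_se = math.sqrt(1.0 / total_w)
    return pooled, pooled_se

# Three hypothetical small studies of the same intervention.
effects = [0.55, 0.70, 0.40]       # standardised mean differences (d)
std_errors = [0.20, 0.25, 0.15]    # larger SE = less precise study

d, se = pooled_effect(effects, std_errors)
print(f"pooled d = {d:.2f}, SE = {se:.2f}")  # → pooled d = 0.50, SE = 0.11
```

Note that the pooled standard error (0.11) is smaller than that of any single study, which is exactly the reduction in statistical uncertainty the text describes.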
So, there are recommendations for practice produced by these bodies that are valuable resources in answering the question “what works?” These groups include the National Reading Panel, American Institutes for Research, National Institute for Child Health and Human Development, The What Works Clearinghouse, and the Coalition for Evidence-Based Policy. A fuller list with web addresses can be found in the appendix. As an example, Lloyd (2006) summarises a number of such meta-analyses for some approaches. In this method, an effect size of 0.2 is considered small, 0.5 a medium effect, and 0.8 a large effect (Cohen, 1988).
· Early intervention programs: 74 studies, 215 effect sizes, overall effect size (ES) = 0.6.
· Direct Instruction (DI): 25 studies, 100+ effect sizes, overall ES = 0.82.
· Behavioural treatment of classroom problems of students with behaviour disorder: 10 studies, 26 effect sizes, overall ES = 0.93.
· Whole language: 180 studies, 637 effect sizes, overall ES = 0.09.
· Perceptual/motor training: 180 studies, 117 effect sizes, overall ES = 0.08.
· Learning styles: 39 studies, 205 effect sizes, overall ES = 0.14.
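For readers unfamiliar with the metric, the effect sizes above are standardised mean differences (Cohen's d): the gap between treatment and control means divided by the pooled standard deviation. The sketch below shows the computation and Cohen's (1988) rule-of-thumb labels; the group statistics are invented for the example.

```python
# Illustrative computation of Cohen's d, the effect-size metric behind
# the small/medium/large (0.2/0.5/0.8) benchmarks. Data are hypothetical.
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Standardised mean difference using the pooled standard deviation."""
    pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    return (mean1 - mean2) / math.sqrt(pooled_var)

def label(d):
    """Cohen's (1988) rule-of-thumb interpretation."""
    d = abs(d)
    if d >= 0.8:
        return "large"
    if d >= 0.5:
        return "medium"
    if d >= 0.2:
        return "small"
    return "negligible"

# Hypothetical reading-test scores: intervention group vs control group.
d = cohens_d(mean1=78.0, sd1=10.0, n1=30, mean2=70.0, sd2=10.0, n2=30)
print(f"d = {d:.2f} ({label(d)})")  # → d = 0.80 (large)
```

On this scale, the whole language (0.09), perceptual/motor training (0.08), and learning styles (0.14) figures above all fall below even the "small" threshold.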
These sources can provide great assistance, but can also be confusing as they do not all agree on which studies should be included in their meta-analyses. For example, Hattie’s analysis of Direct Instruction studies revealed strong effects for regular students (d=0.99) and for special education and lower ability students (d=0.86); effects were higher for reading (d=0.89) than for mathematics (d=0.50), similar for lower-level word attack (d=0.64) and higher-level comprehension (d=0.54), and similar for elementary and high school students. In contrast, the Coalition for Evidence-Based Policy does not include Direct Instruction among its list of evidence-based approaches because of their perception of a lack of long term effect studies.
The What Works Clearinghouse rejects most of the Direct Instruction studies as not meeting their criteria for methodological soundness, and ignores those older than 20 years or so. There has also been criticism (Briggs, 2008; Slavin, 2008) of some of the WWC decisions, in particular, inconsistency in applying standards for what constitutes acceptable research.
Thus, the large scale reviews have their own issues to deal with before they can be unquestioningly accepted. It may also be quite some time before gold standard research reaches critical mass to make decisions about practice easier. It is also arguable whether education can ever have randomised control trials as standard. Of course, it is not only the large scale, methodologically sophisticated studies that are worthwhile.
A single study involving a small number of schools or classes may not be conclusive in itself, but many such studies, preferably done by many researchers in a variety of locations, can add some confidence that a program's effects are valid (Slavin, 2003). If one obtains similar positive benefits from an intervention across different settings and personnel, there is added reason to prioritise the intervention for a large gold-standard study.
Taking an overview, there are a number of options available to create educational reform. One involves the use of mandate, as with education policy in England. Another option involves inveigling schools with extra money, as in the USA beginning with the No Child Left Behind Act (U.S. Department of Education, 2002).
Still another is to inculcate skills and attitudes during teacher training. Whilst these are not mutually exclusive options, the third appears to be a likely component of any reform movement in Australia, given the establishment and objectives of the National Institute for Quality Teaching and School Leadership (2005).
A prediction for the future, perhaps 15 years hence? Instructional approaches will need to produce evidence of measurable gains before being allowed within the school curriculum system. Education faculties will have changed dramatically as a new generation takes control. Education courses will include units devoted to evidence-based practice, perhaps through an increased liaison with educational and cognitive psychology. Young teachers will routinely seek out and collect data regarding their instructional activities. They will become scientist practitioners in their classrooms. Student progress will be regularly monitored, problems in learning will be noticed early, and addressed systematically. Overall rates of student failure will fall. Optimistic? Of course!
More than any generation before them, children born today should benefit from rapid advances in the understanding of human development, and of how that development may be optimised. There has been an explosion of scientific knowledge about the individual, not only in genetics and the neurosciences, but also about the role of environmental influences, such as socioeconomic status, early child rearing practices, effective teaching, and nutrition. However, to this point, there is little evidence that these knowledge sources form a major influence on policy and practice in education.
There is a serious disconnect between the accretion of knowledge and its acceptance and systematic implementation for the benefit of this growing generation. Acceptance of a pivotal role for empiricism is actively discouraged by advisors to policymakers, whose ideological position decries any influence of science.
There are unprecedented demands on young people to cope with an increasingly complex world. It is one in which the sheer volume of information, and the sophisticated persuasion techniques to which they will be subjected, may overwhelm the capacities that currently fad-dominated educational systems can provide for young people.
A recognition of the proper role of science in informing policy is a major challenge for us in aiding the new generation. This perspective does not involve a diminution of the role of the teacher, but rather the integration of professional wisdom with the best available empirical evidence in making decisions about how to deliver instruction (Whitehurst, 2002). Evidence-based policies have great potential to transform the practice of education, as well as research in education. Evidence-based policies could finally set education on the path toward the kind of progressive improvement that most successful parts of our economy and society embarked upon a century ago. With a robust research and development enterprise and government policies demanding solid evidence of effectiveness behind programs and practices in our schools, we could see genuine, generational progress instead of the usual pendulum swings of opinion and fashion.
This is an exciting time for educational research and reform. We have an unprecedented opportunity to make research matter and to then establish once and for all the importance of consistent and liberal support for high-quality research. Whatever their methodological or political orientations, educational researchers should support the movement toward evidence-based policies and then set to work generating the evidence that will be needed to create the schools our children deserve (Slavin, 2002, p.20).”
So, my earlier academic document is complete.
References for the document above.
Briggs, D.C. (2008). Synthesizing causal inferences. Educational Researcher, 37(1), 15-22.
Carbo, M., & Hodges, H. (1988, Summer). Learning styles strategies can help students at risk. Teaching Exceptional Children, 48-51.
Carnine, D. (1991). Curricular interventions for teaching higher order thinking to all students: Introduction to the special series. Journal of Learning Disabilities, 24, 261-269.
Carnine, D. (1995a). Trustworthiness, useability, and accessibility of educational research. Journal of Behavioral Education, 5, 251-258.
Carnine, D. (1995b). The professional context for collaboration and collaborative research. Remedial and Special Education, 16(6), 368-371.
Carnine, D. (2000). Why education experts resist effective practices (and what it would take to make education more like medicine). Washington, DC: Fordham Foundation. Retrieved from http://www.edexcellence.net/library/carnine.html
Chambless, D. L. & Ollendick, T. H. (2001). Empirically supported psychological interventions: Controversies and evidence. Annual Review of Psychology, 52, 685-716.
Cimbricz, S. (2002, January 9). State-mandated testing and teachers' beliefs and practice. Education Policy Analysis Archives, 10(2). Retrieved from http://epaa.asu.edu/epaa/v10n2.html
Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum.
Coleman, J., Campbell, E., Hobson, C., McPartland, J., Mood, A., Weinfeld, F. D., & York, R. (1966). Equality of educational opportunity. Washington, DC: Department of Health, Education and Welfare.
Cuttance, P. (2005). Education research 'irrelevant.' The Age, July 5, p.5.
DEECD (2012). Student Support Group Guidelines 2012. Retrieved from http://www.eduweb.vic.gov.au/edulibrary/public/stuman/wellbeing/SSG_Guidelines_2012.pdf
Department for Education and Employment. (1998). The National Literacy Strategy: Framework for Teaching. London: Crown.
Dobson, R., & Elliott, J. (2005). Dr Spock’s advice blamed for cot deaths. London: University College. Retrieved March 11, 2006, from http://www.ucl.ac.uk/news-archive/in-thenews/may-2005/latest/newsitem.shtml?itnmay050418
Fister, S., & Kemp, K. (1993). Translating research: Classroom application of validated instructional strategies. In R. C. Greaves & P. J. McLaughlin (Eds.), Recent advances in special education and rehabilitation. Boston, MA: Andover Medical.
Fox, M. (2005, August 16). Phonics has a phoney role in the literacy wars. Sydney Morning Herald, p.6.
Gable, R. A., & Warren, S. F. (1993). The enduring value of instructional research. In R. A. Gable & S. F. Warren (Eds.), Advances in mental retardation and developmental disabilities: Strategies for teaching students with mild to severe mental retardation. Philadelphia: Jessica Kingsley.
Gersten, R. & Guskey, T. (1985, Fall). Transforming teacher reluctance into a commitment to innovation. Direct Instruction News, 11-12.
Gersten, R. (2001). Sorting out the roles of research in the improvement of practice. Learning Disabilities: Research & Practice, 16(1), 45-50.
Gersten, R., Chard, D., & Baker, S. (2000). Factors enhancing sustained use of research-based instructional practices. Journal of Learning Disabilities, 33, 445-457.
Hempenstall, K. (1996). The gulf between educational research and policy: The example of Direct Instruction and whole language. Behaviour Change, 13, 33-46.
Hempenstall, K. (1999). The gulf between educational research and policy: The example of Direct Instruction and whole language. Effective School Practices, 18(1), 15-29.
Jencks, C. S., Smith, M., Acland, H., Bane, M. J., Cohen, D., Gintis, H., Heyns, B., & Michelson, S. (1972). Inequality: A reassessment of the effect of family and schooling in America. New York: Basic Books.
Leach, D. J. (1987). Increasing the use and maintenance of behaviour-based practices in schools: An example of a general problem for applied psychologists? Australian Psychologist, 22, 323-332.
Lloyd, J.L. (2006). Teach effectively. Retrieved June 20, 2006, from http://teacheffectively.com/index.php?s=meta+analysis
Lomax, R.G. (2004). Whither the future of quantitative literacy research? Reading Research Quarterly, 39(1), 107-112.
Maggs, A. & White, R. (1982). The educational psychologist: Facing a new era. Psychology in the Schools, 19, 129-134.
Marshall, J. (1993). Why Johnny can't teach. Reason, 25(7), 102-106.
Mooney, M. (1988). Developing life-long readers. Wellington, New Zealand: Learning Media.
National Inquiry into the Teaching of Literacy. (2005). Teaching Reading: National Inquiry into the Teaching of Literacy. Canberra: Department of Education, Science, and Training. Retrieved February 11, 2006, from www.dest.gov.au/nitl/report.htm
National Institute for Quality Teaching and School Leadership. (2005, August 25). National accreditation of pre-service teacher education. Retrieved October 15, 2005, from http://www.teachingaustralia.edu.au/home/What%20we%20are%20saying/media_release_pre_service_teacher_ed_accreditation.pdf
National Reading Panel. (2000). Teaching children to read: An evidence-based assessment of the scientific research literature on reading and its implications for reading instruction. Washington, DC: U.S. Department of Health and Human Services.
Poplin, M. (1988). The reductionist fallacy in learning disabilities: Replicating the past by reducing the present. Journal of Learning Disabilities, 21, 389-400.
Rood, D., & Leung, C. C. (2006, August 16). Litigation warning as private school settles complaint over child's literacy. The Age, p.6.
Shavelson, R. J., & Towne, L. (Eds.). (2002). Scientific research in education. Washington, DC: National Academy Press.
Slavin, R. E. (2002). Evidence-based education policies: Transforming educational practice and research. Educational Researcher, 31(7), 15-21.
Slavin, R. E. (2003). A reader's guide to scientifically based research. Educational Leadership, 60(5), 12-16. Retrieved December 16, 2003, from http://www.ascd.org/publications/ed_lead/200302/slavin.html
Slavin, R. E. (2004). Education research can and must address "What Works" questions. Educational Researcher, 33(1), 27-28.
Smith, F. (1973). Psychology and reading. New York: Holt, Rinehart & Winston.
Smith, F. (1992). Learning to read: The never-ending debate. Phi Delta Kappan, 74, 432- 441.
Spock B. (1946). The commonsense book of baby and child care. New York: Pocket Books.
Stanovich, K. (1996). How to think straight about psychology (4th ed.). New York: Harper Collins.
Stanovich, P. J., & Stanovich, K. E. (2003). How teachers can use scientifically based research to make curricular & instructional decisions. Jessup, MD: The National Institute for Literacy. Retrieved September 16, 2003, from http://www.nifl.gov/partnershipforreading/publications/html/stanovich/
Stone, J. E. (April 23, 1996). Developmentalism: An obscure but pervasive restriction on educational improvement. Education Policy Analysis Archives. Retrieved November 16, 2001, from http://seamonkey.ed.asu.edu/epaa
The 1999 Omnibus Appropriations Bill (1998). The Reading Excellence Act (pp.956-1007). Retrieved February 12, 2003, from http://www.house.gov/eeo
Towne, L. (2002, February 6). The principles of scientifically based research. Speech presented at the U.S. Department of Education, Washington, DC. Retrieved December 12, 2002, from www.ed.gov/nclb/research/
U.S. Department of Education. (2002, January). No Child Left Behind Act, 2001. Retrieved December 12, 2002, from http://www.ed.gov/offices/OESE/esea/
U.S. Department of Education (2003). Identifying and implementing educational practices supported by rigorous evidence: A user friendly guide. Washington, D.C.: Institute of Education Sciences, U.S. Department of Education. Retrieved November 1, 2004, from http://www.ed.gov/print/rschstat/research/pubs/rigorousevid/guide/html
Viadero, D. (2002). Research: Holding up a mirror. Editorial Projects in Education, 21(40), 32-35.
Weaver, C. (1988). Reading: Progress and practice. Portsmouth, NH: Heinemann.
Weaver, C., Patterson, L., Ellis, L., Zinke, S., Eastman, P., & Moustafa, M. (1997). Big Brother and reading instruction. Retrieved December 16, 2004, from http://www.m4pe.org/elsewhere.htm
Whitehurst, G. J. (2002). Statement of Grover J. Whitehurst, Assistant Secretary for Research and Improvement, before the Senate Committee on Health, Education, Labor and Pensions. Washington, DC: U.S. Department of Education. Retrieved January 23, 2003, from http://www.ed.gov/offices/IES/speeches/
Woodward, J. (1993). The technology of technology-based instruction: Comments on the research, development, and dissemination approach to innovation. Education & Treatment of Children, 16, 345-360.
*********************************************
End of my research and start of some recent studies.
AI Overview
“In the context of education, "What does evidence-based practice mean?" by Hempenstall (2006) explores the concept of using research findings to inform and improve teaching practices. It emphasizes that evidence-based practice (EBP) is not simply about using research findings, but also about critically evaluating their relevance, validity, and applicability to specific contexts. EBP requires a systematic approach that integrates research evidence with practical experience, professional judgment, and the unique needs of students.
Here's a more detailed breakdown:
Key aspects of evidence-based practice in education:
EBP relies on the findings of rigorous, peer-reviewed research to identify effective teaching strategies and interventions.
It acknowledges that research findings may not be directly transferable to every classroom or student population, requiring educators to adapt and tailor practices to their specific situations.
EBP encourages educators to combine research evidence with their own professional knowledge, experience, and student input to make informed decisions.
EBP is not a static set of practices but rather an ongoing process of learning, evaluating, and adapting based on new research and feedback.
Importance of EBP in education:
By using evidence-based strategies, educators can increase the likelihood of positive learning outcomes for students.
EBP promotes a culture of accountability and transparency by grounding teaching practices in research-supported evidence.
EBP helps ensure that resources are used effectively by focusing on interventions and strategies that have been shown to be successful.”
***********************************************
Some of the relevant papers of mine:
So, we are still looking at evidence-based practice in education. The decision was to consider the somewhat older findings first. Now we turn to the more recent findings, collected in this newer part of the document, to see whether they change the picture much.
Building a Knowledge-Based Economy through “Sciencing” (2025)
“Abstract: This study explores building a knowledge-based economy through “sciencing”: the University of Abuja students’ experiences with a focus on developing critical thinking, problem solving, and technical skills in students. The research aims to evaluate how "Sciencing" can better prepare students for STEM employment by fostering hands-on, inquiry-based learning experiences. Additionally, the study examines the role of university-industry collaborations in aligning academic curricula with industry needs, thereby contributing to Nigeria’s transition to a knowledge-based economy. It also investigates the importance of faculty professional development to support the effective implementation of "Sciencing" in teaching practices. A mixed-methods approach, combining quantitative and qualitative data, is employed to assess the impact of this approach on student learning outcomes and industry readiness. The findings revealed, among others, that students develop essential skills, such as critical thinking, problem-solving, and technical skills, which are vital for success in STEM fields with the implementation of “Sciencing”. This hands-on approach allows students to better connect theoretical concepts to real-world applications for a knowledge-based economy. Moreover, it strengthens university-industry partnerships, ensuring that curricula are more closely aligned with the evolving demands of the STEM workforce.”
Aregbesola, B. G. (2025). Building a Knowledge-Based Economy through “Sciencing”: The University of Abuja Students’ Experiences. International Journal on Integrated Education (IJIE), 8(2), 112-122.
_______________________________________________
Preparedness and English Learners’ Literacy Success (2025).
“English Learners (ELs) in early childhood and elementary settings continue to face a significant literacy achievement gap compared to their non-EL peers. A qualitative action research study was conducted in a suburban Midwest district to examine how teacher preparedness, instructional practices, and professional development influence EL literacy development. Through teacher interviews, surveys, and classroom observations, the study identified key challenges, including limited EL-specific training, minimal collaboration with EL specialists, and inadequate instructional resources. Aimsweb Plus data confirmed a widening literacy gap, reinforcing the need for culturally responsive teaching, improved collaboration, and targeted interventions. The findings emphasize the importance of sustained professional learning and school-wide strategies that prioritize equity and inclusion to support EL achievement.”
Rhoads, Sean, "Looking Beyond the Achievement Gap: An Inquiry Into Bridging the Divide Between Teacher Preparedness and English Learners’ Literacy Success" (2025). Theses and Dissertations. 2127.
https://ir.library.illinoisstate.edu/etd/2127
A Meta-Analysis of Performance Feedback as an Evidence-Based Practice in Special Education Teacher Preparation (2025)
“This meta-analysis systematically examines the evidence supporting the effectiveness of performance feedback as an evidence-based practice in the training of preservice special education teachers. A total of 27 studies were evaluated using the rigorous What Works Clearinghouse (WWC) design standards for Single-Case Experimental Research.
Results indicate that 15 studies, encompassing 56 preservice special education teachers, met the WWC design standards with and without reservation. The weighted aggregated mean treatment effect estimates were .91 (Tau-U) and 2.59 (Between-Case Standardized Mean Difference), demonstrating a strong impact of performance feedback interventions on enhancing preservice teachers’ instructional practices. This study contributes to the extant literature by providing robust evidence supporting the classification of performance feedback as an evidence-based practice for preservice special education teacher preparation, meeting the stringent criteria set forth by the What Works Clearinghouse (WWC). Analyses also include methodological quality, risk of bias, and treatment effect estimates, addressing key gaps in extant literature and supporting recommendations for future research. The findings highlight the potential of performance feedback to shape effective teaching behaviors among preservice special education teachers.”
Balikci, S., Gulboy, E., Rock, M. L., & Rakap, S. (2025). A Meta-Analysis of Performance Feedback as an Evidence-Based Practice in Special Education Teacher Preparation. Teacher Education and Special Education, 48(3), 169-191. https://doi.org/10.1177/08884064251356657 (Original work published 2025)
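For readers unfamiliar with the effect size metrics reported above: Tau-U is a nonoverlap index used in single-case research, built from pairwise comparisons between baseline and treatment observations. As a purely illustrative aid (not from the paper itself), the following sketch computes the basic, trend-uncorrected Tau; full Tau-U additionally adjusts for baseline trend, and the function name here is my own.

```python
def tau_nonoverlap(baseline, treatment):
    """Basic Tau: proportion of improving minus deteriorating pairs
    across all baseline-vs-treatment comparisons (ties count as zero)."""
    pairs = len(baseline) * len(treatment)
    improving = sum(1 for a in baseline for b in treatment if b > a)
    deteriorating = sum(1 for a in baseline for b in treatment if b < a)
    return (improving - deteriorating) / pairs

# Complete separation between phases gives the maximum value of 1.0
print(tau_nonoverlap([2, 3, 1], [5, 6, 7]))  # 1.0
```

A weighted mean of such phase-contrast values across studies, as in the meta-analysis above, then summarises the overall intervention effect.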
_______________________________________________
Integrating evidence-based practices in preschool educators (2024)
“The integration of evidence-based practices (EBP) into preschool education is a critical advancement in the field of early childhood education, inspired by its successful application in healthcare. This study investigates the adoption and implementation of EBP among preschool educators from 2004 to 2023. By employing a scoping review methodology guided by PRISMA-ScR, this study conducted a comprehensive search of the Web of Science, EBSCO, and Scopus databases. The inclusion criteria included articles that highlighted the practical application of EBP, yielding 21 relevant studies. The findings reveal significant trends and persistent challenges. Professional development, particularly through coaching and in-service training, has emerged as crucial for enhancing the implementation of EBP.
Organizational barriers such as limited access to academic resources and insufficient leadership support were identified as significant obstacles. Nevertheless, effective strategies, including small-group instruction and behavior management, have been shown to substantially improve educational outcomes. This study emphasizes the importance of fostering collaborative environments within educational institutions and enhancing educators' research skills to bridge the research and practice gap. Future research should address the identified gaps, including the need for longitudinal studies and the evaluation of EBP's long-term effectiveness. The implications of this study are essential for policy adjustments and sustainable integration of EBP in early childhood education, aiming to improve educational quality and meet the diverse needs of young learners.”
Sepúlveda-Vallejos, S., Almonacid-Fierro, A., Valdebenito, K., & Aguilar-Valdés, M. (2024). Integrating evidence-based practices in preschool educators: A scoping review from 2004 and 2023. Multidisciplinary Reviews, 8(2), 2025054. https://doi.org/10.31893/multirev.2025054
Searching for evidence-based practice: A qualitative metasynthesis of the research on Reggio Emilia practices in Australian early years settings (2024).
“This article was motivated by a professional curiosity about the implementation of Reggio-inspired practices and their impact on the evolution of values, research, and practices in Australian early years education. Through a meta-synthesis of 23 qualitative studies, we identified a range of themes and subthemes that enrich the ongoing exploration of the Reggio Emilia Approach (REA) in Australia. Positive outcomes included a growing sense of trust and confidence among early years professionals in adopting the approach, alongside increased exploratory and creative engagement among children. These outcomes were intricately linked to the specific contexts of educators’ research and teaching efforts.
However, the success of REA’s implementation depends heavily on sustained investment in teacher support. As the approach gains momentum in Australian settings, its full potential will only be realised if educators, policymakers, and institutions collaborate to adapt and sustain these practices within local contexts. Ultimately, this work highlights the need for environments where educators feel empowered, children thrive, and innovative teaching becomes the norm. Achieving this requires proactive strategies, such as enhanced professional development, improved resources, and a rethinking of conventional teaching methods.”
Guo, K., & Rouse, E. (2024). Searching for evidence-based practice: A qualitative metasynthesis of the research on Reggio Emilia practices in Australian early years settings. Australian Journal of Education, 69(1), 58-78. https://doi.org/10.1177/00049441241302831 (Original work published 2025).
____________________________________________
Using Evidence-Based Practice and Data-Based Decision Making in Inclusive Education (2021)
Abstract
There are longstanding calls for inclusive education for all regardless of student need or teacher capacity to meet those needs. Unfortunately, there are little empirical data to support full inclusion for all students and even less information on the role of data-based decision making in inclusive education specifically, even though there is extensive research on the effectiveness of data-based decision making.
In this article, we reviewed what data-based decision making is and its role in education, the current state of evidence related to inclusive education, and how data-based decision making can be used to support decisions for students with reading disabilities and those with intellectual disabilities transitioning to adulthood. What is known about evidence-based practices in supporting reading and transition are reviewed in relationship to the realities of implementing these practices in inclusive education settings. Finally, implications for using data-based decisions in inclusive settings are discussed.
Keywords:
inclusive education; data-based decision making; transition planning; reading disabilities; intellectual disabilities
1. Using Evidence-Based Practice and Data-Based Decision Making in Inclusive Education
Twenty-five years ago, Vaughn and Schumm [1] outlined the components of responsible inclusive education. These components included teacher choice and the development of their own philosophy of inclusion, adequate resources and professional development, a continuum of services rather than having only one option: full inclusion, school-based models rather than district- or state-mandated models, putting student needs first, ongoing evaluation of effectiveness, and curriculum and instructional practices that meet student needs. Making decisions that put students first ensures that students make progress academically and socially through ongoing progress monitoring.
Decisions about placement and programming should be based on data related to student progress toward their goals rather than assuming that the same placement and programming will meet all students’ needs. To understand some of the limitations in the use of data within the context of inclusive education, this article describes evidence-based practice and data-based decision making (DBDM) within the context of inclusive education, specifically in the areas of reading and transition to adulthood for students with intellectual disability (ID). In this article, we reviewed data-based decision making and how it is used to support educational outcomes, and the current state of evidence regarding the effectiveness of inclusive education. We then provided examples of how evidence-based practices related to DBDM can be used to make decisions in inclusive education that prioritize meeting students’ needs. We used two examples, reading difficulties and intellectual disability (ID), to illustrate groups of students who have different needs.
2. Evidence-Based Practice and Data-Based Decision-Making
DBDM is a process of gathering data about how students are progressing toward specific goals in academic or behavioral performance. This includes identifying the current and desired levels of performance, implementing an evidence-based intervention, regularly monitoring progress toward meeting that goal, and modifying the intervention as necessary [2]. This is an iterative process rather than a few steps to follow through once. DBDM is a process that can be implemented at all levels from entire districts to individual students. While staff at the school district and individual school levels should use data to inform their decisions about how they educate and support the academic and social-emotional development of their students, this article will focus on DBDM as it relates to individual students with exceptional needs.
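The monitor-and-adjust cycle described above can be illustrated with one common progress-monitoring decision rule: fit a trend line to a student's scores and compare its slope against the aim line. This is only a minimal sketch of one such rule, with function names of my own invention; the article itself does not prescribe a particular formula.

```python
def trend_slope(scores):
    """Ordinary least-squares slope of scores over sessions 0..n-1."""
    n = len(scores)
    mean_x = (n - 1) / 2
    mean_y = sum(scores) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(scores))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den

def dbdm_decision(scores, aim_slope):
    """Continue the intervention if progress keeps pace with the aim line,
    otherwise flag it for modification."""
    return "continue" if trend_slope(scores) >= aim_slope else "modify intervention"

# Weekly scores rising by ~1.6 points against an aim of 1.0 per session
print(dbdm_decision([10, 12, 13, 15], aim_slope=1.0))  # continue
```

In practice the rule would be re-applied after each round of progress monitoring, making the process iterative rather than a one-off check, as the paragraph above emphasises.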
While data are an important component of DBDM, decision makers must interpret the data to inform decisions about how to effectively support students. Data must be combined with pedagogical and content knowledge to translate it into a usable action plan, taking the context into consideration [3]. This reasoning process is not as straightforward as it appears, and decision makers need to attend to potential cognitive biases and misapplied heuristics that can interfere with their decision making, for example, confirmation bias [4,5]. While evidence suggests that DBDM can improve student outcomes [6], more work is needed to effectively translate this into widespread practice. Unfortunately, although teachers collect a great deal of data, they rarely use it explicitly in decision making for individual students’ progress [7].
The outcomes of decisions and interventions implemented within the classroom depend on the validity of the inferences drawn from the data. Unfortunately, data literacy tends to be low among school personnel, contributing to this limited use of data in decision making [8,9]; however, supporting staff understanding of data can increase their data literacy [10]. Data literacy concerns the ability to analyse and interpret data so that it can be used to inform practice. Data only becomes usable information when the observer is able to understand it, which involves multiple steps. First, it is necessary to collect and organize data related to the goal and then transform the data into information. Then, educators need to be proficient in analyzing and summarizing, creating concise and targeted summaries of relevant information. Finally, by synthesizing the information into a unified and usable summary and prioritizing what has the most importance in working toward the intended goals, the information becomes usable knowledge. Teachers can use this knowledge to determine the effectiveness of an intervention, creating a feedback loop to the previous stages which informs changes needed to increase intervention effectiveness [3]. The complexity of this process may lead to false interpretations if the educator is not proficient in these skills.
Formal professional development in areas of data use can be difficult to access. Often, knowledgeable staff members train principals or other administrative staff, who then train teachers, relying on colleagues rather than development programs [9,11]. This training tends to be brief, without consistent levels of quality, and with a focus on using data systems rather than on data interpretation or how to connect the resulting information with strategies for instructional improvement [9]. Even within schools that promote and support the use of data, data are rarely used to improve teaching or adapt instruction to meet the needs of students [9,12].
Most often teachers respond by looking at the content of instruction, re-teaching or retesting the relevant information, or forming groups based on the identified needs of students rather than adjusting the delivery methods of their instruction [12]. To appropriately use the data collected in the classroom, teachers must be given the opportunity to improve their skills in data literacy. Means and colleagues [9] suggest that collaboration can be useful in this process, allowing teachers to learn from each other, clarify any problems, correct errors, and bring a broader range of interpretive skills to the task. Greater access to technological resources such as student information systems, instructional management systems, assessment systems, and diagnostic systems, can also help teachers meaningfully collect, analyse, and communicate data when the complexity becomes overwhelming [11]. Although DBDM is a process that can support effective teaching and positive student outcomes, teachers receive little to no training in this area and often have limited time to devote to data analysis and interpretation.
Wilcox, G., Fernandez Conde, C., & Kowbel, A. (2021). Using Evidence-Based Practice and Data-Based Decision Making in Inclusive Education. Education Sciences, 11(3), 129. https://doi.org/10.3390/educsci11030129
_______________________________________________
So, I’m hopeful that the collection of “What does evidence-based practice in education mean?” is of some value in assisting students.
End of story!