Hempenstall, K. (2002). Will Education Ever Embrace Empirical Research? Direct Instruction News, 13.
I’m Kerry Hempenstall. I was working in education settings around 2000, when science did not have a strong presence in education, and I wrote a text on the issue around that time.
The text below, published by Slavin at about the same time, provides a sense of how the field operated.
“At the dawn of the 21st century, educational research is finally entering the 20th century. The use of randomized experiments that transformed medicine, agriculture, and technology in the 20th century is now beginning to affect educational policy. This article discusses the promise and pitfalls of randomized and rigorously matched experiments as a basis for policy and practice in education. It concludes that a focus on rigorous experiments evaluating replicable programs and practices is essential to build confidence in educational research among policymakers and educators. However, new funding is needed for such experiments and there is still a need for correlational, descriptive, and other disciplined inquiry in education. Our children deserve the best educational programs, based on the most rigorous evidence we can provide.”
Slavin, R. E. (2002). Evidence-based education policies: Transforming educational practice and research. Educational Researcher, 31(7), 15-21.
Now, let’s look at some relatively recent texts on empirical research in education, all from 2020 to 2025. Has the relationship between schools and evidence changed?
“Yes, there's a growing movement in education to embrace empirical research, but it's not a complete shift yet. Many educators are using data and evidence to understand student progress and improve their teaching practices. However, the use of evidence-informed practices can vary, and some argue that a narrow focus on statistical evidence can be detrimental to teachers' agency and student learning.
Many schools are now incorporating data and evidence into their decision-making processes, aiming to improve teaching and learning. This includes using data to understand student needs, assessing the effectiveness of different teaching strategies, and making adjustments to their approach.
The Importance of Empirical Research: Empirical research provides a way to objectively investigate educational phenomena and make informed decisions about teaching and learning. It helps to identify effective practices, evaluate the impact of different interventions, and develop evidence-based guidelines for educators.
Challenges and Criticisms: While there are benefits to using empirical research, there are also challenges and criticisms. Some argue that a narrow focus on statistical evidence can lead to a "cookbook" approach to teaching, where teachers are told what to do rather than making informed decisions based on their own expertise and understanding of their students.
The Need for Context: It's important to consider the context in which research is conducted and how it applies to specific classrooms and schools. Teachers need to be able to adapt research findings to their own unique situations and the needs of their students.
The Role of Teacher Agency: Some argue that teachers should not be reduced to simply implementing research findings, but should also have the autonomy to make informed decisions based on their own experience and expertise.
Moving Forward: The key to successfully embracing empirical research in education is to find a balance between using evidence-based practices and respecting the role of teacher agency. This requires ongoing dialogue between researchers and educators, and a commitment to using research findings to improve teaching and learning in a meaningful way.
In conclusion, while education is increasingly embracing empirical research, it's not a simple matter of blindly following research guidelines. A more nuanced approach is needed that recognizes the value of both research and teacher expertise, and that allows for the adaptation of research findings to specific contexts.”
Hello Lucinda and Andy, from Pam Snow. I am not sure how well this will work, but I have compiled a detailed response to this piece on my blog, The Snow Report. I understand that AARE no longer provides links to other blogposts, so am copying and pasting it here. I don’t think the formatting will survive, so you might need to go to the source, but I have numbered the points for ease of readability. Note too that my embedded links won’t show up here. I would of course welcome comments. Kind regards, Pam
Let’s all throw out the scientific method. It’s not perfect and it’s too hard.
Last week, a piece entitled The problem with using scientific evidence in education (why teachers should stop trying to be more like doctors) appeared on the AARE Blog. This was co-authored by an education academic, Dr Lucinda McKnight (Deakin University) and a medical education academic Dr Andy Morgan (Monash University). The authors purport to mount an argument as to why the notion of evidence-based practice should be resisted in education. I believe the article is deeply flawed at a number of levels, and have provided a detailed response to it here. By way of background, I worked in medical education for ten years, and have also taught teachers at postgraduate level, as well as having taught across some ten allied health professions.
“For teachers to be like doctors, and base practice on more ‘scientific’ research, might seem like a good idea. But medical doctors are already questioning the narrow reliance in medicine on randomised controlled trials that Australia seems intent on implementing in education.”
Where is the evidence that all that is being recommended is randomised controlled trials (RCTs)?
It is right that evidence derived from RCTs be questioned, because it is right that evidence from all research be questioned.
In medicine, there is a sound understanding of the difference between an efficacy trial and an effectiveness trial. This difference should be considered and discussed in education as well.
“In education, though, students are very different from each other.”
They are no more different from, nor more similar to, each other than patients are. Doctors, like teachers, rely on pattern recognition to form and test hypotheses. They could not do their jobs if this wasn’t the case.
“Unlike those administering placebos and real drugs in a medical trial, teachers know if they are delivering an intervention. Students know they are getting one thing or another. The person assessing the situation knows an intervention has taken place.”
Yes, but students do not necessarily know which teachers have been exposed to an intervention, e.g. a series of professional learning seminars. And researchers with overall responsibility for a trial can easily be blinded to allocation group – I speak from personal experience on this.
This statement betrays an unfortunate lack of understanding of the nature of RCTs in education.
“Constructing a reliable educational randomised controlled trial is highly problematic and open to bias.”
Yes, doing rigorous research is challenging.
Yes, all research is open to bias.
Skilled researchers make it their business to recognise and minimise sources of bias, and to report their findings with caution.
“Before Australia decides teachers need to be like doctors, we want to tell you what is happening and give you some reasons why evidence based medicine itself is said to be in crisis.”
The fact that researchers are questioning an approach does not mean it is being thrown out in its entirety. That’s exactly the kind of thinking that has plagued education for decades, as shown by the tendency to adopt fads and fashions with zero research behind them, let alone any supporting evidence. I have written about this here with my colleague, Dr Caroline Bowen.
“Randomised controlled trials are just one kind of evidence.”
And this is news because??
Of course RCTs are only one kind of evidence. That’s like saying Toyota is only one make of car.
“Medicine now recognises a much broader evidence base than just randomised controlled trials.”
This is not news. Medicine has always recognised a range of study designs. What seems to be overlooked in educational discourse, however, is the notion of levels of evidence.
In health, it is recognised that different study designs have different degrees of strength in establishing the efficacy or effectiveness of an approach.
“Other kinds of medical evidence include: practical ‘on-the-job’ expertise; professional knowledge; insights provided by other research such as case studies; intuition; wisdom gained from listening to patient histories and discussions with patients that allow for shared decision-making or negotiation.”
These are obviously important in all fields. They just don’t sit at the top of the hierarchy as to what can be established, replicated, and/or refuted, using the scientific method.
“Privileging randomised controlled trials allows them to become sticks that beat practitioners into uniformity of practice, no matter what their patients want or need. Such practitioners become ‘cookbook’ doctors or, in education, potentially, ‘cookbook’ teachers. The best and most recent forms of evidence based medicine value a broad range of evidence and do not create hierarchies of evidence. Education policy needs to consider this carefully and treat all forms of evidence equally.”
All forms of evidence are not “equal”. This does not mean that they should not all be considered, including expert opinion, but human beings are prone to all kinds of cognitive bias.
Sometimes our intuitions tell us that something “should” work, or even that it “seems” to work, but the scientific evidence counters our intuitions. This is the subject of Andrew Leigh’s book Randomistas.
“Teaching is a feminised profession, with a much lower status than medicine. It is easy for science to exert a masculinist authority over teachers, who are required to be ever more scientific to seem professional. They are called on to be phallic teachers, using data, tools, tests, rubrics, standards, benchmarks, probes and scientific trials, rather than ‘soft’ skills of listening, empathising, reflecting and sharing.”
Where to start with this one? Let me point out a few facts:
Medicine is rapidly becoming feminised, with more females enrolled to study medicine in many Australian universities than males. Does this mean that it will now abandon centuries of commitment to the scientific method? I certainly hope not. Dark Ages, here we come, if it does.
There’s an assumption here that the scientific method would be an imposition on those poor feeble women in teaching, who would not be able to cope with the rigours of its analytic tools. How insulting.
There is no connection between genitalia and the tools of scientific inquiry. This is just silly. What about all those women who conduct (and use) quantitative education research? Where does this leave them?
So-called “soft skills” are just as important in medicine as knowledge of human biosciences, pharmacology, and so on. No-one is suggesting otherwise.
Why can’t doctors and teachers be content experts AND competent consumers of new research? When did it become either-or?
“A Western scientific evidence-base for practice similarly does not value Indigenous knowledges or philosophies of learning. Externally mandated guidelines also negate the concepts of student voice and negotiated curriculum.”
Education and medicine both need to show a deep respect for and understanding of Indigenous knowledge and practices. That does not mean that the Aboriginal man presenting to the Emergency Department with chest pain automatically wants to receive a different type of care from his non-Indigenous counterpart. If the latter receives an immediate ECG and blood tests, then so should the Aboriginal patient. There is a place for the student/patient voice and there is a place for professionals to do what professionals are trained and paid to do.
“While confident doctors know the randomised controlled trial-based statistics and effect sizes need to be read with scepticism, this is not so easy for many teachers. If randomised controlled trial-based guidelines are to rule teaching, teachers will also potentially be monitored for compliance with guidelines they may not fully understand or accept, and which may potentially harm their students.”
If teachers are not confident in interpreting research studies (and I agree they are not), then education faculties need to step up and teach them how to be critical consumers of research – quantitative, qualitative, and mixed methods.
All professionals are monitored for compliance – that’s part of what being a professional means. It is a highly constrained form of public accountability. I have blogged about this previously.
“Evidence based medicine is about populations, not people.”
The fallacy here, of course, is the notion that populations are not made up of people. Evidence-based medicine is about using robust study designs to control a range of sources of error so that appropriate conclusions are drawn. It is then up to the individual practitioner to consider the findings in the course of their clinical decision making on a case-by-case basis. As noted below, this entails consideration of evidence, patient values, and clinical resources. But the evidence part is non-negotiable.
“While medical randomised controlled trials save lives by demonstrating the broad effects of interventions, they make individuals and their needs harder to perceive and respect. Randomised controlled trial-based guidelines can mean that diverse people are forced to conform to simplistic ideals. Rather than starting with the patient, the doctor starts with the rule. Is this what we want for teaching?”
Well, at least we have some acknowledgement here that RCTs can help to save lives!
It is not the role of an RCT to bring individuals into sharp focus. We have many other study designs that do that much better, and they are considered alongside the findings of RCTs in the development of treatment guidelines.
“When medical guidelines are applied in rigid ways, patients can be harmed.”
When any guidelines are applied in rigid ways, people can be harmed. There is nothing illuminating about this statement.
Anyone who is familiar with the pioneering evidence-based medicine work of Dr David Sackett and his colleagues will know that this model emphasises empirical research + patient values + clinical resources. It is not, and never has been, about research evidence alone. Right from the start in medicine, it was emphasised that evidence-based medicine is not a cook-book approach. This is just a straw man.
Interestingly, in its early days, evidence-based practice was seen as an unnecessary imposition on medicine. Now it underpins the way we educate all health professionals, and the community is the beneficiary – both as patients and as tax-payers.
“Trials cannot be done on every single kind of person and so inevitably, many individuals are forced to have treatments that will not benefit them at all, or that are at odds with their wishes and beliefs.”
This is another nonsensical truism. No, we cannot include all kinds of people on planet earth in trials (clinical or educational). The whole purpose of research is that we sample from populations, in an effort to generalise back to the population as closely as possible.
Welcome to Research Methods 101.
Just because rigorous methodologies cannot answer every question, for every patient, every time, does not mean they are not the best horse in the race to back.
“Educators need to ensure that teachers, not bureaucrats or researchers, remain the authority in their classrooms.”
Well, a good way to make a start on this would be for education faculties to equip pre-service teachers with scientifically-derived knowledge and skills on (for example) the teaching of literacy and numeracy, as well as the ability to read and critique new research and make decisions about how this should inform practice.
Teachers cannot speak with authority if they do not know the research behind an approach and the extent to which this is contested. This is why, for example, medical students are taught that prescribing antibiotics for children with middle ear infections is controversial. They know the ground will shift under them over time, as the science changes, and are primed to watch for new evidence as it arises, and adjust their practice accordingly.
This is called accountability.
“Scientific evidence gives rise to gurus.”
A more critical and discerning teaching workforce would counter this in a flash – in the same way that it does in medicine. Gurus flourish where audiences can be easily wooed and charmed by pretty graphs and impressive looking numbers.
“While medical-style guidelines may seem to have come from God, such guidelines, even in medicine, are often multiple and contradictory. The ‘cookbook’ teacher will always be chasing the latest guideline, disempowered by top-down interference in the classroom.”
Yes, this is the nature of scientific evidence. It changes, and is sometimes contradictory. Rather than “chasing” the latest guideline, professionals need to avail themselves of new evidence and work out how it should influence their practice.
This is called accountability.
“In medicine, over five years, fifty percent of guideline recommendations are overturned by new evidence. A comparable situation in education would create unimaginable turmoil for teachers.”
The paper linked to here states: “This investigation sheds light on low-value practices and patterns of medical research”. Wouldn’t this be a good thing in education too? That way, we might never go down the Brain Gym, coloured lenses, learning styles, multiple intelligences, left brain-right brain, brain-based learning (etc.) time-wasting and expensive rabbit holes that education is so fond of.
One of the challenges of living in a knowledge economy is that information changes. We all have an obligation to keep up as much as possible. Choosing your own adventure, whether as a doctor or a teacher, is not acceptable to the community.
“Evidence-based practice risks conflicts of interest.”
Then let’s be careful not to throw the baby out with the bath water. The more discerning and informed teachers (and doctors) are, the less prone they will be to commercial interests. This is part of the imperfect world in which we live and is not a reason to abandon the scientific method.
There are plenty of commercial interests at work in classrooms around the world today, regardless of the level of evidence underpinning the teaching that is occurring.
“Randomised controlled trials in medicine routinely produce outcomes that are to the benefit of industry. Only certain trials get funded. Much unfavourable research is never published. Drug and medical companies set agendas rather than responding to patient needs, in what has been described as a guideline ‘factory’.”
These are all legitimate concerns about health research that need to be managed. They are not reasons to abandon the scientific method. See babies and bath water, above.
“Do we want what happens in classrooms to be dictated by profit-driven companies, or student-centred teachers?”
As noted above, there are plenty of profit-making companies doing their thing in classrooms around the world right now, cashing in on the fact that teachers are a soft target for approaches with a slick marketing spin, and a few researchy-sounding words in the glossy brochure and on the equally glossy box.
The purpose of having a more research-informed teaching workforce is being able to head off snake-oil salespeople at the school gate.
“We call for an urgent halt to the imposition of ‘evidence-based’ education on Australian teachers, until there is a fuller understanding of the benefits and costs of narrow, statistical evidence-based practice. In particular, education needs protection from the likely exploitation of evidence-based guidelines by industries with vested interests.”
Ironically, such a halt wouldn’t cause a great deal of disruption, given the limited extent to which evidence-based practice has genuinely found its way into education discourse.
“Rather than removing teacher agency and enforcing subordination to gurus and data-based cults, education needs to embrace a wide range of evidence and reinstate the teacher as the expert who decides whether or not a guideline applies to each student.”
Perhaps we need to consider the possibility of enhanced teacher agency, in a world where teachers are knowledgeable and confident consumers of new research, by virtue of their grasp of research methodologies and critical appraisal skills?
We can’t consider “a wide range of evidence” while disregarding evidence from RCTs, and not understanding the notion of levels of evidence means that equal weight is inappropriately assigned to a single case study and a meta-analysis of several RCTs. They all contribute to the understanding of an issue, but not necessarily equally on a study-by-study basis.
One of the key differences between medicine and education is that doctors frequently need to gain informed consent for their actions. In education, however, consent is implied. Students cannot give or withhold their consent for a particular instructional approach. It just comes their way, like it or not. This only serves to increase, not decrease, the ethical burden on teachers to teach in ways that are supported by strong empirical evidence.
“Pretending teachers are doctors, without acknowledging the risks and costs of this, leaves students consigned to boring, standardised and ineffective cookbook teaching. Do we want teachers to start with a recipe, or the person in front of them?”
No-one is pretending teachers are doctors. But if they want to be afforded at least some professional autonomy, then they have to accept professional accountability, just like other professions do. We all need to acknowledge, however, that no profession is completely autonomous, least of all medicine. We all need to be accountable to our “consumers”, our employers, our professional bodies, and the community.
No, we don’t want recipes, but nor do we want random chaos, and the wild west of everyone choosing their own adventure, either. The Age of Enlightenment created some enduring legacies that we would all do well to hang on to.
As I have stated previously:
“Education and medicine, for example, have a great deal in common; they both concern people, interactions between people, complex co-occurrences, and hard-to-control (actually impossible to control) variables, such as race, gender, ethnicity, religion, intelligence, empathy, sometimes unpredictable and seemingly inexplicable behaviour, resource limitations, and the need to establish trust and rapport.
Most importantly, both have to deal with uncertainty, coupled with a weight of responsibility and accountability to communities, peers, and policy-makers for outcomes.”
Everybody Cares About Using Education Research Sometimes (2025)
“In terms of a future research agenda, it would be helpful to dig deeper into what intermediaries understand by “evaluation” in different categories, and what would motivate them to conduct evaluations. A mandate to conduct systematic or rigorous evaluation is a good baseline – but is still missing in many organisations and systems. Beyond this, good quality evaluation is arguably more likely to arise from motivation that goes beyond accountability mechanisms (e.g. evaluation because it is required by a funder). There is also scope for much more conceptual and empirical experimentation when it comes to developing measurements for complex social indicators that show the impact of intermediary activities. The work that has been done suggests measuring these indicators may never be systematic in the same way that measuring simpler indicators can be. However, such work is still needed to improve existing practice and build the evidence base on which new or emerging intermediaries can draw.
Education research plays a critical role in shaping effective policies and practices that empower individuals and societies to thrive in an ever-evolving world. The Centre for Educational Research and Innovation (CERI) remains committed to supporting countries with timely research insights and forward-thinking approaches, helping them design education systems that are both robust and future-ready. Central to CERI's mission is the acknowledgment of education research as a key pillar for achieving quality education. The Strengthening the Impact of Education Research project has been a vital part of this mission, providing countries with evidence-based insights to enhance the use of research in policy making and practice.
“Science is much more than a body of knowledge. It is a way of thinking. […] Science invites us to let the facts in, even when they don't conform to our preconceptions. […] This kind of thinking… is an essential tool for a democracy in an age of change” (Sagan, 1990). Engaging with evidence involves stepping out of one’s comfort zone, critically reflecting on one’s actions, and being prepared to confront prior assumptions with an open mind. The push for education systems to embrace evidence has been around for more than a quarter of a century. Yet, despite countries' significant investments over the past decades, basing decisions on sound evidence seems to be a persistent problem for policy makers and practitioners.”
OECD (2025). Everybody Cares About Using Education Research Sometimes: Perspectives of Knowledge Intermediaries, Educational Research and Innovation, OECD Publishing, Paris, https://doi.org/10.1787/5ef88972-en
Framework Programme for Empirical Education Research (2024)
“Together for better education
Good education is key to individual and societal development. It is the most important resource for a self-determined lifestyle, for personal development and social participation. It strengthens progress, democracy and prosperity and is a basic prerequisite for finding constructive and creative answers to societal change and its current and future challenges, such as the digital and social-ecological transformation. With scientifically sound findings on education and educational biographies, empirical educational research is an essential building block for enabling the best possible education and participation for everyone.
With the first two Framework Programmes for Empirical Educational Research (2007 – 2024), the Federal Ministry of Education and Research (BMBF) has made a decisive contribution to establishing empirical educational research in Germany and to scientifically addressing current challenges in education and society. Over the next seven years, the framework programme’s focus will be on providing research findings for tackling key challenges in the education sector and on strengthening the impact of the funded research. With this focus, we aim to initiate targeted further development of the education system. Research funding in the framework programme is aligned, bundled and tailored to produce innovative, reproducible and actionable knowledge. This strengthens the evidence-orientation of decision-making in education policy and contributes to a sustainable improvement in the quality of educational practice.
What we want to achieve
The educational goals of the Framework Programme for Empirical Educational Research are to enable the best opportunities for education and participation for everyone and to apply education as a resource for the development and cohesion of our society and for prosperity.
We fund educational research that provides practitioners, administrators and policymakers with a knowledge base to guide their actions in order to achieve these goals. Educational research in the framework programme identifies challenges and seeks new approaches and solutions. To this end, it investigates educational practices and systems. It analyses how learning processes and teaching methods work and thus identifies potential for improvement and develops new ideas. Finally, it addresses the question of how innovations can find their way into educational practice and into the education system. Against this background, our research and funding policy is guided by central objectives. With these objectives, we ensure that research findings sustainably impact educational practice, administration and policy and that they contribute to the development of a future-proof education system.
Objectives of our research and funding policy:
We expand scientifically excellent and applicable educational research in Germany. We create knowledge as a basis for a future-proof education system. We strengthen the cooperation between research and practice and emphasize transfer in both directions. In this way, research results will be even more compatible with everyday needs and requirements in the education sector. We support innovative approaches and new methods, but also fund outstanding basic research - provided that this is important and appropriate for the field of research.”
“[…] (Learning environments, educational success and social participation) in the Framework Programme for Empirical Educational Research. Within the scope of the research priority, this is expected to provide a comprehensive survey of the social living environment of education participants – taking account of the differing living conditions in both urban and rural areas. Until now, educational research has largely been concerned with ‘places of learning’ in the narrower institutional sense – such as day-care centres and schools. When considering the causes of educational inequalities, the focus was often on development of competence and on educational decisions made within schools and families and/or at the level of the school system. The fact that education and learning processes do not only take place at kindergarten and school, or that they are in constant interaction with learning environments outside of school, was less frequently considered. The significance of the living environments for the educational acquisition processes of children and young people will therefore be investigated more intensively within the scope of this funding directive. To this end, research projects will be funded which, in close cooperation between researchers and educational practitioners, examine the interactions between different learning environments and identify approaches that contribute to overcoming educational barriers.”
BMBF (2024). Framework Programme for Empirical Education Research. Federal Ministry of Education and Research [BMBF], Bonn.
Use of research-based information by school practitioners and determinants of use (2012)
“The trend towards using research knowledge to improve policies and practices is on the rise. However, despite considerable effort and notable progress in recent years, it seems that school practitioners continue to make little use of research and it is not clear what conditions would facilitate or obstruct this use. This review focuses exclusively on the available empirical research about (a) the use of research by school practitioners and (b) the determinants of use, and identifies future directions for research.”
Dagenais, C., Lysenko, L., Abrami, P. C., Bernard, R. M., Ramde, J., & Janosz, M. (2012). Use of research-based information by school practitioners and determinants of use: a review of empirical research. Evidence & Policy, 8(3), 285-309. Retrieved May 23, 2025, from https://doi.org/10.1332/174426412X654031
Supporting teachers to use research evidence well in practice (2023).
“Nearly 1,400 school teachers and leaders were surveyed across all Australian states and territories in 2020 and 2021.
AERO and the Q Project used the survey data to examine the relationship between support provided by schools, confidence in using research evidence and use of research evidence in practice. This report therefore uses empirical data to examine the importance of school support, and the role that confidence plays in the use of research evidence.
We found that when teachers and leaders are supported by their school to use research evidence, they are:
• more confident in using research evidence
• more likely to use research evidence in their practice.
The support can include being given dedicated time, professional learning, and access to a coach or school leader in a specialised research dissemination role or with specialised research expertise.
Many teachers and leaders indicated that their school provides support to use research evidence.
While teachers and leaders believed in the value of using research evidence, research evidence was not often used to help improve knowledge or practice or to make decisions.
Teachers and leaders had some concerns about their abilities to assess or analyse research evidence. These confidence gaps represent important opportunities to target improvement initiatives in schools.
Context
Increasingly, teachers, leaders, schools and school systems are becoming aware that using evidence generated from academic research (also referred to as research evidence) can improve their practice (Nelson and Campbell 2019). When school leaders and teachers engage with research evidence, their own teaching skills can improve (for example, Bell et al. 2010; Godfrey 2016), and learning and school performance outcomes also improve (for example, Mincu 2014; Supovitz 2015; Rose et al. 2017). These findings provide powerful reasons to use research evidence to inform practice.
But using research evidence well and incorporating it into practice is complex and demanding work.
‘Evidence is important, but what is more important is […] teacher expertise and professionalism [to] make better judgments about when, and how, to use research.’ (Wiliam 2019)
This ‘expertise and professionalism’ requires teachers and school leaders to value research evidence and to want to use it to improve their practice. It also requires teachers and leaders to have confidence in their own skills and knowledge to engage with research evidence (Rickinson et al. 2021a). Recent Australian studies have made clear that teachers and leaders require support and resources to do this (for example, Parker et al. 2020; Prendergast and Rickinson, 2019; Rickinson et al. 2021b; Walsh et al. 2022).”
So, beginning below is my original document, written in about 2002. Will it differ from the recent material from 2020 to 2025?
Will Education Ever Embrace Empirical Research?
Hempenstall, K. (2002). Will Education Ever Embrace Empirical Research? Direct Instruction News, 13.
“Abstract: Teaching has suffered both as a profession in search of community respect and as a force for improving the nation’s social capital because of its failure to adopt the results of empirical research as the major determinant of its practice.
There are a number of reasons why this has occurred, among them a science-aversive culture endemic among education policymakers and teacher education faculties. There are signs that change may be afoot in several countries. The Australian National Inquiry into the Teaching of Literacy has pointed to, and urged us to follow, a direction similar to that taken recently in Great Britain and the U.S. towards evidence-based practice.
However, the generally low quality of much educational research in the past has made the process of evaluating the evidence difficult, particularly for those teachers who have not had the training to discriminate sound from unsound research designs. Fortunately, there are a number of august bodies that have performed a sifting process to assist judging the value of research on important educational issues.”
“Teachers have been under increasing media fire in recent times. Too many students are failing, we read. The achievement gap appears insurmountable. Current teachers are not sufficiently well trained to teach successfully. Our brightest young people are not entering the teaching profession.
So, how should a nation respond? Education has a history of regularly adopting new ideas, but it has done so without the wide-scale assessment and scientific research that is necessary to distinguish effective from ineffective reforms. This absence of a scientific perspective has precluded systematic improvement in the education system, and it has impeded growth in the teaching profession for a long time (Carnine, 1995a; Hempenstall, 1996; Marshall, 1993; Stone, 1996).
Since that time, a consensus has developed among empirical researchers about a number of effectiveness issues in education, and a great deal of attention (Gersten, Chard, & Baker, 2000) is now directed at means by which these research findings can reach fruition in improved outcomes for students in classrooms. Carnine (2000) noted that education continues to be impervious to research on effective practices, and he explored differences between education and other professions, such as medicine, that are strongly wedded to research as the major practice informant.
Evidence-based medicine became well known during the 1990s. It enables practitioners to gain access to knowledge of the effectiveness and risks of different interventions, using reliable estimates of benefit and harm as a guide to practice. There is strong support within the medical profession for this direction, because it offers a constantly improving system that provides better health outcomes for their patients. Thus, increased attention is being paid to research findings by medical practitioners in their dealings with patients and their medical conditions. Practitioners have organizations, such as Medline (http://medline.cos.com) and the Cochrane Collaboration (www.cochrane.org), that perform the role of examining research, employing criteria for what constitutes methodologically acceptable studies.
They then interpret the findings and provide a summary of the current status of various treatments for various medical conditions. Thus, practitioners have the option of accepting predigested interpretations of the research or of performing their own examinations. This latter option presumes that they have the time and expertise to discern high quality from lesser research. Their training becomes a determinant of whether the latter is likely to occur. In a parallel initiative during the 1990s, the American Psychological Association (Chambless & Ollendick, 2001) introduced the term empirically supported treatments as a means of highlighting differential psychotherapy effectiveness.
Prior to that time, many psychologists saw themselves as developing a craft in which competence arises through a combination of personal qualities, intuition, and experience. The result was extreme variability of effect among practitioners. The idea was to devise a means of rating therapies for various psychological problems, and for practitioners to use these ratings as a guide to practice.
The criteria for a treatment to be considered well established included efficacy through two controlled clinical outcomes studies or a large series of controlled single case design studies, the availability of treatment manuals to ensure treatment fidelity, and the provision of clearly specified client characteristics.
A second level involved criteria for probably efficacious treatments. These criteria required fewer studies, and/or a lesser standard of rigor. The third category comprised experimental treatments, those without sufficient evidence to achieve probably efficacious status. The American Psychological Association’s approach to empirically supported treatments could provide a model adaptable to the needs of education. There are great potential advantages to the education system when perennial questions are answered. What reading approach is most likely to evoke strong reading growth? Should “social promotion” be used or should retentions be increased? Would smaller class sizes make a difference? Should summer school programs be provided to struggling students? Should kindergarten be full day? What are the most effective means of providing remediation to children who are falling behind?
Even in psychology and medicine, however, it should be noted that 15 years later there remain pockets of voluble opposition to the evidence-based practice initiatives.
The first significant indication of a similar movement in education occurred with the Reading Excellence Act (The 1999 Omnibus Appropriations Bill, 1998), which was introduced as a response to the unsatisfactory state of reading attainment in the U.S.
It acknowledged that part of the cause was the prevailing method of reading instruction, and that literacy policies had been insensitive to developments in the understanding of the reading process. The Act, and its successors, attempted to bridge the gulf between research and classroom practice by mandating that only programs in reading that had been shown to be effective according to strict research criteria would receive federal funding. This reversed a trend in which the criterion for adoption of a model was that it met preconceived notions of “rightness” rather than that it was demonstrably effective for students. Federal funding is now available only for programs with demonstrated effectiveness evidenced by reliable replicable research. Reliable replicable research was defined as objective, valid, scientific studies that: (a) include rigorously defined samples of subjects that are sufficiently large and representative to support the general conclusions drawn; (b) rely on measurements that meet established standards of reliability and validity; (c) test competing theories, where multiple theories exist; (d) are subjected to peer review before their results are published; and (e) discover effective strategies for improving reading skills (The 1999 Omnibus Appropriations Bill, 1998).
The National Research Council’s Center for Education (Towne, 2002) suggests that educators should attend to research that: (a) poses significant questions that can be investigated empirically; (b) links research to theory; (c) uses methods that permit direct investigation of the question; (d) provides a coherent chain of rigorous reasoning; (e) replicates and generalizes; and (f) ensures transparency and scholarly debate. The Council’s message is clearly to improve the quality of educational research, and reaffirm the link between scientific research and educational practice.
Ultimately, the outcomes of sound research should inform educational policy decisions, just as a similar set of principles have been espoused for the medical profession.

In Great Britain, similar concerns to those evident in the U.S. led to a National Literacy Strategy (Department for Education and Employment, 1998) that mandates practice based upon research findings. In 2006, the Primary Framework for Literacy and Mathematics (Primary National Strategy, 2006) was released, updating its predecessor and aligning practice even more firmly with an evidence base.

In Australia, the National Inquiry into the Teaching of Literacy (2005) also reached similar conclusions about the proper role of educational research: “Teaching, learning, curriculum, and assessment need to be more firmly linked to findings from evidence-based research indicating effective practices, including those that are demonstrably effective for the particular learning needs of individual children” (p. 9). It recommends a national program to produce evidence-based guides for effective teaching practice, the first of which is to be on reading. In all, the Report used the term “evidence based” 48 times. For example, they argued strongly for empirical evidence to be used to improve the manner in which reading is taught in Australia:

In sum, the incontrovertible finding from the extensive body of local and international evidence-based literacy research is that for children during the early years of schooling (and subsequently if needed), to be able to link their knowledge of spoken language to their knowledge of written language, they must first master the alphabetic code—the system of grapheme-phoneme correspondences that link written words to their pronunciations. Because these are both foundational and essential skills for the development of competence in reading, writing and spelling, they must be taught explicitly, systematically, early, and well. (p. 37)

Slavin (2002) considers that the acceptance of such initiatives will reduce the pendulum swings that have characterized education thus far, and could produce revolutionary consequences in increasing educational success generally, and redressing educational achievement differences within our community. So, the implication is that education and research have not been adequately linked in these countries. Why has education been so slow to attend to research as a source of practice knowledge? The fields that have displayed unprecedented development over the last century, such as medicine, technology, transportation, and agriculture, have been those embracing research as the prime determinant of practice (Shavelson & Towne, 2002).

Carnine (1991) argued that the leadership has been the first line of resistance. He described educational policy-makers as lacking a scientific framework, and thereby inclined to accept proposals based on good intentions and unsupported opinions. Professor Cuttance, director of the Melbourne University’s Centre for Applied Educational Research, was equally blunt: “Policy makers generally take little notice of most of the research that is produced, and teachers take even less notice of it …” (Cuttance, 2005, p. 5). Carnine (1995b) also points to teachers’ lack of training in seeking out and evaluating research for themselves.
Their training institutions have not developed a research culture, and tend to view teaching as an art form, in which experience, personality, intuition, or creativity are the sole determinants of practice. For example, he estimates that fewer than one in 200 teachers are experienced users of the ERIC educational database. Taking a different perspective, Meyer (1991, cited in Gable & Warren, 1993) blames the research community for being too remote from classrooms. She argued that teachers will not become interested in research until its credibility is improved.
Research is often difficult to understand, and the careful scientific language and cautious claims may not have the same impact as the wondrous claims of ideologues and faddists unconstrained by scientific ethics. Fister and Kemp (1993) considered several obstacles to research-driven teaching, important among them being the absence of an accountability link between decision-makers and student achievement. Such a link was unlikely until recently, when regular mandated state or national testing program results became associated with funding. They also apportion some responsibility to the research community for failing to appreciate the necessity of adequately connecting research with teachers’ concerns. The specific criticisms included a failure to take responsibility for communicating findings clearly, and with the end-users in mind. Researchers have often validated practices over too brief a time frame, and in too limited a range of settings to excite general program adoption across settings. Without considering the organizational ramifications (such as staff and personnel costs) adequately, the viability of even the very best intervention cannot be guaranteed.

The methods of introduction and staff training in innovative practices can have a marked bearing on their adoption and continuation. Woodward (1993) pointed out that there is often a culture gulf between researchers and teachers. Researchers may view teachers as unnecessarily conservative and resistant to change; whereas, teachers may consider researchers as unrealistic in their expectations and lacking in understanding of the school system and culture. Teachers may also respond defensively to calls for change because of the implied criticism of their past practices, and the perceived devaluation of the professionalism of teachers. Leach (1987) argued strongly that collaboration between change-agents and teachers is a necessary element in the acceptance of novel practice. In his view, teachers need to be invited to make a contribution that extends beyond solely the implementation of the ideas of others.

There are some positive signs that such a culture may be in the early stages of development. Viadero (2002) reports on a number of initiatives in which teachers have become reflective of their own work, employing both quantitative and qualitative tools. She also notes that the American Educational Research Association has a subdivision devoted to the practice.

Some have argued that science has little to offer education, and that teacher initiative, creativity, and intuition together provide the best means of meeting the needs of students. For example, Weaver considers scientific research offers little of value to education (Weaver et al., 1997): “It seems futile to try to demonstrate superiority of one teaching method over another by empirical research” (Weaver, 1988, p. 220). These writers often emphasize the uniqueness of every child as an argument against instructional designs that presume there is sufficient commonality among children to enable group instruction with the same materials and techniques.

Others have argued that teaching itself is ineffectual when compared with the impact of socioeconomic status and social disadvantage (Coleman et al., 1966; Jencks et al., 1972). Smith (1992) argued that only the relationship between a teacher and a child was important in evoking learning. Further, he downplayed instruction in favour of a naturalist perspective: “Learning is continuous, spontaneous, and effortless, requiring no particular attention, conscious motivation, or specific reinforcement” (p. 432). Still others view research as reductionist, and unable to encompass the holistic nature of the learning process (Cimbricz, 2002; Poplin, 1988).

What sorts of consequences have arisen in other fields from failure to incorporate the results of scientific inquiry? Galileo observed moons around Jupiter in 1610. Francesco Sizi’s armchair refutation of such planets was:

There are seven windows in the head, two nostrils, two ears, two eyes, and a mouth. So in the heavens there are seven—two favourable stars, two unpropitious, two luminaries, and Mercury alone undecided and indifferent. From which and many other similar phenomena of nature such as the seven metals, etc., we gather that the number of planets is necessarily seven. We divide the week into seven days, and have named them from the seven planets. Now if we increase the number of planets, this whole system falls to the ground. Moreover, the satellites are invisible to the naked eye and therefore can have no influence on the earth and therefore would be useless and therefore do not exist (Holton & Roller, 1958, as cited in Stanovich, 1996, p. 9).

Galileo taught us the value of controlled observation, whilst Sizi highlighted the limitations of armchair theorizing.

The failure to incorporate empirical findings into practice can have far-reaching consequences. Even medicine has had only a brief history of attending to research. Early in the 20th century, medical practice was at a similar stage to that of education currently. For example, it was well known that bacteria played a critical role in infection, and 50 years earlier Lister had shown the imperative of antiseptic procedures in surgery. Yet, in this early period of the century, surgeons were still wiping instruments on whatever unsterilized cloth was handy, with dire outcomes for their patients.

More recently, advice from paediatrician Dr. Benjamin Spock to have infants sleep face down in their cots caused approximately 60 thousand deaths from Sudden Infant Death Syndrome in the U.S., Great Britain and Australia between 1974 and 1991, according to researchers from the Institute of Child Health in London (Dobson & Elliott, 2005). His advice was not based upon any empirical evidence, but rather armchair analysis. The book, Baby and Child Care (Spock, 1946), was extraordinarily influential, selling more than 50 million copies. Yet, while the book continued to espouse this practice, reviews of risk factors for SIDS by 1970 had noted the risks of infants sleeping face down. In the 1990s, when public campaigns altered this practice, the incidence of SIDS death halved within one year. In recent times, more and more traditional medical practices are being subjected to empirical test as the profession increasingly establishes its credibility.

Are there examples in education in which practices based solely upon belief, unfettered by research support, have been shown to be incorrect, and have led to unhelpful teaching?
• Learning to read is as natural as learning to speak (National Council of Teachers of English, 1993).
• Children do not learn to read in order to be able to read a book, they learn to read by reading books (NZ Ministry of Education, as cited in Mooney, 1988).
• Parents reading to children is sufficient to evoke reading (Fox, 2005).
• Good readers skim over words rather than attending to detail (Goodman, 1985).
• Fluent readers identify words as ideograms (Smith, 1973).
• Skilled reading involves prediction from context (Emmitt, 1996).
• English is too irregular for phonics to be helpful (Smith, 1999).
• Accuracy is not necessary for effective reading (Goodman, 1974).
• Good spelling derives simply from the act of writing (Goodman, 1989).
These assertions have influenced educational practice for the last 20 years, yet they have each been shown by research to be incorrect (Hempenstall). The consequence has been an unnecessary burden upon struggling students to manage the task of learning to read. Not only have they been denied helpful strategies, but they have been encouraged to employ moribund strategies.

Consider this poor advice from a newsletter to parents at a local school. If your child has difficulty with a word:
• Ask your child to look for clues in the pictures.
• Ask your child to read on or reread the passage and try to fit in a word that makes sense.
• Ask your child to look at the first letter to help guess what the word might be.
When unsupported beliefs guide practice, we risk deleterious inconsistency at the individual teacher level and disaster at the education system level.

There are several groups with whom researchers need to be able to communicate if their innovations are to have a chance of adoption. At the classroom level, teachers are the focal point of such innovations and their competent and enthusiastic participation is required if success is to be achieved. At the school administration level, principals are being given increasing discretion as to how funds are to be disbursed; therefore, time spent in discussing educational priorities, and cost-effective means of achieving them, may be time well-spent, bearing in mind Gersten and Guskey’s (1985) comment on the importance of strong instructional leadership. At the broader system level, decision makers presumably require different information, and assurances about the viability of change of practice. Publishers of educational texts, a further group, have not typically been viewed as allies among those seeking educational textbook reform to better reflect rigorous empirically-based standards.

Perhaps because of frustration at the problems experienced in ensuring effective practices are employed across the nation, we are beginning to see a top-down approach, in which research-based educational practices are either mandated, as in Great Britain (Department for Education and Employment, 1998), or made a pre-requisite for funding, as in the 2001 No Child Left Behind Act (U.S. Department of Education, 2002).
Whether this approach will be successful in changing teachers’ practice remains to be seen. In any case, there remains a desperate need to address teachers’ and parents’ concerns regarding classroom practice in a cooperative and constructive manner. In Australia, pressure for change is building, and the view of teaching as a purely artisan activity is being challenged. Reports such as that by the National Inquiry into the Teaching of Literacy (2005) have urged education to adopt the demeanor and practice of a research-based profession. State and national testing has led to greater transparency of student progress, and, thereby, to increased public awareness.
Government budgetary vigilance is greater than in the past, and measurable outcomes are the expectation from a profession that has not previously appeared enthused by formal testing. A further possible spur occurred when a parent successfully sued a private school for a breach of the Trade Practices Act (Rood & Leung, 2006). She argued that it had failed to deliver on its promise to address her son’s reading problems.
Reacting to these various pressures, in 2005 the National Institute for Quality Teaching and School Leadership began a process for establishing national accreditation of pre-service teacher education. The Australian Council for Educational Research is currently evaluating policies and practices in pre-service teacher education programs in Australia. The intention is to raise and monitor the quality of teacher education programs around the nation. There are other sources of active and passive resistance to the changes implied in instructional reform. These have been evident in each of the three countries mentioned, and include teacher education faculties, publishers, and various teacher organizations and unions. The recent travails threatening the Reading First initiative in the U.S. have brought into sharp relief the level of resistance that reform can evoke.
There is another stumbling block to the adoption of evidence-based practice. Is the standard of educational research generally high enough to enable sufficient confidence in its findings? Broadly speaking, some areas (such as reading) invite confidence, whereas the quality of research in other areas cannot dispel uncertainty. Partly, this is due to a preponderance of short-term, inadequately designed studies. When Slavin (2004) examined the American Educational Research Journal over the period 2000-03, only 3 of 112 articles reported experimental/control comparisons in randomized studies with reasonably extended treatments.

The National Reading Panel (2000) selected research from the approximately 100,000 reading research studies published since 1966, and another 15,000 published before that time. The panel selected only experimental and quasi-experimental studies, and among those considered only studies meeting rigorous scientific standards in reaching its conclusions. Phonemic awareness: of 1,962 studies, 52 met the research methodology criteria; phonics: of 1,373 studies, 38 met the criteria; guided oral reading: of 364 studies, 16 met the criteria; vocabulary instruction: of 20,000 studies, 50 met the criteria; comprehension: of 453 studies, 205 met the criteria (U.S. Department of Education, 2003).
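The severity of that screen is easier to appreciate as percentages. The short sketch below simply derives inclusion rates from the counts cited above; it illustrates the arithmetic only, and is not part of the Panel's own analysis.

```python
# Inclusion rates for the National Reading Panel's (2000) methodological
# screen, derived from the study counts cited in the text above.
screened = {
    "phonemic awareness": (52, 1962),
    "phonics": (38, 1373),
    "guided oral reading": (16, 364),
    "vocabulary instruction": (50, 20000),
    "comprehension": (205, 453),
}

for area, (met, total) in screened.items():
    print(f"{area}: {met} of {total} studies met the criteria "
          f"({100 * met / total:.1f}%)")
```

In most areas, fewer than 5% of published studies were methodologically sound enough to inform the Panel's conclusions.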
So, there is certainly a need for educational research to become more rigorous in the future. In the areas in which confidence is justified, how might we weigh the outcomes of empirical research? Stanovich and Stanovich (2003) propose that competing claims to knowledge should be evaluated according to three criteria. First, findings should be published in refereed journals. Second, the findings should have been replicated by independent researchers with no particular stake in the outcome. Third, there should be a consensus within the appropriate research community about the reliability and validity of the various findings: the converging evidence criterion. Although the use of these criteria does not produce infallibility, it does offer better consumer protection against spurious claims to knowledge.
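Restated as a crude decision rule, the three criteria amount to a conjunction: all must hold before a claim earns provisional trust. The sketch below is merely my restatement; the thresholds and field names are invented, not the authors'.

```python
# A crude restatement of Stanovich and Stanovich's (2003) three criteria
# as a checklist. Field names and thresholds are invented for this sketch.
from dataclasses import dataclass

@dataclass
class Claim:
    refereed_publications: int      # peer-reviewed reports of the finding
    independent_replications: int   # replications by researchers with no stake
    community_consensus: bool       # converging-evidence judgment of the field

def meets_evidence_criteria(claim: Claim) -> bool:
    """All three criteria must hold; passing is protection, not proof."""
    return (claim.refereed_publications >= 1
            and claim.independent_replications >= 2
            and claim.community_consensus)

print(meets_evidence_criteria(Claim(3, 2, True)))   # True
print(meets_evidence_criteria(Claim(5, 0, False)))  # False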
Without research as a guide, education systems are prey to all manner of gurus, publishing house promotions, and ideologically-driven zealots. Gersten (2001) laments that teachers are “deluged with misinformation” (p. 45). Unfortunately, education courses have not provided teachers with sufficient understanding of research design to enable the critical examination of research. In fact, several whole language luminaries (prominent influences in education faculties over the past 20 years) argued that research was unhelpful in determining practice (Hempenstall, 1999).
Teachers-in-training need to be provided with a solid understanding of research design to adapt to the changing policy emphasis (National Inquiry into the Teaching of Literacy, 2005). For example, in medicine, psychology, and numerous other disciplines, randomized controlled trials are considered the gold standard for evaluating an intervention’s effectiveness.
Training courses in these professions include a strong emphasis on empirical research design. In education, however, there is evidence that the level of quantitative research preparation has diminished in some teacher education programs over the past 20 years (Lomax, 2004). There is much to learn about interpreting other forms of research, too. But are there any immediate shortcuts to discerning the gold from the dross? If so, where can one find information about any areas of consensus?
Those governments that have moved toward a pivotal role for research in education policy have usually formed panels of prestigious researchers to peruse the evidence in particular areas and report their findings widely (e.g., National Reading Panel, 2000). These panels assemble all the methodologically acceptable research and synthesize the results, using statistical processes such as meta-analysis, to enable judgments about effectiveness. Meta-analysis involves pooling the results from many studies to produce a large data set that reduces the statistical uncertainty that inevitably accompanies single studies. The recommendations for practice produced by these bodies are thus valuable resources in answering the question: What works?
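A minimal sketch of that pooling, under the standard fixed-effect model: each study's effect size is weighted by the inverse of its sampling variance, and the pooled estimate ends up more precise than any single study's. All effect sizes and variances below are invented for illustration.

```python
import math

# Fixed-effect meta-analysis sketch: pool per-study effect sizes using
# inverse-variance weights. All numbers are invented for illustration.
effect_sizes = [0.45, 0.62, 0.30, 0.55]  # standardized mean differences
variances    = [0.04, 0.09, 0.02, 0.06]  # sampling variance of each estimate

weights = [1.0 / v for v in variances]
pooled = sum(w * es for w, es in zip(weights, effect_sizes)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))  # smaller than any single study's SE

print(f"pooled ES = {pooled:.2f}, SE = {pooled_se:.2f}")
print(f"95% CI: [{pooled - 1.96 * pooled_se:.2f}, {pooled + 1.96 * pooled_se:.2f}]")
```

Real syntheses add homogeneity checks and random-effects models, but the weighting principle is the same: many small studies, combined, constrain the estimate far better than any one of them alone.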
These groups include the National Reading Panel, American Institutes for Research, National Institute for Child Health and Human Development, The What Works Clearinghouse, and the Coalition for Evidence-Based Policy. A fuller list with Web addresses can be found in the appendix. As an example, Lloyd (2006) summarizes a number of such meta-analyses for various approaches. On this metric, an effect size of 0.2 is considered small, 0.5 medium, and 0.8 large (Cohen, 1988).
For early intervention programs, there were 74 studies, 215 effect sizes, and an overall effect size (ES) = 0.6. For Direct Instruction (DI), there were 25 studies, 100+ effect sizes, and an overall ES = 0.82. For behavioral treatment of classroom problems of students with behavior disorder, there were 10 studies, 26 effect sizes, and an overall ES = 0.93. For whole language, there were 180 studies, 637 effect sizes, and an overall ES = 0.09. For perceptual/motor training, there were 180 studies, 117 effect sizes, and an overall ES = 0.08. For learning styles, there were 39 studies, 205 effect sizes, and an overall ES = 0.14.
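For readers unfamiliar with the metric: an effect size of this kind is a standardized mean difference (Cohen's d), the gap between treatment and control group means divided by their pooled standard deviation. The sketch below computes one from invented post-test scores and applies the benchmarks just quoted; the data are illustrative only.

```python
import statistics

def cohens_d(treatment, control):
    """Standardized mean difference between two groups (Cohen's d)."""
    n1, n2 = len(treatment), len(control)
    v1, v2 = statistics.variance(treatment), statistics.variance(control)
    # Pool the two sample variances, weighted by degrees of freedom.
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return (statistics.mean(treatment) - statistics.mean(control)) / pooled_sd

# Invented post-test reading scores, for illustration only.
treatment_scores = [78, 85, 80, 90, 84, 88, 79, 86]
control_scores   = [75, 82, 77, 88, 80, 85, 76, 84]

d = cohens_d(treatment_scores, control_scores)
label = "small" if d < 0.5 else "medium" if d < 0.8 else "large"
print(f"d = {d:.2f} ({label} by Cohen's benchmarks)")
```

On figures of the kind Lloyd (2006) reports, the contrast is stark: Direct Instruction's ES of 0.82 is a large effect, while whole language, perceptual/motor training, and learning styles all fall well below even the threshold for a small one.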
These sources can provide great assistance, but it is not only the large scale, methodologically sophisticated studies that are worthwhile. A single study involving a small number of schools or classes may not be conclusive in itself, but many such studies, preferably done by many researchers in a variety of locations, can add some confidence that a program’s effects are valid (Slavin, 2003). If one obtains similar positive benefits from an intervention across different settings and personnel, there is added reason to prioritize the intervention for a large gold-standard study. Taking an overview, there are a number of options available to create educational reform.

One involves the use of force, as with the National Literacy Strategy (Department for Education and Employment, 1998) in Great Britain. Another option involves inveigling schools with extra money, as in the U.S. with the No Child Left Behind Act (U.S. Department of Education, 2002). Still another is to inculcate skills and attitudes during teacher training. While these are not mutually exclusive options, the third appears to be a likely component of any reform movement in Australia, given the establishment and objectives of the National Institute for Quality Teaching and School Leadership (2005).
A prediction for the future, perhaps 15 years hence? Instructional approaches will need to produce evidence of measurable gains before being allowed within the school curriculum. Education faculties will have changed dramatically as a new generation takes control. Education courses will include units devoted to evidence-based practice, perhaps through an increased liaison with educational psychology.

Young teachers will routinely seek out and collect data regarding their instructional activities. They will become scientist-practitioners in their classrooms. Student progress will be regularly monitored, and problems in learning will be noticed early and addressed systematically. Overall rates of student failure will fall. More so than any generation before them, the child born today should benefit from rapid advances in the understanding of human development, and of how that development may be optimized.

There has been an explosion of scientific knowledge about the individual in genetics and the neurosciences, but also about the role of environmental influences, such as socio-economic status, early child rearing practices, effective teaching, and nutrition. However, to this point, there is little evidence that these knowledge sources form a major influence on policy and practice in education.

There is a serious disconnect between the accretion of knowledge and its acceptance and systematic implementation for the benefit of this growing generation. Acceptance of a pivotal role for empiricism is actively discouraged by advisors to policymakers whose ideological position decries any influence of science. There are unprecedented demands on young people to cope with an increasingly complex world. It is one in which the sheer volume of information, and the sophisticated persuasion techniques to which they will be subjected, may overwhelm the capacities that currently fad-dominated educational systems can provide for young people. A recognition of the proper role of science in informing policy is a major challenge for us in aiding the new generation.
This perspective does not involve a diminution of the role of the teacher, but rather the integration of professional wisdom with the best available empirical evidence in making decisions about how to deliver instruction (Whitehurst, 2002). Evidence-based policies have great potential to transform the practice of education, as well as research in education.
Evidence-based policies could finally set education on the path toward the kind of progressive improvement that most successful parts of our economy and society embarked upon a century ago. With a robust research and development enterprise and government policies demanding solid evidence of effectiveness behind programs and practices in our schools, we could see genuine, generational progress instead of the usual pendulum swings of opinion and fashion.
This is an exciting time for educational research and reform. We have an unprecedented opportunity to make research matter and to then establish once and for all the importance of consistent and liberal support for high quality research. Whatever their methodological or political orientations, educational researchers should support the movement toward evidence-based policies and then set to work generating the evidence that will be needed to create the schools our children deserve (Slavin, 2002, p.20).
References

Carnine, D. (1991). Curricular interventions for teaching higher order thinking to all students: Introduction to the special series. Journal of Learning Disabilities, 24, 261-269.
Carnine, D. (1995a). Trustworthiness, usability, and accessibility of educational research. Journal of Behavioral Education, 5, 251-258.
Carnine, D. (1995b). The professional context for collaboration and collaborative research. Remedial and Special Education, 16(6), 368-371.
Carnine, D. (2000). Why education experts resist effective practices (and what it would take to make education more like medicine). Washington, DC: Fordham Foundation. Retrieved May 15, 2001, from http://www.edexcellence.net/library/carnine.html
Chambless, D. L., & Ollendick, T. H. (2001). Empirically supported psychological interventions: Controversies and evidence. Annual Review of Psychology, 52, 685-716.
Cimbricz, S. (2002, January 9). State-mandated testing and teachers’ beliefs and practice. Education Policy Analysis Archives, 10(2). Retrieved March 11, 2003, from http://epaa.asu.edu/epaa/v10n2.html
Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum.
Coleman, J., Campbell, E., Hobson, C., McPartland, J., Mood, A., Weinfeld, F. D., & York, R. (1966). Equality of educational opportunity. Washington, DC: Department of Health, Education and Welfare.
Cuttance, P. (2005). Education research ‘irrelevant.’ The Age, July 5, p. 5.
Department for Education and Employment. (1998). The National Literacy Strategy: Framework for Teaching. London: Crown.
Dobson, R., & Elliott, J. (2005). Dr Spock’s advice blamed for cot deaths. London: University College. Retrieved March 11, 2006, from http://www.ucl.ac.uk/news-archive/in-the-news/may-2005/latest/newsitem.shtml?itnmay0504
Emmitt, M. (1996). Have I got my head in the sand? — Literacy matters. In ‘Keys to life’ conference proceedings, Early Years of Schooling Conference, Sunday 26 & Monday 27 May 1996, World Congress Centre, Melbourne (pp. 69-75). Melbourne: Directorate of School Education. Retrieved May 21, 2002, from http://www.sofweb.vic.edu.au/eys/pdf/Proc96.pdf
Fister, S., & Kemp, K. (1993). Translating research: Classroom application of validated instructional strategies. In R.C. Greaves & P. J. McLaughlin (Eds.), Recent advances in special education and rehabilitation. Boston: Andover Medical.
Fox, M. (2005, August 16). Phonics has a phony role in the literacy wars. Sydney Morning Herald, p. 6.
Gable, R. A., & Warren, S. F. (1993). The enduring value of instructional research. In R. Gable & S. Warren (Eds.), Advances in mental retardation and developmental disabilities: Strategies for teaching students with mild to severe mental retardation. Philadelphia: Jessica Kingsley.
Gersten, R. & Guskey, T. (1985, Fall). Transforming teacher reluctance into a commitment to innovation. Direct Instruction News, 11-12.
Gersten, R. (2001). Sorting out the roles of research in the improvement of practice. Learning Disabilities: Research & Practice, 16(1), 45-50.
Gersten, R., Chard, D., & Baker, S. (2000). Factors enhancing sustained use of research-based instructional practices. Journal of Learning Disabilities, 33, 445-457.
Goodman, K. S. (1974, September). Effective teachers of reading know language and children. Elementary English, 51, 823-828.
Goodman, K. S. (1985). Unity in reading. In H. Singer & R. B. Ruddell (Eds.), Theoretical models and processes of reading (pp. 813-840). Newark, DE: International Reading Association.
Goodman, K. S. (1989). Whole language research: Foundations and development. The Elementary School Journal, 90, 208-221.
Hempenstall, K. (1996). The gulf between educational research and policy: The example of Direct Instruction and Whole Language. Behaviour Change, 13, 33-46.
Hempenstall, K. (1999). The gulf between educational research and policy: The example of Direct Instruction and whole language. Effective School Practices, 18(1), 15-29.
Jencks, C. S., Smith, M., Acland, H., Bane, M. J., Cohen, D., Gintis, H., et al. (1972). Inequality: A reassessment of the effect of family and schooling in America. New York: Basic Books.
Leach, D. J. (1987). Increasing the use and maintenance of behaviour-based practices in schools: An example of a general problem for applied psychologists? Australian Psychologist, 22, 323-332.
Lomax, R. G. (2004). Whither the future of quantitative literacy research? Reading Research Quarterly, 39(1), 107-112.
Marshall, J. (1993). Why Johnny can’t teach. Reason, 25(7), 102-106.
Mooney, M. (1988). Developing life-long readers. Wellington, New Zealand: Learning Media.
National Council of Teachers of English. (1993). Elementary school practices. Retrieved January 11, 1999, from http://www.ncte.org/about/over/positions/category/lang/107653.htm
National Inquiry into the Teaching of Literacy. (2005). Teaching reading: National Inquiry into the Teaching of Literacy. Canberra, Australia: Department of Education, Science, and Training. Retrieved February 11, 2006, from www.dest.gov.au/nitl/report.htm
National Institute for Quality Teaching and School Leadership. (2005, August 25). National accreditation of pre-service teacher education. Retrieved October 15, 2005, from http://www.teachingaustralia.edu.au/home/What%20we%20are%20saying/media_release_pre_service_teacher_ed_accreditation.pdf
National Reading Panel. (2000). Teaching children to read: An evidence-based assessment of the scientific research literature on reading and its implications for reading instruction. Washington, DC: U.S. Department of Health and Human Services.
Poplin, M. (1988). The reductionist fallacy in learning disabilities: Replicating the past by reducing the present. Journal of Learning Disabilities, 21, 389-400.
Primary National Strategy. (2006). Primary framework for literacy and mathematics. UK: Department for Education and Skills. Retrieved October 26, from http://www.standards.dfes.gov.uk/primaryframeworks
Rood, D., & Leung, C. C. (2006, August 16). Litigation warning as private school settles complaint over child’s literacy. The Age, p. 6.
Shavelson, R. J., & Towne, L. (Eds.). (2002). Scientific research in education. Washington, DC: National Academy Press.
Slavin, R. E. (2002). Evidence-based education policies: Transforming educational practice and research. Educational Researcher, 31(7), 15-21.
Slavin, R. E. (2003). A reader’s guide to scientifically based research. Educational Leadership, 60(5), 12-16. Retrieved December 16, 2003, from http://www.ascd.org/publications/ed_lead/200302/slavin.html
Slavin, R. E. (2004). Education research can and must address “What Works” questions. Educational Researcher, 33(1), 27-28.
Smith, F. (1973). Psychology and reading. New York: Holt, Rinehart & Winston.
Smith, F. (1992). Learning to read: The never-ending debate. Phi Delta Kappan, 74, 432-441.
Smith, F. (1999). Why systematic phonics and phonemic awareness instruction constitute an educational hazard. Language Arts, 77, 150-155.
Spock, B. (1946). The commonsense book of baby and child care. New York: Pocket Books.
Stanovich, K. (1996). How to think straight about psychology (4th ed.). New York: Harper Collins.
Stanovich, P. J., & Stanovich, K. E. (2003). How teachers can use scientifically based research to make curricular & instructional decisions. Jessup, MD: The National Institute for Literacy. Retrieved September 16, 2003, from http://www.nifl.gov/partnershipforreading/publications/html/stanovich/
Stone, J. E. (1996, April 23). Developmentalism: An obscure but pervasive restriction on educational improvement. Education Policy Analysis Archives. Retrieved November 16, 2001, from http://seamonkey.ed.asu.edu/epaa
The 1999 Omnibus Appropriations Bill (1998). The Reading Excellence Act (pp. 956-1007). Retrieved February 12, 2003, from http://www.house.gov/eeo
The Blueprints for Violence Prevention (http://www.colorado.edu/cspv/blueprints). A national initiative to identify programs that are effective in reducing adolescent violent crime, aggression, delinquency, and substance abuse.
The International Campbell Collaboration (http://www.campbellcollaboration.org/Fralibrary.html). Offers a registry of systematic reviews of evidence on the effects of interventions in the social, behavioral, and educational arenas.
U.S. Department of Education. (2003). Identifying and implementing educational practices supported by rigorous evidence: A user friendly guide. Washington, DC: Institute of Education Sciences, U.S. Department of Education. Retrieved November 1, 2004, from http://www.ed.gov/print/rschstat/research/pubs/rigorousevid/guide/html
Viadero, D. (2002). Research: Holding up a mirror. Editorial Projects in Education, 21(40), 32-35.
Weaver, C. (1988). Reading: Process and practice. Portsmouth, NH: Heinemann.
Weaver, C., Patterson, L., Ellis, L., Zinke, S., Eastman, P., & Moustafa, M. (1997). Big Brother and reading instruction. Retrieved December 16, 2004, from http://www.m4pe.org/elsewhere.htm
Whitehurst, G. J. (2002). Statement of Grover J. Whitehurst, Assistant Secretary for Research and Improvement, before the Senate Committee on Health, Education, Labor and Pensions. Washington, DC: U.S. Department of Education. Retrieved January 23, 2003, from http://www.ed.gov/offices/IES/speeches
Woodward, J. (1993). The technology of technology-based instruction: Comments on the research, development, and dissemination approach to innovation. Education & Treatment of Children, 16, 345-360.