This invaluable 'consumer's guide to research' by Pamela Snow is an excellent place for anyone who wishes to take an astute and critical role in evaluating educational research. Highly recommended.
This web page provides a clear hierarchy for levels of evidence in research and an excellent outline of how to evaluate a research paper. The subject context of this page is nursing, but the information is equally relevant to education research. This information should be mandatory in all teacher training programmes.
This massive analysis of educational research is an excellent starting point for investigating the effectiveness of different approaches. Hattie himself points out that the findings are not meant to be conclusive, but should serve as indicators of where to look more closely. (NB This link will take you to Amazon).
This interview from the Swedish employee magazine educ.alla (Utbildningsförvaltningen, Göteborg) also appeared in the ResearchEd online magazine. Hattie is quizzed on the usefulness of effect sizes, criticisms of his statistical calculations, and the insights research may lend to specific educational debates. Hattie’s justification for training teachers in research:
“It is certainly the case that many do not want to believe evidence as their own ‘experiences’ tell them different. Research starts from the premise of attempting to falsify your pet theories and many parents, teachers and politicians work from the premise of attempting to find beliefs and evidence to support their prior beliefs. This confirmation bias means they spend millions of dollars on the wrong issues, and thus do major damage to the learning of our children.”
A key tool for evaluating research, especially meta-analyses, is effect size. Here, Robert Coe outlines what it is, how it can be used and how it should not be used. Essential reading for those who are serious about knowing not just what works, but how well an approach may work in different contexts.
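One widely used effect-size measure that Coe discusses is the standardised mean difference (Cohen's d): the gap between two group means divided by their pooled standard deviation. The sketch below is illustrative only; the function name and the sample scores are invented for the example, not taken from Coe's paper.

```python
# A minimal sketch of Cohen's d, a common effect-size measure:
# (mean of treatment group - mean of control group) / pooled SD.
from statistics import mean, stdev

def cohens_d(treatment, control):
    """Standardised mean difference between two groups of scores."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = stdev(treatment), stdev(control)
    # Pooled standard deviation, weighting each group's variance
    # by its degrees of freedom.
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(treatment) - mean(control)) / pooled_sd

# Hypothetical test scores for an intervention group and a control group.
d = cohens_d([78, 82, 88, 90, 85], [70, 75, 80, 72, 78])
```

A positive d favours the treatment group; by convention, values around 0.2 are read as small effects, 0.5 as medium and 0.8 as large, though Coe stresses that such labels can mislead when divorced from context.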
Nick Hassey raises questions about the use of RCTs in educational research, suggesting that many common assumptions amongst educators about this approach may be incorrect. The ensuing comments on the post provide an informative discussion.
This blog post critiques another which claimed to provide evidence on the relative typing speeds on iPads versus traditional computers. The ‘evidence’ is examined from a number of different perspectives. It is an excellent example of why teachers should be cautious about how we evaluate ‘evidence’.
Robert Coe presents a summary of research which takes into account the quantity and quality of the evidence available, as well as comparing the effect of such approaches with the cost of implementation. He argues that schools need to make better use of research and also to implement it carefully and rigorously.
An excellent overview of the ways in which research may be applied practically in the classroom.
Stanovich outlines the issues in defining 'dyslexia', and examines the lack of empirical evidence for the long-standing assumption that it is defined by a discrepancy between (low) reading performance and high IQ. This 'folk belief' has become pervasive not only in the media but also in educational research and practice.
In 2000 the National Reading Panel presented its findings on the Essential Components of Reading Instruction. The authors of this study re-evaluated the NRP’s meta-analysis and reached significantly different conclusions.
The authors explain the underlying complexities and interactions implied by the Simple View of Reading, a model which posits that reading comprehension is the product of word recognition (decoding) and spoken language comprehension. They contend that the Simple View of Reading is well aligned with the findings of over two decades of research.
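The Simple View of Reading is conventionally written as a product, following Gough and Tunmer's original formulation:

```latex
% Simple View of Reading (Gough & Tunmer, 1986):
% R  = reading comprehension
% D  = decoding (word recognition)
% LC = linguistic (spoken language) comprehension
R = D \times LC
```

The multiplicative form captures the model's central claim: if either decoding or linguistic comprehension is near zero, reading comprehension will be too, however strong the other component is.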
Kerry Hempenstall discusses what should and should not be considered “evidence” when using research to inform practice.
This concise but clear critique of Hart and Risley’s important paper raises key questions about methodology and how much weight should be placed on the findings of the study. This is an excellent example of how to ask the right questions when evaluating a research report.
Louisa Moats analyses how equivocal language is used to justify the use of programmes and practices which are not supported by scientifically-based research.
Kevin Wheldall, Emeritus Professor at Macquarie University, questions the conclusions of a report into the long-term efficacy of Reading Recovery. He points out that the report’s conclusions are at odds with the details of RR and non-RR students’ achievement. Wheldall’s article is an excellent example of why ‘research’ should always be read critically.
Zig Engelmann challenges the inconsistencies and apparent contradictions in the way “What Works Clearinghouse” selects and evaluates research as evidence for effective interventions. Not all approaches are treated equally.
Despite its claims to be a reliable and trusted guide to what works in education, the What Works Clearinghouse is shown in this report to have serious weaknesses in the accuracy of its reports – particularly with regard to evaluating the fidelity of implementation of interventions. The high-stakes nature of the WWC’s role makes it imperative that these problems are addressed.
Three years after Project Follow Through, Wes Becker followed up a large sample of students to evaluate the longer term impact of Direct Instruction. The strongest findings showed lasting gains in word decoding, spelling and maths computation. Becker discusses a range of research problems that had to be overcome in order to conduct this study.
Wes Becker discusses the history of educational research initiatives in the US, and suggests reasons for the apparent failure of many of these. He also draws conclusions as to what approaches would make the most significant difference to economically disadvantaged children.
Zig Engelmann analyses the origins of the term ‘direct instruction’ in modern education and considers the teacher training problems inherent in using (lowercase) direct instruction.
This article examines the ways in which political agendas carry more weight than evidence of success when policy decisions are made regarding particular interventions or teaching methods. Focusing on the implementation of the Every Child A Reader strategy, the central component of which is Reading Recovery, the report is critical of the limited evidence of effectiveness, the high costs and the inflexibility of the overall strategy.
This excellent article demonstrates what happens when speculative claims are tested scientifically. It summarises an ongoing debate on ‘learning styles’ and the various issues raised, objected to and refuted. It is also entertainingly written.
Martin Kozloff critically analyses the philosophical and rhetorical problems associated with much ‘constructivist’ pedagogy. He argues that the prevalence of muddled thinking in education is responsible for widespread educational mediocrity.
An analysis of the extant research material on Brain Gym. On both face validity and effectiveness data, no support is found for the claims of this programme.