Published: April 2012
Author(s): Jennifer May Hampton

This dissertation examines issues of effect size in the education, psychology and educational psychology literature. The reporting and interpretation of effect size estimates are discussed in relation to the purpose of individual research as conceptualised by Kirk (2001), alongside issues surrounding the reporting and interpretation of null hypothesis significance testing (NHST), confidence intervals and the determination of practical significance. These issues are also considered in terms of working within a research community, cumulative knowledge growth and reporting to a non-expert audience. Papers published in 2010 in the Journal of Educational Psychology and Learning and Instruction were surveyed to determine reporting practices, specifically for findings reported in the abstracts. The data reveal that a large proportion of studies report, but do not discuss, effect size estimates. A cumulative frequency distribution was calculated from the reported partial eta squared values, producing contextual guidelines for interpretation. These guidelines contrast with Cohen's (1988) but are similar to those found in other areas of psychology (Morris & Fritz, under review; 2012). Results are discussed in terms of trends in reporting and issues of interpretation. Overreliance on traditional methods, together with the ready availability of effect size statistics, calls for greater engagement by authors with these issues. Finally, comprehensive resources to guide researchers in these matters are presented.
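The contextual guidelines described above rest on the empirical distribution of reported effect sizes rather than on fixed benchmarks. A minimal sketch of that idea, using hypothetical partial eta squared values (the actual surveyed data are not reproduced here) and simple quartile cut points standing in for the dissertation's cumulative-frequency method:

```python
# Sketch of contextual-guideline construction, assuming a hypothetical
# sample of partial eta squared values gathered from surveyed papers.
# Thresholds come from the empirical distribution (quartiles here),
# not from Cohen's (1988) fixed small/medium/large benchmarks.
import statistics

# Hypothetical partial eta squared values (illustration only)
partial_eta_sq = [0.01, 0.02, 0.03, 0.05, 0.06, 0.08,
                  0.10, 0.12, 0.15, 0.20, 0.25, 0.40]

# Quartile cut points of the cumulative distribution act as
# relative "small"/"medium"/"large" boundaries for this literature.
small, medium, large = statistics.quantiles(partial_eta_sq, n=4)

def label(es: float) -> str:
    """Classify an effect size against the empirical quartiles."""
    if es < small:
        return "below small"
    if es < medium:
        return "small-to-medium"
    if es < large:
        return "medium-to-large"
    return "large"
```

The point of the sketch is the design choice: an effect size is judged relative to what researchers in the same field typically report, so the same numerical value may count as "large" in one literature and "medium" in another.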

Keywords
Effect Size, Educational Psychology, Practical Significance, Statistical Significance, Reporting Practices