A shocking piece of information cascaded across social media channels last month, prompting teachers and others to question the work of noted education professor John Hattie.
Hattie’s book Visible Learning has been hailed by the Times Educational Supplement as “the holy grail of teaching.” It uses a statistical measure called “effect size” to compare the value of various educational inputs, from class size to direct instruction, team teaching and summer holidays. The book claims to synthesize more than 50,000 studies covering more than 80 million students to determine the “core influences for better learning outcomes.”
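For readers unfamiliar with the statistic, an effect size of this kind is, broadly speaking, a standardized mean difference: the gap between two group averages expressed in standard-deviation units (Cohen’s d is the most common form). The sketch below is purely illustrative; the class scores are invented numbers, not data from Hattie’s work or any real study.

```python
import statistics

def cohens_d(treatment, control):
    """Standardized mean difference (Cohen's d): the gap between two
    group means expressed in pooled standard-deviation units."""
    n1, n2 = len(treatment), len(control)
    # Pooled (sample) variance of the two groups.
    pooled_var = ((n1 - 1) * statistics.variance(treatment) +
                  (n2 - 1) * statistics.variance(control)) / (n1 + n2 - 2)
    return (statistics.mean(treatment) - statistics.mean(control)) / pooled_var ** 0.5

# Hypothetical test scores for two classes (invented for illustration).
small_class = [72, 75, 78, 80, 83]
large_class = [70, 72, 74, 76, 78]

print(round(cohens_d(small_class, large_class), 2))  # → 0.96
```

A single number like this makes very different interventions look comparable on one scale, which is precisely its appeal for rankings and precisely why its misuse is worth worrying about.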
Visible Learning has made big waves worldwide, and Hattie even came to Alberta in the fall to speak to educators. His findings on class size, in particular, have been widely touted by politicians and others looking to discredit a push for smaller class sizes.
The shocking information making the rounds last month was an admission from Hattie that his work contained significant mathematical errors (see http://bit.ly/1xtmtDg).
I am not in a position to analyze the critique, as my grasp of statistics is limited, and Hattie’s position is that the errors do not change his findings on effect size. But this debate over his arithmetic is raising serious questions about the validity of his research and of quantitative educational research in general.
Which brings me to my concern: the general use and misuse of data in education.
The education of individual children is a very complex pursuit, and simplifying it to a set of calculated inputs and outputs does a great disservice to our students and the work of educators.
More often than not, the measures used to judge educational effect are the results of standardized tests, because the data is easily collectable and easily comparable. But do the tests actually measure what analysts suggest they measure? Moreover, do they measure the things we want to get out of public education?
Sure, we want students to master curricular outcomes, but many of the tests being used are not measuring the curriculum that is being delivered. International assessments like the OECD’s Programme for International Student Assessment, for instance, measure the outcomes that an international economic organization deems important, not the outcomes deemed important within our own system.
Even local standardized tests like provincial achievement tests and diploma examinations do a better job of indicating the socio-economic status of students’ families than they do of measuring educational effect. The misuse of this data is exemplified by the Fraser Institute’s ranking of schools.
When we narrowly attempt to isolate inputs for the purposes of systemwide educational reform, we are similarly misusing data. Success in school, whatever that looks like, depends on many interrelated factors, and those factors come together in complex and unique ways for each individual student. Any analysis of data on the inputs of schools or school systems views education from a 10,000-foot level and is severely limited in its ability to provide meaningful guidance for the teaching practices used with individual students.
The overemphasis on inputs and outputs is a result of applying a crude economic lens in areas where it isn’t appropriate and where its application can be quite harmful.
This isn’t to suggest that educational research is meaningless or that quantitative data should be discarded entirely. Rather, it speaks to the important work of pedagogy. Teachers, as educated professionals, are responsible for staying current in educational research and for applying it in their analysis of individual student needs to determine appropriate educational programming.
Which brings me back to why small class size matters. In order for teachers to implement meaningful education that addresses individual learning needs, they need to have a manageable group of students that they can get to know on an individual basis. Teachers don’t need statistics to tell them the importance of this and, in fact, they rightfully balk at the notion that statistics suggest otherwise. ❚
I welcome your comments—contact me at firstname.lastname@example.org.