
Finding what really works in education

Using data and reason, rather than intuition, to decide where to invest the large sums spent globally on education would make a huge difference to the learning outcomes of many students. This is the opinion of Professor John Hattie of the University of Melbourne and Dr Arran Hamilton of Cognition Education.

Around US$3.5 trillion is spent on education globally every year. To put this in perspective, the figure is greater than the combined economic activity of Russia and India. Globally, we spend a lot on education. And rightly so.


Most of this funding is for fixed and recurring costs that cannot be adjusted without great care and without expending high levels of political capital. But an estimated four per cent of global education budgets is available for procuring education products and resources for use in the classroom and for in-service teacher professional learning. If this is spent wisely and if, over time, there is also greater clarity of thought about how the other 96 per cent is spent, then, locally and globally, we would expect to see remarkable things happening in education. The trouble is, we’re not seeing enough of those remarkable things.
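As a back-of-the-envelope check, the two figures quoted in this article – roughly US$3.5 trillion in annual spend and a four per cent discretionary share – multiply out to about US$140 billion a year. The sketch below simply does that arithmetic:

```python
# Back-of-the-envelope: the discretionary slice of global education spend.
# Both inputs are the figures quoted in the article, not independent estimates.
GLOBAL_SPEND_USD = 3.5e12    # ~US$3.5 trillion spent on education per year
DISCRETIONARY_SHARE = 0.04   # ~4% available for products, resources and PD

discretionary = GLOBAL_SPEND_USD * DISCRETIONARY_SHARE
print(f"Discretionary spend: ${discretionary / 1e9:.0f} billion per year")
# → Discretionary spend: $140 billion per year
```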

Global inequality in education outcomes is very far from being solved. Even in highly developed countries, large numbers of students are not graduating from secondary education with appropriate certification. The challenges in developing countries are far greater and almost too depressing to document. According to UNESCO, at least 250 million of the world’s 650 million primary school children are unable to read, write or do basic mathematics. While the problem is societal, it can be solved through education—if we invest in unlocking and effectively implementing the right stuff.


We advocate an approach to education that is built on reason, rather than intuition alone. This involves systematic collection of data on students’ learning experiences in the classroom and the ways in which teachers and product developers can accelerate this learning. From data, we can inform intuitions and judgements and build theories. And, from theories, we can build structured processes – continually testing and refining these too.

Why our brains trip us up

Over the last 40 years, a growing catalogue of cognitive biases – glitches in our human operating system – has been compiled and confirmed through laboratory experiments and psychometric testing.

The research suggests that these biases afflict all of us unless we have been trained to guard against them. More than 80 cognitive biases have been recorded by behavioural economists.

If left unchecked, these inherent biases let unrestrained intuition override reason, driving us all to pursue products and practices with insufficient scrutiny. Examples include:

  • authority bias: the tendency to attribute greater weight and accuracy to the opinions of an authority – irrespective of whether this is deserved; and
  • confirmation bias: the tendency to collect and interpret information in a way that conforms with, rather than opposes, our existing beliefs.

These and similar biases are significant hurdles for educators who want to relentlessly review and test their assumptions about the impact they are having on learning in the classroom, and to select the right things in which to invest the precious four per cent.

The limits of lesson observation

In many education systems, every teacher must undergo at least an annual observation by their school leader. These observations are often used for performance management, to identify the ‘good’ and ‘less good’ teachers, and by national inspectorates to make more holistic judgments about whether a school is outstanding, good or poor.

As observers, we bring our own lens – our own theories and beliefs about what constitutes best practice – and these can bias the observations, no matter how specific the questions in any observation system. The challenge with observation is that we often end up seeing what we want to see, guided by our cognitive biases.

There has been a good deal of research into observation in recent years. One of the strongest datasets comes from the Measures of Effective Teaching (MET) project.

The project concluded that a single lesson observed by one individual, where the purpose was to rate teacher performance, had a 50 per cent chance of being graded differently by a different observer. Even when a teacher underwent six separate observations by five separate observers, there was ‘only’ a 72 per cent chance of agreement.

That is a whole lot of observation for what is still almost a one-in-three chance of error.
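The ‘one in three’ framing can be sanity-checked directly from the figures quoted above: if six observations by five observers agree only 72 per cent of the time, the residual disagreement is 28 per cent, just shy of one in three. A minimal check of that arithmetic:

```python
# Sanity-check the observation figures quoted above.
single_observation_agreement = 0.50  # one lesson, one observer: a coin flip
multi_observation_agreement = 0.72   # six observations by five observers

residual_error = 1 - multi_observation_agreement
print(f"Residual disagreement: {residual_error:.0%}")
# → Residual disagreement: 28%
```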

The limits of student achievement data

The outcomes of high-stakes assessments are also often used to make inferences about the quality of schools, school systems and individual teachers, and about whether certain education products and programs are more effective than others. In this context, high-stakes assessments are blunt instruments – akin to piloting your boat by the stars on a cloudy night rather than by GPS.

We can infer something about which schools are higher and lower performers, but we need to carefully tease out background variables – such as the starting points and circumstances of the learners, and the multiple other important outcomes – so that we measure distance travelled rather than the absolute end point in one set of competencies.

In the context of individual teachers (provided there is a direct link between the teacher and the particular content assessed), the outcomes of high-stakes assessments can tell us quite a lot about which teachers are more or less effective – particularly where the pattern of performance holds over several years.

Again, care is needed: it is not only the outcomes of the assessments that should be considered, but the growth from the beginning to the end of the course. Otherwise, teachers who start with students who already know much but grow little look great, while those who start with students who know less but grow remarkably look poor – when it should be the other way around.
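The growth-versus-end-point distinction can be made concrete with a toy comparison. The teachers and scores below are invented for illustration only, not drawn from any real dataset:

```python
# Toy illustration: ranking teachers by end-of-course score vs by growth.
# All names and numbers are hypothetical.
classes = {
    "Teacher A": {"start": 80, "end": 84},  # started high, grew little
    "Teacher B": {"start": 45, "end": 70},  # started low, grew a lot
}

by_end_point = max(classes, key=lambda t: classes[t]["end"])
by_growth = max(classes, key=lambda t: classes[t]["end"] - classes[t]["start"])

print(f"Highest end-of-course score: {by_end_point}")  # → Teacher A
print(f"Greatest growth: {by_growth}")                 # → Teacher B
```

Ranking on the end point alone rewards Teacher A; measuring distance travelled reveals Teacher B’s far larger impact.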

But unless the outcomes of high-stakes assessments are reported back to schools at the item level – how well students did and grew on each component of the assessment, rather than just the overall grade – teachers are left in the dark about which elements of their practice (or of third-party products and programs) are more or less effective.
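To see why item-level reporting matters, consider a hypothetical class of four students sitting a three-item assessment (all data below is invented for illustration). The overall grade hides exactly the information a teacher needs:

```python
# Toy illustration of overall vs item-level reporting (all data hypothetical).
# Rows are students; columns are assessment items (1 = correct, 0 = incorrect).
responses = [
    [1, 1, 0],  # student 1
    [1, 0, 0],  # student 2
    [1, 1, 1],  # student 3
    [1, 0, 0],  # student 4
]

overall = sum(sum(row) for row in responses) / (len(responses) * len(responses[0]))
per_item = [sum(col) / len(responses) for col in zip(*responses)]

print(f"Overall score: {overall:.0%}")                        # one blunt number
print(f"Per-item results: {[f'{p:.0%}' for p in per_item]}")  # where to act
# Item 1 was easy for everyone (100%); item 3 (25%) clearly needs re-teaching,
# but the overall score alone reveals neither fact.
```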

Spending with greater care

Policymakers and educators must be more discerning in how they collectively spend the US$140 billion that we estimate is expended on educational resources, technology and teacher professional learning each year. If this funding is focused with laser precision on effective interventions, there is a much greater probability that every learner will be able to fulfil their potential.

To make the right kinds of investments, policymakers and educators need to be aware of their cognitive biases and the ways in which these can drive us all to covet and privilege the wrong things. They also need to understand the limitations of lesson observations and student achievement data when making cast-iron inferences about what works best.

This is an edited extract from Professor Hattie and Dr Hamilton’s new white paper, Education Cargo Cults Must Die, published by Corwin.

This article was first published on Pursuit.

Authors: Professor John Hattie and Dr Arran Hamilton

Professor John Hattie, Laureate Professor, Melbourne Graduate School of Education, University of Melbourne. Dr Arran Hamilton, Group Director - Strategy, Cognition Education
