Last year I was fortunate to undertake two weeks of professional development where I was introduced to the work of John Hattie. Most recently, I attended a refresher day conducted by Ken Avenall, principal education officer, professional learning and leadership development, with Brisbane Catholic Education. Ken presented us with a summary of Hattie’s research as described in his book, Visible Learning. Hattie’s work has had a profound effect on my views on learning and I believe this book should be compulsory reading for all teachers. Hattie has synthesised some powerful research that looks at how teaching practice affects student learning: more than 800 meta-analyses covering some 50,000 studies, more than 100,000 schools, 6 million teachers and 200 million students. Quite simply, it’s too huge to ignore.
Ken Avenall summarised the research into four sections, as outlined below. Before proceeding it is important to understand the measure that Hattie uses, which he calls ‘Effect Size’. An effect size of zero means that a teaching practice or situation has no effect at all on student outcomes; a negative effect size means students actually go backwards; and the scale is open-ended, so values near or above 1.0 indicate an extremely strong relationship between teaching and student outcomes. Teachers typically (on average) operate at around 0.4. We would be aiming to be above this mark, above 0.5 at least.
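For readers who want to see where such a number comes from: Hattie’s effect sizes are built on the standardised mean difference familiar from meta-analysis, commonly known as Cohen’s d. The sketch below is illustrative only — the function name and the sample scores are invented for this example, not drawn from Hattie’s data.

```python
# Illustrative sketch of an effect size as a standardised mean difference
# (Cohen's d). All names and numbers here are invented for illustration.
from math import sqrt

def cohens_d(group_a, group_b):
    """Standardised mean difference between two lists of scores,
    divided by the pooled standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    mean_a = sum(group_a) / n_a
    mean_b = sum(group_b) / n_b
    # Sample variances (n - 1 denominator).
    var_a = sum((x - mean_a) ** 2 for x in group_a) / (n_a - 1)
    var_b = sum((x - mean_b) ** 2 for x in group_b) / (n_b - 1)
    # Pool the two variances, weighted by degrees of freedom.
    pooled_sd = sqrt(((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2))
    return (mean_a - mean_b) / pooled_sd

# Invented example: post-test scores for a taught group vs a comparison group.
taught = [72, 75, 78, 70, 74, 77]
comparison = [68, 71, 70, 66, 69, 72]
print(round(cohens_d(taught, comparison), 2))
```

An effect size of 0.4, then, means the average student in the treated group finished about four-tenths of a standard deviation ahead of the comparison group — which is why Hattie treats 0.4 as the hinge point for a worthwhile practice.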
1 Don’t do!
These areas produced very poor learning and should be avoided. Retention or making children repeat a grade had an effect size of only 0.16, Gender Classes = 0.12, Ability Grouping = 0.12, Multi-age Classes = 0.04, Student Control over learning = 0.04.
We have schools that do segregate students according to gender and ability. Why on earth would you do this when the research demonstrates that the way classes are grouped has negligible impact on student learning? Some schools allow students to have control over their learning. Clearly, the research shows this is a poor decision as well. Two very alarming statistics in the ‘Don’t do’ area refer to moving schools and television. Children who moved schools a lot, e.g. army children, had a negative effect size of -0.10, while children who watched more than 10 hours of television a week had a negative score of -0.18. This means that the children in these two groups actually went backwards in their learning. Ignore these data at your own peril.
2 Why bother?
The areas mentioned in this section have such poor effect sizes that they should also be left out of the teaching repertoire. Individualised Instruction = 0.23, Class Size = 0.21, Homework = 0.29, Team Teaching = 0.19, Home Visits = 0.29, Testing = 0.34, Mentoring = 0.15.
Teachers protest the loudest about class sizes and I admit, having taught for many years, that it is much easier to teach a small class than a large one. It’s logical! An orchardist doesn’t have as much work to do with 100 fruit trees as he would with 200. However, the argument that smaller class sizes improve student learning has little credibility, especially in view of the research. It’s interesting, but not at all surprising to me, that homework scores so poorly. Parents, in particular, tend to be obsessive about homework. Their complaints and concerns about it were a constant thorn in my side when I was teaching. I’m sure that, even armed with Hattie’s evidence, they still wouldn’t listen.
I am relieved to see that testing also scores so poorly. Those of you who have read my previous articles would know that I am vehemently opposed to NAPLAN and other forms of testing that have no impact on student outcomes. Testing, per se, is useless and the research supports this. I once told a story about a farmer who was obsessed with winning a blue ribbon for the fattest pig at a local show. He spent all his time weighing the pig and little time feeding it, and couldn’t understand why it wasn’t putting on weight. In fact, it was losing weight. In the short space of time since NAPLAN was introduced I have been flooded with spam promoting NAPLAN practice tests, courses for teachers on how to achieve better results, and so on. The My School website is not about student learning; it’s about which school has the highest scores. Very little is related to how NAPLAN testing can improve the learning outcomes of your students. NAPLAN has spawned a self-perpetuating industry that promotes nothing but further testing and encourages teachers to do little more than teach to the tests to make their schools look good.
3 This makes a difference
Co-operative Learning = 0.41, Advance Organisers = 0.41, Parental Involvement = 0.51, Play Programs = 0.50, Small Group Learning = 0.49, Tactile Stimulation Programs = 0.58, Social Skills Programs = 0.40, Peer Tutoring = 0.55.
4 Absolutely do!
Self-reported Grades = 1.44, Piagetian Programs = 1.28, Formative Evaluation = 0.90, Feedback = 0.73, Spaced Practice = 0.71, Acceleration = 0.88, Phonics Instruction = 0.60, Teacher Student Relationships = 0.72, Teacher Professional Development = 0.62, Vocabulary Programs = 0.67.
These are the areas where we should focus most of our effort. NAPLAN and other forms of summative evaluation give you an end-of-term or end-of-year view of a student’s performance. As many parents have pointed out to me at report/interview time, “... I had no idea my child was struggling with reading or maths.” It’s no good telling them at the end of the year, or in NAPLAN’s case, at the end of two years, that the child has a problem or area of weakness. Formative or ongoing evaluation is the way to go and the research supports this. Acceleration is the opposite of retention. How many schools are bound by red tape and stupid legislation that prevent students from being put up a grade or two if they are bright enough?
Most schools remain constrained by the calendar and group children according to age, yet the research shows that acceleration has a very strong effect size. Spaced Practice is an interesting area I hadn’t thought about. Teachers will often focus on a topic in one block of learning time. Perhaps the students don’t get a maths topic, so the teachers decide to spend a whole morning on it. The research suggests that topics should instead be broken up or spaced: it takes three or four visits to a topic before the concept is committed to working memory and then long-term memory, and the more complex the concept, the longer the spacing should be.
Self-reported Grades is closely related to Feedback. When students can tell the teacher exactly what it is that they need help with (what it is that they don’t understand), the teacher can focus on that specifically rather than giving them superfluous information. Look at the effect size – 1.44 is huge! Feedback also has a high effect size. Once again, rather than giving generic or general feedback to a student, e.g. ‘you’re working really well’, the teacher should be specific, e.g. ‘you’ve finally mastered how to regroup with these addition problems’.
We all need feedback. I need it as a principal. I’m not surprised that Teacher Student Relationships has such a high effect size. Ken asked us to list all our teachers from prep to Year 12. I was stunned that I could only remember one primary school teacher, and for the wrong reason – he beat the heck out of me.
My teachers didn’t get out of their desks and I had very little to do with them. Contrary to the research, I did extremely well at school despite the poor relationships. It makes sense however, that if the children like their teacher they will be happy at school and this should be conducive to learning.
Every teacher who ever picked up a stick of chalk should know about Piagetian Programs. Once again, it’s common sense: find out where the students are developmentally and aim to take them to the next stage of development. Finally, I don’t need research to tell me that Teacher Professional Development is important.