The Australian federal government requires schools to report twice a year on student achievement using an A-E system (Australian Education Act, 2013). Research and experience suggest this is neither the best way to use assessment to improve learning, nor the best way to communicate assessment information.

The case presented in this article is that grades are not the best method: they measure current achievement rather than progress, they are linked to problematic age-based norms, and in many cases they don’t provide a valid comparison. Grades also harm students’ motivation to learn.

However, the strongest criticism of grades is that a better system exists, one that provides rich data and better serves teachers, students and their parents – rubrics linked to a developmental progression of skills.

Measuring progress, not just current achievement
We can measure learning progress by comparing a learner’s skill at two points in time. An increase from one to the next implies progress. Armed with this information, we have evidence that the teaching intervention has been successful and that the student is progressing. Grades, by contrast, prioritise achievement over progress.
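To make the distinction concrete, here is a minimal sketch in Python. The students and scores are invented for illustration; the point is that the same data ranks learners differently depending on whether we look at achievement or progress:

```python
# Hypothetical assessment data: the same scale used at two points in time.
students = {
    "Student A": {"term_1": 82, "term_3": 84},  # high achiever, little growth
    "Student B": {"term_1": 48, "term_3": 63},  # lower achiever, strong growth
}

for name, scores in students.items():
    growth = scores["term_3"] - scores["term_1"]
    print(f"{name}: achievement {scores['term_3']}, progress {growth:+}")

# A grade rewards Student A (highest current achievement);
# a progress measure rewards Student B (most learning over the period).
```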

A weakness of the Australian educational debate is its focus on current achievement levels rather than on the progress students make. League tables published from NAPLAN data are one way we see this in the current landscape.

Many believe that a school with high NAPLAN scores will be a place where their child will learn more than at one with lower NAPLAN scores. However, the evidence is mixed. Once pre-existing ability and socioeconomic position are controlled for, there is negligible difference between school sectors (Nghiem, Nguyen, Khanam & Connelly, 2015). This is the danger of reporting student achievement as a letter or number and using those grades for purposes other than what they were intended for. If we really wanted to use NAPLAN data to inform us about school quality, we would need to look at progress across the year levels. Unfortunately, NAPLAN data cannot support statistically sound judgements about the progress of cohorts – the measurement error is often larger than the expected growth between testing periods (Wu & Hornsby, 2012).
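Wu and Hornsby’s point can be illustrated with a back-of-the-envelope check. The figures below are illustrative only, not actual NAPLAN statistics: a growth estimate is only trustworthy if it is clearly larger than the combined measurement error of the two tests.

```python
import math

# Illustrative figures only – not actual NAPLAN statistics.
score_period_1 = 420.0  # scale score at the first testing period
score_period_2 = 465.0  # scale score two years later
sem = 35.0              # assumed standard error of measurement per test

growth = score_period_2 - score_period_1
# Errors from two independent tests combine in quadrature.
growth_error = math.sqrt(sem**2 + sem**2)

print(f"Measured growth: {growth:.0f} points")
print(f"Uncertainty on that growth: ±{growth_error:.0f} points")
if growth <= growth_error:
    print("Growth is smaller than the measurement error – no sound judgement possible.")
else:
    print("Growth exceeds the measurement error.")
```

With these assumed numbers, 45 points of apparent growth carries roughly ±49 points of uncertainty, so no sound judgement about progress can be made.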

Age-based norms
Many assessment systems that assign grades attempt to link them to age-based norms; the Australian national curriculum is based on these norms. This is problematic. There is considerable research demonstrating that, within a single year level, students span somewhere between five and seven years of ability, and the spread increases at later year levels (Harlen & James, 1997; Rowe & Hill, 1996). Given this variation, why should our grading system be based on levels that don’t describe the majority of students? Better systems exist, such as AMEB grades and the Suzuki method, which assign grades based on level of skill, not age (Masters, 2017).

Lack of true comparison
Even if we were to accept using grades to compare students with each other and with age-based norms, it is far from clear that the typical method used to produce grades in schools is reliable or valid. The way different schools unpack state and federal curricula means it would be very difficult to compare a grade at one school with a grade at another. Comparing grades between subjects is more difficult still: to my knowledge, schools do not calibrate the relative difficulty of different school subjects.

Motivation
Young people are naturally curious and the majority have an innate desire to learn. Their motivation to learn is intrinsic. When we try to use grades to motivate students – “do well on this test or assignment and you’ll get an ‘A’” – we change their motivation from intrinsic to extrinsic. Students are then motivated by something external (desire to get an ‘A’ and the approval that engenders), rather than their own internal motivation. Yet external motivation diminishes internal motivation, and once the source of the external motivation is removed, the learner is less likely to learn independently (O’Donnell et al., 2012).

Furthermore, both high and low grades can demotivate students. Students who achieve ‘A’ grades with ease are less likely to push themselves, knowing that they’ve already reached the top of what external stakeholders (parents, teachers and peers) value. Students who put in a lot of effort but only receive ‘D’ grades are also demotivated – they might reason that their struggle brings no external reward. This points to another flaw in assigning grades: they imply a judgement. In many other industries, assessment data is treated as information – used, for example, to improve sales, increase efficiency or speed up processes. In education, a grade implies a student is ‘good’ (gets an ‘A’) or ‘bad’, even ‘stupid’ (gets a ‘D’).

Assessment for teaching
Most grades are a summative interpretation of an assessment and therefore often denote the end of a learning sequence. A much better use for assessment is to inform teaching (Griffin & Care, 2009). We should use assessment to locate a student’s current skill, then provide a teaching intervention at their point of readiness, or zone of proximal development (Vygotsky, 1965).

Using a skill-based rubric that adheres to the rules for writing quality criteria (https://reliablerubrics.com/2015/02/09/rules-for-writing-quality-criteria/), we are able to obtain high-quality information about a student’s current ability and what they need to do to improve. This information is relevant to both teacher and student. Teachers are armed with information they can use to target teaching according to student need and readiness. Students know what they’re capable of and what they should work on next. Figure 1 shows an example of a rubric written in this way.

Such a rubric gives us a range of information. We know the student can do everything up to and including the highlighted criterion. We know the student is ready to learn the criteria listed in the box above their current ability. Teachers can use this information to target teaching. Students can use it independently to find another student in the class who can perform the skill at a higher level and learn from them – a technique with the added benefit that many young people learn more from their peers than they do from their teachers (Burton, 2012).
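As a rough sketch of this logic – the criteria and data structures below are invented for illustration, not taken from the rubric in Figure 1 – a developmental rubric can be treated as an ordered progression: everything up to the highlighted criterion is secure, and the criterion immediately above it is the teaching target.

```python
# An invented skill progression, ordered from least to most developed.
PROGRESSION = [
    "identifies an historical source",
    "describes what a source shows",
    "explains the origin and purpose of a source",
    "evaluates the reliability of a source",
    "corroborates claims across multiple sources",
]

def locate(highlighted):
    """Return (secure skills, next teaching target) for the criterion
    highlighted on a student's rubric."""
    index = PROGRESSION.index(highlighted)
    secure = PROGRESSION[: index + 1]
    target = PROGRESSION[index + 1] if index + 1 < len(PROGRESSION) else None
    return secure, target

secure, target = locate("explains the origin and purpose of a source")
print("Can already do:", secure)
print("Ready to learn:", target)  # the zone of proximal development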

In my own practice, I use this information to set individual tasks for students: I examine the rubric data and assign tasks to be completed individually or in small groups. After each assignment, I devote class time to these targeted activities, and the student growth I’ve seen has been phenomenal. Some studies suggest that the majority of what we put in front of students is either too hard or too easy (O’Donnell et al., 2012). Assessment strategies that provide detailed information, not just grades, allow us to offer more developmentally appropriate learning experiences.
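Continuing the same invented example, forming those small groups amounts to grouping students by their next target criterion; each group can then be set a task pitched at that criterion.

```python
from collections import defaultdict

# Invented rubric results: each student's next target criterion.
next_targets = {
    "Aisha": "evaluates the reliability of a source",
    "Ben": "explains the origin and purpose of a source",
    "Chloe": "evaluates the reliability of a source",
    "Dev": "corroborates claims across multiple sources",
}

groups = defaultdict(list)
for student, target in next_targets.items():
    groups[target].append(student)

for target, members in groups.items():
    print(f"Task on '{target}': {', '.join(members)}")
```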

John Hattie’s research suggests the most useful feedback shows students where they are now and where to go next (Hattie & Timperley, 2007). Grades aren’t a great way to do this. A grade doesn’t tell a student where they are now; it tells them where they sit in the hierarchy of current achievement within their class, their school and every Australian student their age. Nor does a grade tell a student where to go next, beyond the bland and unhelpful: “try to get an ‘A’ ”.

Summary
Using grades prioritises achievement over progress. Grades reduce assessment to summative judgements instead of using it to shape future teaching and show students where to go next. Grades are supposed to compare learners to age-based norms, but rarely do so successfully – and even if they did, wide variations in student ability at any age suggest those norms are counterproductive. Grades demotivate students by externalising the motivating factor, and demotivate those at the higher and lower ends of the ability spectrum in particular. Better uses for assessment exist, such as rubrics linked to a developmental progression of skill, which tell students what they can do and where to go next. With all these disadvantages, and with better systems available, it is time we got rid of grades.

References

Australian Education Act 2013 (Cth).
Burton, B. (2012). Peer teaching as a strategy for conflict management and student reengagement in schools. The Australian Educational Researcher. https://doi.org/10.1007/s13384-011-0046-4
Griffin, P., & Care, E. (2009). Assessment is for teaching. Independence, 34(2), 56–59.
Harlen, W., & James, M. (1997). Assessment and learning: differences and relationships between formative and summative assessment. Assessment in Education: Principles, Policy & Practice, 4(3), 365–379.
Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112.
Masters, G. (2017). Promoting long-term learning progress. Teacher Magazine, 3 April.
O’Donnell, A., Dobozy, E., Bartlett, B., Bryer, E., Reeve, E., & Smith, J. (2012). Educational psychology. Milton, QLD: John Wiley & Sons Australia.
Rowe, K., & Hill, P. (1996). Assessing, recording and reporting students’ educational progress: the case for ‘subject profiles’. Assessment in Education: Principles, Policy & Practice, 3(3), 309–352.
Nghiem, H. S., Nguyen, H. T., Khanam, R., & Connelly, L. (2015). Does school type affect cognitive and non-cognitive development in children? Evidence from Australian primary schools. Labour Economics, 33, 55–65.
Vygotsky, L. (1965). Thought and language. Cambridge, MA: MIT Press.
Wu, M., & Hornsby, D. (2012). Inappropriate uses of NAPLAN results. Say NO to NAPLAN.

Ben Lawless is Head of Faculty, Humanities, at Aitken College in Greenvale, Victoria. Look out for more from Ben in coming issues.