Most teachers rely on ‘professional judgement’ when assessing student work, and may periodically meet with peers at ‘moderation meetings’ to examine samples of student work and align their expectations with those of colleagues.

Teachers also receive feedback on their assessment expectations once a year through NAPLAN, and every three years the PISA results show how Australia’s educational outcomes compare with those of other nations. It is ironic that NAPLAN has periodically drawn considerable criticism from educators, whereas Australia’s PISA results, which have declined with each successive cycle, attract no such opprobrium from the profession. Instead, it is the wider community that has expressed disappointment in the education profession over the PISA results.

Many schools purchase standardised tests to see how their students compare against an external standard. The flaw in this strategy is that such tests do not necessarily align with the school’s teaching program.

A more powerful approach to comparing standards is for groups of teachers to produce their own assessment instruments: many heads produce better assessments than one, and sharing the task reduces the workload on any individual. One reality of assessment, though, is that ‘you never really know how good or bad an assessment is until you have used it’ (Athanasou & Lamprianou, 2002, p. 136). Users of AutoMarque Software have that marking and quality control done for them via their school photocopier.

The image (right) reports a measure of the assessment’s reliability, the mean and standard deviation of the test, and, for each question, a confidence interval for its difficulty and a confidence interval for its discrimination. Notably, the test shown was purchased by a school from a reputable supplier, yet the discrimination intervals reveal that several of the questions are not effective: poor performers were succeeding on them. Removing such questions in future would lift the overall reliability of the test.
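For readers who want to check such figures themselves, the sketch below (in Python, with invented example data) shows the standard classical-test-theory versions of these statistics: difficulty as the proportion of correct responses, discrimination as the item–total (point-biserial) correlation, and reliability via KR-20. AutoMarque’s exact formulas and confidence-interval calculations are not published here, so this illustrates the concepts rather than its implementation.

```python
import numpy as np

def item_analysis(scores: np.ndarray):
    """scores: 0/1 item matrix, shape (students, items)."""
    n_items = scores.shape[1]
    totals = scores.sum(axis=1)            # each student's raw total
    mean, sd = totals.mean(), totals.std(ddof=1)

    # Difficulty: the proportion of students answering each item correctly.
    difficulty = scores.mean(axis=0)

    # Discrimination: point-biserial correlation between each item and the
    # total score; values near zero (or negative) flag items on which weak
    # students do as well as strong ones.
    discrimination = np.array([
        np.corrcoef(scores[:, i], totals)[0, 1] for i in range(n_items)
    ])

    # Reliability: KR-20, the 0/1-item form of Cronbach's alpha.
    kr20 = (n_items / (n_items - 1)) * (
        1 - (difficulty * (1 - difficulty)).sum() / totals.var(ddof=1)
    )
    return mean, sd, difficulty, discrimination, kr20

# Example: five students, four questions.
scores = np.array([[1, 1, 0, 1],
                   [1, 0, 0, 1],
                   [1, 1, 1, 1],
                   [0, 0, 0, 1],
                   [1, 1, 0, 0]])
mean, sd, diff, disc, kr20 = item_analysis(scores)
print(f"mean={mean:.2f}  sd={sd:.2f}  KR-20={kr20:.2f}")
```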

AutoMarque offers a further refinement of students’ outcomes: the automatic weighting of each question according to its level of difficulty, so that harder questions earn more credit than easier ones. This spreads the students’ results, giving a more accurate view of student achievement.
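AutoMarque’s actual weighting formula is not given here, but a common scheme, sketched below with invented data, is to weight each question in proportion to how few students answer it correctly; treat this as illustrative only.

```python
import numpy as np

def weighted_totals(scores: np.ndarray) -> np.ndarray:
    """scores: 0/1 item matrix (students x items); returns weighted totals in [0, 1]."""
    p = scores.mean(axis=0)        # proportion correct per question
    weights = 1.0 - p              # harder questions earn more credit
    weights /= weights.sum()       # normalise so a perfect score is 1.0
    return scores @ weights

scores = np.array([[1, 1, 0],
                   [1, 0, 0],
                   [0, 1, 1]])
print(weighted_totals(scores))     # the hardest (third) question counts most
```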

When teachers regularly use AutoMarque for pre-tests and post-tests in their everyday teaching, ensuring they use only quality questions and highly reliable tests, they will know how much their students’ learning improves from one occasion to the next. It is not unusual for teachers using AutoMarque to achieve growth in student outcomes equivalent to 18 months of learning per year, year after year.
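As a concrete illustration of the pre-test/post-test idea, the snippet below computes raw gains and the normalised gain (the share of the possible improvement actually achieved) from matched scores. The numbers are invented, and converting gains into ‘months of growth’ would require a calibrated scale that is not shown here.

```python
import numpy as np

# Matched pre-test and post-test scores (proportion correct) for four students.
pre  = np.array([0.45, 0.60, 0.30, 0.55])
post = np.array([0.70, 0.75, 0.55, 0.80])

gain = post - pre                   # raw improvement per student
norm_gain = gain / (1.0 - pre)      # normalised gain: share of the headroom closed

print(f"mean raw gain: {gain.mean():.2f}")
print(f"mean normalised gain: {norm_gain.mean():.2f}")
```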

By contrast, teachers who have no ongoing measure of the quality of their assessment instruments are often surprised by their students’ poor performance on external tests.

If you would like a demonstration copy of AutoMarque, go to www.automarque.biz

Reference
Athanasou, J., & Lamprianou, I. (2002). A Teacher’s Guide to Assessment. Social Science Press.