David Tout and Juliette Mendelovits examine why we receive such differing reports on the literacy and numeracy skills of young Australians.
Australia participates in several large-scale assessment programs that provide information about the knowledge and skills of the population at various points in the lifespan. Each of these programs tells its own story about literacy and numeracy standards in Australia, and some of these stories appear to contradict one another. The 2006 Adult Literacy and Lifeskills Survey (ALLS) reported that about 50 per cent of Australians between the ages of 15 and 74 were below the minimum required standard of literacy and numeracy. Three years later, the 2009 OECD Programme for International Student Assessment (PISA) reported that 15 per cent of Australian 15-year-olds were below a baseline level of proficiency in reading and mathematics. Australia’s National Assessment Program – Literacy and Numeracy (NAPLAN), on the other hand, reported in 2011 that only six per cent of Year 9 students – who are around 14 years of age – were below the minimum standard of literacy and numeracy. Taken at face value, these results suggest considerable improvement in a short space of time; however, trends observed over that same period within each assessment program do not support this view.
What, then, can explain these wildly different reports? Are these three assessment programs measuring completely different things? Do expectations vary about what constitutes an adequate level of literacy and numeracy? Or is there something else at play? Further, if the reasons for the variation can be understood, is it possible to represent these standards on a single, coherent continuum of achievement?
Explaining the differences
The apparent discrepancies between different measures of literacy and numeracy can be explained by four key factors:
• the definitions of literacy and numeracy used;
• the stated and unstated program purposes;
• the agendas of the stakeholders; and
• the way standards are represented statistically.