policy & reform
Rank hypocrisy
Is it time to reassess how the ATAR is used in tertiary education?
By Dean Cooley and Annette Foley
The current policy climate in teacher education in Australia focuses on accountability for outcomes, as judged by the declining performance of primary and secondary students on a range of national (NAPLAN) and international assessments (the Programme for International Student Assessment [PISA] and the Trends in International Mathematics and Science Study [TIMSS]). For example, Australian student performance on TIMSS has flatlined while the performance of other OECD countries has risen.1
Accordingly, whenever there is a perceived problem with student performance, the media and policymakers apply pressure to fix it. One area that has attracted particular attention is the quality of candidates entering initial teacher education (ITE) courses, which is somehow being blamed for students' declining performance.
The rationale for this view is a presumed causal link between lower university entry rankings, teacher quality and student performance on such tests,2 albeit with little evidence to support it. Consequently, politicians and some academics
have touted the well-worn but misguided mantra of raising entrance scores as a prophylactic against Australian school students' declining performance on international and national evaluations.
Nonetheless, opposition to such a move has fallen on deaf ears,
with ITE providers now faced with the daunting prospect of being
restricted to a selective admission system.
The use of a selective admission system is not new to the university sector; what is new is preventing universities from using an open admission system. In a sector that uses the phrase "evidence-based practice",3 one wonders what happened to such practice when the decision was made to set the minimum Australian Tertiary Admission Rank (ATAR) at 65 for 2018 and then raise it to 70 for 2019.
Standardised aptitude tests, such as the Scholastic Assessment Test (SAT) in the US, are widely used to rank candidates by potential rather than achievement. The National Center for Fair and Open Testing4 notes that the North American SAT predicts only first-year grades, with little validity for predicting grades beyond the first year of study, graduation rates or pursuit of a graduate degree, and little value for placement or advising purposes.
In Australia, the ATAR is now the primary criterion for entry into ITE programs. Rather than being a standardised measure of aptitude, the ATAR is calculated from scaled study scores, with scaling intended to even out differences in subject choice. The question that remains: what evidence is there that a high ATAR is a robust predictor of quality?
The first issue is the lack of large-cohort randomised controlled studies. Where studies exist, the results are mixed. For example, McNaught and McIntyre5 reported that low performance in a core literacy unit was related to poor course progress, but that ATAR ranking was a problematic predictor of student success. In contrast, Wulf and Croft-Piggin6 reported that, within a small non-randomised group from one university, ATAR ranking was a significant predictor of academic achievement, but scores on a motivation and engagement scale were stronger predictors.