Social-media companies will soon be required to do something many have long avoided: confirm how old their users really are.
Age-gating – the online equivalent of “No ID, no entry” – has existed in a loose form for years, with adult-oriented websites often relying on a simple tickbox or self-declared birth date. Now, Australian regulators are demanding stronger safeguards.
From 10 December, new rules introduced by the eSafety Commissioner will require social-media platforms to take reasonable steps to prevent anyone under 16 from holding an account. The restrictions aim to reduce the risks young people face online and apply to major platforms including Facebook, Instagram, Snapchat, TikTok, YouTube and X (formerly Twitter). Messaging, educational and gaming services such as Google Classroom, WhatsApp, Messenger, Discord and Roblox will be exempt.
While platforms are not required to verify every user, they must detect, deactivate and remove under-age accounts, and cannot rely solely on self-declaration. Breaches may attract fines of up to $49.5 million.
However, the measures designed to protect children raise significant questions about privacy and how age can be verified securely.

How will platforms verify age?

Under the new framework, platforms may request government-issued identification but cannot make ID compulsory. Other approaches could include assessing a user’s search history or using facial-recognition technology. But a government trial earlier this year found that face-scanning tools could estimate a user’s age to within an 18-month range in only 85 per cent of cases.
Instagram has already announced it will use artificial intelligence (AI) to estimate the ages of Australian users. Early modelling suggests nine out of 10 teen accounts would remain active under the system.
UNSW security and privacy expert Dr Rahat Masood says large technology companies already use AI extensively to form detailed pictures of their users.
“Big tech companies don’t need traditional age-gating mechanisms to figure out how old their users are,” she says. “They already know a lot from patterns of behaviour – when someone logs on, who they interact with, what they search for, or whether their geolocation matches a school during the day.”
Many young people also lack government-issued ID, meaning companies will likely rely heavily on AI-based age estimation. But the technology has limitations.
“AI can misjudge age, especially across different demographic groups,” Dr Masood says. “And how does it tell the difference between someone who’s 15 years and 364 days old and someone who’s 16 years and one day old? The signals are almost identical.”

Any large-scale data collection introduces new privacy risks if sensitive information is stored or shared. UNSW cybersecurity expert Dr Hammond Pearce says zero-knowledge proof (ZKP) technology could offer a privacy-protective alternative.
“ZKP lets one party prove a statement, like being over 16, without revealing any other personal information,” he explains.
He suggests a system in which the government issues digital tokens confirming a person’s age.
“Websites use the token to verify the user is over 16 but don’t learn anything else. And the government wouldn’t track which sites use the token either. It’s a much safer way to verify sensitive information online.”
Some European countries have begun adopting secure digital-identity apps backed by strong data-protection laws. Australia, Dr Pearce says, lags behind.
“The European Union’s GDPR sets some of the world’s toughest privacy standards. Australia needs stronger protections, so companies take privacy more seriously.”
Dr Pearce emphasises that the new rules won’t ban young people from social media but may delay the age at which they join.
“If their friends can’t access a platform, there’s less incentive to use it,” he says. “You don’t need to verify every user for the policy to have an impact.”
Yet he warns the only way to achieve perfect accuracy would be to require mandatory ID uploads – the very scenario regulators are trying to avoid.

With the new rules set to begin within weeks, experts say the central challenge is balancing child safety with privacy rights.
“There’s still a lack of clarity on how the government plans to audit social-media companies for compliance,” Dr Masood says. “Protecting children online is crucial, but the solutions must not create bigger risks in the process.”