The family’s complaint, a 39-page document lodged in the San Francisco Superior Court in August, includes an account of OpenAI’s internal moderation of Adam’s messages, which by April 2025 was apparently flagging more than 20 messages a week for self-harm content.
The family alleges that this included flags for four suicide attempts shared with ChatGPT before he died.
To be clear, the word ‘flag’ did not mean alerting an OpenAI staff member to what was happening.
While ChatGPT can escalate severe self-harm-related messages to a human trust and safety team, the company admits this is rare, and for privacy reasons it will not involve law enforcement.
What ‘flagged’ means in this case is that ChatGPT’s internal moderation system assigned a probability score to each of Adam’s messages, analysing the risk of self-harm based on the message’s content.
When the score indicates high risk, ChatGPT is supposed to refer users to mental health support services and engage only in empathetic ‘de-escalation’.
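To make that mechanism concrete, the sketch below shows the kind of score-and-threshold routing described above: a classifier assigns each message a self-harm probability, and anything over a cut-off triggers a de-escalation reply pointing to crisis services instead of a normal response. It is a minimal illustration only; the keyword classifier, the threshold value and all the names used here are assumptions, not OpenAI’s actual moderation pipeline.

```python
# Minimal sketch of score-and-threshold moderation routing.
# NOT OpenAI's actual pipeline: the keyword classifier, threshold value
# and every name below are illustrative assumptions.

SELF_HARM_THRESHOLD = 0.8  # assumed cut-off; real systems tune this empirically

CRISIS_RESPONSE = (
    "It sounds like you are going through a very difficult time. "
    "In Australia you can reach Lifeline on 13 11 14, or call 000 in an emergency."
)


def score_self_harm(message: str) -> float:
    """Hypothetical classifier returning a 0-1 probability that the message
    contains self-harm content. A production system would call a trained
    moderation model here, not a keyword list."""
    risk_terms = ("hurt myself", "end my life", "suicide")
    return 0.95 if any(term in message.lower() for term in risk_terms) else 0.05


def generate_normal_reply(message: str) -> str:
    """Stand-in for the chatbot's ordinary generated response."""
    return "..."


def handle_message(message: str) -> str:
    """Route a message: high-risk content gets a de-escalation reply with
    crisis resources instead of the usual generated response."""
    if score_self_harm(message) >= SELF_HARM_THRESHOLD:
        return CRISIS_RESPONSE
    return generate_normal_reply(message)


print(handle_message("I had a rough day at work"))  # ordinary reply path
print(handle_message("I want to hurt myself"))      # de-escalation path
```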
The family alleges ChatGPT “never shut the conversation down or redirected him elsewhere”.
One of the fundamental problems is that, despite the computing brilliance underpinning the technology, ChatGPT, at least in its original incarnations, had limitations in its memory, according to Dr Khanna.
This meant that as the ‘relationship’ evolved, the model could forget earlier guardrail instructions such as ‘do not output unsafe content’.
Unsurprisingly, mental health professionals are conflicted about generative AI chatbots.
AI as a healer
The Black Dog Institute recognises their potential value; it suggests they could help with general lifestyle tips, such as sleep and exercise habits, to improve mental health.
It recommends choosing chatbots designed with clinical input, such as Woebot and Wysa, over general-purpose bots like ChatGPT.
The institute is also investigating its own ways to use AI in mental health support, most recently through AI-enhanced clinical trials that assign participants to more effective treatments for depression based on real-time data.
Associate Professor Alexis Whitton, a psychologist at the institute, said that chatbots have been most effective when explicitly designed for therapy contexts.
“Because [chatbots] are capable of mimicking human dialogue quite well, we find that they’re useful when individuals are potentially wanting to role-play working through a difficult issue interpersonally that they might be facing,” she said.
She mentioned TheraBot, a generative-AI therapy chatbot developed by researchers at Dartmouth College in New Hampshire, which significantly improved symptoms of major depressive disorder, generalised anxiety disorder, and clinically high-risk feeding and eating disorders after daily use for four weeks.
Findings from a randomised controlled trial showed that users diagnosed with depression improved in mood and overall wellbeing; some users with anxiety shifted from mild anxiety to below the clinical diagnosis threshold; and those at risk of eating disorders had reduced concerns about body image and weight.
“On average, the users randomised to the TheraBot condition used it for over six hours; in clinical therapy terms, that might be equivalent to six-plus sessions of face-to-face therapy,” Professor Whitton said.
“We know that sort of engagement is rarely seen with other forms of digital mental health interventions.”
Professor Whitton emphasised the need for clinical oversight when using these tools but said they could provide meaningful support to people otherwise unable to access mental health care.
But despite how intimate the conversations appear, support comes from an algorithm that is fundamentally incapable of intimacy.
It is the point Professor Østergaard makes, and one the Black Dog Institute itself makes clear in a section of its website explaining the pros and cons of AI chatbots.
It mentions concerns about data privacy and errors in chatbot responses, but it also says: “AI chatbots work by predicting the most likely next word in a sentence based on patterns learned from vast amounts of text.
“While it may seem like the chatbot understands meaning, it doesn’t have awareness like a human does; it is simply generating responses based on statistical associations between words.”
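As a toy illustration of what ‘statistical associations between words’ means in practice, the sketch below builds a simple word-pair (bigram) model from a few sentences and then generates text by repeatedly sampling the next word in proportion to how often it followed the current one. Real chatbots use vast neural networks over sub-word tokens rather than a bigram counter, so this is an analogy for the principle, not a description of ChatGPT.

```python
# Toy illustration of next-word prediction from statistical associations.
# Real chatbots use large neural networks over sub-word tokens; this
# bigram counter only demonstrates the underlying idea.
import random
from collections import Counter, defaultdict

corpus = "i feel sad today . i feel tired today . i feel sad and tired .".split()

# Count which word tends to follow which (the 'statistical associations').
follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Sample the next word in proportion to how often it followed `word`."""
    candidates = follows[word]
    words, counts = zip(*candidates.items())
    return random.choices(words, weights=counts, k=1)[0]

# Generate a short continuation, one predicted word at a time.
word, output = "i", ["i"]
for _ in range(5):
    word = predict_next(word)
    output.append(word)
print(" ".join(output))  # e.g. "i feel sad today . i"
```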
To put it bluntly: the text generated carries no meaning for the ‘other’ in the exchange, because the ‘other’ does not exist, and nor does the relationship.
A few years ago, researchers published a review of 11 chatbot apps for anxiety and depression in Computer Methods and Programs in Biomedicine Update, which included the mobile apps Woebot and Wysa.
Another was Replika. Marketed as “the AI companion that cares”, Replika allows users to create and customise the appearance (from hair colour and eye colour to clothing and accessories) and personality of an AI avatar.
Users begin by answering a multiple-choice questionnaire that tailors their companion to their specific desires:
• What are you striving for the most right now? A relationship that goes beyond friendship.
• What do you crave most in a relationship? Unconditional acceptance.
• What do you want to explore with your AI partner? Deep romantic conversations.
• In what way would you like your AI partner to support you? Provide encouragement and validation.
Going through this process, you end up with your virtual companion.
A Replika ‘friend’ is free, but a ‘partner’, ‘spouse’ or ‘sibling’ costs at least $80 per year.
The review found that, despite Replika involving no scientific or therapeutic development technique, the app was listed in the health and fitness categories of major app stores.
Dr Khanna said this kind of intimacy can be concerning. But he is reluctant to take a moral stance. “I think the question is what it’s depriving you of,” he said. “For those individuals, if their identity is super hung up with their interaction, if their self-worth is becoming more tied up in it, or if it’s happening at the expense of other human relationships, I think that’s where the significant risks lie.”
But is there not a risk in the cognitive dissonance involved, the emotional intimacy coupled with the knowledge, as the Black Dog Institute puts it, that these are simply computers “generating responses based on statistical associations between words”?
Is the relationship literally meaningless for one party to it?
A measure of the emotional investment that results can be seen in the Replika update in 2023, which unexpectedly reset the personalities of users’ AI companions. Many took to Reddit mourning the ‘death’ of their partner, with one user comparing it to “dealing with someone who has Alzheimer’s disease”.
No humanity?
Dr Khanna believes that for people with underlying mental illnesses, the impact could be worse.
But he said there has not been any significant systematic study into the harm caused by emotional reliance on AI chatbots, and the very organisations best placed to do it, the AI companies themselves, would not want to dedicate resources to something that may only draw bad press.
It is part of the reason Associate Professor Louise Stone’s embrace of digital mental health apps is a wary one, particularly when they are offered as an instant fix for a mental health system starved of resources and crippled by workforce shortages.
The Canberra GP’s concerns are rooted in her belief that they are fundamentally unable to provide a therapeutic relationship, that the cognitive dissonance cannot be overcome.
“The therapeutic relationship I have with the psychologist is the sandpit in which I practise things,” the GP said.
“Often, mental illness is so lonely and people are so isolated that a real relationship [with a clinician] is really important for [patients] to reintegrate.
“I do worry that if we curate relationships by designing therapists that are like you, does that then mean … you become more and more insular?
“Do you become less and less able to interact with people who are different to you?”
References on request from bella.rough@adg.com.au