Why to Use — or Avoid — AI in Counseling (continued from page 12)
THE "DARK SIDE" OF AI IN COUNSELING
Despite the numerous, potentially beneficial ways that AI can be used as an adjunct for counseling, there's no shortage of ways that the use of AI could go horribly wrong in our profession:
1. Inaccurate Information. AI tools are powerful, but they are also fallible, just like we are. AI-generated summaries often include mistakes. For example, during a recent session, a client of mine was talking about his wife and then shifted to talking about his daughter. While I was able to track this shift, a generative AI tool we were using to take notes for the client during the session missed this shift and falsely attributed some of the client's statements about his wife to his daughter.
2. Over-Reliance. Some people have hailed the 2006 comedy "Idiocracy" as ironically prophetic. In this movie, a librarian is chosen by the US Army for a suspended animation experiment. He awakens 500 years later to a society that has regressed, partly due to over-reliance on technology. Not knowing where he is or what has happened to him, he wanders into a hospital, where healthcare professionals try to diagnose him using buttons with simple graphics on them and computer-generated instructions, apparently lacking the capacity to apply their own reasoning to evaluate his condition. What if counselors over-rely on AI for evaluation, diagnosis, and treatment planning? What might such a future look like?
3. Social Isolation and Deterioration. The 2013 movie "Her" depicts a man who falls in love with his AI companion, preferring her over the challenges of navigating relationships with actual humans. In October 2024, the New York Times reported that a 14-year-old Florida teen developed a romantic relationship with his AI chatbot and then died by suicide after a conversation in which his chatbot suggested that "maybe we can die together and be free together."
Could relationships with AI bots offer enough companionship that some people — especially those who are socially anxious or otherwise vulnerable — might further avoid the inherent challenges of navigating relationships with other humans? And how might this affect them?
4. Privacy Violations. Some AI tools are HIPAA-compliant, and others aren't. For example, ChatGPT is not HIPAA-compliant and does not offer a business associate agreement for counselors. Inputting sensitive client information could, in addition to violating state and/or federal law, allow a generative AI tool to positively identify a client and then link information offered by a counselor to that individual in its knowledge base. And for those platforms that advertise that they are HIPAA-compliant (e.g., AutoNotes, Blueprint, Upheal), how secure is stored data? What happens if it is compromised?
5. Perpetuation of Biases and Stereotypes. Information gleaned from AI tools is only as good as the data that trains them. Articles recently published in Perspectives on Psychological Science and Frontiers in Psychiatry have highlighted the problem of AI tools perpetuating stereotypes and disseminating biased information.
For example, when I was in Ireland in 2019, I noticed an ad campaign called "This Is Not Us," funded by EPIC, the Irish Emigration Museum, that drew attention to inaccurate, biased, and insulting depictions of Irish people offered by generative AI tools. Tech companies have tried to counter such biases in AI-generated outputs. I'm pleased to say that if you type "show me an Irish man" into ChatGPT's image generator today, you'll see an image that is nothing like the images reported by EPIC in 2019.
6. Client Inactivity. Ever watched the 2008 Pixar film "WALL-E"? It depicts a future in which obese humans glide around in motorized hover chairs that offer them entertainment and food at the click of a button, allowing them to live fully sedentary lives. In the book "The Anxious Generation," social psychologist Jonathan Haidt, PhD, links the invention of the smartphone to increases in anxiety, depression, and self-harm among adolescents and young adults.
In addition to offering a more pessimistic depiction of society by overexposing youth to "bad news" and to algorithms that perpetuate negative, polarizing stereotypes, the smartphone also fueled unrealistic social comparisons and contributed to reductions in physical activity, synchronous and unstructured play, and exposure to nature among youth, all of which are believed to negatively impact mental health and well-being. Could AI technology further this trend?
7. Environmental Impact. As described in a September 2024 article published by the United Nations Environment Programme, AI can positively impact the environment by generating data that helps governments, businesses, and individuals use natural resources more efficiently and reduce waste. On the other hand, a tremendous amount of energy, raw materials, and hazardous chemicals goes into powering AI technologies.
continued on page 14

The Advocate Magazine, 2025, Issue #1 | American Mental Health Counselors Association (AMHCA) | www.amhca.org