ChatGPT has turned into the perfect therapist for many people: It’s an active “listener” that digests private information and, some would argue, appears to empathize with users as well as a professional can. It also costs a fraction of what most human therapists charge. While many therapists charge $200 or even more per one-hour session, unlimited access to ChatGPT’s most advanced models costs $200 per month.
Yet despite the positive anecdotes online about using ChatGPT as a therapist, and the convenience of a therapist that’s accessible from almost any internet-enabled computer or phone at any time of day, therapists warn that ChatGPT can’t replace a licensed professional.
In a statement to Fortune, a spokesperson for ChatGPT-maker OpenAI said the LLM often suggests that users who discuss topics like personal health seek professional advice. ChatGPT is a general-purpose technology that shouldn’t serve as a substitute for professional advice, according to its terms of service, the spokesperson added.
On social media, anecdotes about the usefulness of AI therapy are plentiful. People report the algorithm is level-headed and provides soothing responses that are sensitive to the nuances of a person’s private experiences.
“I don’t even know how to explain how much this has changed things for me. I feel seen. I feel supported. And I’ve made more progress in a few weeks than I did in literal years of traditional treatment,” one user wrote.
In a comment, another user got to the root of AI’s advantages over traditional therapy: its convenience.
“I love ChatGPT as therapy. They don’t project their problems onto me. They don’t abuse their authority. They’re open to talking to me at 11pm,” the user wrote.
Alyssa Peterson, a licensed clinical social worker and CEO of MyWellBeing, said AI therapy has its drawbacks, but it may be helpful when used alongside traditional therapy. Using AI to reinforce tools developed in therapy, such as battling negative self-talk, could benefit some people, she said.
Using AI in conjunction with therapy can help a person diversify their approach to mental health, so they don’t treat the technology as their sole source of truth. Therein lies the rub: Relying too heavily on a chatbot in stressful situations could hurt people’s ability to deal with problems on their own, Peterson said.
In moments of acute stress, being able to manage and alleviate the problem without external help is healthy, Peterson added.
AI responses also aren’t always objective, licensed clinical social worker Malka Shaw told Fortune. Some users have developed emotional attachments to AI chatbots, which has raised concerns about safeguards, especially for underage users.
Some AI algorithms have also spread misinformation or harmful content that reinforces stereotypes or hate. Because it’s impossible to know what biases go into building an LLM, Shaw said, the technology is potentially dangerous for impressionable users.
A spokesperson for Character.ai declined to comment on pending litigation. The spokesperson said any chatbots labeled “psychologist,” “therapist,” or “doctor” include language warning users not to rely on the characters for any type of professional advice. The company also has a separate version of its LLM for users under the age of 18, the spokesperson added, with protections designed to prevent discussions of self-harm and to redirect users to helpful resources.
Another fear among professionals is that AI could give faulty diagnoses. Diagnosing mental health conditions is not an exact science; it is difficult even for an AI, Shaw said. Many licensed professionals need years of experience before they can diagnose patients accurately and consistently, she told Fortune.
“It’s very scary to use AI for diagnosis, because there’s an art form and there’s an intuition,” Shaw said. “A robot can’t have that same level of intuition.”
People have shifted away from googling their symptoms to asking AI, said Vaile Wright, a licensed psychologist and senior director for the American Psychological Association’s office of health care innovation. As the cases involving Character.ai demonstrate, the danger of disregarding common sense in favor of technology’s advice is ever present, she said.
“They’re not experts, and we know that generative AI has a tendency to conflate information and make things up when it doesn’t know. So I think that, for us, is most certainly the number one concern,” Wright said.
While such options aren’t yet available, AI could one day be used responsibly for therapy and even diagnosis, she said, especially for people who can’t afford the high price tag of treatment. Still, any such technology would need to be created or informed by licensed professionals.
“I do think that emerging technologies, if they are developed safely and responsibly and demonstrate that they’re effective, could, I think, fill some of those gaps for individuals who just truly cannot afford therapy,” she said.