
Your AI therapist shouldn’t be your therapist: The dangers of relying on AI mental health chatbots

With current physical and financial barriers to accessing care, individuals with mental health conditions may turn to artificial intelligence (AI)-powered chatbots for mental health relief or aid. Although they have not been approved as medical devices by the U.S. Food and Drug Administration or Health Canada, the appeal of such chatbots may come from their 24/7 availability, personalized support and marketing of cognitive behavioural therapy.

However, users may overestimate the therapeutic benefits and underestimate the limitations of these technologies, which can further deteriorate their mental health. Such a phenomenon can be classified as a therapeutic misconception, where users infer that the chatbot’s purpose is to provide them with real therapeutic care.

With AI chatbots, therapeutic misconceptions can occur in four ways, through two primary streams: the company’s practices and the design of the AI technology itself.

Four ways therapeutic misconception can occur through two primary streams.
(Zoha Khawaja)

Company practices: Meet your AI self-help expert

First, inaccurate marketing of mental health chatbots by companies that label them as “mental health support” tools incorporating “cognitive behavioural therapy” can be misleading, because it implies that such chatbots can perform psychotherapy.

Not only do such chatbots lack the skill, training and experience of human therapists, but labelling them as able to provide a “different approach to treat” mental illness insinuates that they can be used as alternative ways to seek therapy.

This kind of marketing tactic can exploit users’ trust in the health-care system, especially when chatbots are marketed as being in “close collaboration with therapists.” Such marketing can lead users to disclose very private and personal health information without fully understanding who owns and has access to their data.

The second kind of therapeutic misconception occurs when a user forms a digital therapeutic alliance with a chatbot. With a human therapist, it is helpful to form a strong therapeutic alliance in which both the patient and the therapist collaborate and agree on desired goals that can be achieved through tasks, and form a bond built on trust and empathy.

Since a chatbot cannot develop the same therapeutic relationship with users as a human therapist can, a digital therapeutic alliance can form, where a user perceives an alliance with the chatbot even though the chatbot cannot actually form one.

Examples of how mental health apps are presented: (A) Screenshot taken from Woebot Health website. (B) Screenshot taken from Wysa website. (C) Advertisement of Anna by Happify Health. (D) Screenshot taken from Happify Health website.
(Zoha Khawaja)

A great deal of effort has gone into gaining user trust and strengthening the digital therapeutic alliance with chatbots, including giving chatbots humanistic qualities so they resemble and mimic conversations with actual therapists, and promoting them as “anonymous” 24/7 companions that can replicate elements of therapy.

Such an alliance may lead users to inadvertently expect the same patient-provider confidentiality and privacy protections they would have with their health-care providers. Unfortunately, the more deceptive the chatbot is, the more easily a digital therapeutic alliance can form.

Technological design: Is your chatbot trained to assist you?

The third therapeutic misconception occurs when users have limited knowledge about possible biases in the AI’s algorithm. Marginalized people are often left out of the design and development stages of such technologies, which can result in them receiving biased and inappropriate responses.



When such chatbots are unable to recognize dangerous behaviour or provide culturally and linguistically relevant mental health resources, this can worsen the mental health conditions of vulnerable populations who not only face stigma and discrimination, but also lack access to care. A therapeutic misconception occurs when users expect the chatbot to benefit them therapeutically but are instead given harmful advice.

Lastly, a therapeutic misconception can occur when mental health chatbots are unable to advocate for and foster relational autonomy, a concept that emphasizes that a person’s autonomy is shaped by their relationships and social context. It is then the responsibility of the therapist to help restore a patient’s autonomy by supporting and motivating them to actively engage in therapy.

AI chatbots present a paradox: they are available 24/7 and promise to improve self-sufficiency in managing one’s mental health. This can not only make help-seeking behaviours extremely isolating and individualized, but also create a therapeutic misconception where individuals believe they are autonomously taking a positive step towards improving their mental health.

A false sense of well-being is created in which an individual’s social and cultural context and the inaccessibility of care are not regarded as contributing factors to their mental health. This false expectation is further reinforced when chatbots are incorrectly advertised as “relational agents” that can “create a bond with people…comparable to that achieved by human therapists.”

Measures to avoid the risk of therapeutic misconception

Not all hope is lost with such chatbots, as some proactive steps may be taken to reduce the likelihood of therapeutic misconceptions.

Through honest marketing and regular reminders, users can be kept aware of the chatbot’s limited therapeutic capabilities and be encouraged to seek more traditional forms of therapy. In fact, a therapist should be made available for those who would prefer to opt out of using such chatbots. Users would also benefit from transparency about how their information is collected, stored and used.

Active involvement of patients throughout the design and development stages of such chatbots should also be considered, along with engagement with multiple experts on ethical guidelines that can govern and regulate these technologies, to ensure better safeguards for users.
