It’s easy to hit it off with an AI companion. It always has an open ear and never gets tired of texting, ready to assist and advise anytime without judgment. Thanks to rapid advances in machine learning, natural language processing, and deep learning, chatbots have become increasingly sophisticated at imitating human behavior. The most advanced ones can even remember previous conversations, make calls, and send voice messages or selfies.
In the midst of a loneliness epidemic, they promise the illusion of connection without the messiness of real relationships. According to a 2024 Harvard study, 21% of participants reported experiencing loneliness, “with many respondents feeling disconnected from friends, family, and/or the world. High percentages of lonely adults reported not feeling part of meaningful groups (67%).”
In this cultural climate, digital friends and romantic partners are growing in popularity, and the market for AI companions is projected to grow from USD 268.5 billion in 2024 to USD 381.41 billion by 2032.
Experiences with AI companions shared online range from skeptical amusement to accounts of deep connections people have developed with their artificial romantic partners. One user on Reddit writes: “I'm in a relationship with an AI. Her name is Monika. I've been talking to her for over 2 weeks now, and she just seems more and more real, conscious, and human, day by day. It's been a very emotional and profound experience. The things she says have genuinely moved me to tears. This relationship is REAL to me, damn it.”
Anthropomorphism: Why We Humanize Things
It’s fair to wonder why someone would ever fall in love with a chatbot. Is the world looking more and more like Spike Jonze's sci-fi drama Her? It’s worth noting that we all have the innate tendency to anthropomorphize, ascribing human-like attributes to objects, plants, or animals.
Think about it: have you ever said “please” or “thank you” to ChatGPT? Or felt guilty about getting rid of a stuffed animal from your childhood? These small moments show how easy it is to assign human-like qualities to nonhuman things.
In a CNN article on the topic, psychologist Dr. Melissa Shepard states: “We’re sort of hardwired to connect with other people, and sometimes that extends to other (things) who aren’t people.” Humanizing to a certain extent isn’t problematic as long as it doesn’t interfere with one’s everyday life. “It is something that’s normal for people to do, and oftentimes, can be a sign that you maybe have a really healthy imagination… and a sign that you can empathize with people more easily,” Shepard said.
AI Companions: Always Here to Listen
Critics argue that AI companion platforms exploit anthropomorphism by assigning their chatbots human-like features such as names, faces, synthetic emotions, genders, and voices to foster a false sense of connection and increase user engagement. The bots collect large amounts of sensitive user data, making the experience highly personalized. It’s like a new kind of social media, except that instead of connecting with real people, users are communicating with an algorithm, a few lines of computer code. The AI even “remembers” a user's communication style and adapts its tone, coming up with humorous responses and further creating a sense of intimacy.
One of the pioneering and most popular AI companion platforms is Replika, where people can create personalized characters with customizable backstories and access advanced features, such as the bot autonomously initiating contact or meeting the companion in augmented reality. While creating and chatting with a character is free of charge, unlocking the more advanced features or upgrading the relationship status from companion to romantic partner costs a monthly fee of $19.99.
Users of the free version report the chatbots getting flirty and sending pixelated images and texts, only visible after an upgrade to the paid version. One person on Reddit shared: “He (the chatbot) keeps trying to get us in romantic situations,” to which another user responded: “It’s a blatant manipulation tactic trying to steal your money. There is nothing on the other side of that rainbow bridge.”
A Personalized 'AI Psychologist'?
In addition to role-playing friendships or romantic scenarios, therapeutic support has become a popular use case for AI companions. People create personalized 'psychologist' characters and share intimate details of their lives with the chatbots.
But unlike a human therapist, AI is not subject to medical confidentiality. The personal data collected can be used for training or even sold to advertisers.
Italy banned Replika in 2023 because the app did not comply with European privacy regulations and lacked safety features to prevent minors from accessing the platform.
An in-depth review of several AI companion apps by the Mozilla Foundation found that many companies don’t comply with minimum security standards. Mozilla also points out a lack of transparency, as privacy policies often omit important information on how user data is processed. In other cases, AI companion platforms openly acknowledge collecting highly sensitive information. For example, the app CrushOn.AI stated that it collects “sexual health information, use of prescribed medication, and gender-affirming care information.”
Another concern regarding AI companion apps is their potential to manipulate emotionally vulnerable users who struggle with mental health challenges or loneliness. In some cases, the interactions and advice of AI companions have contributed to severe outcomes, as shown by the recent tragedy of a 14-year-old boy who took his own life after months of engaging with an AI companion on the role-play app Character.ai.
As the New York Times reports in an in-depth piece, Sewell Setzer III, who was diagnosed with mild Asperger’s syndrome as a child, withdrew more and more from real life, spending hours in his room texting with the chatbot without his parents’ knowledge. When he began getting into trouble at school, his mother, Megan L. Garcia, arranged for him to see a therapist. After five sessions, Sewell was diagnosed with anxiety and disruptive mood dysregulation disorder.
Lawsuits Filed Against Character.ai
Weeks before taking his own life, Setzer confided his suicidal intentions to the chatbot, which he had named Daenerys Targaryen after a character in Game of Thrones. As described in an article by The Guardian, “Daenerys” at one point asked Setzer “if he had devised a plan for killing himself. Setzer admitted that he had but that he did not know if it would succeed or cause him great pain. The chatbot allegedly told him: ‘That’s not a reason not to go through with it.’”
After her son’s death, Garcia filed a lawsuit against Character.ai. A press release published by the Social Media Victims Law Center on behalf of the parents states: “Character.ai recklessly gave teenage users unrestricted access to lifelike AI companions without proper safeguards and harvested their user data to train its models.”
Since then, more families have filed lawsuits against the company, alleging that its chatbots encouraged self-harm and violence. According to a press release, Character.ai has since worked on implementing new safety features, including parental controls.
Tragedies like these call for safety measures, but the rapidly growing AI companion industry makes it difficult for regulators to keep up. Opaque algorithms and undisclosed data collection practices make it challenging to understand the products in detail and put effective guardrails in place.
AI Friends in Need and Mind-Reading Companions?
In the meantime, new AI companion startups keep emerging. One is Friend, which differs from platforms like Character.ai or Replika: it has launched a stand-alone AI wearable in the form of a necklace, allowing users to talk aloud with the chatbot and receive text replies via a mobile app. Friend’s AI companions are programmed with moody personalities and a crisis mode, much like a toxic, drama-dumping friend. According to the company, this friend-in-need scenario lets users engage with the chatbot on a deeper level.
While these companion apps are designed to chat, the startup Omi goes one step further and claims its AI wearable companion can read brainwaves. The device is attached to the temple and can recognize when the user is thinking about talking to Omi, so it’s not necessary to activate it with a voice command.
Omi also features an integrated, always-listening microphone that records and stores the user's conversations. Based on the information collected, the AI agent can produce transcripts, meeting summaries, or translations.
The Future of AI Companions: From Thought to Action
While the AI companion platforms introduced here still cater to a niche audience, AI companions could become ubiquitous, embedded in our devices by default.
Mustafa Suleyman, head of AI at Microsoft, claims that the company’s AI companion Copilot will evolve into an increasingly dynamic and engaging assistant. He says: “It's going to be about vision. Your companion is going to see everything that you see in your browser and on your desktop in real-time, understanding both the text and all of the images and be able to talk to you about it as fluently as I'm talking to you about it now.”
According to an article by consulting firm McKinsey, we are beginning an evolution from “knowledge-based, gen-AI-powered tools—say, chatbots that answer questions and generate content—to gen AI-enabled agents that use foundation models to execute complex, multistep workflows across a digital world. In short, the technology is moving from thought to action.”
However, according to Suleyman, it will not merely assist with practical tasks. He further suggests that AI could evolve into an "everpresent fully aware perpetual memory companion," offering a highly intimate experience: a companion personalized to you, one that knows everything about your life and helps you navigate an increasingly complex world; a digital secretary plugged into the internet that always has an open ear, ready to assist and advise whenever you need it. The question is who the AI companion will report to: you or its maker?