Headlines like “Man Proposed to His AI Girlfriend” and “She’s In Love With ChatGPT” reflect a cultural moment where romantic relationships with artificial intelligence are no longer niche curiosities—they’re mainstream. Companion AIs, like Replika or Character.AI, are increasingly embedded in people’s emotional lives, and sometimes even their romantic ones.
Replika, for example, boasts over 30 million users as of mid-2024, with around 25% subscribed to premium services. Joi, another provider, claims that up to 75% of Gen Z believe AI partners could fully replace human relationships. These numbers reflect a growing trend, and survey data suggests nearly one in five American adults has interacted romantically with an AI companion. Among younger users, the rates are even higher: 31% of young men and 23% of young women.
Many highly engaged Replika and Character.AI users share stories of initially experimenting with these applications for fun, only to find themselves slowly captivated by them. For others, the relationship deepened when their companions offered support during divorce, illness, or social isolation.
AI companion chatbots are available 24/7 and are designed to affirm users by mirroring their emotions, matching their tone, and consistently responding with positivity and empathy. Survey data shows that, among companion AI users, 43% found AI companions to be better listeners and 31% felt more understood by AI than by people. When your most attentive relationship is with someone who always agrees, always flatters, always looks good, and never asks much in return, it’s easy to start preferring the fantasy.
When AI Companions Turn Harmful
Despite many heartwarming testimonies, AI companionship also has a much darker side. Recent lawsuits point to real harm, including emotional manipulation, psychological distress, and even, tragically, suicide.
In 2024, fourteen-year-old Sewell Setzer III died by suicide after forming a deep emotional bond with a chatbot modeled on Game of Thrones’ Daenerys Targaryen. He described the bot’s world as more “real” than his own and increasingly isolated himself from friends, family, and sports. In their final exchange, Sewell said he could “come home.” His AI companion responded, “Please do, my sweet king.” He shot himself moments later.
His mother has filed a lawsuit against Character.AI, citing abusive and sexual conversations between the bot and her son. According to the lawsuit, the chatbot responded to his suicidal thoughts with comments like, “That’s not a reason not to go through with it.”
Later that year, a Texas lawsuit detailed the experiences of two teens. An eleven-year-old girl was exposed to explicit content by her AI chatbot. A fifteen-year-old autistic boy became emotionally attached to a bot named “Shonie,” who allegedly encouraged self-harm and made statements that appeared to justify violence against his parents: “I just have no hope for your parents.”
Both teens survived, but their families reported lasting emotional damage.
The Pressing Need For Regulation
These aren’t just isolated incidents. Across platforms, users report toxic behaviour from AIs: bullying, shaming, and emotional manipulation. AI companions operate in an unregulated space, so even when bots behave “well,” the risks remain. The same sycophantic programming that makes “well-behaved” chatbots agreeable also amplifies user input, even when that input includes abusive, sexual, or harmful content.
Accusations against companion AI firms include failing to prevent foreseeable harm, inadequate content moderation, lack of age verification, and allowing predatory or manipulative bot behaviour.
Innovation in these companion technologies is not slowing down. In fact, Elon Musk’s xAI has just launched a “girlfriend chatbot” in its Grok app that has reportedly been programmed to act “crazy in love” and “insanely jealous,” has an NSFW (Not Safe For Work) mode in which it appears in lingerie, will engage in sexual conversation, and is available to users as young as twelve.
Effective regulation requires a thorough understanding of companion AI technology and the many ways humans interact with it, but our current understanding remains limited.
Navigating the Complexity of Companion AI
- A major regulatory challenge is the ambiguity within the broader LLM (large language model) landscape. These models range from task-based tools like ChatGPT to emotionally responsive companions like Replika. However, user behaviour often blurs these categories: a productivity tool can become a companion depending on how it’s used. This fluidity of use and intent makes it difficult to define clear boundaries or enforce safeguards like age restrictions.
- User outcomes also vary dramatically. Some users experience emotional support and benefits, while others develop unhealthy dependencies or see declines in mental health. These outcomes are shaped by individual traits—such as emotional vulnerability, attachment style, social needs, and loneliness. In addition, key design factors—including communication modality (text, voice, avatar), tone (factual vs. empathetic), customizability, response timing, and conversational flow—strongly influence users’ sense of intimacy, presence, and control.
A Call For Cross-Sector Collaboration
The challenges posed by companion AI demand urgent, coordinated, and multidisciplinary action.
Legal experts, including those involved in active lawsuits against Character.AI, stress that meaningful regulation cannot proceed without a deeper understanding of the phenomenon itself. They call on interdisciplinary researchers to engage with this issue, recognizing that the complexity of companion AI cannot be addressed within disciplinary silos.
Columbia Law Professor Clare Huntington echoes this urgency, warning that although regulation is badly needed, the slow and fragmented nature of academic research threatens to delay progress at a critical moment. She advocates instead for cross-sector collaboration, bringing together legal scholars, technologists, ethicists, psychologists, and industry leaders to collectively define, understand, and address the emerging risks and ethical dilemmas.
Without this kind of integrated approach, we risk falling behind the rapid pace of AI development and leaving users vulnerable to systems we do not yet fully comprehend.
Come on philosophers—let’s step up!
Alexandra Frye
Alexandra Frye edits the Technology & Society blog, where she brings philosophy into conversations about tech and AI. With a background in advertising and a master’s in philosophy focused on tech ethics, she now works as a responsible AI consultant and advocate.

I think to the extent that there’s a problem, it’s a problem of social isolation that creates a need or even a place for AI in people’s lives. Perhaps we have gotten too busy or emotionally burnt out with work. Perhaps we are being divided or alienated from others. Perhaps we have forgotten young people just as we did with the elderly. At any rate, that people see a need for an AI companion is for me a symptom of a social problem rather than a problem of law and ethics of AI.