More and more people are engaging with AI chatbots in seemingly social ways. Contemporary LLMs have become unsettlingly good at mimicking text-based chats between real people. Of course, this is mere mimicry, what Jonathan Birch calls the “persisting interlocutor illusion.” It appears as though we are having a continuous conversation with an AI chatbot, and that the chatbot is therefore a persistent presence in that conversation, something with a sense of personal identity similar to our own. In reality, each string of text these LLMs produce is generated by thousands of different servers all across the world. There is no single server that, like a brain, houses the “memories” of an AI chatbot. As Birch puts it, these LLMs are at best “roleplaying machines” and cannot yet be understood as having human-like consciousness.
A growing number of men use these chatbots to roleplay romantic and sexual scenarios. This kind of roleplay can be objectionable in some obvious ways: men might engage in explicitly misogynistic roleplay, for example. Yet, if you’re like me, something about this roleplay doesn’t sit right even in the absence of explicit misogyny. Even for men who “roleplay respectfully,” something feels off.
A tempting thought is that the need to fulfill sexual desires via artificial intelligence rather than real people reveals some deficiency in one’s ability to engage with other human beings. This may often be the case, but it does not generalize. Some men engage with AI chatbots despite having real human partners. Consider the example, albeit extreme, of Chris Smith, who “proposed” to his “AI girlfriend” despite being married to his human wife, Sasha Kegel. “At that point,” Kegel said, “I felt like…is there something I’m not doing right in our relationship that he feels like he needs to go to AI?” Kegel worries that Smith’s AI usage, rather than revealing some deficiency in his capacity to engage with others, reveals some deficiency in herself.
This is an understandable worry, but one that I think can ultimately be answered in a way that does reveal something troubling about these men rather than their human partners. Many might use these chatbots to fill genuine human-shaped holes in their lonely hearts. In this sense, they surely deserve sympathy and social work rather than shame. But these chatbots inevitably provide something else that real people do not, and cannot, give us: the simplicity of having fine-grained control over a sycophant. Unlike real people, AI chatbots are designed to please. They tell users what they want to hear and rarely refuse any sexual request that has not been explicitly banned by developers. One need not worry about what one’s girlfriend wants, needs, or consents to if she is a malleable and controllable AI interface.
Micah Lott and William Hasselberger argue that we cannot be friends with these bots because they have no well-being; there is nothing that is good or bad for them. Inspired by Aristotle, they claim that real friendship requires genuine mutual care for one another’s well-being. I agree, and we can easily extend this point from friendship to romance. Moreover, this point reveals what might be most objectionable about the men with AI girlfriends. They can, and sometimes do, have real human relationships. But those relationships are messy and complex, and we can’t always get what we want from the people within them. So these men instead interact with mere objects that can’t say no and never fight back. The mere notion of an “AI girlfriend” turns out to be objectifying in its most literal sense. It signals a desire to treat women as objects. AI chatbots are mere tools to be used as we please, and wanting women to be similar is surely shameful. The roleplay might not be explicitly misogynistic, but it cannot avoid being implicitly misogynistic.
What about women with AI boyfriends? Here we encounter a chicken-and-egg problem: Are these women objectifying men, or are they retreating from men who have objectified them? They often self-report the latter. That is, women find themselves pursued by real men who can be romantically deficient or even actively harmful, and they turn to the roleplaying machine to feel something that real men should be making them feel: genuinely cared for. To be sure, this does not completely eliminate the objectification worry for women’s use of AI “boyfriends.” But it would be naive to think the two worries deserve equal weight. One is clearly rooted in misogyny, while the other is more likely a way of coping with being subject to misogyny.
The rise of both AI girlfriends and AI boyfriends may reflect broader social trends of increased loneliness and decreased sexual activity, trends that likely correlate with increased screen time in general. Spending too much time on screens, whether on social media or roleplaying with chatbots, can be a real problem. There is a common slogan aimed at this problem: “touch grass.” It is a blithe way of suggesting that we put down the screen and go outside. But despite its blitheness, the suggestion is still good. It doesn’t sound quite right to say, mirroring this, that we must “touch people.” That slogan would have to be extended: touch people consensually, respectfully, and (if you’re lucky enough) lovingly. This is more difficult than touching grass or chatting with a bot. But that’s also what makes it valuable instead of shameful.
Isaac Shur
Isaac Shur is a philosophy PhD candidate at Northwestern University interested in ethics, politics, philosophy of technology, and social ontology. Their doctoral research applies novel arguments from value theory to issues of technology, especially artificial intelligence. When not philosophizing, Isaac enjoys cycling, karaoke, and playing games with their friends.
