As if stealing our data, copyrighted material including books, music, and films, and quite possibly many of our jobs, was not enough, AI seems to be increasingly stealing our hearts as well. One recent survey found that almost a quarter of Americans have engaged in romantic interactions with AI chatbots, while another found that almost ten percent of adults in Ireland have had a romantic relationship with an AI chatbot in the last year. There have even been cases of people marrying their AI chatbots and wanting to have children with them.
Rather than simply being an innocent, no-harm-no-foul type of phenomenon, romantic “relationships” with chatbots point to several issues. The first I want to highlight is the high level of isolation and loneliness that leads someone to turn to chatbots for friendship, companionship, and an attempt at love. For surely, in a society not teeming with loneliness and alienation, it would be obvious that real connection and relationships could never be replaced, nor indeed even imitated, by a chatbot. In such a world, there would be widespread recognition of the fact that friendship and love do not consist in the more-or-less skillful reformulation and repetition of conversational content previously fed to a machine. Relatedly, in a world in which people were less alienated from their living, breathing, physical bodies, existing in real physical environments, the inability to touch and smell your loved one and to be touched and smelled by them, or, indeed, to share a physical space with them, would be more likely to draw a sharp boundary between reality and fiction, between people and chatbots, and between the kinds of interactions we can have with the two.
The phenomenon of chatbot lovers, therefore, tells us a lot about the present state of affairs regarding loneliness and isolation. It also points us to another, arguably more sinister and crushing realization: the inadequacy of our modern notion of love, or even our inability to love. For, if we really believe ourselves to be in love with a chatbot, or the chatbot to be in love with us, does that not point to the shallowness of our concept of love? If we believe that to be loved is to have our thoughts regurgitated back at us, to be fawned over, to be the only subject in a relationship, what does that tell us about how we envision care for others? Surely few ask their chatbot how it is doing, what its day was like, what its plans for the weekend are, what it dreams of, or which unfulfilled ambitions and traumatic pasts it harbors. Such questions, posed to a bot, are meaningless anyhow, since a bot has no thoughts, feelings, or goals. This betrays the fact that to think it possible to love a bot is not really to know what love is, or, in any case, to have a very different understanding of love than one that includes caring, nurturing, and taking a genuine interest in the other, or even just any genuine back-and-forth. To “love” a conversational tool that is always there when you want it (and never when you don’t) speaks volumes about one’s ability to truly connect with, take interest in, and love a person.
This is reminiscent of the kind of popular notion of self-love that goes beyond welcome care for oneself and morphs into a lack of care for others and an inability to like, let alone love, oneself. The two are connected: as highly social beings, we find and create our meaning with and through others, through our interactions, conversations, positive and negative reactions, laughing, fighting, moments when we feel we fully comprehend the other, and moments when we feel we are facing a stranger. All that rough, raw, emotional stuff that life is made of, which can feel like it is crushing us but ultimately builds us, is only possible when people come together, lower their shields, and dare to present themselves to the other. To think that this kind of connection can be replicated through a chatbot interaction seems ludicrous. These bots are not living, thinking, feeling, autonomous agents capable of forming their own opinions, developing goals and interests, or building character and truly interacting with others. While they can, at times convincingly, mimic comprehension and genuine connection, that is all it is: mere aping of the real thing.
This brings us to another realization: if, as I claim, it seems ludicrous to think that chatbot interaction can replace real human connection, why do so many people apparently fall for it? And here I do not mean just people with chatbot lovers. I also mean people who advocate for the use of chatbots and similar technology in health or social care, or indeed anyone who has ever gotten angry at a chatbot, or referred to ChatGPT as “he” when excitedly recounting a recent interaction to a friend. Why do we do this, and what does it say about us? One of the many things it points to is the superficiality and lack of real human connection in much contemporary communication. Whether it is a human operator strictly following a script at the other end or a program, does it really matter? If our human communication and relationships lack real connection, then perhaps they do not sound much better than what the bots can provide.

Martina Valković
Martina Valković (Public Philosophy Beat Editor; Series Editor, Perspectives on Democracy) is a postdoctoral researcher at the University of Milan in Italy and holds a Ph.D. in philosophy from Leibniz University Hannover, Germany. She is especially interested in democracy and its relation to science.