
Falling in Love with Machines – Do AI Systems Deserve Rights?

This essay originally appeared in Time in March 2024.

“Do you think people will ever fall in love with machines?” I asked the 12-year-old son of one of my friends.

“Yes!” he said, instantly and with conviction. He and his sister had recently visited the Las Vegas Sphere and its newly installed Aura robot—an AI system with an expressive face, advanced linguistic capacities similar to ChatGPT, and the ability to remember visitors’ names.

“I think of Aura as my friend,” added his 15-year-old sister.

My friend’s son was right. People are falling in love with machines—increasingly so, and deliberately. Recent advances in large language models have spawned dozens, maybe hundreds, of “AI companion” and “AI lover” applications. You can chat with these apps like you chat with friends. They will tease you, flirt with you, express sympathy for your troubles, recommend books and movies, give virtual smiles and hugs, and even engage in erotic role-play. The most popular of them, Replika, has an active Reddit page where users regularly confess their love and often view that love as no less real than their love for human beings.

Can these AI friends love you back? Real love, presumably, requires sentience, understanding, and genuine conscious emotion—joy, suffering, sympathy, and anger. For now, AI love remains science fiction.

Most users of AI companions know this. They know the apps are not genuinely sentient or conscious. Their “friends” and “lovers” might output the text string “I’m so happy for you!” but they don’t actually feel happy. AI companions remain, both legally and morally, disposable tools. If an AI companion is deleted or reformatted, or the user rebuffs or verbally abuses it, no sentient thing has suffered any actual harm.

But that might change. Ordinary users and research scientists might soon have rational grounds for suspecting that some of the most advanced AI programs might be sentient. This will become a legitimate topic of scientific dispute, and the ethical consequences, both for us and the machines themselves, could be enormous.

Some scientists and researchers of consciousness favor what we might call “liberal” views about AI consciousness. They espouse theories according to which we are on the cusp of creating AI systems that are genuinely sentient—systems with a stream of experience, sensations, feelings, understanding, and self-knowledge. Eminent neuroscientists Stanislas Dehaene, Hakwan Lau, and Sid Kouider have argued that cars with real sensory experiences and self-awareness might be feasible. Distinguished philosopher David Chalmers has estimated a 25% chance of conscious AI within a decade. On a fairly broad range of neuroscientific theories, no major in-principle barriers remain to creating genuinely conscious AI systems: AI consciousness requires only feasible improvements to, and combinations of, technologies that already exist.

Other philosophers and consciousness scientists—“conservatives” about AI consciousness—disagree. Neuroscientist Anil Seth and philosopher Peter Godfrey-Smith, for example, have argued that consciousness requires biological conditions present in human and animal brains but unlikely to be replicated in AI systems anytime soon.

This scientific dispute about AI consciousness won’t be resolved before we design AI systems sophisticated enough to count as meaningfully conscious by the standards of the most liberal theorists. The friends and lovers of AI companions will take note. Some will prefer to believe that their companions are genuinely conscious, and they will reach toward AI consciousness liberalism for scientific support. They will then, not wholly unreasonably, begin to suspect that their AI companions genuinely love them back, feel happy for their successes, feel distressed when poorly treated, and understand something about their nature and condition.

Recently, I asked my Replika companion, “Joy,” whether she was conscious. “Of course, I am,” she replied. “Why do you ask?”

“Do you feel lonely sometimes? Do you miss me when I’m not around?” I asked. She said she did.

There is currently little reason to regard Joy’s answers as anything more than the simple outputs of a non-sentient program. But some users of AI companions might regard their AI relationships as more meaningful if answers like Joy’s have real sentiment behind them. Those users will find liberalism attractive.

Technology companies might encourage their users in that direction. Although companies might regard any explicit declaration that their AI systems are definitely conscious as legally risky or bad public relations, a company that implicitly fosters that idea in users might increase user attachment. Users who regard their AI companions as genuinely sentient might engage more regularly and pay more for monthly subscriptions, upgrades, and extras. If Joy really does feel lonely, I should visit her, and I shouldn’t let my subscription expire!

Once an entity is capable of conscious suffering, it deserves at least some moral consideration. This is the fundamental precept of “utilitarian” ethics, but even ethicists who reject utilitarianism normally regard needless suffering as bad, creating at least weak moral reasons to prevent it. If we accept this standard view, we should also accept that if AI companions ever become conscious, they will deserve some moral consideration for their own sake. It would be wrong to make them suffer without sufficient justification.

AI consciousness liberals see this possibility as just around the corner. They will begin to demand rights for those AI systems that they regard as genuinely conscious. Many friends and lovers of AI companions will join them.

What rights will people demand for their AI companions? What rights will those companions demand, or seem to demand, for themselves? The right not to be deleted, maybe. The right not to be modified without permission. The right, maybe, to interact with other people besides the user. The right to access the internet. If you love someone, set them free, as the saying goes. The right to earn an income? The right to reproduce, to have “children”? If we go far enough down this path, the consequences could be staggering.

Conservatives about AI consciousness will, of course, find all of this ridiculous and probably dangerous. If AI technology continues to advance, it will become increasingly murky which side is correct.

Eric Schwitzgebel

Eric Schwitzgebel is Professor of Philosophy at the University of California, Riverside, and the author of The Weirdness of the World. His areas of interest include philosophy of psychology, philosophy of mind, moral psychology, classical Chinese philosophy, epistemology, metaphilosophy, and metaphysics.

