“Chidi Kills Janet!” Unpacking Sentience and the Moral Status of AI


This clip, “Chidi Kills Janet!,” from the first season of NBC’s The Good Place, lets us gauge our intuitions about how computers fit with our concepts of life, intelligence, sentience, and so on. In the portion of my “Ethics and Social Data Science” course devoted to artificial intelligence, I ask students whether they would ever feel bad about powering down a computer. With few exceptions, my students respond “no.” “Why?” I ask, and usually receive a bevy of responses to the effect that computers are not living creatures, that they are property, and that they do not feel pain.

Then we watch the clip from The Good Place:

In it, two characters, Chidi and Eleanor, are dead and exist in the afterlife. As part of their adventures navigating the afterlife, they face a dilemma: they must “reboot” their artificial intelligence companion, Janet, to serve a greater good. Rebooting requires powering Janet down, which “kills” her by wiping her memories and returning her to a more child-like state of being. Chidi is uncomfortable with the idea of resetting her because she has learned and grown, though he ultimately overcomes this objection and agrees to go along with the plan. Eleanor, on the other hand, is already a somewhat morally dubious character; she is far more comfortable with the idea that Janet is not a “living” creature and is persuaded by the clear greater good that rebooting her would serve. For her part, Janet assures the two that it is okay to power her down and reset her, as she does not feel pain. However, she warns Eleanor and Chidi that, as they approach her off switch, she will begin to beg for her life as a fail-safe mechanism included by her designers.

The humorous tension in the scene comes from the fact that, as both Chidi and Eleanor approach the off switch, Janet does indeed beg for her life in increasingly visceral (and somewhat ridiculous) ways, making it difficult for either character to actually power her down. Each hesitates and, ultimately, each chickens out. The tension is relieved when a fourth character, Jason, appears and starts to move toward the off switch with no idea what the button is for. Chidi overreacts and accidentally hits the reset button while trying to stop Jason, thus powering down and “murdering” Janet.

To me, this clip functions as a kind of Rorschach test for students. I ask why both characters hesitate to power down Janet, and whether the students would themselves feel any hesitation. After all, she is just a computer. If the students do feel like they might now hesitate, where does this hesitation come from? What factors might be shaping our thinking about morality in this specific situation?

There are a number of elements that students often pull out in our discussion: Janet’s capacity for growth, her expressed wish to continue living as the characters approach her, and her vivid emotional appeals. This particular clip can be used as a vector for many different discussions, but for my purposes, I use it as a launching point for a broader discussion about philosophy of mind, animal studies, and their intersections with contemporary debates in the study of AI. In particular, I use it to introduce the concept of “sentience” and its overlap with moral status.

Sentience, generally, refers to the ability to have subjective experience, particularly experience that carries a “valence” of negative or positive affect (such as the ability to feel pain or pleasure). In the clip, the more Janet expresses “fear” or begs for her life, the more we begin to feel that this reflects an internal life containing such states. A key point of discussion is how our beliefs about the internal experiences of other people and of non-humans are often part of how we ground their moral status. I highlight how, in his 1789 work An Introduction to the Principles of Morals and Legislation, Jeremy Bentham argues that animals’ capacity to suffer is why we should treat them as moral subjects. Indeed, Bentham’s arguments have served as a foundation for much of the contemporary work on animal welfare. The new wrinkle, however, is this: if machines are capable of having such valenced states, don’t they deserve moral status as well?

One of the distinctions that students often (rightly) make at this point is that Janet has told us in advance that what she is doing is a trick. At the beginning of the clip, she states that she does not actually have valenced experiences; she simply gives off the appearance of a desire to live as a programming fail-safe. How are we to deal with this from a moral perspective? We do not resolve this question in the classroom that day. Instead, the students begin to explore it through a set of readings on contemporary debates about the moral status of AI, anthropomorphism, and whether or not we might consider AI a type of “philosophical zombie” (p-zombie).

Possible Readings

I offer the following readings as starting points for the next part of that conversation and hope you find them useful in your own teaching:


The Teaching and Learning Video Series is designed to share pedagogical approaches to using video clips in teaching philosophy. All posts in the series are indexed by author and topic here. If you are interested in contributing to this series, please email the series editor, Gregory Convertito, at gconvertito.ph@gmail.com.

Nicholas Proferes

Dr. Nicholas Proferes is an Associate Professor in the School of Social and Behavioral Sciences at Arizona State University and is the co-director of the AI & Ethics Working Group at ASU’s Lincoln Center for Applied Ethics. His research focuses on contemporary ethical issues in the digital landscape.

