
The Ethics and Character of Creating Personhood

As artificial intelligence continues to develop, the list of ethical questions regarding its creation and use grows. This is as it should be. We ought to be worried about intellectual property, privacy, bias, and accountability, as well as about the effects on the environment, education, jobs, and so forth. We ought to be worried about human behavior when it comes to the use of AI. In many fictional works, however, the worry tends to be about the behavior of the machines. Will they overtake humanity and threaten our freedom, our lives, our species? The Matrix, 2001: A Space Odyssey, Ex Machina—in all of these stories, the machines’ drive for self-preservation overrides everything else in their programming. Whether or not these machines are conscious is a question for another time, but I take it that many, if not most, readers or viewers would take these machines to be conscious in order to explain their extreme behavior. 

It also seems to be the case that some people working in AI have this as a goal: to create consciousness. In the fictional case of Ex Machina, Nathan wants to create a machine that has passed a new version of the Turing test—not just one that can convince someone it is thinking, but one that is thinking; one that might not be a person, but certainly is a unique individual experiencing being or self. Others, like Elon Musk, are interested in uploading one’s thoughts, beliefs, memories, etc.—one’s entire mind or self, if you will—to a machine in order to preserve oneself in this future technology. These two possibilities raise many metaphysical questions about personal identity: should we consider a thinking machine a person or something akin to a person, and would that machine with my mental data be me? The related ethical question is not about the behavior or desires of the machines. Rather, using a fictional account, I would like to focus the worry on, yet again, the behavior of human beings and, in particular, how humans will treat a conscious being of our own creation. 

No stranger to these questions, novelist Kazuo Ishiguro, who graduated with degrees in Philosophy and Literature from the University of Kent at Canterbury, raises similar questions in the dystopian novel Klara and the Sun. Through his characters, Ishiguro proposes different ways to conceive of what it means to be an individual, and the way that others know and care for an individual is at the heart of the story. Paul, father to the main human character, young Josie, presents the reader with a particular answer to the question of personal identity centered around a “poetic sense” of the heart. Klara, the main AI character and narrator, also believes in “something special” that makes us who we are. 

In the futuristic world Ishiguro has created, some families have chosen to have their children “lifted,” or genetically modified for enhanced abilities and an ostensibly better life. They are schooled at home and live what many would call a lonely life. Many lifted children, like Josie, have an artificial friend (an AF). Josie’s AF, Klara, narrates the story, and so the reader gets a picture of her internal world, full of ideas, desires, and emotions. AFs are designed to study their children until they know them so well that they can truly serve as the perfect best friend. As the story progresses, it becomes clear to Klara that this is not the only reason Josie’s mother, Chrissie, purchased her. The process of lifting left Josie ill, and Chrissie has plans for Klara to know her daughter so well that she would be not only the best possible friend for Josie, but also the best possible Josie.

Klara is tasked with learning Josie’s mannerisms, the odd way she walks, the way she talks, her most personal and secret thoughts. Unbeknownst to Josie, her mother is also having a perfect physical likeness of her daughter created for Klara to inhabit should Josie die. Chrissie and the designer of this new body, Mr. Capaldi, ask Klara to become the new Josie, to “learn her till there’s no difference between the first Josie and the second.” Perhaps there would be no discernible difference, but on many views of personal identity, this would not be the very same Josie. Not so for Chrissie and Capaldi. They tell Klara that she is not being asked to merely mimic Josie’s behavior but to “continue her.” 

Capaldi argues that there is “scientific proof” that Klara can do this. He rejects any notion of a soul or spirit that persists through time and change and makes an individual the unique individual that they are, calling this line of thinking “sentimental.” He argues that we want to believe that there is “something that’s unique and won’t transfer,” but we know now that this isn’t so. If, in this futuristic world, there is empirical proof of his conclusion, then we do not need faith to understand personal identity, but only rationality. 

Klara seems to have both. As an AI, she’s been programmed as a rational being. She makes careful and calculated decisions based on her observations. Nevertheless, she seems to have a kind of faith; it is her unjustified belief in the power of the sun to save Josie from her illness that causes her to put herself at risk. Even so, there seems to be something that fuels Klara’s doubts about her ability to “continue” Josie. 

Josie’s father, Paul, has doubts. The rational side of Paul believes Capaldi is right; he calls the alternative view “superstition” and states that there might be “nothing there our modern tools can’t excavate, copy, transfer.” Still, Paul doesn’t seem to be able to fully commit to believing in a continuation of Josie. Paul worries that Chrissie will never love the continuation the way she loves Josie and without that, it won’t really be her. 

Derek Parfit proposed changing the question from “what makes it true that some individual in the future will be me?” to “what makes it rational for me to care in an egoistic way about some future individual?” In various thought experiments throughout his work (particularly in Reasons and Persons), Parfit argued that identity is not “all or nothing.” Rather, a person’s continuing to exist is a matter of certain relations among mental states continuing over time. 

Paul’s view is not Parfit’s view, but there are some common ideas here. For Parfit, what matters is relation R, which is a psychological connectedness or continuity with the right cause (which can be any cause, on the wide view that he adopts). Like Capaldi, Parfit holds that there is no “separate identity”; a person isn’t a soul, but just ever-changing thoughts and body. On Parfit’s view, given a wide interpretation of relation R, the continued Josie could very well be Josie.

Still, Parfit argues that the traditional question of persistence when it comes to personal identity is not the one that matters; rather, what we ought to ask is, “what makes it rational for me to care in an egoistic way about some future individual?” It doesn’t seem to be the case that the current Josie has any real reason to care about this continued Josie in the future. Her family and friends would have reason to, but it’s also not clear that they will. 

The relevant and interesting overlap in Parfit’s view and Paul’s view is the notion of “care.” For Parfit, it is whether and how one cares for a future self that matters. For Paul, it is whether and how Josie’s friends and family will care for a future Josie that matters. His doubts about Chrissie’s ability to love the continued Josie leave him even more doubtful that it would indeed be Josie. Capaldi had said that “the new Josie won’t be an imitation. She really will be Josie. A continuation of Josie.” If Chrissie and Paul—or others who know and love Josie—won’t truly believe it, won’t love her as if she is Josie, then in fact, she won’t “really be” Josie and whether or not there is something excavated that remains won’t matter at all. 

And what about Klara? As Klara narrates the story, the reader comes to see her as a conscious being with a rich inner world full of thoughts, memories, desires, hopes, and emotions. That she is artificial shouldn’t matter much to the question of her identity as an individual. On Parfit’s view, relation R, a psychological connectedness or continuity with the right cause, pertains to Klara from the moment she is “new” (not born) to her slow fade at the end of the book. To the reader, it seems obvious that Klara is conscious and having experiences and that the continuity of those experiences gives evidence of the continuity of an individual and maybe even of a person. 

Moreover, Klara cares about a future self. When she comes to realize her role in continuing Josie, Klara, gesturing to herself, asks, “what would happen to . . . all this?” Chrissie says, “that’s just fabric,” without caring much for Klara’s own concerns about Klara’s future self. Klara cares that she—Klara—will be the best AF for Josie. She cares about whether she can survive a trip to the barn at sundown and about what will happen to her when she drains her fluid. Klara cares for a future self in the way that Parfit argues “matters,” which is more evidence that the “continued Josie” would really be Klara. 

Still, Klara is missing something that Josie has, the feature that, for Paul, is necessary for an individual to be an individual. For Paul, the “something special” that would make Klara Klara rests in the way others care for Klara. From the housekeeper, Melania, to Josie’s friends, to Josie herself, the reader experiences a wide range of attitudes toward Klara. 

Klara is, of course, not human, but the whole book is Klara thinking about the past. Perhaps Chrissie is right that this capacity for memory is a necessary condition for being human, but perhaps it is also a necessary condition for personal identity. This would be consistent with Parfit’s relation R and would therefore make it the case that there is enough continuity between Klara at the beginning of the book and Klara at the end to say that there is one conscious being—not a human being—that just might qualify as a person. Paul’s own intuitions about personal identity center around a “poetic sense” of the human heart. Klara certainly doesn’t have a human heart, but if Paul’s sense of this phrase is poetic and not literal, perhaps again we have reason to conclude that there is something to being Klara. 

If the right answer to the question of personal identity has to do with how others love and care for us—or even whether we ourselves have reasons to care about a future individual—it seems correct to conclude that the continued Josie would not be Josie at all, but Klara. Josie’s parents would not fully love the continuation the way they love their daughter, Josie herself has no good reason to care for this future individual, and Klara has reasons to care for this future individual and, in fact, does. Coupled with the rich internal life that Ishiguro describes, it seems correct to also conclude that Klara satisfies Parfit’s relation R, and the AF that is waiting to be purchased at the beginning of the story is the same AF experiencing her slow fade at the end. 

As readers, we are privileged with this insight. Josie, her parents, her friends, and her housekeeper are not. They have the epistemic problem of not knowing whether there is anything it’s like to be Klara. Without that knowledge, of course, they may not see anything wrong with treating Klara like an appliance or a toy. With this knowledge, the reader might judge some of these characters as unkind, uncaring, and even cruel. Klara has thoughts, emotions, desires, memories, and she herself cares for others. Any individual with such a conscious life would be deserving of care, kindness, and compassion. 

Like the characters in the novel, we ourselves have the same epistemic problem of not knowing whether there is anything it is like to be an artificially intelligent machine. John Searle argued that AI will always lack intentionality and, therefore, lack a mind or conscious states. Ishiguro’s novel shows it is at least epistemically possible for AI to have conscious states and intentional states. While that particular possibility does not prove that it is physically or metaphysically possible, there are other reasons to err on the side of caution. 

One cannot ever know in any Cartesian sense that anyone other than oneself is conscious. Yet we believe that others are, and we treat them that way. We show kindness when others seem to be suffering; we build friendships and other intimate relationships assuming we know something about the poetic sense of the heart of others to whom we feel close, all while knowing full well that we can never “get inside their head.” When it comes to AI, particularly in the form of something like Klara, why wouldn’t we err on the side of caution and treat these beings with kindness and care? 

Ishiguro chose to call these beings “artificial friends,” but friendships are not one-sided. Klara cares for Josie, and Josie ought to care for Klara. At times it seems she does, but not with the kind of intimate friendship of which Josie is clearly capable.

Given recent developments in systems like ChatGPT, we have many reasons to be nervous about the future of AI. Will we create something that can pass the Turing test? If we do, at what point will we conclude it is conscious and has a mental life like that of a human being? There has always been anxiety about the AI itself, if and when we do create one with consciousness. What will it do? Will its instinct for survival outweigh any program it has to serve human beings? 

Perhaps we should be just as worried about what we will do. If we have even some reason to believe in the epistemic possibility of a conscious AI—one with genuine thoughts, desires, and emotions—how ought we to treat it? Klara was purchased essentially to exist as someone other than herself. If the creators really believed she could become Josie—a thinking, feeling, conscious Josie—they had an immoral project to begin with. Creating consciousness will come with huge responsibilities on our own behalf. 

This fictional account highlights the quandary of potentially continuing existence after death and the question of how it would change human behavior. However, it raises another question, often overlooked in these explorations: if caring for an individual is a measure of identity, what should we want for the AI? 

Leigh Duffy
Associate Professor, Philosophy at Buffalo State

Leigh Duffy is Associate Professor of Philosophy at SUNY Buffalo State University. Her work is primarily at the intersection of Philosophy of Mind and Epistemology, and she is particularly interested in widening the scope of epistemic tools to understand consciousness, the mind, and the self.

