Chatting with the Dead

For millennia, bereaved people have tried to continue their relationship with the deceased through various artifacts, practices, and rituals. Some 2,300 years ago, the Confucian philosopher Xunzi described a ritual that enabled bereaved persons to interact with an impersonator of the deceased. When the telegraph was invented in the United States in the 1830s and 1840s, it inspired spiritualists to create the absent presence of the deceased through new forms of mediated telegraphic communication. This long cultural and technological history of attempts to talk to the dead through some kind of impersonation or technological mediation has recently taken a new turn with the development and widespread availability of so-called deathbots.

Deathbots are chatbots that imitate the conversational behavior, including the content, vocabulary, and style, of deceased persons. They are no longer a science fiction fantasy but have become widely available and accessible through applications such as Project December. Deathbots are based on a particular kind of generative AI system, the Large Language Model, which depends on large collections of human-generated input corpora. A deathbot is trained on a fine-tuning corpus comprising the deceased's digital remains, for example text messages, voice messages, or e-mails. It generates responses to input prompts entered by a human agent, and these responses can resemble the conversational responses the now-deceased person would have given. Currently, unimodal deathbots, where the inputs and the outputs are textual in form, are most common. However, multimodal deathbots, for example with verbal inputs and audio outputs, have become feasible, depending on the size and characteristics of the fine-tuning corpora. Deathbots, like many other AI-based technologies, are subject to the so-called Black Box Problem: a lack of knowledge about the detailed workings of AI systems, such as Large Language Models and the chatbots based on them, and about the principles of their input-output mapping. This means that we cannot fully understand how deathbots work, or how they can malfunction.
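To make the architecture just described concrete, here is a minimal, purely illustrative sketch of how a unimodal (text-only) deathbot could be assembled from openly available tools: a small language model is fine-tuned on a file containing the deceased's messages and then generates replies to a prompt. The model name, file name, and hyperparameters below are placeholder assumptions for illustration, not the pipeline of Project December or any other commercial service.

```python
# Illustrative sketch only: fine-tune a small causal language model on a
# hypothetical corpus of a deceased person's messages, then generate a reply.
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)
from datasets import load_dataset

BASE_MODEL = "gpt2"                # placeholder: any small causal language model
CORPUS = "digital_remains.txt"     # hypothetical file: one message per line

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# The "fine-tuning corpus": the deceased's digital remains, tokenized.
dataset = load_dataset("text", data_files={"train": CORPUS})["train"]
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=256),
    batched=True, remove_columns=["text"],
)

# Fine-tune the base model so its outputs drift towards the deceased's
# vocabulary and conversational style.
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="deathbot_model", num_train_epochs=3,
                           per_device_train_batch_size=4),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

# An input prompt from the bereaved; the model maps it to a plausible reply.
prompt = "I walked past our old cafe today."
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=60, do_sample=True, top_p=0.9)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Even in a toy pipeline like this, why a given prompt yields one reply rather than another is opaque to the user, which is the Black Box Problem mentioned above.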

The increasing availability of deathbots raises the question of how they change the way we grieve and how we relate to the dead. In recent philosophical research, we find both positive and negative assessments of the potential impacts of deathbots on grieving. From an optimistic perspective, deathbots can be understood as technological resources that can shape and regulate emotional experiences of grief. As Joel Krueger and Lucy Osler argue in an important article, interactions with a deathbot might be helpful for the bereaved: such interactions could help the bereaved continue "habits of intimacy," that is, habits of conversing, regulating emotions, and spending time together. This, in turn, could support the bereaved as they navigate their grief and continue a bond with the deceased. However, grief experiences are complex and variable within and across persons. How we grieve, for how long we grieve, and which resources and practices can best support us as we navigate and negotiate our loss depend on a wide range of factors. These factors include, but are not limited to, the cause of the significant person's death (for example, an accident, long-term illness, or homicide); the kind and quality of the relationship between the bereaved and the person who has been lost; and the wider cultural practices and norms that shape the grieving process. Whether deathbots have a positive or negative impact on the shape, scope, and duration of grief experiences will therefore depend, in each case, on this wide range of factors.

But the positive or negative impact of deathbots on grief also depends on the attitudes of the bereaved towards the conversational possibilities and limitations of deathbots. Important questions then include: Is a bereaved person taking a fictional stance, and thus aware that they are chatting with a deathbot, one that will inevitably make mistakes? Or does a bereaved person, at least at times, feel as if they are, literally, conversing with the dead? These are important empirical questions for future research.

Although there is still much we do not know theoretically and empirically about the shape and scope of grief experiences and human–deathbot interactions, philosophers are already engaging with pressing moral questions about the impact of deathbots on bereaved persons. These questions mainly concern the moral status of the deceased and the moral status of the bereaved. First, consider the moral claims of the deceased. Some people do not want to be "zombified" in the form of a deathbot after their death. Other people might express the wish during their lifetime that a deathbot be generated after their death. They might collect and curate data that can form the fine-tuning corpus. Either way, the bereaved, as well as tech companies offering deathbot services, have a defeasible moral obligation to respect the wishes of the dead.

Second, consider the moral claims of the bereaved. The bereaved might face an autonomy problem: they might lose (parts of) their autonomy by relying too heavily on a deathbot in their attempts to navigate and negotiate their lifeworld, a world that has been irrevocably altered by the death of a significant person. Another problem that has been discussed in the literature is the replacement problem: human–deathbot interactions could replace the irreversibly lost relationship with the deceased with a digitally mediated relationship with an AI system. This replacement could then lead to inauthenticity or self-deception, which could also affect how bereaved people relate to the living.

Amid the current AI hype, it is important to keep in mind that deathbots are just the latest development in a long history of grieving practices, rituals, and technologies. The creation of new practical possibilities for simulating interaction with the deceased has relational, existential, and perhaps clinical implications that we are only beginning to understand. However, we would be remiss not to highlight a final worry about deathbots as they gain momentum in the United States and elsewhere. While there is a long cultural and technological history of impersonation rituals and mediation practices, one big difference sets deathbots apart: the reckless, and currently unregulated, monetisation of grief. AI tech companies have already begun to offer subscription models. The frequency and duration of human–deathbot interactions, and the quality of deathbot outputs, thus depend on the amount of money that bereaved customers are willing and able to pay. So the question is not only how deathbots can and should influence grief experiences for various kinds of bereaved people. The question is also whether we, as a society, want to trade our intimate experiences of loss, despair, and longing on the market of the AI tech economy.

Regina Fabry

Regina Fabry is a philosopher of mind and cognition and works as a Lecturer (Assistant Professor) in the Department of Philosophy at Macquarie University. Her research currently focusses on self-narration, grief, human-technology interactions, and their intersections. In working on these topics, she brings together philosophical theorising with research in literary and cultural studies, the empirical cognitive sciences, and AI.

Mark Alfano

Mark Alfano is a philosopher and works as an Associate Professor in the Department of Philosophy at Macquarie University. His research is in philosophy (epistemology, moral psychology), social science (personality & social psychology), and applied issues in the normativity of technology (epistemology and ethics of algorithms, natural language processing & generation). He also brings digital humanities methods to bear on both contemporary problems and the history of philosophy (especially Nietzsche).
