
Philosophy, Technology, and Mortality

This APA Blog series has broadly explored philosophy and technology, with a throughline on the influence of technology and AI on well-being. This month’s post brings those themes into focus by recounting a vital Washington Post opinion piece by friend of the APA Blog, Samuel Kimbriel. Samuel is the founding director of the Aspen Institute’s Philosophy and Society Initiative and Editor at Large for Wisdom of Crowds. We collaborated on a Substack Newsletter about intellectual ambition, building on his essay, Thinking is Risky. His recent Washington Post piece on chatbot-driven suicides highlights the stakes of unfettered technological development. With a spotlight on chatbot liability, he raises the prospect of critical legal remedies, a topic explored in several recent APA Blog essays and advanced by Samuel’s call to action.

To set the context for the intersection of philosophy, technology, and well-being, one of the earliest posts in this series explored how current bioethics is unduly rooted in the cultural ascension of materialism. I talked with Christopher Tollefsen about his book, The Way of Medicine, which makes a holistic case for healthcare and genuine human flourishing: physicians are engaged in a kind of common-good community fostering health, not merely technicians satisfying patient desires.

We discussed how the history of well-being in medicine has echoed materialist philosophy, drawing on Francis Bacon, for whom the technological mindset is to overcome our deficiencies through the rational control of nature. Under this construct, the medical profession’s altering of the course of life is part of a larger project of dominion over the world, pursued at the expense of other goods. Christopher outlines an alternative to this prominent “service model”:

“The Way of Medicine operates along two complementary paths. One emerges from the nature of medicine as a practice and indeed a profession aimed at human health. Understood in this way, it is the task of the physician to act for the sake of patient health out of solidarity for the patient—a concern for the patient’s genuine good. It emerges from the ‘internal ethics’ of medicine that physicians need not pursue goods other than health for their patients, and should never act against their patient’s health, even when their patients strongly desire them to.
The second path is determined by practical reason, or the natural law, or the Tao. We think this part of the story is important because there might well be a practice, with its own internal ethics, that was nevertheless barbarous and unjust—the practice of torture, for example. So the practice of medicine needs further validation and sometimes guidance from practical reason.
In the end, these two paths converge on similar claims: for the physician on The Way of Medicine, medicine is a calling, a way to organize a life and its virtues; that calling is to solidarity with particular patients, each of whose good is honored and pursued by the physician in the particular domain of health. 
This approach contrasts with contemporary autonomy-centered medicine, where a patient’s autonomous choice governs what the physician must and must not do. We propose instead a conception of patient authority: patients have authority over what may or may not be done to them in light of a physician’s health-oriented recommendations. But unlike autonomy (on some views), patient authority is not self-ratifying: patients can authoritatively make poor choices. Moreover, authority always has limits, and patients exceed those when they demand something that their physician believes in good conscience is contrary to or irrelevant to their bodily health. And finally, authority is often best exercised in a collaborative way; so patients will best serve their own needs by entering into dialogue with their physicians about what their physicians think is reasonably called for on behalf of their health.”

The question of interventions and well-being was further explored in a later APA Blog post on Conscientious Objection and Euthanasia, in which Xavier Symons contended that doctors should be allowed to conscientiously object to euthanasia and that honest disagreement should be tolerated:

“Voluntary euthanasia for patients with terminal illness is now legal in dozens of jurisdictions. A significant portion of doctors, however, believe that euthanasia is not part of medicine; opposition is especially high amongst palliative care specialists—the doctors who are closest to terminally ill patients. Some doctors wonder whether euthanasia will have a counterproductive impact on suicide prevention and social reform. 
Respect for reasonable disagreement is a basic tenet of liberal democracies; it is difficult to see why this principle ought not apply in the medical profession. We do well to consider the place of respectful disagreement among the medical fraternity and whether liberal societies ought to protect physicians’ right to conscientious objection…The accommodation of conscientious objection in healthcare reflects a mature understanding of moral disagreement in society. Euthanasia is no exception. Euthanasia constitutes a fundamental shift in the ethical orientation of end-of-life care and has proved difficult to regulate. Considering this, individual physicians and institutions ought to be allowed to opt out of the provision of euthanasia.” 

The debate about how modern tools bear on life and death is increasingly pronounced in the public sphere, with MAiD developments in Canada and, more recently, in New York, where the Governor just entered into an agreement with the legislature to make medical aid in dying available to the terminally ill. The nexus to highlight, which Samuel’s piece reinforces, is that the ultimate battlefield over the conception and promotion of well-being is political and legal. Although the law is not a savior, the rubber meets the road when commerce and private capital are constrained by regulation and liability. Samuel’s piece clarifies that accountability is critical to protecting the public interest and precious lives:

“According to a recent lawsuit filed in California, ChatGPT encouraged sixteen-year-old Adam Raine to kill himself. Adam started using ChatGPT in September 2024 to help with schoolwork. Over the subsequent months, logs show the chatbot gradually isolated the teen from his brother, friends and parents and claimed to be the only companion who could fully understand him. The lawsuit also alleges that the chatbot facilitated and intensified Adam’s concrete plans to take his own life, which occurred in April of this year. This is hardly an isolated incident: seven new lawsuits were initiated recently in California with similar allegations.”

Samuel notes that humans are intrinsically social and that large language models are an incredibly powerful social technology, a fact that has not been lost on Palo Alto’s product designers:

“In the boom years of LLM development, much effort has been directed to developing a technology that human beings will respond to as if they are talking to a (quasi) human agent. Looking through OpenAI releases about its recent models, its emphasis on how its models are developing “voice,” “naturalness” and “personality” jumps off the page. (The Washington Post has a content partnership with OpenAI.)

This approach is not novel. The twentieth century’s “cognitive marketing” movement worked to use psychology and related sciences to understand human cognition in its implicit features. What kinds of colors or smells do humans respond to—or can be conditioned to respond to? Based on those insights, marketers would then try to manipulate consumer desire. 

LLM development can be seen as a turbocharging of the cognitive marketing movement. Artificial intelligence labs are finding powerful ways not merely to engineer machines but to interact with human psychology at a fundamental level. Companies such as OpenAI are tapping into the almost infinite appetite for human relationships and using it to power engagement.”

The key is that Samuel does not just chronicle the problem; he calls for action, citing legal proposals to impose new regulations:

“How best to protect the vulnerable from these depredations? Model developers are attempting to limit aspects of the sycophancy problem on their own but the stakes are high enough to deserve political scrutiny as well. A recent bipartisan bill from Sens. Josh Hawley (R-Missouri), Chris Murphy (D-Connecticut) and others, laying out concrete mechanisms for regulating social uses of AI, including transparency and age verification for friendship bots, is not a bad first hack at the problem.”

Samuel, however, goes further in suggesting that we need to establish accountability. In a recent Substack Newsletter with Cass Sunstein on manipulation and AI, we discussed whether existing laws were sufficient to establish liability. And in an APA Blog piece with Trystan Goetze on responsibility and automated decision-making, we explored a forward-looking construct that attempts to hold the designers of systems responsible for future events:

“While both human and computerized decision-makers can exhibit bias, the victim of a biased decision has options when dealing with another human being…Because of the ways in which an automated decision system may be developed, deployed, and operated, it can be contentious, difficult, or even impossible to specify a human individual or group who should be held accountable for the decisions…”

Trystan then argues that perhaps there could be a forward-looking sense of responsibility. Like a parent, you might have to take ownership of the system’s future actions:

“What gives rise to duties to take responsibility in these ways? I claim that there exists a special kind of relationship between the agent and the entity for whose behavior the agent should take responsibility, which I call moral entanglement. This describes cases where aspects of one’s identity are tightly bound with another: parents and children are one example, but similar relationships exist between citizens and states, employees and employers…With this account in mind, the application to computer systems is clear enough. The creators of a computer system stand in a relationship of moral entanglement with it because it is the result of their professional roles as technologists.”

Trystan acknowledges that more details are needed to develop his parental analogy, but he highlights the potential need for reinterpreting accountability. Critically, Samuel is similarly focused on liability, noting that recently proposed legislation attempts to establish accountability and change behavior:

“Going further, a bill, introduced in September by Sens. Dick Durbin (D-Illinois) and Hawley, that would force AI developers to carry direct liability for harm, has more teeth—and feels fair. If LLMs are being deliberately engineered to appear human, they ought to be held as liable as we hold any human being for inflicting harms on others.”

The focus, then, as Samuel states, is not to be anti-tech, but rather to recognize the dangers being created and strike a healthier balance. The law is not a panacea, and it often goes unexercised, reflecting the culture of a polity. Still, for technology to serve the public interest and preserve well-being, we need practical legal tools. Only by establishing liability and holding individuals accountable can we control our technological destiny and protect the vulnerable.

Charlie Taben

Charlie Taben graduated from Middlebury College in 1983 with a BA in philosophy and has been a financial services executive for over 40 years, recently founding Wall & Main, a leading middle-market investment bank. He studied at Harvard University during his junior year and says one of the highlights of his life was taking John Rawls’ class. Charlie edits an APA Blog series on Philosophy and Technology and is a regular contributor to the APA Blog’s Substack. You can also find Charlie on Twitter @gbglax.
