
Resurrecting Dangerous Minds

Within military studies, scholars regularly analyze and teach the strategies and victories of great military strategists, whether ancient (Sun Tzu), modern (Napoleon Bonaparte; cf. Chandler 1973), or contemporary (Erwin Rommel). Because these ‘dangerous minds’ are now long dead, scholars are forced to speculate about how they would have acted under various hypothetical scenarios (e.g., Brands 2023; Baylis, Wirtz, & Gray 2018; Mahnken & Maiolo 2014).

However, generative AI models such as GPT-3 have recently been used to create interactive chatbots that hyper-realistically mimic the linguistic and thinking patterns of specific human beings. As discussed in several recent articles (Hendrickson 2023; Morris & Brubaker 2024) and the Sundance-premiered documentary Eternal You, groups like Project December have utilized GPT-3 to “simulate the dead” – particularly recently lost loved ones. In one forthcoming article, a group of bioethicists defends the use of a Personalized Patient Preference Predictor (P4) in clinical settings for patients who are incapacitated and cannot provide actual consent. The P4 would use an AI algorithm “to infer an individual patient’s preferences from material (e.g., prior treatment decisions) that is in fact specific to them” (Earp et al. forthcoming).
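As a concrete illustration of the kind of inference a P4 might perform, consider the following minimal sketch. It is not Earp et al.’s actual proposal: the features, toy data, and model choice below are invented for illustration, and a real P4 would presumably rely on far richer patient-specific material.

```python
# A minimal, hypothetical sketch of P4-style preference inference.
# All features and data are invented for illustration only.
from sklearn.linear_model import LogisticRegression

# Each row encodes one prior treatment decision by the patient:
# [invasiveness (0-1), expected recovery probability, expected pain (0-1)]
prior_decisions = [
    [0.9, 0.80, 0.7],
    [0.2, 0.95, 0.1],
    [0.8, 0.40, 0.8],
    [0.1, 0.90, 0.2],
]
# 1 = the patient accepted the intervention, 0 = the patient declined it.
choices = [1, 1, 0, 1]

model = LogisticRegression().fit(prior_decisions, choices)

# Estimate whether the now-incapacitated patient would accept a new,
# fairly invasive intervention with middling recovery odds.
proposed = [[0.85, 0.50, 0.75]]
print(model.predict_proba(proposed)[0][1])  # predicted probability of acceptance
```

Whatever such a model outputs is, of course, only evidence about what the patient would choose; it does not itself constitute consent.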

Such generative AI also holds promise for military applications. I focus on one possible use: recreating or preserving brilliant military minds. This may be useful, among other reasons, because it preserves sources of strategic creativity that could help prevent, or prevail in, future wars and other armed encounters. Specifically, I briefly explore two issues: the ethics of (i) recreating the minds of long-dead military strategists and (ii) preserving the minds of living military strategists.

Following David Boonin’s recent work on the ethics of posthumous wronging (Boonin 2019), I assume that deceased persons have interests (cf. Sperling 2008), thus making them capable of being wronged. According to one popular view, dead persons have interests because the dead can be harmed (cf. Timmerman 2022; Taylor 2008; Feinberg 1984). If the actions of the living can harm them, then, since harming someone without justification wrongs them, they can also be morally wronged. However, one can remain agnostic about the thesis that persons can be posthumously harmed while maintaining that they can be wronged. For example, spreading salacious lies about someone after they are dead plausibly wrongs them even if it does not harm them.

One morally significant interest of persons concerns how their resources, including their intellectual resources, are used by others. As Boonin argues, such interests can endure after death (Boonin 2019, chapter 4). This interest raises a thorny ethical problem regarding the permissibility of recreating long-dead military strategists: they cannot provide valid consent to the use of their intellectual resources.

The problem is twofold. First, the deceased cannot now consent to others using, carte blanche, their intellectual resources: they never granted such permission while alive (e.g., via an advance directive) and are now dead. Thus, the mere use of their intellectual resources would wrong them. Second, even if we suppose that now-dead strategic geniuses had provided valid consent for the use of their intellectual resources, it is doubtful this consent was for carte blanche usage (Helgesson 2012; Sheehan 2011). For example, General Eisenhower would not approve of a Neo-Nazi state employing his strategic prowess to secure fascist dominance over democratic states. Thus, even if Eisenhower had consented to the recreation of his strategic mind, he would not have consented to all uses of it. Therefore, specific uses of his and others’ intellectual resources would wrong them (Helgesson & Eriksson 2011).

By contrast, living military strategists can (at least in principle) provide informed consent. But their doing so is complicated by questions about the moral right to one’s neural data or unique neural ‘fingerprint’ (e.g., is the use of their brain data fundamentally different from using their written data?) and by difficulties anticipating how their intellectual resources might be utilized (e.g., how will their strategic thinking be used in the distant future?). However, generative AI like GPT-3 may offer a solution for military strategists who gave valid consent to the preservation of their strategic brilliance: we can, long after they are dead, consult their virtual replica and either request their informed consent or predict whether, if they were alive today, they would consent to a particular use. In some cases, such as the General Eisenhower example above, they would not consent to a specific use.
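To make the proposal concrete, here is a minimal, hypothetical sketch of how one might pose a consent question to such a replica. The persona text and consent question are invented, the model name is a placeholder, and the client usage simply mirrors a publicly available chat-completion API; nothing here reflects an actual implementation.

```python
# Hypothetical sketch: asking a persona-conditioned generative model whether
# the replicated strategist would consent to a specific use of their mind.
from openai import OpenAI

client = OpenAI()  # assumes an API key is configured in the environment

persona = (
    "You are a faithful replica of a military strategist, reconstructed from "
    "their writings, recorded decisions, and documented values. Answer as "
    "they would, citing their own stated commitments."
)
consent_question = (
    "Would you consent to your strategic reasoning being used to plan a "
    "defensive campaign on behalf of a democratic state? Answer YES or NO, "
    "then briefly explain."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any capable chat model would do
    messages=[
        {"role": "system", "content": persona},
        {"role": "user", "content": consent_question},
    ],
)
print(response.choices[0].message.content)
```

Whatever answer such a query returns is at best evidence about counterfactual consent, a point taken up below.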

But is counterfactual consent sufficient for valid consent (Enoch 2017; Wilson 2014), or does valid consent require actual (i.e., non-counterfactual) consent (Broström, Johansson, & Nielsen 2006; Johnson 1975)? Sometimes counterfactual consent is all we have, as when persons are unconscious or incompetent due to age or severe intoxication. Here, counterfactual consent plays an epistemic role: it informs us about a person’s preferences and enables us to make decisions on their behalf based on those preferences. In cases where the relevant preferences of long-dead military strategists have been inputted into generative AI, those preferences can play the same epistemic role, informing us (and the generative AI) about what the original persons would have wanted.

However, there are limits to this ‘solution.’ For starters, faithful replicas of long-deceased military strategists like Sun Tzu wouldn’t understand AI and thus couldn’t grant valid consent to the creation or use of an AI replica of their mind. Even if we educated them about AI so that they could provide valid consent, such new and radical information would alter their broader view of the world and, at least potentially, change their strategic thinking in undesirable ways.

To wrap up: using generative AI to mimic human minds holds promise for military applications. Preserving and harnessing the creative, strategic minds of long-dead military geniuses prevents the loss of expertise and knowledge that may be needed to prevent, or prevail in, future armed conflicts. However, acquiring the valid consent of military strategists comes with moral challenges. Living strategists can provide valid consent to the creation of an AI replica of their mind, but they will likely have preferences about how that mind is used – including long after they are dead and can no longer direct its usage. Dead strategists can provide counterfactual consent, at best, but only in cases where (a) there is sufficient information (e.g., from legal wills) from which to generate a reliable prediction about what they would have chosen; and (b) the replicated strategist understands, or can be made to understand, what ‘they’ are being asked to consent to (e.g., the creation and use of generative AI replicas of their minds).

Blake Hereth

Dr. Blake Hereth (they/them) is an Assistant Professor of Medical Ethics, Humanities, and Law at Western Michigan University Homer Stryker M.D. School of Medicine. Their research is in neuroethics, bioethics, applied ethics, and philosophy of religion. The APA awarded them the 2023 Alvin Plantinga Prize and the 2019 Frank Chapman Sharp Memorial Prize.

