Public Philosophy

The Moral Case for the Development of Autonomous Weapon Systems

This blog post is a summary of a longer paper that is forthcoming in the Journal of Military Ethics. Thank you to the journal’s editor, Henrik Syse, for allowing me to publish some of those ideas in this context. I first presented this material at the APA’s Eastern Division Conference in January of 2021.

There has been a flurry of papers in the recent military ethics literature discussing autonomous weapon systems (AWS). Lethal AWS are artificial intelligence (AI) systems that can make and act on decisions concerning the termination of enemy soldiers and installations without direct intervention from a human being. Some believe this technology ought to be banned, while others claim it will usher in a new and more ethical form of warfare. Global campaigns (e.g., The Campaign to Stop Killer Robots), national governments, and international NGOs are currently attempting to ban autonomous weapons. My goal here is not to respond to all the objections that have been raised against lethal autonomy. Nor will I argue that our all-things-considered moral judgment ought to be in favor of its use. Instead, and more modestly, I argue that we shouldn’t ban such weapons, at least not now. We ought to continue researching and developing AWS. While banning a new weapon is politically difficult and historically rare, I hope to advance the debate by showing that there are strong moral reasons to continue developing AWS, even when we bracket considerations about autonomy arms races, economic cost, and relative military might. The normative force of pro-ban arguments (or any argument against the development and/or use of AWS) must outweigh the strong moral reasons in favor of the technology elucidated below.

Plenty of modern military technology is semi-autonomous in that observation, orientation, and action—three elements of the Observe, Orient, Decide, Act (OODA) loop of military engagement—are allocated to machines, leaving only the decision of whom to target squarely within direct human control. Supervised AWS can complete full OODA loops on their own; humans are “on the loop,” playing a supervisory role and retaining the ability to disengage the weapon if necessary. The US Navy’s Aegis Combat System is a supervised AWS when in Auto-Special mode, whereas Israel’s Harpy is fully autonomous; it can complete OODA loops on its own without human supervision (in one of its settings, at least). However, Harpy only targets military radar, not human beings. Humans are outside the loop of fully autonomous systems because, once engaged, such a system can function on its own without a communications link to its operator. AWS are functionally, operationally, or mechanically autonomous, meaning that they can perform certain tasks (e.g., targeting and terminating enemy combatants and installations) with little-to-no direct human intervention. However, such systems are (and will remain for the foreseeable future) extremely domain-specific: They are designed to operate within narrow parameters in the already constrained context of war. For example, Harpy is a loitering munition, meaning that it searches for enemy radars in a circumscribed area. The machines, by which I mean the hardware and software of AWS, are not morally autonomous in any sense. They cannot commit war crimes—they can only malfunction or make mistakes, and “morality and legality… are ultimately a property, and the responsibility, of the system as a whole, not solely its machine components” (Lucas Jr., 2013, p. 226). Moral autonomy requires the capability to choose life-guiding principles and goals. The mechanically autonomous robots, drone swarms, and submarines of the near future are not morally autonomous in this sense. They inherit their goals from us.

In a 2010 paper, Bradley J. Strawser offers a convincing argument in favor of the use of remotely piloted but uninhabited aerial vehicles (i.e., drones), provided the war being fought is a just one. On his view, drones are just another in a long list of weapons that remove soldiers further from harm’s way. The use of spears, guns, tanks, planes, and drones is justified, according to Strawser, by the Principle of Unnecessary Risk (PUR). PUR says (roughly) that a state and its agents should avoid exposing their soldiers to unnecessary lethal risk. Regardless of the weaponry wielded by the enemy, if the war is just, then governments, armies, and commanders ought to provide their soldiers with whatever weapon(s) will most shield them from unnecessary lethal risk while still being able to get the job done. Applying PUR to drones gives us the following conditional: “For any just action taken by a given military, if it is possible for the military to use [drones] in place of inhabited aerial vehicles without a significant loss of capability, then the military has an ethical obligation to do so” (2010, p. 346). Worrying that such a principle might be used to argue in favor of AWS, Strawser tells us that this “fails to appreciate that PUR, although a strong prima facie moral principle, can be overridden by strong enough countervailing normative reasons,” and he finds the “principled objections [to AWS] to be sufficiently strong such that they override the moral demands of PUR” (ibid., p. 350). One might further point out that drones already provide a lot of coverage when it comes to lethal risk: Can it really get any safer than bombing the enemy from a bunker in Nevada? And would the additional coverage that (perhaps) comes from taking humans further out of the loop be enough to outweigh the objections to AWS?

I think Strawser unfairly stacks the deck against AWS by focusing on lethal risk. Lethal risk is not the only type of risk that soldiers must deal with. Depression, anxiety, post-traumatic stress disorder, and feelings of guilt have a severe and negative impact on the well-being of our soldiers. Due to this, I suggest the following extension of PUR (EPUR): The state and its agents should avoid exposing their own soldiers to unnecessary moral, psychological, and lethal risk. If we have some technology that could reduce such risk, while remaining as effective as alternatives, we ought to use it. Moreover, if a military technology might seriously reduce the moral, psychological, and lethal risk of soldiers in the future if only we were to develop it, then we have strong reasons to spend some time and money doing so.

I contend that a state and its agents would avoid exposing their own troops to unnecessary moral, psychological, and lethal risk by deploying AWS, and that there is no other feasible way of achieving these decreased levels of risk. Therefore, a state and its agents are obligated to deploy technologically sophisticated AWS. A technologically sophisticated autonomous weapon is one that matches the average performance of human-in-the-loop systems (e.g., drones) when it comes to acting in accordance with the laws of war (e.g., distinction, surrender, proportionality). In other words, if we were to create AWS that can reliably adhere to the laws of war, we would have strong moral reasons to use them. Utilizing such systems would reduce psychological risk by reducing the number of humans on the ground (or in Nevada) making life-and-death decisions. Fewer pilots and soldiers mean less psychological harm.

The use of such systems would reduce moral risk as well. Young adults currently bear a large portion of the moral burden of our nation’s wars. Moral culpability is a bad thing from the perspective of the person who is culpable. As noted by Simpson and Müller (2016), responsibility for mistakes made by AWS will spread out to different people depending on the context. In some situations the operator will be liable; in others, a defense contractor; in still others, perhaps even an international body for setting a tolerance level too high. (A tolerance level is a concept from engineering ethics that specifies, via instrumental and moral reasons, how reliable some piece of technology ought to be. So perhaps the tolerance level for the percentage of noncombatants killed in some specific type of attack is set to X% of the total lives lost, but ethicists argue convincingly that this is too high. In that case, the international body itself would be morally culpable for civilian deaths outside the tolerance level caused by an AWS that was developed to align with the legal standard, so long as the defense contractor built the weapon system to the requisite level and the operator deployed the system in conditions for which it was designed.) There is no gap in responsibility. Instead, in an era of AWS, responsibility (and hopefully guilt) is transferred away from young men and women, up the chain of command, and to the companies and governments fueling the relevant militaries (exactly where it ought to be!), as well as to the international bodies setting the rules of war. Diffused responsibility for killing in war might be a bad thing in the case of drones (where less responsibility might change the behavior of the pilot, causing more unjust killings), but it will have no effect on the behavior of AWS. Of course, we need to create new frameworks and procedures for keeping track of and divvying up responsibility in the era of AWS, and so there are novel issues for ethicists, engineers, and lawyers to work out, but this is by no means an insurmountable problem.
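For readers unfamiliar with the engineering notion, the tolerance-level idea can be given a rough formal gloss (this is my own illustrative sketch; the symbols are assumptions, not drawn from the paper). Let $r$ be an AWS’s expected ratio of noncombatant deaths to total deaths when it is deployed in the conditions it was designed for, and let $\tau$ be the tolerance level set for that mission type, so that certification requires $r \le \tau$. Culpability for a civilian death then tracks which link in the chain failed:

\[
\text{culpable party} \approx
\begin{cases}
\text{contractor and testers} & \text{if } r > \tau \text{ despite deployment in design conditions,}\\
\text{the standard-setting body} & \text{if } r \le \tau \text{ but } \tau \text{ was set indefensibly high,}\\
\text{operator or commander} & \text{if the system was deployed outside its design conditions.}
\end{cases}
\]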

A thought experiment will help make my point about moral culpability clearer: Imagine that the US is considering a drone strike in a war being fought for just reasons. A high-level terrorist is holed up in a house with two other high-level targets and three civilians. The US military considers different ways of making the decision about whether we should kill the terrorists via a drone strike, thereby incurring the collateral damage. Option A: Have a high-level commander make the decision. Option B: Convene a panel of four military ethicists, three civilians, and four high-level military commanders. The ethics panel will hear the case and have a blind vote, with a simple majority deciding the fate of those in the target house. The drone strike poses no threat to our own troops and is the only viable option (no ground attack is possible), and we cannot track the terrorists once they leave the house. However, we have no reason to think they are active threats to the United States.  

I’m not sure what the common intuitive response is to this case. However, there are good reasons for favoring option B over option A even on the assumption that the commander and the panel are equally likely to make the correct moral decision (whatever that happens to be). One reason for this is that having a number of people make the decision decreases moral risk for each individual. Imagine that both the lone commander and the ethics panel would have decided the target is important enough to outweigh the killing of the three civilians, and imagine further that ethicists by and large disagree with this proportionality judgment. The consensus is that destroying the house was the wrong decision given the information at hand. Regardless of legal liability, the deciders (commander or panel) are morally culpable. Spreading the culpability (and the resulting guilt and psychological distress) around is morally superior to putting it on the shoulders of a single moral agent.

One might object here that it is the total amount of culpability that is bad, and since the amount of culpability is the same in the two cases, we shouldn’t favor one option over the other. The total view is mistaken, however. An analogy with the badness of pain is illuminating: Having 100 people feel 1 unit of pain is morally better than having 1 person feel 100 units of pain. This is because the badness of pain (its negative effect on our well-being) scales super-linearly with its intensity. As the intensity of pain increases, its badness becomes more and more severe. Would you rather be tortured just this once or receive a hard swat on the back once a day for the rest of your life? I propose that the same holds for moral culpability. The badness of being morally culpable scales super-linearly with the amount of culpability a person bears but only linearly with the number of people who are culpable. Therefore, option B is morally superior to option A. This case supports the move from PUR to EPUR: ceteris paribus, we ought not require military personnel to take on unnecessary moral risk even if the action in question (using a panel rather than a single commander) does not decrease lethal risk.
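To make the scaling claim concrete, here is a minimal formalization (my own gloss; the power-law form and the exponent are illustrative assumptions, not part of the original argument). Suppose the badness of bearing an amount $c$ of pain or culpability is $b(c) = c^{\alpha}$ for some exponent $\alpha > 1$. One person bearing the whole amount $C$ then suffers badness $C^{\alpha}$, while $n$ people each bearing $C/n$ suffer a total of

\[
n \cdot \left(\frac{C}{n}\right)^{\alpha} \;=\; C^{\alpha}\, n^{\,1-\alpha} \;<\; C^{\alpha} \quad \text{for all } n > 1.
\]

With $\alpha = 2$, for instance, 100 people bearing 1 unit each yields a total badness of 100, whereas 1 person bearing 100 units yields 10,000. Any super-linear badness function delivers the same ordering, which is all the argument needs.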

I see the case of the panel vs. the commander as analogous in many ways to using AWS over human-in-the-loop alternatives. The culpability (and hopefully the guilt) for mistakes is spread out to more people when AWS are used, just as it is with the panel. For example, if an AWS kills a civilian unjustly even after being deployed in the conditions it was built for, then many people at the company that designed the system are going to be partially responsible for the death, for it is their responsibility to build systems that reliably adhere to the laws of war and to test their systems so that they have evidence of this fact. One important point about EPUR-based reasons in favor of AWS is that they do not require autonomous systems to be better at fighting justly than the average human soldier or human-controlled system, even though some argue that they will be (e.g., Arkin, 2010).

EPUR provides the basis of one strong argument against a ban. The other can be summed up by noting that AWS have the ability to act conservatively in war (ibid., p. 333), coupled with the fact that militaries are obligated (according to just war theory) to use nonlethal weapons insofar as the relevant military objectives can be accomplished without imposing unnecessary risk on soldiers. As to the first point, it might be monetarily expensive if a robot “dies” on the battlefield, but losing the hardware of an autonomous system carries zero direct moral cost. This fact further supports EPUR-based reasoning in favor of AWS over traditional armed forces. The risk of death to our own troops decreases when we replace human soldiers (and piloted tanks and planes) with robotic systems. AWS’ ability to act conservatively also has direct implications for proportionality and distinction: We might reasonably require lower levels of collateral damage and higher degrees of confidence in combatant identification. John S. Canning (2009), for example, hopes to create a “dream-machine” that would target weapons instead of people. The effects of a dream-machine AWS on enemy combatant and civilian suffering and death would be enormous. Innocent civilians with guns for protection—who might normally be treated as combatants by worried, death-fearing soldiers—would lose their weapons instead of their lives. The reduction of so-called collateral damage, the most heinous aspect of war, would be substantial.

Coming to the point about non-lethality, the problem for human armed forces is that objectives can rarely be accomplished nonlethally without putting our own soldiers at high levels of risk. AWS, however, are not moral patients. They have no morally relevant preferences or inherent value, nor is there any ethically important sense in which their lives could go better or worse for them. This second argument in favor of AWS applies equally to drones and other remotely controlled but uninhabited vehicles (UVs). However, when combined with EPUR, it is clear that lethal AWS are morally superior to lethal UVs and nonlethal AWS are morally superior to nonlethal UVs, bracketing other objections to each, of course. What we are left with is two sets of moral reasons in favor of AWS covering both sides of a conflict. EPUR provides moral reasons in favor of developing AWS from the perspective of our own soldiers. The non-lethality argument provides moral reasons in favor of developing AWS from the perspective of enemy combatants and civilians. These arguments together represent the high moral hill that those arguing in favor of a ban must overcome.

As noted above, my goal here is not to argue that our all-things-considered moral judgment ought to be in favor of utilizing technologically sophisticated AWS, but instead to point out that there are strong moral reasons in favor of the technology that need to be taken into account. If one accepts Strawser’s argument in favor of drones, then one ought to accept my extension of this argument in favor of AWS. Moreover, the objections that Strawser himself, writing with co-authors (2015), offers against AWS fail to outweigh the normative force of the positive case. My conclusions, then, are that Strawser (and those who accept his arguments) ought to be in favor of AWS development and—if the technological problems can be solved—their eventual use. Now is not the time to ban such weapons, for a ban leaves the possibility of large moral gains completely untapped. I turn now to one of these objections.

Strawser et al. claim that AWS will necessarily lack moral judgment, but that moral judgment is required for proper moral decision making. Therefore, we will never be able to trust AWS on the battlefield. This argument hinges on two claims: first, that AI systems are the products of discrete lists of rules, and second, that ethical behavior cannot be captured by lists of rules (we need judgment). No algorithm can accomplish what the minds of human soldiers can, in other words. I think there are good reasons for rejecting the first of these claims; however, that is not the worry I want to push here. Instead, I think the authors’ entire conceptualization of the problem is incorrect. AWS are not “killer robots” (as they are often referred to in the literature). They are not moral agents that need to operate on the basis of humanlike moral judgment. We must stop thinking in terms of killer robots for this misconstrues the reality of technology, especially as technology is used and conceptualized in modern warfare. The hardware and software of AWS extend human moral agency and decision making. (In fact, I think we should consider AWS as extended systems that have both human and mechanical parts.) The point, however, is that it is an empirical question whether or not the hardware and software of AWS can implement, with reliability, the moral judgments that humans have made ahead of time (about what constitutes surrender, about how much collateral damage is acceptable in different contexts, etc.). This cannot be discovered a priori from the armchair.

Finally, I will respond to one further objection to AWS technology because it seems to follow from the very moral benefits I have been focusing on. If fewer soldiers die in war, and if PTSD rates associated with war decline, then we lose one of the most important disincentives for going to war in the first place. And, “[a]s a final result, democratically unchecked military operations will be more and more likely to occur, leading to more frequent breaches of the prohibition on the use of force” (Amoroso & Tamburrini, 2017). I have two replies to this sort of worry. First, such an objection is just as easily levelled against remotely controlled UVs, and my arguments are in the first instance meant to apply conditionally: If you accept Strawser’s argument in favor of UVs, then you ought to accept mine in favor of AWS. Second, and more importantly, this objection can be levelled against any technology that makes fighting wars safer. The extent to which safer wars lower the jus ad bellum threshold is an empirical matter, and without serious argumentation to the effect that AWS will be altogether different in this regard from preceding technologies, the objection lacks support.

The objections above fall flat, and it is my hunch that many other objections provided by ethicists do so as well. What I have shown here, however, is that those objections must be able to counteract the extremely powerful moral reasons, covering both sides of a conflict, that we have in favor of autonomous weapon technology. While the question of whether the technological problems can be solved remains open, the positive case minimally advises that we ought to continue to research and develop such technologies for use in just wars (regardless of what China and Russia are doing, although of course they will be doing the same…).

Erich Riesen

Erich Riesen has an M.A. in philosophy from Northern Illinois University. He is currently a PhD candidate at the University of Colorado, Boulder. Erich’s background is in psychology, philosophy of science, and philosophy of mind, and his dissertation focuses on the ethics of autonomous artificial intelligence systems. He is also interested in bioethical issues such as human neurological enhancement, genome editing, and gene drives.

6 COMMENTS

  1. Is moral responsibility not distributed across sovereign communities, rather than confined to individual decision makers within them? If so, moral blame would not be diminished by restricting the number of directly appointed decision makers.

    • Thank you so much for the comment! I think we need to be clear on what it means for a group of people—an organization, corporation, or community—to be morally responsible. Communities or organizations can be causally responsible for some event. They can also be ‘morally accountable’, meaning the “instigator of the act” or “the agent that owns the act”. But I don’t think groups of people can be morally responsible for an action if what we mean by ‘moral responsibility’ is something like moral culpability (the agent is morally blameworthy for having done the act, punishment is justified, and so on). It only makes sense to praise and blame companies/organizations because they are made up of moral agents who can be held responsible for their actions. We can punish corporations, but only because they are made up of moral persons that can suffer (pay fines, go to jail, etc.). Groups can also be legally liable, but again only because they are made up of moral persons who can make restitution and be punished. So the question isn’t between groups (with AWS) and individuals (without AWS). Non-autonomous drones are already operated by groups of people, for example. True moral culpability always falls on individual moral persons (on my view) whether we utilize AWS or not.

      The point about moral culpability in the EPUR argument comes down to the fact that moral agents (military personnel) will be burdened with proportionally less moral culpability in an era of AWS. This is largely an empirical matter, of course. If a commander knowingly releases an aerial AWS in a situation it was not designed for and innocent people die, well then he/she will be culpable. In many cases, however, culpability for mistakes caused by AWS will fall on the designers, testers, and regulators of AWS. In these cases, groups of people will share that moral culpability. I would much rather be part of a group of people sharing culpability for making a mistake in the design or testing of an AWS which ultimately leads to the death of an innocent civilian than to be morally culpable for killing that civilian with my own gun for my own fear-based reasons. And remember, we are assuming that (overall) the AWS we are using in war are as reliable as similarly situated humans or human-in-the-loop systems. So culpability, typically, will be shared. I think this is a good result in itself. But it is also a good result in terms of psychological harm. Again, this is an empirical matter, but it is my hunch that sharing responsibility for some serious moral offense with others helps protect people from PTSD and other negative mental health outcomes.

      Thanks again! Great question.

  2. Hey Alex –

    I really like the notion of EPUR you utilize. Consideration of AWS as a protector of a nation’s military forces’ moral and psychological health (as opposed to simply seeing AWS as a more efficient killing machine) is a fascinating conception – one I haven’t come across before!

    One possible objection on that point occurred to me. You mention that:

    “We must stop thinking in terms of killer robots for this misconstrues the reality of technology, especially as technology is used and conceptualized in modern warfare. The hardware and software of AWS extend human moral agency and decision making.”

    Given that’s the case, shouldn’t we be concerned with the moral weight we place on the shoulders of those who, say, built the machines? Or upon the individual who “presses the button”? We know that people have felt such guilt, even (especially?) when they didn’t know what they were building. For example, there are a number of documented cases of those working on the Manhattan Project who later felt guilt for what they built, though they had no clue what they were building. Perhaps it isn’t as severe as PTSD and we simply have a case of this being the lesser of possible evils. But it seems it’s worth considering this, at least, if AWS become more prevalent. Perhaps we’re just exchanging one issue for another and AWS does not provide a superior alternative on this count.

    • Thank you so much for the comment Chris! I think some of what I say in response to the comment above is relevant here as well, so perhaps take a look at that. The issue of responsibility ascription in the age of AI, especially in the military (the so-called “responsibility gap” problem), is something that we have not yet completely worked out. Unlike many ethicists, however, I do not see it as an insurmountable problem or as a reason to pause autonomous system development. Instead, it is an issue that needs to be worked out by lawyers, ethicists, and technologists as the technology evolves.

      Currently, military defense contractors are shielded from liability. If something goes wrong with a piece of technology designed by Lockheed Martin—the way I understand it—they are shielded from being held legally responsible for any damages that result. I think this would need to change in an era of AWS. Because defense contractors would be building the agents that are doing the killing, and because those agents (the AWS themselves) cannot be held responsible, it seems to me that a great moral and legal burden lies with those companies to build systems that reliably adhere to the laws of war and to test such systems so they have evidence of this fact. Failure in this regard would put serious moral blame and legal liability on the individuals involved in the project. Importantly though, that responsibility will be shared in such cases among many people, implying a lower amount per person. This, I think, is a good result. I also think it has consequences downstream for mental health. While drone pilots can get PTSD at rates as high as troops on the ground, I would think the risk of psychological harm decreases the further away a human is from the unjustified killing. In some cases, a commanding officer or even an AWS operator may be responsible for a killing. It just depends on the context. What we will have less of, however, are cases in which 18-year-old men and women must kill fellow human beings in an up-close-and-personal way, dealing with the moral and mental health consequences on their own. So even if it is just a shift in who is responsible, I think it is a shift in the right direction (toward commanding officers, defense contractors, and international bodies).

      Thanks so much for your comment!

  3. AWS can mimic a human moral attribute: moral growth. If we can construe AWS as having virtues (this will certainly take some construing), then we see that AWS can become better at minimizing innocent casualties (or even minimizing guilty casualties) over time through experience and AI-driven post-op analysis of missions. Humans also plan moral decisions in advance. This can be mimicked through simulations followed by AI analysis. AWS’ ability to “learn” may dramatically outpace the learning of a soldier because the AWS software can practice a mission billions of times (in simulation) with unknown factors randomized or worst-case scenario factors combined. If we are still stretching a metaphor to virtue theory, then I might as well add that one AWS can be a moral exemplar to another, or perhaps a (carefully selected) human soldier could be a moral exemplar for the AWS’s AI to try to mimic. So, to reframe here: The defender of AWS need only defend virtuous AWS (as opposed to vicious AWS) if AWS are effectively designed to grow in virtue (assuming minimal hiccups on the path to virtue). In other words, we can use AI to drive down the acceptable tolerance level for civilian casualties and damage to certain types of infrastructure.

  4. Thanks so much for the comment Daniel! While I’m not sure I would want to attribute virtues to machines (for I think AI anthropomorphism often leads thinkers astray), I do agree with much of what you say here. And, as long as we are clear that we are speaking metaphorically, the analogy with human virtue might be useful for thinking through some of these issues. While we have not yet solved the alignment problem (how can we build AI systems with values and goals that align with our own?), I think testing via simulation, learning from human feedback (as in inverse reinforcement learning), positive reinforcement, and mimicry will all have a role to play in developing safe autonomous systems. Plenty of AI researchers and ethicists believe a bottom-up strategy of getting machines to act ethically, modeled somewhat on how children learn ethics, is the route to go…perhaps mixed in with some top-down ethical theory? Applying the virtue ethics framework to machine ethics seems to be a very plausible route forward.
