
Responsibility and Automated Decision-Making

In 1979, an IBM presentation included a slide with the following injunction:

Image description: A piece of paper with the following text in large print, all caps: “a computer can never be held accountable. Therefore a computer must never make a management decision.” (Source: https://twitter.com/MIT_CSAIL/status/1484933879710371846)

Despite this warning, nearly fifty years later computing systems are increasingly used to make high-stakes decisions in an ever-growing range of fields. Sometimes, these applications are genuinely useful, saving time and labor by automating tedious, routine decisions that have clear and consistent criteria. Applications for a loan, for example, could be easy to automate: gather the applicant’s personal details, pull a credit report, and issue a decision based on their FICO score. There’s plenty to be said about whether this simplified procedure is or isn’t fair, but it could be essentially the same whether a human or a computer does it.
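To make the simplified procedure concrete, here is a minimal sketch in Python of what such an automated loan decision might look like. The field names and thresholds are invented for illustration and do not reflect any real lender’s criteria:

```python
from dataclasses import dataclass


@dataclass
class LoanApplication:
    """The only facts this illustrative procedure is designed to consider."""
    name: str
    annual_income: float
    fico_score: int          # taken from the credit report
    requested_amount: float


def decide(app: LoanApplication) -> str:
    """Apply fixed, consistent criteria (thresholds invented for illustration)."""
    if app.fico_score >= 670 and app.requested_amount <= 0.5 * app.annual_income:
        return "approved"
    return "denied"


# The decision follows mechanically from the inputs, whoever applies the rule.
print(decide(LoanApplication("A. Applicant", 40_000, 700, 15_000)))  # approved
```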

The process of applying for a loan in person might also enable the consideration of multiple factors that are difficult for a computing system to incorporate. Sometimes, this can be in the applicant’s favor. For example, students might receive favorable offers despite their lack of credit history, and an applicant can raise and prove their student status with an in-person financial advisor even if the advisor neglects to mention such offers. But an automated system might not be able to confirm whether an applicant is a student, and if it doesn’t ask, there is no way to get that consideration heard. The computer can only process data it is equipped to receive as inputs; new data of a different sort literally cannot be incorporated into its decision.
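The rigidity of the input schema can be made just as concrete. In the illustrative fragment below (a standalone variant of the sketch above, with invented names and thresholds), the decision function’s parameter list is the complete set of questions the system can ask, so an applicant’s student status cannot even be handed to it:

```python
def automated_decision(annual_income: float, fico_score: int, requested_amount: float) -> str:
    """An illustrative decision function: its parameter list is the full set
    of considerations it can ever take into account."""
    if fico_score >= 670 and requested_amount <= 0.5 * annual_income:
        return "approved"
    return "denied"


# A student with a thin credit history has no way to raise their student
# status: there is no parameter for it, so the information cannot even be
# handed to the system, let alone weighed by it.
try:
    automated_decision(
        annual_income=8_000,
        fico_score=640,
        requested_amount=2_000,
        is_student=True,  # hypothetical consideration outside the input schema
    )
except TypeError as err:
    print(f"Cannot be considered: {err}")
```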

In other cases, additional considerations might be to the applicant’s detriment. Human decision-makers can make mistakes. They might also be biased; for example, the financial advisor might be less likely to give loans to people of color, whether due to conscious prejudice or implicit bias. Sometimes, computerized decision-making systems are sold as eliminating this human fallibility. But this claim is often, to be blunt, a lie. Computer systems can exhibit exactly the same kinds of biases that human beings do. Much of the time, bias in computer systems arises precisely because the computer system is based on historical data compiled from the biased decisions of human beings. (Helen Nissenbaum and Batya Friedman, two giants of computer ethics, observed this nearly thirty years ago, and while their examples are dated, their point still stands.)
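To see concretely how a system built on historical data can inherit human bias, here is a toy sketch in Python. The data and the “learning” rule are entirely made up (a crude majority vote over past decisions, not any real method), and neighborhood stands in for the kind of proxy variable through which bias often enters; the point is only to illustrate the general mechanism described above:

```python
# Entirely synthetic "historical" decisions made by (biased) human advisors:
# applicants from neighborhood "B" were routinely denied despite comparable scores.
historical_decisions = [
    # (neighborhood, fico_score, approved?)
    ("A", 680, True), ("A", 650, True), ("A", 700, True), ("A", 640, True),
    ("B", 680, False), ("B", 650, False), ("B", 700, True), ("B", 640, False),
]


def learned_rule(neighborhood: str) -> bool:
    """A caricature of 'learning from data': approve an applicant if most
    past applicants from the same neighborhood were approved."""
    outcomes = [approved for (n, _score, approved) in historical_decisions if n == neighborhood]
    return sum(outcomes) > len(outcomes) / 2


# Two applicants with the same credit score but different neighborhoods
# receive different outcomes: the historical bias is reproduced, not removed.
print(learned_rule("A"))  # True  (applicant from "A", score 680)
print(learned_rule("B"))  # False (applicant from "B", score 680)
```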

While both human and computerized decision-makers can exhibit bias, the victim of a biased decision has options when dealing with another human being. The loan applicant, for instance, can ask for the financial advisor’s reasons for their decision. If the applicant finds those reasons insufficient, they may call out the decision as flawed or biased, either directly to the financial advisor or, what is perhaps safer, by petitioning the advisor’s supervisor or a regulator. In doing so, the applicant can hold the financial advisor and their institution to account for incorrect and wrongful decisions.

Accountability is much easier to dodge when you punt your decision-making to a computerized system. The applicant might not have any sense of how the decision was made by the computer, and they probably can’t ask it for its reasons if none are given—again, the computer system can only respond to inputs it is designed to process. In some cases, it wouldn’t even be helpful to know how the computer made its decision: machine learning algorithms often make decisions based on complicated mathematics, resulting in outputs that even specialists in the technology cannot explain. And even if the applicant could get an intelligible explanation from the computer, they can’t challenge the computer directly (what good would it do to direct one’s blame at a computer?) and it might be unclear who to petition. Accountability is undercut.

We’re beginning to see the problem gestured at by the IBM injunction against computers making high-stakes decisions. When a human decision-maker makes a mistake, or issues a decision that is biased or discriminatory, or uses an unfair procedure, or neglects to ask pertinent questions, they can be held responsible for their error, oversight, or malice. Their reasoning can be challenged, their errors can be corrected, and their bad behavior can be punished, if need be. In short, they can be held accountable.

None of this is possible when you’re dealing with a computer system. If it’s not set up to take in certain information, you simply can’t get it to consider factors it may be neglecting. If the decision was biased or erroneous, that is harder to detect, and the decision cannot be challenged directly; it may also be unclear whether you have any further recourse to challenge it. Note, too, that these problems persist regardless of how good the computer system is at making the “right” decisions! The problem is not primarily about technical criteria of accuracy. Even the most accurate system might still produce errors—and technically correct outputs might still be harmful. In these circumstances, too, someone needs to be accountable.

The problem only gets worse when computing systems are used in higher-stakes decisions. Automated decision systems are increasingly used in public service, often as a cost-cutting and efficiency measure. There have been a number of high-profile uses of this sort of technology that have contributed to a range of harms and injustices. We’ve seen predictive policing systems contributing to the over-policing and harassment of marginalized communities, resumé screening systems that are biased against women, healthcare resource allocation systems biased in favor of increased treatment for white patients, student grade estimators biased in favor of private school pupils, and even the development of autonomous weapons that may someday soon decide who to kill. These automated decision systems can affect the course of people’s lives—and even end them. But they cannot themselves be held accountable for their decisions.

Could we instead hold some particular person or group of persons accountable for the decisions these systems make? It’s tempting to turn to either the most immediate users of these systems—i.e. the institutions who have deployed them to replace human decision-makers—or to the developers of the systems—i.e. the teams of programmers and engineers who created them. But making a clear case that any particular party is responsible, or that some percentage of blame should be shared between the different parties concerned, is difficult. This is a problem referred to in the computer ethics literature as the responsibility gap.

There are a few reasons why attributing responsibility for the decisions taken by a computer system to a particular person may be fraught. Some stem from a typical understanding of moral responsibility in analytic philosophy. On this received view, to be responsible for something, you must (1) have been in control of your actions causing the thing (the control condition), and (2) been in a position to know what you were doing (the epistemic condition).

It may be the case that no one was directly in control of what the computer system decided—that is, the system decides mostly autonomously, without needing approval from a human being. This might already be true of a loan application today—or of an autonomous attack drone in the near future. The lack of direct human control over these decisions knocks out condition (1): if no person was in control of the decision, having turned it fully over to the computer, then no person is responsible for the computer’s decision.

Additionally, in some cases it may be that no one can exactly predict how the computer system will decide. This is most likely when unexplainable AI systems are in use, as mentioned earlier. These cases will knock out condition (2): even if we establish that having created or deployed the system is sufficient to meet the control condition, the uncertainty around how the system will actually make decisions will ensure that the epistemic condition is not met.

Another factor that can make responsibility attribution difficult in these cases is the complexity of the social systems involved. Today’s technologies are not usually produced by single agents, but rather by large teams. Similarly, in deployment there are many people involved in the various aspects of how the system is set up and used, particularly in large organizations such as those we’ve been considering. These organizational complexities create a situation where, even if conditions (1) and (2) are met to some degree by several people or groups within the organization, the contributions of each individual person or group to the eventual decisions made by the computer system may be so diffuse and distributed that it is difficult to assign more than small percentages of responsibility to each one. At that point, it may become difficult to say for certain which people should bear the responsibility for harms caused by the computer system and to what extent, and at whom it would be fair to direct corrective action.

To sum up the problem of the responsibility gap: Because of the ways in which an automated decision system may be developed, deployed, and operated, it can be contentious, difficult, or even impossible to specify a human individual or group who should be held accountable for the decisions taken by the computer system. Since the computer system itself is not a moral agent—whatever the capacities necessary for human agents to have control and knowledge in the appropriate senses turn out to be, we can be confident that no computer system that exists today has them—we are left with a gap where a responsible party should be. As a result, there is no one who is an obvious candidate to be held responsible for any harmful or erroneous decisions that the system may make, whether it is judging resumés, criminality, academic merit, or legitimacy as a target for lethal force.

A number of suggestions have been made to try to overcome the responsibility gap. Computer ethics essays have often suggested that regulators and professional organizations should create enforceable standards that specify guidelines for determining who should be held accountable for harms caused by computer systems. Helen Nissenbaum, for example, wrote in the ’90s that software developers should be held strictly liable for their products, much as the manufacturers of hazardous materials can be held legally liable for harms their products cause, even if they technically do nothing wrong. Others, such as Don Gotterbarn, one of the architects of the Association for Computing Machinery (ACM) Code of Ethics and Professional Conduct, have advocated for decades that the ACM and other professional organizations in computing should have stronger and more enforceable professional standards of conduct, perhaps closer to those of engineering, medical, or legal associations.

Such has not come to pass in computing. Legal regulation of computing products remains weak in many jurisdictions, and while the existing professional codes of ethics are thoughtfully written, their enforceability is limited. But I contend that frameworks of legal or professional liability, as top-down impositions, would be at best partial solutions. To accompany formal legal and professional norms that find someone to punish when things go wrong, computing needs informal ethical norms that promote a culture of taking responsibility for one’s actions as a computing practitioner.

This is not an entirely new claim. Writing about these problems in 1989, John Ladd argued that we need more than just norms and ways of holding people accountable to them—a phenomenon he calls negative responsibility. According to Ladd, we also need a notion of positive responsibility in relation to computing technologies, that is to say, an account of the duties that computing professionals have to those who use the technologies they develop, and of what traits and dispositions constitute being a responsible computing professional. However, Ladd’s article concentrates primarily on the details of how the responsibility gap arises, and less on what generates these positive responsibilities for computing professionals or what, specifically, they may be.

In a paper I presented at the 2022 Association for Computing Machinery Conference on Fairness, Accountability, and Transparency (FAccT ’22), I sketched one way these positive responsibilities might work. I drew on an account that I began outlining in another paper I published in an issue of The Monist on vicarious responsibility. Vicarious responsibility is a phenomenon where we are morally responsible for the actions of others. For instance, a parent may be responsible for the actions of their children, as when the child misbehaves and harm results. But this sense of responsibility is different from the sense discussed above: the parents are not in control of what their children do, nor can they necessarily know how their children will behave, and we would not blame the parent for what their child does in the same way that we would blame the parent if they were to cause the same harm themselves. Something different is going on.

In the Monist paper—drawing on work by Bernard Williams, Susan Wolf, Iris Marion Young, and David Enoch—I argue that the relevant sense of responsibility in these cases is of a forward-looking sort. Rather than looking to hold the parent accountable for what the child did—looking backward toward the harm caused—we instead expect the parent to do something to make things right—looking forward from the harm caused. That is to say, we expect the parent to take responsibility for their child’s actions. Some of the things we might expect the parent to do include: apologizing for the harm, explaining their child’s behavior, disciplining the child so that similar bad behavior is less likely in the future, and otherwise trying to right the wrong done—cleaning up their child’s mess, so to speak, either literally or metaphorically.

What gives rise to duties to take responsibility in these ways? I claim that there exists a special kind of relationship between the agent and the entity for whose behavior the agent should take responsibility, which I call moral entanglement. This describes cases where aspects of one’s identity are tightly bound with another: parents and children are one example, but similar relationships exist between citizens and states, employees and employers, and even our present selves and past selves. In each of these instances, some aspect of our identity is connected to the other entity such that distinctive duties arise.

What these cases have in common is that the scope of our own personal agency is unclear when these closely related entities act. When children act badly, it is genuinely unclear and probably unknowable to what extent the parent’s own actions qua parent produced the child’s actions. When a state acts badly, all citizens of the state are to some degree implicated as members of that group agent, even if they have no affiliation with the governing party. Enoch evocatively describes these events as occurring in the “penumbra” of our agency—not in the darkest part of the shadow we cast upon the world with our actions, but in the dimmer edges. When something happens in that penumbral shade, we should take responsibility for it, exhibiting a virtue that Wolf claims we recognize but do not have a name for. But I think this is the virtue of being a responsible person.

With this account in mind, the application to computer systems is clear enough. The creators of a computer system stand in a relationship of moral entanglement with it because it is the result of their professional roles as technologists. What the computer system decides is connected to them by way of the professional aspect of their identity. Similar claims can be made of the users of the system. To return to our loan application review system as an example, the developers of the system and the employees of the financial institution who deploy and maintain that system are entangled with it. These parties may not be clearly responsible for what the system decides in the typical backward-looking sense, but their connection with the system makes it such that they aren’t clearly free of responsibility either. As such, they should take responsibility for the system’s decisions. They should make themselves available to explain the system’s decisions—and if its decision-making process is unexplainable, they should replace it. When the system makes mistakes or exhibits bias, they should apologize to those affected, and do what they can to fix these errors.

More details are needed to fill out this theory, but I think it points in the right direction. It complements top-down measures by demanding that individual computing professionals and institutions take seriously their forward-looking, positive responsibilities. These practitioners should take responsibility when harm arises, in the same ways we might expect a parent to take responsibility for their children’s actions. When we put autonomous beings into the world—human or otherwise—we become responsible for ensuring that their arrival will not harm others.

Trystan Goetze
Postdoctoral Fellow in Embedded EthiCS at Harvard University

Trystan completed his Ph.D. at the University of Sheffield in 2018. From 2019–21, they were a Banting Postdoctoral Fellow in Philosophy at Dalhousie University, where they also taught computer ethics. Before coming to Harvard, he worked with Athabasca University and Ethically Aligned AI, Inc., to create a series of AI ethics micro-credential courses. Their current projects are on trust and Big Tech, epistemic blame, AI ethics, and computer science education. At Harvard, they are the “Bridge Fellow” in Embedded EthiCS, coordinating outreach to and collaboration with other organizations that are developing computer ethics curricula.
