Rage Against the Machine

Reactive emotional responses to machines are commonplace in our lives.

I get angry at the Kroger checkout machines when they fail, yet again, to register the item weights correctly. The system locks me out, and an attendant has to confirm that I haven’t put anything in the cart that I didn’t pay for. I am incensed that my time is being wasted and that my moral character is being questioned.

I start to feel hollow and almost sick after running a large number of prompts through ChatGPT, as everything it spits out is bullshit of the most boring kind. The content somehow averages out the discourse in a way that betrays a lack of any original thought or concern for truth. I feel disturbed at its ability to sound true and legitimate, not unlike the way I would feel about a con artist.

I am delighted when I open the NYTimes app and click on Spelling Bee. It’s a beautiful game with an excellent user interface, though accessing the hints page is harder than it should be. I feel accomplished when I make it to Queen Bee, and I religiously play the game every day. When asked about the game, my response is “Oh, I love Spelling Bee!”

I begin to resent the number of notifications that LinkedIn sends my way and the long, arduous process it takes to adjust all of them to a reasonable drip. The app uses whatever means it has available to claim my time and attention, even when I don’t want to give it. I feel manipulated and used.

Set aside for now any cases of anger or delight at a machine for simply working or not working. Instead, I want to focus on a range of cases in which we seem to be having reactive emotional responses to features of machines connected to the patterns of values and reasons they express to us in our day-to-day interactions with them.

Are these reactive emotional responses properly distinguishable from the reactive attitudes we feel towards other persons? Are they responsibility-conferring in at least some sense? What does this mean about the relative uniqueness of our interpersonal practices?

First, let’s determine who or what I am responding to in the above four cases:

  1. I am responding to the machines themselves.
  2. I am responding to the people who designed the machines.
  3. I am responding to some combination of the above.

I’m not sure there’s a single right answer among these three. Depending on the technology at play, it could be any of the above.

ChatGPT seems to present a case in which I’m responding to the technology itself and not merely to its creators, especially as the technology is not directly determined by the creators’ inputs. This is markedly different from the Kroger scanners, which more directly reflect a corporate interest in loss prevention (even if it comes at the expense of customer satisfaction).

The final two examples seem to be a combination response. The NYTimes Spelling Bee has its own internal beauty, but it also reflects good design on the part of the game and app designers. The LinkedIn mechanisms effectively manipulate me into spending more time and energy on the app, and this is a result of both the design itself and the inputs from the large number of designers who developed it.

If it were clear that our reactive emotions were just responding to the people who designed the machines, then we would be firmly in the realm of reactive attitudes towards persons and our inquiry would be closed. However, it’s unclear to me that our responses are solely to the human designers, and even more unclear to me that any emotions directed at the machines themselves are irrational.

In “Freedom and Resentment,” P.F. Strawson famously separates the participant stance from the objective stance. The participant stance makes reactive attitudes such as resentment, gratitude, anger, and love appropriate, and it confers responsibility and full membership in the moral community and human relationships.

The objective stance makes the moral reactive attitudes inappropriate, though emotions like fear or pity may still be felt. Through the objective stance, the person or thing being evaluated is seen as something to be understood and managed, and no responsibility is conferred.

Where do my emotional responses to my daily interactions with these machines fall in this taxonomy? Insofar as they respond to morally salient patterns of values and behaviors in the machines, they seem to go beyond the objective stance. But insofar as I don’t have full human relationships with these machines and don’t treat them in all the same ways I would treat morally responsible human beings, they do not fit the participant stance either.

Concerning the kinds of emotions I feel, it does not feel wholly inappropriate to use the language of loving a certain kind of design or feeling angry at an app or interactive machine. These attitudes are not the same as the kind of love or the kind of anger felt in deeply involved human relationships, but they are emotional responses that go beyond the repertoire of pity, fear, or a managerial stance.

While ChatGPT and other human-designed interactive machines do express patterns of valuing that communicate good or ill will or indifference (even if none of the three is actually there), they lack the kind of reasoning ability required for full participation in our moral practices (at least for now). The machines cannot respond to my morally inflected emotions in the way that human reasoners could.

This suggests a third kind of stance somewhere between the participant and the objective stance, where appraising responses and some personal involvement are appropriate but where no full responsibility or moral standing is given. These kinds of attitudes might be similar to the morally appraising responses that we feel towards AI- or human-generated art or video games.

The key difference that excludes these contemporary machines from the participant realm and from interpersonal human relationships is something along the lines of Strawson’s remark that “If your attitude towards someone is wholly objective, then though you may fight him, you cannot quarrel with him, and though you may talk to him, even negotiate with him, you cannot reason with him. You can at most pretend to quarrel, or to reason, with him.”

What this suggests is that the participant attitudes are only appropriate in relationships in which the two parties can reason with each other and mutually contest and shape the norms and expectations that bind them. ChatGPT, despite its extensive abilities, cannot yet comprehend moral norms and express its reasoned intention to set any expectations on itself or its users. It can play-act at such an exchange, but it doesn’t have the understanding or consistency or embodied experience to participate in human moral relationships.

Until machines can themselves reason with us and shape our collective moral norms, our reactive emotional responses to them will not fall into the participant attitudes and will not confer responsibility.

Elizabeth Cargile Williams

Elizabeth Cargile Williams is a PhD candidate at Indiana University, Bloomington. Their research focuses on moral responsibility and character, but they also have interests in social epistemology and feminist philosophy.

