Do Soulless Abominations Walk Among Us? Or Are They Us?

I would like to suggest that, sometimes but not always, they are us.

Sometimes, people speak without thought. This can happen to most of us, and I would like to propose a way of thinking about how and why it happens, in the hope that understanding the cause of a problem is a substantial part of the solution.

For a long time, I suspected that some people lacked souls—not souls in the metaphysical sense, but that quality (whatever its metaphysical nature) that makes us human or, at a minimum, makes us different from a machine, however sophisticated, that nevertheless lacks mental states. Some people just did not come across as having anything like thought behind their words—and many people come across like that some of the time. I suspected this and did not know quite how to put it in words—and I left this idea unexpressed and unexplored.

Two things made it easier for me to express this idea: the most recent wave of AI advances, and Trumpism together with the political polarization that came with it.

ChatGPT and its ilk do not think—however impressive their capabilities might be. Rather, ChatGPT works from huge, multidimensional arrays representing which words are likely to follow other words. It is the study of AI and the observation of ChatGPT's responses that made me realize that it is far from rare for humans to respond in a way that is nonsensical in any logical or factual sense but makes perfect sense if we think in terms of the probabilities of words following other words. In many cases, human responses of this kind are inferior to what ChatGPT might come up with.
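As a loose illustration of "words that probably follow other words," here is a toy sketch in Python. To be clear about what is mine and what is not: the tiny corpus is invented for the example, and a bigram lookup table is vastly simpler than how ChatGPT actually works (real systems use neural networks over subword tokens, not literal word tables). The sketch only shows the basic idea that fluent-sounding output can be produced with no model of truth behind it.

```python
# A toy "words that probably follow other words" generator.
# This is a bigram lookup table, far simpler than a real LLM,
# but it shows the core idea: pick each next word according to
# how often it followed the previous word in the training text.
import random
from collections import defaultdict

# An invented mini-corpus, purely for illustration.
corpus = (
    "the numbers went down the numbers went up "
    "the border is secure the border is closed "
    "the numbers went down by a lot"
).split()

# Record, for each word, every word that followed it.
# Duplicates in the list encode frequency, so sampling from it
# picks followers in proportion to how often they occurred.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length=8):
    word, out = start, [start]
    for _ in range(length - 1):
        candidates = follows.get(word)
        if not candidates:  # dead end: no word ever followed this one
            break
        word = random.choice(candidates)
        out.append(word)
    return " ".join(out)

print(generate("the"))
# e.g. "the numbers went down the border is secure"
# Fluent-sounding, yet nothing in the program ever checks
# a claim against the world.
```

The point of the sketch is only that the generator continues the statistics of its inputs; truth never enters into it.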

Consider President Trump’s statement that, with his new measures, the number of illegal border crossings went down by more than 100% (4:45). At a mathematical, logical, or common-sense level, this is complete nonsense: a drop of 100% would mean zero crossings, and a drop of more than 100% would require a negative number of crossings. But Trump, just like ChatGPT, is terrible at math—and both make a great deal of sense, if by “sense” we mean producing words that have a high probability of following the prompting statement, and then each other.

Such ChatGPT-like responses are not limited to divisive political narratives. I thought back to events that made me think “There is no soul behind these words” and recalled the kind of conversations that angry adults sometimes have with children.

For example, let’s say a child is failing every class except art. An angry parent or caregiver may well say something like “You are failing every class!” and the child may respond with something like “No I am not, I am not failing art!” Predictably, the adult responds by accusing the child of being a smart ass or something similar. At a logical or mathematical level, the adult is entirely wrong—but in terms of the probabilities of words as people actually speak, the adult’s factually wrong claim and nonsensical response make perfect sense.

It was at this point that I once again went over the kind of conversations children and teenagers have with parents and teachers—and the sort of conversations I had in my dealings with children and teenagers. When tired, or angry, or pulled in multiple directions, the “like ChatGPT, only less sophisticated” responses are very easy to come up with and require very little, if any, conscious thought.

I thought of some coworkers, managers, politicians, and even some purported philosophers who can reasonably be described as “bullshitters,” and then I thought of the Ethics and Information Technology paper “ChatGPT is Bullshit”—and again realized that human bullshitters are not doing anything ChatGPT could not do. I thought more about the world of work (which might be uncharitably called “alienated labor in the age of end-stage capitalism”) and realized that I too have done this—that is, I have spoken without any regard for the truth (which is different from deliberate lying), stringing together words that flowed well enough but did not necessarily make any sense. Like many others, I too have spoken in a way that did not have a “soul” behind it. I did it because I was tired, I did it because I was stressed, I did it to meet a deadline, I did it (let’s be honest here) because I wanted to get out of there and go home.

So it’s not that some people are soulless abominations; rather, a great many of us (I hesitate to use a very powerful word like “all”) abandon our “higher faculties” (for lack of a better term) and do what ChatGPT does—only, in many cases, not as well—when under stress, when upset, when lazy, when our minds are otherwise occupied. At our worst moments, we can become what Descartes would have called “automata.”

I hope that my thoughts will encourage charity and understanding when dealing with speech that appears to happen without thought—and recognition that this is not just something that other people do. Further, I hope that this understanding can lead to the alleviation of the sort of pressures that can lead us to speak without thought. I also hope to see greater effort on our part to be consistently better than ChatGPT, to speak in ways that are beyond ChatGPT’s capabilities.

A final note and a caution. I am offering these ideas not so much as an assertion of what is or might be the case, but as a way of thinking about (and hopefully avoiding) a certain kind of mindless behavior. As for the caution—I ask the reader to avoid the temptation of thinking that soulless behavior is just something other people do.

Michael Voytinsky

Michael is an adjunct philosophy instructor at the University of the People and an IT security professional from Ottawa, Canada. He received his M.A. in Philosophy from the University of Wales Trinity Saint David; his M.A. thesis is titled “Utilitarianism as Virtue Ethics”. He is now contemplating what to do for his Ph.D.

4 COMMENTS

  1. Given the circumstances under which this arises, this could qualify as alienation. Vital contact (Eugène Minkowski) through words (or signs in general) is lost, and we give exactly what the social system expects, but no more, as if being in contact with other consciousnesses were no longer on the table.

    • I am not familiar with Minkowski’s work but I will look it up.

      If I understand you correctly, you are suggesting that when we fall back to being just an LLM rather than a being with a soul, we are essentially withdrawing into a kind of solipsism?

      When a parent tells a child “You are failing all your classes” and the child says “Wrong, I am not failing art” and the parent becomes an angry LLM – they are no longer communicating with the child at all – correct?

      • I wouldn’t say it’s exactly solipsism, but more of a way to relate to the world and the Other that happens to miss many desirable properties. I suspect it’s even a very active process, but one of inhibition: inhibiting thinking about truth, about the Other, about consequences, about ethics…

        I was also implicitly suggesting reading what you describe alongside Marx’s theory of commodity fetishism, especially how money allows us to believe we don’t depend on anyone to survive (because we “earn money” rather than exchange work time), while turning our own work into a mere instrument for earning money (rather than a way to relate to nature and other individuals). In the kind of situation you describe, I speculate this kind of behavior is really a way of treating words as a kind of currency, utterly meaningless beyond obtaining a short-term result, like “leave me alone” or “I’m not wrong”.

        That could also be compared to Sartre’s bad faith—which can be understood as an inhibition of one’s efforts in relating intuitions together to form knowledge. To use language in such an alienated, alienating, and bad faith way, one probably has to reduce the span of one’s understanding, if only to avoid cognitive dissonance.

        • So looking at it that way – what I describe as “AI-like” behavior is really acting as an object rather than as a human being.

          Or perhaps the bad-faith speech is genuinely “AI-like” – because such speech specifically rejects the speaker’s humanity.

          Is this what you are suggesting here?
