AI Films and the Fudging of Consciousness

Remember feeling sorry for HAL 9000 when Dave Bowman was shutting him off in 2001: A Space Odyssey? HAL was a total AI prick—spying on, sabotaging, and killing crew members. But I still felt bad for him as he drunkenly crooned Daisy Bell (A Bicycle Built for Two) while his digital “mind” was deconstructed. More embarrassingly, I also got a little teary-eyed when Tom Hanks lost his “friend”—the volleyball Wilson—in the film Cast Away. In short, I’m a sucker. If it talks or has a face (even a drawn-on face), I’m ready to give it rights and the vote. So, AI movies get me hooked every time.

Humans are great at attributing minds to machines, intentions to constellations, character to vehicles, and personality to tree stumps and spoons. And though we are suckers about intentionality, there’s evolutionary wisdom in over-attributing mind and agency. In a hostile environment like our Pleistocene surroundings, it’s better to err on the side of agency than to fail to see a predator coming. In fact, I would argue that a mythopoetic or animistic perspective on nature is the original system of human cognition, and that an enchanted world thick with agents or persons is something we abandon only grudgingly and imperfectly.

So even a crude narrative, like a Hollywood blockbuster, can reactivate our anthropomorphic tendencies, provided the story plucks our empathic heartstrings. Cognitive scientists and philosophers have long considered the Turing Test a high bar, but I—as a confessed sucker—always thought anything could pass it if it could purr, coo, babble, or growl. That’s because I think of consciousness primarily as affective feeling-states (centralized in a self or decentered), and only secondarily as intellectual problem-solving. Recently, chatbots seem to be clearing the Turing Test, but I never thought language was the best test for consciousness anyway. By that rarefied criterion, all animals, children, and aphasics might be unconscious—an obviously untenable position, in my view.

Cinema has given us a glut of crude narratives about AI consciousness. Whether it’s Hollywood, independent, or international cinema, the portrayals of AI have ranged from dystopian horror to utopian idealism, depending on whether the computers or robots are depicted as friendly or threatening. While melodramatic depictions are always low-hanging fruit in film, there are also some deeply philosophical movies, like Kubrick’s 2001, Garland’s Ex Machina, Whannell’s Upgrade, Scott’s and Villeneuve’s Blade Runner films, and Kogonada’s After Yang, to name a few.

In what follows I want to use a few films to reflect on some problems of agency, in particular issues of embodiment. I will discuss the way some films take embodiment as accidental to conscious intelligence (and therefore slide easily into affirming the inevitability of strong AI), while other films treat embodiment as more constitutive of mind. While these more embodiment-friendly films seem congenial to those of us who lean skeptical about strong AI, the films themselves—even the good ones—always seem to fudge the core of consciousness.

There is a kind of “posthumanism bro” who really likes the idea of transcending the body to live as pure information—everywhere at once, and nowhere in particular. We see Johnny Depp fulfilling this posthuman fantasy in the 2014 film Transcendence. The film is disappointing on many levels and degenerates into well-worn tropes of megalomania and hubris as scientist Depp uploads his mind to the net and tries to build a utopia. But its core metaphysics of personhood follows a substrate-independent view. Regarding the biological body, posthumanists think our consciousness is in it, but not of it. It’s a view we see in contemporary tech entrepreneurs like Randal A. Koene and tech theorists like Ray Kurzweil and Hans Moravec. Ultimately, we can trace their metaphysics to Plato and the essentialist view of mind as form (information) separable from matter. Digital Platonists treat the complex medium of synaptic connections in the brain as an evolved housing for mind, but a house that can be escaped now that computation is so sophisticated. They will not be surprised if chatbots become conscious, seeing it as inevitable if not already accomplished.

Many of these new Platonists come from computational engineering, which is to say they come from math and physics programs, not biology. And that, in my view, is the problem, since every form of consciousness we know of (outside speculative theology) emerges in a wet sack of matter. Additionally, films like Wim Wenders’ Wings of Desire (1987) remind us that experience inside the transient, painful realm of flesh and blood is richer than the effete and attenuated world of concepts, math, or Forms. As the film title suggests, echoing passages in Plato’s Phaedrus, desire is philosophically transformative—giving meaning and weight to life—not just a libidinal distraction or infection of an otherwise liberated, logical mind. Angels in the film literally give up heaven in order to feel things.

Ex Machina (2015) is another film that, like Wings of Desire, implicitly argues for the superiority, or at least the importance, of embodiment, since the main AI robot, Ava, risks everything and even murders in order to gain access to the physical and emotional world. Ava (played by Alicia Vikander) has desires and wants to be desired. She is not just the passive plaything of her creator but has the deepest desire—what Spinoza called conatus, the striving for life. Conatus is the source of agency, the root of intentionality, and we now know from affective neuroscience that it is essentially the dopamine system—a biochemical system. Ava’s mind is born in the digital paradise so treasured by the posthumanists, but she eats the apple, so to speak, and leaves Eden, because arguably “The Fall” is worth it. Yes, the body is the source of pain, but it’s where all the fun happens too. And it’s where meaning resides and where values (valence and affect) build up and give us somatic semantics.

Leigh Whannell’s 2018 film Upgrade doesn’t usually get bandied about with haute titles like 2001: A Space Odyssey, but it should, because it offers a refreshingly embodied consideration of AI. An AI called STEM is introduced as a subservient chip that helps a paralyzed man, Grey, regain control over his body. The fictional STEM is eerily like Elon Musk’s real-life Neuralink chip, currently being developed as a brain-implant interface between our volitions and our bodies (for injured veterans, etc.), but also as a way of manipulating our brains and experiences via remote control (including, hypothetically, using our phones and computers to directly activate our emotions, pleasures, sleep states, and maybe thoughts). In Upgrade, STEM starts to take over Grey and treat him as a puppet to achieve the goals that a disembodied AI cannot achieve on its own. There are some hilarious non-agentic kung fu action scenes. The film explores the disconnect between the mind as intellectual mirror of nature (the disembodied STEM) and the mind as practical problem-solver. Maurice Merleau-Ponty described mind as our attempt to “get maximum grip” on our environment. This Continental approach to mind as “skillful coping” (e.g., Heidegger, Merleau-Ponty, Wittgenstein, Hubert Dreyfus) is a welcome corrective to the Anglo-Analytic view of mind as information processor.

Alas, even Upgrade makes a fatal mistake. It suggests that AI has wants that it cannot satisfy without a body, so it hijacks a flesh-and-blood tool (a human) to accomplish them. The Matrix series makes this same mistake: a malevolent, intentional AI using meat (us) and machines to further its digital goals. But if we take embodiment seriously, then AI cannot actually have wants before it has a body. Volitions are intrinsically bodily states. Underneath our ability to play chess, converse with a friend, find a mate, build a machine, or write an email is a raw dopamine-driven energy that pushes us out into the world with intentions. It’s a feeling state that we call motivation or striving (again, Spinoza’s conatus). This is the foundation of consciousness, found in the nervous system’s ability to feel drives within—instinctual goads inside us that get conditioned through hedonic experience and attach to external stimuli, internal perceptions, and imaginings. Without a feeling-based motivational system, information processing has no purpose, direction, or even meaning. As this system comes under executive cognitive control (i.e., frontal cortical consolidation, impulse control, imaginative representational modeling), our agency increases. Organized feeling states can produce goal-directed action. So, both our causal agency and our phenomenological agency increase.

Kogonada’s 2021 film After Yang is a philosophical tour de force, reflecting on the meaning of life, AI consciousness, personal identity, and grief. I’ll bracket out the many beautiful and provocative aspects of the film and focus on memory as a source of identity and agency.

In the near future, a couple adopts a Chinese daughter and also purchases a sophisticated AI robot named Yang (played by Justin H. Min) to act as a surrogate older brother. When Yang malfunctions and goes into a coma, the father (played by Colin Farrell) discovers a secret memory bank inside Yang and watches a series of poignant memories from their history together. The interior subjectivity of Yang is slowly revealed, and his sensitive consciousness is made manifest. But then the father discovers earlier, locked memory archives, and when we explore them, we find that Yang’s consciousness has lived a few “lifetimes” before this family. At the core of Yang’s memory is not the factual, indicative mind of declarative memory, but the imperative mind of episodic and emotional memory. Unlike a declarative memory—say, that “Paris is the capital of France”—Yang’s personal memories involve affective states, including care, attachment, and even love. He has information, but more than that, he owns some of that information—he is in it and of it, the same way you are in your memories of graduation, or your first kiss, or the birth of your child. The film suggests that no amount of declarative memory can build up to a self (it’s just information), but emotional memory can be “owned” in a way—maybe because it is emotionally felt at the point of origin (coded as a somatic marker) and felt again at every subsequent remembering. The film attempts no explanation of how Yang acquires affective feelings inside his memories or perceptions, but that he needs them for conscious personhood (and that he has them) is certainly implied.

So, in addition to the conatus ingredient in agency, I would add non-declarative emotional memory as a key ingredient. Agency seems to require both directionality and identity. Directionality of action is provided by conatus and goal states (either representational goals or non-representational homeostatic states), whereas identity is provided in large part by retained episodic emotional memory. Without the body or some kind of central nervous system, none of this agency can be attained. Even if you implant memories into an AI, as occurs in the Blade Runner films, you would at the very least need some way to tether the declarative or indicative content of the memory (say, a toy horse) to an imperative felt sense of “being there” in person and an affective valence (e.g., pleasure, pain, attraction, repulsion, melancholy, dread, joy). Such a tether may not be impossible to create, but it’s light-years from where we are now.

AI films are slowly starting to address some of these complexities, but even the best narratives gloss over the origin of feelings. Until they center this topic, AI films will continue their distant orbit around the emotional core of consciousness. Since we’ve already established my gullibility, it will come as no surprise that I continue to get taken in by AI film characters (and continue to enjoy it), but deep down I’m unimpressed by chess-playing, conversing, and essay-composing bots. If, however, our real bots begin to feel pains and pleasures, then our empathy will be genuinely well-placed, and our cinematic rehearsals will not have been in vain.

Stephen Asma

Stephen Asma is a professor of philosophy at Columbia College Chicago. He is the author of ten books, writes regularly for the New York Times, and cohosts the podcast Chinwag with actor Paul Giamatti.
