
Embracing the Mad Science of Machine Consciousness

Almost 25 years ago, I wrote my first (almost certainly naïve) MA thesis on chatbots for what was essentially a philosophy and AI program (now defunct), trying to work out their place in the evolutionary story of intelligence alongside biological intelligences. Watching the current discourse around artificial intelligence (AI) as it plays out in mainstream media, various pedagogical contexts, and philosophy specifically is challenging. In some ways, it feels like watching a discipline that’s forgotten its history despite so many of us having lived it so recently. The history of artificial intelligence, in all its philosophical and technical guises (the metaphysics that would enable it, the ethics that should guide it, the political considerations that should temper it), is, in many ways, also the history of cognitive science, and there are several reasons why remembering the philosophical grounding of these fields matters now more than ever.

I was an undergraduate in the mid-1990s when I accidentally fell into studying AI. I was studying psychology and philosophy at Rutgers, and at least in those days, it seemed that the departments were totally blended and focused around the Center for Cognitive Science, where there was no real division between the theory and the practice of cognitive science, and hence of AI, at all. I wasn’t really taught how to do psychology without the foundational theories of philosophy, or philosophy without the empirical support and theory-fodder from psychology. To study AI was necessarily to wade into the psychological data and its implications, as well as to work out what kinds of underlying cognitive architecture could possibly support that data. The field of AI bridged philosophy and psychology in precisely the same way that cognitive science did: it was an applied way to test the theories growing out of cognitive psychology and the philosophy of mind. AI would either produce a mind kind of like ours, in doing so answering questions about the underlying nature of our minds, or it wouldn’t, in which case it would at least narrow down the range of possibilities, teaching us that minds were not, perhaps, computational systems, or that, if they were, we at least weren’t using the appropriate materials or structures to replicate them.

But since those exciting and promising days when I was a starry-eyed student in the 90s, cognitive science, as a major academic branch of AI, has seen an enormous shift. In a 2010 paper, Dedre Gentner predicted that cognitive science as a discipline would be absorbed by, or dominated by, cognitive psychology, at the expense of all of the other core disciplines that traditionally informed its rich interdisciplinarity. The decades-long cultural and political attack on the humanities in higher education in favor of the allegedly more practical STEM disciplines certainly hasn’t helped. This focus on quantitative data over theoretical questions has all but written the philosophical nature of consciousness out of the story of cognitive science and, hence, out of AI. And I think that’s a shame for several reasons.

When people talk about AI, it is outrageously unclear what we’re even talking about. Are we talking about clever algorithms cleverly deployed to solve domain-specific problems? Are we talking about a system that is meant to replicate human intelligence? (And what is human intelligence?) Do we mean the fabled AGI (artificial general intelligence), a label that seems to have replaced the name “Strong AI” that was popular for decades? There are dozens of interpretations of what the phrase “AI” has meant over its history, each implying different assumptions about the nature of cognition (Symbolic? Sub-symbolic? Linguistic? Bayesian? Computational?) while, at the same time, steadfastly refusing to imply anything at all about consciousness.

William James, one of the founders of American psychology (and incidentally, also one of the most important early American pragmatist philosophers, again revealing the complex interrelation between these two fields), described psychology as “the description and explanation of states of consciousness as such.” And yet, as Evan Thompson recounts in Mind in Life, cognitive science has so distanced itself from questions of subjectivity and consciousness that it’s created a new “explanatory gap,” as though one can study cognition qua cognition without addressing or even acknowledging the conscious aspect at all. This is exactly how we get bizarre claims like that of Blake Lemoine, who claimed that Google’s LaMDA chatbot was sentient; we’ve divorced the questions of cognition and consciousness so thoroughly that we have no solid underlying theory of how they tie together. As a result, we get claims like this, disconnected from the reality and possibility of the systems we’re currently building. Many of us are watching the discourse about ChatGPT as a repeat of the same discourse we saw with IBM’s Watson not that long ago. Once Watson beat Ken Jennings at Jeopardy!, the media was sure we were embarking on the AI revolution and that AI would fundamentally alter how every business worked and how education was delivered and received. Basically everything we’ve seen about ChatGPT in the last year was a repeat of how the wider public was introduced to Watson, and the disappointment when Watson failed to live up to the hype was wide-ranging. (Of course, many of the philosophers of AI predicted Watson would fail to live up to the hype, just as many of us are unimpressed with ChatGPT.)

Indeed, back in those wild days of AI in the 90s, we were straight-up told that we might have to give up the very idea of consciousness to make progress on the question of what the mind is. Like all good nerds, I still have all my undergraduate course materials, and the pages from my 1996 “What is Cognitive Science” seminar include the following direct quote from Zenon Pylyshyn’s days of teaching the course: “Moreover it assumes that this is the type of behavior that cognitive science will be concerned to explain. It turns out that categories such as ‘voluntary’ or ‘conscious’ are very likely ones that we may have to give up as the science of mind develops a scientific base.” Cognitive science’s relationship to consciousness is like when I’m holding a piece of chocolate, and my dog gets as close to me as physically possible and then looks pointedly away, refusing to make eye contact. Cognitive science is clearly about consciousness, and yet, more than ever, we refuse to look that phenomenon in the eye. As a result, AI has proceeded with this weird attempt to create a mind without acknowledging what makes a mind interesting.

So, what role should consciousness play in the contemporary study and pursuit of AI? Well, it depends on what we mean by AI. If we’re just looking for algorithms that seem to solve some problems quickly, consciousness should probably play no role at all. Most of what masquerades as AI right now is a collection of profoundly biased, problematic probability machines with specialties in plagiarism, and the last thing we would want to do is suggest those systems might have minds. That’s an ethical disaster, handing off all of the damage those systems are enacting on minorities and oppressed peoples to the systems themselves rather than placing the blame on the creators of those systems and the lawmakers who don’t understand them well enough to regulate them. It seems evident to me that a calculator is much more mathematically intelligent than any living human. Yet that enhanced intelligence is profoundly dull, and not at all what’s interesting about AI. The only reason AI remains interesting is that it’s a kind of mad science, trying to unlock the mysteries of consciousness to help us learn more about what it means to be human and how conscious experience, in all its richness, emerges from what are otherwise non-conscious components.

Consciousness is a tricky subject. There was a long period, what we sometimes call the “AI winter,” in the early 2000s, where not much happened in AI. Around the same time, the study of consciousness got deeply weird(er). The flagship conference for the academic study of consciousness has traditionally been Towards a Science of Consciousness, founded at the University of Arizona in 1994, and it has never been a place where the study of the mind has been tedious. In fact, I had stopped attending the event in recent years partly because, unlike most academic conferences I attend, it openly embraces pseudoscience, weirdness, and just plain implausible avenues of study. From my first time at the conference, I remember sitting in a room listening to extremely influential people, wondering why anyone was taking their work seriously (that work was just straight-up Ganzfeld experiments). As an early career researcher at the time, I distanced myself from this work that had long ago been thoroughly debunked. I returned to the conference last year by invitation and am so glad I did. The conference is still deeply weird and offers a platform to all kinds of work that I feel is misguided. (They also recently changed the name from Towards a Science of Consciousness to just The Science of Consciousness, so I hope that means we’ve made progress!) But in returning to this event, with its diverse mixture of mundane academia, fringe science, and what sometimes seems more like stage magic than science, I remembered why the event is like this: it’s weird because consciousness is weird. Deeply weird. And we don’t have philosophical structures that do the scaffolding work necessary to make sense of consciousness.

And this is why I think those of us in AI still focused on understanding the mind need to reject the label of AI—leave it to expert systems and clever algorithms—and re-embrace the idea of machine consciousness. (The Journal of Artificial Intelligence and Consciousness used to be called the International Journal of Machine Consciousness—maybe it would be more honest to revert to the old name?) We’re nowhere near a theory of consciousness such that an artificial version would make sense, and that’s good. It’s a research program that will do what AI was always supposed to do: teach us more about what it means to be human, how the rich varieties of human cognition work, and what they consist of. In an online discussion group I belong to, a member recently said their school had rejected a proposal for a joint degree in philosophy and computer science, claiming that they “did not find the pairing convincing” because those fields have nothing relevant to say to one another. I suggested they point out that Alan Turing’s “Computing Machinery and Intelligence” (1950), where he popularized what is now called the Turing Test, was published in one of philosophy’s most respected journals, Mind.

Of course, there are significant concerns we need to address if we take seriously the idea that the work we’re doing is in machine consciousness. Yet most of these problems arise from broadly misunderstanding the nature of AI systems: anthropomorphism, the capacity for deception, and harmful bias baked in from training sets. We must strike a delicate balance: the computer may be the most promising system for modeling consciousness we’ve yet seen (or it may not be!), but we must remain open to the idea that what we are is fundamentally, at a foundational level, not the kind of thing a computer could be. We need to acknowledge profound structural differences that are belied by surface similarities (large language models appear to use language, but what they do is wildly different from what we do! See, for example, Linguistic Bodies). As philosophers and cognitive scientists, our expertise is at least in part about looking deeply at problems and trying to suss out exactly these conceptual confusions. It’s our job to evaluate these questions and have these conversations with the public at large so that the AI hype cycle doesn’t do more damage than it has already done. If our job is to chisel away at concepts until we fully understand their conditions, then we need to stop conflating “AI” with “AC”—no one doubts that machines are already very intelligent! If we’re talking about thinking and consciousness and what makes our minds unique, then we can’t keep shying away from the language. We are nowhere near making conscious machines. We must repeat this loudly and often. But we also shouldn’t be afraid of asking whether we ever could have conscious machines, and how and why.

The Current Events Series of Public Philosophy of the APA Blog aims to share philosophical insights about current topics. If you would like to contribute to this series, email rbgibson@utmb.edu.

Robin L. Zebrowski

Robin Zebrowski is a Professor of Cognitive Science at Beloit College, where she chairs the cognitive science program and has a joint appointment in philosophy, psychology, and computer science. She has been working on the metaphysics of AI since the mid-1990s, and most of her work involves 4e cognition (embodied, embedded, extended, and enactive). Recent papers include work on enactive social cognition in AI and on anthropomorphism in relation to machine minds.

1 COMMENT

  1. It’s becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a conscious machine at the level of an adult human? My bet is on the late Gerald Edelman’s Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with only primary consciousness will probably have to come first.

    What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I’ve encountered is anywhere near as convincing.

    I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

    My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar’s lab at UC Irvine, possibly. Dr. Edelman’s roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461

