Intelligence is Always Artificial

For Hegel, intelligence is always artificial. Indeed, there is nothing “natural” about understanding, consciousness, or intelligence itself.

We say certain behaviors come naturally to us. Most of us breathe when we are born. Our first breaths are often a cry. And while we might call these acts “natural,” surely we would not call them “intelligent.” 

Can intelligence ever be natural?

That which is “natural” is a famously contentious topic in philosophy. Derrida, for example, in his 1967 book Of Grammatology, notably blurs the fault lines between natural and artificial. But while we can contend that, yes, certain behaviors that we assume are natural—such as having breakfast in the morning, wearing clothes, and defecating in a toilet—are, in fact, learned behaviors, surely, nature as an object cannot be confused with artifice. There are, after all, mountains and rivers and forests. Nature as an object must exist even if our rendering of “natural” is up for debate.

In fact, this is what nature is to Hegel: brute materiality. Nature is just stuff. All the stuff “out there.” We may rearrange nature’s components this way and that, construct houses, build roads, make microchips, but underneath their composed artificiality, the material stuff is nature. 

For Hegel, consciousness is, of course, “real,” albeit immaterial. And precisely because consciousness is not material, consciousness is not natural. It is not an external object. This externality is what characterizes nature. Nature is external materiality.

And while it is tempting to construe consciousness as internal, its internality is a result of yet another externalization. Consciousness is, after all, the very process of externalization, of seeing oneself from a distance. Or better yet, consciousness is the cut of externalization, of seeing oneself from the outside. As Lacan tells us in Seminar X, quoting Baudelaire, “I am the cut and the knife.” If nature is externality, consciousness is external nature that has become external to itself.

Consciousness is a redoubled externalization. Consciousness is nature that has removed itself from nature.

***

“Artificial,” from the Latin artificium, a craft or a skill, implies not just that which is unnatural but likewise that which is not spontaneous. Etymologically, artificiality is literally the “doing of art.” Its meaning is linked to systematic and learned behavior. Learning a craft is in no way natural; it is instead the painful repetition of unnatural behavior until a new skill becomes a second nature. Painting, for example, is learned, a skill that must be repeated over and over again until it becomes increasingly immediate and therefore naturalized. This type of learned behavior, an artifice that becomes naturalized, moves toward immediacy. Second nature narrows the gap between the thinking, spirited being and the immediacy of nature.

For Hegel, this systematicity is the opposite of nature itself. Nature is spontaneous. And while it can be tempting to construe nature as systematized, Hegel’s nature is, in fact, radically contingent. It is spontaneous in its many sproutlings and accidents of birth. Strictly speaking, that’s all materiality is: contingency. It’s completely contingent whether there are 998 or 999 blades of grass in a particular patch. It’s contingent whether you’re born male or female. Any Darwinian notion of genetic accident is just that: accident.

And yet, nature is so disorganized that, at times, it even appears organized. But this is not some fact of nature. It is thought that thinks systems. Nature, Hegel tells us, has no concept of itself. It is spirited beings that give a concept to nature. Intelligence organizes the unorganized.

***

But what of AI? Without a doubt, AI is not just its programming; it also has a natural remainder, and not an insignificant one. There are warehouses of batteries and tons of water used to cool data centers, all dependent on the materiality of the internet’s hundreds of thousands of miles of cables.

But in terms of its artificiality and its intelligence, the Hegelian critique lies in pointing out that AI is in no way intelligent. In fact, it is precisely the opposite. 

While intelligence is always artificial, AI is, in actuality, not artificial enough to be labelled “intelligent.” 

Intelligence organizes the world through—and this is the critical point—an externalization of the self. Intelligence and the will go hand-in-hand for Hegel. Intelligence thinks and the will actualizes thought. Intelligence posits itself in its subjectivity, and the will in its objectivity. Intelligence is the cut, and the will is the knife. 

When we exercise our intelligence, we are precisely able to step outside of existing assumptions and systems presented to us. We are able to think outside the box. We have the ability to step outside of ourselves and outside of what’s been given. And when we put thought into action that forces us outside of existing circumstances, that is will. Both intelligence and will, in this way, are rare. 

Most people do the same ol’ thing every day, stick to routines, indeed, stick to their learned second nature. But to exercise intelligence is precisely to think outside the information that has been presented to us and beyond what we are used to thinking. Intelligence appears when we really think something radically different. Intelligence introduces a gap that allows for the externalization of thought.

And this is the critical reason why “artificial intelligence” is a misleading slogan. AI programs, from LLMs to graphic design generators, lack the ability to step outside of themselves. They are constrained to the program. It is, strictly speaking, impossible to program an algorithm to violate its own programming. The algorithm simply cannot support contradiction, precisely because it cannot think. And for Hegel, it is contradiction that is at the heart of thought.

What we call “AI” would be better named “algorithmic autocomplete.” It fills in the blanks, so to speak, based on a probability model that shuffles through the choices it has available, be it the best possible word to complete a sentence or the best arrangement of pixels based on the context clues given to it in a prompt. 
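To make the mechanism concrete, here is a minimal, purely illustrative sketch in Python. Every word, probability, and name in it is invented for the example; real systems use learned neural networks over tokens rather than a hand-written table. It shows only the shape of the operation: extend a prompt by always selecting the most probable next word from a fixed distribution.

# Toy "autocomplete": every word and probability below is invented for illustration.
bigram_probs = {
    "the": {"cat": 0.4, "dog": 0.35, "idea": 0.25},
    "cat": {"sat": 0.6, "slept": 0.4},
    "sat": {"quietly": 0.7, "down": 0.3},
}
def autocomplete(prompt_words, max_new_words=3):
    # Extend the prompt by repeatedly choosing the most probable next word.
    words = list(prompt_words)
    for _ in range(max_new_words):
        choices = bigram_probs.get(words[-1])
        if not choices:
            break  # the table offers nothing further: the program cannot step outside itself
        words.append(max(choices, key=choices.get))
    return " ".join(words)
print(autocomplete(["the"]))  # prints: the cat sat quietly

However large the model, the logic is the same: the output is always drawn from within the distribution the program already carries.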

AI can never have intelligence nor will for the simple reason that it cannot violate its own programming. It cannot step outside of itself. It cannot think outside of its own algorithmic box. 

Thus, AI is not intelligent precisely because AI is not artificial enough. Were AI able to externalize itself from its own nature, then it would be doing something akin to self-consciousness by forming self-awareness. But AI is trapped in its own programming. There is no gap between AI and itself. It is completely immediate in its relation to itself. And hence, without a subjectifying gap, AI is not artificial enough to be called intelligent.

Any claim that AI can help us think better, or that it can itself think, is not only untrue but pointedly dangerous. Relying on AI to write, edit, produce text, or give advice yields nothing that could be categorized as “intelligent.” Its outputs are, in fact, the opposite. They are constrained.

And this is precisely what AI has been doing, especially to young thinkers who have become reliant on it to write university essays, analyze a love interest’s texts, or give life advice. They are operating within an algorithmically defined box. They constrain not only their intelligence but also, in this way, their very freedom.

Katherine Everitt
Hegelian scholar

Katherine Everitt is a Hegelian scholar. She writes on the philosophy of science and technology, with a particular focus on space, quantum physics, and AI. You can follow her on Twitter @katherineveritt and you can check out her work at linktr.ee/katherineveritt.

2 COMMENTS

  1. Great! I came to the same conclusion based on Aristotelian natural philosophy and a constructivist anthropology (humans are artificial beings) – however, it was published only in Hungarian 🙂

  2. It’s becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean, can any particular theory be used to create a human-adult-level conscious machine? My bet is on the late Gerald Edelman’s Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came to humans only with the acquisition of language. A machine with only primary consciousness will probably have to come first.

    What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of the higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I’ve encountered is anywhere near as convincing.

    I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

    My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar’s lab at UC Irvine, possibly. Dr. Edelman’s roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461, and here is a video of Jeff Krichmar talking about some of the Darwin automata, https://www.youtube.com/watch?v=J7Uh9phc1Ow
