
What Hegel Has to Teach Us about AI

This essay was previously published on July 6th, 2023, in The New Statesman

In the summer of 2022, engineer Blake Lemoine posted to Medium a transcript of his conversation with LaMDA, a chatbot in development that Google had hired him to troubleshoot. Lemoine’s post made headlines—and provoked the ire of his employer—because of its incredible claims: the engineer declared LaMDA “sentient” and even suggested that it had a “soul.” At the time, Lemoine’s outlandish assertions were met with incredulity, but several months later, following the public unveiling of OpenAI’s ChatGPT, his pronouncements no longer seemed so wild. ChatGPT can speak fluidly and coherently much of the time, approximating human speech. It can express opinions and write passable student essays. There is even evidence that it is capable of some form of spontaneous self-correction, widely recognized among philosophers of mind as a hallmark of human intelligence.

Have we finally built a machine that can think? The history of philosophy throws up a potential roadblock to the much-trumpeted march of AI towards human-like intellect. Such challenges are nothing new; in the 1970s, Hubert Dreyfus published a landmark book, What Computers Can’t Do, that drew on Wittgenstein and Heidegger to show that AI research at the time misunderstood what intelligence was. But another improbable protagonist—the 19th-century German philosopher G.W.F. Hegel—goes beyond these attempts, despite having lived and died over 100 years earlier. Hegel developed an explosive, and until recently largely ignored, account of the relationship between life and mind that overcomes the limitations of Dreyfus’ “critique of artificial reason” and, arguably, furnishes a new yardstick against which any purported AI must be measured.   

Large language models (LLMs) like ChatGPT differ from earlier forms of artificial intelligence—often referred to as “Good Old-Fashioned AI” (GOFAI)—through their use of deep learning neural networks. These networks are “trained” to recognize patterns and make predictions using large sets of data. LLMs, for example, learn to write and converse by analyzing vast quantities of text (Twitter feeds, Wikipedia entries, and so on) and building statistical models that generate the most likely continuation of any given passage. Such deep learning techniques are a major advance over GOFAI, which relied on hard-coded rules for symbol manipulation and information processing. Because LLMs can “learn” to converse without such hard-coded rules and can adapt to new data, they are thought to be far better at navigating the complexities of an ever-changing world.
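The contrast between the two paradigms can be made concrete with a toy sketch. The bigram “model” below is a drastic simplification of what LLMs actually do (real models learn billions of parameters over subword tokens, not word counts), and all function names are illustrative; the point is only the difference in kind between a fixed rule table and a prediction learned from data.

```python
from collections import Counter, defaultdict

# GOFAI-style: a hard-coded rule table for symbol manipulation.
# Every response is fixed in advance; nothing is learned from data.
def gofai_reply(prompt: str) -> str:
    rules = {"hello": "hello to you", "goodbye": "farewell"}
    return rules.get(prompt, "I do not understand")

# LLM-style (toy bigram model): "training" counts which word tends to
# follow which in a corpus; generation then picks the likeliest successor.
def train_bigram(corpus: str):
    words = corpus.split()
    follows = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def predict_next(model, word: str) -> str:
    # Return the word most often seen following `word` in training.
    return model[word].most_common(1)[0][0]

corpus = "the cat sat on the mat the cat ate the fish"
model = train_bigram(corpus)
print(predict_next(model, "the"))  # "cat", the most frequent successor
print(gofai_reply("hello"))        # fixed rule: "hello to you"
```

Feed the toy model a different corpus and its predictions change accordingly, with no reprogramming; the rule table, by contrast, can only be extended by hand.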

This sudden leap in AI capability has reinvigorated the debate about whether machines can emulate human intelligence, with many believing that deep learning neural networks will eventually match, if not surpass altogether, our kind of mind. Yet with few exceptions, there is not much discussion of what intelligence actually is.

Philosophy-informed, AI-critical works such as Dreyfus’s What Computers Can’t Do or John Haugeland’s Artificial Intelligence: The Very Idea (1985) appear to be relics of a bygone era, prior to deep learning. Yet even though they are predominantly concerned with GOFAI, their criticisms retain much of their force. Dreyfus criticizes what he calls the “traditional assumptions” of the GOFAI program, which lie in the “reduction of all reasoning to explicit rules and the world to atomic facts to which alone such rules could be applied.” If human reasoning is so understood, it is a straightforward proposition that a digital, finite-state machine could eventually be programmed to simulate it. The one problem with such assumptions, Dreyfus shows, is that they don’t account for the sort of intentional behavior we typically identify with “intelligence” in ordinary human life. In contrast to the traditional assumptions, Dreyfus identifies three core criteria of genuinely intelligent behavior. 

First, to be an intelligent being is to be embodied. To be and have a body is to be receptive to a world of tools and obstacles one has learned to navigate. We don’t have fixed responses hard-coded in advance but a practical “know-how” that shapes our sensory awareness of each new environment we find ourselves in. Second, to be intelligent is to always be situated in a “context of significance” to which certain things belong and others specifically do not. It is only in this kind of practical context that particular objects can show up in the first place as tools or obstacles. In being taught how, say, to drive a car, I gain the practical-sensory ability to grasp a specific kind of situation: my purpose of “getting around” enables me to discriminate accelerator from brake, the loose feel of first gear from the resistance at the top of fifth, a traffic jam from the free and open road. This notion of “purpose” brings us to the final criterion, what Dreyfus simply refers to as need. Human purposes, needs, and ends don’t exist prior to or somehow independently of either the bodies we are and have or the “situation.” It is rather those purposes that make intentionality—our capacity for directing our attention to particular objects and for acting in a self-directed way—first possible. Our projects and purposes thus allow the practical context of the “situation” to arise and enable us to sensibly discriminate this from that in our environment.

Dreyfus’s account helps us understand intelligence as such, and it is clear how both GOFAI and the LLMs fail to qualify. GOFAI flounders on the rock of the plastic, non-formal character of our “skillful coping,” while the LLMs lack the situation-specific, sensory awareness that embodiment affords us. Most fundamentally, while AI systems may have “objectives” they must fulfill—composing an essay, for instance—such objectives are always derived from and dependent on the ends of their programmers. Both the classical architecture and the more recent neural networks lack the sense of purpose that enables our intentional relation to the world.

Dreyfus’s account has its limits. Despite his emphasis on embodiment, his notion of the body is peculiarly disembodied. Dreyfus draws a hard distinction between the biological body as a set of physiological mechanisms continuous with the rest of nature and the phenomenological body as the locus of experience and skilled coping. Yet how then do we explain their interaction? Dreyfus is unable to answer this question. He is also famously insistent that skillful coping can’t be artificially simulated because it represents “orderly behavior without recourse to rules.” Yet if our purposive behavior isn’t rule-bound in some sense, how is it generalizable beyond any particular situation? And how, within any given situation, do we discriminate success from failure—driving a car well from wrecking it or stalling out?

In works like the Science of Logic (1832) and his Encyclopedia project (1830), Hegel indirectly addressed these issues while working out his novel idea of rational life. Hegel has an unfortunate reputation as a kind of hyper-idealist who believed that the material universe is the self-expression of a cosmic super-mind, Geist (translated as either spirit or mind). Nothing could be further from Hegel’s actual thinking. Through a potent reimagining of Aristotle’s metaphysics and Immanuel Kant’s late work on the organism, Hegel endeavored to show that intelligence and intentionality first arise in nature with the contingent emergence of life. For Hegel, living organisms have the internal purpose of maintaining their own form and of flourishing as the distinct kinds of being they are. The purpose of an organism is “internal” in the sense that its existence doesn’t depend on the ideas and ends of, say, an external designer—so no “divine clockmakers” in Hegel’s account. Organisms organize and maintain their own parts in light of their internal purpose of maintaining themselves; the parts exist for the sake of the whole, and the whole depends on the parts to keep itself going. 

In contrast to Dreyfus’s view that the phenomenological body is distinct from the biological one, Hegel revealed how the parts of an organism exist for the sake of the experiential whole. Each living individual is a member of a species, from which derive the criteria for sickness versus health, withering versus flourishing. It is by virtue of their species-specific, internal purpose that animals have the bodies that they do and exhibit a purposive relation to their surroundings. As Hegel argues, pain and pleasure are the most basic forms of intelligent responsiveness to an environment: it is through pleasure and pain that animals take the things around them as good or bad, instrumental or inimical to their flourishing. Iron responds to moisture by rusting, but it does not intend to rust. A lion, by contrast, is not just causally induced to act by its desires and perceptions; rather, it takes the running gazelle as prey, the tree in the distance as a place to rest, the hyena pack as predator. It is in this way that the purpose-governed activity of organisms “enacts” their environment and allows a context of meaningful relations—the plains of the Serengeti, for instance—to arise.

If Hegel is right that intelligence can only be exhibited by a living organism, the implications for AI research are staggering: we can’t produce artificial intelligence without also producing artificial life. What Dreyfus misses is that his third criterion, “need,” has its origin in the organic end of flourishing. And now that we’ve grasped the original ground of intelligence in life, we can re-examine higher-order organisms like us in order to see what would be required to artificially produce a human-like mind. Hegel develops a powerful, anti-Cartesian account of human reason not as a set of formal-logical processes separate from affect, desire, and supposedly primitive “animal functions.” Rather, for Hegel, human reason is a distinctly reflective way of being an animal. If the other animals maintain themselves in light of given species ends that they can’t question or change, we maintain ourselves as material beings in light of shared social norms that are intrinsically contestable and that can be revised. 

Recall that, for Dreyfus, human intelligence can’t be emulated computationally because it isn’t completely rule-governed. This creates a conceptual puzzle, because without rules, our practical activities lack determinacy—we can’t make out the boundaries between one practice and another or, within a single practice, distinguish success from failure. Hegel has a solution to this puzzle: his idea of “concrete universals.” For Hegel, to drive a car or to cook a meal is to be aware that I’m driving or cooking, and this means that I am trying to drive well and be sensitive to what doing so requires. This idea of self-awareness isn’t akin to the sort of introspection we engage in from time to time but is a basic condition of all action. For Hegel, the relevant question isn’t whether I’m checking off an abstract list as I drive but whether I’m responsive in driving to what it would mean to succeed under the circumstances. Ultimately, at issue is not whether my actions are codifiable into a set of unchanging principles but whether my actions are justifiable to other social actors also trying to drive or ride along. Rules in Hegel’s sense are “universal” because they must be shared by others, but they are also “concrete” because their content is always a matter of what can be justified here and now.

In addition to solving the puzzle Dreyfus creates, Hegel’s model of rule-following upends assumptions that underlie both GOFAI and deep learning paradigms. Reasoning can’t be formalized in the way that classic AI research thought, but it also isn’t simply “non-cognitive” and uncodifiable, as Dreyfus suggests. Rules for Hegel are not recipes or blueprints for action and belief but forms of self-awareness attained through initiation into social life. It is by virtue of such rules that we reproduce ourselves as the distinctly rational organisms that we are. Such rules empower us to unify our actions and beliefs across time and to discriminate the helpful from the harmful in our everyday dealings with the world—ultimately for the sake of our flourishing as social creatures. 

At the same time, the plasticity of Hegel’s “concrete universals” should not be mistaken for the kind of probabilistic approach to learning characteristic of recent AI. In so-called “hard” ethical cases—should I visit my ailing father or help my closest friend study for a life-altering exam?—we do not step back to calculate what most people are most likely to do. Our ethical reasoning involves no “predictive calculus” but is a question of judgment and moral imaginativeness, of what we take to be justifiable to ourselves and those around us under novel circumstances.

What are the broader stakes of such a Hegelian intervention? Consider the Silicon Valley doomsayers prognosticating a Skynet-like takeover. The sapient machines they envision have little to do with the formalistic and probabilistic models of “mind” underpinning contemporary AI. Far more worrisome than the fantastical threat of the “singularity” is the threat that lies in the existing technology itself, shaped as it is by the deeply anxious and unstable, often violent, social milieu from which its data sets are drawn. Even if claims of AI-driven automation are overblown and out of touch with current economic realities, large language models like GPT-4 will eventually be integrated across a range of sectors to speed labor along and cut production costs. And as Marx pointed out, instead of freeing up our time for meaningful work, under capitalist conditions “the most developed machinery forces the worker to work longer than primitive man does, or than he himself did with the simplest, crudest tools.” Yet there is an alternative. Marx also emphasizes the untapped, emancipatory potential of the technological innovation spurred by capitalist competition. The task, then, is to rethink artificial intelligence not as a competitor to, but as an inorganic extension of, actual intelligence. But putting an end to the ongoing mechanization of human reason is not just a matter of adopting a better theory. It will first require that we “pull the emergency brake” on our runaway mode of production, instead of passively awaiting the mechanical overlords that—for anyone paying attention—have already arrived.

Jensen Suther
Junior Fellow in the Harvard Society of Fellows

Jensen Suther is a former Fulbright Scholar and received his PhD from Yale University. He is currently a Junior Fellow in the Harvard Society of Fellows. His writing has appeared or is forthcoming in a range of academic and public-facing venues, including the Hegel Bulletin, Representations, Modernism/modernity, The New Statesman, and the Los Angeles Review of Books. He is currently working on two books—Spirit Disfigured and Hegel’s Bio-Aesthetics—which explore Hegel’s legacy for Marxism in aesthetic, political, and philosophical contexts.

