In this interview, John Kaag speaks with Mark Johnson about how the philosophy of metaphor threatens the foundations of analytic philosophy and has made waves far beyond it, the link between Mark’s work on metaphor and cognitive science, how he is constructing a Morality for Humans, and how his Philosophy 101 Teaching Assistant saved him from an engineering career.
Give me a picture of where you grew up and what your background was.
I was born in 1949 and grew up in Kansas, in America’s heartland, at a time when memories of WWII were vivid and painful, and Cold War fear of invasion and nuclear extinction dominated our daily lives. I remember numerous “duck-and-cover” drills where we grade schoolers were taught to huddle under our desks in the event of a nuclear attack, and we were told that holding a sheet of newspaper over our heads would protect us from fallout. The standard joke amongst the boys in the class was “Question: What should you do in the event of an atomic bomb blast? Answer: Step One – Bend over and kiss your ass goodbye.” Yet, in spite of this generalized existential angst over annihilation, it was also a time of post-war economic growth and unchecked expansion in business, politics, and education. In white America, there was a widespread belief in American exceptionalism, coupled with a hope for a brighter future for those who worked hard. I came from a middle-class family, with a father who worked a full-time job as an auditor for TWA and four supplementary bookkeeping jobs for small businesses, and a mother who somehow fed her family of six on $20 a week. There was lots of jello, hamburgers, vegetables from our garden, creamed corn, and creamed chipped beef on toast for dinner.
Where did you go to college and what was it like? Why did you eventually go into philosophy?
My father had a high school diploma, and after he left the Navy, he went to night school to take courses in business. My mother earned a bachelor’s degree in English literature from the local college, which was the highest degree anyone in my family had ever achieved. My father thought that the purpose of a college education (for boys only, since the girls were meant to be secretaries, airline stewardesses, and maybe teachers) was to get a good job. Since I didn’t have a clue what it meant to go to a university, I accepted my father’s suggestion that engineering was just the degree for me. So, off I went in 1967 – completely naïve – to the University of Kansas fifty miles down the road to become an engineer, even though I wasn’t really sure what engineers did to earn their high salaries. That’s how I ended up my first semester in ME1, a weekly course to help us figure out what kind of engineering we were suited for. It was a perfectly successful course, from my perspective, because every Friday they covered a different kind of engineering (electrical, chemical, civil, etc.), and every Friday afternoon I came out of class thinking, “My God, I certainly don’t want to do THAT!”
Fortunately, that same first semester I enrolled in Philosophy 101, and my destiny was determined. It wasn’t my professor who got me hooked. He was a nationally known logician, but I often found that he misrepresented many of the views he summarized. He spoke not at all to my existential concerns and passions, but I had a terrific teaching assistant, Robert Godbout, who took the time to talk to me after class about my confusions and interests, as we walked back to the dorms. It was because of him, and later, my honors thesis director, Michael Young, that I first fell in love with ideas. Not just any ideas, but views about the meaning of life, why we are here, how we ought to live and treat others. For a hick from Kansas, it was an incredibly exciting time in the late 1960’s – the Vietnam War, the Civil Rights movement, what was then called the Women’s Liberation movement, and the dawning of environmental activism. This is what philosophy meant to me then (as it does today) – the attempt to intelligently address the most pressing and vital human problems of the day, problems of the utmost existential and moral import. What could possibly be more important than that?
So, when I filed for conscientious objector status in the draft and was instead given a deferment, I went off to the University of Chicago for graduate study thinking that there I would continue my quest for the answers to the ultimate questions about human meaning, purpose, and values. Boy, was I in for a surprise. I had virtually no idea of what professional philosophy meant at a prestigious institution. It meant seminars on Frege and Quine, discussions of the a priori conditions of knowledge, and analyses of referential opacity, ontological indeterminacy, and radical untranslatability in conceptual systems. There was nary a word about the meaning of life or any of the existential concerns of ordinary humans. This was the heyday of classical linguistic analytic philosophy, and so everyone assumed that analysis of concepts and linguistic structures was the key to understanding mind, thought, and language. The existentialism that so intrigued me in my undergraduate days was nowhere to be found. It was all conceptual analysis and logical rigor, focused on the status of knowledge and truth.
Somehow, I found teachers there who DID care about these existential and moral matters, and who renewed my conviction that philosophical reflection was important for a life lived reflectively and intelligently. Ted Cohen, who blended Austinian philosophy of language with Cavellian attention to the ordinary in daily life, always seemed to find a way to relate our text for the day to something that mattered for our lives. And, from a very different perspective (i.e., hermeneutics), Paul Ricoeur taught me how to focus on human understanding and meaning making. It was in a course on Metaphor and Religious Symbol that he co-taught with David Tracy that I was introduced to the central role of metaphor in human thought. I sensed immediately that something was going on with metaphor that my training in analytic philosophy of language had not prepared me for. I began to see that the depths of human meaning far exceeded our linguistic capacities and achievements.
Your first work was in philosophy of language and metaphor. Could you summarize your findings and how it stood in marked contrast to the “big players” in the field?
Up through the first half of the twentieth century, virtually nobody in philosophy thought that metaphor was an important topic. At best, it was regarded as a device of poetry and rhetoric that could and should be avoided in serious scientific and philosophical theorizing. The general view behind this dismissal of metaphor was a pervasive objectivist and literalist understanding of the nature of language and thought. The full picture is, obviously, complex, but it can be summarized roughly as follows: philosophical analysis was supposedly concerned with language, thought, meaning, knowledge, and truth. Meaning was held to be conceptual and propositional, and it had to be literal if our utterances were to map directly onto states of affairs in the world in a truth-functional fashion. Philosophers humbly thought of themselves as special people — the bearers of rationality and critical reflection — because they could allegedly explain which types of linguistic expressions are meaningful and which constitute genuine knowledge claims, capable of being true or false.
Now, within this objectivist and literalist view of meaning, knowledge, and truth, metaphor was thought to be unimportant or even dangerous. It was regarded as little more than a rhetorical figure of speech in which one thing is spoken of, indirectly, in terms appropriate to some other kind of thing: in short, a form of illegitimate sort-crossing, a category mistake. So, there were two basic options:
- metaphor is not even a semantic phenomenon (i.e., has no cognitive meaning), and therefore is incapable of making truth claims, or
- metaphors can convey meaning, but that meaning must be reducible to a set of proper literal utterances.
In either case, metaphor is unimportant. Consequently, philosophers were inclined to dismiss metaphor as cognitively insignificant, and they thought a mature science and philosophy would eventually do away with metaphors, which they tended to see as leading us to misunderstanding, ambiguity, vagueness, and confusion.
Under the tutelage of Paul Ricoeur and Ted Cohen, I argued in my doctoral dissertation, following the influential arguments of Max Black, that metaphors were meaningful, cognitively significant, and not reducible to literal utterances. When I got my first job, at Southern Illinois University, they didn’t really understand what I was doing, and so they hired me to teach aesthetics, assuming that metaphor was a poetic device and poetry was part of aesthetics and philosophy of art. During my second year as a young assistant professor, I had the good fortune to spend six months on a visiting appointment in philosophy at Berkeley. My second day there I met George Lakoff, the renowned linguist and cognitive scientist, and we quickly realized that we shared the conviction that metaphor had a special importance in human thought that could not be accounted for by mainstream theories in our respective disciplines of linguistics and philosophy. In fact, especially in philosophy, the idea that metaphor was at the heart of abstract conceptualization and reasoning concerning all aspects of life simply did not fit the literalist, objectivist view of meaning, knowledge, and truth that defined the mainstream philosophical orientations of the day. George, recognizing that analytic philosophy of language was fundamentally at odds with the view of meaning we were developing, said to me something like,
You know, Mark, we’re going to have to develop a new philosophy adequate to this new understanding of metaphor and meaning. You’re a philosopher, so let’s think one up.
This new philosophical perspective, which was to draw on cognitive science, phenomenology, cognitive linguistics, and pragmatism, would be non-dualistic, non-literalist, non-objectivist, and grounded in a rich notion of experience. Consequently, even though in the 1970’s a number of prominent philosophers had begun to acknowledge that metaphor was important, they could not accept our view of metaphor as a primary process of human thought, because their entire objectivist and literalist view of language and thought was fundamentally at odds with the basic assumptions of this experientialist view we were developing. For the most part, the response of the “big players” was to ignore what came to be called Conceptual Metaphor Theory, and to continue talking about metaphor and language without paying serious attention to the empirical study of language, meaning, and thought that was coming from the cognitive sciences.
What was the reception of your work on metaphor?
We got two completely different, and completely opposite, responses to our view. As I’ve just mentioned, philosophers mostly just ignored our position, because it was so dramatically at odds with their fundamental assumptions about concepts, meaning, thought, and truth. For example, Lakoff and I were once invited to the University of Chicago, when Donald Davidson was still there, for a kind of debate that would pit our Conceptual Metaphor Theory against his deflationary view of metaphor. George and I tried to engage Davidson by assessing his famous “What Metaphors Mean” essay in the context of contemporary cognitive science research on language and thought. We were hopeful that we could get Davidson to confront the empirical research that might challenge his view. However, he simply ignored us, right there in the same room, when we asked him to respond to our criticisms. Davidson, as I recall, summarized his view, made no reference to our alternative theory, even by way of criticism, and ignored our challenge to consider cognitive science research. This dismissal helped prepare me for the widespread neglect our theory would experience at the hands of mainstream philosophers within the analytic tradition.
However, outside philosophy, it was a completely different, much more positive, story. Over the thirty-seven years since we published Metaphors We Live By, our initial formulation of Conceptual Metaphor Theory, we have received thousands of letters and emails, from scholars in every conceivable field who recognized the value of metaphor analysis in their discipline and who appreciated the perspective on meaning and mind that we proposed to make sense of the various phenomena of metaphor (not just in language, but in art, music, dance, theater, architecture, spontaneous gesture, and ritual practices). They have embraced our cognitive linguistic conception of meaning, thought, and language. Over almost four decades there have appeared hundreds of cross-cultural studies of the metaphors that define various cultural systems and practices, and of the role of metaphor in scientific disciplines and fields. These have come from linguistics, cognitive and developmental psychology, neuroscience, biology, anthropology, pragmatist philosophy, phenomenology, literary theory, gender studies, political science, economics, science studies, music theory, aesthetics, art history, architectural theory, film studies, and on and on. Metaphors We Live By has sold over a quarter of a million copies and has been translated into over twenty languages, so it is clear to me that we put our finger on an important set of cognitive processes in human conceptualization and reasoning, even if we have had to revise and expand the theory over the years in response to new cognitive science research. I believe that this work on metaphor, and our subsequent work on the bodily sources of mind, meaning, and thought, has helped people talk about meaning as it operates in their own lives, in a way that traditional philosophy of language could not provide.
Why do you think that people were so averse to the theory?
As I’ve already suggested, those philosophers who reject our theory do so mostly because they find the theory of meaning we are proposing to be threatening to their philosophical projects. If they were to take it seriously and incorporate the scientific research on meaning and conceptualization that it draws on, they would have to abandon some of the grounding assumptions of their entire philosophical and methodological orientation. None of us are very inclined to give up theories and assumptions on which our research projects have been based. Most of us, myself included, are prone to confirmation bias – the tendency to use evidence that supports our views and to ignore or dismiss evidence that might challenge them. Therefore, I do not expect mainstream traditional analytic philosophy to “come around” to this new perspective. What I see, instead, is that many young scholars and researchers in diverse fields are starting to appreciate the necessity of experimental evidence, and they are beginning to realize the limitations of some of the methodological assumptions that they were trained to accept. It is their work, which is more experimental, empirically based, and open to evidence from multiple perspectives, that promises to reform philosophy for the better and to produce a richer account of mind, meaning, and thought.
Why the denigration of the imagination and metaphor?
Paul Ricoeur once said, in his class on Kant’s theory of imagination, that one of the more pressing needs in philosophy today is for an adequate theory of imagination. Philosophers have traditionally treated imagination as a non-rational and conceptually unconstrained form of creative insight. Even those who praise artistic imagination and creativity have tended to think there is nothing much that can be said about how it works, since it is believed to be a non-rule-governed process. Gottlob Frege, thought by most to be the father of modern analytic philosophy, regarded imagination as a private, idiosyncratic subjective process that could never rise to the level of shared, communicable meaning. This prejudice has tended to persist among those who restrict meaning to conceptual and propositional structures. If metaphor is then taken to be a device of creative imagination, it, too, is denied any significant cognitive status.
Imagination and metaphor come to be regarded as sites of ambiguity and indeterminateness in human thinking. They may be vehicles of creativity, but, as non-rational, they are thought to be based on a relatively inexplicable free play of mental faculties. We must start over in the way we think about imagination. It is not a distinct faculty, nor is it irrational in any deep sense. Rather, our capacity to imagine situations and events is based on our ability, first, to recognize patterns in our sensory-motor experience and, second, to appropriate those sensory, motor, and affective structures and processes for abstract conceptualization and reasoning, including the generation of novel hypotheses. We must see imagination as basic to our ability to make meaning and to broaden and deepen our understanding.
Much of your work on metaphor led us to think about “embodied cognition.” What exactly is that?
Lakoff and I saw, early on, that the source domains of our conventional conceptual metaphors tended to involve sensory, motor, and affective aspects of our experience. For example, there is a dominant conceptual metaphor ‘Understanding Is Seeing’, in which we conceptualize abstract thinking as a process of seeing. From an evolutionary perspective, it makes sense that we might recruit our basic bodily (sensory-motor-affective-social) experience to think and reason about some abstract domain, such as mind, thought, will, knowledge, reasoning, etc. This led George and me to explore the ways in which human meaning, conceptualization, and reasoning are grounded in our bodily experience as organisms engaging our physical and social environments. We started pursuing this idea in the late 1970’s, in light of a substantial body of scientific research on how our bodies and brains give rise to meaning. Especially with the rise of cognitive neuroscience, with its new neuro-imaging techniques, it became possible to test hypotheses about the bodily sources of meaning that were originally arrived at through earlier forms of cognitive science research in linguistics, cognitive psychology, developmental psychology, and cultural analysis. The basic idea of the embodied cognition approach is that, through our bodies, we directly engage a meaningful environment, and the processes of that meaning-making are recruited for “higher” cognitive functions.
It’s a tough question about what “embodied cognition” means, but the basic idea is that we can only experience, think, value, and do what our bodies, brains, and environments allow us to experience, think, value, and do. Our bodies (and brains) evolved to recruit the structures of sensory perception, motor activity (moving body parts or our entire body), and emotion to understand and reason about what we think of as abstract concepts. There is a rapidly growing body of neuroscience evidence that we recruit these bodily dimensions of meaning for conceptualization and reasoning. However, over the past several years it has become clear that cognition and valuing cannot be limited only to the boundaries of skin and skull. There is no mind without our fleshy corpus, but that bodily mass is what it is and does what it does only in and through its relation to its surroundings. So now the field of embodied cognition has opened up to include a broader range of structures and processes that are not limited to our bodies proper.
One big move was to study how our “minds” incorporate aspects of our “external” environments as an essential part of our cognitive activity. This “extended mind” hypothesis has caused many to rethink their assumption that mind is only an interior phenomenon lodged in a brain-in-a-body. So, the “Body” now looks as though it cannot be specified independently of its engagement with and connection to aspects of our environment that transcend the limits of our physical bodies. Thus, there has emerged the “4E” view of mind as Embodied, Enactive, Embedded, and Extended. Consequently, there is no simple and obvious answer to what is meant by “embodied cognition”. We’re in the process of trying to figure this out, and it’s not clear where our reflections and experiments will lead us. The only thing that is perfectly clear is that traditional notions of disembodied mind, thought, and language are fundamentally inadequate and misleading. It also seems clear, at least to me, that, however much of our cognition is “off-loaded” onto aspects of our environment, whatever meaning there is must be enacted in and through a bodily organism.
This led you to become interested in cognitive neuroscience. Do you think that cognitive science can fully answer philosophical questions? Which ones?
Cognitive neuroscience has, and will have, a huge amount to tell us about how our bodies and brains work, and not just from a neural perspective, but more broadly from the point of view of ourselves as embodied, and irreducibly social, creatures in ongoing interactions with environments that are at once physical, interpersonal (social), and cultural. The challenge is to avoid reductionism that might impoverish our experience and leave out important qualitative and social dimensions of our experience. We are NOT our brains, but we do not exist without our brains operating bodies acting within the world. Neuroscience research can be used to criticize armchair philosophical claims that are incompatible with converging mind-science evidence. This research can also suggest hypotheses about mind, thought, language, and values that ought to be pursued. However, neuroscience can never be the whole story, nor can it replace philosophy. There will always be a need for a reflective philosophical posture that (1) allows us to recognize and critically evaluate the assumptions and limitations of any particular method in the sciences, and (2) provides the broader philosophical perspective to see the implications of various bodies of scientific research and to help us construct a picture of how all of these parts of our view of mind, thought, language, and values might hang together. Therefore, we ought to seek what Pat Churchland named the “coevolution of theories” – of sciences and philosophies mutually criticizing and/or reinforcing and interpreting each other.
Do you think that science can answer moral questions? How so?
This is the hardest question you’ve asked so far. Many philosophers have traditionally argued that science has little or nothing to do with morals. They believe this because they think that sciences describe phenomena, state facts (what is), and give abstract theoretical explanations, while morality is about norms that tell us what we ought to do. So, you supposedly can’t go from statements about what is (e.g., some people rape, oppress others, and lie), to prescriptions for how we ought to behave (e.g., we shouldn’t rape, oppress, or lie), because you can’t derive the ought from the is. Owen Flanagan has explained why this view is misleading. Yes, he says, it is true that we can’t derive an ought (a norm) from an is (a factual statement). But it does not follow from this that there can’t be a normative conclusion that incorporates scientifically derived information. He gives the example of the following reasoning about breakfast:
1. Breakfast that involves certain kinds of food groups, in certain amounts, prepared in a certain way, can give you important nutrients;
2. these nutrients are important for your health;
3. therefore, you ought to eat a good breakfast.
Now, obviously, (3) does not follow from (1) and (2) alone. To oversimplify, you only get the normative prescription (3) if you include a premise that you want to be healthy. We don’t get norms unless we start with some norms and values, and then we can employ scientific information that is relevant to our reasoning about what to do. But you must start with some values! So, the real issue becomes one of determining whether there really are some general values that contribute to human flourishing. The good news is that there is a huge and rapidly growing body of research on the types of values that emerge evolutionarily for creatures like us, from values as basic as continued existence to values as complex as wanting to live a meaningful life. And – this is crucial – there is no way to avoid the need for intelligent discussion of values and what supports them. We can have informed debates about which values might be universal while at the same time recognizing that there may be different ways to articulate and develop each of those values so that cultural differences appear. I think the cognitive science evidence indicates that there is no way to argue for one univocal set of values, or, at least, no way to argue that only one perspective is morally viable. We seem to be stuck with a plurality of values that could be realized in perhaps different ways within different cultures. So, our inquiries into morality require both mind science research and self-critical philosophical reflection engaging in ongoing dialogue.
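To make the role of that suppressed premise vivid, here is one way Flanagan’s breakfast argument might be schematized (a rough sketch in my own shorthand, not Flanagan’s notation: B for eating a good breakfast, N for getting the relevant nutrients, H for being healthy):

\[
\begin{aligned}
&(1)\ B \rightarrow N && \text{a good breakfast provides important nutrients}\\
&(2)\ N \rightarrow H && \text{those nutrients promote your health}\\
&(\ast)\ \mathrm{Value}(H) && \text{you want to be healthy (the suppressed evaluative premise)}\\
&\therefore\ \mathrm{Ought}(B) && \text{you ought to eat a good breakfast}
\end{aligned}
\]

Strike out \((\ast)\) and the inference stalls at a purely descriptive conclusion; the ought enters only through an antecedent value.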
You call your current theory “Morality for Humans”. Why?
In my book Morality for Humans: Ethical Understanding from the Perspective of Cognitive Science, I’m following the American Pragmatist philosopher John Dewey in claiming that humans are not what he called “little gods.” We are not autonomous sources of absolute values or principles. Cognitive science shows us that we do not have a perspective-free, universal grasp of absolute meanings, truths, or values. Rather, we are inescapably human — situated, embodied, finite creatures who understand our world through conceptual systems and practices tied to our embodied situations. That is why we are not little gods. That is also why moral fundamentalism – the belief that we have access to absolute moral truths – is cognitively bankrupt and in no way describes human moral cognition. Moral fundamentalism is both psychologically untenable and immoral, in that it closes off the very moral inquiry we so desperately need in confronting complex moral problems. So, when I say that we need a “morality for humans,” I mean that we need to employ the best current mind science of moral cognition and appraisal, the most well-grounded evolutionary view of human values and practices, and the most appropriately self-critical philosophical perspective available. We need to construct a view of human moral experience and reasoning that is compatible with what we are learning about moral cognition, intuitive appraisal, emotion, self-deception, moral development, and so forth, so that our moral orientation is fit for the kinds of creatures we are, not the kinds of little-god creatures we might aspire, impossibly, to be.
This is very similar to Socrates stating that his wisdom was nothing special, nothing other than human wisdom. But Socrates, I think, believed in absolutes (but realized that we humans might never be able to fully grasp them). Is this something like your theory?
Well, yes and no, but more no than yes. Yes, this moral wisdom is not something special. I see it, as Dewey did, as a form of ordinary human problem solving, and as an ongoing process that needs to be revised as we encounter new and changed situations throughout life. But I don’t buy the belief in absolutes, if those are understood as timeless moral prescriptions. In place of those, I put ideals, which are really only imagined possibilities of how a situation might develop under the guidance of certain values. However, this has to be an ongoing experimental process, not a formulation of some pre-given, ideal fixed state that we strive to realize. Dewey took a moral principle to be a summary of ideals and strategies that have usefully guided us, as a moral community, in past similar situations, but these are never absolutes, because new situations arise that introduce new conditions not present when the principle was formulated. Therefore, moral reasoning is experimental and always ongoing. This is the sense in which I appropriate a Socratic perspective. If you take away fixed moral absolutes, you still have Socrates’ beautiful notion that a life worth living is a life that subjects itself to ongoing critical reflection about what is good and how best to realize a better, more harmonious, more meaningful, and more caring world.
How do you avoid relativism? The rabid, unhelpful sort?
G.E. Moore once pointed out how we are always trapped by what he called the “open question” in moral reasoning. Someone can always ask, about any value we propose, why that value is good. So, either something is just good-in-itself, absolutely, or all good is merely relative and the open question will forever plague us. I’m convinced by the view, espoused by many folks who do the cognitive science of morals, that we cannot escape the fact that cultures have held a plurality of values and found multiple different ways to realize even the very same values (in which case the values are not really “the same”). However, like so many others (such as Pat Churchland, Robert Hinde, Jonathan Haidt, Philip Kitcher, Owen Flanagan, and Antonio Damasio), I think it is possible to provide a catalogue of some very general values that recur throughout civilizations, because of the nature of our bodies, the nature of our intimate interpersonal interactions, our needs for social interaction, and our desire for shared cultural meanings. You don’t get absolutes here, because there will always be a plurality of ways to articulate those generic values, but this certainly does not mean that “anything goes”. The problem is that we can’t see, from our present limited perspective, how situations will play out in the future relative to values pursued, so we can’t give knock-down arguments for a given morality once and for all. This doesn’t stop us from committing our lives to certain value systems, and from giving the best arguments we can muster for our views, or for the criticism of other views, but it does mean that we can never speak from a god’s-eye absolute point of view, but only as finite, frail, experimental human beings.
You mentioned earlier that philosophy should help us deal with the most pressing existential and moral concerns that we face in our daily lives. Could you say something about what you consider our most pressing concerns today?
This is such a difficult question, because there are so many very pressing problems plaguing us right now. As my father would have said, there are more problems confronting us “than you can shake a stick at.” Given what I’ve just discussed in response to the previous couple of questions, I will focus on just one of many profound problems, namely, the failure in many of our political leaders and in several of their constituencies to engage in self-critical, evidence-based problem solving. Even worse, they have rejected any need for self-criticism! There may not be absolute, eternally valid perspectives and answers, but we must not conclude from this fact that there are no appropriate critical perspectives. We should recoil, in moral revulsion, at the way some influential people will try to pass off anything they want to believe as if it were self-evident truth that supposedly requires no evidential support whatever. However, as Dewey argued, to say that a claim is “self-evident” is to say no more than that it is being taken for granted by someone or some community that currently has no motivation or desire to question it. Any reflective person ought to realize that intelligent inquiry starts with the willingness to scrutinize the very “truths” that have hitherto been unreflectively accepted as foundational. When politicians, scientists, and ordinary folks abandon, or even attack, critical inquiry, are we not right back in the very same problematic situation Socrates was trying to address when he insisted on the importance of rational, critical examination? We need good cognitive science and appropriate philosophical reflection now, more than ever, if we hope to have any chance of preserving a democratic attitude toward inquiry and toward our social and institutional arrangements.
Mark Johnson is Philip H. Knight Professor of Liberal Arts and Sciences in the Philosophy Department of the University of Oregon. His latest book is Morality for Humans: Ethical Understanding from the Perspective of Cognitive Science (University of Chicago Press, 2014).
John Kaag is a Professor of Philosophy at the University of Massachusetts Lowell and author of American Philosophy: A Love Story (Farrar, Straus and Giroux, 2016).