The purpose of the early-career research spotlight is two-fold. First, the aim is to bring attention to an early-career APA member who is doing some interesting research. Second, the hope is to generate discussion about the spotlighted work. Feel free to ask our spotlighted researcher questions pertaining to the work discussed in the post. Comments must conform to our community guidelines and comment policy.
This installment of the early career research spotlight focuses on David Ripley, who works in the field of logic and language. He is an Assistant Professor at the University of Connecticut, a member of the UConn Logic Group and the Australasian Association for Logic, and an editor for the Australasian Journal of Logic and the Review of Symbolic Logic. A lot of his recent work has looked at how the field of symbolic logic can help us to understand the nature of truth, such as his papers “Comparing Substructural Theories of Truth” and “Reaching Transparent Truth”. Ripley received his PhD from the University of North Carolina, Chapel Hill in 2009, and has since been employed at the Institut Jean Nicod at the École Normale Supérieure and the University of Melbourne. You can find out more about his work at his website.
Nathan: Thanks for agreeing to talk, David. Much of your work focuses on paradoxes of various sorts. What draws you to thinking about paradoxes?
David: I’ve worked on both paradoxes involving truth and its relatives, like the liar paradox, and paradoxes involving vagueness. There are lots of ways to look at these paradoxes, and lots of puzzles generated by them, but the one I’m most interested in is this one: do our words mean what they seem to mean, and if so, how can this be?
Consider the liar paradox: using just a few basic principles involving truth and negation, we can generate a proof of any sentence you like from the mere existence of a sentence like ‘This very sentence is not true’. Since sentences like that one clearly do exist, it’s tempting to think that some of those basic principles about truth and negation can’t really hold. The trouble is that these principles are so basic that they’ve often been taken to be necessary for ‘true’ and ‘not’ to mean what they seem to mean. (The issue with vagueness is parallel: again, we have arguments leading to unacceptable conclusions that work by way of principles seemingly guaranteed by the meanings of vague words.)
Here there’s a choice point in the literature on these paradoxes: some people think that the principles involving truth are non-negotiable, and so adopt a non-obvious theory of negation, while other people think that the principles involving negation are non-negotiable, and so adopt a non-obvious theory of truth. There’s lots of interesting material that’s been developed in service of both kinds of approach. But either way you go, you end up ruling out a certain package deal: the package that says that truth and negation really do mean just what they appear to mean, and that this really does suffice for them to obey the basic principles involved in the paradoxical argument.
To hang on to this package, we need to locate the trouble elsewhere. Where I and others think it’s located is not in the basic principles themselves, but in the way they get put together in the paradoxical arguments. This is, very roughly, what makes an approach substructural. There are more choice points here: just which aspect of the way they’ve been combined creates the problem? Much of my work focuses on these narrow questions. But my excitement attaches to the overall family here; I’m convinced that these substructural approaches are where the most exciting current work on paradoxes is happening. (Because of the current flourishing in this area, it can be tempting to think the idea itself is a new one; this would be a mistake, though. For example, “Fitch-style natural deduction”, at least in Fitch’s own development of it, had a substructural solution to paradox baked into it already in 1952.)
Nathan: In a number of papers and presentations, you’ve expressed a preference for “nontransitivity” over “noncontraction”. Can you explain what these terms mean, and how they play into your work?
David: Sure! This gets to the narrower choice point I was talking about. Suppose we’re willing to accept plausible-seeming principles involving truth, negation, vague predicates, or whatever, and that we want to block the paradoxical arguments by questioning how these principles come together. Then we need to think about the structural moves that these arguments involve.
One thing that turns out to matter a great deal is the number of times a premise in an argument can be appealed to. If we pay attention to how many times each premise in an argument is actually used, some striking patterns emerge. In particular, all the familiar paradoxical arguments involve assuming something once, but then drawing on that assumption multiple times. You can imagine a discipline that disallows this, that says if you only assume something once, you can only draw on that assumption once. It turns out that, properly executed, such a discipline can in fact block the paradoxical arguments. This kind of approach is called “noncontractive”. (“Contraction” is the principle that allows multiple appeals to something that is only assumed once.)
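To make this concrete, here is a rough sketch of the liar reasoning, laid out so the reuse of the assumption is visible. (The details differ across formal presentations; this is just the familiar shape of the argument.)

```latex
Let $\lambda$ be a sentence equivalent to $\neg T\langle\lambda\rangle$
(``this very sentence is not true''), and suppose the transparency
principle: $T\langle\lambda\rangle$ is interchangeable with $\lambda$.
\begin{enumerate}
  \item Suppose $T\langle\lambda\rangle$. \hfill (assumed once)
  \item Then $\lambda$. \hfill (transparency, from 1)
  \item Then $\neg T\langle\lambda\rangle$. \hfill (what $\lambda$ says, from 2)
  \item Contradiction. \hfill (from 1 and 3: the assumption in 1 is drawn on a second time)
  \item So $\neg T\langle\lambda\rangle$. \hfill (reductio, discharging 1)
  \item So $\lambda$, hence $T\langle\lambda\rangle$. \hfill (as in 2--3, run in reverse)
  \item Contradiction with 5; and from a contradiction, anything follows.
\end{enumerate}
```

A noncontractive discipline balks at step 4: the assumption was made once, at step 1, and it was already drawn on at step 2, so it is not available to be drawn on again.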
A different kind of thing that also matters is the way arguments are chained together. We tend to think that if A entails B, and B entails C, then A must entail C. But strikingly enough, this is a principle that does no work at all in a lot of logical settings; while it happens to hold, nothing relies on it. (This is one way to read what are called “cut elimination” theorems.) And that means that there is room for logics that are very much like the familiar logics we know and love, but where this principle does not always hold. Since it’s not bearing weight, we can change it without too many ramifications. It turns out that this can provide a different way to block the paradoxical arguments. This kind of approach is called “nontransitive”.
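Here is one simplified way to picture that in the vagueness case. (This is a cartoon of the idea, not a presentation of any particular formal system.)

```latex
Let $c_1, \dots, c_{100}$ be a series of color patches shading from clear
blue to clear green, and let $B$ be ``is blue''. A tolerance principle makes
each single step look good:
\[
  B(c_1) \vDash B(c_2), \qquad B(c_2) \vDash B(c_3), \qquad \dots, \qquad
  B(c_{99}) \vDash B(c_{100}).
\]
In a nontransitive setting, each of these entailments can be valid while
the chained conclusion fails:
\[
  B(c_1) \nvDash B(c_{100}).
\]
```

The sorites argument needs to string the individual steps together, and that stringing-together is exactly what gets given up.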
To the extent that I have a settled view here, I think a particular kind of nontransitive approach is the way to go. But I’m fascinated by noncontractive approaches as well, and have tried to make my work useful to noncontractive projects even when I’m making trouble for them.
Nathan: You’ve also done some experiments, particularly on vagueness. How do you think these kinds of logical approaches fit together with experimental work?
David: When I started to become fascinated with these paradoxes, part of what drew me in was that there seemed to be no good response to them, no plausible story available about how language works. Over time, though, I became convinced of just the reverse: the trouble isn’t having no good response, it’s having too many good responses! There are lots of plausible stories about how language works that have been developed in response to paradoxes. The trouble is figuring out which of them, if any, are not only plausible but actually right. And this requires having some sense of how speakers actually use their language. That’s where experiments can fruitfully come in, and where I’ve tried to bring them in.
On a more personal level, I think I’m drawn to logic and to experimental work for largely the same reason: they give me some kind of external check on my theorizing, some extra tool for finding out when I’m wrong. They also make for really rewarding experiences, for me mostly based around surprise. I’m much more frequently surprised by my own proofs or my own experiments than I am by my own arguments.
For example, I worked with Paul Egré and Vincent de Gardelle to run an experiment in which we looked at people’s color judgments as we walked them through a “forced march” sorites. In one condition, we asked participants to judge whether each of a series of colors was blue, one at a time, as the colors slowly shifted from blue to green. In another, we asked them to judge the same series, but in the other order: from green to blue. Of course, people say the bluest colors are blue, and that the greenest colors aren’t; no surprises there. We were interested in what effect the order of the series would have on when they switched from one answer to another.
Our expectation, which I still think was totally reasonable, was that they would be reluctant to change their answer from one color to the next. Since the difference between any two adjacent colors was so small, we figured they wouldn’t change from one answer to the other until they felt they really had to. So we expected them to say the middle colors were blue if they were moving from blue to green, and to say that the middle colors were not blue if they were moving from green to blue. What we found, though, was the exact opposite. Our participants were eager to change, not reluctant. That’s the kind of surprise that I find really rewarding, and for me both experimental work and logical work are full of that kind of thing.
Nathan: Since many philosophers throughout the discipline don’t get directly involved in experimentation, it is fascinating to hear from someone who does that sort of work. How do you think this changes the character of your research compared to other philosophers? What about other scientists who conduct experiments?
David: I suppose I don’t think it changes the character of my research much at all, actually. I think experimental work is totally continuous with the usual methods of philosophical logic in at least some areas, surely including vagueness, where much of my experimental work has focused. One way to see this is to look at the method of considering what we would or wouldn’t say, or hold to be true, in various circumstances. This is a widespread and established methodology in the philosophical and logical literature on vagueness, and I think appropriately so: one of the key goals of this literature is to gain a clearer understanding of how our language actually works, and speaker responses are a prime source of data for an inquiry like this. But it is not only philosophical logicians who use vague language expertly; just about all speakers do. So I take much of the experimental data I’ve gathered on vagueness to be theoretically relevant in the same way that theorists’ own self-reported judgments are. As a result, even if I weren’t doing experiments in this area I think I’d be dealing with pretty much the same issues in pretty much the same ways.
Looking at experimental scientists more broadly, the main thing that stands out to me is how much better at experiments they are than me! Constructing and interpreting experiments is only part of what I do, and I only came to it partway through grad school. So I’ve got a lot less experience than even many philosophers, let alone experimentalists in other disciplines. I try to account for this by keeping things simple, and by collaborating with more experienced folks when I can.
Nathan: Talk about some of the real-world applications of your work. How can the research you’re doing help society to develop in new and better ways?
David: One thing that’s been drawing my attention for the past few years is conflation. I first came to it through my research on nontransitive logics: one way for A to entail B and B to entail C while A doesn’t entail C is for B to conceal an equivocation. In fact, this is probably the most familiar way for something like this to happen. So conflation is a topic of some interest to me.
I’ve come to think that conflation forms an important component of at least some dogwhistling, in the political-discourse sense. The best-known theories of dogwhistling treat it as a kind of coded message, where only some members of the dogwhistling orator’s audience can decode the hidden message. But I think these theories only fit some cases. For example, consider a hypothetical politician publicly questioning whether Barack Obama was born in the US; this is the kind of thing often picked out as an example of dogwhistling. But what’s going on here?
In this kind of case, I think the best way to see what’s happening is as both stoking and exploiting a particular kind of conflation in the politician’s audience. A large number of white Americans conflate being American with being white and American. It’s not that they use “American” as some kind of code for “white American”; it’s that they at least sometimes neglect the difference between these categories. This particular conflation is a useful one for a wide range of politicians, who can play on it to various ends.
I’m currently working on research, then, into the way conflation plays into dogwhistling, supported by the Public Discourse Project here at UConn. My focus in this research is primarily on understanding just what’s happening in these cases, but I certainly hope that such an improved understanding could help us devise strategies for effectively pushing back against dogwhistling in the public sphere.
Nathan: Your teaching includes a lot of logic courses. What is the primary difficulty you face in introducing students to the field of logic, and how do you work to overcome it?
David: My overall experience of logic is largely one of playful and expressive building; I see logic as a particularly wild and free medium to work in. The main difficulty I run into in teaching logic is in helping students see just how open logic really can be. The trouble is a predictable one, I think: they need examples to see what’s going on at all, and any particular example of a logical system will be governed by very specific and nonnegotiable rules. It’s easy for them to mistake the rules of whatever example system they’re looking at for the boundaries of logic itself, and that’s the mistake that breaks my heart; it often means they’ll walk away thinking that logic is about dry obedience.
I try to push back against this tendency by approaching logic courses, particularly introductory logic courses, from a perspective that emphasizes breadth over depth. If you learn one thing, you might think that’s everything. But if you learn, say, three things, then you’re expecting there to be a fourth; you know you don’t know everything. In practice, I still struggle with students seeing walls where there are none, but I think this at least helps to alleviate the problem.
This breadth-first approach also seems to prevent a problem that I take to be common in introductory logic courses. Many of these courses develop a strong divide between the students who “get it” and the students who “don’t”, and this is often reflected in the class grade distribution, which can go Bactrian (two-humped) rather than dromedary (one-humped). Designing a logic course that gives a more accurate sense of the breadth of the field allows different students to excel at different kinds of task, and in my experience brings grade distributions much more in line with the sort of thing we often get in philosophy.
Please feel free to ask David questions about his work in the comment thread.
If you know of an early-career researcher doing interesting work, nominate them for our research spotlight series through the submission form here. Our goal is to cover early-career research from a broad array of philosophical areas and perspectives, reflecting the variety of work being done by APA members.