Do We Have a ‘Bias Bias’?

How rational are we? And if we are not rational, how could we tell, since we would have to rely on our reasoning to make that determination? Over the last few decades, psychologists have uncovered numerous ways that humans fail to live up to our own ideal of rationality. We are overconfident about our performance (‘Dunning-Kruger Effect’), we seek out information that confirms what we already believe instead of looking for what would challenge it (‘Confirmation Bias’), the theories we hold seem to influence what we observe (‘Theory-Ladenness of Observation’), and we think we are less biased than others (‘Bias Blind Spot’).

Worse still, many of these cognitive biases are supposed to influence scientists and experts. Charles Darwin recounts that he failed to notice the glacial effects on the landscape on his first trip to Wales, even though the evidence was staring him in the face. Why? Because he had not been taught glacial theory—his theory (or lack thereof) biased his observation. Today, doctors interpreting medical test results often fail to consider how common the disease is, a phenomenon known as the ‘Base-Rate Neglect Fallacy.’ These and many other systematic reasoning errors have been shown to affect scientists in the very reasoning they employ in their research.
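To see why neglecting the base rate matters, consider a purely illustrative example with made-up numbers (not drawn from any particular study). Suppose a disease affects 1 in 1,000 people, and a test detects it in 99% of those who have it but also returns a false positive for 5% of those who don't. Out of 1,000 people tested, roughly 1 genuine case will test positive, but so will about 50 healthy people, so a positive result corresponds to only about a 2% chance of actually having the disease. A doctor who ignores the base rate (the 1-in-1,000 prevalence) will vastly overestimate what a positive test shows.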

As many researchers have pointed out, some of these apparently irrational inferences can be quite reasonable in certain contexts. Sometimes the very same pattern of inference counts as a bias in some contexts and a heuristic in others, where a heuristic is often defined as a shortcut in reasoning—a quick and dirty rule of thumb. Think of confirmation bias again. Constantly seeking to refute one’s own hypotheses can be debilitating, so a tendency toward confirmation rather than disconfirmation can be healthy in many contexts, even if it leads us astray in others. The same pattern is at once a bias and a heuristic, depending on the context in which it is used.

The research on heuristics and biases has by now posited hundreds of such reasoning biases, ranging from the “confirmation bias” and the “conjunction fallacy” to the “cheerleader effect” (the supposed tendency for people to appear more attractive to us in a group than in isolation) and the “IKEA effect” (the tendency to place a disproportionately high value on objects we have partially assembled ourselves). At the time of writing, the Wikipedia entry “List of cognitive biases” enumerates over 175 distinct cognitive biases.

This proliferation of biases raises a question: are we overly prone to attributing biases to ourselves? When we see a pattern of reasoning error in humans, do we have a systematic tendency to posit a new bias, even if there might be alternative explanations? Given the proliferation of biases and the fact that more are ‘discovered’ each year, we seem to have strong evidence that we are biased toward explaining failures of human reasoning by positing biases. Let’s call this the ‘Bias Bias’.

But positing a Bias Bias seems to lead us into paradox. To explain why, consider again the alleged confirmation bias mentioned earlier. Suppose psychologists or cognitive scientists are interested in determining whether we have a confirmation bias. Now suppose that they conduct an experiment that purports to demonstrate that human reasoners do indeed have a confirmation bias. Imagine that one of the researchers working on the project, call her the “pesky post-doc,” raises the following inconvenient possibility. She reasons that if humans have a widespread tendency to confirm rather than refute their hypotheses, and that tendency afflicts scientists and non-scientists alike, then members of her research team will also be affected. This means that they should doubt their own results, given that they will have a tendency to confirm rather than refute their initial hypothesis.

The very same evidence that leads them to posit a confirmation bias is also evidence that casts doubt on their conclusion! However, suppose that another member of the research team, call her the “pragmatic professor,” responds: “Wait, if we are rejecting our research because we were subject to the confirmation bias, then the confirmation bias must exist, since we were subject to it!”

The pesky post-doc’s and the pragmatic professor’s reasoning both seem sound, but they lead to a paradoxical conclusion. The pesky post-doc reasons that if the research team is right about the existence of the bias, then they have reason to reject their research (and so the existence of the bias too), but the pragmatic professor reasons that rejecting the research on these grounds concedes that the bias does exist. If they’re right, they’re wrong; but if they’re wrong, they’re right. What are we to say?

Maybe we can resist the paradoxical conclusion. One way is to distinguish the context of the experiment from the context of the inquirers. If one can show that confirmation bias afflicts primarily laypersons or those who are reasoning in experimental conditions, then one might be able to argue that the researchers themselves are not subject to that bias.

In many cases, this strategy works well: in a scientific context researchers can ensure that they are avoiding the bias they are studying. However, it doesn’t seem like we can do that when we posit a Bias Bias. Anyone positing a Bias Bias is likely doing so with reference to an established body of scientific research—not just lay thinking. In other words, it is the reasoning of experts that is at issue. So any experts who come up with such a critique would need to demonstrate that they themselves are not victims of the same bias when they attribute it to others. Why are some experts subject to the bias and not others? What makes the context of the researchers being criticized different from that of their critics?

Early critics of the heuristics and biases research program cast doubt on its very cogency. They questioned the very possibility of using our own frail, limited, and flawed reasoning capacities to test for biases in those same capacities. According to them, the very idea was incoherent: it would be like using a scale to weigh itself—the self-reflexivity makes it impossible. That seems to be an overreaction, since human reasoning is not a monolith, and we can certainly use some aspects of human reasoning to examine other aspects without raising problems of self-reflexivity. But in some cases we do seem to be led into a self-reflexive paradox, in particular when it comes to positing a Bias Bias, as we have argued in a recent journal article.

At least at first sight, given the multiplication of biases in recent scholarly research and in popular culture, it seems reasonable to conjecture that humans are subject to a Bias Bias. Yet positing such a thing seems to lead us into an inevitable paradox. What looked on the face of it like a reasonable hypothesis turns out to be a claim we cannot make without contradiction.

Joshua Mugg

Joshua Mugg is an Assistant Professor at Park University in Kansas City where he coordinates Philosophy, Religion, and Interdisciplinary Studies. He works in philosophy of psychology, mind, and religion, primarily studying cognitive architecture, belief, and faith.

Muhammad Ali Khalidi

Muhammad Ali Khalidi is Presidential Professor of Philosophy at CUNY Graduate Center. He is currently working on a book titled Cognitive Ontology: Taxonomic Practices in the Mind-Brain Sciences, under contract with Cambridge University Press.
