
Threading the Needle: Can We Respect Local Knowledge While Resisting Misinformation?


It’s common knowledge that we are awash in misinformation that can have severe negative consequences for society. When people hold false beliefs about the safety of vaccines, the outcomes of elections, or the causes of climate change, it is much more difficult for them to make responsible decisions on behalf of their families and communities. It is tempting to respond to this challenge by insisting that expert scientists know best and to dismiss those who challenge the experts.

Scholarship by philosophers, historians, and sociologists of science suggests that this solution is too simple. There are numerous cases where nonspecialists have drawn on “local knowledge” gleaned from their own life experiences in order to challenge erroneous or misleading claims made by expert scientists. This sometimes occurs when community members identify threats from environmental pollution that experts initially dismiss, as in Flint, Michigan, or Woburn, Massachusetts. It can also happen when patients with poorly understood diseases question how medical experts diagnose, treat, or conceptualize their illnesses.

In some cases, the dismissal of nonspecialist perspectives could even qualify as a form of epistemic injustice, in which people are wronged in their capacity as knowers. This can happen when someone’s knowledge claims are not given the credibility they deserve because of prejudice against them. This could occur, for example, if the testimony of women or disabled patients is summarily dismissed by medical professionals, or if the concerns raised by marginalized ethnic or racial groups about environmental pollution in their communities are ignored by public health experts.

Unfortunately, it is often difficult to tell whether experts are inappropriately dismissing nonspecialist claims. How does one decide when nonspecialists are making a legitimate point and when they are simply confused? I have argued that we can navigate this challenge more successfully by drawing on recent scholarship in the philosophy of science that explores how scientists make value-laden choices in the course of their research. I refer to these choices as “value-laden” because they have consequences for society but cannot be settled by appealing to evidence and logic alone. For example, when researchers study a phenomenon like attention-deficit hyperactivity disorder (ADHD), they have to decide whether to focus on the causes of ADHD or on potential interventions for responding to it. If they focus on interventions, they have to decide whether to pursue pharmaceutical or non-pharmaceutical approaches. When they study these interventions, they have to decide how best to design their studies and interpret the findings. Finally, they have to decide when their findings are strong enough to justify public recommendations about how to respond to ADHD. All these choices have social impacts, and reasonable scientists can disagree about how to handle them because they are not settled by evidence alone.

I think we can make better sense of the perspectives of nonspecialists—and therefore avoid dismissing their legitimate claims—by examining whether they are making these sorts of value-laden choices differently from experts. Consider three kinds of choices that philosophers of science have emphasized: (1) research questions and framing, (2) background assumptions, and (3) standards of evidence. Regarding the first kind of choice, philosopher Hugh Lacey has explored the ways scientists’ choices about which research questions to ask and how to frame their investigations are value-laden. For example, he argues that most agricultural science tends to be guided by the general question, “How can we develop crops that have the greatest output?” He notes that asking this question tends to serve the values of large agricultural growers and companies. He points out that the values of many small-scale producers around the world might be better served by asking questions like, “How can we use agriculture to reduce hunger and poverty and promote environmentally sustainable development?”

In some cases, nonspecialists might approach problems differently from expert scientists because they are asking different questions. For example, philosopher Daniel J. Hicks has argued that, sometimes, when activists arguing against genetically modified organisms (GMOs) appear to be ignoring expert research about their safety, they may actually be more interested in questions about their economic and social impacts. Similarly, philosopher Maya J. Goldenberg contends that most public health experts who make claims about vaccine safety are focused primarily on their overall costs and benefits for society. She argues that some parents are unconvinced by experts’ general assurances of safety because they are worried that particular vaccines might pose significant risks to their children based on their unique characteristics. The parents might accept that the overall costs and benefits of vaccines are favorable for society as a whole, but they might doubt that the experts have adequately studied the risks of vaccines in all subpopulations.

Consider also the second kind of choice that philosophers of science commonly discuss: background assumptions. Philosopher Helen Longino famously argued that data become evidence for specific conclusions only when they are accompanied by background assumptions. For example, these background assumptions could specify which instruments are reliable, which experiments are relevant, and what confounding factors need to be avoided.

Background assumptions can also contribute to disagreements between specialists and nonspecialists. For example, sociologist Gwen Ottinger describes how communities living near industrial facilities in Louisiana have struggled to convince regulators to take their concerns about air pollution seriously. This is partly because of a difference in background assumptions: according to Ottinger, the regulators assume that they should focus on average pollution levels over an extended period of time (say, 24 hours or more), whereas community members argue that they sometimes experience lasting health effects from short-term spikes in pollution over much shorter periods of time.

A third kind of value-laden choice involves standards of evidence, meaning the amount of evidence that someone requires in order to accept a conclusion. For example, sociologist Steven Epstein points out that many AIDS activists criticized the U.S. Food and Drug Administration (FDA) in the 1980s and 1990s for being too slow to approve new drugs. The activists felt that the FDA demanded too much evidence before declaring drugs safe and effective, especially considering that AIDS patients were willing to take risks because they were likely to die otherwise. Philosopher Heather E. Douglas argues that it makes sense for different groups to demand different standards of evidence, given how their different values impact their assessment of the costs of an incorrect conclusion.

Philosophers of science have emphasized that these kinds of choices (i.e., decisions about research questions and framing, background assumptions, and standards of evidence) can reasonably be handled in different ways. Therefore, when nonspecialists differ from specialists because they are making these choices differently, it provides an opportunity to better understand and respect their perspectives. This does not automatically mean that the nonspecialists are correct, of course. Vaccine-hesitant parents, for example, might be asking a question that experts have already addressed: the experts may have already evaluated risks to children just like theirs and found them to be insignificant. Or the background assumptions accepted by nonspecialists might be highly implausible compared to those accepted by the specialist scientific community.

Nonetheless, even in cases where nonspecialists make implausible judgments, clarification of their differing choices can still foster greater understanding and richer dialogues between specialists and nonspecialists. By clarifying these choices, philosophers of science can help nonspecialists communicate more effectively about why they disagree with specialists, and they can help specialists interpret the perspectives of nonspecialists more sympathetically. In some cases, specialists might even change their minds. For example, AIDS activists ultimately convinced the FDA to adopt an expedited approval process for some drugs, and they altered the ways some clinical trials were designed.

Admittedly, not all cases will turn out as well as the AIDS case. There will be others in which those who question mainstream scientific views are simply misinformed or operating in bad faith. But in societies characterized by distrust, polarization, and conflict, we need to explore ways to promote dialogue and mutual understanding rather than summarily dismissing those who disagree with us. The philosophy of science can help with this task.

Kevin Elliott

Kevin Elliott is a Red Cedar Distinguished Professor in Lyman Briggs College, the Department of Fisheries and Wildlife, and the Department of Philosophy at Michigan State University. His research focuses on the philosophy of science and practical ethics, with an emphasis on the roles that ethical and social values play in scientific research, particularly in the environmental health sciences. His books include Values in Science (Cambridge University Press, 2022) and A Tapestry of Values: An Introduction to Values in Science (Oxford University Press, 2017).
