
Missing and Diverging Information in COVID-19: Complex Causes, Built Environments, and Public Engagement

Information changes, and, often, divergence is involved in the process—perhaps as a catalyst for more adequate information, or maybe just as fuel for the misinformation conflagration.

March 2020: Mask-less and uncertain in a crowd—to wear or not to wear?;

April 2020: The ‘32-foot rule’? A tweeted simulation leads to disagreement about aerosol transmission risk as well as the risks of media promotion of pre-peer-reviewed public science posts;

May 2020: The CDC and WHO offer conflicting mask recommendations;

July 2020: Continuously looping the aerosol droplet video (can no longer see breath, but only particles arranged breath-wise).

These perspectival snapshots offer a mixture of information uncertainty, disagreement, contradiction, contextualism, correction, and idealization (perhaps even sensationalism). In the middle of an unfolding phenomenon, it is challenging to adequately represent complicated “things”—e.g., SARS-CoV-2 transmission. Likewise, it is challenging to represent information complexity about complicated things. It sounds unnecessarily “meta”, but it is actually quite practical. There are a number of methodological issues pertaining to scientific information-gathering that require adequate characterization. We focus on at least three: missing information, diverging information, and the source of information.

The aforementioned snapshots about SARS-CoV-2/COVID-19 information constitute a small window, focusing on limited system variables: particles and distance. But the pandemic is an unbounded, incompressible phenomenon, composed of intersecting processes, requiring overlaid lenses. If our scientific practices are always catching up to the complexity, how much missing information is (not) out there? A better question is, in what way can we change our scientific practices to more adequately inform? Elsewhere, we have argued that critical information-gathering methods are missing in COVID-related modeling and measurement practices—for instance, in the investigation of the causal roles of the built environment, structural racism, and “unacknowledged causal interactions”. We have argued that there is a need to expand the scope of scientific information-gathering to focus on complex interactions between biotic, social, built, and health processes. 

One might counter that scientists are already doing this: measurements are combined; information is synthesized; and larger models of causal interactions are developed. But the initial investigative steps fail to represent important parameters. The implication is that doing more of that kind of measurement will just continue to produce an amalgam of limited information, leading to inadequate theory-building activities. For instance, Brandt et al. (2020) have criticized the large volumes of homogenized built environment pollution data and the lack of investigative work on how structural racism produces disparities in air pollution exposure and COVID-19 morbidity and mortality. 

Multiple agreeing perspectives are not intrinsically informative. In fact, sometimes the contrast between multiple agreeing sources and a single, disagreeing outlier can be informative. Multiple studies have reported decreased global pollution levels and improved mortality benefits due to air pollution reduction. The convergence seems reassuring about decreases in pollution-related health outcomes. But these converging studies are uninformative about who carries the pollution burden and how industrial pollution increases are often masked by car emission decreases. Disagreeing perspectives emerge to call for new types of information-gathering about accumulated lifetime exposure due to environmental racism, in order to examine if temporary decreases in pollution have any effect on health disparities.  

This brings us to a key point: disagreement between information sources can be methodologically useful. In his discussion of ‘robustness analysis’, biologist Richard Levins (1966) says, “Our truth is the intersection of independent lies”. The focus of robustness analysis has been on the methodological usefulness of converging ‘modes’ of information (e.g., models and measurements). While much of the focus has been on what we can learn from convergence, some have written about the usefulness of divergence for causal tracking—e.g., in Alzheimer’s. The method is that multiple independent modes of information (at many scales) can be used to cross-check each other and locate error-sources. The “truth at the intersections” is important, but so are the differences between the lies. 

For instance, the Surviving Sepsis Campaign, the National Institutes of Health (NIH), and the Infectious Diseases Society of America (IDSA) converged on treatment guidelines for COVID-19, while the World Health Organization (WHO) diverged from IDSA and NIH recommendations. A main point of disagreement (remdesivir) may now seem outdated, but the process of disagreement is still relevant. There were diverging results between the NIH-sponsored Adaptive COVID-19 Treatment Trial (ACTT) and the WHO-sponsored Solidarity trial. The latter seemed to “refute” the former. But according to Maves (2021): “I suggest that we should consider not whether one trial or one guideline is ‘false’ but rather how they all may be true”. Still, the contradiction is quite startling—suggesting shorter vs. longer recovery with remdesivir. Can both really be true? Maves qualifies, “The answer lies in the different information obtained from homogeneous vs heterogeneous settings, as well as what we may learn from blinded randomized controlled trials (RCTs) vs large pragmatic trials”. In other words, both the contrasting clinical contexts and the contrasting methods can be cross-checked in order to figure out why the results disagree. For instance, is there equitable access to resources and care within and across clinical contexts? Diverging results can be used to make sense of why and how disagreement occurs; but they can also be used to suggest new avenues for information-gathering—that is, to suggest methods, protocols, and information sources.


Earlier, we discussed the development of new information-gathering methods that seek to acknowledge a larger context of shifting and overlapping complexities. Side-by-side with the development of new methods, a critical question emerges about sources and agency in information-gathering: Who measures? In the philosophy of science perspectivism literature, the “user” decides the respects and degrees of information-gathering. Implicitly, the “user” refers to the experimenter or theoretician. But can we reframe the decision-making behind scientific “use” as an active process that also includes public and community autonomy in information-gathering? We argue that we must. New frameworks for public data-engagement have been proposed, where the focus is on structured, iterative collaborations that engage community members in scientific practice. We have argued that the importance of developing public data practices, in COVID-19 and beyond, lies in promoting autonomy in information-gathering, curation, reporting, and iterative cross-checking. Along with promoting autonomy, collaborative public frameworks can also promote the development of more equitable measurement practices that, in turn, inform more equitable, comprehensive, and contextually-sensitive decision-making. It isn’t just the correction of measurement contents that we are after; it is also the correction of the information-gathering methodology behind the contents, and what results from correcting that methodology. When data-gathering methodologies and relevant parameters are determined by a homogeneous ‘outside’ body, the result is a surface-level, idealized exploration.

Hannah Howland

Hannah Howland is an architectural designer at PYATOK | architecture + urban design. Her focus is on applied research in design—including how to use systems biology to re-conceptualize ‘biomimicry’, generate multi-process COVID-19 built environment interventions, and create more equitable design practices and solutions.

Vadim Keyser

Vadim Keyser is a Provost's Award-winning faculty member at California State University, Fresno and liaison between the College of Arts and Humanities and the College of Science and Mathematics (Philosophy Department; Biotech M.A. Program; and Cognitive Science Program). His research focus is on applied scientific methodology, modeling, technoscience, and normative issues in scientific practice. (drvkeyser.com)

