According to the World Health Organization, the global biosecurity threat is increasing. One reason is that genetic editing and synthesis technologies are advancing and becoming more widely accessible. Another is the continuing characterization of the genetic makeup that would make a pathogen especially lethal, exceptionally communicable, and otherwise dangerous to human populations.
Almost no country regulates biological threats adequately. Unless we change course, we may sleepwalk into a reality in which affordable gene-editing and synthesis tools (e.g., benchtop DNA printers) can combine long DNA fragments available for purchase into dangerous pathogens, or create such pathogens from scratch. On one estimate, tens of thousands of people worldwide will, in coming years, be able to operate accessible machinery that can create new variants.
Most people do not try to kill and injure multiple other people, but some do. Currently, the latter shoot or drive cars into crowds, mail strangers anthrax spores, or deliberately spread Omicron or HIV through unprotected contact. Among the tens of thousands who will have command of gene-editing or synthesis tools, an individual who seeks to kill and injure multiple people might be able to do something far more destructive. If and when instructions for doing so become available, such an individual could use these tools to concoct an airborne virus that combines anthrax’s lethality, Omicron’s communicability, many emerging pathogens’ resistance to our natural immunity, and HIV’s presymptomatic spread, which complicates early detection. If they then release that variant in multiple locations (along with other mutants), or it leaks out of their lab, it is far from clear that a vaccine for each mutant could be developed and deployed at scale fast enough to stop the attack. Mortality from the resulting outbreak, or repeated outbreaks, could dwarf that of COVID-19. Attacks could prove catastrophic if large language models are used to develop especially bad mutants (thankfully, states may be persuadable not to develop the very worst bioweapons).
Unfortunately, we are inadvertently developing and promulgating such instruction books. Scientists are making progress towards creating “mirror bacteria”, synthetic organisms whose molecular structure is the mirror image of that found in nature. If mirror organisms are produced and then intentionally or accidentally released, humans and their ecosystems might lack any immunity to them. Other scientists, seeking to fight COVID-19, are using AI to characterize the genetic features that make SARS-CoV-2 variants likelier to spread fast. But according to an independent review, their research carries a concerning “dual-use” potential. If individuals with destructive intentions combined their findings with more benign, widely available AI tools, they could discover which tweaks to that virus, or to more lethal viruses, would make them spread faster.
How high a risk of thereby indirectly enabling such a future individual to start a pandemic of a given nature and scale is enough for biologists to suppress some of their findings or even to abort a study altogether? How high a risk should incline oversight bodies, academic institutions, professional societies, funders, and journal editors to compel them to do so?
Usually, such decisions and the policies surrounding them are left primarily to the researchers who conduct such studies, who are perceived as the chief experts or stakeholders. However, industry and academic researchers whose livelihoods and research agendas depend on this work are conflicted. Perhaps unsurprisingly, virologists, some of whom enhance pathogens to pandemic potential in so-called “gain-of-function research of concern,” are an especially strong voice for such research despite its dual-use potential and the direct risk of a lab leak. Diverse expertise is needed from the many disciplines with a bearing on these decisions, in which all of us are stakeholders. Perhaps wider inputs would already have done away with our current “tunnel vision” of devoting nearly all pandemic-preparedness resources to natural emerging infections, which are awful but at least not engineered for an especially destructive blend of traits.
Philosophers, in particular, are well-positioned to make progress on several related questions, as the rest of this post points out.
Values
First, when biological research has concerning potential for dual use or a dangerous lab leak, it is tempting to deem “incomparable” the potential disvalue of any resulting wide-scale human mortality and morbidity and the potential value of the scientific knowledge gained (setting aside technological advances). Arguably, however, this is a good illustration that things that are apples and oranges in one way are easily comparable in others. Intuitively, no gain in scientific knowledge is worth a string of particularly bad pandemics.
Comparing a few apples to heaps and heaps of juicy oranges is thus a somewhat closer metaphor, but even that falls short. In a particularly bad scenario, the risks involved may be said to include human extinction. Because extinction would prevent all later generations of happy humans and happy digital minds from coming into existence, strong longtermists tend to assign the highest priority to preventing it. Other philosophers question whether we are obligated to create happy beings, however. And even setting that aside, the pandemics we are talking about would probably not, in the first instance, extinguish the species. Some remote communities are not in regular contact with other humans and would initially be spared.
Still, in a worst-case scenario, the resulting engineered viruses would kill many more people than the world wars did. They could also destroy our technological capacity for transhuman development for an extended period. And the destruction of enough institutions and capabilities could lead to extinction later, so such research carries a low probability of (delayed) human extinction after all. The question of how to govern such research may, therefore, come down primarily to probabilities: what are the chances of pandemics of different profiles, and what are the prospects of various valuable discoveries?
Probabilities
Longtermists who are “fanatics” do not tolerate even a relatively low probability of extinction. Their thought is that extinction would thwart the value added by an astronomical number of potential later happy human and transhuman lives over a great many generations, which, for them, settles the matter. Increasing the species’ chance of survival, even by a tiny bit, commands a higher priority than all other considerations.
In our area, this may initially seem to translate into the question of when a remote potential for dual use or a lab leak warrants forgoing highly valuable pursuits. After all, many valuable medical advances have come from research with such potential. Among the general-use technologies that can boost bioweapons are gene-editing tools, artificial intelligence, computers, and pipettes. A philosopher may wonder whether we should have avoided developing all of those because, notwithstanding their countless valuable uses, they would predictably somewhat increase extinction risk in the ways described.
That way of putting the question is misleading in multiple ways. On the one hand, the risk charted above of a terrible outcome is not a tiny probability; some observers describe it as nearly inevitable unless our cultures of bioresearch and publication change. That alone sidesteps the debate about the plausibility of fanaticism. Furthermore, some precautionary approaches to decision-making under uncertainty put special emphasis on preventing high mortality and morbidity. And because that toll would result in one way or another from our own science and technology system and from an attacker’s agency, not only from nature, some deontological thinkers might place even greater emphasis on biosecurity.
But there are complicating factors in the other direction as well. The value of some research with the potential for dual use or a lab leak lies in improving pandemic preparedness. Therefore, the badness of bad pandemics counts on both sides of the balance, not just one. Rather than comparing having a few apples to a low probability of having a truly astronomical number of oranges, the proper comparison may be between two mountains of oranges (one of which also carries a few apples), which we have very different chances of securing. The comparison can turn on whether studies are more likely to cause a pandemic than to thwart one, or on whether their findings could be reached in safer ways; a stylized sketch follows.
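In purely illustrative terms (my stylization of the comparison just described, not a worked policy calculation), a dual-use study looks worth pursuing, other things being equal, only when the expected pandemic harm it helps avert exceeds the expected pandemic harm it risks enabling:

$$ p_{\text{thwart}} \cdot H_{\text{averted}} \;>\; p_{\text{cause}} \cdot H_{\text{caused}}, $$

where $p_{\text{thwart}}$ and $p_{\text{cause}}$ are the (highly uncertain) probabilities that the study helps thwart or helps cause a bad pandemic, and the $H$ terms are the corresponding harms. Safer routes to the same findings lower $p_{\text{cause}}$ without lowering $p_{\text{thwart}}$, tilting the balance.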
Another complicating factor is that biosecurity endeavors can be counterproductive in the presence of other agents whose motivations and capabilities are responsive to ours. Such endeavors can reveal to attackers which pathogens and attacks we deem worst. For example, “red teaming”, in which, at the direction of the organization being probed, a simulated adversary tests its security, risks creating roadmaps for misuse. Even seemingly benign instructional efforts or public deliberations about biosecurity policy can reveal to attackers which scenarios concern us most. That calls into question the demand, found in many documents in the area, for openness and international collaboration: the sheer spread of knowledge can be a biorisk. A measure that decreases the probability of disaster in many ways, and may therefore seem to decrease it overall, can exacerbate it dramatically, complicating the attribution of stable probabilities to the various potential events.
Processes
Further moral and political considerations enter the fray when biorisk is reduced coercively, unfairly, or in transgression of rights. Censoring the free exchange of ideas is problematic. If labs or training programs screen out employees deemed dangerous, discrimination can result. On the other hand, we already suppress information on how to create other weapons of mass destruction, seemingly with broad agreement. And no one has a right to increase biosecurity threats. While, of course, perpetrators would bear the primary responsibility for any attempted attack, enabling that attack by making and promulgating the genetic discoveries it requires also seems wrongful, or at least a legitimate object of regulation, even when done with the best of intentions to enhance human health and biosecurity.
Who should call the shots on these matters? Global experts in multiple disciplines (not just virology)? Or, given that we are all stakeholders, everyone alive, acting through our democratic representatives, and perhaps even those who would come into existence if humanity persists? What role should philosophers and other bioethicists play in this area?
On all this, philosophical and bioethical work remains rare. More is needed.
I thank Open Philanthropy for related funding and Kevin Esvelt, Richard B. Gibson, and Leah Price for suggestions on earlier versions.

Nir Eyal
Nir Eyal is the inaugural Henry Rutgers Professor of Bioethics and the inaugural Dr. and Mrs. Stanley S. Bergen Professor of Bioethics at Rutgers University. Primarily a bioethicist, Eyal directed Rutgers’s Center for Population-Level Bioethics and, earlier, worked or trained at Harvard, Princeton, NIH, Oxford, Hebrew U., and Tel-Aviv U.