Biosafety, biosecurity, and bioethics

The COVID-19 pandemic has highlighted the importance of biosafety in the biomedical sciences. While it is often assumed that biosafety is a purely technical matter that has little to do with philosophy or the humanities, biosafety raises important ethical issues that have not been adequately examined in the scientific or bioethics literature. This article reviews some pivotal events in the history of biosafety and biosecurity and explores three different biosafety topics that generate significant ethical concerns, i.e., risk assessment, risk management, and risk distribution. The article also discusses the role of democratic governance in the oversight of biosafety and offers some suggestions for incorporating bioethics into biosafety practice, education, and policy.


Introduction
Over seven million people have died from COVID-19 in the last four years, despite the concerted efforts of governments, public health officials, medical professionals, and biomedical researchers to slow down the spread of the disease and save lives (World Health Organization 2024). While science, medicine, and technology have been instrumental in controlling this pandemic, they have been less effective than one would have hoped. Although COVID-19 has not been as devastating as the Spanish flu of 1918, which killed an estimated 50 million people, it has demonstrated, in sobering terms, that the human population in the twenty-first century is still vulnerable to highly transmissible infectious diseases (Centers for Disease Control and Prevention 2018; Young 2023).
In thinking about how to prevent the next pandemic, researchers and policymakers have discussed various problems that hampered efforts to control COVID-19, including inadequate testing, surveillance, and case reporting; lack of public trust in science and medicine; poor adherence to safety guidelines; politicization of public health; conflicts between policies at the local, national, and international levels; insufficient production of protective equipment and medical supplies; and difficulties with equitable and effective distribution of vaccines and treatments (Stolberg and Weiland 2023; Marcassoli et al. 2023).
A topic that has probably not received enough attention is lack of vigilance concerning biosafety (Young 2023). Concerns about biosafety entered the public forum because of the scientific debate about the origins of the pandemic. While the majority of scientists initially believed that the virus that causes COVID-19, SARS-CoV-2, evolved in an animal population in China (possibly horseshoe bats) and entered the human population by means of one or more intermediate host species (possibly animals being sold at the Wuhan wet market), this hypothesis has not been conclusively proven, and many scientists now believe that the virus may have escaped from the Wuhan Institute of Virology (WIV) in the fall of 2019 (Andersen et al. 2020; Haider et al. 2020; Sirotkin and Sirotkin 2020). Although most proponents of the lab escape hypothesis claim that SARS-CoV-2 was the product of genetic engineering or serial passaging 1 experiments to enhance the virulence of a naturally occurring coronavirus, one does not need to make this assumption to think that SARS-CoV-2 could have entered the human population by means of a biosafety incident. It is possible that the virus was collected from animals and then accidentally infected workers who were studying it in the laboratory but not genetically enhancing it. Scientists from the WIV have been collecting, studying, and experimenting with coronaviruses since the SARS epidemic of 2003-2004, and history teaches us that coronaviruses have escaped from laboratories in the past (Kimman et al. 2008; Young 2023). According to the World Health Organization (WHO), during the SARS epidemic of 2003-2004, in which there were over 8,000 reported cases and 774 deaths worldwide, the virus escaped from a laboratory in Beijing twice, infecting nine people and killing one (Parry 2004; Walgate 2004; American Lung Association 2023). Several years prior to the COVID-19 pandemic, scientists and intelligence officials had raised concerns about biosafety problems in China's rapidly expanding network of microbiology laboratories (Cyranoski 2017; Yuan et al. 2020; Young 2023).
1 Genetic engineering involves the direct modification of an organism's genome; for example, using a molecular tool, such as CRISPR, to insert a DNA sequence into a virus that enables it to produce a protein that helps it infect human cells (Menachery et al. 2015). Serial passaging is a form of experimentation in which one infects animals with a pathogen and then inoculates other animals with secretions or other biomaterials from the original hosts, selected on the basis of signs or symptoms they display, such as degree of illness. Passaging is a form of directed evolution (Russell et al. 2012; Sirotkin and Sirotkin 2020).
We may never fully understand the origins of COVID-19 because the Chinese government has destroyed and suppressed coronavirus data and prevented WHO officials who were investigating the origins of COVID-19 from having unrestricted access to the WIV and researchers working there (Harrison and Sachs 2022; Cohen 2023; Calvert and Arbuthnott 2023; Young 2023). Nevertheless, the idea that a biosafety lapse at the WIV, or some other laboratory for that matter, could have caused the COVID-19 pandemic is a very real possibility that has significant bioethical and public policy implications.
Most of the articles or book chapters on bio-risks written by bioethicists have focused on "hot" topics related to biosecurity, such as bioterrorism and dual use research, rather than on biosafety per se (see, for example, Kuhlau et al. 2011; Rappert and Selgelid 2013; Kaebnick et al. 2016; Holm 2017; Evans et al. 2022). 2 While it is often assumed that biosafety is a purely technical matter that has little to do with philosophy or the humanities, biosafety raises important ethical issues that have not been adequately examined in the scientific or bioethics literature (Kambouris 2023; Johnson 2024). To provide some useful background to these ethical issues, this article will first review some pivotal events in the history of biosafety and biosecurity. The article will then explore three different biosafety topics that generate significant ethical concerns, i.e., risk assessment, risk management, and risk distribution. Finally, the article will discuss the role of democratic decision-making in the oversight of biosafety and offer some suggestions for incorporating bioethics into biosafety. Note: the findings and recommendations in this article are based on my review of the biosafety literature and my analysis of the issues rather than on direct observations of research at the NIH.

Defining biosafety and biosecurity
Biosafety can be defined as "the use of specific practices, training, safety equipment, and specially designed buildings to protect the worker, community, and environment from an accidental exposure or unintentional release of infectious agents and toxins" (Department of Health and Human Services 2017a). 3 Biosafety is closely related to another concept, biosecurity, which can be defined as "the protection, control of, and accountability for high-consequence biological agents and toxins, and critical relevant biological materials and information within laboratories to prevent unauthorized possession, loss, theft, misuse, diversion, and intentional release" (Department of Health and Human Services 2015). Biosafety and biosecurity both deal with the management of biological risks; however, biosafety focuses on preventing accidental exposure to or release of dangerous biological agents and toxins, while biosecurity focuses on deliberate misuses of dangerous biological agents and toxins (Beeckman and Rüdelsheim 2020). Although this article will focus on the ethics of biosafety, it will also consider biosecurity matters, which are often closely connected to biosafety.

A brief history of biosafety and biosecurity
Biological research has involved significant safety issues since Louis Pasteur (1822-1895), Robert Koch (1843-1910), and other pioneering microbiologists began collecting, isolating, and culturing microorganisms in the 1800s (see Table 1). However, governments did not take a strong interest in biosafety until the 1950s, when the risks of bioweapons research became apparent. During this time, both the US and the Soviet Union were conducting research on offensive bioweapons and defensive countermeasures (Frischknecht 2003). The first unofficial meeting of the American Biological Safety Association (ABSA) took place at a military facility in Fort Detrick, MD in 1955. Attendees were military personnel, and the sessions were classified. In 1957, the ABSA expanded its constituency and began holding non-classified sessions, and by the mid-1960s, scientists and administrators from the Centers for Disease Control (CDC), National Institutes of Health (NIH), and other federal agencies were attending its meetings. In 1966, meeting attendees included representatives from universities, hospitals, and private industry (Connell 2011). By the early 1970s, the Occupational Safety and Health Administration (OSHA) had developed biosafety regulations (Occupational Safety and Health Administration 2011).
In 1972, the US officially abandoned offensive bioweapons research when it signed the Biological and Toxin Weapons Convention (BTWC). Although the Soviet Union also signed the BTWC, it continued to conduct secret research on offensive bioweapons (Leitenberg and Zilinskas 1989). Very little is publicly known about what the Soviet Union did to promote biosafety during the Cold War era (Leitenberg and Zilinskas 1989). However, there are reasons to believe that its safety measures were inadequate. For example, in 1979, 66 people died from anthrax due to an accident at a laboratory in Sverdlovsk (Zhou et al. 2019), but this incident was not verified until 15 years after it happened because Soviet authorities covered it up (Leitenberg and Zilinskas 1989).
Biosafety became a major concern for molecular biologists in the early 1970s, when Paul Berg (1926-2023), Stanley Cohen, and Herbert Boyer developed techniques for recombining DNA from different bacteria (Jackson et al. 1972; Connell 2011). Scientists working with recombinant DNA soon realized that their experiments posed serious risks to public health and the environment if genetically modified organisms (GMOs) were to escape from the laboratory and infect people, animals, or plants. Leaders of this field met in Asilomar, CA in 1973, 1974, and 1975 to discuss biosafety issues and develop biosafety protocols. In 1974, Berg, Cohen, Boyer, and other top scientists called for a voluntary moratorium on recombinant DNA experiments until these risks were better understood and appropriate safety protocols were in place (Berg et al. 1974). Berg et al. (1975) published a paper describing some biosafety principles and recommendations for recombinant DNA experiments.
In 1974, the NIH formed the Recombinant DNA Advisory Committee (RAC) to provide guidance for NIH-funded recombinant DNA experiments. In 1976, the RAC published guidelines for recombinant DNA research, which included the requirement that funded institutions establish institutional biosafety committees (IBCs) to review and oversee recombinant DNA research (Connell 2011; Beeckman and Rüdelsheim 2020). These containment guidelines gave rise to the biosafety level (BSL) framework, which includes four safety levels corresponding to the degree of risk from infectious agents or toxins. For example, the safety measures used in a BSL-1 lab, such as storing agents in sealed containers and washing surfaces, are adequate for research on benign organisms like Escherichia coli, but not for dangerous pathogens that pose a high risk of aerosol-transmitted infection, such as the Ebola virus. To study the Ebola virus, one should use the safety measures found in a BSL-4 lab, such as wearing a full-body, air-supplied suit, working in a physically isolated laboratory, and undergoing decontamination procedures when leaving the laboratory. See Table 2. The BSL framework is used by institutions around the world to guide biosafety practices (World Health Organization 2020). In 1986, the CDC published the 1st edition of Biosafety in Microbiological and Biomedical Laboratories (now in its 6th edition), which describes the BSL framework in detail (Centers for Disease Control and Prevention 2020).
Although the public became aware of the biosafety issues related to genetic manipulation through magazine and newspaper editorials warning about the dangers of "superbugs" and popular books that became movies, such as The Andromeda Strain (Crichton 1969), biosafety was, at that time, more of a scientific issue than a political one. The President's Commission for the Study of Ethical Problems in Medicine and Biomedical and Behavioral Research (1982) did publish a report titled Splicing Life, but the document focused, for the most part, on the ethics of genetically engineering human beings and did not examine biosafety issues in much depth. The US Supreme Court's decision in 1980 to uphold Ananda Chakrabarty's patent on a genetically modified (GM) bacterium that metabolizes crude oil met with little fanfare or public protest, even though it proved to be instrumental in stimulating the burgeoning biotechnology industry (Resnik 2004).
During the 1980s and 1990s, important steps were taken to internationalize biosafety. In 1984, the ABSA became ABSA International and began affiliating with biosafety associations around the globe (American Biosafety Association International 2023). In 2001, the International Biosafety Working Group was formed, which later became the International Federation of Biosafety Associations (IFBA) (International Federation of Biosafety Associations 2023). The IFBA includes over 30 biosafety associations from North and South America, Europe, Asia, and Africa. An important biosafety treaty, the Cartagena Protocol on Biosafety to the Convention on Biological Diversity, was adopted in 2000 to govern the transboundary movement of living modified organisms.
In 1996, the first GM crops and foods became commercially available. Soon afterward, European citizens and farmers protested against GM crops and "Frankenfoods," and the European Union banned GM foods and crops until 2008 (Resnik 2021). Although GM crops remain a controversial issue in biotechnology and over two dozen countries currently ban or significantly restrict their cultivation, most of the public concern about GM has focused on the health risks of consuming GM foods and, to a lesser extent, the bio-risks of growing GM crops, such as the environmental impacts of accidental or intentional releases of GM plants (Resnik 2021).
At the beginning of the twenty-first century, bioterrorism and biosecurity emerged as major concerns for science policy. In the fall of 2001, Bruce Ivins, a biodefense researcher at the US Army Medical Research Institute of Infectious Diseases in Fort Detrick, allegedly sent letters laced with anthrax spores to Senators Tom Daschle and Patrick Leahy and several members of the media. The attacks, which killed five people, sickened 17, and exposed dozens of postal workers to anthrax spores, created tremendous anxiety in a nation already reeling from the terrorist attacks of September 11, 2001. The Federal Bureau of Investigation's lengthy and costly investigation eventually named Ivins as the suspect in 2008, but Ivins committed suicide before he could be apprehended and brought to trial (CNN 2023). It is worth noting that this was not the first bioterrorist attack on US soil. In 1984, members of a religious cult led by Bhagwan Rajneesh sprayed salmonella bacteria on salad bars in ten restaurants in The Dalles, Oregon in an effort to make people too sick to vote in Wasco County elections so that the cult's candidates would win. A total of 751 people developed salmonella poisoning, but fortunately no one died (Author 2021).
In response to the anthrax attacks as well as other concerns about terrorism and biosecurity, the US Congress in 2002 passed the Agricultural Bioterrorism Protection Act and the Public Health Security and Bioterrorism Preparedness and Response Act, which establish requirements for the possession, control, use, and transfer of dangerous biological entities known as select agents and toxins (Connell 2011). Currently, there are 68 types of biological entities classified as select agents or toxins, including SARS, genetically manipulated forms of SARS, anthrax, Ebola, smallpox, and reconstructed 1918 pandemic influenza virus (Federal Select Agent Program 2022).
During the early 2000s, several papers were published in biomedical journals that raised issues of "dual use"; that is, the results of the research could be used for beneficial purposes, such as advancing science, medicine, or public health, but also could be used for harmful purposes, such as terrorism, crime, or warfare (Miller and Selgelid 2007). These publications included a study showing how to increase the virulence of a virus used to vaccinate people against smallpox, a study showing how to manufacture a polio virus from DNA sequence data and mail-order supplies, a study describing a mathematical model for infecting the US milk supply with botulinum toxin, and a study showing how to reconstruct the extinct 1918 Spanish flu virus (Rosengard et al. 2002; Cello et al. 2002; Wein and Liu 2005; Tumpey et al. 2005).
These dual use issues caught the attention of the US Congress, which asked the National Research Council (NRC) to study them. In 2004, the NRC issued an influential report, Biotechnology in the Age of Terrorism, which examined the ethical and policy issues raised by these and other publications and identified some types of research that warrant special scrutiny, including experiments that attempt to enhance the harmful consequences of a biological agent or toxin, disrupt the effectiveness of an immunization against a biological agent, increase the transmissibility of a biological agent or toxin, or create a novel biological agent or toxin or reconstitute an eradicated biological agent or toxin (National Research Council 2004; Shinomiya et al. 2022). The NRC also recommended that the US government form an organization to provide scientists, journal editors, and government officials with guidance and advice on biosecurity and biosafety issues.
The US government followed the NRC's recommendation and formed the National Science Advisory Board for Biosecurity (NSABB) in 2004 (National Science Advisory Board for Biosecurity 2023). Because "dual use research" is potentially very broad in scope, the NSABB changed the term to "dual use research of concern" (DURC) to focus the scope of ethical and policy debate (World Health Organization 2010). The US government currently defines DURC as "[L]ife sciences research that, based on current understanding, can be reasonably anticipated to provide knowledge, information, products, or technologies that could be misapplied to do harm with no, or only minor, modification to pose a significant threat with potential consequences to public health and safety, agricultural crops and other plants, animals, the environment, materiel, or national security" (The White House 2024). The most controversial case the NSABB has examined thus far involved two NIH-funded studies describing how to genetically modify the H5N1 avian influenza virus to enable it to be transmissible between mammals by respiratory water droplets. The papers describing the studies were submitted to Science and Nature in 2011 (Russell et al. 2012; Imai et al. 2012). One of the studies was headed by Ronald Fouchier at Erasmus Medical Center in the Netherlands and the other by Yoshihiro Kawaoka at the University of Wisconsin-Madison. The journals' editors gave the papers special review because they recognized that the results of the research could be used to make a bioweapon and asked the NSABB to render an opinion as to whether these papers should be published. At its December 2011 meeting, the NSABB recommended, by unanimous vote, that neither paper should be published in full. The papers could be published only if key information needed to replicate the experiments was redacted (Resnik 2013). The primary basis for the NSABB's opinion was that the biosecurity risks of publication could not be justified on the basis of the benefits of the research for science and public health, such as potential applications in the development of treatments or vaccines or monitoring of bird populations for dangerous mutations. The NSABB was also concerned about the biosafety risks of publication if others tried to replicate the experiments, but these apprehensions were secondary, at least initially.
After the NSABB's ruling, researchers working in this area agreed to a temporary moratorium on experiments that enhance the virulence or transmissibility of avian influenza viruses. In February 2012, the WHO convened a workshop to examine the issues raised by the papers and recommended full publication. In March 2012, after reading longer versions of the papers that included additional information about biosecurity and biosafety protocols and the benefits of publication, the NSABB reversed its earlier ruling and recommended full publication of both papers. However, a substantial minority of NSABB members felt that the research conducted in Fouchier's laboratory (Russell et al. 2012) should be published only in redacted form (Author 2013).
Later in 2012, after the publication of the controversial papers, the US government adopted a policy for the oversight of DURC in life sciences research funded by federal agencies. The policy defined DURC and set forth procedures agencies must follow in assessing and mitigating the risks of DURC. The policy only applied to seven categories of dangerous experiments involving biological entities on the select agents and toxins list (United States Government 2012).
The announcement of a new US government DURC policy did not quell the debate about genetic manipulation of dangerous pathogens. Many scientists and some bioethicists argued that the benefits of this research were not worth the risks and urged researchers to consider alternative approaches to studying the genetic and molecular structure, evolutionary dynamics, and pathogenicity of viruses (Wain-Hobson 2013; Lipsitch and Inglesby 2014; Evans et al. 2015). Other scientists pushed back against these critiques by arguing that the research has important benefits for medicine, public health, and science and that the risks can be managed with appropriate safeguards and security (Fauci 2012; Casadevall et al. 2014; Davis et al. 2014). The NSABB and NRC held several meetings to consider these issues in more depth and solicit input from scientists and members of the public.
An important development that occurred at this time is that the biosafety risks of research involving dangerous pathogens began to be seen by many as just as important as biosecurity risks, due in part to concerns raised by scientists at NSABB and NRC meetings and some biosafety lapses in federal laboratories (Shinomiya et al. 2022). For example, in July 2014, six vials of variola virus, which causes smallpox, were discovered during an inventory of materials stored in a Food and Drug Administration (FDA) cold storage room at the NIH (Young 2023). In December 2014, the CDC reported that some material from an Ebola experiment was improperly transported from a BSL-4 lab to a BSL-2 lab (Centers for Disease Control and Prevention 2015). Although these incidents did not expose laboratory workers to deadly pathogens, the fact that these materials were not properly stored, transferred, or accounted for was a major biosafety problem. The NIH and CDC took measures to improve biosafety practices and oversight as a result of these incidents (Young 2023).
In October 2014, DHHS announced a pause on new funding of gain of function (GOF) genetic manipulation experiments involving SARS, MERS, and other potential pandemic pathogens (PPPs) and asked the NSABB to review the issues and make policy recommendations. GOF research is research that attempts to genetically manipulate a pathogen to enable it to acquire a new function, such as increased transmissibility or virulence (NRC 2015). The NSABB recommended that the DHHS lift the funding pause after implementing an oversight framework that minimizes and manages the risks of GOF research with PPPs. The DHHS followed these recommendations and lifted the funding pause in December 2017. The DHHS's oversight framework established criteria for making funding decisions for research involving genetic enhancement of PPPs and outlined responsibilities for funding agencies and DHHS. The framework defined a PPP as a pathogen that "is likely highly transmissible and likely capable of wide and uncontrollable spread in human populations; and… is likely highly virulent and likely to cause significant morbidity and/or mortality in humans" (Department of Health and Human Services 2017b). The framework defined an enhanced PPP as "a PPP resulting from the enhancement of the transmissibility and/or virulence of a pathogen. Enhanced PPPs do not include naturally occurring pathogens that are circulating in or have been recovered from nature, regardless of their pandemic potential" (Department of Health and Human Services 2017b).
In 2015, an NIH-funded research team led by Ralph Baric at the University of North Carolina at Chapel Hill published a paper reporting the results of an experiment in which they used a mouse-adapted SARS-CoV backbone to create a chimeric virus that expresses the spike protein of SHC014. The virus was able to use angiotensin-converting enzyme 2 (ACE2) receptors to infect and replicate in human airway cells and caused pathogenesis in humanized mice. The authors hypothesized that coronavirus pools found in horseshoe bats maintain spike proteins that give them the capability of infecting humans and stated that there is "a potential risk of SARS-CoV re-emergence from viruses currently circulating in bat populations" (Menachery et al. 2015: 1508). WIV researcher Shi Zhengli, the second-to-last author on the paper, who had been collecting bat coronaviruses since 2004, provided the genomic sequence for the spike protein. Shi has conducted similar experiments at the WIV. While Baric's research was conducted in a BSL-3 lab, Shi's was conducted in a BSL-2 lab (Jacobsen 2021).
Although the study did not generate much public interest when it was published, it became controversial in May 2021, when US Senator Rand Paul questioned NIAID Director Anthony Fauci about the study during Senate Health Committee hearings. Senator Paul claimed that the study was a GOF experiment that violated the NIH's funding pause, but Fauci denied this allegation. Paul also said that the NIH had funded Shi's research at the WIV through grants to EcoHealth Alliance, but Fauci denied this claim as well. However, the NIH later admitted that it had funded Shi's research but was unaware that this had happened because of EcoHealth Alliance's lack of transparency and accountability concerning its research projects (Basu 2021; Shinomiya et al. 2022; Calvert and Arbuthnott 2023). In May 2024, the NIH terminated EcoHealth Alliance's funding due to its lax oversight and reporting of research (Lenharo 2024).
Since 2014, public health and governmental organizations have held meetings to discuss ways of enhancing biosafety (Shinomiya et al. 2022). The ABSA's annual meetings have brought together hundreds of scientists, public health practitioners, and government officials from all over the world (ABSA 2023). In December 2017, the WHO (World Health Organization 2017) held a meeting that included experts from more than 20 countries and 53 institutions to discuss challenges, opportunities, and solutions for maximizing safety in BSL-4 laboratories. In October 2020, the US government hosted a meeting attended by scientists and administrators (but no ethicists) from various federal agencies, including DHHS, NIH, FDA, USDA, DOE, FBI, and EPA, to discuss policies and strategies for enhancing biosafety. The meeting led to the publication of a report in 2022 (National Science and Technology Council 2022).
At the beginning of May 2024, the Biden Administration announced that it had revised federal policies on DURC and PPP research (The White House 2024; Kaiser 2024). The new policies were adopted in response to pressures from politicians and members of the public who were concerned about the risks of GOF research and the possibility that the pathogen that caused the COVID-19 pandemic was a genetically manipulated virus that escaped from the WIV. The policy expanded the types of research that require review and oversight and revised the definition of a PPP to include a pathogen that could have pandemic potential as a result of being modified. The policy distinguishes between two categories of research: 1) DURC research, including nine types of experiments of concern, similar to those identified by the NRC; and 2) research involving PPPs or PPEPs (potential pandemic enhanced pathogens). A PPP is defined as a "pathogen that is likely capable of wide and uncontrollable spread in a human population and would likely cause moderate to severe disease and/or mortality in humans" (The White House 2024). The policy also provides guidance for policy implementation.

Bioethics issues
From this brief history, we can see that biosafety and biosecurity have been important topics in science policy since the 1950s. However, although discussions of biosecurity in the professional literature and public meetings have addressed ethical issues, discussions of biosafety have focused mostly on scientific and technical issues. With the notable exception of dual use research, there has been little interaction between bioethicists and bio-scientists on matters of biosafety. As noted earlier, an important biosafety meeting held by the US government in 2020 did not include any ethicists. While the NSABB has included ethicist members and has featured presentations by ethicists, most of the discussion at these meetings has centered on scientific and technical issues (National Science Advisory Board for Biosecurity 2005, 2014, 2023). Although there is little doubt that scientific and technical issues should take center stage in discussions of biosafety, ethical issues also deserve serious attention, because safety has implications for human wellbeing, human rights, justice, and other moral values. Some important ethical issues in biosafety are described below (see Table 3).

Risk assessment
Risk assessment is the most fundamental ethical issue related to biosafety and biosecurity because research should not be conducted and funded 4 if it is likely to create risks that are unacceptable. Risk assessment consists of three parts: (1) identifying potential risks and benefits of a human activity, technology, or other item to be assessed; (2) assigning probability estimates to risks and benefits; and (3) weighing risks and benefits. Risk assessment requires the exercise of ethical (or moral) 5 judgment because the third part involves deciding whether risks are ethically justifiable, given the benefits (Resnik 2021). Risk assessment is a fundamental safety issue in bioscience, regardless of whether the research involves genetic experiments that create novel agents or toxins or only entails the collection, observation, and storage of naturally occurring pathogens, such as SARS or Ebola.
While there has been some discussion in the bioethics literature about how to assess the risks and benefits of research involving genetic manipulation of pathogens (see Kuhlau et al. 2011; Resnik 2013; Selgelid 2016), there has been very little discussion about assessing the risks of biological research that does not involve genetic manipulation, such as the research that led to the SARS accident in a lab in Beijing in 2004, which presumably did not involve genetic manipulation. 6 For several decades, scientists have considered arguments for and against destroying stockpiles of smallpox, but this debate has taken place mostly without input from bioethicists (Weinstein 2011; Gronvall and Sell 2022). Although Selgelid (2004) has written about ethical issues related to smallpox, he has focused on public health and medical issues, such as mandatory vaccinations and informed consent, and not on the issue of whether smallpox should be stored and studied in the lab.

Table 3 Ethical issues in biosafety

Risk assessment
Are the risks of the research acceptable, given the potential benefits?
How should decision-makers deal with uncertainty concerning risks and benefits?

Risk management
How should the risks of the research be managed?
Should some studies not be published in full?
Who should have access to dangerous materials?
What BSL level is appropriate for an experiment?

Risk distribution
How are the risks and benefits of the research expected to be distributed?
Who will face the most significant risks?
Who stands to benefit?
Is the expected distribution fair?
Will the local communities and the public have appropriate input into decisions concerning the acceptability, management, and distribution of risks?
Since most biological research conducted in BSL-3 and BSL-4 labs does not involve the genetic manipulation of pathogens (Kimman et al. 2008), it is important for biomedical researchers, bioethicists, policymakers, and members of the public to participate in assessments and decisions related to all types of dangerous biological research to ensure that risk/benefit issues are adequately addressed. Democratic processes (discussed in more depth below) should play an essential role in these deliberations.
One of the key principles of risk assessment is that risks should be proportional to benefits (European Commission 2000; Nuffield Council 2007; Resnik 2021). While most people would view this as a good idea, deciding whether risks are proportional to benefits is a complex determination that requires one not only to ethically compare risks and benefits but also to estimate the probabilities of different possible outcomes. However, probability estimates concerning risks and benefits are often fraught with uncertainty, due to a lack of reliable and accurate data from studies published in peer-reviewed journals or other credible sources of evidence (Resnik 2021).
How to deal with uncertainty regarding risks and benefits raises challenging ethical issues for biosafety and biosecurity (Selgelid 2016; Resnik 2021). Biosafety and biosecurity risks can be difficult to estimate due to a lack of high-quality, interpretable data concerning laboratory accidents, laboratory-acquired infections, security breaches, and other adverse outcomes. While there is some data concerning biosafety risks in different types of laboratories, it is difficult to interpret due to the diversity of reporting systems and metrics (Kimman et al. 2008; National Research Council 2015; Wurtz et al. 2016; Himmel 2023; Young 2023). Biosecurity risks are difficult to estimate because bioterrorism events are very rare, and predictions must therefore be made based on mathematical models, which are subject to various biases (Resnik 2021; Himmel 2023).
Because good evidence is lacking, experts often disagree significantly about the risks and benefits of dangerous biological research. For example, experts disagreed by several orders of magnitude concerning the biosafety risks of the controversial H5N1 experiments reviewed by the NSABB in 2011 (see Lipsitch and Bloom 2012; Fouchier 2015; Fouchier et al. 2012; Klotz 2015). Recently, the Department of Homeland Security (DHS) and the National Academies of Sciences, Engineering, and Medicine disagreed about the biosafety risks of a USDA lab built in Manhattan, Kansas (Cornwall 2023; discussed further below). Experts have also disagreed about whether GOF research is likely to yield important benefits, such as knowledge that could be useful in developing treatments or vaccines (Fauci 2012; National Research Council 2015; Lipsitch 2018; DiRita and Bertuzzi 2021). When faced with uncertainties related to biological risks, some have advocated for risk-averse policies, while others have sought to judiciously balance risks and benefits. Deciding which path to take is an ethical, not a scientific, decision (Selgelid 2016; Resnik 2021).
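To see why order-of-magnitude disagreements matter so much, consider a deliberately simplified expected-value sketch. All of the numbers below are hypothetical illustrations, not estimates from the literature, and the simple model (a fixed escape probability per lab-year and a fixed harm if an escape seeds an outbreak) is an assumption for the sake of exposition:

```python
def expected_harm(p_escape_per_lab_year, lab_years, harm_if_escape):
    """Toy expected-value model of biosafety risk.

    p_escape_per_lab_year: assumed probability of an escape in one lab-year
    lab_years: total lab-years of research under consideration
    harm_if_escape: assumed harm (e.g., fatalities) if an escape causes an outbreak
    """
    # Probability that at least one escape occurs over all lab-years
    p_any_escape = 1 - (1 - p_escape_per_lab_year) ** lab_years
    return p_any_escape * harm_if_escape

# Two hypothetical expert estimates that differ by three orders of magnitude,
# echoing (but not reproducing) the kind of spread seen in the H5N1 debate.
optimistic = expected_harm(2e-6, 100, 1e6)
pessimistic = expected_harm(2e-3, 100, 1e6)

print(f"Optimistic expert:  expected harm ~ {optimistic:,.0f}")
print(f"Pessimistic expert: expected harm ~ {pessimistic:,.0f}")
```

Under these made-up inputs, the two experts' expected-harm figures differ by roughly three orders of magnitude, so the same proportionality test (are the benefits worth the risks?) can come out opposite ways depending on whose probability estimate one accepts, which is precisely why resolving such disputes is an ethical and not merely a technical exercise.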
Scientists, politicians, and members of the public also disagree about how to compare benefits and risks. Comparing risks and benefits raises moral and political questions because it involves deciding upon the level (or degree) of benefit that is required to justify serious risks, such as risks related to biosafety or biosecurity (Resnik 2021). As noted above, since 2010, some scientists (e.g., Fauci 2012) have argued that the benefits of GOF research are worth the risks, while others have defended the opposite viewpoint (Lipsitch 2018). Because risk estimates are fraught with uncertainty, disagreements about the justification of GOF experiments may affect these estimates: proponents of GOF may underestimate the risks while opponents may overestimate them (Resnik 2021). Thus, it may be very difficult to separate "facts" from "values" when it comes to assessing biosafety and biosecurity risks.
A key assumption that supports arguments for conducting dangerous biological research is that the risks can be well managed through implementation of appropriate biosafety and biosecurity measures (Selgelid 2016). But can they? And what ethical issues does risk management raise? These questions bring us to our next topic.

Risk management
Risk management requires the exercise of ethical judgment because it involves deciding how to minimize or mitigate the risks of an activity once a decision has been made to engage in it, and strategies for minimizing or mitigating risks often create tradeoffs among competing values (Resnik 2021). Deciding whether to publish DURC entails tradeoffs between values, such as promoting progress in science and technology vs. preventing risks to public health, agriculture, or national security. As we saw earlier, the NSABB considered these and other values when it recommended that the H5N1 GOF studies should be published. Although the NSABB considered the option of redacted publication, it did not ultimately recommend it. However, redacted publication of DURC has occurred at least once. In 2014, a research team led by Stephen Arnon from the California Department of Public Health published a paper in which they redacted some genomic sequence data needed to make a novel Clostridium botulinum toxin (Dover et al. 2014). They chose to redact this information because they were concerned that someone could use it to make a bioweapon for which there was no effective treatment (Relman 2014).
Bioethicists have published articles about and taken part in NSABB sessions on publication, data sharing, and transparency issues related to DURC (see Resnik 2013, 2020; Selgelid 2016). However, important ethical questions can arise prior to publication when one is deciding (a) how much safety is sufficient for conducting a study that uses or makes dangerous biological agents or toxins, and (b) the type of safety measures that should be implemented in the study. These questions are conceptually similar to those that arise in other areas of risk management in science that involve competing values, such as clinical trial design (Shamoo and Resnik 2022).
As noted earlier, the BSL framework requires that the level of biosafety correspond to the degree of risk of the agent or toxin. However, deciding which BSL level to use for an experiment is not a purely technical matter because it involves tradeoffs between risks, costs, and benefits (Kimman et al. 2008). BSL-4 labs cost more money to build and operate than BSL-3 labs; BSL-3 labs cost more than BSL-2 labs; and BSL-2 labs cost more than BSL-1 labs. Research in BSL-4 labs is more difficult to conduct than research in BSL-3 labs because it involves the use of uncomfortable protective gear that can hinder movement (Kimman et al. 2008; Butler 2009). Additionally, some types of agents or toxins may not be easily categorized within the BSL framework, which implies that safety measures may need to be determined on a case-by-case basis (Kimman et al. 2008). Given these financial and logistical realities, researchers and funding agencies may be tempted to skimp on biosafety to save money and reduce inconvenience (Young 2023). These cost/risk/benefit issues also have implications for the distribution of risks (discussed below) because countries that cannot afford (or do not want to spend money on) BSL-3 or BSL-4 labs may create higher risks than countries that can afford these labs if they decide to conduct risky biological research in BSL-1 or BSL-2 labs, which may have happened with coronavirus research at the WIV (Young 2023).

Deciding who should have access to select agents and toxins also raises ethical issues, because denial of access should be based on the security risks that a person poses and not on racial/ethnic, political, or other irrelevant factors. However, it can be difficult to distinguish between relevant and irrelevant factors when making a decision concerning access to research materials, especially given legitimate concerns about preventing foreign countries from stealing intellectual property, new technologies, or proprietary data (Gilbert 2023). Background checks for people who want access to select agents or toxins should be fair, judicious, and prudent.
There is widespread agreement among experts that laboratory culture plays a crucial role in promoting safe biological research (Perkins et al. 2019). The culture of a laboratory or research group can be equated with the values, beliefs, and behaviors of its members (Schneider et al. 2013). A safety culture is one in which researchers not only follow safety protocols but also believe in the importance of safety and advocate for it (Institute of Medicine 2000; Perkins et al. 2019). Questions about how to foster a safe laboratory culture are not technical/scientific issues but humanistic ones related to education in the responsible conduct of research (RCR) (National Academies of Sciences, Engineering, and Medicine 2017; Antes et al. 2019; Shamoo and Resnik 2022). The NIH requires that RCR instruction for funded students include education on "safe research environments," which is equated with environments "that promote inclusion and are free of sexual, racial, ethnic, disability and other forms of discriminatory harassment" (National Institutes of Health 2022). However, one could argue that RCR education should address not only psychosocial safety but also biosafety and laboratory safety (Shamoo and Resnik 2022). Moreover, there may be situations in which safety issues intersect with other RCR issues, such as authorship, collaboration, and mentoring. For example, if a trainee and their supervisor have a disagreement about the level of safety required to conduct an experiment, this could affect the trainee's relationship with their supervisor and their career.
The Institute of Medicine (IOM) (2000), in its groundbreaking report, To Err Is Human: Building a Safer Health System, emphasized the importance of fostering a safety culture in health care for preventing medical errors. The report stressed that while it is unrealistic to expect people to be error-free, the culture and system can be improved so that mistakes can be prevented, detected, and corrected. One characteristic of US medical culture, according to the IOM, is that people are often reluctant to report, admit to, or share information about errors due to concerns about legal liability. The IOM recommended changing this culture so that people are more willing to report errors, admit to them, and share information about them. To help change this culture, 29 US states have passed "I'm sorry" laws that require health care providers to disclose errors to patients and make apologies inadmissible in court as evidence of fault (Bender 2007). The report also recommended improving the system by developing procedures that include multiple levels of error-prevention/detection, such as double-checking of prescriptions, blood types, known allergies, and surgical sites.
These lessons from the IOM report clearly apply to biosafety. When a biosafety accident or breach occurs, researchers and administrators must not ignore it, deny it, or cover it up but be willing to report it to the relevant authorities so that corrective actions can be taken to protect workers and the public and prevent similar events from happening in the future (Young 2023; Haines and Gronvall 2023). Although researchers are not likely to face legal liability for mistakes they make in the lab, they may still face institutional sanctions that could discourage reporting. Biological laboratories can make their cultures safety-oriented by treating self-reporting of mistakes and cooperation with an investigation as factors that mitigate or eliminate potential sanctions (Feeser et al. 2021). Additionally, biosafety protocols should include multiple levels of error-prevention/detection, such as double-checking of labels on containers, regular auditing of inventories of agents and toxins, and (if appropriate) antibody testing of workers for infections, even when they display no symptoms of disease. While it is not possible to prevent all needle sticks, spills, failures of protective gear and equipment, and other accidents in the lab, one can be prepared to respond to them when they happen (Young 2023).
Accreditation can also play a major role in promoting safety-oriented cultures and systems in biomedical research. Since 1965, the Association for Assessment and Accreditation of Laboratory Animal Care (AAALAC) International has been accrediting research organizations that perform experiments on animals (Gettayacamin and Retnam 2017). Currently, over 1000 research organizations from more than 50 countries are accredited (Association for Assessment and Accreditation of Laboratory Animal Care International 2023). AAALAC International evaluates organizations based on their compliance with the standards, practices, and procedures described in the National Research Council's (2011) Guide for the Care and Use of Laboratory Animals. During the accreditation process, AAALAC International visits research institutions, inspects animal facilities, examines animals, evaluates the institution's standard operating procedures for protecting animal welfare, reviews animal use protocols approved by the ACUC and ACUC meeting minutes, and interviews researchers and institutional officials. There is little doubt that AAALAC International accreditation plays an indispensable role in promoting laboratory animal welfare (Gettayacamin and Retnam 2017).
Although there are some organizations that provide biosafety accreditation for research institutions or laboratories, there is no organization in the biosafety realm with the influence or prestige of AAALAC International (National Research Council 1989; Mourya et al. 2017). The advent of such an organization could be instrumental in promoting biosafety worldwide (Silver 2022).

Risk distribution
Another important ethical issue is the distribution of biological risks. Risk distribution raises ethical issues because one might ask whether the distribution is fair or just (Resnik 2021). Researchers working with dangerous human pathogens are likely to face the highest risks, but people living near the lab may also face risks if a biological agent escapes. The sphere of risk can expand outward once an agent enters human, animal, or plant populations (Lipsitch and Bloom 2012). Although much of the recent biosafety debate has focused on risks to human health, biological agents may also pose risks to non-human species, such as livestock, crops, or native plants or animals. Many pathogens, such as SARS and influenza viruses, pose risks to both humans and animals. Approximately 60% of known human infectious diseases can be spread between animals and people, and about 75% of new human infectious diseases come from animals (Centers for Disease Control and Prevention 2021). GM plants or animals, especially those that incorporate a gene drive system,7 can pose a significant threat to agriculture and ecosystems if they are accidentally or intentionally released into the environment (National Academies of Sciences, Engineering, and Medicine 2016; Esvelt and Gemmell 2017).
Fairness can be understood in terms of (1) the outcomes of a distribution of risks and benefits or (2) the procedures or processes used to distribute risks and benefits (Rawls 1971). For example, in asking whether a state government's decision about where to locate a hazardous waste site was fair, one can ask whether the outcome of the decision (i.e., where the hazard was placed) was fair, or whether the procedures that the government followed in making the decision were fair (Resnik 2012). There are many different theories about what makes a distribution of risks and benefits fair or unfair, including utilitarian, egalitarian, and libertarian approaches (Barry 1991; Lamont 2017). For example, a state might select a site for hazardous waste storage with an eye toward minimizing risks for the whole population or toward minimizing risks for socioeconomically disadvantaged members of the population (Resnik 2012).
While there is not sufficient space in this article to make substantive comments about what makes the outcome of a risk/benefit distribution fair or unfair, there is enough space to address the fairness of procedures. To address procedural fairness issues, it will be useful to distinguish between three groups of people: (1) people who participate in work involving dangerous biological agents or toxins; (2) people who live in the vicinity of the work; and (3) members of the greater public.
Many different activities that people engage in, including work, recreation, and travel, involve significant hazards or risks.8 Valid consent is a key requirement, but not the only requirement,9 for regarding occupational exposure to hazards and risks as fair.10 It is fair, one might argue, for a firefighter to encounter higher occupational risks than a librarian because the firefighter has freely and knowingly consented to these risks (Author 2012). By the same reasoning, it follows that participation in work involving significant biological hazards or risks is ethical and procedurally fair when it is consented to. To promote informed choice related to bio-risk exposure, overseers of the laboratory, such as senior investigators, must inform workers (e.g., students, postdoctoral fellows, and technicians) about biological hazards and the measures being taken to reduce and control risks, such as safety protocols, protective equipment, disinfection and decontamination procedures, health monitoring, and waste disposal. Dissemination of safety information is required by OSHA regulations and is a standard practice for training researchers who work in biological laboratories (National Research Council 1989; Kimman et al. 2008).
Consent may be compromised, however, when people are coerced, manipulated, or deceived into working with dangerous biological agents or toxins, which could occur if a research organization outsources dangerous biological research to a low-income country or any nation, state, or region with a poor track record of protecting human rights or worker safety. Although it is not known how often the outsourcing of dangerous biological research occurs, this is potentially an important ethical and political issue related to biosafety, especially since pharmaceutical and biotechnology companies often outsource research to minimize costs and regulatory burdens (Shah 2006; Schroeder et al. 2017).

The role of democracy in biosafety oversight
While informed consent can promote procedural fairness for those who choose to engage in work involving biological hazards and risks, it does not apply to people living in the vicinity of the laboratory or members of the greater public, who may not have consented to taking these risks (Resnik 2014; Kolopack and Lavery 2017). How can risks to people outside the laboratory be justified? A reasonable answer to this question is that society can make decisions concerning the acceptability and distribution of these risks through democratic processes. Democracy helps to foster procedural fairness by giving citizens equal rights to enact and shape the laws and policies that apply to them. Democracy is the "consent" of the governed (for further discussion of democracy and consent, see Gutmann and Thompson 1998; Rawls 2005; Christiano and Bajaj 2022).

8 A hazard is something that can produce harm. A risk is the probability that a harm will occur. Dangerous biological agents and toxins can be viewed as hazardous biological materials that pose a risk of harm.

9 One might also argue that employers have obligations to take reasonable measures to promote safety in the workplace (Resnik 2012).

10 For a person's consent to be valid, their decision must be free from coercion or undue influence, and they must have sufficient information to make a reasonable choice. Additionally, the person must be capable of making autonomous choices.
Political theorists distinguish between direct democracy, in which citizens vote on laws that affect them, and representative democracy, in which citizens elect representatives who make and execute laws on their behalf. Elected representatives may, in turn, appoint people to help them execute laws, and these appointees may hire other people, and so on. Because direct democracy is an impractical form of decision-making for a large group, most democratic nations and states are representative democracies. However, even representative democracies may include forms of direct democracy, for example, when citizens vote on referenda (Gutmann and Thompson 1998).
Most of today's representative democracies include many layers of bureaucracy that intercede between the public and government actions (Bertsou and Caramani 2021). For example, the person who is charged with executing US federal environmental laws, the EPA Administrator, is appointed by the President and confirmed by the Senate. The EPA Administrator appoints other administrators, who hire employees and solicit expert opinion. Although the public has oversight of the EPA, most of the agency's key decisions are made by employees, contractors, or volunteers with expertise in toxicology, epidemiology, medicine, statistics, and other relevant disciplines (Brown 2009; Resnik 2012). The public is not entirely excluded from this governance structure because expert panels often include public members and administrators usually consult with members of the public. Other agencies, such as the NIH, CDC, and FDA, implement a similar form of bureaucratic decision-making.11

Because powerful corporations, wealthy individuals, and political or industry groups can significantly influence government decisions by lobbying elected or appointed officials, making contributions to campaigns, or challenging laws or regulations in court, many theorists argue that representative democracies should incorporate deliberative forms of democracy to help ensure that government decisions are fair. Deliberative democracy consists of various methods for promoting in-depth discussion, thoughtful debate, and inclusive dialogue about government actions (Gutmann and Thompson 1998, 2004; Fishkin 2011). Some ways of practicing deliberative democracy include holding town hall meetings and listening sessions, sponsoring public debates, and soliciting comments on proposed laws, regulations, or decisions (Gutmann and Thompson 2004; Fishkin 2011). Community and public engagement (discussed below) are forms of deliberative democracy (Resnik 2018; Neuhaus 2018).
Transparency and openness are essential to all forms of democratic governance. That is, citizens must be able to access, use, and understand the information needed to make decisions concerning voting and other matters (Rawls 2005; Elliott and Resnik 2019). To promote transparency and openness pertaining to bio-risks, leaders of research institutions must inform community members and the public about proposals for dangerous biological research and risk-management plans so they can decide whether the risks are acceptable (Evans et al. 2015; Yeh et al. 2017; Author 2021; Haines and Gronvall 2023). Collecting and analyzing data on laboratory accidents, security breaches, and other adverse outcomes (mentioned earlier) can also help to promote transparency and openness.
Transparency and openness can be difficult to achieve because there is inherent tension between these values and other values, such as protecting the privacy of human research participants, intellectual property, research interests, and proprietary or classified information (Resnik 2006, 2020). However, it is usually possible for scientists and institutional officials to share information the public needs to know without compromising other values. For example, institutional officials can disclose the types of studies that are being proposed without revealing detailed information that could threaten the privacy of research participants or researchers' interests.
Including community members on institutional ethics committees, such as IBCs, IRBs, or ACUCs, helps to promote democracy by giving local people a voice in deliberations about proposed research (Presidential Commission for the Study of Bioethical Issues 2016). However, community representation on ethics committees is not nearly enough to ensure that decision-making related to bio-risks is democratic, because important choices must be made long before a study reaches the IBC, such as deciding whether dangerous biological research should be funded and conducted and how its risks should be managed. While some politicians and concerned citizens have been highly critical of government funding of GOF research, they have not said much about funding other types of research that raise biosafety and biosecurity issues. For example, during his questioning of Dr. Fauci, Senator Paul focused on the funding of GOF research by the NIH and did not address the issue of how many BSL-3 or BSL-4 labs the US needs to study infectious diseases or how safety at these labs can be improved (Basu 2021; National Institutes of Health 2023).
For an example of deliberative democracy related to building dangerous biological laboratories, consider the controversy concerning the $1.25 billion National Bio and Agro-Defense Facility (NBADF), a BSL-4 lab located in Manhattan, KS, which opened on May 23, 2023 (Eaves 2020; Associated Press 2023). Following the anthrax attacks of 2001, the US government decided to expand the number of BSL-3 and BSL-4 labs supported by federal agencies to develop countermeasures to bioterrorism (Eaves 2020). The Department of Homeland Security (DHS) initially proposed upgrading a BSL-3 lab on Plum Island, NY to a BSL-4 lab, but abandoned this idea after Senator Hillary Clinton and Representative Timothy Bishop opposed it (Eaves 2020). The DHS considered several sites for the NBADF but selected Manhattan because of its remoteness from heavily populated areas and its proximity to Kansas State University.13 Also, Kansas politicians, such as Senator Pat Roberts, encouraged DHS officials to build the NBADF in Manhattan because it would create 400 jobs and generate hundreds of millions of dollars each year for the local economy (Eaves 2020). Although elected officials played key roles in the selection of the site for the NBADF, it is not clear that the decisions they made reflected the will of the people. Many Manhattan residents strongly opposed construction of the NBADF because they feared that a lab accident could have devastating impacts on agriculture, especially livestock. While NBADF critics were able to express their views at public meetings held by DHS and communicate their opposition to elected officials, it is unclear whether their concerns had a significant impact on DHS's decision, other than to delay it (Eaves 2020).
Another example of democratic oversight of bio-risks occurred in the State of Florida from 2009 to 2022 (Resnik 2019). In 2009, the Florida Keys Mosquito Control District (FKMCD) approached Oxitec about conducting a field trial of its GM Aedes aegypti mosquitoes. Oxitec had developed GM male mosquitoes (which do not bite) with a genetic variant rendering their offspring incapable of reproducing because they die before reaching maturity unless they are dosed with the antibiotic tetracycline. Oxitec had already conducted field trials in the Cayman Islands and Brazil. Studies had shown that Oxitec's mosquitoes can cause Aedes aegypti populations to decline by over 90% (Carvalho et al. 2015). Aedes aegypti mosquitoes carry several diseases, including dengue, Zika, and yellow fever.
From 2012 to 2014, while Oxitec's application was under review at the US Department of Agriculture (USDA) and then at the FDA, the FKMCD polled residents about a proposed field trial in Key Haven, FL and held public meetings about it. Although participants in these meetings expressed serious concerns about the environmental and public health impacts of the field trial, initial polling showed that most members of the local community supported it (Resnik 2018; Schairer et al. 2021). However, environmental groups soon launched a media campaign against the field trial and asked people to communicate their opposition by submitting public comments to the FDA. Florida resident Milagros Demier sent a petition opposing the field trial to the FDA, which was signed by 165,000 people, more than four times the population of Key Haven (Schairer et al. 2021). In the spring of 2016, Oxitec received permission from the FDA to conduct a field trial in Key Haven. Later that year, Florida voters approved a referendum for GM mosquito field trials in the state of Florida and Monroe County voters approved a referendum for field trials in their county, but Key Haven voters did not approve the field trials, so the FKMCD and Oxitec decided to conduct the trials elsewhere. In 2021, following a public comment period, the EPA, which had taken over federal jurisdiction of GM mosquitoes, gave Oxitec permission to conduct field trials in Cudjoe Key, Ramrod Key, and Vaca Key, Florida, and in Harris County, Texas and Tulare County, CA (Oxitec 2022). In April 2022, Oxitec announced that the lethal gene was successfully transmitted to female mosquitoes in the Florida Keys field trials and that none of them reached adulthood. The study was a small pilot project that was not designed to test efficacy (Waltz 2022).
One could argue that the events that transpired in Florida represent a paradigmatic case of procedurally fair, democratic decision-making related to the oversight of bio-risks. The company sought permission for its studies from the relevant governmental authorities, and citizens had an opportunity to express their opinions through voting and by petitioning elected and appointed officials. Citizens of Key Haven were not forced to accept bio-risks they did not approve of, and Oxitec was able to obtain approval for field trials in places where there was not strong community opposition.
However, some commentators have argued that the democratic process in this case was not fair or effective (Meghani and Kuzma 2018). First, there was a lack of clear regulatory guidance concerning approval of the GM mosquito trials, because three federal agencies took turns claiming jurisdiction (Schairer et al. 2021). Lack of clear regulatory guidance was a problem not only for Oxitec and the FKMCD but also for those seeking to petition the government about the proposed field trials. Although the FDA received 2649 comments and the EPA received 500, it is not clear how the agencies incorporated these comments into their decision-making (Schairer et al. 2021). Second, because Oxitec and environmental groups had exerted substantial influence on government officials and the public, there is some doubt as to whether the votes that were taken reflected the will of the people or corporate or political interests (Meghani and Kuzma 2018). It is worth noting that public opinion about the field trials shifted from positive to negative after environmental groups inserted themselves into this controversy (Resnik 2018). Third, the process that could have played a key role in soliciting and forming public opinion in a fair and effective way, i.e., community engagement, was far from ideal (Schairer et al. 2021). FKMCD's "engagement" was a one-way sharing of scientific and technical information rather than a two-way dialogue. To be fair and effective, community/public engagement should promote open discussion in which parties on different sides of an issue share information, concerns, and values and try to understand each other's point of view. Engagement should be respectful, meaningful, and inclusive (Lavery et al. 2010; Tindana et al. 2015; National Academies of Sciences, Engineering, and Medicine 2016; Conley et al. 2023).
As one can see from these cases, there is a pressing need for more critical reflection and empirical research on how to meaningfully involve communities and the public in decision-making related to the assessment, management, and distribution of bio-risks (Schoch-Spana et al. 2017). Bioethicists can contribute to this discussion by examining moral concepts, principles, and practices related to community and public engagement.

Conclusion
Biosafety (and to a lesser extent biosecurity) tends to be viewed as a technical matter best addressed by scientific and public health experts. This article has shown, contrarily, that biosafety raises important ethical issues related to the assessment, management, and distribution of bio-risks, and that biosafety practice, education, and policy could therefore benefit from incorporating insights from bioethics. Some topics where bioethics may make important contributions to biosafety (and biosecurity) include:

• Weighing the risks and benefits of biological research.
• Contributing to deliberations about managing the risks of biological research, such as deciding which BSL level is appropriate for a study.
• Fostering a safety-oriented culture in a biolab or research institute.
• Incorporating biosafety and biosecurity issues into education and training in the responsible conduct of research.
• Thinking about what it means for laboratory workers to make informed, voluntary choices concerning participation in dangerous biological research, and potential barriers to effective consent, such as the outsourcing of dangerous biological research to low-income countries.
• Reflecting on the role of democratic processes and principles, such as community and public engagement, transparent/open communication, and public membership on research committees, in the oversight of bio-risks.
• Examining inherent conflicts between transparency/openness in science and other important values, such as protection of the privacy of human subjects, intellectual property, and proprietary and classified information, and how to resolve these conflicts.
• Developing codes of conduct related to biosafety and biosecurity.

There are probably other topics where bioethics can make contributions to biosafety that have not been mentioned above. Hopefully, the conclusions and proposals defended in this article will stimulate further discussion of the bioethics of biosafety.

Funding Open access funding provided by the National Institutes of Health. The funding for this research was provided by the National Institute of Environmental Health Sciences (Grant No. ziaes102646-10). This research does not represent the views of the National Institute of Environmental Health Sciences, the National Institutes of Health, or the US government.

Table 1
Some key events in the history of biosafety and biosecurity

2001       Anthrax terrorist attacks in the US kill five people and sicken dozens
2002       The US passes laws for control of select biological agents and toxins
2003-2004  One person dies and nine get sick as a result of SARS escaping from a lab in Beijing
2004       The National Research Council issues an influential report, Biotechnology in the Age of Terrorism

The Cartagena Protocol on Biosafety, a supplement to the Convention on Biological Diversity, was negotiated in the 1990s and adopted by 103 countries in 2000. The protocol, which has been signed by 173 countries, governs the safe transfer, handling, and use of GMOs (Cartagena Protocol on Biosafety 2023).

Table 3
Ethical issues related to biosafety