Because emerging biotechnologies are dual-use, governance must weigh the risk of misuse against the potential for beneficial use in innovation and development. Unfortunately, biosecurity efforts are mired in uncertainty around both the actual capabilities of synthetic biology and the motivations of actors, given the increasing number of contexts in which synthetic biology is used. Modern governments still rely on old rules to regulate a new technology, a clearly insufficient strategy for ensuring security in the coming decades.
Building an effective biosecurity strategy to encompass twenty-first century biotechnologies requires understanding the novelties that sciences like synthetic biology create in the biosecurity threat space, as well as the structural vulnerabilities these sciences can exploit and the likely causes of inadequate biosecurity practices. Synthetic biology’s novel biosecurity concerns arise from its broad scope, wide availability, complexity, and uncertainty over current and future capabilities. For example, critical developments such as CRISPR-based gene editing vastly improve upon previous genetic engineering processes and may yield a revolution in human and environmental health research, but they may also cause substantial and irreversible harms. One application of gene editing is the gene drive, which can rapidly propagate a particular set of genes or alleles through a population, circumventing Mendelian inheritance and increasing the chance that the set is passed on. While gene drives are an exciting new technology, their ability to rapidly alter the genetic makeup of a population is cause for concern. Other potential negative consequences of gene editing include the unconstrained diffusion of gene-edited material throughout the environment, the disruption of ecologies by genetically modified organisms (in particular engineered gene drive systems), and off-target effects of genome editing. These techniques could also be used maliciously, with an actor deliberately targeting humans and/or the environment.
The publication and dissemination of a methodology for synthesizing horsepox in a laboratory setting is a recent demonstration of these dual-use capabilities (Noyce et al. 2018). Some critics argue that this information could enable a nefarious actor to reconstitute and develop smallpox, or to synthesize other viruses. Additionally, the widely publicized recreation of the 1918 Spanish influenza virus (Tumpey et al. 2005), which killed some 50 million people worldwide at the close of the First World War, could facilitate the synthesis process for actors wishing to cause harm. Even nonpathogenic approaches have been described as dual-use research, ranging from the disruption of local ecologies via gene drives to the manipulation or destruction of inorganic materials.
These and dozens of other cases demonstrate the increasing ease with which an actor can acquire information and apply existing tools to deploy advanced genetic engineering applications with limited to no oversight. In 1975, the U.S. National Institutes of Health (“NIH”) established compliance measures for genome engineering that were enforced through funding restrictions; however, many synthetic biology innovators can now operate without NIH funding, approval, or even awareness, and NIH does not oversee research in other countries. Today, the financial costs, time limitations, and skill requirements needed to wield synthetic biology tools have scaled down such that some of these tools have become accessible even to elementary school students. Furthermore, the requisite baseline knowledge diminishes over time as synthetic biology processes become more streamlined. While such broad access to sophisticated genetic engineering knowledge and equipment can accelerate scientific breakthroughs, it also places the responsibility for biosecurity on a near-infinite number of unsupervised actors across the globe.
In 2018, the secretariat of the Biological Weapons Convention (“BWC”) noted that access to technologies such as gene editing, gene drives, and gene synthesis is increasingly available to actors with limited or no oversight from established industry or governmental organizations, raising concerns about potential violations of the BWC. It is helpful to forecast and understand looming threats and potential mitigation strategies at various scales, but international treaties are not structured to oversee bottom-up efforts related to the localization and globalization of synthetic biology below the national scale. One part of the solution may be broader engagement from established oversight agencies like NIH.
An additional option is the Responsible Research and Innovation (“RRI”) approach, utilized by the UK and the EU, which appraises the potential effects of new research on society and the environment in order to better align processes and expected outcomes with societal values and needs. RRI approaches include experts from a range of fields whose role is to assess scientific developments with the goals of mitigating risk, making research advances accessible through fair and sustainable means, and upholding key morals and values. Programs that adopt the RRI approach are not meant to prevent research or the publication of results, but rather to minimize downstream harms that could make developers, companies, and/or governments liable for costly insurance and cleanup efforts. RRI has become an important criterion for access to public funding but is not a regulatory requirement.
Biosecurity oversight could also signal to the general public that certain synthetic biology products have been vetted to ensure beneficial use. The US Nuclear Regulatory Commission (nrc.gov), for instance, performs this role for research and test reactors; the synthetic biology field would benefit from a similar regulatory body for biosecurity.
Where top-down governance proves insufficient, other actors such as universities, non-profits, and companies will need to engage their own gatekeeping and watchdog capabilities to protect against nefarious actors. Top-down governance may support such initiatives, which will require harmonization and communication up to the international level. These initiatives will need to be incentivized. Currently, though, biosecurity is viewed as an obligation, such that individuals, organizations, and companies must use their own funds to meet unstated and often confusing or contradictory requirements for overall security. This balance of costs and benefits is precarious, and as such, institutions tend to minimize expenditures associated with oversight (Gillum et al. 2018). The best argument for investing in biosecurity is that the advancement of synthetic biology ultimately requires public approval, whereas the public currently remains quite skeptical (Pauwels 2013; Oliver 2018). The public could grow more opposed to synthetic biology were it inadvertently exposed to some harm as a consequence of insufficient or inadequate oversight. Biosecurity therefore requires a strategy that incentivizes managers and corporations to stay up-to-date with the latest risks and concerns.
Some corporations are aware of the risks to their bottom line should the public be exposed to harm arising from a synthetic biology product. The majority of DNA synthesis companies, in fact, have joined the International Gene Synthesis Consortium (“IGSC”), through which they commit company resources to screening customers and their orders for potential security problems. They recognize that implementing such biosecurity measures is in their best interests, even though no legal regime requires them to do so. Likewise, in January 2020, the World Economic Forum and the Nuclear Threat Initiative published a report recommending that a technical consortium be established to create a common DNA sequence screening mechanism, building on the work of the IGSC.
Viewing longstanding biosecurity policy practices through the lens of risk analysis reveals significant gaps in biosecurity effectiveness for synthetic biology. These inefficient and inadequate practices include (a) viewing security as a cost or undesirable expense to be minimized, (b) the siloing of scholarship and practice across disciplinary domains and among government, industry, academia, and civil society, and (c) the narrow framing of security problems, which ignores new actors and technological developments taking place in a variety of countries and in adjacent technology fields. Each of these issues could be resolved through policy solutions that both encourage technological development and mitigate security threats while enabling public engagement in synthetic biology and investment in its products as they enter the marketplace. Policies for synthetic biology must be scalable, transferable, and adaptable in order to take into account its emerging technical and social challenges.
The increasingly globalized, distributed, and dispersed nature of synthetic biology products and research compounds the challenges arising from differing practices of biosecurity governance worldwide. Advanced biological research is no longer overwhelmingly dominated by Europe and the US, and this may introduce different approaches to, or priorities for, biosecurity. Russia’s Federal Research Programme for Genetic Technologies Development for 2019–2027, for instance, intends to “implement a comprehensive solution to the task of the accelerated development of genetic technologies, including genetic editing; to establish scientific and technological groundwork for medicine, agriculture and industry; to improve the system of preventing biological emergencies and monitoring in this area” (Ministry of Science and Higher Education of the Russian Federation 2019). Similarly, Saudi Arabia is funding research related to the development of microbial cell factories to produce fuels and chemicals, while the Singaporean government is investing considerable resources in life and environmental sciences research at Nanyang Technological University, the National University of Singapore, and the Agency for Science, Technology and Research (A*STAR). The Chinese Academy of Sciences is establishing an Institute of Synthetic Biology, which is tasked with the dual responsibilities of fostering roadmaps for the future development of Chinese synthetic biology while also establishing safety and security norms for researchers at Chinese institutions. There are no top-down efforts beyond existing mechanisms like the BWC or the CWC (the Convention on the Prohibition of the Development, Production, Stockpiling and Use of Chemical Weapons and on their Destruction) that standardize global governance and usage of synthetic biology, and bottom-up efforts are not coordinated in their reach or messaging.
Relative newcomers to the development of synthetic biology may possess differing tolerances and constructions of risk compared to more established technology developers. The implications of these newcomers’ entry into the field, though vast, can be grouped into two general areas. One includes diverging safety and security practices at various points of an international supply chain that forms the backbone of an increasingly globalized economy. The other includes the potential for small-scale experiments or national biosecurity policies to escape the given actor’s control and spill across political boundaries. While one country may find the environmental risk of a particular synthetic biology application acceptable, its spread across borders into another country may disrupt local ecologies (e.g., by crashing or hardening a particular species through genetic engineering) or expose vulnerable human populations to irreversible consequences without options for amelioration. The nature of certain synthetic biology applications (e.g., gene drives) makes it impossible for risk-averse countries to wholly quarantine themselves from exposure to harms resulting from another country’s decisions. This is also an issue of equity, given that risk-tolerant countries will reap the rewards of risks when beneficial technologies emerge, but risk-averse countries may bear their neighbors’ risks without any means to capture potential rewards.
An environment of competing and incongruent risk architectures leads individual states, organizations, and industries to arrive at differing definitions of security threats, or of acceptable levels of loss in pursuit of a technology’s intended gains. For a technology as uncertain as synthetic biology, this divergence may set governments, companies, and other research organizations down vastly different policy paths, impeding consensus on the minutiae of technical risk concerns and assessment protocols and, ultimately, undermining security for everyone.