Thus far, we have outlined existing technical aspects of the biosecurity framework and identified some gaps. We identified a set of imminent opportunities for additional technology development that would be beneficial in the short term. We also noted that other gaps are likely to be addressable only in the medium- to long-term future. We break these opportunities into the general categories of biological threat Prevention, Detection, and Response, and subdivide each of these areas into more specific topics in each section (Fig. 10.1).
10.3.1 Biological Threat Prevention
Since biological engineering projects are now often done through cycles of Design, Build, and Test, we discuss these imminent threat prevention opportunities below in that context. We identify the current approaches in these areas and suggest key ways in which new technology could be developed to strengthen biosecurity.
10.3.1.1 Design

The first part of the engineering process is the Design phase. This is when the biological engineer makes key decisions about what DNA sequences will be involved in a project and what function each DNA element is supposed to have. In this starting step lies an opportunity to incorporate biosecurity features into synthetic DNA from the very beginning. In recent years, a number of software tools have been created for synthetic biology that automate the design process,Footnote 16 so there exists an opportunity to add biosecurity-specific tools to these frameworks.
10.3.1.1.1 Specification

The first step of the design process is abstract: the high-level design requirements are defined before a concrete design with specific real components is formulated. This step is called ‘Specification’. Examples of specifications that have been used in synthetic biology include logic gate behaviour, toggle switch behaviour, and oscillatory behaviour – in these examples, the very high-level intentions of a genetic construct are defined, but no actual DNA sequences are yet selected. This step is useful because it defines the overall purpose of an engineered system before committing to the actual components needed to create a fully designed system. The specification is then generally fed automatically into a downstream design tool that chooses and arranges components to satisfy it.
While some software tools for specification in synthetic biology already exist, there are not yet any tools specific to the specification of biosecurity features. This working group concluded that it would be useful to create such a software tool, and discussed features that would be ideal to incorporate into it. We thought it would be desirable to integrate security considerations into desirable system properties such as ‘biocontainment’ features. Such a tool would also provide an opportunity to specify whether DNA sequences of known threat status may be incorporated into designs, and to create a direct link to downstream design tools.
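As an illustration of what such a tool might capture, the sketch below models a specification record carrying biosecurity fields alongside the high-level behavioural intent. The class name, field names, and flag semantics are hypothetical assumptions, not part of any existing framework.

```python
from dataclasses import dataclass, field

@dataclass
class DesignSpecification:
    """High-level design intent, declared before any DNA sequence is chosen.
    All field names here are illustrative placeholders."""
    behaviour: str                        # e.g. "logic_gate", "toggle_switch", "oscillator"
    require_biocontainment: bool = False  # must the final design include a containment feature?
    allow_threat_listed: bool = False     # may downstream tools select threat-listed sequences?
    notes: list = field(default_factory=list)

# A downstream design tool could read these flags before choosing any parts.
spec = DesignSpecification(behaviour="toggle_switch", require_biocontainment=True)
```

A downstream design tool receiving this record would refuse to select threat-listed parts unless `allow_threat_listed` were explicitly set, making the biosecurity decision an auditable part of the specification itself.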
The creation of such tools could also lead towards the creation of design standards with respect to biosecurity. While our working group decided that biosecurity specification standards were a good idea, it is less clear how they would be enforced. For other specification topics in the field, complying with a specification standard is a voluntary best practice. In some limited cases, such as within the iGEM community, a form of project specification with security considerations is submitted to the central organization for approval before specific designs are made, but this is not a scalable practice. In principle, a decision tree could be devised to help an engineer determine whether a specification should move forward, but most likely, in the short term, complying with standards would have to remain a voluntary best practice.
10.3.1.1.2 Design Tools
After the specification step of design comes the selection of specific components to satisfy the specification. This is still most often done manually by a user in various software interfaces, but it can also be done automatically with design tools. The output of this design step in a synthetic biology context is a complete DNA sequence. While current genetic design software generally does not screen DNA sequences for hazardous fragments at this step, there is an opportunity to perform an in silico screen for potentially hazardous sequences here.
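A minimal sketch of such an in silico screen is shown below, assuming a hypothetical local database of exact hazard k-mers; production screening would instead rely on alignment tools against curated reference sets, but the design-tool hook would look much the same.

```python
# Minimal in silico screen: flag a candidate design if any window of it
# exactly matches a window in a (hypothetical) hazard database. Real
# screening would use alignment (e.g. BLAST-style) against curated data.

HAZARD_DB = {"ATGCCGTACGGATTACCAGT"}  # placeholder 20-mer entries

def screen(sequence, k=20):
    """Return the sorted list of hazard k-mers found in `sequence`."""
    windows = {sequence[i:i + k] for i in range(len(sequence) - k + 1)}
    return sorted(windows & HAZARD_DB)

design = "TTT" + "ATGCCGTACGGATTACCAGT" + "GGG"
hits = screen(design)  # non-empty: the design embeds a hazard k-mer
```

A design tool could call such a function every time a component is placed, warning the engineer long before an order reaches a synthesis provider.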
While the DNA synthesis and assembly step is a clear place to look for matches to known threats, design tools could be an ideal environment in which to perform modelling and analysis for potentially less obvious threats. In principle, whole-cell modelling could be developed to determine, via Flux Balance Analysis,Footnote 17 whether over-expression of certain agents would disable cell metabolism. Models could also be developed to predict whether a specific protein resembles a hazardous agent or whether a viral agent could pose a risk to a specific model system or cell type.
These types of modelling approaches applied to a complete DNA sequence could be extremely valuable for mitigating biosecurity risk before synthesis, but very few groups are currently capable of building such models. Even then, it is hard to validate these models to the point that the design tools relying on them can be used reliably. Although some powerful design tools with highly successful design automation functionality for genetic logic circuits have recently been published,Footnote 18 building tools for biosecurity threats would be more abstract, requiring knowledge of the environment, and potentially of a community of cells, on top of whole-cell modelling. This could be simplified for in vitro cell-free systems, but knowledge obtained in vitro cannot be directly applied to in vivo systems.
10.3.1.1.3 Selecting Chassis
Once a final DNA sequence for a design is determined, it must be decided which ‘chassis’ (i.e. organism/model system) the DNA will be used in. Generally this information is already included in the DNA design, but when one considers the environment in which the DNA is introduced into a cell, additional considerations become relevant (especially biosecurity considerations). Furthermore, some complete organisms have traditionally been designated as dangerous agents (e.g. Yersinia pestis) – while the organism is pathogenic as a whole, the vast majority of the genes in these cells are harmless to humans. It would be valuable to create design tools that also consider the functionality of the chassis the DNA is implanted into – for example, is it possible to add DNA to a chassis that either makes a previously harmless chassis harmful or a previously harmful chassis harmless?
The simplest chassis is one that uses only cellular components, but no complete cell (i.e. ‘cell-free systems’). The use of these chassis can simplify the analysis of whether an agent is harmful, but if the DNA were to reach a living organism inadvertently, it would be hard to know whether it could become harmful. The most complicated chassis use case is a future unnatural, engineered organism, or an organism that is rarely used and poorly understood in the literature. In such cases, it would be nearly impossible to say with certainty whether a DNA sequence is harmful. This gets further complicated if one must determine which organism or type of cell is harmed, as not all biological threats to humans are direct threats to human cells (e.g. threats to agriculture). In summary, there is a ripe opportunity to develop tools that consider the organisms DNA is used in when determining DNA threat status.
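One way such a chassis-aware tool might combine sequence and chassis information is sketched below. The chassis names, risk tiers, and decision rules are purely illustrative assumptions; the point is that the same sequence can warrant different verdicts in different host contexts.

```python
# Sketch: threat assessment that depends on BOTH the sequence flag and
# the chassis. Chassis labels and rules are illustrative, not a real
# classification scheme.

CHASSIS_RISK = {
    "cell_free": "low",           # no replication machinery present
    "e_coli_k12": "moderate",     # well-characterised lab strain
    "novel_organism": "unknown",  # poorly characterised: cannot assess
}

def assess(sequence_flagged: bool, chassis: str) -> str:
    """Return a screening verdict for a sequence/chassis combination."""
    base = CHASSIS_RISK.get(chassis, "unknown")
    if base == "unknown" or sequence_flagged:
        return "manual_review"  # cannot say with certainty, or known hit
    return "proceed" if base == "low" else "proceed_with_logging"
```

Under these assumed rules, an unflagged sequence sails through in a cell-free context, while the same sequence destined for a poorly understood organism is routed to human review.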
10.3.1.1.4 Tools to Enhance Tracking of Users and Research
Finally, in addition to the many purely technical opportunities that exist for biosecurity development in the near future, there is also the opportunity to track activities at the design phase. At present, DNA threats are typically caught at the Build step, but if a user logs all of their design thinking, there is a greater ability to warn them of potential DNA threats before they start physical construction of the DNA. If their design were linked to databases where threat information can be automatically queried, projects that accidentally use hazardous DNA components could be flagged earlier. Further in the future, if these types of tracking and logging were done at the design level, additional high-level adaptive management of new threat information could be incorporated seamlessly.
10.3.1.2 Build

After the design process, the next step of a biological engineering process is to plan the Build step. In this case, our construction material is DNA. As discussed in this chapter, some biosecurity frameworks relating to DNA synthesis and assembly already exist, along with some gaps. In this section, we discuss some detailed near-term opportunities for technology development and technical guidelines in the Build phase.
Over the past 10 years, there has been huge technological advancement in the DNA synthesis field, which has caused a shift in the balance between DNA synthesis and DNA assembly.Footnote 19 In the past, the majority of DNA building was done in labs via PCR amplification and DNA assembly. This was necessary for building large constructs, as it was only economical to synthesize short fragments of DNA called DNA oligonucleotides (i.e. ‘oligos’). However, because DNA synthesis of large fragments has become so much cheaper, it is now often more economical to simply outsource synthesis of most fragments and only assemble these large fragments in the final step, as opposed to relying primarily on traditional molecular cloning. As a consequence, most imminent opportunities in this space fall under the purview of DNA synthesis as opposed to DNA assembly. The working group identified a number of concrete avenues for strengthening current screening procedures and practices.
10.3.1.2.1 Who Should Be Screening Synthesis Production in the Future?
One general gap identified in the prior section was a lack of consensus on who ideally should be screening synthesis production. While the current system has the DNA producer (i.e. synthesis company, bench biologist, etc.) self-regulating using guidelines, as DNA synthesis becomes a more and more accessible technology, this might not remain the case. The two primary alternatives to self-regulation would be having licensed companies that provide DNA screening as a paid service or having government agencies commit resources to perform this service. Each of these alternatives has pros and cons, but both approaches generally require a centralized screening tool.
The benefit of having governments in charge of screening DNA orders is that they have a direct link to regulatory structures and no direct profit incentive. It would also be advantageous from a centralization perspective – there could be one screening tool and one database, with no need to verify that multiple screening tools and databases are screening correctly. Furthermore, a government is able to control exports of physical items (i.e. DNA). This could be a good mechanism for ensuring that hazardous DNA is not produced and exported, with legal punishment for violators. In addition, governments have access to additional information via state intelligence programs to perform customer screening (i.e. a government would have existing lists of individuals and organizations considered dangerous to which it would not allow synthetic DNA to be delivered). If done in an ideal way, with neutrality towards DNA synthesis producers and consumers, the government screening option could be a good solution. Unfortunately, in reality it might not be that simple. Currently this industry functions almost entirely free of regulation – adding regulation is always a messy process, and each government has different attitudes towards business and science. Furthermore, a rogue state with control over the DNA synthesis industry could become a more general existential threat.Footnote 20
Another alternative is having licensed companies provide screening as a service to the DNA producer. While one such company already exists,Footnote 21 more could exist in the future. As with the government solution, this approach uses a centralized tool and database to perform screening, but it is managed by a company, which has different incentives and concerns. One advantage of this solution is that it is consistent but not directly tied to regulations, which allows synthesis producers to operate with more freedom. The main complexity of this solution lies in intellectual property concerns – namely, companies have an interest in not divulging their IP to other companies, out of concern that the screening company could in principle use sequence information from the screen for economic gain. This solution requires a high degree of trust between the companies producing DNA and those screening it. It also raises all sorts of liability issues that must be negotiated between all participating parties, and could be more complex than a government-based solution.
10.3.1.2.2 A Stratified White List Approach for DNA Synthesis Production
After a clear decision is made on which parties will perform DNA synthesis screening in the future, a general strategy on which DNA should and should not be allowed for synthesis must be formulated. Under current guidelines, screeners use a Black List approach – all sequences are allowed except those that closely match DNA sequences designated as potentially hazardous. When DNA consumers make requests that hit the Black List, their orders are flagged and the DNA producer follows up with the consumer to verify whether or not they should receive the DNA. While this solution works well for sequences of known threat (i.e. fragments of the Smallpox genome), it cannot handle new threats or threats not currently deemed hazardous enough to make the Black List. The result is that a large volume of sequences with some threat potential is likely produced and distributed today. The Black List approach works well if the list remains static, but we know that in the synthetic biology space this is not a realistic expectation.
The reverse of the Black List approach is a White List approach – a White List contains a large library (or generic definition) of sequences that give no cause for concern. In a White List-centric approach, only orders that hit the White List are allowed, and anything not on it is refused. The primary difference between the White- and Black-List approaches is how the ‘grey’ area is handled. In a Black List paradigm, things in the grey area are allowed, and resources are spent confirming that they are not technically on the Black List. In a White List paradigm, things in the grey area are generally not allowed.
This working group proposed the idea of a ‘Stratified White List’ approach. This framework is essentially a White List approach with exceptions for highly trusted partners. These highly trusted partners could be institutions with clear approval to work with specific hazardous sequences – depending on the research being performed, there would be different categories of White Lists. The proposed categories of Stratified White Lists could be broken down as follows:
CATEGORY 1: Basic molecular biology labs with institutional approval to do work in BL1 (or equivalent) with no declared intention of working on sequences that might pose a threat

CATEGORY 2: Labs with permission to work on one specific agent or set of agents with established threat status

CATEGORY 3: Labs with permission to work on a broad set of agents with known threat status
By default, all customers (new and existing) would automatically be placed in CATEGORY 1, provided that they have proof that they are working in an established institution (i.e. not a private address with no specific permission to work with DNA), and would then need to pass certain certifications to move into CATEGORY 2 or CATEGORY 3. CATEGORY 1 would include labs at academic and industrial institutions and DIY community labs. This certification process would need to place minimal burden on the customer, while making it clear that the customer has institutional approval to work with certain types of agents before being approved for higher categories. This process could potentially be tied to the IGSC or managed in some part by a similar organization.
In general, the Stratified White List system would cover most examples of DNA to be produced, but there are a couple of important edge cases that would require more thought. First, certain mammalian genes (i.e. insulin) could be overexpressed in ways that make a gene on the White List harmful to human cells. Second, this approach still does not solve the problem of requested sequences that match no known DNA sequence in the screening database.
While the Stratified White List approach gives a clear tiered system, one issue that could arise is how to deal with middlemen or intermediary institutions giving those at a lower tier access to higher-tiered DNA. For example, instead of a bench scientist ordering directly from a DNA producer, they might regularly order through a local supplier. Or a user with high-category clearance might give inactivated forms of agents to lower-category parties on the premise that the second party will not mutate the agent back into its active form. To address this problem, we would recommend an end-user certificate to validate that the party physically using the DNA is on the right category of White List. While this proposed system does not completely solve the ‘middleman problem’, the problem is also unsolved in the current Black List approach.
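The core decision logic of a Stratified White List is simple to sketch. In the toy implementation below, the category numbers match the breakdown above, while the agent tags are hypothetical placeholders; anything not on a customer's list is refused by default, which is exactly how the grey area is handled differently from a Black List.

```python
# Sketch of the Stratified White List: a customer's certified category
# determines which agents may be synthesised for them. Agent tags are
# illustrative placeholders, not a real agent taxonomy.

WHITE_LISTS = {
    1: {"benign"},                                    # BL1-equivalent work only
    2: {"benign", "agent_x"},                         # one approved agent
    3: {"benign", "agent_x", "agent_y", "agent_z"},   # broad approval
}

def order_allowed(customer_category: int, agent_tag: str) -> bool:
    """Grey-area default is refusal: only explicit White List hits pass."""
    return agent_tag in WHITE_LISTS.get(customer_category, set())
```

An end-user certificate, as recommended above, would then bind `customer_category` to the party physically receiving the DNA rather than to an intermediary placing the order.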
10.3.1.2.3 Functional Equivalence of Sequences
The current basis for determining whether a sequence should be built is founded on lists of known harmful sequences. However, it is broadly acknowledged that there is a much larger set of potentially threatening sequences that are not currently on these lists. This is unaddressed in any current screening framework, but there has been discussion of methods to assign functional equivalency of sequences – the task of determining whether a sequence is ‘similar enough’ to a known threat to warrant pause before DNA synthesis.
Discussions as recently as 2008Footnote 22 deemed this scientific pursuit too challenging a problem to seriously consider. At that time, it was thought that a nucleotide sequence similarity of 80% could be useful to identify sequences of potential threat. This sequence matching approach had many problems – chiefly that sequence identity is not necessarily a good predictor of function – and was subsequently abandoned. However, in recent years, there have been huge advances in machine learning in biology and an explosion of DNA production for genetics research. Moreover, at the time, many sequence databases were new and therefore sparse and error-prone. Now that vast, accurate databases of sequences exist and machine learning in biology has gotten off to a strong start, perhaps it is time to revisit the idea of building tools to predict functional equivalency that are not based solely on nucleotide sequence.
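The abandoned identity-threshold idea reduces to a calculation like the one below (the sequences are toy examples). Its weakness is visible even at this scale: two sequences can exceed any identity threshold yet encode different functions, since a single active-site substitution can abolish or redirect activity, and conversely a distant sequence can be functionally equivalent.

```python
# Naive percent-identity between two pre-aligned sequences: the metric
# the 2008-era 80% threshold relied on. Toy sequences for illustration.

def percent_identity(a: str, b: str) -> float:
    if len(a) != len(b):
        raise ValueError("sequences must be pre-aligned to equal length")
    matches = sum(x == y for x, y in zip(a, b))
    return 100.0 * matches / len(a)

# 95% identical, but the 5% could be the residues that determine function.
identity = percent_identity("ATGCATGCATGCATGCATGC", "ATGCATGCATGCATGCATGA")
```

Modern functional-equivalence tools would instead operate on predicted protein features rather than raw nucleotide identity, which is why the machine learning advances mentioned above change the calculus.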
10.3.1.2.4 DNA Assembly and Smaller DNA Synthesis Providers
As discussed, there has been a large shift in recent years towards de novo DNA synthesis over traditional DNA amplification and assembly. However, DNA assembly of small fragments is still performed widely in the community. While such DNA assembly is more time-consuming and sometimes more expensive than DNA synthesis, it creates some problematic edge cases for the existing screening framework. First, since the current screening guidance only covers fragments of size ≥200 bp, one could order many small fragments of a hazardous agent and assemble them in a lab without being detected. Second, if a user already has access to some fragments of hazardous DNA, they can order oligonucleotides to mutate and assemble full-length agents. A near-term opportunity in this space is to build software tools that account for DNA assembly. Software could also be developed so that a DNA purchaser’s account is flagged if they order a large set of small fragments that partially match a known dangerous agent, or if an account suddenly logs ‘unusual’ ordering activity, as is sometimes done with ATM withdrawals at financial institutions.
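An assembly-aware screen might accumulate an account's small orders over time and measure how much of a hazardous reference they jointly cover, even when every individual fragment falls below the screening threshold. The sketch below uses toy sequences and an assumed flagging threshold of 80% coverage.

```python
# Sketch: flag an account whose accumulated small orders jointly cover a
# large fraction of a hazardous reference sequence. Toy data; a real tool
# would also handle reverse complements and near-matches.

def coverage_of_reference(fragments, reference):
    """Fraction of `reference` positions covered by any ordered fragment."""
    covered = [False] * len(reference)
    for frag in fragments:
        start = reference.find(frag)
        while start != -1:
            for i in range(start, start + len(frag)):
                covered[i] = True
            start = reference.find(frag, start + 1)
    return sum(covered) / len(reference)

reference = "ATGAAACCCGGGTTTACGTAGCTAGCATCG"       # stand-in hazard sequence
orders = ["ATGAAACCCG", "GGTTTACGTA", "GCTAGCATCG"]  # three sub-threshold orders
frac = coverage_of_reference(orders, reference)

FLAG_THRESHOLD = 0.8  # assumed policy threshold
flagged = frac >= FLAG_THRESHOLD
```

Because the state is per-account rather than per-order, this is the DNA analogue of the ‘unusual activity’ heuristics used for ATM withdrawals mentioned above.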
The DNA assembly problem is an important area to address since many types of parties still do this routinely. Organism design companies, automated platforms, cloud labs, CROs, guide RNA service providers, and other service providers regularly produce small DNA fragments in house. This issue will become even more pressing with the advent of bench-top DNA synthesizers.
10.3.1.2.5 Attribution and Tracing
Finally, while attribution tools have gotten off to a strong start, additional signals could be factored into these tools, including lab-specific codon optimization choices and patterns of synthetic biological parts usage. The synthetic biology community often uses different codon optimization schemes for their parts and often re-uses combinations of characterized parts to build complex genetic circuits, so these additional dimensions could strongly aid existing attribution efforts.
In recent years, there has also been a widespread adoption of ‘DNA barcoding’ techniques for many areas of biotechnology and there could also be opportunities to institute a DNA barcoding system for DNA synthesis in certain capacities. This would require standardization and a consensus of how to do the barcoding, but it could be a useful way to program attribution into the DNA synthesis workflow.
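A barcoding scheme could, for example, map an order identifier to DNA bases at two bits per base, so the identifier can be embedded in a synthesised construct and read back later for attribution. The encoding below is an illustrative assumption, not an agreed standard; a real scheme would add error correction and avoid biologically active motifs.

```python
# Sketch of a DNA barcode: encode an integer order ID as DNA, two bits
# per base, and decode it back. Illustrative scheme, not a standard.

BASES = "ACGT"  # A=00, C=01, G=10, T=11

def encode_id(order_id: int, length: int = 8) -> str:
    """Encode `order_id` as a fixed-length DNA barcode (most significant first)."""
    bases = []
    for _ in range(length):
        bases.append(BASES[order_id & 0b11])
        order_id >>= 2
    return "".join(reversed(bases))

def decode_id(barcode: str) -> int:
    """Recover the integer ID from a barcode produced by encode_id."""
    value = 0
    for base in barcode:
        value = (value << 2) | BASES.index(base)
    return value
```

The standardization problem noted above is exactly the choice of such a mapping (plus placement and error-correction rules) that all synthesis providers would have to share.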
10.3.1.3 Test

The final part of the Design-Build-Test cycle is Test – methods for obtaining and analysing data. While the Design and Build phases predict or assume a certain degree of functionality of an agent, in the Test phase these qualities are scientifically determined. In the context of biosecurity, this is where the actual threat capability of any given agent is determined. Here, much of the technological focus is on the setting (both physical and biological) in which the testing of potential bio-threats is done, since it is dangerous to test potentially hazardous agents in an open, uncontrolled setting. Hence, most relevant concerns with respect to testing revolve around containment.
10.3.1.3.1 Physical Containment
The first layer of containment is physical containment – controlling where certain types of agents are stored and worked with by scientists. While this is conventionally handled in physically secure labs with different levels of chemical and biological agent clearance, emerging DNA technology has made the problem more complex. Specifically, with the widespread use of synthetic DNA and its incorporation into model organisms, do standard decontamination and waste procedures suffice for eliminating biothreats?
In general, biological waste is either treated with bleach before being poured down the drain or sent for incineration (re-usable containers for research materials are autoclaved at high temperature and pressure). It is assumed that these procedures are broadly effective at containing biological threats, but given the stability of DNA, this generalization should be revisited in near-term research. It is known, for example, that standard sterilization techniques do not fully degrade double-stranded DNA fragments,Footnote 23 leaving whole genes unmodified. While it is not known how much DNA would be needed to enable horizontal gene transfer, there is a knowledge gap regarding the ramifications of allowing DNA to escape labs via current sterilization processes. This working group identified measuring levels of synthetic DNA in waste collection and the general environment as a key area of opportunity in physical containment. As an extension, another area of imminent technology development would be using new technologies to set up a surveillance network to track when DNA fragments of interest are detected at specific physical locations. Such a surveillance network could ground many of our assumptions about the physical spread of biologics from laboratories.
10.3.1.3.2 Biology-Based Containment
A second layer of containment is biology-based containment. Biology-based methods contain organisms using programmed biological features. The key difference in this containment approach is that it allows engineered organisms into the environment outside of a controlled facility. While this is traditionally avoided, there could be large benefits in using engineered organisms in the environment for applications such as bioremediation, where organisms could be used to clean the environment of toxic molecules or pollution.
One early biology-based containment method is the use of antibiotic resistance genes such that only bacteria carrying that gene can grow on a substrate. Later, other approaches to biology-based containment were developed, including the use of cell lysates (i.e. cell-free systems) and partial organisms (i.e. lentiviral packaging) to control biological spread by removing parts of the biology required for replication. In more recent years, technologies such as recoding,Footnote 24 kill-switches,Footnote 25 and gene drivesFootnote 26 have been introduced to engineer biocontainment such that organisms can be used in certain field applications without the ability to escape the controlled environment.
Kill-switch technology describes programmed mechanisms by which a human observer can change the environment an organism is placed in so as to cause the organism to rapidly die. This has been engineered for both temperature and other environmental triggers. Additional technology development for fine control of these mechanisms, using genetic logic gatesFootnote 27 or an engineered microbiome,Footnote 28 could provide more sophisticated control of containment. Technology development in this area would have high near-term impact for biosecurity as we think about how to introduce new, impactful biological applications while taking proper measures to control the spread of engineered organisms if they do not behave as desired. It could also impact the desirability of biological weapons, should we develop capabilities to accurately confine engineered systems to specific locations.
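The logic-gate framing can be made concrete with a toy survival function: in the sketch below, survival requires BOTH a permissive temperature AND a chemical inducer (an AND gate on survival), so removing either input activates the kill mechanism. The trigger conditions are illustrative, not a description of any published circuit.

```python
# Toy model of a two-input kill-switch: the organism survives only while
# both permissive conditions hold. Inputs are illustrative assumptions.

def survives(permissive_temperature: bool, inducer_present: bool) -> bool:
    """AND gate on survival: losing either input triggers the kill mechanism."""
    return permissive_temperature and inducer_present

# Full truth table: only one of the four input states permits survival.
truth_table = {
    (temp, ind): survives(temp, ind)
    for temp in (True, False)
    for ind in (True, False)
}
```

Layering more inputs (as genetic logic gates allow) shrinks the fraction of environmental states in which the organism can persist, which is the containment value of finer control.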
A second new technology for biocontainment, called ‘recoding’, is a method for containing engineered agents by requiring them to use an alternate genetic code for survival. This has been done for E. coli, where the recoded bacteria, called rE. coli, require the addition of unnatural amino acids to the environment to survive, and thus cannot grow in environments lacking the unnatural additive.Footnote 29 There is ongoing work in the field to expand this technology to new organisms and greater scale. Further near-term development in this area could lead to the creation of organisms that are safe for use in the environment because they fundamentally cannot survive in natural settings. A key step will be experimentally demonstrating that this is true.
A third new technology, ‘gene drives’,Footnote 30 has been proposed as a genetic mechanism for the control of population genetics. This technology uses engineered inheritance to guarantee the passing of certain genes via sexual reproduction in eukaryotes. The result is that populations could, in principle, be culled or controlled using the gene drive mechanism. The advent of this technology has drawn in large-scale science fundingFootnote 31 to determine whether this approach has efficacy at large scale and to develop technologies such as reversible gene drives to correct potential mistakes. There have also been efforts to limit the spread of gene drives to specific locations.Footnote 32,Footnote 33 The primary model organism used thus far for gene drives is the mosquito, since suppression of mosquitos in certain regions could be used to suppress the spread of malaria and other diseases. This area is ripe for additional technology development and application to more species if it can be proven controllable in current research efforts.
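The super-Mendelian dynamics that make gene drives both powerful and risky can be sketched with a simple deterministic model, assuming random mating, a heterozygote conversion rate c, and no fitness cost; real models add fitness effects, resistance alleles, and spatial structure. With c = 0 the model reduces to ordinary Mendelian inheritance and the allele frequency stays put, while any c > 0 pushes the drive allele towards fixation.

```python
# Toy deterministic model of gene-drive spread. Assumes random mating,
# heterozygote conversion rate c, no fitness cost: an illustration of
# super-Mendelian inheritance, not a predictive ecological model.

def next_freq(q: float, c: float) -> float:
    """Drive-allele frequency in the next generation.

    Heterozygotes (freq 2pq) transmit the drive with probability
    c*1 + (1-c)*0.5 = 0.5 + 0.5*c, since conversion makes them
    effectively homozygous for the drive.
    """
    p = 1.0 - q
    het = 2.0 * p * q
    return q * q + het * (0.5 + 0.5 * c)

def generations_to_fixation(q0: float, c: float, threshold: float = 0.99) -> int:
    """Generations until the drive allele exceeds `threshold` (requires c > 0)."""
    q, gens = q0, 0
    while q < threshold:
        q = next_freq(q, c)
        gens += 1
    return gens
```

Even from a 1% starting frequency, a perfect drive (c = 1) reaches near-fixation in roughly a dozen generations in this model, which is why reversal drives and spatial confinement are active research priorities.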
This working group discussed ways in which the existing methods could be used synergistically to create additional layers of biocontainment. For example, one could imagine a two-component system combining artificial dependence on certain environmental conditions with dependence on an antibiotic, where the absence of the antibiotic triggers the expression of factors that kill the cells. This type of containment system could also exist between an animal and a bacterium that depend on each other, such that one dies out in the environment without the other. Other ‘xenobiotic’ biocontainment examples could be developed to create complex, layered levels of biocontainment in the near future. This path forward will require much greater inclusion of ecologists in relevant research areas. Even then, there will be some risk in such projects. Some reversion might be possible through the use of kill switches and reversal drives, but it is likely that some changes will be permanent, depending not only on the system but also on the population size, where it is released, and the fitness of the organism in the environment it is released into.
10.3.1.3.3 Horizontal Gene Transfer
Finally, in hypothetical cases of genes escaping containment, we must consider how to mitigate horizontal gene transfer. In principle, horizontal gene transfer has a certain pace at which new DNA is introduced into a new bacterium by chance. The new DNA often has only a limited beneficial metabolic function and must not be toxic to the cell, or it will be lost. Evolution may change the DNA into genes with a more central role in metabolism or increase their expression; otherwise the DNA may be lost again. In today’s globalized world, with many different antibiotic drugs in use, the selective pressure on bacteria has never been greater, strongly favouring strains that acquire resistance genes. The rate of acquisition of gene cassettes therefore varies greatly.
Bacteria employ a variety of mechanisms to transfer genes horizontally, such as transformation, transduction and conjugation. Natural transformation is a process by which cells take up naked DNA from the environment. It involves multi-component cell envelope spanning structures, such as type II secretion systems (T2SS), type IV secretion systems (T4SS) and type IV pili. In transduction, DNA is transferred with the help of bacteriophages and conjugation requires physical contact between a donor and a recipient cell via a conjugation pilus, through which genetic material is transferred.
So what is transferred? A broad spectrum of mobile genetic elements, such as plasmids, transposons, bacteriophages, and genomic islands, is transferred, and these elements come to account for a large proportion of bacterial genomes as evolution proceeds. An example of selective pressure is the acquisition of copper resistance (along with resistance to arsenic and cadmium) – comprising the czc/cusABC and copABCD systems – in the kiwifruit pathogen Pseudomonas syringae pv. actinidiae.Footnote 34 The pathogen infected the first plantation in Australia in 2010, and by 2016, 25% of all samples taken were resistant to copper treatment.
With the development of modern molecular biology tools, countless new DNA constructs have been released into nature when biological waste is deliberately or accidentally tossed down the drain. Resistance marker genes, plasmids with multi-host capabilities, and fusion proteins are a rich source of DNA that can be taken up by other bacteria, potentially making them even more hazardous to human health than before.
10.3.1.4 Economic Drivers
While we have discussed here many areas of imminent technology development that could significantly bolster biosecurity practices, we must not forget the underlying economic incentives of DNA production, since the economic drivers important to realising the potential of synthetic biology run counter to comprehensive biosecurity governance. Commercial enterprises inherently want to maximise profit and minimise overheads, and creating and implementing measures to prevent deliberate misuse adds cost. This is a notable disincentive for large parts of the synthetic biology community to engage with biosecurity. It is therefore particularly important to streamline the financial and resource implications of biosecurity measures.
Furthermore, given the intrinsically interdisciplinary nature of synthetic biology, many members of its community come from disciplines outside biology and biotechnology. As a result, they may not have been exposed to, or have a background in, biosafety or biosecurity. It is therefore important that biosecurity measures are accessible, supported by appropriate tools and resources, and adequately promoted within the community.
10.3.2 Detection of Biological Threats
Thus far, we have discussed numerous ways in which biological threats either are currently being mitigated or can be mitigated with technology development in the near future. However, there are also numerous opportunities to increase capability in the threat detection domain. Here, we assume that a biological threat is already physically present in the environment, and the question becomes which technologies can detect it. In this context, we consider our ability to establish surveillance methods and to rapidly diagnose biological threats.
Current diagnostic methods for hazardous agents can be summarized as a collection of peptide sequencing, antibody-based diagnostics via ELISA or immuno-PCR, and genome sequencing technologies. Generally speaking, peptide sequencing is most useful for protein threats such as toxins, antibody-based screening is most often used for viral infections, and genome sequencing can be performed for both viruses and bacteria.
Of these technologies, the most rapidly evolving is genome sequencing. While some companies have made great progress on portable whole-genome sequencing,Footnote 35 significant issues remain with the limit of detection. Often, too little genetic material is acquired in the field to make a confident species identification, and if the organism has been modified, that conclusion becomes even harder to draw. Some new microfluidic devices have mitigated this problem, but there is still much room for improvement, and further progress in microfluidic device development is a key opportunity to improve these DNA-based diagnostics. Another issue for these diagnostic devices is comparison against reliable, non-redundant sequence databases. Historically, as large sequence databases have been built, a fair amount of inaccurate data and erroneous metadata has been entered; cleaning up these databases so that they can be used reliably for immediate comparison is therefore another opportunity to improve diagnostic functionality.
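To make the database-comparison step concrete, the following is a minimal sketch of k-mer-based matching of a field-sequenced read against a reference panel. All sequences, agent names, and the k-mer length are invented for illustration; real pipelines use far more sophisticated tools and databases with millions of entries, which is exactly why erroneous entries and metadata degrade identification.

```python
# Minimal sketch: rank reference sequences by shared k-mer fraction with a
# field read. Sequences and "agent" names are illustrative placeholders.

def kmers(seq, k=8):
    """Return the set of k-length substrings of a DNA sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def best_match(read, references, k=8):
    """Score each reference by the fraction of the read's k-mers it shares."""
    read_kmers = kmers(read, k)
    scores = {}
    for name, ref_seq in references.items():
        shared = read_kmers & kmers(ref_seq, k)
        scores[name] = len(shared) / max(len(read_kmers), 1)
    # Highest-scoring reference first.
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Hypothetical two-entry reference panel.
refs = {
    "agent_A": "ATGCGTACGTTAGCATGCGTACGTTAGC",
    "agent_B": "ATGCGTTTTTTTTTTTTGCGTACGAAAA",
}
print(best_match("ATGCGTACGTTAGC", refs))  # agent_A ranks first
```

A short or modified read lowers the top score and narrows the gap between candidates, which is the ambiguity the text describes for field samples.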
One key area of diagnostic development for a variety of agents is cell-free systems. Cell-free systems have been used for many years and are routinely produced by individual laboratories following their own recipes. Today the technology has advanced, and a better understanding of the factors governing reproducibility has enabled kits to be used outside the lab.Footnote 36
Toehold switches, developed in 2014, exploit an engineered RNA that preferentially folds into a hairpin secondary structure when no target is present; when the target is present, the RNA unfolds and binds to it. A reporter gene is activated upon unfolding and a signal can be detected. Because the target is not amplified, the technique detects only nanomolar to low-micromolar concentrations. It can generate a detectable signal in as little as 20 min, with maximum ON/OFF ratios ranging between 10- and 140-fold, although careful optimization of the target region is needed to maximise the signal. An important advantage of paper-based distribution of synthetic gene networks is their potential for low cost (4–65¢ per sensor) and their relative ease of manufacture.Footnote 37
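The ON/OFF ratio quoted above is a simple background-corrected fold activation. As a worked example, the sketch below computes it from hypothetical plate-reader values; all three fluorescence numbers are made up for demonstration and are not from the cited study.

```python
# Illustrative toehold-switch ON/OFF calculation from reporter fluorescence.
# The readings below are invented (arbitrary fluorescence units at ~20 min).

def on_off_ratio(on_signal, off_signal, background):
    """Fold activation: background-corrected ON over background-corrected OFF."""
    return (on_signal - background) / (off_signal - background)

background = 50.0    # no-switch control well
off_signal = 120.0   # switch present, target absent (leaky expression)
on_signal = 7050.0   # switch present, target present

ratio = on_off_ratio(on_signal, off_signal, background)
print(f"ON/OFF ratio: {ratio:.0f}-fold")  # 100-fold, within the 10- to 140-fold range
```

A low OFF-state (leaky) signal matters as much as a high ON-state signal, which is why the text stresses optimization of the target region.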
Developing the technique further by adding Cas9 and an isothermal RNA amplification step improved the detection limit to the low-femtomolar range and achieved single-base resolution, discriminating between American and African Zika virus lineages. Another variant of the technology (SHERLOCK) uses an isothermal RNA amplification step together with a different protein, Cas13, to reach low-attomolar sensitivity. This protein, guided by the CRISPR machinery, finds its target and cleaves it.Footnote 38 Due to a built-in collateral cleavage activity, the Cas13 protein then degrades any RNA it encounters; the provided fluorophore-quencher reporter RNA is thus degraded, and a signal can be measured with a fluorescence reader. Yet other systems use Cas12 to target DNA in the same wayFootnote 39 or employ a CRISPR-Cas9-triggered, nicking-endonuclease-mediated Strand Displacement Amplification method named CRISDA.Footnote 40
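The readout logic of such a collateral-cleavage assay reduces to a simple decision: call a sample positive only if its fluorescence gain clearly exceeds that of a no-target control, since the fluorophore-quencher reporter is degraded only when the target activates Cas13. The traces and the 3x threshold below are illustrative assumptions, not values from the cited work.

```python
# Sketch of a positive/negative call for a Cas13 collateral-cleavage assay.
# Time-course values and the fold threshold are invented for illustration.

def call_positive(sample_trace, control_trace, fold_threshold=3.0):
    """Positive if the sample's end-point signal gain is well above control's."""
    sample_gain = sample_trace[-1] - sample_trace[0]
    control_gain = control_trace[-1] - control_trace[0]
    # Guard against a perfectly flat control trace.
    return sample_gain >= fold_threshold * max(control_gain, 1e-9)

# Hypothetical fluorescence time courses (arbitrary units).
sample = [100, 450, 900, 1600]   # reporter cleaved: signal climbs steeply
control = [100, 110, 118, 130]   # intact reporter stays mostly quenched
print(call_positive(sample, control))  # True
```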
A technology still to be fully developed is the biological transistor: detection of an unamplified target gene via CRISPR–Cas9 immobilized on a graphene field-effect transistor. An electrical signal is generated when CRISPR finds its target, allowing positive identification of a biological agent at the DNA level within 15 min.Footnote 41
With the methods mentioned here, a field-deployable paper-stick technology will be able to tell quickly and reliably whether a dangerous pathogen is present. This is a huge step towards detecting a biological attack on site, but laborious work is still needed to extract nucleic acids from each sample, and if RNA is the target, RNases must be avoided at all costs to obtain reliable results.
Finally, as diagnostic tests become faster and more accurate, we can begin to form systematic surveillance protocols. These can range from detection of immediate human-health pathogens to analysis of field microbiomes, detection of fungi and decomposers, and monitoring of agricultural pathogens. While these other types of threats are currently too low a priority to focus diagnostic efforts on, in the big picture they matter greatly. In agriculture, systems already exist for tracking and regulating the pedigree of animal and plant lines; as DNA diagnostics improve, it would be reasonable to develop genetic surveillance of agriculture, since it is a high-impact area of human wellbeing that is not directly focused on human health.
10.3.3 Threat Response and Countermeasures
Biological countermeasures are typically biologics and small molecules used to detect, prevent, or treat biological and chemical insults. Biologics comprise vaccines and antibodies. Vaccine development, while slow and laborious, is effective at producing acquired immunity and protection against a broad range of known diseases and weaponized agents. Recently, large-scale mining of human immune repertoires for antigen binders has been propelled by technological advances such as next-generation sequencing (NGS), giving rise to the field of systems immunology. Coupled with bioinformatic analysis, this has yielded significant insight into the diversity of antigen binders and the polarization of repertoires in response to challenge. It is consequently now possible to mine these repertoires for protective monoclonal antibodies and deliver effective countermeasures. However, existing antibody discovery platforms suffer from a multitude of disadvantages that impede high-throughput repertoire interrogation and antibody discovery.
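One elementary step in NGS-based repertoire mining is collapsing sequenced antibody CDR3 regions into clonotypes and ranking them by abundance, a common proxy for antigen-driven clonal expansion. The toy sketch below illustrates only this counting step; the CDR3 sequences are invented, and real pipelines additionally handle sequencing errors, V/J gene assignment, and clustering of near-identical sequences.

```python
# Toy sketch of clonotype counting in antibody repertoire mining.
# CDR3 amino-acid sequences below are invented for illustration.

from collections import Counter

def rank_clonotypes(cdr3_reads):
    """Group identical CDR3 sequences and rank them by read count."""
    return Counter(cdr3_reads).most_common()

reads = [
    "CARDYYGSSYFDYW", "CARDYYGSSYFDYW", "CARDYYGSSYFDYW",  # expanded clone
    "CAKGGTFDIW", "CARSPLNWFDPW",                           # singletons
]
print(rank_clonotypes(reads))
# → [('CARDYYGSSYFDYW', 3), ('CAKGGTFDIW', 1), ('CARSPLNWFDPW', 1)]
```

Highly ranked clonotypes from a challenged donor are then candidate leads for the protective monoclonal antibodies discussed above.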
There are several large existing programs to develop medical countermeasures (MCMs) against new and existing biothreats, such as P3, PRISM, and USAID, but these programs are beyond the scope of near-term biosecurity efforts to improve. While new synthetic biology tools will certainly shorten the time from threat identification to countermeasure delivery compared with traditional approaches, the technology is still relatively new and will take time to be integrated into these large existing countermeasure-development efforts.
One topic discussed at some length by this working group was how information on MCMs should be disseminated to the general public. Specifically, we discussed the idea of putting the latest technical information on new threats online. For example, each year there is a seasonal strain of influenza that circulates and vaccines are routinely developed to combat the new strain. To do this, the new viral strain is sequenced and a new MCM is created. Should these new viral sequences and information on countermeasures be publicly available information?
Historically, this type of information has been available only to those actively working in the space. New sequence information is kept in a non-public database, and companies that develop yearly influenza MCMs receive the physical strain in exchange for donating some vaccine free of charge for use in resource-limited settings. However, once they can access the virus sequence data, they no longer have to ask for the strains: they can simply synthesize the new strains themselves and make vaccines from that source, without the obligation to make the MCMs more generally accessible. Thus, the system relies on the good will of the companies to share the benefits of their work.
Furthermore, we have seen a much more open approach in response to the global COVID-19 pandemic: the DNA sequence of this threatFootnote 42 and subsequent diagnostic and countermeasure developments were rapidly published and made publicly available.Footnote 43 This has led to rapid development of novel diagnostics and MCMs by a large swathe of companies (including some synthetic biology-based companies), creating tremendous opportunity for the biotechnology industry. But the downsides of this very open approach will take time to play out, as some risk was accepted by disseminating so much information in a short period to mitigate a global crisis. Only after the dust settles on the ongoing COVID-19 pandemic will we be able to see whether there are clear negative consequences of this openness.