2.1 Terrorism and Bioterrorism in the USA in 2001

On September 11, 2001, the United States suffered the deadliest (non-military) terrorist attack of all time when hijacked passenger jets crashed into the World Trade Center towers in New York City, into the Pentagon in Washington, DC, and into a field in rural Pennsylvania.

About seven days after the 9/11 attack, anonymous letters laced with deadly spores of Bacillus anthracis (anthrax) arrived in the offices of media companies and the US Congress. During the following two months, at least ten anonymously mailed letters were identified, causing at least 17 anthrax infections and five deaths. This was the first incident of what appeared to be planned killing by means of deadly microorganisms and, as such, may be considered the emergence of modern bioterrorism. The attack triggered a nine-year investigation, extraordinary in its magnitude, that consumed many millions of dollars [1].

The 9/11 terrorist attack claimed >3,000 casualties and changed life in the US. The anthrax attack, in contrast, caused “only” five deaths, but it was a most frightening wake-up call for the public. Bioterrorism executed with anonymous, highly dangerous, self-replicating infectious agents that cannot be detected even with high-powered light microscopes (e.g., viruses) suddenly confronted society with a new vulnerability that could cause deadly chaos. To lower the risk of such a scenario, all dangerous microorganisms, now known as “select agents,” are currently controlled for research by the government, and their use is subject to the strictest rules.

2.2 Synthetic Biology and Bioterrorism

The publication in July 2002 of the first chemical synthesis of poliovirus [2, 3], an organism harmful to humans, struck a sensitized and anxious American public in the aftermath of 9/11. It became instantly clear that viruses like poliovirus no longer “exist” only in nature but also in computers, and that viruses can therefore be synthesized using the information stored in computers. Such synthetic viruses have the computer as parent; a natural isolate is no longer required.

This scenario immediately brings us to an age-old dilemma in science known as “dual-use research”: every scientific result has the potential to be useful and/or harmful to humankind. Moreover, every advance in biomedical technology opens the possibility of achieving new, previously unheard-of results, such as the chemical synthesis of poliovirus. Currently, the genomes of >2,500 viruses are available in public databases. All of these sequences could, in principle, be used to synthesize the corresponding viruses, including smallpox virus. That is the dark side of progress. True to the tenet of dual-use research, however, the de novo synthesis of poliovirus has also led to very promising new strategies for developing vaccines [4, 5] that will hopefully aid humankind in the future.

Since the first groundbreaking synthesis of poliovirus in 2002, some very dangerous viruses have been “re-created” in the test tube in the absence of natural isolates [6]. Most notable was the resurrection of the type A influenza virus strain of 1918/1919, which killed an estimated 20–50 million people [7]. The virus vanished after the catastrophic pandemic, leaving no trace for investigations that could explain its extraordinary virulence. Studies carried out after its de novo synthesis in 2005, however, have identified the molecular determinants of the virus's lethality and thus prepared medical scientists for its possible recurrence. These results justified the enormous effort to “revive” this terrible virus.

The revival of another lost virus in the absence of a natural template was described in a recent, unpublished report [8]. Briefly, horsepox virus, an orthopoxvirus thought to be extinct in nature, was synthesized de novo. This virus, a relative of the human smallpox virus, may have been part of the original vaccine against human smallpox and may therefore be highly useful for improving the current smallpox vaccine. Note that smallpox virus is considered the most vicious killer in the history of humankind; it was globally eradicated in 1980. Not surprisingly, then, the report that the related horsepox virus had been rebuilt by genetic engineering once again fueled the debate about the benefits and risks of biomedical research [8].

These events coincided with the advent of a new discipline, Synthetic Biology, an emerging area of research that can broadly be described “as the design and construction of novel artificial biological pathways, organisms or devices, or the redesign of existing natural biological systems” [9]. About 12 years ago, the idea of combining molecular genetics, genetic engineering, cell biology, mathematics, and engineering was heralded as the dawn of a new science with unlimited possibilities to create artificial biological systems useful to humans and the environment. It was greeted with skepticism by a nervous general public, however, since the same capabilities could also yield new bioterrorist agents.

Are there reliable measures for preventing the misuse of modern biomedical techniques to produce results harmful to humans? An answer provided by Joshua Lederberg (1958 Nobel Laureate in Physiology or Medicine) is sobering: “There is no technical solution to the problem of biological weapons. It needs an ethical, human and moral solution – if it is going to happen at all” [10]. It is worth noting that this crucial statement was made just 3 years before the anthrax attack.

Listed below are some of the constraints by which the development of dangerous infectious agents, referred to as “select agents,” is controlled in the US – and misuse perhaps even prevented – through technical and administrative hurdles:

  1. Re-creating an already existing dangerous virus for malicious intent is a complex scientific endeavor. (i) It requires considerable scientific knowledge and experience and, more importantly, considerable financial support. That support usually comes from government and private agencies (NIH, NSF, etc.), organizations that carefully screen, at multiple levels, all applications for funding of biological research. (ii) It requires an environment suitable for experimenting with dangerous infectious agents (containment facilities), and any work in containment facilities is also carefully regulated.

  2. Genetic engineering to synthesize or modify organisms relies on the chemical synthesis of DNA. DNA synthesis is automated and carried out with sophisticated, expensive instruments. Its major problem, however, is that the product is not error-free: any single mistake in the sequence of small DNA segments (30–60 nucleotides) or large segments (>500 nucleotides) can ruin an experiment. Companies have therefore developed strategies to produce and deliver error-free synthetic DNA, which investigators can order electronically from vendors such as Integrated DNA Technologies (US), GenScript (US), or GeneArt (Germany). This offers a superb and easy point of control over experimental procedures carried out in any laboratory: the companies automatically scan ordered sequences against extensive databases to detect relatedness to the sequence of a select agent (a simplified sketch of such screening follows this list). If a match is found, the order is stalled until the investigator has provided sufficient evidence that the experiments are approved by the authorities. The entire complex issue of protecting society from the misuse of select agents has been discussed in two outstanding studies [11, 12].

  3. Engineering a virus to be more harmful (more contagious, more pathogenic) is generally difficult because viruses have evolved to proliferate maximally in their natural environment. Genetic manipulation of a virus therefore often leads to a loss of fitness which, in turn, is unwanted in a bioterrorist agent.

A special case, however, is research leading to a “Gain-of-Function” in microorganisms, an objective broadly defined by the US Government as follows:

  • U.S. Government Gain-of-Function Deliberative Process and Research:

  • Gain-of-function studies, or research that improves the ability of a pathogen to cause disease, help define the fundamental nature of human-pathogen interactions, thereby enabling assessment of the pandemic potential of emerging infectious agents, informing public health and preparedness efforts, and furthering medical countermeasure development. Gain-of-function studies may entail biosafety and biosecurity risks; therefore, the risks and benefits of gain-of-function research must be evaluated, both in the context of recent U.S. biosafety incidents and to keep pace with new technological developments, in order to determine which types of studies should go forward and under what conditions.

  • U.S. Government, October 17, 2014.

The debate about gain-of-function research became particularly intense in 2013, when benefits versus risks were broadly assessed for studies of host-range changes, increased transmissibility, and increased pathogenicity of non-human infectious agents such as SARS, MERS, or animal influenza virus strains. Following this intense debate, all experiments aimed at gain-of-function of infectious agents were indeed put on hold, but they have since been re-assessed [13].

2.3 The Responsibility of Scientists to Prevent Bioterrorism

If large, secret organizations (like Aum Shinrikyo in Japan) or governments of hostile countries were to plan to develop and deploy agents of bioterrorism, the opportunities to interfere with such activities would be limited. It is therefore imperative to train young scientists to be aware of the molecular techniques for modifying existing dangerous organisms or generating an infectious agent with the properties of a select agent. This training must also include education in the ethics of science – preparing young scientists to share the responsibility of working only for the good of humankind. Obviously, this is a long-term goal. In the short term, vigilance and communication are our best defense [14, 15, 16].