Introduction

The safety of the blood supply has been a matter of concern from the outset, with the successive recognition of transfusion-transmitted syphilis followed closely by hepatitis outbreaks. Indeed, hepatitis was for many years regarded as an accepted and intractable outcome of transfusion, particularly in the United States. In the late 1950s and the 1960s, concerted efforts to understand and minimize this problem culminated in the recognition of causative agents and the implementation of specific donor tests. However, it was not until the AIDS crisis of the 1980s that safety became an overriding priority. In retrospect, AIDS was a harbinger of the global dissemination of transfusion-transmissible, epidemic infections. It set the scene for aggressive attention to blood safety and established the expectation of further transfusion-transmissible disease outbreaks, although the fear was focused on chronic, parenterally transmitted agents. Agents with the potential for transfusion transmission must have an asymptomatic phase during which they circulate in the blood, must survive the processes inherent in the collection and storage of blood, and must be infectious by the intravenous route. The advent of West Nile fever in the United States was yet another unexpected threat to blood safety, and it has been followed by a cascade of global outbreaks of tropical arboviruses with actual or theoretical implications for the blood supply. The issue is whether there are means to foresee and defend against such events.

Emerging Infectious Diseases

Over the past 30 years or so, emerging infectious diseases have become an intense focus of attention. The simplest definition is that these are diseases whose frequency has increased over the prior 20 years [1]. There are many drivers of emergence; some of the most important are environmental change, rapid transportation, urbanization, conflict, and failure of control measures. In addition, the agents themselves may mutate and acquire different properties.

Some emerging infections clearly impact blood safety. The first of these was AIDS. HIV, the causative agent, originated in primates in Africa and was likely transmitted to humans as a result of the preparation of meat from such primates. Subsequent human-to-human transmission probably occurred horizontally via the sexual route and perhaps vertically to infants. Large-scale emergence of the disease was attributable to travel and extensive sexual networks, particularly among men who have sex with men. Infection is almost invariably chronic, and there is a very long asymptomatic period during which the virus is present in the circulation. This long incubation period was not appreciated at first and resulted in the silent spread of the virus, including its transmission by blood transfusion. Ultimately, although HIV/AIDS had a serious negative impact on blood safety and on public confidence, it did stimulate a proactive approach to recipient protection.

A second emerging infection with potentially severe implications for transfusion was variant Creutzfeldt-Jakob disease (vCJD), another zoonosis, caused by ingestion of the infectious prion responsible for bovine spongiform encephalopathy (BSE, mad cow disease) [2]. In this case, the potential for transfusion transmission was predicted before any cases were seen, an important step, because this disease also has a very lengthy latency. It is important to note that, although the BSE and vCJD outbreak emerged in the United Kingdom, concern and prevention were global. Indeed, globalization and transportation were definitively involved in the spread of BSE and vCJD.

In 1999, West Nile virus (WNV) was identified for the first time in North America, with a few cases in New York City. In the next few years, it expanded across the continent. In 2002, a total of 23 cases of transfusion transmission were reported in the United States, and there were estimates that hundreds of thousands of individuals had been naturally infected by mosquitoes [3]. Universal blood donor testing for WNV RNA was in place by mid-2003. WNV is an arbovirus, transmitted by culicine mosquitoes, and the primary amplifying hosts are birds. Human-to-human transmission does not occur (other than by transfusion), and the human is an accidental, dead-end host. The appearance and scope of the outbreak were completely unexpected. Initially, there was little concern about blood safety, as the previous focus had been on chronic infections. In the case of WNV, the size of the outbreak was such that there was a significant chance of collecting blood from an individual during the short (7–10 day) viremic phase of early infection. The virus now appears to be endemic in the United States with annual periods of human infection and disease.

The early twenty-first century in the Americas has been characterized by continuing waves of epidemics of tropical arboviruses, essentially all of which are transmitted by Aedes mosquitoes. Unlike WNV, these viruses are transmissible via the human-mosquito-human route, and all are actually or potentially transmissible by transfusion. They include dengue, chikungunya, and Zika viruses. Most recently, Zika virus has received particular attention, largely because it is responsible for a very serious fetal disease syndrome, which includes microcephaly [4]. In the United States, universal, individual-donation nucleic acid testing (NAT) has been required since late 2016, although only a few tens of positive donations were found nationwide during the first year and a half of testing.

A critical question is whether or not these outbreaks could have been predicted as a first step toward control or prevention of adverse outcomes. The purpose of the remainder of this chapter is to discuss the extent to which this may be achieved and the available interventions to manage the risk.

Horizon Scanning

Early recognition of a potential threat is a critical step toward establishing readiness for the impact of an emerging infection. Fortunately, general awareness of emerging infections has led to a great deal of interest in this area, and tools are readily available [5].

Probably the best source of primary information is ProMED-mail, a simple list server that provides frequent bulletins about infectious disease events worldwide (http://www.promedmail.org). It is possible to receive documented alerts on human, animal, and plant diseases or to select one or two of these categories. The information is carefully curated for reliability, and, in many cases, brief commentary is appended to the news item. All postings are also maintained in a web-based searchable database. In normal use, one or two posts are received each day via e-mail, and each post may contain up to four or five different items. High-profile outbreaks can be followed over time, as updates are posted as appropriate.

The websites of national and supranational health organizations are also rich sources of information about disease outbreaks. Globally, the World Health Organization (WHO) site is the most comprehensive, but each WHO region also has a website; in the case of the Americas, the PAHO site is the most informative. The US Centers for Disease Control and Prevention (CDC) has an excellent site, which also covers outbreaks outside the United States. Similarly, the European Centre for Disease Prevention and Control (ECDC) has a topical and informative site. Most other countries also maintain similar sites.

Another very useful primary source is HealthMap (http://www.healthmap.org/en/), which reports on and tracks infectious diseases through an interactive mapping interface. There are many additional components, including topical reports on events, and the website deserves careful exploration. Additionally, social media should not be overlooked, as many of these and similar programs have Twitter feeds that are reliable pointers to important information.

Horizon scanning using these tools, however, only identifies and perhaps describes the extent of an outbreak but does not usually provide any information about its implications for blood safety. As a broad generalization, to be transfusion-transmissible, an infectious agent must have an asymptomatic phase during which the agent is present in the blood. This requirement reduces concern about many respiratory and enteric infections. Even so, precautions may be appropriate if the transmission route is not clearly known, and the epidemiology of the infection involves rapid and extensive spread, as was the case for SARS. In this case, WHO rapidly recommended precautions to protect blood safety.

Thus, if the outbreak is of a known disease, it is reasonable to consider the likelihood of transfusion transmission before preparing to develop interventions [6].

An interesting, but rare, event would be the recognition of a transfusion-transmitted infection before an outbreak of the disease had been established. It could be argued that this is possible in special circumstances, and, indeed, some viruses have been identified in the context of searches for otherwise unknown hepatitis viruses. However, the viruses identified this way do not appear to be pathogenic. On the other hand, it is true that astute physicians reported the earliest (unexpected) evidence of transfusion transmission of some known agents. West Nile virus is a good example of this situation.

Prediction

Given the available information, is it possible to predict an outbreak of a transfusion-transmitted infection and thus be fully prepared when the outbreak occurs? The simple answer is “sometimes,” but the inherent problem is that emerging infections are themselves not predictable. The situation is made more complex by the fact that, currently, in most cases the best intervention is to develop and implement a sensitive test for donated blood. Industry is understandably not usually willing to commit to test development until the market need is apparent, a situation usually linked to demonstrated transfusion transmissions of the emerging agent. However, it is clear that the extent and urgency of the eventual intervention are associated with the severity of the disease.

Recently, tropical arboviruses have been a matter of particular concern in the Americas, and they illustrate different circumstances. The appearance of West Nile virus in the United States in 1999 could not have been predicted. In contrast, the appearance of Zika virus in the United States led to rapid and perhaps overzealous implementation of universal testing [7]. In this case, however, there was an immediate history of the rapid spread of other tropical arboviruses (most notably chikungunya virus) throughout the Caribbean and South America and subsequently into Puerto Rico and other mainland areas of the United States with a high density of vector-competent mosquitoes. This history, together with the severe impact of Zika virus on the fetus, encouraged manufacturers to develop appropriate test systems, and testing was in place at essentially the same time as the appearance of autochthonous infections on the US mainland.

This latter experience suggests that, in some circumstances, it may be possible to predict outbreaks of certain diseases. Indeed, there have been organized efforts to do exactly this for some arboviral diseases in Europe. More specifically, careful analysis of environmental, meteorological, and human movement parameters can identify likely areas for the presence of particular mosquito vectors. Evaluation of population travel patterns can be used to indicate the risk of introduction of a given agent into these areas and thus to target areas of potential risk for an outbreak. These methods have been successfully implemented for malaria, WNV, and dengue [8]. Additionally, in some cases, the dynamics of the outbreak can be predicted.
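To make the logic of such predictive efforts concrete, the sketch below shows one minimal way the two inputs mentioned above, vector suitability and travel-based importation pressure, might be combined into a relative risk ranking. It is purely illustrative: the area names, numbers, and the simple multiplicative model are assumptions made for the example, not any published forecasting method.

```python
# Illustrative sketch only: combining vector suitability with travel-based
# importation pressure into a relative outbreak-risk ranking. Names, numbers,
# and the multiplicative model are assumptions, not a published method.

from dataclasses import dataclass

@dataclass
class Area:
    name: str
    vector_suitability: float  # 0..1, derived from environmental/meteorological data
    monthly_arrivals: int      # travelers arriving from currently affected regions
    source_prevalence: float   # assumed infection prevalence among those travelers

def expected_introductions(area: Area) -> float:
    """Expected viremic arrivals per month, discounted by local vector suitability."""
    imported = area.monthly_arrivals * area.source_prevalence
    return imported * area.vector_suitability

areas = [
    Area("coastal city", vector_suitability=0.8, monthly_arrivals=50_000, source_prevalence=0.001),
    Area("inland town", vector_suitability=0.1, monthly_arrivals=2_000, source_prevalence=0.001),
]

# Rank candidate areas so surveillance (and test readiness) can be targeted.
for a in sorted(areas, key=expected_introductions, reverse=True):
    print(f"{a.name}: {expected_introductions(a):.1f} expected introductions/month")
```

Even a crude ranking of this kind can indicate where enhanced mosquito surveillance and donor-screening preparations are most likely to be needed.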

Although it apparently did not pose a real risk to blood safety, the explosive outbreak of SARS in 2003 is a counterexample, in which a local epidemic of a zoonosis spread around the world within a few months. This was totally unpredictable, and, as the pathogenesis of the disease was initially unclear, it provoked the hasty implementation of donor deferral policies as a precautionary response.

Interventions: Donor Suitability

The safety of the blood supply is a matter of considerable medical, political, and public concern. Accordingly, a great deal of effort is exerted to prevent transfusion transmission of agents known or thought to be transmissible by transfusion [9]. The remainder of this chapter will review available measures used to achieve this outcome. These measures vary by country in some cases because the threat is local and in other cases because resources are limited or public health priorities differ.

Donor Information-Based Strategies

It is not always recognized that the process for identifying donor populations has a significant impact on the infectious risk profile, as demonstrated by the lower prevalence of infection markers among first-time donors than in the population at large; however, this issue will not be discussed here. Neither will routine donor evaluations of self-reported health and vital signs, although some of these measures undoubtedly eliminate a number of donors in the early acute phase of infection. Of more relevance to this chapter are strategies implemented to reduce the risk of specific infections. These generally involve specific questioning of potential donors to determine medical history, exposure risk, or behavioral risk for given infections.

Medical History

This approach is frequently used when a novel infection occurs, but prior to the recognition of its cause and the development of appropriate tests. Perhaps the most obvious example was questioning presenting donors about a history of viral hepatitis, implemented long before the etiology of transfusion-transmitted hepatitis was understood. The question likely did prevent some transmissions, given the chronic nature of infection by at least some hepatitis viruses, but it would have been ineffective in the face of asymptomatic infection. This intervention has mostly been abandoned in favor of effective testing protocols. Asking prospective donors about symptoms of given infections has continued, however, and was an important component of early efforts to reduce the risk of transmission of HIV. More recently, questions about the symptoms of tropical arbovirus infections have also been introduced, at least pending the availability of suitable screening tests. In general, however, this approach is neither sensitive nor specific, and such questioning has in some instances been eliminated when shown to be ineffective.

Another aspect of this approach is the use of post-donation information, often called PDI. In most blood systems, donors are encouraged, after donation, to inform the blood center of any illness or symptoms occurring during the few days following their donation. Such symptoms imply that an infectious agent could have been circulating in the donor's blood, and the report allows the blood center to retrieve the donation prior to its release or use. This approach is known as passive PDI. However, in situations where there is a known risk, such as an ongoing outbreak, an active form of PDI has on occasion been used [10]. In this case, the blood product is quarantined, the donor is contacted after a few days, and the product is released only once the donor has affirmed that there were no symptoms post-donation. This approach is difficult to employ for platelets because of their short shelf life.
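As a schematic of the active PDI logic just described, the short sketch below captures the quarantine-and-call-back decision. The 5-day call-back interval, names, and return values are illustrative assumptions; actual hold periods and procedures vary by blood system.

```python
# Schematic of the active PDI workflow described above. The 5-day call-back
# window and field names are illustrative assumptions; actual procedures vary.

from datetime import date, timedelta

QUARANTINE_DAYS = 5  # assumed interval before the donor is contacted

def disposition(donation_date: date, donor_reported_symptoms: bool | None,
                today: date) -> str:
    """Decide what to do with a quarantined unit under active PDI."""
    if today < donation_date + timedelta(days=QUARANTINE_DAYS):
        return "hold"      # still within the quarantine window
    if donor_reported_symptoms is None:
        return "hold"      # donor not yet reached; unit stays in quarantine
    if donor_reported_symptoms:
        return "discard"   # symptoms suggest possible viremia at donation
    return "release"       # donor affirmed no post-donation symptoms

print(disposition(date(2024, 6, 1), None, date(2024, 6, 3)))   # hold
print(disposition(date(2024, 6, 1), False, date(2024, 6, 7)))  # release
```

The mandatory hold built into this logic is precisely why the approach conflicts with the short shelf life of platelets.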

Risk Exposure

Donors may be asked questions about circumstances that are known, or thought, to confer increased risk of infection with specific agents. This is again a long-standing practice, which should be open to review and change as more effective and accurate measures become available. The risks generally considered relate to contact with known cases of a specific disease, exposure to environments where transmission risk is increased (such as a prison), or travel from an area known or thought to offer risk. Most recently, this approach was invoked in the United States for travel to areas affected by outbreaks of Zika virus, although the questioning stopped once universal donor testing was put in place. Residence in areas with ongoing risk of vCJD is another example. Conversely, a number of countries ask questions designed to avoid collecting blood from people who have recently traveled from the United States, as a measure to reduce the risk of West Nile virus infection. This approach is potentially sensitive but has very poor specificity. It has the advantage that it can be implemented (and abandoned) relatively quickly. Such questioning can have a significant effect on blood supplies: a broadly based travel question, for example, can impact 3% or more of all donors.

Risk Behaviors

The third class of questions asks the presenting donor whether or not he or she has engaged in activities or behaviors known to be associated with infection, usually by a specific agent or group of agents. These questions tend to be intrusive and may, in many cases, require self-reporting of behaviors that are socially challenging, such as illicit injection drug use. In other cases, the donor may well feel that the questions discriminate against a given lifestyle or sexual orientation; this is particularly the case with questions intended to elicit risk for HIV infection. This situation is not readily ameliorated, particularly when specific questions are required by the regulator. There have been attempts to avoid it through educational activities designed to encourage self-deferral prior to interview or by giving donors the opportunity to indicate confidentially that their donation should not be used for transfusion (confidential unit exclusion, or CUE). In the United States, CUE has been essentially abandoned, as evaluative studies suggested that it was of little benefit. There is published evidence that donors do not always respond accurately to some of these risk behavior questions: a number of studies indicate that around 2% of accepted donors will acknowledge in post-donation studies that they had risk behaviors that should have disqualified them from donation [11]. There is interest in developing risk questions that do not lead to a sense of discrimination among some donors, but this is not necessarily easy. As increasingly sensitive tests and pathogen inactivation become available, it is to be hoped that such donor questioning can be eliminated.

Interventions: Testing

Although the interventions noted above likely eliminate the majority of donors and donations offering a risk of infection, those methods are neither sensitive nor specific and may be unreliable. There is little question that testing is currently perceived as the most significant blood safety intervention. Conceptually, this approach was initiated as a result of cases of transmission of syphilis by transfusion, but the origins of current technology lie in attempts to manage the transmission of viral hepatitis. These efforts started before the causative agents of hepatitis were characterized or isolated, so the original focus was on the use of liver function tests, an approach known as surrogate testing. The recognition, in the late 1960s, of the association of what we now know as hepatitis B surface antigen with one form of transfusion-transmissible hepatitis led directly to testing that is still in place. Posttransfusion hepatitis continued to occur, however, and it took another 20 years to isolate the hepatitis C virus. In the interim, AIDS and its agent HIV appeared, and testing started in 1985 in the United States. It became apparent that these early tests were not able to identify all infectious donations, and, starting in 1999, additional testing for viral nucleic acids was initiated.

Surrogate Testing

A surrogate is, by definition, a substitute: in the absence of a specific test for the infectious agent itself, a different analyte or characteristic is used. In the case of surrogate tests for viral hepatitis, tests commonly used to detect liver damage (most often the blood level of alanine aminotransferase [ALT]) were employed. There was a clear rationale behind this approach, but it was eventually shown to be neither sensitive nor specific. It was, however, in general use until the implementation of a specific test for hepatitis C antibodies. More controversial was the use of a test for antibodies to the hepatitis B core antigen (anti-HBc) to reduce the risk of transmission of HIV, prior to the actual recognition of that virus. This approach was based upon epidemiologic data showing that many AIDS patients also had anti-HBc, because hepatitis B virus shares some of the same transmission routes as HIV, although this was, of course, not clear until HIV had been characterized. Ironically, testing for anti-HBc was eventually mandated in the United States as an additional measure to reduce the transmission of hepatitis B virus by transfusion. A number of other surrogate tests have been used, but generally only on a small scale and mostly in the context of AIDS, prior to the availability of tests for antibodies to HIV. As noted above, surrogate tests usually lack both specificity and sensitivity, and it is difficult to explain the meaning of an abnormal result to the donor.

Specific Tests for Pathogens

The presence of pathogens in blood can be identified by a number of different methods. Conceptually, direct observation of the pathogen is simplest, but it is generally insensitive and depends on the nature of the agent itself. For many years, testing has been based upon serologic approaches. The prototype was testing for HBV, based upon detection of a viral antigen circulating in the blood, but almost all subsequent tests for chronic transfusion-transmissible infections detected antibodies to the agents. More recently, in a return to testing directly for the pathogen, nucleic acid testing was implemented for HBV, HCV, and HIV. This approach has also proved effective for acute infections such as West Nile fever and Zika.

Direct Tests

The most obvious direct testing method is microscopic observation, as performed, for example, for malaria. However, this approach is very insensitive, and it is confined to agents visible by light microscopy; thus it is restricted to parasites and bacteria and cannot be used for viral infections. Because only a very small volume of sample can be observed, there is also a serious constraint on sensitivity: for malaria, for example, 50 to 100 parasites per microliter are routinely identified, although a highly skilled operator may be able to recognize as few as 5 per microliter.

In many countries, platelet concentrates are routinely evaluated for the presence of bacteria, which may readily proliferate during room-temperature storage of the component. In most cases, a sample of the platelet product is used to inoculate a culture bottle, which is then incubated in an automated device of the kind routinely used for diagnostic blood cultures in the hospital environment [12, 13]. The device monitors the products of bacterial growth in the bottles. If the platelet sample contains viable bacteria, growth is detected, in most cases, within 24 h, at which time the parent product is released. Mounting evidence, however, indicates that this method may detect only about 50% of contaminated platelets, most often because the relatively small sample taken for testing did not capture any of the contaminating bacteria, which are initially at very low concentration in the component. Despite these issues, bacterial culture has resulted in a measurable decrease in the incidence of septic reactions attributable to bacterial contamination of platelets.
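This sampling limitation lends itself to a simple Poisson approximation: if organisms are randomly distributed at concentration c per mL and a volume v mL is cultured, the chance that the sample contains no bacteria at all is e^(-cv). The concentration and sample volume below are illustrative assumptions, chosen only to show why miss rates of this magnitude are plausible.

```python
# Poisson sampling model for culture failure: if bacteria are randomly
# distributed at very low concentration, the cultured sample may contain
# no organisms at all. Concentration and volume below are illustrative.

import math

def p_missed(conc_per_ml: float, sample_ml: float) -> float:
    """P(sample contains zero bacteria) = exp(-concentration * sampled volume)."""
    return math.exp(-conc_per_ml * sample_ml)

# e.g., 0.1 CFU/mL contamination with 8 mL inoculated into the culture bottle:
print(f"{p_missed(0.1, 8):.0%} chance the sample is sterile")  # ~45%
```

Under these assumed numbers, nearly half of contaminated units would yield a sterile sample, which is broadly consistent with the reported detection rate of about 50%.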

Serologic Tests

At least until the end of the twentieth century, the transfusion-transmissible agents of most concern were all responsible for chronic infections: syphilis, hepatitis B and C, HIV, HTLV, and malaria. A common feature of all of these infections is the ability of the infectious agent to persist in the face of the immune response. Consequently, the presence of circulating antibodies is frequently an indication that the infectious agent is present. Because antibody is usually present at much greater levels than the agent itself, serologic testing is more sensitive than direct detection. Of the pathogens listed above, all but HBV were detected in donations (at least initially) solely by tests for the corresponding antibodies.

The test for HBV was based upon an unusual characteristic of the virus: the production of excess viral coat material (HBsAg), which circulates in the bloodstream. The levels of HBsAg are often high enough to be detected by simple test methods, such as agar gel diffusion. Indeed, when routine testing was introduced in the early 1970s, tests were based upon some form of passive or active gel diffusion. Those tests were soon supplanted by radioimmunoassay, which was itself replaced by enzyme immunoassays. Subsequently, other detection systems, such as chemiluminescence, have been introduced, but the principle is the same. In some countries, including the United States and Japan, additional testing is now performed for antibody to the HBV core antigen (anti-HBc) in order to identify donors with occult HBV infection (OBI), defined as HBV infection without detectable HBsAg.

In 1985, tests for antibodies to HIV were licensed and rapidly implemented. This testing achieved a major decline in the incidence of transfusion-transmitted HIV infection, although earlier efforts based upon donor education and questioning also contributed [14]. Serologic tests were subject to continuing improvement, particularly in sensitivity, but it became apparent that the early stages of infection resulted in circulating virus prior to the development of detectable levels of antibody. This interval was named the window period; it was as long as 45 days with the earliest tests and about 22 days after successive improvements [15].
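The practical consequence of a shorter window period can be illustrated with the standard incidence/window-period model, in which the residual risk per donation is approximately the donor incidence rate multiplied by the fraction of the year spent in the window. The incidence figure in the sketch below is an assumed value for illustration, not a measured one.

```python
# Incidence/window-period model: residual risk per donation is roughly the
# donor incidence rate times the fraction of the year spent in the window.
# The incidence value is an assumed figure for illustration only.

def residual_risk(incidence_per_100k_py: float, window_days: float) -> float:
    return (incidence_per_100k_py / 100_000) * (window_days / 365)

for window in (45, 22):  # earliest antibody tests vs. improved assays
    r = residual_risk(incidence_per_100k_py=3.0, window_days=window)
    print(f"{window}-day window: ~1 in {1 / r:,.0f} donations")
```

Halving the window period roughly halves the residual risk, which is why assay improvements, and ultimately NAT, mattered so much.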

Concern about retroviruses led to the initiation of antibody testing for HTLV (types 1 and 2) in a number of countries, and, subsequently, the identification of HCV, the virus causing non-A, non-B hepatitis, led to the development of antibody tests for that infection. Later, testing for antibodies to Trypanosoma cruzi, the agent of Chagas disease, was implemented in the United States, although it had been in place for many years in South and Central America and parts of Mexico.

A further advance in HCV and HIV testing has been the development of so-called fourth-generation serologic tests. These tests detect antibodies but may also simultaneously detect viral antigens, which are present during the latter part of the window period. The level of viral antigens is low, not comparable to the high levels of HBsAg. Nevertheless, this approach adds some additional safety in circumstances where NAT is not available.

Rapid Tests

A number of rapid tests for key markers of HBV, HCV, and HIV are available. They are usually based upon lateral flow technology and generate a result in around 15 min. Although logistically attractive, they are not generally recommended for blood donor testing, as they are less sensitive than standard tests. However, they are definitely better than not testing at all and may be of value in locations without the resources for full-scale testing. In China and some other locations, rapid tests for HBsAg are used for first-pass screening to avoid collecting blood from donors with positive findings, thus saving resources [16]. In these instances, however, accepted donors are also tested with standard assays.

Nucleic Acid Testing

As pointed out above, even the most sensitive serologic tests demonstrably fail to detect donor infectivity during the window period. This failure led directly to the adoption of tests for viral nucleic acids as a further supplement to serologic tests. It is of some interest that the practice actually originated in parts of Europe and that the initial objective was to reduce the prolonged window period for HCV. In the United States, however, the major driver for adoption was further reduction of the risk of transmission of HIV. Many blood systems now routinely test all donations for HBV, HCV, and HIV using multiplex NAT, either individually or in minipools [17]. The analytic sensitivity of these methods is in the range of a few copies per mL, and their use has reduced the residual risk of transfusion transmission to less than one per million donations in the United States. Additionally, NAT is used as the sole approach to detection of WNV and Zika virus, both of which are arboviruses causing acute infection. For these (and similar) viruses, serologic tests are of little use, as infectivity appears to be absent once antibodies are detectable.
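One trade-off in minipool NAT is dilution: pooling N donations dilutes each one N-fold, so the effective limit of detection per donation rises by the same factor. The assay limit of detection in the sketch below is an assumed figure, and the pool sizes are shown simply as commonly cited examples.

```python
# Dilution trade-off of minipool NAT: pooling N donations raises the
# effective per-donation detection limit N-fold. The assay LOD is assumed.

ASSAY_LOD_COPIES_PER_ML = 5.0  # assumed 95% limit of detection of the assay

for pool_size in (1, 6, 16):   # individual donation vs. example minipool sizes
    effective_lod = ASSAY_LOD_COPIES_PER_ML * pool_size
    print(f"pool of {pool_size:>2}: ~{effective_lod:.0f} copies/mL needed in the donation")
```

This dilution effect is one reason individual-donation NAT has been preferred for low-viremia agents such as Zika virus.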

In addition to its sensitivity, NAT has the advantage that new tests can be developed relatively rapidly, as it is not necessary to develop and qualify antibodies or other rare reagents. Once testing platforms are available, they can readily accept newly developed tests. These factors contributed to the rapid adoption of tests for WNV and Zika virus in the United States. However, they are not necessarily appropriate for many parts of the developing world.

Interventions: Pathogen Inactivation

Pathogen reduction, based upon pathogen inactivation technology, offers considerable promise in the context of blood safety. At the time of writing, methods to treat plasma and platelet concentrates are commercially available and in use in a number of countries. Methods for the treatment of whole blood and red cell concentrates are in development, and some are in clinical trials. The most promising methods have been shown to inactivate 5 or more log10 of infectivity of many viruses and bacteria without significant impact on the blood components themselves. There is documented evidence that a method used on platelet concentrates has essentially eliminated the risk of bacterial sepsis after implementation in France, Belgium, and Switzerland [18]. Methods for plasma and platelets have been approved in the United States as an alternative to NAT for Zika virus, and this system has also been approved in place of bacterial culture. Currently, however, it is unclear whether pathogen reduction will entirely supplant the other tests in current use.
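For a sense of scale, each log10 of inactivation is a tenfold reduction in infectious titer, so 5 log10 corresponds to a 100,000-fold reduction. The starting titer in the snippet below is an arbitrary illustrative number.

```python
# What "5 log10 of inactivation" means in absolute terms; the starting
# titer is an arbitrary illustrative number.

start_titer = 1e6      # infectious units before treatment (illustrative)
log_reduction = 5      # demonstrated inactivation capacity

remaining = start_titer / 10 ** log_reduction
print(f"{start_titer:.0e} -> {remaining:.0e} infectious units (a 100,000-fold reduction)")
```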

Global Implications of Interventions

The WHO considers that the minimum requirement for blood safety is to test for syphilis, HBV, HCV, and HIV. This expectation has largely been met, but it is an incomplete approach to minimizing transfusion-transmitted infections, particularly if only a single test is used for each agent. Not every country or blood system has the resources to go beyond the minimum requirement, and it is unrealistic to expect all countries to meet the same standard, however desirable that may be. Another issue is that risks vary geographically, as exemplified by Chagas disease and HTLV, so additional local measures may be appropriate. An obvious question is the role of pathogen reduction in the developing world. Clearly, it would be desirable to use this technology to displace the need for other interventions, but this is offset by concerns about cost and the required infrastructure. Some methods have been evaluated in this environment, but it is probably too early to assess their true potential and benefit.