Introduction

Globally, emerging infectious diseases represent a threat to human and animal health (Daszak et al. 2000; Woolhouse et al. 2008). Most emerging diseases originate from wildlife (Taylor et al. 2001), where they may infect multiple animal hosts (Haydon et al. 2002). Understanding disease emergence requires consideration of the pathogen, the animal hosts that are naturally infected by the pathogen, and the ecological interactions that facilitate pathogen perpetuation in nature (Childs et al. 2007). The One Health concept recognizes that human, domestic animal, and wildlife health are interconnected and should be considered within an ecosystem context (Kaplan et al. 2009), while promoting collaboration between microbiologists, ecologists, epidemiologists, physicians, veterinarians, and modelers in the development of conceptual and mathematical system models. Models can guide appropriate disease surveillance, prevention, and control strategies (Fooks 2007; Zinsstag et al. 2009, 2011). Communication between these traditionally independent disciplines relies on a mutual conceptual understanding of disease surveillance methods and precise interpretation of the data generated. Subsequent data use in predictive models must recognize the strengths and limitations of the techniques utilized (Table 1). This paper presents concepts and examples that may appear obvious to immunologists and microbiologists, yet may be unknown to, or overlooked by, ecologists, modelers, and policy makers.

Table 1 Common Misinterpretations of the Meaning of Antibody Positive and Negative Animals in Wildlife Disease Investigations.

The measurement of antibodies in blood is a critical disease surveillance tool because antibodies are typically easier to detect and persist longer than the inciting infectious agents. Serological assays detect antibodies induced by infection or vaccination, and provide evidence of past exposure to a pathogen. Although ecologists, modelers and policy makers may receive little training in immunology or the technical aspects of measuring host immune response to infection, they often must rely on serological data for inference to pathogen force of infection and transmission rates, as well as to parameterize dynamic disease models. Here we review the role of antibody assays and the interpretation of results in wildlife disease investigations, for an audience with little training in immunology or laboratory diagnostics. We discuss common factors that lead to misinterpretation of serological data, which primarily result from a lack of understanding about host immune response to infection and variation in test sensitivity and specificity. We address issues relating to the interpretation of antibody prevalence data from wildlife, and provide recommendations to guide study design and inference using serologic data (Table 2).

Table 2 Recommendations for the Use of Serologic Testing for Inference to Wildlife Disease Monitoring or Surveillance.

Approaches to Studying Wildlife Infections

Incidence and prevalence are the most frequently used measures to describe the epidemiology of infection in natural populations. Incidence is the number of new infections in a population-at-risk over time. Prevalence can be described as point or period, with the former describing the proportion of infected animals in a population at any particular moment, and the latter describing the proportion of infected animals in a population over a designated period of time (e.g., season). Antibody prevalence (i.e., seroprevalence) describes the proportion of individuals within a population that demonstrate pathogen-specific antibodies in the serum. Longitudinal or cross-sectional sampling strategies gather data on incidence and prevalence to infer temporal or spatial infection dynamics in wildlife populations.
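
For readers less familiar with these measures, a minimal numerical sketch may help; all values below are hypothetical and serve only to show how incidence, point prevalence, and seroprevalence are computed from field data.

```python
# Hypothetical illustration of incidence vs. point prevalence vs. seroprevalence.
# All numbers are invented for demonstration only.

new_infections = 14            # infections observed during one season of follow-up
animal_months_at_risk = 1050   # summed follow-up time of susceptible animals

incidence_rate = new_infections / animal_months_at_risk  # new infections per animal-month
print(f"Incidence rate: {incidence_rate:.3f} per animal-month")

sampled_today = 120            # animals captured in a single cross-section
currently_infected = 6         # positive by antigen detection or PCR
antibody_positive = 45         # positive by serology

point_prevalence = currently_infected / sampled_today
seroprevalence = antibody_positive / sampled_today
print(f"Point prevalence: {point_prevalence:.1%}")
print(f"Seroprevalence:   {seroprevalence:.1%}")
```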

Longitudinal studies involve repeated sampling of individuals, social groups, or populations to detect changes in antibody prevalence over time, and may be used to estimate infection incidence if the sample size is large enough to detect antibody seroconversion events in the population (Hazel et al. 2000). Re-sampling individual wild animals, however, is often logistically difficult; therefore, where age can be determined in a species, age-structured antibody prevalence data may be utilized to gain insight into pathogen transmission processes (Farrington et al. 2001). However, insights can be limited by available knowledge regarding individual serological outcomes of infection (Evans 1976), including: the probability that an infected individual will seroconvert; the incubation period and case fatality rate of infected individuals; the duration of the antibody response to infection; and the relationship between antibody status and resistance to pathogen infection. Often, few data exist regarding these fundamental questions, and models generated from antibody prevalence data should recognize such uncertainties.
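
One widely used way of extracting transmission information from age-structured seroprevalence is the simple catalytic model, in which seroprevalence at age a equals 1 - exp(-λa), under the assumptions of a constant force of infection λ, complete seroconversion, lifelong detectable antibody, and no infection-induced mortality. The sketch below fits such a model by maximum likelihood; the age classes, counts, and fitting routine are illustrative only and not taken from any study cited here.

```python
# Minimal catalytic-model sketch: estimate a constant force of infection (lambda)
# from age-structured seroprevalence, assuming lifelong antibody, complete
# seroconversion, and no infection-induced mortality. Data are hypothetical.
import numpy as np
from scipy.optimize import minimize_scalar

ages = np.array([0.5, 1.5, 2.5, 3.5, 5.0, 7.0])   # mid-points of age classes (years)
n_tested = np.array([40, 35, 30, 25, 20, 15])      # animals tested per age class
n_seropos = np.array([4, 10, 13, 15, 14, 12])      # antibody-positive per age class

def neg_log_lik(lam):
    p = 1.0 - np.exp(-lam * ages)                  # expected seroprevalence by age
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -np.sum(n_seropos * np.log(p) + (n_tested - n_seropos) * np.log(1 - p))

fit = minimize_scalar(neg_log_lik, bounds=(1e-4, 5.0), method="bounded")
print(f"Estimated force of infection: {fit.x:.2f} per year")
```

In practice the assumptions listed above (notably lifelong antibody and negligible disease-induced mortality) are exactly the uncertainties highlighted in the preceding paragraph and should be stated explicitly when such models are reported.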

Cross-sectional studies focus on the social group, population, or species and, in contrast to longitudinal studies, provide snapshots of current or past infection prevalence rather than incidence. Antibody prevalence is only equivalent to infection incidence when the duration of the detectable antibody response is orders of magnitude shorter than the life span of the host (e.g., IgM responses). Cross-sectional studies can document evidence for circulation of a pathogen within a group, population, geographic area, or species; for example, during exploratory or outbreak investigations when the infection status of a population, or the natural host range of a pathogen, is unknown (Swanepoel et al. 2007; Lembo et al. 2011). A major limitation of single cross-sectional studies is that they do not provide information on infection dynamics. However, cross-sectional antibody prevalence data may still be useful for disease ecology studies, and in some cases more useful than infection prevalence data (Heisey et al. 2006). For example, antibodies typically persist longer than antigen, and hence are more likely to be detected within a population ‘snapshot’. Repeated cross-sectional surveys that incorporate age-structured sampling may permit inference into temporal infection dynamics of wildlife (Plowright et al. 2008; Hayman et al. 2012). Cross-sectional surveys can also be used to evaluate spillover risk from wildlife populations. For example, cross-sectional surveys of antibodies to Brucella abortus in elk or to pseudorabies virus in wild swine inform managers where spillover to livestock is most likely to occur, and hence where to target management (Cross et al. 2007; Pannwitz et al. 2011). Serology has also been used to evaluate risk of spillover from domestic animals to wildlife, as with canine distemper virus transmission from domestic dogs to Serengeti carnivores (Alexander and Appel 1994; Cleaveland et al. 2000).

Cross-sectional antibody data are important in planning and evaluating wildlife disease management strategies. For example, vaccination against rabies virus (RABV) is undertaken annually in North America and Europe through the use of recombinant or modified live virus vaccines, enclosed in a bait for oral consumption by target wildlife (Rupprecht et al. 2008). Cross-sectional surveys are used to determine the pre-intervention spatial distribution of immunity so that vaccine baits can be optimally distributed (Vos 2003), and to assess post-vaccination herd immunity for inference to bait uptake and infection resistance (Sidwa et al. 2005). RABV vaccination and modeling of immunity were also used as management strategies to prevent the extinction of the Ethiopian wolf (Canis simensis) (Haydon et al. 2006; Knobel et al. 2008). Another example involves management of bovine tuberculosis (Mycobacterium bovis) spillover to cattle through Bacillus Calmette–Guérin (BCG) vaccination of a wildlife reservoir, the European badger (Meles meles), in Great Britain (Chambers et al. 2011). Discrimination of infected versus vaccinated badgers was possible in this study (Greenwald et al. 2003), though it is typically not possible with antibody prevalence data.

A valuable step in designing ecological wildlife disease investigations involves the development of a conceptual, and ideally quantitative and predictive, epidemiological model of the system. Early communication and collaborative model development ensure that appropriate data will be collected to inform a predictive model of the system (Restif et al. 2012) (Table 2). Traditional epidemiological models identify the basic compartments that formally define cohorts of susceptible, exposed, infected and (perhaps) recovered or immune individuals (i.e., SEIR), and the interactions among these cohorts which facilitate the invasion and maintenance of a pathogen in animal or plant populations (Anderson and May 1979, 1986). To parameterize these models, knowledge of the actual infection status of animals, and how this changes over time, is required. Acquiring these data could involve the lethal sampling of large numbers of animals, particularly if infection incidence is low. As this usually is neither feasible nor ethical, antibody data are used as a proxy for prevalence of infection.
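
One common formulation of such a compartmental model is shown below purely as an illustration; the structure and parameters (per-capita birth rate b, natural mortality rate m, transmission coefficient β, rate of progression from exposed to infectious σ, recovery rate γ, and disease-induced mortality rate α, with frequency-dependent transmission) would need to be adapted to any particular host-pathogen system.

```latex
\begin{aligned}
\frac{dS}{dt} &= bN - \beta \frac{SI}{N} - mS, \\
\frac{dE}{dt} &= \beta \frac{SI}{N} - (\sigma + m)E, \\
\frac{dI}{dt} &= \sigma E - (\gamma + \alpha + m)I, \\
\frac{dR}{dt} &= \gamma I - mR, \qquad N = S + E + I + R.
\end{aligned}
```

Even this simple structure requires knowing how individuals move between compartments, and equating the R class with seropositive animals is itself an assumption that should be stated explicitly (see below).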

Serological data may be more useful than infection prevalence data in determining the force of infection—the rate at which susceptible individuals become infected and the foundation for estimating transmission rates (Heisey et al. 2006). If antibody loss is slow and disease-induced mortality is well understood (Heisey et al. 2006, 2010), serology as an indicator of past infection may be more easily interpreted than prevalence data. For example, a low prevalence could be generated by a high force of infection and fast recovery rate, or a low force of infection and low recovery rate—problems avoided with serology if titers are long-lived.
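
This ambiguity can be made explicit with a stylized two-state argument: if individuals acquire infection at a constant force of infection λ and recover at rate γ, and detectable antibody is lifelong, then approximately

```latex
\text{infection prevalence} \approx \frac{\lambda}{\lambda + \gamma},
\qquad
\text{seroprevalence at age } a \approx 1 - e^{-\lambda a}.
```

Infection prevalence depends only on the ratio of λ to γ, so a high force of infection with rapid recovery is indistinguishable from a low force of infection with slow recovery, whereas the age profile of seroprevalence identifies λ directly, subject to the simplifying assumptions above.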

Factors associated with the infection process, such as pathogen dose, variant, and route of inoculation, can all affect the induction of a host antibody response to infection. While detection of antigen-specific antibodies usually indicates prior exposure to a pathogen, negative test results do not necessarily rule out prior exposure (Turmelle et al. 2010b). Antibody-positive animals are not necessarily infected animals, as one study demonstrated during a survey for Puumala virus in wild bank voles (Clethrionomys glareolus) (Alexeyev et al. 1998). It is often assumed that the immune class (R) of animals in SEIR models is equivalent to seropositive animals, when in fact antibodies may not be a reliable indicator of infection resistance (Raberg et al. 2009). In addition, variation in the sensitivity of antibody detection methods may exist (Cleaveland et al. 1999; Chambers et al. 2002; Troyer et al. 2005), with apparent trade-offs between sensitivity (the ability to detect true positives and avoid false negatives) and specificity (the ability to identify true negatives and avoid false positives) for any assay. However, application of novel modeling methodologies, such as site-occupancy models, may tolerate imperfect detection probabilities based on serosurveys and other diagnostic techniques (Lachish et al. 2012).
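
Where estimates of assay sensitivity and specificity are available, apparent seroprevalence can at least be adjusted for imperfect test performance. The sketch below applies the standard Rogan-Gladen correction; the sensitivity, specificity, and prevalence values are hypothetical and chosen only to show how strongly the adjustment depends on test performance.

```python
# Rogan-Gladen adjustment: convert apparent (test-positive) prevalence into an
# estimate of true prevalence given assay sensitivity and specificity.
# All input values below are hypothetical.

def rogan_gladen(apparent_prev: float, sensitivity: float, specificity: float) -> float:
    """True-prevalence estimate, clipped to [0, 1] because sampling error can
    push the raw estimate outside that range."""
    adjusted = (apparent_prev + specificity - 1.0) / (sensitivity + specificity - 1.0)
    return min(max(adjusted, 0.0), 1.0)

apparent = 0.25   # 25% of sampled animals test antibody-positive
print(f"Se=0.90, Sp=0.95 -> true prevalence ~ {rogan_gladen(apparent, 0.90, 0.95):.1%}")
print(f"Se=0.80, Sp=0.99 -> true prevalence ~ {rogan_gladen(apparent, 0.80, 0.99):.1%}")
```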

Detecting Infections in Wildlife

The objective of disease surveillance systems is to track the incidence and prevalence of a specific pathogen infection in populations of interest. Several diagnostic techniques can generate such data, each having its own strengths and limitations (OIE 2010). Pathogen isolation (e.g., using cell culture or animal models) permits identification and characterization of the disease agent, and enables the animal infection experiments that are necessary to fulfill Koch’s postulates (Evans 1976) and characterize host pathogenesis. Pathogen isolation also permits a greater epidemiological understanding of the circulating pathogen diversity within and among reservoir and incidental hosts (Streicker et al. 2010). However, pathogen isolation from wildlife can be challenging even under ideal laboratory conditions. Infection burdens may be low, as observed with henipavirus infections in bats (Middleton et al. 2007; Halpin et al. 2011), or the pathogen may be sequestered in organs, thereby requiring lethal sampling, as with brucellosis in bison (Bison bison) and elk (Cervus elaphus) (Baldwin and Roop 2002) or classical swine fever in wild swine (Sus scrofa) (Kaden et al. 2006). Some infections may be latent, i.e., dormant in the body but with potential for reactivation, as has been observed with pseudorabies in wild swine (Wittmann and Rziha 1989). Often, animal infectious periods are short, as seen with RABV and canine distemper virus (Deem et al. 2000; Hampson et al. 2009), with few animals infected at any given time, thereby requiring very large sample sizes to detect infection. Finally, the handling and isolation of some pathogens require high-containment facilities found only in specialist laboratories. These factors can make obtaining isolates from wildlife impractical in many cases.
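
The sample-size burden implied by low infection prevalence can be made concrete with the standard detection calculation, sketched below under the simplifying assumptions of random sampling from a large population and a perfectly sensitive test; the design prevalences are illustrative only.

```python
# Number of animals that must be sampled to detect at least one infected individual
# with a given probability, assuming random sampling from a large population and a
# perfect test. Design prevalences below are illustrative.
import math

def n_to_detect(prevalence: float, confidence: float = 0.95) -> int:
    """Smallest n with P(at least one positive) >= confidence when each sampled
    animal is infected independently with the given prevalence."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - prevalence))

for prev in (0.10, 0.02, 0.005):
    print(f"prevalence {prev:.1%}: sample at least {n_to_detect(prev)} animals")
```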

Direct pathogen detection tests other than isolation, such as antigen-detection assays and molecular diagnostic tools (e.g., the polymerase chain reaction, PCR), can be used to detect evidence of active or latent infection. These methods share many of the limitations of pathogen isolation: unless a pathogen is circulating in blood, excreted in urine or feces, or colonizing an accessible mucosal surface or superficial lymph node, lethal sampling of wildlife will be required. Despite this, detection of host pathogen excretion via accessible pathways (e.g., blood or mucosal surfaces) can provide meaningful insights for parameterizing transmission probabilities and rates, although several studies have recognized that pathogen excretion may be intermittent among infected animals (Baer and Bales 1967; Chambers et al. 2002; Middleton et al. 2007), potentially limiting inference from cross-sectional snapshots. In addition, sample integrity is a key factor for field studies, and maximizing the probability of successful pathogen isolation or detection often requires cold-chain or laboratory capacity that is difficult to maintain in a field setting, especially in remote geographic areas.

Given some limitations of pathogen isolation and antigen detection methods, antibody prevalence data are often used to elucidate infection dynamics in animal populations. The presence of specific antibody, however, only demonstrates past exposure to an antigen, while typically providing no information about the timing, intensity or frequency of infection. At a population level, antibody prevalence data provide information about the cumulative exposure history of the population, but not necessarily infection status. Antibody prevalence does not change quickly in response to changes in infection incidence, particularly when antibodies persist for long periods and host population turnover is slow. Some pathogens have evolved strategies to circumvent detection by the host immune system (e.g., lyssaviruses and herpesviruses) (Aleman et al. 2001; Faber et al. 2002; Wang et al. 2005), thus complicating the reliance on serological techniques to track infection dynamics. Notable testing limitations with serological techniques include cross-reactivity, poor accuracy, and undefined or non-standardized cut-off values to interpret an antibody-positive result. Because presence of antibody may not confer infection resistance, it is critical to explore the significance of an antibody-positive status in the context of a controlled infection process, using in vitro and in vivo models. However, with careful study design and interpretation (Table 2), antibody prevalence can be an invaluable tool for understanding the ecology of disease dynamics, even in poorly-understood systems such as wildlife populations.

Optimal Test Selection and Interpretation

The two main classes of antibody targeted in serological testing are IgM and IgG, where IgM is secreted first in response to pathogen infection yet is usually short-lived, whereas IgG is secreted later and persists longer in the circulation. Serological assays typically detect either binding (BAb) or neutralizing antibodies (NAb), and the type of test determines the type of antibodies that are detected and the subsequent inference that is possible from such data (Table 3).

Regardless of the assay selected, proper test validation, and stringent laboratory quality control standards are key to reliable collection and interpretation of antibody prevalence data (OIE 2010). Standard practice requires that appropriate positive and negative controls (ideally, relevant to the host population sampled) be employed in every test. Although laboratory strains of a pathogen may be employed for assay standardization, a field-derived strain of the pathogen may be more appropriate for certain systems or questions. Where pathogen diversity within a population, species or community is high, e.g. paramyxoviruses among bats (Drexler et al. 2012), it may be desirable to include multiple pathogen strains in serological tests (Kuzmin et al. 2011). The recent development of pathogen pseudotypes for serologic testing can facilitate testing of diverse pathogen repertoires using small sample volumes while minimizing biohazards (Temperton et al. 2005; Wright et al. 2008).

Cross-reactivity of antibodies to multiple pathogens has been important in vaccine development, but can also limit the interpretation of antibody prevalence data (Weyer et al. 2008; Horton et al. 2010; Mansfield et al. 2011). Cross-reactivity can pose particular challenges in disease investigations of wildlife, as there is often no prior characterization of circulating pathogen diversity or cross-reactivity within and among populations. For example, antisera raised against one flavivirus can cross-react with other flaviviruses (Mansfield et al. 2011), but cross-reactivity within and between flavivirus serocomplexes has been inconsistent (Calisher et al. 1989a). One consequence of flavivirus cross-reactivity is reduced specificity in serological assays (Hirota et al. 2010), which led to the early misdiagnosis of the North American West Nile virus epidemic as St. Louis encephalitis in New York City (Lanciotti et al. 1999; WHO 1999). However, the problem of cross-reactivity limiting the specificity of serological assays extends to a variety of systems, as demonstrated among rhabdoviruses (Calisher et al. 1989b).

Evaluation of test repeatability and robustness is necessary for sound interpretation of serological test results, particularly for longitudinal studies. Inter-laboratory variation is a well-recognized issue for all pathogen testing, but some assays, particularly virus neutralization tests (VNTs), are prone to variation even within the same laboratory, because they are biologically dynamic tests that rely on consistent replication of live virus populations in cell culture. The NAb titer of a single control serum tested against a standard laboratory strain of RABV in a closely controlled test can vary by more than twofold (Figure 1), and the precise quantity of virus used in neutralization assays affects estimated antibody titers. To counter this, acceptable limits of variation must be defined for positive and negative controls, and any test run in which control results fall outside these strict, pre-determined values should be discarded. In addition, although this is frequently not done, longitudinal samples from an individual should be tested in the same assay at the same time, rather than in consecutive assays.

Figure 1

The correlation between results obtained from testing one control serum against one virus in multiple assays (n = 3167) in a rabies virus neutralization test using a single challenge virus standard (CVS). A linear regression model (solid line, R² = 0.16, P < 0.01) shows a 0.18 log2 reduction in serum titer for every twofold increase in virus titer, and substantial variance in serum titer (standard deviation 0.42 log2). Virus titer is measured for each test, and results are discarded if the infectious dose is outside pre-determined limits (4.32–8.23 log2 median tissue culture infective dose, TCID50).

Within-assay, between-assay, and between-laboratory variation has been carefully evaluated for serological assays in influenza, particularly in humans in relation to vaccination (Wood et al. 1994), but also for horses (Mumford 2000). When considering the two most commonly used and well-controlled serological assays available for human influenza, the hemagglutination inhibition (HI) and single radial hemolysis (SRH) assays, Wood et al. (1994) reported that although each technique was reproducible within laboratories, variability between laboratories was higher for HI (maximum variability 32-fold; geometric coefficient of variation, GCV, 112%) than for SRH (maximum variability 3.8-fold; GCV 57%). The potential for such variation is usually overlooked when interpreting serological data.

To determine population antibody prevalence and to evaluate the sensitivity and specificity of a test, values obtained with a given test sample are evaluated against a reference cut-off value: all values below the cut-off are considered antibody-negative and all values above it are considered antibody-positive. Standard cut-off values are often not known, usually due to a lack of well-characterized reference samples from target wildlife populations. Modification of a cut-off value has a direct impact on the sensitivity and specificity of the serological assay, and hence on estimated antibody prevalence. For this reason, it can be difficult to compare serological results across studies, particularly as cut-off values usually are not standardized between laboratories and because many publications report only proportional antibody prevalence rather than individual titers. Comparisons are more difficult still when the pathogen strains or antigens used in a serological test vary across studies. Estimating antibody prevalence is most problematic with regard to the evaluation of low-titer individuals and their proportion within a population, such that reporting results as quantitative values may be more informative (Peel et al. 2012).
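
The dependence of sensitivity, specificity, and apparent antibody prevalence on the chosen cut-off can be illustrated with simulated titer distributions; the distributions, sample sizes, and cut-offs in the sketch below are invented purely to show the mechanics and do not describe any assay discussed here.

```python
# Illustration of how the choice of cut-off shifts sensitivity, specificity, and
# apparent seroprevalence. Titer distributions are simulated and purely hypothetical.
import numpy as np

rng = np.random.default_rng(1)
exposed = rng.normal(loc=4.0, scale=1.5, size=300)   # log2 titers, previously infected
naive = rng.normal(loc=1.0, scale=1.0, size=700)     # log2 titers, never infected
population = np.concatenate([exposed, naive])

for cutoff in (2.0, 3.0, 4.0):
    sensitivity = np.mean(exposed >= cutoff)
    specificity = np.mean(naive < cutoff)
    apparent_prev = np.mean(population >= cutoff)
    print(f"cut-off {cutoff:.1f} log2: Se={sensitivity:.2f}, "
          f"Sp={specificity:.2f}, apparent seroprevalence={apparent_prev:.1%}")
```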

An example of how different criteria can affect interpretation of results involves the testing of 166 European bats for European bat lyssavirus (EBLV) NAb. Using the same cut-off threshold for a positive response (i.e., a 1:27 dilution), but a different level of virus neutralization (100% versus 50% reduction in fluorescing fields), can lead to variation in NAb prevalence estimates, ranging from 0.6% (CI 0.0-3.3) under more stringent criteria (i.e., 100% neutralization) to 4.8% (CI 2.1-9.3) under less stringent criteria (i.e., 50% neutralization) (AHVLA, unpublished data). Similarly, in a study of RABV NAb among sera collected over two years from 1,058 bats in the United States, an increase in the test cut-off threshold from 0.06 international units per ml (IU/ml) to 0.1 IU/ml led to a reduction in RABV NAb seroprevalence from 38% (CI 35-41) to 28% (CI 25-31) (CDC, unpublished data). While variation in testing conditions can be accounted for by using reference positive control sera of known potency, the above examples highlight the difficulties in running longitudinal samples across different years or operators, and demonstrate that different estimates of antibody prevalence can be obtained from the same samples. Instances where positive control sera are not included or reported are especially troubling, and make it difficult to control for test variation across operators or laboratories. Efforts must be made to standardize methods, result interpretation, and reporting across laboratories.
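
Reporting the underlying counts alongside proportions allows such uncertainty to be reproduced and compared across studies. Confidence intervals of the kind quoted above are consistent with exact (Clopper-Pearson) binomial intervals, which can be computed as in the following sketch; the counts used here simply mirror the 166-bat example.

```python
# Exact (Clopper-Pearson) binomial confidence intervals for seroprevalence estimates,
# illustrated with counts matching the 166-bat example above (1/166 and 8/166 positive).
from scipy.stats import beta

def clopper_pearson(k: int, n: int, alpha: float = 0.05):
    """Exact two-sided binomial confidence interval for k positives out of n tested."""
    lower = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lower, upper

for k, n in ((1, 166), (8, 166)):
    lo, hi = clopper_pearson(k, n)
    print(f"{k}/{n}: {k/n:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```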

Serological Interpretation in Wildlife Populations

In most systems, the duration of detectable antibody following infection is not known. While positive reactions for one antibody class (i.e., IgM) can be taken as evidence of active or recent infection, it is not possible to infer timing of infection from more commonly detected antibody classes (e.g., IgG). Repeated infection may be necessary to induce an antibody response in some animals (Turmelle et al. 2010b), and antibodies may persist from weeks to years, depending on the host–pathogen system and individual variation (Aubert 1992). Importantly, the loss of detectable circulating antibody does not necessarily represent a loss of immunity to subsequent pathogen infection. Rather, the animal may remain immunologically primed to respond to re-infection (e.g., through memory lymphocytes), and mechanisms such as cell-mediated immunity may also play a significant role in infection resistance. It is typically impossible to know the exposure histories of wild free-ranging animals, particularly during cross-sectional studies, and extremely challenging to differentiate seronegative animals that were previously seropositive from animals that have never encountered the pathogen under study (i.e., are truly naïve) (Table 1).

With age-structured sampling of mammalian wildlife, maternally-derived antibody (MDAb) may be identified in nursing or recently weaned young, but the function of MDAb in most wildlife host-pathogen systems has not been well characterized (Boulinier and Staszewski 2008). The presence of MDAb can interfere with individual immune responses and can compromise the response to vaccination in offspring (Xiang and Ertl 1992; Muller et al. 2001; Siegrist 2003). Evaluating the proportion of antibody-positive dams is necessary for interpreting proportional antibody prevalence among offspring in a social group or population, as the antibody titer of the dam may affect the probability of transfer to, and the level of MDAb in, respective offspring (Muller et al. 2002; Boulinier and Staszewski 2008; Kallio et al. 2010). MDAb often wane in juveniles around the time of weaning (Muller et al. 2002; Plowright et al. 2008), yet may be detected for a much longer period using antibody-binding assays than using VNTs (Muller et al. 2005). The effects of MDAb may vary across and within host-pathogen systems, and few have been adequately studied. Susceptibility to infection is presumed to be high among offspring nursing from seronegative dams and, where breeding is seasonal, the infection of offspring that are naïve or have waning MDAb may modulate seasonal pulses of infection or disease outbreaks (Fouchet et al. 2007; Kallio et al. 2010; George et al. 2011; Plowright et al. 2011). Age-structured serological studies have great potential to provide highly informative insights for disease modeling (Heisey et al. 2006) and disease management strategies (Farrington et al. 2001), although the timing of sampling intervals and proper cohort representation are key considerations.

Despite substantial variation in the longevity of wildlife, the immune response to repeated pathogen infection in long-lived hosts has received little attention. For example, humoral immune responses may be less important following repeated infection of bats with RABV (Turmelle et al. 2010b), perhaps due to an increasing role of cell-mediated defenses (Moore et al. 2006; Horowitz et al. 2010), potentially complicating a reliance on antibody prevalence data for dynamic disease models in some host-pathogen systems. Certainly, expanding immunological surveillance among wildlife to include different measures of immunity holds exciting promise for modeling the perpetuation and emergence of infectious diseases in wildlife (Graham et al. 2007).

Beyond initial decisions about which specific serological test will be employed for a study, sample size and sampling strategy must be carefully considered. Strategies might include capture of free-ranging animals or capture within a roost, shelter, nest, or burrow. Capture of refuging animals may bias sampling toward sick or moribund animals, which may be more likely to be infected and seropositive. Comparison of studies investigating RABV NAb seroprevalence in bat populations suggests that seroprevalence differs between bats captured while roosting and those captured in flight (Constantine et al. 1968; Steece and Altenbach 1989; Turmelle et al. 2010a). Furthermore, individual immunological responses to infection among wildlife populations may vary due to host or environmental factors (Bouma et al. 2010; Graham et al. 2010; Hawley and Altizer 2011). All studies must consider that the age and social structures of populations can vary in space and time, potentially leading to variation in the types of individuals sampled and in estimates of antibody prevalence.

Conclusions

As new diagnostic techniques are developed for the study of wildlife disease, the challenges of interpreting results across systems are increasing. When properly employed, serological data can be very powerful for inference and modeling of infectious disease dynamics in wildlife, but their limitations must also be acknowledged. Development of conceptual and mathematical models prior to field sampling, greater consideration of pathogenesis and age structure in the population infection process, investment in longitudinal studies whenever possible, and standardized protocols for sample collection, storage, and testing can ensure that reliable and meaningful data are obtained for modeling applications, so that wildlife disease systems can be effectively characterized and intervention strategies evaluated.