Introduction

Misinformation, also referred to as disinformation or denialism, increasingly poses a challenge to invasive species policy and management (Russell and Blackburn 2017; Ricciardi and Ryan 2018). Whereas scientific debate is characterized by reasoned dialectic, healthy skepticism, and constructive consideration of differing interpretations and perspectives, misinformation campaigns use approaches such as ad hominem attacks, strawman arguments, appeals to emotion, and diversionary tactics to make untruthful and disingenuous assertions and to propagate information that directly contradicts substantial scientific evidence. Misinformation clouds scientific consensus, creates uncertainty, and obstructs policy, both for the management of invasive species (Ricciardi et al. 2017) and more broadly for issues of profound societal importance like climate change (Oreskes and Conway 2010; UCS 2018).

A salient example highlighting the problem of misinformation in invasion biology is the issue of free-ranging domestic cats (Felis catus). Overwhelming scientific consensus holds that cats are an invasive species; they have caused dozens of extinctions (Doherty et al. 2016), impact native wildlife populations (Loss and Marra 2017), and carry multiple zoonotic diseases (Gerhold and Jessup 2013). Yet cat population management is exceptionally contentious, likely due to the popularity of cats as human companion animals. Free-ranging cat advocates propagate misinformation about the ecological impacts of cats (Loss and Marra 2018; Stanley 2018) to overturn policies that would allow removal of cats to achieve biodiversity management objectives and to replace those policies with non-lethal options only (i.e., a “no-kill” policy). These non-lethal approaches—such as trap-neuter-return (TNR), where cats are trapped, sterilized, and released—are often presented to policymakers and the public as a panacea to reduce free-ranging cat populations and improve cat welfare. However, there is no rigorous scientific evidence that TNR is widely effective at reducing cat populations (Longcore et al. 2009; Marra and Santella 2016). Further, the claim that nonlethal approaches enhance cat welfare ignores that many outdoor cats live shortened and perilous lives (Barrows 2004; PETA 2018) and that they continue to transmit diseases and cause wild animal suffering through depredation (McRuer et al. 2017).

Advocates for free-roaming cats have also focused extensively on discrediting peer-reviewed scientific research on harmful impacts of cats, as exemplified by the response to a paper three of us (SRL, TW, and PPM) published that contains data-driven estimates of wildlife mortality due to cat predation in the United States (Loss et al. 2013). This quantitative synthesis of studies from around the world estimated that U.S. free-ranging pet cats and unowned feral and semi-feral cats annually kill 1.3–4.0 billion birds and 6.3–22.3 billion mammals. Loss et al. (2013) concluded that cats are the top source of direct human-caused mortality (i.e., excluding indirect threats like habitat loss) for U.S. birds and small mammals. This conclusion was supported by similar U.S. reviews showing lower bird mortality for other direct threats, including collisions with structures and automobiles (Longcore et al. 2012; Loss et al. 2015), and research showing cats to be the top source of direct mortality for birds in Canada (Blancher 2013; Calvert et al. 2013). Upon publication of Loss et al. (2013), the conclusion that cats are the top U.S. source of direct human-caused mortality for birds and small mammals spread rapidly in the media, reaching hundreds of millions of people through > 300 outlets (Angier 2013; Morelle 2013). The paper was also well-received in scientific circles, having been cited > 320 times as of July 2018 according to Google Scholar, with no instances of negative criticism.

Despite the paper’s favorable reception, free-ranging cat advocates launched an effort to discredit Loss et al. (2013). Alley Cat Allies (ACA), an organization that claims domestic cats balance ecosystems and have no harmful effects on wildlife (ACA 2017a), commissioned a report criticizing the paper’s methods (ACA 2013; Online Resource 1). An opinion article on the National Public Radio website (King 2013) referenced this report’s criticisms, further expanding its influence. An independent blogger, soon thereafter employed by Best Friends Animal Society (BFAS 2013), also posted a critique of the paper’s methods on his Vox Felina website (Wolf 2013; Online Resource 2) followed by presentations rehashing these criticisms in scientific venues, including the 2016 North American Congress for Conservation Biology and 2017 Wildlife Society Conference (Wolf 2017).

These efforts have influenced policies with ramifications for invasive species management. ACA conducts webinars to train supporters to influence policymakers and spread information designed to discredit Loss et al. (2013) and other scientific research (ACA 2017b, c). As a result, ACA and its supporters have given testimony in policy arenas across the U.S. in an attempt to discredit the science and fast-track policies that support TNR or otherwise keep cats on the landscape. The Council of the District of Columbia’s 2015 roundtable on the District’s Wildlife Action Plan (DDOE 2015) illustrates how these efforts have impacted policy. Among other priorities, the plan called for revisiting the District-funded TNR program, citing Loss et al. (2013) in describing cats as invasive species. At the public hearing, both ACA and the above-referenced blogger presented testimony attempting to discredit Loss et al. (2013). The councilmember chairing the session, believing the criticisms had merit, later referred to Loss et al. (2013) as “the discredited study” (CODC 2015) and helped amend the Fisheries and Wildlife Omnibus Amendment Act to remove all mention of cats as invasive species and to prevent removal of cats to achieve wildlife management objectives (CODC 2016).

As we show in detail in the remainder of this paper, none of these criticisms undermine the analysis and main conclusion of Loss et al. (2013) that cats annually kill billions of U.S. birds and mammals. The criticisms contain numerous overt errors and misrepresentations and were not published in the peer-reviewed literature. Nevertheless, we are compelled to respond to these criticisms through the formal peer-review process because continuing efforts to discredit the paper are in fact misleading policymakers and shaping policies that affect invasive species management. We also believe the cat issue is instructive for the broader problem of misinformation and denialism in invasive species management and conservation science. Below, we briefly summarize the methods in Loss et al. (2013) and respond to criticisms in both the Vox Felina post and ACA-commissioned review to assess whether they are part of the broader cat misinformation campaign (a summary of all criticisms and our response to each is in Online Resource 3).

Summary of methods in Loss et al. (2013)

Loss et al. (2013) conducted a systematic literature review to identify studies from around the world that quantified cat predation on wildlife—as well as studies with estimates of contiguous U.S. population sizes for owned pet cats and unowned feral and semi-feral cats, the proportion of pet cats allowed outdoors, proportions of owned and unowned cats that hunt, and the factor by which counts of prey returns by pets to owners underestimate total predation. Loss et al. (2013) excluded studies if it was unclear whether cats were owned or unowned, if they sampled < 10 cats, or if they were < 1 month in duration. Studies were only included if they were conducted in temperate zones and in mainland areas (continents and large islands, such as those constituting the United Kingdom).

In reviewing studies, Loss et al. (2013) took a conservative approach, meaning they were particularly careful to exclude studies or data that might inflate estimates of mortality due to special circumstances or unresolvable uncertainty. Average per-cat predation rates were taken directly from each study when reported, calculated from data in the paper, or obtained by contacting study authors. Studies of unowned cats presented either numbers or occurrence percentages of prey items in cat stomachs or scats. For studies reporting numbers, Loss et al. (2013) assumed one stomach/scat sample represented a cat’s daily intake—an example of the conservative approach to estimation, since cats typically digest prey within 12 h (Hubbs 1951) and produce two or more scats per day (Jackson 1951). For studies reporting occurrence percentages, Loss et al. (2013) assumed percentages represented a cat’s daily intake (e.g., if 10% of stomachs/scats contained at least one prey item, then predation = 0.1 prey items per stomach/scat per day)—an even more conservative approach, since the presence of prey could reflect more than one prey item. For studies with predation documented for less than an entire calendar year, annual predation rates were calculated using monthly proportions of expected mortality generated from studies where sampling covered the entire year. For example, if a study’s duration covered 3 months of the year in which 75% of annual predation mortality was expected to occur, the predation estimate was adjusted to a year-round estimate by adding the remaining 25% of expected annual mortality over the 9 non-sampled months.
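To make the arithmetic of this adjustment explicit, a worked version of the example follows; the formula is our restatement of the procedure described above, not a quotation of the original equations. If a study observed a per-cat rate $P_{\text{obs}}$ during sampled months expected to contain 75% of annual predation mortality, then

$$\text{annual rate} \;=\; \frac{P_{\text{obs}}}{0.75} \;\approx\; 1.33 \times P_{\text{obs}},$$

so the added amount ($P_{\text{obs}}/3$) corresponds to the remaining 25% of expected annual mortality spread over the 9 non-sampled months.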

To incorporate uncertainty, Loss et al. (2013) derived probability distributions for all model parameters. To generate median predation estimates with 95% CIs, Loss et al. (2013) conducted a Monte Carlo simulation analysis with 10,000 random draws from each parameter distribution. For owned cats, predation estimates were generated by multiplying values for five parameters (with uniform distributions for all but the first), including the number of pet cats in the contiguous U.S. (normal distribution: mean = 84 M; SD = 2.5 M), the proportion of pet cats with outdoor access (range 0.4–0.7), the proportion of outdoor pets that hunt (range 0.5–0.8), the rate of prey returns to owners (see Loss et al. 2013 for details), and a correction factor to account for cats not returning all prey to owners (range 1.2–3.3). For unowned cats, estimates were generated by multiplying values drawn from uniform probability distributions for the number of unowned cats in the contiguous U.S. (range 30–80 million), the proportion of unowned cats that hunt (range 0.8–1.0), and estimated annual predation rates for hunting unowned cats (see Loss et al. 2013 for details).
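To make the structure of this simulation concrete, below is a minimal sketch in Python of a Monte Carlo analysis of this form. The ranges for cat numbers, outdoor access, hunting proportions, and the prey-return correction factor are those summarized above; the per-cat prey return rate and unowned-cat predation rate distributions are only referenced above (not fully specified), so the placeholder ranges PREY_RETURN_RANGE and UNOWNED_PREDATION_RANGE are hypothetical stand-ins for illustration, not the published values.

```python
import numpy as np

rng = np.random.default_rng(42)
N_DRAWS = 10_000  # number of Monte Carlo iterations, as in Loss et al. (2013)

# Parameters summarized in the text (cat numbers in individuals; proportions unitless).
OWNED_CATS_MEAN, OWNED_CATS_SD = 84e6, 2.5e6   # normal distribution
OUTDOOR_ACCESS = (0.4, 0.7)                    # uniform
OWNED_HUNTING = (0.5, 0.8)                     # uniform
RETURN_CORRECTION = (1.2, 3.3)                 # uniform
UNOWNED_CATS = (30e6, 80e6)                    # uniform
UNOWNED_HUNTING = (0.8, 1.0)                   # uniform

# Hypothetical placeholders: the published paper derives these from the
# literature; the numbers below are illustrative only.
PREY_RETURN_RANGE = (5, 20)          # prey returned to owners per cat per year
UNOWNED_PREDATION_RANGE = (30, 50)   # prey killed per hunting unowned cat per year

def draw(bounds, size):
    """Draw from a uniform distribution defined by (min, max) bounds."""
    return rng.uniform(*bounds, size)

# Owned (pet) cat predation: product of five parameter draws per iteration.
owned_kills = (
    rng.normal(OWNED_CATS_MEAN, OWNED_CATS_SD, N_DRAWS)
    * draw(OUTDOOR_ACCESS, N_DRAWS)
    * draw(OWNED_HUNTING, N_DRAWS)
    * draw(PREY_RETURN_RANGE, N_DRAWS)
    * draw(RETURN_CORRECTION, N_DRAWS)
)

# Unowned cat predation: product of three parameter draws per iteration.
unowned_kills = (
    draw(UNOWNED_CATS, N_DRAWS)
    * draw(UNOWNED_HUNTING, N_DRAWS)
    * draw(UNOWNED_PREDATION_RANGE, N_DRAWS)
)

total = owned_kills + unowned_kills
median = np.median(total)
lo, hi = np.percentile(total, [2.5, 97.5])  # 95% interval of the simulated totals
print(f"median = {median:.2e}, 95% interval = ({lo:.2e}, {hi:.2e})")
```

Each of the 10,000 iterations multiplies one random draw per parameter, and the median and 2.5th/97.5th percentiles of the resulting totals summarize the estimate and its uncertainty.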

Response to criticisms in Vox Felina blog post

The author of the blog post criticizing Loss et al. (2013) on the Vox Felina website (Wolf 2013, Online Resource 2) is widely viewed in the feral cat advocacy community as an expert on the science of cat impacts and management (BFAS 2013). His writings generally criticize any scientific publication that shows adverse impacts from feral cats or questions the effectiveness of TNR as a management approach, starting with Longcore et al. (2009) and continuing to the present. Wolf, in many recent posts and presentations (e.g., Wolf 2017), has focused extensively on criticizing Loss et al. (2013) without acknowledging any information that would undermine his critiques. In addition, Wolf’s presentations at scientific conferences have included little new material beyond the original blog post. Wolf and other advocates for TNR-only policies have frequently repeated his claims in policy discussions as if they had undisputed merit (CODC 2015). We here respond to these criticisms because they have contributed substantially to shaping the public and policy discourse regarding cat impacts and management.

Criticism: cat predation estimates are unrealistic given the total number of U.S. birds

Wolf has repeatedly claimed that Loss et al.’s (2013) annual estimates of cat predation on birds (1.3–4.0 billion) are not credible because the total estimated breeding population of North American (U.S. and Canada) land birds is 4.9 billion. He credits this estimate to Arnold and Zink (2011), but it was originally generated in Blancher et al. (2007) with data from the North American Breeding Bird Survey (NABBS). As noted in Blancher et al. (2007), 4.9 billion is “likely a conservative total, however, as densities from Breeding Bird Censuses suggest the total could be 2–3 times higher in some regions” (Rosenberg and Blancher 2005). Moreover, these population estimates only include adult birds at the onset of the breeding season, not young-of-the-year birds that hatch after surveys are conducted. An unknown but undoubtedly enormous number of these hatch-year birds do not survive to be counted in the following survey period, and many of these nestlings and fledglings are depredated by cats (Balogh et al. 2011; Stracey 2011). Furthermore, the Blancher et al. (2007) land bird estimates exclude other taxa (waterfowl, shorebirds, waterbirds, and secretive marshbirds), the adults of which—and certainly their nestlings and fledglings—would perhaps triple or quadruple an estimate of the total number of birds available. Other sources suggest roughly 10 billion birds are present in the contiguous U.S. in the pre-breeding season and 20 billion are present in the fall season (USFWS 2002), and even these estimates ignore hatch-year birds that perish during summer, the period when cat predation generally peaks. The Loss et al. (2013) predation estimates are thus reasonable given that the cumulative number of U.S. birds alive and susceptible to predation over an entire calendar year is far greater than 4.9 billion.

Criticism: cat predation does not necessarily lead to population-level impacts

Wolf claims that Loss et al. (2013) fails to acknowledge that predation does not always cause population-level impacts; he further claims such impacts are unlikely because cats tend to prey on “the young, the old, the weak, or unhealthy” that would have died anyway (Wolf 2013). Notably, Loss et al. (2013) never had the objective of assessing population impacts, yet they did state that such impacts are likely for some species in some mainland locations, a conclusion that has since been supported by multiple studies from around the world (Loss and Marra 2017). Because some of these studies existed in 2013 (Crooks and Soulé 1999; van Heezik et al. 2010; Balogh et al. 2011), Wolf’s criticism ignored evidence suggesting such impacts were likely.

Wolf’s claim that cats prey mainly on weak and unhealthy individuals is also unsupported by scientific evidence. Neither of the studies he cited that assessed bird body condition (Møller and Erritzøe 2000; Baker et al. 2008) provides evidence that the birds killed would have had lower fitness or survival, and one of the studies explicitly warns against such a conclusion (Baker et al. 2008). Further, a crude assessment of body condition to determine if birds would have died without cat predation overlooks the substantial challenges and complexity of determining whether mortality is additive or compensatory at the population level (Loss and Marra 2017). Finally, a population-level focus reflects a philosophy that ignores the individual welfare and suffering of the animals that cats injure and kill. This philosophy contradicts the narrative used to justify no-kill policies, which focuses on concerns about individual cat welfare (Longcore et al. 2009).

Criticism: cat predation estimates have broad uncertainty

Wolf also criticizes the broad uncertainty around the Loss et al. (2013) predation estimates, citing an earlier paper by the same authors (Loss et al. 2012) highlighting limitations of wildlife mortality estimates that are extrapolated from a limited sample of data and do not account for uncertainty. Thus, Wolf attempts to discredit Loss et al. (2013) by claiming their methods do not follow their own recommendations. However, Wolf quotes Loss et al. (2012) out of context; that paper referred to limitations of extrapolating from one or a few small-scale studies without accounting for estimate uncertainty. As described above, Loss et al. (2013) synthesized data from multiple studies and defined data-derived probability distributions for all model parameters, allowing for explicit and transparent accounting for estimate uncertainty. Loss et al. (2013) also conducted a sensitivity analysis to quantify the amount of estimate uncertainty contributed by each parameter. The wide uncertainty around estimates therefore represented the state of the science on cat predation, and the sensitivity analysis highlighted opportunities to refine estimates with further research. Notably, even the lowest bounds of the estimates still amount to an exceptionally high number of birds and mammals killed by cats.
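Loss et al. (2013) do not detail their sensitivity procedure in the summary above, so the sketch below shows only one common approach, offered purely as an illustration of what such an analysis does: re-run the simulation with each parameter fixed at its central value and record how much the width of the resulting interval shrinks. The parameter names and bounds here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000

# Illustrative (hypothetical) uniform parameter bounds for an estimate formed
# as a product of three parameters; the point is the procedure, not the numbers.
params = {
    "n_cats": (30e6, 80e6),
    "prop_hunting": (0.8, 1.0),
    "kills_per_cat": (30, 50),
}

def simulate(fixed=None):
    """Monte Carlo product of the parameters; optionally fix one at its midpoint."""
    draws = np.ones(N)
    for name, (lo, hi) in params.items():
        if name == fixed:
            draws *= (lo + hi) / 2            # hold this parameter constant
        else:
            draws *= rng.uniform(lo, hi, N)   # sample the others
    return np.percentile(draws, [2.5, 97.5])

base_lo, base_hi = simulate()
base_width = base_hi - base_lo
for name in params:
    lo, hi = simulate(fixed=name)
    reduction = 1 - (hi - lo) / base_width
    print(f"fixing {name:>13s} shrinks the 95% interval width by {reduction:.0%}")
```

Parameters whose fixing shrinks the interval the most contribute the most uncertainty, which is how a sensitivity analysis points to the parameters most in need of better data.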

Criticism: the estimate of predation by unowned cats is inflated

Wolf has claimed that the Loss et al. (2013) estimates of predation by unowned cats are inflated for several reasons. For the number of unowned cats (range 30–80 million), Wolf states that no empirically derived estimates existed to inform the distribution range, a limitation Loss et al. (2013) in fact acknowledged. The authors did cite several sources providing non-empirical estimates, and the most commonly cited figure of 60–100 million cats is actually higher than the numbers used by Loss et al. (2013). If one assumes the actual numbers of unowned cats lie at the lower end of the distribution range (30 million cats), then the lower ends of the Loss et al. (2013) predation estimates reflect this possibility. As illustrated by the Loss et al. (2013) sensitivity analysis, this parameter contributed the greatest uncertainty to predation estimates. Wolf might have highlighted the need for research to improve the estimates, but he has provided no evidence that the numbers are lower than those cited by nearly every authority, including the Humane Society of the United States (30–40 million; HSUS 2018).

Wolf also claimed that Loss et al. (2013) was unjustified in assuming 80–100% of unowned cats kill wildlife because the studies cited are only on rural cats, and some urban studies have resulted in few direct observations of predation. Wolf further claimed that many urban cats reduce their hunting frequency because they are fed by humans. Regarding the former claim, the urban studies Wolf cites had no objective of estimating predation (Calhoon and Haspel 1989; Castillo and Clarke 2003), and the anecdotal nature of their observations does not allow conclusions about the proportion of cats that hunt or the frequency of predation events. The latter claim also has limited support in the peer-reviewed literature; indeed, studies show that cats hunt and kill regardless of whether they are fed by humans (Liberg 1984; Barratt 1998; but see Silva-Rodríguez and Sieving 2011). In the Loss et al. (2013) summary of rural studies, the lowest documented hunting proportion was 90%, yet they allowed for an 80–100% hunting rate, an approach that was actually more conservative than the literature suggests.

The Vox Felina blog post also criticized the annual predation rates of unowned cats. Wolf claimed that some of the earlier studies Loss et al. (2013) used, particularly those from the 1930s–1950s, overestimate predation because their data collection method (shooting cats along roads and assessing stomach contents) only samples cats that are hunting. This criticism is irrelevant because the predation rate parameters Loss et al. (2013) derived were only meant to reflect hunting cats, and as described above, the study derived a separate parameter to account for cats that do not hunt. Further, although stomach contents analyses do not provide an exact representation of numbers of prey killed, Loss et al. (2013) implemented a transparent and conservative approach to interpreting these data (see methods summary above). Wolf also claims predation estimates are inflated because Loss et al. (2013) used a uniform distribution rather than a skewed distribution. This criticism reflects a fundamental misunderstanding of the methods. Variation in predation among individual cats does tend to be skewed, but the uniform distributions Loss et al. (2013) derived were based on study averages across multiple cats. There is no evidence that among-study variation in average cat predation follows a skewed distribution. Finally, a more recent study using substantially more predation data reported higher rates of predation on birds by individual unowned cats in Australia (Woinarski et al. 2017), suggesting the range used for this parameter by Loss et al. (2013) was likely conservative.
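The distinction between skewed individual-level predation and study-level averages can be demonstrated with a short simulation; the distributional form and parameter values below are hypothetical and are not drawn from the cited studies.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical right-skewed individual predation rates (prey per cat per year):
# most cats kill relatively little, a few kill a great deal.
individual = rng.lognormal(mean=2.5, sigma=1.0, size=100_000)

# Simulated "studies", each reporting the mean rate across 40 sampled cats
# drawn from the same skewed individual-level distribution.
study_means = rng.lognormal(mean=2.5, sigma=1.0, size=(500, 40)).mean(axis=1)

print(f"skewness of individual rates: {stats.skew(individual):.2f}")
print(f"skewness of study averages:   {stats.skew(study_means):.2f}")
```

Even when individual rates are strongly right-skewed, averages across the tens of cats typically sampled within a study are far less skewed, which is why modeling among-study averages with a uniform distribution is not contradicted by skew at the individual-cat level.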

Criticism: the estimate of predation by owned cats is inflated

Wolf also claimed that the estimates of predation by owned cats are inflated for several reasons. For the proportion of pet cats outdoors (range 0.4–0.7), Wolf suggested that two sources Loss et al. (2013) used to generate the distribution range (Marketing and Research Services, Inc. (MRS) 1997; American Bird Conservancy (ABC) 2012) actually referred to the same survey. Although it does appear that the ABC estimate originated from the earlier MRS survey, counting a study twice would have no effect on the parameter values drawn or the resultant predation estimates, because Loss et al. (2013) assumed a uniform distribution whose bounds are unchanged by a duplicated value. Wolf further claimed that Loss et al. (2013) did not distinguish between cats that are outdoors at all times and those that spend at least some time indoors, and that predation estimates are therefore inflated because indoor-outdoor cats probably kill fewer animals than cats that are outdoors at all times. Loss et al. (2013) derived the range of 40–70% of pet cats outdoors from published surveys for which data were unavailable to parse apart the number of hours each cat was allowed outdoors. This criticism therefore relates primarily to the level of detail in the original studies, not to the methods in Loss et al. (2013). Additionally, since studies from which Loss et al. (2013) extracted data included multiple cats that spent varying amounts of time outdoors, they indirectly captured variation in time spent outdoors in the prey return distributions. Finally, even if prey return parameters were overestimated, there would be little overall effect on predation estimates because pet cat predation rates are far lower than those for unowned cats.

Wolf has also claimed in scientific conference presentations (Wolf 2017) that Loss et al. (2013) used inflated values for the proportion of outdoor cats that hunt (range 0.5–0.8). A study published after Loss et al. (2013) offers further insight on this proportion. Loyd et al. (2013) used cat-borne videos to determine that 44% of owned cats hunted during an average monitoring period of 38 h. With a longer monitoring period, this rate would certainly meet and likely exceed the values used by Loss et al. (2013). For prey return rates, Wolf states in the blog post that estimates are biased for the same reasons he claims bias in unowned cat predation rates; however, as described above, this criticism about the skewed nature of predation reflects a misunderstanding of the way Loss et al. (2013) defined distributions for the predation rates.
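For intuition about why a longer monitoring window would raise the observed proportion of hunters, consider a deliberately simple constant-rate model that we introduce here purely for illustration (neither Loyd et al. 2013 nor Loss et al. 2013 assumes it): if every monitored cat hunted at a constant rate $\lambda$, then a 44% chance of recording at least one hunting event within 38 h implies

$$1 - e^{-38\lambda} = 0.44 \;\Rightarrow\; \lambda \approx 0.015\ \text{h}^{-1}, \qquad P(\text{hunt within 168 h}) = 1 - e^{-168\lambda} \approx 0.92.$$

Real cats differ in their propensity to hunt, so this simple model overstates the effect, but it illustrates how a short observation window understates the proportion of cats that hunt at all.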

Wolf also claims that Loss et al. (2013) misapplied the correction factor for prey items not returned to owners from Kays and DeWan (2004) because this value (3.3) was based only on observations of cat hunting success for mammals in summer. Loss et al. (2013) applied this correction factor for all prey taxa and seasons because there is no evidence that the proportion of prey items returned to owners varies taxonomically or seasonally. A more recent study indicates that the Loss et al. (2013) estimate for this correction factor may actually be conservative. Cat-borne videos showed that only 23% of prey items were returned to owners, suggesting a correction factor of 4.3 (Loyd et al. 2013), a value that if used would have increased predation estimates. Wolf was correct in pointing out that Loss et al. (2013) misinterpreted George (1974) in deriving a correction factor of 2 from that study’s statement that 50% of predation events were observed. This percentage actually referred to an observation bias associated with the author’s survey methods, not the prey return behavior of cats. This misinterpretation has frequently been made in the literature (Fitzgerald and Turner 2000), but in the context of Loss et al. (2013), it has no effect on the correction factor distribution or the predation estimates because the value of 2 falls between the other two values used to inform the bounds of the uniform distribution.
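The arithmetic behind the alternative correction factor cited above is simply the reciprocal of the observed return proportion:

$$\text{correction factor} = \frac{\text{total prey killed}}{\text{prey returned to owners}} = \frac{1}{0.23} \approx 4.3,$$

compared with the maximum of 3.3 used to bound the distribution in Loss et al. (2013).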

Response to criticisms in Alley Cat Allies report

In support of the campaign to discredit Loss et al. (2013), Alley Cat Allies commissioned a report criticizing the paper’s statistical methods (ACA 2013; Online Resource 1). The report, which was not published in the peer-reviewed literature, raised seven criticisms, two of which duplicated those in the Vox Felina post (counting a study twice; not accounting for variation in the amount of time pet cats are outdoors). Like the Vox Felina criticisms, the ACA report is characterized by errors and misrepresentations that undermine its credibility. Nonetheless, because the report has been influential in propagating claims that the Loss et al. (2013) study has been discredited, we here respond to the five criticisms not already discussed above.

Criticism: no meta-analysis was performed

The ACA report claimed that, instead of conducting a Monte Carlo simulation analysis, Loss et al. (2013) should have (1) included a meta-analysis to calculate an estimate of “some effect size or parameter” or (2) addressed why such an analysis was not conducted. As the ACA report noted, formal meta-analyses require a collection of studies that estimate both the effect size for a relationship between two variables and the uncertainty of that effect size (Gurevitch et al. 2001). However, a formal meta-analysis is not possible for the question of how many animals are killed by cats because this question entails no relationship between variables and therefore no effect size estimates. The ACA report is therefore incorrect in suggesting a meta-analysis could be conducted.

Criticism: extrapolation is easily misused

The ACA report claimed that Loss et al. (2013) misused data extrapolation in a number of ways. However, these criticisms highlight either a fundamental misunderstanding or purposeful misrepresentation of the paper’s methods. First, to provide an analogy for the perils of extrapolating, the report describes an example where a linear relationship between age and height is quantified for a sample of children and then used to extrapolate height estimates for adults, resulting in inflated height estimates because growth slows after a certain age. This example suggests that Loss et al. (2013) similarly extrapolated to generate predation rate estimates higher than those observed in the data. However, the ACA report analogy is flawed and misleading because, as clearly described in Loss et al. (2013), the predation rate distributions employed predation values within, not above, the range of values in the literature. There is no reason to believe that predation rates of non-sampled U.S. cats differ systematically from those of the sampled cats used to inform the predation rate parameters. Therefore, this is not extrapolation in the same sense as the age-height example and instead is likely to have resulted in representative, not inflated, predation estimates.

Second, the report misrepresents how Loss et al. (2013) calculated full-year predation rate estimates from studies with sampling that covered less than an entire calendar year. The report claimed that monthly predation estimates were multiplied by 12 to generate annual estimates, an approach that ignores seasonal fluctuations and would cause overestimation of mortality given declines in predation during winter. However, as Loss et al. (2013) clearly stated, and as we describe above, this seasonal variation was accounted for by adjusting partial-year predation estimates using the average monthly proportions of expected mortality in non-sampled months, as generated from studies where sampling covered the entire year. For example, if a study’s duration only covered 3 summer months when 75% of annual predation mortality was expected to occur, the predation estimate was adjusted to a year-round estimate by adding 25% additional mortality over the 9 non-sampled months.

Third, the report claims that in some instances Loss et al. (2013) extrapolated a single predation rate estimate to all U.S. cats. For example, the report states: “Based on a small sample of cats over three summer months in one specific geographic area, the authors see fit to extrapolate this predation rate to all cats at all times of the year in all geographic regions of the U.S.” This criticism misses the point that the analysis was based on probability distributions derived from multiple studies, each including data on multiple cats, and it further ignores how Loss et al. (2013) accounted for seasonal predation variation, as described above. Finally, the report claims it was unclear how Loss et al. (2013) calculated predation estimates for studies of cat stomach and scat contents. However, this methodology was transparently described in the paper’s methods, and as described there and above, the approach used by Loss et al. (2013) more likely resulted in lower rather than higher estimates.

Criticism: ad hoc analysis

The ACA report criticizes the use of uniform distributions for many model parameters and claims that decisions on defining ranges for these distributions were ad hoc and not based on a formal statistical method. The report provides a specific example for the proportion of owned cats allowed outdoors where eight literature estimates (0.66, 0.5, 0.65, 0.4, 0.43, 0.77, 0.36, 0.56) were used to derive a uniform distribution with min = 0.4 and max = 0.7. This criticism ignores that Loss et al. (2013) did follow a formal statistical procedure to define many distributions; for some data-rich parameters, including the prey return rate and predation rate, distributions were based on 95% CIs calculated from values in the literature. Further, the use of uniform distributions is defensible because few data were available for several parameters, and the authors had no available justification to ascribe greater probability to one value over another. For such data-poor parameters, ranges were defined either to capture exactly the observed minimum and maximum values or to sit slightly below observed values, in accordance with the conservative approach Loss et al. (2013) chose to take. For the specific example noted above, the range of 0.4–0.7 brackets the values from three nationwide studies (0.5, 0.65, and 0.66) and allows for slightly lower and higher values as indicated by local studies. This approach follows a logical, repeatable procedure and was transparently described in the methods and supplementary methods sections of Loss et al. (2013).
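The procedure for this data-poor parameter can be restated in a few lines of Python; this is a sketch of the logic described above, using the eight values quoted in the report and the 0.4–0.7 bounds as reported.

```python
import numpy as np

# Eight literature estimates of the proportion of pet cats allowed outdoors,
# as listed in the ACA report; three come from nationwide surveys.
all_estimates = [0.66, 0.5, 0.65, 0.4, 0.43, 0.77, 0.36, 0.56]
nationwide = [0.5, 0.65, 0.66]

# Bounds used by Loss et al. (2013): a uniform range that brackets the
# nationwide values (0.5-0.66) and admits somewhat lower and higher values
# indicated by local studies, without extending to the most extreme local
# values (0.36 and 0.77).
lower, upper = 0.4, 0.7

# In the Monte Carlo step, the parameter is then sampled uniformly within
# these bounds; a duplicated source would not change the bounds or the draws.
rng = np.random.default_rng(3)
draws = rng.uniform(lower, upper, 10_000)
print(f"nationwide range: {min(nationwide)}-{max(nationwide)}; "
      f"bounds used: {lower}-{upper}; mean of draws: {draws.mean():.2f}")
```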

Criticism: mischaracterization of error

The ACA report claims that Loss et al. (2013) did not acknowledge and account for error in the individual estimates used to define probability distributions. The report again uses the example of the proportion of owned cats outdoors and highlights how one literature estimate of 0.4 has an associated 95% CI of 0.26–0.53. Importantly, Loss et al.’s (2013) conclusions regarding the amount of wildlife mortality from cat predation would remain unchanged even with incorporation of uncertainty around each individual parameter estimate extracted from the literature. If the 95% CI for each individual estimate were considered, the overall probability distributions would be wider, reflecting the lower confidence interval bounds for the lowest estimates and the upper bounds for the highest estimates. However, the mean and median of the distributions would remain essentially unchanged. This would result in broader uncertainty around resultant predation estimates but negligible change to median estimates.
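This reasoning can be checked with a short simulation. The eight point estimates below are those quoted in the report; the report gives a 95% CI only for the 0.4 estimate (0.26–0.53), so applying a CI half-width of roughly 0.13 to every estimate is an invented simplification used solely for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 100_000

# Point estimates of the proportion of owned cats outdoors (from the ACA report).
points = np.array([0.66, 0.5, 0.65, 0.4, 0.43, 0.77, 0.36, 0.56])

# Hypothetical 95% CI half-width applied to every estimate (the one CI the
# report quotes, 0.26-0.53 around 0.40, has a half-width of about 0.13).
half_width = 0.13
ci_low, ci_high = points - half_width, points + half_width

# Uniform distribution bounded by the point estimates vs. by the outermost CI limits.
narrow = rng.uniform(points.min(), points.max(), N)   # 0.36-0.77
wide = rng.uniform(ci_low.min(), ci_high.max(), N)    # 0.23-0.90

for label, d in [("point-estimate bounds", narrow), ("CI-widened bounds", wide)]:
    print(f"{label:>22s}: median = {np.median(d):.2f}, "
          f"width = {d.max() - d.min():.2f}")
```

The widened bounds substantially increase the spread of the parameter distribution while leaving its median essentially unchanged, which is why incorporating within-study error would broaden the predation intervals but barely move the median estimates.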

Criticism: authors cite sources that are not peer-reviewed

The report claims that three of the references Loss et al. (2013) used to estimate the proportion of pet cats allowed outdoors were not peer-reviewed (American Pet Products Manufacturers Association, Inc. (APPMA) 1997; MRS 1997; ABC 2012). However, the report provides no reason why the use of these references would bias predation estimates. Industry reports include the most comprehensive national analyses of pet ownership behaviors available, and they follow statistically grounded survey methodologies. Furthermore, Loss et al. (2013) used several additional sources, including four peer-reviewed studies, to define the probability distribution for this parameter.

Conclusions

The conclusions in Loss et al. (2013) regarding the large numbers of U.S. birds and mammals killed by cats have been overwhelmingly accepted by the scientific community. Yet free-ranging cat advocates have used the unpublished and non-peer-reviewed critiques that we address above to successfully convince some policymakers and members of the public that the Loss et al. (2013) study has been widely discredited. As we have shown, these criticisms are either completely unfounded or relate to minor issues that do not undermine the study’s conclusions. In light of the emotional responses cats typically elicit, these arguments against the findings of Loss et al. (2013) are at best naïve and at worst intentionally misleading. Given the overt misrepresentations and errors that characterize these criticisms, we argue that it is justifiable to interpret the critiques as part of the broader misinformation campaign designed to purposefully fabricate doubt regarding the harmful impacts of outdoor cats and to stymie policies that would remove outdoor cats from the landscape (Loss and Marra 2018).

The misinformation surrounding the issue of free-ranging cats joins the growing body of unsubstantiated and fabricated claims and denialism surrounding invasive species science and management (Ricciardi and Ryan 2018). In offering for publication in the peer-reviewed scientific literature a point-by-point refutation of the claims attempting to discredit the Loss et al. (2013) study, we in no way wish to establish or imply the legitimacy of those claims. Rather, we intend to draw attention, by way of a specific example, to certain general aspects of a misinformation campaign, namely: (1) the use of avenues of influence (even presentations at scientific meetings) that bypass scientific peer review in an effort to accrue false legitimacy, and (2) the technique of delegitimizing an argument by isolating and criticizing individual elements while ignoring both context and explanations of methodology. Both misinformation strategies underscore the importance of thorough peer review—in the published literature, in the more informal setting of scientific meetings, and ultimately in public policy forums. Notably, misinformation about cat management, as well as other aspects of invasive species denialism, is often associated with implicit and explicit threats of violence against scientists and policymakers (Carey 2012; Marra and Santella 2016; Power 2017, and personal experience of the authors), and this is particularly concerning with regard to safeguarding civil, evidence-based policy and discourse. We re-emphasize that misinformation and denialism are characterized by unsubstantiated assertions that contradict scientific evidence, not by honest disagreement, differing interpretations, civil discourse, and healthy skepticism that characterize the scientific endeavor (Crowley et al. 2017a; Russell and Blackburn 2017).

Several factors have been identified that lead to conflict in invasive species management, including conflicts surrounding conservation efforts that involve companion animals. These factors may also contribute to invasive species conflicts becoming increasingly dominated by misinformation and denialism. Factors that contribute to conflict include a failure to recognize and consider the full social context surrounding invasive species issues, a lack of inclusive public engagement, and a communication framework that assumes that one-way information transfer from experts to the public will effectively increase support for management (Farnworth et al. 2014; Courchamp et al. 2017; Crowley et al. 2017b). As reviewed by Ricciardi and Ryan (2018), motivations behind invasive species misinformation and denialism could include public distrust of scientists and scientific institutions, conflicting values about what constitutes nature and nativeness, and the media giving equal space to dissenting viewpoints that contradict overwhelming scientific consensus. For the cat issue, the goal of feral cat advocates to completely eliminate lethal management approaches in favor of nonlethal methods like TNR—which does not fully address cat welfare concerns (Barrows 2004; PETA 2018) and allows continued existence of cats on the landscape, along with their associated predation and disease transmission—is also likely a key motivator for efforts to discredit peer-reviewed science showing harmful effects of free-ranging cats. These types of invasive species conflicts, and potentially the rise of denialism and misinformation, may be avoided by considering the social context of management issues, recognizing that viewpoints are influenced by values as much as by evidence, and engaging the public inclusively using a collaborative, multidirectional communication framework (Estévez et al. 2015; Courchamp et al. 2017; Crowley et al. 2017b).

In cases where misinformation and denialism are already at play in affecting invasive species policy, additional steps will be needed to counteract their influence (Loss and Marra 2018). Investigative journalism to expose misinformation will help counteract the media’s role in portraying scientific consensus as up for debate. Identifying, exposing, and counteracting sources of misinformation and denialism on the internet, including on social media websites, will also be important given the increasing role of these platforms in shaping public discourse on controversial issues. Further social science research is also needed to identify the scenarios and situations that lead to the emergence of misinformation and denialism in invasive species conflicts. Finally, making the public and policymakers aware of misinformation, and providing them with authentic scientific information and refutation of faulty claims, should facilitate evidence-based policy (Russell and Blackburn 2017). Our response to the misleading criticisms of Loss et al. (2013) follows the spirit of this recommendation by marshaling evidence to expose the fabricated controversy around free-ranging cats. We hope this response paves the way toward evidence-based management of free-ranging cats and also stimulates discussion and research into sources and impacts of misinformation and denialism in invasion biology more broadly.