Abstract
Camera trap data are biased when an animal passes through a camera’s field of view but is not recorded. Cameras that operate using passive infrared sensors rely on their ability to detect thermal energy from the surface of an object. Optimal camera deployment consequently depends on the relationship between a sensor array and an animal. Here, we describe a general, experimental approach to evaluate detection errors that arise from the interaction between cameras and animals. We adapted distance sampling models and estimated the combined effects of distance, camera model, lens height, and vertical angle on the probability of detecting three different body sizes representing mammals that inhabit temperate, boreal, and arctic ecosystems. Detection probabilities were best explained by a half-normal-logistic mixture and were influenced by all experimental covariates. Detection monotonically declined when proxies were ≥6 m from the camera; however, models show that body size and camera model mediated the effect of distance on detection. Although not a focus of our study, we found that unmodeled heterogeneity arising from solar position has the potential to bias inferences where animal movements vary over time. Understanding heterogeneous detection probabilities is valuable when designing and analyzing camera trap studies. We provide a general experimental and analytical framework that ecologists, citizen scientists, and others can use and adapt to optimize camera protocols for various wildlife species and communities. Applying our framework can help ecologists assess trade-offs that arise from interactions among distance, cameras, and body sizes before committing resources to field data collection.
Introduction
Camera traps are widely used to collect wildlife data that can inform conservation and management decisions at local, ecosystem, and landscape levels (Keim et al., 2021; Swanson et al., 2015; Visscher et al., 2017, 2023). They have become a standard method of monitoring wildlife across varying spatial and temporal scales (Meek et al., 2014; Steenweg et al., 2017) because of their ability to efficiently record multiple species using non-invasive, autonomous methods. Camera trap data can be used to assess wildlife distribution, diversity, composition, abundance, density, population trends, and behaviour provided they are deployed following a rigorous sampling design (Burton et al., 2015; McIntyre et al., 2020; Rovero et al., 2013).
Like many monitoring approaches, camera traps can produce false negatives, where an animal is present but not recorded. The detection process consists of a series of conditional probabilities (Hofmeester et al., 2019) that can be formulated as (Kays et al., 2021):

pi = re × rt × rp × Ni

where the probability of observing an animal (pi) in time interval i is the product of three parameters and the number of visits during that interval (Ni). The parameters are the conditional probabilities of an animal encountering the camera given that it is within the sample unit (re), triggering the camera once it has encountered it (rt), and the camera capturing a usable image of the animal (rp). These processes are sometimes combined into a single probability estimate (Burton et al., 2015); however, understanding each component is valuable when inferring ecological phenomena at broad spatial scales (Kays et al., 2021) or when using camera data to obtain information about the encounter process (Findlay et al., 2020; Moeller et al., 2018; Nakashima et al., 2018).
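The arithmetic of this decomposition can be illustrated with a short numerical sketch; the probability values below are hypothetical and chosen only to show how the components combine.

```python
# Hypothetical illustration of the detection decomposition: the probability
# of observing an animal in interval i is the product of the encounter (re),
# trigger (rt), and image-capture (rp) probabilities and the number of
# visits (Ni) during that interval.

def observation_probability(r_e, r_t, r_p, n_visits):
    """Toy calculation of pi = re * rt * rp * Ni."""
    return r_e * r_t * r_p * n_visits

# Hypothetical values: encounter (0.5), trigger (0.8), usable image (0.9),
# and two visits during the interval.
p = observation_probability(r_e=0.5, r_t=0.8, r_p=0.9, n_visits=2)
print(round(p, 2))  # 0.72
```

Note how a weakness in any single component (e.g. a poorly aimed camera lowering rt) depresses the overall observation probability multiplicatively.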
The probability of detecting an animal passing through a camera’s field of view (rt) arises from the interaction between the camera and the animal. Most camera traps use passive infrared (PIR) sensors that detect thermal energy emitted from the surface of an object; these sensors trigger when a heat differential moves across the sensor array (Meek et al., 2014; Welbourne et al., 2016). Cameras are therefore more effective at detecting animals whose surface areas are large relative to the sensor array. For example, the probability of detection declines as body size decreases and distance increases (Heiniger & Gillespie, 2018; Howe et al., 2017; Jacobs & Ausband, 2018; Mason et al., 2022; Rowcliffe et al., 2011) because the animal’s heat signature appears smaller to the sensor. Similarly, camera type, height, angle, and target species can affect rt by changing the relationship between the sensor array and the animal (Apps & McNutt, 2018; Kelly & Holub, 2008; Welbourne et al., 2016).
Given the growing demand for data to support the conservation and management of multiple species (i.e. biodiversity), quantifying detection can help ecologists make a priori decisions about camera trap deployments and the subsequent analysis and interpretation of data. Here, we describe a general, experimental approach that extends distance models to evaluate detection errors that occur when an animal passes through a camera’s field of view but is not recorded (rt). Using two camera models commonly used in wildlife research (Reconyx HP2X and PC900), we first calculated each model’s horizontal field of view. We then estimated the effect of distance, camera lens height, model, and vertical angle on the probability of detecting three different body sizes representing mammals ranging in size from red fox (Vulpes vulpes) to moose (Alces alces). Our goal was to develop a general, experimental, and analytical framework that ecologists, citizen scientists, and others can use, and adapt, to maximize detection probabilities (rt) for a given wildlife species or community.
Materials and methods
Data collection
We conducted experiments in a flat, grassy field in Ontario, Canada (44.36° N, 78.74° W). The treeless field covered 2500 m², minimizing false negatives caused by topographic variation and false positives caused by moving background surface temperatures (Welbourne et al., 2016). All experiments were completed using a standardized plot. Three cameras were mounted on a single pole with the bottoms of the cameras located 80, 110, and 140 cm above ground level, corresponding to lens heights of 86, 116, and 146 cm, respectively. Cameras cannot detect animals when covered by snow, and these heights represent a range of conditions that accommodate mid-winter snow depths throughout many temperate and boreal ecosystems. We placed stakes 2, 4, 6, 8, 10, 12, and 15 m in front of the cameras and marked the maximum extent of the plot by placing stakes at 1 m intervals perpendicular to the 15 m stake (Appendix 1, Figs. 4 and 5). We placed a 160-cm-tall snow measurement gauge in front of the camera array and positioned each camera so the gauge appeared in the centre of the field of view, with the top of the gauge in the top 25% of the image. Finally, we measured the horizontal field of view and vertical angle of the cameras used in the experiment (Appendix 2).
We conducted a series of experimental detection trials using proxies matching the shoulder heights of furbearers and game species that inhabit boreal and temperate ecosystems. The medium proxy was a person crawling at a height of 30–50 cm, representing animals such as red fox and lynx (Lynx spp.). The large proxy was a person walking hunched over at a height of 80–110 cm, representing grey wolf (Canis lupus), wild boar (Sus scrofa), American black bear (Ursus americanus), white-tailed deer (Odocoileus virginianus), fallow deer (Dama dama), roe deer (Capreolus spp.), caribou and reindeer (Rangifer tarandus), and red deer (Cervus elaphus). Finally, the large ungulate proxy was two people walking side by side at a height of 160–195 cm, representing wapiti (Cervus canadensis), bison (Bison spp.), and moose.
We programmed cameras to capture 5 images when triggered, wait 0 s between triggers, and rearm. We conducted ten trials for every combination of proxy (medium, large, and large ungulate), camera model (PC900, HP2X), lens height (86, 116, and 146 cm), snow measurement gauge distance (hereafter ‘aiming distance’; 5, 10, and 15 m), and distance (2, 4, 6, 8, 10, 12, and 15 m) resulting in a balanced dataset consisting of 3780 trials. Each trial was timed and conducted by moving perpendicular to the camera array. We waited at least 15 s between trials to ensure each trial was independent and alternated the direction of movement so that proxies entered the field of view from both directions. We manually inspected every image, determined whether each trial resulted in a detection, and mapped the location where the proxy was first detected by comparing its position to the stakes in the photo (Fig. 1).
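The fully crossed design described above can be enumerated directly; this sketch assumes only the factor levels listed in the text and confirms the size of the balanced dataset.

```python
from itertools import product

# Sketch of the balanced experimental design: every combination of proxy,
# camera model, lens height, aiming distance, and proxy distance, with 10
# replicate trials per combination.
proxies = ["medium", "large", "large_ungulate"]
models = ["PC900", "HP2X"]
lens_heights_cm = [86, 116, 146]
aiming_distances_m = [5, 10, 15]
proxy_distances_m = [2, 4, 6, 8, 10, 12, 15]

trials = [
    dict(proxy=p, model=m, height=h, aim=a, dist=d, rep=r)
    for p, m, h, a, d in product(
        proxies, models, lens_heights_cm, aiming_distances_m, proxy_distances_m
    )
    for r in range(10)
]
print(len(trials))  # 3780 trials, matching the balanced dataset
```

Enumerating the design this way before fieldwork also makes it easy to randomize trial order or check that no combination has been missed.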
Detection modelling
We estimated the probability of detecting proxies in two steps. First, we adapted distance sampling models to a binomial response and fit empirical detection functions based on the exponential, half-normal, and hazard distributions:

p(y) = exp(−y/α) (exponential)
p(y) = exp(−y²/(2α²)) (half-normal)
p(y) = 1 − exp(−(y/α)^−γ) (hazard)

where p(y) is the detection probability at distance y, α is the scale parameter, and γ is the shape parameter. We considered candidate models consisting of standard functions, where detection declines monotonically with distance, as well as modified functions that include a logistic mixture accounting for animals passing beneath the camera (Rowcliffe et al., 2011). We fit distance models using maximum likelihood methods in the R bbmle package (Bolker & R Development Core Team, 2022) and selected the most parsimonious model using AIC.
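As a minimal sketch of this first step, the half-normal function with a logistic mixture can be fit to simulated binomial trials by maximum likelihood. This is an illustrative Python re-implementation (the study used the R bbmle package); the parameter values and simulated data are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def half_normal(y, alpha):
    """Half-normal detection function: monotonic decline with distance."""
    return np.exp(-y**2 / (2 * alpha**2))

def hn_logistic_mixture(y, alpha, beta0, beta1):
    """Half-normal scaled by a logistic term that allows reduced
    detection immediately in front of (beneath) the camera."""
    return half_normal(y, alpha) / (1 + np.exp(-(beta0 + beta1 * y)))

def neg_log_lik(params, y, detected):
    """Bernoulli negative log-likelihood of the detection trials."""
    alpha, beta0, beta1 = params
    p = np.clip(hn_logistic_mixture(y, alpha, beta0, beta1), 1e-9, 1 - 1e-9)
    return -np.sum(detected * np.log(p) + (1 - detected) * np.log(1 - p))

# Simulate trials at the experimental distances under hypothetical truth.
rng = np.random.default_rng(1)
y = np.repeat([2.0, 4.0, 6.0, 8.0, 10.0, 12.0, 15.0], 100)
truth = hn_logistic_mixture(y, alpha=9.0, beta0=-1.0, beta1=1.5)
detected = rng.binomial(1, truth)

# Maximize the likelihood (minimize its negative) over (alpha, beta0, beta1).
fit = minimize(neg_log_lik, x0=[5.0, 0.0, 1.0], args=(y, detected),
               method="Nelder-Mead")
print(fit.x)  # estimates of (alpha, beta0, beta1)
```

Candidate functions (exponential, half-normal, hazard, with and without the mixture) can then be compared by AIC computed from each fit's minimized negative log-likelihood.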
Second, we estimated the mean probability of detection in relation to the experimental covariates (i.e. proxy body size, proxy distance, camera model, lens height, aiming distance, and vertical angle), as well as proxy speed, sun altitude above the horizon (radians), and sun azimuth (radians), while conditioning parameter estimates on the most parsimonious detection function from the first step. Camera angles were correlated with lens height (Spearman’s ρ = 0.71, p < 0.001) and aiming distance (ρ = 0.47, p < 0.001), so we fit one suite of models considering vertical angle and a second suite considering lens height and aiming distance. We also considered statistical interactions between body size and distance, based on the expectation that larger animals are more easily detected than smaller animals when further away, and between camera model and distance. We fit global models and used backwards stepwise selection until only significant covariates and interactions remained in each model. Binomial regression models were estimated using generalized linear models with a logit link function in R 4.2.1 (R Core Team, 2022). We used receiver operating characteristic (ROC) curves to evaluate the final models’ ability to correctly classify true detections (sensitivity) and misses (specificity) using the pROC package (Robin et al., 2011).
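The ROC-based evaluation can be sketched with the area under the curve computed from its rank-statistic definition: the probability that a randomly chosen detection receives a higher predicted score than a randomly chosen miss. This is a minimal numpy-only illustration (the study used the R pROC package); the outcomes and scores below are hypothetical.

```python
import numpy as np

def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U formulation:
    the fraction of (detection, miss) pairs where the detection is
    scored higher; tied scores count half."""
    labels = np.asarray(labels)
    scores = np.asarray(scores)
    pos = scores[labels == 1]   # scores for true detections
    neg = scores[labels == 0]   # scores for misses
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical predicted detection probabilities and observed outcomes.
y_true = [1, 1, 1, 0, 0, 1, 0, 0]
y_score = [0.9, 0.8, 0.7, 0.6, 0.75, 0.9, 0.2, 0.1]
print(round(auc(y_true, y_score), 3))  # 0.938
```

An AUC of 0.5 indicates no discrimination, while values above 0.9 indicate that the model separates detections from misses almost perfectly.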
Results
Detection probabilities were best explained by a half-normal-logistic mixture that allows lower probabilities near the camera (Table 1). All experimental factors influenced detection in both regression models. Confidence intervals around shared parameters overlapped, with most intervals overlapping the mean estimate from the complementary model (Appendix 1, Fig. 6), implying that parameter estimates were robust to whether we included camera angle or lens height and aiming distance. Both models had AUCs ≥0.920 (Appendix 1, Figs. 7 and 8), a level often considered to indicate outstanding discrimination (Hosmer et al., 2013).
Detection monotonically declined when proxies were ≥6 m from the camera; however, our regression models show that body size and camera model mediated the effect of distance on detection (Figs. 2 and 3). The two larger proxies had similar detection probabilities across the range of experimental distances (Tables 2 and 3; p > 0.103). Although the medium-sized proxy had significantly lower probabilities of being detected, the magnitude of the effect varied depending on its distance from the camera (p < 0.001). Marginal effects show the probability of detecting the medium-sized proxy was consistent with larger proxies when located 4–12 m in front of the camera but was lower when located 2 m (lower by 0.17–0.24) or 15 m in front of the camera (lower by 0.1–0.10).
The two camera models exhibited different strengths relative to our experimental design. The PC900 has a taller vertical field of view (32°) than the HP2X (30°) (Reconyx, Inc.) and, as a result, had a higher probability of detecting proxies located 2 m in front of the camera compared to the HP2X (Fig. 3). Marginal effects show that this was particularly important for detecting the medium-sized proxy when the camera lens was ≥116 cm above the ground. In contrast, the HP2X had a higher probability of detecting proxies located >6 m in front of the camera.
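The close-range advantage of the wider vertical field of view can be illustrated with simplified geometry (an assumption of this sketch, not a measurement from the study): a level camera with lens height h and vertical field of view θ sees, at horizontal distance d, heights between h − d·tan(θ/2) and h + d·tan(θ/2).

```python
import math

def lowest_visible_height_cm(lens_height_cm, distance_m, vertical_fov_deg):
    """Height (cm) of the bottom edge of the frame at a given distance,
    for a camera aimed parallel to flat ground (simplified geometry)."""
    half_angle = math.radians(vertical_fov_deg / 2)
    return lens_height_cm - 100 * distance_m * math.tan(half_angle)

# At 2 m, the PC900's taller vertical field of view (32 deg) reaches closer
# to the ground than the HP2X's (30 deg), consistent with its higher
# probability of detecting nearby, low-bodied proxies.
for model, fov in [("PC900", 32), ("HP2X", 30)]:
    print(model, round(lowest_visible_height_cm(116, 2.0, fov), 1))
```

With a 116 cm lens height, both frames bottom out well above the 30–50 cm medium proxy at 2 m, which helps explain why close-range detection of small bodies was sensitive to both camera model and lens height.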
Sun position exhibited small but significant effects on detection probabilities. Detection declined as the sun rotated towards the front of the camera (p ≤ 0.013); the mean probability of detection was 0.73 (SD = 0.319) during the morning, when the sun was oriented 92 to 124° relative to the camera lens, and 0.69 (SD = 0.329) during the afternoon, when the sun was oriented 41 to 88° relative to the camera lens.
Discussion
We modelled detection as a Bernoulli process, where each trial results in a success (detection) or failure (miss). Ecologists rarely estimate false negatives directly and have developed numerous analytical methods that estimate rt indirectly using information about the encounter process (re). Early camera trap studies estimated detection by applying capture-recapture methods to marked individuals (Karanth & Nichols, 1998). Marking is not feasible for many species, leading to the development and application of random encounter (Rowcliffe et al., 2011), spatial count (Chandler & Royle, 2013), distance sampling (Howe et al., 2017), time- and space-to-event (Moeller et al., 2018), and random encounter and staying time models (Nakashima et al., 2018; Warbington & Boyce, 2020). These models are needed to make robust inferences about population state variables and can be improved by deploying cameras in such a way that they maximize rt. We provide a simple, adaptable experimental and analytical framework that ecologists, citizen scientists, and others can use to maximize detection probabilities (rt) for a given wildlife species or community.
A camera’s ability to detect an animal is determined by the arrangement of the PIR sensor array relative to the size of that animal’s heat signature (Welbourne et al., 2016). As a result, the detection process arises from the properties of the animal, the camera, and how the camera is deployed. For example, detection necessarily declines with distance because animals’ heat signatures are proportional to their body size and decreases with distance according to the inverse-square law (Papacosta & Linscheid, 2014). Similarly, camera models’ performance can vary under similar ecological conditions (Driessen et al., 2017; Heiniger & Gillespie, 2018; Swann et al., 2004) because their sensitivity is influenced by the physical arrangement of the PIR sensors relative to the animal. Finally, the physical placement of the camera can influence performance by altering the orientation of the sensor array relative to the same heat signature.
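The distance effect follows directly from the inverse-square law: the radiant flux reaching the sensor from a heat source falls with the square of distance, so doubling the distance quarters the signal. A one-line sketch:

```python
# Inverse-square law: relative radiant flux reaching a sensor from a source
# at distance d, normalized to a reference distance d_ref.
def relative_flux(d, d_ref=1.0):
    return (d_ref / d) ** 2

print(relative_flux(2.0))  # 0.25: doubling distance quarters the flux
print(relative_flux(4.0))  # 0.0625
```

This is why detection curves decline so steeply beyond moderate distances, and why larger-bodied animals, with proportionally larger emitting surfaces, remain detectable further out.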
Lens height and angle are two parameters that ecologists can easily manipulate in the field. Previous studies suggest that the importance of lens height is mediated by how camera sensors are angled relative to animal movement. For example, cameras deployed 60–90 cm above ground level detected more animals than cameras deployed at 3 m with the sensors oriented parallel to a road (Meek et al., 2016), but similar numbers of animals when cameras were deployed at 3 m with the sensors oriented to cover an entire hiking trail or game trail (Jacobs & Ausband, 2018). Although correlations prevented us from isolating the effects of camera angle and lens height, our regression models show that deploying cameras <90 cm above the ground and parallel to it (i.e. near 0°) resulted in the highest detection probabilities, consistent with the understanding that detection is highest when cameras point at the centre of body mass of target species (Meek et al., 2014, 2016). This relationship presents a challenge where snow accumulation creates time-varying rt by changing a camera’s effective height. Ecologists using camera traps in temperate, boreal, and arctic climates may need to consider seasonal camera protocols (e.g. deploying cameras at different heights) or modelling detection as a spatio-temporal process. Applying our experimental and analytical framework across a range of lens heights and angles using a full-factorial design could help ecologists identify deployment parameters that are less sensitive to time-varying rt.
Although not a focus of our study, we found that detection probabilities were higher when the sun was behind the camera (Table 2). As PIR sensors detect heat emitted from the surface of an object, we suspect that the accumulation of radiant energy throughout the day reduced detection by homogenizing the surface temperatures of animals and background objects (Welbourne et al., 2016). Although the mean probability of detection in our experiment declined only 0.04 from morning to afternoon, our results suggest that unmodeled heterogeneity arising from solar radiation can bias inferences where animal movements vary over time (Frey et al., 2017; Green et al., 2022). Existing camera protocols that recommend aiming camera traps north to avoid sun glare that can degrade photo quality (e.g. ABMI, 2021) do not fully account for heterogeneous detection arising from solar position. For example, this unmodeled heterogeneity may wax and wane across seasons, particularly at high latitudes, where the range of solar radiation is greatest. We suggest biologists consider solar conditions when designing and analyzing camera trap studies.
There are at least three general frameworks ecologists can use to estimate detection probabilities (rt). Monitoring the same location using multiple cameras (Jacobs & Ausband, 2018; Meek et al., 2016) allows ecologists to estimate relative rt under realistic conditions, but it can be inefficient for assessing detection across a range of covariate values and may confound rt with re (however, see Palencia et al., 2022). Directly assessing false negatives from time-lapse images (Hamel et al., 2013) allows ecologists to estimate absolute rt | re under realistic conditions but is similarly inefficient across a range of covariate values. Directly assessing false negatives in an experimental framework (Apps & McNutt, 2018; this study) allows ecologists to efficiently estimate absolute rt | re across a range of covariate values, at the cost of realistic animal size and movement. Although we applied distance sampling methods in an experimental framework, they can be adapted to estimate absolute rt in any framework where the distance between the camera (i.e. observer) and the animal is measured without bias.
Ecologists can easily extend our experimental and analytical framework; for brevity, we identify four extensions. First, detection errors can also occur when an unidentified animal triggers a camera trap (rp). Identifiability can be limited by camera flash, trigger speed, image quality, weather, lighting, vegetation density, and animal speed, direction, and coat colour relative to background conditions. Conducting experimental trials across a range of these conditions could further help ecologists quantify conditions that impact species identification, test temporally dynamic detection models, and develop protocols that maximize identifiability of target species. Second, as alluded to above, ecologists can apply the framework to more realistic scenarios. For example, the medium and large proxies in our study had the same body mass and surface area, even though animals emit heat in proportion to their surface area (Martin & Barboza, 2020; Mortola, 2013). Conducting experimental trials where the target closely resembles the study species in size and movement—and is allowed to enter and exit the field of view from multiple directions and angles—is needed to achieve ecologically credible estimates (Apps & McNutt, 2018; Becker et al., 2022). Proxies are valuable for exploring mechanisms and trade-offs associated with camera deployments but cannot fully replace target species. Third, our results show that mixing two camera models made by the same manufacturer can have measurable impacts on detection within a given study. Conducting experimental trials on more camera models can help ecologists identify risks (Palencia et al., 2022) and specify analytical models aimed at correcting for heterogeneous detection arising from the use of multiple camera models. Finally, ecologists can estimate detection probabilities by incorporating covariates directly into the empirical detection functions (Appendix 3).
The high number of parameters relative to the data in our experiment resulted in a complex likelihood surface with several local maxima, so we conditioned covariates on the estimated detection function. Reducing the number of parameters in future experiments (e.g. fewer body sizes) or increasing sample sizes could facilitate the development of more robust models where the shape of the distance model varies dynamically based on other covariates.
Our framework does not replace the need to consider detection processes in the analysis of camera trap data (Hofmeester et al., 2019). It can be used to develop field protocols that maximize detection probabilities (rt) for a given wildlife species, or assess potential trade-offs among species, seasons, camera models, and plot layout. Ecologists can also apply our experimental and analytical framework to field data by assessing how many time-lapse images containing an animal or a fresh animal track (trials) resulted in a triggered image (success or failure). Relating these trials to site-specific data about camera model, angle, height, and animal distance can be used to estimate species- and deployment-specific detection probabilities without needing to collect additional information about the encounter process.
Conclusion
Our study shows how experimental trials can reduce uncertainty associated with detection bias in rt when designing and analyzing camera trap programs. It may be reasonable to assume near-perfect detection for some species-distance combinations so long as vegetation and topography do not obscure animals (Becker et al., 2022; Keim et al., 2019); however, these scenarios are often limited to specific ecological conditions. Biologists are therefore faced with the choice of censoring data to distances where detection is homogeneous and nearly perfect or incorporating species-specific models that account for heterogeneous detection probabilities. Conducting trials across a range of conditions relevant to a given study area can help quantify detection zones and thus help biologists optimize camera protocols, account for variation arising from using multiple camera models, and assess trade-offs that arise from interactions among distance, cameras, and body sizes. Ultimately, this information can help users obtain unbiased estimates of animal use, occupancy, and density to support management or conservation activities.
Data availability
The datasets generated and analyzed during this study are available from the corresponding author on reasonable request.
References
Alberta Biodiversity and Monitoring Institute [ABMI]. (2021). Terrestrial ABMI Autonomous Recording Unit (ARU) and Remote Camera Trap Protocols. Version 2021-04-21. https://ftp-public.abmi.ca/home/publications/documents/599_ABMI_2021_TerrestrialARUandRemoteCameraTrapProtocols_ABMI.pdf. Accessed July 2023
Apps, P., & McNutt, J. W. (2018). Are camera traps fit for purpose? A rigorous, reproducible and realistic test of camera trap performance. African Journal of Ecology, 56(4), 710–720. https://doi.org/10.1111/aje.12573
Becker, M., Huggard, D. J., Dickie, M., Warbington, C., Schieck, J., Herdman, E., Serrouya, R., & Boutin, S. (2022). Applying and testing a novel method to estimate animal density from motion-triggered cameras. Ecosphere, 13(4), e4005. https://doi.org/10.1002/ecs2.4005
Bolker, B., & R Development Core Team. (2022). bbmle: Tools for general maximum likelihood estimation. R package version 1.0.25. https://CRAN.R-project.org/package=bbmle
Burton, A. C., Neilson, E., Moreira, D., Ladle, A., Steenweg, R., Fisher, J. T., Bayne, E., & Boutin, S. (2015). Wildlife camera trapping: A review and recommendations for linking surveys to ecological processes. Journal of Applied Ecology, 52(3), 675–685. https://doi.org/10.1111/1365-2664.12432
Chandler, R. B., & Royle, J. A. (2013). Spatially explicit models for inference about density in unmarked or partially marked populations. The Annals of Applied Statistics, 7(2), 936–954 https://www.jstor.org/stable/23566419
Driessen, M. M., Jarman, P. J., Troy, S., & Callander, S. (2017). Animal detections vary among commonly used camera trap models. Wildlife Research, 44(4), 291–297. https://doi.org/10.1071/WR16228
Findlay, M. A., Briers, R. A., & White, P. J. (2020). Component processes of detection probability in camera-trap studies: Understanding the occurrence of false-negatives. Mammal Research, 65(2), 167–180. https://doi.org/10.1007/s13364-020-00478-y
Frey, S., Fisher, J. T., Burton, A. C., & Volpe, J. P. (2017). Investigating animal activity patterns and temporal niche partitioning using camera-trap data: Challenges and opportunities. Remote Sensing in Ecology and Conservation, 3(3), 123–132. https://doi.org/10.1002/rse2.60
Green, S. E., Stephens, P. A., Whittingham, M. J., & Hill, R. A. (2022). Camera trapping with photos and videos: Implications for ecology and citizen science. Remote Sensing in Ecology and Conservation, 9(2), 268–283. https://doi.org/10.1002/rse2.309
Hamel, S., Killengreen, S. T., Henden, J. A., Eide, N. E., Roed-Eriksen, L., Ims, R. A., & Yoccoz, N. G. (2013). Towards good practice guidance in using camera-traps in ecology: Influence of sampling design on validity of ecological inferences. Methods in Ecology and Evolution, 4(2), 105–113. https://doi.org/10.1111/j.2041-210x.2012.00262.x
Heiniger, J., & Gillespie, G. (2018). High variation in camera trap-model sensitivity for surveying mammal species in northern Australia. Wildlife Research, 45(7), 578–585. https://doi.org/10.1071/WR18078
Hofmeester, T. R., Cromsigt, J. P., Odden, J., Andrén, H., Kindberg, J., & Linnell, J. D. (2019). Framing pictures: A conceptual framework to identify and correct for biases in detection probability of camera traps enabling multi-species comparison. Ecology and Evolution, 9(4), 2320–2336. https://doi.org/10.1002/ece3.4878
Hosmer, D. W., Lemeshow, S., & Sturdivant, R. X. (2013). Applied logistic regression. John Wiley Sons.
Howe, E. J., Buckland, S. T., Després-Einspenner, M. L., & Kühl, H. S. (2017). Distance sampling with camera traps. Methods in Ecology and Evolution, 8(11), 1558–1565. https://doi.org/10.1111/2041-210X.12790
Jacobs, C. E., & Ausband, D. E. (2018). An evaluation of camera trap performance–What are we missing and does deployment height matter? Remote Sensing in Ecology and Conservation, 4(4), 352–360. https://doi.org/10.1002/rse2.81
Karanth, K. U., & Nichols, J. D. (1998). Estimation of tiger densities in India using photographic captures and recaptures. Ecology, 79(8), 2852–2862. https://doi.org/10.1890/0012-9658(1998)079[2852:EOTDII]2.0.CO;2
Kays, R., Hody, A., Jachowski, D. S., & Parsons, A. W. (2021). Empirical evaluation of the spatial scale and detection process of camera trap surveys. Movement Ecology, 9, 1–13. https://doi.org/10.1186/s40462-021-00277-3
Keim, J. L., DeWitt, P. D., Wilson, S. F., Fitzpatrick, J. J., Jenni, N. S., & Lele, S. R. (2021). Managing animal movement conserves predator–prey dynamics. Frontiers in Ecology and the Environment, 19(7), 379–385. https://doi.org/10.1002/fee.2358
Keim, J. L., Lele, S. R., DeWitt, P. D., Fitzpatrick, J. J., & Jenni, N. S. (2019). Estimating the intensity of use by interacting predators and prey using camera traps. Journal of Animal Ecology, 88(5), 690–701. https://doi.org/10.1111/1365-2656.12960
Kelly, M. J., & Holub, E. L. (2008). Camera trapping of carnivores: Trap success among camera types and across species, and habitat selection by species, on Salt Pond Mountain, Giles County, Virginia. Northeastern Naturalist, 15(2), 249–262. https://doi.org/10.1656/1092-6194(2008)15[249:CTOCTS]2.0.CO;2
Martin, J. M., & Barboza, P. S. (2020). Thermal biology and growth of bison (Bison bison) along the Great Plains: Examining four theories of endotherm body size. Ecosphere, 11(7), e03176. https://doi.org/10.1002/ecs2.3176
Mason, S. S., Hill, R. A., Whittingham, M. J., Cokill, J., Smith, G. C., & Stephens, P. A. (2022). Camera trap distance sampling for terrestrial mammal population monitoring: Lessons learnt from a UK case study. Remote Sensing in Ecology and Conservation, 8(5), 717–730. https://doi.org/10.1002/rse2.272
McIntyre, T., Majelantle, T. L., Slip, D. J., & Harcourt, R. G. (2020). Quantifying imperfect camera-trap detection probabilities: Implications for density modelling. Wildlife Research, 47(2), 177–185. https://doi.org/10.1071/WR19040
Meek, P. D., Ballard, G., Claridge, A., Kays, R., Moseby, K., O’brien, T., O’Connell, A., Sanderson, J., Swann, D. E., Tobler, M., & Townsend, S. (2014). Recommended guiding principles for reporting on camera trapping research. Biodiversity and Conservation, 23, 2321–2343. https://doi.org/10.1007/s10531-014-0712-8
Meek, P. D., Ballard, G. A., & Falzon, G. (2016). The higher you go the less you will know: Placing camera traps high to avoid theft will affect detection. Remote Sensing in Ecology and Conservation, 2(4), 204–211. https://doi.org/10.1002/rse2.28
Moeller, A. K., Lukacs, P. M., & Horne, J. S. (2018). Three novel methods to estimate abundance of unmarked animals using remote cameras. Ecosphere, 9(8), e02331. https://doi.org/10.1002/ecs2.2331
Mortola, J. P. (2013). Thermographic analysis of body surface temperature of mammals. Zoological Science, 30(2), 118–124. https://doi.org/10.2108/zsj.30.118
Nakashima, Y., Fukasawa, K., & Samejima, H. (2018). Estimating animal density without individual recognition using information derivable exclusively from camera traps. Journal of Applied Ecology, 55(2), 735–744. https://doi.org/10.1111/1365-2664.13059
Palencia, P., Vicente, J., Soriguer, R. C., & Acevedo, P. (2022). Towards a best-practices guide for camera trapping: Assessing differences among camera trap models and settings under field conditions. Journal of Zoology, 316(3), 197–208. https://doi.org/10.1111/jzo.12945
Papacosta, P., & Linscheid, N. (2014). The confirmation of the inverse square law using diffraction gratings. The Physics Teacher, 52(4), 243–245. https://doi.org/10.1119/1.4868944
R Core Team. (2022). R: A language and environment for statistical computing. R Foundation for Statistical Computing https://www.R-project.org/
Robin, X., Turck, N., Hainard, A., Tiberti, N., Lisacek, F., Sanchez, J. C., & Müller, M. (2011). pROC: An open-source package for R and S+ to analyze and compare ROC curves. BMC Bioinformatics, 12(1), 1–8. https://doi.org/10.1186/1471-2105-12-77
Rovero, F., Zimmermann, F., Berzi, D., & Meek, P. (2013). Which camera trap type and how many do I need? A review of camera features and study designs for a range of wildlife research applications. Hystrix, 24(2), 148–156. https://doi.org/10.4404/hystrix-24.2-8789
Rowcliffe, M. J., Carbone, C., Jansen, P. A., Kays, R., & Kranstauber, B. (2011). Quantifying the sensitivity of camera traps: An adapted distance sampling approach. Methods in Ecology and Evolution, 2(5), 464–476. https://doi.org/10.1111/j.2041-210X.2011.00094.x
Steenweg, R., Hebblewhite, M., Kays, R., Ahumada, J., Fisher, J. T., Burton, C., Townsend, S. E., Carbone, C., Rowcliffe, J. M., Whittington, J., & Brodie, J. (2017). Scaling-up camera traps: Monitoring the planet’s biodiversity with networks of remote sensors. Frontiers in Ecology and the Environment, 15(1), 26–34. https://doi.org/10.1002/fee.1448
Swann, D. E., Hass, C. C., Dalton, D. C., & Wolf, S. A. (2004). Infrared-triggered cameras for detecting wildlife: An evaluation and review. Wildlife Society Bulletin, 32(2), 357–365. https://doi.org/10.2193/0091-7648(2004)32[357:ICFDWA]2.0.CO;2
Swanson, A., Kosmala, M., Lintott, C., Simpson, R., Smith, A., & Packer, C. (2015). Snapshot Serengeti, high-frequency annotated camera trap images of 40 mammalian species in an African savanna. Scientific Data, 2(1), 1–14. https://doi.org/10.1038/sdata.2015.26
Visscher, D. R., Macleod, I., Vujnovic, K., Vujnovic, D., & DeWitt, P. D. (2017). Human risk induced behavioral shifts in refuge use by elk in an agricultural matrix. Wildlife Society Bulletin, 41(1), 162–169. https://doi.org/10.1002/wsb.741
Visscher, D. R., Walker, P. D., Flowers, M., Kemna, C., Pattison, J., & Kushnerick, B. (2023). Human impact on deer use is greater than predators and competitors in a multiuse recreation area. Animal Behaviour, 197, 61–69. https://doi.org/10.1016/j.anbehav.2023.01.003
Warbington, C. H., & Boyce, M. S. (2020). Population density of sitatunga in riverine wetland habitats. Global Ecology and Conservation, 24, e01212. https://doi.org/10.1016/j.gecco.2020.e01212
Welbourne, D. J., Claridge, A. W., Paull, D. J., & Lambert, A. (2016). How do passive infrared triggered camera traps operate and why does it matter? Breaking down common misconceptions. Remote Sensing in Ecology and Conservation, 2(2), 77–83. https://doi.org/10.1002/rse2.20
Acknowledgements
This study was supported by the Ontario Ministry of Natural Resources and Forestry as part of Ontario’s provincial wildlife monitoring efforts. We thank Valerie Dupee, Alison White, Lynn Landriault, Neil Dawson, Ethan Dobbs, and Rachael DeRaaf for their assistance; and Darcy Visscher, Eric Howe, Lynn Landriault, Lani Stinson, Danielle Berube, Derek Goertz, Grace Bullington, Claudia Haas, Ben Teton, and one anonymous reviewer for their comments on earlier drafts of this paper.
Funding
This study was funded by the Ontario Ministry of Natural Resources and Forestry as part of its provincial wildlife monitoring programs.
Author information
Contributions
DeWitt: conceptualization, methodology, software, formal analysis, visualization, writing – original draft, writing – review & editing, supervision, project administration. Cocksedge: methodology, investigation, data curation, writing – original draft, writing – review & editing.
Ethics declarations
Ethics approval
All authors have read, understood, and complied, as applicable, with the statement on “Ethical responsibilities of Authors” as found in the Instructions for Authors.
Competing interests
The authors have no competing or financial interests relevant to this article.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
DeWitt, P.D., Cocksedge, A.G. A simple framework for maximizing camera trap detections using experimental trials. Environ Monit Assess 195, 1381 (2023). https://doi.org/10.1007/s10661-023-11945-9