
NOSpcimen: A First Approach to Unsupervised Discarding of Empty Photo Trap Images

  • Conference paper
  • In: Advances in Computational Intelligence (IWANN 2023)

Abstract

A key tool in wildlife conservation is the observation and monitoring of wildlife with photo-trapping cameras. Every year, thousands of cameras around the world capture millions of images, and a large proportion of these are empty: they show no animals at all. Sorting out these blank images demands considerable effort from biologists, who spend hours on the task, so automating it is of particular interest. The systems proposed so far are based on supervised learning models, which require images to be annotated with the locations of the animals they contain before the models can be trained. The NOSpcimen (NOn-SuPervised disCardIng of eMpty images based on autoENcoders) system takes a different approach: it relies on unsupervised learning mechanisms, so no prior annotation work is needed to automate the discarding of empty images.
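The chapter itself details the NOSpcimen pipeline; as a rough illustration of the general idea described in the abstract (training an autoencoder on unlabelled camera-trap images and flagging poorly reconstructed images as likely containing animals), the following is a minimal PyTorch sketch. The network layout, the 128x128 input size, the toy training loop, and the mean-plus-one-standard-deviation threshold are illustrative assumptions, not the authors' configuration.

```python
# Illustrative sketch (not the authors' exact pipeline): a small convolutional
# autoencoder whose per-image reconstruction error serves as an "emptiness"
# score. Images the model reconstructs poorly are assumed to contain something
# atypical (e.g. an animal) and are kept for review; the rest are discarded.
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: compress a 3x128x128 image into a small latent tensor.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),   # 16x64x64
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32x32x32
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # 64x16x16
        )
        # Decoder: mirror the encoder to reconstruct the input image.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def reconstruction_errors(model, images):
    """Mean squared reconstruction error per image (higher = less 'typical')."""
    model.eval()
    with torch.no_grad():
        recon = model(images)
        return ((recon - images) ** 2).mean(dim=(1, 2, 3))

if __name__ == "__main__":
    model = ConvAutoencoder()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    # Stand-in for a DataLoader over unlabelled camera-trap images,
    # resized to 128x128 and scaled to [0, 1].
    batch = torch.rand(8, 3, 128, 128)

    for epoch in range(5):                      # toy training loop
        optimizer.zero_grad()
        loss = loss_fn(model(batch), batch)
        loss.backward()
        optimizer.step()

    # Score images and flag the least reconstructable ones as likely non-empty.
    errors = reconstruction_errors(model, batch)
    threshold = errors.mean() + errors.std()    # illustrative threshold only
    keep_for_review = errors > threshold
    print(keep_for_review)
```

In practice, the decision threshold would be calibrated on a representative sample of images rather than set from the summary statistics of a single batch.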



Acknowledgements

This research is part of the project “ToSmartEADs: Towards intelligent, explainable and precise extraction of knowledge in complex problems of Data Science”, funded by the Ministry of Science, Innovation and Universities under grant PID2019-107793GB-I00/AEI/10.13039/501100011033. This work was also partly enabled by Antón Alvarez’s participation in the CV4Ecology Summer Workshop, supported by the Caltech Resnick Sustainability Institute.

Author information


Corresponding author

Correspondence to David de la Rosa.



Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

de la Rosa, D. et al. (2023). NOSpcimen: A First Approach to Unsupervised Discarding of Empty Photo Trap Images. In: Rojas, I., Joya, G., Catala, A. (eds) Advances in Computational Intelligence. IWANN 2023. Lecture Notes in Computer Science, vol 14135. Springer, Cham. https://doi.org/10.1007/978-3-031-43078-7_4


  • DOI: https://doi.org/10.1007/978-3-031-43078-7_4


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-43077-0

  • Online ISBN: 978-3-031-43078-7

  • eBook Packages: Computer Science, Computer Science (R0)
