Detecting Wildlife in Unmanned Aerial Systems Imagery Using Convolutional Neural Networks Trained with an Automated Feedback Loop

  • Connor Bowley
  • Marshall Mattingly
  • Andrew Barnas
  • Susan Ellis-Felege
  • Travis Desell
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10860)


Using automated processes to detect wildlife in uncontrolled outdoor imagery is a challenging task in wildlife ecology. This is especially true for imagery collected by an Unmanned Aerial System (UAS), where wildlife is small relative to the frame and visually similar to its background. This work presents an automated feedback loop for training convolutional neural networks (CNNs) with extremely unbalanced class sizes, which alleviates some of these challenges. It uses UAS imagery collected by the Wildlife@Home project, in which citizen scientists and trained experts review the collected imagery and classify it. The classified data serve as inputs to CNNs that automatically mark which areas of an image contain wildlife. The CNN output is then passed to a blob counter, which returns a population estimate for the image. The feedback loop was developed to train the CNNs to better differentiate wildlife from the visually similar background and to handle the disparity between the number of wildlife and background training images. The feedback loop dramatically reduced population count error rates relative to previously published work: from +150% to −3.93% on citizen scientist data, and from +88% to +5.24% on expert data.
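The blob-counting step described in the abstract can be sketched as follows. The paper's exact thresholding and connectivity choices are not given here, so this sketch assumes the CNN's per-pixel output has already been binarized into a wildlife/background mask and counts each 4-connected component of wildlife pixels as one animal:

```python
from collections import deque

def count_blobs(mask):
    """Count 4-connected blobs of 1s in a binary mask.

    `mask` is a list of rows, where each cell is 1 if the CNN
    marked that pixel as wildlife and 0 otherwise. Each connected
    component contributes one animal to the population estimate.
    """
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                # Found an unvisited wildlife pixel: flood-fill its blob.
                count += 1
                seen[r][c] = True
                queue = deque([(r, c)])
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
    return count
```

In production one would typically use a library routine such as `scipy.ndimage.label` instead of a hand-rolled flood fill, but the principle is the same: the population estimate is the number of connected foreground components in the CNN's output mask.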



Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  • Connor Bowley (1)
  • Marshall Mattingly (1)
  • Andrew Barnas (2)
  • Susan Ellis-Felege (2)
  • Travis Desell (1)
  1. Department of Computer Science, University of North Dakota, Grand Forks, USA
  2. Department of Biology, University of North Dakota, Grand Forks, USA
