Large-Scale Automatic Species Identification

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10400)

Abstract

The crowd-sourced NatureWatch GBIF dataset is used to obtain a species classification dataset containing approximately 1.2 million photos of nearly 20 thousand different species of biological organisms observed in their natural habitat. We present a general hierarchical species identification system based on deep convolutional neural networks trained on the NatureWatch dataset. The dataset contains images taken under a wide variety of conditions and is heavily imbalanced, with most species associated with only a few images. We apply multi-view classification as a way to lend more influence to high-frequency details, hierarchical fine-tuning to help with class imbalance and provide regularisation, and automatic specificity control for optimising classification depth. Our system achieves 55.8% accuracy when identifying individual species and around 90% accuracy at an average taxonomy depth of 5.1 (equivalent to the taxonomic rank of “family”) when applying automatic specificity control.
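
The automatic specificity control mentioned above trades classification depth for confidence. The following is a minimal illustrative sketch, assuming it works by accumulating species-level softmax probabilities up the taxonomy and reporting the deepest taxon whose accumulated confidence clears a threshold; the names TaxonNode and predict_with_specificity and the 0.9 threshold are hypothetical, not taken from the paper.

# Hypothetical sketch of automatic specificity control: species-level softmax
# probabilities are summed up the taxonomy, and the prediction is the deepest
# taxon whose accumulated probability clears a confidence threshold.
from dataclasses import dataclass
from typing import Dict, Optional


@dataclass(eq=False)
class TaxonNode:
    """A node in the taxonomy (kingdom, ..., family, ..., species)."""
    name: str
    depth: int                       # 0 = root; larger = more specific
    parent: Optional["TaxonNode"] = None


def predict_with_specificity(species_probs: Dict[TaxonNode, float],
                             threshold: float = 0.9) -> TaxonNode:
    """Return the deepest taxon whose aggregated probability meets the threshold."""
    # Sum leaf (species) probabilities bottom-up so every internal node holds
    # the probability mass of all species beneath it.
    node_prob: Dict[TaxonNode, float] = dict(species_probs)
    for leaf, p in species_probs.items():
        ancestor = leaf.parent
        while ancestor is not None:
            node_prob[ancestor] = node_prob.get(ancestor, 0.0) + p
            ancestor = ancestor.parent

    # Among the nodes that clear the threshold, pick the most specific one;
    # since softmax probabilities sum to 1, the root always qualifies and a
    # prediction is always returned.
    confident = [n for n, p in node_prob.items() if p >= threshold]
    return max(confident, key=lambda n: (n.depth, node_prob[n]))

In this sketch, raising the threshold pushes predictions toward shallow ranks such as family, while lowering it yields species-level predictions at the cost of accuracy.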

Keywords

Species identification · Convolutional neural networks


Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. Department of Computer Science, University of Waikato, Hamilton, New Zealand
  2. School of Mathematics and Statistics, University of Canterbury, Christchurch, New Zealand
