
Overview of LifeCLEF 2018: A Large-Scale Evaluation of Species Identification and Recommendation Algorithms in the Era of AI

  • Alexis Joly
  • Hervé Goëau
  • Christophe Botella
  • Hervé Glotin
  • Pierre Bonnet
  • Willem-Pier Vellinga
  • Robert Planqué
  • Henning Müller
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11018)

Abstract

Building accurate knowledge of the identity, the geographic distribution and the evolution of living species is essential for the sustainable development of humanity as well as for biodiversity conservation. Unfortunately, such basic information is often only partially available to professional stakeholders, teachers, scientists and citizens, and it is frequently incomplete for the ecosystems with the highest diversity. In this context, the ultimate ambition is to set up innovative information systems relying on the automated identification and understanding of living organisms as a means to engage large crowds of observers and to boost the production of biodiversity and agro-biodiversity data. The LifeCLEF 2018 initiative proposes three data-oriented challenges related to this vision, in continuity with the previous editions but with several substantial novelties intended to push the boundaries of the state of the art in several research directions. This paper describes the methodology of the conducted evaluations and synthesizes the main results and lessons learned.

Acknowledgements

The organization of LifeCLEF 2018 was supported by the French project Floris'Tic (Tela Botanica, INRIA, CIRAD, INRA, IRD), funded in the context of the national investment program PIA. The organization of the BirdCLEF task was supported by the Xeno-Canto foundation for nature sounds as well as the French CNRS project SABIOD.ORG, EADM GDR CNRS MADICS, and BRILAAM STIC-AmSud. The annotations of some of the soundscapes were prepared with the late and much-regretted Lucio Pando at Explorama Lodges, with the support of Pam Bucur, Marie Trone and H. Glotin.

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Alexis Joly (1)
  • Hervé Goëau (2)
  • Christophe Botella (1, 3)
  • Hervé Glotin (4)
  • Pierre Bonnet (2)
  • Willem-Pier Vellinga (5)
  • Robert Planqué (5)
  • Henning Müller (6)

  1. Inria, LIRMM, Montpellier, France
  2. CIRAD, UMR AMAP, Montpellier, France
  3. INRA, UMR AMAP, Montpellier, France
  4. AMU, Univ. Toulon, CNRS, ENSAM, LSIS UMR 7296, IUF, Marseille, France
  5. Xeno-canto Foundation, Amsterdam, The Netherlands
  6. HES-SO, Sierre, Switzerland
