Machine Vision and Applications, Volume 27, Issue 5, pp. 607–609

Special issue on computer vision and image analysis in plant phenotyping

Editorial

Plant phenotyping is the identification of effects on the phenotype (i.e., the plant's appearance and behavior) that result from genotype differences (i.e., differences in the genetic code) and from the environment. Traditionally, taking phenotypic measurements has been laborious, costly, and time-consuming. In recent years, noninvasive, imaging-based methods have become more common. These images are recorded by a range of capture devices, from small embedded camera systems to multi-million-euro smart greenhouses, at scales ranging from microscopic images of cells to entire fields captured by UAV imaging.

These images need to be analyzed in a high-throughput, robust, and accurate manner. UN-FAO statistics show that, under current population predictions, we will need to achieve a 70% increase in food productivity by 2050 simply to meet global agricultural demand. Phenomics, the large-scale measurement of plant traits, is the bottleneck in a knowledge-based bioeconomy, and machine vision is ideally placed to help [16]. However, the challenges that arise differ from the tasks usually addressed by the computer vision community because of the requirements posed by this application scenario. Dealing with these new problems has spawned specialized workshops such as Computer Vision Problems in Plant Phenotyping (CVPPP), held for the first time in conjunction with the European Conference on Computer Vision (ECCV) 2014 and for the second time with the British Machine Vision Conference (BMVC) 2015, and the stand-alone workshop IAMPS (Image Analysis Methods for the Plant Sciences), now in its fourth year.

The overriding goal of this special issue is not only to present interesting computer vision solutions, but also to introduce challenging computer vision problems in the increasingly important plant phenotyping domain, accompanied by benchmark datasets and suitable performance evaluation methods.

The 12 papers presented in this special issue [1, 3, 5, 6, 7, 8, 10, 11, 13, 17, 18, 21] were selected and revised from submissions received in response to an open call for papers. Six of them [1, 3, 11, 13, 18, 21] were developed from conference contributions at CVPPP 2014 [2, 4, 12, 14, 19, 20]. Two further papers received in response to the open call have already appeared in a preceding or an adjacent issue of this journal and are also considered part of this special issue [9, 22].

Overall, we received contributions that image the plant either above or below the ground. Some rely on single or multiple 2D images [1, 6], 3D surfaces reconstructed from 2D images [8, 21], 3D reconstruction [18], or hyperspectral imaging [3, 5], while others rely on 3D volumetric imaging [13], together with appropriate analysis algorithms.

Two papers [11, 17] deal with recognizing and classifying plants or whole trees, which can be extremely useful when one is interested in categorizing new cultivars and hybrids against a bank of known phenotypes.

A survey of the results of the Leaf Segmentation Challenge at CVPPP 2014 [22] presents state-of-the-art algorithms competing on the challenging problem of segmenting each individual leaf in images of rosette plants, on the basis of a recently published benchmark dataset [15], the first serious computer vision dataset in the plant domain. The availability of datasets is further enriched by the paper of Cruz et al. [7], which presents multimodal images of model plants and establishes a leaf segmentation benchmark based on a template matching method. The presence of public and shared datasets has been a major driver of progress in mainstream computer vision, and it is very promising to see an emerging openness about data in this new subdomain.

All the papers discussed so far demonstrate the complexity of extracting phenotyping information when plants are imaged in controlled environmental settings. When we move to the field, the ultimate and most useful frontier, things become considerably more complex. To motivate further research in this area, Kelly et al. [10] discuss an interesting set of challenges in phenotyping field crops.

The application domain of plant phenotyping opens up a wealth of novel and important opportunities for the machine vision community. The wide variety of imaging scales (from cell to field), modalities (visible, X-ray, infrared, hyperspectral), and dimensions (2D, 3D, time series), in addition to complex environmental factors, makes the research challenging. The fruits of these advances, though, can have a direct impact on our ability to understand plant growth and to improve the efficiency of how we grow our food.

To conclude, we are truly excited to have made the first steps in introducing this fascinating application domain for computer vision. With this special issue, the introduction and release of open benchmark datasets [7, 15], and workshops at prestigious venues, we hope to attract more colleagues from the computer vision community to our quest toward advancing the state of the art in this societally and environmentally important area. We also hope that, by highlighting some recent computer vision systems and applications within this domain, we can convince the reader that there are deep, interesting, and challenging problems in the world of plant imaging.

References

  1. Augustin, M., Haxhimusa, Y., Busch, W., Kropatsch, W.G.: A framework for the extraction of quantitative traits from 2D images of mature Arabidopsis thaliana. Mach. Vis. Appl. 27(5), 647–661 (2016). doi:10.1007/s00138-015-0720-z
  2. Augustin, M., Haxhimusa, Y., Busch, W., Kropatsch, W.G.: Image-based phenotyping of the mature Arabidopsis shoot system. In: Computer Vision – ECCV 2014 Workshops, vol. 8928, pp. 231–246. Springer (2015)
  3. Behmann, J., Mahlein, A.K., Paulus, S., Dupuis, J., Kuhlmann, H., Oerke, E.C., Plümer, L.: Generation and application of hyperspectral 3D plant models: methods and challenges. Mach. Vis. Appl. 27(5), 611–624 (2016). doi:10.1007/s00138-015-0716-8
  4. Behmann, J., Mahlein, A.K., Paulus, S., Kuhlmann, H., Oerke, E.C., Plümer, L.: Generation and application of hyperspectral 3D plant models. In: L. Agapito, M.M. Bronstein, C. Rother (eds.) Computer Vision – ECCV 2014 Workshops, vol. 8928, pp. 117–130. Springer (2015). doi:10.1007/978-3-319-16220-1_9
  5. Benoit, L., Benoit, R., Belin, É., Vadaine, R., Demilly, D., Chapeau-Blondeau, F., Rousseau, D.: On the value of the Kullback-Leibler divergence for cost-effective spectral imaging of plants by optimal selection of wavebands. Mach. Vis. Appl. 27(5), 625–635 (2016). doi:10.1007/s00138-015-0717-7
  6. Boyle, R.D., Corke, F.M.K., Doonan, J.H.: Automated estimation of tiller number in wheat by ribbon detection. Mach. Vis. Appl. 27(5), 637–646 (2016). doi:10.1007/s00138-015-0719-5
  7. Cruz, J.A., Yin, X., Liu, X., Imran, S.M., Morris, D.D., Kramer, D.M., Chen, J.: Multi-modality imagery database for plant phenotyping. Mach. Vis. Appl. 27(5), 735–749 (2016). doi:10.1007/s00138-015-0734-6
  8. Golbach, F., Kootstra, G., Damjanovic, S., Otten, G., Zedde, R.: Validation of plant part measurements using a 3D reconstruction method suitable for high-throughput seedling phenotyping. Mach. Vis. Appl. 27(5), 663–680 (2016). doi:10.1007/s00138-015-0727-5
  9. Kelly, D., Vatsa, A., Mayham, W., Kazic, T.: Extracting complex lesion phenotypes in Zea mays. Mach. Vis. Appl. 27(1), 145–156 (2016). doi:10.1007/s00138-015-0718-6
  10. Kelly, D., Vatsa, A., Mayham, W., Ngô, L., Thompson, A., Kazic, T.: An opinion on imaging challenges in phenotyping field crops. Mach. Vis. Appl. 27(5), 681–694 (2016). doi:10.1007/s00138-015-0728-4
  11. Larese, M.G., Granitto, P.M.: Finding local leaf vein patterns for legume characterization and classification. Mach. Vis. Appl. 27(5), 709–720 (2016). doi:10.1007/s00138-015-0732-8
  12. Larese, M.G., Granitto, P.M.: Hybrid consensus learning for legume species and cultivars classification. In: L. Agapito, M.M. Bronstein, C. Rother (eds.) Computer Vision – ECCV 2014 Workshops, vol. 8928, pp. 201–214. Springer (2015). doi:10.1007/978-3-319-16220-1_15
  13. Mairhofer, S., Johnson, J., Sturrock, C.J., Bennett, M.J., Mooney, S.J., Pridmore, T.P.: Visual tracking for the recovery of multiple interacting plant root systems from X-ray μCT images. Mach. Vis. Appl. 27(5), 721–734 (2016). doi:10.1007/s00138-015-0733-7
  14. Mairhofer, S., Sturrock, C.J., Bennett, M.J., Mooney, S.J., Pridmore, T.P.: Visual object tracking for the extraction of multiple interacting plant root systems. In: L. Agapito, M.M. Bronstein, C. Rother (eds.) Computer Vision – ECCV 2014 Workshops, vol. 8928, pp. 89–104. Springer (2015). doi:10.1007/978-3-319-16220-1_7
  15. Minervini, M., Fischbach, A., Scharr, H., Tsaftaris, S.A.: Finely-grained annotated datasets for image-based plant phenotyping. Pattern Recognit. Lett. (2015). doi:10.1016/j.patrec.2015.10.013
  16. Minervini, M., Scharr, H., Tsaftaris, S.A.: Image analysis: the new bottleneck in plant phenotyping [Applications Corner]. IEEE Signal Process. Mag. 32(4), 126–131 (2015). doi:10.1109/MSP.2015.2405111
  17. Othmani, A.A., Jiang, C., Lomenie, N., Favreau, J.M., Piboule, A., Voon, L.F.C.L.Y.: A novel computer-aided tree species identification method based on burst wind segmentation of 3D bark textures. Mach. Vis. Appl. 27(5), 751–766 (2016). doi:10.1007/s00138-015-0738-2
  18. Pound, M.P., French, A.P., Fozard, J.A., Murchie, E.H., Pridmore, T.P.: A patch-based approach to 3D plant shoot phenotyping. Mach. Vis. Appl. 27(5), 767–779 (2016). doi:10.1007/s00138-016-0756-8
  19. Pound, M.P., French, A.P., Murchie, E.H., Pridmore, T.P.: Surface reconstruction of plant shoots from multiple views. In: L. Agapito, M.M. Bronstein, C. Rother (eds.) Computer Vision – ECCV 2014 Workshops, vol. 8928, pp. 158–173. Springer (2015). doi:10.1007/978-3-319-16220-1_12
  20. Santos, T.T., Koenigkan, L.V., Barbedo, J.G.A., Rodrigues, G.C.: 3D plant modeling: localization, mapping and segmentation for plant phenotyping using a single hand-held camera. In: L. Agapito, M.M. Bronstein, C. Rother (eds.) Computer Vision – ECCV 2014 Workshops, vol. 8928, pp. 247–263. Springer (2015). doi:10.1007/978-3-319-16220-1_18
  21. Santos, T.T., Rodrigues, G.C.: Flexible three-dimensional modeling of plants using low-resolution cameras and visual odometry. Mach. Vis. Appl. 27(5), 695–707 (2016). doi:10.1007/s00138-015-0729-3
  22. Scharr, H., Minervini, M., French, A.P., Klukas, C., Kramer, D.M., Liu, X., Luengo, I., Pape, J.M., Polder, G., Vukadinovic, D., Yin, X., Tsaftaris, S.A.: Leaf segmentation in plant phenotyping: a collation study. Mach. Vis. Appl. 27(4), 585–606 (2016). doi:10.1007/s00138-015-0737-3

Copyright information

© Springer-Verlag Berlin Heidelberg 2016

Authors and Affiliations

  1. Institute of Bio- and Geosciences: Plant Sciences (IBG-2), Jülich, Germany
  2. Department of Computer Science, Aberystwyth University, Aberystwyth, UK
  3. Schools of Computer Science and Biosciences, University of Nottingham, Nottingham, UK
  4. School of Engineering, University of Edinburgh, Edinburgh, UK
  5. IMT Institute for Advanced Studies, Lucca, Italy