Abstract
Plant phenotyping is an essential step in the plant breeding cycle, necessary to ensure food security for a growing world population. Standard procedures for evaluating three-dimensional plant morphology and extracting relevant phenotypic characteristics are slow, costly, and in need of automation. Previous work towards automatic semantic segmentation of plants relies on explicit prior knowledge about the species and sensor set-up, as well as manually tuned parameters. In this work, we propose to use a supervised machine learning algorithm to predict per-point semantic annotations directly from point cloud data of whole plants, minimising the necessary user input. We train a PointNet++ variant on a fully annotated, procedurally generated data set of partial point clouds of tomato plants, and show that the network can distinguish between the semantic classes of leaves, stems, and soil based on structural data only. We present both quantitative and qualitative evaluation results, and establish a proof of concept, indicating that deep learning is a promising approach towards replacing the current complex, laborious, species-specific, state-of-the-art plant segmentation procedures.
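The core idea in the abstract, predicting a semantic label for every point of a raw xyz point cloud, can be illustrated with a minimal sketch. The code below is a simplified PointNet-style segmentation network in PyTorch, not the PointNet++ variant used in the paper (it omits the hierarchical set-abstraction and feature-propagation layers); the class name, point counts, and label ordering are assumptions for illustration only.

```python
# Illustrative sketch only: a simplified PointNet-style per-point segmentation
# network, not the authors' PointNet++ variant. Names and shapes are assumed.
import torch
import torch.nn as nn


class PointSegNet(nn.Module):
    """Predicts a semantic class (leaf, stem, soil) for every input point."""

    def __init__(self, num_classes: int = 3):
        super().__init__()
        # Shared per-point MLPs implemented as 1x1 convolutions over points.
        self.local_mlp = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.BatchNorm1d(64), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.BatchNorm1d(128), nn.ReLU(),
        )
        self.global_mlp = nn.Sequential(
            nn.Conv1d(128, 1024, 1), nn.BatchNorm1d(1024), nn.ReLU(),
        )
        # The segmentation head sees each point's local feature concatenated
        # with a max-pooled global feature, so every prediction has both
        # local geometry and whole-plant context.
        self.seg_head = nn.Sequential(
            nn.Conv1d(128 + 1024, 256, 1), nn.BatchNorm1d(256), nn.ReLU(),
            nn.Conv1d(256, num_classes, 1),
        )

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # points: (batch, 3, num_points) -- raw xyz coordinates only.
        local_feat = self.local_mlp(points)                   # (B, 128, N)
        global_feat = self.global_mlp(local_feat).max(dim=2, keepdim=True)[0]
        global_feat = global_feat.expand(-1, -1, points.shape[2])
        fused = torch.cat([local_feat, global_feat], dim=1)   # (B, 1152, N)
        return self.seg_head(fused)                           # (B, classes, N)


if __name__ == "__main__":
    # Per-point class scores for a batch of partial plant point clouds.
    cloud = torch.rand(2, 3, 4096)        # 2 clouds, 4096 xyz points each
    logits = PointSegNet(num_classes=3)(cloud)
    labels = logits.argmax(dim=1)         # 0 = leaf, 1 = stem, 2 = soil (assumed)
    print(labels.shape)                   # torch.Size([2, 4096])
```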
Keywords
- 3D perception
- Semantic segmentation
- Plant phenotyping
Copyright information
© 2021 Springer Nature Switzerland AG
About this paper
Cite this paper
Heiwolt, K., Duckett, T., Cielniak, G. (2021). Deep Semantic Segmentation of 3D Plant Point Clouds. In: Fox, C., Gao, J., Ghalamzan Esfahani, A., Saaj, M., Hanheide, M., Parsons, S. (eds) Towards Autonomous Robotic Systems. TAROS 2021. Lecture Notes in Computer Science(), vol 13054. Springer, Cham. https://doi.org/10.1007/978-3-030-89177-0_4
DOI: https://doi.org/10.1007/978-3-030-89177-0_4
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-89176-3
Online ISBN: 978-3-030-89177-0