
Precision Agriculture, Volume 21, Issue 1, pp 160–177

Fruit detection in natural environment using partial shape matching and probabilistic Hough transform

  • Guichao Lin
  • Yunchao Tang
  • Xiangjun Zou
  • Jiabing Cheng
  • Juntao Xiong

Abstract

This paper proposes a novel technique for fruit detection in natural environments, applicable to automatic harvesting robots, yield estimation systems and quality monitoring systems. Because most color-based techniques are highly sensitive to illumination changes and to the low contrast between fruits and leaves, the proposed technique is instead based on contour information. First, a discriminative shape descriptor is derived to represent the geometrical properties of an arbitrary contour fragment, and it is applied in a bidirectional partial shape matching step to detect sub-fragments of interest that match parts of a reference contour. Then, a novel probabilistic Hough transform is developed to aggregate these sub-fragments into fruit candidates. Finally, all fruit candidates are verified by a support vector machine classifier trained on color and texture features. Citrus, tomato, pumpkin, bitter gourd, towel gourd and mango datasets were collected for evaluation. Experiments on these datasets demonstrated that the proposed approach is competitive at detecting most types of fruit, whether green or orange, circular or non-circular, in natural environments.
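
The detection pipeline therefore has three stages: bidirectional partial shape matching against a reference contour, probabilistic Hough voting to aggregate matched sub-fragments into fruit candidates, and SVM verification of each candidate. The sketch below illustrates only the last stage, assuming an HSV color histogram plus a uniform local-binary-pattern histogram as the color and texture features; the helper names and parameter values are hypothetical illustrations, not the feature set used in the paper.

```python
# Minimal sketch of the candidate-verification stage: an SVM trained on
# color and texture features. The feature choices (HSV histogram + uniform
# LBP histogram) are assumptions for illustration only.
import numpy as np
from skimage.color import rgb2hsv, rgb2gray
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def extract_features(patch_rgb, lbp_points=8, lbp_radius=1):
    """Concatenate an HSV color histogram with a uniform-LBP texture histogram."""
    hsv = rgb2hsv(patch_rgb)
    color_hist, _ = np.histogramdd(
        hsv.reshape(-1, 3), bins=(8, 8, 8), range=((0, 1), (0, 1), (0, 1)))
    color_hist = color_hist.ravel() / color_hist.sum()

    gray = rgb2gray(patch_rgb)
    lbp = local_binary_pattern(gray, lbp_points, lbp_radius, method='uniform')
    lbp_hist, _ = np.histogram(lbp, bins=lbp_points + 2,
                               range=(0, lbp_points + 2), density=True)
    return np.concatenate([color_hist, lbp_hist])

def train_verifier(patches, labels):
    """Fit an RBF-kernel SVM on labelled candidate patches (1 = fruit, 0 = background)."""
    X = np.vstack([extract_features(p) for p in patches])
    clf = SVC(kernel='rbf', C=10.0, gamma='scale', probability=True)
    clf.fit(X, labels)
    return clf

def verify_candidates(clf, candidate_patches, threshold=0.5):
    """Keep only the candidates the SVM classifies as fruit with sufficient confidence."""
    X = np.vstack([extract_features(p) for p in candidate_patches])
    scores = clf.predict_proba(X)[:, 1]
    return [i for i, s in enumerate(scores) if s >= threshold]
```

With scikit-image and scikit-learn installed, `train_verifier` would be fit on labelled patches cropped around training detections, and `verify_candidates` would then filter the candidates produced by the probabilistic Hough transform at test time.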

Keywords

Fruit detection · Shape descriptor · Partial shape matching · Probabilistic Hough transform · Support vector machine

Acknowledgements

This work was funded by a grant from the National Natural Science Foundation of China (No. 31571568) and by the Science and Technology Project of Guangdong Province (No. 2017A030222005).


Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2019

Authors and Affiliations

  1. Key Laboratory of Key Technology on Agricultural Machine and Equipment, Ministry of Education, South China Agricultural University, Guangzhou, China
  2. College of Mechanical and Electrical Engineering, Chuzhou University, Chuzhou, China
  3. College of Urban and Rural Construction, Zhongkai University of Agriculture and Engineering, Guangzhou, China