Vegetation Segmentation in Cornfield Images Using Bag of Words

  • Yerania Campos
  • Erik Rodner
  • Joachim Denzler
  • Humberto Sossa
  • Gonzalo Pajares
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10016)

Abstract

We provide an alternative methodology for vegetation segmentation in cornfield images. The process comprises two main steps, which constitute the main contribution of this approach: (a) a low-level segmentation and (b) class label assignment using a Bag of Words (BoW) representation in conjunction with a supervised learning framework. The experimental results show that our proposal is suitable for extracting green plants in images of maize fields. The classification accuracy is 95.3%, which is comparable to values reported in the current literature.
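
As a concrete illustration of step (b), the sketch below shows how a segment-level BoW representation can be combined with a supervised classifier. It is a minimal, self-contained example assuming local descriptors have already been extracted for each low-level segment; the placeholder data, cluster count, and helper names (e.g. bow_histogram) are illustrative assumptions, not the authors' implementation.

```python
# Minimal BoW + supervised-classification sketch (illustrative only; not the
# authors' implementation). Assumes local descriptors were already extracted
# for every low-level segment of the image.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def bow_histogram(descriptors, codebook):
    """Quantise a segment's descriptors against the visual codebook and
    return a normalised word-frequency histogram."""
    words = codebook.predict(descriptors)
    hist, _ = np.histogram(words, bins=np.arange(codebook.n_clusters + 1))
    return hist / max(hist.sum(), 1)

# Placeholder training data: 20 segments, each with 50 descriptors of
# dimension 64, alternately labelled vegetation (1) / background (0).
rng = np.random.default_rng(0)
train_descriptors = [rng.random((50, 64)) for _ in range(20)]
train_labels = np.array([i % 2 for i in range(20)])

# 1) Build the visual vocabulary by clustering all training descriptors.
codebook = KMeans(n_clusters=32, n_init=10, random_state=0)
codebook.fit(np.vstack(train_descriptors))

# 2) Encode each segment as a BoW histogram and train the classifier.
X_train = np.array([bow_histogram(d, codebook) for d in train_descriptors])
classifier = SVC(kernel="rbf", C=10.0).fit(X_train, train_labels)

# 3) A new segment is encoded the same way and assigned a class label.
test_descriptors = rng.random((40, 64))
label = classifier.predict([bow_histogram(test_descriptors, codebook)])[0]
print("vegetation" if label == 1 else "background")
```

In a pipeline of this kind, the histograms would be computed per low-level segment produced in step (a), so the classifier labels entire segments as vegetation or background rather than individual pixels.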

Keywords

Bag-of-Words · Machine learning · Colour Vegetation Indices · Green detection


Copyright information

© Springer International Publishing AG 2016

Authors and Affiliations

  • Yerania Campos (1)
  • Erik Rodner (2)
  • Joachim Denzler (2)
  • Humberto Sossa (3)
  • Gonzalo Pajares (1)
  1. Department of Software Engineering and Artificial Intelligence, Faculty of Informatics, Complutense University, Madrid, Spain
  2. Computer Vision Group, Friedrich Schiller University Jena, Jena, Germany
  3. Instituto Politécnico Nacional-CIC, Mexico D.F., Mexico