Machine Vision and Applications

Volume 28, Issue 3–4, pp 293–311

Superpixel-based class-semantic texton occurrences for natural roadside vegetation segmentation

Original Paper

Abstract

Vegetation segmentation from roadside data has received relatively little attention in existing studies, yet it holds great potential for a wide range of real-world applications, such as road safety assessment and vegetation condition monitoring. In this paper, we present a novel approach that generates class-semantic color–texture textons and aggregates superpixel-based texton occurrences for vegetation segmentation in natural roadside images. Pixel-level class-semantic textons are learnt by generating two individual sets of bag-of-words visual dictionaries from color and filter bank texture features separately for each object class, using manually cropped training data. A test image is first oversegmented into a set of homogeneous superpixels. The color and texture features of all pixels in each superpixel are extracted and mapped to the nearest learnt texton, yielding a color and a texture texton occurrence matrix. The two occurrence matrices are aggregated over each superpixel using a linear mixing method, and the segmentation is finally obtained using a simple yet effective majority voting strategy. Evaluations on two datasets, video data collected by the Department of Transport and Main Roads, Queensland, Australia, and a public roadside grass dataset, show the high accuracy of the proposed approach. We also demonstrate its effectiveness for vegetation segmentation in real-world scenarios.
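The pipeline in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: function names, the choice of k-means for dictionary learning, Euclidean nearest-texton assignment, and the mixing weight `w` are all assumptions for the sake of a runnable example.

```python
# Hypothetical sketch of class-semantic texton learning and superpixel voting.
# Assumptions: per-class pixel features are given as NumPy arrays; textons are
# learnt with k-means (a common choice for bag-of-words dictionaries); each
# texton carries the class label of the data it was learnt from.
import numpy as np
from sklearn.cluster import KMeans

def learn_textons(class_features, k=10, seed=0):
    """Learn k textons per class; returns (textons, texton_class_labels)."""
    centers, labels = [], []
    for cls, feats in class_features.items():
        km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(feats)
        centers.append(km.cluster_centers_)
        labels.extend([cls] * k)
    return np.vstack(centers), np.array(labels)

def superpixel_vote(color_feats, texture_feats,
                    color_textons, color_labels,
                    texture_textons, texture_labels, w=0.5):
    """Map each pixel of one superpixel to its nearest color and texture
    texton, count per-class texton occurrences, mix the two occurrence
    vectors linearly, and return the majority-vote class."""
    classes = np.unique(np.concatenate([color_labels, texture_labels]))

    def occurrences(feats, textons, labels):
        # Nearest-texton assignment by Euclidean distance
        d = np.linalg.norm(feats[:, None, :] - textons[None, :, :], axis=2)
        nearest = labels[d.argmin(axis=1)]
        return np.array([(nearest == c).sum() for c in classes], float)

    occ = (w * occurrences(color_feats, color_textons, color_labels)
           + (1 - w) * occurrences(texture_feats, texture_textons, texture_labels))
    return classes[occ.argmax()]
```

In practice the superpixels would come from an oversegmentation step (e.g. mean shift or SLIC) and the texture features from a filter bank response; both are omitted here to keep the sketch self-contained.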

Keywords

Vegetation segmentation · Texton · Superpixel · Classification algorithm · Object recognition

Acknowledgements

This research was supported under ARC (Australian Research Council) Linkage Projects funding scheme (Project No. LP140100939).


Copyright information

© Springer-Verlag Berlin Heidelberg 2017

Authors and Affiliations

  1. School of Engineering and Technology, Central Queensland University, Brisbane, Australia
