Segmentation of LiDAR Intensity Using CNN Feature Based on Weighted Voting

  • Masaki Umemura
  • Kazuhiro Hotta
  • Hideki Nonaka
  • Kazuo Oda
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10317)


We propose an image labeling method for LiDAR intensity images obtained by a Mobile Mapping System (MMS). Our conventional segmentation method using a CNN and K-NN achieved high accuracy overall, but the accuracies for object classes covering small areas were much lower than those for classes covering large areas. We address this issue with voting costs. The first cost is determined from a local region; the second is determined from the regions surrounding that local region. Each cost becomes large when the labeling result matches the class label of the region. In experiments, we use 36 LiDAR intensity images with ground-truth labels, divided into a training set (28 images) and a test set (8 images). We use class average accuracy as the evaluation measure. Our proposed method achieves 84.75% class average accuracy, which is 9.22% higher than our conventional method. We demonstrate that the proposed costs are effective for improving accuracy.
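The two-cost voting scheme (a score from the local region plus a score aggregated over its surrounding regions) and the class average accuracy measure described above can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' implementation; the function names and the weights `w_local` and `w_surround` are assumptions.

```python
import numpy as np

def weighted_vote(local_probs, surround_probs, w_local=1.0, w_surround=0.5):
    """Pick a class label by weighted voting (illustrative sketch).

    local_probs:    (n_classes,) class scores from the local region.
    surround_probs: (n_regions, n_classes) class scores from the
                    regions surrounding the local region.
    """
    # Combine the local-region cost with the averaged surrounding-region cost.
    votes = w_local * local_probs + w_surround * surround_probs.mean(axis=0)
    return int(np.argmax(votes))

def class_average_accuracy(conf_matrix):
    """Mean of per-class accuracies, computed from a confusion matrix
    whose rows are ground-truth classes and columns are predictions."""
    per_class = conf_matrix.diagonal() / conf_matrix.sum(axis=1)
    return per_class.mean()
```

Averaging accuracy over classes, rather than over pixels, is what makes small-area classes count as much as large ones in the evaluation.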


Keywords: Local region · Class label · Convolutional neural network · Vote weight · Catchment basin



This work is partially supported by MEXT KAKENHI 15K00252.



Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Masaki Umemura (1)
  • Kazuhiro Hotta (1)
  • Hideki Nonaka (2)
  • Kazuo Oda (2)
  1. Meijo University, Tempaku-ku, Nagoya, Japan
  2. Asia Air Survey Co., Ltd., Asao-ku, Kawasaki, Japan
