Deep Multispectral Semantic Scene Understanding of Forested Environments Using Multimodal Fusion

  • Abhinav Valada
  • Gabriel L. Oliveira
  • Thomas Brox
  • Wolfram Burgard
Conference paper
Part of the Springer Proceedings in Advanced Robotics book series (SPAR, volume 1)


Semantic scene understanding of unstructured environments is a highly challenging task for robots operating in the real world. Deep Convolutional Neural Network architectures define the state of the art in various segmentation tasks. So far, researchers have focused on segmentation with RGB data. In this paper, we study the use of multispectral and multimodal images for semantic segmentation and develop fusion architectures that learn from RGB, Near-InfraRed channels, and depth data. We introduce a first-of-its-kind multispectral segmentation benchmark that contains 15,000 images and 366 pixel-wise ground-truth annotations of unstructured forest environments. We identify new data augmentation strategies that enable training of very deep models using relatively small datasets. We show that our UpNet architecture exceeds the state of the art both qualitatively and quantitatively on our benchmark. In addition, we present experimental results for segmentation under challenging real-world conditions. Benchmark and demo are publicly available at
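One reason a Near-InfraRed channel helps in forested scenes is that healthy vegetation reflects strongly in NIR while absorbing red light, so simple band ratios already separate vegetation from soil or road. As a minimal, hedged illustration (this is the classical NDVI computation, not the paper's learned fusion architecture, and the arrays below are synthetic toy data):

```python
import numpy as np

def ndvi(nir, red, eps=1e-8):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).

    Both inputs are reflectance arrays of the same shape; eps guards
    against division by zero in dark pixels.
    """
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)

# Toy 2x2 reflectance patch: top row vegetation-like (high NIR),
# bottom row soil/road-like (low NIR). High NDVI suggests vegetation.
nir = np.array([[0.8, 0.7], [0.2, 0.1]])
red = np.array([[0.1, 0.2], [0.3, 0.4]])
print(np.round(ndvi(nir, red), 2))  # [[ 0.78  0.56] [-0.2  -0.6 ]]
```

A deep fusion network can learn richer cross-channel cues than this fixed index, but the example shows why feeding NIR alongside RGB gives the model discriminative signal that RGB alone lacks.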


Semantic segmentation · Convolutional neural networks · Scene understanding · Multimodal perception



Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Abhinav Valada (1), corresponding author
  • Gabriel L. Oliveira (1)
  • Thomas Brox (1)
  • Wolfram Burgard (1)

  1. Department of Computer Science, University of Freiburg, Freiburg im Breisgau, Germany
