
Classification of ALS Point Clouds Using End-to-End Deep Learning

  • Lukas Winiwarter
  • Gottfried Mandlburger
  • Stefan Schmohl
  • Norbert Pfeifer
Original Article

Abstract

Deep learning, referring to artificial neural networks with multiple layers, is widely used for classification tasks in many disciplines, including computer vision. The most popular type is the Convolutional Neural Network (CNN), commonly applied to 2D image data. However, CNNs are difficult to adapt to irregular data such as point clouds. PointNet, in contrast, uses a neural network to derive features from the geometric distribution of a set of points in nD space. We use PointNet on multiple scales to automatically learn a representation of local neighbourhoods in an end-to-end fashion, optimised for semantic labelling of 3D point clouds acquired by Airborne Laser Scanning (ALS). The results are comparable to those obtained with manually crafted features, suggesting that these neighbourhoods are represented successfully. On the ISPRS 3D Semantic Labelling benchmark, we achieve 80.6% overall accuracy, a mid-field result. Investigations on a larger dataset, the 2011 ALS point cloud of the federal state of Vorarlberg, show overall accuracies of up to 95.8% over large built-up areas. Lower accuracies are achieved for the separation of low vegetation and ground points, presumably because of invalid assumptions about the spatial distribution of classes, especially in high alpine regions. We conclude that this end-to-end approach, which allows training on a wide variety of classification problems without expert knowledge of neighbourhood features, can also be applied successfully to single-point-based classification of ALS point clouds.
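
At the core of this approach, PointNet derives a neighbourhood descriptor by passing every point of a local subset through a shared multilayer perceptron and aggregating the per-point features with a symmetric max-pooling operation, so the result does not depend on the order of the points. The sketch below illustrates this idea in PyTorch for per-point labelling of a single ALS neighbourhood; it is a minimal illustration, not the authors' alsNet, and the layer widths, the class name PointNetSegSketch and the four placeholder classes are assumptions chosen for brevity.

# Minimal sketch (not the authors' alsNet): a vanilla PointNet-style
# per-point classifier applied to one local neighbourhood of an ALS
# point cloud. Per-point input features are assumed to be (x, y, z);
# layer widths and the number of classes are placeholders.
import torch
import torch.nn as nn


class PointNetSegSketch(nn.Module):
    """Shared per-point MLP + symmetric max-pool + per-point head."""

    def __init__(self, in_dim: int = 3, num_classes: int = 4):
        super().__init__()
        # Shared MLP, implemented as 1x1 convolutions along the point axis.
        self.point_mlp = nn.Sequential(
            nn.Conv1d(in_dim, 64, 1), nn.BatchNorm1d(64), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.BatchNorm1d(128), nn.ReLU(),
        )
        # Per-point head sees local (128) plus global (128) features.
        self.head = nn.Sequential(
            nn.Conv1d(256, 128, 1), nn.BatchNorm1d(128), nn.ReLU(),
            nn.Conv1d(128, num_classes, 1),
        )

    def forward(self, pts: torch.Tensor) -> torch.Tensor:
        # pts: (batch, in_dim, n_points)
        local_feat = self.point_mlp(pts)                           # (B, 128, N)
        # Max-pooling over the point axis: order-invariant global feature.
        global_feat = local_feat.max(dim=2, keepdim=True).values   # (B, 128, 1)
        global_feat = global_feat.expand(-1, -1, pts.shape[2])     # (B, 128, N)
        fused = torch.cat([local_feat, global_feat], dim=1)        # (B, 256, N)
        return self.head(fused)                                    # (B, C, N)


if __name__ == "__main__":
    # Two neighbourhoods of 1024 centred/normalised points each.
    model = PointNetSegSketch(in_dim=3, num_classes=4)
    xyz = torch.randn(2, 3, 1024)
    logits = model(xyz)            # (2, 4, 1024) class scores
    labels = logits.argmax(dim=1)  # predicted class per point
    print(labels.shape)

Concatenating the pooled global feature back to each point's local feature lets the head predict one class per point, which corresponds to the single-point-based classification described above; a multi-scale variant could repeat this over neighbourhoods of different sizes and combine the resulting features.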

Keywords

Semantic labelling · Machine learning · Neural networks · PointNet · Airborne laser scanning

Zusammenfassung

Classification of ALS Point Clouds Using End-to-End Deep Learning. Machine learning with artificial neural networks, known as deep learning, has achieved many successes in classification and regression tasks, in particular in computer vision. A popular model is the Convolutional Neural Network, which is based on a convolution operation and can easily be applied to 2D image data. Its application to irregular data such as point clouds, however, is not trivial. PointNet was therefore introduced as an architecture that can learn, in the form of a neural network, to derive neighbourhood information from a set of points in n-dimensional space. In this work, PointNet is applied to topographic point clouds from Airborne Laser Scanning (ALS) for semantic classification. The results obtained are comparable to those based on manually selected features, which indicates a successful learning process. The overall accuracy on the ISPRS 3D Semantic Labelling test dataset is 80.6%, a mid-field result. Further analyses on a larger dataset, an ALS point cloud of the federal state of Vorarlberg, show considerably higher overall accuracies of up to 95.8% in urban areas. In alpine regions, however, the separation of low vegetation and ground proves problematic. Nevertheless, the method of automatically deriving neighbourhood features from point clouds for single-point-based classification tasks using training data can also be applied successfully to ALS data, without requiring expert knowledge.


Acknowledgements

alsNet was trained and evaluated in part on hardware donated to the University of Stuttgart and Heidelberg University by NVIDIA, and on the Vienna Scientific Cluster (VSC-3). The Vorarlberg ALS dataset of 2011 was provided as Open Government Data by the federal government of Vorarlberg (Land Vorarlberg): http://data.vorarlberg.gv.at/. The Vaihingen dataset was provided by the German Society for Photogrammetry, Remote Sensing and Geoinformation (DGPF) (Cramer 2010): https://ifpwww.ifp.uni-stuttgart.de/dgpf/DKEP-Allg.html.

References

  1. Badrinarayanan V, Kendall A, Cipolla R (2017) SegNet: a deep convolutional encoder–decoder architecture for image segmentation. IEEE Trans Pattern Anal Mach Intell 39(12):2481–2495
  2. Bechtold S, Höfle B (2016) HELIOS: a multi-purpose LiDAR simulation framework for research, planning and training of laser scanning operations with airborne, ground-based mobile and stationary platforms. ISPRS Ann Photogramm Remote Sens Spat Inf Sci III–3:161–168
  3. Blomley R, Weinmann M (2017) Using multi-scale features for the 3D semantic labeling of airborne laser scanning data. ISPRS Ann Photogramm Remote Sens Spat Inf Sci IV–2:43–50
  4. Boulch A, Le Saux B, Audebert N (2017) Unstructured point cloud semantic labeling using deep segmentation networks. In: Pratikakis I, Dupont F, Ovsjanikov M (eds) Eurographics workshop on 3D object retrieval. The Eurographics Association, Aire-la-Ville. https://doi.org/10.2312/3dor.20171047
  5. Chehata N, Guo L, Mallet C (2009) Airborne LiDAR feature selection for urban classification using random forests. In: ISPRS International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Paris, France, vol XXXVIII-3/W8, pp 207–212
  6. Cramer M (2010) The DGPF-test on digital airborne camera evaluation – overview and test design. Photogramm Fernerkundung Geoinf 2:73–82
  7. Cybenko G (1989) Approximation by superpositions of a sigmoidal function. Math Control Signals Syst 2(4):303–314
  8. Dai A, Chang AX, Savva M, Halber M, Funkhouser T, Niessner M (2017) ScanNet: richly-annotated 3D reconstructions of indoor scenes. In: IEEE conference on computer vision and pattern recognition (CVPR) 2017
  9. Dai A, Ritchie D, Bokeloh M, Reed S, Sturm J, Nießner M (2018) ScanComplete: large-scale scene completion and semantic segmentation for 3D scans. In: 2018 IEEE/CVF conference on computer vision and pattern recognition, Salt Lake City, UT, USA, pp 4578–4587. https://doi.org/10.1109/CVPR.2018.00481
  10. Gerke M (2014) 3D semantic labeling contest. http://www2.isprs.org/commissions/comm3/wg4/3d-semantic-labeling.html
  11. Goodfellow I, Bengio Y, Courville A (2016) Deep learning. MIT Press, Cambridge. http://www.deeplearningbook.org
  12. Graham B, Engelcke M, van der Maaten L (2018) 3D semantic segmentation with submanifold sparse convolutional networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 9224–9232
  13. Grilli E, Menna F, Remondino F (2017) A review of point clouds segmentation and classification algorithms. In: ISPRS International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol XLII-2/W3, pp 339–344. https://doi.org/10.5194/isprs-archives-XLII-2-W3-339-2017
  14. Hackel T, Wegner JD, Schindler K (2016) Fast semantic segmentation of 3D point clouds with strongly varying density. ISPRS Ann Photogramm Remote Sens Spat Inf Sci III–3:177–184. https://doi.org/10.5194/isprs-annals-III-3-177-2016
  15. Hackel T, Wegner JD, Savinov N, Ladicky L, Schindler K, Pollefeys M (2018) Large-scale supervised learning for 3D point cloud labeling: Semantic3D.net. Photogramm Eng Remote Sens 84(5):297–308
  16. Hoo-Chang S, Roth HR, Gao M, Lu L, Xu Z, Nogues I, Yao J, Mollura D, Summers RM (2016) Deep convolutional neural networks for computer-aided detection: CNN architectures, dataset characteristics and transfer learning. IEEE Trans Med Imaging 35(5):1285
  17. Hu X, Yuan Y (2016) Deep-learning-based classification for DTM extraction from ALS point cloud. Remote Sens 8(9):730. https://doi.org/10.3390/rs8090730
  18. Huang J, You S (2016) Point cloud labeling using 3D convolutional neural network. In: 23rd international conference on pattern recognition (ICPR), pp 2670–2675. https://doi.org/10.1109/ICPR.2016.7900038
  19. Hubel DH, Wiesel TN (1962) Receptive fields, binocular interaction and functional architecture in the cat’s visual cortex. J Physiol 160(1):106–154
  20. Landrieu L, Simonovsky M (2018) Large-scale point cloud semantic segmentation with superpoint graphs. In: 2018 IEEE/CVF conference on computer vision and pattern recognition, pp 4558–4567. https://doi.org/10.1109/CVPR.2018.00479
  21. Lawin FJ, Danelljan M, Tosteberg P, Bhat G, Khan FS, Felsberg M (2017) Deep projective 3D semantic segmentation. In: Felsberg M, Heyden A, Krüger N (eds) Computer analysis of images and patterns. Springer, Berlin, pp 95–107. https://doi.org/10.1007/978-3-319-64689-3_8
  22. LeCun Y, Boser B, Denker JS, Henderson D, Howard RE, Hubbard W, Jackel LD (1989) Backpropagation applied to handwritten zip code recognition. Neural Comput 1(4):541–551
  23. LeCun Y, Cortes C, Burges C (2018) MNIST handwritten digit database. http://yann.lecun.com/exdb/mnist/. Accessed 13 Nov 2018
  24. Mallet C (2010) Analysis of full-waveform LiDAR data for urban area mapping. PhD thesis, Télécom ParisTech
  25. McCulloch WS, Pitts W (1943) A logical calculus of the ideas immanent in nervous activity. Bull Math Biophys 5(4):115–133
  26. Niemeyer J, Rottensteiner F, Soergel U (2014) Contextual classification of LiDAR data and building object detection in urban areas. ISPRS J Photogramm Remote Sens 87:152–165
  27. Niemeyer J, Rottensteiner F, Soergel U, Heipke C (2016) Hierarchical higher order CRF for the classification of airborne LiDAR point clouds in urban areas. Int Arch Photogramm Remote Sens Spat Inf Sci XLI–B3:655–662. https://doi.org/10.5194/isprs-archives-XLI-B3-655-2016
  28. Otepka J, Ghuffar S, Waldhauser C, Hochreiter R, Pfeifer N (2013) Georeferenced point clouds: a survey of features and point cloud management. ISPRS Int J Geo-Inf 2(4):1038–1065
  29. Persello C, Stein A (2017) Deep fully convolutional networks for the detection of informal settlements in VHR images. IEEE Geosci Remote Sens Lett 14(12):2325–2329
  30. Politz F, Sester M (2018) Exploring ALS and DIM data for semantic segmentation using CNNs. Int Arch Photogramm Remote Sens Spat Inf Sci XLII–1:347–354. https://doi.org/10.5194/isprs-archives-XLII-1-347-2018
  31. Politz F, Kazimi B, Sester M (2018) Classification of laser scanning data using deep learning. In: Proceedings of the 38th scientific-technical annual conference of the DGPF and PFGK18 in Munich, Deutsche Gesellschaft für Photogrammetrie, Fernerkundung und Geoinformation (DGPF) e.V., vol 27, pp 597–610
  32. Qi CR, Su H, Mo K, Guibas LJ (2017a) PointNet: deep learning on point sets for 3D classification and segmentation. In: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR)
  33. Qi CR, Yi L, Su H, Guibas LJ (2017b) PointNet++: deep hierarchical feature learning on point sets in a metric space. In: Advances in neural information processing systems, pp 5099–5108
  34. Rizaldy A, Persello C, Gevaert C, Oude Elberink S (2018) Fully convolutional networks for ground classification from LiDAR point clouds. ISPRS Ann Photogramm Remote Sens Spat Inf Sci IV–2:231–238
  35. Rosenblatt F (1958) The perceptron: a probabilistic model for information storage and organization in the brain. Psychol Rev 65(6):386–408
  36. Rumelhart DE, Hinton GE, Williams RJ (1986) Learning internal representations by error propagation. In: Rumelhart DE, McClelland JL, the PDP Research Group (eds) Parallel distributed processing: explorations in the microstructures of cognition, vol I: foundations. MIT Press, New York
  37. Schmohl S, Sörgel U (2019) Submanifold sparse convolutional networks for semantic segmentation of large-scale ALS point clouds. ISPRS Ann Photogramm Remote Sens Spat Inf Sci IV–2/W5:77–84
  38. Song S, Yu F, Zeng A, Chang AX, Savva M, Funkhouser TA (2017) Semantic scene completion from a single depth image. In: 2017 IEEE conference on computer vision and pattern recognition, pp 190–198. https://doi.org/10.1109/CVPR.2017.28
  39. Tchapmi L, Choy C, Armeni I, Gwak J, Savarese S (2017) SEGCloud: semantic segmentation of 3D point clouds. In: 2017 international conference on 3D vision (3DV), pp 537–547. https://doi.org/10.1109/3DV.2017.00067
  40. TopoSys (2014) Technischer Abschlussbericht LiDAR und RGB – Land Vorarlberg (Anja Wiedenhöft and Svein G Vatslid)
  41. Tran THG, Otepka J, Wang D, Pfeifer N (2018) Classification of image matching point clouds over an urban area. Int J Remote Sens 39(12):4145–4169
  42. Vosselman G (2013) Point cloud segmentation for urban scene classification. ISPRS Int Arch Photogramm Remote Sens Spat Inf Sci XL–7/W2:257–262. https://doi.org/10.5194/isprsarchives-XL-7-W2-257-2013
  43. Wagner W, Roncat A, Melzer T, Ullrich A (2007) Waveform analysis techniques in airborne laser scanning. ISPRS Int Arch Photogramm Remote Sens Spat Inf Sci XXXVI/3:413–418
  44. Weinmann M, Jutzi B, Mallet C (2013) Feature relevance assessment for the semantic interpretation of 3D point cloud data. ISPRS Ann Photogramm Remote Sens Spat Inf Sci II–5/W2:313–318
  45. Weinmann M, Jutzi B, Hinz S, Mallet C (2015) Semantic point cloud interpretation based on optimal neighborhoods, relevant features and efficient classifiers. ISPRS J Photogramm Remote Sens 105:286–304
  46. Yang Z, Jiang W, Xu B, Zhu Q, Jiang S, Huang W (2017) A convolutional neural network-based 3D semantic labeling method for ALS point clouds. Remote Sens 9(9):936. https://doi.org/10.3390/rs9090936
  47. Yousefhussien M, Kelbe DJ, Ientilucci EJ, Salvaggio C (2018) A multi-scale fully convolutional network for semantic labeling of 3D point clouds. ISPRS J Photogramm Remote Sens 143:191–204. https://doi.org/10.1016/j.isprsjprs.2018.03.018
  48. Zhao R, Pang M, Wang J (2018) Classifying airborne LiDAR point clouds via deep features learned by a multi-scale convolutional neural network. Int J Geogr Inf Sci 32(5):960–979

Copyright information

© Deutsche Gesellschaft für Photogrammetrie, Fernerkundung und Geoinformation (DGPF) e.V. 2019

Authors and Affiliations

  • Lukas Winiwarter (1, 2)
  • Gottfried Mandlburger (1, 3)
  • Stefan Schmohl (3)
  • Norbert Pfeifer (1)

  1. Department of Geodesy and Geoinformation (E120), Technische Universität Wien, Vienna, Austria
  2. 3D Geospatial Data Processing Group (3DGeo), Institute of Geography, Heidelberg University, Heidelberg, Germany
  3. Institute for Photogrammetry, University of Stuttgart, Stuttgart, Germany
