Abstract
Efforts in various planning scenarios, such as factory planning, motion and trajectory planning, and product design, tend towards full realization in 3D. This makes point clouds an important 3D data type for capturing and assessing different situations. In this paper, we design a Bayesian extension of the frequentist PointNet classification network [1] by applying Bayesian convolutional and linear layers with variational inference. This approach allows the estimation of the model's uncertainty in its predictions. Further, we are able to describe how each point in the input point cloud contributes to the prediction-level uncertainty. Additionally, our network is compared against the state of the art and shows strong performance. We demonstrate the feasibility of our approach on the ModelNet 3D data set. Further, we generate an industrial 3D point cloud data set at a German automotive assembly plant and apply our network to it. The results show that we improve the frequentist baseline on ModelNet by about 6.46%.
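The core idea of the Bayesian layers mentioned above can be illustrated with a minimal sketch. This is not the authors' implementation; the class name `BayesianLinear` and all parameter choices are hypothetical, following the mean-field Gaussian reparameterization commonly used for variational inference in neural networks (cf. Blundell et al. in the reference list). Because the weights are sampled on every forward pass, repeated predictions on the same input vary, and that spread serves as a Monte Carlo estimate of the model's uncertainty.

```python
import numpy as np

rng = np.random.default_rng(0)

class BayesianLinear:
    """Linear layer with a factorized Gaussian posterior over its weights.

    Each forward pass samples a fresh weight matrix via the
    reparameterization trick, so predictions are stochastic.
    """

    def __init__(self, n_in, n_out):
        self.mu = rng.normal(0.0, 0.1, size=(n_in, n_out))   # variational means
        self.rho = np.full((n_in, n_out), -3.0)              # pre-softplus scales

    def forward(self, x):
        sigma = np.log1p(np.exp(self.rho))        # softplus keeps sigma > 0
        eps = rng.standard_normal(self.mu.shape)  # reparameterization noise
        w = self.mu + sigma * eps                 # sampled weight matrix
        return x @ w

def predictive_stats(layer, x, n_samples=100):
    """Monte Carlo estimate of the predictive mean and standard deviation."""
    samples = np.stack([layer.forward(x) for _ in range(n_samples)])
    return samples.mean(axis=0), samples.std(axis=0)

layer = BayesianLinear(3, 2)
x = np.ones((1, 3))
mean, std = predictive_stats(layer, x)  # std > 0 reflects weight uncertainty
```

In a full network along the lines of the paper, such layers would replace the point-wise convolutions and fully connected layers of PointNet, and the per-sample variation of the class scores would be aggregated into a prediction-level uncertainty.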
Keywords
- Uncertainty estimation
- Variational inference
- Bayesian neural network
- Point cloud classification
- Point cloud processing
References
C. R. Qi, H. Su, K. Mo, and L. J. Guibas, "PointNet: Deep learning on point sets for 3D classification and segmentation," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 652–660.
A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Advances in Neural Information Processing Systems, 2012, pp. 1097–1105.
J. Long, E. Shelhamer, and T. Darrell, "Fully convolutional networks for semantic segmentation," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 3431–3440.
Y. Bengio, R. Ducharme, P. Vincent, and C. Jauvin, "A neural probabilistic language model," Journal of Machine Learning Research, vol. 3, pp. 1137–1155, 2003.
P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, and P.-A. Manzagol, "Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion," Journal of Machine Learning Research, vol. 11, pp. 3371–3408, 2010.
V. A. Banks, K. L. Plant, and N. A. Stanton, "Driver error or designer error: Using the perceptual cycle model to explore the circumstances surrounding the fatal Tesla crash on 7th May 2016," Safety Science, vol. 108, pp. 278–285, 2018.
A. Kendall and Y. Gal, "What uncertainties do we need in Bayesian deep learning for computer vision?" in Advances in Neural Information Processing Systems, 2017, pp. 5574–5584.
A. Der Kiureghian and O. Ditlevsen, "Aleatory or epistemic? Does it matter?" Structural Safety, vol. 31, no. 2, pp. 105–112, 2009.
K. Posch, J. Steinbrener, and J. Pilz, "Variational inference to measure model uncertainty in deep neural networks," arXiv preprint arXiv:1902.10189, 2019.
M. Robnik-Šikonja and I. Kononenko, "Explaining classifications for individual instances," IEEE Transactions on Knowledge and Data Engineering, vol. 20, no. 5, pp. 589–600, 2008.
L. M. Zintgraf, T. S. Cohen, T. Adel, and M. Welling, "Visualizing deep neural network decisions: Prediction difference analysis," arXiv preprint arXiv:1702.04595, 2017.
Z. Wu, S. Song, A. Khosla, F. Yu, L. Zhang, X. Tang, and J. Xiao, "3D ShapeNets: A deep representation for volumetric shapes," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 1912–1920.
A. Graves, "Practical variational inference for neural networks," in Advances in Neural Information Processing Systems, 2011, pp. 2348–2356.
C. Blundell, J. Cornebise, K. Kavukcuoglu, and D. Wierstra, "Weight uncertainty in neural networks," arXiv preprint arXiv:1505.05424, 2015.
Y. Gal and Z. Ghahramani, "Bayesian convolutional neural networks with Bernoulli approximate variational inference," arXiv preprint arXiv:1506.02158, 2015.
H. Rue, S. Martino, and N. Chopin, "Approximate Bayesian inference for latent Gaussian models by using integrated nested Laplace approximations," Journal of the Royal Statistical Society: Series B (Statistical Methodology), vol. 71, no. 2, pp. 319–392, 2009.
S. Chib and E. Greenberg, "Understanding the Metropolis–Hastings algorithm," The American Statistician, vol. 49, no. 4, pp. 327–335, 1995.
G. Casella and E. I. George, "Explaining the Gibbs sampler," The American Statistician, vol. 46, no. 3, pp. 167–174, 1992.
D. M. Blei, A. Kucukelbir, and J. D. McAuliffe, "Variational inference: A review for statisticians," Journal of the American Statistical Association, vol. 112, no. 518, pp. 859–877, 2017.
Y. Gal, R. Islam, and Z. Ghahramani, "Deep Bayesian active learning with image data," in Proceedings of the 34th International Conference on Machine Learning, vol. 70. JMLR.org, 2017, pp. 1183–1192.
S. Depeweg, J. M. Hernández-Lobato, F. Doshi-Velez, and S. Udluft, "Uncertainty decomposition in Bayesian neural networks with latent variables," arXiv preprint arXiv:1706.08495, 2017.
R. D. Goyal, "Knowledge based neural network for text classification," in 2007 IEEE International Conference on Granular Computing (GRC 2007). IEEE, 2007, pp. 542–542.
D. Koller and M. Sahami, "Hierarchically classifying documents using very few words," Stanford InfoLab, Tech. Rep., 1997.
A. McCallum, R. Rosenfeld, T. M. Mitchell, and A. Y. Ng, "Improving text classification by shrinkage in a hierarchy of classes," in ICML, vol. 98, 1998, pp. 359–367.
L. Zhang and Q. Ji, "A Bayesian network model for automatic and interactive image segmentation," IEEE Transactions on Image Processing, vol. 20, no. 9, pp. 2582–2593, 2011.
M. A. Kupinski, D. C. Edwards, M. L. Giger, and C. E. Metz, "Ideal observer approximation using Bayesian classification neural networks," IEEE Transactions on Medical Imaging, vol. 20, no. 9, pp. 886–899, 2001.
Z. Kang, J. Yang, and R. Zhong, "A Bayesian-network-based classification method integrating airborne LiDAR data with optical images," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 10, no. 4, pp. 1651–1661, 2016.
A. Dewan, G. L. Oliveira, and W. Burgard, "Deep semantic classification for 3D LiDAR data," in 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2017, pp. 3544–3549.
K. Posch and J. Pilz, "Correlated parameters to accurately measure uncertainty in deep neural networks," arXiv preprint arXiv:1904.01334, 2019.
K. Shridhar, F. Laumann, and M. Liwicki, "Uncertainty estimations by softplus normalization in Bayesian convolutional neural networks with variational inference," arXiv preprint arXiv:1806.05978, 2018.
A. Kendall, V. Badrinarayanan, and R. Cipolla, "Bayesian SegNet: Model uncertainty in deep convolutional encoder-decoder architectures for scene understanding," arXiv preprint arXiv:1511.02680, 2015.
A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer, "Automatic differentiation in PyTorch," in NIPS Autodiff Workshop, 2017.
© 2021 The Author(s), under exclusive license to Springer Fachmedien Wiesbaden GmbH, part of Springer Nature
Cite this paper
Petschnigg, C., Pilz, J. (2021). Uncertainty aware deep point based neural network for 3D object classification. In: Haber, P., Lampoltshammer, T., Mayr, M., Plankensteiner, K. (eds) Data Science – Analytics and Applications. Springer Vieweg, Wiesbaden. https://doi.org/10.1007/978-3-658-32182-6_11
Publisher Name: Springer Vieweg, Wiesbaden
Print ISBN: 978-3-658-32181-9
Online ISBN: 978-3-658-32182-6