
Uncertainty aware deep point based neural network for 3D object classification


Abstract

Efforts in various planning scenarios, such as factory planning, motion and trajectory planning, and product design, tend towards full realization in 3D. This makes point clouds an important 3D data type for capturing and assessing different situations. In this paper, we design a Bayesian extension of the frequentist PointNet classification network [1] by applying Bayesian convolutional and linear layers with variational inference. This approach allows the estimation of the model's uncertainty in its predictions. Further, we are able to describe how each point in the input point cloud contributes to the prediction-level uncertainty. Additionally, our network is compared against the state of the art and shows strong performance. We demonstrate the feasibility of our approach on the ModelNet 3D data set. Further, we generate an industrial 3D point data set at a German automotive assembly plant and apply our network to it. The results show that we improve on the frequentist baseline on ModelNet by about 6.46%.
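The core mechanism the abstract describes, Bayesian layers whose weights carry a variational posterior, can be illustrated with a minimal sketch. This is not the authors' PyTorch implementation: it is a plain-NumPy illustration assuming a factorized Gaussian weight posterior and the standard reparameterization trick, and the names `BayesianLinear` and `mc_predict` as well as all parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

class BayesianLinear:
    """Linear layer with a factorized Gaussian posterior over its weights.

    Each forward pass draws a fresh weight sample via the reparameterization
    trick, so repeated passes produce different outputs whose spread reflects
    the model's (epistemic) uncertainty.
    """
    def __init__(self, n_in, n_out):
        self.mu = rng.normal(0.0, 0.1, size=(n_in, n_out))   # posterior means
        self.rho = np.full((n_in, n_out), -3.0)              # pre-softplus scales

    def forward(self, x):
        sigma = np.log1p(np.exp(self.rho))   # softplus keeps the std positive
        eps = rng.normal(size=self.mu.shape)
        w = self.mu + sigma * eps            # reparameterized weight sample
        return x @ w

def mc_predict(layer, x, n_samples=100):
    """Monte Carlo predictive mean and std over stochastic forward passes."""
    outs = np.stack([layer.forward(x) for _ in range(n_samples)])
    return outs.mean(axis=0), outs.std(axis=0)

layer = BayesianLinear(4, 2)
x = np.ones((1, 4))
mean, std = mc_predict(layer, x)
```

In training, the posterior parameters `mu` and `rho` would be optimized against the variational (ELBO) objective; at inference, the per-output `std` is the kind of prediction-level uncertainty estimate the paper works with.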


References

  1. C. R. Qi, H. Su, K. Mo, and L. J. Guibas, "PointNet: Deep learning on point sets for 3D classification and segmentation," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 652–660.

  2. A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Advances in Neural Information Processing Systems, 2012, pp. 1097–1105.

  3. J. Long, E. Shelhamer, and T. Darrell, "Fully convolutional networks for semantic segmentation," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 3431–3440.

  4. Y. Bengio, R. Ducharme, P. Vincent, and C. Jauvin, "A neural probabilistic language model," Journal of Machine Learning Research, vol. 3, no. Feb, pp. 1137–1155, 2003.

  5. P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, and P.-A. Manzagol, "Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion," Journal of Machine Learning Research, vol. 11, no. Dec, pp. 3371–3408, 2010.

  6. V. A. Banks, K. L. Plant, and N. A. Stanton, "Driver error or designer error: Using the perceptual cycle model to explore the circumstances surrounding the fatal Tesla crash on 7th May 2016," Safety Science, vol. 108, pp. 278–285, 2018.

  7. A. Kendall and Y. Gal, "What uncertainties do we need in Bayesian deep learning for computer vision?" in Advances in Neural Information Processing Systems, 2017, pp. 5574–5584.

  8. A. Der Kiureghian and O. Ditlevsen, "Aleatory or epistemic? Does it matter?" Structural Safety, vol. 31, no. 2, pp. 105–112, 2009.

  9. K. Posch, J. Steinbrener, and J. Pilz, "Variational inference to measure model uncertainty in deep neural networks," arXiv preprint arXiv:1902.10189, 2019.

  10. M. Robnik-Šikonja and I. Kononenko, "Explaining classifications for individual instances," IEEE Transactions on Knowledge and Data Engineering, vol. 20, no. 5, pp. 589–600, 2008.

  11. L. M. Zintgraf, T. S. Cohen, T. Adel, and M. Welling, "Visualizing deep neural network decisions: Prediction difference analysis," arXiv preprint arXiv:1702.04595, 2017.

  12. Z. Wu, S. Song, A. Khosla, F. Yu, L. Zhang, X. Tang, and J. Xiao, "3D ShapeNets: A deep representation for volumetric shapes," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 1912–1920.

  13. A. Graves, "Practical variational inference for neural networks," in Advances in Neural Information Processing Systems, 2011, pp. 2348–2356.

  14. C. Blundell, J. Cornebise, K. Kavukcuoglu, and D. Wierstra, "Weight uncertainty in neural networks," arXiv preprint arXiv:1505.05424, 2015.

  15. Y. Gal and Z. Ghahramani, "Bayesian convolutional neural networks with Bernoulli approximate variational inference," arXiv preprint arXiv:1506.02158, 2015.

  16. H. Rue, S. Martino, and N. Chopin, "Approximate Bayesian inference for latent Gaussian models by using integrated nested Laplace approximations," Journal of the Royal Statistical Society: Series B (Statistical Methodology), vol. 71, no. 2, pp. 319–392, 2009.

  17. S. Chib and E. Greenberg, "Understanding the Metropolis–Hastings algorithm," The American Statistician, vol. 49, no. 4, pp. 327–335, 1995.

  18. G. Casella and E. I. George, "Explaining the Gibbs sampler," The American Statistician, vol. 46, no. 3, pp. 167–174, 1992.

  19. D. M. Blei, A. Kucukelbir, and J. D. McAuliffe, "Variational inference: A review for statisticians," Journal of the American Statistical Association, vol. 112, no. 518, pp. 859–877, 2017.

  20. Y. Gal, R. Islam, and Z. Ghahramani, "Deep Bayesian active learning with image data," in Proceedings of the 34th International Conference on Machine Learning, Volume 70. JMLR.org, 2017, pp. 1183–1192.

  21. S. Depeweg, J. M. Hernández-Lobato, F. Doshi-Velez, and S. Udluft, "Uncertainty decomposition in Bayesian neural networks with latent variables," arXiv preprint arXiv:1706.08495, 2017.

  22. R. D. Goyal, "Knowledge based neural network for text classification," in 2007 IEEE International Conference on Granular Computing (GRC 2007). IEEE, 2007, pp. 542–542.

  23. D. Koller and M. Sahami, "Hierarchically classifying documents using very few words," Stanford InfoLab, Tech. Rep., 1997.

  24. A. McCallum, R. Rosenfeld, T. M. Mitchell, and A. Y. Ng, "Improving text classification by shrinkage in a hierarchy of classes," in ICML, vol. 98, 1998, pp. 359–367.

  25. L. Zhang and Q. Ji, "A Bayesian network model for automatic and interactive image segmentation," IEEE Transactions on Image Processing, vol. 20, no. 9, pp. 2582–2593, 2011.

  26. M. A. Kupinski, D. C. Edwards, M. L. Giger, and C. E. Metz, "Ideal observer approximation using Bayesian classification neural networks," IEEE Transactions on Medical Imaging, vol. 20, no. 9, pp. 886–899, 2001.

  27. Z. Kang, J. Yang, and R. Zhong, "A Bayesian-network-based classification method integrating airborne LiDAR data with optical images," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 10, no. 4, pp. 1651–1661, 2016.

  28. A. Dewan, G. L. Oliveira, and W. Burgard, "Deep semantic classification for 3D LiDAR data," in 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2017, pp. 3544–3549.

  29. K. Posch and J. Pilz, "Correlated parameters to accurately measure uncertainty in deep neural networks," arXiv preprint arXiv:1904.01334, 2019.

  30. K. Shridhar, F. Laumann, and M. Liwicki, "Uncertainty estimations by softplus normalization in Bayesian convolutional neural networks with variational inference," arXiv preprint arXiv:1806.05978, 2018.

  31. A. Kendall, V. Badrinarayanan, and R. Cipolla, "Bayesian SegNet: Model uncertainty in deep convolutional encoder-decoder architectures for scene understanding," arXiv preprint arXiv:1511.02680, 2015.

  32. A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer, "Automatic differentiation in PyTorch," in NIPS Autodiff Workshop, 2017.


Author information


Correspondence to Christina Petschnigg.


Copyright information

© 2021 The Author(s), under exclusive license to Springer Fachmedien Wiesbaden GmbH, part of Springer Nature


Cite this paper

Petschnigg, C., Pilz, J. (2021). Uncertainty aware deep point based neural network for 3D object classification. In: Haber, P., Lampoltshammer, T., Mayr, M., Plankensteiner, K. (eds) Data Science – Analytics and Applications. Springer Vieweg, Wiesbaden. https://doi.org/10.1007/978-3-658-32182-6_11
