
An evaluation methodology for 3D deep neural networks using visualization in 3D data classification

Journal of Mechanical Science and Technology

Abstract

"Making 3D deep neural networks debuggable." In this study, we develop and propose a visualization methodology for evaluating the performance of 3D deep neural networks. The research was conducted on a 3D deep neural network model that shows the best classification performance. The proposed method visualizes the parts of a 3D object that influence the network's decision by generating complement instances with a naive Bayesian approach and analyzing the prediction difference contributed by each feature. The visualization results show clear differences depending on the output class and on the individual instance within a class, and the analysis provides insight that can be used to evaluate and improve the performance of a deep neural network (DNN) model. In this way, 3D deep neural networks can be made "indirectly debuggable": once the visualization and its analysis are complete, the method can serve both as an evaluation method for generally non-debuggable DNNs and as a debugging method.
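The abstract's core idea, measuring how the predicted class probability changes when parts of a 3D object are replaced by samples from a simple per-voxel prior, can be illustrated with a minimal sketch. This is not the paper's implementation: the voxel grid size, patch size, fill probability, and the stand-in `predict_proba` classifier below are all illustrative assumptions.

```python
import numpy as np

def predict_proba(voxels):
    # Stand-in for a trained 3D DNN classifier (e.g., a voxel CNN).
    # Toy "model": class-1 probability grows with the occupied
    # fraction of the central region of the grid.
    p1 = voxels[8:24, 8:24, 8:24].mean()
    return np.array([1.0 - p1, p1])

def prediction_difference_map(voxels, predict, target_class,
                              patch=4, fill_prob=0.5, n_samples=8, seed=0):
    """For each patch of voxels, replace it with samples drawn from a
    naive per-voxel occupancy prior and record how much the target-class
    probability drops. Large positive values mark regions of the object
    that support the network's decision."""
    rng = np.random.default_rng(seed)
    base = predict(voxels)[target_class]
    diff = np.zeros(voxels.shape, dtype=float)
    d = voxels.shape[0]
    for x in range(0, d, patch):
        for y in range(0, d, patch):
            for z in range(0, d, patch):
                probs = []
                for _ in range(n_samples):
                    v = voxels.copy()
                    v[x:x+patch, y:y+patch, z:z+patch] = (
                        rng.random((patch, patch, patch)) < fill_prob
                    ).astype(voxels.dtype)
                    probs.append(predict(v)[target_class])
                diff[x:x+patch, y:y+patch, z:z+patch] = base - np.mean(probs)
    return diff

# Toy 32x32x32 object: a solid block in the center of the grid.
grid = np.zeros((32, 32, 32), dtype=np.float32)
grid[12:20, 12:20, 12:20] = 1.0
dmap = prediction_difference_map(grid, predict_proba, target_class=1)
```

Under these assumptions, patches inside the solid block get positive scores (removing them lowers the class-1 probability), while patches outside the classifier's receptive region score zero; the resulting map can then be rendered on the 3D object, as the proposed methodology does.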



Author information

Corresponding author

Correspondence to Soo-Hong Lee.

Additional information

Recommended by Associate Editor Young Hun Jeong

Hyun-Tae Hwang is currently pursuing his Ph.D. at Yonsei University in Seoul, Korea. He received his bachelor's degree in mechanical engineering from Yonsei University in 2013. His current research interests include PLM, collaborative design, and machine learning.

Soo-Hong Lee is currently a full-time Professor at the Department of Mechanical Engineering, Yonsei University in Seoul, Korea. He received his bachelor's degree in mechanical engineering from Seoul National University in 1981 and his master's degree in mechanical engineering design from Seoul National University in 1983. He received his Ph.D. from Stanford University, California, USA, in 1991. His current research interests include intelligent CAD, knowledge-based engineering design, concurrent engineering, product design management, product lifecycle management, artificial intelligence in design, and design automation.


Cite this article

Hwang, H.-T., Lee, S.-H., Chi, H.-G. et al. An evaluation methodology for 3D deep neural networks using visualization in 3D data classification. J Mech Sci Technol 33, 1333–1339 (2019). https://doi.org/10.1007/s12206-019-0233-1

