
Neural network training fingerprint: visual analytics of the training process in classification neural networks

  • Regular Paper
  • Published in: Journal of Visualization

Abstract

The striking results of deep neural networks (DNNs) have motivated their wide adoption for large datasets and complex tasks such as natural language processing, facial recognition, and artificial image generation. However, DNN parameters are often selected empirically, through trial and error, without detailed information on convergence behavior. While several visualization techniques have been proposed to aid the comprehension of general-purpose neural networks, few explore the training process, and those that do lack the ability to adequately display how abstract representations are formed or how training parameters influence this process. This paper describes the neural network training fingerprint (NNTF), a visual analytics approach for investigating the training process of any neural network performing classification. NNTF shows how classification decisions change along the training process, displaying information about convergence, oscillations, and training rates. We demonstrate its usefulness through case studies and show how it can support the analysis of training parameters.
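The abstract's core idea, tracking how a classifier's decisions change across training epochs, can be illustrated with a minimal sketch. This is not the paper's implementation (see the Notes for the authors' source code); `decision_change_matrix` is a hypothetical helper that, given the predicted labels recorded after each epoch, computes the fraction of samples whose prediction differs between every pair of epochs, yielding a recurrence-plot-like matrix in which convergence appears as a stable block and oscillation as alternating bands:

```python
import numpy as np

def decision_change_matrix(pred_history):
    """Compare classification decisions across training epochs.

    pred_history: array-like of shape (epochs, samples), where row i
    holds the class labels the network predicted after epoch i.
    Returns an (epochs, epochs) matrix whose entry (i, j) is the
    fraction of samples classified differently at epochs i and j.
    """
    preds = np.asarray(pred_history)
    n_epochs = preds.shape[0]
    changes = np.zeros((n_epochs, n_epochs))
    for i in range(n_epochs):
        for j in range(n_epochs):
            # Fraction of samples whose predicted class changed
            # between epoch i and epoch j.
            changes[i, j] = np.mean(preds[i] != preds[j])
    return changes

# Toy run: three epochs, three samples; sample 1 flips once, then
# the decisions stabilize.
m = decision_change_matrix([[0, 0, 1],
                            [0, 1, 1],
                            [0, 1, 1]])
```

Rendering such a matrix as a heatmap gives a rough analogue of the "fingerprint" view described above; the actual NNTF visualization is richer than this pairwise-disagreement sketch.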

Graphical abstract


[Figures 1–13 are available in the full article.]


Notes

  1. Source code available at https://github.com/marthadais/NeuralNetworkTrainingFingerprint.


Acknowledgements

We acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC).

Author information


Corresponding author

Correspondence to Martha Dais Ferreira.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Ferreira, M.D., Cantareira, G.D., de Mello, R.F. et al. Neural network training fingerprint: visual analytics of the training process in classification neural networks. J Vis 25, 593–612 (2022). https://doi.org/10.1007/s12650-021-00809-4

