Comparative Analysis of the Results of Training a Neural Network with Calculated Weights and with Random Generation of the Weights

  • Journal: Automation and Remote Control
  • Section: Robust, Adaptive, and Network Control

An Erratum to this article was published on 01 December 2020

Abstract

Neural networks based on metric recognition methods make it possible to determine the structure of the network (the number of neurons, layers, and connections) directly from the initial conditions of a computer vision task, such as the number of images and samples, and also to calculate the values of the weights on the network's connections analytically. As feedforward neural networks, they can additionally be trained by classical learning algorithms. Because the weight values can be precomputed, the procedure of creating and training such a feedforward network is faster than the classical scheme, in which the initial weights are generated randomly. In this work, we conduct two experiments on the MNIST dataset of handwritten digits that confirm this statement.
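The paper's analytic weight-calculation scheme is not given in this preview, so the following is only a minimal sketch of the kind of comparison the abstract describes: a softmax layer whose weights are initialized from class-mean templates (a crude stand-in for weights calculated by a metric recognition method) versus the same layer with small random initial weights. The use of scikit-learn's load_digits as a small MNIST-like dataset, and all helper names here, are assumptions for illustration, not the author's implementation.

```python
# Hedged sketch: class-mean templates as a crude analogue of "calculated"
# weights (the paper's exact scheme is NOT reproduced here); load_digits
# stands in for MNIST so the example runs without downloads.
import numpy as np
from sklearn.datasets import load_digits

rng = np.random.default_rng(0)
X, y = load_digits(return_X_y=True)   # 1797 images of 8x8 digits, labels 0..9
X = X / 16.0                          # scale pixel values to [0, 1]
n, d, k = X.shape[0], X.shape[1], 10
Y = np.eye(k)[y]                      # one-hot targets

def train(W, b, epochs=20, lr=0.5):
    """Batch gradient descent on softmax cross-entropy; returns per-epoch loss."""
    losses = []
    for _ in range(epochs):
        z = X @ W + b
        z -= z.max(axis=1, keepdims=True)       # numerical stability
        p = np.exp(z)
        p /= p.sum(axis=1, keepdims=True)       # softmax probabilities
        losses.append(-np.log(p[np.arange(n), y]).mean())
        g = (p - Y) / n                         # gradient of the mean loss
        W -= lr * (X.T @ g)
        b -= lr * g.sum(axis=0)
    return losses

# (a) classical scheme: small random initial weights
W_random = rng.normal(0.0, 0.01, size=(d, k))

# (b) "calculated" initialization: one class-mean template per output neuron,
#     mimicking a nearest-mean (metric) classifier
W_calc = np.stack([X[y == c].mean(axis=0) for c in range(k)], axis=1)
W_calc -= W_calc.mean()                         # center the templates

for name, W0 in (("random", W_random), ("calculated", W_calc)):
    losses = train(W0.copy(), np.zeros(k))
    print(f"{name:>10}: epoch-1 loss {losses[0]:.3f}, epoch-20 loss {losses[-1]:.3f}")
```

Under these assumptions, the template-based start typically begins at a visibly lower loss than the random start and needs fewer epochs to reach the same level, which is the qualitative effect the abstract claims for precomputed weights.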

Author information

Correspondence to P.Sh. Geidarov.

About this article

Cite this article

Geidarov, P. Comparative Analysis of the Results of Training a Neural Network with Calculated Weights and with Random Generation of the Weights. Autom Remote Control 81, 1211–1229 (2020). https://doi.org/10.1134/S0005117920070048
