
On the Possibility of Determining the Values of Neural Network Weights in an Electrostatic Field

Scientific and Technical Information Processing

Abstract

In typical feedforward neural network architectures, the values of the connection weights and neuron thresholds are determined by iterative weight adjustment performed with standard learning algorithms. Feedforward architectures implemented on the basis of metric recognition methods are also known, in which the neuron weights are precalculated analytically. This analytical calculation is carried out on the basis of metric expressions and immediately yields a workable neural network without training. The effectiveness of the resulting network depends on the selected set and number of samples, as well as on the chosen dimension of the weight table. Such networks can also be trained with standard learning algorithms, so the efficiency of a network with calculated weights can be increased by additional training; moreover, calculating the weights and then training the network is faster than training the network in the traditional way. Building on these networks, this paper considers the possibility of determining the weights and thresholds of a neural network from the strength and potential of an electrostatic field, that is, of using the parameter values of the electrostatic field as the weight values of the network. In other words, the possibility of creating a workable neural network without analytical calculations and without learning algorithms is considered. Such an approach would make the process of determining the weight values almost instantaneous. Technically feasible implementations of the approach, the problematic aspects of using electrostatic field parameters as neural network weights, and possible ways of resolving these difficulties are discussed.
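To make the proposed mapping more concrete, the following minimal sketch (Python/NumPy) illustrates one possible reading of the idea: each training sample is modeled as a point charge, and the Coulomb potential it produces at the positions of the input grid is taken directly as a first-layer weight, so that the most strongly activated "sample neuron" acts as a nearest-neighbor decision. The charge value, the grid, and the mapping itself are illustrative assumptions, not details taken from the article.

    import numpy as np

    K = 8.99e9    # Coulomb constant, N*m^2/C^2
    Q = 1.0e-9    # assumed charge assigned to every sample, C
    EPS = 1e-3    # small offset to avoid division by zero at the charge itself

    def potential_weights(sample_points, grid_points):
        # pairwise distances between sample charges and input grid positions
        diff = sample_points[:, None, :] - grid_points[None, :, :]
        r = np.linalg.norm(diff, axis=-1) + EPS
        # potential phi = K*Q/r of each sample charge at each grid position
        return K * Q / r

    samples = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])           # 3 sample "charges"
    grid = np.array([[0.2, 0.2], [0.8, 0.1], [0.1, 0.9], [0.5, 0.5]])  # 4 input positions

    W = potential_weights(samples, grid)   # (3, 4) candidate first-layer weight matrix
    x = np.array([1.0, 0.0, 0.0, 1.0])     # binary input pattern on the grid
    scores = W @ x                         # one activation per sample neuron
    print(scores, scores.argmax())         # most strongly activated sample

In this sketch the weight of the connection from a grid cell to a sample neuron decays as 1/r with the distance to the sample, which is one natural way to obtain a workable classifier without any training; whether the field strength or the potential is the better source of weight values is among the questions the article examines.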




Author information


Corresponding author

Correspondence to P. Sh. Geidarov.

Ethics declarations

The author declares that he has no conflicts of interest.

Additional information

Translated by E. Oborin

About this article


Cite this article

Geidarov, P.S. On the Possibility of Determining the Values of Neural Network Weights in an Electrostatic Field. Sci. Tech. Inf. Proc. 49, 506–518 (2022). https://doi.org/10.3103/S014768822205015X

