The Large-Scale Symmetry Learning Applying Pavlov Principle

  • Conference paper
  • First Online:
Advances in Neural Computation, Machine Learning, and Cognitive Research III (NEUROINFORMATICS 2019)

Part of the book series: Studies in Computational Intelligence (SCI, volume 856)

Abstract

The task of symmetry detection in the domain of 100-dimensional binary vectors is considered. This task is characterized by a practically infinite number of training samples. We train an artificial neural network with binary neurons to solve the symmetry detection task. The weights of the hidden neurons are changed according to the Pavlov Principle: in the presence of an error, synaptic weights are adjusted using a matrix of fixed random weights. After training on a relatively small number of data samples, the network acquires generalization ability and detects symmetry in data not present in the training set. The averaged percentage of correct recognition of our network exceeds that of a classic perceptron with fixed weights in the synapses of the hidden-layer neurons. We also compare the performance of modifications of the architecture with different numbers of hidden layers, different numbers of neurons per hidden layer, and different numbers of synapses per neuron.
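The abstract specifies the mechanism only in outline: when the output is wrong, hidden-layer weights are adjusted through a fixed matrix of random weights, which is close in spirit to feedback alignment (Lillicrap et al., 2014) and direct feedback alignment (Nokland, 2016). Below is a minimal Python sketch of that kind of setup on the 100-bit symmetry task. The layer width, learning rate, sampling scheme, and the exact form of the update are illustrative assumptions, not the authors' implementation of the Pavlov Principle.

import numpy as np

rng = np.random.default_rng(0)

N = 100    # input dimension: 100-bit binary vectors (from the abstract)
H = 256    # hidden-layer width (assumed)
LR = 0.01  # learning rate (assumed)

W = rng.normal(0.0, 0.1, size=(H, N))  # trainable input-to-hidden weights
b = np.zeros(H)                        # hidden biases
B = rng.normal(size=H)                 # fixed random feedback weights
v = rng.normal(0.0, 0.1, size=H)       # hidden-to-output weights
c = 0.0                                # output bias

def sample(batch, p_sym=0.5):
    """Random 100-bit vectors; about half are made mirror-symmetric."""
    x = rng.integers(0, 2, size=(batch, N)).astype(float)
    y = (rng.random(batch) < p_sym).astype(float)
    mirror = y == 1
    x[mirror, N // 2:] = x[mirror, :N // 2][:, ::-1]  # copy first half, reversed
    return x, y

def forward(x):
    h = (x @ W.T + b > 0).astype(float)  # binary threshold hidden units
    out = (h @ v + c > 0).astype(float)  # binary output unit
    return h, out

for step in range(20000):
    x, y = sample(64)
    h, out = forward(x)
    err = y - out  # nonzero only on misclassified samples
    # Error-driven update: the scalar output error reaches each hidden unit
    # through the fixed random vector B instead of backpropagated gradients;
    # the output unit gets a perceptron-style update.
    W += LR * ((err[:, None] * B[None, :]).T @ x) / len(x)
    b += LR * (err[:, None] * B[None, :]).mean(axis=0)
    v += LR * (err @ h) / len(x)
    c += LR * err.mean()

x_test, y_test = sample(2000)
_, pred = forward(x_test)
print("held-out accuracy:", (pred == y_test).mean())

Binary threshold units have no usable gradient, which is exactly why an error-driven rule with random feedback is attractive here; the paper's comparison against a perceptron with fixed hidden weights isolates the benefit of training the hidden layer.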



Acknowledgements

This work was financially supported by the State Program of SRISA RAS No. 0065-2019-0003 (AAA-A19-119011590090-2).

Author information

Corresponding author

Correspondence to Alexander E. Lebedev.


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Lebedev, A.E., Solovyeva, K.P., Dunin-Barkowski, W.L. (2020). The Large-Scale Symmetry Learning Applying Pavlov Principle. In: Kryzhanovsky, B., Dunin-Barkowski, W., Redko, V., Tiumentsev, Y. (eds) Advances in Neural Computation, Machine Learning, and Cognitive Research III. NEUROINFORMATICS 2019. Studies in Computational Intelligence, vol 856. Springer, Cham. https://doi.org/10.1007/978-3-030-30425-6_48
