Soft Computing, Volume 21, Issue 9, pp 2385–2393

On randomization of neural networks as a form of post-learning strategy

Methodologies and Application


Abstract

Artificial neural networks are today applied in a variety of fields, including engineering, data analysis, and robotics. Although they are a successful tool for many relevant applications, they are, mathematically speaking, still far from conclusive. In particular, the training process may fail to find the best possible configuration of weights, settling instead in a local minimum. In this paper, we focus on this issue and propose a simple but effective post-learning strategy that searches for an improved set of weights at a relatively small extra computational cost. To this end, we introduce a novel technique, based on an analogy with quantum effects occurring in nature, as a way to mitigate (and sometimes overcome) this problem. Several numerical experiments are presented to validate the approach.


Keywords: Neural networks · Quantum randomness · Training strategy · Function approximation
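The post-learning idea summarized in the abstract can be illustrated with a generic random-perturbation sketch. This is an illustration only, not the authors' actual quantum-analogy technique: the function names, the Gaussian noise model, and the toy loss below are all hypothetical assumptions.

```python
import numpy as np

def post_learning_search(weights, loss_fn, trials=500, scale=0.8, seed=0):
    """Randomly perturb trained weights, keeping only improvements.

    A generic sketch of a post-learning search: starting from the
    weights produced by ordinary training (possibly a local minimum),
    draw Gaussian perturbations and accept a candidate only when it
    lowers the loss.
    """
    rng = np.random.default_rng(seed)
    best_w = np.asarray(weights, dtype=float)
    best_loss = loss_fn(best_w)
    for _ in range(trials):
        candidate = best_w + rng.normal(0.0, scale, size=best_w.shape)
        cand_loss = loss_fn(candidate)
        if cand_loss < best_loss:  # keep only strict improvements
            best_w, best_loss = candidate, cand_loss
    return best_w, best_loss

# Toy loss with a shallow local minimum near x = 1.14 and a deeper
# (global) minimum near x = -1.30; gradient-based training started on
# the right would stall in the shallower basin.
def loss(w):
    x = w[0]
    return x**4 - 3.0 * x**2 + x

w0 = np.array([1.14])                      # pretend training ended here
w_best, l_best = post_learning_search(w0, loss)
```

With a sufficiently large perturbation scale, some candidates land in the deeper basin and are accepted, so the returned loss is never worse than the starting value. This mimics, in spirit only, the idea of letting a post-learning search hop past barriers in the error surface after standard training has converged.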


Compliance with ethical standards

Conflict of interest

The authors declare that they have no conflict of interest.



Copyright information

© Springer-Verlag Berlin Heidelberg 2015

Authors and Affiliations

  1. IICT, Bulgarian Academy of Sciences, Sofia, Bulgaria
