On randomization of neural networks as a form of post-learning strategy
Artificial neural networks are now applied across many fields, including engineering, data analysis, and robotics. While they are a successful tool for a wide range of relevant applications, from a mathematical standpoint their training is far from settled. In particular, the training process cannot guarantee the best possible configuration of weights, since it may converge to a local minimum of the error function rather than the global one. In this paper, we focus on this issue and suggest a simple but effective post-learning strategy that searches for an improved set of weights at a relatively small extra computational cost. To this end, we introduce a novel technique, based on an analogy with quantum effects occurring in nature, as a way to mitigate (and sometimes overcome) this problem. Several numerical experiments are presented to validate the approach.
Keywords: Neural networks · Quantum randomness · Training strategy · Function approximation
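The paper's exact scheme is not given in this excerpt, but the idea of a post-learning search can be illustrated with a minimal sketch: after ordinary gradient training has converged, propose random perturbations of the weights (loosely analogous to quantum tunnelling out of a local minimum) and keep any that lower the loss. The function names, the toy loss, and all parameter values below are illustrative assumptions, not the authors' method.

```python
import random

def post_learning_search(weights, loss_fn, trials=200, sigma=1.0, seed=42):
    """Hypothetical post-learning step: starting from weights found by
    ordinary training, propose random Gaussian perturbations and keep
    any candidate that lowers the loss (greedy acceptance)."""
    rng = random.Random(seed)
    best_w, best_loss = list(weights), loss_fn(weights)
    for _ in range(trials):
        candidate = [w + rng.gauss(0.0, sigma) for w in best_w]
        cand_loss = loss_fn(candidate)
        if cand_loss < best_loss:  # accept improvements only
            best_w, best_loss = candidate, cand_loss
    return best_w, best_loss

# Toy loss with a shallow minimum near w = 1 and a deeper one near w = -1,
# standing in for the error surface of a trained network.
def toy_loss(w):
    x = w[0]
    return (x * x - 1.0) ** 2 + 0.3 * x

# Suppose gradient descent got stuck in the shallow basin around w = 1;
# the random perturbations give it a chance to reach the deeper basin.
refined_w, refined_loss = post_learning_search([1.0], toy_loss)
```

The extra cost is a fixed number of loss evaluations on the already-trained model, which is small compared with the cost of training itself.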
Compliance with ethical standards
Conflict of interest
The authors declare that they have no conflict of interest.