Stochastic approximation techniques and circuits and systems associated tools for neural network optimization
This paper is devoted to the optimization of feedforward and feedback Artificial Neural Networks (ANNs) operating in supervised learning mode. We describe in general terms how first- and second-order stochastic approximation methods that provide learning capabilities can be derived. We show that certain variables, the sensitivities of the ANN outputs, play a key role in the ANN optimization process. We then describe how elementary tools known from circuit theory can be used to compute these sensitivities at low computational cost. Finally, we show by example how to apply these two complementary sets of tools, i.e. stochastic approximation and sensitivity theory.
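To make the idea concrete, the following is a minimal sketch (not the paper's own algorithm) of a first-order stochastic approximation update for a one-hidden-layer feedforward network: at each step one random sample is drawn, the sensitivities of the network output with respect to each parameter are computed by the chain rule, and the parameters are corrected proportionally to the instantaneous error times those sensitivities. All variable names and the toy target function are our illustrative assumptions.

```python
import numpy as np

# Illustrative sketch: first-order stochastic approximation (stochastic
# gradient descent) for a tiny 1-input, 4-hidden-unit (tanh), 1-output
# feedforward network. The "sensitivities" below are the partial
# derivatives dy/dtheta of the network output w.r.t. each parameter.

rng = np.random.default_rng(0)

W1 = rng.normal(scale=0.5, size=4)   # input-to-hidden weights
b1 = np.zeros(4)                     # hidden biases
w2 = rng.normal(scale=0.5, size=4)   # hidden-to-output weights
b2 = 0.0                             # output bias

def forward(x):
    h = np.tanh(W1 * x + b1)         # hidden activations
    y = w2 @ h + b2                  # scalar network output
    return h, y

def target(x):
    return 0.5 * x                   # toy function to be learned

eta = 0.05                           # step size of the stochastic scheme
for step in range(5000):
    x = rng.uniform(-1.0, 1.0)       # one random training sample
    h, y = forward(x)
    e = y - target(x)                # instantaneous output error

    # Sensitivities of the output y w.r.t. each parameter (chain rule)
    dy_dw2 = h
    dy_db2 = 1.0
    dh = w2 * (1.0 - h**2)           # propagated back through tanh
    dy_dW1 = dh * x
    dy_db1 = dh

    # Stochastic approximation update: theta <- theta - eta * e * dy/dtheta
    w2 -= eta * e * dy_dw2
    b2 -= eta * e * dy_db2
    W1 -= eta * e * dy_dW1
    b1 -= eta * e * dy_db1

# After training, the network should approximate 0.5*x on [-1, 1]
errs = [abs(forward(x)[1] - target(x)) for x in np.linspace(-1, 1, 11)]
print(max(errs))
```

A second-order scheme would additionally accumulate (approximate) curvature information from these same sensitivities, e.g. a Gauss-Newton-style matrix built from outer products of the sensitivity vectors; the circuit-theoretic tools discussed in the paper concern computing the sensitivities themselves cheaply.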
Keywords: Artificial Neural Networks, Stochastic Approximation, Sequential Parameter Estimation, Adaptive Systems, Sensitivity Theory