
Rule-injection hints as a means of improving network performance and learning time

  • S. C. Suddarth
  • Y. L. Kergosien
Part II: Theory, Algorithms
Part of the Lecture Notes in Computer Science book series (LNCS, volume 412)

Abstract

Neural networks can be given “hints” by expanding the learning task to include additional targets related to the original relationship. The effect of such a hint, whether applied to back-propagation learning or to more general types of pattern associators, is to reduce training time and improve generalization performance. A detailed vector-field analysis of a hinted back-propagation network solving the XOR problem shows that the hint is capable of eliminating pathological local minima. A set-theory/functional-entropy analysis shows that the hint can be applied to any learning mechanism that has an internal (“hidden”) layer of processing. These analyses, together with tests on a variety of problems using different types of networks, demonstrate the potential of the hint as a method of controlling training in order to predictably train systems to model data effectively.
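
To make the mechanism concrete, the following is a minimal sketch of a hinted back-propagation network for XOR, written in Python/NumPy. The choice of AND as the auxiliary hint target, the layer sizes, and all hyperparameters are illustrative assumptions rather than values taken from the paper; the point is only that the hint is learned as an extra output sharing the hidden layer with the original target.

import numpy as np

rng = np.random.default_rng(0)

# Inputs and targets: column 0 of Y is the original XOR relationship,
# column 1 is the injected hint (here AND of the inputs -- an assumed,
# illustrative choice of related rule).
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
Y = np.array([[0., 0.], [1., 0.], [1., 0.], [0., 1.]])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One shared hidden layer feeds both the XOR output and the hint output,
# so gradients from the hint shape the hidden representation.
W1 = rng.normal(scale=0.5, size=(2, 2)); b1 = np.zeros(2)
W2 = rng.normal(scale=0.5, size=(2, 2)); b2 = np.zeros(2)

lr = 0.5
for _ in range(20000):
    h = sigmoid(X @ W1 + b1)          # hidden activations
    y = sigmoid(h @ W2 + b2)          # [XOR estimate, hint estimate]
    dy = (y - Y) * y * (1.0 - y)      # squared-error gradient at the outputs
    dh = (dy @ W2.T) * h * (1.0 - h)  # back-propagated to the hidden layer
    W2 -= lr * (h.T @ dy); b2 -= lr * dy.sum(axis=0)
    W1 -= lr * (X.T @ dh); b1 -= lr * dh.sum(axis=0)

print(np.round(y, 2))  # column 0 should approximate XOR: 0, 1, 1, 0

Dropping the hint column of Y reduces this to plain back-propagation on XOR, which is the baseline against which the paper's vector-field analysis compares the hinted network.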

Keywords

Hidden Layer, Hidden Neuron, Associative Memory, Hidden Unit, Rule Extraction



Copyright information

© Springer-Verlag Berlin Heidelberg 1990

Authors and Affiliations

  • S. C. Suddarth (1)
  • Y. L. Kergosien (2)
  1. ONERA/DES, Châtillon, France
  2. Département de Mathématique, Université de Paris Sud, Orsay, France
