Symmetry: Between indecision and equality of choice

  • Methodology for Data Analysis, Task Selection and Nets Design
  • Conference paper

Biological and Artificial Computation: From Neuroscience to Technology (IWANN 1997)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 1240)

Abstract

The training of a neural network is an intricate balance between knowledge, randomness, and symmetry. Symmetry can be both beneficial and detrimental to the learning process: beneficial through equality of choice, detrimental through indecision. The paper provides a critical review and classification of these symmetries and offers a constructive procedure for handling problem-indigenous symmetries.
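The two faces of symmetry named in the abstract can be made concrete with a small numerical experiment. The sketch below is ours, not the authors'; the network size, task, and learning rate are illustrative assumptions. It trains a two-layer perceptron by plain gradient descent: permuting the hidden units never changes the network function, so every solution comes with equally good mirror images (equality of choice), while a fully symmetric weight initialization gives both hidden units identical gradients at every step, so they can never differentiate (indecision).

```python
import numpy as np

# A minimal sketch, not taken from the paper: a bias-free 2-2-1 sigmoid
# network on XOR (sizes, task and learning rate are illustrative choices).

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train(W1, W2, steps=2000, lr=0.5):
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)
    for _ in range(steps):
        h = sigmoid(X @ W1)                      # hidden activations
        out = sigmoid(h @ W2)                    # network output
        d_out = (out - y) * out * (1 - out)      # output-layer delta
        d_h = (d_out @ W2.T) * h * (1 - h)       # hidden-layer delta
        W2 -= lr * (h.T @ d_out)
        W1 -= lr * (X.T @ d_h)
    return W1

# Symmetric start: the columns of W1 (one per hidden unit) receive
# identical updates at every step and therefore stay identical forever.
print(train(np.full((2, 2), 0.5), np.full((2, 1), 0.5)))

# Random start: the symmetry is broken and the hidden units differentiate;
# swapping the two units would yield an equally good solution.
rng = np.random.default_rng(0)
print(train(rng.normal(scale=0.5, size=(2, 2)),
            rng.normal(scale=0.5, size=(2, 1))))
```

Running the symmetric case prints a weight matrix whose two columns are equal, the signature of indecision; the random case prints two distinct columns, one of the mutually symmetric solutions that equality of choice permits.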

Editor information

José Mira, Roberto Moreno-Díaz, Joan Cabestany

Copyright information

© 1997 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Barakova, E.I., Spaanenburg, L. (1997). Symmetry: Between indecision and equality of choice. In: Mira, J., Moreno-Díaz, R., Cabestany, J. (eds) Biological and Artificial Computation: From Neuroscience to Technology. IWANN 1997. Lecture Notes in Computer Science, vol 1240. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0032550

  • DOI: https://doi.org/10.1007/BFb0032550

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-63047-0

  • Online ISBN: 978-3-540-69074-0

  • eBook Packages: Springer Book Archive
