Feed Forward Neural Network entities

  • Formal Tools and Computational Models of Neurons and Neural Net Architectures
  • Conference paper
Biological and Artificial Computation: From Neuroscience to Technology (IWANN 1997)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 1240)

Abstract

Feed Forward Neural Networks (FFNNs) are computational techniques inspired by the physiology of the brain and used to approximate general mappings from one finite-dimensional space to another. They represent a practical application of the theoretical resolution of Hilbert's 13th problem by Kolmogorov and Lorentz, and have been used successfully in a variety of applications. However, as the training data grow in both dimension and size, larger network implementations are required. As a consequence, scaling problems arise in most cases: the existing training algorithms cannot handle the vast search space, the outputs of the hidden-layer nodes saturate and, in general, the network becomes inflexible, slow and inefficient. In view of the above, we propose a methodology for breaking the traditionally single and rigid FFNN down into an entity of simpler units, in line with the connectionist view of the distributed representation of knowledge and with Kolmogorov's paradigm of approximating functions of many variables by compositions of one-variable functions. Although the entity concept is still under development, preliminary results indicate superiority over the single FFNN model on problems involving high-dimensional data (e.g. financial or meteorological data analysis).
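
Kolmogorov's 1957 result, on which this paradigm rests, states that every continuous function f of n variables on the unit cube can be written as a superposition of one-variable functions, f(x_1, ..., x_n) = Σ_{q=1}^{2n+1} Φ_q( Σ_{p=1}^{n} φ_{q,p}(x_p) ). The abstract describes the entity concept only at this level of detail, so the sketch below (NumPy, with hypothetical names such as SmallFFNN and Entity that are not taken from the paper) is merely one illustrative reading of the idea: a single large FFNN over a high-dimensional input is replaced by several smaller sub-networks acting on disjoint slices of the input, whose scalar outputs are merged by a small combining network.

import numpy as np

# Illustrative sketch only (not the authors' implementation): a conventional
# one-hidden-layer FFNN is reused as a building block, and an "entity" is
# assembled from several such sub-networks, each responsible for a disjoint
# slice of the input vector, with a small combining network merging their
# scalar outputs. All class and parameter names here are hypothetical.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class SmallFFNN:
    """One-hidden-layer feed-forward network mapping R^n_in to R^n_out."""
    def __init__(self, n_in, n_hidden, n_out):
        self.W1 = rng.normal(scale=1.0 / np.sqrt(n_in), size=(n_hidden, n_in))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(scale=1.0 / np.sqrt(n_hidden), size=(n_out, n_hidden))
        self.b2 = np.zeros(n_out)

    def forward(self, x):
        h = sigmoid(self.W1 @ x + self.b1)   # hidden-layer activations
        return self.W2 @ h + self.b2         # linear output layer

class Entity:
    """Replaces one large FFNN on R^d by sub-networks on disjoint input slices."""
    def __init__(self, slice_sizes, n_hidden=8):
        bounds = np.cumsum([0] + list(slice_sizes))
        self.slices = list(zip(bounds[:-1], bounds[1:]))
        # one small sub-network per input slice, each producing a single value
        self.subnets = [SmallFFNN(size, n_hidden, 1) for size in slice_sizes]
        # a second small network merges the sub-network outputs into one value
        self.combiner = SmallFFNN(len(slice_sizes), n_hidden, 1)

    def forward(self, x):
        parts = np.array([net.forward(x[a:b])[0]
                          for net, (a, b) in zip(self.subnets, self.slices)])
        return self.combiner.forward(parts)[0]

# Example: a 12-dimensional input handled by three 4-input sub-networks,
# instead of one monolithic network over all 12 inputs.
x = rng.normal(size=12)
entity = Entity(slice_sizes=[4, 4, 4])
print("entity output:", entity.forward(x))

Under this reading, each sub-network searches a much smaller weight space than a monolithic network over all inputs, which matches the scaling argument made in the abstract; how the sub-networks are actually trained and combined in the paper is not specified here.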

References

  1. C. Bishop. Neural Networks for Pattern Recognition. Clarendon Press, Oxford, 1995.

  2. T. L. Burrows and M. Niranjan. The use of feed-forward and recurrent neural networks for system identification. Technical report, Cambridge University Engineering Department, 1993.

  3. G. Cybenko. Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals, and Systems, 2(4):303–314, 1989.

  4. J. Freeman. Neural Networks: Theory and Practice. Addison-Wesley, 1991.

  5. K. Funahashi. On the approximate realization of continuous mappings by neural networks. Neural Networks, 2:183–192, 1989.

  6. G. E. Hinton. Connectionist learning procedures. Artificial Intelligence, 40:185–234, 1989.

  7. R. Hecht-Nielsen. Kolmogorov's mapping neural network existence theorem. In Proceedings of the IEEE First International Conference on Neural Networks, San Diego, volume 3, pages 11–14, 1987.

  8. R. Hecht-Nielsen. Theory of the backpropagation neural network. In Proceedings of the International Joint Conference on Neural Networks, volume 1, pages 593–606, 1989.

  9. K. Hornik. Approximation capabilities of multilayer feedforward networks. Neural Networks, 4:251–257, 1991.

  10. S. J. Hanson and L. Y. Pratt. Comparing biases for minimal network construction with back-propagation. In D. S. Touretzky, editor, Advances in Neural Information Processing Systems, volume 1, pages 177–185. Morgan Kaufmann, San Mateo, CA, 1989.

  11. K. Hornik, M. Stinchcombe, and H. White. Multilayer feedforward networks are universal approximators. In Halbert White, editor, Artificial Neural Networks: Approximation and Learning Theory, pages 12–28. Blackwell, Oxford, UK, 1992.

  12. A. N. Kolmogorov. On the representation of continuous functions of many variables by superposition of continuous functions of one variable and addition. Doklady Akademii Nauk SSSR, 114:953–956, 1957.

  13. A. V. Levy and A. Montalvo. The tunnelling algorithm for the global minimization of functions. SIAM J. Sci. Stat. Comput., 6:15–29, 1985.

  14. M. Gori and A. Tesi. Some examples of local minima during learning with backpropagation. Parallel Architectures and Neural Networks, 1990.

  15. M. L. Brady, R. Raghavan, and J. Slawny. Back-propagation fails to separate where perceptrons succeed. IEEE Transactions on Circuits and Systems, 36:665–674, 1989.

  16. M. Minsky and S. Papert. Perceptrons: An Introduction to Computational Geometry. MIT Press, Cambridge, MA, expanded edition, 1988.

  17. M. P. Perrone. General averaging results for convex optimization. In M. C. Mozer, editor, Proceedings 1993 Connectionist Models Summer School, pages 364–371, Hillsdale, NJ, 1994. Lawrence Erlbaum.

  18. P. Frasconi, M. Gori, and A. Tesi. Successes and failures of backpropagation: a theoretical investigation. In O. Omidvar, editor, Progress in Neural Networks. Ablex Publishing.

  19. R. A. Jacobs, M. I. Jordan, S. J. Nowlan, and G. E. Hinton. Adaptive mixtures of local experts. Neural Computation, 3:79–87, 1991.

  20. D. E. Rumelhart, G. E. Hinton, and R. J. Williams. Learning internal representations by back-propagating errors. In D. E. Rumelhart and J. L. McClelland, editors, Parallel Distributed Processing: Explorations in the Microstructure of Cognition. MIT Press, Cambridge, MA, 1986.

  21. W. Rudin. Principles of Mathematical Analysis. McGraw-Hill, New York, 1964.

  22. P. D. Wasserman. Neural Computing: Theory and Practice. Van Nostrand Reinhold, 1989.

  23. A. S. Weigend. Time Series Prediction: Forecasting the Future and Understanding the Past. Addison-Wesley, 1993.

  24. P. J. Werbos. Beyond Regression: New Tools for Prediction and Analysis in the Behavioral Sciences. Doctoral Dissertation, Applied Mathematics, Harvard University, Cambridge, MA, November 1974.

  25. P. J. Werbos. Back-propagation: Past and future. In Proceedings of IEEE International Conference on Neural Networks, volume 1, pages 343–353. IEEE Press, New York, 1988.

  26. H. White. Learning in artificial neural networks: A statistical perspective. Neural Computation, 1(4):425–464, 1989.

  27. A. S. Weigend, B. A. Huberman, and D. E. Rumelhart. Predicting sunspots and exchange rates with connectionist networks. In M. Casdagli and S. Eubank, editors, Nonlinear Modeling and Forecasting, SFI Studies in the Sciences of Complexity, volume 12. Addison-Wesley, 1991.

  28. D. H. Wolpert. Stacked generalization. Neural Networks, 5:241–259, 1992.

  29. A. S. Weigend, D. E. Rumelhart, and B. A. Huberman. Back-propagation, weight elimination and time series prediction. In Proceedings of the 1990 Connectionist Models Summer School, pages 65–80. Morgan Kaufmann, 1990.

  30. X. Yao. A review of evolutionary artificial neural networks. International Journal of Intelligent Systems, 8:539–567, 1993.

Editor information

José Mira, Roberto Moreno-Díaz, Joan Cabestany

Copyright information

© 1997 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Hadjiprocopis, A., Smith, P. (1997). Feed Forward Neural Network entities. In: Mira, J., Moreno-Díaz, R., Cabestany, J. (eds) Biological and Artificial Computation: From Neuroscience to Technology. IWANN 1997. Lecture Notes in Computer Science, vol 1240. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0032493

  • DOI: https://doi.org/10.1007/BFb0032493

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-63047-0

  • Online ISBN: 978-3-540-69074-0

  • eBook Packages: Springer Book Archive
