Abstract
Feed Forward Neural Networks (FFNNs) are computational techniques inspired by the physiology of the brain and used to approximate general mappings from one finite-dimensional space to another. They represent a practical application of the theoretical resolution of Hilbert's 13th problem by Kolmogorov and Lorentz, and have been used successfully in a variety of applications. However, as the training data grow in both dimension and size, larger network implementations are required. As a consequence, scaling problems usually arise: the existing training algorithms cannot handle the vast search space, saturation occurs at the outputs of the hidden-layer nodes and, in general, the network becomes inflexible, slow and inefficient. In view of the above, we propose a methodology for breaking down the traditionally single, rigid FFNN into an entity of simpler units, in line with the connectionist view of the distributed representation of knowledge and with Kolmogorov's paradigm of approximating functions of many variables by compositions of functions of one variable. Although the concept of entities is still under development, preliminary results indicate superiority over the single FFNN model when applied to problems involving high-dimensional data (e.g. financial or meteorological data analysis).
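For context, the Kolmogorov superposition theorem behind this paradigm is usually stated as follows (this is the commonly cited form, as in Kolmogorov 1957 in the references below; exact constants and inner functions vary between formulations and need not match the construction used in the paper): every continuous function f on [0,1]^n can be represented as

f(x_1, \dots, x_n) = \sum_{q=0}^{2n} \Phi_q\!\left( \sum_{p=1}^{n} \psi_{q,p}(x_p) \right),

where the \Phi_q and \psi_{q,p} are continuous functions of a single variable, and the inner functions \psi_{q,p} do not depend on f.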
References
C. Bishop. Neural Networks for Pattern Recognition. Clarendon Press, Oxford, 1995.
T. L. Burrows and M. Niranjan. The use of feed-forward and recurrent neural networks for system identification. Technical report, Cambridge University Engineering Department, 1993.
G. Cybenko. Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals, and Systems, 2(4):303–314, 1989.
J. Freeman. Neural Networks: Theory and Practice. Addison-Wesley, 1991.
K. Funahashi. On the approximate realization of continuous mappings by neural networks. Neural Networks, 2:183–192, 1989.
G. E. Hinton. Connectionist learning procedures. Artificial Intelligence, 40:185–234, 1989.
R. Hecht-Nielsen. Kolmogorov's mapping neural network existence theorem. IEEE First International Conference on Neural Networks, San Diego, 3:11–14, 1987.
R. Hecht-Nielsen. Theory of the backpropagation neural network. In Proceedings of the International Joint Conference on Neural Networks, volume 1, pages 593–606, 1989.
K. Hornik. Approximation capabilities of multilayer feedforward networks. Neural Networks, 4:251–257, 1991.
S. J. Hanson and L. Y. Pratt. Comparing biases for minimal network construction with back-propagation. In D. S. Touretzky, editor, Advances in Neural Information Processing Systems, volume 1, pages 177–185. Morgan Kaufmann, San Mateo, CA, 1989.
K. Hornik, M. Stinchcombe, and H. White. Multilayer feedforward networks are universal approximators. In Halbert White, editor, Artificial Neural Networks: Approximation and Learning Theory, pages 12–28. Blackwell, Oxford, UK, 1992.
A. N. Kolmogorov. On the representation of continuous functions of many variables by superposition of continuous functions of one variable and addition. Doklady Akademii Nauk SSSR, 114:953–956, 1957.
A. V. Levy and A. Montalvo. The tunnelling algorithm for the global minimization of functions. SIAM J. Sci. Stat. Comput., 6:15–29, 1985.
M. Gori and A. Tesi. Some examples of local minima during learning with backpropagation. Parallel Architectures and Neural Networks, 1990.
M. L. Brady, R. Raghavan, and J. Slawny. Back-propagation fails to separate where perceptrons succeed. IEEE Transactions on Circuits and Systems, 36:665–674, 1989.
M. Minsky and S. Papert. Perceptrons: An Introduction to Computational Geometry. MIT Press, Cambridge, MA, expanded edition, 1988.
M. P. Perrone. General averaging results for convex optimization. In M. C. Mozer, editor, Proceedings 1993 Connectionist Models Summer School, pages 364–371, Hillsdale, NJ, 1994. Lawrence Erlbaum.
P. Frasconi, M. Gori, and A. Tesi. Successes and failures of backpropagation: a theoretical investigation. In O. Omidvar, editor, Progress in Neural Networks. Ablex Publishing.
R. A. Jacobs, M. I. Jordan, S. J. Nowlan, and G. E. Hinton. Adaptive mixtures of local experts. Neural Computation, 3:79–87, 1991.
D. E. Rumelhart, G. E. Hinton, and R. J. Williams. Learning internal representations by back-propagating errors. In D. E. Rumelhart and J. L. McClelland, editors, Parallel Distributed Processing: Explorations in the Microstructure of Cognition. MIT Press, Cambridge, MA, 1986.
W. Rudin. Principles of Mathematical Analysis. McGraw-Hill, New York, 1964.
P. D. Wasserman. Neural Computing: Theory and Practice. Van Nostrand Reinhold, 1989.
A. S. Weigend. Time Series Prediction: Forecasting the Future and Understanding the Past. Addison-Wesley, 1993.
P. J. Werbos. Beyond Regression: New Tools for Prediction and Analysis in the Behavioral Sciences. Doctoral Dissertation, Applied Mathematics, Harvard University, Cambridge, MA, November 1974.
P. J. Werbos. Back-propagation: Past and future. In Proceedings of IEEE International Conference on Neural Networks, volume 1, pages 343–353. IEEE Press, New York, 1988.
H. White. Learning in artificial neural networks: A statistical perspective. Neural Computation, 1(4):425–464, 1989.
A. S. Weigend, B. A. Huberman, and D. E. Rumelhart. Predicting sunspots and exchange rates with connectionist networks. In M. Casdagli and S. Eubank, editors, Nonlinear Modeling and Forecasting, SFI Studies in the Sciences of Complexity, volume 12. Addison-Wesley, 1991.
D. H. Wolpert. Stacked generalization. Neural Networks, 5:241–259, 1992.
A. S. Weigend, D. E. Rumelhart, and B. A. Huberman. Back-propagation, weight elimination and time series prediction. In Proceedings of the 1990 Connectionist Models Summer School, pages 65–80. Morgan Kaufmann, 1990.
X. Yao. A review of evolutionary artificial neural networks. International Journal of Intelligent Systems, 8:539–567, 1993.
Copyright information
© 1997 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Hadjiprocopis, A., Smith, P. (1997). Feed Forward Neural Network entities. In: Mira, J., Moreno-Díaz, R., Cabestany, J. (eds) Biological and Artificial Computation: From Neuroscience to Technology. IWANN 1997. Lecture Notes in Computer Science, vol 1240. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0032493
DOI: https://doi.org/10.1007/BFb0032493
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-63047-0
Online ISBN: 978-3-540-69074-0