The main aim of this book has been to build neural networks (NNs) modeled on the biological concept of cell bodies; such NNs were called flexible neural networks in this book. We reviewed the fundamental concepts of different neuron models and NN structures, and explained various learning approaches for training NNs. Throughout this book we have seen that a wide variety of problems solved by conventional NNs can also be solved by flexible NNs, with superior learning performance. Each class of problems usually requires a different flexible NN structure, a different training approach, and a different way of storing data in the network, in the form of connection weights and/or sigmoid-function (SF) parameters.

The numerical examples have placed more emphasis on flexible NNs than on conventional NNs, showing that flexible NNs can greatly improve computation speed. As the flexible NN algorithms demonstrate, these networks are trained in a much simpler manner than back-propagation in supervised learning or self-organizing algorithms in unsupervised learning, so the technique serves as an efficient method for solving a variety of technical problems.

Most applications of conventional NNs require at least several tens of SFs in the hidden layers, with the implementation achieved by manipulating the connection weights alone. Moreover, applying conventional NNs to large and complex systems requires large network structures, which leads to long learning times. Flexible NNs overcome most of these shortcomings by using fewer SFs, even for large and complex systems, while achieving better performance. In all of the examined cases, flexible NNs offered improvements over conventional NNs in both simulations and experimental operations.
Thus, we emphasize that implementing flexible NNs provides higher operational quality and higher computational speed than conventional NNs.
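The central idea above, that a flexible NN learns the shape of its sigmoid functions alongside the connection weights, can be illustrated with a minimal sketch. The bipolar sigmoid form, the teacher signal, and the learning rates below are illustrative assumptions, not the book's exact algorithm; central-difference gradients stand in for the analytic SF derivatives the text's training rules would use.

```python
import numpy as np

def flexible_sigmoid(x, a):
    # Bipolar sigmoid whose shape is governed by the SF parameter a;
    # as a -> 0 it tends to the linear map x/2 (one common flexible form,
    # assumed here for illustration).
    return (1.0 - np.exp(-x * a)) / (a * (1.0 + np.exp(-x * a)))

def loss(w, a, x, t):
    # Mean-squared error of a single flexible neuron y = f(w*x, a).
    y = flexible_sigmoid(w * x, a)
    return np.mean((y - t) ** 2)

rng = np.random.default_rng(0)
x = rng.uniform(-2.0, 2.0, size=64)
t = flexible_sigmoid(1.5 * x, 0.8)   # synthetic teacher signal (assumed)

w, a = 0.5, 2.0                      # weight and SF parameter, both trained
lr, eps = 0.2, 1e-5
initial = loss(w, a, x, t)
for _ in range(500):
    # Central-difference gradients keep the sketch short; the book's
    # algorithms would use analytic derivatives of the SF instead.
    gw = (loss(w + eps, a, x, t) - loss(w - eps, a, x, t)) / (2 * eps)
    ga = (loss(w, a + eps, x, t) - loss(w, a - eps, x, t)) / (2 * eps)
    w -= lr * gw
    a -= lr * ga
final = loss(w, a, x, t)
```

Because the SF parameter `a` is itself trainable, a single unit can cover a family of activation shapes, which is why flexible NNs can match conventional networks with far fewer sigmoid functions.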
Keywords: Connection Weight · Intelligent Controller · Unsupervised Learning Method · Learning Architecture · High Computational Speed