Conclusions

  • Mohammad Teshnehlab
  • Keigo Watanabe
Chapter
Part of the International Series on Microprocessor-Based and Intelligent Systems Engineering book series (ISCA, volume 19)

Abstract

The main aim of this book has been to build NNs as models based on the biological concept of cell bodies; in this book such NNs have been called flexible neural networks. We have reviewed the fundamental concepts of different neuron models and NN structures, and have also explained different learning approaches for training NNs. Throughout the book we have seen that a wide variety of problems solved by conventional NNs can also be solved by flexible NNs, with superior learning performance. We have shown that each class of problems usually requires a different flexible NN structure and a different training approach, as well as a different way of storing data in the network in the form of connection weights and/or sigmoid-function (SF) parameters. Numerical examples have been given, with emphasis on flexible rather than conventional NNs, showing that flexible NNs can greatly improve computation speed. As the flexible NN algorithms make clear, these networks are trained in a very simple manner compared with back-propagation algorithms in supervised learning or self-organizing algorithms in unsupervised learning, so the technique can serve as an efficient method for solving a variety of technical problems. Most applications of conventional NNs require at least several tens of SFs in the hidden layers, and the NN implementation is achieved by manipulating the connection weights; moreover, applications of conventional NNs to large and complex systems require large NN structures, which lead to long learning times. In this regard, flexible NNs overcome most of these shortcomings by using fewer SFs, even for large and complex systems, while achieving better performance. In all of the examined cases, flexible NNs offered improvements over conventional NNs in both simulations and experimental operations.
Thus, we emphasize that the implementation of flexible NNs provides higher operational quality and higher computational speed than conventional ones.
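The core idea above — training an SF shape parameter alongside the connection weights by gradient descent — can be sketched in a few lines. This is a minimal illustrative example, not the book's exact algorithm: it assumes the flexible bipolar sigmoid f(x, a) = tanh(ax)/a (one common choice, near-linear as a → 0), a single flexible unit, and made-up target parameters w* = 1.5, a* = 2.0.

```python
import math
import random

def f(x, a):
    """Flexible bipolar sigmoid tanh(a*x)/a; a is the learnable SF parameter."""
    return math.tanh(a * x) / a

def df_dx(x, a):
    # d/dx [tanh(ax)/a] = sech^2(ax) = 1 - tanh^2(ax)
    return 1.0 - math.tanh(a * x) ** 2

def df_da(x, a):
    # d/da [tanh(ax)/a] = x*sech^2(ax)/a - tanh(ax)/a^2
    t = math.tanh(a * x)
    return x * (1.0 - t * t) / a - t / (a * a)

random.seed(0)
# Synthetic data from a "true" flexible unit (hypothetical values for the demo).
xs = [random.uniform(-2.0, 2.0) for _ in range(50)]
ts = [f(1.5 * x, 2.0) for x in xs]

w, a, lr = 0.5, 1.0, 0.05  # initial weight, initial SF parameter, learning rate

def mse():
    return sum((f(w * x, a) - t) ** 2 for x, t in zip(xs, ts)) / len(xs)

err0 = mse()
for _ in range(500):
    for x, t in zip(xs, ts):
        e = f(w * x, a) - t
        w -= lr * e * df_dx(w * x, a) * x   # weight update, as in back-propagation
        a -= lr * e * df_da(w * x, a)       # SF-parameter update, the "flexible" step
err1 = mse()
print(err1 < err0)
```

Both updates follow from the same squared-error gradient; the only addition over a fixed-sigmoid network is the extra partial derivative with respect to the shape parameter a.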

Keywords

Connection Weight · Intelligent Controller · Unsupervised Learning Method · Learning Architecture · High Computational Speed
Copyright information

© Springer Science+Business Media Dordrecht 1999

Authors and Affiliations

  • Mohammad Teshnehlab (1)
  • Keigo Watanabe (2)
  1. Faculty of Electrical Engineering, K.N. Toosi University, Tehran, Iran
  2. Department of Mechanical Engineering, Saga University, Japan