Flexible Neural Networks

  • Mohammad Teshnehlab
  • Keigo Watanabe
Part of the International Series on Microprocessor-Based and Intelligent Systems Engineering book series (ISCA, volume 19)


The application of ANNs has been a subject of extensive study over the past four decades. Several types of NNs can be used in control systems, as discussed in Chapter 2: the multi-layered feedforward network, Kohonen's self-organizing map [1], the Hopfield network [2], the Boltzmann machine [3], etc. These NNs are inspired by the biological nervous system. The layered structure of parts of the brain, and the multilayer (rather than single-layer) arrangement of neurons in biological systems, form the main motivation for mimicking the biological neural system in order to obtain more capable learning algorithms.
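The "flexible" networks of this chapter adapt the shape of the unit activation functions together with the connection weights (see [21]–[25]). As an illustration only — the authors' exact flexible sigmoid is not reproduced here — the sketch below uses a generic bipolar sigmoid tanh(a·z) whose slope parameter a is trained by the same gradient-descent rule as the weights:

```python
import math
import random

def flexible_sigmoid(z, a):
    # Bipolar sigmoid with a trainable slope `a` (a generic stand-in;
    # the chapter's exact flexible unit function may differ).
    return math.tanh(a * z)

def grad_z(z, a):
    t = math.tanh(a * z)
    return a * (1.0 - t * t)   # d f / d z

def grad_a(z, a):
    t = math.tanh(a * z)
    return z * (1.0 - t * t)   # d f / d a

# Toy task: fit y = tanh(2x) with one unit y_hat = f(w*x + b; a),
# adapting the connection weights (w, b) AND the sigmoid shape (a).
random.seed(0)
data = [(x / 10.0, math.tanh(2.0 * x / 10.0)) for x in range(-20, 21)]
w, b, a = random.uniform(-0.5, 0.5), 0.0, 1.0
lr = 0.1

def mse():
    return sum((flexible_sigmoid(w * x + b, a) - y) ** 2
               for x, y in data) / len(data)

loss_before = mse()
for _ in range(500):                     # plain per-pattern gradient descent
    for x, y in data:
        z = w * x + b
        err = flexible_sigmoid(z, a) - y
        w -= lr * err * grad_z(z, a) * x
        b -= lr * err * grad_z(z, a)
        a -= lr * err * grad_a(z, a)
loss_after = mse()
print(f"loss: {loss_before:.4f} -> {loss_after:.6f}")
```

Because the target's slope (2) need not be absorbed entirely by the weight w, letting the unit learn its own slope a gives the network an extra degree of freedom — the flexibility the chapter refers to.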


Keywords: Sigmoid Function · Connection Weight · Boltzmann Machine · Teaching Signal · Hopfield Network




References

1. T. Kohonen, Self-Organization and Associative Memory, Springer-Verlag, Berlin, 1984.
2. J. J. Hopfield, "Neural networks and physical systems with emergent collective computational abilities," Proceedings of the National Academy of Sciences, Vol. 79, pp. 2554–2558, 1982.
3. G. E. Hinton, T. J. Sejnowski and D. H. Ackley, "Boltzmann Machines: Constraint Satisfaction Networks that Learn," Technical Report CMU-CS-84-119, Carnegie-Mellon University, Dept. of Computer Science, 1984.
4. D. E. Rumelhart and D. Zipser, "Feature discovery by competitive learning," Cognitive Science, Vol. 9, pp. 75–112, 1985.
5. D. E. Rumelhart, G. E. Hinton and R. J. Williams, "Learning internal representations by error propagation," in Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1, edited by D. E. Rumelhart and J. L. McClelland, MIT Press, Cambridge, MA, pp. 318–362, 1986.
6. T. Yamada and T. Yabuta, "Neural network controller using autotuning method for nonlinear functions," IEEE Trans. on Neural Networks, Vol. 3, No. 4, pp. 595–601, July 1992.
7. J. A. Anderson, "Neural models with cognitive implications," in Basic Processes in Reading, edited by LaBerge and Samuels, Lawrence Erlbaum Associates, Hillsdale, NJ, pp. 27–90, 1977.
8. T. J. Sejnowski and C. R. Rosenberg, "NETtalk: a parallel network that learns to read aloud," The Johns Hopkins University Electrical Engineering and Computer Science Technical Report JHU/EECS-86/01, Baltimore, MD, 1986.
9. D. J. Amit, H. Gutfreund and H. Sompolinsky, "Storing infinite numbers of patterns in a spin-glass model of neural networks," Phys. Rev. Lett., Vol. 55, pp. 1530–1533, 1985.
10. N. Parga and M. A. Virasoro, "The ultrametric organization of memories in a neural network," J. Physique, Vol. 47, pp. 1857–1864, 1986.
11. S. Shinomoto, "A cognitive and associative memory," Biol. Cybern., Vol. 57, pp. 197–206, 1987.
12. D. Psaltis, A. Sideris and Yamamura, "A multilayered neural network controller," IEEE Control Systems Magazine, pp. 17–20, April 1988.
13. A. F. Murray, D. D. Corso and L. Tarassenko, "Pulse-stream VLSI neural networks mixing analog and digital techniques," IEEE Trans. on Neural Networks, Vol. 2, No. 2, pp. 193–204, 1991.
14. K. Fukushima, S. Miyake and T. Ito, "Neocognitron: a neural network model for a mechanism of visual pattern recognition," IEEE Trans. on Systems, Man, and Cybernetics, Vol. SMC-13, pp. 826–834, 1983.
15. T. Fukuda and H. Ishigami, "Recognition and counting method of mammalian cell on micro-carrier using image processing and neural network," Proc. JAACT, p. 84, 1991.
16. M. Kawato, K. Furukawa and R. Suzuki, "A hierarchical neural network model for control and learning of voluntary movement," Biological Cybernetics, Vol. 57, pp. 169–185, 1987.
17. A. Khotanzad and J. Lu, "Classification of invariant image representations using a neural network," IEEE Trans. Acoustics, Speech, and Signal Processing, Vol. ASSP-38, pp. 1028–1038, 1990.
18. M. Sugisaka and M. Teshnehlab, "Fast pattern recognition by using moment invariants computation via artificial neural networks," Control Theory and Advanced Technology (C-TAT), Vol. 9, No. 4, pp. 877–886, Dec. 1993.
19. S. M. Fatemi Aghda, A. Suzuki, M. Teshnehlab, T. Akiyoshi and Y. Kitazono, "Microzoning of liquefaction potential using multilayer artificial neural network classification method," Proc. 8th Iranian International Symposium on Earthquake Prognostics (ISEP), Tehran, Iran, 1993.
20. B. Kosko, "Bi-directional associative memories," IEEE Trans. on Systems, Man and Cybernetics, Vol. 18, No. 1, pp. 49–60, 1987.
21. M. Teshnehlab and K. Watanabe, "The high flexibility and learning capability of neural networks with learning bipolar and unipolar sigmoid functions," Proc. Japan-U.S.A. Symposium on Flexible Automation, Vol. 3, pp. 1453–1460, Kobe, 1994.
22. M. Teshnehlab and K. Watanabe, "Flexible structural learning control of a robotic manipulator using artificial neural networks," JSME International Journal, Vol. 13, pp. 1–21, 1995.
23. M. Teshnehlab and K. Watanabe, "Neural networks-based self-tuning controller using the learning of sigmoid functions," IEEE/Nagoya University World Wisemen/Women Workshop (WWW), pp. 31–38, Oct. 1993.
24. M. Teshnehlab and K. Watanabe, "Self-tuning of computed torque gains by using neural networks with flexible structure," IEE Proceedings-D, Vol. 141, No. 4, pp. 235–242, July 1994.
25. M. Teshnehlab and K. Watanabe, "A feedback-error-learning by using flexible star network," Proc. First Asian Control Conference, Vol. 3, pp. 475–478, Tokyo, 1994.

Copyright information

© Springer Science+Business Media Dordrecht 1999

Authors and Affiliations

  • Mohammad Teshnehlab, Faculty of Electrical Engineering, K.N. Toosi University, Tehran, Iran
  • Keigo Watanabe, Department of Mechanical Engineering, Saga University, Japan
