Fundamentals of Neural Networks

  • Mohammad Teshnehlab
  • Keigo Watanabe
Part of the International Series on Microprocessor-Based and Intelligent Systems Engineering book series (ISCA, volume 19)


Neural networks (NNs), the parallel distributed processing and connectionist models also referred to as artificial neural network (ANN) systems, represent one of the most active research areas in artificial intelligence (AI) and cognitive science today. The main concepts of ANNs are inspired by the human brain. The capabilities of the human brain have always fascinated scientists and led them to investigate its inner workings, and over the past 50 years a number of models have been developed in an attempt to replicate its various functions. At the same time, the development of computers took a completely different direction. As a result, today's computer architectures, operating systems, and programming have very little in common with information processing as performed by the brain. Currently, we are experiencing a reevaluation of the brain's abilities, and models of information processing in the brain have been translated into algorithms and made widely available. The basic building block of these brain models (i.e., ANNs) is an information-processing unit that models a neuron. An artificial neuron of this kind performs only rather simple mathematical operations; its ability to carry out more complex operations derives solely from the way in which large numbers of neurons may be connected to form a network. Since the various neural models replicate different abilities of the brain, they can be used to solve different types of problems, such as the storage and retrieval of information, the modeling of functional relationships, and the representation of large amounts of data. Thus, different kinds of neurons, as functions of the brain model, have been proposed and studied. Promising results have been reported for a number of problems such as pattern recognition, category formation, speech production, addressable memory, and optimization (especially in control theory) [1]–[7]. To model neurons, the major function used in ANNs is a sigmoid-type function (SF).
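As a minimal sketch of the single-neuron building block described above, the following Python example computes a weighted sum of inputs and passes it through a sigmoid-type function. The function and variable names are illustrative assumptions, not taken from the chapter.

```python
import math

def sigmoid(x):
    # Sigmoid-type function (SF): maps any real input to the interval (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def neuron_output(inputs, weights, bias):
    # A single artificial neuron: weighted sum of inputs plus bias,
    # passed through the sigmoid-type activation
    activation = sum(w * x for w, x in zip(weights, inputs)) + bias
    return sigmoid(activation)

# Illustrative call with two inputs and arbitrary example weights
y = neuron_output([0.5, -1.0], [0.8, 0.2], bias=0.1)
```

Each neuron alone performs only this simple operation; as the text notes, more complex behavior emerges only when many such units are connected into a network.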


Keywords: neural network; associative memory; connection weight; recurrent network; bidirectional associative memory




References

[1] T. Kohonen, Self-Organization and Associative Memory, Springer-Verlag, Berlin, West Germany, 1984.
[2] D. E. Rumelhart and J. L. McClelland, Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1, MIT Press, Cambridge, MA, 1986.
[3] D. E. Rumelhart and D. Zipser, "Feature discovery by competitive learning," Cognitive Science, Vol. 9, pp. 75–112, 1985.
[4] D. E. Rumelhart, G. E. Hinton and R. J. Williams, "Learning internal representations by error propagation," in Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1, Edited by D. E. Rumelhart and J. L. McClelland, MIT Press, Cambridge, MA, pp. 318–362, 1986.
[5] J. A. Anderson, "Neural models with cognitive implications," in Basic Processes in Reading, Edited by LaBerge and Samuels, Lawrence Erlbaum Associates, Hillsdale, NJ, pp. 27–90, 1977.
[6] T. J. Sejnowski and C. R. Rosenberg, "NETtalk: a parallel network that learns to read aloud," The Johns Hopkins University Electrical Engineering and Computer Science Technical Report JHU/EECS-86/01, Baltimore, MD, 1986.
[7] J. J. Hopfield and D. W. Tank, "Computing with neural circuits: a model," Science, Vol. 233, pp. 625–632, 1986.
[8] P. D. Wasserman, Neural Computing: Theory and Practice, Van Nostrand Reinhold, 1989.
[9] G. E. Hinton, T. J. Sejnowski and D. H. Ackley, "Boltzmann machines: constraint satisfaction networks that learn," Technical Report CMU-CS-84-119, Carnegie Mellon University, Dept. of Computer Science, 1984.
[10] G. E. Hinton and T. J. Sejnowski, "Learning and relearning in Boltzmann machines," in Parallel Distributed Processing, Vol. 1, Ch. 7, D. E. Rumelhart et al., Eds., MIT Press, Cambridge, MA, 1986.
[11] J. J. Hopfield, "Neural networks and physical systems with emergent collective computational abilities," Proceedings of the National Academy of Sciences, Vol. 79, pp. 2554–2558, 1982.
[12] D. J. Amit, H. Gutfreund and H. Sompolinsky, "Storing infinite numbers of patterns in a spin-glass model of neural networks," Phys. Rev. Lett., Vol. 55, pp. 1530–1533, 1985.
[13] N. Parga and M. A. Virasoro, "The ultrametric organization of memories in a neural network," J. Physique, Vol. 47, pp. 1857–1864, 1986.
[14] S. Shinomoto, "A cognitive and associative memory," Biol. Cybern., Vol. 57, pp. 197–206, 1987.
[15] D. J. Amit, Modeling Brain Function, Cambridge University Press, 1989.
[16] A. F. Murray, D. D. Corso and L. Tarassenko, "Pulse-stream VLSI neural networks mixing analog and digital techniques," IEEE Trans. on Neural Networks, Vol. 2, No. 2, pp. 193–204, 1991.
[17] A. G. Andreou, K. A. Boahen and P. O. Pouliquen, "Current-mode subthreshold MOS circuits for analog VLSI neural systems," IEEE Trans. on Neural Networks, Vol. 2, No. 2, pp. 205–213, 1991.
[18] K. Fukushima, S. Miyake and T. Ito, "Neocognitron: a neural network model for a mechanism of visual pattern recognition," IEEE Trans. on Systems, Man, and Cybernetics, Vol. 13, pp. 826–834, 1983.
[19] T. Fukuda and H. Ishigami, "Recognition and counting method of mammalian cell on micro-carrier using image processing and neural network," Proc. JAACT, p. 84, 1991.
[20] A. Waibel, "Modular construction of time-delay neural networks for speech recognition," Neural Computation, Vol. 1, pp. 39–46, 1989.
[21] D. A. Landgrebe, "Analysis technology for land remote sensing," Proceedings of the IEEE, Vol. 69, No. 5, pp. 628–642, 1981.
[22] A. Khotanzad and J. Lu, "Classification of invariant image representations using a neural network," IEEE Trans. Acoustics, Speech, and Signal Processing, Vol. 38, pp. 1028–1038, 1990.
[23] M. Sugisaka and M. Teshnehlab, "Fast pattern recognition by using moment invariants computation via artificial neural networks," Control Theory and Advanced Technology (C-TAT), Vol. 9, No. 4, pp. 877–886, Dec. 1993.
[24] S. M. Fatemi Aghda, A. Suzuki, M. Teshnehlab, T. Akiyoshi and Y. Kitazono, "Microzoning of liquefaction potential using multilayer artificial neural network classification method," 8th Iranian International Proceeding on Earthquake Prognostics (ISEP), Tehran, Iran, in press.
[25] J. J. Hopfield, "Neurons with graded response have collective computational properties like those of two-state neurons," Proceedings of the National Academy of Sciences, pp. 3088–3092, 1984.
[26] B. Kosko, "Bi-directional associative memories," IEEE Trans. on Systems, Man and Cybernetics, Vol. 18, No. 1, pp. 49–60, 1987a.
[27] B. Kosko, "Competitive adaptive bi-directional associative memories," in Proceedings of the IEEE First International Conference on Neural Networks, Eds. M. Caudill and Butler, Vol. 2, pp. 759–766, San Diego, CA, 1987b.
[28] H. Wakuya and K. Shida, "Proposal of motor control with a bi-directional neural network model," Report of the Faculty of Science and Engineering, Saga University, Vol. 23, No. 2, pp. 23–26, 1995.
[29] D. Parker, "Learning-logic," Technical Report TR-47, Center for Computational Research in Economics and Management Science, MIT, 1985.
[30] Y. Le Cun, "A learning procedure for asymmetric threshold networks," in Proceedings of Cognitiva, Paris, June 1985.
[31] P. Werbos, Beyond Regression: New Tools for Prediction and Analysis in the Behavioral Sciences, Ph.D. Thesis, Harvard University, Cambridge, MA, August 1974.
[32] B. Widrow and M. E. Hoff, "Adaptive switching circuits," Technical Report 1553-1, Stanford Electronics Lab., Stanford, CA, June 1960.
[33] D. Psaltis, A. Sideris and A. Yamamura, "A multilayered neural network controller," IEEE Control Systems Magazine, pp. 17–20, April 1988.
[34] P. Raiskila and H. N. Koivo, "Properties of a neural network controller," ICARV '90 International Conference on Automation, Robotics and Computer Vision, pp. 1–5, 1990.
[35] H. Miyamoto, M. Kawato, T. Setoyama and R. Suzuki, "Feedback-error-learning neural network for trajectory control of a robotic manipulator," Neural Networks, Vol. 1, pp. 251–265, 1988.
[36] M. Kawato, "Computational schemes and neural network models for formation and control of multijoint arm trajectory," in W. T. Miller, R. S. Sutton and P. J. Werbos (Eds.), Neural Networks for Control, MIT Press, 1990.
[37] W. T. Miller III, F. H. Glanz and L. G. Kraft III, "Application of a general learning algorithm to the control of robotic manipulators," Int. J. of Robotics Research, Vol. 6, pp. 84–98, 1987.
[38] M. Kawato, K. Furukawa and R. Suzuki, "A hierarchical neural network model for control and learning of voluntary movement," Biological Cybernetics, Vol. 57, pp. 169–185, 1987.
[39] B. Widrow, "Generalization and information storage in networks of Adaline neurons," in Self-Organizing Systems, ed. M. C. Yovits, G. T. Jacobi and G. Goldstein, Spartan Books, Washington, D.C., pp. 435–461, 1962.

Copyright information

© Springer Science+Business Media Dordrecht 1999

Authors and Affiliations

  1. Mohammad Teshnehlab, Faculty of Electrical Engineering, K.N. Toosi University, Tehran, Iran
  2. Keigo Watanabe, Department of Mechanical Engineering, Saga University, Japan
