Abstract

Neural networks are systems that typically consist of a large number of simple processing units, called neurons. A neuron generally has a high-dimensional input vector and a single output signal; this output is usually a non-linear function of the input vector and a weight vector. The function performed on the input vectors is hence defined by the neuron's non-linear function and its weight vector.
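The computation described above can be sketched in a few lines of Python. This is a minimal illustration, not the chapter's implementation: the logistic sigmoid is assumed here as the non-linear function, and the example weights and bias are chosen only to show how the weight vector alone determines the function the neuron computes.

```python
import math

def neuron(x, w, b=0.0):
    """A single neuron: the non-linear function (here a logistic
    sigmoid) applied to the weighted sum of the input vector."""
    s = sum(wi * xi for wi, xi in zip(w, x)) + b  # inner product of weights and inputs
    return 1.0 / (1.0 + math.exp(-s))             # non-linear activation

# With suitably chosen weights this same neuron approximates a
# logical AND of two binary inputs (illustrative values):
w, b = [10.0, 10.0], -15.0
print(neuron([1.0, 1.0], w, b))  # close to 1
print(neuron([0.0, 1.0], w, b))  # close to 0
```

Changing only `w` and `b` changes the function realized, which is the sense in which the weight vector defines the neuron's behaviour.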

Keywords

Neural Network; Input Vector; Weight Adaptation; Analog Neural Network; Neural Network Research



Copyright information

© Springer Science+Business Media New York 1995

Authors and Affiliations

Anne-Johan Annema, MESA Research Institute, University of Twente, The Netherlands
