A Hierarchical Clustering Network Based on a Model of Olfactory Processing

  • P. A. Shoemaker
  • C. G. Hutchens
  • S. B. Patil
Chapter
Part of The Springer International Series in Engineering and Computer Science book series (SECS, volume 191)

Abstract

We describe a direct analog implementation of a neural network model of olfactory processing [44–48]. This model has been shown capable of performing hierarchical clustering as a consequence of a coactivity-based unsupervised learning rule modeled after long-term synaptic potentiation. Network function is statistically based and does not require highly precise weights or other components. We present current-mode circuit designs that implement the required functions in CMOS integrated circuitry, and we propose floating-gate MOS transistors for modifiable, nonvolatile interconnection weights. Methods for arranging these weights into a sparse pseudorandom interconnection matrix, and for implementing the learning rule in parallel, are described. Test results from functional blocks on first silicon are presented. We estimate that a network with upwards of 50K weights and submicrosecond settling times could be built in a conventional double-poly CMOS process on a conventional die size.
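
As a rough functional illustration of the ideas above (and emphatically not the authors' analog circuitry), the following minimal NumPy sketch pairs a sparse pseudorandom binary connectivity mask with a coactivity-based, LTP-like weight update: weights are only potentiated, where presynaptic input and a competitively selected target cell are coactive, and they saturate at a ceiling standing in for the limits of nonvolatile floating-gate storage. All sizes, constants, and the crude winner-take-all stand-in for competition are assumptions made for illustration.

# Illustrative sketch only; an abstraction of the model described in the
# abstract, not the authors' CMOS implementation. All names and
# parameters below are assumptions.
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_OUT = 64, 16   # input lines and target cells (assumed sizes)
DENSITY = 0.25         # fraction of nonzero connections (assumed)
DELTA_W = 0.05         # LTP-like increment per coactivation (assumed)
W_MAX = 1.0            # saturation, standing in for floating-gate limits

# Sparse pseudorandom binary connectivity: a weight exists only where
# the mask is 1, mimicking the sparse interconnection matrix.
mask = (rng.random((N_OUT, N_IN)) < DENSITY).astype(float)
W = 0.1 * mask * rng.random((N_OUT, N_IN))

def step(x, W):
    """One pattern presentation: feedforward summation, competitive
    selection, and a coactivity-based weight increment (all assumed)."""
    y = W @ x                  # current-summation analog of dendritic input
    winner = int(np.argmax(y)) # crude stand-in for competitive selection
    # LTP-like rule: potentiate only where input and the selected cell
    # are coactive; weights grow monotonically and saturate at W_MAX.
    W[winner] = np.minimum(W_MAX, W[winner] + DELTA_W * x * mask[winner])
    return winner

# Present noisy variants of a few binary prototype patterns.
prototypes = (rng.random((4, N_IN)) < 0.3).astype(float)
for t in range(200):
    x = np.clip(prototypes[rng.integers(4)] + (rng.random(N_IN) < 0.05),
                0.0, 1.0)
    step(x, W)

Under these assumptions, repeated presentations drive individual target cells to specialize on clusters of similar inputs; the statistical, coactivity-driven character of the rule is what makes it tolerant of imprecise weights, as the abstract notes.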

References

  1. D.E. Rumelhart, G.E. Hinton, and R.J. Williams, “Learning internal representations by error propagation,” in Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1 (D.E. Rumelhart and J.L. McClelland, eds.), MIT Press: Cambridge, MA, pp. 318–362, 1986.
  2. T. Kohonen, Self-Organization and Associative Memory, 2nd ed., Springer-Verlag: Berlin, 1988.
  3. D. Specht, “Probabilistic neural networks,” Neural Networks, Vol. 3, pp. 109–118, 1990.
  4. C.L. Scofield and D.L. Reilly, “Into silicon: real time learning in a high density RBF neural network,” in Proc. Int. Joint Conf. Neural Networks, Seattle, WA, Vol. 1, pp. 551–556, 1991.
  5. J.J. Hopfield, “Neural networks and physical systems with emergent collective computational abilities,” Proc. Natl. Acad. Sci. USA, Vol. 79, pp. 2554–2558, 1982.
  6. G.E. Hinton and T.J. Sejnowski, “Learning and relearning in Boltzmann machines,” in Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1 (D.E. Rumelhart and J.L. McClelland, eds.), MIT Press: Cambridge, MA, pp. 282–317, 1986.
  7. J. Bailey and D. Hammerstrom, “Why VLSI implementations of associative VLCNs require connection multiplexing,” in Proc. IEEE Int. Conf. Neural Networks, San Diego, CA, Vol. 2, pp. 173–180, 1989.
  8. S. Morton, “An argument for digital neural nets,” Letter to the Editor, Electronics, May 26, 1988, p. 26.
  9. J. Daugman, “Networks for image analysis: motion and texture,” in Proc. Int. Joint Conf. Neural Networks, Washington, DC, Vol. 1, pp. 189–194, 1989.
  10. N. Suga, “Cortical computational maps for auditory imaging,” Neural Networks, Vol. 3, pp. 3–22, 1990.
  11. S.A. Shamma, N. Shen, and P. Gopalaswamy, “Stereausis: binaural processing without neural delays,” J. Acoust. Soc. Amer., Vol. 86, pp. 989–1006, 1989.
  12. M. Holler, S. Tam, H. Castro, and R. Benson, “An electrically trainable artificial neural network (ETANN) with 10240 ‘floating gate’ synapses,” in Proc. Int. Joint Conf. Neural Networks, Washington, DC, Vol. 2, pp. 191–196, 1989.
  13. D.B. Schwartz, R.E. Howard, and W.E. Hubbard, “A programmable analog neural network chip,” IEEE J. Solid-State Circuits, Vol. 24, pp. 313–319, 1989.
  14. L.D. Jackel, H.P. Graf, and R.E. Howard, “Electronic neural network chips,” Appl. Opt., Vol. 26, pp. 5077–5080, 1987.
  15. F.J. Kub, K.K. Moon, I.A. Mach, and F.M. Long, “Programmable analog vector-matrix multipliers,” IEEE J. Solid-State Circuits, Vol. 25, pp. 207–214, 1990.
  16. S.P. Eberhardt, T. Duong, and A.P. Thakoor, “Design of parallel hardware neural network systems from custom analog VLSI ‘building block’ chips,” in Proc. Int. Joint Conf. Neural Networks, Washington, DC, Vol. 2, pp. 183–190, 1989.
  17. G. Cauwenberghs, C.F. Neugebauer, and A. Yariv, “Analysis and verification of an analog VLSI incremental outer-product learning system,” IEEE Trans. Neural Networks, Vol. 3, pp. 488–497, 1992.
  18. J.B. Lont and W. Guggenbuehl, “Analog CMOS implementation of a multilayer perceptron with nonlinear synapses,” IEEE Trans. Neural Networks, Vol. 3, pp. 457–465, 1992.
  19. M.A. Cohen and S. Grossberg, “Absolute stability of global pattern formation and parallel memory storage by competitive neural networks,” IEEE Trans. Syst., Man, Cybernetics, Vol. SMC-13, pp. 815–826, 1983.
  20. J. Alspector, R.B. Allen, V. Hu, and S. Satyanarayana, “Stochastic learning networks and their implementation,” in Proc. IEEE Conf. Neural Information Processing Systems—Natural and Synthetic, Denver, CO (D.Z. Anderson, ed.), American Institute of Physics: New York, pp. 9–21, 1988.
  21. D.W. Tank and J.J. Hopfield, “Simple ‘neural’ optimization networks: an A/D converter, signal decision circuit, and a linear programming circuit,” IEEE Trans. Circuits Syst., Vol. CAS-33, pp. 534–541, 1986.
  22. Y. Arima, K. Mashiko, K. Okada, T. Yamada, A. Maeda, H. Kondoh, and S. Kayano, “A self-learning neural network chip with 125 neurons and 10K self-organization synapses,” IEEE J. Solid-State Circuits, Vol. 26, pp. 607–611, 1991.
  23. C.A. Mead, Analog VLSI and Neural Systems, Addison-Wesley: Reading, MA, 1989.
  24. C.A. Mead and M.A. Mahowald, “A silicon model of early visual processing,” Neural Networks, Vol. 1, pp. 91–97, 1988.
  25. C.A. Mead, X. Arreguit, and J. Lazzaro, “Analog VLSI model of binaural hearing,” IEEE Trans. Neural Networks, Vol. 2, pp. 230–236, 1991.
  26. A. Moore, J. Allman, and R.M. Goodman, “A real-time neural system for color constancy,” IEEE Trans. Neural Networks, Vol. 2, pp. 234–237, 1991.
  27. C.A. Mead, “Neuromorphic electronic systems,” Proc. IEEE, Vol. 78, pp. 1629–1636, 1990.
  28. S.P. DeWeerth and C.A. Mead, “An analog VLSI model of adaptation in the vestibulo-ocular reflex,” in Advances in Neural Information Processing Systems 2 (D. Touretzky, ed.), Morgan Kaufmann: San Mateo, CA, pp. 742–749, 1990.
  29. M. Mahowald and R. Douglas, “A silicon neuron,” Nature, Vol. 354, pp. 515–518, 1991.
  30. P. Mueller, J. van der Spiegel, D. Blackman, T. Chiu, T. Clare, J. Dao, C. Donham, T. Hsieh, and M. Loinaz, “A general purpose analog neurocomputer,” in Proc. Int. Joint Conf. Neural Networks, Washington, DC, Vol. 2, pp. 177–182, 1989.
  31. R.L. Shimabukuro and P.A. Shoemaker, “Circuitry for artificial neural networks with nonvolatile analog memories,” in Proc. IEEE Int. Symp. Circuits Syst., pp. 1217–1220, 1989.
  32. V. Hu, A. Kramer, and P.K. Ko, “EEPROMs as analog storage devices for neural nets,” First Annual Meeting, INNS, Boston. Abstract appears in Neural Networks, Vol. 1, Supp. 1, p. 385, 1988.
  33. H.C. Card and W.R. Moore, “Silicon models of associative learning in Aplysia,” Neural Networks, Vol. 3, pp. 333–346, 1990.
  34. B.W. Lee, B.J. Sheu, and H. Yang, “Analog floating-gate synapses for general-purpose VLSI neural computation,” IEEE Trans. Circuits Syst., Vol. 38, pp. 654–658, 1991.
  35. D.A. Durfee and F.S. Shoucair, “Comparison of floating gate neural network memory cells in standard VLSI technology,” IEEE Trans. Neural Networks, Vol. 3, pp. 347–353, 1992.
  36. J.L. Meador, A. Wu, C. Cole, N. Nintunze, and P. Chintrakulchai, “Programmable impulse neural circuits,” IEEE Trans. Neural Networks, Vol. 2, pp. 101–109, 1991.
  37. S.M. Sze, Physics of Semiconductor Devices, Wiley: New York, 1981.
  38. P.A. Shoemaker, M.J. Carlin, and R.L. Shimabukuro, “Back-propagation learning with trinary quantization of weight updates,” Neural Networks, Vol. 4, pp. 231–241, 1991.
  39. C. Peterson and E. Hartman, “Explorations of the mean field theory learning algorithm,” Neural Networks, Vol. 2, pp. 475–494, 1989.
  40. D.O. Hebb, The Organization of Behavior, Wiley: New York, 1949.
  41. T.V.P. Bliss and A. Gardner-Medwin, “Long-lasting potentiation of synaptic transmission in the dentate area of the unanaesthetized rabbit following stimulation of the perforant path,” J. Physiology (London), Vol. 232, pp. 357–374, 1973.
  42. R.J. Racine, N.W. Milgram, and S. Hafner, “Long-term potentiation phenomena in the rat limbic forebrain,” Brain Research, Vol. 260, pp. 217–231, 1983.
  43. G. Lynch, Synapses, Circuits, and the Beginnings of Memory, MIT Press: Cambridge, MA, 1986.
  44. J. Ambros-Ingerson, R. Granger, and G. Lynch, “Simulation of paleocortex performs hierarchical clustering,” Science, Vol. 247, pp. 1344–1348, 1990.
  45. J. Ambros-Ingerson, “Computational properties and behavioral expression of cortical-peripheral interactions suggested by a model of the olfactory bulb and cortex,” Ph.D. Dissertation, University of California, Irvine, 1990.
  46. R. Granger, J.A. Ambros-Ingerson, P. Anton, and G. Lynch, “Unsupervised perceptual learning: a paleocortical model,” in Connectionist Modeling and Brain Function, Chap. 5 (S. Hanson and C. Olsen, eds.), MIT Press: Cambridge, MA, 1990.
  47. R. Granger, J.A. Ambros-Ingerson, and G. Lynch, “Derivation of encoding characteristics of layer II cerebral cortex,” J. Cognitive Neurosci., Vol. 1, pp. 61–87, 1989.
  48. G. Lynch and R. Granger, “Simulation and analysis of a simple cortical network,” Psychol. Learning Motivation, Vol. 23, pp. 205–241, 1989.
  49. J. Lazzaro, S. Ryckebusch, M.A. Mahowald, and C.A. Mead, “Winner-take-all networks of O(N) complexity,” California Institute of Technology, Technical Report Caltech-CS-TR-21-88, 1989.
  50. D. Hammerstrom and E. Means, “System design for a second generation neurocomputer,” in Proc. Int. Joint Conf. Neural Networks, Washington, DC, Vol. 2, pp. 80–83, 1990.
  51. A.S. Sedra and G.W. Roberts, “Current conveyor theory and practice,” in Analogue IC Design: The Current-Mode Approach, Chap. 3 (C. Toumazou, F.J. Lidgey, and D.G. Haigh, eds.), Peregrinus: London, 1990.
  52. O. Fujita, Y. Amemiya, and A. Iwata, “Characteristics of floating gate device as an analogue memory for neural networks,” Electron. Lett., Vol. 27, pp. 924–926, 1991.

Copyright information

© Springer Science+Business Media Dordrecht 1992

Authors and Affiliations

  • P. A. Shoemaker (1)
  • C. G. Hutchens (1)
  • S. B. Patil (2)
  1. Naval Command, Control and Ocean Surveillance Center, RDT&E Division, San Diego, USA
  2. Electrical and Computer Engineering Department, Oklahoma State University, Stillwater, USA
