
Analog Integrated Circuits and Signal Processing, Volume 33, Issue 3, pp 263–287

Analog VLSI Implementation of Artificial Neural Networks with Supervised On-Chip Learning

  • Maurizio Valle

Abstract

Analog VLSI neural networks with on-chip learning represent a mature technology for a wide range of industrial and consumer applications, particularly where low power consumption, small size and/or very high speed are required. The approach combines the computational features of neural networks, the implementation efficiency of analog VLSI circuits and the adaptation capabilities of the on-chip learning feedback scheme.

Many experimental chips and microelectronic implementations, based on research carried out over the last few years by several groups, have been reported in the literature. The author presents and discusses the motivations, the system and circuit issues, the design methodology and the limitations of this approach. Attention is focused on supervised learning algorithms because of their reliability and popularity within the neural network research community. In particular, the Back Propagation and Weight Perturbation learning algorithms are introduced and reviewed with respect to their analog VLSI implementation.

Finally, the author also reviews and compares the main results reported in the literature, highlighting the efficiency and the reliability of the on-chip implementation of these algorithms.

Keywords: analog VLSI neural networks; on-chip learning; supervised learning
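The Weight Perturbation approach mentioned in the abstract is attractive for analog hardware because it needs only forward evaluations of the network error, not explicit backward signal paths. A minimal software sketch of the idea follows (an illustrative finite-difference version on a toy loss; all names and parameter values are the author's of this note, not circuits or notation from the paper):

```python
def weight_perturbation_step(weights, loss, lr=0.1, delta=1e-3):
    """One weight-perturbation update: perturb each weight in turn,
    measure the change in the error, and descend the estimated gradient."""
    base = loss(weights)
    new_w = list(weights)
    for i in range(len(weights)):
        perturbed = list(weights)
        perturbed[i] += delta
        # finite-difference estimate of dE/dw_i from two error measurements
        grad_i = (loss(perturbed) - base) / delta
        new_w[i] = weights[i] - lr * grad_i
    return new_w

# toy quadratic "network error" with minimum at w = (1, -2)
loss = lambda w: (w[0] - 1.0) ** 2 + (w[1] + 2.0) ** 2

w = [0.0, 0.0]
for _ in range(200):
    w = weight_perturbation_step(w, loss)
```

In an analog implementation the perturbation and error measurement happen on-chip, which is why the algorithm tolerates device mismatch better than an off-line computed gradient: the measured error already includes the hardware's non-idealities.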



Copyright information

© Kluwer Academic Publishers 2002

Authors and Affiliations

  • Maurizio Valle
  1. Department of Biophysical and Electronic Engineering (DIBE), University of Genoa, Italy
