A Convolutional Neural Network Tolerant of Synaptic Faults for Low-Power Analog Hardware

  • Johannes Fieres
  • Karlheinz Meier
  • Johannes Schemmel
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4087)


Recently, the authors described a training method for a convolutional neural network of threshold neurons. The hidden layers are trained by clustering in a feed-forward manner, while the output layer is trained using the supervised Perceptron rule. The system is designed for implementation on an existing low-power analog hardware architecture whose inherent error sources affect the computation accuracy in ways that are not specified beforehand. One key technique is to train the network on-chip, so that possible errors are taken into account without any need to quantify them. For the hidden layers, such an on-chip approach has been applied previously. In the present work, a chip-in-the-loop version of the iterative Perceptron rule is introduced for training the output layer. The influence of various types of errors (noisy, deleted, and clamped weights) is investigated thoroughly for all network layers, using the MNIST database of handwritten digits as a benchmark.
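To illustrate the training scheme summarized above, the following minimal sketch (Python/NumPy, not part of the original paper) shows one way a chip-in-the-loop Perceptron rule for the output layer could be organized: the forward pass is performed by the analog hardware, so its synaptic faults enter the training loop implicitly, while the weight update is computed on the host. The functions `chip_forward` and `make_fault_masks` are hypothetical stand-ins for the hardware interface; the fault model follows the three error types studied in the paper (noisy, deleted, and clamped weights), with all parameter values chosen purely for illustration.

```python
import numpy as np

# A minimal, hypothetical sketch (not the authors' code): chip-in-the-loop training of
# the output layer with the iterative Perceptron rule. The hardware forward pass is
# emulated by `chip_forward`, which applies the three synaptic fault types studied in
# the paper: noisy weights (fresh noise per evaluation) and deleted/clamped weights
# (fixed, chip-specific faults).

def make_fault_masks(shape, rng, p_delete=0.01, p_clamp=0.01):
    """Fixed, chip-specific faults: synapses stuck at zero (deleted) or at a constant (clamped)."""
    deleted = rng.random(shape) < p_delete
    clamped = rng.random(shape) < p_clamp
    return deleted, clamped

def chip_forward(h, weights, faults, rng, noise_std=0.05, clamp_value=1.0):
    """Emulated hardware forward pass: threshold neurons with faulty, noisy synapses."""
    deleted, clamped = faults
    w = weights + rng.normal(0.0, noise_std, size=weights.shape)  # noisy weights
    w[deleted] = 0.0                                              # deleted weights
    w[clamped] = clamp_value                                      # clamped weights
    return (h @ w.T > 0.0).astype(float)                          # binary threshold outputs

def train_output_layer(hidden_batches, target_batches, n_out, lr=0.01, epochs=10, seed=0):
    """Iterative Perceptron rule with the chip in the loop: the update is driven by the
    outputs actually produced by the (faulty) hardware."""
    rng = np.random.default_rng(seed)
    n_hidden = hidden_batches[0].shape[1]
    weights = rng.normal(0.0, 0.1, size=(n_out, n_hidden))
    faults = make_fault_masks(weights.shape, rng)
    for _ in range(epochs):
        for h, t in zip(hidden_batches, target_batches):  # h: (B, n_hidden), t: (B, n_out) in {0, 1}
            y = chip_forward(h, weights, faults, rng)     # outputs measured on the chip
            weights += lr * (t - y).T @ h                 # Perceptron update computed on the host
    return weights
```

Because the update uses the outputs actually measured on the faulty substrate rather than an idealized model, no explicit quantification of the error sources is required, which is the essence of the chip-in-the-loop approach described in the abstract.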


Keywords: Hidden Layer, Output Layer, Training Image, Convolutional Neural Network, Threshold Neuron





Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Johannes Fieres (1)
  • Karlheinz Meier (1)
  • Johannes Schemmel (1)

  1. Ruprecht-Karls University, Heidelberg, Germany
