Energy Complexity and Entropy of Threshold Circuits

  • Kei Uchizawa
  • Rodney Douglas
  • Wolfgang Maass
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4051)


Circuits composed of threshold gates (McCulloch-Pitts neurons, or perceptrons) are simplified models of neural circuits, with the advantage that they are theoretically more tractable than their biological counterparts. However, when such threshold circuits are designed to perform a specific computational task, they usually differ in one important respect from computations in the brain: they require very high activity. On average, every second threshold gate fires (outputs a “1”) during a computation. By contrast, the activity of neurons in the brain is much sparser, with only about 1% of neurons firing. This mismatch between threshold and neuronal circuits is due to the particular complexity measures (circuit size and circuit depth) that have been minimized in previous threshold circuit constructions. In this article we investigate a new complexity measure for threshold circuits, energy complexity, whose minimization yields computations with sparse activity. We prove that all computations by threshold circuits of polynomial size with entropy O(log n) can be restructured so that their energy complexity is reduced to a level near the entropy of circuit states. This entropy of circuit states is a novel circuit complexity measure that is of interest not only in the context of threshold circuits, but for circuit complexity in general. As an example of how this measure can be applied, we show that any polynomial-size threshold circuit with entropy O(log n) can be simulated by a polynomial-size threshold circuit of depth 3.
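To make the two measures in the abstract concrete, here is a minimal Python sketch (not from the paper) of a depth-2 threshold circuit for XOR. It assumes, for illustration, that energy is taken as the maximum number of gates firing over all inputs, and that the entropy of circuit states is measured as log2 of the number of distinct patterns of gate outputs that occur; the exact definitions in the paper may differ in detail.

```python
import math
from itertools import product

def threshold_gate(weights, theta, inputs):
    """McCulloch-Pitts gate: fires (outputs 1) iff the weighted input sum reaches theta."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= theta else 0

def xor_circuit(x1, x2):
    """A small depth-2 threshold circuit computing XOR.

    Returns the output bit and the circuit state (the tuple of all gate outputs).
    """
    g_or  = threshold_gate([1, 1], 1, [x1, x2])        # fires iff x1 OR x2
    g_and = threshold_gate([1, 1], 2, [x1, x2])        # fires iff x1 AND x2
    out   = threshold_gate([1, -1], 1, [g_or, g_and])  # fires iff g_or AND NOT g_and
    return out, (g_or, g_and, out)

states = [xor_circuit(a, b)[1] for a, b in product([0, 1], repeat=2)]

# Energy: maximum number of gates that fire during any single computation.
energy = max(sum(s) for s in states)

# Entropy of circuit states: log2 of the number of distinct gate-state
# patterns occurring over all inputs (here 3 patterns arise).
entropy = math.log2(len(set(states)))
```

On this toy circuit at most 2 of the 3 gates ever fire at once, so its energy is 2, while 3 distinct circuit states occur, giving entropy log2(3).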







Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Kei Uchizawa, Graduate School of Information Sciences, Tohoku University
  • Rodney Douglas, Institute of Neuroinformatics, University and ETH Zurich
  • Wolfgang Maass, Institute for Theoretical Computer Science, Technische Universitaet Graz
