
Activity Propagation in a Network of Coincidence-Detecting Neurons

Published in Cognitive Computation.

Abstract

This paper presents a formal analytical description of activity propagation in a simple multilayer network of coincidence-detecting neuron models receiving and generating Poisson spike trains. Simulations are also presented. In feedforward networks of coincidence-detecting neurons, the average firing rate decreases layer by layer, until the information disappears. To prevent this, the model assumes that all neurons exhibit self-sustained firing, at a preset rate, initiated by the recognition of local features of the stimulus. Such firing can be interpreted as a form of local short-term memory. Inhibitory feedback signals from higher layers are then included in the model to minimize the duration of sustained firing, while ensuring information propagation. The theory predicts the time-dependent firing probability in successive layers and can be used to fit experimental data. The analyzed multilayer neural network exhibits stochastic propagation of neural activity. Such propagation has interesting features, such as information delocalization, that could explain backward masking. Stochastic propagation is normally observed in simulations of networks of spiking neurons. One of the contributions of this paper is to offer a method for formalizing and quantifying such effects, albeit in a simplified system. The mathematical analysis produces expressions for the latencies in successive layers as a function of the number of inputs of a neuron, the level of sustained firing, and the onset time jitter in the first layer of the network. In this model, latencies are not caused by the neuronal integration time, but by the waiting time before a coincidence of input spikes occurs. Numerical evaluation indicates that retinal jitter may make a major contribution to inter-layer visual latencies. This could be confirmed experimentally.
An interesting feature of the model is its potential to describe, within a single framework, a number of apparently unrelated characteristics of visual information processing, such as latencies, backward masking, synchronization, and temporal pattern of post-stimulus histograms. Due to its simplicity, the model can easily be understood, refined, and extended. This work has its origins in the nineties, but modeling latencies and firing probabilities in realistic biological systems is still an unsolved problem.


Notes

  1. Such a sub-network can be built using the learning rules described in Bugmann [11].

Abbreviations

k: Time (in discrete steps)

k_c(n): Time at which the firing probability of a neuron in layer n reaches 1/2 of the sustained rate p_1

\(\Updelta L_n\): Interlayer latency = k_c(n) − k_c(n − 1)

\(\Updelta L_0\): Time for the firing probability of an input neuron to reach 1/2 of the sustained rate p_1 [formally equal to k_c(0)]

m: Number of neurons in layer n − 1 from which a neuron in layer n receives inputs

n: Layer identification (n = 0: input layer)

p_0: Initial firing probability of input neurons

p_1: Firing probability during sustained firing

P_n(k): Probability that a neuron in layer n produces a spike at time k

P_n(0, k): Probability that a neuron in layer n has produced no spike up to and including time k

P_n(1, k): Probability that a neuron in layer n produces its first spike at time k

P_n(c, k): Probability that a neuron in layer n experiences a coincidence of m input spikes at time k and thus produces an output spike at time k

σ_k: Standard deviation of the starting time of the sustained firing in layer 0

τ: Propagation time delay between two layers, in time steps
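As an illustration of the model summarized by these symbols, the stochastic propagation can be sketched in a short Monte Carlo simulation: layer-0 neurons switch on at jittered onset times and then fire stochastically, and a neuron in layer n > 0 produces its first spike upon a coincidence of its m inputs, after which it sustains firing at rate p_1. All numerical values and the random choice of fan-in below are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

def simulate(layers=3, neurons=300, m=3, p0=0.3, p1=0.5,
             sigma_k=2.0, tau=1, steps=60, seed=0):
    """Monte Carlo sketch of stochastic activity propagation through
    layers of coincidence detectors with self-sustained firing.
    Returns P[n, k], the estimated firing probability per layer and step."""
    rng = np.random.default_rng(seed)
    P = np.zeros((layers + 1, steps))
    # layer 0: jittered onset (std sigma_k), first spike with probability p0,
    # then sustained Bernoulli firing at rate p1 per time step
    onset = np.clip(rng.normal(0.0, sigma_k, neurons), 0, steps - 2).astype(int)
    spikes = np.zeros((neurons, steps), dtype=bool)
    for i in range(neurons):
        k0 = onset[i]
        spikes[i, k0] = rng.random() < p0
        spikes[i, k0 + 1:] = rng.random(steps - k0 - 1) < p1
    P[0] = spikes.mean(axis=0)
    for n in range(1, layers + 1):
        prev, spikes = spikes, np.zeros((neurons, steps), dtype=bool)
        for i in range(neurons):
            inputs = rng.choice(neurons, size=m, replace=False)  # assumed random fan-in
            started = False
            for k in range(tau, steps):
                if started:
                    spikes[i, k] = rng.random() < p1     # self-sustained firing
                elif prev[inputs, k - tau].all():        # coincidence of m input spikes
                    spikes[i, k] = started = True
        P[n] = spikes.mean(axis=0)
    return P
```

Plotting P[n] against k reproduces the qualitative picture of the paper: the half-rise time k_c(n) shifts to later times in each successive layer, while sustained firing keeps the asymptotic rate from decaying to zero.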

References

  1. Ahmed B, Douglas RJ, Martin KA, Nelson C. Polyneural innervation of spiny stellate neurons in cat visual cortex. J Comp Neurol. 1994;341:39–49.

  2. Bair W, Cavanaugh JR, Smith MA, Movshon JA. The timing of response onset and offset in macaque visual neurons. J Neurosci. 2002;22:3189–205.

  3. Bergen JR, Julesz B. Rapid discrimination of visual patterns. IEEE Trans Syst Man Cybern. 1983;SMC-13:857–63.

  4. Best J, Reuss S, Dinse HRO. Lamina-specific differences of visual latencies following photic stimulations in the cat striate cortex. Brain Res. 1986;385:356–60.

  5. Brody CD, Romo R, Kepecs A. Basic mechanisms for graded persistent activity: discrete attractors, continuous attractors, and dynamic representations. Curr Opin Neurobiol. 2003;13(2):204–11.

  6. Boudreau CE, Ferster D. Short-term depression in thalamocortical synapses of cat primary visual cortex. J Neurosci. 2005;25(31):7179–90.

  7. Bugmann G. Summation and multiplication: two distinct operation domains of leaky integrate-and-fire neurons. Network. 1991;2:489–509.

  8. Bugmann G. Multiplying with neurons: compensation of irregular input spike trains by using time-dependent synaptic efficiencies. Biol Cybern. 1992;68:87–92.

  9. Bugmann G. The neuronal computation time. In: Aleksander I, Taylor JG, editors. Artificial neural networks II. Amsterdam: Elsevier; 1992. p. 861–4.

  10. Bugmann G. Binding by synchronisation: a task dependence hypothesis. Behav Brain Sci. 1997;20:685–6.

  11. Bugmann G. Modelling fast stimulus-response association learning along the occipito–parieto–frontal pathway following rule instructions. Brain Res. 2012;1434:73–89.

  12. Bugmann G, Taylor JG. A stochastic short-term memory using a pRAM neuron and its potential applications. In: Beale R, Plumbley MD, editors. Recent advances in neural networks. Prentice Hall; 1993 (in press). Also available as Research Report NRG-93-01, School of Computing, University of Plymouth, Plymouth PL4 8AA, UK.

  13. Bugmann G, Taylor JG. A model for latencies in the visual system. In: Gielen S, Kappen B, editors. Proceedings of the international conference on artificial neural networks (ICANN ’93). Amsterdam: Springer; 1993. p. 165–8.

  14. Bugmann G, Taylor JG. Role of short-term memory in neural information propagation. In: Extended abstract book of the symposium on dynamics of neural processing, Washington, DC, June 6–8; 1994. p. 132–6.

  15. Bugmann G, Taylor JG. A top-down model for neuronal synchronization. Research Report NRG-94-02, School of Computing, University of Plymouth, Plymouth PL4 8AA, UK; 1994.

  16. Bugmann G. Determination of the fraction of active inputs required by a neuron to fire. Biosystems. 2007;89(1–3):154–9.

  17. Bugmann G. A neural architecture for fast learning of stimulus-response associations. In: Proceedings of the IICAI ’09, Tumkur, Bangalore; 2009. p. 828–41.

  18. Bugmann G, Christodoulou C, Taylor JG. Role of temporal integration and fluctuation detection in the highly irregular firing of a leaky integrator neuron model with partial reset. Neural Comput. 1997;9:985–1000.

  19. Burgess N, Hitch GJ. Towards a network model of the articulatory loop. J Mem Lang. 1992;31:429–60.

  20. Burgi P-Y, Pun T. Temporal analysis of contrast and geometrical selectivity in the early visual system. In: Blum P, editor. Channels in the visual nervous system: neurophysiology, psychophysics and models. London: Freund; 1991. p. 273–88.

  21. Celebrini S, Thorpe S, Trotter Y, Imbert M. Dynamics of orientation coding in area V1 of the awake primate. Vis Neurosci. 1993;10:811–25.

  22. Douglas RJ, Martin KAC. A functional microcircuit for cat visual cortex. J Physiol. 1991;440:735–69.

  23. Felsten G, Wasserman GS. Visual masking: mechanisms and theory. Psychol Bull. 1980;88:329–53.

  24. Ferster D, Lindstrom S. An intracellular analysis of geniculo-cortical connectivity in area 17 of the cat. J Physiol. 1983;342:181–215.

  25. Funahashi S, Bruce CJ, Goldman-Rakic P. Mnemonic coding of visual space in the monkey’s dorsolateral prefrontal cortex. J Neurophysiol. 1989;61:331–49.

  26. Fuster JM, Bauer RH, Jervey JP. Effects of cooling inferotemporal cortex on performance of visual memory tasks. Exp Neurol. 1981;71:398–409.

  27. Garey LF, Dreher B, Robinson SR. The organization of the visual thalamus, chap. 3. In: Dreher B, Robinson SR, editors. Neuroanatomy of the visual pathways and their development. Volume 3 of the series “Vision and visual dysfunction”. London: MacMillan; 1991. p. 176–234.

  28. Gorse D, Taylor JG. A general model of stochastic neural processing. Biol Cybern. 1990;63:299–306.

  29. Gorse D, Taylor JG. Hardware realisable training algorithms. In: Proceedings of the international conference on neural networks, Paris, France; 1990. p. 821–4.

  30. Granger R, Ambros-Ingerson J, Staubli U, Lynch G. Memorial operation of multiple, interacting simulated brain structures. In: Gluck MA, Rumelhart D, editors. Neuroscience and connectionist theory. London: Lawrence Erlbaum Associates; 1990. p. 95–129.

  31. Grossberg S. Contour enhancement, short-term memory, and constancy in reverberating neural networks (reprinted in Grossberg S. Studies of mind and brain. Boston: D. Reidel; 1982. p. 334–78).

  32. Houghton G. The problem of serial order: a neural network model for sequence learning and recall. In: Dale R, Mellish C, Zock M, editors. Current research in natural language generation. London: Academic Press; 1990. p. 287–319.

  33. Humphreys GW, Müller HJ. Search via recursive rejection (SERR): a connectionist model of visual search. Cogn Psychol. 1993;25:43–110.

  34. Kawano K, Shidara M, Yamane S. Neural activity in dorsolateral pontine nucleus of alert monkey during ocular following response. J Neurophysiol. 1992;67:680–703.

  35. Kawano K, Shidara M, Watanabe Y, Yamane S. Neural activity in cortical area MST of alert monkey during ocular following responses. J Neurophysiol. 1994;71:2305–24.

  36. Lesica NA, Stanley GB. Encoding of natural scene movies by tonic and burst spikes in the lateral geniculate nucleus. J Neurosci. 2004;24(47):10731–40.

  37. Levick WR. Variation in the response latency of cat retinal ganglion cells. Vis Res. 1973;13:837–53.

  38. Logothetis NK, Pauls J, Poggio T. Shape representation in the inferior temporal cortex of monkeys. Curr Biol. 1995;5:552–63.

  39. Maunsell JHR, Gibson J. Visual response latencies in striate cortex of the macaque monkey. J Neurophysiol. 1992;68:1332–44.

  40. McCormick DA, Shu Y, Hasenstaub A, Sanchez-Vives M, Badoual M, Bal T. Persistent cortical activity: mechanisms of generation and effects on neuronal excitability. Cereb Cortex. 2003;13(11):1219–31.

  41. Oram MW. Contrast induced changes in response latency depend on stimulus specificity. J Physiol Paris. 2010;104:167–75.

  42. Oram MW, Perrett DI. Time course of neural responses discriminating different views of face and head. J Neurophysiol. 1992;68:70–84.

  43. Paulin MG. Digital filter for firing rate estimation. Biol Cybern. 1992;66:525–31.

  44. Reich DS, Mechler F, Victor JD. Temporal coding of contrast in primary visual cortex: when, what, and why. J Neurophysiol. 2001;85:1039–50.

  45. Rolls ET, Tovee MJ. Processing speed in the cerebral cortex and the neurophysiology of visual masking. Proc R Soc Lond B. 1994;257(1348):9–15.

  46. Roy SA, Alloway KD. Coincidence detection or temporal integration? What the neurons in somatosensory cortex are doing. J Neurosci. 2001;21(7):2462–73.

  47. Saito H-A. Hierarchical neural analysis of optical flow in the macaque visual pathway. In: Ono T, Squire LR, Raichle ME, Perret DI, Fukuda M, editors. Brain mechanisms of perception and memory. New York: Oxford University Press; 1993. p. 121–40.

  48. Sherman SM, Koch C. The control of retinogeniculate transmission in the mammalian lateral geniculate nucleus. Exp Brain Res. 1986;63:1–20.

  49. Softky WR, Koch C. The highly irregular firing of cortical cells is inconsistent with temporal integration of random EPSPs. J Neurosci. 1993;13:334–50.

  50. Thomson AM, Deuchars J. Temporal and spatial properties of local circuits in neocortex. Trends Neurosci. 1994;17:119–26.

  51. Thorpe SJ, Imbert M. Biological constraints on connectionist modelling. In: Pfeifer R, et al., editors. Connectionism in perspective. Amsterdam: Elsevier; 1989. p. 63–92.

  52. Tolhurst DJ, Movshon JA, Dean AF. The statistical reliability of signals in single neurons in cat and monkey striate cortex. Vis Res. 1983;23:775–85.

  53. Troyer TW, Miller KD. Physiological gain leads to high ISI variability in a simple model of a cortical regular spiking cell. Neural Comput. 1997;9(5):971–83.

  54. Van der Loos H, Glaser EM. Autapses in neocortex cerebri: synapses between a pyramidal cell’s axon and its own dendrites. Brain Res. 1972;48:355–60.

  55. van Rossum MCW, van der Meer MAA, Xiao D, Oram MW. Adaptive integration in the visual cortex by depressing recurrent cortical circuit. Neural Comput. 2008;20:1847–72.

  56. Verri A, Straforini M, Torre V. Computational aspects of motion perception in natural and artificial vision systems. Philos Trans R Soc Lond. 1992;337:429–43.

  57. Zipser D. Recurrent network model of the neural mechanism of short-term active memory. Neural Comput. 1991;3:179–93.

  58. Zohary E, Hillman P, Hochstein S. Time course of perceptual discrimination and single neuron reliability. Biol Cybern. 1990;62:475–86.


Acknowledgments

This work was supported in part by SERC/EPSRC under Grant GR/H22495.


Corresponding author

Correspondence to Guido Bugmann.

Appendices

Appendix 1

In this appendix, we attempt to find a self-consistent solution of Eq. (17) valid for large k in the form:

$$ P_{n}(k) \approx p_1 + \varepsilon_n(k). $$
(36)

Then, under the assumption that only terms linear in the \(\varepsilon_n\) need to be kept, (17) reduces to

$$ \varepsilon_{n+1}(k) = mp_1^{m-1} \varepsilon_n(k-\tau) + (p_1 - p_1^m)mp_1^{m-1}\sum_{i=0}^{k-2}(1-p_1^m)^i \varepsilon_n(k-i-1-\tau). $$
(37)

We look for a solution of the form

$$ \varepsilon_{n}(k) = A_nX_1^k + B_n(k) Y_1^k, $$
(38)

where \(X_1 = \overline{p_0};\, Y_1=1-p_1^m = xX_1\) and \(x = \frac{1-p_1^m}{\overline{p_0}}.\)

From (4), we know that \(P_0(k) = p_1 [1 - P_0(0,k-1)] + P_0(1,k) = p_1 + (p_0-p_1) \overline{p_0}^{k-1}\), so that

$$ \varepsilon_0(k) = (p_0 - p_1)\overline{p_0}^{k-1}. $$
(39)

Thus, \(A_0= \frac{p_0-p_1}{\overline{p_0}}\) and \(B_0(k) = 0\).

Substitution of (39) into (37) leads to

$$ \begin{aligned} \varepsilon_1(k) &= mp_1^{m-1} \left[\frac{p_0-p_1}{\overline{p_0}}\overline{p_0}^{k-\tau}+(p_1-p_1^m) \sum\limits_{i=0}^{k-2}(1-p_1^m)^i\frac{p_0-p_1}{\overline{p_0}}\overline{p_0}^{k-i-1-\tau}\right] \\ &= mp_1^{m-1} \left[\frac{p_0-p_1}{\overline{p_0}}\overline{p_0}^{k-\tau}+(p_1-p_1^m)\frac{p_0-p_1}{\overline{p_0}}\overline{p_0}^{k-1-\tau} \sum\limits_{i=0}^{k-2}x^i\right] \\ &= mp_1^{m-1} \left[A_0 X_1^k\overline{p_0}^{-\tau}+A_0(p_1-p_1^m)X_1^k\overline{p_0}^{-1-\tau} \left(\frac{1-x^{k-1}}{1-x}\right)\right] \\ &= mp_1^{m-1} \left(A_0\overline{p_0}^{-\tau} + A_0\frac{p_1-p_1^m}{1-x}\overline{p_0}^{-1-\tau}\right)X_1^k - mp_1^{m-1} \left(A_0\left(\frac{p_1-p_1^m}{1-x}\right)\overline{p_0}^{-1-\tau}\frac{1}{x}\right)Y_1^k \\ &= A_1 X_1^k + B_1 Y_1^k, \end{aligned} $$
(40)

with

$$ A_1 = mp_1^{m-1} A_0\left(\overline{p_0}^{-\tau} + \frac{p_1-p_1^m}{1-x}\overline{p_0}^{-1-\tau}\right), $$
(41)

and

$$ B_1 = - mp_1^{m-1} A_0\left(\frac{p_1-p_1^m}{1-x}\right)\frac{\overline{p_0}^{-1-\tau}}{x}. $$
(42)
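As a sanity check on (40)–(42), the linearized recursion (37) can be iterated numerically for n = 0, starting from \(\varepsilon_0\) of (39), and compared with the closed form \(A_1 X_1^k + B_1 Y_1^k\). The parameter values below are illustrative.

```python
import math

# illustrative parameters; any 0 < p0 < p1 < 1 with x != 1 works
p0, p1, m, tau = 0.2, 0.6, 3, 1
X1 = 1 - p0                 # \bar{p0}
Y1 = 1 - p1**m
x = Y1 / X1
A0 = (p0 - p1) / X1         # eps_0(k) = A0 * X1**k, Eq. (39)

def eps0(k):
    return A0 * X1**k

def eps1_recursion(k):
    """Right-hand side of the linearized recursion (37) for n = 0."""
    c = m * p1**(m - 1)
    s = sum((1 - p1**m)**i * eps0(k - i - 1 - tau) for i in range(k - 1))
    return c * eps0(k - tau) + (p1 - p1**m) * c * s

# closed-form coefficients, Eqs. (41) and (42)
c = m * p1**(m - 1)
A1 = c * A0 * (X1**(-tau) + (p1 - p1**m) / (1 - x) * X1**(-1 - tau))
B1 = -c * A0 * ((p1 - p1**m) / (1 - x)) * X1**(-1 - tau) / x

def eps1_closed(k):
    return A1 * X1**k + B1 * Y1**k

for k in range(2, 30):
    assert math.isclose(eps1_recursion(k), eps1_closed(k), rel_tol=1e-9)
```

The agreement is exact up to floating-point rounding, since the step from (37) to (40) only uses the geometric sum.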

This process can be continued iteratively, the dependence of B_n(k) on k appearing only for n ≥ 2. We find

$$ \varepsilon_{2}(k) = A_2X_1^k + B_2(k) Y_1^k, $$
(43)

with

$$ A_2 = mp_1^{m-1} \left[X_1^{-\tau} + \frac{p_1-p_1^m}{1-x}X_1^{-1-\tau}\right]A_1, $$
(44)

and

$$ B_2(k) = mp_1^{m-1} \left[B_1 Y_1^{-\tau} - \frac{p_1-p_1^m}{1-x}\frac{A_1}{x}X_1^{-1-\tau} + (p_1-p_1^m)B_1 Y_1^{-1-\tau} (k-2)\right]. $$
(45)

In general, (38) is verified with

$$ A_{n+1} = mp_1^{m-1} X_1^{-\tau}\left[1 + \frac{p_1-p_1^m}{1-x}X_1^{-1}\right]A_n, $$
(46)

and

$$ B_{n+1}(k) = mp_1^{m-1} \left[B_n(k) Y_1^{-\tau} - \frac{p_1-p_1^m}{1-x}\frac{A_n}{x}X_1^{-1-\tau} + (p_1-p_1^m) Y_1^{-1-\tau}\sum\limits_{i=0}^{k-2}B_n(k-1-i-\tau)\right]. $$
(47)

Inspecting \(B_0, B_1, B_2(k),{\ldots}\) one finds that for n > 0

$$ B_n(k) = \sum\limits_{i=0}^{n-1}\gamma_{i,n} k^i. $$
(48)

Keeping only the term with the highest power of k:

$$ B_n(k) \approx \gamma_{n-1,n} k^{n-1}, $$
(49)

where

$$ \gamma_{n-1,n} = B_1 \left[m p_1^{m-1} (p_1 - p_1^m) Y_1^{-1-\tau}\right]^{n-1}. $$
(50)

So, for large values of k and for n > 0, we obtain the approximation

$$ B_n(k) \approx B_1 \left[m p_1^{m-1} (p_1 - p_1^m) Y_1^{-1-\tau}\right]^{n-1} k^{n-1} $$
(51)
$$ \approx B_1 \beta_1^{n-1} k^{n-1}. $$
(52)

There is a similar but exact expression for A_n for n ≥ 0:

$$ A_n = A_0 \left[m p_1^{m-1} \left(1 + \frac{p_1 - p_1^m}{1-x} X_1^{-1}\right) X_1^{-\tau}\right]^n $$
(53)
$$ = A_0 \alpha(x)^n $$
(54)

so that

$$ \varepsilon_n(k) \approx A_0 \alpha(x)^n \overline{p_0}^k + B_1(x) \beta_1^{n-1} x^k \overline{p_0}^k. $$
(55)

Latencies k c are defined by

$$ P_n(k_c) = \frac{p_1}{2}, $$
(56)

or, from (36)

$$ \varepsilon_n(k_c) = - \frac{p_1}{2}. $$
(57)

To solve Eq. (55) for \(k_c\), it is convenient to distinguish three domains for the values of x:

$$ \begin{aligned} I\quad &x>1\quad (1-p_1^m > \overline{p_0}). \\ II\quad &x<1\quad (1-p_1^m < \overline{p_0}). \\ III\quad &x=1\quad (1-p_1^m = \overline{p_0}). \end{aligned} $$
(58)

Recalling that p_1 > p_0 is required for \(\varepsilon_n(k)\) to be negative, the three domains correspond to the areas marked in Fig. 9. For p_1 < p_0, we would instead observe a decrease in firing rate in post-stimulus histograms.

Domain I

For x > 1 and large values of k_c, the second term on the right-hand side of (55) is dominant:

$$ \varepsilon_n(k_c) \approx B_1(x) \beta_1^{n-1} x^{k_c} \overline{p_0}^{k_c}. $$
(59)

Neglecting the term in \((n-1)\ln(k_c)\), one finds

$$ k_c \approx \frac{\ln(\frac{p_1}{2}) - \ln\left(-B_1(x)\beta_1^{n-1}\right)}{\ln(1-p_1^m)}, $$
(60)

and the interlayer latency difference becomes

$$ \Updelta L_n = k_c(n) - k_c(n-1) $$
(61)
$$ \approx -\frac{\ln(\beta_1)}{\ln(1-p_1^m)} $$
(62)
$$ \approx \tau +1 - \frac{\ln(m)+(m-1)\ln(p_1) + \ln(p_1-p_1^m)}{\ln(1-p_1^m)}. $$
(63)

Assuming further \(p_1^m \ll 1\) and \(\ln(p_1) \approx 0\), one gets

$$ \Updelta L_n \approx \tau +1 +\frac{\ln(m)}{p_1^m}. $$
(64)
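The step from (62) to (63) only expands \(\ln(\beta_1)\), which can be checked numerically. The parameter values below are illustrative.

```python
import math

# illustrative parameters; here we only check the algebraic identity
# between Eq. (62) and Eq. (63), which holds for any 0 < p1 < 1
m, p1, tau = 3, 0.6, 5
Y1 = 1 - p1**m
beta1 = m * p1**(m - 1) * (p1 - p1**m) * Y1**(-1 - tau)

dL_62 = -math.log(beta1) / math.log(Y1)                        # Eq. (62)
dL_63 = tau + 1 - (math.log(m) + (m - 1) * math.log(p1)
                   + math.log(p1 - p1**m)) / math.log(Y1)      # Eq. (63)
assert math.isclose(dL_62, dL_63, rel_tol=1e-9)
print(dL_62)   # interlayer latency in time steps, ~2.4 here
```

Note that the further approximation (64) is only meaningful in its stated regime (\(p_1^m \ll 1\) with \(\ln(p_1) \approx 0\)); outside it, (63) should be used.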

Domain II

For x < 1, the second term on the right-hand side of (55) behaves as \(k_c^{n-1} x^{k_c}\). It goes to zero for \(k_c \rightarrow \infty\) and has a maximum at \(k_m = (n-1)/\ln(1/x)\). Then, for values of \(k_c \gg k_m\), we have

$$ \varepsilon_n(k_c) \approx A_0 \alpha(x)^n \overline{p_0}^{k_c}. $$
(65)

Then,

$$ k_c \approx \frac{\ln\left(\frac{p_1}{2}\right) - \ln(-A_0) - n\ln(\alpha(x))}{\ln(\overline{p_0})}, $$
(66)
$$ \Updelta L_n \approx \tau - \frac{\ln(m)+(m-1)\ln(p_1) + \ln\left(\frac{p_1 -p_0}{p_1^m - p_0}\right)}{\ln(\overline{p_0})}. $$
(67)

In the limit \(p_1 \rightarrow 1,\) (67) reduces to (24).
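The algebra behind (67) — substituting \((1-x)\overline{p_0} = p_1^m - p_0\) into \(\alpha(x)\) from (46) — can also be checked numerically for a domain II parameter set (values illustrative):

```python
import math

# domain II example: x < 1, i.e. p1**m > p0
p0, p1, m, tau = 0.1, 0.7, 2, 1
X1 = 1 - p0
Y1 = 1 - p1**m
x = Y1 / X1
assert x < 1   # confirm we are in domain II

# per-layer factor alpha(x), read off from Eq. (46)
alpha = m * p1**(m - 1) * (1 + (p1 - p1**m) / ((1 - x) * X1)) * X1**(-tau)

dL_from_alpha = -math.log(alpha) / math.log(X1)   # latency step implied by (66)
dL_67 = tau - (math.log(m) + (m - 1) * math.log(p1)
               + math.log((p1 - p0) / (p1**m - p0))) / math.log(X1)  # Eq. (67)
assert math.isclose(dL_from_alpha, dL_67, rel_tol=1e-9)
```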

Domain III

For x = 1 and large k_c, the first term on the right-hand side of (55) is still dominant, and we end up with the same expression as in domain II.

Appendix 2

This appendix describes the simulation of the firing rate of a more realistic neuron model (than a simple coincidence detector).

The simulations used a leaky integrate-and-fire (LIF) neuron as in Bugmann [11], with partial reset to 91 % of the firing threshold. Depressing synapses were used, with parameters inspired by Boudreau and Ferster [6]: the fraction of weight depressed by each input spike was 75 %, and the recovery time constant was 80 ms. The somatic potential decay time constant was \(\tau_{RC} = 20\) ms. These values produced a reasonable fit to their Fig. 9f.
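A minimal sketch of this model, using the parameter values quoted above; the firing threshold, the weight scale `w`, the 1-ms time step, and the exact update order are assumptions not specified in the text:

```python
import numpy as np

def simulate_lif(rate_hz=200.0, m=25, w=0.5, t_ms=1000, dt=1.0, seed=0):
    """LIF neuron with partial reset and depressing synapses (sketch).
    Returns the number of output spikes in t_ms of simulated time."""
    rng = np.random.default_rng(seed)
    theta = 1.0                  # firing threshold (arbitrary units, assumed)
    v_reset = 0.91 * theta       # partial reset to 91 % of threshold
    tau_rc = 20.0                # somatic decay time constant (ms)
    tau_rec = 80.0               # synaptic recovery time constant (ms)
    u = 0.75                     # fraction of weight depressed per input spike
    v = 0.0
    eff = np.full(m, 0.23)       # initial efficacies: 77 % depressed (see text)
    n_out = 0
    p_spike = rate_hz * dt / 1000.0
    for _ in range(int(t_ms / dt)):
        spikes = rng.random(m) < p_spike      # Poisson input spike trains
        v *= np.exp(-dt / tau_rc)             # membrane leak
        v += w * np.sum(eff[spikes])          # depressed EPSPs
        eff[spikes] *= (1.0 - u)              # depress the active synapses
        eff += (1.0 - eff) * dt / tau_rec     # recovery toward full weight
        if v >= theta:
            n_out += 1
            v = v_reset                       # partial reset
    return n_out
```

With a sufficiently large `w`, the neuron fires strongly at stimulus onset and then settles to a lower rate as depression takes hold, which is the qualitative behaviour exploited below.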

Fig. 9 Domains for the parameter x

Initial Conditions

A notable observation in Boudreau and Ferster [6] is that V1 synapses are already depressed to some degree at the start of the stimulus, due to the background activity of LGN cells. To determine what initial value to use, we conducted simulations with inputs firing at their average background rate of 10 Hz. The result was an asymptotic weight value of 23 % of its non-depressed value (77 % depression).

Weights for Maximal Selectivity

We then set the initial weights of a neuron with m = 25 inputs to 23 % of a value w and started the simulation with each input producing Poisson spike trains at 200 Hz. We increased w to find the value w_1 at which the neuron started firing. We then progressively reduced the number of active inputs to find the lowest value m′ for which the neuron still responded. The selectivity S of the neuron was calculated as:

$$ S = 1 - \frac{m-m^{\prime}}{m}. $$
(68)
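Eq. (68) with the figures quoted in this appendix (m = 25 inputs; m′ = 24 at w = w_1, and m′ = 11 inferred from the 44 % figure reported for the low selectivity regime):

```python
def selectivity(m, m_prime):
    """Selectivity S of Eq. (68): S = 1 - (m - m')/m = m'/m."""
    return 1 - (m - m_prime) / m

s_hi = selectivity(25, 24)   # high-selectivity case, ~0.96, matching w = w_1
s_lo = selectivity(25, 11)   # low-selectivity case, ~0.44 of inputs suffice
```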

We then repeated the procedure for input firing rates of 200, 150, 100, 50, 25 Hz.

We found S = 0.96 for w = w_1; the neuron responded only to input rates of 200 Hz. Each run simulated 1 s of real time to determine the average output firing rate.

Low Selectivity Regime

We then set w = 2.2w_1 and repeated the above measurements. Table 1 summarizes the results.

Table 1 Results of simulating a low selectivity neuron

The results show that the weight value selected corresponds to a neuron with very low selectivity, able to start firing when only 44 % of inputs are active. This is plausible for visual neurons [16]. Despite such a low selectivity, the output firing rate of the neuron is always less than half of the input firing rate.

The low average hides a strong frequency peak lasting approximately 30 ms in the 200 Hz case, during which the instantaneous rate is above 500 Hz. This peak is due to the larger initial EPSP sizes (despite the initial depression), to the low selectivity, and to the partial reset, which makes it easier for a spike to be produced immediately after a previous one.


Cite this article

Bugmann, G., Taylor, J.G. Activity Propagation in a Network of Coincidence-Detecting Neurons. Cogn Comput 5, 307–326 (2013). https://doi.org/10.1007/s12559-013-9216-1
