Abstract
Cortical neurons are massively connected with other cortical and subcortical cells, and they receive synaptic inputs from multiple sources. To explore the basis of how interconnected cortical cells are locally activated by such inputs, we theoretically analyze the local excitation patterns elicited by external input stimuli by using a one-dimensional neural field model. We examine the conditions for the existence and stability of the local excitation solutions under arbitrary time-invariant inputs and establish a graphic analysis method that can detect all steady local excitation solutions and examine their stability. We apply this method to a case where a pair of supra- and subthreshold stimuli are applied to nearby positions in the field. The results demonstrate that there can exist bistable local excitation solutions with different lengths and that the local excitation exhibits hysteretic behavior when the relative distance between the two stimuli is altered.
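For readers who want to experiment, an Amari-type field \( \tau \, \partial u/\partial t = -u + \int w(x-y)H[u(y)]\,{\rm d}y + S(x) \) can be integrated numerically. The sketch below is our illustration only: the lateral-inhibition kernel, parameters, and Gaussian input are arbitrary assumptions, not the values analyzed in the paper.

```python
import numpy as np

def mexican_hat(d, A=3.0, a=1.0, B=1.5, b=3.0):
    # Lateral-inhibition connectivity: local excitation, broader inhibition.
    return A * np.exp(-(d / a) ** 2) - B * np.exp(-(d / b) ** 2)

def simulate(S, L=20.0, n=256, tau=1.0, T=30.0, dt=0.02):
    """Euler-integrate tau du/dt = -u + (w * H[u])(x) + S(x)."""
    x = np.linspace(-L / 2, L / 2, n, endpoint=False)
    dx = x[1] - x[0]
    W = mexican_hat(x[:, None] - x[None, :])   # discretized kernel
    u = np.full(n, -0.1)                       # start just below threshold 0
    s = S(x)
    for _ in range(int(T / dt)):
        fire = (u > 0).astype(float)           # Heaviside output nonlinearity
        u += dt * (-u + W @ fire * dx + s) / tau
    return x, u

# A suprathreshold Gaussian input elicits a bounded local excitation.
x, u = simulate(lambda x: 1.2 * np.exp(-x ** 2))
excited = x[u > 0]
print(excited.min(), excited.max())   # extent of the excited region
```

Because the kernel's total weight is net inhibitory, the excited region settles to a finite length rather than spreading across the field, which is the kind of local excitation the analysis characterizes.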
Acknowledgments
This study was partially supported by the Advanced and Innovational Research Program in Life Sciences; a Grant-in-Aid for Scientific Research on Priority Areas, "System study on higher-order brain functions" (17022012); and a Grant-in-Aid for Young Scientists (B) (KAKENHI 19700281) from the Ministry of Education, Culture, Sports, Science, and Technology of Japan. K. Hamaguchi is supported by a JSPS Research Fellowship.
Appendices
Appendix A: Proof of Theorem 2
(1) When \( S_{x1}^{*} > S_{x2}^{*} \) and \( w(a^{*} )\left( {S_{x1}^{*} - S_{x2}^{*} } \right) + S_{x1}^{*} S_{x2}^{*} < 0 \) hold, we find C > 0 in (10) by using \( u_{x1}^{*} > 0 \) and \( u_{x2}^{*} < 0 \). Differentiating (3) with respect to x yields
$$ \frac{{{\text{d}}\bar{u}(x)}}{{{\text{d}}x}} = w(x - x_{1}^{*} ) - w(x - x_{2}^{*} ) + \frac{{{\text{d}}S(x)}}{{{\text{d}}x}}. $$(A.1)Then, by substituting \( x = x_{1}^{*} \) and \( x = x_{2}^{*} \), we obtain
$$ u_{x1}^{*} = w(0) - w(a^{*} ) + S_{x1}^{*} , $$(A.2)$$u_{x2}^{*} = - w(0) + w(a^{*} ) + S_{x2}^{*} . $$(A.3)From these equations, (9) can be transformed into
$$ \begin{aligned} B &= \frac{1}{{\tau u_{x1}^{*} u_{x2}^{*} }}\left[ {w(0)\left\{ {2w(a^{*} ) - S_{x1}^{*} + S_{x2}^{*} } \right\} - 2w(a^{*} )^{2} + 2\left\{ {(S_{x1}^{*} - S_{x2}^{*} )w(a^{*} ) + S_{x1}^{*} S_{x2}^{*} } \right\}} \right] \\ &= \frac{1}{{\tau u_{x1}^{*} u_{x2}^{*} }}\left[ {w(0)\left\{ {2w(a^{*} ) - S_{x1}^{*} + S_{x2}^{*} } \right\} - 2w(a^{*} )^{2} } \right] + 2\tau C. \end{aligned}$$(A.4)Hence, by using the relation
$$ 2w(a^{*} ) - S_{x1}^{*} + S_{x2}^{*} < - \frac{{2S_{x1}^{*} S_{x2}^{*} }}{{S_{x1}^{*} - S_{x2}^{*} }} - S_{x1}^{*} + S_{x2}^{*} = - \frac{{S_{x1}^{*2} + S_{x2}^{*2} }}{{S_{x1}^{*} - S_{x2}^{*} }} < 0 $$(A.5)and the property of the connectivity w(0) > 0, we find B > 0. Therefore, both coefficients B and C of the characteristic function are positive, so that the differential equation (8) is stable.
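As a sanity check on the algebra leading to (A.5), the identity and the sign condition can be verified symbolically. The SymPy check below is our illustration, not part of the original proof; the numeric sample values are chosen arbitrarily to satisfy the case (1) hypotheses.

```python
import sympy as sp

S1, S2, wa = sp.symbols('S1 S2 wa', real=True)

# Identity used in (A.5): bring the two terms over a common denominator.
lhs = -2 * S1 * S2 / (S1 - S2) - S1 + S2
rhs = -(S1**2 + S2**2) / (S1 - S2)
assert sp.simplify(lhs - rhs) == 0

# Sign check at a sample point satisfying the case (1) hypotheses
# S1 > S2 and wa*(S1 - S2) + S1*S2 < 0 (values chosen arbitrarily):
sample = {S1: 1, S2: -2, wa: sp.Rational(1, 2)}
assert (wa * (S1 - S2) + S1 * S2).subs(sample) < 0
assert (2 * wa - S1 + S2).subs(sample) < 0  # consistent with (A.5)
```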
(2) When \( w(a^{*} )\left( {S_{x1}^{*} - S_{x2}^{*} } \right) + S_{x1}^{*} S_{x2}^{*} > 0 \) holds, we find C < 0 from (10) by using \( u_{x1}^{*} > 0 \) and \( u_{x2}^{*} < 0 \). Thus, the system is unstable.
Furthermore, when \( S_{x1}^{*} < S_{x2}^{*} \) and \( w(a^{*} )\left( {S_{x1}^{*} - S_{x2}^{*} } \right) + S_{x1}^{*} S_{x2}^{*} \le 0 \) hold, we find
$$ w(a^{*} ) \ge - \frac{{S_{x1}^{*} S_{x2}^{*} }}{{S_{x1}^{*} - S_{x2}^{*} }}. $$(A.6)
Hence,
$$ 2w(a^{*} ) - S_{x1}^{*} + S_{x2}^{*} \ge - \frac{{2S_{x1}^{*} S_{x2}^{*} }}{{S_{x1}^{*} - S_{x2}^{*} }} - S_{x1}^{*} + S_{x2}^{*} = - \frac{{S_{x1}^{*2} + S_{x2}^{*2} }}{{S_{x1}^{*} - S_{x2}^{*} }} > 0. $$(A.7)
Note that at least one of \( S_{x1}^{*} \) and \( S_{x2}^{*} \) must take a value different from 0 because \( S_{x1}^{*} < S_{x2}^{*} \). Therefore, we obtain B < 0 from (9), so that the system is unstable.
Appendix B: The method for plotting the \( a - \hat{S} \) curve
In this appendix, we show the method for plotting the \( a - \hat{S} \) curve from the function S(x) according to Definition 1. First, we give some definitions.
Definition 2
We say that f(x) is a monotone increasing (decreasing) function if \( f(x_{1}) < f(x_{2}) \) (\( f(x_{1}) > f(x_{2}) \)) for any \( x_{1}, x_{2} \) with \( x_{1} < x_{2} \) in its domain. We refer to a monotone increasing or decreasing function as a monotone function. We also say that f(x) is a constant function if \( f(x_{1}) = f(x_{2}) \) for any \( x_{1} \) and \( x_{2} \) in its domain.
Definition 3
We define \( S_{i}(x) \) (i = 1,…, N) to be functions that satisfy the following three conditions, where a finite interval \( [d_{i}, d_{i+1}] \) is the domain of \( S_{i}(x) \). We refer to \( S_{i}(x) \) as a subfunction of S(x).
(1) \( S_{i}(x) = S(x) \) for all x in its domain \( [d_{i}, d_{i+1}] \),
(2) \( S_{i}(x) \) is either a monotone or a constant function,
(3) the domain of the neural field \( [x_{\min}, x_{\max}] \) is covered by the domains of the subfunctions, i.e., \( d_{1} = x_{\min} \) and \( d_{N+1} = x_{\max} \).
Definition 3 means the division of S(x) into N subfunctions \( S_{i}(x) \) (i = 1,…, N) such that each subfunction is either a monotone or a constant function. \( S_{1}(x) \)–\( S_{7}(x) \) in Fig. 6a show an example of the subfunctions for N = 7, corresponding to S(x) in Fig. 3a.
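In practice, such a division can be computed from a sampled S(x) by splitting at the sign changes of its finite differences. The sketch below is our illustration under the assumption that S(x) is available as an array; the two-bump input is an arbitrary example, not the paper's stimulus.

```python
import numpy as np

def split_into_subfunctions(x, S, tol=1e-12):
    """Partition sampled S(x) wherever the sign of its finite difference
    changes, so each piece is monotone increasing, monotone decreasing,
    or constant (Definition 3). Returns the breakpoints d_1, ..., d_{N+1}."""
    diffs = np.diff(S)
    sign = np.sign(np.where(np.abs(diffs) < tol, 0.0, diffs))
    breaks = [x[0]]
    for k in range(1, len(sign)):
        if sign[k] != sign[k - 1]:   # monotonicity type changes here
            breaks.append(x[k])
    breaks.append(x[-1])
    return breaks

x = np.linspace(0, 10, 1001)
S = np.exp(-(x - 3) ** 2) + 0.5 * np.exp(-(x - 7) ** 2)  # two input bumps
breaks = split_into_subfunctions(x, S)
# Breakpoints fall near the two bump peaks and the valley between them,
# giving four monotone subfunctions for this S(x).
print(breaks)
```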
Definition 4
We define \( \Upgamma_{ij} \) (i = 1,…, N, j = 1,…, N) to be the set of points \( (a,\hat{S}) \) such that there exist \( x_{1} \) and \( x_{2} \) satisfying the following relations:
$$ (1) \quad x_{1} \in D_{i} ,\quad x_{2} \in D_{j} , $$(B.1)
$$ (2) \quad x_{1} < x_{2} , $$(B.2)
$$ (3) \quad S_{i} (x_{1} ) = S_{j} (x_{2} ) = \hat{S}, $$(B.3)
$$ (4) \quad a = x_{2} - x_{1} , $$(B.4)
where \( D_{i} \equiv [d_{i}, d_{i+1}] \) is the domain of the subfunction \( S_{i}(x) \).
From Definitions 1 and 4, we find that \( \Upgamma_{ij} \) is a subset of the \( a - \hat{S} \) curve and that the \( a - \hat{S} \) curve is described as \( \bigcup\nolimits_{i,j} {\Upgamma_{ij} } .\)
Definition 5
Let \( S_{Li}^{-1}(S) \) and \( S_{Hi}^{-1}(S) \) be functions such that
$$ S_{Li}^{ - 1} (S) = \left\{ \begin{array}{ll} S_{i}^{ - 1} (S) & (S_{i} {\text{ is a monotone function}}) \\ d_{i} & (S_{i} {\text{ is a constant function}}) \end{array} \right. $$(B.5)$$ S_{Hi}^{ - 1} (S) = \left\{ \begin{array}{ll} S_{i}^{ - 1} (S) & (S_{i} {\text{ is a monotone function}}) \\ d_{i + 1} & (S_{i} {\text{ is a constant function}}) \end{array} \right. $$(B.6)
where \( S_{i}^{-1}(S) \) denotes the inverse function of \( S_{i}(x) \).
From the above definitions, we have the following theorem, which gives an explicit description of \( \Upgamma_{ij} \).
Theorem 5
Let \( R_{i} \) be the range of a subfunction \( S_{i}(x) \). Then, \( \Upgamma_{ij} \) is described as follows:
(1) When both \( S_{i} \) and \( S_{j} \) (i < j) are monotone functions and \( R_{i} \cap R_{j} \ne \phi \),
$$ \Upgamma_{ij} = \left\{ {(a,\hat{S})|a = S_{j}^{ - 1} (\hat{S}) - S_{i}^{ - 1} (\hat{S}),\,a > 0,\,\hat{S} \in R_{i} \cap R_{j} } \right\}, $$(B.7)
(2) When \( S_{i} \) and/or \( S_{j} \) (i ≤ j) are constant functions and \( R_{i} \cap R_{j} \ne \phi \),
$$ \Upgamma_{ij} = \left\{ {(a,\hat{S})|a \in [S_{Lj}^{ - 1} (S_{c} ) - S_{Hi}^{ - 1} (S_{c} ),\,S_{Hj}^{ - 1} (S_{c} ) - S_{Li}^{ - 1} (S_{c} )],\quad a > 0,\,\hat{S} = S_{c} } \right\}, $$(B.8)where \( S_{c} \) is defined such that \( \{S_{c}\} = R_{i} \cap R_{j} \),
(3) Otherwise, \( \Upgamma_{ij} = \phi \).
Proof of Theorem 5
We can prove \( \Upgamma_{ij} = \phi \) from Definition 4 in the case of (1) i > j, (2) \( R_{i} \cap R_{j} = \phi \), and (3) \( S_{i} \) is a monotone function and i = j. Thus, by excluding these cases, we consider the following five cases:
Case A: Both \( S_{i} \) and \( S_{j} \) are monotone functions, i < j, and \( R_{i} \cap R_{j} \ne \phi \),
Case B: \( S_{i} \) is a monotone function, \( S_{j} \) is a constant function, i < j, and \( R_{i} \cap R_{j} \ne \phi \),
Case C: \( S_{i} \) is a constant function, \( S_{j} \) is a monotone function, i < j, and \( R_{i} \cap R_{j} \ne \phi \),
Case D: \( S_{i} \) is a constant function and i = j,
Case E: Both \( S_{i} \) and \( S_{j} \) are constant functions, i < j, and \( R_{i} \cap R_{j} \ne \phi \).
Then, for each case, \( \Upgamma_{ij} \) is given by the following lemma. (The proof of the lemma is given later.)
Lemma 1
Let \( S_{ci} \) be the value of a subfunction \( S_{i}(x) \) when \( S_{i}(x) \) is a constant function. Then, for each case, \( \Upgamma_{ij} \) is described as follows:
From Lemma 1, we can see that \( \Upgamma_{ij} \) for Case A is the same as (B.7) and that \( \Upgamma_{ij} \) for Cases B–E is summarized as (B.8) by using the notations \( S_{Li}^{-1} \) and \( S_{Hi}^{-1} \) of Definition 5, so that we obtain Theorem 5. □
The proof of Lemma 1 is given as follows.
Proof of Lemma 1
Since the proofs of Cases A–E are similar, we show only the proof of Case A and omit those of the other cases.
If \( (a,\hat{S}) \in \Upgamma_{ij} \), there exist \( x_{1} \) and \( x_{2} \) that satisfy (B.1)–(B.4) by the definition of \( \Upgamma_{ij} \). We find \( \hat{S} \in R_{i} \cap R_{j} \) from (B.3) and \( a = S_{j}^{ - 1} (\hat{S}) - S_{i}^{ - 1} (\hat{S}) \) from (B.3) and (B.4). a > 0 also holds from (B.2) and (B.4), so that
$$ a = S_{j}^{ - 1} (\hat{S}) - S_{i}^{ - 1} (\hat{S}),\quad a > 0,\quad \hat{S} \in R_{i} \cap R_{j} . $$(B.14)
Conversely, if (B.14) holds, we can obtain (B.1)–(B.4) by setting \( x_{1} = S_{i}^{ - 1} (\hat{S}) \) and \( x_{2} = S_{j}^{ - 1} (\hat{S}) \). Thus, we have \( (a,\hat{S}) \in \Upgamma_{ij} \). □
Since the \( a - \hat{S} \) curve is \( \bigcup\nolimits_{i,j} {\Upgamma_{ij} } \) as stated above, we can draw the \( a - \hat{S} \) curve by plotting the points \( (a,\hat{S}) \in \Upgamma_{ij} \) for every i, j with \( \Upgamma_{ij} \ne \phi \) by using Theorem 5. Figure 6b shows how the \( a - \hat{S} \) curve is composed of the \( \Upgamma_{ij} \), where each curve \( \Upgamma_{ij} \) has been obtained from the \( S_{i}(x) \) and \( S_{j}(x) \) depicted in Fig. 6a.
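As a numerical companion to this plotting procedure, the sketch below evaluates \( \Upgamma_{ij} \) via (B.7) for two monotone subfunctions by interpolated inversion. It is our illustration under the assumption that the subfunctions are available as sampled arrays; the Gaussian-bump example is arbitrary.

```python
import numpy as np

def gamma_ij(x_i, S_i, x_j, S_j, n=200):
    """Points (a, S_hat) of Gamma_ij for two monotone sampled subfunctions
    S_i on x_i and S_j on x_j (x_i to the left of x_j), following (B.7)."""
    lo = max(S_i.min(), S_j.min())   # lower edge of R_i ∩ R_j
    hi = min(S_i.max(), S_j.max())   # upper edge of R_i ∩ R_j
    if lo >= hi:
        return np.empty((0, 2))      # empty intersection: Gamma_ij = ϕ
    s = np.linspace(lo, hi, n)

    def inv(xs, Ss):
        # np.interp needs increasing abscissae, so flip decreasing pieces.
        if Ss[0] < Ss[-1]:
            return np.interp(s, Ss, xs)
        return np.interp(s, Ss[::-1], xs[::-1])

    a = inv(x_j, S_j) - inv(x_i, S_i)
    pts = np.column_stack([a, s])
    return pts[pts[:, 0] > 0]        # keep only a > 0

# Rising and falling flanks of one Gaussian input bump centered at x = 3:
xi = np.linspace(0, 3, 301); Si = np.exp(-(xi - 3) ** 2)
xj = np.linspace(3, 6, 301); Sj = np.exp(-(xj - 3) ** 2)
curve = gamma_ij(xi, Si, xj, Sj)
# For this symmetric bump, x1 and x2 sit symmetrically about the peak,
# so a shrinks to 0 as S_hat approaches the bump maximum.
```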
Appendix C: The method for finding solutions of the steady condition 1
Here we show how the solutions of the steady condition 1 can be obtained when the intersections of the \( a - \hat{S} \) curve with Y(a) are given. All the definitions in Appendix B (Definitions 2–5) are also used in this appendix.
As mentioned in Appendix B, the \( a - \hat{S} \) curve is written as \( \bigcup\nolimits_{i,j} {\Upgamma_{ij} } \) by using the sets \( \Upgamma_{ij} \) of Definition 4. Therefore, when a point \( (a,\hat{S}) \) lies on the \( a - \hat{S} \) curve, \( (a,\hat{S}) \in \Upgamma_{ij} \) holds for some pair i, j, and there exist \( x_{1} \) and \( x_{2} \) satisfying (B.1)–(B.4). We denote the set of such pairs \( (x_{1}, x_{2}) \) by \( \Uptheta_{ij} [a,\hat{S}] \). The following theorem gives the explicit description of \( \Uptheta_{ij} [a,\hat{S}] \).
Theorem 6
\( \Uptheta_{ij} [a,\hat{S}] \) is given as follows:
(1) When \( S_{i} \) and/or \( S_{j} \) are monotone functions,
$$ \Uptheta_{ij} [a,\hat{S}] = \left\{ {(x_{1} ,x_{2} )|x_{1} = \max \left( {S_{Li}^{ - 1} (\hat{S}),\,S_{Lj}^{ - 1} (\hat{S}) - a} \right),\,x_{2} = x_{1} + a} \right\}, $$(C.1)
(2) When both \( S_{i} \) and \( S_{j} \) are constant functions,
$$ \Uptheta_{ij} [a,\hat{S}] = \left\{ {(x_{1} ,x_{2} )|x_{1} \in \left[ {\max \left( {S_{Li}^{ - 1} (\hat{S}),\,S_{Lj}^{ - 1} (\hat{S}) - a} \right),\,\min \left( {S_{Hi}^{ - 1} (\hat{S}),\,S_{Hj}^{ - 1} (\hat{S}) - a} \right)} \right],\,x_{2} = x_{1} + a} \right\}, $$(C.2)where \( S_{i} \) and \( S_{j} \) are the subfunctions defined in Definition 3, and \( S_{Li}^{-1} \) and \( S_{Hi}^{-1} \) are the functions defined in Definition 5.
Proof of Theorem 6
We present the following lemma. (The proof of this lemma is shown later.)
Lemma 2
Consider the same classification into Cases A–E as in the proof of Theorem 5 in Appendix B; then \( \Uptheta_{ij} [a,\hat{S}] \) for each case is given as follows:
where \( [d_{i}, d_{i+1}] \) is the domain of the subfunction \( S_{i}(x) \) in Definition 3.
If we use the notations \( S_{Li}^{-1} \) and \( S_{Hi}^{-1} \) of Definition 5, Cases A–E in Lemma 2 can be summarized as Theorem 6. □
The proof of Lemma 2 is given as follows.
Proof of Lemma 2
Since the proofs of Cases A–E are similar, we show only the proof of Case A and omit those of the other cases. From \( (a,\hat{S}) \) \( \in \Upgamma_{ij} \) and Lemma 1, we have
If \( (x_{1} ,x_{2} ) \in \Uptheta_{ij} [a,\hat{S}] \), we find \( x_{1} = S_{i}^{ - 1} (\hat{S}) \) and \( x_{2} = S_{j}^{ - 1} (\hat{S}) \) from (B.3). By using (C.8), we have \( x_{2} = x_{1} + a \), so that
Conversely, if (C.11) holds, we can prove that (B.1)–(B.4) also hold by using (C.8) and (C.9). Thus, we obtain \( (x_{1} ,x_{2} ) \in \Uptheta_{ij} [a,\hat{S}]. \) □
Consider an intersection point \( (a_{k}^{*}, S_{k}^{*}) \) of the \( a - \hat{S} \) curve with Y(a), as in Step 1 of the graphic analysis method. Let us define \( i_{k} \) and \( j_{k} \) to be the integers satisfying \( (a_{k}^{*}, S_{k}^{*}) \in \Upgamma_{{i_{k} j_{k} }} \), and set \( \Uptheta_{k} = \Uptheta_{{i_{k} j_{k} }} [a_{k}^{*} ,S_{k}^{*} ] \). Then, from the definition of \( \Uptheta_{ij} [a,\hat{S}] \), we find \( a_{k}^{*} = x_{2,k}^{*} - x_{1,k}^{*} \) and \( S(x_{1,k}^{*}) = S(x_{2,k}^{*}) = S_{k}^{*} \) for \( (x_{1,k}^{*}, x_{2,k}^{*}) \in \Uptheta_{k} \). Since the following corollary holds by Theorem 3, the pairs \( \left( {x_{1}^{*} ,x_{2}^{*} } \right) \in \bigcup\nolimits_{k} {\Uptheta_{k} } \) are the solutions of the steady condition 1 in Theorem 1.
Corollary 1
The steady condition 1 holds for \( x_{1}^{*} \), \( x_{2}^{*} \) if and only if
$$ (x_{1}^{*} ,x_{2}^{*} ) \in \bigcup\nolimits_{k} {\Uptheta_{k} } . $$(C.12)
Proof of Corollary 1
If the steady condition 1 holds for \( x_{1}^{*} \), \( x_{2}^{*} \) (\( x_{1}^{*} < x_{2}^{*} \)), the point \( (a^{*}, S^{*}) \) with \( a^{*} = x_{2}^{*} - x_{1}^{*} \) and \( S^{*} = S(x_{1}^{*}) = S(x_{2}^{*}) \) is an intersection of the \( a - \hat{S} \) curve with Y(a) by Theorem 3.
We find \( S_{i} (x_{1}^{*} ) = S_{j} (x_{2}^{*} ) = S^{*} \) with i, j satisfying \( x_{1}^{*} \in D_{i} \) and \( x_{2}^{*} \in D_{j} \), where \( S_{i} \) and \( D_{i} \) (i = 1,…, N) denote the subfunction and its domain in Definition 3. Hence, \( (a^{*}, S^{*}) \in \Upgamma_{ij} \) holds from Definition 4. Since \( a^{*} = a_{k}^{*} \), \( S^{*} = S_{k}^{*} \), \( i = i_{k} \), and \( j = j_{k} \) hold for some k, we can obtain
$$ a_{k}^{*} = x_{2}^{*} - x_{1}^{*} , $$(C.13)$$ S_{{i_{k} }} (x_{1}^{*} ) = S_{{j_{k} }} (x_{2}^{*} ) = S_{k}^{*} . $$(C.14)
Therefore, we can find \( (x_{1}^{*} ,x_{2}^{*} ) \in \Uptheta_{k} \).
Conversely, if \( (x_{1}^{*} ,x_{2}^{*} ) \in \Uptheta_{k} \) holds for some k, (C.13) and (C.14) hold. Since \( (a_{k}^{*} ,S_{k}^{*} ) \) lies on an intersection of the \( a - \hat{S} \) curve with Y(a), we obtain the steady condition 1 from Theorem 3. □
Since the elements of \( \Uptheta_{k} = \Uptheta_{{i_{k} j_{k} }} [a_{k}^{*} ,S_{k}^{*} ] \) can be obtained by using Theorem 6 for each k, we can explicitly find the solutions of the steady condition 1. Note that Theorem 6 indicates that, if both \( S_{{i_{k} }} \) and \( S_{{j_{k} }} \) are constant functions, \( \Uptheta_{k} \) contains an infinite number of elements. In this case, we need to select elements \( (x_{1,k}^{*} ,x_{2,k}^{*} ) \in \Uptheta_{k} \) at the required accuracy in order to plot \( G[x;x_{1,k}^{*} ,x_{2,k}^{*} ] \) in Step 2 of the graphic analysis method.
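For the monotone case (C.1), recovering \( (x_{1}, x_{2}) \) from a given intersection is a direct evaluation. The sketch below is our illustration; the closed-form Gaussian-bump inverses are assumed example inputs, not taken from the paper.

```python
import math

def theta_monotone(inv_i, inv_j, a_star, S_star):
    # (C.1): x1 = max(S_Li^{-1}(S*), S_Lj^{-1}(S*) - a*), x2 = x1 + a*.
    # With both subfunctions monotone the two expressions coincide.
    x1 = max(inv_i(S_star), inv_j(S_star) - a_star)
    return x1, x1 + a_star

# Analytic inverses of the two flanks of S(x) = exp(-(x - 3)^2):
inv_left = lambda s: 3.0 - math.sqrt(-math.log(s))    # rising flank
inv_right = lambda s: 3.0 + math.sqrt(-math.log(s))   # falling flank

x1, x2 = theta_monotone(inv_left, inv_right,
                        a_star=2.0, S_star=math.exp(-1.0))
# x1, x2 ≈ 2, 4: the two threshold crossings, symmetric about the peak.
print(x1, x2)
```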
Kubota, S., Hamaguchi, K. & Aihara, K. Local excitation solutions in one-dimensional neural fields by external input stimuli. Neural Comput & Applic 18, 591–602 (2009). https://doi.org/10.1007/s00521-009-0246-2