Local excitation solutions in one-dimensional neural fields by external input stimuli

  • Original Article
  • Published in: Neural Computing and Applications

Abstract

Cortical neurons are massively connected with other cortical and subcortical cells, and they receive synaptic inputs from multiple sources. To explore the basis of how interconnected cortical cells are locally activated by such inputs, we theoretically analyze the local excitation patterns elicited by external input stimuli by using a one-dimensional neural field model. We examine the conditions for the existence and stability of the local excitation solutions under arbitrary time-invariant inputs and establish a graphic analysis method that can detect all steady local excitation solutions and examine their stability. We apply this method to a case where a pair of supra- and subthreshold stimuli are applied to nearby positions in the field. The results demonstrate that there can exist bistable local excitation solutions with different lengths and that the local excitation exhibits hysteretic behavior when the relative distance between the two stimuli is altered.
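For orientation, the dynamics analyzed here are those of an Amari-type one-dimensional field with a Heaviside output nonlinearity under a time-invariant input S(x). The following is a minimal numerical sketch of such a field responding to a localized suprathreshold stimulus; the Mexican-hat kernel, the parameter values, and the discretization are illustrative assumptions, not the values used in the paper.

```python
import numpy as np

# Illustrative parameters -- not the values used in the paper.
L, dx, dt, tau = 10.0, 0.1, 0.05, 1.0
x = np.arange(-L, L + dx, dx)

def w(d):
    """Mexican-hat connectivity: local excitation, lateral inhibition."""
    return np.exp(-d**2) - 0.5 * np.exp(-d**2 / 4)

kernel = w(x)                      # kernel sampled on the same symmetric grid
S = 0.5 * np.exp(-x**2)            # localized suprathreshold stimulus at the origin
h = -0.1                           # negative resting level

u = np.full_like(x, h)             # start the field at rest
for _ in range(400):
    fire = (u > 0).astype(float)   # Heaviside output nonlinearity
    recur = np.convolve(fire, kernel, mode="same") * dx  # recurrent input
    u += (dt / tau) * (-u + recur + S + h)

excited = u > 0                    # the local excitation region
```

With these (assumed) parameters the field settles into a localized excited region around the stimulus rather than spreading over the whole field, which is the kind of steady local excitation whose existence and stability the paper characterizes.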



Acknowledgments

This study was partially supported by the Advanced and Innovational Research Program in Life Sciences, the Grant-in-Aid for Scientific Research on Priority Areas—System study on higher-order brain functions—(17022012), and the Grant-in-Aid for Scientific Research (KAKENHI (19700281), Young Scientists (B)) from the Ministry of Education, Culture, Sports, Science, and Technology, the Japanese Government. K. Hamaguchi is supported by a JSPS Research Fellowship.

Author information


Correspondence to Shigeru Kubota.

Appendices

Appendix A: Proof of Theorem 2

  1. When \( S_{x1}^{*} > S_{x2}^{*} \) and \( w(a^{*} )\left( {S_{x1}^{*} - S_{x2}^{*} } \right) + S_{x1}^{*} S_{x2}^{*} < 0 \) hold, we find \( C > 0 \) in (10) by using \( u_{x1}^{*} > 0 \) and \( u_{x2}^{*} < 0 \). Differentiating (3) with respect to \( x \) yields

    $$ \frac{{{\text{d}}\bar{u}(x)}}{{{\text{d}}x}} = w(x - x_{1}^{*} ) - w(x - x_{2}^{*} ) + \frac{{{\text{d}}S(x)}}{{{\text{d}}x}}. $$
    (A.1)

    Then, by substituting \( x_{1}^{*} \) and \( x_{2}^{*} \), we obtain

    $$ u_{x1}^{*} = w(0) - w(a^{*} ) + S_{x1}^{*} , $$
    (A.2)
    $$u_{x2}^{*} = - w(0) + w(a^{*} ) + S_{x2}^{*} . $$
    (A.3)

    From these equations, (9) can be transformed into

    $$ \begin{aligned} B &= \frac{1}{{\tau u_{x1}^{*} u_{x2}^{*} }}\left[ {w(0)\left\{ {2w(a^{*} ) - S_{x1}^{*} + S_{x2}^{*} } \right\} - 2w(a^{*} )^{2} + 2\left\{ {(S_{x1}^{*} - S_{x2}^{*} )w(a^{*} ) + S_{x1}^{*} S_{x2}^{*} } \right\}} \right] \\ &= \frac{1}{{\tau u_{x1}^{*} u_{x2}^{*} }}\left[ {w(0)\left\{ {2w(a^{*} ) - S_{x1}^{*} + S_{x2}^{*} } \right\} - 2w(a^{*} )^{2} } \right] + 2\tau C. \end{aligned}$$
    (A.4)

    Hence, by using the relation

    $$ 2w(a^{*} ) - S_{x1}^{*} + S_{x2}^{*} < - \frac{{2S_{x1}^{*} S_{x2}^{*} }}{{S_{x1}^{*} - S_{x2}^{*} }} - S_{x1}^{*} + S_{x2}^{*} = - \frac{{S_{x1}^{*2} + S_{x2}^{*2} }}{{S_{x1}^{*} - S_{x2}^{*} }} < 0 $$
    (A.5)

    and the property of the connectivity \( w(0) > 0 \), we can find \( B > 0 \). Therefore, both the coefficients B and C of the characteristic function are positive, so that the differential equation (8) is stable.

  2. When \( w(a^{*} )\left( {S_{x1}^{*} - S_{x2}^{*} } \right) + S_{x1}^{*} S_{x2}^{*} > 0 \) holds, we find \( C < 0 \) from (10) by using \( u_{x1}^{*} > 0 \) and \( u_{x2}^{*} < 0 \). Thus, the system is unstable.

    Furthermore, when \( S_{x1}^{*} < S_{x2}^{*} \) and \( w(a^{*} )\left( {S_{x1}^{*} - S_{x2}^{*} } \right) + S_{x1}^{*} S_{x2}^{*} \le 0 \) hold, we find

    $$ w(a^{*} ) \ge - \frac{{S_{x1}^{*} S_{x2}^{*} }}{{S_{x1}^{*} - S_{x2}^{*} }}. $$
    (A.6)

Hence,

$$ w(a^{*} ) - S_{x1}^{*} \ge - \frac{{S_{x1}^{*} S_{x2}^{*} }}{{S_{x1}^{*} - S_{x2}^{*} }} - S_{x1}^{*} = - \frac{{S_{x1}^{*2} }}{{S_{x1}^{*} - S_{x2}^{*} }} \ge 0, $$
(A.7)
$$ w(a^{*} ) + S_{x2}^{*} \ge - \frac{{S_{x1}^{*} S_{x2}^{*} }}{{S_{x1}^{*} - S_{x2}^{*} }} + S_{x2}^{*} = - \frac{{S_{x2}^{*2} }}{{S_{x1}^{*} - S_{x2}^{*} }} \ge 0. $$
(A.8)

Note that at least either \( S_{x1}^{*} \) or \( S_{x2}^{*} \) must take a value different from 0 because of \( S_{x1}^{*} < S_{x2}^{*} \). Therefore, we obtain \( B < 0 \) from (9), so that the system is unstable.
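The sign conditions of this appendix can be checked numerically from the boundary slopes. The sketch below evaluates \( u_{x1}^{*} \) and \( u_{x2}^{*} \) via (A.2)-(A.3) and the coefficients B and C via (A.4); since (10) is not reproduced in this appendix, the expression for C is reconstructed from (A.4) and should be treated as an assumption, and the numerical inputs are purely illustrative.

```python
def stability(w0, wa, s1, s2, tau=1.0):
    """Classify a steady local excitation by the sign conditions of Theorem 2:
    stable iff S*_x1 > S*_x2 and w(a*)(S*_x1 - S*_x2) + S*_x1 S*_x2 < 0.
    Here w0 = w(0), wa = w(a*), s1 = S*_x1, s2 = S*_x2.
    Returns (stable, B, C), the coefficients of the characteristic function."""
    ux1 = w0 - wa + s1              # (A.2): u*_x1 = w(0) - w(a*) + S*_x1
    ux2 = -w0 + wa + s2             # (A.3): u*_x2 = -w(0) + w(a*) + S*_x2
    assert ux1 > 0 and ux2 < 0      # slope signs assumed throughout the proof
    # C reconstructed from (A.4) -- assumption, since (10) is not shown here:
    C = (wa * (s1 - s2) + s1 * s2) / (tau**2 * ux1 * ux2)
    # (A.4): B in terms of w(0), w(a*), the slopes, and C
    B = (w0 * (2 * wa - s1 + s2) - 2 * wa**2) / (tau * ux1 * ux2) + 2 * tau * C
    stable = s1 > s2 and wa * (s1 - s2) + s1 * s2 < 0
    return stable, B, C
```

For instance, w0 = 1, wa = -0.2, s1 = 0.3, s2 = -0.3 satisfies both conditions of case (1), and the routine indeed returns positive B and C; flipping wa to +0.2 makes C negative, as in case (2).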

Appendix B: The method for plotting the \( a - \hat{S} \) curve

In this appendix, we show the method for plotting the \( a - \hat{S} \) curve from the function S(x) according to Definition 1. First, we give some definitions.

Definition 2

We say that f(x) is a monotone increasing (decreasing) function if \( f(x_{1} ) < f(x_{2} ) \) (\( f(x_{1} ) > f(x_{2} ) \)) for any \( x_{1} \), \( x_{2} \) with \( x_{1} < x_{2} \) in its domain. We refer to a monotone increasing or decreasing function as a monotone function. We also say that f(x) is a constant function if \( f(x_{1} ) = f(x_{2} ) \) for any \( x_{1} \) and \( x_{2} \) in its domain.

Definition 3

We define \( S_{i} (x) \) (\( i = 1, \ldots ,N \)) to be functions that satisfy the following three conditions, where a finite interval \( [d_{i} ,d_{i + 1} ] \) is the domain of \( S_{i} (x) \). We refer to the function \( S_{i} (x) \) as a subfunction of S(x).

  1. (1)

    \( S_{i} (x) = S(x) \) for all \( x \in [d_{i} ,d_{i + 1} ] \),

  2. (2)

    S i (x) is either a monotone or constant function,

  3. (3)

    The domain of the neural field \( [x_{\min } ,x_{\max } ] \) is covered by the domains of the subfunctions, i.e., \( d_{1} = x_{\min } \) and \( d_{N + 1} = x_{\max } \).

Definition 3 means the division of S(x) into N subfunctions \( S_{i} (x) \) (\( i = 1, \ldots ,N \)) such that each subfunction is either a monotone or constant function. \( S_{1} (x) - S_{7} (x) \) in Fig. 6a show an example of the subfunctions for \( N = 7 \) corresponding to S(x) in Fig. 3a.
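On a sampled grid, the decomposition of Definition 3 can be carried out mechanically by grouping consecutive increments of S(x) by sign. The following is a rough sketch; the sampling and the tolerance for treating increments as zero are assumptions.

```python
def subfunctions(S, tol=1e-12):
    """Split sampled values S into maximal runs over which the sequence is
    monotone increasing (+1), constant (0), or monotone decreasing (-1).
    Returns (start, end, kind) triples; each subfunction's domain is the
    closed index interval [start, end], and adjacent runs share a boundary
    point, as the d_i of Definition 3 do."""
    signs = []
    for k in range(len(S) - 1):
        d = S[k + 1] - S[k]
        signs.append(0 if abs(d) <= tol else (1 if d > 0 else -1))
    runs, start = [], 0
    for k in range(1, len(signs) + 1):
        if k == len(signs) or signs[k] != signs[start]:
            runs.append((start, k, signs[start]))
            start = k
    return runs
```

Applied to a sampled version of the input in Fig. 3a, this procedure would yield monotone and constant pieces of the kind labeled \( S_{1} (x) - S_{7} (x) \) in Fig. 6a.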

Fig. 6

The way of plotting the \( a - \hat{S} \) curve. a According to Definition 3, the subfunctions \( S_{1} (x) - S_{7} (x) \) are constructed from the function S(x) in Fig. 3a such that each subfunction is either a monotone or constant function. b The \( a - \hat{S} \) curve has been drawn by plotting \( \Upgamma_{ij} \) (\( i = 1, \ldots ,7 \), \( j = 1, \ldots ,7 \)) by using Theorem 5 from the subfunctions \( S_{1} (x) - S_{7} (x) \). The line of \( \hat{S} = 0 \) contains \( \Upgamma_{ij} \) for many pairs of i, j, but their labels are omitted

Definition 4

We define \( \Upgamma_{ij} \) (\( i = 1, \ldots ,N \), \( j = 1, \ldots ,N \)) to be a set of (\( a,\hat{S} \)) such that there exist \( x_{1} \) and \( x_{2} \) satisfying the following relations:

  • $$ (1) \quad x_{1} \in D_{i} ,\quad x_{2} \in D_{j} , $$
    (B.1)
  • $$ (2) \quad x_{1} < x_{2} , $$
    (B.2)
  • $$ (3) \quad S_{i} (x_{1} ) = S_{j} (x_{2} ) = \hat{S}, $$
    (B.3)
  • $$ (4) \quad a = x_{2} - x_{1} , $$
    (B.4)

    where \( D_{i} = [d_{i} ,d_{i + 1} ] \) is the domain of the subfunction \( S_{i} (x) \).

From Definitions 1 and 4, we can find that Γ ij is a subset of the \( a - \hat{S} \) curve and that the \( a - \hat{S} \) curve is described as \( \bigcup\nolimits_{i,j} {\Upgamma_{ij} } .\)

Definition 5

Let \( S_{Li}^{ - 1} (S) \) and \( S_{Hi}^{ - 1} (S) \) be functions such that

$$ S_{Li}^{ - 1} (S) = \left\{ \begin{gathered} S_{i}^{ - 1} (S),\quad{\text{if}}\,S_{i} \,{\text{is}}\,{\text{a}}\,{\text{monotone}}\,{\text{function,}} \hfill \\ d_{i} ,\quad{\text{if}}\,S_{i} \,{\text{is}}\,{\text{a constant}}\,{\text{function}}, \hfill \\ \end{gathered} \right. $$
(B.5)
$$ S_{Hi}^{ - 1} (S) = \left\{ \begin{gathered} S_{i}^{ - 1} (S),\quad{\text{if}}\,S_{i} \,{\text{is}}\,{\text{a}}\,{\text{monotone}}\,{\text{function,}} \hfill \\ d_{i + 1} ,\quad{\text{if}}\,S_{i} \,{\text{is}}\,{\text{a constant}}\,{\text{function}}, \hfill \\ \end{gathered} \right. $$
(B.6)

where \( S_{i}^{ - 1} (S) \) denotes the inverse function of \( S_{i} (x) \).

From the above definitions, we have the following theorem that gives the explicit description of Γ ij .

Theorem 5

Let R i be the range of a subfunction S i (x). Then, Γ ij is described as follows:

  1. (1)

    When both \( S_{i} \) and \( S_{j} \) (\( i < j \)) are monotone functions and \( R_{i} \cap R_{j} \ne \phi \),

    $$ \Upgamma_{ij} = \left\{ {(a,\hat{S})|a = S_{j}^{ - 1} (\hat{S}) - S_{i}^{ - 1} (\hat{S}),\,a > 0,\,\hat{S} \in R_{i} \cap R_{j} } \right\}, $$
    (B.7)
  2. (2)

    When \( S_{i} \) and/or \( S_{j} \) (\( i \le j \)) are constant functions and \( R_{i} \cap R_{j} \ne \phi \),

    $$ \Upgamma_{ij} = \left\{ {(a,\hat{S})|a \in [S_{Lj}^{ - 1} (S_{c} ) - S_{Hi}^{ - 1} (S_{c} ),\,S_{Hj}^{ - 1} (S_{c} ) - S_{Li}^{ - 1} (S_{c} )]\begin{array}{*{20}c} , \\ \end{array} \quad a > 0,\,\hat{S} = S_{c} } \right\}, $$
    (B.8)

    where \( S_{c} \) is defined such that \( \{ S_{c} \} = R_{i} \cap R_{j} \),

  3. (3)

    Otherwise, \( \Upgamma_{ij} = \phi \).

Proof of Theorem 5

We can prove \( \Upgamma_{ij} = \phi \) from Definition 4 in case of (1) \( i > j \), (2) \( R_{i} \cap R_{j} = \phi \), and (3) \( S_{i} \) is a monotone function and \( i = j \). Thus, by excluding these cases, we consider the following five cases:

  • Case A: Both \( S_{i} \) and \( S_{j} \) are monotone functions, \( i < j \), and \( R_{i} \cap R_{j} \ne \phi \),

  • Case B: \( S_{i} \) is a monotone function, \( S_{j} \) is a constant function, \( i < j \), and \( R_{i} \cap R_{j} \ne \phi \),

  • Case C: \( S_{i} \) is a constant function, \( S_{j} \) is a monotone function, \( i < j \), and \( R_{i} \cap R_{j} \ne \phi \),

  • Case D: \( S_{i} \) is a constant function and \( i = j \),

  • Case E: Both \( S_{i} \) and \( S_{j} \) are constant functions, \( i < j \), and \( R_{i} \cap R_{j} \ne \phi \).

Then, for each case, Γ ij is given by the following lemma. (Proof of the lemma is given later.)

Lemma 1

Let \( S_{ci} \) be the value of a subfunction \( S_{i} (x) \) when \( S_{i} (x) \) is a constant function. Then, for each case, \( \Upgamma_{ij} \) is described as follows:

$$ \Upgamma_{ij} = \left\{ {(a,\hat{S})|a = S_{j}^{ - 1} (\hat{S}) - S_{i}^{ - 1} (\hat{S}),\,a > 0,\,\hat{S} \in R_{i} \cap R_{j} } \right\}\quad {\text{for}}\;{\text{Case}}\;{\text{A}}, $$
(B.9)
$$ \Upgamma_{ij} = \left\{ {(a,\hat{S})|a \in [d_{j} - S_{i}^{ - 1} (S_{cj} ),\,d_{j + 1} - S_{i}^{ - 1} (S_{cj} )],\,a > 0,\,\hat{S} = S_{cj} } \right\}\quad {\text{for}}\;{\text{Case}}\;{\text{B}}, $$
(B.10)
$$ \Upgamma_{ij} = \left\{ {(a,\hat{S})|a \in [S_{j}^{ - 1} (S_{ci} ) - d_{i + 1} ,\,S_{j}^{ - 1} (S_{ci} ) - d_{i} ],\,a > 0,\,\hat{S} = S_{ci} } \right\}\quad {\text{for}}\;{\text{Case}}\;{\text{C}}, $$
(B.11)
$$ \Upgamma_{ij} = \left\{ {(a,\hat{S})|a \in (0,d_{i + 1} - d_{i} ],\,\hat{S} = S_{ci} } \right\}\quad {\text{for}}\;{\text{Case}}\;{\text{D}}, $$
(B.12)
$$ \Upgamma_{ij} = \left\{ {(a,\hat{S})|a \in [d_{j} - d_{i + 1} ,\,d_{j + 1} - d_{i} ],\,a > 0,\,\hat{S} = S_{ci} } \right\}\quad {\text{for}}\;{\text{Case}}\;{\text{E}}. $$
(B.13)

From Lemma 1, we can see that \( \Upgamma_{ij} \) for Case A is the same as (B.7), and that \( \Upgamma_{ij} \) for Cases B–E is summarized as (B.8) by using the notations of \( S_{Li}^{ - 1} \) and \( S_{Hi}^{ - 1} \) in Definition 5, so that we obtain Theorem 5. □

The proof of Lemma 1 is given as follows.

Proof of Lemma 1

Since the proofs of Cases A–E are similar, we show only the proof of Case A and omit the proofs of the other cases.

If \( (a,\hat{S}) \in \Upgamma_{ij} \), there exist \( x_{1} \) and \( x_{2} \) that satisfy (B.1)–(B.4) from the definition of \( \Upgamma_{ij} \). We can find \( \hat{S} \in R_{i} \cap R_{j} \) from (B.3) and \( a = S_{j}^{ - 1} (\hat{S}) - S_{i}^{ - 1} (\hat{S}) \) from (B.3) and (B.4). \( a > 0 \) also holds from (B.2) and (B.4), so that

$$ (a,\hat{S}) \in \left\{ {(a,\hat{S})|a = S_{j}^{ - 1} (\hat{S}) - S_{i}^{ - 1} (\hat{S}),\,a > 0,\,\hat{S} \in R_{i} \cap R_{j} } \right\}. $$
(B.14)

On the contrary, if (B.14) holds, we can obtain (B.1)–(B.4) by setting \( x_{1} = S_{i}^{ - 1} (\hat{S}) \) and \( x_{2} = S_{j}^{ - 1} (\hat{S}) \). Thus, we have \( (a,\hat{S}) \in \Upgamma_{ij} \). □

Since the \( a - \hat{S} \) curve is \( \bigcup\nolimits_{i,j} {\Upgamma_{ij} } \) as stated above, we can draw the \( a - \hat{S} \) curve by plotting points \( (a,\hat{S}) \in \Upgamma_{ij} \) for every i, j with \( \Upgamma_{ij} \ne \phi \) by using Theorem 5. Figure 6b shows how the \( a - \hat{S} \) curve is composed of \( \Upgamma_{ij} \), where each curve \( \Upgamma_{ij} \) has been obtained from \( S_{i} (x) \) and \( S_{j} (x) \) depicted in Fig. 6a.
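The plotting procedure above can be sketched in code for the monotone case (1) of Theorem 5, where each level \( \hat{S} \in R_{i} \cap R_{j} \) contributes the point \( a = S_{j}^{ - 1} (\hat{S}) - S_{i}^{ - 1} (\hat{S}) \). The two linear subfunctions below are hypothetical examples, not those of Fig. 6.

```python
def gamma_point(inv_i, inv_j, range_i, range_j, s_hat):
    """Case (1) of Theorem 5: for monotone S_i, S_j (i < j), return the
    point (a, s_hat) of Gamma_ij at level s_hat, or None if s_hat lies
    outside R_i ∩ R_j or the resulting a is not positive."""
    lo = max(range_i[0], range_j[0])
    hi = min(range_i[1], range_j[1])
    if not (lo <= s_hat <= hi):
        return None                      # s_hat outside R_i ∩ R_j
    a = inv_j(s_hat) - inv_i(s_hat)      # a = S_j^{-1}(S) - S_i^{-1}(S)
    return (a, s_hat) if a > 0 else None

# Hypothetical subfunctions: S_i(x) = x on [0, 1], S_j(x) = 3 - x on [2, 3].
inv_i = lambda s: s          # S_i^{-1}
inv_j = lambda s: 3.0 - s    # S_j^{-1}
```

Sweeping s_hat over the range intersection and collecting the returned points traces out the branch \( \Upgamma_{ij} \) of the \( a - \hat{S} \) curve for this pair.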

Appendix C: The method for finding solutions of the steady condition 1

Here we show how the solutions of the steady condition 1 can be obtained when the intersections of the \( a - \hat{S} \) curve with Y(a) are given. All the definitions in Appendix B (Definitions 2–5) are also used in this appendix.

As mentioned in Appendix B, the \( a - \hat{S} \) curve is written as \( \bigcup\nolimits_{i,j} {\Upgamma_{ij} } \) by using \( \Upgamma_{ij} \) in Definition 4. Therefore, when a point \( (a,\hat{S}) \) lies on the \( a - \hat{S} \) curve, \( (a,\hat{S}) \in \Upgamma_{ij} \) holds for some pair of i, j, and there exist \( x_{1} \) and \( x_{2} \) satisfying (B.1)–(B.4). Hence, we denote the set of such pairs \( (x_{1} ,x_{2} ) \) by \( \Uptheta_{ij} [a,\hat{S}] \). The following theorem shows the explicit description of \( \Uptheta_{ij} [a,\hat{S}] \).

Theorem 6

\( \Uptheta_{ij} [a,\hat{S}] \) is given as follows:

  1. (1)

    When S i and/or S j are monotone functions,

    $$ \Uptheta_{ij} [a,\hat{S}] = \left\{ {(x_{1} ,x_{2} )|x_{1} = \max \left( {S_{Li}^{ - 1} (\hat{S}),\,S_{Lj}^{ - 1} (\hat{S}) - a} \right),\,x_{2} = x_{1} + a} \right\}, $$
    (C.1)
  2. (2)

    When both S i and S j are constant functions,

    $$ \Uptheta_{ij} [a,\hat{S}] = \left\{ {(x_{1} ,x_{2} )|x_{1} \in \left[ {\max \left( {S_{Li}^{ - 1} (\hat{S}),\,S_{Lj}^{ - 1} (\hat{S}) - a} \right),\,\min \left( {S_{Hi}^{ - 1} (\hat{S}),\,S_{Hj}^{ - 1} (\hat{S}) - a} \right)} \right],\,x_{2} = x_{1} + a} \right\}, $$
    (C.2)

    where \( S_{i} \) and \( S_{j} \) are the subfunctions defined in Definition 3, and \( S_{Li}^{ - 1} \) and \( S_{Hi}^{ - 1} \) are the functions defined in Definition 5.

Proof of Theorem 6

We present the following lemma. (The proof of this lemma is shown later.)

Lemma 2

Consider the same classification as Cases A–E shown in the proof of Theorem 5 in Appendix B; then \( \Uptheta_{ij} [a,\hat{S}] \) for each case is given as follows:

$$ \Uptheta_{ij} [a,\hat{S}] = \left\{ {(x_{1} ,x_{2} )|x_{1} = S_{i}^{ - 1} (\hat{S}),\,x_{2} = x_{1} + a} \right\}\quad {\text{for}}\;{\text{Case}}\;{\text{A}}, $$
(C.3)
$$ \Uptheta_{ij} [a,\hat{S}] = \left\{ {(x_{1} ,x_{2} )|x_{1} = S_{i}^{ - 1} (\hat{S}),\,x_{2} = x_{1} + a} \right\}\quad {\text{for}}\;{\text{Case}}\;{\text{B}}, $$
(C.4)
$$ \Uptheta_{ij} [a,\hat{S}] = \left\{ {(x_{1} ,x_{2} )|x_{1} = S_{j}^{ - 1} (\hat{S}) - a,\,x_{2} = x_{1} + a} \right\}\quad {\text{for}}\;{\text{Case}}\;{\text{C}}, $$
(C.5)
$$ \Uptheta_{ij} [a,\hat{S}] = \left\{ {(x_{1} ,x_{2} )|x_{1} \in [d_{i} ,d_{i + 1} - a],\,x_{2} = x_{1} + a} \right\}\quad {\text{for}}\;{\text{Case}}\;{\text{D}}, $$
(C.6)
$$ \Uptheta_{ij} [a,\hat{S}] = \left\{ {(x_{1} ,x_{2} )|x_{1} \in [\max (d_{i} ,d_{j} - a),\min (d_{i + 1} ,\,d_{j + 1} - a)],\,x_{2} = x_{1} + a} \right\}\quad {\text{for}}\;{\text{Case}}\;{\text{E}}, $$
(C.7)

where \( [d_{i} ,d_{i + 1} ] \) is the domain of the subfunction \( S_{i} (x) \) in Definition 3.

If we use the notations of S −1 Li and S −1 Hi in Definition 5, then Cases A–E in Lemma 2 can be summarized as Theorem 6. □

The proof of Lemma 2 is given as follows.

Proof of Lemma 2

Since the proofs of Cases A–E are similar, we show only the proof of Case A and omit the proofs of the other cases. From \( (a,\hat{S}) \in \Upgamma_{ij} \) and Lemma 1, we have

$$ a = S_{j}^{ - 1} (\hat{S}) - S_{i}^{ - 1} (\hat{S}), $$
(C.8)
$$ a > 0, $$
(C.9)
$$ \hat{S} \in R_{i} \cap R_{j} . $$
(C.10)

If \( (x_{1} ,x_{2} ) \in \Uptheta_{ij} [a,\hat{S}] \), we find \( x_{1} = S_{i}^{ - 1} (\hat{S}) \) and \( x_{2} = S_{j}^{ - 1} (\hat{S}) \) from (B.3). By using (C.8), we have x 2 = x 1 + a, so that

$$ (x_{1} ,x_{2} ) \in \left\{ {(x_{1} ,x_{2} )|x_{1} = S_{i}^{ - 1} (\hat{S}),\,x_{2} = x_{1} + a} \right\}. $$
(C.11)

On the contrary, if (C.11) holds, we can prove that (B.1)–(B.4) also hold by using (C.8) and (C.9). Thus, we obtain \( (x_{1} ,x_{2} ) \in \Uptheta_{ij} [a,\hat{S}]. \)

Consider an intersection point \( (a_{k}^{*} ,S_{k}^{*} ) \) of the \( a - \hat{S} \) curve with Y(a) as in Step 1 of the graphic analysis method. Let us define \( i_{k} \) and \( j_{k} \) to be the integers satisfying \( (a_{k}^{*} ,S_{k}^{*} ) \in \Upgamma_{{i_{k} j_{k} }} \), and set \( \Uptheta_{k} = \Uptheta_{{i_{k} j_{k} }} [a_{k}^{*} ,S_{k}^{*} ] \). Then, from the definition of \( \Uptheta_{ij} [a,\hat{S}] \), we can find \( a_{k}^{*} = x_{2,k}^{*} - x_{1,k}^{*} \) and \( S(x_{1,k}^{*} ) = S(x_{2,k}^{*} ) = S_{k}^{*} \) for \( (x_{1,k}^{*} ,x_{2,k}^{*} ) \in \Uptheta_{k} \). Since the following corollary holds from Theorem 3, \( \left( {x_{1}^{*} ,x_{2}^{*} } \right) \in \bigcup\nolimits_{k} {\Uptheta_{k} } \) are the solutions of the steady condition 1 in Theorem 1.

Corollary 1

The steady condition 1 holds for \( x_{1}^{*} \), \( x_{2}^{*} \) if and only if

$$ (x_{1}^{*} ,x_{2}^{*} ) \in \bigcup\limits_{k} {\Uptheta_{k} } . $$

Proof of Corollary 1

If the steady condition 1 holds for \( x_{1}^{*} \), \( x_{2}^{*} \) (\( x_{1}^{*} < x_{2}^{*} \)), the point \( (a^{*} ,S^{*} ) \) with \( a^{*} = x_{2}^{*} - x_{1}^{*} \) and \( S^{*} = S(x_{1}^{*} ) = S(x_{2}^{*} ) \) is an intersection of the \( a - \hat{S} \) curve with Y(a) from Theorem 3.

We find \( S_{i} (x_{1}^{*} ) = S_{j} (x_{2}^{*} ) = S^{*} \) with i, j satisfying \( x_{1}^{*} \in D_{i} \) and \( x_{2}^{*} \in D_{j} \), where \( S_{i} \) and \( D_{i} \) (\( i = 1, \ldots ,N \)) denote the subfunction and its domain in Definition 3. Hence, \( (a^{*} ,S^{*} ) \in \Upgamma_{ij} \) holds from Definition 4. Since \( a^{*} = a_{k}^{*} \), \( S^{*} = S_{k}^{*} \), \( i = i_{k} \), and \( j = j_{k} \) hold for some k, we can obtain

$$ x_{1}^{*} \in D_{{i_{k} }} ,\quad x_{2}^{*} \in D_{{j_{k} }} , $$
(C.12)
$$ S_{{i_{k} }} (x_{1}^{*} ) = S_{{j_{k} }} (x_{2}^{*} ) = S_{k}^{*} , $$
(C.13)
$$ a_{k}^{*} = x_{2}^{*} - x_{1}^{*} . $$
(C.14)

Therefore, we can find \( (x_{1}^{*} ,x_{2}^{*} ) \in \Uptheta_{k} \).

On the contrary, if \( (x_{1}^{*} ,x_{2}^{*} ) \in \Uptheta_{k} \) holds for some k, (C.13) and (C.14) hold. Since \( (a_{k}^{*} ,S_{k}^{*} ) \) lies on an intersection of the \( a - \hat{S} \) curve with Y(a), we obtain the steady condition 1 from Theorem 3. □

Since the elements of \( \Uptheta_{k} = \Uptheta_{{i_{k} j_{k} }} [a_{k}^{*} ,S_{k}^{*} ] \) can be obtained by using Theorem 6 for each k, we can actually find the solutions of the steady condition 1. Note that Theorem 6 indicates that, if both \( S_{{i_{k} }} \) and \( S_{{j_{k} }} \) are constant functions, \( \Uptheta_{k} \) contains an infinite number of elements. Thus, in this case, we need to select elements \( (x_{1,k}^{*} ,x_{2,k}^{*} ) \in \Uptheta_{k} \) at the required accuracy in order to plot \( G[x;x_{1,k}^{*} ,x_{2,k}^{*} ] \) in Step 2 of the graphic analysis method.
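As a concrete illustration of (C.1), the sketch below recovers the boundary pair \( (x_{1} ,x_{2} ) \) for the monotone case of Theorem 6 from an intersection value \( (a,\hat{S}) \). The two linear subfunctions and the intersection values are hypothetical examples introduced only for this illustration.

```python
def theta_monotone(inv_Li, inv_Lj, a, s_hat):
    """Case (1) of Theorem 6: when S_i and/or S_j are monotone,
    x1 = max(S_Li^{-1}(S_hat), S_Lj^{-1}(S_hat) - a) and x2 = x1 + a."""
    x1 = max(inv_Li(s_hat), inv_Lj(s_hat) - a)
    return (x1, x1 + a)

# Hypothetical subfunctions: S_i(x) = x on [0, 1], S_j(x) = 3 - x on [2, 3],
# evaluated at an assumed intersection with a = 2 and S_hat = 0.5.
x1, x2 = theta_monotone(lambda s: s, lambda s: 3.0 - s, 2.0, 0.5)
```

Here \( x_{2} - x_{1} = a \) and both boundary points sit at the level \( \hat{S} \), i.e., \( S_{i} (x_{1} ) = S_{j} (x_{2} ) = \hat{S} \), which is exactly what Step 2 of the graphic analysis method needs.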


Cite this article

Kubota, S., Hamaguchi, K. & Aihara, K. Local excitation solutions in one-dimensional neural fields by external input stimuli. Neural Comput & Applic 18, 591–602 (2009). https://doi.org/10.1007/s00521-009-0246-2
