Abstract
We show that point-neuron models with a Heaviside firing rate function can be ill posed. More specifically, the initial-condition-to-solution map might become discontinuous in finite time. Consequently, if finite precision arithmetic is used, then it is virtually impossible to guarantee the accurate numerical solution of such models. If a smooth firing rate function is employed, then standard ODE theory implies that point-neuron models are well posed. Nevertheless, in the steep firing rate regime, the problem may become close to ill posed, and the error amplification, in finite time, can be very large. This observation is illuminated by numerical experiments. We conclude that, if a steep firing rate function is employed, then minor round-off errors can have a devastating effect on simulations, unless proper error-control schemes are used.
1 Introduction
Modeling of electrical potentials has a long tradition in computational neuroscience. One model with some physiological significance is the voltage-based system
\[ \boldsymbol{\tau} \mathbf{u}'(t) = -\mathbf{u}(t) + \boldsymbol{\omega} S_{\beta} \bigl[ \mathbf{u}(t) - \mathbf{u}_{\theta} \bigr] + \mathbf{q}(t), \tag{1} \]
\[ \mathbf{u}(0) = \mathbf{u}_{0}, \tag{2} \]
where \(\boldsymbol{\tau}\) is a diagonal matrix with positive time constants and \(S_{\beta}\) is a sigmoid firing rate function with steepness parameter \(\beta > 0\), e.g. \(S_{\beta}(x) = 1/(1 + e^{-\beta x})\).
In the rate model (1)–(2), each component function \(u_{i}(t)\) of \(\mathbf{u}(t)\) represents the time-dependent potential of the ith unit in a network of N units. The nonlinear function \(S_{\beta}\) is called the firing rate function, \(\{ \omega_{ij} \}\) are the connectivities, and \(\mathbf{q}(t)\) models the external drive. A detailed derivation of this model can be found in [1–3].
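For concreteness, the right-hand side of (1)–(2) can be sketched in a few lines of code. This is a minimal Python sketch, not the implementation used for the experiments below; it assumes the common logistic choice \(S_{\beta}(x) = 1/(1+e^{-\beta x})\), and the arguments passed to `rhs` are illustrative placeholders:

```python
import math

def S(x, beta):
    """Sigmoid firing rate; converges pointwise to the Heaviside
    function H as beta -> infinity (logistic choice assumed)."""
    return 1.0 / (1.0 + math.exp(-beta * x))

def rhs(t, u, beta, omega, u_theta, q, tau):
    """Right-hand side of the rate model (1):
    tau_i * u_i' = -u_i + sum_j omega_ij * S_beta(u_j - u_theta_j) + q_i(t)."""
    n = len(u)
    return [
        (-u[i]
         + sum(omega[i][j] * S(u[j] - u_theta[j], beta) for j in range(n))
         + q(t)[i]) / tau[i]
        for i in range(n)
    ]
```

Any standard ODE integrator applied to `rhs` reproduces the qualitative behavior discussed in the following sections.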
The purpose of this paper is to explore the properties of the initial-condition-to-solution map
\[ R_{\beta}: \mathbf{u}_{0} \mapsto \mathbf{u}(T) \tag{3} \]
associated with (1)–(2). Note that we use the subscript β to emphasize that \(R_{\beta}\) depends on the steepness parameter β, and that \(R_{\infty}\) corresponds to using a Heaviside firing rate function, i.e. \(S_{\infty} = H\). We will also make use of the standard notation
\[ \| \mathbf{v} \|_{\infty} = \max_{1 \leq i \leq N} | v_{i} | \tag{4} \]
for the supremum norm throughout this paper.
A simple example, presented in Sect. 4, shows that \(R_{\infty}\) can become discontinuous. Hence, the model is mathematically ill posed [4, 5] and round-off errors of any size can corrupt computations. We conclude that it is very difficult to produce reliable simulations with such models. Since all norms for finite dimensional spaces are equivalent, it is not possible to “circumvent” this problem by changing the involved topologies.
According to standard ODE theory (Appendix A), \(R_{\beta}\), with \(\beta < \infty\), is continuous, but the size of the error-amplification ratio
\[ E(T;\beta) = \frac{\| \mathbf{u}(T) - \tilde{\mathbf{u}}(T) \|_{\infty}}{\| \mathbf{u}_{0} - \tilde{\mathbf{u}}_{0} \|_{\infty}} \tag{5} \]
may be huge for large β, which will be demonstrated and analyzed in Sects. 2 and 3, respectively. Here, \(\tilde{\mathbf{u}}_{0}\) represents a perturbed initial condition and \(\tilde{\mathbf{u}}(t)\) its associated solution. This implies that, also for \(1 \ll\beta< \infty\), it can become difficult to guarantee the accurate numerical solution of (1)–(2): Minor round-off errors may be significantly amplified within short time intervals, which can lead to erroneous simulations.
Our investigation is motivated by the fact that steep sigmoid functions, or even the Heaviside function, are often employed in mathematical/computational neuroscience; see e.g. [1, 6] and references therein. Other authors [7, 8] have also pointed out that severe challenges occur if \(\beta = \infty\), i.e. issues concerning how to define suitable function spaces and how to prove existence of solutions. Nevertheless, as far as we know, results which explicitly discuss the ill-posed nature of (1)–(2) when \(\beta = \infty\), and how this property yields extra numerical challenges in the steep, but smooth, firing rate regime, have not previously been published.
Remark
We would like to point out the following: Assume that an initial condition is close to an unstable equilibrium. Our results should not be interpreted as expressing the mundane fact that a perturbation of this initial condition, moving it to another region with completely different dynamical properties, may lead to large changes in the solution. In fact, we show that the error-amplification ratio can be huge, during small time intervals, even though the perturbation does not change which neurons are active. That is, the change in the initial condition is not such that it changes the qualitative behavior of the dynamical system for \(0 < t \ll 1\)—only the quantitative properties are dramatically altered. This can happen in the steep firing rate regime.
2 Numerical Results
Let us first compute the error-amplification ratio (5) for some simple problems.
Example 1
Consider the following model of a single point neuron, i.e. \(N=1\),
We used Matlab’s ode45 solver, with the default error-control settings and \(T=0.1\), to compute numerical approximations of
A second series of simulations was also performed, using the same selection of values for the steepness parameter, but with the perturbed initial condition
The corresponding solution is denoted \(\tilde{u}(t)=\tilde{u}(t; \beta)\).
Plots of u and ũ, with \(\beta=200\), are displayed in Fig. 1. Note that in both cases the neuron fires, i.e. the change in the initial condition is not such that it has moved from one side of an unstable equilibrium to the other side. Even so, according to Fig. 2 and Table 1, the error-amplification ratio \(E(T;\beta)\), due to the minor perturbation (6) of the initial condition, is in the range \([80.6, 1054.1]\) for \(\beta \in[100, 200]\), and also very large for \(\beta= 50, 75\).
Simulations with the strict error-control setting
generated the same results, and so did an explicit Euler scheme with uniform time-step \(\Delta t = 10^{-7}\).
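An experiment of this kind is easy to mimic. The sketch below does not use the exact parameters of Example 1; instead it integrates the illustrative single-neuron model \(u' = -u + \omega S_{\beta}(u - u_{\theta})\), with ω = 1, \(u_{\theta} = 0.5\), a logistic sigmoid, an initial condition just above threshold, and a perturbation of size \(10^{-5}\), and then evaluates the error-amplification ratio (5) by forward Euler:

```python
import math

def S(x, beta):
    return 1.0 / (1.0 + math.exp(-beta * x))

def euler(u0, beta, T=0.1, dt=1e-4, omega=1.0, u_theta=0.5):
    """Forward Euler for u' = -u + omega * S_beta(u - u_theta)."""
    u = u0
    for _ in range(round(T / dt)):
        u += dt * (-u + omega * S(u - u_theta, beta))
    return u

def amplification(beta, eps=1e-5, u0=0.5 + 1e-5):
    """Error-amplification ratio (5) at T = 0.1 for the
    perturbed initial condition u0 + eps."""
    return abs(euler(u0 + eps, beta) - euler(u0, beta)) / eps

# Near threshold the linearization gives the growth rate
# omega*beta/4 - 1, so the ratio grows exponentially with beta.
```

With these illustrative values the measured ratio is close to 1 for small β but roughly an order of magnitude already at β = 100, mirroring the strong β-dependence reported in Table 1.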
Example 2
Let us consider a model of two point neurons:
The same procedure as in Example 1 was used, but with the perturbed initial condition
Figures 3 and 4 show that this minor change of the initial condition, in the steep firing rate regime, has a huge impact on the solution of the model. Moreover, the perturbation does not change which neuron fires. In Fig. 5 we have plotted the error-amplification ratio \(E(T;\beta)\), see (5), as a function of \(\beta=1,2,\ldots,200\). Clearly, in this case \(E(T;\beta)\) is unacceptably large, even for rather moderate values of the steepness parameter.
As in Example 1, we used Matlab’s ode45 solver with the standard settings. Computations with the strict error-control parameters
produced virtually the same results. The simulations were also “confirmed” by our explicit Euler implementation with time-step \(\Delta t = 10^{-7}\).
Figure 6 shows numerical results computed with Matlab’s ode15s solver, employing the default error-control settings. The curves shown in this figure are very different from the graphs displayed in Fig. 4, which were computed by the ode45 software. We conclude that even the toy example considered in this section is not trivial to solve (with the strict error-control setting (7), ode15s also managed to produce the curves shown in Fig. 4).
If \(u_{1}(t) \approx u_{\theta}\) and \(u_{2}(t) \approx u_{\theta}\), then
and the model implies that
which are rather small. One might therefore think that it is sufficient to employ a moderate time-step to obtain an accurate numerical approximation. Figure 7 shows that this is not the case. (In computational mathematics it is well known that the accuracy of the finite difference approximation \([u_{1}(t+\Delta t) - u_{1}(t)]/\Delta t\) of the derivative \(u'_{1}(t)\) depends on the second-order derivative \(u''_{1}(t)\), which in our case is of order \(O(\beta)\). This explains the poor approximation obtained with time-step \(\Delta t = 0.01\).)
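This mechanism can be reproduced directly on the firing rate function itself, whose higher derivatives scale with powers of β: the forward-difference error made when approximating \(S'_{\beta}(0) = \beta/4\) with a fixed step h grows rapidly with β (a Python sketch, logistic sigmoid assumed):

```python
import math

def S(x, beta):
    return 1.0 / (1.0 + math.exp(-beta * x))

def fd_error(beta, h=0.01):
    """Error of the forward difference [S(h) - S(0)]/h as an
    approximation of the exact derivative S'(0) = beta/4."""
    approx = (S(h, beta) - S(0.0, beta)) / h
    return abs(approx - beta / 4.0)
```

For h = 0.01 the error is negligible at β = 10 but of order 10 at β = 200, since the step no longer resolves the transition layer of width \(O(1/\beta)\).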
3 Analysis
The purpose of this section is to present an analysis of the error-amplification ratio (5) and thereby explain the main features of our numerical results. Even though the Picard–Lindelöf theorem [9, 10] asserts that (1)–(2) has a unique solution \(\mathbf{u}(t)\), provided that \(\mathbf{q}(t)\) is continuous and that \(\beta< \infty\), it is virtually impossible to determine a simple expression for \(\mathbf {u}(t)\). On the other hand, if \(\mathbf{u}(t) \approx\mathbf {u}_{\theta }\) and \(\beta< \infty\), then we can linearize \(S_{\beta}\) to get an approximate model, which is much easier to work with.
3.1 Linearization
The linearization of \(S_{\beta}\) about zero reads
\[ L_{\beta}(x) = \frac{1}{2} + \frac{\beta}{4} x. \tag{8} \]
Setting \(\boldsymbol{\tau} = \mathbf{I}\), the identity matrix, the linear approximation of (1)–(2) reads
\[ \mathbf{s}'(t) = \mathbf{A} \mathbf{s}(t) + \mathbf{c}(t), \tag{9} \]
\[ \mathbf{s}(0) = \mathbf{u}_{0}, \tag{10} \]
where
\[ \mathbf{A} = \mathbf{A}(\beta) = -\mathbf{I} + \frac{\beta}{4} \boldsymbol{\omega}, \qquad \mathbf{c}(t) = \boldsymbol{\omega} \biggl( \frac{1}{2} \mathbf{1} - \frac{\beta}{4} \mathbf{u}_{\theta} \biggr) + \mathbf{q}(t). \tag{11} \]
The linearized problem with a perturbed initial condition becomes
and the difference \(\mathbf{s}(t) - \tilde{\mathbf{s}}(t)\) obeys
\[ \bigl[ \mathbf{s}(t) - \tilde{\mathbf{s}}(t) \bigr]' = \mathbf{A} \bigl[ \mathbf{s}(t) - \tilde{\mathbf{s}}(t) \bigr]. \]
Therefore,
\[ \mathbf{s}(t) - \tilde{\mathbf{s}}(t) = e^{t \mathbf{A}} ( \mathbf{u}_{0} - \tilde{\mathbf{u}}_{0} ), \]
and the error-amplification ratio for the linearized model can be written in the form
\[ I(T;\beta) = \frac{\| e^{T \mathbf{A}} ( \mathbf{u}_{0} - \tilde{\mathbf{u}}_{0} ) \|_{\infty}}{\| \mathbf{u}_{0} - \tilde{\mathbf{u}}_{0} \|_{\infty}}. \tag{12} \]
Since the entries of \(\mathbf{A} = \mathbf{A} (\beta)\) are of order \(O(\beta)\), see (11), we conclude that the error-amplification ratio for the linearized model is of exponential order \(O(e^{\beta})\). Is this also the case for the highly nonlinear model (1)–(2)? We will now explore this issue, but first we would like to make a short remark.
Remark
Recall the definition (8) of \(L_{\beta}\). If we replace \(S_{\beta}\) in (1)–(2) with
\[ \tilde{L}_{\beta}(x) = \max \bigl\{ 0, \min \bigl\{ 1, L_{\beta}(x) \bigr\} \bigr\}, \]
then the analysis of the linearized model, presented above, would also be valid for (1)–(2), provided that
\[ \bigl\| \mathbf{u}(t) - \mathbf{u}_{\theta} \bigr\|_{\infty} \leq \frac{2}{\beta}, \]
Similarly to the sigmoid function \(S_{\beta}\), \(\tilde{L}_{\beta}\) also converges point-wise to the Heaviside function as \(\beta\rightarrow \infty\). If one employs the sigmoid function in the point-neuron model, then the analysis, as we will see below, becomes much more involved.
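The exponential β-dependence of the linearized error-amplification ratio can be checked numerically in the scalar case: with \(\tau = 1\), \(N = 1\) and connectivity ω, the linearized perturbation obeys \(e'(t) = (\omega\beta/4 - 1)\, e(t)\), so the ratio equals \(e^{(\omega\beta/4 - 1)T}\). A Python sketch, with ω = 1 chosen purely for illustration:

```python
import math

def linear_amplification(beta, T=0.1, omega=1.0, dt=1e-5):
    """Scalar linearized model: the perturbation e(t) obeys
    e' = (omega*beta/4 - 1) * e, hence e(T)/e(0) = exp((omega*beta/4 - 1)*T).
    Integrate with forward Euler and return (numerical, closed form)."""
    a = omega * beta / 4.0 - 1.0
    e = 1.0
    for _ in range(round(T / dt)):
        e += dt * a * e
    return e, math.exp(a * T)
```

Doubling β from 100 to 200 multiplies the ratio by \(e^{25 T}\), i.e. by more than an order of magnitude already at T = 0.1, which is the \(O(e^{\beta})\) growth discussed above.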
3.2 Preparations
Let \(\beta_{\max}\), T̂, and α be arbitrary positive constants. It is easy to construct a smooth vector-valued function z satisfying
Hence, defining the source as
we conclude that the solution \(\mathbf{u}(t; \beta_{\max}) = \mathbf {z}(t)\) of (1)–(2) also satisfies (13), provided that \(\mathbf{u}_{0} = \mathbf{z}(0)\). By employing standard techniques, one can show that the solution \(\mathbf{u}(t; \beta)\) of (1)–(2) depends continuously on \(0 < \beta< \infty\); see Appendix B. Consequently, there exists \(\bar{\beta}_{\min} < \beta_{\max}\) such that
For the sake of simple notation, we will in our analysis write u, or \(\mathbf{u}(t)\), instead of \(\mathbf{u}(t; \beta)\).
Furthermore, according to the analysis presented in Appendices A–C, u depends continuously on both the initial condition \(\mathbf{u}_{0}\) and the steepness parameter β, when \(0 < \beta< \infty\). Motivated by this property of (1)–(2), we assume that both u and \(\tilde{\mathbf{u}}\), where \(\tilde{\mathbf{u}}\) denotes the solution of (1) generated by a perturbed initial condition \(\tilde{\mathbf{u}}(0)=\tilde{\mathbf{u}}_{0}\), satisfy
where \(\hat{\beta}_{\min} < \beta_{\max}\). Then, by invoking the triangle inequality, we find that
which will be small if \(\beta_{\max}\) is large. Even so, as will become evident below, the error-amplification ratio (5) can be significant and lead to erroneous results.
Let s and \(\tilde{\mathbf{s}}\) denote the associated solutions of the linearized model (9)–(10). From (14) we find that the initial conditions \(\mathbf {u}_{0}\) and \(\tilde{\mathbf{u}}_{0}\) satisfy
Since s and \(\tilde{\mathbf{s}}\) are continuous with respect to t, the same initial conditions are employed in the linearized model, and these solutions depend continuously on \(0< \beta< \infty\), it follows that there exist \(\tilde{T} > 0\) and \(\tilde{\beta}_{\min} < \beta_{\max}\) such that
The main point of this discussion is to show that there exist (smooth) source terms q and perturbations of the initial condition such that (14) holds, regardless of how large \(\hat{T}, \beta_{\max}, \alpha > 0\) are. Also, the solutions of the linearized model will satisfy (16). For the sake of simple notation, let \(T=\min\{ \tilde{T},\hat{T} \}\) and \(\beta_{\min} = \max\{ \tilde{\beta}_{\min}, \hat{\beta}_{\min} \}\).
The triangle inequality implies that
obey
We will derive a bound for \(\| \mathbf{e}(T) \|_{\infty}\). The analysis of \(\| \tilde{\mathbf{e}}(T) \|_{\infty}\) is completely analogous, and thus it is omitted.
3.3 Linearization Error
Subtracting (9) from (1), and keeping in mind that we consider the case \(\boldsymbol {\tau} = \mathbf {I}\), yields
\(i=1,2,\ldots,N\), where we use the notation \(\mathbf{e}(t) = [e_{1}(t), e_{2}(t), \ldots, e_{N}(t)]^{T}\), and similarly for the entries of \(\mathbf{u}(t)\) and \(\mathbf{s}(t)\). Integrating and invoking the fact that \(e_{i}(0) = 0\), we get
\(i=1,2,\ldots,N\).
The triangle inequality, Taylor’s theorem and Eq. (8) for \(L_{\beta}\) imply that
where the second-to-last inequality follows from (14). By combining this with (18) and the triangle inequality, one finds that
where
Since this must hold for \(i=1,2,\ldots,N\),
and Grönwall’s inequality implies that
3.4 Error-Amplification Ratio
Clearly,
and the reverse triangle inequality yields
From (12) it follows that the error-amplification ratio (5) satisfies
Recall that the entries of the matrix \(A=A(\beta)\) are of order β; see (11). To derive a bound for \(\textit{II}(T;\beta)\), we employ (19), and a similar inequality for \(\|\tilde{\mathbf{e}}(T) \|_{\infty}\),
Hence, if
is not very large, β is fairly large and, e.g., \(\alpha\geq 0.5\), then the size of the error-amplification ratio \(E(T;\beta)\) is dominated by \(I(T;\beta)\), i.e. by the term stemming from the linearized model. (Note that (20) and the reverse triangle inequality also imply that \(|E(T;\beta)-I(T;\beta)| \leq \textit{II}(T;\beta)\).)
In our numerical experiments, \(\| \mathbf{u}_{0} - \tilde{\mathbf {u}}_{0} \| _{\infty} = 10^{-5}\) and \(\beta_{\max}=200\). That is, \(\| \mathbf{u}_{0} - \tilde{\mathbf{u}}_{0} \|_{\infty} \ll\beta_{\max}^{-1}\) and (14) will hold with some \(\alpha\geq1\) during a short time interval \([0,\hat{T}]\). It is virtually impossible to distinguish between the curves of \(E(T;\beta)\) and \(I(T;\beta)\), \(\beta=1,2, \ldots , 200\), when \(T=10^{-4}\) (curves not presented). Figure 8 illustrates that \(I(T;\beta)\) also yields a reasonable approximation of \(E(T;\beta)\) for \(T=0.06\).
We conclude that, during time intervals in which (14) holds, the linearized equations (9)–(10) yield a fair approximation of the point-neuron model (1)–(2). Hence, the analysis presented in this section, which provided an error-amplification ratio of order \(O(e^{\beta})\) for (9)–(10), explains our numerical results. More precisely, even though the error is bounded by \(2 \beta_{\max}^{-1-\alpha}\) during such time intervals, see (15), the error-amplification ratio can approximately be of order \(O(e^{\beta})\). This implies that minor perturbations, e.g. round-off errors, can corrupt computations. For example, in Fig. 4 an initial perturbation of size \(10^{-5}\) is increased to an error of approximately \(0.04 = 4\,\%\).
Remark
Assume that the \(\| \cdot \|_{\infty}\)-norm of the source term \(\mathbf{q}(t)\) is bounded. Then, since \(0 < S_{\beta}[x] < 1\) for all \(x \in \mathbb{R}\), it follows from (1) that both \(\| \mathbf{u}'(t) \|_{\infty}\) and \(\| \tilde{\mathbf{u}}'(t) \|_{\infty}\) are bounded independently of the size of the steepness parameter β, at least when \(\mathbf{u}(t) \approx \mathbf{u}_{\theta}\) and \(\tilde{\mathbf{u}}(t) \approx \mathbf{u}_{\theta}\). Consequently, the difference \(\| \mathbf{u}(T) - \tilde{\mathbf{u}}(T) \|_{\infty}\) is also bounded independently of \(\beta > 0\). Our results might therefore appear to be somewhat counter-intuitive. But note that we have only argued that the error-amplification ratio (5) may, approximately, be of order \(O(e^{\beta})\). If β is large, this can cause severe numerical challenges.
We would also like to comment that standard theory for general dynamical systems
\[ \mathbf{z}'(t) = \mathbf{F} \bigl( t, \mathbf{z}(t) \bigr) \]
relies on the size of \(\| \mathbf{F}' \|\), which for the point-neuron model (1)–(2) is of order \(O(\beta)\). Also, \(\mathbf{F}(t,\mathbf{z}) = -\mathbf{z}+ \omega S_{\beta }[\mathbf {z}-\mathbf{u}_{\theta}] + \mathbf{q}(t)\) is not Lipschitz continuous with respect to z when \(\beta= \infty\), which the Picard–Lindelöf theorem [9, 10] requires. (F is not even continuous when \(\beta=\infty\).)
The maximum error bound (15), valid when \(\mathbf {u}-\mathbf{u}_{\theta}\) and \(\tilde{\mathbf{u}}-\mathbf {u}_{\theta}\) satisfy (14), suggests that setting \(\beta= \infty\) might provide a solution to the issues discussed above. Unfortunately, as will be explained in the next section, this is not the case.
4 Ill Posed
We will now show that (1)–(2) can become truly ill posed, if a Heaviside firing rate function is employed. More specifically, the initial-condition-to-solution map, in finite time, can be discontinuous.
Consider the case \(N=1\), \(\tau=1\) and no source term:
\[ v'(t) = -v(t) + \omega H \bigl[ v(t) - u_{\theta} \bigr], \quad v(0) = v_{0}. \tag{22} \]
If, for \(0 < \epsilon \ll 1\),
\[ v_{0} = u_{\theta} - \epsilon \quad \text{and} \quad \widetilde{v}_{0} = u_{\theta} + \epsilon, \]
then
\[ v(t) = (u_{\theta} - \epsilon) e^{-t} \quad \text{and} \quad \widetilde{v}(t) = \omega + (u_{\theta} + \epsilon - \omega) e^{-t}, \]
provided that \(\omega > u_{\theta}\). Consequently,
\[ \bigl| R_{\infty}(\widetilde{v}_{0}) - R_{\infty}(v_{0}) \bigr| = \bigl| \widetilde{v}(T) - v(T) \bigr| = \omega \bigl( 1 - e^{-T} \bigr) + 2 \epsilon e^{-T} \geq \omega \bigl( 1 - e^{-T} \bigr), \]
where \(R_{\infty}\) denotes the initial-condition-to-solution map (3). We conclude that, no matter how close \(u_{0}\) and \(\widetilde{u}_{0}\) are, the difference \(v(T)-\widetilde{v}(T)\) between the corresponding solutions will not become small. Hence, \(R_{\infty}\) is discontinuous. It follows that the initial value problem, with a Heaviside firing rate function, is ill posed, in finite time—at least in the sense of Hadamard. Also note that, unless \(\omega=2 u_{\theta}\), \(u_{\theta}\) is not a stationary solution of (22), i.e. not an unstable equilibrium.
The error-amplification ratio for this ill-posed problem becomes infinite when \(\epsilon \rightarrow 0\):
\[ \frac{ | \widetilde{v}(T) - v(T) | }{ | \widetilde{v}_{0} - v_{0} | } = \frac{\omega ( 1 - e^{-T} ) + 2 \epsilon e^{-T}}{2 \epsilon} \rightarrow \infty \]
as \(\epsilon \rightarrow 0\), for any \(T>0\).
One may consider this issue from a more pragmatic point of view. Let \(v_{\Delta t}\) denote a numerical approximation of v. If a Heaviside firing rate function is employed, then \(H(v_{\Delta t}-u_{\theta})\) must be evaluated in some line of the simulation software. This is an unstable procedure because H has a jump discontinuity at 0, and round-off errors of any size can corrupt computations.
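This can be seen in even the crudest simulation. The sketch below integrates the scalar Heaviside model with the illustrative values ω = 1 and \(u_{\theta} = 0.5\) (so that \(\omega > u_{\theta}\)), starting just below and just above threshold:

```python
def heaviside_flow(v0, omega=1.0, u_theta=0.5, T=1.0, dt=1e-3):
    """Forward Euler for v' = -v + omega * H(v - u_theta), the
    N = 1, tau = 1, q = 0 model with a Heaviside firing rate."""
    v = v0
    for _ in range(round(T / dt)):
        H = 1.0 if v - u_theta > 0 else 0.0
        v += dt * (-v + omega * H)
    return v

eps = 1e-8
gap = abs(heaviside_flow(0.5 + eps) - heaviside_flow(0.5 - eps))
# The gap at time T stays O(1) however small eps is, so the
# error-amplification ratio gap/(2*eps) blows up as eps -> 0.
```

The trajectory started below threshold decays, the one started above is attracted to ω, and the final gap is bounded away from zero independently of ε, exactly as the closed-form solutions predict.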
In contrast to this, provided that \(\beta< \infty\),
see the analysis of the model (1)–(2) presented in Appendix A. Here, \(\tilde{\mathbf{u}}_{0}\) is any perturbation of the initial condition \(\mathbf{u}_{0}\), and A and B are positive constants depending on the matrices τ and ω, but not on β. This inequality shows that the initial-condition-to-solution map \(R_{\beta}\), \(\beta< \infty\), also is continuous at unstable equilibria.
5 Conclusions and Discussion
Since \(R_{\infty}\) can become discontinuous, it is virtually impossible to guarantee the accurate numerical solution of point-neuron models which employ a Heaviside firing rate function: Any round-off errors can potentially corrupt simulations. Alternatively, one may stop the simulation as soon as the solution hits the jump discontinuity, i.e. the threshold value for firing.
We have also observed that models with a steep, but smooth, firing rate function can amplify errors to an extreme degree, which is typical for “almost ill-posed” problems. Consequently, reliable simulations can only be obtained if proper error-control schemes are invoked. How to design effective error-control methods, for models with a large steepness parameter β, is, as far as the authors know, still an open problem. Nevertheless, it seems plausible that suitable adaptive numerical schemes, where the time steps become smaller when the solution reaches regions in the vicinity of the threshold value for firing, might be capable of handling the numerical error amplification.
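A naive sketch of such an adaptive idea, with illustrative parameters and a logistic sigmoid (this is our illustration of the principle, not a method from the literature), shrinks the Euler step whenever the state approaches the threshold, where the local growth rate \(|\omega S'_{\beta}(u - u_{\theta}) - 1|\) is of order β:

```python
import math

def S(x, beta):
    return 1.0 / (1.0 + math.exp(-beta * x))

def adaptive_euler(u0, beta, T=0.1, c=0.01, omega=1.0, u_theta=0.5):
    """Forward Euler for u' = -u + omega * S_beta(u - u_theta), with
    the step size shrunk near the firing threshold, where the local
    growth rate |omega * S'_beta(u - u_theta) - 1| is O(beta)."""
    t, u, steps = 0.0, u0, 0
    while t < T:
        s = S(u - u_theta, beta)
        rate = abs(omega * beta * s * (1.0 - s) - 1.0)  # S'_beta = beta*S*(1-S)
        dt = min(c / (1.0 + rate), T - t)
        if dt <= 0.0:
            break
        u += dt * (-u + omega * S(u - u_theta, beta))
        t += dt
        steps += 1
    return u, steps
```

For β = 100 this sketch tracks a fine fixed-step reference solution to within a few percent while using far fewer steps, since the step size is only reduced where the dynamics are stiff.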
Let
\[ F_{\beta; t_{1}, t_{2}} : \mathbf{u}(t_{1}) \mapsto \mathbf{u}(t_{2}) \]
be the operator which maps the solution of the point-neuron model (1)–(2) from time \(t_{1}\) to time \(t_{2}\). Note that the action of \(F_{\beta;t_{1},t_{2}}\) can be determined by solving the point-neuron model with \(\mathbf{u}(t_{1})\) as initial condition. Therefore, from the argument presented above, it follows that the error-amplification ratio associated with \(F_{\beta; t_{1},t_{2}}\) may be large, provided that \(\beta \gg 1\). We conclude that the issues pointed out in this study cannot necessarily be avoided by using an initial condition which is far from the threshold value \(\mathbf{u}_{\theta}\) for firing. In fact, it seems that one must prove that \(\mathbf{u}(t)\) never gets close to \(\mathbf{u}_{\theta}\) for \(t>0\)—a herculean task, if correct.
From a modeling perspective one might wonder: Should a voltage-based model of cortex be ill posed or “almost ill posed”? If so, then models employing a Heaviside firing rate function cannot be robustly solved with finite precision arithmetic, and regularized approximations are numerically challenging [4, 5].
We fear that properties similar to those discussed in this paper might hold for models which can be written in the form
\[ \mathbf{u}'(t) = \mathbf{F}_{\beta} \bigl( t, \mathbf{u}(t) \bigr), \]
where \(\| \mathbf{F}'_{\beta} \| \rightarrow \infty\) when \(\beta \rightarrow \infty\). This can, e.g., be the case for a number of models used in computational neuroscience and in the study of gene regulatory networks.
An easy solution to the issues raised in this paper is to avoid steep firing rate functions. If β is fairly small, then standard ODE theory [9, 10], and textbook material about the numerical treatment of such equations, can be used, provided that the source term \(\mathbf{q}(t)\) is continuous. Nevertheless, steep sigmoid functions are popular in computational neuroscience.
References
Bressloff P. Spatiotemporal dynamics of continuum neural fields. J Phys A, Math Theor. 2012;45:033001.
Ermentrout B. Neural networks as spatio-temporal pattern-forming systems. Rep Prog Phys. 1998;61:353–430.
Faugeras O, Veltz R, Grimbert F. Persistent neural states: stationary localized activity patterns in nonlinear continuous n-population, q-dimensional neural networks. Neural Comput. 2009;21:147–87.
Engl HW, Hanke M, Neubauer A. Regularization of inverse problems. Dordrecht: Kluwer Academic; 1996.
Well-posed problem. Wikipedia. https://en.wikipedia.org/wiki/Well-posed_problem (2016).
Coombes S. Waves, bumps, and patterns in neural field theories. Biol Cybern. 2005;93:91–108.
Veltz R, Faugeras O. Local/global analysis of the stationary solutions of some neural field equations. SIAM J Appl Dyn Syst. 2010;9:954–98.
Potthast R, beim Graben P. Existence and properties of solutions for neural field equations. Math Methods Appl Sci. 2010;33:935–49.
Hirsch MW, Smale S. Differential equations, dynamical systems and linear algebra. New York: Academic Press; 1974.
Picard–Lindelöf theorem. Wikipedia. https://en.wikipedia.org/wiki/Picard (2016).
Acknowledgements
This work was supported by The Research Council of Norway, project number 239070. The authors would like to thank the reviewers for a number of interesting comments, which significantly improved this paper.
Competing Interests
The authors declare that they have no competing interests.
Authors’ Contributions
All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.
Appendices
Appendix A: Continuous Dependence on the Initial Condition
We will prove the stability estimate (23), which holds if \(\beta< \infty\). Let u and \(\tilde{\mathbf{u}}\) denote the solutions of (1) corresponding to the initial conditions \(\mathbf{u}_{0}\) and \(\tilde{\mathbf{u}}_{0}\), respectively. For the purpose of detailing the derivation of this estimate, we work with the components of these vector functions, i.e.,
Let us also assume that the diagonal entries \(\{ \tau_{i} \}\), of the diagonal matrix τ, are positive, and let \(\{ \omega_{ij} \}\) denote the entries of ω.
We first observe that the component functions \(u_{i}\) and \(\tilde {u}_{i}\) satisfy the fixed point problems
from which we find, by using the triangle inequality, that
Then, by exploiting the fact that
\[ \bigl| S_{\beta}(x) - S_{\beta}(y) \bigr| \leq \frac{\beta}{4} | x - y | \]
for all \(x, y \in \mathbb{R}\), we get
where we in the final step have used the definition of the supremum norm (4). Thus, we arrive at the inequality
The stability estimate (23), with
then follows from Grönwall’s lemma.
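Assuming the logistic sigmoid \(S_{\beta}(x) = 1/(1+e^{-\beta x})\), the key smoothness property of the firing rate function is the global Lipschitz bound \(|S_{\beta}(x) - S_{\beta}(y)| \leq (\beta/4)|x - y|\), the constant \(\beta/4\) being the slope at threshold. A quick numerical spot check:

```python
import math

def S(x, beta):
    return 1.0 / (1.0 + math.exp(-beta * x))

def max_slope(beta, lo=-1.0, hi=1.0, n=2000):
    """Largest difference quotient of S_beta over a uniform grid;
    it never exceeds the Lipschitz constant beta/4, which the
    slope attains at x = 0."""
    xs = [lo + (hi - lo) * k / n for k in range(n + 1)]
    return max(abs(S(xs[k + 1], beta) - S(xs[k], beta))
               / (xs[k + 1] - xs[k]) for k in range(n))
```

The measured maximal quotient sits just below \(\beta/4\), confirming that the Lipschitz constant of the nonlinearity, and hence the Grönwall exponent in the stability estimate, grows linearly in β.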
Appendix B: Continuous Dependence on the Steepness Parameter
For the sake of completeness, we also show that the solution of the initial value problem (1)–(2) depends continuously on the steepness parameter β. Let β and β̂ be steepness parameters for the firing rate function. We fix the initial condition and the connectivity parameters of (1)–(2). The solutions corresponding to β and β̂ are denoted by u and \(\hat{\mathbf{u}}\), respectively. We readily obtain
for the component functions \(u_{i}\) and \(\hat{u}_{i}\) of u and \(\hat{\mathbf{u}}\), respectively. We now make use of the property
\[ \bigl| S_{\beta}(x) - S_{\hat{\beta}}(x) \bigr| \leq \frac{1}{4} | x | | \beta - \hat{\beta} |, \quad x \in \mathbb{R}, \]
of the firing rate function, and we get the chain of inequalities
Since \(0\leq S(x)\leq1\), we find the bound
uniformly in β, for the solutions of (1)–(2). Here,
and it follows that the integrals \(\int_{0}^{T}|\hat {u}_{j}(t)|\,dt\) in (24) can be bounded by β-independent constants. Thus, we end up with the bounding inequality
where A and B are as defined in Appendix A and
Grönwall’s lemma yields the stability estimate
which proves that the solution of (1)–(2) depends continuously on the steepness parameter \(\beta< \infty\) of the firing rate function.
Since \(q(T)\) is increasing, it is possible to extract further information from this argument. More specifically, let
with norm
Then we can conclude that the mapping
is continuous.
Appendix C: Continuous Dependence on the Initial Condition and the Steepness Parameter
We finally prove that the solution \(\mathbf{u}=\mathbf{u}(t;\mathbf {u}_{0},\beta)\) of (1)–(2) depends continuously on \((\mathbf{u}_{0},\beta)\). The proof of this fact proceeds as follows: By exploiting the triangle inequality and the stability estimates (23) and (25), we find that
and we are done. Here \(\mathbf{u}_{0}\) and \(\tilde{\mathbf{u}}_{0}\) denote two initial conditions of (1), while β̂ and β are two steepness parameters.
The function \(g(T)\) is increasing, and we conclude that the mapping \((\mathbf{u}_{0}, \beta) \mapsto \mathbf{u}(\,\cdot\,; \mathbf{u}_{0}, \beta)\) is continuous.
Nielsen, B.F., Wyller, J. Ill-Posed Point Neuron Models. J. Math. Neurosc. 6, 7 (2016). https://doi.org/10.1186/s13408-016-0039-8