Basic theory of Ordinary Differential Equations

  • Mimmo Iannelli
  • Andrea Pugliese
Part of the UNITEXT book series (UNITEXT, volume 79)


Differential Equations are pervasive in the description of natural phenomena, and the theory of Ordinary Differential Equations provides a basic framework where concepts, tools and results allow a systematic approach to knowledge. This book aims to give concrete proof of how the modeling of Nature relies on this theory and beyond. This appendix is intended to provide some concepts and results that are used in the text, referring to the student's background and to textbooks for a full acquaintance with the material. We mention [2, 3, 5, 7, 10] as basic references on the subject.



A.1 The Cauchy problem

As a starting point of our review we recall basic results on the Cauchy problem, under assumptions and conditions that allow the analysis of the models we present in the text. So we consider the following system in vector form
$$ \left\{ {\begin{array}{*{20}c} {{\mathbf{Y'}}(t) = {\mathbf{F}}(t,{\mathbf{Y}}(t)),} \hfill \\ {{\mathbf{Y}}(t_0 ) = {\mathbf{Y}}_0 } \hfill \\ \end{array} } \right. $$
$$ \begin{array}{*{20}c} {{\mathbf{Y}}(t) \equiv (y_1 (t), \cdots ,y_n (t)),} & {{\mathbf{Y}}_0 \equiv (y_1^0 , \cdots ,y_n^0 ),} \\ \end{array} $$
$$ {\mathbf{F}}(t,{\mathbf{x}}) \equiv (F_1 (t,x_1 , \cdots ,x_n ), \cdots ,F_n (t,x_1 , \cdots ,x_n )). $$

For simplicity, we assume that the function F(t, x) is defined everywhere in ℝ^{n+1}.

A first basic result on the Cauchy problem (A.1) concerns existence and uniqueness of a solution.

Theorem A.1 (Existence and uniqueness). Let the function F(t, x) be continuously differentiable in ℝ^{n+1}; then, for any Y 0, there exists an interval (t 0 − δ, t 0 + δ) and a unique continuously differentiable function Y(t), defined for t ∈ (t 0 − δ, t 0 + δ) and satisfying (A.1).

We will denote the solution by
$$ {\mathbf{Y}}(t,t_0 ,{\mathbf{Y}}_0 ) $$
where we show both the initial time t 0 and the initial datum Y 0. We note that we have the semigroup property
$$ {\mathbf{Y}}(t,t_0 ,{\mathbf{Y}}_0 ) = {\mathbf{Y}}(t,t_1 ,{\mathbf{Y}}(t_1 ,t_0 ,{\mathbf{Y}}_0 )) $$
meaning that the evolution of the system depends on the initial datum, not on past history.

We see that the existence stated by Theorem A.1 is local, and in general the solution cannot be extended beyond a maximal finite interval. Thus (A.3) defines a function of the variables (t, t 0, Y 0) on a region Ω that, in general, is strictly contained in ℝ^{n+2}, because for each t 0 and Y 0 the solution is not in general globally defined. However, we have

Theorem A.2 (continuity). Under the assumption of Theorem A.1, the function
$$ {\mathbf{Y}}( \cdot , \cdot , \cdot ):\Omega \to \mathbb{R}^n $$
is continuous.
In particular we have continuity with respect to the initial datum. Namely, if
$$ \mathop {\lim }\limits_{k \to \infty } {\mathbf{Y}}_k = {\mathbf{Y}}_\infty $$
then there exists an interval I ≡ [t 0 − δ, t 0 + δ] such that Y(t, t 0, Y k) is defined in I for k sufficiently large and
$$ \mathop {\lim }\limits_{k \to \infty } {\mathbf{Y}}(t,t_0 ,{\mathbf{Y}}_k ) = {\mathbf{Y}}(t,t_0 ,{\mathbf{Y}}_\infty ),\quad {\text{uniformly in }}[t_0 - \delta ,t_0 + \delta ]. $$

Since we deal with models requiring not only existence and uniqueness of the solution but also global existence, we need conditions ensuring that the solution is indeed defined for all t ≥ 0. In fact we have

Theorem A.3. Suppose that, for given t0 and Y 0, there exists M > 0 such that the solution satisfies
$$\left| {Y\left( {t,{t_0},{Y_0}} \right)} \right| \leqslant M $$
as far as it exists; then it exists for all t ∈ ℝ.

Actually, boundedness of the solution is a requirement occurring in many significant results. If estimate (A.4) is satisfied only for t ≥ t 0, then the solution is global in the future.

A second condition concerns the structure of the system.

Theorem A.4. Suppose that there exists M > 0 such that
$$ \left| {\frac{{\partial F_i }}{{\partial x_j }}(t,x_1 , \cdots ,x_n )} \right| \leqslant M,\quad i,j = 1, \cdots ,n. $$

Then the solution exists globally.

We are interested in describing the behavior of solutions that exist globally in time. The analysis leads to the concepts and results of qualitative analysis.
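To make the local-versus-global distinction concrete, here is a small numerical experiment (entirely our own example, not from the text): the Cauchy problem y′ = y², y(0) = 1 has the exact solution y(t) = 1/(1 − t), which blows up at t = 1, so existence is only local, consistently with Theorem A.3.

```python
# Our own illustrative example: y' = y^2, y(0) = 1 has exact solution
# y(t) = 1/(1 - t), blowing up at t = 1; existence is only local.

def rk4(f, y0, t0, t1, n):
    """Classical fourth-order Runge-Kutta for the scalar ODE y' = f(t, y)."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

f = lambda t, y: y * y          # right-hand side F(t, y) = y^2
y_num = rk4(f, 1.0, 0.0, 0.9, 2000)
print(y_num)                    # close to the exact value 1/(1 - 0.9) = 10
```

The computed value tracks the exact solution up to t = 0.9; pushing t1 toward 1 makes the numerical solution grow without bound, the numerical signature of a finite maximal interval.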

A.2 Equilibria and their stability

We focus on the autonomous case
$$ \begin{array}{*{20}c} {{\mathbf{Y'}}(t) = {\mathbf{F}}({\mathbf{Y}}(t)),} & {{\mathbf{Y}}(0) = {\mathbf{Y}}_0 } \\ \end{array} $$
for which the solution can be represented as Y (t, Y 0) since any solution satisfying a condition Y (t 0) = Y 0 is then given by
$$ {\mathbf{Y}}(t,t_0 ,{\mathbf{Y}}_0 ) = {\mathbf{Y}}(t - t_0 ,Y_0 ). $$
We want to discuss the local behavior of solutions in a neighborhood of equilibria. Actually, we first identify equilibria Y* as the solutions of the equation
$$ {\mathbf{F}}({\mathbf{Y}}^* ) = 0. $$
These points of the phase space ℝ^n correspond to solutions that are constant in time:
$$ {\mathbf{Y}}(t,{\mathbf{Y}}^* ) \equiv {\mathbf{Y}}^* . $$

Equilibria may be isolated points or even form a continuum in the phase space. A given equilibrium may be stable, asymptotically stable or unstable, according to the following

Definition A.1. A given equilibrium Y* is said to be
  • stable, if for any ε > 0 there exists δ > 0 such that
    $$ \begin{array}{*{20}c} {\left| {{\mathbf{Y}}_0 - {\mathbf{Y}}^* } \right| < \delta } \hfill & \Rightarrow \hfill & {\left| {{\mathbf{Y}}(t,0,{\mathbf{Y}}_0 ) - {\mathbf{Y}}^* } \right| < \varepsilon } \hfill & {{\text{for}}} \hfill & {t \geqslant 0;} \hfill \\ \end{array} $$
  • asymptotically stable, if it is stable and there exists δ > 0 such that
    $$ \begin{array}{*{20}c} {\left| {{\mathbf{Y}}_0 - {\mathbf{Y}}^* } \right| < \delta } & \Rightarrow & {\mathop {\lim }\limits_{t \to + \infty } {\mathbf{Y}}(t,0,{\mathbf{Y}}_0 ) = {\mathbf{Y}}^* } \\ \end{array} ; $$
  • unstable if it is not stable.

This definition is illustrated in the picture sketched in Fig. A.1. Actually we should specify that this definition concerns the future behavior of the solution, since we consider positive times t > 0. However we have the same definition concerning stability in the past.
Fig. A.1

Stability of the equilibrium Y*. For a fixed circle of radius ε > 0 we find a circle with radius δ such that the trajectories starting inside the latter do not exit from the former

These definitions can be readily applied to the one-dimensional case of a scalar equation
$$ y'(t) = F(y(t)). $$
In this case, the stability of an isolated equilibrium y* (namely a root of the function F(y)) is determined by the derivative of F(y) at the point y*. In fact we have
  • if F′(y*) < 0 the equilibrium y* is asymptotically stable;

  • if F′(y*) > 0 it is unstable.

Indeed such a statement can be easily checked, since we have complete knowledge of how the solutions of (A.6) behave (see Fig. A.2); however, here we have the basic paradigm according to which, in a neighborhood of the equilibrium, the deviation ω(t) = y(t) − y* approximately satisfies the equation
$$ \omega '(t) = F'(y^* )\omega (t). $$
Fig. A.2

The scalar case: stability and instability of an equilibrium point. (a) F′(y*) < 0 and the equilibrium is asymptotically stable; (b) the equilibrium is unstable since F′(y*) > 0

We note that the previous statement gives only sufficient conditions: if F′(y*) = 0 we can conclude neither stability nor instability.
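As a hedged numerical sketch of the scalar criterion above (the logistic equation and all names here are our own example, not from the text), the sign of F′(y*) can be estimated by a central difference:

```python
# Our own example: the logistic equation y' = F(y) = r*y*(1 - y/K);
# the sign of F'(y*) at an equilibrium decides its stability.

def stability(F, y_star, h=1e-6):
    """Classify an equilibrium y* of y' = F(y) by the sign of F'(y*)."""
    d = (F(y_star + h) - F(y_star - h)) / (2 * h)   # central difference
    if d < 0:
        return "asymptotically stable"
    if d > 0:
        return "unstable"
    return "inconclusive"   # F'(y*) = 0: the criterion is silent

r, K = 1.0, 100.0
F = lambda y: r * y * (1 - y / K)
print(stability(F, 0.0))   # F'(0) = r > 0: unstable
print(stability(F, K))     # F'(K) = -r < 0: asymptotically stable
```

The third branch reflects exactly the remark above: when F′(y*) = 0 the linearization decides nothing.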

A.3 Linear systems

The analysis of the linear case is especially important not only because it is possible to give a full description of the solutions, but also because, in the nonlinear case, local analysis in a neighborhood of an equilibrium point resorts to linearization (as in the one-dimensional case) to draw conditions for stability.

Thus we consider the problem
$$ \begin{array}{*{20}c} {{\mathbf{Y'}}(t) = A{\mathbf{Y}}(t),} & {{\mathbf{Y}}(0) = {\mathbf{Y}}_0 } \\ \end{array} $$
where A is an n × n matrix. In such a case the origin O is an equilibrium and the solution can be explicitly expressed through the fundamental solution
$$ e^{tA} = \sum\limits_{k = 0}^\infty {\frac{{t^k }}{{k!}}A^k } ,\quad {\text{as}}\quad {\mathbf{Y}}(t) = e^{tA} {\mathbf{Y}}_0 . $$
Going through the Jordan form of A, one can write Y(t) in terms of the eigenvalues of A. Without going into the details (see [4]), we just state that it is possible to arrive at
$$ {\mathbf{Y}}(t) = \sum\limits_{i = 1}^p {\sum\limits_{j = 0}^{m_i - 1} {t^j e^{\lambda _i t} {\mathbf{v}}_{ij} } } $$
where
  • λ i are the eigenvalues of A (i = 1,⋯,p);

  • m i is the multiplicity of the eigenvalue λ i (i = 1,⋯,p);

  • v ij are the projections of the initial vector Y 0 onto the generalized eigenvectors of A (i = 1,⋯,p; j = 0,⋯,m i − 1).

Thus we can state

Theorem A.5. For system (A.7) the origin O is
  1. asymptotically stable, if and only if ℜλ i < 0 for all i = 1,⋯,p;

  2. unstable, if there exists k such that ℜλ k > 0;

  3. stable, if ℜλ i ≤ 0 for all i = 1,⋯,p, and all eigenvalues with null real part are simple.

Since each eigenvalue determines a subspace E i of ℝ^n with dimension m i, such that
$$ \begin{array}{*{20}c} {e^{tA} E_i \subset E_i ,} & {\mathbb{R}^n = \oplus _{i = 1}^p E_i ,} \\ \end{array} $$
we can define three main subspaces E s, E c, E u as
  • E s (stable space), the direct sum of all the E i with ℜλ i < 0;

  • E c (central space), the direct sum of all the E i with ℜλ i = 0;

  • E u (unstable space), the direct sum of all the E i with ℜλ i > 0.

These three spaces are invariant under the matrix e tA and, if Y 0 belongs to one of them, then in (A.9) only the eigenvalues relative to that space are present. Thus we have
  • if Y 0E s then \(\mathop {\lim }\limits_{t \to + \infty } \left| {Y(t,Y_0 )} \right| = 0\) and \( \mathop {\lim }\limits_{t \to - \infty } \left| {Y(t,Y_0 )} \right| = + \infty ; \)

  • if Y 0E u then \(\mathop {\lim }\limits_{t \to - \infty } \left| {Y(t,Y_0 )} \right| = 0\) and \(\mathop {\lim }\limits_{t \to + \infty } \left| {Y(t,Y_0 )} \right| = + \infty ;\)

  • if Y 0E c then the solution Y(t,Y 0) may be constant, run on a circle, or \(\mathop {\lim }\limits_{t \to \pm \infty } \left| {Y(t,Y_0 )} \right| = + \infty \)

Of course, case (1) of Theorem A.5 corresponds to E u = E c = ∅. When E s = E c = ∅ the origin O is said to be completely unstable; when E s ≠ ∅ and E u ≠ ∅, O is said to be a saddle point. In Fig. A.3 the case of planar systems is illustrated.
Fig. A.3

Phase plane pictures for linear planar systems. The main cases of equilibrium points: a focus (a); a center (b); a saddle point (c); a node (d)

By the previous discussion we see that the problem of asymptotic stability of the origin is based on the analysis of the set of the eigenvalues, i.e. on the study of the roots of the characteristic polynomial
$$ a_0 \lambda ^n + a_1 \lambda ^{n - 1} + \cdots + a_n . $$
In the planar case this polynomial reads
$$ \lambda ^2 - {\text{trace(}}A{\text{)}}\lambda {\text{ + det(}}A{\text{)}} $$
and we have
Proposition A.1. Let A be a 2×2 matrix, then
  • the origin is asymptotically stable if and only if trace (A) < 0 and det(A) > 0;

  • if det(A) < 0, then the origin is a saddle point;

  • if det(A) > 0 and trace(A) > 0, then the origin is completely unstable.
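Proposition A.1 translates directly into a small routine; the following sketch (the function name and test matrices are ours) classifies the origin from trace(A) and det(A):

```python
# A sketch of Proposition A.1 (our own code): classify the origin of
# Y' = AY, with A = [[a, b], [c, d]], from trace(A) and det(A).

def classify_planar(a, b, c, d):
    """Stability of the origin for a 2x2 matrix, via Proposition A.1."""
    tr, det = a + d, a * d - b * c
    if det < 0:
        return "saddle point"
    if det > 0 and tr < 0:
        return "asymptotically stable"
    if det > 0 and tr > 0:
        return "completely unstable"
    return "borderline case"   # det = 0 or tr = 0: the criterion is silent

print(classify_planar(-1.0, 2.0, -2.0, -1.0))  # tr = -2, det = 5
print(classify_planar(1.0, 0.0, 0.0, -1.0))    # det = -1: saddle point
```

The "borderline case" branch covers exactly the situations (det(A) = 0 or trace(A) = 0 with det(A) > 0) on which the proposition makes no claim.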

The general n-dimensional case can be systematically approached through the so-called Routh-Hurwitz criterion. This is based on the n × n matrix ((H ij)) built from (A.10) by setting
$$ H_{ij} = \left\{ {\begin{array}{*{20}c} {a_{2i - j} } \hfill & {{\text{if}}\;0 \leqslant 2i - j \leqslant n,} \hfill \\ 0 \hfill & {{\text{otherwise}}.} \hfill \\ \end{array} } \right. $$
Then we have

Theorem A.6. All the roots of the polynomial (A.10), with a 0 > 0, have negative real part if and only if the principal minors of ((H ij)) are all positive.

In the case n = 3 the Routh-Hurwitz matrix reads
$$ \left( {\begin{array}{*{20}c} {a_1 } \hfill & {a_0 } \hfill & 0 \hfill \\ {a_3 } \hfill & {a_2 } \hfill & {a_1 } \hfill \\ 0 \hfill & 0 \hfill & {a_3 } \hfill \\ \end{array} } \right) $$
so that we have the conditions
$$ \begin{array}{*{20}c} {a_0 > 0,} & {a_1 > 0,} & {a_3 > 0,} & {a_1 a_2 - a_0 a_3 > 0.} \\ \end{array} $$
Applying this criterion, a condition can be drawn based on the matrix itself. In fact for a 3 × 3 matrix A, the characteristic polynomial reads
$$ \lambda ^3 - {\text{trace(}}A{\text{)}}\lambda ^2 + \sum\limits_{i = 1}^3 {M_i \lambda - \det (A) = 0} $$
where M i are the three principal minors of order 2. Then, applying (A.11) we have
Proposition A.2. For a 3 × 3 matrix A, all eigenvalues have negative real part if and only if
$$ \begin{array}{*{20}c} {trace{\text{(}}A{\text{)}}\,\,{\text{ < }}\,{\text{0,}}} & {det(A) < 0,} & {trace{\text{(}}A{\text{)}}\,\sum\limits_{i = 1}^3 {M_i < \det (A)} } \\ \end{array} $$
where M i are the principal minors of order 2.
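For n = 3 the conditions (A.11) are easy to check mechanically; the following sketch (our own code, with hand-expanded test polynomials) encodes them as a predicate:

```python
# The n = 3 Routh-Hurwitz conditions (A.11) as a predicate (our sketch):
# all roots of a0*l^3 + a1*l^2 + a2*l + a3 have negative real part iff
# a0 > 0, a1 > 0, a3 > 0 and a1*a2 - a0*a3 > 0.

def routh_hurwitz_3(a0, a1, a2, a3):
    return a0 > 0 and a1 > 0 and a3 > 0 and a1 * a2 - a0 * a3 > 0

# (l+1)(l+2)(l+3) = l^3 + 6 l^2 + 11 l + 6: all roots are negative.
print(routh_hurwitz_3(1, 6, 11, 6))    # True
# (l-1)(l+2)(l+3) = l^3 + 4 l^2 + l - 6: the root l = 1 is positive.
print(routh_hurwitz_3(1, 4, 1, -6))    # False
```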
The following simple fact is sometimes useful when studying bifurcations (see next section) of 3-dimensional systems. Consider a 3rd-order equation
$$ \lambda ^3 + a_1 (\alpha )\lambda ^2 + a_2 (\alpha )\lambda + a_3 (\alpha ) = 0 $$
whose coefficients depend continuously on the parameter α.

Assume that the equation satisfies the conditions (A.11) for α < α* [or α > α*], while they are violated at α = α*. Then either a 3(α*) = 0, in which case 0 is a root of (A.12) at α = α*, or a 1(α*)a 2(α*) − a 3(α*) = 0, in which case (A.12) has two purely imaginary roots at α = α*.

The behavior of the solution of a linear system can also be described in the non autonomous case
$$ \begin{array}{*{20}c} {{\mathbf{Y'}}(t) = A(t){\mathbf{Y}}(t),} & {{\mathbf{Y}}(0) = {\mathbf{Y}}_0 } \\ \end{array} $$
where now the matrix A(t) is continuous and periodic in time
$$ A(t) = A(t + T) $$
where T is the period. In fact, also in this case we have a useful form for the fundamental solution (see [2]):
Theorem A.7 (Floquet Theorem). Let A(t) be continuous and periodic in time with period T; then there exist a matrix P(t), continuous and periodic in time with period T, and a constant matrix R, such that the fundamental solution of (A.13) is given by
$$ P(t)e^{tR} . $$
Thus the solution of (A.13) can be expressed as
$$ {\mathbf{Y}}(t) = P(t)e^{tR} {\mathbf{Y}}_0 $$
and the eigenvalues of the matrix R determine the asymptotic behavior. These eigenvalues z i are traditionally called the characteristic exponents of system (A.13), while the eigenvalues μ i of the matrix e TR are called the Floquet multipliers of the system. We have
$$ \mu _i = e^{Tz_i } . $$
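In the scalar case y′ = a(t)y the single Floquet multiplier reduces to μ = exp(∫₀ᵀ a(t) dt), which gives a cheap numerical check of the relation μ i = e^{T z i}; the coefficient a(t) = −1 + 2 cos t below is our own example, not from the text.

```python
import math

# Scalar Floquet sketch (our own example): for y' = a(t) y with period T,
# the unique multiplier is mu = exp(integral of a over one period).

def floquet_multiplier_scalar(a, T, n=2000):
    """Trapezoidal approximation of mu = exp(int_0^T a(t) dt)."""
    h = T / n
    s = 0.5 * (a(0.0) + a(T))
    for k in range(1, n):
        s += a(k * h)
    return math.exp(h * s)

a = lambda t: -1.0 + 2.0 * math.cos(t)   # T = 2*pi, mean value -1
mu = floquet_multiplier_scalar(a, 2 * math.pi)
print(mu)    # approximately exp(-2*pi); |mu| < 1, so the zero solution is stable
```

Since the cosine term averages to zero over one period, the multiplier is e^{−2π} < 1, even though a(t) > 0 on part of each period: stability is decided by the multiplier, not by the instantaneous sign of a(t).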

A.4 The non-linear case

The results of the previous section provide the key to approach the non-linear case. We consider the following autonomous system
$$ {\mathbf{Y'}}(t) = A{\mathbf{Y}}(t) + {\mathbf{G}}({\mathbf{Y}}(t)), $$
where A is an n × n matrix and G : ℝ^n → ℝ^n is continuously differentiable with
$$ \mathop {\lim }\limits_{{\mathbf{x}} \to {\mathbf{O}}} \frac{{\left| {{\mathbf{G}}({\mathbf{x}})} \right|}} {{\left| {\mathbf{x}} \right|}} = 0. $$

Then the origin O is an equilibrium for (A.14) and we have

Theorem A.8. Consider system (A.14) and let λ 1,⋯,λ p be the eigenvalues of the matrix A. We have
  • if ℜλ i < 0 for all i = 1,⋯,p, then the origin O is asymptotically stable;

  • if there exists k such that ℜλ k > 0, then the origin O is unstable.

This result can actually be applied to the stability analysis of equilibria of system (A.5): if Y* is such an equilibrium, we consider the deviation
$$ {\mathbf{W}}(t) = {\mathbf{Y}}(t,{\mathbf{Y}}_0 ) - {\mathbf{Y}}^* $$
which is a solution of a system of the form (A.14):
$$ \begin{array}{*{20}c} {{\mathbf{W'}}(t) = {\mathbf{JF}}({\mathbf{Y}}^* ){\mathbf{W}}(t) + {\mathbf{G}}({\mathbf{W}}(t)),} & {{\mathbf{W}}(0) = {\mathbf{Y}}_0 - {\mathbf{Y}}^* ,} \\ \end{array} $$
where JF(Y*) is the Jacobian matrix of F at the equilibrium and G(·) the remainder, satisfying (A.15). Indeed we have
Theorem A.9. Consider system (A.5) and an equilibrium Y*, and let λ 1,⋯,λ p be the eigenvalues of the Jacobian JF(Y*) at Y*. We have
  • if ℜλ i < 0 for all i = 1,⋯,p, then Y* is asymptotically stable;

  • if there exists k such that ℜλ k > 0, then Y* is unstable.

We stress the local nature of the previous result and also note that the critical cases, when one or more eigenvalues have null real part, are not decidable on the basis of the linearization.
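As an illustration of Theorem A.9 (the model and all numbers below are our own example, not from the text), consider a predator-prey system with logistic prey; the eigenvalues of the hand-computed Jacobian at the positive equilibrium follow from the quadratic formula.

```python
import cmath

# Our own example:  y1' = y1*(1 - y1) - y1*y2,  y2' = y1*y2 - 0.5*y2.
# The positive equilibrium is Y* = (0.5, 0.5).

def eig2(a, b, c, d):
    """Eigenvalues of [[a, b], [c, d]] via the characteristic polynomial."""
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

# Jacobian at (0.5, 0.5), computed by hand:
# dF1/dy1 = 1 - 2*y1 - y2 = -0.5,  dF1/dy2 = -y1 = -0.5,
# dF2/dy1 = y2 = 0.5,              dF2/dy2 = y1 - 0.5 = 0.
l1, l2 = eig2(-0.5, -0.5, 0.5, 0.0)
print(l1, l2)   # both real parts negative: Y* is asymptotically stable
```

Here the eigenvalues are complex conjugate with real part −1/4, so by Theorem A.9 the equilibrium is asymptotically stable (a stable focus, and solutions spiral into it).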

A few additional results are in order. First we have

Theorem A.10. If Y* is unstable with ℜλ i > 0 for all i = 1,⋯,p, then Y* is said to be completely unstable and there exists δ > 0 such that, for any Y 0 satisfying ǀY 0 − Y*ǀ < δ, there exists T > 0 such that
$$ \begin{array}{*{20}c} {\left| {{\mathbf{Y}}(t,{\mathbf{Y}}_0 ) - {\mathbf{Y}}^* } \right| > \delta } & {for\,all} & {t > T.} \\ \end{array} $$

Then we consider the case of a saddle point, i.e. the case when the Jacobian JF(Y*) has both eigenvalues with positive real part and eigenvalues with negative real part. We have

Theorem A.11. If Y* is a saddle point with no eigenvalues with null real part, then there exist two invariant differentiable manifolds \( \mathcal{M}^s (Y^* ) \) and \( \mathcal{M}^u (Y^* ) \) with the following properties
  • \( \mathcal{M}^s (Y^* ) \) and \( {\mathcal{M}^u}({Y^*}) \) are respectively tangent to E s and E u at Y*;

  • there exists a neighborhood \( \mathcal{U} \) of Y* such that
    $$ \begin{array}{*{20}c} {\mathcal{M}^s ({\mathbf{Y}}^* ) = \left\{ {\begin{array}{*{20}c} {\left. {{\mathbf{x}} \in \mathcal{U}} \right|\mathop {\lim }\limits_{t \to + \infty } {\mathbf{Y}}(t,{\mathbf{x}}) = {\mathbf{Y}}^* } & {and} & {{\mathbf{Y}}(t,{\mathbf{x}}) \in \mathcal{U}} & {for\,t \geqslant 0} \\ \end{array} } \right\},} \\ {\mathcal{M}^u ({\mathbf{Y}}^* ) = \left\{ {\begin{array}{*{20}c} {\left. {{\mathbf{x}} \in \mathcal{U}} \right|\mathop {\lim }\limits_{t \to - \infty } {\mathbf{Y}}(t,{\mathbf{x}}) = {\mathbf{Y}}^* } & {and} & {{\mathbf{Y}}(t,{\mathbf{x}}) \in \mathcal{U}} & {for\,t \leqslant 0} \\ \end{array} } \right\}.} \\ \end{array} $$
The two manifolds are respectively named stable manifold and unstable manifold. In Fig. A.4 a few examples are shown in the planar case to illustrate the previous concepts.
Fig. A.4

Phase plane pictures for nonlinear planar systems. (a) approximate picture of a completely unstable node; (b) approximate picture of a saddle point: here the two curves pointing to the equilibrium are the stable manifold, while the two curves exiting from the equilibrium are the unstable manifold

The method presented above, based on the linearization of the original problem to obtain results on the stability of equilibria, can be pushed forward to include stability of periodic orbits. In this case we consider a closed orbit Γ, parametrized through the periodic solution P(t), and take the deviation
$$ \begin{array}{*{20}c} {{\mathbf{W}}(t) = {\mathbf{Y}}(t,{\mathbf{Y}}_0 ) - {\mathbf{P}}(t),} & {{\mathbf{W}}(0) = {\mathbf{Y}}_0 - {\mathbf{P}}(0).} \\ \end{array} $$
Then linearization of the autonomous problem (A.5) leads to the non-autonomous linear problem
$$ \begin{array}{*{20}c} {{\mathbf{W'}}(t) = {\mathbf{JF}}({\mathbf{P}}(t)){\mathbf{W}}(t),} & {{\mathbf{W}}(0) = {\mathbf{Y}}_0 - {\mathbf{P}}(0),} \\ \end{array} $$
where the matrix JF(P(t)) is periodic. In this case we resort to Floquet theory (see Theorem A.7) and we need to consider the Floquet multipliers to discuss the stability of Γ. As W(t) represents the deviation from a periodic orbit, one of the multipliers is 1. Hence Γ is asymptotically stable if all the other multipliers μ i satisfy ǀμ iǀ < 1, and is unstable if at least one of them satisfies ǀμ iǀ > 1. Explicit formulae for the Floquet multipliers can rarely be obtained, except in some special cases such as the one treated in Sect. 9.2, and one generally has to rely on numerical computations or clever arguments.

A.5 Limit sets

The previous results concern the behavior of the solutions in a neighborhood of an equilibrium. Their asymptotic behavior is well determined only when the equilibrium is asymptotically stable. A basic concept for analyzing the behavior of any solution is its ω-limit set
$$ \omega ({\mathbf{x}}) = \{ {\mathbf{y}}\;|\;{\text{there exists}}\;\{ t_n \} ,\;t_n \to + \infty ,\;{\mathbf{Y}}(t_n ,{\mathbf{x}}) \to {\mathbf{y}}\} $$
whose properties are given in the following theorem.
Theorem A.12. Consider the solution Y(t,x) and suppose that its positive orbit
$$ O_ + ({\mathbf{x}}) = \{ {\mathbf{Y}}(t,x),t \geqslant 0\} $$
is bounded. Then
  • ω(x) ≠ ∅;

  • ω(x) is closed and connected;

  • ω (x) is invariant;

  • \(\mathop {\lim }\limits_{t \to + \infty } d\left( {Y(t,x),\omega (x)} \right) = 0;\)

where d(x,y) denotes the distance between x and y.

The last statement of the Theorem gives a precise account of how the solution tends to ω(x).

Special cases of ω-limit sets are
  • ω(x) = {Y*}, when Y* is an equilibrium and \(\mathop {\lim }\limits_{t \to + \infty } Y(t,x) = Y^*\);

  • ω(x) = O +(x), when Y(t,x) is a periodic solution (Y(t,x) = Y(t + T,x)).

The concept of ω-limit set concerns the solution as t→+∞, i.e. in the future. Of course we can consider the same concept as t →−∞ and define the α-limit set
$$ \alpha ({\mathbf{x}}) = \{ {\mathbf{y}}\;|\;{\text{there exists}}\;\{ t_n \} ,\;t_n \to - \infty ,\;{\mathbf{Y}}(t_n ,{\mathbf{x}}) \to {\mathbf{y}}\} , $$
concerning the solution in the past.

A useful result concerning limit sets is the following (see [9, Sect. 8.2]), known as the Butler-McGehee Lemma, which we state in a very simple form, originally due to Freedman and Waltman.

Proposition A.3. Let Y* be an isolated equilibrium of (A.5), and let Y* ∈ ω(x) but ω(x) ≠ {Y*}. Then ω(x) includes a point x 1 ∈ \( \mathcal{M}^s (Y^* ) \), x 1 ≠ Y* (hence the whole orbit through x 1), and a point x 2 ∈ \( \mathcal{M}^u (Y^* ) \), x 2 ≠ Y* (hence the whole orbit through x 2).

A.6 Planar case: Poincaré-Bendixson theory

In the case of planar systems
$$ \begin{array}{*{20}c} {y'_1 = F_1 (y_1 ,y_2 ),} \hfill \\ {y'_2 = F_2 (y_1 ,y_2 ),} \hfill \\ \end{array} $$
the analysis of the structure of the ω-limit sets is rather clear and is based on the following theorem.

Theorem A.13. Consider the autonomous system (A.18) and suppose that the orbit O +(Y 0) is bounded. If the ω-limit set ω(Y 0) does not contain any equilibrium point, then it is a periodic orbit.

This result has several consequences for planar systems. In fact we have

Theorem A.14. Consider the autonomous system (A.18) and suppose that there exists a finite number of equilibria. If the orbit O +(Y 0) is bounded then we have one of the following statements
  • ω(Y 0) is an equilibrium;

  • ω(Y 0) is a periodic orbit;

  • ω(Y 0) is a singular cycle, i.e. it is the union of a finite number of orbits joining equilibrium points as t → +∞ or as t → −∞ (either heteroclinic or homoclinic orbits).

The previous theorem restricts the possible outcomes and any information to exclude existence of periodic orbits is very useful. Indeed we have

Theorem A.15 (Bendixson-Dulac criterion). Consider system (A.18) and suppose that there exists a function L(y 1,y 2) such that
$$ \begin{array}{*{20}c} {\frac{\partial } {{\partial y_1 }}\left[ {L(y_1 ,y_2 )F_1 (y_1 ,y_2 )} \right] + \frac{\partial } {{\partial y_2 }}\left[ {L(y_1 ,y_2 )F_2 (y_1 ,y_2 )} \right] < 0,} \hfill & {(or > 0)} \hfill \\ \end{array} $$
in an open simply connected region Ω. Then no periodic orbits or singular cycles exist in Ω.
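A classical application (our own example, not from the text): for the competitive Lotka-Volterra system the Dulac function L = 1/(y₁y₂) yields divergence −a₁₁/y₂ − a₂₂/y₁ < 0 in the open positive quadrant, so no periodic orbits exist there. A quick finite-difference check of the sign:

```python
# Our example: competitive Lotka-Volterra with Dulac function L = 1/(y1*y2):
#   y1' = y1*(r1 - a11*y1 - a12*y2),  y2' = y2*(r2 - a21*y1 - a22*y2).
# Analytically the divergence of (L*F1, L*F2) is -a11/y2 - a22/y1 < 0.

r1, r2 = 1.0, 1.0
a11, a12, a21, a22 = 1.0, 0.5, 0.5, 1.0

def F1(y1, y2): return y1 * (r1 - a11 * y1 - a12 * y2)
def F2(y1, y2): return y2 * (r2 - a21 * y1 - a22 * y2)
def L(y1, y2): return 1.0 / (y1 * y2)

def dulac_divergence(y1, y2, h=1e-6):
    """Central-difference divergence of (L*F1, L*F2) at (y1, y2)."""
    d1 = (L(y1 + h, y2) * F1(y1 + h, y2) - L(y1 - h, y2) * F1(y1 - h, y2)) / (2 * h)
    d2 = (L(y1, y2 + h) * F2(y1, y2 + h) - L(y1, y2 - h) * F2(y1, y2 - h)) / (2 * h)
    return d1 + d2

for p in [(0.2, 0.3), (1.0, 1.0), (2.5, 0.4)]:
    print(p, dulac_divergence(*p))   # all values are negative
```

A numerical spot check of course proves nothing by itself; here it simply confirms the hand computation that makes Theorem A.15 applicable.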

Finally, a simple statement that may help is the following

Theorem A.16. For system (A.18), the region enclosed by a periodic orbit must contain at least one equilibrium.

A.7 Planar competitive and cooperative systems

A class of differential equations for which a large body of theory has been developed is that of cooperative or competitive systems. An extensive account of the theory, not limited to differential equations and including many results of interest for Population Dynamics, can be found in [8].

In the general case of problem (A.1) with (A.2) we have the following

Definition A.2. The system (A.1) with (A.2) is said to be competitive if
$$ \begin{array}{*{20}c} {\frac{{\partial F_i }} {{\partial x_j }} < 0} \hfill & {{\text{for}}} \hfill & {i \ne j} \hfill \\ \end{array} $$
it is said to be cooperative if, instead,
$$ \begin{array}{*{20}c} {\frac{{\partial F_i }} {{\partial x_j }} > 0} \hfill & {{\text{for}}} \hfill & {i \ne j} \hfill \\ \end{array} . $$

The class of competitive systems fits exactly the class of models designed to describe competition, as those we have considered in  Chap. 7.

We limit ourselves to planar systems, for which a simple result holds that allows one to determine the global asymptotic behavior of the solutions. In fact we have

Theorem A.17. For a planar competitive system, all the solutions are eventually monotone. Hence any bounded solution goes to an equilibrium point as t → +∞.

The same result holds for a cooperative system, which can in fact easily be changed into a competitive one through the transformation
$$ \begin{array}{*{20}c} {F_1 \to \tilde F_1 (x_1 ,x_2 ) = F_1 (x_1 , - x_2 ),} \hfill & {F_2 \to \tilde F_2 (x_1 ,x_2 ) = - F_2 (x_1 , - x_2 ).} \hfill \\ \end{array} $$

A.8 Lyapunov functions

A wide range of results concerning the qualitative behavior of solutions to Problem (A.5) rely on the use of Lyapunov functions, which provide an alternative method to investigate stability. To present the results we first consider a continuously differentiable function V(x) : ℝ^n → ℝ and, referring to (A.5), we define
$$ \dot V({\mathbf{x}}) = \dot V(x_1 , \ldots ,x_n ) = \sum\limits_{i = 1}^n {\frac{\partial } {{\partial x_i }}V(x_1 , \ldots ,x_n )F_i (x_1 , \ldots ,x_n ).} $$

Then we have

Theorem A.18 (Lyapunov theorem). Let Y* be an equilibrium for system (A.5), and suppose there exists a continuously differentiable function V(x), defined in a neighborhood \( \mathcal{U} \) of Y*, such that
  • V(Y*) = 0;

  • V(x) > 0 for x ∈ \( \mathcal{U} \) and x ≠ Y*;

  • \( \dot V({\mathbf{x}}) \leqslant 0 \) on \( \mathcal{U} \);

    then Y* is stable. If moreover

  • \( \dot V({\mathbf{x}}) < 0 \) for x ∈ \( \mathcal{U} \), x ≠ Y*,

    then Y* is asymptotically stable.
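A minimal worked example of Theorem A.18 (the linear system and the quadratic V below are our own choices, not from the text): for y₁′ = −y₁ + y₂, y₂′ = −y₁ − y₂ and V(x) = x₁² + x₂², the derivative of V along the flow is −2(x₁² + x₂²) < 0 away from the origin, so the origin is asymptotically stable.

```python
# Our example:  x1' = -x1 + x2,  x2' = -x1 - x2,  V(x) = x1^2 + x2^2.
# Along the flow: Vdot = 2*x1*(-x1 + x2) + 2*x2*(-x1 - x2) = -2*(x1^2 + x2^2).

def F(x1, x2):
    return (-x1 + x2, -x1 - x2)

def V(x1, x2):
    return x1 * x1 + x2 * x2

def Vdot(x1, x2):
    """Derivative of V along the vector field: grad V . F."""
    f1, f2 = F(x1, x2)
    return 2 * x1 * f1 + 2 * x2 * f2

for p in [(1.0, 0.0), (0.3, -0.7), (-2.0, 1.5)]:
    print(p, Vdot(*p))   # strictly negative at every point except the origin
```

Note that the cross terms cancel exactly, which is why a plain quadratic V works here; for nonlinear systems finding a Lyapunov function is the hard part of the method.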

Finally, information about the ω-limit set comes from the following result:

Theorem A.19 (La Salle theorem). Consider system (A.5) and suppose there exists a continuously differentiable function V(x), defined on a positively invariant region Ω and such that
$$ \begin{array}{*{20}c} {\dot V({\mathbf{x}}) \leqslant 0,} \hfill & {for\,all} \hfill & {{\mathbf{x}} \in \Omega .} \hfill \\ \end{array} $$
Then, for any Y 0 ∈ Ω, we have
$$ \omega ({\mathbf{Y}}_0 ) \subset \{ {\mathbf{x}} \in \Omega |\dot V({\mathbf{x}}) = 0\} . $$

A.9 Persistence

The theory, and the concept itself, of persistence has developed in the last few decades, and now constitutes an important method for analyzing population dynamics models. An extensive and clear account of the theory can be found in the book by Smith and Thieme [9]. Here we restrict ourselves to the very few results we used in this book, and that we feel essential. Consider a system
$$ \begin{array}{*{20}c} {{\mathbf{Y'}}(t) = {\mathbf{F}}({\mathbf{Y}}(t))} \hfill & {{\text{with}}} \hfill & {F_i ({\mathbf{Y}}) = y_i f_i ({\mathbf{Y}})} \hfill \\ \end{array} $$
and f i defined and regular on the non-negative orthant. Note that all the hyperplanes {y i = 0} are invariant for system (A.19).
Definition A.3. System (A.19) is persistent if there exists ε > 0 such that
$$ {\text{if}}\;{\mathbf{Y}}_0 \;{\text{satisfies}}\;\mathop {\min }\limits_{i = 1 \ldots n} (Y_0 )_i > 0,\;{\text{then}}\;\mathop {\lim \inf }\limits_{t \to \infty } y_i (t) \geqslant \varepsilon $$
where Y(t;Y 0) = (y 1(t),...,y n(t)) is the solution of (A.19) with Y(0) = Y 0.
It is weakly uniformly persistent if there exists ε > 0 such that
$$ {\text{if}}\;{\mathbf{Y}}_0 \;{\text{satisfies}}\;\mathop {\min }\limits_{i = 1 \ldots n} (Y_0 )_i > 0,\;{\text{then}}\;\mathop {\lim \sup }\limits_{t \to \infty } y_i (t) \geqslant \varepsilon . $$

We take from [9] two results simplified to suit system (A.19). Before stating the theorems, some definitions are needed.

Let X 0 be the boundary of \(\mathbb{R}_ + ^n\), i.e. the union of all coordinate hyperplanes. A set M ⊂ X 0 is named weakly repelling if there exists ε > 0 such that, for all strictly positive x,
$$ \mathop {\lim \sup }\limits_{t \to \infty } d(Y(t;x),M) \geqslant \varepsilon . $$

Note that, if M is a hyperbolic equilibrium, it is weakly repelling if its stable manifold has no intersection with the interior of \(\mathbb{R}_ + ^n\).

Finally, a collection {M 1,...,M n} of subsets of X 0 is cyclic if, after possibly renumbering the sets, there exist solutions y 12(t), y 23(t), ..., y n−1,n(t), y n,1(t) of (A.19) belonging to X 0 such that lim t→−∞ d(y ij,M i) = 0 and lim t→+∞ d(y ij,M j) = 0. If {M 1,...,M n} is not cyclic, it is acyclic.

Theorem A.20 (Theorem 8.17 in [9]). Let \( \cup _{x \in X_0 } \omega (x) = \cup _{i = 1}^n M_i \), where each M i is isolated, compact and weakly repelling. If {M 1,...,M n} is acyclic, then (A.19) is weakly uniformly persistent.

A second very useful result is the following

Theorem A.21 (Theorem 4.5 in [9]). Assume there exists a compact set B such that for each strictly positive x
$$ \mathop {\lim }\limits_{t \to \infty } d(Y(t;x),B) = 0. $$

Then if (A.19) is weakly uniformly persistent, it is persistent.

The two Theorems can be combined in the following result

Theorem A.22. Assume there exists a compact set B such that for each strictly positive x
$$ \mathop {\lim }\limits_{t \to \infty } d(Y(t;x),B) = 0. $$

Let \( \cup _{x \in X_0 } \omega (x) = \cup _{i = 1}^n M_i \), where each M i is isolated, compact and weakly repelling. If {M 1,...,M n} is acyclic, then (A.19) is persistent.

A.10 Elementary bifurcations

Bifurcation theory (here presented for ordinary differential equations, but similar concepts hold for much more general classes of equations) concerns families of differential equations depending on a parameter α (here taken in ℝ)
$$ {\mathbf{Y'}}(t) = {\mathbf{F}}(\alpha ,{\mathbf{Y}}(t)), $$
where Y(t) ∈ ℝ^n, and F(∙, ∙) is regular enough (we do not go into details here).

The theory examines the possible changes in the qualitative structure of the system, as the parameter α varies. We refer to [6] (more elementary and computational) or [10] for a comprehensive introduction to the topic. Here we present simply the recipes for the simplest (and common in mathematical biology) equilibrium bifurcations. Equilibrium bifurcation means qualitative changes that relate to changes in the properties of an equilibrium of system (A.20).

Corresponding to the family of differential equations (A.20), we will consider families of equilibria Y α depending on α; as seen above, their stability properties are determined by the signs of the real parts of the eigenvalues of the Jacobian matrix J(α) = JF(α, Y α). Neglecting cases where system (A.20) has eigenvalues with 0 real part for all α in an interval (bifurcation theory considers only generic properties, which intuitively means properties holding for almost all systems), changes in stability may occur only when one eigenvalue crosses the imaginary axis at some value α*.

We then consider three cases, discussed in greater detail below:
  • tangent bifurcation: J(α*) has eigenvalue 0 and \( F_\alpha (\alpha^*, Y_{\alpha^*}) \ne 0 \);

  • transcritical bifurcation: J(α*) has eigenvalue 0 but \( F_\alpha (\alpha^*, Y_{\alpha^*}) = 0 \);

  • Hopf bifurcation: J(α*) has eigenvalues ±iω (necessarily n ≥ 2).

Tangent bifurcation. A one dimensional prototype for this case is the equation
$$ y'(t) = F(\alpha ,y(t)) = \alpha - y^2 (t). $$
Here we have the two branches of equilibria
$$ y_ \pm ^ * (\alpha ) = \pm \sqrt \alpha $$
existing only for α ≥ 0. At α = 0 we have \( F_y (0,0) = 0 \) and \( F_\alpha (0,0) = 1 \); thus α = 0 is the bifurcation point, and for α > 0 we have \( F_y \left( {\alpha ,y_ \pm ^* (\alpha )} \right) = \mp 2\sqrt \alpha \), so that \( y_ + ^* (\alpha ) \) is asymptotically stable and \( y_ - ^* (\alpha ) \) unstable.

More generally, when some other technical conditions hold, (A.20) has two equilibria in a neighborhood U of \( Y_{\alpha^*} \) for α < α* and none for α > α*, or vice versa. Furthermore, one equilibrium will have k eigenvalues with negative real part (and n − k with positive), and the other one k − 1, with 1 ≤ k ≤ n; if k = n, one equilibrium is asymptotically stable, and the other one is unstable.
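The stability computation for the prototype (A.21) can be checked directly. The following sketch (a minimal illustration; the function names are our own) lists the equilibria ±√α and evaluates the derivative of F with respect to y at each of them:

```python
import math

def equilibria(alpha):
    """Equilibria of y' = alpha - y^2: y = ±sqrt(alpha), only for alpha >= 0."""
    if alpha < 0:
        return []                    # no equilibria before the bifurcation
    r = math.sqrt(alpha)
    return [r, -r]

def f_y(alpha, y):
    """Derivative of F(alpha, y) = alpha - y^2 with respect to y."""
    return -2.0 * y

# For alpha > 0 the derivative is negative at +sqrt(alpha) (stable)
# and positive at -sqrt(alpha) (unstable).
print(equilibria(-1.0))                               # []
print([(y, f_y(0.25, y)) for y in equilibria(0.25)])  # [(0.5, -1.0), (-0.5, 1.0)]
```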

Transcritical bifurcation. The one dimensional prototype in this case is
$$ y'(t) = (\alpha - y(t))y(t), $$
Fig. A.5

Tangent bifurcation: the prototype example (A.21). The two branches \( y_ + ^* (\alpha ) \) and \( y_ - ^* (\alpha ) \), originating at α = 0, are shown together with their stability

and we have the equilibria
$$ \begin{array}{*{20}c} {y_0^ * (\alpha ) = 0,} \hfill & {y^ * (\alpha ) = \alpha .} \hfill \\ \end{array} $$

Now, at α = 0 we have \( F_y (0,0) = 0 \), but also \( F_\alpha (0,0) = 0 \). For α < 0, the equilibrium \( y_0^* (\alpha ) \) is asymptotically stable because \( F_y (\alpha ,y_0^* (\alpha )) = \alpha \), and the equilibrium α is unstable because \( F_y (\alpha ,\alpha ) = -\alpha \); for α > 0, vice versa.

The fact that F α = 0 can be considered to be non-generic, but many classes of systems in mathematical population dynamics share the property that one state (that we call 0, thinking of population density) is an equilibrium for all parameter values. If the model has a biological interpretation, negative values are often not considered, so that the equilibrium α is neglected for α < 0. In this light, the transcritical bifurcation can be viewed not as an intersection of two equilibria that exchange their stability, but as the emergence of a new equilibrium at α = 0, as the equilibrium 0 loses its stability.

In this interpretation, one can distinguish a forward and a backward bifurcation; in the forward bifurcation (the case above), the new equilibrium exists when 0 is unstable and inherits its stability; a backward bifurcation is exemplified by the equation y′(t) = (α + y(t))y(t), in which a positive equilibrium −α exists (for α < 0) when the equilibrium 0 is asymptotically stable, and that equilibrium is unstable. We stress that this distinction depends on the choice of restricting the system to one side of the 0 equilibrium.

More generally, without pursuing this distinction (see, for instance, [1]), a transcritical bifurcation may occur in systems where a state, say O, is an equilibrium for all α. Assume then F(α, O) = 0 for all α, and assume that J(α) has eigenvalue 0 at α* and (to be definite) k eigenvalues with negative real part (and n − k with positive) for α < α*, and k − 1 eigenvalues with negative real part for α > α*. Under some other technical conditions, there exists another branch of equilibria \( Y_\alpha \) for α close to α*, crossing O at α*, such that J(α) has k − 1 eigenvalues with negative real part for α < α* and k for α > α*.
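For the prototype (A.22), the exchange of stability can be verified by evaluating \( F_y = \alpha - 2y \) at the two equilibria on either side of α = 0. A minimal sketch (function names are our own):

```python
def f_y(alpha, y):
    """Derivative of F(alpha, y) = (alpha - y) * y with respect to y."""
    return alpha - 2.0 * y

def linearizations(alpha):
    """F_y at the two equilibria y = 0 and y = alpha; a negative value
    means asymptotic stability, a positive value instability."""
    return {"y=0": f_y(alpha, 0.0), "y=alpha": f_y(alpha, alpha)}

print(linearizations(-1.0))  # {'y=0': -1.0, 'y=alpha': 1.0}: 0 stable, alpha unstable
print(linearizations(1.0))   # {'y=0': 1.0, 'y=alpha': -1.0}: roles exchanged
```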
Fig. A.6

Transcritical bifurcation: the prototype (A.22). Here the two branches \( y_0^* (\alpha ) \) and y*(α) exist for all values of α and cross at α = 0 exchanging their stability

Hopf bifurcation. In this case the prototype is two-dimensional:
$$ \left\{ {\begin{array}{*{20}c} {y'_1 = - y_2 + (\alpha \pm (y_1^2 + y_2^2 ))y_1 } \hfill \\ {y'_2 = y_1 + (\alpha \pm (y_1^2 + y_2^2 ))y_2 } \hfill \\ \end{array} } \right. $$

The origin (0,0) is an equilibrium for all values of α, and its Jacobian has eigenvalues α ± i; hence they have negative real part for α < 0 and positive real part for α > 0, while they are purely imaginary for α = 0.

It is convenient to look at the same system in polar coordinates
$$ \left\{ {\begin{array}{*{20}c} {\dot \rho = (\alpha \pm \rho ^2 )\rho } \hfill \\ {\dot \vartheta = 1} \hfill \\ \end{array} } \right. $$
In fact, in this way it is clear that the properties of the system depend crucially on whether there is a + or a − in front of ρ²; there are actually two types of Hopf bifurcation, supercritical (corresponding to the case with a −) and subcritical (corresponding to the case with a +). In the former case, (0,0) is asymptotically stable also for α = 0 and, for all α > 0, there exists an attracting periodic orbit
$$ \begin{array}{*{20}c} \hfill {\rho (t) \equiv \sqrt \alpha ,} & \hfill {\vartheta (t) = t,} \\ \end{array} $$
while there are no periodic orbits for α < 0. In the latter, (0,0) is unstable also for α = 0 and, for all α < 0, there exists a repulsive periodic orbit
$$ \begin{array}{*{20}c} \hfill {\rho (t) \equiv \sqrt { - \alpha } ,} & \hfill {\vartheta (t) = t} \\ \end{array} $$
while there are no periodic orbits for α > 0.
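The convergence to the periodic orbit in the supercritical case can be checked by integrating the radial equation ρ̇ = (α − ρ²)ρ numerically. The sketch below uses a plain forward-Euler scheme (step size and horizon are arbitrary choices for the example):

```python
import math

def integrate_rho(alpha, rho0=0.1, dt=1e-3, steps=200_000):
    """Forward-Euler integration of rho' = (alpha - rho^2) * rho,
    the radial equation of the supercritical Hopf prototype."""
    rho = rho0
    for _ in range(steps):
        rho += dt * (alpha - rho * rho) * rho
    return rho

# For alpha > 0 the radius settles on the attracting orbit rho = sqrt(alpha);
# for alpha < 0 it decays to the stable origin.
print(abs(integrate_rho(0.5) - math.sqrt(0.5)) < 1e-6)  # True
print(integrate_rho(-0.5) < 1e-6)                       # True
```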
In general, assume that there exists a branch of equilibria \( Y_\alpha \) of (A.20) such that the Jacobian has k eigenvalues with negative real part (and n − k with positive) for α < α*, while it has k − 2 with negative real part (and n + 2 − k with positive) for α > α*.
Fig. A.7

Hopf bifurcation for prototype (A.23) in the subcritical case. Here the origin (0,0) is an equilibrium for all values of α: (a) for α < 0 the origin is asymptotically stable and there exists a repulsive periodic orbit; (b) if α > 0 there are no periodic orbits close to the origin and the origin is repulsive. The radius of the periodic orbit, when it exists, is \( \rho (\alpha ) = \sqrt { - \alpha } \); it bifurcates from the origin as α crosses the value α = 0 backward

Fig. A.8

Hopf bifurcation for prototype (A.23) in the supercritical case. Here the origin (0,0) is an equilibrium for all values of α. For α < 0 (a) the origin is globally attractive. As α crosses forward the value α = 0, a periodic orbit bifurcates, with radius \( \rho (\alpha ) = \sqrt \alpha \), inheriting the stability of the origin (b)

Finally, assume that the Jacobian at α* has eigenvalues ±iω with ω > 0, while all other eigenvalues have non-zero real part. Necessarily it must be k ≥ 2.

Under some technical conditions, one can compute a quantity α₀ (see, for instance, (5.62) in [6]) such that, if α₀ ≠ 0, there exists a family of periodic orbits Γ(α) contracting to \( Y_{\alpha^*} \) as α approaches α*. If α₀ < 0 (supercritical Hopf bifurcation), Γ(α) exists for α > α* (and close to α*) and its stable manifold has dimension k (if k = n, i.e. \( Y_\alpha \) is asymptotically stable for α < α*, the periodic orbit is attracting). If α₀ > 0 (subcritical Hopf bifurcation), Γ(α) exists for α < α* (and close to α*) and its stable manifold has dimension k − 2 (thus, the periodic orbit is unstable).
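In practice, a candidate Hopf point is often located by scanning the parameter and watching the largest real part of the Jacobian eigenvalues change sign, which is the kind of scan that numerical bifurcation tools automate. A minimal sketch for the Jacobian of the prototype (A.23) at the origin:

```python
import numpy as np

def max_real_part(alpha):
    """Largest real part of the eigenvalues of the prototype's Jacobian
    at the origin, [[alpha, -1], [1, alpha]] (eigenvalues alpha ± i)."""
    J = np.array([[alpha, -1.0], [1.0, alpha]])
    return float(np.max(np.real(np.linalg.eigvals(J))))

# Scan alpha and report the first value where the largest real part is
# (numerically) non-negative: the candidate Hopf point.
alphas = np.linspace(-1.0, 1.0, 201)
crossing = next(a for a in alphas if max_real_part(a) >= -1e-12)
print(float(crossing))  # ≈ 0 (up to floating point)
```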

We end this section by remarking that all these results are local: they depend only on the values of the function in a neighborhood of the equilibria, and they describe the behavior of the solutions only in the neighborhood. This is the reason why in the text we generally relied on methods that could yield the global structure of the solutions. However, local bifurcation theory is a very effective method to explore (especially through numerical bifurcation tools such as AUTO or MATCONT) the (local) behavior of a system, and also to get some insights into the global behavior.


References

  1. Boldin, B.: Introducing a population into a steady community: The critical case, the center manifold, and the direction of bifurcation. SIAM J. Appl. Math. 66, 1424–1453 (2006)
  2. Brauer, F., Nohel, J.A.: The Qualitative Theory of Ordinary Differential Equations: An Introduction. Dover Publications, New York (1989)
  3. Coddington, E.A., Levinson, N.: Theory of Ordinary Differential Equations. McGraw-Hill, New York (1955)
  4. Hirsch, M.W., Smale, S.: Differential Equations, Dynamical Systems and Linear Algebra. Academic Press, New York (1974)
  5. Hirsch, M.W., Smale, S., Devaney, R.L.: Differential Equations, Dynamical Systems & an Introduction to Chaos. 2nd ed., Elsevier, New York (2004)
  6. Kuznetsov, Y.A.: Elements of Applied Bifurcation Theory. Springer, New York (2010)
  7. Perko, L.: Differential Equations and Dynamical Systems. 3rd ed., Springer, New York (1996)
  8. Smith, H.L.: Monotone Dynamical Systems: An Introduction to the Theory of Competitive and Cooperative Systems. Mathematical Surveys and Monographs 41, American Mathematical Society (2008)
  9. Smith, H.L., Thieme, H.R.: Dynamical Systems and Population Persistence. Graduate Studies in Mathematics 118, American Mathematical Society (2011)
  10. Wiggins, S.: Introduction to Applied Nonlinear Dynamical Systems and Chaos. Springer, New York (1990)

Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  • Mimmo Iannelli, Department of Mathematics, University of Trento, Italy
  • Andrea Pugliese, Department of Mathematics, University of Trento, Italy
