# Phase portraits, Lyapunov functions, and projective geometry

Lessons learned from a differential equations class and its aftermath

## Abstract

We discuss two problems which grew out of an introductory differential equations class but were solved only later, each after having been put into a different context. First, how do you find a rather complicated Lyapunov function with your bare hands, without using a fully developed theory (while reconstructing the steps leading up to such a theory)? Second, how can you obtain a global picture of the phase-portrait of a dynamical system (thereby invoking ideas from projective geometry)? Since classroom experiences played an important part in the making of this paper, didactical aspects will also be discussed.

## Introduction

In this paper we reap some late fruit from seeds sown about three years ago, in a class on “Ordinary Differential Equations and Dynamical Systems” which was taught by the second author and attended by the first author as a third-semester student. This class comprised 10 blocks of 45 minutes per week (usually 6 for lectures and 4 for lab sessions) and covered the following topics: elementary types of scalar differential equations, general formulation of systems of ordinary differential equations, existence and uniqueness theorems, maximal solution intervals, dependence of solutions on initial data and parameters, variational equations, systems of linear differential equations, state transition operator, qualitative theory: phase portraits, equilibria, isoclines, elementary stability theory, Lyapunov functions. (Essentially, chapters 117–124 and 134 from [21] were covered.) There were three written tests during the semester, focusing on solution techniques and computational skills, and an oral exam at the end of the semester, focusing on conceptual understanding. In the class, which was attended by about 25 students, a rather exciting and communicative atmosphere developed, with lots of questions, comments and ideas being brought up by the students. The two questions discussed in this paper came up in this class, but were settled only later, each after having been put into a different context. To convey a flavor of the way the class was taught, we present the topics in the style of the course, using an example-oriented approach, favoring explicit calculations before introducing more abstract points of view and using pictures and visualizations whenever possible. At the end of the paper, we comment on our classroom experiences with this approach and the didactical lessons learned.

## The search for a Lyapunov function

Consider the system

\begin{aligned}\displaystyle\dot{\xi}&\displaystyle=\ -2\xi+2\eta,\\ \displaystyle\dot{\eta}&\displaystyle=\ \eta^{2}-\xi^{4}.\end{aligned}
(1)

As a routine step to understand the qualitative behavior of this system, we first determine the nullclines and the equilibrium points. The $$\xi$$-nullcline is the line $$\eta=\xi$$, the $$\eta$$-nullcline is the union of the two parabolas $$\eta=\pm\xi^{2}$$, and the equilibria of the system, i.e., the points of intersection of these nullclines, are the points $$(-1,-1)$$, $$(0,0)$$ and $$(1,1)$$. Next, again as a routine step, we try to determine the character of each of these equilibria by linearizing the system about the point in question. Denoting the right-hand side of (1) by $$f(\xi,\eta)$$, we calculate the derivatives

\begin{aligned}(Df)(\xi,\eta)=\left[\begin{matrix}-2&2\\ -4\xi^{3}&2\eta\end{matrix}\right],\quad(Df)(-1,-1)=\left[\begin{matrix}-2&2 \\ 4&-2\end{matrix}\right],\\ (Df)(0,0)=\left[\begin{matrix}-2&2\\ 0&0\end{matrix}\right],\quad(Df)(1,1)=\left[\begin{matrix}-2&2\\ -4&2\end{matrix}\right].\end{aligned}
(2)

Since $$(Df)(-1,-1)$$ has a positive and a negative eigenvalue, we conclude that the equilibrium point $$(-1,-1)$$ is unstable for both the linearized and the original system. On the other hand, $$(Df)(0,0)$$ has the eigenvalues $$-2$$ and $$0$$, whereas $$(Df)(1,1)$$ has the two purely imaginary eigenvalues $$\pm 2i$$, so that the linearizations of $$f$$ around $$(0,0)$$ and $$(1,1)$$ do not help us to determine the character of these equilibrium points. However, completely elementary observations show that both $$(-1,-1)$$ and $$(0,0)$$ are unstable.
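As a quick plausibility check (our addition, not part of the original course material), the following sympy sketch recomputes the Jacobians (2) and their eigenvalues at the three equilibria:

```python
import sympy as sp

xi, eta = sp.symbols('xi eta', real=True)
f = sp.Matrix([-2*xi + 2*eta, eta**2 - xi**4])   # right-hand side of (1)
Df = f.jacobian([xi, eta])

def eigs(p, q):
    """Eigenvalues of the linearization at the equilibrium (p, q)."""
    return set(Df.subs({xi: p, eta: q}).eigenvals())

# (-1,-1): one positive and one negative real eigenvalue, hence a saddle
assert eigs(-1, -1) == {-2 + 2*sp.sqrt(2), -2 - 2*sp.sqrt(2)}
# (0,0): eigenvalues -2 and 0; (1,1): the purely imaginary pair +-2i
assert eigs(0, 0) == {-2, 0}
assert eigs(1, 1) == {2*sp.I, -2*sp.I}
```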

• The green area in Fig. 1 is forward-invariant. A trajectory $$t\mapsto\bigl(\xi(t),\eta(t)\bigr)$$ starting in this area satisfies $$\dot{\xi}(t)> 0$$ and $$\dot{\eta}(t)> 0$$ at all times $$t$$, so that the functions $$t\mapsto\xi(t)$$ and $$t\mapsto\eta(t)$$ are both strictly increasing and, due to invariance, bounded from above by zero. This implies that $$\lim_{t\rightarrow\infty}\bigl(\xi(t),\eta(t)\bigr)$$ exists and, being a limit of a trajectory, must be an equilibrium point, which can only be $$(0,0)$$. This shows that each trajectory starting in the green area tends to $$(0,0)$$ as $$t\rightarrow\infty$$. In particular, we can conclude that $$(-1,-1)$$ is unstable without having to invoke the linearization of the system around this point.

• The blue area in Fig. 1 is also forward-invariant. Arguing in a similar way as before, we conclude that if $$t\mapsto\bigl(\xi(t),\eta(t)\bigr)$$ is a trajectory starting in this area, we have $$\xi(t)\rightarrow-\infty$$ and $$\eta(t)\rightarrow-\infty$$ as $$t\rightarrow\infty$$.

• All trajectories originating in the gray triangle in Fig. 1 move upward and to the right and must either leave this triangle through the top or tend to $$(1,1)$$ as $$t\rightarrow\infty$$. Since this holds in particular for trajectories originating arbitrarily close to $$(0,0)$$, we conclude that this equilibrium point is unstable.

We can also conclude that there must be a trajectory $$t\mapsto\bigl(\xi(t),\eta(t)\bigr)$$ converging to $$(-1,-1)$$ as $$t\rightarrow\infty$$. Let us take a quick look at the kind of geometrical and topological reasoning necessary to establish this claim, as this kind of reasoning was greatly appreciated by the students as a contrast to analytically solving differential equations and as a first experience in deriving qualitative rather than quantitative information. (However, we emphasize that, in general, such elementary arguments do not suffice to obtain a full picture of the phase-portrait, but more sophisticated tools such as the Hartman-Grobman theorem, center manifold theory, the LaSalle invariance principle or Poincaré-Bendixson theory are required. From a didactical point of view, seeing the limitations of an elementary approach in concrete examples provides motivation to study more sophisticated methods.)

Write $$A:=(-1,-1)$$, pick any two points $$P_{1}$$ on $$C_{1}:=\{(\xi,\xi)\!\mid\!\xi<-1\}$$ and $$P_{2}$$ on $$C_{2}:=\{(\xi,-\xi^{2})\!\mid\!-1<\xi<0\}$$, consider the backward trajectories originating from $$P_{1}$$ and $$P_{2}$$, choose a line segment $$S=Q_{1}Q_{2}$$ connecting these trajectories and consider the open region $$\Omega$$ (shown yellow in Fig. 2) bounded by the segment $$S$$, the trajectories $$Q_{1}P_{1}$$ and $$Q_{2}P_{2}$$, the segment $$AP_{1}$$ on $$C_{1}$$ and the arc $$AP_{2}$$ on $$C_{2}$$. A trajectory starting in $$\Omega$$ can leave this region neither across $$S$$ (because of the direction of the flow) nor across $$Q_{1}P_{1}$$ or $$Q_{2}P_{2}$$ (because trajectories cannot intersect), hence must either remain in $$\Omega$$ for all times or leave $$\Omega$$ across $$AP_{1}$$ or $$AP_{2}$$. For $$p=(\xi_{0},\eta_{0})\in S$$, denote by $$\varphi_{p}(t)=\bigl(\xi(t),\eta(t)\bigr)$$ the trajectory starting at the point $$p$$ at time $$t=0$$. For $$i\in\{1,2\}$$, due to the continuous dependence of the solutions on the initial values, the set of all $$p\in S$$ for which $$\varphi_{p}$$ leaves $$\Omega$$ across $$AP_{i}$$ is open in $$S$$, and since $$S$$, being connected, cannot be written as a disjoint union of two open sets, there must be at least one point $$p_{0}\in S$$ such that $$\varphi_{p_{0}}$$ remains in $$\Omega$$ for all times. Write $$\varphi_{p_{0}}(t)=\bigl(\xi(t),\eta(t)\bigr)$$. Then $$\dot{\xi}(t)> 0$$ and $$\dot{\eta}(t)<0$$ for all times $$t$$, so that the functions $$t\mapsto\xi(t)$$ and $$t\mapsto\eta(t)$$ are both monotonic and bounded, hence must converge to a limit as $$t\rightarrow\infty$$. As before, the point $$\lim_{t\rightarrow\infty}\bigl(\xi(t),\eta(t)\bigr)$$ must be an equilibrium point, and the only one in question is $$A=(-1,-1)$$. This argument shows that there is a system trajectory tending towards $$(-1,-1)$$ from above. 
Similarly, there must be at least one system trajectory tending towards $$(-1,-1)$$ from below.

This leaves us with the equilibrium point $$(1,1)$$. The sketch of the phase-portrait in Fig. 1 suggests that this might be a stable focus into which nearby trajectories spiral as $$t\rightarrow\infty$$. This is confirmed by numerically plotting some trajectories (see Fig. 3), but numerical calculations cannot substitute for a formal proof. Thus this is the main question to be answered in this section: How can we formally prove that $$(1,1)$$ is a stable (even locally asymptotically stable) equilibrium? The canonical way is to construct a suitable Lyapunov function, and general arguments (see [8], Chapter VI, Theorem 51.2) show that such a Lyapunov function must indeed exist. (See [13] for a proof of the relevant theorem and [9] for historical remarks and a reference list concerning converse Lyapunov theorems.) However, these general arguments are non-constructive and give no hint as to how such a Lyapunov function can practically be obtained. So what do we do?

Since the right-hand side of the differential equation (1) is a polynomial function, it is plausible (but not always successful; see [1]) to try to find a Lyapunov function which is also polynomial (or at least analytic in a neighborhood of the equilibrium point). For simplicity’s sake, we change our coordinate system so as to make this equilibrium point the origin of the new coordinates. Substituting $$\xi=1+x$$ and $$\eta=1+y$$, the system (1) becomes

\begin{aligned}\displaystyle\dot{x}&\displaystyle=\ -2x+2y\ =:\ F(x,y),\\ \displaystyle\dot{y}&\displaystyle=\ -4x+2y-6x^{2}+y^{2}-4x^{3}-x^{4}\ =:\ G(x,y).\end{aligned}
(3)

We seek a Lyapunov function $$V$$ for the equilibrium point $$(0,0)$$ of the system (3), which means that $$V$$ satisfies $$V(0,0)=0$$ and has a strict local minimum at $$(0,0)$$ whereas $$W:=V_{x}\cdot F+V_{y}\cdot G$$ (where subscripts denote partial derivatives) has a local maximum at $$(0,0)$$ (which we prefer to be strict so that we can show local asymptotic stability rather than just stability). Moreover, $$V$$ should be analytic in a neighborhood of $$(0,0)$$. Let $$V^{(k)}$$ be the homogeneous part of $$V$$ of order $$k$$; then necessarily $$V^{(0)}=0$$ and $$V^{(1)}=0$$ so that $$V=\sum_{k\geq 2}V^{(k)}$$. Then $$W$$ is automatically also analytic, and we have $$W=\sum_{k\geq 2}W^{(k)}$$ where

\begin{aligned}\displaystyle W^{(2)}&\displaystyle=\ V^{(2)}_{x}F^{(1)}+V^{(2)}_{y}G^{(1)},\\ \displaystyle W^{(3)}&\displaystyle=\ V^{(3)}_{x}F^{(1)}+V^{(3)}_{y}G^{(1)}+V^{(2)}_{y}G^{(2)},\\ \displaystyle W^{(4)}&\displaystyle=\ V^{(4)}_{x}F^{(1)}+V^{(4)}_{y}G^{(1)}+V^{(3)}_{y}G^{(2)}+V^{(2)}_{y}G^{(3)},\\ \displaystyle W^{(5)}&\displaystyle=\ V^{(5)}_{x}F^{(1)}+V^{(5)}_{y}G^{(1)}+V^{(4)}_{y}G^{(2)}+V^{(3)}_{y}G^{(3)}+V^{(2)}_{y}G^{(4)},\\ \displaystyle W^{(6)}&\displaystyle=\ V^{(6)}_{x}F^{(1)}+V^{(6)}_{y}G^{(1)}+V^{(5)}_{y}G^{(2)}+V^{(4)}_{y}G^{(3)}+V^{(3)}_{y}G^{(4)},\end{aligned}
(4)

and so on; the general formula for $$m\geq 5$$ reads

$$W^{(m)}\ =\ V^{(m)}_{x}F^{(1)}+V^{(m)}_{y}G^{(1)}+V^{(m-1)}_{y}G^{(2)}+V^{(m-2)}_{y}G^{(3)}+V^{(m-3)}_{y}G^{(4)}.$$
(5)
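Bookkeeping identities such as (4) and (5) are easy to get wrong by hand. The following sympy sketch (our addition; the sample polynomial $$V$$ is an arbitrary choice of ours) confirms the pattern for the concrete $$F$$ and $$G$$ of (3):

```python
import sympy as sp

x, y = sp.symbols('x y')
F = -2*x + 2*y                                   # F of (3); purely linear, so F = F^(1)
G = -4*x + 2*y - 6*x**2 + y**2 - 4*x**3 - x**4   # G of (3)

def hom(p, k):
    """Homogeneous part of order k of a polynomial p in x and y."""
    P = sp.Poly(p, x, y)
    return sum(c * x**i * y**j for (i, j), c in P.terms() if i + j == k)

# sample V with parts of orders 2, 3 and 5 (its order-4 part is zero)
V = 2*x**2 - 2*x*y + y**2 + x**3 - x**2*y + 3*x*y**4
W = sp.expand(sp.diff(V, x)*F + sp.diff(V, y)*G)

# formula (5) for m = 5
rhs = (sp.diff(hom(V, 5), x)*F + sp.diff(hom(V, 5), y)*hom(G, 1)
       + sp.diff(hom(V, 4), y)*hom(G, 2)
       + sp.diff(hom(V, 3), y)*hom(G, 3)
       + sp.diff(hom(V, 2), y)*hom(G, 4))
assert sp.expand(hom(W, 5) - rhs) == 0
```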

(We note in passing that, for a general system $$\dot{z}=Az+\text{higher-order terms}$$ in $${\mathbb{R}}^{n}$$ with the origin as an equilibrium, the symmetric matrices $$P,Q\in{\mathbb{R}}^{n\times n}$$ with $$V^{(2)}(z)=z^{T}Pz$$ and $$W^{(2)}(z)=-z^{T}Qz$$ are related by the Lyapunov equation $$A^{T}P+PA+Q=0$$.) In our situation, the quadratic term $$V^{(2)}$$ takes the form

$$V^{(2)}(x,y)=Ax^{2}+2Bxy+Dy^{2}=\left[\begin{matrix}x&y\end{matrix}\right]\left[\begin{matrix}A&B\\ B&D\end{matrix}\right]\left[\begin{matrix}x\\ y\end{matrix}\right]$$
(6)

where the coefficient matrix is necessarily positive semidefinite, because $$V$$ has a local minimum at $$(0,0)$$. This means that $$A\geq 0$$, $$D\geq 0$$ and $$AD\geq B^{2}$$. Plugging (6) into the expression for $$W^{(2)}$$ given by (4), multiplying out and sorting terms results in

$$W^{(2)}(x,y)=4\cdot\bigl(-(A\!+\!2B)\,x^{2}+(A\!-\!2D)\,xy+(B\!+\!D)\,y^{2}\bigr)\,.$$
(7)
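The expansion leading to (7) can be confirmed mechanically; the following sympy check (our addition) recomputes $$W^{(2)}$$ from the ansatz (6):

```python
import sympy as sp

x, y, A, B, D = sp.symbols('x y A B D')
V2 = A*x**2 + 2*B*x*y + D*y**2        # quadratic ansatz (6)
F1 = -2*x + 2*y                       # linear part of F
G1 = -4*x + 2*y                       # linear part of G
W2 = sp.expand(sp.diff(V2, x)*F1 + sp.diff(V2, y)*G1)
claimed = sp.expand(4*(-(A + 2*B)*x**2 + (A - 2*D)*x*y + (B + D)*y**2))
assert W2 == claimed
```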

Now since $$W$$ is supposed to have a local maximum at $$(0,0)$$, the function $$W^{(2)}$$ must be negative semidefinite, which requires that $$A+2B\geq 0$$, $$B+D\leq 0$$ and

\begin{aligned}\displaystyle 0&\displaystyle\geq\ -\det\left[\begin{matrix}-2(A\!+\!2B)&A\!-\!2D\cr A\!-\!2D&2(B\!+\!D)\end{matrix}\right]\\ \displaystyle&\displaystyle=\ A^{2}+4AB+8B^{2}+8BD+4D^{2}\\ \displaystyle&\displaystyle=\ (A+2B)^{2}+4(B+D)^{2}.\end{aligned}
(8)

This inequality can only be satisfied (and only as an equality, never as a strict inequality) if $$B=-D$$ and $$A=-2B=2D$$. Plugging these conditions into the formula (6), we find that $$V^{(2)}(x,y)=D\cdot(2x^{2}-2xy+y^{2})$$. Since we want to reuse $$A,B,C,D$$ as variable names, we write $$\lambda$$ instead of $$D$$ and then have the following result: If there is a polynomial (or analytical) Lyapunov function $$V$$ at all, then its quadratic component is necessarily of the form

$$V^{(2)}(x,y)=\lambda\cdot(2x^{2}-2xy+y^{2})\quad\text{where}\ \lambda\geq 0,$$
(9)

and in this case we have $$W^{(2)}=0$$. But then for $$W$$ to be negative semidefinite, we must necessarily have $$W^{(3)}=0$$, because otherwise $$W^{(3)}$$ would be indefinite, implying that $$W$$ cannot have a local maximum at $$(0,0)$$. Now the condition $$W^{(3)}=0$$ determines $$V^{(3)}$$. In fact, $$V^{(3)}$$ has the general form

$$V^{(3)}(x,y)=Ax^{3}+Bx^{2}y+Cxy^{2}+Dy^{3}$$
(10)

where, of course, $$A,B,C,D$$ differ from the variables with the same names used before. Plugging (10) into the expression for $$W^{(3)}$$ in (4), multiplying out and sorting terms, we find that

\begin{aligned}\displaystyle(1/2)\cdot W^{(3)}(x,y)=&\displaystyle(-3A-2B+6\lambda)\,x^{3}+(3A-B-4C-6\lambda)\,x^{2}y\\ \displaystyle&\displaystyle+(2B+C-6D-\lambda)\,xy^{2}+(C+3D+\lambda)\,y^{3},\end{aligned}
(11)

and $$W^{(3)}=0$$ if and only if the four coefficients on the right-hand side of (11) all vanish, which means that $$A,B,C,D$$ are solutions of the system

$$\left[\begin{matrix}3 & 2 & 0 & 0\\ 3 & -1 & -4 & 0\\ 0 & 2 & 1 & -6 \\ 0 & 0 & 1 & 3 \end{matrix}\right]\left[\begin{matrix}A\\ B\\ C\\ D\end{matrix}\right]=\lambda\left[\begin{matrix}6 \\ 6 \\ 1 \\ -1\end{matrix}\right]\,.$$
(12)

Upon inversion, we find that

$$\left[\begin{matrix}A\\ B\\ C\\ D\end{matrix}\right]=\frac{\lambda}{3}\left[\begin{matrix}-5 & 6 & 8 & 16 \\ 9 & -9 & -12 & -24 \\ -6 & 6 & 9 & 18 \\ 2 & -2 & -3 & -5\end{matrix}\right]\left[\begin{matrix} 6 \\ 6 \\ 1 \\ -1\end{matrix}\right]=\frac{\lambda}{3}\left[\begin{matrix}-2\\ 12 \\ -9\\ 2\end{matrix}\right].$$
(13)
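Both the stated inverse and the resulting solution of (12) are easy to confirm with a computer algebra system; the following sympy sketch (our addition) does exactly that:

```python
import sympy as sp

lam = sp.Symbol('lambda')
M = sp.Matrix([[3,  2,  0,  0],
               [3, -1, -4,  0],
               [0,  2,  1, -6],
               [0,  0,  1,  3]])
b = lam * sp.Matrix([6, 6, 1, -1])

# the inverse claimed in (13)
Minv_claimed = sp.Rational(1, 3) * sp.Matrix([[-5,  6,   8,  16],
                                              [ 9, -9, -12, -24],
                                              [-6,  6,   9,  18],
                                              [ 2, -2,  -3,  -5]])
assert M.inv() == Minv_claimed

sol = (Minv_claimed * b).expand()
assert sol == ((lam/3) * sp.Matrix([-2, 12, -9, 2])).expand()
```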

To avoid fractions, we write $$\lambda=3\mu$$ and obtain the conditions

\begin{aligned}\displaystyle V^{(2)}(x,y)&\displaystyle=\ \mu\cdot(6x^{2}-6xy+3y^{2}),\\ \displaystyle V^{(3)}(x,y)&\displaystyle=\ \mu\cdot(-2x^{3}+12x^{2}y-9xy^{2}+2y^{3}),\end{aligned}
(14)

which guarantee that $$W^{(2)}=0$$ and $$W^{(3)}=0$$. Now $$V^{(4)}$$ has the general form

$$V^{(4)}(x,y)\ =\ Ax^{4}+Bx^{3}y+Cx^{2}y^{2}+Dxy^{3}+Ey^{4}.$$
(15)

(If $$\mu> 0$$ the function $$V$$ will always have a strict minimum at $$(0,0)$$, no matter how $$V^{(4)}$$ is chosen, whereas if $$\mu=0$$ we must in addition require that $$V^{(4)}$$ be positive semidefinite.) Plugging (15) into the expression for $$W^{(4)}$$ in (4), multiplying out and sorting terms, we find that

\begin{aligned}\displaystyle&\displaystyle-(1/2)\cdot W^{(4)}(x,y)\ =\ (4A\!+\!2B\!+\!24\mu)x^{4}-(4A\!-\!2B\!-\!4C\!+\!42\mu)x^{3}y\\ \displaystyle&\displaystyle-(3B\!-\!6D\!-\!12\mu)x^{2}y^{2}-(2C\!+\!2D\!-\!8E\!-\!9\mu)xy^{3}-(D\!+\!4E\!+\!3\mu)y^{4}.\end{aligned}
(16)

This must be positive semidefinite. Now a fourth-order form in two variables over the reals which is positive semidefinite is necessarily the product of two positive semidefinite quadratic forms; this follows by homogenization from the obvious fact that a fourth-order real polynomial in one variable which only takes nonnegative values is the product of two quadratic factors each of which is either irreducible over the reals or the square of a linear factor. Hence there must be numbers $$a,b,c,\alpha,\beta,\gamma$$ with $$a,c,\alpha,\gamma\geq 0$$, $$ac\geq b^{2}$$ and $$\alpha\gamma\geq\beta^{2}$$ such that

\begin{aligned}\displaystyle&\displaystyle-(1/2)W^{(4)}(x,y)\ =\ (ax^{2}+2bxy+cy^{2})(\alpha x^{2}+2\beta xy+\gamma y^{2})\ =\\ \displaystyle&\displaystyle a\alpha\,x^{4}+2(a\beta\!+\!b\alpha)\,x^{3}y+(a\gamma\!+\!c\alpha\!+\!4b\beta)\,x^{2}y^{2}+2(b\gamma\!+\!c\beta)\,xy^{3}+c\gamma\,y^{4},\end{aligned}
(17)

and if we choose $$a,b,c,\alpha,\beta,\gamma$$ such that the strict inequalities $$a,c,\alpha,\gamma> 0$$, $$ac> b^{2}$$ and $$\alpha\gamma> \beta^{2}$$ hold, then $$W$$ is guaranteed to have a strict maximum at $$(0,0)$$, independently of any higher-order terms. Comparing coefficients between (16) and (17), we see that $$A,B,C,D,E$$ must be solutions of the linear system

$$\underbrace{\left[\begin{matrix} 4 & 2& 0& 0& 0\\ 2&-1&-2& 0& 0\\ 0& 3& 0&-6& 0\\ 0& 0& 2& 2&-8\\ 0& 0& 0& 1& 4\end{matrix}\right]}_{{=:M}}\left[\begin{matrix}A\\ B\\ C\\ D\\ E\end{matrix}\right]\ =\ \underbrace{\left[\begin{matrix}-24\mu+a\alpha\\ -21\mu-(a\beta+\alpha b)\\ 12\mu-(a\gamma+c\alpha+4b\beta)\\ 9\mu-2(b\gamma+c\beta)\\ -3\mu-c\gamma\end{matrix}\right]}_{{=:w}}.$$
(18)

This system is solvable only if $$w\in\text{im}(M)=\text{ker}(M^{T})^{\perp}=(-3,6,4,6,12)^{\perp}$$, i.e., if the equation $$-3w_{1}+6w_{2}+4w_{3}+6w_{4}+12w_{5}=0$$ holds, which means that $$12\mu=3a\alpha+6(a\beta+\alpha b)+4(a\gamma+c\alpha+4b\beta)+12(b\gamma+c\beta)+12c\gamma$$, or equivalently

$$12\mu\ =\ \left[\begin{matrix}a&b&c\end{matrix}\right]\left[\begin{matrix}3&6&4\\ 6&16&12\\ 4&12&12\end{matrix}\right]\left[\begin{matrix}\alpha\\ \beta\\ \gamma\end{matrix}\right]\ =\ u^{T}\left[\begin{matrix}3&6&4\\ 6&16&12\\ 4&12&12\end{matrix}\right]v$$
(19)

where $$u:=(a,b,c)^{T}$$ and $$v:=(\alpha,\beta,\gamma)^{T}$$. A straightforward Gaussian elimination then yields $$A,B,C,D,E$$ in terms of $$\mu$$, $$u$$ and $$v$$. In summary, we arrive at the equations

\begin{aligned}\displaystyle\mu&\displaystyle=\ {1\over{12}}\,u^{T}\!\left[\begin{matrix}3&6&4\\ 6&16&12\\ 4&12&12\end{matrix}\right]\!v,\\ \displaystyle A&\displaystyle=\ 4E-4\mu-{1\over 6}\,u^{T}\!\left[\begin{matrix}0&\!3&\!1\\ 3&\!4&\!6\\ 1&\!6&\!0\end{matrix}\right]\!v\ =\ 4E-{1\over 2}\,u^{T}\!\left[\begin{matrix}2&\!5&\!3\\ 5&\!12&\!10\\ 3&\!10&\!8\end{matrix}\right]\!v,\\ \displaystyle B&\displaystyle=\ -8E-4\mu+{1\over 6}\,u^{T}\!\left[\begin{matrix}3&\!6&\!2\\ 6&\!8&\!12\\ 2&\!12&\!0\end{matrix}\right]\!v\ =\ -8E-{1\over 2}\,u^{T}\!\left[\begin{matrix}1&\!2&\!2\\ 2&\!8&\!4\\ 2&\!4&\!8\end{matrix}\right]\!v,\\ \displaystyle C&\displaystyle=\ 8E+{{17}\over 2}\mu-{1\over{12}}\,u^{T}\!\left[\begin{matrix}3&\!6&\!4\\ 6&\!16&\!24\\ 4&\!24&\!0\end{matrix}\right]\!v\ =\ 8E+{1\over 8}\,u^{T}\!\left[\begin{matrix}15&\!30&\!20\\ 30&\!80&\!52\\ 20&\!52&\!68\end{matrix}\right]\!v,\\ \displaystyle D&\displaystyle=\ -4E-4\mu+{1\over{12}}\,u^{T}\!\left[\begin{matrix}3&\!6&\!4\\ 6&\!16&\!12\\ 4&\!12&\!0\end{matrix}\right]\!v\ =\ -4E-{1\over 4}\,u^{T}\!\left[\begin{matrix}3&\!6&\!4\\ 6&\!16&\!12\\ 4&\!12&\!16\end{matrix}\right]\!v\end{aligned}
(20)

in which $$E$$ can be arbitrarily chosen, reflecting the fact that $$M$$ has the one-dimensional kernel $$\text{ker}(M)={\mathbb{R}}\,(4,-8,8,-4,1)^{T}$$. Now it is obvious that the number $$\mu$$ defined by (19) is automatically nonnegative if $$a,b,c,\alpha,\beta,\gamma\geq 0$$. Moreover, since the matrix occurring in (19) is positive definite (with eigenvalues $$0<\lambda_{1}<\lambda_{2}<\lambda_{3}$$ given by $$\lambda_{1}\approx 0.492$$, $$\lambda_{2}\approx 2.304$$ and $$\lambda_{3}\approx 28.204$$), we conclude that $$\mu$$ is necessarily positive if $$(\alpha,\beta,\gamma)=(a,b,c)\not=(0,0,0)$$ even if $$\beta=b<0$$. However, something much more general is true: Whenever the conditions $$a,c,\alpha,\gamma\geq 0$$, $$ac\geq b^{2}$$ and $$\alpha\gamma\geq\beta^{2}$$ are satisfied, then the number $$\mu$$ defined by (19) is automatically nonnegative, and it is strictly positive unless $$(a,b,c)=(0,0,0)$$ or $$(\alpha,\beta,\gamma)=(0,0,0)$$. (This can be established by denoting the right-hand side of (19) by $$f(a,b,c,\alpha,\beta,\gamma)$$ and then determining the minimum of $$f$$ under the constraints $$a,c,\alpha,\gamma\geq 0$$, $$ac\geq b^{2}$$ and $$\alpha\gamma\geq\beta^{2}$$, distinguishing various cases as to which of the constraints are active.) Thus we arrive at the following result (which, as we emphasize, is of a purely local character).
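The positivity claim just made also lends itself to a quick numerical spot-check. The following Python sketch (our addition; the sampling scheme is our own choice) draws random pairs of positive semidefinite coefficient triples and confirms that the $$\mu$$ of (19) comes out positive every time:

```python
# Spot-check: for nonzero PSD triples (a, b, c) and (alpha, beta, gamma),
# the value mu from (19) is positive.  (The sampling scheme is ours.)
import random

M = [[3, 6, 4], [6, 16, 12], [4, 12, 12]]   # the matrix from (19)

def mu(u, v):
    """mu = (1/12) * u^T M v as in (19)."""
    return sum(u[i] * M[i][j] * v[j] for i in range(3) for j in range(3)) / 12.0

def random_psd_triple():
    """(a, b, c) with a, c > 0 and a*c >= b^2: a = p^2, c = q^2, |b| <= p*q."""
    p = random.uniform(0.1, 3.0)
    q = random.uniform(0.1, 3.0)
    return (p * p, random.uniform(-p * q, p * q), q * q)

random.seed(0)
assert all(mu(random_psd_triple(), random_psd_triple()) > 0 for _ in range(1000))
```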

### Proposition

If $$V$$ is an analytic Lyapunov function for the equilibrium point $$(0,0)$$ of Eq. (3), then there are numbers $$a$$, $$b$$, $$c$$, $$\alpha$$, $$\beta$$, $$\gamma$$ satisfying the inequalities $$a,c,\alpha,\gamma\geq 0$$, $$ac\geq b^{2}$$ and $$\alpha\gamma\geq\beta^{2}$$ and a real number $$E$$ such that

\begin{aligned}\displaystyle V^{(2)}(x,y)&\displaystyle=\ \mu\cdot(6x^{2}-6xy+3y^{2}),\\ \displaystyle V^{(3)}(x,y)&\displaystyle=\ \mu\cdot(-2x^{3}+12x^{2}y-9xy^{2}+2y^{3}),\\ \displaystyle V^{(4)}(x,y)&\displaystyle=\ Ax^{4}+Bx^{3}y+Cx^{2}y^{2}+Dxy^{3}+Ey^{4}\end{aligned}
(21)

where $$\mu,A,B,C,D$$ are determined from (20). Conversely, if $$a$$, $$b$$, $$c$$, $$\alpha$$, $$\beta$$, $$\gamma$$ are any numbers satisfying the strict inequalities $$a,c,\alpha,\gamma> 0$$, $$ac> b^{2}$$ and $$\alpha\gamma> \beta^{2}$$, if $$E$$ is an arbitrary real number and if $$\mu,A,B,C,D$$ are determined from (20), then any analytic function $$V=\sum_{k\geq 2}V^{(k)}$$ satisfying (21) is a strict Lyapunov function.

We established much more than we needed: All that we required for our purposes was a single strict Lyapunov function, whereas we obtained an almost complete overview of all possible Lyapunov functions which start with quadratic terms, and the result obtained makes it clear that there are infinitely many essentially different such functions. (Additional analysis is required if we want to find Lyapunov functions starting with higher-order terms.) To make a specific choice which avoids fractions in the coefficients, we let $$E=0$$, $$(a,b,c)=(4,0,4)$$ and $$(\alpha,\beta,\gamma)=(6,0,6)$$, which yields the coefficients $$\mu=46$$ and $$(A,B,C,D)=(-192,\,-156,\,369,\,-162)$$ and hence the Lyapunov function

\begin{aligned}\displaystyle V(x,y)\ = &\displaystyle 138\cdot(2x^{2}-2xy+y^{2})\\ \displaystyle&\displaystyle+46\cdot(-2x^{3}+12x^{2}y-9xy^{2}+2y^{3})\\ \displaystyle&\displaystyle-192x^{4}-156x^{3}y+369x^{2}y^{2}-162xy^{3}\end{aligned}
(22)

to which arbitrary terms of order five and higher may be added without losing the property of being a strict Lyapunov function for the equilibrium point $$(0,0)$$. We note that this rather strong result was obtained by using straightforward calculations, without invoking any theory. Not being specialists in Lyapunov theory, we found out only later that this approach (which is clearly based on positivity properties of polynomials and analytic functions) was already used by Lyapunov (see [10], §37, pp. 108-115) and can be developed into a theory in which methods from algebraic geometry are used to systematically find Lyapunov functions. (Cf. [12, 14, 17, 18]; also see [4] and [16] for relevant background.) Nevertheless, we feel that the purely computational approach described here (and used only in a specific example) has its merits, and at the end of this paper we will make some didactical remarks concerning the relation between concrete calculations and abstract reasoning and also the relation between specific examples and general theories.
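Although the derivation above already proves the result, it is reassuring to let a computer algebra system confirm the properties of the specific function (22). The following sympy sketch (our addition) checks that the quadratic and cubic parts of the orbital derivative $$W$$ vanish and that its quartic part works out (by our computation) to the negative definite form $$-48(x^{2}+y^{2})^{2}$$:

```python
import sympy as sp

x, y = sp.symbols('x y')
F = -2*x + 2*y
G = -4*x + 2*y - 6*x**2 + y**2 - 4*x**3 - x**4
V = (138*(2*x**2 - 2*x*y + y**2)
     + 46*(-2*x**3 + 12*x**2*y - 9*x*y**2 + 2*y**3)
     - 192*x**4 - 156*x**3*y + 369*x**2*y**2 - 162*x*y**3)
W = sp.expand(sp.diff(V, x)*F + sp.diff(V, y)*G)   # orbital derivative of (22)

def hom(p, k):
    """Homogeneous part of order k of a polynomial p in x and y."""
    P = sp.Poly(p, x, y)
    return sum(c * x**i * y**j for (i, j), c in P.terms() if i + j == k)

assert hom(W, 2) == 0 and hom(W, 3) == 0
assert sp.expand(hom(W, 4) + 48*(x**2 + y**2)**2) == 0
```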

## Phase portraits and projective geometry

As is readily checked, the system

\begin{aligned}\displaystyle\dot{x}&\displaystyle=\ 4y,\\ \displaystyle\dot{y}&\displaystyle=\ -x-x^{2}-2y\end{aligned}
(23)

possesses two equilibrium points, namely $$(-1,0)$$ and $$(0,0)$$. As opposed to the previous example, the character of these equilibrium points can be easily determined from the associated linearizations. Denoting the right-hand side of (23) by $$f(x,y)$$, we find that $$(Df)(-1,0)$$ has the eigenvalues $$-1\pm\sqrt{5}$$, which shows that $$(-1,0)$$ is a saddle, whereas $$(Df)(0,0)$$ has the eigenvalues $$-1\pm i\sqrt{3}$$, which shows that $$(0,0)$$ is a stable focus. Thus there is no problem in drawing the phase-portrait, and this was done by students in the lab sessions of the above-mentioned class, using both hand drawings and computer plots. However, the phase-portraits obtained looked rather different, depending on the chosen scale. For example, the domain of attraction of the equilibrium point $$(0,0)$$ looks rather big in the image section shown in the upper left corner of Fig. 4, but relatively small in the image section shown in the lower right corner of this figure. Thus the natural question arose how one can be sure that a phase-portrait, once obtained, does not change qualitatively upon simply zooming out. In short: How does one get the “big picture”?

The answer to this question, formulated sloppily, is the following: You see more if you look infinitely far. Thus one complements the phase plane by points at infinity and extends the system to this extended plane. There are different ways of doing so (see [5]); we choose here the Poincaré compactification of the plane, also known as the oriented projective plane (see [22]), which gives us a chance to bring some attention to projective geometry, one of the most beautiful creations of 19th century mathematics which, unfortunately, has virtually disappeared from the list of core topics of most mathematics degree programs. The connection between phase-portrait analysis and projective geometry is not covered in most textbooks; the only exceptions we are aware of are the graduate-level text [7], the monograph [8] and the two American textbooks [11] and [15]. Thus it may be no waste of time to discuss this topic here. Let us consider a system of differential equations

\begin{aligned}\displaystyle\dot{x}&\displaystyle=\ P(x,y)\\ \displaystyle\dot{y}&\displaystyle=\ Q(x,y)\end{aligned}
(24)

with polynomials $$P$$ and $$Q$$. We identify the $$xy$$-plane with the plane $$\{(x,y,1)\in{\mathbb{R}}^{3}\mid x,y\in{\mathbb{R}}\}$$. Given $$(x,y)$$, the line joining $$(0,0,0)$$ and $$(x,y,1)$$ intersects the sphere $$X^{2}+Y^{2}+Z^{2}=1$$ in two antipodal points; the one of these two points satisfying $$Z> 0$$ is

$$\left[\begin{matrix}X\\ Y\\ Z\end{matrix}\right]\ =\ {1\over{\sqrt{1+x^{2}+y^{2}}}}\left[\begin{matrix}x\\ y\\ 1\end{matrix}\right],$$
(25)

and the equations

$$x={X\over Z},\quad y={Y\over Z}$$
(26)

show how conversely $$(x,y)$$ can be reconstructed from $$(X,Y,Z)$$. The further away $$(x,y)$$ is from $$(0,0)$$, the closer $$(X,Y,Z)$$ is to the equator, and thus we interpret points on the equator as points at infinity. Now just like one forms the projective closure of an algebraic curve in algebraic geometry, one can ask which points at infinity lie on a solution curve of a system of differential equations. If $$t\mapsto\bigl(x(t),y(t)\bigr)$$ is a trajectory of (24), then the corresponding image curve $$t\mapsto\bigl(X(t),Y(t),Z(t)\bigr)$$ satisfies the following system of differential equations:

\begin{aligned}\displaystyle\dot{X}&\displaystyle=\ {{\dot{x}(1\!+\!y^{2})-xy\dot{y}}\over{(1\!+\!x^{2}\!+\!y^{2})^{3/2}}}\ =\ Z^{3}\cdot\left[P\left({X\over Z},{Y\over Z}\right)\cdot{{Z^{2}\!+\!Y^{2}}\over{Z^{2}}}-{{XY}\over{Z^{2}}}\cdot Q\left({X\over Z},{Y\over Z}\right)\right]\\ \displaystyle&\displaystyle=\ Z\cdot\left[(Y^{2}\!+\!Z^{2})\cdot P\left({X\over Z},{Y\over Z}\right)-XY\cdot Q\left({X\over Z},{Y\over Z}\right)\right]\!,\\ \displaystyle\dot{Y}&\displaystyle=\ {{\dot{y}(1\!+\!x^{2})-xy\dot{x}}\over{(1\!+\!x^{2}\!+\!y^{2})^{3/2}}}\ =\ Z^{3}\cdot\left[Q\left({X\over Z},{Y\over Z}\right)\cdot{{Z^{2}\!+\!X^{2}}\over{Z^{2}}}-{{XY}\over{Z^{2}}}\cdot P\left({X\over Z},{Y\over Z}\right)\right]\\ \displaystyle&\displaystyle=\ Z\cdot\left[(X^{2}\!+\!Z^{2})\cdot Q\left({X\over Z},{Y\over Z}\right)-XY\cdot P\left({X\over Z},{Y\over Z}\right)\right]\!,\\ \displaystyle\dot{Z}&\displaystyle=\ {{-x\dot{x}-y\dot{y}}\over{(1\!+\!x^{2}\!+\!y^{2})^{3/2}}}\ =\ -Z^{3}\cdot\left[P\left({X\over Z},{Y\over Z}\right)\cdot{X\over Z}+Q\left({X\over Z},{Y\over Z}\right)\cdot{Y\over Z}\right]\\ \displaystyle&\displaystyle=\ -Z^{2}\cdot\left[X\cdot P\left({X\over Z},{Y\over Z}\right)+Y\cdot Q\left({X\over Z},{Y\over Z}\right)\right]\!.\end{aligned}
(27)

This can be easily seen by first taking derivatives in (25), then plugging in the system equations (24) and finally expressing $$x,y$$ in terms of $$X,Y,Z$$ using (26). Now let $$m$$ be the highest degree of any of the monomials constituting $$P$$ and $$Q$$. Concretely: combining terms of equal order in the expressions for $$P$$ and $$Q$$, we obtain representations

\begin{aligned}\displaystyle P(x,y)&\displaystyle=\ P_{0}(x,y)+P_{1}(x,y)+\cdots+P_{m-1}(x,y)+P_{m}(x,y),\\ \displaystyle Q(x,y)&\displaystyle=\ Q_{0}(x,y)+Q_{1}(x,y)+\cdots+Q_{m-1}(x,y)+Q_{m}(x,y),\end{aligned}
(28)

where at least one of the polynomials $$P_{m}$$ and $$Q_{m}$$ is different from zero. If we now define

\begin{aligned}\displaystyle P^{\star}(X,Y,Z)&\displaystyle:=Z^{m}\cdot P\left({X\over Z},{Y\over Z}\right)\quad\text{and}\\ \displaystyle Q^{\star}(X,Y,Z)&\displaystyle:=Z^{m}\cdot Q\left({X\over Z},{Y\over Z}\right),\end{aligned}
(29)

the equations (27) take the form

\begin{aligned}\displaystyle\dot{X}&\displaystyle=\ {{(Y^{2}+Z^{2})\cdot P^{\star}(X,Y,Z)-XY\cdot Q^{\star}(X,Y,Z)}\over{Z^{m-1}}},\\ \displaystyle\dot{Y}&\displaystyle=\ {{(X^{2}+Z^{2})\cdot Q^{\star}(X,Y,Z)-XY\cdot P^{\star}(X,Y,Z)}\over{Z^{m-1}}},\\ \displaystyle\dot{Z}&\displaystyle=\ -{{X\cdot P^{\star}(X,Y,Z)+Y\cdot Q^{\star}(X,Y,Z)}\over{Z^{m-2}}}.\end{aligned}
(30)

We see that the factor $$1/Z^{m-1}$$ occurs in all three expressions on the right-hand side of (30). To eliminate this factor, we carry out a substitution $$\tau:=\tau(t)$$ such that

$${{{\mathrm{d}}\tau}\over{{\mathrm{d}}t}}\ =\ {1\over{Z(t)^{m-1}}}$$
(31)

and treat $$\tau$$ as a new time variable. Intuitively this means that we change the speed with which the solution curves of (30) are traversed: The smaller $$Z$$ is (i.e., the further out towards infinity the original curve runs), the faster the new time $$\tau$$ advances relative to $$t$$, so that, measured in $$\tau$$, the motion slows down near the equator. Denoting derivatives with respect to $$\tau$$ by primes (whereas derivatives with respect to $$t$$ are denoted by dots) and applying the chain rule, we find that

\begin{aligned}\displaystyle\dot{X}&\displaystyle=\ {{{\mathrm{d}}X}\over{{\mathrm{d}}t}}\ =\ {{{\mathrm{d}}X}\over{{\mathrm{d}}\tau}}\cdot{{{\mathrm{d}}\tau}\over{{\mathrm{d}}t}}\ =\ X^{\prime}\cdot{1\over{Z^{m-1}}},\\ \displaystyle\dot{Y}&\displaystyle=\ {{{\mathrm{d}}Y}\over{{\mathrm{d}}t}}\ =\ {{{\mathrm{d}}Y}\over{{\mathrm{d}}\tau}}\cdot{{{\mathrm{d}}\tau}\over{{\mathrm{d}}t}}\ =\ Y^{\prime}\cdot{1\over{Z^{m-1}}},\cr\displaystyle\dot{Z}&\displaystyle=\ {{{\mathrm{d}}Z}\over{{\mathrm{d}}t}}\ =\ {{{\mathrm{d}}Z}\over{{\mathrm{d}}\tau}}\cdot{{{\mathrm{d}}\tau}\over{{\mathrm{d}}t}}\ =\ Z^{\prime}\cdot{1\over{Z^{m-1}}}.\end{aligned}
(32)

Thus the equations (30) become

\begin{aligned}\displaystyle X^{\prime}&\displaystyle=\ (Y^{2}+Z^{2})\cdot P^{\star}(X,Y,Z)-XY\cdot Q^{\star}(X,Y,Z),\\ \displaystyle Y^{\prime}&\displaystyle=\ (X^{2}+Z^{2})\cdot Q^{\star}(X,Y,Z)-XY\cdot P^{\star}(X,Y,Z),\\ \displaystyle Z^{\prime}&\displaystyle=\ -Z\cdot\bigl(X\cdot P^{\star}(X,Y,Z)+Y\cdot Q^{\star}(X,Y,Z)\bigr).\end{aligned}
(33)

For this new system (33) the equator $$Z=0$$ (or equivalently $$X^{2}+Y^{2}=1$$) is no longer a singular set, but simply an invariant set on which the system dynamics are seen to be given by

\begin{aligned}\displaystyle X^{\prime}&\displaystyle=\ Y^{2}\cdot P^{\star}(X,Y,0)-XY\cdot Q^{\star}(X,Y,0)\\ \displaystyle Y^{\prime}&\displaystyle=\ X^{2}\cdot Q^{\star}(X,Y,0)-XY\cdot P^{\star}(X,Y,0)\end{aligned}
(34)

by plugging $$Z\equiv 0$$ into (33). Now (28) implies

$$P\left({X\over Z},{Y\over Z}\right)=P_{0}(X,Y)+{{P_{1}(X,Y)}\over{Z}}+\cdots+{{P_{m-1}(X,Y)}\over{Z^{m-1}}}+{{P_{m}(X,Y)}\over{Z^{m}}}$$
(35)

and hence

\begin{aligned}\displaystyle P^{\star}(X,Y,Z)\ =&\displaystyle Z^{m}P\left({X\over Z},{Y\over Z}\right)\ =\ Z^{m}P_{0}(X,Y)+Z^{m-1}P_{1}(X,Y)\\ \displaystyle&\displaystyle+Z^{m-2}P_{2}(X,Y)+\cdots+Z\cdot P_{m-1}(X,Y)+P_{m}(X,Y);\end{aligned}
(36)

in particular we see that $$P^{\star}(X,Y,0)=P_{m}(X,Y)$$. A completely analogous result holds for $$Q^{\star}$$. Thus (34) becomes

\begin{aligned}\displaystyle X^{\prime}&\displaystyle=\ Y^{2}\cdot P_{m}(X,Y)-XY\cdot Q_{m}(X,Y)\\ \displaystyle Y^{\prime}&\displaystyle=\ X^{2}\cdot Q_{m}(X,Y)-XY\cdot P_{m}(X,Y)\end{aligned}
(37)

which means that

$$\left[\begin{matrix}X^{\prime}\\ Y^{\prime}\end{matrix}\right]\ =\bigl(XQ_{m}(X,Y)-YP_{m}(X,Y)\bigr)\left[\begin{matrix}-Y\\ X\end{matrix}\right].$$
(38)

Because of the equation $$X^{2}+Y^{2}=1$$ it is not possible that $$X$$ and $$Y$$ simultaneously take the value zero. Hence the only equilibria of (38) are the solutions $$(X,Y)$$ of the equation

$$XQ_{m}(X,Y)-YP_{m}(X,Y)\ =\ 0.$$
(39)

Thus this equation yields the equilibria at infinity. What has been done here can be summarized as follows. To every polynomial system on $${\mathbb{R}}^{2}$$ one can associate a system on $${\mathbb{S}}^{2}_{+}$$ (i.e., the open upper hemisphere $$Z> 0$$) such that there is a one-to-one correspondence between the trajectories of the two systems (via the standard pull-back/push-forward of the corresponding vector fields). The resulting system on $${\mathbb{S}}^{2}_{+}$$ can then be extended (possibly after rescaling the time variable) to the closure $$\overline{{\mathbb{S}}^{2}_{+}}$$ (i.e., to the closed upper hemisphere $$Z\geq 0$$). The equator $$Z=0$$ is then invariant for the extended system; the flow on it is given by (38), its equilibria are determined by (39), and together they characterize the flow of the original system “at infinity”; cf. Theorem 1 in [15], Sect. 3.10.

We note that there are other ways to analyze the flow of the original system “at infinity”, for example Bendixson’s approach of using the standard one-point compactification of $${\mathbb{R}}^{2}$$. This approach has the disadvantage that the complete behavior of the original flow “at infinity” is concentrated in one single point (around which the induced flow can be very complicated) rather than being spread out along the equator; using the Poincaré sphere can thus be regarded as a way of “blowing up” the singularity at “$$\infty$$” which Bendixson’s approach would produce. (See the remarks on p. 268 in [15].)
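The recipe leading to (39) is easy to automate. The following sketch (our own illustration, not code from the course) represents a polynomial in $$(x,y)$$ as a dict mapping exponent pairs to coefficients, extracts the top-degree homogeneous parts $$P_{m}$$, $$Q_{m}$$ and assembles $$XQ_{m}-YP_{m}$$; the sample input is the quadratic system $$\dot{x}=-x+y^{2}$$, $$\dot{y}=-y+x^{2}$$.

```python
# Illustrative sketch: compute equation (39) for a polynomial system.
# A polynomial in (x, y) is a dict {(i, j): coefficient} for x**i * y**j.

def homogeneous_part(poly, m):
    """Terms of total degree exactly m (empty dict = zero polynomial)."""
    return {e: c for e, c in poly.items() if e[0] + e[1] == m}

def equator_equation(P, Q):
    """X*Q_m(X, Y) - Y*P_m(X, Y), with m the total degree of the system."""
    m = max(i + j for poly in (P, Q) for (i, j) in poly)
    Pm = homogeneous_part(P, m)
    Qm = homogeneous_part(Q, m)
    eq = {}
    for (i, j), c in Qm.items():      # X * Q_m raises the x-exponent
        eq[(i + 1, j)] = eq.get((i + 1, j), 0) + c
    for (i, j), c in Pm.items():      # -Y * P_m raises the y-exponent
        eq[(i, j + 1)] = eq.get((i, j + 1), 0) - c
    return {e: c for e, c in eq.items() if c != 0}

# Sample quadratic system x' = -x + y**2, y' = -y + x**2:
P = {(1, 0): -1, (0, 2): 1}
Q = {(0, 1): -1, (2, 0): 1}
print(equator_equation(P, Q))  # {(3, 0): 1, (0, 3): -1}, i.e. X^3 - Y^3
```

Setting the resulting polynomial to zero on the circle $$X^{2}+Y^{2}=1$$ then yields the equilibria at infinity.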

Let us look at some examples! The figures attached show projections $$\tau\mapsto\bigl(X(\tau),Y(\tau)\bigr)$$ of solution curves $$\tau\mapsto\bigl(X(\tau),Y(\tau),Z(\tau)\bigr)$$ of (33). In each case, the character of each equilibrium point at infinity could be determined by projecting the phase portrait onto the tangent space of the sphere at this point. However, we do not go into the details, but simply present the final pictures.
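To indicate how such pictures can be produced in practice, here is a minimal numerical sketch (our own, not the software used for the figures): a classical Runge–Kutta integration of (33) for the sample quadratic system $$\dot{x}=-x+y^{2}$$, $$\dot{y}=-y+x^{2}$$, for which $$P^{\star}(X,Y,Z)=Y^{2}-XZ$$ and $$Q^{\star}(X,Y,Z)=X^{2}-YZ$$. Collecting the points $$(X,Y)$$ along the way would produce exactly the kind of projection just described; here we only check that the sphere $$X^{2}+Y^{2}+Z^{2}=1$$ stays (numerically) invariant and that the trajectory approaches the north pole, the image of the stable node at the origin.

```python
# RK4 integration of the compactified system (33) for the sample system
# x' = -x + y**2, y' = -y + x**2 (so P* = Y**2 - X*Z, Q* = X**2 - Y*Z).

def rhs(s):
    X, Y, Z = s
    Ps = Y * Y - X * Z                     # P*(X, Y, Z)
    Qs = X * X - Y * Z                     # Q*(X, Y, Z)
    return ((Y * Y + Z * Z) * Ps - X * Y * Qs,
            (X * X + Z * Z) * Qs - X * Y * Ps,
            -Z * (X * Ps + Y * Qs))

def rk4_step(s, h):
    """One classical fourth-order Runge-Kutta step of size h."""
    k1 = rhs(s)
    k2 = rhs(tuple(v + 0.5 * h * k for v, k in zip(s, k1)))
    k3 = rhs(tuple(v + 0.5 * h * k for v, k in zip(s, k2)))
    k4 = rhs(tuple(v + h * k for v, k in zip(s, k3)))
    return tuple(v + h / 6 * (a + 2 * b + 2 * c + d)
                 for v, a, b, c, d in zip(s, k1, k2, k3, k4))

s = (0.6, 0.0, 0.8)                        # a point on the upper hemisphere
for _ in range(2000):                      # integrate up to tau = 20
    s = rk4_step(s, 0.01)
r2 = s[0] ** 2 + s[1] ** 2 + s[2] ** 2     # should remain close to 1
```

A short computation (multiply the three equations of (33) by $$X$$, $$Y$$, $$Z$$ and add) shows that the exact flow preserves $$X^{2}+Y^{2}+Z^{2}$$, so the drift of `r2` away from 1 measures the integration error.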

### Example 1

(See [8], pp. 84/85.) The system

\begin{aligned}\displaystyle\dot{x}&\displaystyle=\ -x+y^{2}\\ \displaystyle\dot{y}&\displaystyle=\ -y+x^{2}\end{aligned}
(40)

has a stable node at $$(0,0)$$, the only eigenvalue of the linearization being $$-1$$ with multiplicity two, and a saddle at $$(1,1)$$, the eigenvalues of the linearization being $$-3$$ and 1. We have $$m=2$$, $$P_{m}(X,Y)=Y^{2}$$ and $$Q_{m}(X,Y)=X^{2}$$, so that equation (39) becomes $$X^{3}-Y^{3}=0$$ and hence $$X=Y$$. Thus there are two equilibria at infinity, a stable node at $$(1/\sqrt{2},1/\sqrt{2},0)$$ and an unstable node at $$(-1/\sqrt{2},-1/\sqrt{2},0)$$. This is shown on the left-hand side of Fig. 5.
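The linearization data just quoted are quickly verified; the following check (our own, not from the paper) uses the Jacobian of (40), which is [[-1, 2y], [2x, -1]], together with the trace–determinant formula for the eigenvalues of a 2×2 matrix.

```python
# Quick check of the linearizations of (40); the Jacobian of
# (-x + y**2, -y + x**2) is [[-1, 2*y], [2*x, -1]].

def eigs_2x2(a, b, c, d):
    """Eigenvalues of [[a, b], [c, d]] via trace and determinant
    (real case only: the discriminant is nonnegative here)."""
    tr, det = a + d, a * d - b * c
    disc = (tr * tr - 4 * det) ** 0.5
    return sorted(((tr - disc) / 2, (tr + disc) / 2))

print(eigs_2x2(-1, 0, 0, -1))  # at (0, 0): [-1.0, -1.0], a stable node
print(eigs_2x2(-1, 2, 2, -1))  # at (1, 1): [-3.0, 1.0], a saddle
```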

### Example 2

(See [11], pp. 236–238.) The system

\begin{aligned}\displaystyle\dot{x}&\displaystyle=\ -4y+2xy-8\\ \displaystyle\dot{y}&\displaystyle=\ 4y^{2}-x^{2}\end{aligned}
(41)

has an unstable node at $$(4,2)$$, the eigenvalues of the linearization being 8 and 12, and a stable focus at $$(-2,-1)$$, the eigenvalues of the linearization being $$-5\pm i\sqrt{23}$$. We have $$m=2$$, $$P_{m}(X,Y)=2XY$$ and $$Q_{m}(X,Y)=4Y^{2}-X^{2}$$, so that equation (39) becomes $$X(2Y^{2}-X^{2})=0$$ with the three solutions $$X=0$$ and $$X=\pm\sqrt{2}\,Y$$. Consequently, there are six equilibria at infinity, namely nodes at $$(0,\pm 1,0)$$ and saddles at $$(\pm\sqrt{2/3},\pm\sqrt{1/3},0)$$. This is shown on the right-hand side in Fig. 5. To obtain a visually appealing presentation, we used the scaling $$(x,y)\mapsto(x/c,y/c)$$ with the factor $$c:=\sqrt{20}$$.
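As a numerical cross-check (our own sketch, not from the paper): reading off the degree-2 parts of (41) gives $$P_{2}(x,y)=2xy$$ and $$Q_{2}(x,y)=4y^{2}-x^{2}$$, and each of the six stated equilibrium points must lie on the equator and annihilate $$XQ_{2}-YP_{2}$$.

```python
# Cross-check: the equilibria at infinity of (41) are the zeros on the
# equator of X*Q_2(X, Y) - Y*P_2(X, Y), with P_2 = 2xy, Q_2 = 4y**2 - x**2
# the degree-2 parts read off from (41).
from math import sqrt

def f(X, Y):
    P2 = 2 * X * Y
    Q2 = 4 * Y * Y - X * X
    return X * Q2 - Y * P2          # equation (39) for system (41)

points = [(0.0, 1.0), (0.0, -1.0),
          (sqrt(2 / 3), sqrt(1 / 3)), (-sqrt(2 / 3), -sqrt(1 / 3)),
          (sqrt(2 / 3), -sqrt(1 / 3)), (-sqrt(2 / 3), sqrt(1 / 3))]
for X, Y in points:
    assert abs(X * X + Y * Y - 1) < 1e-12   # on the equator
    assert abs(f(X, Y)) < 1e-12             # equilibrium of (38)
print("all six equilibria verified")
```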

### Example 3

This is the system (23) with which we started our discussion. We have $$m=2$$, $$P_{m}(X,Y)=0$$ and $$Q_{m}(X,Y)=X^{2}$$, so that equation (39) reduces to $$X^{3}=0$$ and hence $$X=0$$. Thus there are two equilibria at infinity, namely a stable node at $$(0,-1,0)$$ and an unstable node at $$(0,1,0)$$. This is shown in Fig. 6, which can be considered as the limiting case of the diagrams in Fig. 4 as the image section considered gets infinitely large. Again, we used a scaling $$(x,y)\mapsto(x/c,y/c)$$, this time with the factor $$c=3$$, to improve the graphical output.

We emphasize that passing from the real plane to its Poincaré compactification is merely a way of seeing the phase-portrait of a planar system on a global scale, but does not eliminate the need for other methods to reveal the details of this phase-portrait.

## Didactical issues and classroom experiences

Let us discuss some of the didactical issues which played a role in both the genesis of this paper and the differential equations class from which it emerged.

Subject matter: Didactical issues in mathematics education cannot be separated from the subject at hand, and the study of differential equations and dynamical systems is a particularly attractive and rewarding topic: it combines mathematical methods with interesting applications in mechanics, optics, biology, medicine, economics and other fields, blends computational and conceptual aspects, and draws on prerequisites from various other disciplines (analysis, linear algebra, mathematical structures, point mechanics). The students appreciated being able to apply and consolidate what they had learned in previous semesters and to see connections between hitherto separate topics, a prominent example being the use of the exponential function for matrices. The ample time available for the course allowed for the inclusion of all of these different facets.

Example-oriented approach: In our experience, students find it encouraging to try their hand at specific examples before being exposed to the development of a general theory. (In fact, it seems to be a common mistake in teaching mathematics to provide answers to questions the students have not even thought to ask.) Well-chosen examples can provide motivation and a hands-on feeling for the subject matter, and they can stimulate the wish to replace ad hoc arguments and lengthy calculations by general arguments. A mathematical theory, however elegant it may be, is not understood without having relevant examples in mind.

Numerical calculations: Students usually like doing concrete calculations, and they are not in bad company in this regard: Great mathematicians like Newton, Euler, Gauß or Jacobi did not shy away from extremely tedious calculations. (In a famous instance Newton, after determining the first 16 decimal places of $$\pi$$, somewhat sheepishly wrote: “I am ashamed to tell you to how many figures I carried these computations, having no other business at the time.” See [23], p. 133, quoted from [20], p. 219.) While – arguably – it may ultimately be the goal of mathematics to replace explicit computations by conceptual arguments, such computations have their place in mathematics education, providing a hands-on feeling, a sense for the numerical difficulties inherent in a given problem and, especially if tedious and cumbersome, a strong motivation for the development of a general theory.

Visualization: In our experience, the usefulness of visualizing mathematical content can hardly be overestimated. For example, in the differential equations class which gave rise to this paper, both sketching phase-portraits of dynamical systems by hand and using mathematical software to produce more accurate computer plots formed an essential part of the course, with positive effects not only on geometrical thinking and programming skills, but also on a deeper understanding of dynamical systems, since creating good plots requires not only computer skills, but also mathematical considerations. Besides that, the positive impact of the mere aesthetic effect of a nicely drawn diagram should not be underestimated.

Perseverance: The problems discussed in this paper came up in a third-semester differential equations class, but they were not solved there and remained unsettled until later. The problem of finding a suitable Lyapunov function for the equilibrium point in question was taken up by the first author when studying the article [19] and realizing the applicability of optimality criteria for polynomials to this problem; see [23 and 6] for relevant background. Similarly, the possibility of applying ideas from projective geometry to better understand phase-portraits of dynamical systems occurred to the second author only later, while preparing a geometry class. Thus in both cases the solution ideas sprang to mind after the problems had been placed in a different context, revealing a connection hidden before: when new topics and methods were studied, the mind was prepared to apply these new methods to the old problems. (As Louis Pasteur observed, chance favors the prepared mind.) Seeing connections between formerly unrelated fields – differential equations, algebraic geometry, projective geometry – filled us with excitement and gave us a feeling for the general unity of mathematics.

## References

1. Ahmadi, A.A., El Khadir, B.: A globally asymptotically stable polynomial vector field with rational coefficients and no local polynomial Lyapunov function. Syst. Control Lett. 121, 50–53 (2018)
2. Barone-Netto, A.: Jet-detectable extrema. Proc. Am. Math. Soc. 92(4), 604–608 (1984)
3. Barone-Netto, A., Gorni, G., Zampieri, G.: Local extrema of analytic functions. Nonlinear Differ. Equations Appl. 3, 287–303 (1996)
4. Basu, S., Pollack, R., Roy, M.-F.: Algorithms in Real Algebraic Geometry. Springer, Berlin, Heidelberg (2010)
5. Behnke, H.: Über die verschiedenen Möglichkeiten, in der ebenen Geometrie unendlich ferne Punkte einzuführen. Math. Semesterber. 63, 171–185 (2016)
6. Bierstone, E., Milman, P.D.: Semianalytic and subanalytic sets. Publ. Math. l’I.H.É.S. 67, 5–42 (1988)
7. Dumortier, F., Llibre, J., Artés, J.C.: Qualitative Theory of Planar Differential Systems. Springer, Berlin, Heidelberg (2006)
8. Hahn, W.: Stability of Motion. Springer, New York (1967)
9. Kellett, C.M.: Classical converse theorems in Lyapunov’s second method. Discrete Contin. Dyn. Syst. Ser. B 20(8), 2333–2360 (2015)
10. Malkin, J.G.: Theorie der Stabilität einer Bewegung. Akademie-Verlag, Berlin (1959)
11. Meiss, J.D.: Differential Dynamical Systems. Society for Industrial and Applied Mathematics, Philadelphia (2007)
12. Malisoff, M., Mazenc, F.: Constructions of Strict Lyapunov Functions. Springer, London (2009)
13. Massera, J.L.: On Liapounoff’s conditions of stability. Ann. Math. 50(3), 705–721 (1949)
14. Parrilo, P.A.: Semidefinite programming relaxations for semialgebraic problems. Math. Program. Ser. B 96, 293–320 (2003)
15. Perko, L.: Differential Equations and Dynamical Systems, 3rd edn. Springer, New York, Berlin, Heidelberg (2001)
16. Prestel, A., Delzell, C.: Positive Polynomials. Springer, Berlin, Heidelberg (2001)
17. Ravanbakhsh, H., Sankaranarayanan, S.: Learning control Lyapunov functions from counterexamples and demonstrations. Auton. Robots 43, 275–307 (2019)
18. Sassi, M.A.B., Sankaranarayanan, S., Chen, X., Ábrahám, E.: Linear relaxations of polynomial positivity for polynomial Lyapunov function synthesis. IMA J. Math. Control Inf. 33(3), 723–756 (2016)
19. Scheeffer, L.: Theorie der Maxima und Minima einer Function von zwei Variabeln. Math. Ann. 25(4), 541–576 (1890)
20. Sonar, T., et al.: The History of the Priority Dispute between Newton and Leibniz: Mathematics in History and Culture. Birkhäuser, Cham (2018)
21. Spindler, K.: Höhere Mathematik: Ein Begleiter durch das Studium. Harri Deutsch, Frankfurt (2010)
22. Stolfi, J.: Oriented Projective Geometry: A Framework for Geometric Computations. Academic Press, San Diego (2014)
23. Turnbull, H.W. (ed.): The Correspondence of Isaac Newton, vol. 2: 1676–1687. Cambridge University Press, Cambridge (1960)

## Funding

Open Access funding enabled and organized by Projekt DEAL.

## Author information


### Corresponding author

Correspondence to Karlheinz Spindler.

## Cite this article

Naiwert, L., Spindler, K. Phase portraits, Lyapunov functions, and projective geometry. Math Semesterber 68, 143–161 (2021). https://doi.org/10.1007/s00591-020-00288-y


### Keywords

• Ordinary differential equations
• Lyapunov functions
• Projective phase-portraits
• Mathematics education

### Mathematics Subject Classification

• 34A26
• 34D20
• 37C75
• 51N15
• 97D40
• 97E50