1 Introduction

We consider a system of differential equations of the form

$$ \begin{aligned} &x' = a_{11}(t)x+a_{12}(t) \phi_{p^{*}}(y), \\ &y' = a_{21}(t)\phi_{p}(x)+a_{22}(t)y, \end{aligned} $$
(1.1)

where the prime denotes \(d/dt\); the coefficients \(a_{11}(t)\), \(a_{12}(t)\), \(a_{21}(t)\), and \(a_{22}(t)\) are continuous on \(I = [0,\infty)\); the numbers p and \(p^{*}\) are positive and satisfy

$$\frac{1}{ p }+\frac{1}{p^{*}} = 1; $$

the real-valued function \(\phi_{q}(z)\) is defined by

$$\phi_{q}(z) = \begin{cases} |z|^{q-2}z & \text{if } z \neq0, \\ 0 & \text{if } z = 0, \end{cases}\quad z \in\mathbb{R} $$

with \(q = p\) or \(q = p^{*}\). Note that \(\phi_{p^{*}}\) is the inverse function of \(\phi_{p}\), and that the numbers p and \(p^{*}\) are necessarily greater than 1. Note also that the right-hand side of (1.1) does not satisfy a Lipschitz condition at the origin, since the function \(\phi_{p}\) satisfies

$$\lim_{z\to0}\frac{d}{dz}\phi_{p}(z) = \lim _{z\to0}(p-1)|z|^{p-2} = \infty, $$

if \(1 < p < 2\), and the function \(\phi_{p^{*}}\) satisfies

$$\lim_{z\to0}\frac{d}{dz}\phi_{p^{*}}(z) = \lim _{z\to0}\bigl(p^{*}-1\bigr)|z|^{p^{*}-2} = \infty, $$

if \(p > 2\). Since \(\phi_{p}(0) = 0 = \phi_{p^{*}}(0)\), system (1.1) has the zero solution \((x(t),y(t)) \equiv(0,0)\). Systems of the type (1.1) appeared in [19]. Let \(u = \exp (-\int a_{11}(t)\, dt )x\) and \(v = \exp (-\int a_{22}(t)\, dt )y\). Then system (1.1) can be transformed into the simpler system

$$u' = a(t)\phi_{p^{*}}(v),\qquad v' = b(t) \phi_{p}(u), $$

where

$$a(t) = a_{12}(t)\exp \biggl(\int \bigl\{ (p^{*}-1)a_{22}(t)-a_{11}(t) \bigr\} \,dt \biggr) $$

and

$$b(t) = a_{21}(t)\exp \biggl(\int \bigl\{ \bigl(p-1 \bigr)a_{11}(t)-a_{22}(t) \bigr\} \,dt \biggr). $$
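A quick numerical sanity check of the inverse relation between \(\phi_{p}\) and \(\phi_{p^{*}}\) (an illustration of ours, not part of the paper; the exponent \(p = 3\) is an arbitrary choice):

```python
def phi(q, z):
    """phi_q(z) = |z|^{q-2} z for z != 0, and phi_q(0) = 0."""
    return 0.0 if z == 0 else abs(z) ** (q - 2) * z

p = 3.0
p_star = p / (p - 1)  # conjugate exponent: 1/p + 1/p* = 1

# phi_{p*} is the inverse function of phi_p (and vice versa).
for z in (-2.0, -0.5, 0.0, 0.7, 1.5):
    assert abs(phi(p_star, phi(p, z)) - z) < 1e-12
    assert abs(phi(p, phi(p_star, z)) - z) < 1e-12
```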

The problem of the global existence and uniqueness of solutions of this type of system is treated in [1, 2, 6]. However, keep in mind that we do not require the uniqueness of solutions of (1.1) throughout this paper.

In the special case that \(a_{11}(t) \equiv0\) and \(a_{12}(t) \equiv1\), (1.1) is transformed into the equation

$$ \bigl(\phi_{p}\bigl(x'\bigr) \bigr)'-a_{22}(t) \phi_{p}\bigl(x'\bigr)-a_{21}(t) \phi_{p}(x) = 0. $$
(1.2)

If \(x(t)\) is a solution of (1.2), then \(cx(t)\) is also a solution of (1.2) for any \(c \in\mathbb{R}\); that is, the solution space of (1.2) is homogeneous. However, even if \(x_{1}(t)\) and \(x_{2}(t)\) are two solutions of (1.2), the function \(x_{1}(t)+x_{2}(t)\) is not always a solution of (1.2); that is, the solution space of (1.2) is not additive. For this reason, this equation is often called ‘half-linear’. For example, half-linear differential equations and half-linear differential systems can be found in [1-19] and the references cited therein. Furthermore, we can confirm that the global existence and uniqueness of solutions of (1.2) are guaranteed for the initial value problem (see [1, 2]). The study of half-linear differential equations is also important in the fields of difference equations, dynamic equations on time scales, partial differential equations, and various functional equations, because the methods of differential equations may be applicable to them. The reader may refer to [20-24] for example.

In the other special case, \(p = 2\), (1.1) becomes the two-dimensional linear differential system

$$\begin{aligned} \begin{aligned} &x' = a_{11}(t)x+a_{12}(t)y, \\ &y' = a_{21}(t)x+a_{22}(t)y. \end{aligned} \end{aligned}$$

We now consider more general linear differential systems of the form

$$ \mathbf{x}' = A(t)\mathbf{x}, $$
(1.3)

with an \(n\times n\) continuous matrix \(A(t)\). It is well known that the zero solution of (1.3) is uniformly asymptotically stable if and only if it is exponentially stable (see Section 2 for the precise definitions of uniform asymptotic stability and exponential stability). To be precise, the following theorem holds (for the proof, see ([25], pp.43-44), ([26], p.85), ([27], pp.499-500), and ([28], pp.29-30)).

Theorem A

If the zero solution of (1.3) is uniformly asymptotically stable, then it is exponentially stable.

In general, uniform asymptotic stability is not equivalent to exponential stability in the case of nonlinear systems. Indeed, consider the scalar equation \(x' = -x^{3}\) (see ([26], p.85) and ([29], p.49)). Clearly, this equation has the unique zero solution \(x(t;t_{0},0) \equiv0\). In the case that \(x_{0} \neq0\), the solution of this equation passing through a point \(x_{0} \in\mathbb{R}\) at \(t_{0} \in I\) is given by

$$x(t) = \frac{x_{0}}{\sqrt{1+2{x_{0}}^{2}(t-t_{0})}}. $$

It is known that the zero solution of this equation is uniformly asymptotically stable. However, it is not exponentially stable. In fact, we see

$$\bigl\vert x(t)\bigr\vert e^{\Lambda(t-t_{0})} = \frac{|x_{0}|e^{\Lambda(t-t_{0})}}{\sqrt {1+2{x_{0}}^{2}(t-t_{0})}} \to\infty \quad \text{as } t \to\infty $$

for any \(\Lambda> 0\). It is clear that the solution \(x(t)\) does not converge to the zero solution exponentially; that is, the zero solution is not exponentially stable. Here, the first question of this paper arises. Will uniform asymptotic stability guarantee exponential stability, even if the half-linear differential system (1.1) is nonlinear? We now give the answer to this question.
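As a sanity check (an illustration of ours, not from the paper), the explicit solution above can be verified numerically, and the quantity \(|x(t)|e^{\Lambda(t-t_{0})}\) can be seen to grow without bound:

```python
import math

def x(t, t0, x0):
    """Explicit solution x(t; t0, x0) = x0 / sqrt(1 + 2 x0^2 (t - t0))."""
    return x0 / math.sqrt(1.0 + 2.0 * x0 ** 2 * (t - t0))

t0, x0 = 0.0, 1.0

# Check that x(t) satisfies x' = -x^3, via a central difference quotient.
h = 1e-6
for t in (0.0, 1.0, 10.0):
    deriv = (x(t + h, t0, x0) - x(t - h, t0, x0)) / (2.0 * h)
    assert abs(deriv + x(t, t0, x0) ** 3) < 1e-4

# |x(t)| e^{Lambda (t - t0)} blows up even for a tiny Lambda (here 0.01),
# because the solution decays only algebraically, like 1/sqrt(t).
Lam = 0.01
vals = [abs(x(t, t0, x0)) * math.exp(Lam * (t - t0)) for t in (100.0, 1000.0, 10000.0)]
assert vals[0] < vals[1] < vals[2] and vals[2] > 1e6
```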

Theorem 1.1

If the zero solution of (1.1) is uniformly asymptotically stable, then it is exponentially stable.

Uniform asymptotic stability and exponential stability are of the utmost importance in control theory. For example, these stabilities guarantee the existence of a Lyapunov function with good characteristics. Such results are called ‘converse theorems’ on stability. Converse theorems are important in studying the properties of solutions of perturbed systems. We will consider the general nonlinear system

$$ \mathbf{x}' = \mathbf{f}(t,\mathbf{x}), $$
(1.4)

where \(\mathbf{f}(t,\mathbf{x})\) is continuous on \(I\times\mathbb{R}^{n}\) and satisfies \(\mathbf{f}(t,\mathbf{0}) = \mathbf{0}\). We define the set \(S_{\alpha}= \{\mathbf{x} \in\mathbb{R}^{n}: \Vert\mathbf {x}\Vert< \alpha \}\) for \(\alpha> 0\). Let \(\mathbf {x}(t;t_{0},\mathbf{x}_{0})\) be a solution of (1.4) passing through a point \(\mathbf{x}_{0} \in\mathbb{R}^{n}\) at a time \(t_{0} \in I\). It is well known that if \(\mathbf{f}(t,\mathbf{x})\) satisfies a local Lipschitz condition with respect to \(\mathbf{x}\) and the zero solution of (1.4) is uniformly asymptotically stable, then there exists a strict Lyapunov function (or strong Lyapunov function) \(V(t,\mathbf{x})\); that is, a scalar function \(V(t,\mathbf{x})\) defined on \(I\times S_{\alpha}\), where α is a suitable constant, which satisfies the following conditions:

(i) \(a(\Vert\mathbf{x}\Vert) \le V(t,\mathbf{x}) \le b(\Vert \mathbf{x}\Vert)\);

(ii) \(\dot{V}_{\text{(1.4)}}(t,\mathbf{x}) \le-c(\Vert\mathbf {x}\Vert)\),

where a, b, and c are continuous, increasing, positive definite functions, and the function \(\dot{V}_{\text{(1.4)}}(t,\mathbf{x})\) is defined by

$$\dot{V}_{\text{(1.4)}}(t,\mathbf{x}) = \limsup_{h \to0+} \frac {V(t+h,\mathbf{x}(t+h;t,\mathbf{x}))-V(t,\mathbf{x})}{h} $$

(see [25-36]).

Furthermore, global exponential stability guarantees the existence of better Lyapunov functions, in the sense that they can be estimated by explicit functions. The definition of global exponential stability is as follows. Let \(\Vert\mathbf{x}\Vert\) be the Euclidean norm. The zero solution is said to be globally exponentially stable (or globally exponentially asymptotically stable, or exponentially asymptotically stable in the large) if there exists a \(\lambda> 0\) and, for any \(\alpha> 0\), there exists a \(\beta(\alpha) > 0\) such that \(t_{0} \in I\) and \(\Vert\mathbf{x}_{0}\Vert< \alpha\) imply \(\Vert\mathbf {x}(t;t_{0},\mathbf{x}_{0})\Vert\le\beta(\alpha) e^{-\lambda(t-t_{0})}\Vert \mathbf{x}_{0}\Vert\) for all \(t \ge t_{0}\). For example, we can refer to the books [28-30, 35, 37, 38] for this definition. The following result is a converse theorem on (global) exponential stability, which guarantees the existence of a Lyapunov function estimated by the quadratic form \(\Vert\mathbf{x}\Vert^{2}\) (see [30, 31, 34, 35]).
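For instance (our illustration, not taken from the paper), the scalar linear equation \(x' = -2x\) is globally exponentially stable with \(\beta = 1\) and \(\lambda = 2\) independent of α:

```python
import math

beta, lam = 1.0, 2.0  # candidate constants in the definition

def sol(t, t0, x0):
    """Solution of the scalar linear equation x' = -2x."""
    return x0 * math.exp(-2.0 * (t - t0))

# The bound |x(t)| <= beta e^{-lam (t - t0)} |x0| holds for initial data
# of any size, so beta does not depend on alpha for this equation.
for x0 in (0.1, 10.0, 1e6):
    for t in (0.0, 1.0, 5.0, 50.0):
        assert abs(sol(t, 0.0, x0)) <= beta * math.exp(-lam * t) * abs(x0) + 1e-12
```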

Theorem B

Suppose that for any \(\alpha> 0\), there exists an \(L(\alpha) > 0\) such that

$$\bigl\Vert \mathbf{f}(t,\mathbf{x})-\mathbf{f}(t,\mathbf{y})\bigr\Vert \le L(\alpha )\Vert\mathbf{x}-\mathbf{y}\Vert $$

for \((t,\mathbf{x}), (t,\mathbf{y}) \in I\times S_{\alpha}\), where \(L(\alpha)\) is independent of \(t \in I\). If the zero solution of (1.4) is globally exponentially stable, then there exist three positive constants \(\beta_{1}(\alpha)\), \(\beta_{2}(\alpha)\), \(\beta_{3}\) and a Lyapunov function \(V(t,\mathbf{x})\) defined on \(I\times S_{\alpha}\) which satisfies the following conditions:

(i) \(\beta_{1}(\alpha)\Vert\mathbf{x}\Vert^{2} \le V(t,\mathbf{x}) \le\beta_{2}(\alpha)\Vert\mathbf{x}\Vert^{2}\);

(ii) \(\dot{V}_{\text{(1.4)}}(t,\mathbf{x}) \le-\beta_{3}\Vert \mathbf{x}\Vert^{2}\),

where α is a number given in the definition of global exponential stability.

In addition, another type of converse theorem on (global) exponential stability can be found in the classical books [28, 29].

Theorem C

Suppose that \(\mathbf{f}(t,\mathbf{x})\) satisfies a local Lipschitz condition. If the zero solution of (1.4) is globally exponentially stable, then there exists a Lyapunov function \(V(t,\mathbf {x})\) defined on \(I\times S_{\alpha}\) which satisfies the following conditions:

(i) \(\Vert\mathbf{x}\Vert\le V(t,\mathbf{x}) \le\beta(\alpha )\Vert\mathbf{x}\Vert\);

(ii) \(\dot{V}_{\text{(1.4)}}(t,\mathbf{x}) \le-k\lambda V(t,\mathbf{x})\), where \(0 < k < 1\);

(iii) there exists an \(L(t,\alpha) > 0\) such that \(|V(t,\mathbf {x})-V(t,\mathbf{y})| \le L(t,\alpha)\Vert\mathbf{x}-\mathbf{y}\Vert\) for \((t,\mathbf{x}), (t,\mathbf{y}) \in I\times S_{\alpha}\),

where α, β, and λ are numbers given in the definition of global exponential stability.

When restricted to the case of the linear system (1.3), the following facts are known (see ([28], pp.44-45), ([29], p.78) and ([37], p.155)).

Theorem D

If the zero solution of (1.3) is exponentially stable, then it is globally exponentially stable. In this case, we can find a \(\beta > 0\) independent of α in the definition of global exponential stability.

From Theorems A and D, for the linear differential system (1.3), uniform asymptotic stability and global exponential stability are equivalent. In the case of the linear system (1.3), the following converse theorem on (global) exponential stability is known (see ([28], pp.92-93), ([29], p.105) and ([32], p.327)).

Theorem E

Suppose that the zero solution of (1.3) is globally exponentially stable, i.e., there exist a \(\lambda> 0\) and a \(\beta> 0\) such that

$$\bigl\Vert \mathbf{x}(t;t_{0},\mathbf{x}_{0})\bigr\Vert \le\beta e^{-\lambda (t-t_{0})}\Vert\mathbf{x}_{0}\Vert $$

for all \(t \ge t_{0}\). Then there exists a Lyapunov function \(V(t,\mathbf {x})\) defined on \(I\times\mathbb{R}^{n}\) which satisfies the following conditions:

(i) \(\Vert\mathbf{x}\Vert\le V(t,\mathbf{x}) \le\beta\Vert \mathbf{x}\Vert\);

(ii) \(\dot{V}_{\text{(1.3)}}(t,\mathbf{x}) \le-\lambda V(t,\mathbf{x})\);

(iii) \(|V(t,\mathbf{x})-V(t,\mathbf{y})| \le\beta\Vert\mathbf {x}-\mathbf{y}\Vert\) for \((t,\mathbf{x}), (t,\mathbf{y}) \in I\times \mathbb{R}^{n}\).

To present our result, we give some definitions of stability and its equivalent conditions in the next section. Also, we give a proposition which is the most important property for the proof of Theorem 1.1. In Section 3, we state the proof of Theorem 1.1. In Section 4, we give a natural generalization of Theorem D with \(n = 2\). In the final section, we present the converse theorems for half-linear system (1.1), for comparison with Theorems B, C, and E.

2 Definitions and lemmas

We now give some definitions concerning the zero solution \(\mathbf {x}(t;t_{0},\mathbf{0}) \equiv\mathbf{0}\) of (1.1). The zero solution is said to be uniformly attractive if there exists a \(\delta_{0} > 0\) and, for every \(\varepsilon> 0\), there exists a \(T(\varepsilon) > 0\) such that \(t_{0} \in I\) and \(\Vert\mathbf{x}_{0}\Vert < \delta_{0}\) imply \(\Vert\mathbf{x}(t;t_{0},\mathbf{x}_{0})\Vert< \varepsilon\) for all \(t \ge t_{0}+T(\varepsilon)\). The zero solution of (1.1) is said to be uniformly stable if, for any \(\varepsilon> 0\), there exists a \(\delta(\varepsilon) > 0\) such that \(t_{0} \in I\) and \(\Vert\mathbf{x}_{0}\Vert< \delta(\varepsilon)\) imply \(\Vert\mathbf{x}(t;t_{0},\mathbf{x}_{0})\Vert< \varepsilon\) for all \(t \ge t_{0}\). The zero solution is uniformly asymptotically stable if it is uniformly attractive and uniformly stable. The zero solution is said to be exponentially stable (or exponentially asymptotically stable) if there exists a \(\lambda> 0\) and, given any \(\varepsilon> 0\), there exists a \(\delta(\varepsilon) > 0\) such that \(t_{0} \in I\) and \(\Vert\mathbf{x}_{0}\Vert< \delta(\varepsilon)\) imply \(\Vert\mathbf{x}(t;t_{0},\mathbf{x}_{0})\Vert\le\varepsilon e^{-\lambda (t-t_{0})}\) for all \(t \ge t_{0}\). For example, we can refer to the books and papers [7, 18, 19, 25-53] for these definitions.

In this section, before giving the proof of Theorem 1.1, we prepare some lemmas. First, we give conditions which are equivalent to the above-mentioned definitions. Some of these conditions are applicable to the proof of Theorem 1.1. For \(\mathbf{x} = (x,y) \in\mathbb{R}^{2}\) and \(p\ge1\), we define a norm \(\Vert\mathbf {x}\Vert_{p}\) by \(\sqrt[p]{|x|^{p}+|y|^{p}}\). This norm is often called the ‘Hölder norm’ or the ‘p-norm’ (see [30, 31, 37, 54, 55]).
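A minimal sketch of this norm and of the norm equivalence that the lemmas below rely on (our illustration; the numerical values are arbitrary):

```python
def p_norm(x, y, p):
    """Hölder norm ||(x, y)||_p = (|x|^p + |y|^p)^(1/p), p >= 1."""
    return (abs(x) ** p + abs(y) ** p) ** (1.0 / p)

# For p = 2 the p-norm reduces to the Euclidean norm.
assert abs(p_norm(3.0, 4.0, 2.0) - 5.0) < 1e-12

# The p-norms are mutually equivalent: max(|x|, |y|) <= ||(x, y)||_p
# <= 2^(1/p) max(|x|, |y|), which is what allows passing between the
# Euclidean norm and the p-norm in the proofs below.
for p in (1.0, 1.5, 2.0, 3.0):
    n = p_norm(0.6, -0.8, p)
    assert 0.8 <= n <= 2.0 ** (1.0 / p) * 0.8 + 1e-12
```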

Lemma 2.1

The zero solution of (1.1) is uniformly attractive if and only if there exists a \(\gamma_{0} > 0\) and, for every \(\rho> 0\), there exists an \(S(\rho) > 0\) such that \(t_{0} \in I\) and \(\Vert(x_{0},\phi _{p^{*}}(y_{0}))\Vert_{p} < \gamma_{0}\) imply

$$\bigl\Vert \bigl(x(t;t_{0},x_{0},y_{0}), \phi_{p^{*}}\bigl(y(t;t_{0},x_{0},y_{0}) \bigr)\bigr)\bigr\Vert _{p} < \rho $$

for all \(t \ge t_{0}+S(\rho)\).

Proof

First we prove the necessity. We suppose that the zero solution of (1.1) is uniformly attractive. That is, there exists a \(\delta _{0} > 0\) and, for every \(\varepsilon> 0\), there exists a \(T(\varepsilon ) > 0\) such that \(t_{0} \in I\) and \(\Vert(x_{0},y_{0})\Vert< \delta_{0}\) imply \(\Vert(x(t;t_{0},x_{0},y_{0}),y(t;t_{0},x_{0},y_{0}))\Vert< \varepsilon\) for all \(t \ge t_{0}+T\). Let

$$\overline{p} = \max\bigl\{ p,p^{*}\bigr\} \quad \text{and}\quad \gamma_{0} = \min \biggl\{ 1, \biggl(\frac{\delta_{0}}{\sqrt{2}} \biggr)^{\frac{\overline{p}}{p}} \biggr\} . $$

For every \(0 < \rho< 1\), we determine \(\varepsilon= \rho^{p}/2\) and \(S(\rho) = T(\rho^{p}/2)\). We consider the solution \((x(t;t_{0},x_{0},y_{0}),y(t;t_{0},x_{0},y_{0}))\) of (1.1) with \(t_{0} \in I\) and \(\Vert(x_{0},\phi_{p^{*}}(y_{0}))\Vert_{p} < \gamma_{0}\). From \(\sqrt [p]{|x_{0}|^{p}+|y_{0}|^{p^{*}}} = \Vert(x_{0},\phi_{p^{*}}(y_{0}))\Vert_{p} < \gamma _{0}\) it follows that

$$|x_{0}| < \gamma_{0}\quad \text{and}\quad |y_{0}| < {\gamma_{0}}^{\frac{p}{p^{*}}}. $$

Hence, combining these estimates with \(0 < \gamma_{0} \le1 \le\overline{p}/p\) and \(\overline{p}/p^{*} \ge1\), we obtain

$$\begin{aligned} \bigl\Vert (x_{0},y_{0})\bigr\Vert &= \sqrt{{x_{0}}^{2}+{y_{0}}^{2}} < \sqrt{{ \gamma _{0}}^{2}+{\gamma_{0}}^{\frac{2p}{p^{*}}}} \\ &= \sqrt{\min \biggl\{ 1, \biggl(\frac{\delta_{0}}{\sqrt{2}} \biggr)^{\frac {2\overline{p}}{p}} \biggr\} +\min \biggl\{ 1, \biggl(\frac{\delta_{0}}{\sqrt {2}} \biggr)^{\frac{2\overline{p}}{p^{*}}} \biggr\} } \\ &\le\sqrt{2\min \biggl\{ 1, \biggl(\frac{\delta_{0}}{\sqrt{2}} \biggr)^{2} \biggr\} } \le\delta_{0}, \end{aligned}$$

and, therefore,

$$\bigl\Vert \bigl(x(t;t_{0},x_{0},y_{0}),y(t;t_{0},x_{0},y_{0}) \bigr)\bigr\Vert < \varepsilon= \frac{\rho ^{p}}{2} $$

for \(t \ge t_{0}+S\). Taking into account that this inequality and \(0 < \rho^{p}/2 < 1 < p\) and \(p^{*} > 1\) hold, we have

$$\bigl\vert x(t;t_{0},x_{0},y_{0})\bigr\vert ^{p} < \biggl(\frac{\rho^{p}}{2} \biggr)^{p} < \frac{\rho ^{p}}{2} \quad \text{and}\quad \bigl\vert y(t;t_{0},x_{0},y_{0}) \bigr\vert ^{p^{*}} < \biggl(\frac{\rho ^{p}}{2} \biggr)^{p^{*}} < \frac{\rho^{p}}{2} $$

for \(t \ge t_{0}+S\). From these inequalities, we see that

$$\bigl\Vert \bigl(x(t;t_{0},x_{0},y_{0}), \phi_{p^{*}}\bigl(y(t;t_{0},x_{0},y_{0}) \bigr)\bigr)\bigr\Vert _{p} < \sqrt [p]{\frac{\rho^{p}}{2}+ \frac{\rho^{p}}{2}} = \rho $$

for \(t \ge t_{0}+S\).

Conversely, we prove the sufficiency. We suppose that there exists a \(\gamma_{0} > 0\) and, for every \(\rho> 0\), there exists an \(S(\rho) > 0\) such that \(t_{0} \in I\) and \(\Vert(x_{0},\phi_{p^{*}}(y_{0}))\Vert_{p} < \gamma _{0}\) imply \(\Vert(x(t;t_{0},x_{0},y_{0}),\phi_{p^{*}}(y(t;t_{0},x_{0},y_{0})))\Vert_{p} < \rho\) for all \(t \ge t_{0}+S\). Let

$$\delta_{0} = \min \biggl\{ 1,\frac{{\gamma_{0}}^{p}}{2} \biggr\} . $$

For every \(0 < \varepsilon< 1\), we determine \(\rho= (\varepsilon /\sqrt{2} )^{\overline{p}/p}\) and \(T(\varepsilon) = S ( (\varepsilon/\sqrt{2} )^{\overline{p}/p} )\). We consider the solution \((x(t;t_{0},x_{0},y_{0}),y(t;t_{0},x_{0},y_{0}))\) of (1.1) with \(t_{0} \in I\) and \(\Vert(x_{0},y_{0})\Vert< \delta_{0}\). From \(\Vert (x_{0},y_{0})\Vert< \delta_{0}\) it follows that

$$|x_{0}| < \delta_{0} \quad \text{and} \quad |y_{0}| < \delta_{0}. $$

Hence, combining these estimates with \(0 < \delta_{0} \le1 < p\) and \(p^{*} > 1\), we obtain

$$\begin{aligned} \bigl\Vert \bigl(x_{0},\phi_{p^{*}}(y_{0})\bigr) \bigr\Vert _{p} &< \sqrt[p]{{\delta_{0}}^{p}+{ \delta _{0}}^{p^{*}}} = \sqrt[p]{\min \biggl\{ 1, \biggl( \frac{{\gamma_{0}}^{p}}{2} \biggr)^{p} \biggr\} +\min \biggl\{ 1, \biggl( \frac{{\gamma_{0}}^{p}}{2} \biggr)^{p^{*}} \biggr\} } \\ &\le\sqrt[p]{2\min \biggl\{ 1,\frac{{\gamma_{0}}^{p}}{2} \biggr\} } \le \gamma_{0}, \end{aligned}$$

and, therefore,

$$\bigl\Vert \bigl(x(t;t_{0},x_{0},y_{0}), \phi_{p^{*}}\bigl(y(t;t_{0},x_{0},y_{0}) \bigr)\bigr)\bigr\Vert _{p} < \rho= \biggl(\frac{\varepsilon}{\sqrt{2}} \biggr)^{\frac{\overline{p}}{p}} $$

for \(t \ge t_{0}+T\). Taking into account that this inequality and \(0 < \varepsilon^{2}/2 < 1 \le\overline{p}/p\), and \(\overline{p}/p^{*} \ge1\) hold, we have

$$x^{2}(t;t_{0},x_{0},y_{0}) < \biggl( \frac{\varepsilon^{2}}{2} \biggr)^{\frac {\overline{p}}{p}} \le\frac{\varepsilon^{2}}{2} \quad \text{and} \quad y^{2}(t;t_{0},x_{0},y_{0}) < \biggl(\frac{\varepsilon^{2}}{2} \biggr)^{\frac {\overline{p}}{p^{*}}} \le\frac{\varepsilon^{2}}{2} $$

for \(t \ge t_{0}+T\). From these inequalities, we see that

$$\bigl\Vert \bigl(x(t;t_{0},x_{0},y_{0}),y(t;t_{0},x_{0},y_{0}) \bigr)\bigr\Vert < \varepsilon $$

for \(t \ge t_{0}+T\). This completes the proof of Lemma 2.1. □

By using the same arguments as in Lemma 2.1, we can prove the following lemma.

Lemma 2.2

The zero solution of (1.1) is uniformly stable if and only if, for any \(\rho> 0\), there exists a \(\gamma(\rho) > 0\) such that \(t_{0} \in I\) and \(\Vert(x_{0},\phi_{p^{*}}(y_{0}))\Vert_{p} < \gamma(\rho)\) imply

$$\bigl\Vert \bigl(x(t;t_{0},x_{0},y_{0}), \phi_{p^{*}}\bigl(y(t;t_{0},x_{0},y_{0}) \bigr)\bigr)\bigr\Vert _{p} < \rho $$

for all \(t \ge t_{0}\).

Proof

First we prove the necessity. We suppose that the zero solution of (1.1) is uniformly stable. That is, for any \(\varepsilon> 0\), there exists a \(\delta(\varepsilon) > 0\) such that \(t_{0} \in I\) and \(\Vert(x_{0},y_{0})\Vert< \delta(\varepsilon)\) imply \(\Vert (x(t;t_{0},x_{0},y_{0}),y(t;t_{0},x_{0},y_{0}))\Vert< \varepsilon\) for all \(t \ge t_{0}\). For every \(0 < \rho< 1\), we determine an \(\varepsilon= \rho ^{p}/2\). Let

$$\overline{p} = \max\bigl\{ p,p^{*}\bigr\} \quad \text{and}\quad \gamma(\rho) = \min \biggl\{ 1, \biggl(\frac{1}{\sqrt{2}}\delta \biggl(\frac{\rho^{p}}{2} \biggr) \biggr)^{\frac{\overline{p}}{p}} \biggr\} . $$

We consider the solution \((x(t;t_{0},x_{0},y_{0}),y(t;t_{0},x_{0},y_{0}))\) of (1.1) with \(t_{0} \in I\) and \(\Vert(x_{0}, \phi_{p^{*}}(y_{0}))\Vert_{p} < \gamma\). From \(\sqrt[p]{|x_{0}|^{p}+|y_{0}|^{p^{*}}} = \Vert(x_{0},\phi _{p^{*}}(y_{0}))\Vert_{p} < \gamma\) it follows that

$$|x_{0}| < \gamma\quad \text{and}\quad |y_{0}| < { \gamma}^{\frac{p}{p^{*}}}. $$

Hence, combining these estimates with \(0 < \gamma\le1 \le\overline{p}/p\) and \(\overline{p}/p^{*} \ge1\), we obtain

$$\begin{aligned} \bigl\Vert (x_{0},y_{0})\bigr\Vert &= \sqrt{{x_{0}}^{2}+{y_{0}}^{2}} < \sqrt{{ \gamma}^{2}+{\gamma }^{\frac{2p}{p^{*}}}} \\ &= \sqrt{\min \biggl\{ 1, \biggl(\frac{\delta}{\sqrt{2}} \biggr)^{\frac {2\overline{p}}{p}} \biggr\} +\min \biggl\{ 1, \biggl(\frac{\delta}{\sqrt {2}} \biggr)^{\frac{2\overline{p}}{p^{*}}} \biggr\} } \\ &\le\sqrt{2\min \biggl\{ 1, \biggl(\frac{\delta}{\sqrt{2}} \biggr)^{2} \biggr\} } \le\delta, \end{aligned}$$

and, therefore,

$$\bigl\Vert \bigl(x(t;t_{0},x_{0},y_{0}),y(t;t_{0},x_{0},y_{0}) \bigr)\bigr\Vert < \varepsilon= \frac{\rho ^{p}}{2} $$

for \(t \ge t_{0}\). Taking into account that this inequality and \(0 < \rho ^{p}/2 < 1 < p\) and \(p^{*} > 1\) hold, we have

$$\bigl\vert x(t;t_{0},x_{0},y_{0})\bigr\vert ^{p} < \biggl(\frac{\rho^{p}}{2} \biggr)^{p} < \frac{\rho ^{p}}{2}\quad \text{and} \quad \bigl\vert y(t;t_{0},x_{0},y_{0}) \bigr\vert ^{p^{*}} < \biggl(\frac{\rho ^{p}}{2} \biggr)^{p^{*}} < \frac{\rho^{p}}{2} $$

for \(t \ge t_{0}\). From these inequalities, we see that

$$\bigl\Vert \bigl(x(t;t_{0},x_{0},y_{0}), \phi_{p^{*}}\bigl(y(t;t_{0},x_{0},y_{0}) \bigr)\bigr)\bigr\Vert _{p} < \sqrt [p]{\frac{\rho^{p}}{2}+ \frac{\rho^{p}}{2}} = \rho $$

for \(t \ge t_{0}\).

Conversely, we prove the sufficiency. We suppose that for any \(\rho> 0\), there exists a \(\gamma(\rho) > 0\) such that \(t_{0} \in I\) and \(\Vert (x_{0},\phi_{p^{*}}(y_{0}))\Vert_{p} < \gamma(\rho)\) imply

$$\bigl\Vert \bigl(x(t;t_{0},x_{0},y_{0}), \phi_{p^{*}}\bigl(y(t;t_{0},x_{0},y_{0}) \bigr)\bigr)\bigr\Vert _{p} < \rho $$

for all \(t \ge t_{0}\). For every \(0 < \varepsilon< 1\), we determine a \(\rho= (\varepsilon/\sqrt{2} )^{\overline{p}/p}\). Let

$$\delta(\varepsilon) = \min \biggl\{ 1,\frac{1}{2}{\gamma^{p} \biggl( \biggl(\frac {\varepsilon}{\sqrt{2}} \biggr)^{\frac{\overline{p}}{p}} \biggr)} \biggr\} . $$

We consider the solution \((x(t;t_{0},x_{0},y_{0}),y(t;t_{0},x_{0},y_{0}))\) of (1.1) with \(t_{0} \in I\) and \(\Vert(x_{0},y_{0})\Vert< \delta\). From \(\Vert(x_{0},y_{0})\Vert< \delta\) it follows that

$$|x_{0}| < \delta\quad \text{and} \quad |y_{0}| < \delta. $$

Hence, combining these estimates with \(0 < \delta\le1 < p\) and \(p^{*} > 1\), we obtain

$$\begin{aligned} \bigl\Vert \bigl(x_{0},\phi_{p^{*}}(y_{0})\bigr) \bigr\Vert _{p} &< \sqrt[p]{{\delta}^{p}+{\delta }^{p^{*}}} = \sqrt[p]{\min \biggl\{ 1, \biggl(\frac{{\gamma}^{p}}{2} \biggr)^{p} \biggr\} +\min \biggl\{ 1, \biggl(\frac{{\gamma}^{p}}{2} \biggr)^{p^{*}} \biggr\} } \\ &\le\sqrt[p]{2\min \biggl\{ 1,\frac{{\gamma}^{p}}{2} \biggr\} } \le\gamma, \end{aligned}$$

and, therefore,

$$\bigl\Vert \bigl(x(t;t_{0},x_{0},y_{0}), \phi_{p^{*}}\bigl(y(t;t_{0},x_{0},y_{0}) \bigr)\bigr)\bigr\Vert _{p} < \rho= \biggl(\frac{\varepsilon}{\sqrt{2}} \biggr)^{\frac{\overline{p}}{p}} $$

for \(t \ge t_{0}\). Taking into account that this inequality and \(0 < \varepsilon^{2}/2 < 1 \le\overline{p}/p\) and \(\overline{p}/p^{*} \ge1\) hold, we have

$$x^{2}(t;t_{0},x_{0},y_{0}) < \biggl( \frac{\varepsilon^{2}}{2} \biggr)^{\frac {\overline{p}}{p}} \le\frac{\varepsilon^{2}}{2} \quad \text{and} \quad y^{2}(t;t_{0},x_{0},y_{0}) < \biggl(\frac{\varepsilon^{2}}{2} \biggr)^{\frac {\overline{p}}{p^{*}}} \le\frac{\varepsilon^{2}}{2} $$

for \(t \ge t_{0}\). From these inequalities, we see that

$$\bigl\Vert \bigl(x(t;t_{0},x_{0},y_{0}),y(t;t_{0},x_{0},y_{0}) \bigr)\bigr\Vert < \varepsilon $$

for \(t \ge t_{0}\). This completes the proof of Lemma 2.2. □

Furthermore, by using the same arguments as in Lemmas 2.1 and 2.2, we have the following result.

Lemma 2.3

The zero solution of (1.1) is exponentially stable if and only if there exists a \(\mu> 0\) and, given any \(\rho> 0\), there exists a \(\gamma(\rho) > 0\) such that \(t_{0} \in I\) and \(\Vert(x_{0},\phi _{p^{*}}(y_{0}))\Vert_{p} < \gamma(\rho)\) imply

$$\bigl\Vert \bigl(x(t;t_{0},x_{0},y_{0}), \phi_{p^{*}}\bigl(y(t;t_{0},x_{0},y_{0}) \bigr)\bigr)\bigr\Vert _{p} \le\rho e^{-\mu(t-t_{0})} $$

for all \(t \ge t_{0}\).

Proof

First we prove the necessity. We suppose that the zero solution of (1.1) is exponentially stable. That is, there exists a \(\lambda> 0\) and, given any \(\varepsilon> 0\), there exists a \(\delta (\varepsilon) > 0\) such that \(t_{0} \in I\) and \(\Vert(x_{0},y_{0})\Vert< \delta(\varepsilon)\) imply \(\Vert (x(t;t_{0},x_{0},y_{0}),y(t;t_{0},x_{0},y_{0}))\Vert\le\varepsilon e^{-\lambda (t-t_{0})}\) for all \(t \ge t_{0}\). Let \(\mu= \lambda/p\). For every \(0 < \rho< 1\), we determine an \(\varepsilon= \rho^{p}/2\). Let

$$\overline{p} = \max\bigl\{ p,p^{*}\bigr\} \quad \text{and}\quad \gamma(\rho) = \min \biggl\{ 1, \biggl(\frac{1}{\sqrt{2}}\delta \biggl(\frac{\rho^{p}}{2} \biggr) \biggr)^{\frac{\overline{p}}{p}} \biggr\} . $$

We consider the solution \((x(t;t_{0},x_{0},y_{0}),y(t;t_{0},x_{0},y_{0}))\) of (1.1) with \(t_{0} \in I\) and \(\Vert(x_{0}, \phi_{p^{*}}(y_{0}))\Vert_{p} < \gamma\). From \(\sqrt[p]{|x_{0}|^{p}+|y_{0}|^{p^{*}}} = \Vert(x_{0},\phi _{p^{*}}(y_{0}))\Vert_{p} < \gamma\) it follows that

$$|x_{0}| < \gamma\quad \text{and}\quad |y_{0}| < { \gamma}^{\frac{p}{p^{*}}}. $$

Hence, combining these estimates with \(0 < \gamma\le1 \le\overline{p}/p\) and \(\overline{p}/p^{*} \ge1\), we obtain

$$\begin{aligned} \bigl\Vert (x_{0},y_{0})\bigr\Vert &= \sqrt{{x_{0}}^{2}+{y_{0}}^{2}} < \sqrt{{ \gamma}^{2}+{\gamma }^{\frac{2p}{p^{*}}}} \\ &= \sqrt{\min \biggl\{ 1, \biggl(\frac{\delta}{\sqrt{2}} \biggr)^{\frac {2\overline{p}}{p}} \biggr\} +\min \biggl\{ 1, \biggl(\frac{\delta}{\sqrt {2}} \biggr)^{\frac{2\overline{p}}{p^{*}}} \biggr\} } \\ &\le\sqrt{2\min \biggl\{ 1, \biggl(\frac{\delta}{\sqrt{2}} \biggr)^{2} \biggr\} } \le\delta, \end{aligned}$$

and, therefore,

$$\bigl\Vert \bigl(x(t;t_{0},x_{0},y_{0}),y(t;t_{0},x_{0},y_{0}) \bigr)\bigr\Vert \le\varepsilon e^{-\lambda(t-t_{0})} = \frac{\rho^{p}}{2}e^{-p\mu(t-t_{0})} $$

for \(t \ge t_{0}\). Taking into account that this inequality and

$$0 < \frac{\rho^{p}}{2}e^{-p\mu(t-t_{0})} \le\frac{\rho^{p}}{2} < 1 < p \quad \text {and}\quad 1 < p^{*} $$

hold, we have

$$\bigl\vert x(t;t_{0},x_{0},y_{0})\bigr\vert ^{p} \le \biggl(\frac{\rho^{p}}{2}e^{-p\mu(t-t_{0})} \biggr)^{p} < \frac{\rho^{p}}{2} e^{-p\mu(t-t_{0})} $$

and

$$\bigl\vert y(t;t_{0},x_{0},y_{0})\bigr\vert ^{p^{*}} \le \biggl(\frac{\rho^{p}}{2}e^{-p\mu (t-t_{0})} \biggr)^{p^{*}} < \frac{\rho^{p}}{2}e^{-p\mu(t-t_{0})} $$

for \(t \ge t_{0}\). From these inequalities, we see that

$$\bigl\Vert \bigl(x(t;t_{0},x_{0},y_{0}), \phi_{p^{*}}\bigl(y(t;t_{0},x_{0},y_{0}) \bigr)\bigr)\bigr\Vert _{p} < \rho e^{-\mu(t-t_{0})} $$

for \(t \ge t_{0}\).

Conversely, we prove the sufficiency. We suppose that there exists a \(\mu> 0\) and, given any \(\rho> 0\), there exists a \(\gamma(\rho) > 0\) such that \(t_{0} \in I\) and \(\Vert(x_{0},\phi_{p^{*}}(y_{0}))\Vert_{p} < \gamma (\rho)\) imply

$$\bigl\Vert \bigl(x(t;t_{0},x_{0},y_{0}), \phi_{p^{*}}\bigl(y(t;t_{0},x_{0},y_{0}) \bigr)\bigr)\bigr\Vert _{p} \le\rho e^{-\mu(t-t_{0})} $$

for all \(t \ge t_{0}\). Let \(\lambda= \mu p/\overline{p}\). For every \(0 < \varepsilon< 1\), we determine a \(\rho= (\varepsilon/\sqrt {2} )^{\overline{p}/p}\). Let

$$\delta(\varepsilon) = \min \biggl\{ 1,\frac{1}{2}{\gamma^{p} \biggl( \biggl(\frac {\varepsilon}{\sqrt{2}} \biggr)^{\frac{\overline{p}}{p}} \biggr)} \biggr\} . $$

We consider the solution \((x(t;t_{0},x_{0},y_{0}),y(t;t_{0},x_{0},y_{0}))\) of (1.1) with \(t_{0} \in I\) and \(\Vert(x_{0},y_{0})\Vert< \delta\). From \(\Vert(x_{0},y_{0})\Vert< \delta\) it follows that

$$|x_{0}| < \delta \quad \text{and}\quad |y_{0}| < \delta. $$

Hence, combining these estimates with \(0 < \delta\le1 < p\) and \(p^{*} > 1\), we obtain

$$\begin{aligned} \bigl\Vert \bigl(x_{0},\phi_{p^{*}}(y_{0})\bigr) \bigr\Vert _{p} &< \sqrt[p]{{\delta}^{p}+{\delta }^{p^{*}}} = \sqrt[p]{\min \biggl\{ 1, \biggl(\frac{{\gamma}^{p}}{2} \biggr)^{p} \biggr\} +\min \biggl\{ 1, \biggl(\frac{{\gamma}^{p}}{2} \biggr)^{p^{*}} \biggr\} } \\ &\le\sqrt[p]{2\min \biggl\{ 1,\frac{{\gamma}^{p}}{2} \biggr\} } \le\gamma, \end{aligned}$$

and, therefore,

$$\bigl\Vert \bigl(x(t;t_{0},x_{0},y_{0}), \phi_{p^{*}}\bigl(y(t;t_{0},x_{0},y_{0}) \bigr)\bigr)\bigr\Vert _{p} \le\rho e^{-\mu(t-t_{0})} = \biggl( \frac{\varepsilon}{\sqrt{2}} \biggr)^{\frac {\overline{p}}{p}}e^{-\frac{\overline{p}\lambda}{p}(t-t_{0})} $$

for \(t \ge t_{0}\). Taking into account that this inequality and

$$0 < \frac{\varepsilon}{\sqrt{2}}e^{-\lambda(t-t_{0})} \le\frac {\varepsilon}{\sqrt{2}} < 1 \le \frac{\overline{p}}{p} \quad \text{and}\quad 1 \le \frac{\overline{p}}{p^{*}} $$

hold, we have

$$\bigl\vert x(t;t_{0},x_{0},y_{0})\bigr\vert \le \biggl(\frac{\varepsilon}{\sqrt{2}}e^{-\lambda (t-t_{0})} \biggr)^{\frac{\overline{p}}{p}} \le \frac{\varepsilon}{\sqrt {2}}e^{-\lambda(t-t_{0})} $$

and

$$\bigl\vert y(t;t_{0},x_{0},y_{0})\bigr\vert \le \biggl(\frac{\varepsilon}{\sqrt{2}}e^{-\lambda (t-t_{0})} \biggr)^{\frac{\overline{p}}{p^{*}}} \le \frac{\varepsilon}{\sqrt {2}}e^{-\lambda(t-t_{0})} $$

for \(t \ge t_{0}\). From these inequalities, we see that

$$\bigl\Vert \bigl(x(t;t_{0},x_{0},y_{0}),y(t;t_{0},x_{0},y_{0}) \bigr)\bigr\Vert \le\varepsilon e^{-\lambda(t-t_{0})} $$

for \(t \ge t_{0}\). This completes the proof of Lemma 2.3. □

In the special case in which \(p = 2\), (1.1) becomes a linear system. As is well known, the solution space of a linear system is homogeneous and additive. On the other hand, in the general case in which \(p \neq2\), the solution space of (1.1) is neither homogeneous nor additive. However, we can show that (1.1) has a homogeneity-like property on the solution space.

Lemma 2.4

If \((x(t),y(t))\) is a solution of (1.1) passing through a point \((x_{0},y_{0}) \in\mathbb{R}^{2}\) at \(t = t_{0} \in I\), then \((cx(t),\phi _{p}(c)y(t))\) is also a solution of (1.1) passing through a point \((cx_{0},\phi_{p}(c)y_{0}) \in\mathbb{R}^{2}\) at \(t = t_{0}\) for any \(c \in\mathbb{R}\).

Proof

We consider the solution \((x(t),y(t))\) of (1.1) passing through a point \((x_{0},y_{0})\) at \(t = t_{0}\). Let \(\tilde{x}(t) = cx(t)\) and \(\tilde{y}(t) = \phi_{p}(c)y(t)\) with \(c \in\mathbb{R}\). It is clear that \((\tilde{x}(t_{0}),\tilde{y}(t_{0})) = (cx_{0},\phi_{p}(c)y_{0})\) is satisfied. Since \(\phi_{p^{*}}\) is the inverse function of \(\phi_{p}\), we have

$$\tilde{x}'(t) = a_{11}(t)cx(t)+a_{12}(t) \phi_{p^{*}}\bigl(\phi_{p}(c)y(t)\bigr) = a_{11}(t) \tilde{x}(t)+a_{12}(t)\phi_{p^{*}}\bigl(\tilde{y}(t)\bigr) $$

and

$$\tilde{y}'(t) = a_{21}(t)\phi_{p}\bigl(cx(t) \bigr)+a_{22}(t)\phi_{p}(c)y(t) = a_{21}(t) \phi_{p}\bigl(\tilde{x}(t)\bigr)+a_{22}(t)\tilde{y}(t). $$

We therefore conclude that \((cx(t),\phi_{p}(c)y(t))\) is also a solution of (1.1) passing through a point \((cx_{0},\phi_{p}(c)y_{0})\) at \(t = t_{0}\). □
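This homogeneity-like property can be checked numerically. The sketch below is our illustration only: the constant coefficients \(a_{11} = a_{22} = 0\), \(a_{12} = a_{21} = 1\), the exponent \(p = 3\), the initial data, and the Runge-Kutta step are all arbitrary choices, not taken from the paper.

```python
def phi(q, z):
    """phi_q(z) = |z|^{q-2} z for z != 0, and phi_q(0) = 0."""
    return 0.0 if z == 0 else abs(z) ** (q - 2) * z

p = 3.0
p_star = p / (p - 1)

def f(x, y):
    """Right-hand side of (1.1) with a11 = a22 = 0 and a12 = a21 = 1."""
    return phi(p_star, y), phi(p, x)

def rk4(x, y, t_end, h=1e-3):
    """Classical fourth-order Runge-Kutta integration on [0, t_end]."""
    t = 0.0
    while t < t_end - 1e-12:
        k1 = f(x, y)
        k2 = f(x + h / 2 * k1[0], y + h / 2 * k1[1])
        k3 = f(x + h / 2 * k2[0], y + h / 2 * k2[1])
        k4 = f(x + h * k3[0], y + h * k3[1])
        x += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        y += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        t += h
    return x, y

c, x0, y0 = 2.0, 0.5, 0.3
x1, y1 = rk4(x0, y0, 0.3)                  # solution from (x0, y0)
x2, y2 = rk4(c * x0, phi(p, c) * y0, 0.3)  # solution from (c x0, phi_p(c) y0)

# The second trajectory is the scaled first one, as Lemma 2.4 asserts.
assert abs(x2 - c * x1) < 1e-6
assert abs(y2 - phi(p, c) * y1) < 1e-6
```

The check works because the right-hand side of (1.1) is equivariant under \((x,y)\mapsto(cx,\phi_{p}(c)y)\), so the two numerical trajectories agree up to rounding error.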

We state the following proposition, which is the most important property for the proof of Theorem 1.1.

Proposition 2.5

If the zero solution of (1.1) is uniformly attractive, then there exists a \(\gamma_{0} > 0\) and, for every \(\nu> 1\), there exists a \(T(\nu) > 0\) such that \(t_{0} \in I\) and \(\Vert(x_{0},\phi_{p^{*}}(y_{0}))\Vert _{p} < \gamma_{0}\nu^{-(k-1)}\) imply

$$\bigl\Vert \bigl(x\bigl(t;t_{0}+(k-1)T(\nu),x_{0},y_{0} \bigr),\phi_{p^{*}}\bigl(y\bigl(t;t_{0}+(k-1)T(\nu ),x_{0},y_{0}\bigr)\bigr)\bigr)\bigr\Vert _{p} < \gamma_{0}\nu^{-k} $$

for all \(t \ge t_{0}+kT(\nu)\) and \(k \in\mathbb{N}\).

Proof

By the assumption and Lemma 2.1, there exist a \(\gamma _{0} > 0\) and, for every \(\nu> 1\), an \(S(\gamma_{0}/\nu) > 0\) such that \(\tau\ge0\) and \(\Vert(\xi,\phi_{p^{*}}(\eta))\Vert_{p} < \gamma _{0}\) imply

$$\bigl\Vert \bigl(x(t;\tau,\xi,\eta),\phi_{p^{*}}\bigl(y(t;\tau,\xi, \eta)\bigr)\bigr)\bigr\Vert _{p} < \frac {\gamma_{0}}{\nu} $$

for all \(t \ge\tau+S\).

Let \(T(\nu) = S(\gamma_{0}/\nu)\). We consider the solution

$$\bigl(x\bigl(t;t_{0}+(k-1)T,x_{0},y_{0}\bigr),y \bigl(t;t_{0}+(k-1)T,x_{0},y_{0}\bigr)\bigr) $$

of (1.1) with \(t_{0} \in I\) and \(\Vert(x_{0},\phi_{p^{*}}(y_{0}))\Vert _{p} < \gamma_{0}\nu^{-(k-1)}\). From Lemma 2.4, we conclude that

$$\bigl(\nu^{k-1}x\bigl(t;t_{0}+(k-1)T,x_{0},y_{0} \bigr),\phi_{p} \bigl(\nu^{k-1} \bigr)y\bigl(t;t_{0}+(k-1)T,x_{0},y_{0} \bigr) \bigr) $$

is also a solution of (1.1) passing through a point \((\nu ^{k-1}x_{0},\phi_{p} (\nu^{k-1} )y_{0} )\) at \(t = t_{0}+(k-1)T\). Since

$$\bigl\Vert \bigl(\nu^{k-1}x_{0},\nu^{k-1} \phi_{p^{*}}(y_{0}) \bigr)\bigr\Vert _{p} = \nu^{k-1}\bigl\Vert \bigl(x_{0},\phi_{p^{*}}(y_{0}) \bigr)\bigr\Vert _{p} < \gamma_{0} $$

holds, we have

$$\begin{aligned} \begin{aligned} \frac{\gamma_{0}}{\nu} &> \bigl\Vert \bigl(\nu^{k-1}x \bigl(t;t_{0}+(k-1)T,x_{0},y_{0}\bigr),\nu ^{k-1}\phi_{p^{*}}\bigl(y\bigl(t;t_{0}+(k-1)T,x_{0},y_{0} \bigr)\bigr)\bigr)\bigr\Vert _{p} \\ &= \nu^{k-1}\bigl\Vert \bigl(x\bigl(t;t_{0}+(k-1)T,x_{0},y_{0} \bigr),\phi _{p^{*}}\bigl(y\bigl(t;t_{0}+(k-1)T,x_{0},y_{0} \bigr)\bigr)\bigr)\bigr\Vert _{p} \end{aligned} \end{aligned}$$

for all \(t \ge t_{0}+(k-1)T+T = t_{0}+kT\). That is, we obtain

$$\bigl\Vert \bigl(x\bigl(t;t_{0}+(k-1)T,x_{0},y_{0} \bigr),\phi_{p^{*}}\bigl(y\bigl(t;t_{0}+(k-1)T,x_{0},y_{0} \bigr)\bigr)\bigr)\bigr\Vert _{p} < \gamma_{0} \nu^{-k} $$

for all \(t \ge t_{0}+kT\). This completes the proof of Proposition 2.5. □

3 Exponential asymptotic stability

In this section, we give the proof of the main theorem.

Proof of Theorem 1.1

By the uniform attractivity of the zero solution of (1.1) and Proposition 2.5, there exist a \(\gamma_{0} > 0\) and a \(T(e) > 0\) such that \(t_{0} \in I\) and \(\Vert(\xi,\phi_{p^{*}}(\eta))\Vert_{p} < \gamma _{0}e^{-(k-1)}\) imply

$$ \bigl\Vert \bigl(x\bigl(t;t_{0}+(k-1)T,\xi,\eta\bigr), \phi_{p^{*}}\bigl(y\bigl(t;t_{0}+(k-1)T,\xi,\eta \bigr)\bigr) \bigr)\bigr\Vert _{p}< \gamma_{0}e^{-k} $$
(3.1)

for all \(t \ge t_{0}+kT\) and \(k \in\mathbb{N}\).

By the uniform stability of the zero solution of (1.1) and Lemma 2.2, there exists a \(\gamma(\gamma_{0}) > 0\) such that \(t_{0} \in I\) and \(\Vert(\xi,\phi_{p^{*}}(\eta))\Vert_{p} < \gamma\) imply

$$ \bigl\Vert \bigl(x(t;t_{0},\xi,\eta),\phi_{p^{*}} \bigl(y(t;t_{0},\xi,\eta)\bigr)\bigr)\bigr\Vert _{p} < \gamma _{0} $$
(3.2)

for all \(t \ge t_{0}\). Let \(\lambda= 1/T\). For every \(\varepsilon> 0\), we choose

$$\delta(\varepsilon) = \frac{\gamma\varepsilon}{\gamma_{0}e} > 0. $$

We now consider the solution \((x(t;t_{0},x_{0},y_{0}),y(t;t_{0},x_{0},y_{0}))\) of (1.1) with \(t_{0} \in I\) and \(\Vert(x_{0}, \phi_{p^{*}}(y_{0}))\Vert_{p} < \delta\). For the sake of simplicity, let

$$\bigl(x(t),y(t)\bigr) = \bigl(x(t;t_{0},x_{0},y_{0}),y(t;t_{0},x_{0},y_{0}) \bigr). $$

Using Lemma 2.4, we can find a solution

$$\biggl(\frac{\gamma_{0}e}{\varepsilon}x(t),\phi_{p} \biggl(\frac{\gamma _{0}e}{\varepsilon} \biggr)y(t) \biggr) $$

of (1.1) passing through a point \(((\gamma_{0}e/\varepsilon )x_{0},\phi_{p}(\gamma_{0}e/\varepsilon)y_{0})\) at \(t = t_{0}\). From \(\delta= \gamma\varepsilon/(\gamma_{0}e)\), we see that

$$\begin{aligned} \biggl\Vert \biggl(\frac{\gamma_{0}e}{\varepsilon}x_{0},\phi_{p^{*}} \biggl(\phi _{p} \biggl(\frac{\gamma_{0}e}{\varepsilon} \biggr)y_{0} \biggr) \biggr)\biggr\Vert _{p} &= \biggl\Vert \biggl( \frac{\gamma_{0}e}{\varepsilon}x_{0},\frac{\gamma _{0}e}{\varepsilon}\phi_{p^{*}}(y_{0}) \biggr)\biggr\Vert _{p} \\ &= \frac{\gamma_{0}e}{\varepsilon} \bigl\Vert \bigl(x_{0},\phi_{p^{*}}(y_{0}) \bigr)\bigr\Vert _{p} < \gamma \end{aligned}$$

at \(t = t_{0}\). From this inequality and (3.2) with

$$(\xi,\eta) = \biggl(\frac{\gamma_{0}e}{\varepsilon}x_{0},\phi_{p} \biggl(\frac {\gamma_{0}e}{\varepsilon} \biggr)y_{0} \biggr), $$

we have

$$\begin{aligned} \frac{\gamma_{0}e}{\varepsilon}\bigl\Vert \bigl(x(t),\phi_{p^{*}}\bigl(y(t)\bigr) \bigr)\bigr\Vert _{p} &= \biggl\Vert \biggl(\frac{\gamma_{0}e}{\varepsilon}x(t), \frac{\gamma _{0}e}{\varepsilon}\phi_{p^{*}}\bigl(y(t)\bigr) \biggr)\biggr\Vert _{p} \\ &= \biggl\Vert \biggl(\frac{\gamma_{0}e}{\varepsilon}x(t),\phi_{p^{*}} \biggl( \phi_{p} \biggl(\frac{\gamma_{0}e}{\varepsilon} \biggr)y(t) \biggr) \biggr) \biggr\Vert _{p} < \gamma_{0} \end{aligned}$$
(3.3)

for \(t \ge t_{0}\). We therefore conclude that

$$\bigl\Vert \bigl(x(t),\phi_{p^{*}}\bigl(y(t)\bigr)\bigr)\bigr\Vert _{p} < \frac{\varepsilon}{e} $$

for \(t_{0} \le t \le t_{0}+T\). By using (3.3), we obtain

$$\biggl\Vert \biggl(\frac{\gamma_{0}e}{\varepsilon}x_{0},\phi_{p^{*}} \biggl(\phi _{p} \biggl(\frac{\gamma_{0}e}{\varepsilon} \biggr)y_{0} \biggr) \biggr)\biggr\Vert _{p} < \gamma_{0} $$

at \(t = t_{0}\). From this inequality and (3.1) with

$$(\xi,\eta) = \biggl(\frac{\gamma_{0}e}{\varepsilon}x_{0},\phi_{p} \biggl(\frac {\gamma_{0}e}{\varepsilon} \biggr)y_{0} \biggr), \quad k = 1, $$

we get

$$ \frac{\gamma_{0}e}{\varepsilon}\bigl\Vert \bigl(x(t),\phi_{p^{*}}\bigl(y(t)\bigr) \bigr)\bigr\Vert _{p} = \biggl\Vert \biggl(\frac{\gamma_{0}e}{\varepsilon}x(t), \phi_{p^{*}} \biggl(\phi_{p} \biggl(\frac{\gamma_{0}e}{\varepsilon} \biggr)y(t) \biggr) \biggr)\biggr\Vert _{p} < \frac{\gamma_{0}}{e} $$
(3.4)

for \(t \ge t_{0}+T\). We therefore conclude that

$$\bigl\Vert \bigl(x(t),\phi_{p^{*}}\bigl(y(t)\bigr)\bigr)\bigr\Vert _{p} < \frac{\varepsilon}{e^{2}} $$

for \(t_{0}+T \le t \le t_{0}+2T\). By using (3.4), we obtain

$$\biggl\Vert \biggl(\frac{\gamma_{0}e}{\varepsilon}x(t_{0}+T),\phi_{p^{*}} \biggl( \phi_{p} \biggl(\frac{\gamma_{0}e}{\varepsilon} \biggr)y(t_{0}+T) \biggr) \biggr) \biggr\Vert _{p} < \frac{\gamma_{0}}{e} $$

at \(t = t_{0}+T\). From this inequality and (3.1) with

$$(\xi,\eta)= \biggl(\frac{\gamma_{0}e}{\varepsilon}x(t_{0}+T),\phi_{p} \biggl(\frac {\gamma_{0}e}{\varepsilon} \biggr)y(t_{0}+T) \biggr),\quad k = 2, $$

we get

$$\frac{\gamma_{0}e}{\varepsilon}\bigl\Vert \bigl(x(t),\phi_{p^{*}}\bigl(y(t)\bigr) \bigr)\bigr\Vert _{p} = \biggl\Vert \biggl(\frac{\gamma_{0}e}{\varepsilon}x(t), \phi_{p^{*}} \biggl(\phi_{p} \biggl(\frac{\gamma_{0}e}{\varepsilon} \biggr)y(t) \biggr) \biggr)\biggr\Vert _{p} < \frac{\gamma_{0}}{e^{2}} $$

for \(t \ge t_{0}+2T\). We therefore conclude that

$$\bigl\Vert \bigl(x(t),\phi_{p^{*}}\bigl(y(t)\bigr)\bigr)\bigr\Vert _{p} < \frac{\varepsilon}{e^{3}} $$

for \(t_{0}+2T \le t \le t_{0}+3T\). Repeating this process, we see that

$$\bigl\Vert \bigl(x(t),\phi_{p^{*}}\bigl(y(t)\bigr)\bigr)\bigr\Vert _{p} < \varepsilon e^{-\kappa} $$

for \(t_{0}+(\kappa-1)T \le t \le t_{0}+\kappa T\) and \(\kappa\in\mathbb {N}\). Hence, by \(t \le t_{0}+\kappa T\) we have

$$-\kappa\le-\frac{1}{T}(t-t_{0}) = -\lambda(t-t_{0}), $$

and therefore

$$\bigl\Vert \bigl(x(t),\phi_{p^{*}}\bigl(y(t)\bigr)\bigr)\bigr\Vert _{p} < \varepsilon e^{-\kappa} \le \varepsilon e^{-\lambda(t-t_{0})} $$

for \(t_{0}+(\kappa-1)T \le t \le t_{0}+\kappa T\) and \(\kappa\in\mathbb {N}\). Note that we can divide the interval \([t_{0},t_{0}+\kappa T]\) as

$$[t_{0},t_{0}+\kappa T] = \bigcup _{n = 1}^{\kappa}\bigl[t_{0}+(n-1)T,t_{0}+n T\bigr] $$

for \(\kappa\in\mathbb{N}\). Thus, it turns out that

$$\bigl\Vert \bigl(x(t),\phi_{p^{*}}\bigl(y(t)\bigr)\bigr)\bigr\Vert _{p} \le\varepsilon e^{-\lambda(t-t_{0})} $$

for \(t \ge t_{0}\). Using Lemma 2.3, we conclude that the zero solution of (1.1) is exponentially stable. This completes the proof of Theorem 1.1. □
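The exponential estimate just obtained can be observed numerically. The sketch below (Python; the constant coefficients, the exponent \(p=3\), and the rate \(\lambda=0.4\) are illustrative assumptions of ours) integrates (1.1) and checks that \(\Vert(x(t),\phi_{p^{*}}(y(t)))\Vert_{p}\) stays below \(\Vert(x_{0},\phi_{p^{*}}(y_{0}))\Vert_{p}e^{-0.4t}\); for these coefficients that rate is guaranteed by a Young-inequality estimate on \(w(t)=|x(t)|^{p}+|y(t)|^{p^{*}}\), which gives \(w' \le -1.25\,w\) and hence decay of the norm at rate \(1.25/p \approx 0.417\).

```python
import math

p, ps = 3.0, 1.5  # p = 3 and its conjugate p* = 1.5
A11, A12, A21, A22 = -1.0, 0.5, -0.5, -1.0  # illustrative coefficients

def phi(q, z):
    # phi_q(z) = |z|^{q-2} z, with phi_q(0) = 0
    return abs(z) ** (q - 2.0) * z if z != 0 else 0.0

def f(x, y):
    # right-hand side of system (1.1)
    return (A11 * x + A12 * phi(ps, y), A21 * phi(p, x) + A22 * y)

def pnorm(x, y):
    # ||(x, phi_{p*}(y))||_p = (|x|^p + |y|^{p*})^{1/p}
    return (abs(x) ** p + abs(y) ** ps) ** (1.0 / p)

x, y, t, h = 0.8, 0.3, 0.0, 1e-3
r0, lam, ok = pnorm(x, y), 0.4, True
for _ in range(5000):  # RK4 integration up to t = 5
    k1 = f(x, y)
    k2 = f(x + h / 2 * k1[0], y + h / 2 * k1[1])
    k3 = f(x + h / 2 * k2[0], y + h / 2 * k2[1])
    k4 = f(x + h * k3[0], y + h * k3[1])
    x += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    y += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    t += h
    ok = ok and pnorm(x, y) <= r0 * math.exp(-lam * t) + 1e-9
print(ok)  # True: the p-norm decays at least like e^{-0.4 t}
```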

Note here that the proof of Theorem 1.1 does not require the uniqueness of solutions for the initial value problem.

4 Global exponential stability

The stability concepts discussed above are local, in that they concern only a neighborhood of the zero solution. In this section, we allow arbitrary initial disturbances and initial states. Consider the nonlinear scalar equation \(x' = -x+x^{2}\) (see [27], p.508). It is easy to see that the solution of this equation is given by

$$x(t;t_{0},x_{0}) = \frac{x_{0}}{x_{0}-(x_{0}-1)e^{t-t_{0}}}, $$

where \(t_{0} \in I\) and \(x_{0} \in\mathbb{R}\). Clearly, \(x(t;t_{0},0) \equiv 0\) and \(x(t;t_{0},1) \equiv1\) are constant solutions. If \(x_{0} < 1\), then \(x(t;t_{0},x_{0}) \to0\) as \(t \to\infty\). Moreover, for every \(0<\varepsilon<1\), we choose \(\delta(\varepsilon) = \varepsilon/2\). If \(|x_{0}|<\delta(\varepsilon)\), then we have

$$\bigl\vert x(t;t_{0},x_{0})\bigr\vert \le \frac{|x_{0}|e^{-(t-t_{0})}}{1-|x_{0}| (1-e^{-(t-t_{0})} )} < \frac{|x_{0}|e^{-(t-t_{0})}}{1-|x_{0}|} < \varepsilon e^{-(t-t_{0})} $$

for \(t \ge t_{0}\). This means that the zero solution is exponentially stable. On the other hand, if \(x_{0} > 1\), then \(x(t;t_{0},x_{0}) \to\infty\) as \(t \to t_{0}+\log (x_{0}/(x_{0}-1))\); that is, every solution with \(x_{0} > 1\) is unbounded. Therefore, we can conclude that local theory and global theory are completely different concepts. Here the second question of this paper arises. Does exponential stability guarantee global exponential stability, even though the half-linear differential system (1.1) is nonlinear? We now give the answer to this question.
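The explicit formula above is easy to verify numerically. A minimal sketch (Python; the sample points and step size are our own choices) checks that the formula satisfies \(x' = -x+x^{2}\), that small initial data decay, and that data with \(x_{0} > 1\) blow up near \(t_{0}+\log(x_{0}/(x_{0}-1))\).

```python
import math

def x_exact(t, t0, x0):
    # x(t; t0, x0) = x0 / (x0 - (x0 - 1) e^{t - t0})
    return x0 / (x0 - (x0 - 1.0) * math.exp(t - t0))

# the formula satisfies x' = -x + x^2 (central-difference check at t = 1)
h, t, x0 = 1e-5, 1.0, 0.4
deriv = (x_exact(t + h, 0.0, x0) - x_exact(t - h, 0.0, x0)) / (2 * h)
x = x_exact(t, 0.0, x0)
residual = abs(deriv - (-x + x * x))
print(residual)  # essentially zero

# x0 < 1: decay to zero as t -> infinity
print(x_exact(10.0, 0.0, 0.4))  # ~ 3e-5

# x0 > 1: blow-up as t approaches t0 + log(x0/(x0-1)) = log 2 for x0 = 2
t_blow = math.log(2.0 / (2.0 - 1.0))
print(x_exact(0.999 * t_blow, 0.0, 2.0))  # very large (~ 1.4e3)
```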

Theorem 4.1

If the zero solution of (1.1) is uniformly asymptotically stable, then there exist a \(\lambda> 0\) and a \(\beta> 0\) such that \(t_{0} \in I\) and \((x_{0},y_{0}) \in\mathbb{R}^{2}\) imply

$$\bigl\Vert \bigl(x(t;t_{0},x_{0},y_{0}), \phi_{p^{*}}\bigl(y(t;t_{0},x_{0},y_{0}) \bigr)\bigr)\bigr\Vert _{p} \le\beta e^{-\lambda(t-t_{0})}\bigl\Vert \bigl(x_{0},\phi_{p^{*}}(y_{0})\bigr)\bigr\Vert _{p} $$

for all \(t \ge t_{0}\), where \(\beta> 0\) is independent of the size of \(\Vert(x_{0},\phi_{p^{*}}(y_{0}))\Vert_{p}\).

Proof

By virtue of Theorem 1.1, it turns out that the uniform asymptotic stability of (1.1) implies exponential stability. Using Lemma 2.3, there exist a \(\lambda> 0\) and a \(\delta(1) > 0\) such that \(t_{0} \in I\) and \(\Vert(\xi,\phi_{p^{*}}(\eta))\Vert_{p} < \delta\) imply

$$\bigl\Vert \bigl(x(t;t_{0},\xi,\eta),\phi_{p^{*}} \bigl(y(t;t_{0},\xi,\eta)\bigr)\bigr)\bigr\Vert _{p} \le e^{-\lambda(t-t_{0})} $$

for all \(t \ge t_{0}\).

We choose \(\beta= 2/\delta\). Let \(t_{0} \in I\) and \((x_{0},y_{0}) \in\mathbb {R}^{2}\) be given. We may assume without loss of generality that \((x_{0},y_{0}) \neq(0,0)\). Consider the solution \((x(t;t_{0},x_{0},y_{0}),y(t;t_{0},x_{0},y_{0}))\) of (1.1). For the sake of convenience, we write

$$\bigl(x(t),y(t)\bigr) = \bigl(x(t;t_{0},x_{0},y_{0}),y(t;t_{0},x_{0},y_{0}) \bigr),\qquad c = \frac{\delta }{2\Vert(x_{0},\phi_{p^{*}}(y_{0}))\Vert_{p}}. $$

Hence, we have

$$\bigl\Vert \bigl(cx_{0},\phi_{p^{*}}\bigl( \phi_{p}(c)y_{0}\bigr)\bigr)\bigr\Vert _{p} = \bigl\Vert \bigl(cx_{0},c\phi _{p^{*}}(y_{0})\bigr) \bigr\Vert _{p} = c\bigl\Vert \bigl(x_{0}, \phi_{p^{*}}(y_{0})\bigr)\bigr\Vert _{p} < \delta. $$

Using Lemma 2.4, \((cx(t),\phi_{p}(c)y(t))\) is also a solution of (1.1) passing through a point \((cx_{0},\phi_{p}(c)y_{0})\) at \(t = t_{0}\). Thus, we get

$$c\bigl\Vert \bigl(x(t),\phi_{p^{*}}\bigl(y(t)\bigr)\bigr)\bigr\Vert _{p} = \bigl\Vert \bigl(cx(t),\phi_{p^{*}}\bigl(\phi _{p}(c)y(t)\bigr)\bigr)\bigr\Vert _{p} \le e^{-\lambda(t-t_{0})} $$

for all \(t \ge t_{0}\), and therefore

$$ \bigl\Vert \bigl(x(t),\phi_{p^{*}}\bigl(y(t)\bigr)\bigr)\bigr\Vert _{p} \le\frac{2}{\delta}e^{-\lambda (t-t_{0})}\bigl\Vert \bigl(x_{0},\phi_{p^{*}}(y_{0})\bigr)\bigr\Vert _{p} = \beta e^{-\lambda (t-t_{0})}\bigl\Vert \bigl(x_{0}, \phi_{p^{*}}(y_{0})\bigr)\bigr\Vert _{p} $$

for all \(t \ge t_{0}\). This completes the proof of Theorem 4.1. □

Theorem 4.1 is a natural generalization of Theorem D with \(n = 2\). Indeed, in the case that \(p = 2\), Theorem 4.1 reduces to Theorem D with \(n = 2\).

Moreover, let us give some definitions. The zero solution of (1.1) is said to be globally uniformly attractive (or uniformly attractive in the large) if, for any \(\alpha> 0\) and any \(\varepsilon> 0\), there exists a \(T(\alpha,\varepsilon) > 0\) such that \(t_{0} \in I\) and \(\Vert\mathbf{x}_{0}\Vert< \alpha\) imply \(\Vert\mathbf {x}(t;t_{0},\mathbf{x}_{0})\Vert< \varepsilon\) for all \(t \ge t_{0}+T(\alpha ,\varepsilon)\). The solutions of (1.1) are said to be uniformly bounded if, for any \(\alpha> 0\), there exists a \(B(\alpha) > 0\) such that \(t_{0} \in I\) and \(\Vert\mathbf{x}_{0}\Vert< \alpha\) imply \(\Vert\mathbf{x}(t;t_{0},\mathbf{x}_{0})\Vert< B(\alpha)\) for all \(t \ge t_{0}\). The zero solution of (1.1) is globally uniformly asymptotically stable (or uniformly asymptotically stable in the large) if it is globally uniformly attractive and uniformly stable and if the solutions of (1.1) are uniformly bounded. For these definitions, see, for example, the books and papers [7, 27–31, 33, 36, 37, 47]. When restricted to the case of the linear system (1.3), the following fact is well known.

Theorem F

If the zero solution of (1.3) is uniformly asymptotically stable, then it is globally uniformly asymptotically stable.

We can state a natural generalization of Theorem F with \(n = 2\) as follows.

Theorem 4.2

If the zero solution of (1.1) is uniformly asymptotically stable, then it is globally uniformly asymptotically stable.

Proof

By virtue of Theorem 4.1, if the zero solution of (1.1) is uniformly asymptotically stable, then there exist a \(\lambda > 0\) and a \(\beta> 0\) such that \(t_{0} \in I\) and \((\xi,\eta) \in\mathbb {R}^{2}\) imply

$$\bigl\Vert \bigl(x(t;t_{0},\xi,\eta),\phi_{p^{*}} \bigl(y(t;t_{0},\xi,\eta)\bigr)\bigr)\bigr\Vert _{p} \le \beta e^{-\lambda(t-t_{0})}\bigl\Vert \bigl(\xi,\phi_{p^{*}}(\eta)\bigr)\bigr\Vert _{p} $$

for all \(t \ge t_{0}\). We have only to show that the zero solution of (1.1) is globally uniformly attractive and the solutions of (1.1) are uniformly bounded.

First we will prove the global uniform attractivity. Let \(\varepsilon> 0\) and \(\alpha> 0\) be given. We now consider the solution \((x(t;t_{0},x_{0},y_{0}),y(t;t_{0},x_{0},y_{0}))\) of (1.1) with \(t_{0} \in I\) and \(\Vert(x_{0},y_{0})\Vert< \alpha\). For the sake of convenience, we write

$$\bigl(x(t),y(t)\bigr) = \bigl(x(t;t_{0},x_{0},y_{0}),y(t;t_{0},x_{0},y_{0}) \bigr), \qquad c(\alpha) = \bigl\Vert \bigl(\alpha,\phi_{p^{*}}(\alpha) \bigr)\bigr\Vert _{p}. $$

We choose a \(T(\varepsilon,\alpha)\) such that

$$T(\varepsilon,\alpha) = \frac{1}{\min\{1,p-1\}\lambda}\log\frac{\sqrt { (\beta c(\alpha) )^{2}+ (\beta c(\alpha) )^{2(p-1)}}}{\varepsilon}. $$

Since \(|x_{0}| < \alpha\) and \(|y_{0}| < \alpha\), we get

$$\bigl\Vert \bigl(x_{0},\phi_{p^{*}}(y_{0})\bigr) \bigr\Vert _{p} < c(\alpha). $$

We therefore conclude that

$$\bigl\vert x(t)\bigr\vert \le\bigl\Vert \bigl(x(t),\phi_{p^{*}} \bigl(y(t)\bigr)\bigr)\bigr\Vert _{p} < \beta c(\alpha )e^{-\lambda(t-t_{0})} $$

and

$$\bigl\vert y(t)\bigr\vert \le\phi_{p}\bigl(\bigl\Vert \bigl(x(t),\phi_{p^{*}}\bigl(y(t)\bigr)\bigr)\bigr\Vert _{p} \bigr) < \bigl(\beta c(\alpha)e^{-\lambda(t-t_{0})} \bigr)^{p-1} $$

for \(t \ge t_{0}\). From these inequalities, we see that

$$ \bigl\Vert \bigl(x(t),y(t)\bigr)\bigr\Vert < e^{-\min\{1,p-1\}\lambda(t-t_{0})}\sqrt{ \bigl( \beta c(\alpha) \bigr)^{2}+ \bigl(\beta c(\alpha) \bigr)^{2(p-1)}} $$

for \(t \ge t_{0}\). Hence, we obtain

$$\bigl\Vert \bigl(x(t),y(t)\bigr)\bigr\Vert < \varepsilon $$

for \(t \ge t_{0}+T\).

We next prove the uniform boundedness. Let \(\alpha> 0\) be given. As mentioned in the proof of global uniform attractivity, we see that

$$ \bigl\Vert \bigl(x(t),y(t)\bigr)\bigr\Vert < e^{-\min\{1,p-1\}\lambda(t-t_{0})}\sqrt{ \bigl( \beta c(\alpha) \bigr)^{2}+ \bigl(\beta c(\alpha) \bigr)^{2(p-1)}} $$

for \(t \ge t_{0}\) and \(\Vert(x_{0},y_{0})\Vert< \alpha\), where

$$\bigl(x(t),y(t)\bigr) = \bigl(x(t;t_{0},x_{0},y_{0}),y(t;t_{0},x_{0},y_{0}) \bigr), \qquad c(\alpha) = \bigl\Vert \bigl(\alpha,\phi_{p^{*}}(\alpha) \bigr)\bigr\Vert _{p}. $$

We choose a \(B(\alpha) > 0\) such that

$$B(\alpha) = \sqrt{ \bigl(\beta c(\alpha) \bigr)^{2}+ \bigl(\beta c( \alpha ) \bigr)^{2(p-1)}}. $$

Hence, we obtain

$$\bigl\Vert \bigl(x(t;t_{0},x_{0},y_{0}),y(t;t_{0},x_{0},y_{0}) \bigr)\bigr\Vert < B(\alpha) $$

for \(t \ge t_{0}\). This completes the proof of Theorem 4.2. □
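The passage from the weighted p-norm to the Euclidean norm used in the proof rests on two elementary inequalities: writing \(r = \Vert(x,\phi_{p^{*}}(y))\Vert_{p}\), we have \(|x| \le r\) and \(|y| \le r^{p-1}\), hence \(\Vert(x,y)\Vert \le \sqrt{r^{2}+r^{2(p-1)}}\). A randomized check (Python, with \(p=3\) as an illustrative choice of ours):

```python
import math
import random

p = 3.0
ps = p / (p - 1.0)  # p* = 1.5

random.seed(0)
worst = -float("inf")
for _ in range(1000):
    x = random.uniform(-10.0, 10.0)
    y = random.uniform(-10.0, 10.0)
    # r^p = |x|^p + |y|^{p*} is the p-th power of ||(x, phi_{p*}(y))||_p
    r = (abs(x) ** p + abs(y) ** ps) ** (1.0 / p)
    eucl = math.hypot(x, y)
    bound = math.sqrt(r ** 2 + r ** (2.0 * (p - 1.0)))
    # each of the three differences should be <= 0 (up to rounding)
    worst = max(worst, abs(x) - r, abs(y) - r ** (p - 1.0), eucl - bound)
print(worst)  # <= 0 up to rounding error
```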

Theorem 4.2 makes clear that uniform asymptotic stability and global uniform asymptotic stability are exactly the same concept for the half-linear differential system (1.1). Moreover, from the results of Theorems 1.1 and 4.1, we can conclude that uniform asymptotic stability, exponential stability, and global exponential stability are all equivalent for the half-linear differential system (1.1).

5 Converse theorems on exponential stability

In this section, for comparison with Theorems B, C, and E, we discuss converse theorems for the half-linear system (1.1). Recall that the right-hand side of (1.1) does not satisfy a Lipschitz condition at the origin. For this reason, unfortunately, Theorems B, C, and E cannot be applied to (1.1). First, let us consider the existence of a Lyapunov function estimated in terms of \(\Vert\mathbf {x}\Vert_{p}^{p}\). For this purpose, we give the following lemma.

Lemma 5.1

If \((x(t),y(t))\) is a solution of (1.1) passing through a point \((x_{0},y_{0}) \in\mathbb{R}^{2}\) at \(t = t_{0} \in I\), then the following inequality holds:

$$\bigl\Vert \bigl(x(t),\phi_{p^{*}}\bigl(y(t)\bigr)\bigr)\bigr\Vert _{p}^{p} \ge\bigl\Vert \bigl(x_{0},\phi _{p^{*}}(y_{0})\bigr)\bigr\Vert _{p}^{p} \exp \biggl(\int_{t_{0}}^{t}\psi(s)\, ds \biggr) $$

for \(t \ge t_{0}\), where the continuous function \(\psi(t)\) is defined by

$$\psi(t) = \min\bigl\{ pa_{11}(t)-\bigl\vert (p-1)a_{12}(t)+a_{21}(t) \bigr\vert ,p^{*}a_{22}(t)-\bigl\vert a_{12}(t)+\bigl(p^{*}-1 \bigr)a_{21}(t)\bigr\vert \bigr\} . $$

Proof

Define

$$w(t) = \bigl\Vert \bigl(x(t),\phi_{p^{*}}\bigl(y(t)\bigr)\bigr)\bigr\Vert _{p}^{p} = \bigl\vert x(t)\bigr\vert ^{p}+\bigl\vert y(t)\bigr\vert ^{p^{*}} $$

for \(t \ge t_{0}\). Differentiating w along solutions of (1.1), we have

$$\begin{aligned} w'(t) =& pa_{11}(t)\bigl\vert x(t)\bigr\vert ^{p}+p^{*}a_{22}(t)\bigl\vert y(t)\bigr\vert ^{p^{*}} \\ &{}+\bigl(pa_{12}(t)+p^{*}a_{21}(t)\bigr)\phi _{p}\bigl(x(t)\bigr)\phi_{p^{*}}\bigl(y(t)\bigr) \end{aligned}$$

for \(t \ge t_{0}\). Using the classical Young inequality, we obtain

$$\begin{aligned} w'(t) \ge& pa_{11}(t)\bigl\vert x(t)\bigr\vert ^{p}+p^{*}a_{22}(t)\bigl\vert y(t)\bigr\vert ^{p^{*}}-\bigl\vert pa_{12}(t)+p^{*}a_{21}(t)\bigr\vert \bigl\vert x(t)\bigr\vert ^{p-1}\bigl\vert y(t)\bigr\vert ^{p^{*}-1} \\ \ge& pa_{11}(t)\bigl\vert x(t)\bigr\vert ^{p}+p^{*}a_{22}(t) \bigl\vert y(t)\bigr\vert ^{p^{*}} \\ &{} -\bigl\vert pa_{12}(t)+p^{*}a_{21}(t)\bigr\vert \biggl(\frac{\vert x(t)\vert ^{(p-1)p^{*}}}{p^{*}}+\frac {\vert y(t)\vert ^{(p^{*}-1)p}}{p} \biggr) \\ =& \bigl(pa_{11}(t)-\bigl\vert (p-1)a_{12}(t)+a_{21}(t) \bigr\vert \bigr)\bigl\vert x(t)\bigr\vert ^{p} \\ &{} + \bigl(p^{*}a_{22}(t)-\bigl\vert a_{12}(t)+\bigl(p^{*}-1 \bigr)a_{21}(t)\bigr\vert \bigr)\bigl\vert y(t)\bigr\vert ^{p^{*}} \\ \ge&\psi(t) w(t) \end{aligned}$$

for \(t \ge t_{0}\). Therefore, we get

$$\biggl(\exp \biggl(-\int_{t_{0}}^{t}\psi(s)\, ds \biggr)w(t) \biggr)' \ge0 $$

for \(t \ge t_{0}\). Integrate this inequality from \(t_{0}\) to t to obtain

$$w(t) \ge w(t_{0})\exp \biggl(\int_{t_{0}}^{t} \psi(s)\, ds \biggr) = \bigl\Vert \bigl(x_{0},\phi _{p^{*}}(y_{0})\bigr)\bigr\Vert _{p}^{p} \exp \biggl(\int_{t_{0}}^{t}\psi(s)\, ds \biggr) $$

for \(t \ge t_{0}\). □
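The lower bound of Lemma 5.1 can also be checked numerically. In the sketch below (Python; the constant coefficients and \(p=3\) are illustrative assumptions of ours, so that \(\psi\) is constant), the quantity \(w(t) = |x(t)|^{p}+|y(t)|^{p^{*}}\) computed along a Runge-Kutta trajectory stays above \(w(t_{0})e^{\psi(t-t_{0})}\).

```python
import math

p, ps = 3.0, 1.5  # p = 3 and its conjugate p* = 1.5
a11, a12, a21, a22 = -1.0, 0.5, -0.5, -1.0  # illustrative coefficients

def phi(q, z):
    # phi_q(z) = |z|^{q-2} z, with phi_q(0) = 0
    return abs(z) ** (q - 2.0) * z if z != 0 else 0.0

def f(x, y):
    # right-hand side of system (1.1)
    return (a11 * x + a12 * phi(ps, y), a21 * phi(p, x) + a22 * y)

# psi from Lemma 5.1 (constant for constant coefficients)
psi = min(p * a11 - abs((p - 1) * a12 + a21),
          ps * a22 - abs(a12 + (ps - 1) * a21))

x, y, t, h = 0.8, 0.3, 0.0, 1e-3
w0 = abs(x) ** p + abs(y) ** ps
ok = True
for _ in range(4000):  # RK4 integration up to t = 4
    k1 = f(x, y)
    k2 = f(x + h / 2 * k1[0], y + h / 2 * k1[1])
    k3 = f(x + h / 2 * k2[0], y + h / 2 * k2[1])
    k4 = f(x + h * k3[0], y + h * k3[1])
    x += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    y += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    t += h
    ok = ok and abs(x) ** p + abs(y) ** ps >= w0 * math.exp(psi * t) - 1e-12
print(ok)  # True: w(t) >= w(0) e^{psi t}
```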

The first converse theorem of this section is as follows. We can prove this theorem without requiring the uniqueness of solutions of (1.1) for the initial value problem.

Theorem 5.2

Suppose that all coefficients of (1.1) are bounded on I and that there exist a \(\lambda> 0\) and a \(\beta> 0\) such that

$$\bigl\Vert \bigl(x(t;t_{0},x_{0},y_{0}), \phi_{p^{*}}\bigl(y(t;t_{0},x_{0},y_{0}) \bigr)\bigr)\bigr\Vert _{p} \le\beta e^{-\lambda(t-t_{0})}\bigl\Vert \bigl(x_{0},\phi_{p^{*}}(y_{0})\bigr)\bigr\Vert _{p} $$

for all \(t \ge t_{0} \ge0\), where \((x(t;t_{0},x_{0},y_{0}),y(t;t_{0},x_{0},y_{0}))\) is a solution of (1.1). Then there exist three positive constants \(\beta_{1}\), \(\beta_{2}\), \(\beta_{3}\) and a Lyapunov function \(V(t,x,y)\) defined on \(I\times\mathbb{R}^{2}\) which satisfies the following conditions:

  1. (i)

    \(\beta_{1}\Vert(x,\phi_{p^{*}}(y))\Vert_{p}^{p} \le V(t,x,y) \le \beta_{2}\Vert(x,\phi_{p^{*}}(y))\Vert_{p}^{p}\);

  2. (ii)

    \(\dot{V}_{\text{(1.1)}}(t,x,y) \le-\beta_{3}\Vert(x,\phi _{p^{*}}(y))\Vert_{p}^{p}\).

Proof

From the assumption, all solutions \((x(s;t,x,y),y(s;t,x,y))\) of (1.1) passing through a point \((x,y) \in\mathbb{R}^{2}\) at \(t \in I\) satisfy

$$\bigl\Vert \bigl(x(s;t,x,y),\phi_{p^{*}}\bigl(y(s;t,x,y)\bigr)\bigr) \bigr\Vert _{p} \le\beta\bigl\Vert \bigl(x,\phi _{p^{*}}(y) \bigr)\bigr\Vert _{p} $$

for \(s \ge t\). Therefore, we can consider the function

$$\overline{v}(s;t,x,y) = \sup\bigl\Vert \bigl(x(s;t,x,y),\phi _{p^{*}} \bigl(y(s;t,x,y)\bigr)\bigr)\bigr\Vert _{p} $$

for \(s \ge t\), where the supremum is taken over all solutions of (1.1) passing through the point \((x,y)\) at t. Note that if we suppose the uniqueness of solutions of (1.1) for the initial value problem, then \(\overline {v}(s;t,x,y) = \Vert(x(s;t,x,y),\phi_{p^{*}}(y(s;t,x,y)))\Vert_{p}\) holds for \(s \ge t\). However, we can prove this theorem without requiring the uniqueness of solutions. Let \(V(t,x,y)\) be defined by

$$V(t,x,y) = \int_{t}^{t+T}\overline{v}^{p}(s;t,x,y) \, ds, $$

where \(T = (1/\lambda)\log (\beta\sqrt[p]{2} )\) is a constant. From the assumption, we obtain the following estimate:

$$\begin{aligned} V(t,x,y) &\le\int_{t}^{t+T}e^{-p\lambda(s-t)}\, ds \bigl(\beta{\bigl\Vert \bigl(x,\phi _{p^{*}}(y)\bigr)\bigr\Vert _{p}} \bigr)^{p} \\ &= \frac{\beta^{p} (1-e^{-p\lambda T} )}{p\lambda}\bigl\Vert \bigl(x,\phi _{p^{*}}(y)\bigr)\bigr\Vert _{p}^{p} = \beta_{2}\bigl\Vert \bigl(x, \phi_{p^{*}}(y)\bigr)\bigr\Vert _{p}^{p}. \end{aligned}$$

We will show that \(\beta_{1}\Vert(x,\phi_{p^{*}}(y))\Vert_{p}^{p} \le V(t,x,y)\). Since all coefficients of (1.1) are bounded on I, there exists an \(L > 0\) such that \(|\psi(s)| \le L\) for all \(s \in I\), where ψ is the continuous function given in Lemma 5.1. From Lemma 5.1, we have

$$\overline{v}^{p}(s;t,x,y) \ge\exp \biggl(\int_{t}^{s} \psi(\tau)\, d\tau \biggr)\bigl\Vert \bigl(x,\phi_{p^{*}}(y)\bigr)\bigr\Vert _{p}^{p} \ge e^{-L(s-t)}\bigl\Vert \bigl(x, \phi _{p^{*}}(y)\bigr)\bigr\Vert _{p}^{p} $$

for \(s \ge t\). Thus, we get

$$\begin{aligned} V(t,x,y) &\ge\int_{t}^{t+T}e^{-L(s-t)}\, ds\bigl\Vert \bigl(x,\phi_{p^{*}}(y)\bigr)\bigr\Vert _{p}^{p} \\ &= \frac{1-e^{-LT}}{L}\bigl\Vert \bigl(x,\phi_{p^{*}}(y)\bigr)\bigr\Vert _{p}^{p} = \beta_{1}\bigl\Vert \bigl(x, \phi_{p^{*}}(y)\bigr)\bigr\Vert _{p}^{p}. \end{aligned}$$

Therefore, condition (i) is satisfied.

We next prove condition (ii). Let \(h > 0\) and

$$\bigl(x(s),y(s)\bigr) = \bigl(x(s;t,x,y),y(s;t,x,y)\bigr)\quad \text{for } s \ge t. $$

From the definition of \(\overline{v}\), we see that

$$\overline{v}\bigl(s;u,x(u),y(u)\bigr) \le\overline{v}(s;t,x,y) $$

for \(t \le u \le s\). Then we get

$$\begin{aligned} V\bigl(t+h, x(t+h),y(t+h)\bigr) =& \int_{t+h}^{t+h+T} \overline {v}^{p}\bigl(s;t+h,x(t+h),y(t+h)\bigr)\, ds \\ \le&\int_{t+h}^{t+h+T}\overline{v}^{p}(s;t,x,y) \, ds \\ =& V(t,x,y)+\int_{t+T}^{t+h+T}\overline{v}^{p}(s;t,x,y) \, ds -\int_{t}^{t+h}\overline{v}^{p}(s;t,x,y) \, ds \\ \le& V(t,x,y)+\int_{t+T}^{t+h+T}\beta^{p}e^{-p\lambda(s-t)} \bigl\Vert \bigl(x,\phi _{p^{*}}(y)\bigr)\bigr\Vert _{p}^{p}\, ds \\ &{} -\int_{t}^{t+h}e^{-L(s-t)}\bigl\Vert \bigl(x,\phi_{p^{*}}(y)\bigr)\bigr\Vert _{p}^{p}\, ds \\ =& V(t,x,y) \\ &{} + \biggl(\frac{\beta^{p}e^{-p\lambda T}(1-e^{-p\lambda h})}{p\lambda }-\frac{1-e^{-Lh}}{L} \biggr)\bigl\Vert \bigl(x,\phi_{p^{*}}(y)\bigr)\bigr\Vert _{p}^{p} \end{aligned}$$

from the assumption and Lemma 5.1. Therefore, we can estimate that

$$\begin{aligned}& \frac{1}{h}\bigl(V\bigl(t+h, x(t+h),y(t+h)\bigr)-V(t,x,y)\bigr) \\& \quad \le \biggl(\frac{\beta^{p}e^{-p\lambda T}(1-e^{-p\lambda h})}{p\lambda h}-\frac{1-e^{-Lh}}{Lh} \biggr)\bigl\Vert \bigl(x,\phi_{p^{*}}(y)\bigr)\bigr\Vert _{p}^{p}. \end{aligned}$$

From this inequality and

$$\lim_{h\to0} \biggl(\frac{\beta^{p}e^{-p\lambda T}(1-e^{-p\lambda h})}{p\lambda h}-\frac{1-e^{-Lh}}{Lh} \biggr) = \beta^{p}e^{-p\lambda T}-1 = -\frac{1}{2}, $$

we obtain

$$\dot{V}_{\text{(1.1)}}(t,x,y) \le-\frac{1}{2}\bigl\Vert \bigl(x,\phi _{p^{*}}(y)\bigr)\bigr\Vert _{p}^{p} = - \beta_{3}\bigl\Vert \bigl(x,\phi_{p^{*}}(y)\bigr)\bigr\Vert _{p}^{p}. $$

This completes the proof of Theorem 5.2. □
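To make the construction of \(V\) concrete, consider the decoupled case \(a_{11}=a_{22}=-1\), \(a_{12}=a_{21}=0\) with \(p=3\) (an illustrative choice of ours), for which \(\beta=1\) and \(\lambda=\min\{1,p^{*}-1\}=1/2\) are admissible. The sketch below (Python) evaluates \(V(t,x,y) = \int_{t}^{t+T}\overline{v}^{p}\,ds\) by quadrature along the explicit solution and checks the sandwich of condition (i).

```python
import math

p, ps = 3.0, 1.5      # p = 3 and its conjugate p* = 1.5
lam, beta = 0.5, 1.0  # admissible rate and constant for this demo system
T = math.log(beta * 2.0 ** (1.0 / p)) / lam  # T = (1/lam) log(beta 2^{1/p})
L = 3.0               # |psi| = |min(-p, -p*)| = 3 for these coefficients
beta1 = (1.0 - math.exp(-L * T)) / L
beta2 = beta ** p * (1.0 - math.exp(-p * lam * T)) / (p * lam)

def V(x, y, n=20000):
    # V(t,x,y) = int_t^{t+T} ||(x(s), phi_{p*}(y(s)))||_p^p ds along the
    # explicit solution x(s) = x e^{-(s-t)}, y(s) = y e^{-(s-t)}
    h = T / n
    total = 0.0
    for i in range(n + 1):  # trapezoidal rule
        s = i * h
        vp = abs(x) ** p * math.exp(-p * s) + abs(y) ** ps * math.exp(-ps * s)
        total += vp * (h / 2 if i in (0, n) else h)
    return total

ok = True
for (x, y) in [(0.7, -0.2), (-1.3, 2.0), (0.0, 0.5)]:
    w = abs(x) ** p + abs(y) ** ps  # ||(x, phi_{p*}(y))||_p^p
    ok = ok and (beta1 * w - 1e-9 <= V(x, y) <= beta2 * w + 1e-9)
print(ok)  # True: beta1 w <= V <= beta2 w, i.e. condition (i)
```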

Note that the three positive constants \(\beta_{1}\), \(\beta_{2}\), \(\beta_{3}\) in Theorem 5.2 are independent of the size of \(\|(x_{0},\phi_{p^{*}}(y_{0}))\|_{p}\). The second converse theorem is as follows.

Theorem 5.3

Suppose that there exist a \(\lambda> 0\) and a \(\beta> 0\) such that

$$\bigl\Vert \bigl(x(t;t_{0},x_{0},y_{0}), \phi_{p^{*}}\bigl(y(t;t_{0},x_{0},y_{0}) \bigr)\bigr)\bigr\Vert _{p} \le\beta e^{-\lambda(t-t_{0})}\bigl\Vert \bigl(x_{0},\phi_{p^{*}}(y_{0})\bigr)\bigr\Vert _{p} $$

for all \(t \ge t_{0} \ge0\), where \((x(t;t_{0},x_{0},y_{0}),y(t;t_{0},x_{0},y_{0}))\) is a solution of (1.1). Then there exists a Lyapunov function \(V(t,x,y)\) defined on \(I\times\mathbb{R}^{2}\) which satisfies the following conditions:

  1. (i)

    \(\Vert(x,\phi_{p^{*}}(y))\Vert_{p} \le V(t,x,y) \le\beta\Vert (x,\phi_{p^{*}}(y))\Vert_{p}\);

  2. (ii)

    \(\dot{V}_{\text{(1.1)}}(t,x,y) \le-\lambda V(t,x,y)\).

Proof

Let \(V(t,x,y)\) be defined by

$$V(t,x,y) = \sup_{\sigma\ge0}\overline{v}(t+\sigma;t,x,y)e^{\lambda \sigma}, $$

where \(\overline{v}\) is the function given in the proof of Theorem 5.2. It is clear that \(\Vert(x,\phi_{p^{*}}(y))\Vert_{p} \le V(t,x,y)\). On the other hand, it is easy to see that

$$V(t,x,y) \le\sup_{\sigma\ge0}\beta e^{-\lambda\sigma}\bigl\Vert \bigl(x,\phi _{p^{*}}(y)\bigr)\bigr\Vert _{p} e^{\lambda\sigma} = \beta\bigl\Vert \bigl(x,\phi_{p^{*}}(y)\bigr)\bigr\Vert _{p}, $$

by the assumption.

We next prove condition (ii). Let \(h > 0\) and

$$\bigl(x(s),y(s)\bigr) = \bigl(x(s;t,x,y),y(s;t,x,y)\bigr)\quad \text{for } s \ge t. $$

Then we get

$$\begin{aligned} V\bigl(t+h,x(t+h),y(t+h)\bigr) &= \sup_{\sigma\ge0}\overline{v} \bigl(t+h+\sigma ;t+h,x(t+h),y(t+h)\bigr)e^{\lambda\sigma} \\ &= \sup_{\tau\ge h}\overline{v}\bigl(t+\tau;t+h,x(t+h),y(t+h) \bigr)e^{\lambda\tau }e^{-\lambda h} \\ &\le\sup_{\tau\ge h}\overline{v}(t+\tau;t,x,y)e^{\lambda\tau }e^{-\lambda h} \\ &\le\sup_{\tau\ge0}\overline{v}(t+\tau;t,x,y)e^{\lambda\tau }e^{-\lambda h} = V(t,x,y)e^{-\lambda h} \end{aligned}$$

from \(\overline{v}(s;u,x(u),y(u)) \le\overline{v}(s;t,x,y)\) for \(t \le u \le s\). Therefore, we can estimate that

$$\frac{V(t+h,x(t+h),y(t+h))-V(t,x,y)}{h} \le\frac{e^{-\lambda h}-1}{h}V(t,x,y). $$

From this inequality and

$$\lim_{h\to0}\frac{e^{-\lambda h}-1}{h} = -\lambda, $$

we obtain

$$\dot{V}_{\text{(1.1)}}(t,x,y) \le-\lambda V(t,x,y). $$

This completes the proof of Theorem 5.3. □
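For the decoupled system \(x' = -x\), \(y' = -y\) with \(p=3\) (an illustrative choice of ours, admitting \(\beta=1\) and \(\lambda=1/2\)), the supremum defining \(V\) can be evaluated on a grid. Since \(\beta=1\) here, condition (i) forces \(V(t,x,y) = \Vert(x,\phi_{p^{*}}(y))\Vert_{p}\), and condition (ii) reduces to decay of this norm along solutions:

```python
import math

p, ps, lam = 3.0, 1.5, 0.5  # demo: x' = -x, y' = -y, with beta = 1

def vbar(sigma, x, y):
    # ||(x(t+sigma), phi_{p*}(y(t+sigma)))||_p along the explicit flow
    return (abs(x) ** p * math.exp(-p * sigma)
            + abs(y) ** ps * math.exp(-ps * sigma)) ** (1.0 / p)

def V(x, y):
    # sup_{sigma >= 0} vbar(t+sigma; t, x, y) e^{lam sigma}, on a sigma-grid
    return max(vbar(0.01 * k, x, y) * math.exp(lam * 0.01 * k)
               for k in range(2001))

x0, y0 = 0.9, -0.4
norm0 = (abs(x0) ** p + abs(y0) ** ps) ** (1.0 / p)

# condition (i) with beta = 1: V coincides with the p-norm
gap = abs(V(x0, y0) - norm0)

# condition (ii): V decays at rate lam along the solution
ok = all(
    V(x0 * math.exp(-t), y0 * math.exp(-t))
    <= V(x0, y0) * math.exp(-lam * t) + 1e-12
    for t in (0.5, 1.0, 2.0)
)
print(gap, ok)  # gap is zero up to rounding; ok is True
```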