1 Introduction

It is well known that there exists a large class of systems whose states are always nonnegative in the real world, for example, biological systems, chemical processes, economic systems, and so on. We call them positive systems with a certain stationary Schrödinger operator [1, 2, 3]. In particular, switched positive linear systems (SPLSs) with respect to the Schrödinger operator, which consist of subsolutions of the stationary Schrödinger equation, are also found in many practical systems. They have broad applications in TCP congestion control, formation flying, and image processing [4], to name a few.

As is well known, the switching law design of switched systems with respect to the Schrödinger operator has always been a topic of general interest [5, 6]. Generally, switching laws are divided into time-dependent switching and state-dependent switching. Existing results on SPLSs include few results on state-dependent switching design. The current results mainly concern the existence of a common Schrödinger linear copositive Lyapunov function for SPLSs and the stabilization design of SPLSs based on multiple Schrödinger Lyapunov functions. Almost all of these results require the premise that the subsystems are Hurwitz stable. In real control systems, however, there are many systems whose subsystems are not stable, i.e., the subsystem matrices are not Hurwitz (see Example 1). It is natural to ask how to establish the stability of SPLSs with Schrödinger unstable subsystems. This motivates the present study.

Based on the above discussion, this paper addresses the state-feedback switching design of SPLSs that contain unstable subsystems. For switched linear systems (not necessarily positive), when the systems admit a stable convex combination, a state-feedback switching law is designed in [7, 8, 9] such that the system is uniformly exponentially stable. With the aid of these results, we construct a state-feedback switching law such that SPLSs are exponentially stable [10, 11]. We also establish a necessary and sufficient condition for the stability of Schrödinger SPLSs with two subsystems.

This paper is organized as follows. In Section 2 we give some preliminaries; meanwhile, an example is presented to motivate the research. In Section 3, we consider the stability of the continuous-time system and design the state-feedback switching law. In Section 4, we present a simulation example.

Notation

In the rest of the paper, the set of real numbers, the space of n-dimensional real vectors, and the space of \(n\times n\) matrices with real entries are denoted by ℜ, \(\Re^{n}\), and \(\Re^{n\times n}\), respectively. The sets of nonnegative and positive integers are denoted by \(\mathbb{N}\) and \(\mathbb{N}_{+}\), respectively. Let \(I_{n}\), \(A^{T}\), and \(\|\cdot\|\) denote the \(n\times n\) identity matrix, the transpose of the matrix A, and the Euclidean norm, respectively.

Let \(v_{i}\) denote the ith component of \(v\in\Re^{n}\). \(v\succ0\) (\(v \succeq0\)) denotes that all components of v are positive (nonnegative), i.e., \(v_{i}>0\) (\(v_{i}\geq0\)). Similarly, we define \(v\prec0\) and \(v\preceq0\). The minimal and maximal components of v are denoted by \(\underline{\lambda}_{v}\) and \(\overline{\lambda}_{v}\), respectively.

Let A be a square matrix. If its off-diagonal elements are all nonnegative real numbers, then we say that A is a Metzler matrix.
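For readers who want to check the Metzler property numerically, the following Python sketch (our own illustration; the function name is not from the paper) tests whether a square matrix has nonnegative off-diagonal entries.

```python
import numpy as np

def is_metzler(A, tol=0.0):
    """Return True if every off-diagonal entry of the square matrix A is >= -tol."""
    A = np.asarray(A, dtype=float)
    off_diag = A - np.diag(np.diag(A))  # zero out the diagonal entries
    return bool(np.all(off_diag >= -tol))

# A Metzler matrix (negative entries only on the diagonal) and a non-Metzler one
print(is_metzler([[-2.0, 0.5], [1.0, -3.0]]))   # True
print(is_metzler([[-2.0, -0.5], [1.0, -3.0]]))  # False
```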

2 Preliminaries

The continuous-time Schrödinger switched linear system is defined as follows:

$$ \dot{x}(t)=A_{\sigma(t)}x(t), $$
(2.1)

where \(t\in(0,\infty)\), \(\sigma(t)\) is a piecewise constant switching signal taking values in the finite set \(S=\{1,2,\ldots,N \}\), and \(A_{i}\in\Re^{n\times n}\) (\(i\in S\)) are the Schrödinger system matrices.

Assumption 1

For each \(i\in S\), the matrix \(A_{i}\) of system (2.1) is a Metzler matrix.

Definition 1

If a switching signal \(\sigma(t)\) depends on system states and their past values, i.e., \(\sigma (t^{+})=\sigma(x(t),\sigma(t^{-}))\), then we say that it is a state-feedback switching law.

Let \(x_{0}\) be a given initial state. The switching signal σ is said to be well defined if the switched system admits a solution for all forward time and there are only finitely many switching instants in any finite time interval.

Lemma 1

The matrix A is a Metzler matrix if and only if the continuous-time system

$$ \dot{x}(t)=Ax(t) $$
(2.2)

is positive, i.e., \(x(t)\succeq0\) holds for all \(t\in(0,\infty)\) whenever \(x(0)\succeq0\).

Proof

Sufficiency. Suppose A is a Metzler matrix. Choose \(c\geq0\) such that \(A+cI_{n}\) is entrywise nonnegative. Then

$$e^{At}=e^{-ct}e^{(A+cI_{n})t}=e^{-ct}\sum_{k=0}^{\infty}\frac{t^{k}}{k!}(A+cI_{n})^{k} $$

is entrywise nonnegative for every \(t\geq0\). Hence \(x(t)=e^{At}x(0)\succeq0\) whenever \(x(0)\succeq0\), i.e., system (2.2) is positive.

Necessity. Suppose that system (2.2) is positive but \(a_{ij}<0\) for some \(i\neq j\), where \(a_{ij}\) denotes the \((i,j)\) entry of A. Take \(x(0)\) to be the jth standard basis vector. Then \(x_{i}(0)=0\) and \(\dot{x}_{i}(0)=a_{ij}<0\), so \(x_{i}(t)<0\) for all sufficiently small \(t>0\), which contradicts the positivity of (2.2). Hence A is a Metzler matrix.

Thus we complete the proof of Lemma 1. □
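As a numerical illustration of Lemma 1 (not part of the original argument), the sketch below checks that \(e^{At}\) is entrywise nonnegative for a Metzler matrix, which is exactly the positivity property of system (2.2); the sample matrix and tolerance are our own choices.

```python
import numpy as np
from scipy.linalg import expm

# A Metzler (but not Hurwitz) matrix; cf. A_2 in Example 1 below
A = np.array([[2.0, 0.2],
              [0.2, -2.0]])

# For a Metzler matrix, the state-transition matrix e^{At} is entrywise
# nonnegative for all t >= 0, so x(t) = e^{At} x0 stays in the nonnegative
# orthant whenever x0 >= 0 (componentwise).
for t in (0.1, 1.0, 5.0):
    Phi = expm(A * t)
    assert np.all(Phi >= -1e-12), "positivity violated"
print("e^{At} is entrywise nonnegative at the sampled times")
```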

Lemma 2

([9])

Let \(A\in\Re^{n\times n}\) be a Metzler matrix. Then the following statements are equivalent:

  1. (i)

    A is Hurwitz.

  2. (ii)

    There exists some vector \(v\succ0\) in \(\Re^{n}\) satisfying \(Av\prec0\).

Proof

A complete proof can be found in [9]; here we only sketch the argument.

(i) ⇒ (ii). If the Metzler matrix A is Hurwitz, then \(-A^{-1}=\int_{0}^{\infty}e^{At}\, dt\) is entrywise nonnegative and nonsingular, because \(e^{At}\) is entrywise nonnegative for every \(t\geq0\) by Lemma 1. Hence, for any \(w\succ0\), the vector \(v=-A^{-1}w\) satisfies \(v\succ0\) and \(Av=-w\prec0\).

(ii) ⇒ (i). Suppose \(Av\prec0\) for some \(v\succ0\). By the Perron–Frobenius theory for Metzler matrices, the spectral abscissa μ of A is an eigenvalue of A admitting a left eigenvector \(u\succeq0\), \(u\neq0\). Then \(\mu u^{T}v=u^{T}Av<0\) while \(u^{T}v>0\), so \(\mu<0\), i.e., A is Hurwitz.

Finally, since \(A^{T}\) is also Metzler and is Hurwitz if and only if A is, there exists \(v\succ0\) with \(A^{T}v\prec0\) whenever A is Hurwitz. For the system (2.2), the function \(V=x^{T}v\) is then a linear copositive Lyapunov function (LCLF), because \(\dot{V}=x^{T}A^{T}v<0\) for every \(x\succ0\). □

Finally, an example is presented to motivate the main results.

Example 1

Let us consider system (2.1) with two subsystems, where

$$A_{1}=\left ( \textstyle\begin{array}{@{}c@{\quad}c@{}} -4 & 0.2\\ 0.2& 2 \end{array}\displaystyle \right ) \quad \text{and}\quad A_{2}=\left ( \textstyle\begin{array}{@{}c@{\quad}c@{}} 2 & 0.2\\ 0.2& -2 \end{array}\displaystyle \right ). $$

For the first subsystem matrix \(A_{1}\), there does not exist \(v\succ0\) satisfying \(A_{1}^{T}v\prec0\). Likewise, for the second one, there does not exist \(v'\succ0\) satisfying \(A_{2}^{T}v'\prec0\).

Example 1 demonstrates that the two subsystem matrices are not Hurwitz. In spite of this, there exist convex combinations \(A_{0}\) of \(A_{1}\) and \(A_{2}\) that are Metzler and Hurwitz, i.e., \(A_{0}=\lambda_{1}A_{1}+\lambda_{2}A_{2}\) is a Metzler and Hurwitz matrix for suitable \(\lambda_{1},\lambda_{2}\in(0,1)\) with \(\lambda_{1}+\lambda_{2}=1\). For example, choosing \(\lambda_{1}=0.4\) and \(\lambda_{2}=0.6\), we see that

$$A_{0}=\left ( \textstyle\begin{array}{@{}c@{\quad}c@{}} -0.4 & 0.2\\ 0.2& -0.4 \end{array}\displaystyle \right ) $$

is a Metzler and Hurwitz matrix.
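The following Python sketch (ours, for illustration only) reproduces the observations of Example 1: the spectral abscissae of \(A_{1}\) and \(A_{2}\) are positive, while the convex combination \(A_{0}=0.4A_{1}+0.6A_{2}\) is Metzler and Hurwitz.

```python
import numpy as np

A1 = np.array([[-4.0, 0.2], [0.2, 2.0]])
A2 = np.array([[2.0, 0.2], [0.2, -2.0]])

def spectral_abscissa(A):
    """Largest real part of the eigenvalues; A is Hurwitz iff this is negative."""
    return np.max(np.real(np.linalg.eigvals(A)))

print(spectral_abscissa(A1))   # positive, so A1 is not Hurwitz
print(spectral_abscissa(A2))   # positive, so A2 is not Hurwitz

A0 = 0.4 * A1 + 0.6 * A2       # the convex combination chosen in Example 1
print(A0)                      # [[-0.4, 0.2], [0.2, -0.4]], a Metzler matrix
print(spectral_abscissa(A0))   # -0.2 < 0, so A0 is Hurwitz
```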

3 Main results

First, we define the switching rule. Suppose that a stable convex combination of the system matrices is given:

$$A_{0}=\sum^{N}_{i=1}w_{i}A_{i}, $$

where \(\sum^{N}_{i=1}w_{i}=1\) and \(w_{i}\in(0,1)\).

Since system (2.1) is positive, \(A_{i}\) is a Metzler matrix by Lemma 1 for every \(i\in S\). It is obvious that \(A_{0}\) is also a Metzler matrix. Since \(A_{0}\) is Metzler and Hurwitz, so is \(A_{0}^{T}\); hence, by Lemma 2, there exists \(v\succ0\) in \(\Re^{n}\) satisfying \(A_{0}^{T}v\prec0\). Without loss of generality, we select a vector \(\mathbf{e}\in\Re^{n}\) such that \(A_{0}^{T}v=-\mathbf{e}\), where \(\mathbf{e}\succ0\). Denote \(\mathbf{\ell}_{i}=A^{T}_{i}v\), \(i\in S\).
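In practice, a vector v with \(A_{0}^{T}v\prec0\) can be computed by linear programming (as is done with Matlab's linprog in Section 4). The sketch below is our own feasibility formulation using SciPy; the margin eps, which turns the strict inequalities into non-strict ones, and all identifiers are assumptions of the sketch.

```python
import numpy as np
from scipy.optimize import linprog

def find_copositive_vector(A0, eps=0.1):
    """Sketch: find v > 0 with A0^T v < 0 via a feasibility LP (names are ours)."""
    n = A0.shape[0]
    A_ub = A0.T                      # constraints: A0^T v <= -eps * 1
    b_ub = -eps * np.ones(n)
    bounds = [(eps, None)] * n       # v_i >= eps enforces v > 0 with a margin
    # Any bounded objective works for a feasibility problem; minimize sum(v).
    res = linprog(c=np.ones(n), A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    if not res.success:
        raise ValueError("no certificate found: A0 may not be Hurwitz")
    v = res.x
    e = -A0.T @ v                    # e > 0 since the constraint forces A0^T v <= -eps
    return v, e

A0 = np.array([[-0.4, 0.2], [0.2, -0.4]])   # from Example 1
A1 = np.array([[-4.0, 0.2], [0.2, 2.0]])
A2 = np.array([[2.0, 0.2], [0.2, -2.0]])
v, e = find_copositive_vector(A0)
ell = [A1.T @ v, A2.T @ v]                  # ell_i = A_i^T v, used by the switching rule
print(v, e, ell)
```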

Remark 1

Indeed, as long as the system matrices admit a stable linear combination \(A_{0}=\sum^{N}_{i=1}w'_{i}A_{i}\) for \(w'_{i}>0\), one can find a stable convex combination by choosing \(w_{i}=\frac{w'_{i}}{\sum^{N}_{i=1}w'_{i}}\). This reduces the difficulty of selecting the matrix \(A_{0}\).

Switching rule 1

  1. (i)

    For any initial state \(x(r_{0})=x_{0}\), select

    $$i_{0}=\arg \min_{i\in S}\bigl\{ x_{0}^{T} \mathbf{\ell}_{i}\bigr\} , $$

    and then define \(\tau(r_{0})=i_{0}\), where argmin denotes the argument that minimizes the function.

  2. (ii)

    The first switching time instant is selected as

    $$r_{1}=\inf\bigl\{ r\geq r_{0}| x(r)^{T}\mathbf{ \ell}_{\tau (r_{0})}>-r_{\tau (r_{0})}x(r)^{T}\mathbf{e}, 0\leq r-r_{0}< \tau\bigr\} , $$

    or

    $$r_{1}=r_{0}+\tau, $$

    where \(\tau>0\) is a given maximum dwell time and \(r_{\tau(r_{0})}\in(0,1)\) is a given constant; the second alternative applies when no such r exists in \([r_{0},r_{0}+\tau)\). Thus, the switching index is determined by

    $$i_{1}=\arg \min_{i\in S}\bigl\{ x(r_{1})^{T} \mathbf{\ell}_{i}\bigr\} , $$

    and \(\tau(r_{1})=i_{1}\).

  3. (iii)

    The switching time instants are defined by

    $$r_{j+1}=\inf\bigl\{ r\geq r_{j}| x(r)^{T}\mathbf{ \ell}_{\tau(r_{j})}>-r_{\tau(r_{j})}x(r)^{T}\mathbf{e}, 0\leq r-r_{j}< \tau\bigr\} , $$

    or

    $$r_{j+1}=r_{j}+\tau. $$

    Moreover, the switching index sequences are

    $$i_{j+1}=\arg \min_{i\in S}\bigl\{ x(r_{j+1})^{T} \mathbf{\ell}_{i}\bigr\} , $$

    and \(\tau(r_{j+1})=i_{j+1}\), where \(r_{\tau(r_{j})}\in(0,1)\), \(j\in\mathbb{N}\).

Remark 2

From Switching rule 1, it is possible that \(i_{1}=i_{0}\). Furthermore, it is also possible that \(i_{j+1}=i_{j}\) for \(j\in\mathbb{N_{+}}\). We present a simple discussion of the statement. Assume \(i_{j}=\arg \min_{i\in S}\{x(r_{j})^{T}\mathbf{\ell }_{i}\}\). If \(\min_{i\in S}\{x(r_{j+1})^{T}\mathbf{\ell}_{i}\} =x(r_{j+1})^{T}\mathbf{\ell}_{m}\) and \(\min_{i\in S}\{ x(r_{j})^{T}\mathbf{\ell}_{i}\}=x(r_{j})^{T}\mathbf{\ell}_{m}\), where \(m\in S\), then \(i_{j}=i_{j+1}=m\).
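For concreteness, the mode selection and switching test of Switching rule 1 can be written as the following Python sketch; the function and variable names are ours (modes are indexed from 0), rho plays the role of \(r_{\tau(r_{j})}\), and tau_max plays the role of τ.

```python
import numpy as np

def next_mode(x, current_mode, time_since_switch, ell, e, tau_max, rho):
    """Sketch of items (i)-(iii) of Switching rule 1 (identifiers are ours).
    ell: list of vectors ell_i = A_i^T v;  e: vector with A_0^T v = -e;
    tau_max: the constant tau > 0;  rho: the constant r_{tau(r_j)} in (0, 1).
    Returns (switch_now, new_mode); modes are 0-based here."""
    # Switch when the active subsystem violates x^T ell_i <= -rho * x^T e,
    # or when the maximum dwell time tau_max has elapsed.
    violated = x @ ell[current_mode] > -rho * (x @ e)
    if violated or time_since_switch >= tau_max:
        new_mode = int(np.argmin([x @ l for l in ell]))  # argmin of x^T ell_i
        return True, new_mode
    return False, current_mode
```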

Theorem 1

Assume that there exists a stable convex combination of the system matrices for system (2.1). Then Switching rule 1 is well defined and system (2.1) is uniformly exponentially stable under the switching rule.

Proof

We first prove that the switching rule is well defined, i.e., that there is a positive lower bound on the dwell time between any two consecutive switching time instants. This implies that only finitely many switchings occur in any finite time interval.

Assume \(r_{m}\) and \(r_{m+1}\) are two consecutive switching time instants. Combining \(A_{0}=\sum_{i\in S}w_{i}A_{i}\), \(\mathbf{\ell}_{i}=A^{T}_{i}v\), \(i\in S\), and \(A_{0}^{T}v=-\mathbf{e}\) yields

$$w_{1}\mathbf{\ell}_{1}+w_{2}\mathbf{ \ell}_{2}+\cdots+w_{N}\mathbf {\ell }_{N}=- \mathbf{e}. $$

Furthermore,

$$w_{1}x^{T}\mathbf{\ell}_{1}+w_{2}x^{T} \mathbf{\ell}_{2}+\cdots +w_{N}x^{T}\mathbf{ \ell}_{N}=-x^{T}\mathbf{e}. $$

Since \(x(r_{m})^{T}\mathbf{\ell}_{\tau(r_{m})}=\min_{i\in S}\{ x(r_{m})^{T}\mathbf{\ell}_{i}\}\), and the minimum does not exceed the convex combination \(\sum_{i\in S}w_{i}x(r_{m})^{T}\mathbf{\ell}_{i}=-x(r_{m})^{T}\mathbf{e}\), it follows that

$$ x(r_{m})^{T}\mathbf{\ell}_{\tau(r_{m})} \leq-x(r_{m})^{T}\mathbf{e}. $$
(3.1)

For \(r\in[r_{m},r_{m+1})\), we obtain

$$ x(r)=e^{A_{\tau(r_{m})}(r-r_{m+1})} x(r_{m+1}) $$
(3.2)

by system (2.1).

Since \(r_{m+1}-r\leq r_{m+1}-r_{m}\leq\tau\) holds, there exists a positive constant δ such that \(\|e^{A_{\tau(r_{m})}(r-r_{m+1})}\|\leq \delta\). In detail, \(\delta=e^{-\frac{1}{2}\underline{\rho}(A_{\tau(r_{m})}+A_{\tau(r_{m})}^{T})\tau}\) if \(\underline{\rho}(A_{\tau(r_{m})}+A_{\tau(r_{m})}^{T})<0\), and \(\delta=1\) if \(\underline{\rho}(A_{\tau(r_{m})}+A_{\tau(r_{m})}^{T})\geq0\), where \(\underline{\rho}(\cdot)\) denotes the smallest eigenvalue of a symmetric matrix. So, it is clear that

$$ \bigl\Vert x(r)\bigr\Vert \leq\delta\bigl\Vert x(r_{m+1})\bigr\Vert . $$
(3.3)

Define the following function:

$$ f(r)=x(r)^{T}\mathbf{\ell}_{\tau(r_{m})}+x(r)^{T} \mathbf{e},\quad r\in[r_{m},r_{m+1}]. $$
(3.4)

If \(r_{m+1}=r_{m}+\tau\), then the dwell time equals \(\tau>0\) and there is nothing to prove; hence assume that the switch at \(r_{m+1}\) is triggered by the inequality in (ii)–(iii) of Switching rule 1. From (3.1), (3.2), and (iii) in Switching rule 1, it follows that

$$ f(r_{m})\leq0,\qquad f(r_{m+1})\geq(1-r_{\tau(r_{m})})x(r_{m+1})^{T} \mathbf{e}>0. $$

In addition, the time derivative of (3.4) is

$$ \dot{f}(r)=x(r)^{T}A_{\tau(r_{m})}^{T}(\mathbf{ \ell}_{\tau (r_{m})}+\mathbf{e}). $$

Together with (3.3), we have

$$ \bigl\vert \dot{f}(r)\bigr\vert =\bigl\vert x(r)^{T}A_{\tau(r_{m})}^{T}(\mathbf{\ell}_{\tau (r_{m})}+ \mathbf{e})\bigr\vert \leq\mu, $$
(3.5)

where \(\mu=\delta\varepsilon\Vert x(r_{m+1})\Vert\) and \(\varepsilon=\|A_{\tau(r_{m})}^{T}(\mathbf{\ell}_{\tau (r_{m})}+\mathbf {e})\|\). Applying the mean value theorem together with (3.5), one can deduce that

$$ f(r_{m+1})-f(r_{m})\leq\mu(r_{m+1}-r_{m}). $$
(3.6)

Then, since \(f(r_{m+1})-f(r_{m})\geq f(r_{m+1})\geq(1-r_{\tau(r_{m})})x(r_{m+1})^{T}\mathbf{e}\geq(1-r_{\tau(r_{m})})\underline{\lambda}_{\mathbf{e}}\Vert x(r_{m+1})\Vert\), we have from (3.6)

$$ r_{m+1}-r_{m}\geq \frac{(1-r_{\tau(r_{m})})\underline{\lambda}_{\mathbf{e}}}{\delta \varepsilon}. $$

Since \(r_{\tau(r_{m})}\in(0, 1)\), we have \(\frac{(1-r_{\tau (r_{m})})\underline{\lambda}_{\mathbf{e}}}{\delta\varepsilon}>0\). This implies that the dwell time between any two consecutive switching instants has a positive lower bound. Thus, the switching rule is well defined.
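The dwell-time lower bound derived above can be evaluated numerically for a given active subsystem. The sketch below (our own; it takes \(\underline{\rho}(\cdot)\) to be the smallest eigenvalue of a symmetric matrix, as in the estimate for δ) computes \(\frac{(1-r_{\tau(r_{m})})\underline{\lambda}_{\mathbf{e}}}{\delta\varepsilon}\).

```python
import numpy as np

def dwell_time_lower_bound(A_i, ell_i, e, rho, tau_max):
    """Sketch: evaluate (1 - rho) * min(e) / (delta * eps) for one active
    subsystem, with delta and eps defined as in the proof (names are ours)."""
    lam_min = np.min(np.linalg.eigvalsh(A_i + A_i.T))          # smallest eigenvalue of A + A^T
    delta = np.exp(-0.5 * lam_min * tau_max) if lam_min < 0 else 1.0
    eps = np.linalg.norm(A_i.T @ (ell_i + e))                  # ||A^T(ell + e)||
    return (1.0 - rho) * np.min(e) / (delta * eps)
```

By construction the returned value is positive whenever rho lies in (0, 1), which is exactly the property used to conclude that the switching rule is well defined.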

We now prove that system (2.1) is uniformly exponentially stable. Choose \(V(x(r))=x(r)^{T}v\). The time derivative of V is

$$ \dot{V}\bigl(x(r)\bigr)=x(r)^{T}A_{\tau(r_{m})}^{T}v=x(r)^{T} \mathbf{\ell }_{\tau(r_{m})} $$
(3.7)

for \(r\in[r_{m},r_{m+1})\).

By (ii) in Switching rule 1, we get from (3.7)

$$ \dot{V}\bigl(x(r)\bigr)\leq-r_{\tau(r_{m})}x(r)^{T} \mathbf{e}\leq-\frac{r_{\tau(r_{m})} \underline{\lambda}_{\mathbf{e}}}{\overline{\lambda}_{v}}V\bigl(x(r)\bigr). $$
(3.8)

By the comparison principle, we have

$$ V\bigl(x(r)\bigr)\leq e^{-\frac{r_{\tau(r_{m})}\underline{\lambda}_{\mathbf {e}}}{\overline{\lambda}_{v}}(r-r_{m})}V\bigl(x(r_{m})\bigr), \quad r \in[r_{m},r_{m+1}), $$
(3.9)

from (3.8).

Moreover, we obtain

$$ V\bigl(x(r)\bigr)\leq e^{-\frac{r_{\tau(r_{m})}\underline{\lambda}_{\mathbf {e}}}{\overline {\lambda}_{v}}(r-r_{m})} e^{-\frac{r_{\tau(r_{m-1})}\underline{\lambda}_{\mathbf{e}}}{\overline{\lambda}_{v}}(r_{m}-r_{m-1})}\cdots e^{-\frac{r_{\tau(r_{0})}\underline{\lambda}_{\mathbf {e}}}{\overline {\lambda}_{v}}(r_{1}-r_{0})}V \bigl(x(r_{0})\bigr), $$

where \(r\in[r_{m},r_{m+1})\).

Define \(\beta=\min_{i=0,1,\ldots,m}\{\frac{r_{\tau (r_{i})}\underline {\lambda}_{\mathbf{e}}}{\overline{\lambda}_{v}} \}\). Then we have

$$ V\bigl(x(r)\bigr)\leq e^{-\beta(r-r_{0})}V\bigl(x(r_{0}) \bigr) $$
(3.10)

and

$$ x(r)^{T}v\leq e^{-\beta(r-r_{0})}x(r_{0})^{T}v. $$
(3.11)

Since \(x(r)\succeq0\) and \(\sum_{i=1}^{n}x_{i}\geq\Vert x(r)\Vert\) for nonnegative vectors (the 1-norm dominates the Euclidean norm), we have from (3.10) and (3.11)

$$ x(r)^{T}v=\sum_{i=1}^{n}v_{i}x_{i} \geq\underline{\lambda}_{v} \sum_{i=1}^{n}x_{i} \geq\underline{\lambda}_{v}\bigl\Vert x(r)\bigr\Vert . $$

Similarly, using \(\sum_{i=1}^{n}x_{i}\leq\sqrt{n}\Vert x(r_{0})\Vert\),

$$ x(r_{0})^{T}v=\sum_{i=1}^{n}v_{i}x_{i} \leq\overline{\lambda}_{v} \sum_{i=1}^{n}x_{i} \leq\sqrt{n}\,\overline{\lambda}_{v}\bigl\Vert x(r_{0})\bigr\Vert . $$

Then we deduce that

$$ \bigl\Vert x(r)\bigr\Vert \leq\alpha e^{-\beta(r-r_{0})}\bigl\Vert x(r_{0})\bigr\Vert , \quad \forall r>r_{0}, $$

where \(\alpha=\frac{\sqrt{n}\,\overline{\lambda}_{v}}{\underline{\lambda}_{v}}\).

Thus, system (2.1) is uniformly exponentially stable. □
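For given v, e, and a common value rho of the constants \(r_{\tau(r_{i})}\), the decay constants of the final estimate can be evaluated as in the sketch below (our own; note that α includes the \(\sqrt{n}\) factor from the norm estimate used above).

```python
import numpy as np

def decay_constants(v, e, rho):
    """Sketch: alpha = sqrt(n) * max(v) / min(v), beta = rho * min(e) / max(v),
    assuming all constants r_{tau(r_i)} equal rho (names are ours)."""
    n = len(v)
    alpha = np.sqrt(n) * np.max(v) / np.min(v)
    beta = rho * np.min(e) / np.max(v)
    return alpha, beta

v = np.array([0.2110, 1.7313, 3.6115])   # the vectors reported in Example 2
e = np.array([0.1425, 0.0654, 0.1044])
print(decay_constants(v, e, rho=0.5))    # (alpha, beta) for the data of Section 4
```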

Next we introduce Corollary 1, which presents a necessary and sufficient condition for system (2.1) with two subsystems.

Corollary 1

Suppose \(N=2\) and consider the stabilization of system (2.1) by a state-feedback switching law in the sense of the Lyapunov function above. Then system (2.1) is stabilizable if and only if there exists a stable convex combination of the system matrices.

Proof

The ‘if’ part follows directly from Theorem 1. We only give the proof of the ‘only if’ part. The stabilizability of system (2.1) means that there exist a switching law \(\tau(r)\) and a common linear copositive Lyapunov function (CLCLF) \(W=x^{T}v\) with \(v\succ0\) satisfying

$$\dot{W}=x^{T}A^{T}_{\tau(r)}v< 0 $$

for any \(x\succ0\). It is easy to see that there exist \(\varsigma>0\) and a vector \(\mathbf{e}'\succ0\) satisfying

$$\dot{V}=x^{T}A^{T}_{1}v< -\varsigma x^{T}\mathbf{e}' $$

or

$$\dot{V}=x^{T}A^{T}_{2}v< -\varsigma x^{T}\mathbf{e}', $$

where \(\mathbf{e}'\in\Re^{n}\) and \(\mathbf{e}'\succ0\). That is to say, \(\dot{V}=x^{T}A^{T}_{1}v<-\varsigma x^{T}\mathbf{e}'\) whenever \(x^{T}A^{T}_{2}v\geq-\varsigma x^{T}\mathbf{e}'\), and \(\dot{V}=x^{T}A^{T}_{2}v<-\varsigma x^{T}\mathbf{e}'\) whenever \(x^{T}A^{T}_{1}v\geq-\varsigma x^{T}\mathbf{e}'\). Here, we only prove the first case; the second case can be obtained similarly. By homogeneity and compactness (restricting x to the unit simplex), there exists a positive real number μ such that \(-x^{T}A^{T}_{1}v-\varsigma x^{T}\mathbf{e}'>\mu\) whenever \(x^{T}A^{T}_{2}v+\varsigma x^{T}\mathbf{e}'\geq0\). Moreover, \(x(r)\) is bounded between any two consecutive switching instants. Thus, there exists \(\kappa>0\) satisfying

$$\kappa\geq x^{T}A^{T}_{2}v+\varsigma x^{T}\mathbf{e}'>0. $$

Set \(\varepsilon=\frac{\mu}{\kappa}\). We obtain

$$-x^{T}A^{T}_{1}v-\varsigma x^{T} \mathbf{e}'-\varepsilon \bigl(x^{T}A^{T}_{2}v+ \varsigma x^{T}\mathbf{e}'\bigr)>0. $$

Therefore,

$$x^{T}A^{T}_{1}v+\varepsilon x^{T}A^{T}_{2}v< -(1+ \varepsilon)\varsigma x^{T}\mathbf{e}'. $$

Define \(w_{1}=\frac{1}{1+\varepsilon}\), \(w_{2}=\frac{\varepsilon}{1+\varepsilon}\). The above inequality shows that \(A_{0}=w_{1}A_{1}+w_{2}A_{2}\) satisfies \(A_{0}^{T}v\prec0\), i.e., it is a stable convex combination of the system matrices. □

4 Numerical example

Finally, a numerical example is given to illustrate our main results.

Example 2

Let us consider the system (2.1) with

$$\begin{aligned}& A_{1}=\left ( \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{}} -1.2 &0.8&0.7\\ 0.2& -0.7&1.3\\ 1.7&0.2&-1.5 \end{array}\displaystyle \right ),\qquad A_{2}=\left ( \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{}} -0.6 &0.1&0.4\\ 0.3& -0.5&0.3\\ 0.2&0.4&-0.4 \end{array}\displaystyle \right ), \\& A_{3}=\left ( \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{}} -9.3 &0.1&0.9\\ 0.5& -3&5.1\\ 0&1.4&-2.4 \end{array}\displaystyle \right ). \end{aligned}$$

Choose \(w_{1}=w_{2}=0.1\) and \(w_{3}=0.8\). The stable convex combination of \(A_{1}\), \(A_{2}\), and \(A_{3}\) is

$$A_{0}=\left ( \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{}} -7.62 &0.17&0.83\\ 0.45& -2.52&4.24\\ 0.19&1.18&-2.11 \end{array}\displaystyle \right ). $$

Then we get \(v=(0.2110\ 1.7313\ 3.6115)^{T}\) and \(\mathbf{e}=(0.1425\ 0.0654\ 0.1044)^{T}\) by using the linprog function in Matlab. Let \(\tau =2\) and \(r_{\tau(r_{i})}=0.5\) for \(i=0, 1, 2, \ldots\) . Given the initial condition \(x_{0}=(4\ 2\ 3)^{T}\), item (i) in Switching rule 1 selects the third subsystem to be active first, since \(x_{0}^{T}\mathbf{\ell}_{3}<x_{0}^{T}\mathbf{\ell}_{2}<x_{0}^{T}\mathbf{\ell}_{1}\). Items (ii) and (iii) are then executed by a simple iterative process.
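A minimal simulation sketch of Example 2 under Switching rule 1 is given below; the forward-Euler integration, the step size, the horizon, and all identifiers are our own choices, so the printed numbers are only indicative of the exponential decay guaranteed by Theorem 1.

```python
import numpy as np

A = [np.array([[-1.2, 0.8, 0.7], [0.2, -0.7, 1.3], [1.7, 0.2, -1.5]]),
     np.array([[-0.6, 0.1, 0.4], [0.3, -0.5, 0.3], [0.2, 0.4, -0.4]]),
     np.array([[-9.3, 0.1, 0.9], [0.5, -3.0, 5.1], [0.0, 1.4, -2.4]])]

v = np.array([0.2110, 1.7313, 3.6115])        # values reported in Example 2
e = np.array([0.1425, 0.0654, 0.1044])
ell = [Ai.T @ v for Ai in A]                  # ell_i = A_i^T v
tau_max, rho = 2.0, 0.5                       # tau and r_{tau(r_i)} from Example 2

x = np.array([4.0, 2.0, 3.0])                 # initial state x0
mode = int(np.argmin([x @ l for l in ell]))   # item (i) of Switching rule 1 (0-based index)
dt, t_switch, t_end = 1e-3, 0.0, 10.0

for k in range(int(t_end / dt)):
    t = k * dt
    # items (ii)-(iii): switch when the condition is violated or after the dwell bound
    if x @ ell[mode] > -rho * (x @ e) or t - t_switch >= tau_max:
        mode = int(np.argmin([x @ l for l in ell]))
        t_switch = t
    x = x + dt * (A[mode] @ x)                # forward Euler step of dot{x} = A_mode x

print("final state:", x)                      # decays toward the origin
```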