1 Introduction

In the field of control theory, understanding system dynamics is not only an important part of controller design but also an objective in its own right. Besides a catalog of well-known and almost universally sought system properties defined in the state-space and input-space, such as stability, reachability, observability and controllability, there is an indefinite number of other, more specialized ones. System dynamics can be analyzed from different perspectives and, even when constraining oneself to state-space analysis, one can find many mutually non-exclusive properties which, combined or on their own, can become a key to solving many problems arising in control theory and controller design.

The main scope of this work is to present and analyze, in a general way, a set of patterns in the trajectories of discrete-time systems by comparing their evolution against a given region in the state-space. These patterns emerged from the well-established theory of invariant systems and, as such, rely on notions like that of a corner region. By building upon this framework, this work aims to create new perspectives in the field of invariant systems, with the goal of both widening and deepening the understanding of the behavior of dynamic systems.

The proposed notions can be used, for example, to shed light on the reachability of discrete systems, by ensuring that a given corner region cannot be bypassed when searching for points reachable from its exterior, and vice versa. What chiefly characterizes the proposed properties is that they capture specific behavior that holds independently of the control signals taken from a specific region. Therefore, the main issue of this work is not to determine the ability to force the system to behave in a certain way, but to determine the intrinsic properties that cause specific behavior independently of the excitations.

The theory of invariant systems upon which this paper is based has been developed over many years. The first significant concepts and results of invariance theory were established in [1] and [2]. The theory has been further investigated and developed in [3,4,5,6,7,8,9,10,11,12], among others. State-space invariance in particular is in a mature state owing to the amount of work done on positive systems [13] and [14]. In this work we focus on discrete-time control systems, expanding on the discrete-time invariant systems introduced and analyzed in [15], where the so-called region-invariance properties of discrete-time control systems are considered; there, the already known notion of positivity of control systems was generalized to general nonlinear discrete-time control systems, general regions in the state-space, and controls from polyhedral cones in the input-space. In this work we rely only on basic results from [15], such as the main definitions and results concerning region-invariance of linear and nonlinear systems in general form.

In this work, a geometric approach allows for a unified treatment of a broad class of corner regions, both nonlinear and linear. Namely, thanks to the proposed class of corner regions defined by means of diffeomorphisms, it is possible to uniquely transform any nonlinear or linear corner region, both in the state- and input-space, into the nonnegative orthants in \(\mathbb {R}^n\) and \(\mathbb {R}^m\), respectively. This provides both the possibility of a simplified geometric analysis of the issues discussed in this work and the possibility of giving alternative conditions that are simpler to verify in practice. It therefore presents an opportunity to create a fundamental common ground for the analysis of a broad class of dynamic systems.

The paper is organized as follows: Sect. 2 presents a brief characterization of nonlinear and linear discrete-time invariant control systems on corner regions (nonlinear and linear, respectively) in the state-space, with controls belonging to a region that is a polyhedral cone in the input-space. In Sect. 3, both nonlinear and linear discrete-time control systems are characterized in terms of various newly introduced properties associated with particular regions in the state-space, complemented with a set of examples for each property considered in both the nonlinear and the LTI case. Section 4 provides conclusions, with emphasis on potential future research avenues and the applicability of this work to control engineering problems.

1.1 Notation

This work relies on the notation described below.

The sets of natural numbers and natural numbers including zero are denoted by \(\mathbb {N}\) and \(\mathbb {N}_0\), respectively. The set of all real numbers is denoted by \(\mathbb {R}\). The notation \(\mathbb {R}^n\) refers to the n-dimensional vector space over the field of real numbers \(\mathbb {R}\). Non-negative and non-positive real numbers (both including zero) are denoted by \(\mathbb {R}_+\) and \(\mathbb {R}_-\), respectively. By \(\mathbb {R}^n_+\) (resp. \(\mathbb {R}^n_-\)) we mean the Cartesian product of n copies of \(\mathbb {R}_+\) (resp. \(\mathbb {R}_-\)), and call it the non-negative (non-positive) orthant. By \(\mathbb {R}^{n\times m}\) we denote the set of \(n\times m\) matrices with entries from the field \(\mathbb {R}\). The identity matrix of dimension \(n\times n\) is denoted by \(I_{n\times n}\). Let P denote a matrix, a vector or a vector-valued function. The notation \(P > 0\) (resp. \(P < 0\)) means that all elements of P are positive (resp. negative). The notation \(P\ge 0\) (resp. \(P\le 0\)) means that all elements of P are non-negative (resp. non-positive). By \(P \ngeq 0\) (resp. \(P\nleq 0\)) we mean that at least one element of P is negative (resp. positive). We call P a positive generalized permutation matrix if it possesses exactly one nonzero entry in each row and each column, and that entry is positive. A diagonal matrix P is called a strictly positive diagonal matrix if all its diagonal entries are positive.

A square matrix P is called monotone if, for all real vectors v, \(Pv\ge 0\) implies \(v\ge 0\). By \({{\,\mathrm{{Im_+}}\,}}P\) (resp. \({{\,\mathrm{{Im_{\ngeq }}}\,}}P\)) we mean the set of all linear combinations of the column vectors of matrix P with non-negative coefficients, not all zero (resp. with at least one negative coefficient). Let \(P\in \mathbb {R}^{n\times m}\) be a matrix; then by \({{\,\mathrm{{Vect}}\,}}P\) we denote the linear subspace of \(\mathbb {R}^n\) spanned by the column vectors of P. If P is a matrix, its ith column vector is denoted \(P_i\). For a set \(S\subset V\), by \(S^\textrm{c}\) we mean the complement of S, i.e., the set of elements of V that are not in S. For a set S, we denote the k-fold Cartesian product by \(S^k = S\times \cdots \times S\) (k times). The operation “\(\circ \)” denotes the composition of functions.

When dealing with time-dependent vectors and vector-valued functions, the subscript denotes the time index, whereas the superscript indexes vector components; e.g., \(x^i_k\) denotes the ith component of the vector x at time instant k.

2 Preliminaries

Since this work relies on the notion of state-space invariance, let us recall some of the basic concepts. For a more in-depth analysis and proofs of the invoked theorems and propositions, see [15].

We consider a discrete-time control system of the form

$$\begin{aligned} \Pi \,:\quad x_{k+1} = f(x_k,u_k), \end{aligned}$$
(1)

where \(x_k\in \mathbb {R}^n\) and \(u_k\in \mathbb {R}^m\) are the values of the state and input vectors at time index k, respectively, and \(f:\, \mathbb {R}^n\times \mathbb {R}^m\rightarrow \mathbb {R}^n\) is the system map, whose smoothness class is not essential here. By \({\bar{x}}_k = {\bar{x}}_k(x_0,{\bar{u}}_{k-1})\) we mean the trajectory of \(\Pi \), i.e., the sequence \((x_0,\ldots ,x_k)\) of states issued from \(x_0\) and excited by a control sequence \({\bar{u}}_{k-1} = (u_0,\ldots ,u_{k-1})\). When dealing with trajectories of indefinite length issued from a given point \(x_0\) at time \(k=0\), we use the notation \({\bar{x}} = {\bar{x}}(x_0,{\bar{u}})\), where the control sequence is \({\bar{u}} = (u_0,u_1,\ldots )\). By \(x_k = x_k(x_0,\bar{u}_{k-1})\) we denote the end-point of \({\bar{x}}_k(x_0,{\bar{u}}_{k-1})\).

For the sake of brevity, the time-step index k is sometimes omitted, leaving implicit time-dependence of state and input vectors.
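The trajectory notation above is simply the iteration of (1). The following minimal sketch illustrates how \({\bar{x}}_k(x_0,{\bar{u}}_{k-1})\) is generated; the scalar map used at the end is a toy example of our own, not a system from this paper.

```python
import numpy as np

def trajectory(f, x0, controls):
    """Iterate system (1): return the trajectory (x_0, ..., x_k)
    driven by the control sequence (u_0, ..., u_{k-1})."""
    xs = [np.asarray(x0, dtype=float)]
    for u in controls:
        xs.append(f(xs[-1], u))  # x_{j+1} = f(x_j, u_j)
    return xs

# Toy scalar system (illustrative only): x_{k+1} = 0.5*x_k + u_k.
f = lambda x, u: 0.5 * x + u
xs = trajectory(f, 4.0, [0.0, 0.0, 1.0])
print([float(x) for x in xs])  # [4.0, 2.0, 1.0, 1.5]
```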

Throughout this work, specific subsets of the state- and input-space are defined as in (2) and (3) below. A nonlinear corner region \(\mathscr {K}\) in the state-space is a set of the following form

$$\begin{aligned} \mathscr {K}= \{x\in \mathbb {R}^n:\; \varphi _i(x)\ge 0,\;1\le i\le n\}=\bigcap _{i=1}^n\{\varphi _i\ge 0\}, \end{aligned}$$
(2)

where \(\Phi =(\varphi _1, \ldots ,\varphi _n)^T\,:\,\mathbb {R}^n\rightarrow \mathbb {R}^n\) is a diffeomorphism (i.e., a bijection with both \(\Phi \) and \(\Phi ^{-1}\) differentiable).

A polyhedral cone \(\mathcal {W}\) in the input-space is a set of the following form

$$\begin{aligned} \mathcal {W}= \{u\in \mathbb {R}^m:\; w^iu\ge 0,\;1\le i\le m\} = \bigcap _{i=1}^m\{w^iu\ge 0\}, \end{aligned}$$
(3)

where \(w^i\) for \(i=1,\ldots ,m\) are rows of a nonsingular matrix \(W\in \mathbb {R}^{m\times m}\).

A global diffeomorphism \(\Phi :\mathbb {R}^n\rightarrow \mathbb {R}^n\), defining \(\mathscr {K}\) via (2), gives rise, through \({\tilde{x}} = \Phi (x)\), to x-coordinates on the source \(\mathbb {R}^n\) and \(\tilde{x}\)-coordinates on the target \(\mathbb {R}^n\). For clarity, and to simplify identification of the spaces we are dealing with in the following properties of the diffeomorphism \(\Phi \), which are essential for further considerations, we use \({\tilde{\mathbb {R}}}^n\) to denote the target \(\mathbb {R}^n\) space and, consequently, \({\tilde{\mathbb {R}}}^n_+\) to denote the nonnegative orthant in \({\tilde{\mathbb {R}}}^n\). These natural properties, which are immediate consequences of the definitions of the image and preimage under \(\Phi \), are the following:

(i) \(\Phi (\mathscr {K}) = {\tilde{\mathbb {R}}}^n_+ = \{{\tilde{x}}^i\ge 0\}\) and \(\Phi ^{-1}({\tilde{\mathbb {R}}}^n_+) = \mathscr {K}\);

(ii) \(\Phi (\mathscr {K}^\textrm{c}) = \left( {\tilde{\mathbb {R}}}^n_+\right) ^\textrm{c}\) and \(\Phi ^{-1}\left( \left( {\tilde{\mathbb {R}}}^n_+\right) ^\textrm{c}\right) = \mathscr {K}^\textrm{c}\).

Remark 1

The above result can, of course, also be applied to the map \(u\mapsto {\tilde{u}} = Wu\), which is an isomorphism from \(\mathbb {R}^m\) to \(\mathbb {R}^m\) defining a polyhedral cone \(\mathcal {W}\) by (3).

Definition 1

Let \(\mathscr {K}\) be a nonlinear corner region in \(\mathbb {R}^n\) and \(\mathcal {W}\) a polyhedral cone in \(\mathbb {R}^m\). A nonlinear system \(\Pi \) of the form (1) is said to be \((\mathscr {K},\mathcal {W})\)-invariant if its trajectories \({\bar{x}}={\bar{x}}(x_0,{\bar{u}}) = (x_0,x_1,\ldots )\) are such that \(x_i\in \mathscr {K}\), \(i\ge 1\), for each \(x_0\in \mathscr {K}\) and each \({\bar{u}}=(u_0,u_1,\ldots )\) with all \(u_i\in \mathcal {W}\), \(i\ge 0\).

The following characterization of invariant discrete-time control systems in the nonlinear case is important to our work.

Proposition 1

The following conditions are equivalent for the nonlinear system \(\Pi \):

(i) \(\Pi \) is \((\mathscr {K},\mathcal {W})\)-invariant;

(ii) \(\left( \varphi _i\circ f\right) (x,u)\ge 0\) for all \(1\le i\le n\), for each \(x\in \mathscr {K}\) and each \(u\in \mathcal {W}\);

(iii) \(\left( \Phi \circ f\right) \left( \Phi ^{-1}(\tilde{x}),W^{-1}{\tilde{u}}\right) \ge 0\) for each \({\tilde{x}}\in \mathbb {R}^n_+\) and each \({\tilde{u}}\in \mathbb {R}^m_+\).
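Condition (iii) is convenient for numerical falsification: sampling the nonnegative orthants in the tilde coordinates can disprove invariance by exhibiting a counterexample, while passing all samples only suggests invariance. A Monte-Carlo sketch under a toy positive linear system of our own choosing, with \(\Phi \) and W taken as identities:

```python
import numpy as np

def check_invariance_iii(f, Phi, Phi_inv, W_inv, n, m, samples=10000, seed=0):
    """Sample x~ in R^n_+ and u~ in R^m_+ and test condition (iii):
    (Phi o f)(Phi^{-1}(x~), W^{-1}u~) >= 0. Returns False with a
    counterexample if one is found."""
    rng = np.random.default_rng(seed)
    for _ in range(samples):
        xt = rng.exponential(size=n)          # point of the orthant
        ut = rng.exponential(size=m)
        y = Phi(f(Phi_inv(xt), W_inv @ ut))
        if np.any(y < 0):
            return False, (xt, ut)
    return True, None

# Toy system (ours, not from the paper): positive linear dynamics,
# K = I and W = I, so Phi is the identity map.
A = np.array([[0.5, 0.1], [0.2, 0.7]])
B = np.array([[1.0], [0.5]])
f = lambda x, u: A @ x + B @ u
ident = lambda x: x

ok, _ = check_invariance_iii(f, ident, ident, np.eye(1), n=2, m=1)
print(ok)  # True: no sampled point leaves the orthant in one step
```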

For the linear time-invariant case, we consider the following system

$$\begin{aligned} \Xi \,:\quad x_{k+1} = Ax_k+Bu_k, \end{aligned}$$
(4)

where \(x_k\in \mathbb {R}^n\), \(u_k\in \mathbb {R}^m\), and matrices \(A\in \mathbb {R}^{n\times n}\) and \(B\in \mathbb {R}^{n\times m}\).

Define a corner region \(\mathcal {K}\) in the form of a polyhedral cone

$$\begin{aligned} \mathcal {K}= \{x\in \mathbb {R}^n:\; k^ix\ge 0,\;1\le i\le n\}=\bigcap _{i=1}^n\{k^ix\ge 0\}, \end{aligned}$$
(5)

where \(k^i\), \(1\le i\le n\), are the rows of a nonsingular matrix K.

The cone \(\mathcal {K}\), given by (5), can be defined equivalently as

$$\begin{aligned} \mathcal {K}= \textrm{Im}_+K^{-1} \cup \{0\}, \end{aligned}$$
(6)

where the columns of matrix \(K^{-1}\) are the edge canonical vectors of \(\mathcal {K}\) (see [16]).

For such a system, we have the following result, see [15].

Corollary 1

The linear system \(\Xi \) is \((\mathcal {K},\mathcal {W})\)-invariant if and only if

$$\begin{aligned} KAK^{-1}\ge 0\quad \text {and}\quad KBW^{-1}\ge 0. \end{aligned}$$
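Corollary 1 reduces the invariance check to two elementwise matrix inequalities, which are straightforward to verify numerically. A minimal sketch, with illustrative matrices of our own choosing:

```python
import numpy as np

def is_KW_invariant(A, B, K, W, tol=1e-12):
    """Corollary 1: (K, W)-invariance of x+ = Ax + Bu holds iff
    K A K^{-1} >= 0 and K B W^{-1} >= 0 elementwise."""
    KAKi = K @ A @ np.linalg.inv(K)
    KBWi = K @ B @ np.linalg.inv(W)
    return bool(np.all(KAKi >= -tol) and np.all(KBWi >= -tol))

# Illustrative matrices (our own choice, not from the paper).
A = np.array([[0.5, 0.25], [0.0, 0.5]])
B = np.array([[1.0], [1.0]])
K = np.eye(2)          # cone = nonnegative orthant
W = np.array([[1.0]])  # nonnegative scalar controls

print(is_KW_invariant(A, B, K, W))  # True for this choice
# A negative entry in K A K^{-1} breaks the condition:
print(is_KW_invariant(np.array([[-0.5, 0.0], [0.0, 0.5]]), B, K, W))  # False
```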

3 Characterization of control systems with respect to a specified region in state-space

Based on the notion of a nonlinear corner region \(\mathscr {K}\) and that of a polyhedral cone \(\mathcal {W}\), defined in the state- and input-space, respectively, this section proposes a characterization of different properties of nonlinear and linear discrete-time control systems with respect to specific regions \(\mathscr {K}\) in the state-space.

3.1 Nonlinear systems

Let us consider the nonlinear control system \(\Pi \) defined by (1) and let \(\mathscr {K}\) be a nonlinear corner region in \(\mathbb {R}^n\) and \(\mathcal {W}\) a polyhedral cone in \(\mathbb {R}^m\) defined by (2) and (3), respectively.

Definition 2

A nonlinear system \(\Pi \) of the form (1) is said to be \((\mathscr {K},\mathcal {W})\)-excluded if \(x_k\notin \mathscr {K}\) for each \(x_0\notin \mathscr {K}\), each \(u_k\in \mathcal {W}\) and all \(k\in \mathbb {N}_0\).

Remark 2

The definition of a \((\mathscr {K},\mathcal {W})\)-excluded system simply means that any trajectory starting outside \(\mathscr {K}\) will never reach \(\mathscr {K}\) at any time, that is, it will always remain outside of \(\mathscr {K}\). Thus, the definition of \((\mathscr {K},\mathcal {W})\)-exclusion allows for the analysis of systems invariant on some open subsets of \(\mathbb {R}^n\), namely systems invariant on the complement \(\mathscr {K}^\textrm{c}\) of a corner region \(\mathscr {K}\) in \(\mathbb {R}^n\). Such a case may be of interest, for example, when the region \(\mathscr {K}\) is considered forbidden or undesirable for the system. So, if the system is \((\mathscr {K},\mathcal {W})\)-excluded, then it is known that its evolution will only take place outside of \(\mathscr {K}\), provided that it starts outside.

Proposition 2

The following conditions are equivalent for the nonlinear system \(\Pi \):

(i) \(\Pi \) is \((\mathscr {K},\mathcal {W})\)-excluded;

(ii) \((\Phi \circ f)(x,u) \ngeq 0\) for each \(x\notin \mathscr {K}\) and each \(u\in \mathcal {W}\);

(iii) \(\left( \Phi \circ f\right) \left( \Phi ^{-1}(\tilde{x}),W^{-1}{\tilde{u}}\right) \ngeq 0\) for each \({\tilde{x}}\notin \mathbb {R}^n_+\) and each \({\tilde{u}}\in \mathbb {R}^m_+\).

Proof

(i) \(\Rightarrow \) (ii): Since for any \(x_k\in \mathscr {K}^\textrm{c}\) and each \(u_k\in \mathcal {W}\) system \(\Pi \) does not evolve into \(\mathscr {K}\), i.e., \(x_{k+1} = f(x_k,u_k)\notin \mathscr {K}\), it means that \(\Phi (x_{k+1}) = (\Phi \circ f)(x_k,u_k) \ngeq 0\) for any \(k\in \mathbb {N}_0\), hence \((\Phi \circ f)(x,u) \ngeq 0\) for any \(x\in \mathscr {K}^\textrm{c}\) and each \(u\in \mathcal {W}\).

(ii) \(\Rightarrow \) (iii): By the definition of \(\mathscr {K}\), the diffeomorphism \(\Phi \) transforms \(\mathscr {K}\) into \(\mathbb {R}^n_+\) (and hence \(\mathscr {K}^\textrm{c}\) into \((\mathbb {R}^n_+)^\textrm{c}\)), while the cone \(\mathcal {W}\) is transformed into \(\mathbb {R}^m_+\) by means of the transformation matrix W (by the definition of \(\mathcal {W}\)). Thus, putting \({\tilde{x}} = \Phi (x)\) and \({\tilde{u}} = Wu\), the correspondence \(x = \Phi ^{-1}({\tilde{x}})\), \(u = W^{-1}{\tilde{u}}\) is a bijection between the pairs \(x\notin \mathscr {K}\), \(u\in \mathcal {W}\) and the pairs \({\tilde{x}}\notin \mathbb {R}^n_+\), \({\tilde{u}}\in \mathbb {R}^m_+\). Therefore, the condition \(\left( \Phi \circ f\right) (x,u)\ngeq 0\) for all \(x\notin \mathscr {K}\) and all \(u\in \mathcal {W}\) is equivalent to \((\Phi \circ f)\left( \Phi ^{-1}({\tilde{x}}),W^{-1}{\tilde{u}}\right) \ngeq 0\) for each \({\tilde{x}}\notin \mathbb {R}^n_+\) and \({\tilde{u}}\in \mathbb {R}^m_+\).

(iii) \(\Rightarrow \) (i): Since for each \({\tilde{x}}_k\notin \mathbb {R}^n_+\) and each \({\tilde{u}}_k\in \mathbb {R}^m_+\) the relation \((\Phi \circ f)(\Phi ^{-1}({\tilde{x}}_k),W^{-1}{\tilde{u}}_k) \ngeq 0\) holds, from the definition of \(\mathscr {K}\) (and thereby by the property of \(\Phi \)) it follows that \(f(\Phi ^{-1}({\tilde{x}}_k),W^{-1}{\tilde{u}}_k)\notin \mathscr {K}\). Because \(\Phi ^{-1}({\tilde{x}}_k) = x_k\notin \mathscr {K}\) and \(W^{-1}{\tilde{u}}_k = u_k\in \mathcal {W}\), we get \(f(x_k,u_k)=x_{k+1}\notin \mathscr {K}\). Since, as has already been stressed, this holds for each \(x_k\notin \mathscr {K}\), it holds for any \(k\in \mathbb {N}_0\), implying that the system \(\Pi \) is \((\mathscr {K},\mathcal {W})\)-excluded. \(\square \)

Example 1

Consider the following system

$$\begin{aligned} x_{k+1} = f(x_k,u_k) = \begin{pmatrix} -\left( -x^1_k + \sin x_k^2\right) ^3 + \sin x_k^2 + \left( x^2_k\right) ^2u^1_ku^2_k\\ x^2_k \end{pmatrix} \end{aligned}$$

with a region \(\mathscr {K}\subset \mathbb {R}^2\) in the state-space (see Fig. 1a) defined by

$$\begin{aligned} \Phi (x) = \begin{pmatrix} -x^1 + \sin x^2\\ x^2 \end{pmatrix} \end{aligned}$$

and a cone \(\mathcal {W}\subset \mathbb {R}^2\) in the input-space (see Fig. 1b) given by the matrix \(W = -I_{2\times 2}\).

Fig. 1 Regions from Ex. 1

From condition (ii) of Proposition 2, we get

$$\begin{aligned} \left( \Phi \circ f\right) (x,u) = \begin{pmatrix} \left( -x^1 + \sin x^2\right) ^3 - \left( x^2\right) ^2u^1u^2\\ x^2 \end{pmatrix} \ngeq 0 \end{aligned}$$

for any \(x\notin \mathscr {K}\) and \(u\in \mathcal {W}\); indeed, this always holds for \(x^2<0\) as well as for \(x^2\ge 0\) and \(-x^1 + \sin x^2<0\), which means that the system is \((\mathscr {K},\mathcal {W})\)-excluded. Alternatively, using condition (iii) of Proposition 2, we have

$$\begin{aligned} \left( \Phi \circ f\right) \left( \Phi ^{-1}\left( \tilde{x}\right) ,W^{-1}\tilde{u}\right) = \begin{pmatrix} \left( \tilde{x}^1\right) ^3 - \left( \tilde{x}^2\right) ^2\tilde{u}^1{\tilde{u}}^2\\ {\tilde{x}}^2 \end{pmatrix} \ngeq 0 \end{aligned}$$

for any \({\tilde{x}}\notin \mathbb {R}^2_+\) and \({\tilde{u}}\in \mathbb {R}^2_+\), which also shows that the system is \((\mathscr {K},\mathcal {W})\)-excluded.
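As a numerical sanity check (our own, not part of the example), condition (iii) can be sampled: in the tilde coordinates the map is \(((\tilde{x}^1)^3 - (\tilde{x}^2)^2\tilde{u}^1\tilde{u}^2,\, \tilde{x}^2)\), and for every sampled \({\tilde{x}}\notin \mathbb {R}^2_+\) and \({\tilde{u}}\in \mathbb {R}^2_+\) it should have a negative component:

```python
import numpy as np

rng = np.random.default_rng(1)

def g(xt, ut):
    # Transformed map of Example 1 in the tilde coordinates.
    return np.array([xt[0]**3 - xt[1]**2 * ut[0] * ut[1], xt[1]])

excluded = True
for _ in range(10000):
    xt = rng.normal(size=2)
    if np.all(xt >= 0):          # keep only points outside R^2_+
        continue
    ut = rng.exponential(size=2)
    if np.all(g(xt, ut) >= 0):   # would contradict exclusion
        excluded = False
print(excluded)  # True
```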

Example 2

Let us consider the controlled Leslie–Gower nonlinear model of two competing species \(S_1\) and \(S_2\) in the same environment, with their populations consisting of \(x^1_k\) and \(x^2_k\) individuals at time k, respectively, given by

$$\begin{aligned} \Pi ^{\hbox {L-G}}:\quad \begin{pmatrix} x^1_{k+1}\\ x^2_{k+1} \end{pmatrix} = \begin{pmatrix} \frac{\lambda _1}{1 + \alpha _1x^1_k + \beta _1x^2_k}x^1_k\\ \frac{\lambda _2}{1 + \alpha _2x^1_k + \beta _2x^2_k}x^2_k \end{pmatrix} + \begin{pmatrix} \gamma _1\\ \gamma _2 \end{pmatrix}u. \end{aligned}$$

The uncontrolled part is the standard Leslie–Gower model (see [17, 18]), while the control part makes it possible to modify the speed of changes of the population densities. The positive real parameters \(\alpha _1\), \(\alpha _2\), \(\beta _1\), \(\beta _2\), \(\gamma _1\), \(\gamma _2\) correspond to various interaction cases.

One can easily see that \(\Pi ^{{\mathrm {L-G}}}\) is \((\mathbb {R}^2_+,\mathbb {R}_+)\)-invariant, which is expected and follows from the nature of the phenomenon described (populations \(x^1\) and \(x^2\) may only be nonnegative). Thus, it is entirely reasonable to limit ourselves to considering only the \(\mathbb {R}^2_+\) region in which trajectories of \(\Pi ^{{\mathrm {L-G}}}\) can evolve.

Moreover, taking the parameters such that \(\alpha _1 = \alpha _2 = \alpha \), \(\beta _1 = \beta _2 = \beta \), \(\lambda _2 \ge \lambda _1\) and \(\gamma _2 = a\gamma _1\), with \(a>0\), the system \(\Pi ^{{\mathrm {L-G}}}\) is \((\mathcal {K},\mathbb {R}_+)\)-invariant, where \(\mathcal {K}\subset \mathbb {R}^2_+\) is defined by

$$\begin{aligned} \Phi (x) = Kx = \begin{pmatrix} x^1\\ -ax^1 + x^2 \end{pmatrix}. \end{aligned}$$

Indeed, from condition (ii) of Proposition 1, we have

$$\begin{aligned} (\Phi \circ f)(x,u) = \begin{pmatrix} \frac{\lambda _1x^1}{1 + \alpha x^1 + \beta x^2} + \gamma _1u\\ \frac{-a\lambda _1x^1 + \lambda _2x^2}{1 + \alpha x^1 + \beta x^2} \end{pmatrix}\ge 0\quad \text {for all }x\in \mathcal {K}\text { and }u\ge 0. \end{aligned}$$

Similarly, from condition (iii) of Proposition 1, we have

$$\begin{aligned} (\Phi \circ f)(K^{-1}{\tilde{x}},{\tilde{u}}) = \begin{pmatrix} \frac{\lambda _1\tilde{x}^1}{1 + \alpha {\tilde{x}}^1 + \beta ({\tilde{x}}^2 + a{\tilde{x}}^1)} + \gamma _1\tilde{u}\\ \frac{a(\lambda _2-\lambda _1){\tilde{x}}^1 + \lambda _2{\tilde{x}}^2}{1 + \alpha {\tilde{x}}^1 + \beta ({\tilde{x}}^2 + a{\tilde{x}}^1)} \end{pmatrix}\ge 0\quad \text {for all }{\tilde{x}}\in \mathbb {R}^2_+\text { and }{\tilde{u}}\ge 0. \end{aligned}$$

This model, but with \(\lambda _1 = \lambda _2 = \lambda \), is also \((\mathcal {K},\mathbb {R}_+)\)-excluded in \(\mathbb {R}^2_+\). Due to the “nonnegative” nature of the model, it does not make sense to consider this property in the entire \(\mathbb {R}^2\), so we restrict attention to the complement \({\hat{\mathcal {K}}}^{\textrm{c}}\subset \mathcal {K}^\textrm{c}\) of the cone \(\mathcal {K}\) relative to \(\mathbb {R}^2_+\), being the set of all \(x\in \mathbb {R}^2_+\) that are not in \(\mathcal {K}\), i.e.,

$$\begin{aligned} {\hat{\mathcal {K}}}^{\textrm{c}} = \mathbb {R}^2_+\setminus {\mathcal {K}} = \{x\in \mathbb {R}^2_+:\;x^2\ge 0,\, x^2 < ax^1\}. \end{aligned}$$
Fig. 2 Sets from Ex. 2

Indeed, from condition (ii) of Proposition 2, we have

$$\begin{aligned} (\Phi \circ f)(x,u) = \begin{pmatrix} \frac{\lambda x^1}{1 + \alpha x^1 + \beta x^2} + \gamma _1u\\ \frac{\lambda (x^2 - a x^1)}{1 + \alpha x^1 + \beta x^2} \end{pmatrix}\ngeq 0\quad \text {for all }x\in {\hat{\mathcal {K}}}^{\textrm{c}}\text { and }u\ge 0. \end{aligned}$$

In order to use condition (iii) of Proposition 2, we first map \({\hat{\mathcal {K}}}^{\textrm{c}}\) by means of \(\Phi \), which yields

$$\begin{aligned} (\hat{\mathbb {R}}^2_+)^\textrm{c} = \Phi ({\hat{\mathcal {K}}}^{\textrm{c}}) = \{\tilde{x}\in \mathbb {R}^2:\; {\tilde{x}}^2<0,\;a{\tilde{x}}^1 + {\tilde{x}}^2 \ge 0\}\subset (\mathbb {R}^2_+)^\textrm{c}. \end{aligned}$$

Thus, we have

$$\begin{aligned} (\Phi \circ f)(K^{-1}{\tilde{x}},{\tilde{u}}) = \begin{pmatrix} \frac{\lambda {\tilde{x}}^1}{1 + \alpha {\tilde{x}}^1 + \beta ({\tilde{x}}^2 + a{\tilde{x}}^1)} + \gamma _1{\tilde{u}}\\ \frac{\lambda {\tilde{x}}^2}{1 + \alpha {\tilde{x}}^1 + \beta ({\tilde{x}}^2 + a{\tilde{x}}^1)} \end{pmatrix}\ngeq 0\quad \text {for all }{\tilde{x}}\in (\hat{\mathbb {R}}^2_+)^\textrm{c}\text { and }{\tilde{u}}\ge 0. \end{aligned}$$
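The invariance part of this example can also be checked by simulation. The sketch below (our own numerical sanity check, with arbitrary parameter values satisfying the stated assumptions) iterates the Leslie–Gower dynamics from sampled points of \(\mathcal {K}\) and verifies that the cone \(\{x^1\ge 0,\; x^2\ge ax^1\}\) is never left:

```python
import numpy as np

# Parameter choice satisfying alpha1 = alpha2, beta1 = beta2,
# lambda2 >= lambda1 and gamma2 = a*gamma1 (the values are arbitrary).
lam1, lam2, alpha, beta = 1.1, 1.3, 0.2, 0.3
a, gamma1 = 0.5, 0.1
gamma2 = a * gamma1

def step(x, u):
    d = 1.0 + alpha * x[0] + beta * x[1]
    return np.array([lam1 * x[0] / d + gamma1 * u,
                     lam2 * x[1] / d + gamma2 * u])

rng = np.random.default_rng(2)
invariant = True
for _ in range(200):
    x1 = rng.exponential()
    x = np.array([x1, a * x1 + rng.exponential()])  # initial point in K
    for _ in range(50):
        x = step(x, rng.exponential())              # control u >= 0
        if x[0] < 0 or x[1] < a * x[0] - 1e-9:
            invariant = False
print(invariant)  # True
```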

Let us recall the notation for the Cartesian product \(S^k = S\times \cdots \times S\) (k times) of a set S, which we use in the definition below and in the sequel to denote sequences of elements of a set.

Definition 3

A nonlinear system \(\Pi \) of the form (1) is said to be \((\mathscr {K},\mathcal {W})\)-catch if \(\Pi \) is \((\mathscr {K},\mathcal {W})\)-invariant and there exist \(x_0\notin \mathscr {K}\) and \(k\in \mathbb {N}\) such that \(x_k\in \mathscr {K}\) for any \(\bar{u}_{k-1}=(u_0,\ldots ,u_{k-1})\in \mathcal {W}^k\), i.e., with all \(u_j\in \mathcal {W}\), \(0\le j\le k-1\).

Remark 3

For a \((\mathscr {K},\mathcal {W})\)-catch system, the existence of some \(x_0\notin \mathscr {K}\) as in Definition 3 implies the existence of a trajectory \({\bar{x}}_{k-1} = (x_0,\ldots ,x_{k-1})\) such that \(x_j\notin \mathscr {K}\) for \(0\le j\le k-1\) and \(x_k\in \mathscr {K}\) for some \(k\in \mathbb {N}\). Therefore, in view of the arbitrariness of the choice of \(x_0\notin \mathscr {K}\), one can choose the point \(x_{k-1}\notin \mathscr {K}\) as the initial point \(x_0\).

Additionally, since the system is \((\mathscr {K},\mathcal {W})\)-invariant (by the definition of a \((\mathscr {K},\mathcal {W})\)-catch system), with \(x_0\notin \mathscr {K}\) and \(x_{k-1}\notin \mathscr {K}\) there cannot exist any trajectory \({\bar{x}}_{k-1} = {\bar{x}}_{k-1}(x_0,{\bar{u}}_{k-2}) = (x_0,\ldots ,x_{k-1})\) with some \(x_i\in \mathscr {K}\), \(1\le i\le k-2\).

Remark 4

The case of a system being \((\mathscr {K},\mathcal {W})\)-catch intuitively means that there exists at least one point outside \(\mathscr {K}\) (trap point) in the state-space from which the system always goes to \(\mathscr {K}\) (for any control \(u\in \mathcal {W}\)) and that it remains there (for all controls \(u\in \mathcal {W}\)).

For example, a system with some separable non-controllable part may turn out to be of this nature. An example of such dynamics is the position in \(\mathbb {R}^3\) of some mechanical system which can be bound to remain near the surface of a given planet due to the gravitational force and atmospheric drag acting upon it [19].

Proposition 3

The following conditions are equivalent for the nonlinear system \(\Pi \):

(i) \(\Pi \) is \((\mathscr {K},\mathcal {W})\)-catch;

(ii) \(\Pi \) is \((\mathscr {K},\mathcal {W})\)-invariant and \((\Phi \circ f)(x,u) \ge 0\) for some \(x\notin \mathscr {K}\) and for each \(u\in \mathcal {W}\);

(iii) \(\Pi \) is \((\mathscr {K},\mathcal {W})\)-invariant and \(\left( \Phi \circ f\right) \left( \Phi ^{-1}({\tilde{x}}),W^{-1}{\tilde{u}}\right) \ge 0\) for some \({\tilde{x}}\notin \mathbb {R}^n_+\) and each \({\tilde{u}}\in \mathbb {R}^m_+\).

Proof

(i) \(\Rightarrow \) (ii): There exists some \(x_k\notin \mathscr {K}\) such that \(x_{k+1}=f(x_k,u_k)\in \mathscr {K}\) for any \(u_k\in \mathcal {W}\). The condition \(x_{k+1}\in \mathscr {K}\) directly implies that \((\Phi \circ f)(x_k,u_k)\ge 0\) and thus setting \(x = x_k\) and \(u = u_k\) gives (ii).

(ii) \(\Rightarrow \) (iii): Since \(\mathscr {K}\) is transformed onto \(\mathbb {R}^n_+\) with the help of \(\Phi \) (by the definition of \(\mathscr {K}\)), and \(\mathcal {W}\) is transformed onto \(\mathbb {R}^m_+\) by means of W (by definition of \(\mathcal {W}\)), there exists (by property of \(\Phi \)) some \({\tilde{x}} = \Phi (x)\notin \mathbb {R}^n_+\) for some \(x\notin \mathscr {K}\), and \({\tilde{u}} = Wu\in \mathbb {R}^m_+\) for all \(u\in \mathcal {W}\). Therefore, taking \(x = \Phi ^{-1}({\tilde{x}})\) and \(u = W^{-1}\tilde{u}\), we obtain condition (iii).

(iii) \(\Rightarrow \) (i): Since for some \({\tilde{x}}_k\notin \mathbb {R}^n_+\) and each \({\tilde{u}}_k\in \mathbb {R}^m_+\) the relation \((\Phi \circ f)(\Phi ^{-1}({\tilde{x}}_k),W^{-1}{\tilde{u}}_k) \ge 0\) holds at some time instant k, from the definition of \(\mathscr {K}\) (and thereby by the property of \(\Phi \)) it follows that \(f(\Phi ^{-1}(\tilde{x}_k),W^{-1}{\tilde{u}}_k)\in \mathscr {K}\). Because \(\Phi ^{-1}({\tilde{x}}_k) = x_k\notin \mathscr {K}\) and \(W^{-1}{\tilde{u}}_k = u_k\in \mathcal {W}\), then \(f(x_k,u_k)=x_{k+1}\in \mathscr {K}\). Thanks to the \((\mathscr {K},\mathcal {W})\)-invariance, the trajectory segment \({\bar{x}} = {\bar{x}}(x_{k+1},{\bar{u}})\) remains within \(\mathscr {K}\) for any control sequence \({\bar{u}} = (u_{k+1},u_{k+2},\ldots )\) with \(u_j\in \mathcal {W}\), \(j\ge k+1\). \(\square \)

Example 3

Consider the following system

$$\begin{aligned} x_{k+1} = f(x_k,u_k) = \begin{pmatrix} \left( x^1_k - \left( x^2_k\right) ^2 + x_k^2\right) u_k + \left( x^1_k - \left( x^2_k\right) ^2 + x_k^2\right) ^2\\ x^1_k - \left( x^2_k\right) ^2 + x_k^2 \end{pmatrix} \end{aligned}$$

with a region \(\mathscr {K}\subset \mathbb {R}^2\) in the state-space (see Fig. 3) defined by

$$\begin{aligned} \Phi (x) = \begin{pmatrix} x^1 - \left( x^2\right) ^2\\ x^2 \end{pmatrix}, \end{aligned}$$

and the cone \(\mathcal {W}=\mathbb {R}_+\) in the input-space.

Fig. 3 Nonlinear corner region \(\mathscr {K}\) in the state-space \(\mathbb {R}^2\) from Ex. 3

From condition (ii) of Proposition 3, we get

$$\begin{aligned} \left( \Phi \circ f\right) (x,u) = \begin{pmatrix} \left( x^1 - \left( x^2\right) ^2 + x^2\right) u\\ x^1 - \left( x^2\right) ^2 + x^2 \end{pmatrix} \ge 0, \end{aligned}$$

on one hand, for all \(x = (x^1,x^2)^T\in \mathscr {K}\) and any \(u\in \mathbb {R}_+\) (implying \((\mathscr {K},\mathcal {W})\)-invariance), and on the other hand, for any \(x = (x^1,x^2)^T\notin \mathscr {K}\) such that \(x^1 - \left( x^2\right) ^2 = -x^2\), and any \(u\in \mathbb {R}_+\). It means that the system is \((\mathscr {K},\mathcal {W})\)-catch. Alternatively, using condition (iii) of Proposition 3, we have

$$\begin{aligned} \left( \Phi \circ f\right) \left( \Phi ^{-1}({\tilde{x}}),W^{-1}{\tilde{u}}\right) = \begin{pmatrix} \left( {\tilde{x}}^1 + {\tilde{x}}^2\right) {\tilde{u}}\\ {\tilde{x}}^1 + {\tilde{x}}^2 \end{pmatrix} \ge 0 \end{aligned}$$

for any \({\tilde{x}} = ({\tilde{x}}^1,{\tilde{x}}^2)^T\notin \mathbb {R}^2_+\) such that \({\tilde{x}}^1 = -{\tilde{x}}^2\), and any \({\tilde{u}}\in \mathbb {R}_+\), which also means that the system is \((\mathscr {K},\mathcal {W})\)-catch.
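Both parts of the catch property can be illustrated numerically (this is our own sanity check, not part of the example). In the tilde coordinates the one-step map is \(g(\tilde{x},\tilde{u}) = ((\tilde{x}^1+\tilde{x}^2)\tilde{u},\, \tilde{x}^1+\tilde{x}^2)\): sampled points of the orthant never leave it, and the trap point \(\tilde{x} = (1,-1)^T\notin \mathbb {R}^2_+\) is mapped into it for every sampled control:

```python
import numpy as np

rng = np.random.default_rng(3)

def g(xt, ut):
    # Transformed map of Example 3 in the tilde coordinates.
    s = xt[0] + xt[1]
    return np.array([s * ut, s])

# (a) Invariance: one-step images of sampled orthant points stay in it.
invariant = all(np.all(g(rng.exponential(size=2), rng.exponential()) >= 0)
                for _ in range(10000))

# (b) Trap point on the line x1~ = -x2~, outside the orthant.
trap = np.array([1.0, -1.0])
caught = all(np.all(g(trap, rng.exponential()) >= 0) for _ in range(100))

print(invariant and caught)  # True
```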

Definition 4

A nonlinear system \(\Pi \) of the form (1) is said to be \((\mathscr {K},\mathcal {W})\)-escape in at most k steps if there exists \(k\in \mathbb {N}\) such that for each \(x_0\in \mathscr {K}\) and any \({\bar{u}}_{k-1}=(u_0,\ldots ,u_{k-1})\) with all \(u_j\in \mathcal {W}\), \(0\le j\le k-1\), there exists \(1 \le k'\le k\) such that \(x_{k'}\notin \mathscr {K}\), and k is the smallest such number.

Proposition 4

A nonlinear system \(\Pi \) of the form (1) is \((\mathscr {K},\mathcal {W})\)-escape in at most k steps if and only if for each \(x_0\in \mathscr {K}\) and each \(\bar{u}_{k-1}=(u_0,\ldots ,u_{k-1})\), such that \(u_j\in \mathcal {W}\), \(0\le j\le k-1\), the following conditions hold:

(i) \(X_{\mathscr {K}-}^1 \cup X_{\mathscr {K}-}^2 \cup \cdots \cup X_{\mathscr {K}-}^{k} = \mathscr {K}\);

(ii) \(X_{\mathscr {K}-}^1 \cup X_{\mathscr {K}-}^2 \cup \cdots \cup X_{\mathscr {K}-}^{k-1} \subsetneqq \mathscr {K}\),

where \(X_{\mathscr {K}-}^i = \{x_0\in \mathscr {K}:(\Phi \circ \underbrace{f\circ \dots \circ f}_{i\text {-times}})(x_0,{\bar{u}}_{i-1}) \ngeq 0\}\) for each \({\bar{u}}_{i-1}\in \mathcal {W}^{i}\).

The set \(X_{\mathscr {K}-}^i\) consists of all initial points \(x_0\in \mathscr {K}\), such that the end-points \(x_i\) of all trajectories \({\bar{x}}_i(x_0,\bar{u}_{i-1})\), where \({\bar{u}}_{i-1}\in \mathcal {W}^i\), satisfy \(x_i\notin \mathscr {K}\).
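The sets \(X_{\mathscr {K}-}^i\) can be approximated on a grid by iterating the dynamics and recording the first step at which the orthant is left. The sketch below uses a control-free toy map of our own (written directly in the tilde coordinates, so \(\mathscr {K}\) is the nonnegative orthant) that sends every point with \(x^1\ge 1\) onto the boundary and everything else out of the orthant, and is therefore escape in at most 2 steps:

```python
import numpy as np

def g(x):
    # Toy map (ours): points with x1 >= 1 land on the boundary x1 = 0,
    # points with x1 < 1 leave the orthant immediately.
    return np.array([min(x[0], 1.0) - 1.0, x[1]])

def escape_step(x0, k_max=10):
    """Smallest i with x_i outside the orthant (None if never left)."""
    x = np.array(x0, dtype=float)
    for i in range(1, k_max + 1):
        x = g(x)
        if np.any(x < 0):
            return i
    return None

grid = [(p, q) for p in np.linspace(0, 5, 26) for q in np.linspace(0, 5, 26)]
steps = [escape_step(x0) for x0 in grid]
print(max(steps))  # 2: X^1 and X^2 cover the grid (condition (i))
print(min(steps))  # 1: X^1 alone is a proper subset (condition (ii))
```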

Proof

Based on the form of the system \(\Pi \), we can write the state \(x_i\) in the following iterated form

$$\begin{aligned} x_i&= f(x_{i-1},u_{i-1})\nonumber \\&=f\left( f\left( \cdots f\left( f\left( x_0,u_0\right) ,u_1\right) \cdots ,u_{i-2}\right) ,u_{i-1}\right) \nonumber \\&= \underbrace{f\circ \dots \circ f}_{i\text {-times}}\left( x_0,{\bar{u}}_{i-1}\right) . \end{aligned}$$
(7)

(Sufficiency) The existence of \(k\in \mathbb {N}\) such that condition (i) is satisfied means that for each \(x_0\in \mathscr {K}\) there exists \(i\in \mathbb {N}\), \(i \le k\), such that \(\Phi (x_i)\ngeq 0\), i.e., \(x_i\notin \mathscr {K}\), which implies that any trajectory \({\bar{x}}\) from \(\mathscr {K}\) is bound to leave \(\mathscr {K}\) in at most k iterations. Furthermore, condition (ii) implies that there is no \({\tilde{k}} < k\) satisfying condition (i).

(Necessity) If a system is \((\mathscr {K},\mathcal {W})\)-escape in at most k steps, then for each \(x_0\in \mathscr {K}\) there exists \(k'\in \mathbb {N}\), \(k' \le k\), such that the trajectory \({\bar{x}}_{k'-1}(x_0,{\bar{u}}_{k'-2}) = (x_0,\ldots ,x_{k'-1})\) lies in \(\mathscr {K}\), which means that \(\Phi (x_0)\ge 0\) and \(\Phi (x_j) = (\Phi \circ f)(x_{j-1},u_{j-1})\ge 0\) for \(1\le j\le k'-1\); moreover, \(x_{k'}\notin \mathscr {K}\) implies \(\Phi (x_{k'})\ngeq 0\). Taking into account notation (7), all this means that conditions (i) and (ii) are met. \(\square \)

Remark 5

This property means that the system located inside of \(\mathscr {K}\) is bound to leave it after a finite (and well-defined) number of time steps. Such a case may be of interest when one wants to reach a region in which the system is allowed to remain no longer than a given amount of time. It should be noted, however, that this property does not exclude the possibility of the trajectory returning to \(\mathscr {K}\).

Remark 6

To each initial point \(x_0\in \mathscr {K}\) a number \(k'\) is assigned, which is characterized by the sets \(X_{\mathscr {K}-}^{k'}\), \(1\le k'\le k\). If \(k=1\), then there is exactly one \(k'\), i.e., \(k'=1\), for all \(x_0\in \mathscr {K}\). So, in this sense, in general, i.e., for \(k>1\), we can conclude that \(k'\) depends on \(x_0\). In contrast, \(k'\) does not depend on \({\bar{u}}_{k-1}\), since, by definition, each set \(X_{\mathscr {K}-}^{k'}\), \(1\le k'\le k\), is defined for all possible \({\bar{u}}_{k'-1}\).

Remark 7

The above definition of \(X_{\mathscr {K}-}^i\) does not guarantee that such sets are mutually disjoint, i.e., in general \(X_{\mathscr {K}-}^j \cap X_{\mathscr {K}-}^l \ne \emptyset \), \(j \ne l\). For example, if a system is \((\mathscr {K},\mathcal {W})\)-escape in at most k steps such that every trajectory segment \((x_i,\ldots ,x_k)\) continuing a trajectory issued from \(X_{\mathscr {K}-}^i\), \(1\le i\le k-1\), lies in \(\mathscr {K}^\textrm{c}\), then \(X_{\mathscr {K}-}^1\subset X_{\mathscr {K}-}^2\subset \cdots \subset X_{\mathscr {K}-}^k\). These inclusions guarantee that any trajectory (starting at any \(x_0\in \mathscr {K}\)) will be outside \(\mathscr {K}\) at the kth time-instant.

A stricter definition would be necessary in order to define a more general notion of \((\mathscr {K},\mathcal {W})\)-escape with both upper and lower limits for the time of escape, i.e., \((\mathscr {K},\mathcal {W})\)-escape in at least k and at most l steps. Sufficient and necessary conditions for such a definition could be easily constructed with the help of \({\hat{X}}_{\mathscr {K}-}^i\) defined as follows

$$\begin{aligned} {\hat{X}}_{\mathscr {K}-}^i = \left\{ x_0\in \mathscr {K}:\; (\Phi \circ \underbrace{f\circ \dots \circ f}_{i\text {-times}})(x_0,{\bar{u}}_{i-1}) \ngeq 0 \;\;\text {and}\;\; (\Phi \circ \underbrace{f\circ \dots \circ f}_{j\text {-times}})(x_0,{\bar{u}}_{j-1}) \ge 0,\; 1\le j\le i-1\right\} \end{aligned}$$

for each \({\bar{u}}_{i-1}\in \mathcal {W}^i\), \({\bar{u}}_{j-1} \subset {\bar{u}}_{i-1}\).

Remark 8

The property of \((\mathscr {K},\mathcal {W})\)-escape in at most k steps can be extended to its limit \(k\rightarrow \infty \) (possibly in conjunction with \((\mathscr {K},\mathcal {W})\)-excluded) in order to describe systems which are bound to (permanently) leave \(\mathscr {K}\) after some indeterminate number of steps.

Remark 9

The demand that a system starting from any \(x_0\in \mathscr {K}\) leaves the set \(\mathscr {K}\) in the same number \(k\ge 1\) of steps reduces to the only possible case, i.e., \(k=1\). This is due to the fact that any trajectory \({\bar{x}}_k\) starting from any point \(x_0\in \mathscr {K}\) that leaves the set \(\mathscr {K}\) in \(k>1\) steps contains a point \(x_{k-1}\in \mathscr {K}\) from which the system leaves \(\mathscr {K}\) after 1 step.

Example 4

Consider the following system

$$\begin{aligned} x_{k+1} = f(x_k,u_k) = \begin{pmatrix} e\,x^1_k - e^{x^1_k}\\ x^2_ku_k \end{pmatrix} \end{aligned}$$

with region \(\mathscr {K}= \mathbb {R}^2_+\) in the state-space and \(\mathcal {W}=\mathbb {R}_+\) in the input-space. For any \(x_0 = (x_0^1,x_0^2)^T\in \mathbb {R}^2_+\) such that \(x_0^1\ne 1\), the state \(x_1\notin \mathbb {R}^2_+\). However, for \(x_0^1 = 1\), we have \(x_1^1=0\), and then \(x_2^1 = -1\). Therefore,

$$\begin{aligned} X_{\mathscr {K}-}^1&= \left\{ (x_0^1,x_0^2)^T\in \mathbb {R}^2_+:\; x_0^1\ne 1\right\} \\ X_{\mathscr {K}-}^2&= X_{\mathscr {K}-}^1 \cup \left\{ (x_0^1,x_0^2)^T\in \mathbb {R}^2_+:\; x_0^1= 1\right\} , \end{aligned}$$

and since \(X_{\mathscr {K}-}^1 \cup X_{\mathscr {K}-}^2 = \mathscr {K}\), the system is \((\mathscr {K},\mathcal {W})\)-escape in at most 2 steps (see Fig. 4).

Fig. 4
figure 4

Sets \(X_{\mathscr {K}-}^1\) and \(X^2 = X_{\mathscr {K}-}^2{\setminus } X_{\mathscr {K}-}^1\) from Ex. 4
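The escape pattern of this example is easy to cross-check numerically. The following sketch (illustrative only; it assumes NumPy, and the control values are arbitrarily chosen nonnegative numbers) simulates trajectories and reports the first step at which the state leaves \(\mathbb {R}^2_+\):

```python
import numpy as np

def f(x, u):
    # system from Example 4: x1 -> e*x1 - exp(x1), x2 -> x2*u
    return np.array([np.e * x[0] - np.exp(x[0]), x[1] * u])

def escape_step(x0, us):
    # index of the first step at which the trajectory leaves R^2_+
    x = np.asarray(x0, float)
    for i, u in enumerate(us, start=1):
        x = f(x, u)
        if not np.all(x >= 0):
            return i
    return None
```

For \(x_0^1\ne 1\) the reported escape step is 1, and for \(x_0^1=1\) it is 2, independently of the nonnegative controls applied, in agreement with the sets \(X_{\mathscr {K}-}^1\) and \(X_{\mathscr {K}-}^2\) above.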

Example 5

Consider the following system

$$\begin{aligned} x_{k+1} = f(x_k,u_k) = \begin{pmatrix} x^1_k - 1\\ x^2_ku_k \end{pmatrix} \end{aligned}$$

with region \(\mathscr {K}= \mathbb {R}^2_+\) in the state-space and \(\mathcal {W}=\mathbb {R}_+\) in the input-space. For any \(x_0 = (x_0^1,x_0^2)^T\in \mathbb {R}^2_+\) such that \(x_0^1<1\), the state \(x_1\notin \mathbb {R}^2_+\). However, for \(x_0^1 \ge 1\), we have \(x_1^1\ge 0\). Therefore (see Fig. 5),

$$\begin{aligned} X_{\mathscr {K}-}^1&= \left\{ (x_0^1,x_0^2)^T\in \mathbb {R}^2_+:\; x_0^1<1\right\} \\ X_{\mathscr {K}-}^i&= X_{\mathscr {K}-}^{i-1} \\ {}&\quad \cup \left\{ (x_0^1,x_0^2)^T\in \mathbb {R}^2_+:\; \left\lfloor x_0^1 \right\rfloor = i-1 \right\} ,\quad i = 2,3,\ldots , \end{aligned}$$

where \(\left\lfloor x_0^1 \right\rfloor \) denotes the integer part of \(x_0^1\), and thus the system is \((\mathscr {K},\mathcal {W})\)-escape in at most \(\infty \) steps, because \(\mathscr {K}= X_{\mathscr {K}-}^\infty \). However, for each given \(x_0\in \mathbb {R}^2_+\) we know exactly the step number i at which the system leaves \(\mathscr {K}\), namely this is \(i = \left\lfloor x_0^1 \right\rfloor + 1\).

Fig. 5
figure 5

Sets \(X_{\mathscr {K}-}^1\) and \(X^i = X_{\mathscr {K}-}^i{\setminus } X_{\mathscr {K}-}^{i-1}\) for \(i=2,3,\ldots \), from Ex. 5
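The predicted escape step \(i = \left\lfloor x_0^1 \right\rfloor + 1\) can be checked by direct simulation; the sketch below is illustrative only, with arbitrarily chosen nonnegative controls (which, as the example shows, do not affect the escape step):

```python
import math

def f(x, u):
    # system from Example 5: x1 -> x1 - 1, x2 -> x2*u
    return (x[0] - 1.0, x[1] * u)

def escape_step(x0, u_seq):
    # index of the first step at which the trajectory leaves R^2_+
    x = x0
    for i, u in enumerate(u_seq, start=1):
        x = f(x, u)
        if x[0] < 0 or x[1] < 0:
            return i
    return None

# predicted escape step for x0 in R^2_+: floor(x0[0]) + 1
```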

Definition 5

A nonlinear system \(\Pi \) of the form (1) is said to be \((\mathscr {K},\mathcal {W})\)-attractive in at most k steps if there exists \(k\in \mathbb {N}\) such that for all \(x_0\notin \mathscr {K}\) and any \(\bar{u}_{k-1}=(u_0,\ldots ,u_{k-1})\), with all \(u_j\in \mathcal {W}\), \(0\le j\le k-1\), there exists \(1 \le k'\le k\) such that \(x_{k'}\in \mathscr {K}\) and k is the lowest such number.

Proposition 5

A nonlinear system \(\Pi \) of the form (1) is \((\mathscr {K},\mathcal {W})\)-attractive in at most k steps if and only if for all \(x_0\notin \mathscr {K}\) and each \({\bar{u}}_{k-1}=(u_0,\ldots ,u_{k-1})\), such that \(u_j\in \mathcal {W}\), \(0\le j\le k-1\), the following conditions hold:

  1. (i)

    \(X_{\mathscr {K}+}^1 \cup X_{\mathscr {K}+}^2 \cup \cdots \cup X_{\mathscr {K}+}^{k} = \mathscr {K}^c\);

  2. (ii)

    \(X_{\mathscr {K}+}^1 \cup X_{\mathscr {K}+}^2 \cup \cdots \cup X_{\mathscr {K}+}^{k-1} \subsetneqq \mathscr {K}^c\),

where \(X_{\mathscr {K}+}^i = \{x_0\notin \mathscr {K}:(\Phi \circ \underbrace{f\circ \dots \circ f}_{i\text {-times}})(x_0,{\bar{u}}_{i-1}) \ge 0\}\) for each \({\bar{u}}_{i-1}\in \mathcal {W}^i\).

The set \(X_{\mathscr {K}+}^i\) consists of all initial points \(x_0\) lying outside \(\mathscr {K}\) from which the system reaches \(\mathscr {K}\) at the ith step.

Proof

(Sufficiency) If condition (i) is satisfied, then for each \(x_0\notin \mathscr {K}\) there exists \(i\in \mathbb {N}\), \(i \le k\), such that \(\Phi (x_i)\ge 0\), i.e., \(x_i\in \mathscr {K}\), which implies that any trajectory \({\bar{x}}\) from outside of \(\mathscr {K}\) is bound to enter \(\mathscr {K}\) in at most k iterations. Furthermore, satisfying condition (ii) implies that there is no \({\tilde{k}} < k\) which satisfies condition (i).

(Necessity) If a system is \((\mathscr {K},\mathcal {W})\)-attractive in at most k steps then for each \(x_0\notin \mathscr {K}\) there exists \(k'\in \mathbb {N}\), \(k' \le k\), such that the trajectory \({\bar{x}}_{k'-1}(x_0,{\bar{u}}_{k'-2}) = (x_0,\ldots ,x_{k'-1})\) lies outside \(\mathscr {K}\), which means that \(\Phi (x_j) = \Phi \circ f(x_{j-1},u_{j-1})\ngeq 0\), for \(0\le j\le k'-1\); and \(x_{k'}\in \mathscr {K}\) implies \(\Phi (x_{k'})\ge 0\). All this means, taking into account notation (7), that conditions (i) and (ii) are met. \(\square \)

Remark 10

This property means that the system located outside of \(\mathscr {K}\) is bound to enter \(\mathscr {K}\) after a finite (and well-defined) number of time steps. Such a property may be considered, for example, when looking for the emergence, disappearance, and longevity of temporarily restricted regions in a state-space (regions which are not achievable from at least one initial point for a given number of time steps). Moreover, if the system in question is parameterized by some parameter \(\lambda \), and a continuous change in its value causes the system to suddenly gain or lose the property of being \((\mathscr {K},\mathcal {W})\)-attractive in at most k steps, or the number of such steps k changes rapidly, this may indicate the occurrence of a bifurcation or a high sensitivity to the parameter, respectively.

Remark 11

The above definition of \(X_{\mathscr {K}+}^i\) does not guarantee that these sets are mutually disjoint, i.e., in general \(X_{\mathscr {K}+}^j \cap X_{\mathscr {K}+}^l \ne \emptyset , j \ne l\). This follows from the same reasoning as given in Remark 7. The inclusions \(X_{\mathscr {K}+}^1\subset X_{\mathscr {K}+}^2\subset \cdots \subset X_{\mathscr {K}+}^k\) guarantee that any trajectory (starting at any \(x_0\in \mathscr {K}^\textrm{c}\)) will be inside \(\mathscr {K}\) at the kth time-instant.

With the analogous purpose of defining \((\mathscr {K},\mathcal {W})\)-attractive in at least k and at most l steps, the sets \({\hat{X}}_{\mathscr {K}+}^i\) could be defined as follows

$$\begin{aligned} {\hat{X}}_{\mathscr {K}+}^i = \left\{ x_0\notin \mathscr {K}:\; (\Phi \circ \underbrace{f\circ \dots \circ f}_{i\text {-times}})(x_0,{\bar{u}}_{i-1}) \ge 0 \;\;\text {and}\;\; (\Phi \circ \underbrace{f\circ \dots \circ f}_{j\text {-times}})(x_0,{\bar{u}}_{j-1}) \ngeq 0,\; 1\le j\le i-1\right\} \end{aligned}$$

for each \({\bar{u}}_{i-1}\in \mathcal {W}^i\), \({\bar{u}}_{j-1} \subset {\bar{u}}_{i-1}\).

Remark 12

The property of \((\mathscr {K},\mathcal {W})\)-attractivity in at most k steps can be extended to its limit \(k\rightarrow \infty \) (possibly in conjunction with \((\mathscr {K},\mathcal {W})\)-invariant) in order to describe systems which are bound to (permanently) enter \(\mathscr {K}\) after some indeterminate number of steps.

Example 6

Consider the following system

$$\begin{aligned} x_{k+1} = f(x_k,u_k) = \begin{pmatrix} e^{x^2_k+u_k} \\ x^1_k \end{pmatrix} \end{aligned}$$

with region \(\mathscr {K}= \mathbb {R}^2_+\) in the state-space and cone \(\mathcal {W}=\mathbb {R}_+\) in the input-space. For any \(x_0=\begin{pmatrix}a,b\end{pmatrix}^T\notin \mathscr {K}\), with \(a\ge 0\) and \(b<0\), we have \(x_{1}=\begin{pmatrix}e^{b+u_0},a\end{pmatrix}^T\in \mathscr {K}\) for any \(u_0\in \mathcal {W}\); moreover, \(x_k\in \mathscr {K}\), \(k\ge 1\). For any \(x_0=\begin{pmatrix}-a,b\end{pmatrix}^T\notin \mathscr {K}\), with \(a>0\), \(b\in \mathbb {R}\), and any \(u_0\in \mathcal {W}\), we have \(x_{1}=\begin{pmatrix}e^{b+u_0},-a\end{pmatrix}^T\notin \mathscr {K}\), but \(x_{2}=\begin{pmatrix}e^{-a+u_1},e^{b+u_0}\end{pmatrix}^T\in \mathscr {K}\) for any \({\bar{u}}_1\in \mathcal {W}^2\). Hence

$$\begin{aligned} X_{\mathscr {K}+}^1&= \{(a,b)^T\notin \mathscr {K}:a\ge 0,\,b<0\}\\ X_{\mathscr {K}+}^2&= X_{\mathscr {K}+}^1 \cup \{(-a,b)^T\notin \mathscr {K}:a>0,\,b\in \mathbb {R}\}, \end{aligned}$$

where \(X_{\mathscr {K}+}^1\subsetneqq \mathscr {K}^\textrm{c}\), \(X_{\mathscr {K}+}^2= \mathscr {K}^\textrm{c}\), and since, obviously, \(X_{\mathscr {K}+}^1 \cup X_{\mathscr {K}+}^2 = \mathscr {K}^\textrm{c}\), the system is \((\mathscr {K},\mathcal {W})\)-attractive in at most \(k=2\) steps (see Fig. 6), and then stays in \(\mathscr {K}\).

Fig. 6
figure 6

Sets \(X_{\mathscr {K}+}^1\) and \(X^2 = X_{\mathscr {K}+}^2{\setminus } X_{\mathscr {K}+}^1\) from Ex. 6
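The two-step attraction pattern of this example can be cross-checked numerically; the sketch below is illustrative only (the control values are arbitrarily chosen elements of \(\mathcal {W}=\mathbb {R}_+\)):

```python
import math

def f(x, u):
    # system from Example 6: x1 -> exp(x2 + u), x2 -> x1
    return (math.exp(x[1] + u), x[0])

def entry_step(x0, u_seq):
    # index of the first step at which the trajectory enters R^2_+
    x = x0
    for i, u in enumerate(u_seq, start=1):
        x = f(x, u)
        if x[0] >= 0 and x[1] >= 0:
            return i
    return None
```

Initial points with \(a\ge 0\), \(b<0\) enter \(\mathbb {R}^2_+\) in one step, and points with a negative first coordinate in two, matching \(X_{\mathscr {K}+}^1\) and \(X_{\mathscr {K}+}^2\).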

3.2 Linear systems

Let us consider an LTI system \(\Xi \) defined by (4). Consider also cones \(\mathcal {K}\) and \(\mathcal {W}\) defined, respectively, in (5) and (3).

Before we proceed to give the main results concerning linear systems, we present some properties of the cone \(\mathcal {K}\) itself, which will be helpful further below. First, define

$$\begin{aligned} \bar{\mathcal {K}} = \left\{ x\in \mathbb {R}^n:\; -Kx\ge 0\right\} = \left\{ x\in \mathbb {R}^n:\; Kx\le 0\right\} , \end{aligned}$$

and then we have the obvious equivalence

$$\begin{aligned} x\in \mathcal {K}\quad \Leftrightarrow \quad -x\in \bar{\mathcal {K}}. \end{aligned}$$

From the above property it follows that \(x\in \mathcal {K}^\textrm{c}\) does not imply, in general, \(-x\in \mathcal {K}^\textrm{c}\). Indeed, for example, if \(\mathcal {K}= \mathbb {R}_+\) and \(x<0\), that is \(x\in \mathbb {R}_+^\textrm{c}\), then \(-x\in \mathbb {R}_+\), i.e., \(-x\notin \mathbb {R}_+^\textrm{c}\). Similarly, for \(\mathcal {K}= \mathbb {R}^n_+\) and any nonzero \(x\in \mathbb {R}^n_-\subset \left( \mathbb {R}^n_+\right) ^\textrm{c}\), we have \(-x\in \mathbb {R}^n_+\), i.e., \(-x\notin \left( \mathbb {R}^n_+\right) ^\textrm{c}\). However, we have the following result.

Lemma 1

Let \(x\in \mathbb {R}^n\), where \(n\ge 2\). If \(x\in \mathcal {K}^\textrm{c}\setminus \left( \bar{\mathcal {K}}\setminus \{0\}\right) \), where \(\bar{\mathcal {K}}\setminus \{0\}\subset \mathcal {K}^\textrm{c}\), then \(-x\in \mathcal {K}^\textrm{c}\setminus (\bar{\mathcal {K}}\setminus \{0\})\).

Proof

Let \(x\in \mathcal {K}^\textrm{c}\setminus \left( \bar{\mathcal {K}}\setminus \{0\}\right) \). This means that both \(x\notin \mathcal {K}\), i.e., \(Kx\ngeq 0\), and \(x\notin \bar{\mathcal {K}}\), that is \(Kx\nleq 0\); hence Kx possesses at least one negative and at least one positive element. Consequently, \(-Kx\) also possesses at least one negative and at least one positive element, which means that both \(-x\notin \mathcal {K}\) and \(-x\notin \bar{\mathcal {K}}\). Thus, \(-x\in \mathcal {K}^\textrm{c}\setminus \left( \bar{\mathcal {K}}\setminus \{0\}\right) \). \(\square \)
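The sign-pattern argument of the proof is easy to verify numerically. The sketch below (illustrative only; it assumes NumPy, and the matrix K is an arbitrarily chosen invertible matrix, not taken from the text) checks that whenever \(Kx\) has mixed signs, so does \(K(-x)\):

```python
import numpy as np

K = np.array([[1., -2., 0.], [3., 1., -1.], [0., 2., 1.]])  # arbitrary K

# deterministic instance: K @ (0, 1, 0) = (-2, 1, 2) has mixed signs,
# i.e., x lies in K^c \ (bar K \ {0}) ...
x = np.array([0., 1., 0.])
z = K @ x
assert (z > 0).any() and (z < 0).any()
# ... and so does K @ (-x) = -z
assert (K @ -x > 0).any() and (K @ -x < 0).any()

# randomized check over many points
rng = np.random.default_rng(0)
count = 0
for _ in range(1000):
    x = rng.normal(size=3)
    z = K @ x
    if (z > 0).any() and (z < 0).any():
        count += 1
        zm = K @ (-x)
        assert (zm > 0).any() and (zm < 0).any()
```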

3.2.1 \((\mathcal {K},\mathcal {W})\)-excluded systems

Proposition 6

The following conditions are equivalent for the linear system \(\Xi \):

  1. (i)

    \(\Xi \) is \((\mathcal {K},\mathcal {W})\)-excluded;

  2. (ii)

    \({\tilde{A}}\) is invertible, \(\mathbb {R}^n_+ \subset \tilde{\mathcal {A}}\), and \(\textrm{Im}_+{\tilde{B}} \subset \bar{\tilde{\mathcal {A}}}\);

  3. (iii)

    \({\tilde{A}}^{-1}\ge 0\) and \(\textrm{Im}_+{\tilde{B}} \subset \bar{\tilde{\mathcal {A}}}\);

  4. (iv)

    \({\tilde{A}}\) is monotone and \(\textrm{Im}_+{\tilde{B}} \subset \bar{\tilde{\mathcal {A}}}\),

where \({\tilde{A}} = KAK^{-1}\), \({\tilde{B}} = KBW^{-1}\), \(\tilde{\mathcal {A}} = \{z\in \mathbb {R}^n:\, {\tilde{A}}^{-1}z\ge 0\}\) and \(\bar{\tilde{\mathcal {A}}} = \{z\in \mathbb {R}^n:\, -{\tilde{A}}^{-1}z\ge 0\}\).

Proof

(i) \(\Rightarrow \) (ii): Since \(\Xi \) is \((\mathcal {K},\mathcal {W})\)-excluded it means that \(x_{k+1} = Ax_k + Bu_k\notin \mathcal {K}\) for each \(x_k\in \mathcal {K}^\textrm{c}\) and each \(u_k\in \mathcal {W}\), and all \(k\in \mathbb {N}_0\). Then, from the definition of \(\mathcal {K}\) (or from Proposition 2), we obtain \(KAx_k + KBu_k \ngeq 0\) for each \(x_k\in \mathcal {K}^\textrm{c}\), each \(u_k\in \mathcal {W}\), and all \(k\in \mathbb {N}_0\). This condition can be rewritten, equivalently, as \(KAK^{-1}{\tilde{x}}_k + KBW^{-1}{\tilde{u}}_k \ngeq 0\) for all \(\tilde{x}_k\in \left( \mathbb {R}^n_+\right) ^\textrm{c}\), each \({\tilde{u}}_k\in \mathbb {R}^m_+\), and all \(k\in \mathbb {N}_0\). Since it should hold for any \(\tilde{x}_k\in \left( \mathbb {R}^n_+\right) ^\textrm{c}\) and any arbitrary \(\tilde{u}_k\in \mathbb {R}^m_+\), we get \({\tilde{A}}{\tilde{x}} + {\tilde{B}}{\tilde{u}} \ngeq 0\) for all \({\tilde{x}}\in \left( \mathbb {R}^n_+\right) ^\textrm{c}\) and each \(\tilde{u}\in \mathbb {R}^m_+\).

Since it holds for all \({\tilde{u}}\in \mathbb {R}^m_+\), it also holds, in particular, for \({\tilde{u}} = 0\), and we obtain \({\tilde{A}}{\tilde{x}}\ngeq 0\) for all \({\tilde{x}}\in \left( \mathbb {R}^n_+\right) ^\textrm{c}\). Assume \({\tilde{A}}\) is non-invertible and set \(C = \ker {\tilde{A}}\); then \(C\cap \left( \mathbb {R}^n_+\right) ^\textrm{c} \ne \emptyset \), because any (at least 1-dimensional) vector subspace of \(\mathbb {R}^n\) has a nonempty intersection with \(\left( \mathbb {R}^n_+\right) ^\textrm{c}\). So, take \({\tilde{x}}\in C\cap \left( \mathbb {R}^n_+\right) ^\textrm{c}\); then \({\tilde{A}}{\tilde{x}} = 0\ge 0\), which contradicts \({\tilde{A}}{\tilde{x}}\ngeq 0\). Hence \({\tilde{A}}\) is invertible.

Since \({\tilde{A}}\) is the matrix of a linear isomorphism which maps \(\mathbb {R}^n\) onto \(\mathbb {R}^n\), and \({\tilde{A}}{\tilde{x}}\ngeq 0\) for all \(\tilde{x}\in \left( \mathbb {R}^n_+\right) ^\textrm{c}\), it follows that \(\tilde{A}^{-1}\) is a matrix of transformation which maps \(\mathbb {R}^n_+\) onto some \(\mathcal {S}\subset \mathbb {R}^n_+\), that is \({\tilde{A}}^{-1}z\ge 0\) for all \(z\in \mathbb {R}^n_+\), which means that \(\mathbb {R}^n_+\subset {\tilde{\mathcal {A}}}\).

Condition \({\tilde{A}}{\tilde{x}} + {\tilde{B}}{\tilde{u}} \ngeq 0\) for all \({\tilde{x}}\in \left( \mathbb {R}^n_+\right) ^\textrm{c}\) and each \(\tilde{u}\in \mathbb {R}^m_+\) means that, in particular, \({\tilde{A}}{\tilde{x}} + \tilde{B}{\tilde{u}} \ne 0\) for all \({\tilde{x}}\in \left( \mathbb {R}^n_+\right) ^\textrm{c}\) and each \({\tilde{u}}\in \mathbb {R}^m_+\), i.e., \({\tilde{A}}{\tilde{x}} \ne -\tilde{B}{\tilde{u}}\) for all \({\tilde{x}}\in \left( \mathbb {R}^n_+\right) ^\textrm{c}\) and each \({\tilde{u}}\in \mathbb {R}^m_+\). In particular, \({\tilde{A}}{\tilde{x}} \ne -{\tilde{B}}{\tilde{u}}\) for all \({\tilde{x}}\in \mathbb {R}^n_-{\setminus }\{0\}\) and each \({\tilde{u}}\in \mathbb {R}^m_+\), which can be rewritten as \(-{\tilde{A}}\tilde{x} \ne -{\tilde{B}}{\tilde{u}}\) for all \({\tilde{x}}\in \mathbb {R}^n_+{\setminus }\{0\}\) and each \({\tilde{u}}\in \mathbb {R}^m_+\), i.e., \({\tilde{A}}{\tilde{x}} \ne \tilde{B}{\tilde{u}}\) for all \({\tilde{x}}\in \mathbb {R}^n_+{\setminus }\{0\}\) and each \({\tilde{u}}\in \mathbb {R}^m_+\). It means that \(\textrm{Im}_+{\tilde{A}} \cap \textrm{Im}_+{\tilde{B}} = \emptyset \), that is \(\tilde{\mathcal {A}}\cap \textrm{Im}_+{\tilde{B}} = \emptyset \). It implies that either \(\textrm{Im}_+{\tilde{B}} \cap \tilde{\mathcal {A}}^\textrm{c}{\setminus }(\bar{\tilde{\mathcal {A}}}{\setminus }\{0\}) \ne \emptyset \) or \(\textrm{Im}_+{\tilde{B}} \subset \bar{\tilde{\mathcal {A}}}\). Knowing already that \(\tilde{\mathcal {A}}\supset \mathbb {R}^n_+\), which implies \(\bar{\tilde{\mathcal {A}}}\supset \mathbb {R}^n_-\), we conclude that \(\tilde{\mathcal {A}}^\textrm{c}{\setminus }(\bar{\tilde{\mathcal {A}}}{\setminus }\{0\}) \subset \left( \mathbb {R}^n_+\right) ^\textrm{c}\). 
Thus, assuming \(\textrm{Im}_+{\tilde{B}} \cap \tilde{\mathcal {A}}^\textrm{c}{\setminus }(\bar{\tilde{\mathcal {A}}}{\setminus }\{0\}) \ne \emptyset \), we obtain \(\textrm{Im}_+{\tilde{B}} \cap \left( \mathbb {R}^n_+\right) ^\textrm{c} \ne \emptyset \). Thanks to Lemma 1, this means that there exist \({\tilde{x}}\in \left( \mathbb {R}^n_+\right) ^\textrm{c}\) with \({\tilde{A}}{\tilde{x}}\in \tilde{\mathcal {A}}^\textrm{c}{\setminus }(\bar{\tilde{\mathcal {A}}}{\setminus }\{0\})\) and \({\tilde{u}}\in \mathbb {R}^m_+\) such that \({\tilde{A}}{\tilde{x}} = -{\tilde{B}}{\tilde{u}}\), contradicting \({\tilde{A}}{\tilde{x}} + {\tilde{B}}{\tilde{u}} \ngeq 0\) for all \({\tilde{x}}\in \left( \mathbb {R}^n_+\right) ^\textrm{c}\) and each \({\tilde{u}}\in \mathbb {R}^m_+\). Thus, we conclude that \(\textrm{Im}_+{\tilde{B}} \subset \bar{\tilde{\mathcal {A}}}\).

(ii) \(\Rightarrow \) (iii): From \(\mathbb {R}^n_+\subset {\tilde{\mathcal {A}}}\) we know that any \(z\in \mathbb {R}^n_+\) belongs also to the cone \(\tilde{{\mathcal {A}}}\), i.e., \({\tilde{A}}^{-1}z\ge 0\) for all \(z\in \mathbb {R}^n_+\), which implies \({\tilde{A}}^{-1}\ge 0\).

(iii) \(\Rightarrow \) (iv): We have \({\tilde{A}}^{-1}\ge 0\). Let us assume that \(z = {\tilde{A}}{\tilde{x}}\ge 0\). Then \({\tilde{x}} = \tilde{A}^{-1}z\ge 0\), which means that \({\tilde{A}}\) is a monotone matrix.

(iv) \(\Rightarrow \) (i): Since the matrix \({\tilde{A}}\) is monotone, from its definition we have \({\tilde{A}}{\tilde{x}} \ge 0\) implies \({\tilde{x}}\ge 0\). Thus, assuming \({\tilde{A}}{\tilde{x}}\ge 0\) for some \({\tilde{x}}\in \left( \mathbb {R}^n_+\right) ^\textrm{c}\) leads to a contradiction. Therefore, \({\tilde{A}}{\tilde{x}}\ngeq 0\) for any \(\tilde{x}\in \left( \mathbb {R}^n_+\right) ^\textrm{c}\).

Since \(\textrm{Im}_+{\tilde{B}} \subset \bar{\tilde{{\mathcal {A}}}}\) and \({{\,\mathrm{{Im_{\ngeq }}}\,}}{\tilde{A}}\cap \mathbb {R}^n_+ =\emptyset \) we have \({\tilde{A}}{\tilde{x}} + {\tilde{B}}{\tilde{u}} \ngeq 0\) for all \({\tilde{x}}\in \left( \mathbb {R}^n_+\right) ^\textrm{c}\) and each \({\tilde{u}}\in \mathbb {R}^m_+\). Expressing it for \({\tilde{x}} = Kx\) and \({\tilde{u}} = Wu\), we get \(KAx + KBu\in \left( \mathbb {R}^n_+\right) ^\textrm{c}\) for all \(x\in \mathcal {K}^\textrm{c}\) and each \(u\in \mathcal {W}\). From the definition of \(\mathcal {K}\), the inequality \(K(Ax_k + Bu_k)\ngeq 0\) means that \(x_{k+1} = Ax_k + Bu_k\) does not belong to \(\mathcal {K}\) for all \(x_k\in \mathcal {K}^\textrm{c}\), each \(u_k\in \mathcal {W}\), and any \(k\in \mathbb {N}_0\). \(\square \)

Example 7

Consider the system \(\Xi \) defined by the matrices

$$\begin{aligned} A = \begin{pmatrix} -\frac{5}{3} &{} \frac{7}{3}\\ \frac{2}{3} &{} -\frac{1}{3} \end{pmatrix} \quad \text {and}\quad B = \begin{pmatrix} \frac{2}{3} &{} 0\\ \frac{1}{3} &{} 3 \end{pmatrix}, \end{aligned}$$

and the cones \(\mathcal {K}\subset \mathbb {R}^2\) and \(\mathcal {W}\subset \mathbb {R}^2\) defined by the matrices

$$\begin{aligned} K = \begin{pmatrix} 1 &{} -2\\ -2 &{} 1 \end{pmatrix} \quad \text {and}\quad W = \begin{pmatrix} 1 &{} 1\\ -1 &{} 1 \end{pmatrix}, \end{aligned}$$

respectively (see Fig. 7).

Fig. 7
figure 7

Cones from Ex. 7

Calculate

$$\begin{aligned} {\tilde{A}} = KAK^{-1}= & {} \begin{pmatrix} 1 &{} -2\\ -2 &{} 1 \end{pmatrix} \begin{pmatrix} -\frac{5}{3} &{} \frac{7}{3}\\ \frac{2}{3} &{} -\frac{1}{3} \end{pmatrix} \begin{pmatrix} -\frac{1}{3} &{} -\frac{2}{3}\\ -\frac{2}{3} &{} -\frac{1}{3} \end{pmatrix}\\= & {} \begin{pmatrix} -1 &{} 1\\ 2 &{} -1 \end{pmatrix} \end{aligned}$$

and

$$\begin{aligned} {\tilde{B}} = KBW^{-1}= & {} \begin{pmatrix} 1 &{} -2\\ -2 &{} 1 \end{pmatrix} \begin{pmatrix} \frac{2}{3} &{} 0\\ \frac{1}{3} &{} 3 \end{pmatrix} \begin{pmatrix} \frac{1}{2} &{} -\frac{1}{2}\\ \frac{1}{2} &{} \frac{1}{2} \end{pmatrix}\\ {}= & {} \begin{pmatrix} -3 &{} -3\\ 1 &{} 2 \end{pmatrix}. \end{aligned}$$

All conditions of Proposition 6 are satisfied. Indeed, \({\tilde{A}}\) is monotone, the cone \(\tilde{\mathcal {A}} = \mathrm {Im_+}{\tilde{A}}\cup \{0\}\) is such that \(\mathbb {R}^2_+\subset \tilde{\mathcal {A}}\) (see Fig. 8), the matrix

$$\begin{aligned} {\tilde{A}}^{-1} = \begin{pmatrix} 1 &{} 1\\ 2 &{} 1 \end{pmatrix}\ge 0, \end{aligned}$$

and the cone \(\bar{\tilde{{\mathcal {A}}}} = \mathrm {Im_+}(-\tilde{A})\cup \{0\}\) is such that \(\textrm{Im}_+{\tilde{B}} \subset \bar{\tilde{\mathcal {A}}}\). Thus, the system is \((\mathcal {K},\mathcal {W})\)-excluded.

Fig. 8
figure 8

\((\mathcal {K},\mathcal {W})\)-excluded system from Ex. 7
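These computations are easy to reproduce; the sketch below (illustrative only, assuming NumPy) recomputes \({\tilde{A}}\) and \({\tilde{B}}\) and verifies the conditions of Proposition 6. The cone inclusion \(\textrm{Im}_+{\tilde{B}} \subset \bar{\tilde{\mathcal {A}}}\) is checked by solving \((-{\tilde{A}})c = {\tilde{b}}_j\) for each column \({\tilde{b}}_j\) of \({\tilde{B}}\) and testing \(c\ge 0\), since \(\bar{\tilde{\mathcal {A}}}\) is the cone generated by the columns of \(-{\tilde{A}}\):

```python
import numpy as np

K = np.array([[1., -2.], [-2., 1.]])
W = np.array([[1., 1.], [-1., 1.]])
A = np.array([[-5., 7.], [2., -1.]]) / 3.0
B = np.array([[2/3, 0.], [1/3, 3.]])

At = K @ A @ np.linalg.inv(K)   # tilde A = [[-1, 1], [2, -1]]
Bt = K @ B @ np.linalg.inv(W)   # tilde B = [[-3, -3], [1, 2]]
Ainv = np.linalg.inv(At)        # [[1, 1], [2, 1]] >= 0: At is monotone
# nonnegative coefficients expressing Bt's columns in the cone of -At:
C = np.linalg.solve(-At, Bt)
```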

Example 8

Consider a modified system from Example 7 with a new matrix

$$\begin{aligned} B = \begin{pmatrix} \frac{4}{3} &{} -2\\ \frac{2}{3} &{} 2 \end{pmatrix} \end{aligned}$$

and unchanged cones \(\mathcal {K}\subset \mathbb {R}^2\) and \(\mathcal {W}\subset \mathbb {R}^2\).

Calculate

$$\begin{aligned} {\tilde{B}} = KBW^{-1}= & {} \begin{pmatrix} 1 &{} -2\\ -2 &{} 1 \end{pmatrix} \begin{pmatrix} \frac{4}{3} &{} -2\\ \frac{2}{3} &{} 2 \end{pmatrix} \begin{pmatrix} \frac{1}{2} &{} -\frac{1}{2}\\ \frac{1}{2} &{} \frac{1}{2} \end{pmatrix} \\ {}= & {} \begin{pmatrix} -3 &{} -3\\ 2 &{} 4 \end{pmatrix}. \end{aligned}$$

Then, \(\textrm{Im}_+{\tilde{B}} \cap \bar{\tilde{\mathcal {A}}}\ne \emptyset \), but \(\textrm{Im}_+{\tilde{B}}\) is not a subset of \(\bar{\tilde{\mathcal {A}}}\) (see Fig. 9). Thus, the system is not \((\mathcal {K},\mathcal {W})\)-excluded. Indeed, taking for example \({\tilde{x}}_k = (-0.1,3)^T\) and \({\tilde{u}}_k = (0,1)^T\) yields \({\tilde{x}}_{k+1} = {\tilde{A}}{\tilde{x}}_k + {\tilde{B}}{\tilde{u}}_k = (0.1,0.8)^T>0\). It corresponds to \(x_k = (-59/30,-28/30)^T\) and \(u_k = (-0.5,0.5)^T\), which gives \(x_{k+1} = Ax_k + Bu_k = (-17/30,-1/3)^T\in \mathcal {K}\), because \(Kx_{k+1} = (0.1,0.8)^T>0\).

Fig. 9
figure 9

Not \((\mathcal {K},\mathcal {W})\)-excluded system from Ex. 8
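The counterexample can likewise be reproduced numerically; the sketch below (illustrative only, assuming NumPy) confirms both that \(\textrm{Im}_+{\tilde{B}}\not \subset \bar{\tilde{\mathcal {A}}}\) and that the explicit point from the text is driven from \(\mathcal {K}^\textrm{c}\) into \(\mathcal {K}\):

```python
import numpy as np

K = np.array([[1., -2.], [-2., 1.]])
W = np.array([[1., 1.], [-1., 1.]])
A = np.array([[-5., 7.], [2., -1.]]) / 3.0
B = np.array([[4/3, -2.], [2/3, 2.]])

At = K @ A @ np.linalg.inv(K)
Bt = K @ B @ np.linalg.inv(W)    # [[-3, -3], [2, 4]]
C = np.linalg.solve(-At, Bt)     # a negative entry in the second column:
                                 # that column of Bt is outside the cone
xk = np.array([-59/30, -28/30])  # K @ xk = (-0.1, 3): xk in K^c
uk = np.array([-0.5, 0.5])       # W @ uk = (0, 1):   uk in W
xn = A @ xk + B @ uk             # next state
```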

Remark 13

If the conditions of Proposition 6 are satisfied with the smallest possible inclusion, i.e., \(\tilde{\mathcal {A}}=\mathbb {R}^n_+\) or, equivalently, \(\textrm{Im}_+{\tilde{A}} = \mathbb {R}^n_+{\setminus }\{0\}\), then the matrix \({\tilde{A}}\) is a positive generalized permutation matrix (each column of the matrix \({\tilde{A}}\) lies on some axis of the canonical basis of \(\mathbb {R}^n\)).

In the case when \({\tilde{A}} = KAK^{-1}\) is a strictly positive diagonal matrix (a particular form of positive generalized permutation matrix), we have the following result.

Corollary 2

If the following conditions are satisfied:

  1. (i)

the matrix A possesses n distinct positive real eigenvalues;

  2. (ii)

    the columns of the matrix \(K^{-1}\) are the eigenvectors of A;

  3. (iii)

    \(KBW^{-1} \le 0\),

then the system \(\Xi \) is \((\mathcal {K},\mathcal {W})\)-excluded.

Proof

By virtue of condition (i) we know that the matrix A is diagonalizable over the field \(\mathbb {R}\), and, thanks to (ii), we know that \(KAK^{-1}\) is a diagonal matrix with eigenvalues on the diagonal. Thus, \({\tilde{A}} = KAK^{-1}\) is a strictly positive diagonal matrix and, thereby, a positive generalized permutation matrix, which means that \(\bar{\tilde{\mathcal {A}}} = \mathbb {R}^n_-\). Together with condition (iii), signifying that \(\textrm{Im}_+\tilde{B}\subset \mathbb {R}^n_-\), we get \(\textrm{Im}_+\tilde{B}\subset \bar{\tilde{\mathcal {A}}}\), which implies that the system \(\Xi \) is \((\mathcal {K},\mathcal {W})\)-excluded. \(\square \)

Example 9

Consider the system \(\Xi \) defined by the matrices

$$\begin{aligned} A = \begin{pmatrix} \frac{5}{3} &{} \frac{1}{3}\\ \frac{2}{3} &{} \frac{4}{3} \end{pmatrix} \quad \text {and}\quad B = \begin{pmatrix} \frac{5}{3} &{} \frac{2}{3}\\ -\frac{7}{3} &{} -\frac{4}{3} \end{pmatrix}, \end{aligned}$$

and the cones \(\mathcal {K}\subset \mathbb {R}^2\) and \(\mathcal {W}\subset \mathbb {R}^2\) defined by the matrices

$$\begin{aligned} K = \begin{pmatrix} 1 &{} -1\\ 2 &{} 1 \end{pmatrix} \quad \text {and}\quad W = \begin{pmatrix} 1 &{} 1\\ -2 &{} -1 \end{pmatrix}, \end{aligned}$$

respectively (see Fig. 10).

Fig. 10
figure 10

Regions from Ex. 9

Since

$$\begin{aligned} KAK^{-1} = \begin{pmatrix} 1 &{} -1\\ 2 &{} 1 \end{pmatrix}\begin{pmatrix} \frac{5}{3} &{} \frac{1}{3}\\ \frac{2}{3} &{} \frac{4}{3} \end{pmatrix} \begin{pmatrix} \frac{1}{3} &{} \frac{1}{3}\\ -\frac{2}{3} &{} \frac{1}{3} \end{pmatrix} =\begin{pmatrix} 1 &{} 0\\ 0 &{} 2 \end{pmatrix} \end{aligned}$$

is a strictly positive diagonal matrix, and

$$\begin{aligned} KBW^{-1}= & {} \begin{pmatrix} 1 &{} -1\\ 2 &{} 1 \end{pmatrix}\begin{pmatrix} \frac{5}{3} &{} \frac{2}{3}\\ -\frac{7}{3} &{} -\frac{4}{3} \end{pmatrix}\begin{pmatrix} -1 &{} -1\\ 2 &{} 1 \end{pmatrix} \\ {}= & {} \begin{pmatrix} 0 &{} -2\\ -1 &{} -1 \end{pmatrix}\le 0, \end{aligned}$$

the system \(\Xi \) is \((\mathcal {K},\mathcal {W})\)-excluded. Indeed, in this case, we can see that the matrix A has two distinct eigenvalues \(\lambda _1 = 1\) and \(\lambda _2 = 2\), and the corresponding eigenvectors \(v_1 = (1/3,-2/3)^T\) and \(v_2 = (1/3,1/3)^T\), respectively, are the columns of matrix \(K^{-1}\).
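The three conditions of Corollary 2 can be verified numerically for this example; the sketch below is illustrative only (assuming NumPy) and checks the eigenvalues of A, the diagonalization by \(K^{-1}\), and the sign of \(KBW^{-1}\):

```python
import numpy as np

A = np.array([[5/3, 1/3], [2/3, 4/3]])
B = np.array([[5/3, 2/3], [-7/3, -4/3]])
K = np.array([[1., -1.], [2., 1.]])
W = np.array([[1., 1.], [-2., -1.]])

evals = np.sort(np.linalg.eigvals(A).real)  # (i) distinct positive reals
At = K @ A @ np.linalg.inv(K)               # (ii) diagonal: K^{-1} columns
                                            #      are eigenvectors of A
Bt = K @ B @ np.linalg.inv(W)               # (iii) must satisfy Bt <= 0
```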

3.2.2 \((\mathcal {K},\mathcal {W})\)-catch systems

Proposition 7

The system \(\Xi \) is \((\mathcal {K},\mathcal {W})\)-catch if and only if

  1. (i)

    \(KAK^{-1} \ge 0\) and \(KAK^{-1}\) is not a monotone matrix;

  2. (ii)

    \(KBW^{-1} \ge 0\).

Proof

(Sufficiency) Conditions \(KAK^{-1} \ge 0\) and \(KBW^{-1} \ge 0\) guarantee \((\mathcal {K},\mathcal {W})\)-invariance of \(\Xi \) (an intrinsic property of \((\mathcal {K},\mathcal {W})\)-catch), which is due to Corollary 1. Thanks to the condition \(KBW^{-1} \ge 0\), any \({\tilde{u}}_k\in \mathbb {R}^m_+\) can only drive \(x_{k+1}\) inside the cone \(\mathcal {K}\). From Proposition 6 we know that for a system with a matrix A such that \(KAK^{-1}\) is not a monotone matrix (with \(KBW^{-1} \ge 0\)), there exist points in \(\mathcal {K}^\textrm{c}\) from which the state of the system evolves into the cone \(\mathcal {K}\).

(Necessity) Since the \((\mathcal {K},\mathcal {W})\)-catch of \(\Xi \) implies (from its definition) \((\mathcal {K},\mathcal {W})\)-invariance of \(\Xi \), the conditions \(KAK^{-1} \ge 0\) and \(KBW^{-1} \ge 0\) must hold. On the other hand, the existence of \(x_k\in \mathcal {K}^\textrm{c}\) yielding \(x_{k+1}\in \mathcal {K}\) means (from the definition of \(\mathcal {K}\)) that \(KAx_k+ KBu_k\ge 0\) for some \(x_k\in \mathcal {K}^\textrm{c}\) and for each \(u_k\in \mathcal {W}\). It can be rewritten as \(KAK^{-1}{\tilde{x}}_k+ KBW^{-1}{\tilde{u}}_k\ge 0\) for some \({\tilde{x}}_k = Kx_k\in \left( \mathbb {R}^n_+\right) ^\textrm{c}\) and for each \({\tilde{u}}_k\in \mathbb {R}^m_+\). Since this condition should hold, in particular, for \({\tilde{u}}_k = 0\), we get \(KAK^{-1}{\tilde{x}}_k\ge 0\) for some \({\tilde{x}}_k\in \left( \mathbb {R}^n_+\right) ^\textrm{c}\), which, as can be deduced from Proposition 6, holds for any \(KAK^{-1}\ge 0\) that is not a monotone matrix. \(\square \)

Remark 14

If system \(\Xi \) is \((\mathcal {K},\mathcal {W})\)-catch and \({{\,\textrm{rank}\,}}A<n\), then there exist infinitely many \(x_0\in \mathcal {K}^\textrm{c}\) from which the system goes into \(\mathcal {K}\). Indeed, \({{\,\textrm{rank}\,}}A < n\) implies \(\ker A \ne \{0\}\), i.e., there exists a nonzero vector \(v\in \ker A\) such that \(V = {{\,\mathrm{{Vect}}\,}}\{v\}\cap \mathcal {K}^\textrm{c}\ne \emptyset \). Therefore, for any \(x_0\in V\subset \mathcal {K}^\textrm{c}\) we have \(Ax_0 = 0\in \mathcal {K}\). It means that for \((\mathcal {K},\mathcal {W})\)-catch systems with a singular system matrix A there exist infinitely many points \(x_0\in \mathcal {K}^\textrm{c}\) belonging to a subset of the null space of A of dimension \(n-r\), where \(r={{\,\textrm{rank}\,}}A\). Of course, there may be other points \(x_0\in \mathcal {K}^\textrm{c}\) outside \(\ker A\) from which the system goes into \(\mathcal {K}\).

Example 10

Consider system \(\Xi \) defined by the matrices

$$\begin{aligned} A = \begin{pmatrix} \frac{2}{3} &{} \frac{1}{3}\\ -\frac{4}{3} &{} -\frac{2}{3} \end{pmatrix} \quad \text {and}\quad B = \begin{pmatrix} -1 &{} -\frac{1}{3}\\ 1 &{} \frac{2}{3} \end{pmatrix}, \end{aligned}$$

and the cones \(\mathcal {K}\subset \mathbb {R}^2\) and \(\mathcal {W}\subset \mathbb {R}^2\) defined by the matrices

$$\begin{aligned} K = \begin{pmatrix} 1 &{} -1\\ 2 &{} 1 \end{pmatrix} \quad \text {and}\quad W = \begin{pmatrix} 1 &{} 1\\ -2 &{} -1 \end{pmatrix}, \end{aligned}$$

respectively (see Fig. 10). Since

$$\begin{aligned} KAK^{-1}= & {} \begin{pmatrix} 1 &{} -1\\ 2 &{} 1 \end{pmatrix} \begin{pmatrix} \frac{2}{3} &{} \frac{1}{3}\\ -\frac{4}{3} &{} -\frac{2}{3} \end{pmatrix} \begin{pmatrix} \frac{1}{3} &{} \frac{1}{3}\\ -\frac{2}{3} &{} \frac{1}{3} \end{pmatrix} \\ {}= & {} \begin{pmatrix} 0 &{} 1\\ 0 &{} 0 \end{pmatrix}\ge 0 \end{aligned}$$

and is not a monotone matrix, as well as

$$\begin{aligned} KBW^{-1}= & {} \begin{pmatrix} 1 &{} -1\\ 2 &{} 1 \end{pmatrix} \begin{pmatrix} -1 &{} -\frac{1}{3}\\ 1 &{} \frac{2}{3} \end{pmatrix} \begin{pmatrix} -1 &{} -1\\ 2 &{} 1 \end{pmatrix} \\ {}= & {} \begin{pmatrix} 0 &{} 1\\ 1 &{} 1 \end{pmatrix}\ge 0, \end{aligned}$$

system \(\Xi \) is \((\mathcal {K},\mathcal {W})\)-catch. Indeed, if we take, e.g., \(x_k = (0,1)^T\notin \mathcal {K}\), we get \(x_{k+1} = Ax_k = (1/3,-2/3)^T\in \mathcal {K}\), because

$$\begin{aligned} Kx_{k+1} = \begin{pmatrix} 1 &{} -1\\ 2 &{} 1 \end{pmatrix} \begin{pmatrix} \frac{1}{3}\\ -\frac{2}{3} \end{pmatrix} = \begin{pmatrix} 1\\ 0 \end{pmatrix}\ge 0. \end{aligned}$$

Since \({{\,\textrm{rank}\,}}A = 1\), so that \(\ker A = {{\,\mathrm{{Vect}}\,}}\{\left( \frac{1}{2},-1\right) ^T\}\), and \((\ker A)\cap \mathcal {K}\ne \{0\}\), i.e., \(\left( \frac{1}{2},-1\right) ^T\in \mathcal {K}\), there are points \(x_0\in \textrm{Im}_+\{\left( -\frac{1}{2},1\right) ^T\}\subset \ker A\), lying in \(\mathcal {K}^\textrm{c}\), from which system \(\Xi \) goes to 0.

Concerning the input term \(Bu\) of \(\Xi \), since all admissible controls \(u\in \mathcal {W}\) can be parameterized as

$$\begin{aligned} u = W^{-1}{\tilde{u}} &= \begin{pmatrix} -1 &{} -1\\ 2 &{} 1 \end{pmatrix} \begin{pmatrix} a\\ b \end{pmatrix} \\ &= \begin{pmatrix} -a -b\\ 2a + b \end{pmatrix}\quad \text {for all }a,\,b\in \mathbb {R}_+, \end{aligned}$$

the term \(Bu\) belongs to \(\mathcal {K}\), because

$$\begin{aligned} KBu &= \begin{pmatrix} 1 &{} -1\\ 2 &{} 1 \end{pmatrix} \begin{pmatrix} -1 &{} -\frac{1}{3}\\ 1 &{} \frac{2}{3} \end{pmatrix} \begin{pmatrix} -a -b\\ 2a + b \end{pmatrix}\\ &= \begin{pmatrix} b\\ a+b \end{pmatrix}\ge 0\quad \text {for all }a,\,b\in \mathbb {R}_+. \end{aligned}$$
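The computations of Example 10 can be reproduced numerically; a sketch with the matrices of the example, checking the nonnegativity conditions stated above and the sample caught point:

```python
import numpy as np

A = np.array([[ 2/3,  1/3],
              [-4/3, -2/3]])
B = np.array([[-1.0, -1/3],
              [ 1.0,  2/3]])
K = np.array([[1.0, -1.0],
              [2.0,  1.0]])
W = np.array([[ 1.0,  1.0],
              [-2.0, -1.0]])

A_t = K @ A @ np.linalg.inv(K)   # tilde A = K A K^{-1}
B_t = K @ B @ np.linalg.inv(W)   # tilde B = K B W^{-1}

# Both transformed matrices are entrywise nonnegative ...
assert np.allclose(A_t, [[0, 1], [0, 0]])
assert np.allclose(B_t, [[0, 1], [1, 1]])

# ... and the sample point x_k = (0,1)^T outside K is caught:
x = np.array([0.0, 1.0])
assert not np.all(K @ x >= 0)          # x_k is not in K
assert np.all(K @ (A @ x) >= -1e-12)   # x_{k+1} = A x_k is in K
```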

3.2.3 \((\mathcal {K},\mathcal {W})\)-escape systems

Remark 15

In the case of linear system \(\Xi \), the property \((\mathcal {K},\mathcal {W})\)-escape in at most k steps is defined, naturally, for each \(x_0\in \mathcal {K}\) and each \({\bar{u}}_{k-1}\in \mathcal {W}^{k}\), except the \((x_0,\bar{u}_{k-1})=(0,0)\) pair (corresponding to the system remaining indefinitely in the equilibrium of \(\Xi \)).

Example 11

Consider the following system (discrete negation with delay)

$$\begin{aligned} x_{k+1} = \begin{pmatrix} u_k\\ -x^1_k \end{pmatrix} \end{aligned}$$

with region \(\mathcal {K}= \mathbb {R}^2_+\) in state-space and \(\mathcal {W}=\mathbb {R}_+\) in input-space. Obviously \(x_{k+1}\not \in \mathbb {R}^2_+\) for all \(x_k\in \mathbb {R}^2_+\) and all \(u_k\in \mathbb {R}\) except the \((x_0,{\bar{u}}_0)=(0,0)\) pair, which means that this system is \(\left( \mathbb {R}^2_+,\mathbb {R}_+\right) \)-escape in at most \(k=1\) step.
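The one-step escape can be checked by direct simulation for sample states; a minimal sketch (the helper name is ours):

```python
def step(x, u):
    """One step of the discrete negation with delay: x_{k+1} = (u_k, -x_k^1)."""
    return (u, -x[0])

# For states in R^2_+ with a positive first component, the next state
# leaves R^2_+ regardless of the control u >= 0, since its second
# component -x_k^1 is negative.
for x in [(1.0, 0.0), (2.0, 3.0), (0.5, 4.0)]:
    for u in [0.0, 1.0, 10.0]:
        nxt = step(x, u)
        assert not (nxt[0] >= 0 and nxt[1] >= 0)
```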

Proposition 8

A linear system \(\Xi \) is \((\mathcal {K},\mathcal {W})\)-escape in at most k steps if and only if for each \(x_0\in \mathcal {K}\) and each \({\bar{u}}_{k-1}\in \mathcal {W}^k\) the following conditions hold:

  1. (i)

    \(X_{\mathbb {R}^n_+-}^1 \cup X_{\mathbb {R}^n_+-}^2 \cup \cdots \cup X_{\mathbb {R}^n_+-}^{k} = \mathbb {R}^n_+\);

  2. (ii)

    \(X_{\mathbb {R}^n_+-}^1 \cup X_{\mathbb {R}^n_+-}^2 \cup \cdots \cup X_{\mathbb {R}^n_+-}^{k-1} \subsetneqq \mathbb {R}^n_+\),

where

$$\begin{aligned} X_{\mathbb {R}^n_+-}^i = \{{\tilde{x}}_0\in \mathbb {R}^n_+ :\left( {\tilde{A}}^i\tilde{x}_0 + \textrm{Im}_+{\tilde{R}}^i\right) \cap \mathbb {R}^n_+ = \emptyset \}, \end{aligned}$$
(8)

and \({\tilde{A}} = KAK^{-1}\), \({\tilde{B}} = KBW^{-1}\), \({\tilde{R}}^i = ({\tilde{B}},{\tilde{A}}{\tilde{B}},\ldots ,{\tilde{A}}^{i-1}{\tilde{B}})\).

Proof

It follows directly from Proposition 4 where we use the fact that iterative formula (7) takes the form

$$\begin{aligned} x_i = A^ix_0 + \sum _{j=0}^{i-1}{A^{i-j-1}Bu_j}, \end{aligned}$$

by which we get

$$\begin{aligned} X_{\mathcal {K}-}^i &= \{x_0\in \mathcal {K}:KA^ix_0 \\ &\quad + \sum _{j=0}^{i-1}{KA^{i-j-1}Bu_j} \ngeq 0\}\quad \forall \bar{u}_{i-1}\in \mathcal {W}^i, \end{aligned}$$

and then

$$\begin{aligned} X_{\mathbb {R}^n_+-}^i &= \{{\tilde{x}}_0\in \mathbb {R}^n_+ :KA^iK^{-1}{\tilde{x}}_0 \\ &\quad + \sum _{j=0}^{i-1}{KA^{i-j-1}BW^{-1}{\tilde{u}}_j} \ngeq 0\}\\ &= \{{\tilde{x}}_0\in \mathbb {R}^n_+ :{\tilde{A}}^i{\tilde{x}}_0 \\ &\quad + \sum _{j=0}^{i-1}{{\tilde{A}}^{i-j-1}{\tilde{B}}{\tilde{u}}_j} \ngeq 0\}\quad \forall \bar{{\tilde{u}}}_{i-1}\in (\mathbb {R}^m_+)^i, \end{aligned}$$

which can be expressed as

$$\begin{aligned} X_{\mathbb {R}^n_+-}^i = \{{\tilde{x}}_0\in \mathbb {R}^n_+ :\left( {\tilde{A}}^i\tilde{x}_0 + \textrm{Im}_+{\tilde{R}}^i\right) \cap \mathbb {R}^n_+ = \emptyset \}. \end{aligned}$$

\(\square \)

Example 12

Consider system \(\Xi \) defined by the matrices

$$\begin{aligned} A = \begin{pmatrix} 0 &{} -1\\ 1 &{} 0 \end{pmatrix} \quad \text {and}\quad B = \begin{pmatrix} 0\\ 1 \end{pmatrix}, \end{aligned}$$

and the cones \(\mathcal {K}\subset \mathbb {R}^2\) and \(\mathcal {W}\subset \mathbb {R}\) defined by the matrices

$$\begin{aligned} K = \begin{pmatrix} -1 &{} -1\\ 1 &{} -1 \end{pmatrix} \quad \text {and}\quad W = 1, \end{aligned}$$

respectively (see Fig. 12a). Calculate

$$\begin{aligned} {\tilde{A}} = KAK^{-1} &= \begin{pmatrix} -1 &{} -1\\ 1 &{} -1 \end{pmatrix} \begin{pmatrix} 0 &{} -1\\ 1 &{} 0 \end{pmatrix} \begin{pmatrix} -\frac{1}{2} &{} \frac{1}{2}\\ -\frac{1}{2} &{} -\frac{1}{2} \end{pmatrix} \\ &= \begin{pmatrix} 0 &{} -1\\ 1 &{} 0 \end{pmatrix} \end{aligned}$$

and

$$\begin{aligned} {\tilde{B}} = KBW^{-1} &= \begin{pmatrix} -1 &{} -1\\ 1 &{} -1 \end{pmatrix} \begin{pmatrix} 0\\ 1 \end{pmatrix} \\ &= \begin{pmatrix} -1\\ -1 \end{pmatrix}. \end{aligned}$$

We have

$$\begin{aligned} X_{\mathbb {R}^2_+-}^1&= \{ (a,b)^T\in \mathbb {R}^2_+ :b\ne 0\}\cup \{0\}\\ X_{\mathbb {R}^2_+-}^2&= X_{\mathbb {R}^2_+-}^1 \cup \{ (a,0)^T\in \mathbb {R}^2_+ :a\ne 0\}, \end{aligned}$$

because

$$\begin{aligned} &{\tilde{A}}\begin{pmatrix} a\\ b \end{pmatrix} = \begin{pmatrix} -b\\ a \end{pmatrix},\quad {\tilde{A}}^2\begin{pmatrix} a\\ b \end{pmatrix} = \begin{pmatrix} -a\\ -b \end{pmatrix},\quad {\tilde{A}}\begin{pmatrix} a\\ 0 \end{pmatrix} = \begin{pmatrix} 0\\ a \end{pmatrix},\\ &{\tilde{A}}^2\begin{pmatrix} a\\ 0 \end{pmatrix} = \begin{pmatrix} -a\\ 0 \end{pmatrix},\quad {\tilde{A}}{\tilde{B}} = \begin{pmatrix} 1\\ -1 \end{pmatrix}. \end{aligned}$$

Therefore, \(X_{\mathbb {R}^2_+-}^1 \subsetneqq \mathbb {R}^2_+\) and \(X_{\mathbb {R}^2_+-}^1\cup X_{\mathbb {R}^2_+-}^2 = \mathbb {R}^2_+\).

Moreover,

$$\begin{aligned} \begin{aligned} X_{\mathcal {K}-}^1&= \{ \frac{1}{2}(b-a,-a-b)^T\in \mathcal {K}:a\ge 0,\;b> 0\}\cup \{0\}\\ X_{\mathcal {K}-}^2&= X_{\mathcal {K}-}^1 \cup \{ -\frac{1}{2}(a,a)^T\in \mathcal {K}:a > 0\}, \end{aligned} \end{aligned}$$

where \(X_{\mathcal {K}-}^1 \subsetneqq \mathcal {K}\) and \(X_{\mathcal {K}-}^1\cup X_{\mathcal {K}-}^2 = \mathcal {K}\).

Therefore, system \(\Xi \) is \((\mathcal {K},\mathcal {W})\)-escape in at most 2 steps (see Fig. 11).
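The two-step escape of Example 12 can also be checked by simulating the transformed system \(\tilde{x}_{i+1} = {\tilde{A}}\tilde{x}_i + {\tilde{B}}\tilde{u}_i\); a sketch (the helper name is ours):

```python
import numpy as np

A_t = np.array([[0.0, -1.0],
                [1.0,  0.0]])   # tilde A from Example 12
B_t = np.array([-1.0, -1.0])    # tilde B from Example 12 (single input)

def escape_step(x0, controls):
    """First step i at which the trajectory leaves R^2_+ (None if it never does)."""
    x = np.array(x0, dtype=float)
    for i, u in enumerate(controls, start=1):
        x = A_t @ x + B_t * u
        if not np.all(x >= 0):
            return i
    return None

rng = np.random.default_rng(0)

# Random nonzero initial states in R^2_+ with random admissible controls
# always leave R^2_+ within 2 steps:
for _ in range(200):
    x0 = rng.uniform(0.01, 5.0, size=2)
    u = rng.uniform(0.0, 5.0, size=2)
    assert escape_step(x0, u) <= 2

# A boundary point needing the full 2 steps (cf. the set X^2 above):
assert escape_step([1.0, 0.0], [0.0, 0.0]) == 2
```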

Fig. 11

Sets from Ex. 12

Proposition 9

Linear system \(\Xi \) is \((\mathcal {K},\mathcal {W})\)-escape in at most \(k=1\) step if and only if

$$\begin{aligned} \textrm{Im}_+ ({\tilde{A}}, {\tilde{B}}) \cap \mathbb {R}^n_+ = \emptyset , \end{aligned}$$

where \({\tilde{A}} = KAK^{-1}\) and \({\tilde{B}} = KBW^{-1}\).

Although the above result follows from Proposition 8 for \(k=1\), and thus holds for all \({\tilde{x}}_0\in \mathbb {R}^n_+\), a detailed proof is given for clarity.

Proof

(Necessity) Since the system \(\Xi \) is \((\mathcal {K},\mathcal {W})\)-escape in at most \(k=1\) step, it means that \(X_{\mathcal {K}-}^1 = \mathcal {K}\), so, for any state \(x_0\in \mathcal {K}\), the successor state \(x_1\notin \mathcal {K}\), i.e., \(Kx_1\ngeq 0\), thus \(KAx_0 + KBu_0 \ngeq 0\) for all \(x_0\in \mathcal {K}\) and \(u_0\in \mathcal {W}\), except \((x_0,u_0)=(0,0)\). This can be equivalently rewritten as \(KAK^{-1}{\tilde{x}}_0 + KBW^{-1}{\tilde{u}}_0 \ngeq 0\) for all \(\tilde{x}_0\in \mathbb {R}^n_+\) and \({\tilde{u}}_0\in \mathbb {R}^m_+\), except \(({\tilde{x}}_0,\tilde{u}_0)=(0,0)\), which in turn can be written as \(\textrm{Im}_+ ({\tilde{A}}, {\tilde{B}}) \cap \mathbb {R}^n_+ = \emptyset \).

(Sufficiency) Condition \(\textrm{Im}_+ ({\tilde{A}}, {\tilde{B}}) \cap \mathbb {R}^n_+ = \emptyset \) means that for any \({\tilde{z}}\in \textrm{Im}_+ ({\tilde{A}},{\tilde{B}})\) we have \({\tilde{z}} = {\tilde{A}}{\tilde{x}}_0 + \tilde{B}{\tilde{u}}_0\notin \mathbb {R}^n_+\) for all \({\tilde{x}}_0\in \mathbb {R}^n_+\) and \(\tilde{u}_0\in \mathbb {R}^m_+\), except \(({\tilde{x}}_0,{\tilde{u}}_0)=(0,0)\). So, \(z=K^{-1}{\tilde{z}} = K^{-1}({\tilde{A}}{\tilde{x}}_0 + {\tilde{B}}{\tilde{u}}_0) = Ax_0 + Bu_0\notin \mathcal {K}\), where we used \({\tilde{x}}_0 = Kx_0\) for all \(x_0\in \mathcal {K}\), and \({\tilde{u}}_0 = Wu_0\) for all \(u_0\in \mathcal {W}\), except \((x_0,u_0)=(0,0)\). All this together means that \(x_1 = Ax_0 + Bu_0\notin \mathcal {K}\) for all \(x_0\in \mathcal {K}\), \(u_0\in \mathcal {W}\), except \((x_0,u_0)=(0,0)\), whence \(\Xi \) is \((\mathcal {K},\mathcal {W})\)-escape in at most \(k=1\) step. \(\square \)

Below, necessary conditions for the system \(\Xi \) to be \((\mathcal {K},\mathcal {W})\)-escape in at most \(k=1\) step, allowing a preliminary verification of this property, are provided.

Proposition 10

If the system \(\Xi \) is \((\mathcal {K},\mathcal {W})\)-escape in at most \(k=1\) step then:

  1. (i)

    \(\textrm{Im}_+ {\tilde{A}} \cap \mathbb {R}^n_+ = \emptyset \) and \(\textrm{Im}_+ {\tilde{B}} \cap \mathbb {R}^n_+ = \emptyset \), where \({\tilde{A}} = KAK^{-1}\) and \({\tilde{B}} = KBW^{-1}\);

  2. (ii)

    each column vector \({\tilde{A}}_i\), \(1\le i\le n\), of matrix \({\tilde{A}} = KAK^{-1}\), and each column vector \({\tilde{B}}_j\), \(1\le j\le m\), of matrix \({\tilde{B}} = KBW^{-1}\), must possess at least one negative entry;

  3. (iii)

    for each of the matrices \({\tilde{A}}, {\tilde{B}}, (\tilde{A}, {\tilde{B}})\), the sum of the elements of at least one of their rows must be less than 0;

  4. (iv)

    \(\ker A\cap \mathcal {K}= 0\) and \(\ker B\cap \mathcal {W}= 0\).

Proof

  1. (i)

    It follows from the condition of Proposition 9, which should hold for all \({\tilde{x}}_0\in \mathbb {R}^n_+\) and all \({\tilde{u}}_0\in \mathbb {R}^m_+\), except the pair \(({\tilde{x}}_0,{\tilde{u}}_0)=(0,0)\). In particular, for \({\tilde{u}}_0 = 0\), we get \({\tilde{A}}{\tilde{x}}_0 \ngeq 0\), where \({\tilde{A}} = KAK^{-1}\), for all \({\tilde{x}}_0\in \mathbb {R}^n_+\setminus \{0\}\). It means that \({\tilde{A}}\tilde{x}_0\notin \mathbb {R}^n_+\) for all \({\tilde{x}}_0\in \mathbb {R}^n_+{\setminus } \{0\}\), which equivalently can be written as \(\textrm{Im}_+ {\tilde{A}} \cap \mathbb {R}^n_+ = \emptyset \). Likewise, taking \({\tilde{x}}_0=0\), we get \(KBW^{-1}\tilde{u}_0 \ngeq 0\) for all \({\tilde{u}}_0\in \mathbb {R}^m_+{\setminus }\{0\}\). It means that \({\tilde{B}}{\tilde{u}}_0\notin \mathbb {R}^n_+\) for all \(\tilde{u}_0\in \mathbb {R}^m_+{\setminus } \{0\}\), which equivalently can be written as \(\textrm{Im}_+ {\tilde{B}} \cap \mathbb {R}^n_+ = \emptyset \).

  2. (ii)

    Since the system \(\Xi \) is \((\mathcal {K},\mathcal {W})\)-escape in at most \(k=1\) step, the conditions \({\tilde{A}}{\tilde{x}}_0 = KAK^{-1}{\tilde{x}}_0\ngeq 0\) and \({\tilde{B}}{\tilde{u}}_0 = KBW^{-1}{\tilde{u}}_0\ngeq 0\) are satisfied for all \({\tilde{x}}_0\in \mathbb {R}^n_+\) and \({\tilde{u}}_0\in \mathbb {R}^m_+\), except \(({\tilde{x}}_0,{\tilde{u}}_0)=(0,0)\); in particular, they hold for \(e_i = (0,\ldots ,0,1,0,\ldots ,0)^T\in \mathbb {R}^n_+\), \(1\le i\le n\), and \(e_j = (0,\ldots ,0,1,0,\ldots ,0)^T\in \mathbb {R}^m_+\), \(1\le j\le m\), with “1” at the ith and jth entries, respectively. Therefore, \({\tilde{A}}_i = {\tilde{A}}e_i\ngeq 0\) and \({\tilde{B}}_j = {\tilde{B}}e_j\ngeq 0\).

  3. (iii)

    Since the system \(\Xi \) is \((\mathcal {K},\mathcal {W})\)-escape in at most \(k=1\) step, the conditions \({\tilde{A}}{\tilde{x}}_0 = KAK^{-1}{\tilde{x}}_0\ngeq 0\) and \({\tilde{B}}{\tilde{u}}_0 = KBW^{-1}{\tilde{u}}_0\ngeq 0\) are satisfied for all \({\tilde{x}}_0\in \mathbb {R}^n_+\) and \({\tilde{u}}_0\in \mathbb {R}^m_+\), except \(({\tilde{x}}_0,{\tilde{u}}_0)=(0,0)\); in particular, they hold for \(1_n = (1,\ldots ,1)^T\in \mathbb {R}^n_+\) and \(1_m = (1,\ldots ,1)^T\in \mathbb {R}^m_+\). Therefore, \({\tilde{A}} 1_n\ngeq 0\), \({\tilde{B}} 1_m\ngeq 0\) and \({\tilde{A}} 1_n + {\tilde{B}} 1_m = ({\tilde{A}}, {\tilde{B}})1_{n+m} \ngeq 0\), where each entry of these vectors equals the sum of the elements of the corresponding row.

  4. (iv)

    Since the system \(\Xi \) is \((\mathcal {K},\mathcal {W})\)-escape in at most \(k=1\) step, state \(x_1 = Ax_0 + Bu_0\notin \mathcal {K}\), and thereby \(x_1\ne 0\), for all \(x_0\in \mathcal {K}\) and \(u_0\in \mathcal {W}\), except the pair \((x_0,u_0)=(0,0)\), which implies \(x_0\notin \ker A\) and \(u_0\notin \ker B\).

\(\square \)
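Conditions (ii) and (iii) of Proposition 10 are cheap entrywise checks and lend themselves to a quick preliminary screening; a sketch (the helper name and the illustrative matrices are ours, not taken from a particular example):

```python
import numpy as np

def necessary_conditions(A_t, B_t):
    """Check conditions (ii) and (iii) of Proposition 10 for
    tilde A = K A K^{-1} and tilde B = K B W^{-1}."""
    M = np.hstack([A_t, B_t])
    # (ii) every column of tilde A and tilde B has a negative entry:
    cols_ok = np.all(M.min(axis=0) < 0)
    # (iii) each of tilde A, tilde B, (tilde A, tilde B) has a row
    # whose element sum is negative:
    rows_ok = all(np.any(X.sum(axis=1) < 0) for X in (A_t, B_t, M))
    return bool(cols_ok and rows_ok)

# Illustrative matrices satisfying the conditions ...
assert necessary_conditions(np.array([[-1., 0.], [0., -1.]]),
                            np.array([[-1.], [-1.]]))
# ... and a pair violating (ii): first column of tilde A has no negative entry.
assert not necessary_conditions(np.array([[1., 0.], [0., -1.]]),
                                np.array([[-1.], [-1.]]))
```

Failing either check rules out one-step escape without solving the full condition of Proposition 9.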

Example 13

Consider the system \(\Xi \) defined by the matrices

$$\begin{aligned} A = \begin{pmatrix} 0 &{} -2\\ 0 &{} 0 \end{pmatrix} \quad \text {and}\quad B = \begin{pmatrix} \frac{1}{2} &{} 1\\ -\frac{1}{2} &{} 0 \end{pmatrix}, \end{aligned}$$

and the cones \(\mathcal {K}\subset \mathbb {R}^2\) and \(\mathcal {W}\subset \mathbb {R}^2\) defined by the matrices

$$\begin{aligned} K = \begin{pmatrix} -1 &{} -1\\ 1 &{} -1 \end{pmatrix} \quad \text {and}\quad W = \begin{pmatrix} -1 &{} 0\\ 0 &{} 1 \end{pmatrix}, \end{aligned}$$

respectively (see Fig. 12).

Fig. 12

Regions from Ex. 13

Calculate

$$\begin{aligned} {\tilde{A}} = KAK^{-1} &= \begin{pmatrix} -1 &{} -1\\ 1 &{} -1 \end{pmatrix} \begin{pmatrix} 0 &{} -2\\ 0 &{} 0 \end{pmatrix} \begin{pmatrix} -\frac{1}{2} &{} \frac{1}{2}\\ -\frac{1}{2} &{} -\frac{1}{2} \end{pmatrix} \\ &= \begin{pmatrix} -1 &{} -1\\ 1 &{} 1 \end{pmatrix} \end{aligned}$$

and

$$\begin{aligned} {\tilde{B}} = KBW^{-1} &= \begin{pmatrix} -1 &{} -1\\ 1 &{} -1 \end{pmatrix} \begin{pmatrix} \frac{1}{2} &{} 1\\ -\frac{1}{2} &{} 0 \end{pmatrix} \begin{pmatrix} -1 &{} 0\\ 0 &{} 1 \end{pmatrix} \\ &= \begin{pmatrix} 0 &{} -1\\ -1 &{} 1 \end{pmatrix}. \end{aligned}$$

Since

$$\begin{aligned} \textrm{Im}_+ ({\tilde{A}}, {\tilde{B}}) \cap \mathbb {R}^n_+ = \textrm{Im}_+ \begin{pmatrix} -1 &{} 0\\ 1 &{} -1 \end{pmatrix}\cap \mathbb {R}^2_+ = \emptyset , \end{aligned}$$

the system \(\Xi \) is \((\mathcal {K},\mathcal {W})\)-escape in at most \(k=1\) step. Moreover, the necessary conditions of Proposition 10 are satisfied, because \(\textrm{Im}_+{\tilde{A}}\cap \mathbb {R}^2_+ = \emptyset \) and \(\textrm{Im}_+{\tilde{B}}\cap \mathbb {R}^2_+ = \emptyset \); each column vector of \({\tilde{A}}\) and \({\tilde{B}}\) possesses at least one negative entry; in each of the matrices \({\tilde{A}}\), \({\tilde{B}}\) and \((\tilde{A},{\tilde{B}})\) there exists at least one row whose sum of elements is negative; and

$$\begin{aligned} \ker A = {{\,\mathrm{{Vect}}\,}}\left\{ \begin{pmatrix} 1\\ 0 \end{pmatrix}\right\} , \quad \ker B = 0, \end{aligned}$$

where \((1,0)^T\notin \mathcal {K}\), thus \(\ker A\cap \mathcal {K}= 0\) and \(\ker B\cap \mathcal {W}= 0\).

Example 14

Consider the system \(\Xi \) defined by the matrices

$$\begin{aligned} A = \begin{pmatrix} -2 &{} 0\\ 0 &{} 0 \end{pmatrix} \quad \text {and}\quad B = \begin{pmatrix} 0 &{} 0\\ -1 &{} -1 \end{pmatrix}, \end{aligned}$$

and the cones \(\mathcal {K}\subset \mathbb {R}^2\) and \(\mathcal {W}\subset \mathbb {R}^2\) (as in Ex. 13) defined by the matrices

$$\begin{aligned} K = \begin{pmatrix} -1 &{} -1\\ 1 &{} -1 \end{pmatrix} \quad \text {and}\quad W = \begin{pmatrix} -1 &{} 0\\ 0 &{} 1 \end{pmatrix}, \end{aligned}$$

respectively (see Fig. 12). Calculate

$$\begin{aligned} {\tilde{A}} = KAK^{-1} &= \begin{pmatrix} -1 &{} -1\\ 1 &{} -1 \end{pmatrix} \begin{pmatrix} -2 &{} 0\\ 0 &{} 0 \end{pmatrix} \begin{pmatrix} -\frac{1}{2} &{} \frac{1}{2}\\ -\frac{1}{2} &{} -\frac{1}{2} \end{pmatrix} \\ &= \begin{pmatrix} -1 &{} 1\\ 1 &{} -1 \end{pmatrix} \end{aligned}$$

and

$$\begin{aligned} {\tilde{B}} = KBW^{-1} &= \begin{pmatrix} -1 &{} -1\\ 1 &{} -1 \end{pmatrix} \begin{pmatrix} 0 &{} 0\\ -1 &{} -1 \end{pmatrix} \begin{pmatrix} -1 &{} 0\\ 0 &{} 1 \end{pmatrix}\\ &= \begin{pmatrix} -1 &{} 1\\ -1 &{} 1 \end{pmatrix}. \end{aligned}$$

Since

$$\begin{aligned} \textrm{Im}_+ ({\tilde{A}}, {\tilde{B}}) \cap \mathbb {R}^n_+ = \textrm{Im}_+ \begin{pmatrix} -1 &{} 1 &{} -1 &{} 1\\ 1 &{} -1 &{} -1 &{} 1 \end{pmatrix}\cap \mathbb {R}^2_+ \ne \emptyset , \end{aligned}$$

the system \(\Xi \) is not \((\mathcal {K},\mathcal {W})\)-escape in at most \(k=1\) step. Moreover, the necessary conditions of Proposition 10 are not satisfied, because \(\textrm{Im}_+{\tilde{A}}\cap \mathbb {R}^2_+ = \{0\}\) and \(\textrm{Im}_+{\tilde{B}}\cap \mathbb {R}^2_+ \ne \emptyset \); the second column vector of \({\tilde{B}}\) does not possess any negative entry; the sum of the elements in each row of the matrices \({\tilde{A}}\), \({\tilde{B}}\) and \(({\tilde{A}},{\tilde{B}})\) is zero; and

$$\begin{aligned} \ker A = {{\,\mathrm{{Vect}}\,}}\left\{ \begin{pmatrix} 0\\ -1 \end{pmatrix}\right\} , \quad \ker B = {{\,\mathrm{{Vect}}\,}}\left\{ \begin{pmatrix} -1\\ 1 \end{pmatrix}\right\} , \end{aligned}$$

where \((0,-1)^T\in \mathcal {K}\) and \((-1,1)^T\in \mathcal {W}\), thus \(\ker A\cap \mathcal {K}\ne 0\) and \(\ker B\cap \mathcal {W}\ne 0\).
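The condition of Proposition 9 is a cone-feasibility question and can be decided exactly by linear programming: \(\textrm{Im}_+({\tilde{A}},{\tilde{B}})\cap \mathbb {R}^n_+\ne \emptyset \) precisely when some \(\lambda \ge 0\) with \(\mathbf {1}^T\lambda = 1\) satisfies \(({\tilde{A}},{\tilde{B}})\lambda \ge 0\) (the normalization excludes \(\lambda = 0\) and is harmless by homogeneity of the cone). A sketch using SciPy, checked against the matrices of Examples 13 and 14 (the helper name is ours):

```python
import numpy as np
from scipy.optimize import linprog

def escapes_in_one_step(A_t, B_t):
    """True iff Im_+((A_t, B_t)) does not meet R^n_+, i.e., the
    feasibility LP below has no solution (Proposition 9)."""
    M = np.hstack([A_t, B_t])
    rows, cols = M.shape
    # Find lambda >= 0 with M @ lambda >= 0 and sum(lambda) = 1.
    res = linprog(c=np.zeros(cols),
                  A_ub=-M, b_ub=np.zeros(rows),       # M @ lam >= 0
                  A_eq=np.ones((1, cols)), b_eq=[1.0],
                  bounds=[(0, None)] * cols)
    return not res.success

# Example 13: escape in one step.
A13 = np.array([[-1.0, -1.0], [1.0, 1.0]])
B13 = np.array([[0.0, -1.0], [-1.0, 1.0]])
assert escapes_in_one_step(A13, B13)

# Example 14: not escape in one step (opposite columns of tilde A
# combine to 0, which lies in R^2_+).
A14 = np.array([[-1.0, 1.0], [1.0, -1.0]])
B14 = np.array([[-1.0, 1.0], [-1.0, 1.0]])
assert not escapes_in_one_step(A14, B14)
```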

Example 15

Consider the system \(\Xi \) defined by the matrices

$$\begin{aligned} A = \begin{pmatrix} -\frac{1}{4} &{} -\frac{7}{4}\\ -\frac{1}{4} &{} \frac{1}{4} \end{pmatrix} \quad \text {and}\quad B = \begin{pmatrix} 1 &{} -\frac{3}{4}\\ 0 &{} -\frac{1}{4} \end{pmatrix}, \end{aligned}$$

and the cones \(\mathcal {K}\subset \mathbb {R}^2\) and \(\mathcal {W}\subset \mathbb {R}^2\) (as in Ex. 13) defined by the matrices

$$\begin{aligned} K = \begin{pmatrix} -1 &{} -1\\ 1 &{} -1 \end{pmatrix} \quad \text {and}\quad W = \begin{pmatrix} -1 &{} 0\\ 0 &{} 1 \end{pmatrix}, \end{aligned}$$

respectively (see Fig. 12). Calculate

$$\begin{aligned} {\tilde{A}} = KAK^{-1} &= \begin{pmatrix} -1 &{} -1\\ 1 &{} -1 \end{pmatrix} \begin{pmatrix} -\frac{1}{4} &{} -\frac{7}{4}\\ -\frac{1}{4} &{} \frac{1}{4} \end{pmatrix} \begin{pmatrix} -\frac{1}{2} &{} \frac{1}{2}\\ -\frac{1}{2} &{} -\frac{1}{2} \end{pmatrix}\\ &= \begin{pmatrix} -1 &{} -\frac{1}{2}\\ 1 &{} 1 \end{pmatrix} \end{aligned}$$

and

$$\begin{aligned} {\tilde{B}} = KBW^{-1} &= \begin{pmatrix} -1 &{} -1\\ 1 &{} -1 \end{pmatrix} \begin{pmatrix} 1 &{} -\frac{3}{4}\\ 0 &{} -\frac{1}{4} \end{pmatrix} \begin{pmatrix} -1 &{} 0\\ 0 &{} 1 \end{pmatrix}\\ &= \begin{pmatrix} 1 &{} 1\\ -1 &{} -\frac{1}{2} \end{pmatrix}. \end{aligned}$$

Since

$$\begin{aligned} \textrm{Im}_+ ({\tilde{A}}, {\tilde{B}}) \cap \mathbb {R}^n_+ = \textrm{Im}_+ \begin{pmatrix} -1 &{} -\frac{1}{2} &{} 1 &{} 1\\ 1 &{} 1 &{} -1 &{} -\frac{1}{2} \end{pmatrix}\cap \mathbb {R}^2_+ \ne \emptyset , \end{aligned}$$

system \(\Xi \) is not \((\mathcal {K},\mathcal {W})\)-escape in at most \(k=1\) step. Indeed, e.g., for \({\tilde{x}}_0 = (0,1)^T\in \mathbb {R}^2_+\) and \({\tilde{u}}_0 = (0,1)^T\in \mathbb {R}^2_+\) we have \({\tilde{A}}{\tilde{x}}_0 = (-1/2,1)^T\notin \mathbb {R}^2_+\) and \({\tilde{B}}{\tilde{u}}_0 = (1,-1/2)^T\notin \mathbb {R}^2_+\), but \({\tilde{A}}{\tilde{x}}_0 + {\tilde{B}}\tilde{u}_0 = (1/2,1/2)^T\in \mathbb {R}^2_+\). However, some of the necessary conditions of Proposition 10 are satisfied: \(\textrm{Im}_+{\tilde{A}}\cap \mathbb {R}^2_+ = \emptyset \) and \(\textrm{Im}_+{\tilde{B}}\cap \mathbb {R}^2_+ = \emptyset \); each column vector of \({\tilde{A}}\) and \({\tilde{B}}\) possesses a negative entry; the matrices \({\tilde{A}}\) and \({\tilde{B}}\) contain rows whose sums of elements are negative, unlike the matrix \(({\tilde{A}},{\tilde{B}})\); and, since \({{\,\textrm{rank}\,}}A = 2\) and \({{\,\textrm{rank}\,}}B = 2\), we have \(\ker A\cap \mathcal {K}= 0\) and \(\ker B\cap \mathcal {W}= 0\).
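The counterexample point above can be confirmed numerically:

```python
import numpy as np

A_t = np.array([[-1.0, -0.5], [1.0, 1.0]])   # tilde A from Example 15
B_t = np.array([[1.0, 1.0], [-1.0, -0.5]])   # tilde B from Example 15

x0 = np.array([0.0, 1.0])
u0 = np.array([0.0, 1.0])

# Each term alone leaves R^2_+, but their sum does not:
assert not np.all(A_t @ x0 >= 0)            # (-1/2, 1)
assert not np.all(B_t @ u0 >= 0)            # (1, -1/2)
assert np.all(A_t @ x0 + B_t @ u0 >= 0)     # (1/2, 1/2) in R^2_+
```

This illustrates why the columnwise and rowwise conditions of Proposition 10 are necessary but not sufficient.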

3.2.4 \((\mathcal {K},\mathcal {W})\)-attractive systems

Proposition 11

A linear system \(\Xi \) is \((\mathcal {K},\mathcal {W})\)-attractive in at most k steps if and only if for each \(x_0\notin \mathcal {K}\) and each \(\bar{u}_{k-1}\in \mathcal {W}^k\) the following conditions hold:

  1. (i)

    \(X_{\mathbb {R}^n_++}^1 \cup X_{\mathbb {R}^n_++}^2 \cup \cdots \cup X_{\mathbb {R}^n_++}^{k} = \mathbb {R}^n_+\);

  2. (ii)

    \(X_{\mathbb {R}^n_++}^1 \cup X_{\mathbb {R}^n_++}^2 \cup \cdots \cup X_{\mathbb {R}^n_++}^{k-1} \subsetneqq \mathbb {R}^n_+\),

where

$$\begin{aligned} X_{\mathbb {R}^n_++}^i = \{{\tilde{x}}_0\notin \mathbb {R}^n_+ :\left( \tilde{A}^i{\tilde{x}}_0 + \textrm{Im}_+\tilde{R}^i\right) \cap \left( \mathbb {R}^n_+\right) ^\textrm{c} = \emptyset \}, \end{aligned}$$

and \({\tilde{A}} = KAK^{-1}\), \({\tilde{B}} = KBW^{-1}\), \({\tilde{R}}^i = ({\tilde{B}},{\tilde{A}}{\tilde{B}},\ldots ,{\tilde{A}}^{i-1}{\tilde{B}})\).

Proof

It follows directly from Proposition 5, where we use the fact that iterative formula (7) takes the form

$$\begin{aligned} x_i = A^ix_0 + \sum _{j=0}^{i-1}{A^{i-j-1}Bu_j}, \end{aligned}$$

by which we get

$$\begin{aligned} X_{\mathcal {K}+}^i &= \{x_0\notin \mathcal {K}:KA^ix_0 \\ &\quad + \sum _{j=0}^{i-1}{KA^{i-j-1}Bu_j} \ge 0\}\quad \forall \bar{u}_{i-1}\in \mathcal {W}^i, \end{aligned}$$

and then

$$\begin{aligned} X_{\mathbb {R}^n_++}^i &= \{{\tilde{x}}_0\notin \mathbb {R}^n_+ :KA^iK^{-1}\tilde{x}_0 \\ &\quad + \sum _{j=0}^{i-1}{KA^{i-j-1}BW^{-1}{\tilde{u}}_j} \ge 0\}\\ &= \{{\tilde{x}}_0\notin \mathbb {R}^n_+ :{\tilde{A}}^i{\tilde{x}}_0 \\ &\quad + \sum _{j=0}^{i-1}{{\tilde{A}}^{i-j-1}{\tilde{B}}{\tilde{u}}_j} \ge 0\}\quad \forall \bar{{\tilde{u}}}_{i-1}\in (\mathbb {R}^m_+)^i, \end{aligned}$$

which can be expressed as

$$\begin{aligned} X_{\mathbb {R}^n_++}^i = \{{\tilde{x}}_0\notin \mathbb {R}^n_+ :\left( \tilde{A}^i{\tilde{x}}_0 + \textrm{Im}_+\tilde{R}^i\right) \cap \left( \mathbb {R}^n_+\right) ^\textrm{c} = \emptyset \}. \end{aligned}$$

\(\square \)

Example 16

Consider the system \(\Xi \) defined by the matrices

$$\begin{aligned} A = \begin{pmatrix} 0 &{} -1\\ 1 &{} 0 \end{pmatrix} \quad \text {and}\quad B = \begin{pmatrix} 0\\ 0 \end{pmatrix}, \end{aligned}$$

and the cones \(\mathcal {K}\subset \mathbb {R}^2\) and \(\mathcal {W}\subset \mathbb {R}\) defined by the matrices

$$\begin{aligned} K = \begin{pmatrix} -1 &{} -1\\ 1 &{} -1 \end{pmatrix} \quad \text {and}\quad W = 1, \end{aligned}$$

respectively (see Fig. 12a). Calculate

$$\begin{aligned} {\tilde{A}} = KAK^{-1} &= \begin{pmatrix} -1 &{} -1\\ 1 &{} -1 \end{pmatrix} \begin{pmatrix} 0 &{} -1\\ 1 &{} 0 \end{pmatrix} \begin{pmatrix} -\frac{1}{2} &{} \frac{1}{2}\\ -\frac{1}{2} &{} -\frac{1}{2} \end{pmatrix} \\ &= \begin{pmatrix} 0 &{} -1\\ 1 &{} 0 \end{pmatrix} \end{aligned}$$

and

$$\begin{aligned} {\tilde{B}} = KBW^{-1} = \begin{pmatrix} 0\\ 0 \end{pmatrix}. \end{aligned}$$

We have

$$\begin{aligned} X_{\mathbb {R}^2_++}^1&= \{ (a,-b)^T\notin \mathbb {R}^2_+ :a\ge 0,\,b> 0\},\\ X_{\mathbb {R}^2_++}^2&= \{ (-a,-b)^T\notin \mathbb {R}^2_+ :a> 0,\, b\ge 0\},\\ X_{\mathbb {R}^2_++}^3&= \{ (-a,b)^T\notin \mathbb {R}^2_+ :a> 0,\, b > 0\}, \end{aligned}$$

because

$$\begin{aligned} &{\tilde{A}}\begin{pmatrix} a\\ -b \end{pmatrix} = \begin{pmatrix} b\\ a \end{pmatrix},\quad {\tilde{A}}^2\begin{pmatrix} a\\ -b \end{pmatrix} = \begin{pmatrix} -a\\ b \end{pmatrix},\quad {\tilde{A}}^3\begin{pmatrix} a\\ -b \end{pmatrix} = \begin{pmatrix} -b\\ -a \end{pmatrix},\\ &{\tilde{A}}\begin{pmatrix} -a\\ -b \end{pmatrix} = \begin{pmatrix} b\\ -a \end{pmatrix},\quad {\tilde{A}}^2\begin{pmatrix} -a\\ -b \end{pmatrix} = \begin{pmatrix} a\\ b \end{pmatrix},\quad {\tilde{A}}^3\begin{pmatrix} -a\\ -b \end{pmatrix} = \begin{pmatrix} -b\\ a \end{pmatrix},\\ &{\tilde{A}}\begin{pmatrix} -a\\ b \end{pmatrix} = \begin{pmatrix} -b\\ -a \end{pmatrix},\quad {\tilde{A}}^2\begin{pmatrix} -a\\ b \end{pmatrix} = \begin{pmatrix} a\\ -b \end{pmatrix},\quad {\tilde{A}}^3\begin{pmatrix} -a\\ b \end{pmatrix} = \begin{pmatrix} b\\ a \end{pmatrix}. \end{aligned}$$

Therefore, \(X_{\mathbb {R}^2_++}^1\cup X_{\mathbb {R}^2_++}^2 \subsetneqq \mathbb {R}^2_+\) and \(X_{\mathbb {R}^2_++}^1\cup X_{\mathbb {R}^2_++}^2 \cup X_{\mathbb {R}^2_++}^3 = \mathbb {R}^2_+\).

Moreover,

$$\begin{aligned} X_{\mathcal {K}+}^1&= \{ \frac{1}{2}(-a-b,b-a)^T\notin \mathcal {K}:a\ge 0,\,b> 0\},\\ X_{\mathcal {K}+}^2&= \{ \frac{1}{2}(a-b,a+b)^T\notin \mathcal {K}:a> 0,\, b\ge 0\},\\ X_{\mathcal {K}+}^3&= \{ \frac{1}{2}(a+b,a-b)^T\notin \mathcal {K}:a> 0,\, b > 0\}, \end{aligned}$$

where \(X_{\mathcal {K}+}^1\cup X_{\mathcal {K}+}^2 \subsetneqq \mathcal {K}^\textrm{c}\) and \(X_{\mathcal {K}+}^1\cup X_{\mathcal {K}+}^2 \cup X_{\mathcal {K}+}^3 = \mathcal {K}^\textrm{c}\).

Therefore, the system \(\Xi \) is \((\mathcal {K},\mathcal {W})\)-attractive in at most 3 steps (see Fig. 13).
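Since \({\tilde{B}} = 0\), the trajectory in transformed coordinates is simply \(\tilde{x}_i = {\tilde{A}}^i\tilde{x}_0\), and the three-step attractivity can be checked by sampling; a sketch (the helper name is ours):

```python
import numpy as np

A_t = np.array([[0.0, -1.0],
                [1.0,  0.0]])   # tilde A from Example 16: rotation by 90 degrees

def steps_to_enter(x0, k_max=3):
    """Smallest i <= k_max with A_t^i x0 in R^2_+ (tilde B = 0), else None."""
    x = np.array(x0, dtype=float)
    for i in range(1, k_max + 1):
        x = A_t @ x
        if np.all(x >= 0):
            return i
    return None

rng = np.random.default_rng(1)

# Random points outside R^2_+ all enter R^2_+ within 3 steps:
for _ in range(500):
    x0 = rng.uniform(-5.0, 5.0, size=2)
    if np.all(x0 >= 0):
        continue                 # keep only initial states outside R^2_+
    k = steps_to_enter(x0)
    assert k is not None and k <= 3

# A point of the set X^3 (second quadrant) needs the full 3 steps:
assert steps_to_enter([-1.0, 1.0]) == 3
```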

Fig. 13

Sets from Ex. 16

Proposition 12

The linear system \(\Xi \) is \((\mathcal {K},\mathcal {W})\)-attractive in at most \(k=1\) step if and only if

  1. (i)

    \(A\le 0\) and \({\tilde{B}}\ge 0\) for \(n=1\);

  2. (ii)

    \(A = 0\) and \({\tilde{B}} \ge 0\) for \(n>1\).

Proof

(Sufficiency) In the case of system order \(n=1\), we have \(A=\tilde{A}\). Scalars \(A\le 0\) and \({\tilde{B}}\ge 0\) imply that \({\tilde{x}}_1 = {\tilde{A}}{\tilde{x}}_0 + {\tilde{B}}{\tilde{u}}_0 \ge 0\) for all \({\tilde{x}}_0 < 0\) and all \({\tilde{u}}_0\ge 0\). Thus, state \(x_1 = K^{-1}{\tilde{x}}_1\in \mathcal {K}\) for all \(x_0 = K^{-1}{\tilde{x}}_0\in \mathcal {K}^\textrm{c}\) and all \(u_0 = W^{-1}\tilde{u}_0\in \mathcal {W}\).

For \(n>1\), condition \(A = 0\) implies \({\tilde{A}} = 0\). The matrices \(A= 0\) and \({\tilde{B}}\ge 0\) imply that \({\tilde{x}}_1 = {\tilde{A}}\tilde{x}_0 + {\tilde{B}}{\tilde{u}}_0 = {\tilde{B}}{\tilde{u}}_0\ge 0\) for all \(\tilde{x}_0 \in \left( \mathbb {R}^n_+\right) ^\textrm{c}\) and all \(\tilde{u}_0\in \mathbb {R}^m_+\). Thus, state \(x_1 = K^{-1}{\tilde{x}}_1\in \mathcal {K}\) for all \(x_0 = K^{-1}{\tilde{x}}_0\in \mathcal {K}^\textrm{c}\) and all \(u_0 = W^{-1}{\tilde{u}}_0\in \mathcal {W}\).

(Necessity) Since the system is \((\mathcal {K},\mathcal {W})\)-attractive in at most \(k=1\) step, state \(x_1 = Ax_0 + Bu_0\in \mathcal {K}\) for all \(x_0\in \mathcal {K}^\textrm{c}\) and all \(u_0\in \mathcal {W}\). It is equivalent to say \({\tilde{x}}_1 = {\tilde{A}}{\tilde{x}}_0 + {\tilde{B}}{\tilde{u}}_0\in \mathbb {R}^n_+\) for all \({\tilde{x}}_0\in \left( \mathbb {R}^n_+\right) ^\textrm{c}\) and all \(\tilde{u}_0\in \mathbb {R}^m_+\). In particular, it holds for \({\tilde{u}}_0 = 0\), and then \({\tilde{A}}{\tilde{x}}_0\in \mathbb {R}^n_+\) for all \(\tilde{x}_0\in \left( \mathbb {R}^n_+\right) ^\textrm{c}\).

Let us assume that the system order \(n>1\). Without loss of generality, let us choose a specific \({\tilde{x}}_0 = (\tilde{x}_0^1,\ldots ,{\tilde{x}}^{j-1}_0,-\xi ,{\tilde{x}}^{j+1}_0,\ldots ,\tilde{x}^n_0)^T\in \left( \mathbb {R}^n_+\right) ^\textrm{c}{\setminus }\left( \mathbb {R}^n_-{\setminus }\{0\}\right) \), for \(1\le j\le n\), where \(\xi >0\), and \(\tilde{x}^i_0\in \mathbb {R}\), \(i\in \{1,\ldots ,n\}\), \(i\ne j\), are almost arbitrary real numbers (neither all negative nor all zero). Since \(-\tilde{x}_0\in \left( \mathbb {R}^n_+\right) ^\textrm{c}{\setminus }\left( \mathbb {R}^n_-{\setminus }\{0\}\right) \) as well (by Lemma 1), the relations \({\tilde{A}}\tilde{x}_0\ge 0\) and \(-{\tilde{A}} {\tilde{x}}_0\ge 0\) imply \({\tilde{A}}\tilde{x}_0 = 0\) (recall that we assume \({\tilde{u}}_0 = 0\)). Hence every such specific \({\tilde{x}}_0\) lies in \(\ker {\tilde{A}}\); since among the specific \({\tilde{x}}_0\)'s there exist n linearly independent vectors, we finally get \({\tilde{A}} = 0\). Nonsingularity of K implies \(A = 0\).

For \(n=1\), the relation \({\tilde{A}}{\tilde{x}}_0\ge 0\) for all \(\tilde{x}_0<0\) implies \({\tilde{A}}\le 0\). Since \({\tilde{A}} = A\), we get \(A\le 0\).

Since, in particular, \({\tilde{x}}_1 = {\tilde{A}}{\tilde{x}}_0 + \tilde{B}{\tilde{u}}_0\in \mathbb {R}^n_+\) holds for \(\tilde{x}_0\in \left( \mathbb {R}^n_+\right) ^\textrm{c}\) of arbitrarily small norm and \(\tilde{u}_0\in \mathbb {R}^m_+\) of arbitrarily large norm, we get \({\tilde{B}}\ge 0\). \(\square \)

Remark 16

It is worth noting that the conditions of Proposition 12 do not depend on \(\mathcal {K}\). In the case of \(n=1\), this is because there are only two possible cones, i.e., \(\mathcal {K}= \mathbb {R}_+\) and \(\mathcal {K}= \mathbb {R}_-\). Regardless of whether the scalar K defines \(\mathbb {R}_+\) or \(\mathbb {R}_-\), we have \({\tilde{A}} = KAK^{-1} = A\).

In the case of \(n>1\), we have \(A = 0\), and hence \({\tilde{A}} = KAK^{-1} = 0\). Thus, for \(u=0\), the transition in one step from any \(x_0\in \mathcal {K}^\textrm{c}\) is always to the origin.

Example 17

Let us consider the scalar system \(x_{k+1} = ax_k + bu_k\), where \(a\le 0\), \(b\ge 0\), together with the cones \(\mathcal {K}= \mathbb {R}_+\) and \(\mathcal {W}= \mathbb {R}_+\). Obviously, it is \((\mathcal {K},\mathcal {W})\)-attractive in at most \(k=1\) step.

Example 18

Let us consider the scalar system \(x_{k+1} = ax_k + bu_k\), where \(a\le 0\), \(b\ge 0\), together with the cones \(\mathcal {K}= \mathbb {R}_-\) and \(\mathcal {W}= \mathbb {R}_-\). Obviously, it is \((\mathcal {K},\mathcal {W})\)-attractive in at most \(k=1\) step, because \({\tilde{b}} = KbW^{-1} = b\ge 0\).
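Examples 17 and 18 can be confirmed by direct simulation of the scalar recursion; a sketch with illustrative values of a and b satisfying \(a\le 0\), \(b\ge 0\):

```python
import numpy as np

rng = np.random.default_rng(2)
a, b = -0.7, 1.3            # illustrative values with a <= 0, b >= 0

# Example 17: K = R_+, W = R_+; any x0 < 0 is attracted in one step.
for _ in range(100):
    x0 = -rng.uniform(0.01, 10.0)   # x0 in K^c, i.e., x0 < 0
    u0 = rng.uniform(0.0, 10.0)     # admissible control u0 >= 0
    assert a * x0 + b * u0 >= 0     # x1 enters K = R_+

# Example 18: K = R_-, W = R_-; now x0 in K^c means x0 > 0 and u0 <= 0.
for _ in range(100):
    x0 = rng.uniform(0.01, 10.0)
    u0 = -rng.uniform(0.0, 10.0)
    assert a * x0 + b * u0 <= 0     # x1 enters K = R_-
```

In both cases the sign conditions \(a x_0\ge 0\) (or \(\le 0\)) and \(b u_0\ge 0\) (or \(\le 0\)) make the conclusion immediate, matching Proposition 12 for \(n=1\).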

4 Conclusions

The aim of the research presented in this work was to characterize various distinctive trajectory evolution types of a general nonlinear discrete-time control system. This was done with respect to a nonlinear region defined in the system's state-space and with controls belonging to polyhedral cones in the input-space. The approach was derived from the existing body of knowledge concerning region-invariant systems. The main result of this study is the introduction of four distinct classes of the dynamic systems in question, together with practically verifiable conditions for checking the nature of a system against the introduced definitions. Both the derivation and the conditions themselves were based on the approach used in the section concerning invariance analysis. The choice of particularly weak assumptions regarding the model structure and the state-space region definition keeps the restrictions on applicability minimal, allowing real-world systems to be tested against the proposed conditions.

The nonlinear definitions and proofs were specialized to the linear time-invariant case in the subsequent sections. This opened the possibility of introducing a more convenient set of system verification methods, which reduce to purely algebraic conditions, i.e., most of the conditions are expressed using matrices, their products and inverses. For this reason, from a purely computational point of view, algorithms enabling faster execution of this type of operations may prove helpful, especially in the case of large matrices. Methods for such fast calculations, such as fast matrix multiplication, were presented in [20]. The analysis of both the general nonlinear and the specific linear case, together with a collection of examples, creates a theoretical framework with perspectives for further development.

The presented approach to control systems analysis opens new questions in the field of system invariance and related systems. An example of such a question is the potential existence of a closed catalog of system families depending on their relation to state- and input-space regions.