Introduction

Differential-algebraic equations (DAEs) arise in the modelling of processes and systems in mechanics, control problems, the gas industry, chemical kinetics, radio engineering, economics, and other fields (see, e.g., [1, 12, 18] and references therein). It is well known that DAEs are used to model various objects whose structure is described by directed graphs, e.g., electrical, gas, and neural networks. The application of DAEs in electrical circuit theory is described, for example, in [5, 7, 12, 16,17,18], and their application in gas network theory is shown, e.g., in [1, 11]. In the papers [7] and [5], nonlinear electrical circuits described by regular and singular (i.e., nonregular) semilinear DAEs, respectively, have been studied by using theorems on the Lagrange stability (which also give conditions for the global solvability) and on the Lagrange instability proved in those works. In [4], a gas network model similar to that presented in [11] has been considered. This model is described by an overdetermined system of differential and algebraic equations and is written in the form of a nonregular semilinear DAE. Note that underdetermined and overdetermined systems of differential and algebraic equations are particular cases of the systems which can be written as nonregular semilinear DAEs.

There are many works devoted to the study of the local solvability, the structure, and the Lyapunov stability of equilibrium positions of DAEs and to the methods of their numerical solution (see, e.g., the monographs [12, 18] and references therein). DAEs are also called descriptor equations (or systems), degenerate differential equations, and differential equations (or dynamical systems) on manifolds. Most of the works on DAEs are related to the study of regular DAEs.

The existence and uniqueness of local solutions of nonregular (singular) semilinear DAEs in Banach spaces have been studied in [17], where the decomposition of a singular pencil into regular and purely singular components, which was called the RS-splitting of the pencil, and the block representations of the singular component in two special cases have been presented. Nonregular DAEs in finite-dimensional spaces have been studied, e.g., in [12], where the concept of the “strangeness index” of a DAE and of a pair of matrices (or matrix functions) has been used. The present work uses the concept of an index (not the strangeness index) only for a regular block of the characteristic pencil (in the case when the DAE is singular) or for the regular characteristic pencil (in the case when the DAE is regular) (see explanations in the “Problem statement. Preliminaries” section and Appendix 1).

Using a generalized canonical form (GCF) of time-varying nonregular linear DAEs, their solvability and some important issues related to the application of the least squares method to their numerical solution have been studied in [2]. A time-invariant nonregular linear DAE in GCF takes the form of a DAE whose characteristic pencil (i.e., the matrix pencil associated with the DAE) is in the Weierstrass-Kronecker canonical form. This form (see [9]) is usually used for solving linear time-invariant DAEs. In the present paper, we use the special block form of the singular operator pencil, which consists of the singular and regular blocks in which invertible and zero subblocks are separated out; the block form of the corresponding matrix pencil is different from the canonical form presented in [9].

In the present paper, we generalize and weaken the conditions for the existence and uniqueness of global solutions of regular and singular (nonregular) semilinear DAEs (see the “Global solvability” section) that have been obtained in the earlier works [4, 5, 7]. Also, we specify conditions for the blow-up of solutions of the DAEs (see the “The blow-up of solutions” section). The main difference is that the conditions obtained in this paper allow one to identify the sets of initial values for which the initial value problem has global solutions and/or for which solutions blow up in finite time, as well as the regions which the solutions cannot leave. As a consequence, more general conditions than in the earlier works are found for the global boundedness of solutions (see the “Global solvability” section). Together, the conditions for the global solvability and the blow-up of solutions provide a criterion of the global solvability (see the “The criterion of global solvability” section). The obtained results allow one to solve problems that cannot be studied using the theorems from [4, 5, 7] or other known theorems on the global solvability of DAEs. In the “Examples” section, we consider a series of simple examples that demonstrate the application of the obtained theorems to the study of semilinear DAEs. In the “Problem statement. Preliminaries” section, a problem statement and some notations and definitions are given, and a system of differential equations (DEs) and algebraic equations (AEs), equivalent to the DAE under consideration, is presented. To reduce the DAE to the system of DEs and AEs, the special block form of the characteristic operator pencil, the direct decompositions of spaces, which reduce the pencil to this block form, and the projectors onto the subspaces from these decompositions are used. The main results related to this block form and the associated direct decompositions of spaces and projectors are given in Appendix 1.

The following standard notation will be used. By \(\textrm{L}(X,Y)\), we denote the space of continuous linear operators from X to Y; \(\textrm{L}(X,X)=\textrm{L}(X)\), and similarly, \(C((a,b),(a,b))=C(a,b)\); \(I_X\) denotes the identity operator in the space X; \(\mathcal {A}^{(-1)}\) denotes the semi-inverse operator of an operator \(\mathcal {A}\)  (\(A^{-1}\) denotes the inverse operator of A); \(\mathcal R(A)\) is the range of an operator A; \(\mathop {\textrm{Ker}}(A)\) is the kernel of an operator A; \(\overline{D}\) is the closure of a set D; \(D^c\), where D is a set in a space X, is the complement of D relative to X, i.e., \(D^c=X\setminus D\); \(\partial D\) is the boundary of a set D; \(L_1\dot{+} L_2\) is the direct sum of the linear spaces \(L_1\) and \(L_2\); \(\mathop {\textrm{Lin}}\{x_i\}_{i=0}^N\) is the linear span of a system \(\{x_i\}_{i=0}^N\); \(X'\) is the conjugate space of X (it is also called an adjoint or a dual space); \(A^{\mathop {\textrm{T}}}\) denotes the transposed operator (i.e., the adjoint operator acting in real linear spaces to which the transposed matrix corresponds) or the transposed matrix; \(\delta _{ij}\) is the Kronecker delta; the symbol \(<<\) means “much less than”; \(\partial _x\) denotes the partial derivative \(\frac{\partial }{\partial x}\); \(\partial _{(x_1,x_2,...,x_n)}=\frac{\partial }{\partial (x_1,x_2,...,x_n)}= \left( \frac{\partial }{\partial x_1},\frac{\partial }{\partial x_2},...,\frac{\partial }{\partial x_n}\right)\); a dot \(\dot{\phantom{0}}\) denotes the time derivative \(\frac{d}{dt}\). Note that both \(A\subset B\) and \(A\subseteq B\) mean that A is a subset of B, i.e., A can be a proper subset of B (\(A\ne B\)) or be equal to B; if A is a proper subset of B, we write \(A\subsetneqq B\).

Let \(f:J\rightarrow Y\) where J is an interval in \({\mathbb R}\) and Y is a normed linear space. If \(J=[a,b)\), \(b\le +\infty\)  (\(J=(a,b]\), \(a\ge -\infty\)), then the derivative of the function f at the point a (at the point b) is understood as the derivative on the right (on the left) at this point (see, e.g., [14]). If \(f:[a,b)\rightarrow Y\) is continuously differentiable on \((a,b)\) and in addition the right-hand derivative (i.e., the derivative on the right) of f exists at the point a and is continuous from the right at a, then f is said to belong to \(C^1([a,b),Y)\).

Problem statement. Preliminaries

In this section, we give a problem statement and introduce some notations and definitions which will be used in what follows (cf. [4, 5, 7, 13]).

We consider a differential-algebraic equation (or degenerate differential equation) of the form

$$\begin{aligned} \frac{d}{dt}[Ax]+Bx=f(t,x), \end{aligned}$$
(2.1)

where \(f\in C({\mathcal {T}}\times D,{{\mathbb R}^m})\), \({\mathcal {T}}\subseteq [0,\infty )\) is an interval, \(D\subseteq {{\mathbb R}^n}\) is an open set, and \(A,\, B\in \textrm{L}({{\mathbb R}^n},{{\mathbb R}^m})\) (or A, B are \(m\times n\)-matrices). We use the initial condition

$$\begin{aligned} x(t_0)=x_0. \end{aligned}$$
(2.2)

In this work, we study both the case when \(m=n\) and the operator A is, in general, degenerate (noninvertible) and the case when \(m\ne n\). We use the DAE terminology, and therefore Eq. (2.1) is called a semilinear differential-algebraic equation.

If \(m=n\) and the operator A is nondegenerate (invertible), then (2.1) can be reduced to an explicit ordinary differential equation (ODE); however, we will also call it a semilinear DAE, and, in this case, the results obtained in the paper remain valid (moreover, they generalize the results on the global solvability and the Lagrange stability and instability which are presented for ODEs in [13, Chapter IV]). In general, the main object of study in the paper is an equation of the form (2.1) that cannot be reduced to an explicit ODE; therefore, the main results and proofs will be formulated for it.

The pencil \(\lambda A+B\) of operators or matrices A, B (where \(\lambda\) is a complex parameter) is called regular if \(m=n=\mathop {\textrm{rank}}(\lambda A+B)\) (or \(m=n\) and \(\det (\lambda A+B)\not \equiv 0\)); otherwise, that is, if \(m\ne n\) or \(m=n\) and \(\mathop {\textrm{rank}}(\lambda A+B)<n\), the pencil is called singular, or nonregular, or irregular. Recall that the rank of the operator pencil \(\lambda A+B\) is the dimension of its range. The rank of the matrix pencil \(\lambda A+B\) equals the maximum number of its linearly independent columns or rows (i.e., the maximum number of columns or rows of the pencil that form a linearly independent set of vectors for some \(\lambda =\lambda _0\)) and equals the largest among the orders of the pencil minors that do not vanish identically. Obviously, the rank of the operator pencil and the rank of the corresponding matrix pencil coincide.

The pencil \(\lambda A+B\), associated with the linear part \(\frac{d}{dt}[Ax]+Bx\) of the DAE (2.1), is called characteristic. If the characteristic pencil \(\lambda A+B\) is singular, then the DAE is called singular (or nonregular, or irregular); if the characteristic pencil is regular, then the DAE is called regular.

The nonregular semilinear DAE (2.1) corresponds to an overdetermined system of differential and algebraic equations (i.e., the number of equations m is greater than the number of unknowns n) if \(\mathop {\textrm{rank}}(\lambda A+B)=n<m\), to an underdetermined system (i.e., the number of equations is less than the number of unknowns) if \(\mathop {\textrm{rank}}(\lambda A+B)=m<n\), and to a closed system (i.e., the number of equations is equal to the number of unknowns) if \(n=m\) and \(\mathop {\textrm{rank}}(\lambda A+B)<n\).
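As an illustration of this classification (a minimal numerical sketch, not part of the paper's framework), the rank of a matrix pencil can be estimated by sampling \(\lambda A+B\) at a few random values of \(\lambda\); the helper names below are hypothetical.

```python
import numpy as np

def pencil_rank(A, B, samples=7, seed=0):
    # the rank of the pencil equals rank(lambda*A + B) for all but finitely many lambda,
    # so the maximum over a few random samples gives the generic rank (with probability 1)
    rng = np.random.default_rng(seed)
    return max(np.linalg.matrix_rank(lam * A + B) for lam in rng.standard_normal(samples))

def classify(A, B):
    m, n = A.shape
    r = pencil_rank(A, B)
    if m == n == r:
        return "regular"
    if r == n < m:
        return "singular (overdetermined system)"
    if r == m < n:
        return "singular (underdetermined system)"
    return "singular (closed system)" if m == n else "singular"

A = np.array([[1.0, 0.0], [0.0, 0.0]])
B = np.array([[0.0, 1.0], [1.0, 0.0]])
print(classify(A, B))   # "regular": det(lambda*A + B) = -1 does not vanish identically
```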

The function x(t) is called a solution of Eq. (2.1) on \([t_0,t_1)\subseteq {\mathcal {T}}\) if \(x\in C([t_0,t_1),D)\), \((Ax)\in C^1([t_0,t_1),{{\mathbb R}^m})\) and x(t) satisfies (2.1) on \([t_0,t_1)\). If the function x(t) additionally satisfies the initial condition (2.2), then it is called a solution of the initial value problem (IVP) (2.1), (2.2).

A solution x(t) (of an equation or inequality) is called global if it exists on the interval \([t_0,\infty )\) (where \(t_0\) is a given initial value).

A solution x(t) has a finite escape time (or blows up in finite time) and is called Lagrange unstable if there exists \(T<\infty\) such that the solution exists on the finite interval \([t_0,T)\) and is unbounded, that is, \(\lim \limits _{t\rightarrow T-0} \Vert x(t)\Vert =+\infty\). A solution x(t) is called Lagrange stable if it is global and bounded, that is, x(t) exists on the interval \([t_0,\infty )\) and \(\sup \limits _{t\in [t_0,\infty )} \Vert x(t)\Vert <\infty\).
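For instance (a standard scalar illustration, not an example from the paper), the solution of \(\dot{x}=x^2\), \(x(t_0)=x_0>0\), is \(x(t)=\big (x_0^{-1}-(t-t_0)\big )^{-1}\), which has the finite escape time \(T=t_0+1/x_0\) and is therefore Lagrange unstable, whereas every solution of \(\dot{x}=-x\) is global and bounded, i.e., Lagrange stable.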

The DAE (2.1) is called Lagrange unstable (respectively, Lagrange stable) for the initial point \((t_0,x_0)\) if the solution of IVP (2.1), (2.2) is Lagrange unstable (respectively, Lagrange stable) for this initial point. The DAE (2.1) is called Lagrange unstable (respectively, Lagrange stable) if each solution of IVP (2.1), (2.2) is Lagrange unstable (respectively, Lagrange stable).

A convex set containing a point \(x_0\in X\) that is contained in a ball \(\{x\in X\mid \Vert x-x_0\Vert \le \delta \}\) (where \(\delta \ge 0\)) or coincides with it will be called a neighborhood of the point \(x_0\) and will be denoted by \(N_\delta (x_0)\) (it is possible that \(N_\delta (x_0)=\{x_0\}\); in this case the neighborhood is degenerate). A neighborhood of some point that is an open (respectively, closed) set will be called an open (respectively, closed) neighborhood. By \(U_\delta (x_0)\) and \(\overline{N_\delta (x_0)}\), we denote the open neighborhood and closed neighborhood, respectively. Note that \(\overline{U_\delta (x_0)}\) denotes the closure of the open neighborhood \(U_\delta (x_0)\) (accordingly, \(\delta > 0\)). Sometimes we will denote a neighborhood (respectively, open neighborhood, closed neighborhood) of the point \(x_0\) simply by \(N(x_0)\) (respectively, \(U(x_0)\), \(\overline{N(x_0)}\)), without indicating the radius of the ball which contains it.

Also, if \(t\in [a,b]\), \(a,b\in {\mathbb R}\), \(a\ne b\), then by an open neighborhood \(U_\delta (a)\) of the point a, we mean a semi-open interval \([a,a+\delta )\), \(0<\delta <b-a\), and, similarly, by an open neighborhood \(U_\delta (b)\), we mean a semi-open interval \((b-\delta ,b]\), \(0<\delta <b-a\).

Let \(X,\, Y\) be normed linear spaces, \(M\subseteq X\), and let \(J\subset {\mathbb R}\) be an interval. We use these notations in the definitions given below.

A mapping f(t, x) of a set \(J\times M\) into Y is said to satisfy locally a Lipschitz condition (or to be locally Lipschitz continuous) with respect to x on \(J\times M\) if for each (fixed) \((t_*,x_*)\in J\times M\) there exist open neighborhoods \(U(t_*)\), \(\widetilde{U}(x_*)\) of the points \(t_*\), \(x_*\) and a constant \(L\ge 0\) such that \(\Vert f(t,x_1)-f(t,x_2)\Vert \le L\Vert x_1-x_2\Vert\) for any \(t\in U(t_*)\), \(x_1,x_2\in \widetilde{U}(x_*)\).

We denote by \(\rho (M_1,M_2)=\inf \limits _{x_1\in M_1,\, x_2\in M_2}\Vert x_2-x_1\Vert\) the distance between the closed sets \(M_1\), \(M_2\) in X (\(\rho (x,M)=\inf \limits _{y\in M}\Vert x-y\Vert\) denotes the distance from the point \(x\in X\) to the set M).

A function \(W\in C(M,{\mathbb R})\), where \(M\ni 0\), is called positive definite if \(W(0)=0\) and \(W(x)>0\) for all \(x\in M\setminus \{0\}\). A function \(V\in C(J\times M,{\mathbb R})\) is called positive definite if \(V(t,0)\equiv 0\) and there exists a positive definite function \(W\in C(M,{\mathbb R})\) such that \(V(t,x)\ge W(x)\) for all \(t\in J\), \(x\in M\setminus \{0\}\).

A scalar function \(V\in C(Z,{\mathbb R})\) is called positive if \(V(z)>0\) for all \(z\in Z\), where Z is a set in a normed linear space.

A positive function \(v\in C^1([a,\infty ),(0,\infty ))\) satisfying a differential inequality \({\dot{v}\le \chi (t,v)}\) (or \({\dot{v}\ge \chi (t,v)}\) ), where \(\chi \in C([a,\infty )\times (0,\infty ),{\mathbb R})\), on \([a,\infty )\) is called a positive solution of the inequality on \([a,\infty )\).

Reduction of a singular (nonregular) DAE to a system of ordinary differential and algebraic equations

The special block form of the singular characteristic pencil \(\lambda A+B\) and the direct decompositions of spaces, which reduce the pencil to this block form, are used to reduce the singular DAE (2.1) to the system of ordinary differential equations (ODEs) and algebraic equations. The direct decompositions of spaces generate the projectors onto the subspaces from these decompositions, and the converse is also true. The main results related to the mentioned block form of the singular pencil and the corresponding direct decompositions of spaces and projectors are given in Appendix 1.

Throughout the paper, it is assumed that the regular block \(\lambda A_r+ B_r\) from (A.20) is a regular pencil of index not higher than 1 (see the definition in Appendix 1).

First, recall that the function f(t, x) from (2.1) is defined on \({\mathcal {T}}\times D\), where \(D\subseteq {{\mathbb R}^n}\) is an open set, and introduce the direct decomposition of D, similar to the direct decomposition (A.37). The pair \(P_1\), \(P_2\) and the pair \(S_1\), \(S_2\) of mutually complementary projectors generate the decomposition of the set D into the direct sum of subsets

$$\begin{aligned} D=D_{s_1}\dot{+} D_{s_2}\dot{+} D_1\dot{+} D_2,\qquad D_{s_i}=S_iD,\quad D_i=P_iD,\quad i=1,2. \end{aligned}$$
(2.3)

Obviously, \(D_{s_i}\subseteq X_{s_i}\), \(D_i\subseteq X_i\) (\(i=1,2\)), where \(X_{s_i}\), \(X_i\) are defined in (A.1) and (A.26) (see Appendix 1), and \(D_{s_i}\), \(D_i\) are open sets.

By using the projectors (A.8) and (A.30), the singular semilinear DAE (2.1) is reduced to the equivalent system (cf. [5])

$$\begin{aligned} \frac{d}{dt} (A S_1 x)&=F_1 [f(t,x)-B x], \end{aligned}$$
(2.4)
$$\begin{aligned} \frac{d}{dt} (A P_1 x)&=Q_1 [f(t,x)-B x], \end{aligned}$$
(2.5)
$$\begin{aligned} 0&= Q_2 [f(t,x)-B x], \end{aligned}$$
(2.6)
$$\begin{aligned} 0&= F_2 [f(t,x)-B x], \end{aligned}$$
(2.7)

where \(F_1=F\), \(F_2=0\) if \(\mathop {\textrm{rank}}(\lambda A+B)= m<n\), and \(S_1=S\) (\(S_2=0\)) if \(\mathop {\textrm{rank}}(\lambda A+B)= n<m\) (see Appendix 1 for details). The results obtained in the paper will be formulated for the general case when \(\mathop {\textrm{rank}}(\lambda A+B) < n\) and \(\mathop {\textrm{rank}}(\lambda A+B) < m\). The properties of the projectors allow one to immediately obtain the corresponding results for the case when \(\mathop {\textrm{rank}}(\lambda A+B)= m<n\) or \(\mathop {\textrm{rank}}(\lambda A+B)= n<m\).

The system (2.4)–(2.7) is equivalent to

$$\begin{aligned}&\frac{d}{dt} (\mathcal {A}_{gen} x_{s_1})+ \mathcal {B}_{gen} x_{s_1} + \mathcal {B}_{und} x_{s_2} = F_1 f(t,x), \nonumber \\&\frac{d}{dt} (\mathcal {A}_1 x_{p_1})+ \mathcal {B}_1 x_{p_1} = Q_1 f(t,x),\\&\mathcal {B}_2 x_{p_2} =Q_2 f(t,x), \nonumber \\&\mathcal {B}_{ov}x_{s_1}=F_2 f(t,x),\nonumber \end{aligned}$$
(2.8)

where the operators defined in (A.13) and (A.33) are used and \(x_{s_i}=S_i x\in D_{s_i}\), \(x_{p_i}=P_i x\in D_i\), \(i=1,2\), are the components of the vector \(x\in D\) represented as (A.38), i.e., \({x=x_{s_1}+x_{s_2}+x_{p_1}+x_{p_2}}\). Note that the representation of x in the form \(x=x_{s_1}+x_{s_2}+x_{p_1}+x_{p_2}\) (A.38) is uniquely determined for each \(x\in {{\mathbb R}^n}\). Further, the system (2.8) can be reduced to the equivalent system (as in [4])

$$\begin{aligned}&\dot{x}_{s_1} =\mathcal {A}_{gen}^{(-1)}\big (F_1 f(t,x)-\mathcal {B}_{gen} x_{s_1}-\mathcal {B}_{und} x_{s_2}\big ), \end{aligned}$$
(2.9)
$$\begin{aligned}&\dot{x}_{p_1} =\mathcal {A}_1^{(-1)} \big (Q_1 f(t,x)-\mathcal {B}_1 x_{p_1}\big ), \end{aligned}$$
(2.10)
$$\begin{aligned}&\mathcal {B}_2^{(-1)} Q_2 f(t,x)- x_{p_2} = 0, \end{aligned}$$
(2.11)
$$\begin{aligned}&F_2 f(t,x)-\mathcal {B}_{ov}x_{s_1} = 0, \end{aligned}$$
(2.12)

where \(\mathcal {A}_{gen}^{(-1)}\), \(\mathcal {A}_1^{(-1)}\), \(\mathcal {B}_2^{(-1)}\) are the semi-inverse operators which can be found by using the relations (A.19), (A.35) and (A.36) (see Appendix 1), and \(x_{s_i}\in D_{s_i}\), \(x_{p_i}\in D_i\), \(i=1,2\), \({x=x_{s_1}+x_{s_2}+x_{p_1}+x_{p_2}}\) as above (recall that a dot \(\dot{\phantom{0}}\) denotes the time derivative \(\frac{d}{dt}\)).

In addition, the system (2.8) can be reduced to the equivalent system

$$\begin{aligned}&\dot{x}_{s_1} =A_{gen}^{-1} \big (F_1 f(t,x) - B_{gen} x_{s_1} - B_{und} x_{s_2}\big ), \end{aligned}$$
(2.13)
$$\begin{aligned}&\dot{x}_{p_1} =A_1^{-1} \big (Q_1 f(t,x) - B_1 x_{p_1}\big ), \end{aligned}$$
(2.14)
$$\begin{aligned}&B_2^{-1} Q_2 f(t,x) - x_{p_2} = 0, \end{aligned}$$
(2.15)
$$\begin{aligned}&F_2 f(t,x) - B_{ov} x_{s_1} = 0, \end{aligned}$$
(2.16)

where the operators defined in Statement A.1 and in Remark A.3, (A.32) (see Appendix 1) are used (note that \(A_{gen}^{-1}=\mathcal {A}_{gen}^{(-1)}\big |_{Y_{s_1}}\), \(A_1^{-1}=\mathcal {A}_1^{(-1)}\big |_{Y_1}\), \(B_2^{-1}=\mathcal {B}_2^{(-1)}\big |_{Y_2}\), \(B_1=\mathcal {B}_1\big |_{X_1}\), \(B_{gen}=\mathcal {B}_{gen}\big |_{X_{s_1}}\), \(B_{und}=\mathcal {B}_{und}\big |_{X_{s_2}}\), \(B_{ov}=\mathcal {B}_{ov}\big |_{X_{s_1}}\)) and the projectors \(F_i\), \(Q_i\) on the subspaces \(Y_{s_i}\), \(Y_i\), \(i=1,2\), are considered as the operators from \({{\mathbb R}^m}\) into \(Y_{s_i}\) and \(Y_i\), respectively (i.e., \(F_i\in \textrm{L}({{\mathbb R}^m},Y_{s_i})\), \(Q_i\in \textrm{L}({{\mathbb R}^m},Y_i)\)) that have the same projection properties as the projectors \(F_i,Q_i\in \textrm{L}({{\mathbb R}^m})\) defined in Appendix 1 (see (A.8), (A.30)), that is, \(F_i y=F_i y_{s_i}=y_{s_i}\in Y_{s_i}\) and \(Q_i y=Q_i y_{p_i}=y_{p_i}\in Y_i\), \(i=1,2\), for any \(y\in {{\mathbb R}^m}\). Here we consider the projectors \(F_i\) and \(Q_i\) as the operators belonging to \(\textrm{L}({{\mathbb R}^m},Y_{s_i})\) and \(\textrm{L}({{\mathbb R}^m},Y_i)\), \(i=1,2\), since the spaces \(Y_{s_1}\), \(Y_1\), \(Y_2\) are the domains of definition of the (induced) operators \(A_{gen}^{-1}\), \(A_1^{-1}\), \(B_2^{-1}\), respectively, and in addition \({B_{ov}\in \textrm{L}(X_{s_1},Y_{s_2})}\). The described differences in the definitions of the projectors \(F_i\) and \(Q_i\) are, in general, formal and become significant only in the transition from the operators to the corresponding matrices. Therefore, we keep the same notations for the projectors \(F_i\), \(Q_i\) (\(i=1,2\)) both for the case when \(F_i,Q_i\in \textrm{L}({{\mathbb R}^m})\) and for the case when they are considered as the operators \(F_i\in \textrm{L}({{\mathbb R}^m},Y_{s_i})\), \(Q_i\in \textrm{L}({{\mathbb R}^m},Y_i)\), \(i=1,2\).

Note that we identify the representations of \(x\in {{\mathbb R}^n}\) in the form \(x=x_{s_1}+x_{s_2}+x_{p_1}+x_{p_2}\) (i.e., (A.38)) and in the form \(x=(x_{s_1},x_{s_2},x_{p_1},x_{p_2})\), where \(x_{s_i}\in X_{s_i}\) and \(x_{p_i}\in X_i\), \(i=1,2\), as indicated in Remark A.6 (see Appendix 1). The correspondence between these representations is established in Remark A.6. In Eqs. (2.13)–(2.16), \(x_{s_i}=S_i x\) and \(x_{p_i}=P_i x\) (\(x\in D\), \(x_{s_i}\in D_{s_i}\), \(x_{p_i}\in D_i\), \(i=1,2\)) as well as in the equations equivalent to them, and, taking into account the established correspondence between the representations of x, we set \(x=x_{s_1}+x_{s_2}+x_{p_1}+x_{p_2}=(x_{s_1},x_{s_2},x_{p_1},x_{p_2})\) for convenience. Therefore, in what follows, we will sometimes omit appropriate explanations, assuming that for x one representation can be replaced by another. For clarity, note that the projectors \(S_i:{{\mathbb R}^n}\rightarrow X_{s_i}\), \(P_i:{{\mathbb R}^n}\rightarrow X_i\), \(i=1,2\), (see (A.8), (A.30) in Appendix 1) are, in general, the operators in \({{\mathbb R}^n}\), and for the representation of \(x\in {{\mathbb R}^n}\) as the sum \(x=x_{s_1}+x_{s_2}+x_{p_1}+x_{p_2}\) (with respect to the direct sum \(X_{s_1}\dot{+} X_{s_2}\dot{+}X_1 \dot{+}X_2\)), where \(x_{s_i}=S_i x\) and \(x_{p_i}=P_i x\), we consider these projectors as the operators \(S_i, P_i\in \textrm{L}({{\mathbb R}^n})\), \(i=1,2\), however, for the representation of x as the ordered collection (column vector) \(x=(x_{s_1},x_{s_2},x_{p_1},x_{p_2})\) (with respect to the direct product \(X_{s_1}\times X_{s_2}\times X_1 \times X_2\)), we consider these projectors as the operators \(S_i\in \textrm{L}({{\mathbb R}^n},X_{s_i})\), \(P_i\in \textrm{L}({{\mathbb R}^n},X_i)\), \(i=1,2\), with the preservation of their projection properties (i.e., \(S_i x=S_i x_{s_i}=x_{s_i}\in X_{s_i}\) and \(P_i x=P_i x_{p_i}=x_{p_i}\in X_i\), \(i=1,2\), for any \(x\in {{\mathbb R}^n}\)). Since the mentioned differences in the definitions of the projectors \(S_i\), \(P_i\) are formal and become significant only in the transition from the operators to the corresponding matrices, we keep the same notations for the projectors \(S_i\), \(P_i\) in both cases.

Below, when proving the theorems on global solvability, we use the norm \(\Vert \cdot \Vert\) in \(X_{s_1}\dot{+} X_{s_2}\dot{+}X_1 \dot{+}X_2\), defined by \(\Vert x\Vert =\Vert x_{s_1}\Vert +\Vert x_{s_2}\Vert + \Vert x_{p_1}\Vert +\Vert x_{p_2}\Vert\) where \(\Vert x_{s_1}\Vert =\Vert x_{s_1}\Vert _{X_{s_1}}\), \(\Vert x_{s_2}\Vert =\Vert x_{s_2}\Vert _{X_{s_2}}\), \(\Vert x_{p_1}\Vert =\Vert x_{p_1}\Vert _{X_1}\) and \(\Vert x_{p_2}\Vert =\Vert x_{p_2}\Vert _{X_2}\) denote the norms of the components \(x_{s_1}\), \(x_{s_2}\), \(x_{p_1}\) and \(x_{p_2}\) in the subspaces \(X_{s_1}\), \(X_{s_2}\), \(X_1\) and \(X_2\), respectively. Taking into account the established correspondence \(x=x_{s_1}+x_{s_2}+x_{p_1}+x_{p_2}= (x_{s_1},x_{s_2},x_{p_1},x_{p_2})\), the norm \(\Vert x\Vert\) of \(x\in X_{s_1}\times X_{s_2}\times X_1\times X_2\) is defined in the same way and it coincides with the above-defined norm of the corresponding element \(x\in X_{s_1}\dot{+} X_{s_2}\dot{+}X_1 \dot{+}X_2\). Similarly, in \({\mathbb R}\times X_{s_1}\times X_{s_2}\times X_1\times X_2\) or \({\mathbb R}\times {{\mathbb R}^n}\), we use the norm \(\Vert (t,x)\Vert =\Vert t\Vert +\Vert x_{s_1}\Vert +\Vert x_{s_2}\Vert + \Vert x_{p_1}\Vert +\Vert x_{p_2}\Vert\). Generally, the inequality \(\Vert x\Vert _{{\mathbb R}^n}\le \Vert x_{s_1}\Vert _{{\mathbb R}^n}+\Vert x_{s_2}\Vert _{{\mathbb R}^n}+ \Vert x_{p_1}\Vert _{{\mathbb R}^n}+\Vert x_{p_2}\Vert _{{\mathbb R}^n}\) holds for any norm \(\Vert \cdot \Vert _{{\mathbb R}^n}\) in \({{\mathbb R}^n}=X_{s_1}\dot{+} X_{s_2}\dot{+}X_1 \dot{+}X_2\) due to the representation (A.38), and in this sense the chosen norm is “maximal”.

It is shown above that the singular semilinear DAE (2.1) can be reduced to the equivalent system (2.9)–(2.12) or (2.13)–(2.16). Note that the derivative \(\dot{V}_{(2.9),(2.10)}\) of a scalar function \(V\in C^1({\mathcal {T}}\times K_{s1},{\mathbb R})\), where \(K_{s1}\subseteq D_{s_1}\times D_1\) is an open set, along the trajectories of Eqs. (2.9) and (2.10) has the form (cf. [4])

$$\begin{aligned}&\dot{V}_{(2.9),(2.10)}(t,x_{s_1},x_{p_1}) = \partial _t V(t,x_{s_1},x_{p_1})+ \partial _{(x_{s_1},x_{p_1})} V(t,x_{s_1},x_{p_1})\cdot \Upsilon (t,x_{s_1},x_{s_2},x_{p_1},x_{p_2}) = \nonumber \\&=\partial _t V(t,x_{s_1},x_{p_1})+\partial _{x_{s_1}} V(t,x_{s_1},x_{p_1})\cdot \left[ \mathcal {A}_{gen}^{(-1)}\big (F_1 f(t,x)-\mathcal {B}_{gen} x_{s_1}-\mathcal {B}_{und} x_{s_2}\big )\right] + \nonumber \\&+\partial _{x_{p_1}} V(t,x_{s_1},x_{p_1})\cdot \left[ \mathcal {A}_1^{(-1)} \big (Q_1 f(t,x)-\mathcal {B}_1 x_{p_1}\big )\right] , \end{aligned}$$
(2.17)
$$\begin{aligned}&\Upsilon (t,x_{s_1},x_{s_2},x_{p_1},x_{p_2})=\begin{pmatrix} \mathcal {A}_{gen}^{(-1)}\big (F_1 f(t,x)-\mathcal {B}_{gen} x_{s_1}-\mathcal {B}_{und} x_{s_2}\big ) \\ \mathcal {A}_1^{(-1)} \big (Q_1 f(t,x)-\mathcal {B}_1 x_{p_1}\big ) \end{pmatrix}, \end{aligned}$$
(2.18)

where \(x=x_{s_1}+x_{s_2}+x_{p_1}+x_{p_2}\)  (\(x_{s_i}=S_i x\), \(x_{p_i}=P_i x\), \(i=1,2\)), \((x_{s_1},x_{p_1})\in K_{s1}\), \(x_{s_2}\in D_{s_2}\), \(x_{p_2}\in D_2\).

Reduction of a regular semilinear DAE to a system of ordinary differential and algebraic equations

Let the DAE (2.1) be regular, i.e., the characteristic pencil \(\lambda A+B\) be regular (\(n=m=\mathop {\textrm{rank}}(\lambda A+B)\)). In this case, the pair \(P_1\), \(P_2\) of mutually complementary projectors (see Appendix 1 and, in particular, Remark A.5) generates the decomposition of the open set D into the direct sum of subsets

$$\begin{aligned} D=D_1\dot{+} D_2,\qquad D_i=P_iD,\quad i=1,2, \end{aligned}$$
(2.19)

where \(D_i\subseteq X_i\) (\(i=1,2\)) are open sets.

We assume that the regular pencil \(\lambda A+B\) has index not higher than 1 (see Appendix 1). Then the regular semilinear DAE (2.1) can be reduced to the equivalent system of the form (2.5), (2.6), i.e.,

$$\begin{aligned} \frac{d}{dt} (A P_1 x)&=Q_1 [f(t,x)-B x], \end{aligned}$$
(2.20)
$$\begin{aligned} 0&=Q_2 [f(t,x)-B x]. \end{aligned}$$
(2.21)

Further, the system (2.20), (2.21) can be reduced to the equivalent system (2.10), (2.11), i.e.,

$$\begin{aligned}&\dot{x}_{p_1}=\mathcal {A}_1^{(-1)} \big (Q_1 f(t,x)-\mathcal {B}_1 x_{p_1}\big ), \end{aligned}$$
(2.22)
$$\begin{aligned}&\mathcal {B}_2^{(-1)} Q_2 f(t,x)- x_{p_2}=0, \end{aligned}$$
(2.23)

or to the equivalent system (as in [7])

$$\begin{aligned}&\dot{x}_{p_1}=G^{-1}\big (Q_1 f(t,x)-Bx_{p_1}\big ), \end{aligned}$$
(2.24)
$$\begin{aligned}&G^{-1}Q_2 f(t,x)-x_{p_2}=0, \end{aligned}$$
(2.25)

where \(x_{p_i}=P_i x\in D_i\), \(i=1,2\), \(x=x_{p_1}+x_{p_2}\) (see (A.41)), and G is the operator (A.43) (see Appendix 1). Note that \(\mathcal {A}_1^{(-1)}Q_1=G^{-1}Q_1\), \(\mathcal {B}_2^{(-1)} Q_2=G^{-1}Q_2\) and \(Bx_{p_1}=BP_1 x_{p_1}=\mathcal {B}_1x_{p_1}\).

In addition, the system (2.20), (2.21) can be reduced to the equivalent system (2.14), (2.15), i.e.,

$$\begin{aligned}&\dot{x}_{p_1} =A_1^{-1} \big (Q_1 f(t,x) - B_1 x_{p_1}\big ), \end{aligned}$$
(2.26)
$$\begin{aligned}&B_2^{-1}Q_2 f(t,x)-x_{p_2} = 0, \end{aligned}$$
(2.27)

where the projectors \(Q_i\) on the subspaces \(Y_i\), \(i=1,2\), are considered as the operators \(Q_i\in \textrm{L}({{\mathbb R}^n},Y_i)\) that have the same projection properties as the projectors \(Q_i\) (by definition, \(Q_i\in \textrm{L}({{\mathbb R}^n})\)) defined in Appendix 1 (see (A.30) and Remark A.6), that is, \(Q_i y=Q_i y_{p_i}=y_{p_i}\in Y_i\), \(i=1,2\), for any \(y\in {{\mathbb R}^n}\). A similar remark regarding projectors is given for the system (2.13)–(2.16), and in general, for the regular DAE (2.1), remarks similar to those given in the “Reduction of a singular (nonregular) DAE to a system of ordinary differential and algebraic equations” section for the singular DAE hold.

Thus, the regular semilinear DAE (2.1) can be reduced to the equivalent system (2.22)–(2.23), or (2.24)–(2.25), or (2.26)–(2.27).
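As a numerical illustration of this reduction (a minimal sketch under simplifying assumptions, not an example from the paper), consider the toy regular index-1 DAE with \(A=\mathop{\textrm{diag}}(1,0)\), \(B=\mathop{\textrm{diag}}(0,1)\) and the nonlinearity f chosen below; then \(X_1=Y_1\) is spanned by the first coordinate vector, \(X_2=Y_2\) by the second, the projectors are \(P_1=Q_1=\mathop{\textrm{diag}}(1,0)\), \(P_2=Q_2=\mathop{\textrm{diag}}(0,1)\), and the operator G is taken here as \(G=AP_1+BP_2\) (an assumption consistent with the identities quoted above, not necessarily the operator (A.43) of Appendix 1).

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

A  = np.diag([1.0, 0.0]); B  = np.diag([0.0, 1.0])
P1 = np.diag([1.0, 0.0]); P2 = np.diag([0.0, 1.0])
Q1 = P1.copy();           Q2 = P2.copy()
Ginv = np.linalg.inv(A @ P1 + B @ P2)          # G is invertible for an index-1 pencil

def f(t, x):                                   # nonlinearity of the semilinear DAE (2.1)
    return np.array([-x[0]**3 + np.sin(t), np.cos(t)])

def eta(t, xp1):                               # resolve the algebraic part (2.25) for x_{p_2}
    g = lambda z: (Ginv @ Q2 @ f(t, xp1 + np.array([0.0, 1.0]) * z))[1] - z
    return np.array([0.0, fsolve(g, np.array([0.0]))[0]])

def rhs(t, xp1):                               # the differential part (2.24) on the subspace X_1
    x = xp1 + eta(t, xp1)
    return Ginv @ (Q1 @ f(t, x) - B @ xp1)

sol = solve_ivp(rhs, (0.0, 10.0), np.array([1.0, 0.0]), rtol=1e-8)
print(sol.y[:, -1])        # x_{p_1}(10); the full solution is x_{p_1}(t) + eta(t, x_{p_1}(t))
```

For this toy DAE, the constraint (2.25) simply gives \(x_2=\cos t\), and the reduced ODE (2.24) reads \(\dot{x}_1=-x_1^3+\sin t\).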

Note that the derivative \(\dot{V}_{(2.22)}\) of a scalar function \(V\in C^1({\mathcal {T}}\times K_1,{\mathbb R})\), where \(K_1\subset D_1\) is an open set (\(D_1\) is defined in (2.19)), along the trajectories of (2.22) has the form

$$\begin{aligned} \dot{V}_{(2.22)}(t,x_{p_1})&=\partial _t V(t,x_{p_1})+ \partial _{x_{p_1}} V(t,x_{p_1})\cdot \Pi (t,x_{p_1},x_{p_2}), \end{aligned}$$
(2.28)
$$\begin{aligned}&\Pi (t,x_{p_1},x_{p_2})=\mathcal {A}_1^{(-1)}\big (Q_1 f(t,x_{p_1}+x_{p_2})-\mathcal {B}_1 x_{p_1}\big ). \end{aligned}$$
(2.29)

Since \(\Pi (t,x_{p_1},x_{p_2})\) can be rewritten in the form \(\Pi (t,x_{p_1},x_{p_2})=G^{-1}\big (Q_1 f(t,x_{p_1}+x_{p_2})-Bx_{p_1}\big )\), the derivative \(\dot{V}_{(2.22)}=\dot{V}_{(2.24)}\) is the derivative of the function V along the trajectories of (2.24) as well. Generally, in the case when the DAE (2.1) is regular, we can set \(S_i=F_i=0\), \(i=1,2\), and then the derivatives (2.17) and (2.28) are equivalent.
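For the toy index-1 example from the sketch above (an illustration, not taken from the paper), where the reduced equation (2.24) reads \(\dot{x}_1=-x_1^3+\sin t\), take \(V(t,x_{p_1})=\frac{1}{2}x_1^2\). Then the derivative (2.28) satisfies

$$\begin{aligned} \dot{V}_{(2.22)}(t,x_{p_1})=x_1\big (-x_1^3+\sin t\big )\le |x_1|\le \tfrac{1}{2}\big (1+x_1^2\big )=\tfrac{1}{2}+V(t,x_{p_1}), \end{aligned}$$

i.e., an estimate of the form \(\dot{V}_{(2.22)}\le \chi \big (t,V(t,x_{p_1})\big )\) with \(\chi (t,v)=\tfrac{1}{2}+v\), which is exactly the type of bound used in the global solvability conditions of the “Global solvability” section.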

Consistency condition

Consider the manifold (cf. [5]) associated with the singular semilinear DAE (2.1):

$$\begin{aligned} L_{t_*}=\{(t,x)\in [t_*,\infty )\times {{\mathbb R}^n}\mid (F_2+Q_2)[f(t,x)-Bx]=0\}, \end{aligned}$$
(2.30)

where \({t_*\in {\mathcal {T}}}\). It can be represented as \(L_{t_*}=\!\{(t,x)\!\in \! [t_*,\infty )\times {{\mathbb R}^n}\mid F_2[f(t,x)-Bx]=0,\; Q_2[f(t,x)-Bx]=0\}\) or

$$L_{t_*}=\{(t,x)\in [t_*,\infty )\times {{\mathbb R}^n}\mid (t,x)\, \text {satisfies Eqs.}\, (2.11), (2.12)\,\}.$$

Thus, a point \((t,x)\in {\mathcal {T}}\times D\) belongs to \(L_{t_*}\) if and only if it satisfies Eqs. (2.11), (2.12) or the equivalent ones.

Also, we consider the manifold (cf. [7]) associated with the regular semilinear DAE (2.1):

$$\begin{aligned} L_{t_*}=\{(t,x)\in [t_*,\infty )\times {{\mathbb R}^n}\mid Q_2[f(t,x)-Bx]=0\}, \end{aligned}$$
(2.31)

where \({t_*\in {\mathcal {T}}}\), which can be represented as

$$L_{t_*}=\{(t,x)\in [t_*,\infty )\times {{\mathbb R}^n}\mid (t,x)\, \text {satisfies equation}\, (2.23)\,\}$$

(recall that Eqs. (2.23) and (2.11) are the same). If the DAE (2.1) is regular, then we can set \(S_i=F_i=0\), \(i=1,2\), and reduce the manifold (2.30) to (2.31).

Thus, we use the same notation \(L_{t_*}\) for the manifold in the singular and regular cases, which allows us to avoid excess notation. It will be clear from the context what exactly is meant.

The initial values \(t_0\), \(x_0\) satisfying the consistency condition \((t_0,x_0)\in L_{t_*}\), where \(t_*\le t_0\), \(t_*\in {\mathcal {T}}\), are called consistent (the initial point \((t_0,x_0)\in L_{t_*}\), where \(t_*\le t_0\), is called consistent). In general, the point (t, x) (the values t, x) satisfying the consistency condition \((t,x)\in L_{t_*}\), where \(t_*\in {\mathcal {T}}\), will be called consistent.

Obviously, an initial point \((t_0,x_0)\) for a solution of IVP (2.1), (2.2) must belong to the manifold \(L_{t_0}\) and, generally, the graph of the solution (i.e., the set of points (t, x(t)), where t is from the domain of definition of the solution x(t)) must lie in this manifold.
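For the toy regular DAE used in the sketches above (an illustrative example, not from the paper), the consistency condition (2.31) can be checked numerically by evaluating the residual \(Q_2[f(t_0,x_0)-Bx_0]\):

```python
import numpy as np

B  = np.diag([0.0, 1.0]); Q2 = np.diag([0.0, 1.0])
f  = lambda t, x: np.array([-x[0]**3 + np.sin(t), np.cos(t)])

def is_consistent(t0, x0, tol=1e-10):
    # (t0, x0) lies on the manifold (2.31) iff Q2 [f(t0, x0) - B x0] vanishes
    return np.linalg.norm(Q2 @ (f(t0, x0) - B @ x0)) <= tol

print(is_consistent(0.0, np.array([1.0, 1.0])))   # True:  x_2(0) = cos(0) = 1
print(is_consistent(0.0, np.array([1.0, 0.5])))   # False: the algebraic constraint is violated
```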

Global solvability

Global solvability of regular semilinear DAEs

For a clearer presentation of the results, we will first consider a regular semilinear DAE.

Below, the projectors and subspaces, described in Appendix 1, as well as the definitions and constructions, given in the “Problem statement. Preliminaries” section, are used. Recall that \(D_i=P_iD\), \(i=1,2\) (see (2.19)).

We first formulate all statements (theorems and corollaries) and then prove them.

Theorem 3.1

Let \(f\in C({\mathcal {T}}\times D,{{\mathbb R}^n})\), where \(D\subseteq {{\mathbb R}^n}\) is some open set and \({\mathcal {T}}=[t_+,\infty )\subseteq [0,\infty )\), and let the operator pencil \(\lambda A+B\) be a regular pencil of index not higher than 1. Assume that there exists an open set \(M_1\subseteq D_1\) and a set \(M_2\subseteq D_2\) such that the following holds:

  1.

    For any fixed \({t\in {\mathcal {T}}}\), \({x_{p_1}\in M_1}\) there exists a unique \({x_{p_2}\in M_2}\) such that \({(t,x_{p_1}+x_{p_2})\in L_{t_+}}\) (the manifold \(L_{t_+}\) has the form (2.31) where \(t_*=t_+\)).

  2.

    A function f(tx) satisfies locally a Lipschitz condition with respect to x on \({\mathcal {T}}\times D\). For any fixed \(t_*\in {\mathcal {T}}\), \({x_*=x_{p_1}^*+x_{p_2}^*}\)  (\({x_{p_i}^*=P_ix_*}\), \({i=1,2}\))  such that \(x_{p_1}^*\in M_1\), \(x_{p_2}^*\in M_2\) and \({(t_*,x_*)\in L_{t_+}}\), there exist open neighborhoods \(U_\delta (t_*,x_{p_1}^*)= U_{\delta _1}(t_*)\times U_{\delta _2}(x_{p_1}^*)\subset {\mathcal {T}}\times D_1\) and \(U_\varepsilon (x_{p_2}^*)\subset D_2\)  (the numbers \(\delta , \varepsilon >0\) depend on the choice of \(t_*\), \(x_*\)) and an invertible operator \(\Phi _{t_*,x_*}\in \textrm{L}(X_2,Y_2)\) such that for each \((t,x_{p_1})\in U_\delta (t_*,x_{p_1}^*)\) and each \(x_{p_2}^1,\, x_{p_2}^2\in U_\varepsilon (x_{p_2}^*)\) the mapping

    $$\begin{aligned} \widetilde{F}(t,x_{p_1},x_{p_2}):= Q_2f(t,x_{p_1}+x_{p_2})-B\big |_{X_2}x_{p_2}:{\mathcal {T}}\times D_1\times D_2\rightarrow Y_2 \end{aligned}$$
    (3.1)

    satisfies the inequality

    $$\begin{aligned} \Vert \widetilde{F}(t,x_{p_1},x_{p_2}^1)- \widetilde{F}(t,x_{p_1},x_{p_2}^2)- \Phi _{t_*,x_*} [x_{p_2}^1-x_{p_2}^2]\Vert \le q(\delta ,\varepsilon )\Vert x_{p_2}^1-x_{p_2}^2\Vert , \end{aligned}$$
    (3.2)

    where \(q(\delta ,\varepsilon )\) is such that \(\lim \limits _{\delta ,\,\varepsilon \rightarrow 0} q(\delta ,\varepsilon )<\Vert \Phi _{t_*,x_*}^{-1}\Vert ^{-1}\).

  3.

    If \(M_1\ne X_1\), then the following holds. The component \(x_{p_1}(t)=P_1x(t)\) of each solution x(t) with the initial point \((t_0,x_0)\in L_{t_+}\), for which \(P_ix_0\in M_i\), \(i=1,2\), can never leave \(M_1\) (i.e., it remains in \(M_1\) for all t from the maximal interval of existence of the solution).

  4.

    If \(M_1\) is unbounded, then the following holds. There exists a number \({R>0}\)  (R can be sufficiently large), a function \({V\in C^1\big ({\mathcal {T}}\times M_R,{\mathbb R}\big )}\) positive on \({\mathcal {T}}\times M_R\), where \(M_R=\{x_{p_1}\in M_1\mid \Vert x_{p_1}\Vert > R\}\), and a function \({\chi \in C({\mathcal {T}}\times (0,\infty ),{\mathbb R})}\) such that:

    (4.a)

      \(\lim \limits _{\Vert x_{p_1}\Vert \rightarrow +\infty }V(t,x_{p_1})=+\infty\) uniformly in t on each finite interval \([a,b)\subset {\mathcal {T}}\);

    (4.b)

      for each \(t\in {\mathcal {T}}\), \(x_{p_1}\in M_R\), \(x_{p_2}\in M_2\) such that \((t,x_{p_1}+x_{p_2})\in L_{t_+}\), the derivative (2.28) of the function V along the trajectories of (2.22) satisfies the inequality

      $$\begin{aligned} \dot{V}_{(2.22)}(t,x_{p_1})\le \chi \big (t,V(t,x_{p_1})\big ); \end{aligned}$$
      (3.3)
    (4.c)

      the differential inequality

      $$\begin{aligned} \dot{v}\le \chi (t,v)\qquad (t\in {\mathcal {T}}) \end{aligned}$$
      (3.4)

      does not have positive solutions with finite escape time.

Then there exists a unique global (i.e., on \([t_0,\infty )\)) solution of IVP (2.1), (2.2) for each initial point \((t_0,x_0)\in L_{t_+}\) for which \(P_ix_0\in M_i\), \(i=1,2\).

Corollary 3.1

Theorem 3.1 remains valid if condition 3 is replaced by

  3.

    In the case when \(M_1\ne X_1\), the following holds. Let \(D_{\Pi ,i}\subseteq X_i\), \(i=1,2\), be sets such that the function \(\Pi\) of the form (2.29) is defined and continuous for all \(t\in {\mathcal {T}}\) and \(x_{p_i}\in D_{\Pi ,i}\), \(i=1,2\), where \(D_{\Pi ,i}\supset D_i\), \(D_{\Pi ,1}\times D_{\Pi ,2}\ne D_1\times D_2\), if the domain of definition of \(\Pi\) can be extended to \({\mathcal {T}}\times D_{\Pi ,1}\times D_{\Pi ,2}\) in this way, and \(D_{\Pi ,i}= D_i\) otherwise. Let \(D_{c,i}\subseteq X_i\), \(i=1,2\), be sets such that for any fixed \({t\in {\mathcal {T}}}\), \({x_{p_1}\in D_{c,1}\supset M_1}\) there exists a unique \(x_{p_2}\in D_{c,2}\supset M_2\) such that \({(t,x_{p_1}+x_{p_2})\in L_{t_+}}\) (i.e., \((t,x_{p_1}+x_{p_2})\) satisfies (2.21)) and \(D_{c,1}\times D_{c,2}\ne M_1\times M_2\) if such sets exist, and \(D_{c,i}= M_i\) otherwise. Further, let \(\widetilde{D_i}:= D_{\Pi ,i}\cap D_{c,i}\), \(i=1,2\), if the function \(\Pi\) depends on \(x_{p_2}\), and \(\widetilde{D_1}:= D_{\Pi ,1}\) if \(\Pi\) does not depend on \(x_{p_2}\).

    Below, \(\Pi\) is considered as the function (2.29) with the domain of definition \({\mathcal {T}}\times \widetilde{D_1}\times \widetilde{D_2}\) if it depends on \(x_{p_2}\) and \({\mathcal {T}}\times \widetilde{D_1}\) if it does not depend on \(x_{p_2}\). Assume that there exists a function \(W\in C({\mathcal {T}}\times X_1,{\mathbb R})\) and for each sufficiently small number \({r>0}\) there exists a closed set \(K_r=\{x_{p_1}\in M_1\mid \rho (K_r,M_1^c)=r\}\)  (\({M_1^c=X_1\setminus M_1}\)\(\rho (K_r,M_1^c)=\! \inf \limits _{k\,\in \, K_r,\, m\,\in \, M_1^c} \Vert m-k\Vert\) ) such that

    $$W(t_1,x_{p_1}^1)<W(t_2,x_{p_1}^2)$$

    for every \(x_{p_1}^1\in K_r\), \(x_{p_1}^2\in M_1^c\cap \widetilde{D_1}\) and \(t_1,t_2\in {\mathcal {T}}\) such that \(t_1\le t_2\), and, in addition, \(W(t,x_{p_1})\) has the continuous partial derivatives on \({\mathcal {T}}\times K_r^c\)  (\({K_r^c=X_1\setminus K_r}\)) and the inequality

    $$\begin{aligned} \dot{W}_{(2.22)}(t,x_{p_1})=\partial _t W(t,x_{p_1})+ \partial _{x_{p_1}} W(t,x_{p_1})\cdot \Pi (t,x_{p_1},x_{p_2})\le 0 \end{aligned}$$
    (3.5)

    holds for each \(t\in {\mathcal {T}}\), \(x_{p_1}\in K_r^c\cap \widetilde{D_1}\), \(x_{p_2}\in \widetilde{D_2}\) such that \({(t,x_{p_1}+x_{p_2})\in L_{t_+}}\) (if \(\Pi\) does not depend on \(x_{p_2}\), then (3.5) holds for each \(t\in {\mathcal {T}}\), \(x_{p_1}\in K_r^c\cap \widetilde{D_1}\)).

Corollary 3.2

Theorem 3.1 remains valid if condition 4 is replaced by

  4.

    If \(M_1\) is unbounded, then the following holds. There exists a number \({R>0}\), a function \({V\in C^1\big ({\mathcal {T}}\times M_R,{\mathbb R}\big )}\) positive on \({{\mathcal {T}}\times M_R}\), where \({M_R=\{x_{p_1}\in M_1\mid \Vert x_{p_1}\Vert > R\}}\), and functions \({k\in C({\mathcal {T}},{\mathbb R})}\), \({U\in C(0,\infty )}\) such that: \({\lim \limits _{\Vert x_{p_1}\Vert \rightarrow +\infty }V(t,x_{p_1})=+\infty }\) uniformly in t on each finite interval \({[a,b)\subset {\mathcal {T}}}\); for each \({t\in {\mathcal {T}}}\), \({x_{p_1}\in M_R}\), \({x_{p_2}\in M_2}\) such that \({(t,x_{p_1}+x_{p_2})\in L_{t_+}}\), the inequality  \({\dot{V}_{(2.22)}(t,x_{p_1})\le k(t)\, U\big (V(t,x_{p_1})\big )}\)  holds; \(\int \limits _{{\textstyle v}_0}^{\infty }\dfrac{dv}{U(v)} =\infty\)  (\(v_0>0\) is a constant).
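For illustration (standard examples, not taken from the paper), the integral condition holds, e.g., for \(U(v)=v\) and for \(U(v)=v\ln v\) (with \(v_0>1\)), but fails for \(U(v)=v^2\):

$$\begin{aligned} \int \limits _{v_0}^{\infty }\frac{dv}{v}=\infty ,\qquad \int \limits _{v_0}^{\infty }\frac{dv}{v\ln v}=\infty ,\qquad \int \limits _{v_0}^{\infty }\frac{dv}{v^2}=\frac{1}{v_0}<\infty . \end{aligned}$$

Accordingly, in the first two cases the inequality \(\dot{V}_{(2.22)}\le k(t)\, U\big (V\big )\) rules out a finite escape time, while the example \(\dot{v}=v^2\) shows that for \(U(v)=v^2\) it does not.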

Corollary 3.3

If in the conditions of Theorem 3.1 the sets \(M_1\) and \(M_2\) are bounded, then Eq. (2.1) is Lagrange stable for the initial points \((t_0,x_0)\in L_{t_+}\) for which \(P_ix_0\in M_i\), \(i=1,2\).

Theorem 3.2

Theorem 3.1 remains valid if condition 2 is replaced by

  2.

    A function f(t, x) has the continuous partial derivative with respect to x on \({\mathcal {T}}\times D\). For any fixed \(t_*\in {\mathcal {T}}\), \({x_*=x_{p_1}^*+x_{p_2}^*}\) such that \(x_{p_1}^*\in M_1\), \(x_{p_2}^*\in M_2\) and \({(t_*,x_*)\in L_{t_+}}\), the operator

    $$\begin{aligned} \Phi _{t_*,x_*}:=\left[ \partial _x (Q_2f)(t_*,x_*)- B\right] P_2:X_2\rightarrow Y_2 \end{aligned}$$
    (3.6)

    has the inverse \(\Phi _{t_*,x_*}^{-1}\in \textrm{L}(Y_2,X_2)\).

Remark 3.1

The operator \(\Phi _{t_*,x_*}\) (3.6) is defined as an operator from \(X_2\) into \(Y_2\); however, in general, the operator defined by the formula from (3.6), that is, \(\widehat{\Phi }_{t_*,x_*}:=\left[ \partial _x (Q_2f)(t_*,x_*)- B\right] P_2\) (where \(t_*\), \(x_*\) are fixed), is an operator from \({{\mathbb R}^n}\) into \({{\mathbb R}^n}\) with the range \(\widehat{\Phi }_{t_*,x_*}{{\mathbb R}^n}= Y_2\), and \(\Phi _{t_*,x_*}=\widehat{\Phi }_{t_*,x_*}\big |_{X_2}\). Since it is assumed that \(\Phi _{t_*,x_*}\) is invertible, we have \(\widehat{\Phi }_{t_*,x_*}{{\mathbb R}^n}=\widehat{\Phi }_{t_*,x_*}X_2=Y_2\)  (\(X_1=\mathop {\textrm{Ker}}(\widehat{\Phi }_{t_*,x_*})\)).

The operator \(\widehat{\Phi }_{t_*,x_*}\) has the semi-inverse \(\widehat{\Phi }_{t_*,x_*}^{(-1)}\), i.e., the operator \(\widehat{\Phi }_{t_*,x_*}^{(-1)}\in \textrm{L}({{\mathbb R}^n})\) such that \(\widehat{\Phi }_{t_*,x_*}^{(-1)}{{\mathbb R}^n}= \widehat{\Phi }_{t_*,x_*}^{(-1)}Y_2=X_2\) and \(\Phi _{t_*,x_*}^{-1}=\widehat{\Phi }_{t_*,x_*}^{(-1)}\big |_{Y_2}\), which is defined by the relations \(\widehat{\Phi }_{t_*,x_*}^{(-1)} \widehat{\Phi }_{t_*,x_*}= P_2\), \(\widehat{\Phi }_{t_*,x_*} \widehat{\Phi }_{t_*,x_*}^{(-1)}= Q_2\) and \(\widehat{\Phi }_{t_*,x_*}^{(-1)}= P_2\widehat{\Phi }_{t_*,x_*}^{(-1)}\).
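The following toy numerical sketch (illustrative matrices chosen here, not the paper's operators) shows one way to realize such a semi-inverse: for a matrix \(\widehat{\Phi}\) with kernel \(X_1\) and range \(Y_2\) and the oblique projectors \(P_2\) (onto \(X_2\) along \(X_1\)) and \(Q_2\) (onto \(Y_2\) along \(Y_1\)), the matrix \(P_2\widehat{\Phi}^{+}Q_2\), where \(\widehat{\Phi}^{+}\) is the Moore-Penrose pseudoinverse, satisfies the three defining relations above.

```python
import numpy as np

Phat = np.array([[2.0, -2.0],          # kernel X_1 = span{(1,1)}, range Y_2 = span{(1,0)}
                 [0.0,  0.0]])
P2 = np.array([[1.0, -1.0],            # projector onto X_2 = span{(1,0)} along X_1
               [0.0,  0.0]])
Q2 = np.array([[1.0,  0.0],            # projector onto Y_2 = span{(1,0)} along Y_1 = span{(0,1)}
               [0.0,  0.0]])

Phat_semi = P2 @ np.linalg.pinv(Phat) @ Q2

print(np.allclose(Phat_semi @ Phat, P2))        # Phat^(-1) Phat = P_2
print(np.allclose(Phat @ Phat_semi, Q2))        # Phat Phat^(-1) = Q_2
print(np.allclose(P2 @ Phat_semi, Phat_semi))   # Phat^(-1) = P_2 Phat^(-1)
```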

The proof of Theorem 3.1

Recall that the DAE (2.1) is equivalent to the system (2.22), (2.23), where the representation of x in the form \(x=x_{p_1}+x_{p_2}\), \(x_{p_i}=P_i x\in D_i\), \(i=1,2\), is uniquely determined for each x, and that Eq. (2.23) is equivalent to (2.27).

Also, note that there is a correspondence between \(X_1 \dot{+}X_2\) and \(X_1\times X_2\) which is established in the same way as in Remark A.6 (see Appendix 1) and we identify the representations of \(x\in {{\mathbb R}^n}\) in the form \(x=x_{p_1}+x_{p_2}\) (see (A.41)) and the form \(x=(x_{p_1},x_{p_2})\) where \(x_{p_i}\in X_i\), \(i=1,2\).

We denote

$$\tilde{f}(t,x_{p_1},x_{p_2})= f(t,x_{p_1}+x_{p_2})=f(t,x),$$

consider the mapping \(F\in C({\mathcal {T}}\times D_1\times D_2,\,X_2)\) defined by

$$\begin{aligned} F(t,x_{p_1},x_{p_2}):= B_2^{-1}Q_2 \tilde{f}(t,x_{p_1},x_{p_2})-x_{p_2}, \end{aligned}$$
(3.7)

and write (2.27) in the form

$$\begin{aligned} F(t,x_{p_1},x_{p_2})=0. \end{aligned}$$
(3.8)

Thus, equation (2.23) is equivalent to (3.8). It follows from the definition of the manifold \(L_{t_+}\) that \((t,x)\in L_{t_+}\) if and only if (tx) (where \(x=x_{p_1}+x_{p_2}\)) satisfies (2.23) or (3.8).

Note that (3.8) can be rewritten as

$$\begin{aligned} x_{p_2}=x_{p_2}-\Phi _{t_*,x_*}^{-1}\widetilde{F}(t,x_{p_1},x_{p_2}), \end{aligned}$$
(3.9)

where \(\widetilde{F}\in C({\mathcal {T}}\times D_1\times D_2,Y_2)\) is the mapping (3.1) which can be represented as \(\widetilde{F}(t,x_{p_1},x_{p_2})= Q_2\tilde{f}(t,x_{p_1},x_{p_2})-B_2x_{p_2}=B_2 F(t,x_{p_1},x_{p_2})\) (since \(B_2=B\big |_{X_2}=\mathcal {B}_2\big |_{X_2}\)), and \(\Phi _{t_*,x_*}\) is the operator defined in condition 2.
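To make the role of (3.9) concrete, the following sketch (a toy illustration under simplifying assumptions, not the paper's construction) iterates \(x_{p_2}\mapsto x_{p_2}-\Phi _{t_*,x_*}^{-1}\widetilde{F}(t,x_{p_1},x_{p_2})\) for a variant of the earlier two-dimensional example in which the second component of f depends on \(x_2\), so that (2.23) actually has to be solved iteratively; the operator \(\Phi\) is taken here as the scalar \(-1\), an approximation of \(\partial _{x_{p_2}}\widetilde{F}\) near the base point.

```python
import numpy as np

Q2  = np.diag([0.0, 1.0])
B   = np.diag([0.0, 1.0])
f   = lambda t, x: np.array([-x[0]**3 + np.sin(t), 0.5*np.sin(x[1]) + np.cos(t)])
Phi = -1.0                                       # approximate derivative of F~ in x_{p_2} (assumed)

def F_tilde(t, xp1, z):                          # second component of (3.1): Q2 f(t,x) - B|_{X_2} x_{p_2}
    x = xp1 + np.array([0.0, z])
    return (Q2 @ f(t, x) - B @ np.array([0.0, z]))[1]

def solve_constraint(t, xp1, z0=0.0, iters=50):
    z = z0
    for _ in range(iters):                       # the iteration suggested by (3.9)
        z = z - F_tilde(t, xp1, z) / Phi
    return z

z = solve_constraint(0.0, np.array([1.0, 0.0]))
print(z, F_tilde(0.0, np.array([1.0, 0.0]), z))  # the residual is ~0, so (t, x_{p_1}+x_{p_2}) lies on L_{t_+}
```

For this example, \(\vert \widetilde{F}(t,x_{p_1},z_1)-\widetilde{F}(t,x_{p_1},z_2)-\Phi (z_1-z_2)\vert =\tfrac{1}{2}\vert \sin z_1-\sin z_2\vert \le \tfrac{1}{2}\vert z_1-z_2\vert\), so an inequality of the type (3.2) holds with \(q=\tfrac{1}{2}<\Vert \Phi ^{-1}\Vert ^{-1}=1\), and the iteration converges.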

Let an open set \(M_1\subseteq D_1\) and a set \(M_2\subseteq D_2\) be such that the theorem conditions hold. It follows from condition 1 that for any fixed \(t_*\in {\mathcal {T}}\), \(x_{p_1}^*\in M_1\) there exists a unique \(x_{p_2}^*\in M_2\) such that \((t_*,x_{p_1}^*+x_{p_2}^*)\in L_{t_+}\).

Below, we give a lemma which is proved in a similar way as Lemma 3.1 from [4].

Lemma 3.1

For any fixed \(t_*\in {\mathcal {T}}\), \(x_{p_i}^*\in M_i\), \(i=1,2\), for which \((t_*,x_{p_1}^*+x_{p_2}^*)\in L_{t_+}\), there exist open neighborhoods \(U_r(t_*,x_{p_1}^*)=U_{r_1}(t_*)\times U_{r_2}(x_{p_1}^*)\subset {\mathcal {T}}\times D_1\),  \(U_\rho (x_{p_2}^*)\subset D_2\) and a unique function \(x_{p_2}=\nu (t,x_{p_1})\in C(U_r(t_*,x_{p_1}^*),U_\rho (x_{p_2}^*))\) which satisfies the equality \(\nu (t_*,x_{p_1}^*)=x_{p_2}^*\) and a Lipschitz condition with respect to \(x_{p_1}\) on \(U_r(t_*,x_{p_1}^*)\) and is a solution of (3.8) with respect to \(x_{p_2}\), i.e., \(F(t,x_{p_1},\nu (t,x_{p_1}))=0\) for all \((t,x_{p_1})\in U_r(t_*,x_{p_1}^*)\) (the numbers \(\rho ,r>0\) depend on the choice of \(t_*\), \(x_{p_1}^*\), \(x_{p_2}^*\)).

From the above it follows that in some open neighborhood \(U_r(t_*,x_{p_1}^*)\) of each (fixed) \((t_*,x_{p_1}^*)\in {\mathcal {T}}\times M_1\) there exists a unique solution \(x_{p_2}=\nu _{t_*,x_{p_1}^*}(t,x_{p_1})\in C(U_r(t_*,x_{p_1}^*),U_\rho (x_{p_2}^*))\) of (3.8), where \(U_\rho (x_{p_2}^*)\) is an open neighborhood of the point \(x_{p_2}^* \in M_2\) such that \((t_*,x_{p_1}^*+x_{p_2}^*)\in L_{t_+}\), and this solution satisfies a Lipschitz condition with respect to \(x_{p_1}\) and the equality \(\nu _{t_*,x_{p_1}^*}(t_*,x_{p_1}^*)=x_{p_2}^*\). Further, we introduce a function

$$\begin{aligned} \eta :{\mathcal {T}}\times M_1\rightarrow M_2 \end{aligned}$$
(3.10)

and define it by \(\eta (t,x_{p_1})= \nu _{t_*,x_{p_1}^*}(t,x_{p_1})\) at the point  \((t,x_{p_1})=(t_*,x_{p_1}^*)\) for each point \((t_*,x_{p_1}^*)\in {\mathcal {T}}\times M_1\). Hence, the function \(x_{p_2}=\eta (t,x_{p_1})\) is continuous in \((t,x_{p_1})\), satisfies locally a Lipschitz condition with respect to \(x_{p_1}\) on \({\mathcal {T}}\times M_1\) and is a unique solution of Eq. (3.8) as well as the equivalent Eq. (2.23) with respect to \(x_{p_2}\) (for all \((t,x_{p_1})\in {\mathcal {T}}\times M_1\)). Now, we prove that the function \(\eta\) is unique on the whole domain of definition (cf. [7, Theorem 3.1], where an implicit function \(\eta (t,z)\) which is a solution of the equation equivalent to (2.25) has been obtained under different conditions). Suppose that there exists another function \(x_{p_2}=\sigma (t,x_{p_1})\) defined in the same way as \(\eta\) and, accordingly, having the same properties as \(\eta\), but differing from it at some point \((t_*,x_{p_1}^*)\in {\mathcal {T}}\times M_1\). It follows from condition 1 that \(\sigma (t_*,x_{p_1}^*)=\eta (t_*,x_{p_1}^*)=x_{p_2}^*\), which contradicts the assumption. This holds for each \((t_*,x_{p_1}^*)\in {\mathcal {T}}\times M_1\), and therefore \(\eta (t,x_{p_1})\equiv \sigma (t,x_{p_1})\).

Substituting the function \(x_{p_2}=\eta (t,x_{p_1})\) in (2.22), we get the equation

$$\begin{aligned} \dot{x}_{p_1}=\widetilde{\Pi }(t,x_{p_1}),\quad \widetilde{\Pi }(t,x_{p_1})=\mathcal {A}_1^{(-1)}\big (Q_1 \widetilde{f}(t,x_{p_1},\eta (t,x_{p_1}))-\mathcal {B}_1 x_{p_1}\big ), \end{aligned}$$
(3.11)

where \(\widetilde{\Pi }:{\mathcal {T}}\times M_1\rightarrow X_1\) is continuous on \({\mathcal {T}}\times M_1\) and satisfies locally a Lipschitz condition with respect to \(x_{p_1}\) on \({\mathcal {T}}\times M_1\), and \(\widetilde{\Pi }(t,x_{p_1})=\Pi (t,x_{p_1},\eta (t,x_{p_1}))\) where \(\Pi\) is defined in (2.29). Thus, for each initial point \((t_0,x_0)\in L_{t_+}\) for which \(P_1x_0\in M_1\) (then \(\eta (t_0,P_1x_0)=P_2x_0\in M_2\) by construction) there exists a unique solution \(x_{p_1}=x_{p_1}(t)\) of the IVP for Eq. (3.11) with the initial condition

$$\begin{aligned} x_{p_1}(t_0)=x_{p_1,0},\quad \text {where}\quad x_{p_1,0}=P_1x_0\in M_1, \end{aligned}$$
(3.12)

on some interval \([t_0,t_0+\delta _0)\), \(\delta _0>0\). By the extension theorems (see [10, 15]), the solution \(x_{p_1}(t)\) can be extended over a maximal interval of existence \(J_{max}\subseteq [t_0,\infty )\) (\(t_0\in J_{max}\)) and the extended solution is a unique solution of IVP (3.11), (3.12) on \(J_{max}\). Hence, IVP (2.1), (2.2) has a unique solution \(x(t)=x_{p_1}(t)+\eta (t,x_{p_1}(t))\) on the maximal interval of existence \(J_{max}\). Notice that, according to [10], \(J_{max}\) is a right maximal interval of existence since we extend the solution only to the right; however, for brevity, it will be called a maximal interval of existence.

Let us prove that the maximal interval of existence \(J_{max}\) of the solution \(x_{p_1}(t)\) is \([t_0,\infty )\). The proofs differ depending on the form of the open set \(M_1\).

Case 1. Assume that \(M_1=X_1\) (accordingly, \(M_1=D_1=X_1\)). In this case, the proof is carried out by analogy with the proof of the existence of a global (on \([t_0,\infty )\)) solution of a time-varying semilinear DAE which has been given in [8, Theorem 3.1] (there \(X_1\) depends on t, but the idea of the proof is the same) or of a time-invariant semilinear DAE, but for special forms of the functions V and \(\chi\), which has been given in [7, Theorem 3.1]. These proofs are based on the application of conditions of the same type as condition 4. Further, we give a brief proof of the required statement. It follows from condition 4 that there exists a number \(R>0\), a function \({V\in C^1\big ({\mathcal {T}}\times M_R,{\mathbb R}\big )}\) positive on \({\mathcal {T}}\times M_R\), where \(M_R=\{x_{p_1}\in X_1\mid \Vert x_{p_1}\Vert >R\}\), and a function \(\chi \in C({\mathcal {T}}\times (0,\infty ),{\mathbb R})\) such that conditions (4.a), (4.c) hold and

$$\begin{aligned} \dot{V}_{(3.11)}(t,x_{p_1})=\partial _t V(t,x_{p_1})+ \partial _{x_{p_1}} V(t,x_{p_1})\cdot \widetilde{\Pi }(t,x_{p_1})\le \chi \big (t,V(t,x_{p_1})\big ) \end{aligned}$$
(3.13)

for \(t\in {\mathcal {T}}\), \(x_{p_1}\in M_R\). Then, using the theorem [13, Chapter IV, Theorem XIII], we obtain that the solution \(x_{p_1}(t)\) of IVP (3.11), (3.12) exists on \(J_{max}=[t_0,\infty )\).
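For instance, for the illustrative function \(\chi (t,v)=\tfrac{1}{2}+v\) obtained in the toy example of the “Reduction of a regular semilinear DAE to a system of ordinary differential and algebraic equations” section, every positive solution of \(\dot{v}\le \tfrac{1}{2}+v\) satisfies \(v(t)\le \big (v(t_0)+\tfrac{1}{2}\big )e^{\,t-t_0}-\tfrac{1}{2}\) for \(t\ge t_0\) (by the Gronwall-Bellman lemma applied to \(w=v+\tfrac{1}{2}\)), so the inequality (3.4) has no positive solutions with a finite escape time and condition (4.c) is fulfilled.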

Case 2. Now, assume that \(M_1\) is a proper subset of \(X_1\), i.e., \(M_1\subsetneqq X_1\) (\(M_1\subseteq D_1\), \(D_1\subseteq X_1\), \(M_1\ne X_1\)).

By condition 3, the solution \(x_{p_1}(t)\) belongs to \(M_1\) for all \(t\in J_{max}\).

If the domain of definition of the function \(\widetilde{\Pi }(t,x_{p_1})=\Pi (t,x_{p_1},\eta (t,x_{p_1}))\) from (3.11) can be extended (with respect to \(x_{p_1}\)), namely, if there exists a set \(D_{\eta ,1}\supset M_1\) (\(D_{\eta ,1}\subseteq X_1\)) such that the function \(\eta (t,x_{p_1})\) is uniquely defined and continuous for all \(t\in {\mathcal {T}}\), \(x_{p_1}\in D_{\eta ,1}\) and there exists a set \(\widetilde{M}_1\supsetneqq M_1\) such that \(\widetilde{\Pi }(t,x_{p_1})\) is defined and continuous for all \(t\in {\mathcal {T}}\), \(x_{p_1}\in \widetilde{M}_1\), where \(\widetilde{M}_1\subseteq D_{\eta ,1}\), \(D_{\eta ,1}\ne M_1\) if the function \(\Pi\) having the form (2.29) depends on \(x_{p_2}\) and \(\widetilde{M}_1\subseteq X_1\) if this function does not depend on \(x_{p_2}\) (then we take \(D_{\eta ,1}= M_1\)), then we consider \(\widetilde{\Pi }\) as the function defined on \({\mathcal {T}}\times \widetilde{M}_1\). Otherwise, we consider \(\widetilde{\Pi }\) as the function defined on \({\mathcal {T}}\times \widetilde{M}_1\) where \(\widetilde{M}_1=M_1\), i.e., with the same domain of definition as above.

Consider Eq. (3.11), where \(\widetilde{\Pi }\in C({\mathcal {T}}\times \widetilde{M}_1,X_1)\) and \(\widetilde{M}_1\) is the set mentioned above. Obviously, the solution \(x_{p_1}(t)\) of IVP (3.11), (3.12), which is obtained above, is a solution of IVP (3.11), (3.12) where \(\widetilde{\Pi }\) is defined on \({\mathcal {T}}\times \widetilde{M}_1\). If \(\widetilde{M}_1\supseteq \overline{M_1}\), then we consider (3.11) where \(\widetilde{\Pi }\in C({\mathcal {T}}\times \overline{M_1},X_1)\). In this case, IVP (3.11), (3.12) has a solution \(x_{p_1}=\widehat{x}_{p_1}(t)\) on a (right) maximal interval of existence J and by the extension theorems (e.g., [10, p. 12–14, Theorem 3.1 and Corollary 3.2]) either \(J=[t_0,\infty )\), or \(J=[t_0,\beta )\) where \(\beta <\infty\) and \(\Vert \widehat{x}_{p_1}(t)\Vert \rightarrow \infty\) as \(t\rightarrow \beta -0\), or \(J=[t_0,\beta ]\) where \(\beta <\infty\) and \(\widehat{x}_{p_1}(\beta )\in \partial M_1\) (note that \({\mathcal {T}}\) is the unbounded interval \([t_+,\infty )\)). Since the solution \(x_{p_1}(t)\) belongs to \(M_1\) for all t from the maximal interval of existence \(J_{max}\) due to condition 3, then either \(J_{max}=[t_0,\infty )\), or \(J_{max}=[t_0,\beta )\) where \(\beta <\infty\) and \(\lim \limits _{t\rightarrow \beta -0}\Vert x_{p_1}(t)\Vert =\infty\). By virtue of the extension theorem [10, p. 12–13, Theorem 3.1] (notice that we can consider the open interval \((t_+,\infty )\) instead of \([t_+,\infty )={\mathcal {T}}\) without loss of generality since a solution of IVP (3.11), (3.12) with the initial value \(t_0=t_+\) exists on some interval \([t_+,t_++\delta _1)\), \(\delta _1>0\), and that we extend the solution only to the right),  the same holds if \(\widetilde{M}_1=M_1\). Thus, either \(J_{max}=[t_0,\infty )\), or \(J_{max}=[t_0,\beta )\) where \(\beta <\infty\) and \(\lim \limits _{t\rightarrow \beta -0}\Vert x_{p_1}(t)\Vert =\infty\).

The further proof is carried out for various forms of the set \(M_1\ne X_1\).

Case 2.1. Assume that the complement \(M_1^c\) of the set \(M_1\) (\(M_1\ne X_1\)) is bounded. Then there exists a number \(R_0>0\) such that \(\{x_{p_1}\in X_1\mid \Vert x_{p_1}\Vert \ge R_0\}\subset M_1\). As shown above, the solution \(x_{p_1}(t)\) remains all the time in \(M_1\). Due to condition 4, there exists a number \(R\ge R_0\), a function \({V\in C^1\big ({\mathcal {T}}\times M_R,{\mathbb R}\big )}\) positive on \({\mathcal {T}}\times M_R\), where \(M_R=\{x_{p_1}\in X_1\mid \Vert x_{p_1}\Vert >R\}\), and a function \(\chi \in C({\mathcal {T}}\times (0,\infty ),{\mathbb R})\) such that conditions (4.a), (4.c) are satisfied and the derivative of V along the trajectories of (3.11) satisfies the inequality (3.13) for all \(t\in {\mathcal {T}}\), \(x_{p_1}\in M_R\). Thus, as in case 1, using the theorem [13, Chapter IV, Theorem XIII], we obtain that the solution \(x_{p_1}(t)\) of IVP (3.11), (3.12) exists on \(J_{max}=[t_0,\infty )\).

Case 2.2. Assume that \(M_1\) (\(M_1\ne X_1\)) and its complement \(M_1^c\) are unbounded sets. The solution \(x_{p_1}(t)\), as shown above, remains all the time in \(M_1\), and either \(J_{max}=[t_0,\infty )\), or \(J_{max}=[t_0,\beta )\), \(\beta <\infty\), and \(\lim \limits _{t\rightarrow \beta -0}\Vert x_{p_1}(t)\Vert =\infty\). Due to condition 4, there exists a number \(R>0\), a function \({V\in C^1\big ({\mathcal {T}}\times M_R,{\mathbb R}\big )}\) positive on \({\mathcal {T}}\times M_R\), where \(M_R=\{x_{p_1}\in M_1\mid \Vert x_{p_1}\Vert > R\}\), and a function \(\chi \in C({\mathcal {T}}\times (0,\infty ),{\mathbb R})\) such that conditions (4.a), (4.c) are satisfied and the inequality (3.13) holds for all \(t\in {\mathcal {T}}\) and \(x_{p_1}\in M_R\). Then we carry out the proof by analogy with the proof of the theorem [13, Chapter IV, Theorem XIII] and obtain that the solution \(x_{p_1}(t)\) exists on \([t_0,\infty )\), i.e., \([t_0,\beta )=[t_0,\infty )\).

Case 2.3. Assume that the set \(M_1\) is bounded. Since the solution \(x_{p_1}(t)\) belongs to \(M_1\) for all \(t\in J_{max}=[t_0,\beta )\), \(\beta \le \infty\)  (\(J_{max}\) is the maximal interval of existence), then it is bounded on \(J_{max}\), i.e., \(\sup \limits _{t\in J_{max}}\Vert x_{p_1}(t)\Vert =const<\infty\). Consequently, it is impossible that \(\beta <\infty\) and \(\lim \limits _{t\rightarrow \beta -0}\Vert x_{p_1}(t)\Vert =\infty\). Then it follows from the above that \(J_{max}=[t_0,\infty )\).

It is proved that \(J_{max}=[t_0,\infty )\). Thus, for each initial point \((t_0,x_0)\in L_{t_+}\) for which \(P_ix_0\in M_i\), \(i=1,2\), IVP (2.1), (2.2) has a unique global solution.

The proof of Corollary 3.1

We only need to prove that the solution \(x_{p_1}(t)\) of IVP (3.11), (3.12), which is obtained in the proof of Theorem 3.1, belongs to \(M_1\) for all \(t\in J_{max}\), where \(J_{max}\) is the maximal interval of existence of the solution. The rest of the proof is similar to the proof of Theorem 3.1.

Recall that the function \(\widetilde{\Pi }\) from (3.11) is defined by \({\widetilde{\Pi }(t,x_{p_1})=\Pi (t,x_{p_1},\eta (t,x_{p_1}))}\) where the function \({\Pi (t,x_{p_1},x_{p_2})}\) has the form (2.29) and is defined and continuous for \(t\in {\mathcal {T}}\), \(x_{p_i}\in D_i\), \(i=1,2\). If the domain of definition of \(\Pi\) can be extended (with respect to \(x_{p_1}\), \(x_{p_2}\)) to \({\mathcal {T}}\times D_{\Pi ,1}\times D_{\Pi ,2}\) where \(D_{\Pi ,i}\) are the sets introduced in condition 3, then \(\Pi (t,x_{p_1},x_{p_2})\) is defined and continuous for all \(x_{p_i}\in D_{\Pi ,i}\), \(i=1,2\), and \(t\in {\mathcal {T}}\). Note that if the point \((t,x_{p_1}+x_{p_2})\) satisfies (2.21), then it satisfies (2.23) and (3.8), since these equations are equivalent. Thus, if the domain of definition of the function (3.10) can be extended (with respect to \(x_{p_1}\)), namely, if there exists a set \(D_{\eta ,1}\supset M_1\) (\(D_{\eta ,1}\subseteq X_1\)) such that \(\eta (t,x_{p_1})\) is uniquely defined and continuous for all \(x_{p_1}\in D_{\eta ,1}\), \(t\in {\mathcal {T}}\), then \(D_{\eta ,1}\subseteq D_{c,1}\) and \(\eta :{\mathcal {T}}\times D_{\eta ,1}\rightarrow D_{c,2}\), where \(D_{c,i}\), \(i=1,2\), are the sets introduced in condition 3. If \(D_{c,1}= M_1\), then \(D_{\eta ,1}=M_1\). Obviously, the function \(\widetilde{\Pi }(t,x_{p_1})\) is defined and continuous for all \(t\in {\mathcal {T}}\) and \(x_{p_1}\in \widetilde{M}_1\), where \(\widetilde{M}_1= D_{\Pi ,1}\cap D_{\eta ,1}\subseteq \widetilde{D_1}\) if the function \(\Pi\) depends on \(x_{p_2}\) and \(\widetilde{M}_1= D_{\Pi ,1}=\widetilde{D_1}\) if \(\Pi\) does not depend on \(x_{p_2}\). Below \(\widetilde{\Pi }\) is considered as the function defined on \({\mathcal {T}}\times \widetilde{M}_1\) (obviously, \(\widetilde{M}_1\supseteq M_1\)).

Further, consider (3.11), where \(\widetilde{\Pi }\in C({\mathcal {T}}\times \widetilde{M}_1,X_1)\), and prove the following lemma.

Lemma 3.2

Assume that there exists a function \(W\in C({\mathcal {T}}\times X_1,{\mathbb R})\) and for each sufficiently small number \(r>0\) (\(r\ll 1\)) there exists a closed set \(K_r\subset M_1\) for which \(\rho (K_r,M_1^c)=r\), such that \({W(t_1,x_{p_1}^1)<W(t_2,x_{p_1}^2)}\) for every \(x_{p_1}^1\in K_r\), \(x_{p_1}^2\in M_1^c\cap \widetilde{M}_1\) and \(t_1,t_2\!\in \! {\mathcal {T}}\) such that \(t_1\le t_2\), and, in addition, \(W(t,x_{p_1})\) has continuous partial derivatives on \({\mathcal {T}}\times K_r^c\) and

$$\dot{W}_{(3.11)}(t,x_{p_1})=\partial _t W(t,x_{p_1})+ \partial _{x_{p_1}} W(t,x_{p_1})\cdot \widetilde{\Pi }(t,x_{p_1})\le 0$$

for every \(t\in {\mathcal {T}}\), \(x_{p_1}\in K_r^c\cap \widetilde{M}_1\). Then each solution \(x_{p_1}(t)\) of (3.11) which satisfies the initial condition (3.12) can never leave \(M_1\).

Proof

Take an arbitrary fixed sufficiently small \(r>0\) (\(r\ll 1\)), and choose a function \(W\in C({\mathcal {T}}\times X_1,{\mathbb R})\) and a closed set \(K_r=\{x_{p_1}\in M_1\mid \rho (K_r,M_1^c)=r\}\) that satisfy the conditions of the lemma. Note that \(\rho (x_{p_1},K_r)=\inf \limits _{k\,\in \, K_r}\Vert x_{p_1}-k\Vert <r\) for any point \(x_{p_1}\in M_1\). By [13, p. 116, Lemma 1], each solution \(x_{p_1}(t)\) of (3.11) which at some \(t_0\in {\mathcal {T}}\) is in the set \(K_r\) can never thereafter leave \(M_1\). Moreover, this holds for any (sufficiently small) \(r>0\), that is, for any closed set \(K_r\) specified above.

Since the set \(M_1\subset X_1\) is open, it can be represented as the union \(M_1=\bigcup \limits _{r>0}K_r\) of closed sets (an infinite family of closed sets) \(K_r\subset M_1\) such that \(\rho (K_r,M_1^c)=r>0\) (i.e., \(K_r=\{x_{p_1}\in M_1\mid \rho (K_r,M_1^c)=r>0\}\)), where r is sufficiently small. Consequently, the initial value \(x_{p_1,0}\) of each solution \(x_{p_1}(t)\) of (3.11) satisfying the initial condition \(x_{p_1}(t_0)=x_{p_1,0}\in M_1\) belongs to one of the sets \(K_r\) for which the conditions of the lemma are fulfilled. Hence, this solution cannot leave \(M_1\). Thus, each solution of (3.11) satisfying the initial condition (3.12) can never leave \(M_1\).

It follows from condition 3 that the conditions of Lemma 3.2 hold and, therefore, the solution \(x_{p_1}(t)\) of IVP (3.11), (3.12) belongs to \(M_1\) for all \(t\in J_{max}\). Note that the set \(\widetilde{M}_1\) specified above denotes the same set that is denoted by \(\widetilde{M}_1\) in the proof of Theorem 3.1.

The rest of the proof coincides with the proof of Theorem 3.1.

The proof of Corollary 3.2

Let \(\chi (t,v):=k(t)\,U(v)\), where \({k\in C({\mathcal {T}},{\mathbb R})}\) and \({U\in C(0,\infty )}\) satisfies the relation \(\int \limits _{{\textstyle v}_0}^{\infty }\dfrac{dv}{U(v)} =\infty\) (\(v_0>0\)). Then the differential inequality (3.4) does not have positive solutions with finite escape time (see, e.g., [13]). Hence, condition 4 of Theorem 3.1, where the function \(\chi\) has the form \(\chi (t,v)=k(t)\,U(v)\), is fulfilled. Thus, all conditions of Theorem 3.1 hold.
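For the reader's convenience, the separation-of-variables estimate behind this fact can be sketched as follows (this is only an illustration, under the additional standing assumption that \(U(v)>0\) for \(v>0\); the precise statement is in [13]). If a positive solution v of (3.4) with \(\chi (t,v)=k(t)\,U(v)\) had a finite escape time \(\beta\), then dividing by \(U(v)\) and integrating would give, for \(t_0\le t<\beta\),

$$\int \limits _{v(t_0)}^{v(t)}\frac{dv}{U(v)}\le \int \limits _{t_0}^{t}k(s)\,ds\le \int \limits _{t_0}^{\beta }|k(s)|\,ds<\infty ,$$

while the left-hand side would tend to \(+\infty\) as \(t\rightarrow \beta -0\), since \(v(t)\rightarrow +\infty\) and \(\int \limits _{{\textstyle v}_0}^{\infty }\dfrac{dv}{U(v)}=\infty\); this contradiction rules out a finite escape time.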

The proof of Corollary 3.3

This statement follows immediately from the proof of Theorem 3.1. Indeed, since \(M_1\) and \(M_2\) are bounded, we have \(\sup \limits _{t\in J_{max}}\Vert x_{p_1}(t)\Vert <\infty\) and \(\sup \limits _{t\in J_{max}}\Vert \eta (t,x_{p_1}(t))\Vert <\infty\), where \({J_{max}=[t_0,\infty )}\) is the maximal interval of existence of the solution \(x(t)=x_{p_1}(t)+\eta (t,x_{p_1}(t))\), and hence \({\sup \limits _{t\in J_{max}}\Vert x(t)\Vert <\infty }\).

The proof of Theorem 3.2

We will prove that condition 2 of Theorem 3.1 holds if condition 2 of Theorem 3.2 holds. Then all conditions of Theorem 3.1 are fulfilled.

Since f(t, x) has a continuous partial derivative with respect to x on \({\mathcal {T}}\times D\), it satisfies locally a Lipschitz condition with respect to x on \({\mathcal {T}}\times D\). Choose an arbitrary fixed \(t_*\in {\mathcal {T}}\), \({x_*=x_{p_1}^*+x_{p_2}^*}\) such that \(x_{p_1}^*\in M_1\), \(x_{p_2}^*\in M_2\) and \({(t_*,x_*)\in L_{t_+}}\). Take the operator \(\Phi _{t_*,x_*}\) defined by (3.6) as the operator \(\Phi _{t_*,x_*}\) appearing in condition 2 of Theorem 3.1. Then \(\Phi _{t_*,x_*}=\partial _{x_{p_2}} \widetilde{F}(t_*,x_{p_1}^*,x_{p_2}^*)\) and \(\Phi _{t_*,x_*}\) has the inverse \(\Phi _{t_*,x_*}^{-1}\in \textrm{L}(Y_2,X_2)\) due to condition 2 of Theorem 3.2. Therefore, we have \(\big \Vert \widetilde{F}(t,x_{p_1},x_{p_2}^1)- \widetilde{F}(t,x_{p_1},x_{p_2}^2)- \partial _{x_{p_2}}\widetilde{F}(t_*,x_{p_1}^*,x^*_{p_2}) [x_{p_2}^1-x_{p_2}^2]\big \Vert \le \int \limits _0^1\big \Vert \partial _{x_{p_2}}\widetilde{F}\big (t,x_{p_1},x_{p_2}^2+\theta (x_{p_2}^1-x_{p_2}^2)\big ) - \partial _{x_{p_2}}\widetilde{F}(t_*,x_{p_1}^*,x^*_{p_2})\big \Vert d\theta \, \Vert x_{p_2}^1-x_{p_2}^2\Vert \le q(\delta ,\varepsilon )\Vert x_{p_2}^1-x_{p_2}^2\Vert\) for each \((t,x_{p_1})\in \overline{U_\delta (t_*,x_{p_1}^*)}\subset {\mathcal {T}}\times D_1\) and each \(x_{p_2}^1,x_{p_2}^2\in \overline{U_\varepsilon (x_{p_2}^*)}\subset D_2\), where \(U_\delta (t_*,x_{p_1}^*)\), \(U_\varepsilon (x_{p_2}^*)\) are some open neighborhoods of \((t_*,x_{p_1}^*)\), \(x_{p_2}^*\), and \(q(\delta ,\varepsilon )=\sup \limits _{(t,x_{p_1})\in \overline{U_\delta (t_*,x_{p_1}^*)},\, \tilde{x}_{p_2}\in \overline{U_\varepsilon (x_{p_2}^*)}} \big \Vert \partial _{x_{p_2}}\widetilde{F}(t,x_{p_1},\tilde{x}_{p_2}) -\partial _{x_{p_2}}\widetilde{F}(t_*,x_{p_1}^*,x^*_{p_2})\big \Vert \rightarrow 0\) as \(\delta ,\varepsilon \rightarrow 0\) since the function \(\partial _{x_{p_2}}\widetilde{F}(t,x_{p_1},x_{p_2})\) is continuous at the point \((t_*,x_{p_1}^*,x^*_{p_2})\). Consequently, condition 2 of Theorem 3.1 is fulfilled.

Below we present a theorem that was proved in [4] for a singular DAE and which, like Corollary 3.3, gives conditions for the Lagrange stability of the regular DAE (2.1).

Theorem 3.3

Let \(f\in C({\mathcal {T}}\times {{\mathbb R}^n},{{\mathbb R}^n})\), where \({\mathcal {T}}=[t_+,\infty )\subseteq [0,\infty )\), and let the operator pencil \(\lambda A+B\) be a regular pencil of index not higher than 1. Assume that condition 1 of Theorem 3.1, where \(M_i=X_i\), \(i=1,2\), holds and condition 2 of Theorem 3.1 or condition 2 of Theorem 3.2, where \(D={{\mathbb R}^n}\) and \(M_i=D_i=X_i\), \(i=1,2\), holds. In addition, let the following conditions be satisfied:

  3.

    There exists a number \({R>0}\)  (R can be sufficiently large), a function \({V\in C^1\big ({\mathcal {T}}\times M_R,{\mathbb R}\big )}\) positive on \({\mathcal {T}}\times M_R\), where \(M_R=\{x_{p_1}\in X_1\mid \Vert x_{p_1}\Vert >R\}\), and a function \({\chi \in C({\mathcal {T}}\times (0,\infty ),{\mathbb R})}\) such that: \(\lim \limits _{\Vert x_{p_1}\Vert \rightarrow +\infty }V(t,x_{p_1})=+\infty\) uniformly in t on \({\mathcal {T}}\); for each \(t\in {\mathcal {T}}\), \(x_{p_1}\in M_R\), \(x_{p_2}\in X_2\) such that \((t,x_{p_1}+x_{p_2})\in L_{t_+}\) the inequality (3.3) is satisfied; the differential inequality (3.4) does not have unbounded positive solutions for \(t\in {\mathcal {T}}\).

  4.

    For all \((t,x_{p_1}+x_{p_2})\in L_{t_+}\), for which \(\Vert x_{p_1}\Vert \le M<\infty\)  (M is an arbitrary constant), the inequality  \(\Vert x_{p_2}\Vert \le K_M\)  or the inequality  \(\Vert Q_2 f(t,x_{p_1}+x_{p_2})\Vert \le K_M\),  where \(K_M=K(M)<\infty\) is some constant, is satisfied.

Then Eq. (2.1) is Lagrange stable (for each initial point \((t_0,x_0)\in L_{t_+}\)).

Global solvability of singular (nonregular) semilinear DAEs

Below, the projectors and subspaces, described in Appendix 1, as well as the definitions and constructions, given in the “Problem statement. Preliminaries” section, are used. Recall that \(D_{s_i}=S_iD\), \(D_i=P_iD\), \(i=1,2\) (see (2.3)).

Theorem 3.4

Let \(f\in C({\mathcal {T}}\times D,{{\mathbb R}^m})\), where \(D\subseteq {{\mathbb R}^n}\) is some open set and \({\mathcal {T}}=[t_+,\infty )\subseteq [0,\infty )\), and let the operator pencil \(\lambda A+B\) be a singular pencil such that its regular block \(\lambda A_r+B_r\), where \(A_r\), \(B_r\) are defined in (A.2), is a regular pencil of index not higher than 1. Assume that there exists an open set \(M_{s1}\subseteq D_{s_1}\dot{+}D_1\) and sets \(M_{s_2}\subseteq D_{s_2}\), \(M_2\subseteq D_2\) such that the following holds:

  1.

    For any fixed \({t\in {\mathcal {T}}}\), \({x_{s_1}+x_{p_1}\in M_{s1}}\), \(x_{s_2}\in M_{s_2}\) there exists a unique \({x_{p_2}\in M_2}\) such that \({(t,x_{s_1}+x_{s_2}+x_{p_1}+x_{p_2})\in L_{t_+}}\) (the manifold \(L_{t_+}\) has the form (2.30) where \(t_*=t_+\)).

  2.

    A function f(t, x) satisfies locally a Lipschitz condition with respect to x on \({\mathcal {T}}\times D\).  For any fixed \({t_*\in {\mathcal {T}}}\), \({x_*=x_{s_1}^*+x_{s_2}^*+x_{p_1}^*+x_{p_2}^*}\)  (\({x_{s_i}^*=S_ix_*}\), \({x_{p_i}^*=P_ix_*}\), \({i=1,2}\))  such that \({x_{s_1}^*+x_{p_1}^*\in M_{s1}}\), \({x_{s_2}^*\in M_{s_2}}\), \({x_{p_2}^*\in M_2}\) and \({(t_*,x_*)\in L_{t_+}}\), there exists a neighborhood \(N_\delta (t_*,x_{s_1}^*,x_{s_2}^*,x_{p_1}^*)= U_{\delta _1}(t_*)\times U_{\delta _2}(x_{s_1}^*)\times N_{\delta _3}(x_{s_2}^*)\times U_{\delta _4}(x_{p_1}^*)\subset {\mathcal {T}}\times D_{s_1}\times D_{s_2}\times D_1\), an open neighborhood \(U_\varepsilon (x_{p_2}^*)\subset D_2\) (the numbers \(\delta , \varepsilon >0\) depend on the choice of \(t_*\), \(x_*\)) and an invertible operator \(\Phi _{t_*,x_*}\in \textrm{L}(X_2,Y_2)\) such that for each \((t,x_{s_1},x_{s_2},x_{p_1})\in N_\delta (t_*,x_{s_1}^*,x_{s_2}^*,x_{p_1}^*)\) and each \(x_{p_2}^i\in U_\varepsilon (x_{p_2}^*)\), \(i=1,2\), the mapping

    $$\begin{aligned} \widetilde{\Psi }(t,x_{s_1},x_{s_2},x_{p_1},x_{p_2}):= Q_2f(t,x_{s_1}+x_{s_2}+x_{p_1}+x_{p_2})- B\big |_{X_2}x_{p_2}:{\mathcal {T}}\times D_{s_1}\times D_{s_2}\times D_1\times D_2\rightarrow Y_2 \end{aligned}$$
    (3.14)

    satisfies the inequality

    $$\begin{aligned} \Vert \widetilde{\Psi }(t,x_{s_1},x_{s_2},x_{p_1},x_{p_2}^1)- \widetilde{\Psi }(t,x_{s_1},x_{s_2},x_{p_1},x_{p_2}^2)-\Phi _{t_*,x_*} [x_{p_2}^1-x_{p_2}^2]\Vert \le q(\delta ,\varepsilon )\Vert x_{p_2}^1-x_{p_2}^2\Vert , \end{aligned}$$
    (3.15)

    where \(q(\delta ,\varepsilon )\) is such that \(\lim \limits _{\delta ,\,\varepsilon \rightarrow 0} q(\delta ,\varepsilon )<\Vert \Phi _{t_*,x_*}^{-1}\Vert ^{-1}\).

  3.

    If \(M_{s1}\ne X_{s_1}\dot{+} X_1\), then the following holds. The component \(x_{s_1}(t)+x_{p_1}(t)=(S_1+P_1)x(t)\) of each solution x(t) with the initial point \((t_0,x_0)\in L_{t_+}\), for which \((S_1+P_1)x_0\in M_{s1}\), \(S_2x_0\in M_{s_2}\) and \(P_2x_0\in M_2\), can never leave \(M_{s1}\) (i.e., it remains in \(M_{s1}\) for all t from the maximal interval of existence of the solution).

  4.

    If \(M_{s1}\) is unbounded, then the following holds. There exists a number \({R>0}\)  (R can be sufficiently large), a function \({V\in C^1\big ({\mathcal {T}}\times M_R,{\mathbb R}\big )}\) positive on \({\mathcal {T}}\times M_R\), where \(M_R=\{(x_{s_1},x_{p_1})\in X_{s_1}\times X_1\mid x_{s_1}+x_{p_1}\in M_{s1},\; \Vert x_{s_1}+x_{p_1}\Vert > R\}\), and a function \({\chi \in C({\mathcal {T}}\times (0,\infty ),{\mathbb R})}\) such that:

    (4.a)

      \(\lim \limits _{\Vert (x_{s_1},x_{p_1})\Vert \rightarrow +\infty }V(t,x_{s_1},x_{p_1})=+\infty\) uniformly in t on each finite interval \([a,b)\subset {\mathcal {T}}\);

    (4.b)

      for each \(t\in {\mathcal {T}}\), \((x_{s_1},x_{p_1})\in M_R\), \(x_{s_2}\in M_{s_2}\), \(x_{p_2}\in M_2\) such that \((t,x_{s_1}+x_{s_2}+x_{p_1}+x_{p_2})\in L_{t_+}\), the derivative (2.17) of the function V along the trajectories of (2.9) and (2.10) satisfies the inequality

      $$\begin{aligned} \dot{V}_{(2.9),(2.10)}(t,x_{s_1},x_{p_1})\le \chi \big (t,V(t,x_{s_1},x_{p_1})\big ); \end{aligned}$$
      (3.16)
    (4.c)

      the differential inequality (3.4) does not have positive solutions with finite escape time.

Then for each initial point \({(t_0,x_0)\in L_{t_+}}\) such that \((S_1+P_1)x_0\in M_{s1}\), \(S_2x_0\in M_{s_2}\) and \(P_2x_0\in M_2\), IVP (2.1), (2.2) has a unique global solution x(t) for which the choice of the function \(\phi _{s_2}\in C([t_0,\infty ),M_{s_2})\) with the initial value \({\phi _{s_2}(t_0)=S_2 x_0}\) uniquely defines the component \(S_2x(t)=\phi _{s_2}(t)\) when \({\mathop {\textrm{rank}}(\lambda A+B)<n}\)  (when \({\mathop {\textrm{rank}}(\lambda A+B)=n}\), the component \(S_2 x\) is absent).

Corollary 3.4

Theorem 3.4 remains valid if condition 3 is replaced by

  3.

    In the case when \(M_{s1}\ne X_{s_1}\dot{+} X_1\), the following holds. Let \({D_{\Upsilon ,s_1}\subseteq X_{s_1}}\), \({D_{\Upsilon ,i}\subseteq X_i}\), \(i=1,2\), be sets such that the function \(\Upsilon\) of the form (2.18) is defined and continuous for all \({t\in {\mathcal {T}}}\), \({x_{s_1}\in D_{\Upsilon ,s_1}}\), \({x_{s_2}\in D_{s_2}}\), \({x_{p_i}\in D_{\Upsilon ,i}}\), \({i=1,2}\), where \(D_{\Upsilon ,s_1}\supset D_{s_1}\), \(D_{\Upsilon ,i}\supset D_i\) and \(D_{\Upsilon ,s_1}\times D_{s_2}\times D_{\Upsilon ,1}\times D_{\Upsilon ,2} \ne D_{s_1}\times D_{s_2}\times D_1\times D_2\), if the domain of definition of \(\Upsilon\) can be extended to \({{\mathcal {T}}\times D_{\Upsilon ,s_1}\times D_{s_2}\times D_{\Upsilon ,1}\times D_{\Upsilon ,2}}\) in this way, and \({D_{\Upsilon ,s_1}= D_{s_1}}\), \({D_{\Upsilon ,i}= D_i}\) otherwise.  Let \(D_{c,s1}\subseteq X_{s_1}\dot{+}X_1\), \(D_{c,2}\subseteq X_2\) be sets such that for any fixed \({t\in {\mathcal {T}}}\), \({x_{s_1}+x_{p_1}\in D_{c,s1}\supset M_{s1}}\), \(x_{s_2}\in M_{s_2}\) there exists a unique \(x_{p_2}\in D_{c,2}\supset M_2\) such that \({(t,x_{s_1}+x_{s_2}+x_{p_1}+x_{p_2})\in L_{t_+}}\) and \(D_{c,s1}\ne M_{s1}\) or \(D_{c,2}\ne M_2\) if such sets exist, and \(D_{c,s1}=M_{s1}\), \(D_{c,2}= M_2\) otherwise. Further, let \({\widetilde{D}_{s1}:= \big (D_{\Upsilon ,s_1}\dot{+}D_{\Upsilon ,1}\big )\cap D_{c,s1}}\), \({\widetilde{D_2}:= D_{\Upsilon ,2}\cap D_{c,2}}\) if the function \(\Upsilon\) depends on \(x_{p_2}\), and \({\widetilde{D}_{s1}:= D_{\Upsilon ,s_1}\dot{+}D_{\Upsilon ,1}}\) if \(\Upsilon\) does not depend on \(x_{p_2}\).

    Below, \(\Upsilon\) is considered as the function (2.18) with the domain of definition \({\mathcal {T}}\times S_1\widetilde{D}_{s1}\times M_{s_2}\times P_1\widetilde{D}_{s1}\times \widetilde{D_2}\) if it depends on \(x_{p_2}\) and \({\mathcal {T}}\times S_1\widetilde{D}_{s1}\times M_{s_2}\times P_1\widetilde{D}_{s1}\) if it does not depend on \(x_{p_2}\). Assume that there exists a function \(W\in C({\mathcal {T}}\times X_{s_1}\times X_1,{\mathbb R})\) and for each sufficiently small number \({r>0}\) there exists a closed set \(K_r=\{x_{s_1}+x_{p_1}\in M_{s1}\mid \rho (K_r,M_{s1}^c)=r\}\)  (\({M_{s1}^c=(X_{s_1}\dot{+} X_1)\setminus M_{s1}}\)\(\rho (K_r,M_{s1}^c)=\! \inf \limits _{k\,\in \, K_r,\, m\,\in \, M_{s1}^c} \Vert m-k\Vert\) ) such that

    $$W(t_1,x_{s_1}^1,x_{p_1}^1)<W(t_2,x_{s_1}^2,x_{p_1}^2)$$

    for every \(x_{s_1}^1+x_{p_1}^1\in K_r\), \(x_{s_1}^2+x_{p_1}^2\in M_{s1}^c\cap \widetilde{D}_{s1}\) and \(t_1,t_2\in {\mathcal {T}}\) such that \(t_1\le t_2\), and, in addition, \(W(t,x_{s_1},x_{p_1})\) has the continuous partial derivatives on \({\mathcal {T}}\times S_1K_r^c\times P_1K_r^c\) (\({K_r^c=(X_{s_1}\dot{+} X_1)\setminus K_r}\)) and the inequality

    $$\begin{aligned} \dot{W}_{(2.9),(2.10)}(t,x_{s_1},x_{p_1})=\partial _t W(t,x_{s_1},x_{p_1})+ \partial _{(x_{s_1},x_{p_1})} W(t,x_{s_1},x_{p_1})\cdot \Upsilon (t,x_{s_1},x_{s_2},x_{p_1},x_{p_2})\le 0 \end{aligned}$$
    (3.17)

    holds for each \({t\in {\mathcal {T}}}\), \({x_{s_1}+x_{p_1}\in K_r^c\cap \widetilde{D}_{s1}}\), \({x_{s_2}\in M_{s_2}}\), \({x_{p_2}\in \widetilde{D_2}}\) such that \({(t,x_{s_1}+x_{s_2}+x_{p_1}+x_{p_2})\in L_{t_+}}\) (if \(\Upsilon\) does not depend on \(x_{p_2}\), then (3.17) holds for each \(t\in {\mathcal {T}}\), \({x_{s_1}+x_{p_1}\in K_r^c\cap \widetilde{D}_{s1}}\), \({x_{s_2}\in M_{s_2}}\)).

Corollary 3.5

Theorem 3.4 remains valid if condition 4 is replaced by

  4.

    If \(M_{s1}\) is unbounded, then the following holds. There exists a number \({R>0}\), a function \({V\in C^1\big ({\mathcal {T}}\times M_R,{\mathbb R}\big )}\) positive on \({\mathcal {T}}\times M_R\), where \(M_R=\{(x_{s_1},x_{p_1})\in X_{s_1}\times X_1\mid x_{s_1}+x_{p_1}\in M_{s1},\; \Vert x_{s_1}+x_{p_1}\Vert > R\}\), and functions \({k\in C({\mathcal {T}},{\mathbb R})}\), \({U\in C(0,\infty )}\) such that: \(\lim \limits _{\Vert (x_{s_1},x_{p_1})\Vert \rightarrow +\infty }V(t,x_{s_1},x_{p_1})=+\infty\) uniformly in t on each finite interval \([a,b)\subset {\mathcal {T}}\); for each \({t\in {\mathcal {T}}}\), \({(x_{s_1},x_{p_1})\in M_R}\), \({x_{s_2}\in M_{s_2}}\), \({x_{p_2}\in M_2}\) such that \((t,x_{s_1}+x_{s_2}+x_{p_1}+x_{p_2})\in L_{t_+}\), the inequality  \({\dot{V}_{(2.9),(2.10)}(t,x_{s_1},x_{p_1})\le k(t)\, U\big (V(t,x_{s_1},x_{p_1})\big )}\)  holds; \(\int \limits _{{\textstyle v}_0}^{\infty }\dfrac{dv}{U(v)} =\infty\)  (\(v_0>0\) is a constant).

Corollary 3.6

If in the conditions of Theorem 3.4 the sets \(M_{s1}\), \(M_{s_2}\) and \(M_2\) are bounded, then Eq. (2.1) is Lagrange stable for the initial points \((t_0,x_0)\in L_{t_+}\) for which \((S_1+P_1)x_0\in M_{s1}\), \(S_2x_0\in M_{s_2}\) and \(P_2x_0\in M_2\).

Theorem 3.5

Theorem 3.4 remains valid if condition 2 is replaced by

  2.

    A function f(t, x) has the continuous partial derivative with respect to x on \({\mathcal {T}}\times D\). For any fixed \(t_*\in {\mathcal {T}}\), \({x_*=x_{s_1}^*+x_{s_2}^*+x_{p_1}^*+x_{p_2}^*}\) such that \(x_{s_1}^*+x_{p_1}^*\in M_{s1}\), \(x_{s_2}^*\in M_{s_2}\), \(x_{p_2}^*\in M_2\) and \({(t_*,x_*)\in L_{t_+}}\), the operator

    $$\begin{aligned} \Phi _{t_*,x_*}:=\left[ \partial _x (Q_2f)(t_*,x_*)- B\right] P_2:X_2\rightarrow Y_2 \end{aligned}$$
    (3.18)

    has the inverse \(\Phi _{t_*,x_*}^{-1}\in \textrm{L}(Y_2,X_2)\).

Remark 3.2

(cf. [4, Remark 3.2]) The operator \(\Phi _{t_*,x_*}\) (3.18) (as well as (3.6)) is defined as an operator from \(X_2\) into \(Y_2\); however, in general, the operator defined by the formula from (3.18) is an operator from \({{\mathbb R}^n}\) into \({{\mathbb R}^m}\) with the range \(Y_2\), i.e., \(\widehat{\Phi }_{t_*,x_*}:=\left[ \partial _x (Q_2f)(t_*,x_*)- B\right] P_2\in \textrm{L}({{\mathbb R}^n},{{\mathbb R}^m})\) and \(\widehat{\Phi }_{t_*,x_*}{{\mathbb R}^n}= Y_2\)  (\(t_*\), \(x_*\) are fixed). Since \(\Phi _{t_*,x_*}=\widehat{\Phi }_{t_*,x_*}\big |_{X_2}\) and it is assumed that \(\Phi _{t_*,x_*}\) is invertible, we have \(\widehat{\Phi }_{t_*,x_*}{{\mathbb R}^n}=\widehat{\Phi }_{t_*,x_*}X_2=Y_2\)  (\({X_s\dot{+}X_1=\mathop {\textrm{Ker}}(\widehat{\Phi }_{t_*,x_*})}\)). The operator \(\widehat{\Phi }_{t_*,x_*}\) has the semi-inverse \(\widehat{\Phi }_{t_*,x_*}^{(-1)}\), i.e., the operator \(\widehat{\Phi }_{t_*,x_*}^{(-1)}\in \textrm{L}({{\mathbb R}^m},{{\mathbb R}^n})\) such that \(\widehat{\Phi }_{t_*,x_*}^{(-1)}{{\mathbb R}^m}= \widehat{\Phi }_{t_*,x_*}^{(-1)}Y_2=X_2\) and \(\Phi _{t_*,x_*}^{-1}=\widehat{\Phi }_{t_*,x_*}^{(-1)}\big |_{Y_2}\), which is defined by the relations \(\widehat{\Phi }_{t_*,x_*}^{(-1)} \widehat{\Phi }_{t_*,x_*}= P_2\), \(\widehat{\Phi }_{t_*,x_*} \widehat{\Phi }_{t_*,x_*}^{(-1)}= Q_2\) and \(\widehat{\Phi }_{t_*,x_*}^{(-1)}= P_2\widehat{\Phi }_{t_*,x_*}^{(-1)}\).

The proof of Theorems 3.4, 3.5 and Corollaries 3.4, 3.5, 3.6. The proofs of Theorems 3.4, 3.5 and Corollaries 3.4, 3.5, 3.6 are carried out in a similar way as the proofs of Theorems 3.1, 3.2 and Corollaries 3.1, 3.2, 3.3, respectively.

In addition, the proofs of Theorems 3.4, 3.5 for \(D={{\mathbb R}^n}\), \(M_{s1}=X_{s_1}\dot{+}X_1\) and \(M_2=X_2\) in their conditions are carried out in the same way as the proofs of the theorems [4, Theorems 3.4, 3.2] (note that \(D_{s_2}\) in the theorems [4, Theorems 3.4, 3.2] denotes some set in \(X_{s_2}\), not the subset \(D_{s_2}=S_2D\) of D as in Theorems 3.4, 3.5).

Note that the theorem [4, Theorem 4.1], as well as Corollary 3.6, gives conditions for the Lagrange stability of the singular DAE (2.1), but in the case when \(D={{\mathbb R}^n}\) and \(M_{s1}=X_{s_1}\dot{+}X_1\) (conditions 1 and 2 of Theorem 3.4, where \(D={{\mathbb R}^n}\), \(M_{s1}=X_{s_1}\dot{+}X_1\) and \(M_2=X_2\), are similar to those contained in [4, Theorem 4.1]); thus, Corollary 3.6 gives more general conditions.

The blow-up of solutions

The blow-up of solutions (Lagrange instability) of regular semilinear DAEs

Theorem 4.1

Let \(f\in C({\mathcal {T}}\times D,{{\mathbb R}^n})\), where \(D\subseteq {{\mathbb R}^n}\) is some open set and \({\mathcal {T}}=[t_+,\infty )\subseteq [0,\infty )\), and let the operator pencil \(\lambda A+B\) be a regular pencil of index not higher than 1. Assume that there exists an open (unbounded) set \(M_1\subseteq D_1\) and a set \(M_2\subseteq D_2\) such that condition 1 of Theorem 3.1, condition 2 of Theorem 3.1 (or condition 2 of Theorem 3.2) and condition 3 of Theorem 3.1 (or condition 3 of Corollary 3.1) hold and the following condition holds:

  4.

    There exists a function \(V\in C^1\big ({\mathcal {T}}\times M_1,{\mathbb R}\big )\) positive on \({\mathcal {T}}\times M_1\) and a function \({\chi \in C({\mathcal {T}}\times (0,\infty ),{\mathbb R})}\) such that:

    (4.a)

      for each \(t\in {\mathcal {T}}\), \(x_{p_1}\in M_1\), \(x_{p_2}\in M_2\) such that \((t,x_{p_1}+x_{p_2})\in L_{t_+}\), the derivative (2.28) of the function V along the trajectories of (2.22) satisfies the inequality

      $$\begin{aligned} \dot{V}_{(2.22)}(t,x_{p_1})\ge \chi \big (t,V(t,x_{p_1})\big ); \end{aligned}$$
    (4.b)

      the differential inequality

      $$\begin{aligned} \dot{v}\ge \chi (t,v)\qquad (t\in {\mathcal {T}}) \end{aligned}$$
      (4.1)

      does not have global positive solutions.

Then for each initial point \((t_0,x_0)\in L_{t_+}\) for which \(P_ix_0\in M_i\), \(i=1,2\), IVP (2.1), (2.2) has a unique solution x(t) and this solution has a finite escape time (i.e., it blows up in finite time).

Corollary 4.1

Theorem 4.1 remains valid if condition 4 is replaced by

  4.

    There exists a function \(V\in C^1\big ({\mathcal {T}}\times M_1,{\mathbb R}\big )\) positive on \({\mathcal {T}}\times M_1\) and functions \({k\in C({\mathcal {T}},{\mathbb R})}\), \({U\in C(0,\infty )}\) such that: for each \({t\in {\mathcal {T}}}\), \({x_{p_1}\in M_1}\), \({x_{p_2}\in M_2}\) such that \((t,x_{p_1}+x_{p_2})\in L_{t_+}\) the inequality \({\dot{V}_{(2.22)}(t,x_{p_1})\ge k(t)\, U\big (V(t,x_{p_1})\big )}\)  holds; \({\int \limits _{k_0}^{\infty }k(t)dt=\infty }\) and \({\int \limits _{{\textstyle v}_0}^{\infty }\dfrac{dv}{U(v)}<\infty }\)\({k_0,v_0>0}\) are constants).

The proof of Theorem 4.1

Just as in the proof of Theorem 3.1, or Theorem 3.2, or Corollary 3.1 (depending on which conditions hold), it is proved that there exists a unique solution \(x_{p_1}(t)\) of IVP (3.11), (3.12) on the maximal interval of existence \(J_{max}\) and either \(J_{max}=[t_0,\infty )\), or \(J_{max}=[t_0,\beta )\) where \(\beta <\infty\) and \(\lim \limits _{t\rightarrow \beta -0}\Vert x_{p_1}(t)\Vert =\infty\). In addition, it is proved that IVP (2.1), (2.2) has the unique solution \(x(t)=x_{p_1}(t)+\eta (t,x_{p_1}(t))\) on the same maximal interval of existence \(J_{max}\). This holds for each initial point \((t_0,x_0)\in L_{t_+}\) for which \(P_ix_0\in M_i\), \(i=1,2\). Further, from condition 4 it follows that there exists a function \(V\in C^1\big ({\mathcal {T}}\times M_1,{\mathbb R}\big )\) positive on \({\mathcal {T}}\times M_1\) and a function \(\chi \in C({\mathcal {T}}\times (0,\infty ),{\mathbb R})\) such that \(\dot{V}_{(3.11)}(t,x_{p_1})\ge \chi \big (t,V(t,x_{p_1})\big )\) for each \(t\in {\mathcal {T}}\), \(x_{p_1}\in M_1\) and the inequality (4.1) does not have global positive solutions. Then, using [13, Chapter IV, Theorem XIV], we obtain that the solution \(x_{p_1}(t)\) has a finite escape time and hence \(J_{max}=[t_0,\beta )\), \(\beta <\infty\) (\(\lim \limits _{t\rightarrow \beta -0}\Vert x_{p_1}(t)\Vert = \infty\)). Consequently, the solution x(t) of IVP (2.1), (2.2) has a finite escape time.

The proof of Corollary 4.1

Let \(\chi (t,v):=k(t)\,U(v)\), where \({k\in C({\mathcal {T}},{\mathbb R})}\) and \({U\in C(0,\infty )}\) satisfy the relations \({\int \limits _{k_0}^{\infty }k(t)dt=\infty }\), \({\int \limits _{{\textstyle v}_0}^{\infty }\dfrac{dv}{U(v)}<\infty }\)  (\({k_0,v_0>0}\)). Then the differential inequality (4.1) does not have global positive solutions (see, e.g., [13]). Hence, condition 4 of Theorem 4.1, where the function \(\chi\) has the form \(\chi (t,v)=k(t)\,U(v)\), holds, and all conditions of Theorem 4.1 are fulfilled.
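The underlying estimate here is analogous (again only a sketch, assuming \(U(v)>0\) for \(v>0\)): if v were a global positive solution of (4.1) with \(\chi (t,v)=k(t)\,U(v)\), then for all \(t\ge t_0\)

$$\int \limits _{t_0}^{t}k(s)\,ds\le \int \limits _{v(t_0)}^{v(t)}\frac{dv}{U(v)}\le \int \limits _{v(t_0)}^{\infty }\frac{dv}{U(v)}<\infty ,$$

which is impossible because \(\int \limits _{k_0}^{\infty }k(t)dt=\infty\); hence every positive solution of (4.1) has a finite escape time.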

The blow-up of solutions (Lagrange instability) of singular semilinear DAEs

Theorem 4.2

Let \(f\in C({\mathcal {T}}\times D,{{\mathbb R}^m})\), where \(D\subseteq {{\mathbb R}^n}\) is some open set and \({\mathcal {T}}=[t_+,\infty )\subseteq [0,\infty )\), and let the operator pencil \(\lambda A+B\) be a singular pencil such that its regular block \(\lambda A_r+B_r\), where \(A_r\), \(B_r\) are defined in (A.2), is a regular pencil of index not higher than 1. Assume that there exists an open (unbounded) set \(M_{s1}\subseteq D_{s_1}\dot{+}D_1\) and sets \(M_{s_2}\subseteq D_{s_2}\), \(M_2\subseteq D_2\) such that condition 1 of Theorem 3.4, condition 2 of Theorem 3.4 (or condition 2 of Theorem 3.5) and condition 3 of Theorem 3.4 (or condition 3 of Corollary 3.4) hold and the following condition holds:

  4.

    There exists a function \({V\in C^1\big ({\mathcal {T}}\times \widehat{M}_{s1},{\mathbb R}\big )}\) positive on \({\mathcal {T}}\times \widehat{M}_{s1}\), where \(\widehat{M}_{s1}=\{(x_{s_1},x_{p_1})\in X_{s_1}\times X_1 \mid x_{s_1}+x_{p_1}\in M_{s1}\}\), and a function \({\chi \in C({\mathcal {T}}\times (0,\infty ),{\mathbb R})}\) such that:

    (4.a)

      for each \(t\in {\mathcal {T}}\), \((x_{s_1},x_{p_1})\in \widehat{M}_{s1}\), \(x_{s_2}\in M_{s_2}\), \(x_{p_2}\in M_2\) such that \((t,x_{s_1}+x_{s_2}+x_{p_1}+x_{p_2})\in L_{t_+}\), the derivative (2.17) of the function V along the trajectories of (2.9) and (2.10) satisfies the inequality

      $$\begin{aligned} \dot{V}_{(2.9),(2.10)}(t,x_{s_1},x_{p_1})\ge \chi \big (t,V(t,x_{s_1},x_{p_1})\big ); \end{aligned}$$
      (4.2)
    (4.b)

      the differential inequality (4.1) does not have global positive solutions.

Then for each initial point \((t_0,x_0)\in L_{t_+}\), for which \((S_1+P_1)x_0\in M_{s1}\), \(S_2x_0\in M_{s_2}\) and \(P_2x_0\in M_2\), IVP (2.1), (2.2) has a unique solution x(t) for which the choice of the function \(\phi _{s_2}\in C([t_0,\infty ),M_{s_2})\) with the initial value \({\phi _{s_2}(t_0)=S_2 x_0}\) uniquely defines the component \(S_2x(t)=\phi _{s_2}(t)\) when \({\mathop {\textrm{rank}}(\lambda A+B)<n}\) (when \({\mathop {\textrm{rank}}(\lambda A+B)=n}\), the component \(S_2 x\) is absent), and this solution has a finite escape time (i.e., it blows up in finite time).

Corollary 4.2

Theorem 4.2 remains valid if condition 4 is replaced by

  4.

    There exists a function \({V\in C^1\big ({\mathcal {T}}\times \widehat{M}_{s1},{\mathbb R}\big )}\) positive on \({\mathcal {T}}\times \widehat{M}_{s1}\), where \(\widehat{M}_{s1}=\{(x_{s_1},x_{p_1})\in X_{s_1}\times X_1 \mid x_{s_1}+x_{p_1}\in M_{s1}\}\), and functions \({k\in C({\mathcal {T}},{\mathbb R})}\), \({U\in C(0,\infty )}\) such that: for each \(t\in {\mathcal {T}}\), \((x_{s_1},x_{p_1})\in \widehat{M}_{s1}\), \(x_{s_2}\in M_{s_2}\), \(x_{p_2}\in M_2\) such that \((t,x_{s_1}+x_{s_2}+x_{p_1}+x_{p_2})\in L_{t_+}\) the inequality \(\dot{V}_{(2.9),(2.10)}(t,x_{s_1},x_{p_1})\ge k(t)\, U\big (V(t,x_{s_1},x_{p_1})\big )\)  holds; \({\int \limits _{k_0}^{\infty }k(t)dt=\infty }\) and \({\int \limits _{{\textstyle v}_0}^{\infty }\dfrac{dv}{U(v)}<\infty }\)\({k_0,v_0>0}\) are constants).

The proof of Theorem 4.2 and Corollary 4.2. The proofs of Theorem 4.2 and Corollary 4.2 are carried out in a similar way as the proofs of Theorem 4.1 and Corollary 4.1, respectively. In addition, the proof of Theorem 4.2 for \(D={{\mathbb R}^n}\), \(M_{s1}=X_{s_1}\dot{+}X_1\) and \(M_2=X_2\) in its conditions is carried out in a similar way as the proof of the theorem [4, Theorem 5.1] (note that \(D_{s_2}\) in [4, Theorem 5.1] denotes some set in \(X_{s_2}\), not the subset \(D_{s_2}=S_2D\) of D as in Theorem 4.2).

The criterion of global solvability

The criterion of global solvability of regular semilinear DAEs

Theorem 5.1

Let \(f\in C({\mathcal {T}}\times D,{{\mathbb R}^n})\), where \(D\subseteq {{\mathbb R}^n}\) is some open set and \({\mathcal {T}}=[t_+,\infty )\subseteq [0,\infty )\), and let \(\lambda A+B\) be a regular pencil of index not higher than 1. Let there exist an open set \(M_1\subseteq D_1\) and a set \(M_2\subseteq D_2\) such that conditions 1, 2, and 3 of Theorem 3.1 hold.

Then IVP (2.1), (2.2) has a unique solution x(t) for each initial point \({(t_0,x_0)\in L_{t_+}}\) such that \(P_ix_0\in M_i\), \(i=1,2\), which is global (i.e., exists on \([t_0,\infty )\)) if condition 4 of Theorem 3.1 is satisfied and has a finite escape time (i.e., exists on \([t_0,\beta )\) where \(\beta <\infty\) and \(\lim \limits _{t\rightarrow \beta -0}\Vert x_{p_1}(t)\Vert =\infty\)) if condition 4 of Theorem 4.1 is satisfied.

Corollary 5.1

Theorem 5.1 remains valid if any of the following replacements (or all of them) take place:

  • condition 2 of Theorem 3.1 is replaced by condition 2 of Theorem 3.2;

  • condition 3 of Theorem 3.1 is replaced by condition 3 of Corollary 3.1;

  • condition 4 of Theorem 3.1 is replaced by condition 4 of Corollary 3.2;

  • condition 4 of Theorem 4.1 is replaced by condition 4 of Corollary 4.1.

Proof

Both Theorem 5.1 and Corollary 5.1 follow directly from the theorems and corollaries proved above.

The criterion of global solvability of singular semilinear DAEs

Theorem 5.2

Let \(f\in C({\mathcal {T}}\times D,{{\mathbb R}^m})\), where \(D\subseteq {{\mathbb R}^n}\) is some open set and \({\mathcal {T}}=[t_+,\infty )\subseteq [0,\infty )\), and let the operator pencil \(\lambda A+B\) be a singular pencil such that its regular block \(\lambda A_r+B_r\), where \(A_r\), \(B_r\) are defined in (A.2), is a regular pencil of index not higher than 1. Let there exist an open set \(M_{s1}\subseteq D_{s_1}\dot{+}D_1\) and sets \(M_{s_2}\subseteq D_{s_2}\), \(M_2\subseteq D_2\) such that conditions 1, 2 and 3 of Theorem 3.4 hold.

Then for each initial point \({(t_0,x_0)\in L_{t_+}}\) such that \((S_1+P_1)x_0\in M_{s1}\), \(S_2x_0\in M_{s_2}\) and \(P_2x_0\in M_2\), IVP (2.1), (2.2) has a unique solution x(t) for which the choice of the function \(\phi _{s_2}\in C([t_0,\infty ),M_{s_2})\) with the initial value \({\phi _{s_2}(t_0)=S_2 x_0}\) uniquely defines the component \(S_2x(t)=\phi _{s_2}(t)\) when \({\mathop {\textrm{rank}}(\lambda A+B)<n}\)  (when \({\mathop {\textrm{rank}}(\lambda A+B)=n}\), the component \(S_2 x\) is absent), and this solution is global if condition 4 of Theorem 3.4 holds and has a finite escape time if condition 4 of Theorem 4.2 holds.

Corollary 5.2

Theorem 5.2 remains valid if any of the following replacements (or all of them) take place:

  • condition 2 of Theorem 3.4 is replaced by condition 2 of Theorem 3.5;

  • condition 3 of Theorem 3.4 is replaced by condition 3 of Corollary 3.4;

  • condition 4 of Theorem 3.4 is replaced by condition 4 of Corollary 3.5;

  • condition 4 of Theorem 4.2 is replaced by condition 4 of Corollary 4.2.

Proof

Both Theorem 5.2 and Corollary 5.2 follow directly from the theorems and corollaries proved above.

Examples

Example 1 (a regular DAE)

Consider the system of differential and algebraic equations

$$\begin{aligned} \dot{x}_1&=x_1^2, \end{aligned}$$
(6.1)
$$\begin{aligned} x_1+x_2&=\varphi (t), \end{aligned}$$
(6.2)

where \(\varphi \in C({\mathcal {T}},{\mathbb R})\), \({\mathcal {T}}= [0,\infty )\), and \(x_i=x_i(t)\), \(i=1,2\), are real functions. The system (6.1), (6.2) can be represented in the form of the DAE \(\frac{d}{dt}[Ax]+Bx=f(t,x)\), i.e., Eq. (2.1), where

$$\begin{aligned} x=\begin{pmatrix} x_1 \\ x_2 \end{pmatrix},\quad A=\begin{pmatrix} 1 &{} 0 \\ 0 &{} 0 \end{pmatrix},\quad B=\begin{pmatrix} 0 &{} 0 \\ 1 &{} 1 \end{pmatrix},\quad f(t,x)=\begin{pmatrix} x_1^2 \\ \varphi (t) \end{pmatrix}, \end{aligned}$$
(6.3)

\(f\in C({\mathcal {T}}\times D,{\mathbb R}^2)\), \(D={\mathbb R}^2\). It can be readily verified that the characteristic pencil \(\lambda A+B\) is a regular pencil of index 1. The initial condition is given as \(x(t_0)=x_0\), i.e., (2.2). It is clear that the initial values \(t_0\), \(x_0=(x_{1,0},x_{2,0})^{\mathop {\textrm{T}}}\) must satisfy (6.2), that is, \(x_{1,0}+x_{2,0}=\varphi (t_0)\). The consistency condition \((t_0,x_0)\in L_0\) means the same (as shown below).

A solution of the IVP for the system (6.1), (6.2) with the initial condition (2.2) has the form \(x_1(t)=\dfrac{1}{x_{1,0}^{-1}+t_0-t}\), \(x_2(t)=\varphi (t)-x_1(t)\) when \(x_{1,0}\ne 0\), and \(x_1(t)\equiv 0\), \(x_2(t)=\varphi (t)\) when \(x_{1,0}= 0\) (accordingly, a solution of the DAE (2.1), (6.3) is \(x(t)=(x_1(t),x_2(t))^{\mathop {\textrm{T}}}\) with the specified components \(x_1(t)\), \(x_2(t)\)). Obviously, the solution is global (i.e., exists on the interval \([t_0,\infty )\)) if \(x_{1,0}<0\), and it has the finite escape time \(t_0+x_{1,0}^{-1}\) (i.e., exists on the finite interval \([t_0,T)\), where \(T=t_0+x_{1,0}^{-1}\), and \(\lim \limits _{t\rightarrow T-0} \Vert x(t)\Vert =+\infty\)) if \(x_{1,0}>0\). Let us prove this, using the theorems obtained above. First we prove that the solution is global if \(x_{1,0}<0\), then we prove that the solution has the finite escape time if \(x_{1,0}>0\); in both cases, \(x_{2,0}\in {\mathbb R}\) is such that (6.2) is satisfied, i.e., \(x_{2,0}=\varphi (t_0)-x_{1,0}\).

First, construct the pairs \(P_i:{\mathbb R}^2\rightarrow X_i\), \(i=1,2\), and \(Q_j:{\mathbb R}^2\rightarrow Y_j\), \(j=1,2\), of mutually complementary projectors satisfying (A.31) and the subspaces \(X_i\), \(Y_i\), \(i=1,2\), from the corresponding direct decompositions (A.26), where \(X_r=Y_r={\mathbb R}^2\), as indicated in Appendix 1 (see Remarks A.5, A.3). The projection matrices (which for brevity we will also call projectors) corresponding to the mentioned projectors with respect to the standard bases in \({\mathbb R}^2\) (a basis in \({\mathbb R}^n\) is standard if the ith coordinate of the basis vector \(e_j\), \(j=1,...,n\), is equal to \(\delta _{ij}\)) have the form:

$$P_1=\begin{pmatrix} 1 &{} 0 \\ -1 &{} 0 \end{pmatrix},\quad P_2=\begin{pmatrix} 0 &{} 0 \\ 1 &{} 1 \end{pmatrix},\quad Q_1=\begin{pmatrix} 1 &{} 0 \\ 0 &{} 0 \end{pmatrix},\quad Q_2=\begin{pmatrix} 0 &{} 0 \\ 0 &{} 1 \end{pmatrix}.$$

The mentioned subspaces have the form

$$X_1=\mathop {\textrm{Lin}}\{e_1\},\quad X_2=\mathop {\textrm{Lin}}\{e_2\},\qquad Y_1=\mathop {\textrm{Lin}}\{q_1\},\quad Y_2=\mathop {\textrm{Lin}}\{q_2\},$$

where \(e_1=(1,-1)^{\mathop {\textrm{T}}}\), \(e_2=(0,1)^{\mathop {\textrm{T}}}\), \(q_1=(1,0)^{\mathop {\textrm{T}}}\) and \(q_2=(0,1)^{\mathop {\textrm{T}}}\). Hence, \(\{e_1,\, e_2\}\) is the basis of \({\mathbb R}^2 = X_1\dot{+}X_2\) and \(\{q_1,\, q_2\}\) is the basis of \({\mathbb R}^2 =Y_1\dot{+}Y_2\). Note that \(D={\mathbb R}^2\) and \(D_i=X_i=P_i{\mathbb R}^2\), \(i=1,2\) (see (2.19)). The components of \(x\in {\mathbb R}^2\) represented as \(x= x_{p_1}+x_{p_2}\) (see (A.41)), where \(x_{p_i}\in X_i\), \(i=1,2\), have the form

$$x_{p_1}=P_1 x=x_1 e_1,\quad x_{p_2}=P_2 x=(x_1+x_2) e_2,$$

where \(e_1\), \(e_2\) are the basis vectors defined above. If we make the change of variables

$$x_1=z,\quad x_1+x_2=u,$$

then \(x_{p_1}=z\, e_1\), \(x_{p_2}=u\, e_2\) and the system (6.1), (6.2) takes the following form: \({\dot{z}=z^2}\), \({u=\varphi (t)}\).

Notice that \(\mathcal {A}_1=A\), \(\mathcal {B}_1=0\), \(\mathcal {B}_2=B\) and \(\mathcal {A}_1^{(-1)}=\left( {\begin{smallmatrix} 1 &{} 0 \\ -1 &{} 0 \end{smallmatrix}}\right)\) are the matrices corresponding to the operators (with respect to the standard bases in \({\mathbb R}^2\)) defined by (A.33) and (A.35).
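To make the block structure easier to follow, here is a small NumPy sanity check of the matrices written above (an illustrative sketch only; the identities tested, such as \(P_1+P_2=I\), \(Q_2A=AP_2=0\), \(Q_1B=BP_1=0\) and the semi-inverse relations for \(\mathcal {A}_1^{(-1)}\), reflect our reading of the relations (A.31), (A.33), (A.35) and are not quoted from Appendix 1):

```python
import numpy as np

# Matrices of Example 1 (see (6.3)) and the projection matrices given above.
A  = np.array([[1., 0.], [0., 0.]])
B  = np.array([[0., 0.], [1., 1.]])
P1 = np.array([[1., 0.], [-1., 0.]])
P2 = np.array([[0., 0.], [1., 1.]])
Q1 = np.array([[1., 0.], [0., 0.]])
Q2 = np.array([[0., 0.], [0., 1.]])
A1inv = np.array([[1., 0.], [-1., 0.]])  # matrix of the semi-inverse A_1^(-1)

I2x2 = np.eye(2)
# Mutually complementary projectors: idempotent and summing to the identity.
assert np.allclose(P1 + P2, I2x2) and np.allclose(Q1 + Q2, I2x2)
for P in (P1, P2, Q1, Q2):
    assert np.allclose(P @ P, P)
# Block action of the pencil: A maps X_1 into Y_1 and vanishes on X_2,
# while B vanishes on X_1 and maps X_2 into Y_2 (the block B_1 is zero).
assert np.allclose(A @ P2, 0) and np.allclose(Q2 @ A, 0)
assert np.allclose(B @ P1, 0) and np.allclose(Q1 @ B, 0)
# Semi-inverse relations: A_1^(-1) A = P_1 and A A_1^(-1) = Q_1.
assert np.allclose(A1inv @ A, P1) and np.allclose(A @ A1inv, Q1)
# Regularity of the pencil in this example: det(lambda*A + B) = lambda.
lam = 3.7  # arbitrary test value
assert np.isclose(np.linalg.det(lam * A + B), lam)
print("all projector and pencil checks passed")
```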

The equation \(Q_2[f(t,x)-Bx]=0\) defining the manifold \(L_0\) (i.e., (2.31) where \(t_*=0\)) is equivalent to (6.2) or \(u=\varphi (t)\) (\(u=x_1+x_2\)). Consequently, condition 1 of Theorem 3.1 is satisfied for any set \(M_1\subseteq X_1\) and \(M_2=X_2\).

Choose

$$\begin{aligned} M_1=\{x_{p_1}=x_1 (1,-1)^{\mathop {\textrm{T}}}\mid x_1\in (-\infty ,0)\},\;\; M_2=X_2=\{x_{p_2}=(x_1+x_2) (0,1)^{\mathop {\textrm{T}}}\mid x_1+x_2\in {\mathbb R}\}, \end{aligned}$$
(6.4)

where \(x_1=z\), \(x_1+x_2=u\) if the new variables are used. Hence, \(x_{p_1}\in M_1\) iff \(x_1\in (-\infty ,0)\), and, in general, for any \(x=(x_1,x_2)^{\mathop {\textrm{T}}}=x_{p_1}+x_{p_2}\in M_1\dot{+} M_2\) (i.e., \(x_{p_1}\in M_1\) and \(x_{p_2}\in M_2\)), the components \(x_1\), \(x_2\) are such that \(x_1<0\), \(x_2\in {\mathbb R}\). Let us verify whether all conditions of Theorem 3.1 are satisfied.

The function f(t, x) has the continuous partial derivative with respect to x on \({\mathcal {T}}\times D\) (recall that \({\mathcal {T}}= [0,\infty )\) and \(D={\mathbb R}^2\)), and therefore we will use condition 2 of Theorem 3.2. Operator (3.6) takes the form \(\Phi _{t_*,x_*}=-B\big |_{X_2}=-B_2:X_2\rightarrow Y_2\) for any fixed \(t_*\in {\mathcal {T}}\), \(x_*\in D\), and hence the matrix \(\Phi _{t_*,x_*}=-1\) corresponds to the operator \(\Phi _{t_*,x_*}\) with respect to the bases \(e_2\) and \(q_2\) in \(X_2\) and \(Y_2\), respectively. Since the operator \(\Phi _{t_*,x_*}\) has the inverse \(\Phi _{t_*,x_*}^{-1}=-B_2^{-1}\in \textrm{L}(Y_2,X_2)\) for any fixed \(t_*\in {\mathcal {T}}\), \(x_*\in D\), condition 2 of Theorem 3.2 is fulfilled.

Let us prove that the component \(x_{p_1}(t)\) of each solution x(t) of the DAE (2.1), (6.3) with the initial point \((t_0,x_0)\in L_0\) for which \(P_ix_0\in M_i\), \(i=1,2\), can never leave \(M_1\). Note that any \(t_0\in [0,\infty )\), \(x_0=(x_{1,0},x_{2,0})^{\mathop {\textrm{T}}}\) such that \(x_{1,0}<0\) and \(x_{2,0}=\varphi (t_0)-x_{1,0}\in {\mathbb R}\) are consistent initial values (i.e., \((t_0,x_0)\in L_0\)) for which \(P_ix_0\in M_i\), \(i=1,2\). Since the DAE (2.1), (6.3) is equivalent to the system of Eqs. (2.22) and (2.23) which are equivalent to Eqs. (6.1) and (6.2), respectively, and \(x_{p_1}\in M_1\) iff \(x_1\in (-\infty ,0)\), we need to prove that a solution \(x_1=\widehat{x}_1(t)\) of (6.1) with the initial values \(t_0\in [0,\infty )\), \(\widehat{x}_1(t_0)=x_{1,0}\in (-\infty ,0)\) can never leave the set \(\widehat{M}_1:=\{x_1\in (-\infty ,0)\}\). Suppose that the solution \(\widehat{x}_1(t)\) leaves \(\widehat{M}_1\); then it must cross the boundary \(\partial \widehat{M}_1=\{x_1=0\}\) of \(\widehat{M}_1\). Consequently, there exists \(t_1>t_0\) such that \(\widehat{x}_1(t_1)=0\). It is obvious that the IVP for Eq. (6.1) with the initial condition \(x_1(t_0)=0\) has the unique solution \(x_1=\widetilde{x}_1(t)\equiv 0\) on \([t_0,\infty )\). Since the solutions \(\widehat{x}_1(t)\) and \(\widetilde{x}_1(t)\) coincide at the point \(t_1\), by uniqueness they coincide on \([t_0,t_1]\) and hence \(\widehat{x}_1(t)=0\) on \([t_0,t_1]\), which is impossible, since \(\widehat{x}_1(t_0)=x_{1,0}<0\). Thus, the solution \(\widehat{x}_1(t)\) cannot leave \(\widehat{M}_1\). It follows from the above that condition 3 of Theorem 3.1 is satisfied.

Define the function \(V(t,x_{p_1})\equiv V(x_{p_1}):=0.5x_{p_1}^{\mathop {\textrm{T}}}x_{p_1}=x_1^2\). This function is positive for \(x_{p_1}\ne 0\) and, obviously, satisfies condition (4.a) of Theorem 3.1. For the DAE (2.1) with (6.3), Eq. (2.22) takes the form

$${\dot{x}_{p_1}=\Pi (t,x_{p_1},x_{p_2})},\quad \text {where}\quad \Pi (t,x_{p_1},x_{p_2})\equiv \Pi (x_{p_1})=x_1^2\,(1,-1)^{\mathop {\textrm{T}}},$$

and the derivative of the function V along the trajectories of (2.22) satisfies the inequality

$$\begin{aligned} \dot{V}_{(2.22)}(x_{p_1})=x_{p_1}^{\mathop {\textrm{T}}}\Pi (x_{p_1})= 2x_1^3<x_1^2=V(x_{p_1}) \end{aligned}$$
(6.5)

for each \(x_{p_1}\in M_R\) where \(M_R=\{x_{p_1}\in M_1\mid \Vert x_{p_1}\Vert > R\}=\{x_{p_1}=x_1 (1,-1)^{\mathop {\textrm{T}}}\mid x_1< -R\,\}\), \(R>0\) is some number. Therefore, the differential inequality (3.4) takes the form \(\dot{v}\le v\). It is easy to verify that this inequality does not have positive solutions with finite escape time. For example, the inequality \({\dot{v}\le v}\) can be written as \({\dot{v}\le k(t)\,U(v)}\), where \(k(t)\equiv 1\) and \(U(v)=v\), and since \(\int \limits _{{\textstyle v}_0}^{\infty }\dfrac{dv}{U(v)} =\infty\) (\(v_0>0\)), then this inequality does not have positive solutions with finite escape time. Hence, condition 4 of Theorem 3.1 holds. Notice that condition 4 of Corollary 3.2, where the functions k, U, and V are the same as defined above, is also satisfied.
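For this particular \(\chi\), the absence of a finite escape time can also be seen from the explicit bound (a standard Gronwall-type estimate, added here only for illustration): every positive solution of \(\dot{v}\le v\) satisfies

$$v(t)\le v(t_0)\,e^{\,t-t_0},\qquad t\ge t_0,$$

and therefore remains finite on every finite interval.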

Thus, by Theorem 3.1 (or Theorem 5.1), where condition 2 is replaced by condition 2 of Theorem 3.2, there exists a unique global solution of IVP (2.1), (6.3), (2.2) for each initial point \((t_0,x_0)\), where \(x_0=(x_{1,0},x_{2,0})^{\mathop {\textrm{T}}}\), such that \(t_0\in [0,\infty )\), \(x_{1,0}<0\), \(x_{2,0}\in {\mathbb R}\) and \(x_{2,0}=\varphi (t_0)-x_{1,0}\) (i.e., \((t_0,x_0)\) satisfies (6.2)).

Now, choose

$$M_1=\{x_{p_1}=x_1 (1,-1)^{\mathop {\textrm{T}}}\mid x_1\in (0,\infty )\}$$

and the same \(M_2=X_2\) as in (6.4) (where \(x_1=z\), \(x_1+x_2=u\) if the new variables are used). Hence, \(x_{p_1}\in M_1\) iff \(x_1\in (0,\infty )\).

Thus, consistent initial values \(t_0\), \(x_0=(x_{1,0},x_{2,0})^{\mathop {\textrm{T}}}\), for which \(P_1x_0\in M_1\), have \(x_{1,0}>0\). Let us check whether the conditions of Theorem 4.1 on the blow-up of solutions are satisfied.

Recall that Theorem 4.1 contains the same conditions as Theorem 3.1 (or 3.2), except for condition 4, but it is necessary to check whether the conditions are satisfied for the new set \(M_1\).

It follows directly from the above that condition 1 of Theorem 3.1 and condition 2 of Theorem 3.2 are fulfilled.

We will prove that condition 3 of Theorem 3.1 holds. Since \(x_{p_1}\in M_1\) iff \(x_1\in (0,\infty )\), by the same arguments as above, we need to prove that a solution \(x_1=x_1(t)\) of (6.1) with the initial values \(t_0\in [0,\infty )\), \(x_1(t_0)=x_{1,0}\in (0,\infty )\) can never leave the set \(\widehat{M}_1:=\{x_1\in (0,\infty )\}\). Indeed, since \(\dot{x}_1=0\) along the boundary \(\partial \widehat{M}_1=\{x_1=0\}\) of \(\widehat{M}_1\) and \(\dot{x}_1>0\) inside \(\widehat{M}_1\), each solution of (6.1) which at the initial moment \(t_0\in [0,\infty )\) is in \(\widehat{M}_1\) can never thereafter leave it.

Further, take the same function V as above, i.e., \(V(t,x_{p_1})\equiv V(x_{p_1}):=0.5x_{p_1}^{\mathop {\textrm{T}}}x_{p_1}=x_1^2\). Then \(\dot{V}_{(2.22)}(x_{p_1})=x_{p_1}^{\mathop {\textrm{T}}}\Pi (x_{p_1})= 2x_1^3\) (see (6.5)) and, hence, it satisfies the inequality

$$\begin{aligned} \dot{V}_{(2.22)}(x_{p_1})=2V^{\frac{3}{2}}(x_{p_1}) >V^{\frac{3}{2}}(x_{p_1}) \end{aligned}$$

for each \(x_{p_1}\in M_1\) (\(x_1>0\)). Therefore, the differential inequality (4.1) takes the form \(\dot{v}>v^{3/2}\). It is easy to verify that this inequality does not have global positive solutions. For example, it can be written as \({\dot{v}> k(t)\,U(v)}\), where \(k(t)\equiv 1\) and \(U(v)=v^{3/2}\), and since \({\int \limits _{k_0}^{\infty }k(t)dt=\infty }\) and \(\int \limits _{{\textstyle v}_0}^{\infty }\dfrac{dv}{U(v)}= 2v_0^{-\frac{1}{2}} <\infty\)\({k_0,v_0>0}\) are constants), then this inequality does not have global positive solutions. Hence, condition 4 of Theorem 4.1 holds. Notice that condition 4 of Corollary 4.1, where the functions k, U, and V are the same as defined above, is also fulfilled.
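The finite escape time can also be estimated directly (a sketch added only for illustration): separating variables in \(\dot{v}>v^{3/2}\) gives, for \(t\ge t_0\),

$$t-t_0<\int \limits _{v(t_0)}^{v(t)}\frac{dv}{v^{3/2}}<\frac{2}{\sqrt{v(t_0)}},$$

so, with \(v(t_0)=V(x_{p_1}(t_0))=x_{1,0}^2\), the solution cannot exist beyond \(t_0+2/x_{1,0}\); this agrees with the exact escape time \(t_0+x_{1,0}^{-1}\) found at the beginning of the example.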

Thus, by Theorem 4.1 (or Theorem 5.1), for each initial point \((t_0,x_0)\), where \(x_0=(x_{1,0},x_{2,0})^{\mathop {\textrm{T}}}\), such that \(t_0\in [0,\infty )\), \(x_{1,0}>0\), \(x_{2,0}\in {\mathbb R}\) and \(x_{2,0}=\varphi (t_0)-x_{1,0}\), there exists a unique solution of IVP (2.1), (6.3), (2.2) and this solution has a finite escape time (blows up in finite time).
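The two regimes can also be observed numerically (an illustration only, not part of the argument). The sketch below integrates the inherent ODE (6.1) with SciPy, recovers \(x_2\) from (6.2), and compares the result with the closed-form solution given at the beginning of the example; the choices \(\varphi (t)=\sin t\) and \(t_0=0\) are made purely for concreteness and are not taken from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

phi = np.sin   # an arbitrary continuous phi, chosen only for this illustration
t0 = 0.0

def closed_form_x1(t, x10):
    # x_1(t) = 1/(x_{1,0}^{-1} + t_0 - t), the solution given in the text
    return 1.0 / (1.0 / x10 + t0 - t)

for x10 in (-1.0, 1.0):
    # Integrate (6.1) on [t0, t0 + 0.9]; for x10 = 1 the escape time is t0 + 1.
    sol = solve_ivp(lambda t, x1: x1**2, (t0, t0 + 0.9), [x10],
                    rtol=1e-10, atol=1e-12, dense_output=True)
    ts = np.linspace(t0, t0 + 0.9, 5)
    x1 = sol.sol(ts)[0]
    x2 = phi(ts) - x1                      # the algebraic equation (6.2)
    err = np.max(np.abs(x1 - closed_form_x1(ts, x10)))
    print(f"x10 = {x10:+.1f}: max |numerical - closed form| = {err:.2e}")
# For x10 = -1 the integration can be continued to any T > t0, while for
# x10 = +1 the numerical solution grows without bound as t approaches t0 + 1.
```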

Example 2 (a regular DAE)

Consider the system of differential and algebraic equations

$$\begin{aligned} \dot{x}_1&=2x_2, \end{aligned}$$
(6.6)
$$\begin{aligned} x_2&=x_1x_2-a^2 \end{aligned}$$
(6.7)

where \(a\ne 0\) (\(a\in {\mathbb R}\)), \(t,t_0\in {\mathcal {T}}= [0,\infty )\), and \(x_i=x_i(t)\), \(i=1,2\), are real functions. The system (6.6), (6.7) can be represented as the DAE (2.1), i.e., \(\frac{d}{dt}[Ax]+Bx=f(t,x)\), where

$$\begin{aligned} x=\begin{pmatrix} x_1 \\ x_2 \end{pmatrix},\quad A=\begin{pmatrix} 1 &{} 0 \\ 0 &{} 0 \end{pmatrix},\quad B=\begin{pmatrix} 0 &{} 0 \\ 0 &{} 1 \end{pmatrix},\quad f(t,x)\equiv f(x)=\begin{pmatrix} 2x_2 \\ x_1x_2-a^2 \end{pmatrix}, \end{aligned}$$
(6.8)

\(f\in C^\infty (D,{\mathbb R}^2)\), \(D={\mathbb R}^2\). It is easy to verify that the characteristic pencil \(\lambda A+B\) is a regular pencil of index 1. For the DAE (2.1), (6.8), we use the initial condition (2.2), i.e., \(x(t_0)=x_0\), where \(x_0=(x_{1,0},x_{2,0})^{\mathop {\textrm{T}}}\). Obviously, \(x_{1,0}\) and \(x_{2,0}\) must satisfy (6.7).

Construct the projectors (A.30) (where \(n=m=2\)) and the subspaces from the direct decompositions (A.26 ) (where \(X_r=Y_r={\mathbb R}^2\)). With respect to the standard bases in \({\mathbb R}^2\), the projection matrices

$$\begin{aligned} P_1=\begin{pmatrix} 1 &{} 0 \\ 0 &{} 0 \end{pmatrix},\quad P_2=\begin{pmatrix} 0 &{} 0 \\ 0 &{} 1 \end{pmatrix},\quad Q_1=\begin{pmatrix} 1 &{} 0 \\ 0 &{} 0 \end{pmatrix},\quad Q_2=\begin{pmatrix} 0 &{} 0 \\ 0 &{} 1 \end{pmatrix} \end{aligned}$$
(6.9)

correspond to the projectors \(P_i:{\mathbb R}^2\rightarrow X_i\), \(Q_i:{\mathbb R}^2\rightarrow Y_i\) (\(i=1,2\)) from (A.30). Therefore, the matrices \(\mathcal {A}_1=A\), \(\mathcal {B}_1=0\), \(\mathcal {B}_2=B\) and \(A_1^{(-1)}=\left( {\begin{smallmatrix} 1 &{} 0 \\ 0 &{} 0 \end{smallmatrix}}\right)\) correspond to the operators, defined by (A.33) and (A.35), with respect to the standard bases in \({\mathbb R}^2\). The subspaces from (A.26) have the form \(X_1=\mathop {\textrm{Lin}}\{e_1\}\), \(X_2=\mathop {\textrm{Lin}}\{e_2\}\), \(Y_1=\mathop {\textrm{Lin}}\{q_1\}\), \(Y_2=\mathop {\textrm{Lin}}\{q_2\}\), where \(e_1=(1,0)^{\mathop {\textrm{T}}}\), \(e_2=(0,1)^{\mathop {\textrm{T}}}\), \(q_1=(1,0)^{\mathop {\textrm{T}}}\) and \(q_2=(0,1)^{\mathop {\textrm{T}}}\). Thus, \(\{e_1,\, e_2\}\) is the basis of \({\mathbb R}^2 = X_1\dot{+}X_2\) and \(\{q_1,\, q_2\}\) is the basis of \({\mathbb R}^2 =Y_1\dot{+}Y_2\); obviously, the bases coincide with the standard basis of \({\mathbb R}^2\). With respect to the direct decomposition \({\mathbb R}^2 = X_1\dot{+}X_2\), a vector \(x\in {\mathbb R}^2\) can be uniquely represented in the form (A.41) where

$$\begin{aligned} x_{p_1}=P_1 x=x_1(1,0)^{\mathop {\textrm{T}}},\quad x_{p_2}=P_2 x=x_2(0,1)^{\mathop {\textrm{T}}}. \end{aligned}$$
(6.10)

Note that \(x_{p_1}=x_1\) with respect to the chosen basis \(e_1\) in \(X_1\) and \(x_{p_2}=x_2\) with respect to the chosen basis \(e_2\) in \(X_2\). The sets \(D_i\) (\(i=1,2\)) from the decomposition (2.19) where \(D={\mathbb R}^2\), have the form

$$\begin{aligned} D_1=X_1=P_1{\mathbb R}^2=\{x_{p_1}=(x_1,0)^{\mathop {\textrm{T}}}\mid x_1\in {\mathbb R}\},\;\; D_2=X_2=P_2{\mathbb R}^2=\{x_{p_2}=(0,x_2)^{\mathop {\textrm{T}}}\mid x_2\in {\mathbb R}\}. \end{aligned}$$
(6.11)

Further, using the theorems from the “Global solvability of regular semilinear DAEs” section, we will prove that the DAE (2.1), (6.8) with the initial condition (2.2) has a unique global solution for any initial values \(t_0\in {\mathcal {T}}\), \(x_0=(x_{1,0},x_{2,0})^{\mathop {\textrm{T}}}\in {\mathbb R}^2\) such that (6.7) is satisfied and \(P_ix_0\in M_i\), \(i=1,2\), where \(M_i\) will be specified below.

The consistency condition \((t,x)\in L_0\) holds if (t, x) satisfies Eq. (6.7), which is equivalent to the algebraic equation \(Q_2[f(t,x)-Bx]=0\) defining the manifold \(L_0\) and can be rewritten as

$$\begin{aligned} x_2 =a^2(x_1-1)^{-1}. \end{aligned}$$
(6.12)

Thus, for any fixed \({t\in {\mathcal {T}}}\), \(x_1\in {\mathbb R}\setminus \{1\}\) there exists a unique \(x_2\in {\mathbb R}\setminus \{0\}\) such that (6.12) is satisfied. Choose

$$\begin{aligned} M_1=\{x_{p_1}=(x_1,0)^{\mathop {\textrm{T}}}\in X_1\mid x_1\in {\mathbb R}\setminus \{1\}\},\quad M_2=\{x_{p_2}=(0,x_2)^{\mathop {\textrm{T}}}\in X_2\mid x_2\in {\mathbb R}\setminus \{0\}\}. \end{aligned}$$
(6.13)

Then condition 1 of Theorem 3.1 holds. Note that \(x_{p_1}\in M_1\) iff \(x_1\in {\mathbb R}\setminus \{1\}\) and \(x_{p_2}\in M_2\) iff \(x_2\in {\mathbb R}\setminus \{0\}\).

The matrix corresponding to operator (3.6), where \(t_*\), \(x_*=(x_1^*,x_2^*)^{\mathop {\textrm{T}}}\) are fixed, has the form \(\Phi _{t_*,x_*}=x_1^*-1\) with respect to the basis \(e_2\) of \(X_2\) and the basis \(q_2\) of \(Y_2\). Hence, the operator \(\Phi _{t_*,x_*}\) has the inverse \(\Phi _{t_*,x_*}^{-1}\in \textrm{L}(Y_2,X_2)\) for any fixed \(t_*\in {\mathcal {T}}\), \(x_{p_1}^*=(x_1^*,0)^{\mathop {\textrm{T}}}\in M_1\) (i.e., \(x_1^*\ne 1\)) and \(x_{p_2}^*=(0,x_2^*)^{\mathop {\textrm{T}}}\in M_2\). Consequently, condition 2 of Theorem 3.2 holds.

The function \(\Pi\) defined in (2.29) has the form \(\Pi (t,x_{p_1},x_{p_2})\equiv \Pi (x_{p_1},x_{p_2})= (2x_2,0)^{\mathop {\textrm{T}}}\), and Eq. (2.22), which is equivalent to (6.6), takes the form \(\dot{x}_{p_1}=\Pi (x_{p_1},x_{p_2})\). Therefore, the sets \(D_{\Pi ,i}\), \(i=1,2\), specified in condition 3 of Corollary 3.1, have the form \({D_{\Pi ,i}=X_i}\). It is clear that \(D_{c,i}=M_i\), \(i=1,2\), and hence \({\widetilde{D_i}= M_i}\), \(i=1,2\), where \(D_{c,i}\) and \(\widetilde{D_i}\) are specified in the same condition. Define the function

$$W(t,x_{p_1})\equiv W(x_{p_1}):=-(x_1-1,0)\, (x_1-1,0)^{\mathop {\textrm{T}}}=-(x_1-1)^2$$

and a family of closed sets

$$K_r=\{x_{p_1}=(x_1,0)^{\mathop {\textrm{T}}}\in M_1\mid x_1\in (-\infty ,1-r]\cup [1+r,+\infty )\},\quad 0<r\ll 1,$$

i.e., \(r>0\) is sufficiently small. Further, we will verify that condition 3 of Corollary 3.1 is satisfied. Since \(M_1^c\cap \widetilde{D_1}=M_1^c\cap M_1=\emptyset\), there is no need to check that the inequality \(W(x_{p_1}^1)<W(x_{p_1}^2)\) holds for \({x_{p_1}^1\in K_r}\) and \(x_{p_1}^2\in M_1^c\cap \widetilde{D_1}\), but generally it holds for \({x_{p_1}^1\in K_r}\) and \(x_{p_1}^2\in M_1^c=\{x_{p_1}=(x_1,0)^{\mathop {\textrm{T}}}\mid x_1=1\}\). Since \(\partial _{x_{p_1}} W(x_{p_1})=-2 (x_1-1,0)\), we have \(\dot{W}_{(2.22)}(x_{p_1})=-4(x_1-1)x_2\). Recall that the consistency condition \((t,x)\in L_0\) is satisfied if (t, x) satisfies (6.7) or (6.12). Hence, for each \(t\in {\mathcal {T}}\), \(x_{p_1}\in K_r^c\cap \widetilde{D_1}=\{x_{p_1}=(x_1,0)^{\mathop {\textrm{T}}}\in X_1\mid x_1\in (1-r,1)\cup (1,1+r)\,\}\), \(x_{p_2}\in \widetilde{D_2}=M_2\) such that t, \(x_1\), \(x_2\) satisfy (6.12) (i.e., \((t,x_{p_1}+x_{p_2})\in L_0\)), the inequality (3.5), which takes the form

$$\dot{W}_{(2.22)}(x_{p_1})=-4(x_1-1)a^2(x_1-1)^{-1}=-4a^2\le 0,$$

is fulfilled. Consequently, condition 3 of Corollary 3.1 holds.

Define the function \({V(t,x_{p_1})\equiv V(x_{p_1}):=x_{p_1}^{\mathop {\textrm{T}}}x_{p_1}=x_1^2}\). Obviously, the function V satisfies condition (4.a) of Theorem 3.1, and \(\dot{V}_{(2.22)}(x_{p_1})=4x_1x_2\) (as mentioned above, (2.22) has the form \(\dot{x}_{p_1}=\Pi (x_{p_1},x_{p_2})\) where \(\Pi (x_{p_1},x_{p_2})= (2x_2,0)^{\mathop {\textrm{T}}}\)). For each \(t\in {\mathcal {T}}\), \(x_{p_1}\in M_R=\{x_{p_1}=(x_1,0)^{\mathop {\textrm{T}}}\in M_1\mid |x_1|>R\,\}\), where \({R>>1}\) (i.e., \(R>0\) is sufficiently large), \(x_{p_2}\in M_2\) such that t, \(x_1\), \(x_2\) satisfy (6.12), the inequality

$$\dot{V}_{(2.22)}(x_{p_1})=4x_1a^2(x_1-1)^{-1}< 8a^2$$

holds. Therefore, the inequality  (3.4) takes the form \(\dot{v}\le 8a^2\), and it is easy to verify that this inequality does not have positive solutions with finite escape time. Hence, condition 4 of Theorem 3.1 is fulfilled.
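Both computations can be double-checked symbolically (a SymPy sketch added only for illustration; the scalar coordinate \(x_1\) is used in place of the vector \(x_{p_1}\), and the constraint (6.12) is substituted to stay on \(L_0\)):

```python
import sympy as sp

x1, x2, a = sp.symbols('x1 x2 a', real=True)

x2_on_L0 = a**2 / (x1 - 1)     # the constraint (6.12), i.e., (t, x) in L_0
x1_dot = 2 * x2                # from Pi(x_{p1}, x_{p2}) = (2*x2, 0)^T

W = -(x1 - 1)**2
V = x1**2

W_dot = sp.diff(W, x1) * x1_dot   # derivative of W along (2.22)
V_dot = sp.diff(V, x1) * x1_dot   # derivative of V along (2.22)

print(sp.simplify(W_dot.subs(x2, x2_on_L0)))          # -4*a**2
print(sp.simplify(V_dot.subs(x2, x2_on_L0)))          # 4*a**2*x1/(x1 - 1)
print(sp.limit(V_dot.subs(x2, x2_on_L0), x1, sp.oo))  # 4*a**2, below 8*a**2
```

In particular, the resulting inequality \(\dot{v}\le 8a^2\) integrates to \(v(t)\le v(t_0)+8a^2\,(t-t_0)\), which is finite for every finite t.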

Thus, by Theorem 3.1 (or Theorem 5.1), where conditions 2 and 3 are replaced by condition 2 of Theorem 3.2 and condition 3 of Corollary 3.1, respectively, there exists a unique global solution of IVP (2.1), (6.8), (2.2) for each initial point \((t_0,x_0)\), where \(x_0=(x_{1,0},x_{2,0})^{\mathop {\textrm{T}}}\), such that \(t_0\in [0,\infty )\), \(x_{1,0}\in {\mathbb R}\setminus \{1\}\), \(x_{2,0}\in {\mathbb R}\setminus \{0\}\) (i.e., \(P_ix_0\in M_i\), \(i=1,2\)), and \(x_{2,0}=x_{1,0}x_{2,0}-a^2\) (i.e., \((t_0,x_0)\in L_0\)).

Indeed, a solution of the DAE (2.1), (6.8), which satisfies the initial condition (2.2), where \(x_0=(x_{1,0},x_{2,0})^{\mathop {\textrm{T}}}\), has the form \(x(t)=(x_1(t),x_2(t))^{\mathop {\textrm{T}}}\) where \(x_1(t)=\mathop {\textrm{sgn}}(x_{1,0}-1)\, \big (4a^2(t-t_0)+(x_{1,0}-1)^2\big )^{1/2}+1\) and \(x_2(t)=\mathop {\textrm{sgn}}(x_{1,0}-1)a^2\big (4a^2(t-t_0)+(x_{1,0}-1)^2\big )^{-1/2}\), and therefore it exists and is global for \(t_0\ge 0\), \(x_{1,0}\ne 1\) and \(x_{2,0}=a^2(x_{1,0}-1)^{-1}\ne 0\).
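This can be confirmed symbolically (a SymPy sketch added only for illustration, covering both signs of \(x_{1,0}-1\)):

```python
import sympy as sp

t, t0, x10, a = sp.symbols('t t0 x10 a', real=True)

for s in (1, -1):                      # s plays the role of sgn(x_{1,0} - 1)
    r = sp.sqrt(4*a**2*(t - t0) + (x10 - 1)**2)
    x1 = s*r + 1                       # closed-form x_1(t) from the text
    x2 = s*a**2 / r                    # closed-form x_2(t) from the text
    ode_residual = sp.simplify(sp.diff(x1, t) - 2*x2)     # Eq. (6.6)
    alg_residual = sp.simplify(x2 - (x1*x2 - a**2))       # Eq. (6.7)
    print(s, ode_residual, alg_residual)                  # both residuals are 0
```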

Example 3 (a regular DAE)

In this example, we consider the system of differential and algebraic equations

$$\begin{aligned} \dot{x}_1+b\, x_1&=0, \end{aligned}$$
(6.14)
$$\begin{aligned} x_1^2+x_2^2&=1, \end{aligned}$$
(6.15)

where \(t\in {\mathcal {T}}\!= [0,\infty )\), \(b\in {\mathbb R}\), and \(x_i=x_i(t)\), \(i=1,2\), are real functions. It can be written as

$$\begin{aligned} \dot{x}_1 + b\, x_1&=0, \end{aligned}$$
(6.16)
$$\begin{aligned} x_2&=x_1^2+x_2^2+x_2-1 \end{aligned}$$
(6.17)

and be represented in the form of the DAE \(\frac{d}{dt}[Ax]+Bx=f(t,x)\), i.e., Eq. (2.1), where

$$\begin{aligned} x=\begin{pmatrix} x_1 \\ x_2 \end{pmatrix},\quad A=\begin{pmatrix} 1 &{} 0 \\ 0 &{} 0 \end{pmatrix},\quad B=\begin{pmatrix} b &{} 0 \\ 0 &{} 1 \end{pmatrix},\quad f(t,x)=\begin{pmatrix} 0 \\ x_1^2+x_2^2+x_2-1 \end{pmatrix}, \end{aligned}$$
(6.18)

\(f\in C^\infty ({\mathcal {T}}\times D,{\mathbb R}^2)\), \(D={\mathbb R}^2\), and A, B are such that \(\lambda A+B\) is a regular pencil of index 1. Consider the initial condition \(x_1(t_0)=x_{1,0}\), \(x_2(t_0)=x_{2,0}\), where \(t_0\in {\mathcal {T}}\), which for the DAE takes the form (2.2), where \(x_0=(x_{1,0},x_{2,0})^{\mathop {\textrm{T}}}\). Obviously, \(x_{1,0}\), \(x_{2,0}\) must satisfy (6.15).
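The regularity of this pencil is easy to confirm symbolically; the following sketch (assuming SymPy is available) computes \(\det (\lambda A+B)=\lambda +b\), which is not identically zero, and its degree equals \(\mathop {\textrm{rank}} A=1\), which is consistent with the pencil having index 1.

```python
# A quick symbolic check (assuming SymPy) that the pencil lambda*A + B from (6.18)
# is regular: det(lambda*A + B) = lambda + b is not identically zero; its degree
# equals rank A = 1, consistent with the pencil being of index 1.
import sympy as sp

lam, b = sp.symbols('lambda b')
A = sp.Matrix([[1, 0], [0, 0]])
B = sp.Matrix([[b, 0], [0, 1]])

det = sp.expand((lam*A + B).det())
print(det)                              # lambda + b  => the pencil is regular
print(sp.degree(det, lam), A.rank())    # 1 1
```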

As for the examples discussed above, we first construct the projectors (A.30) (where \(n=m=2\)) and the subspaces from the direct decompositions (A.26) (where \(X_r=Y_r={\mathbb R}^2\)). With respect to the standard bases in \({\mathbb R}^2\), the projection matrices of the form (6.9) correspond to the projectors \(P_i:{\mathbb R}^2\rightarrow X_i\), \(Q_i:{\mathbb R}^2\rightarrow Y_i\) (\(i=1,2\)) from (A.30). Therefore, the matrices \(\mathcal {A}_1=\mathcal {A}_1^{(-1)}=\left( {\begin{smallmatrix} 1 &{} 0 \\ 0 &{} 0 \end{smallmatrix}}\right)\), \(\mathcal {B}_1=\left( {\begin{smallmatrix} b &{} 0 \\ 0 &{} 0 \end{smallmatrix}}\right)\) and \(\mathcal {B}_2=\left( {\begin{smallmatrix} 0 &{} 0 \\ 0 &{} 1 \end{smallmatrix}}\right)\) correspond to the operators, defined by (A.33) and (A.35), with respect to the standard bases in \({\mathbb R}^2\). The subspaces from (A.26) have the form \(X_1=Y_1=\mathop {\textrm{Lin}}\{(1,0)^{\mathop {\textrm{T}}}\}\), \(X_2=Y_2=\mathop {\textrm{Lin}}\{(0,1)^{\mathop {\textrm{T}}}\}\). Denote by \(\{e_1=(1,0)^{\mathop {\textrm{T}}},\, e_2=(0,1)^{\mathop {\textrm{T}}}\}\) the basis of \({\mathbb R}^2 = X_1\dot{+}X_2\) and by \(\{q_1=(1,0)^{\mathop {\textrm{T}}},\, q_2=(0,1)^{\mathop {\textrm{T}}}\}\) the basis of \({\mathbb R}^2 =Y_1\dot{+}Y_2\). With respect to the direct decomposition \({\mathbb R}^2 = X_1\dot{+}X_2\), a vector \(x\in {\mathbb R}^2\) can be uniquely represented in the form (A.41) where \(x_{p_1}\), \(x_{p_2}\) have the form (6.10), i.e.,

$$x_{p_1}=P_1 x=x_1(1,0)^{\mathop {\textrm{T}}},\quad x_{p_2}=P_2 x=x_2(0,1)^{\mathop {\textrm{T}}}.$$

The subspaces \(D_i\), \(i=1,2\), from the decomposition (2.19), where \(D={\mathbb R}^2\), have the form (6.11).

Introduce the sets

$$\begin{aligned} M_1=\{x_{p_1}=(x_1,0)^{\mathop {\textrm{T}}}\in X_1\mid x_1\in (-1,1)\},\;\; M_2=\{x_{p_2}=(0,x_2)^{\mathop {\textrm{T}}}\in X_2\mid x_2\in (0,1]\}. \end{aligned}$$
(6.19)

Then \(x_{p_1}\in M_1\) iff \(x_1\in (-1,1)\) and \(x_{p_2}\in M_2\) iff \(x_2\in (0,1]\). In addition, introduce the set

$$\begin{aligned} \widehat{M_2}=\{x_{p_2}=(0,x_2)^{\mathop {\textrm{T}}}\in X_2\mid x_2\in [-1,0)\}. \end{aligned}$$
(6.20)

The equation \(Q_2[f(t,x)-Bx]=0\) defining \(L_0\) is equivalent to Eq. (6.15). Hence, the consistent initial values \(t_0\), \(x_0=(x_{1,0},x_{2,0})^{\mathop {\textrm{T}}}\) (i.e., \((t_0,x_0)\in L_0\)), for which \(P_1x_0\in M_1\) and \(P_2x_0\in M_2\) (respectively, \(P_2x_0\in \widehat{M_2}\)), are such that \(x_{1,0}\in (-1,1)\), \(x_{2,0}\in (0,1]\) (respectively, \(x_{2,0}\in [-1,0)\) ) and \(x_{1,0}^2+x_{2,0}^2 =1\).

Further, we will verify whether the conditions for the global solvability of IVP (2.1), (6.18), (2.2), which are given in the “Global solvability of regular semilinear DAEs” section, are satisfied.

Condition 1 of Theorem 3.1 holds since for any fixed \({t\in {\mathcal {T}}}\), \({x_{p_1}\in M_1}\) (i.e., \(x_1\in (-1,1)\)) there exists a unique \({x_{p_2}\in M_2}\) (i.e., \(x_2\in (0,1]\)) such that \({(t,x_{p_1}+x_{p_2})\in L_0}\) (i.e., (6.15) is satisfied). Note that the same condition will hold if one replaces the set \(M_2\) by \(\widehat{M_2}\).

Condition 2 of Theorem 3.2 is fulfilled since for any fixed \(t_*\in {\mathcal {T}}\) and any fixed \(x_*=(x_1^*,x_2^*)^{\mathop {\textrm{T}}}\) such that \(x_1^*\in (-1,1)\) and \(x_2^*\in (0,1]\) there exists \(\Phi _{t_*,x_*}^{-1}=(2x_2^*)^{-1}\), where \(\Phi _{t_*,x_*}=2x_2^*\) is the matrix corresponding to operator (3.6) with respect to the bases \(e_2\) and \(q_2\) in \(X_2\) and \(Y_2\), respectively. Note that the same condition holds if \(x_2^*\) belongs to \([-1,0)\) instead of \((0,1]\).

Let us verify whether condition 3 of Corollary 3.1 is fulfilled (then condition 3 of Theorem 3.1 holds). The function defined in (2.29) has the form \({\Pi (t,x_{p_1},x_{p_2})\equiv \Pi (x_{p_1})=(-b\,x_1,0)^{\mathop {\textrm{T}}}=-b\, x_{p_1}}\) and (2.22) takes the form \({\dot{x}_{p_1}=\Pi (x_{p_1})}\). Since \(\Pi\) does not depend on \(x_{p_2}\), we have \({\widetilde{D_1}= D_{\Pi ,1}=X_1}\), where \(\widetilde{D_1}\) and \(D_{\Pi ,1}\) are the sets specified in condition 3 of Corollary 3.1. Define the function \({W(t,x_{p_1})\equiv W(x_{p_1}):=x_{p_1}^{\mathop {\textrm{T}}}x_{p_1}=x_1^2}\) and a family of closed sets \({K_r=\{x_{p_1}=(x_1,0)^{\mathop {\textrm{T}}}\in M_1\mid x_1\in [-1+r,1-r]\,\}}\), \({0<r<<1}\). Then \({W(x_{p_1}^1)<W(x_{p_1}^2)}\) for every \({x_{p_1}^1\in K_r}\) and \({x_{p_1}^2\in M_1^c\cap \widetilde{D_1}=M_1^c=\{x_{p_1}=(x_1,0)^{\mathop {\textrm{T}}}\in X_1\mid x_1\in (-\infty ,-1]\cup [1,+\infty )\,\}}\). In addition, if \({b\ge 0}\), then \({\dot{W}_{(2.22)}(x_{p_1})=-2 b x_1^2\le 0}\) for every \(x_{p_1}\in K_r^c\cap \widetilde{D_1}=K_r^c=\{x_{p_1}=(x_1,0)^{\mathop {\textrm{T}}}\in X_1\mid x_1\in (-\infty ,-1+r)\cup (1-r,+\infty )\,\}\). Hence, condition 3 of Corollary 3.1 holds (both for \({x_{p_2}\in M_2}\) and for \({x_{p_2}\in \widehat{M_2}}\)) if \({b\ge 0}\).

Note that since the set \(M_1\) is bounded, condition 4 of Theorem 3.1 does not need to be checked.

Thus, if \(b\ge 0\), then by Theorem 3.1 (or Theorem 5.1), where conditions 2 and 3 are replaced by condition 2 of Theorem 3.2 and condition 3 of Corollary 3.1, respectively, there exists a unique global solution of IVP (2.1), (6.18), (2.2) for each initial point \((t_0,x_0)\), where \(x_0=(x_{1,0},x_{2,0})^{\mathop {\textrm{T}}}\), for which \(t_0\in [0,\infty )\), \(x_{1,0}\in (-1,1)\) and \(x_{2,0}\in (0,1]\) (i.e., \(P_1x_0\in M_1\) and \(P_2x_0\in M_2\), where \(M_1\), \(M_2\) are defined in (6.19)), and, in addition, \(x_{1,0}^2+x_{2,0}^2 =1\) (i.e., \((t_0,x_0)\in L_0\)). The same statement is valid for \(x_{2,0}\) belonging to \([-1,0)\) (i.e., for \(P_2x_0\in \widehat{M_2}\), where \(\widehat{M_2}\) is defined in (6.20)) instead of \((0,1]\).

In addition, since the sets \(M_1\), \(M_2\) are bounded, by Corollary 3.3 the DAE (2.1), (6.18) is Lagrange stable for the initial points specified above (see the definition of the Lagrange stability of the DAE in the “Problem statement. Preliminaries” section).

Indeed, a solution of IVP (2.1), (6.18), (2.2) has the form \(x(t)=(x_1(t),x_2(t))^{\mathop {\textrm{T}}}\), where \(x_1(t)=x_{1,0}\, \textrm{e}^{-b(t-t_0)}\), \(x_2(t)=\sqrt{1-x_{1,0}^2\, \textrm{e}^{-2b(t-t_0)}}\) for the initial value \(x_0=(x_{1,0},x_{2,0})^{\mathop {\textrm{T}}}\) such that \(x_{1,0}\in (-1,1)\) and \(x_{2,0}\in (0,1]\), and \(x_2(t)=-\sqrt{1-x_{1,0}^2\, \textrm{e}^{-2b(t-t_0)}}\) for the initial value \(x_0\) such that \(x_{1,0}\in (-1,1)\) and \(x_{2,0}\in [-1,0)\). Obviously, the solution exists and is global if \(b\ge 0\) and, in addition, it is bounded for all \(t\ge t_0\), namely, \(x_1(t)\in (-1,1)\) and \(x_2(t)\in (0,1]\) or \(x_2(t)\in [-1,0)\) (depending on the initial values) for all \(t\ge t_0\). This coincides with the conclusions obtained using the theorems.
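This solution, too, can be checked symbolically; a short sketch (assuming SymPy is available) for the branch \(x_{2,0}\in (0,1]\) follows.

```python
# A short symbolic check (assuming SymPy) that the stated solution satisfies
# (6.14) and (6.15); the branch x_{2,0} in (0,1] is taken, i.e., x2 = +sqrt(1 - x1**2).
import sympy as sp

t, t0, b, x10 = sp.symbols('t t0 b x10', real=True)

x1 = x10*sp.exp(-b*(t - t0))
x2 = sp.sqrt(1 - x10**2*sp.exp(-2*b*(t - t0)))

print(sp.simplify(sp.diff(x1, t) + b*x1))   # expected output: 0   (6.14)
print(sp.simplify(x1**2 + x2**2 - 1))       # expected output: 0   (6.15)
```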

Example 4 (a singular (nonregular) DAE)

Consider the system of differential and algebraic equations

$$\begin{aligned} \frac{d}{dt}(x_1-x_3)+x_1-x_2-x_3&= f_1(t,x), \end{aligned}$$
(6.21)
$$\begin{aligned} x_1+x_2-x_3&= f_2(t,x), \end{aligned}$$
(6.22)
$$\begin{aligned} 2x_2&=f_3(t,x), \end{aligned}$$
(6.23)

where the functions \(f_i\in C({\mathcal {T}}\times D,{\mathbb R})\), \(i=1,2,3\), have the continuous partial derivatives \(\partial _x f_i\) on \({\mathcal {T}}\times D\), \({\mathcal {T}}= [0,\infty )\), \(D={\mathbb R}^3\), and \(x_i=x_i(t)\), \(i=1,2,3\), are real functions. The system (6.21)–(6.23) can be represented in the form of the DAE \(\frac{d}{dt}[Ax]+Bx=f(t,x)\), i.e., Eq. (2.1), where

$$\begin{aligned} x=\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix},\; A=\begin{pmatrix} 1 &{} 0 &{} -1 \\ 0 &{} 0 &{} 0 \\ 0 &{} 0 &{} 0 \end{pmatrix},\;\; B=\begin{pmatrix} 1 &{} -1 &{} -1 \\ 1 &{} 1 &{} -1 \\ 0 &{} 2 &{} 0 \end{pmatrix},\;\; f(t,x)=\begin{pmatrix} f_1(t,x) \\ f_2(t,x) \\ f_3(t,x) \end{pmatrix}. \end{aligned}$$
(6.24)

It can be readily verified that the characteristic pencil \(\lambda A+B\) is singular and \({\mathop {\textrm{rank}}(\lambda A+B)=2}\). The total defect of the pencil equals \(d(\lambda A+B)=2\) and the defects of \(\lambda A+B\) and \(\lambda A^{\mathop {\textrm{T}}}+B^{\mathop {\textrm{T}}}\) equal 1 (see definition A.1 in Appendix 1). The initial condition is given as \(x(t_0)=x_0\), i.e., (2.2). Obviously, the initial values \(t_0\), \(x_0=(x_{1,0},x_{2,0},x_{3,0})^{\mathop {\textrm{T}}}\) must satisfy (6.22) and (6.23).
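The singularity of the pencil can be confirmed symbolically; the following sketch (assuming SymPy is available) shows that \(\det (\lambda A+B)\) vanishes identically while the generic rank of the pencil equals 2.

```python
# A small symbolic check (assuming SymPy) that the pencil lambda*A + B from (6.24)
# is singular: its determinant is identically zero, while its generic rank is 2.
import sympy as sp

lam = sp.Symbol('lambda')
A = sp.Matrix([[1, 0, -1], [0, 0, 0], [0, 0, 0]])
B = sp.Matrix([[1, -1, -1], [1, 1, -1], [0, 2, 0]])

P = lam*A + B
print(sp.simplify(P.det()))      # expected output: 0  => the pencil is singular
print(P.rank(simplify=True))     # expected output: 2
```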

The global solvability of the DAE (2.1), (6.24) (accordingly, of the system (6.21)–(6.23)) has been studied in [4, Section 10] by using the theorems [4, Theorems 3.2 and 7.1]. The same conditions as those obtained by applying [4, Theorem 3.2] will be obtained if we apply Theorem 3.4, where condition 2 is replaced by condition 2 of Theorem 3.5 and \(D={\mathbb R}^3\), and take \(M_{s1}=X_{s_1}\dot{+}X_1\) and \(M_2=X_2\).

In this section, we will consider a certain particular case of the system (6.21)–(6.23) in which solutions blow up in finite time.

The projection matrices corresponding to the projectors (see (A.7), (A.8) and (A.30) in Appendix 1) onto the “singular” subspaces, i.e., \(S_i:{\mathbb R}^3\rightarrow X_{s_i}\), \(F_i:{\mathbb R}^3\rightarrow Y_{s_i}\), \(i=1,2\), \(S=S_1+S_2:{\mathbb R}^3\rightarrow X_s\) and \(F=F_1+F_2:{\mathbb R}^3\rightarrow Y_s\), and onto the “regular” subspaces, i.e., \(P_i:{\mathbb R}^3\rightarrow X_i\), \(Q_i:{\mathbb R}^3\rightarrow Y_i\), \(i=1,2\), \(P=P_1+P_2:{\mathbb R}^3\rightarrow X_r\) and \(Q=Q_1+Q_2:{\mathbb R}^3\rightarrow Y_r\), are presented in [4, Section 10]. Also, in [4, Section 10], the matrices corresponding to the operators \(\mathcal {A}_r\), \(\mathcal {B}_r\), \(\mathcal {A}_{gen}\), \(\mathcal {B}_{gen}\), \(\mathcal {B}_{und}\), \(\mathcal {B}_{ov}\), \(\mathcal {A}_{gen}^{(-1)}\) defined by (A.11), (A.13) and (A.19) are given. The subspaces from the decompositions (A.37) and (A.39), where \(n=3\) and \(m=3\), have the following form (see [4, Section 10]):

$$\begin{aligned}\begin{gathered} X_s = X_{s_1}\dot{+}X_{s_2}=\mathop {\textrm{Lin}}\{s_i\}_{i=1}^2,\; {X_{s_1}=\mathop {\textrm{Lin}}\{s_1\}},\; X_{s_2}=\mathop {\textrm{Lin}}\{s_2\},\; X_r=X_2= \mathop {\textrm{Lin}}\{p\},\; X_1=\{0\}, \\ Y_s = Y_{s_1}\dot{+} Y_{s_2}= \mathop {\textrm{Lin}}\{l_i\}_{i=1}^2,\; {Y_{s_1} = \mathop {\textrm{Lin}}\{l_1\}},\; Y_{s_2} = \mathop {\textrm{Lin}}\{l_2\},\; Y_r=Y_2= \mathop {\textrm{Lin}}\{q\},\; Y_1=\{0\}, \end{gathered}\end{aligned}$$
$$\begin{aligned} s_1 =\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix},\, s_2 =\begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix},\, p =\begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix},\; l_1 =\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix},\, l_2 =\begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix},\; q =\begin{pmatrix} -1/2 \\ 1/2 \\ 1 \end{pmatrix}. \end{aligned}$$
(6.25)

Since \(D={\mathbb R}^3\), we have \(D_{s_i}= X_{s_i}\) and \(D_i= X_i\), \(i=1,2\), in the decomposition (2.3).

The components of \(x\in {\mathbb R}^3\) represented as \(x=x_{s_1}+x_{s_2}+x_{p_1}+x_{p_2}\) (see (A.38)) have the form

$$\begin{aligned} x_{s_1} = S_1 x=(x_1-x_3)(1,0,0)^{\mathop {\textrm{T}}},\; x_{s_2}=S_2x=x_3(1,0,1)^{\mathop {\textrm{T}}},\; x_{p_1}=P_1 x=0,\; x_{p_2}=P_2 x=x_2(0,1,0)^{\mathop {\textrm{T}}}. \end{aligned}$$

(Notice that \(x_1-x_3\), \(x_3\), \(x_2\) are the coordinates of the vector \(x=(x_1,x_2,x_3)^{\mathop {\textrm{T}}}\) with respect to the basis \(s_1\), \(s_2\), p in \({\mathbb R}^3\), i.e., \(x=(x_1-x_3)s_1+x_3 s_2+ x_2 p\), where \(s_1\), \(s_2\), p are defined in (6.25).) If we make the change of variables

$$\begin{aligned} x_1-x_3=w,\quad x_3=\xi ,\quad x_2=u, \end{aligned}$$
(6.26)

then

$$x_{s_1}=w\, (1,0,0)^{\mathop {\textrm{T}}},\quad x_{s_2}=\xi \, (1,0,1)^{\mathop {\textrm{T}}},\quad x_{p_1}=0,\quad x_{p_2}=u\, (0,1,0)^{\mathop {\textrm{T}}},$$

and the system (6.21)–(6.23) can be represented in the form (the same form as in [4, (10.13)–(10.15)])

$$\begin{aligned} \dot{w}&=-w+ \widetilde{f}_1(t,w,\xi ,u)+ 0.5\widetilde{f}_3(t,w,\xi ,u), \end{aligned}$$
(6.27)
$$\begin{aligned} u&=0.5\widetilde{f}_3(t,w,\xi ,u), \end{aligned}$$
(6.28)
$$\begin{aligned} w&= \widetilde{f}_2(t,w,\xi ,u)-0.5\widetilde{f}_3(t,w,\xi ,u), \end{aligned}$$
(6.29)

where

$$\begin{aligned} \widetilde{f}(t,w,\xi ,u):=f(t,w+\xi ,u,\xi )=f(t,x_1,x_2,x_3)=f(t,x). \end{aligned}$$
(6.30)

Note that the system (6.27)–(6.29) is equivalent to the system (2.9)–(2.12) where (2.10) disappears since \(Q_1=0\) and \(P_1=0\). The equations \(Q_2[f(t,x)-Bx]=0\), \(F_2[Bx-f(t,x)]=0\) (as well as Eqs. (2.11) and (2.12)) defining the manifold \(L_0\) (i.e., (2.30) where \(t_*=0\)) are equivalent to (6.22), (6.23) (or (6.29), (6.28)).
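The passage from (6.21)–(6.23) to (6.27)–(6.29) can be verified symbolically: in the variables (6.26), the transformed equations are linear combinations of the original ones. Below is a small sketch (assuming SymPy is available), in which the symbols wdot and ft1, ft2, ft3 (our notation) stand for \(\dot{w}\) and for the values of \(\widetilde{f}_1\), \(\widetilde{f}_2\), \(\widetilde{f}_3\) at a point.

```python
# A small symbolic sketch (assuming SymPy) of the passage from (6.21)-(6.23) to
# (6.27)-(6.29): in the variables (6.26) the transformed equations are linear
# combinations of the original ones.  wdot and ft1, ft2, ft3 (our names) stand
# for dw/dt and for the values of f~_1, f~_2, f~_3 at a point.
import sympy as sp

w, xi, u, wdot, ft1, ft2, ft3 = sp.symbols('w xi u wdot ft1 ft2 ft3')

# (6.21)-(6.23) written as "LHS minus RHS" with x1 = w + xi, x2 = u, x3 = xi (see (6.26)):
E1 = wdot + (w + xi) - u - xi - ft1    # (6.21)
E2 = (w + xi) + u - xi - ft2           # (6.22)
E3 = 2*u - ft3                         # (6.23)

# (6.27)-(6.29) in the same "LHS minus RHS" form:
T1 = wdot - (-w + ft1 + sp.Rational(1, 2)*ft3)
T2 = u - sp.Rational(1, 2)*ft3
T3 = w - (ft2 - sp.Rational(1, 2)*ft3)

print(sp.simplify(T1 - (E1 + E3/2)))   # expected output: 0
print(sp.simplify(T2 - E3/2))          # expected output: 0
print(sp.simplify(T3 - (E2 - E3/2)))   # expected output: 0
```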

Consider the particular case of the system (6.21)–(6.23) when

$$\begin{aligned} \begin{aligned}&f_1(t,x)=(x_1-x_3-1)^3+x_1-x_3-x_2,\\&f_2(t,x)=x_1-x_3+x_2,\\&f_3(t,x)=-(x_2^3+3x_2^2+x_2+1)-(t+1)x_3^2. \end{aligned} \end{aligned}$$
(6.31)

In the new variables (6.26), the functions \(\widetilde{f}_i(t,w,\xi ,u):=f_i(t,w+\xi ,u,\xi )\), \(i=1,2,3\), (see (6.30)) take the form

$$\begin{aligned} \widetilde{f}_1(t,w,\xi ,u)\!=\!(w-1)^3+w-u,\; \widetilde{f}_2(t,w,\xi ,u)\!=\!w+u,\; \widetilde{f}_3(t,w,\xi ,u)\!=\!-(u^3+3u^2+u+1)-(t+1)\xi ^2. \end{aligned}$$
(6.32)

Then Eqs. (6.27) and (6.28) take the form

$$\begin{aligned} \dot{w}&=(w-1)^3, \end{aligned}$$
(6.33)
$$\begin{aligned} u&=-0.5\big [(u^3+3u^2+u+1)+(t+1)\xi ^2\big ], \end{aligned}$$
(6.34)

and Eq. (6.29) becomes the identity (\(w=w\)) when we substitute (6.28) into it.

From (6.34), which is equivalent to \((u+1)^3=-(t+1)\xi ^2\), we obtain \(u=-(t+1)^{1/3}\xi ^{2/3}-1\). Therefore, for any fixed \(t\in {\mathcal {T}}\), \(w, \xi \in {\mathbb R}\) there exists a unique \(u\in {\mathbb R}\) such that (6.34) is fulfilled and, hence, (6.29) and (6.28) with \(\widetilde{f}_i\), \(i=1,2,3\), of the form (6.32) are satisfied. Consequently, condition 1 of Theorem 3.4 holds for any sets \(M_{s1}\subseteq X_{s_1}\), \(M_{s_2}\subseteq X_{s_2}\) and \(M_2\subseteq X_2\). Note that \(M_{s1}\subseteq X_{s_1}\) since \(X_1=\{0\}\).
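A quick symbolic confirmation (assuming SymPy is available) that this expression for u indeed solves (6.34); the case \(\xi >0\) is taken so that the cube roots are real.

```python
# A short symbolic check (assuming SymPy) that u = -(t+1)**(1/3)*xi**(2/3) - 1
# solves (6.34); xi > 0 is assumed so that the real cube roots simplify directly.
import sympy as sp

t = sp.Symbol('t', nonnegative=True)
xi = sp.Symbol('xi', positive=True)

u = -(t + 1)**sp.Rational(1, 3)*xi**sp.Rational(2, 3) - 1
residual = u + sp.Rational(1, 2)*((u**3 + 3*u**2 + u + 1) + (t + 1)*xi**2)
print(sp.simplify(sp.expand(residual)))   # expected output: 0
```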

For \(f_i\) (or \(\widetilde{f}_i\)), \(i=1,2,3\), of the form (6.31), the matrix corresponding to the operator (3.18), where \(t_*\) and \(x_*=(x_1^*,x_2^*,x_3^*)^{\mathop {\textrm{T}}}=(w_*+\xi _*,u_*,\xi _*)^{\mathop {\textrm{T}}}\) are fixed, has the form \(\Phi _{t_*,x_*}=\partial _{x_2}f_3(t_*,x_*)-2=-3(x_2^*+1)^2\) with respect to the basis p of \(X_2\) and the basis q of \(Y_2\). Hence, the operator \(\Phi _{t_*,x_*}\) has the inverse \(\Phi _{t_*,x_*}^{-1}\in \textrm{L}(Y_2,X_2)\) if \(x_2^*\ne -1\) (equivalently, \(u_*\ne -1\); note that \(\partial _{x_2}f_3(t,x)=\partial _u \widetilde{f}_3(t,w,\xi ,u)\)). Consequently, condition 2 of Theorem 3.5 holds if \(x_2^*=u_*\ne -1\).

Choose

$$\begin{aligned}&M_{s1}=\{x_{s_1}=(x_1-x_3)(1,0,0)^{\mathop {\textrm{T}}}\mid x_1-x_3\in (1,+\infty )\}, \nonumber \\&M_{s_2}=\{x_{s_2}=x_3(1,0,1)^{\mathop {\textrm{T}}}\mid x_3\in {\mathbb R}\setminus \{0\}\},\quad M_2=\{x_{p_2}=x_2(0,1,0)^{\mathop {\textrm{T}}}\mid x_2\in {\mathbb R}\setminus \{-1\}\}, \end{aligned}$$
(6.35)

where \(x_1-x_3=w\), \(x_3=\xi\) and \(x_2=u\) if the new variables (6.26) are used.

Let us prove that the component \(x_{s_1}(t)=S_1x(t)\) (recall that \(x_{p_1}=0\)) of each solution x(t) of the DAE (2.1), (6.24), (6.31) with the initial point \((t_0,x_0)\in L_0\), for which \(S_1x_0\in M_{s1}\), \(S_2x_0\in M_{s_2}\) and \(P_2x_0\in M_2\), can never leave \(M_{s1}\). Notice that any \(t_0\in [0,\infty )\), \(x_0=(x_{1,0},x_{2,0},x_{3,0})^{\mathop {\textrm{T}}}=(w_0+\xi _0,u_0,\xi _0)^{\mathop {\textrm{T}}}\) such that \(w_0\in (1,+\infty )\), \(u_0\in {\mathbb R}\setminus \{-1\}\), \(\xi _0\in {\mathbb R}\setminus \{0\}\) and \(u_0=-(t_0+1)^{1/3}\xi _0^{2/3}-1\) are consistent initial values for which \(S_1x_0\in M_{s1}\), \(S_2x_0\in M_{s_2}\) and \(P_2x_0\in M_2\). Since the DAE (2.1), (6.24), (6.31) is equivalent to the system (6.33), (6.34) and \(x_{s_1}=w(1,0,0)^{\mathop {\textrm{T}}}\in M_{s1}\) if and only if \(w\in (1,+\infty )\), it suffices to prove that a solution \(w=w(t)\) of (6.33) with the initial values \(t_0\in [0,\infty )\), \(w(t_0)=w_0\in (1,+\infty )\) can never leave the set \(\widetilde{M}_{s1}:=\{w\in (1,+\infty )\}\). Indeed, since \(\dot{w}=0\) along the boundary \(\partial \widetilde{M}_{s1}=\{w=1\}\) of \(\widetilde{M}_{s1}\) and \(\dot{w}>0\) inside \(\widetilde{M}_{s1}\), each solution of (6.33) which is in \(\widetilde{M}_{s1}\) at the initial moment \(t_0\) can never leave it afterwards. Hence, condition 3 of Theorem 3.4 holds.

Define the function \(V(t,x_{s_1},x_{p_1})\equiv V(x_{s_1}):=(w-1,0,0)(w-1,0,0)^{\mathop {\textrm{T}}}=(w-1)^2\). Obviously, the function V is positive for \(x_{s_1}=(w,0,0)^{\mathop {\textrm{T}}}\in M_{s1}\), i.e., \(w\in (1,+\infty )\). Note that Eq. (2.10) disappears and Eq. (2.9) is equivalent to (6.27). Namely, (2.9) takes the form \(\dot{x}_{s_1}=\big (\dot{w},\,0,\,0\big )^{\mathop {\textrm{T}}}=\big (-w+ \widetilde{f}_1(t,w,\xi ,u)+ 0.5\widetilde{f}_3(t,w,\xi ,u),\,0,\,0\big )^{\mathop {\textrm{T}}}\), where \(\widetilde{f}_i\), \(i=1,2,3\), have the form (6.32), and it is equivalent to Eq. (6.33) for \((t,w,\xi ,u)\) satisfying (6.28). Hence, \(\dot{V}_{(2.9),(2.10)}(x_{s_1})= \dot{V}_{(2.9)}(x_{s_1})=2(w-1)^4>(w-1)^4=V^2(x_{s_1})\) for each \(t\in {\mathcal {T}}\), \(x_{s_1}\in M_{s1}\), \(x_{s_2}\in M_{s_2}\), \(x_{p_2}\in M_2\) such that \((t,x_{s_1}+x_{s_2}+x_{p_2})\in L_0\). Therefore, the inequality (4.1) takes the form \(\dot{v}>v^2\), and it is easy to verify that this inequality does not have global positive solutions. Hence, condition 4 of Theorem 4.2 is fulfilled.
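For completeness, here is a brief sketch (assuming SymPy is available) of why \(\dot{v}>v^2\) rules out global positive solutions: the comparison equation \(\dot{v}=v^2\), \(v(t_0)=v_0>0\), has the solution given below, which blows up as \(t\rightarrow t_0+1/v_0\); by comparison, a positive solution of \(\dot{v}>v^2\) escapes no later than that.

```python
# A brief symbolic sketch (assuming SymPy) for the comparison equation v' = v**2,
# v(t0) = v0 > 0: its solution blows up at t = t0 + 1/v0, so a positive solution
# of the differential inequality v' > v**2 cannot be global.
import sympy as sp

t, t0, v0 = sp.symbols('t t0 v0', positive=True)
v = v0/(1 - v0*(t - t0))                   # solution of v' = v**2 with v(t0) = v0

print(sp.simplify(sp.diff(v, t) - v**2))   # expected output: 0
print(v.subs(t, t0))                       # expected output: v0
print(sp.limit(v, t, t0 + 1/v0, dir='-'))  # expected output: oo
```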

Thus, by Theorem 4.2 (or Theorem 5.2 and Corollary 5.2), for each initial point \((t_0,x_0)\in [0,\infty )\times {\mathbb R}^3\), where \(x_0=(x_{1,0},x_{2,0},x_{3,0})^{\mathop {\textrm{T}}}\), such that \(x_{1,0}-x_{3,0}>1\) (i.e., \(S_1x_0\in M_{s1}\)), \(x_{2,0}\ne -1\) (i.e., \(P_2x_0\in M_2\)), \(x_{3,0}\ne 0\) (i.e., \(S_2x_0\in M_{s_2}\)), and \(x_{2,0}=-(t_0+1)^{1/3}x_{3,0}^{2/3}-1\) (i.e., \((t_0,x_0)\in L_0\)), IVP (2.1), (2.2), where A, B and f(t, x) are of the form (6.24), (6.31), has a unique solution x(t) with the component \(S_2x(t)=\varphi _{s_2}(t)=x_3(t)\, (1,0,1)^{\mathop {\textrm{T}}}\), where \(\varphi _{s_2}\in C([t_0,\infty ),M_{s_2})\) (accordingly, \(x_3\in C([t_0,\infty ),{\mathbb R}\setminus \{0\})\)) is some function with the initial value \(\varphi _{s_2}(t_0)=x_{3,0}\), and this solution has a finite escape time (blows up in finite time).

Indeed, for any initial values \(t_0\in [0,\infty )\), \(x_0=(x_{1,0},x_{2,0},x_{3,0})^{\mathop {\textrm{T}}}\in {\mathbb R}^3\) such that \(x_{1,0}-x_{3,0}>1\), \({x_{2,0}\ne -1}\), \({x_{3,0}\ne 0}\) and \(x_{2,0}=-(t_0+1)^{1/3}x_{3,0}^{2/3}-1\), IVP (2.1), (6.24), (6.31), (2.2) has the solution \(x(t)=(x_1(t),x_2(t),x_3(t))^{\mathop {\textrm{T}}}\), where \(x_1(t)=\dfrac{1}{\sqrt{-2(t-t_0)+(x_{1,0}-x_{3,0}-1)^{-2}}}+1+x_3(t)\), \(x_2(t)=-(t+1)^{\frac{1}{3}}x_3^{\frac{2}{3}}(t)-1\) and \(x_3\in C([t_0,\infty ),{\mathbb R}\setminus \{0\})\) is some function such that \(x_3(t_0)=x_{3,0}\). For example, take \(t_0=0\), \(x_{3,0}=1\) and \(x_3(t)=t+1\); then the initial values \(t_0=0\), \(x_0=(x_{1,0},-2,1)^{\mathop {\textrm{T}}}\), where \(x_{1,0}>2\), satisfy all the conditions specified above. Further, take \(x_{1,0}=3\); then the components of the solution x(t) have the form \({x_1(t)=\dfrac{1}{\sqrt{-2t+1}}+t+2}\), \({x_2(t)=-(t+2)}\), \({x_3(t)=t+1}\). Obviously, this solution exists on \([0,\frac{1}{2})\) and \(\Vert x(t)\Vert \rightarrow +\infty\) as \(t\rightarrow \frac{1}{2}\); hence, it has the finite escape time 0.5. In general, the solution of IVP (2.1), (6.24), (6.31), (2.2) has a finite escape time, namely, it exists on \([t_0,T)\), where \(T=t_0+0.5(x_{1,0}-x_{3,0}-1)^{-2}\), and \(\lim \limits _{t\rightarrow T-0} \Vert x(t)\Vert =+\infty\).
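The particular solution with \(t_0=0\), \(x_{1,0}=3\), \(x_{3,0}=1\) can be verified directly; below is a minimal sketch (assuming SymPy is available) that checks it against (6.21)–(6.23) with the right-hand sides (6.31) and confirms the escape time \(\frac{1}{2}\).

```python
# A minimal symbolic check (assuming SymPy) of the particular solution with
# t0 = 0, x_{1,0} = 3, x_{3,0} = 1: it satisfies (6.21)-(6.23) with the
# right-hand sides (6.31) on [0, 1/2) and blows up as t -> 1/2.
import sympy as sp

t = sp.Symbol('t')
x1 = 1/sp.sqrt(1 - 2*t) + t + 2
x2 = -(t + 2)
x3 = t + 1

f1 = (x1 - x3 - 1)**3 + x1 - x3 - x2
f2 = x1 - x3 + x2
f3 = -(x2**3 + 3*x2**2 + x2 + 1) - (t + 1)*x3**2

print(sp.simplify(sp.diff(x1 - x3, t) + x1 - x2 - x3 - f1))   # expected output: 0  (6.21)
print(sp.simplify(x1 + x2 - x3 - f2))                         # expected output: 0  (6.22)
print(sp.simplify(2*x2 - f3))                                 # expected output: 0  (6.23)
print(sp.limit(x1, t, sp.Rational(1, 2), dir='-'))            # expected output: oo (escape time 1/2)
```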