
Well-posedness of a class of two-point boundary value problems associated with ordinary differential equations


Abstract

This paper introduces the regular decoupling field to study the existence and uniqueness of solutions of two-point boundary value problems for a class of ordinary differential equations which can be derived from the maximum principle in optimal control theory. The monotonicity conditions commonly used to guarantee the existence and uniqueness of solutions for such equations turn out to be a special case of the regular decoupling field method. More generally, in the case of homogeneous equations, this paper extends the scope of the monotonicity conditions by means of a linear transformation method. In addition, the linear transformation method can handle situations where neither the monotonicity conditions nor the regular decoupling field method can be applied directly. Together, these two methods develop the well-posedness theory of two-point boundary value problems, with potential applications in optimal control and partial differential equation theory.

Keywords

Ordinary differential equations · Two-point boundary value problems · Regular decoupling field · Monotonicity conditions · Linear transformation · Optimal control theory

1 Introduction

We consider the following binary first-order linear ordinary differential equations (ODEs):
$$ \textstyle\begin{cases} X_{t}'=a_{t}X_{t}+b_{t}Y_{t}+f_{t}, \\ -Y_{t}'=c_{t}X_{t}+d_{t}Y_{t}+g_{t}, \\ X_{0}=x_{0}, \qquad Y_{T}=HX_{T}, \end{cases}\displaystyle \quad0\leq t \leq T, $$
(1.1)
where \(a_{t}\), \(b_{t} \), \(c_{t}\), \(d_{t}\), \(f_{t}\), \(g_{t}\) are functions defined on \([0,T]\). Besides, \(x_{0}\in R\), \(H\in R\), \(T>0\) is the time duration.

In this paper, only the one-dimensional case is considered for simplicity; the multi-dimensional cases can be dealt with similarly. The purpose is to find a pair \((X_{t},Y_{t}) \in C[0,T]\), for arbitrary \(T>0\), satisfying ODEs (1.1); this is called the well-posedness study. These are two-point boundary value problems for a class of ODEs. The Hamilton system derived from the Pontryagin maximum principle, a milestone of optimal control theory, belongs to this class. This kind of ODEs can also be related to one kind of partial differential equations (PDEs) (see, for example, [1]). Solving such equations is of great significance in the field of optimal control. Only the linear case is discussed in this paper. In fact, the optimal state system arising from the classical linear quadratic (LQ) optimal control problem, combined with the adjoint equations, belongs to this kind of ODEs. Therefore, the well-posedness of two-point boundary value problems for such ODEs on an arbitrary time duration has a very meaningful application background and practical significance. As can be observed in ODEs (1.1), the equations for \(X_{t}\) and \(Y_{t}\) both involve \((X_{t},Y_{t})\), which makes the two equations fully coupled. It is impossible to solve each equation individually, so many methods adapted to ODEs with one unknown variable are no longer feasible (see, for example, [2, 3]).

Such equations become a stochastic Hamilton system when random noise is taken into consideration, in which case they are called forward–backward stochastic differential equations (FBSDEs). The well-posedness of FBSDEs is also hard to obtain, and it has wide practical applications in stochastic optimal control as well as financial mathematics. For the well-posedness of FBSDEs on an arbitrary time duration, Hu and Peng [4] and Peng and Wu [5] introduce the method of continuation by proposing the monotonicity conditions (see [6]). The existence and uniqueness of solutions are obtained by this method. By using the method of continuation, Wu [7] weakens the monotonicity conditions and obtains the existence and uniqueness of solutions to the two-point boundary value problems for ODEs (1.1), together with the corresponding comparison theorem. However, the monotonicity conditions place a strict restriction on the coefficients, and the method of continuation can only be used in certain situations of ODEs (1.1). For FBSDEs, Ma et al. [8] introduce a unified approach which leads to well-posedness by means of the regular decoupling field. The unified approach generalizes the solvability theory of FBSDEs and makes the monotonicity conditions a special case.

In this paper, the regular decoupling field method and the linear transformation method are introduced to study the existence and uniqueness of solutions of two-point boundary value problems for ODEs (1.1). The linear transformation method generalizes the application scope of the monotonicity conditions. Together, these two methods develop the solvability theory of ODEs (1.1) and thereby contribute to optimal control theory.

After giving the preliminaries and assumptions in Section 2, the rest of the paper is organized as follows. In Section 3, for the two-point boundary value problems of ODEs (1.1), the regular decoupling field is introduced to guarantee the well-posedness of such ODEs. Moreover, in Section 4, for some cases where the regular decoupling field cannot be applied directly, the linear transformation method is introduced to weaken the coefficient restrictions of the monotonicity conditions. These two methods develop the well-posedness theory of two-point boundary value problems for ODEs and can be applied to research in control theory and PDEs. Finally, we summarize the main results of this paper and give future research directions in Section 5.

2 Preliminaries and assumptions

We first assume the following.

Assumption 2.1

  1. (i)

    Homogeneous coefficients \(a_{t}\), \(b_{t} \), \(c_{t}\), \(d_{t}\) are uniformly bounded on \([0,T]\).

     
  2. (ii)
    Inhomogeneous coefficients \(f_{t}\) and \(g_{t}\) satisfy:
    $$\int_{0}^{T} {f_{t}}^{2} \,dt < \infty,\qquad \int_{0}^{T} {g_{t}}^{2} \,dt < \infty. $$
     

From Zhang’s lemma [9], the well-posedness of ODEs (1.1) on a small duration follows easily.

Theorem 2.2

There exists \(\delta> 0\) such that, whenever \(T \leq\delta\), if the terminal function is uniformly Lipschitz continuous in its spatial variable, ODEs (1.1) have a unique solution on \([0,T]\).

As can be seen, the terminal function of ODEs (1.1) is a linear function of \(X_{T}\) with slope \(H\in R\), hence uniformly Lipschitz continuous, so the well-posedness for \(T\leq\delta\) is obvious according to Theorem 2.2. We introduce the decoupling field to study the well-posedness of ODEs (1.1) for an arbitrary duration T.

Definition 2.3

A binary function \(u(t,x)\): \([0,T] \times R \mapsto R\) with \(u(T,x)=g(x)\) is said to be a “decoupling field” of ODEs (1.1) if there exists a constant \(\delta>0\) such that, for any \(0\leq t_{1}< t_{2}\leq T\) with \(t_{2}-t_{1}\leq\delta\) and any \(\tilde{x} \in R\), the following equations have a unique solution \((X_{t},Y_{t})\) satisfying \(u(t,X_{t})=Y_{t}\).
$$ \textstyle\begin{cases} X_{t}=\tilde{x}+\int_{t_{1}}^{t} (a_{s}X_{s}+b_{s}Y_{s}+f_{s})\,ds, \\ Y_{t}=u(t_{2},X_{t_{2}})+\int_{t}^{t_{2}} (c_{s}X_{s}+d_{s}Y_{s}+g_{s})\,ds. \end{cases}\displaystyle \quad t_{1} \leq t \leq t_{2}, $$
(2.1)
If the decoupling field \(u(t,x)\) of Definition 2.3 satisfies
$$ \frac{ \vert u(t,x_{1})-u(t,x_{2}) \vert }{ \vert x_{1}-x_{2} \vert }\leq C, \quad t\in[0,T], x_{1}, x_{2} \in R, x_{1}\neq x_{2}, $$
(2.2)
where C is a constant independent of \(x_{1}\), \(x_{2}\), then \(u(t,x)\) is called a regular decoupling field. According to Theorem 2.2, ODEs (1.1) have a unique solution \((X_{t},Y_{t})\) on any such \([t_{1},t_{2}]\). Because of the uniqueness, we always have \(u(t,X_{t})=Y_{t}\) on \([t_{1},t_{2}]\), which guarantees the existence of the decoupling field.
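The local solvability behind Definition 2.3 can be illustrated numerically. The sketch below is not part of the paper (the function name and discretization are our own choices): it solves (2.1) by Picard iteration, assuming constant coefficients, no inhomogeneous terms, and the affine terminal condition \(u(t_{2},x)=Hx\); on a short interval the iteration map is a contraction, in line with Theorem 2.2.

```python
def solve_local(a, b, c, d, H, x0, t1, t2, n=200, iters=80):
    """Picard iteration for the local problem (2.1), assuming constant
    coefficients, no inhomogeneous terms, and the affine terminal
    condition u(t2, x) = H * x.  Trapezoid rule on a uniform grid."""
    dt = (t2 - t1) / n
    X = [x0] * (n + 1)          # initial guess: constant functions
    Y = [0.0] * (n + 1)
    for _ in range(iters):
        # forward integral: X(t) = x0 + int_{t1}^{t} (a X + b Y) ds
        Xn = [x0]
        for i in range(n):
            incr = 0.5 * dt * ((a * X[i] + b * Y[i])
                               + (a * X[i + 1] + b * Y[i + 1]))
            Xn.append(Xn[-1] + incr)
        # backward integral: Y(t) = H X(t2) + int_{t}^{t2} (c X + d Y) ds
        Yn = [H * Xn[-1]]
        for i in range(n - 1, -1, -1):
            incr = 0.5 * dt * ((c * X[i] + d * Y[i])
                               + (c * X[i + 1] + d * Y[i + 1]))
            Yn.append(Yn[-1] + incr)
        Yn.reverse()
        err = max(max(abs(p - q) for p, q in zip(Xn, X)),
                  max(abs(p - q) for p, q in zip(Yn, Y)))
        X, Y = Xn, Yn
        if err < 1e-13:
            break
    return X, Y
```

With, for instance, \(a=2\), \(b=2\), \(c=-3\), \(d=1\), \(H=1/2\) on an interval of length 0.1, the iteration converges in a handful of steps and the fixed point satisfies the terminal relation \(Y_{t_{2}}=HX_{t_{2}}\) by construction.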

From the result of Ma et al. [8], we have the following.

Theorem 2.4

Assume that Assumption 2.1 holds. If ODEs (1.1) have a regular decoupling field on an arbitrary duration \([0,T]\), then ODEs (1.1) have a unique solution \((X_{t},Y_{t}) \in C[0,T]\) on \([0,T]\).

3 Regularity of decoupling field

We first introduce some notation. Given initial values \(x_{0}^{i}\) (\(i=1,2\)), let \((X^{i}_{t},Y^{i}_{t})\) (\(i=1,2\)) denote the corresponding solutions of ODEs (1.1). Then we define
$$ \hat{X}_{t}=X^{1}_{t}-X^{2}_{t}, \qquad \hat{Y}_{t}=Y^{1}_{t}-Y^{2}_{t}, \qquad \hat {u}(t)=\frac{u(t,X^{1}_{t})-u(t,X^{2}_{t})}{X^{1}_{t}-X^{2}_{t}}. $$
Since \(Y^{i}_{t}=u(t,X^{i}_{t})\) (\(i=1,2\)), we have
$$\hat{Y}_{t}=\hat{u}(t)\cdot\hat{X}_{t}. $$
Also, \((\hat{X}_{t}, \hat{Y}_{t})\) satisfies the following “variational equations”:
$$ \textstyle\begin{cases} \hat{X}_{t}=\hat{x}+\int_{0}^{t} ( a_{s} \hat{X}_{s}+b_{s} \hat{Y}_{s} )\,ds, \\ \hat{Y}_{t}=H\hat{X}_{T}+\int_{t}^{T} ( c_{s} \hat{X}_{s}+d_{s} \hat{Y}_{s} )\,ds, \end{cases}\displaystyle \quad0 \leq t \leq T, $$
(3.1)
where \(\hat{x}=x_{0}^{1}-x_{0}^{2}\).
Since \(\hat{u}(t)=\frac{\hat{Y}_{t}}{ \hat{X}_{t}}\) (for \(\hat{X}_{t}\neq0\)), \(\hat{u}(t)\) satisfies the following differential equation:
$$ d \hat{u}(t)=-\frac{\hat{Y}_{t}}{(\hat{X}_{t})^{2}}\,d\hat{X}_{t}+\frac{1}{\hat {X}_{t}}\,d \hat{Y}_{t}. $$
(3.2)
For the variational equations (3.1), we can get the following equation by integrating equation (3.2) over \([t,T]\):
$$ \hat{u}(t)=H+ \int_{t}^{T} F\bigl(s,\hat{u}(s)\bigr)\,ds, $$
(3.3)
where
$$ F\bigl(s,\hat{u}(s)\bigr)=b_{s}\bigl(\hat{u}(s)\bigr)^{2}+(a_{s}+d_{s}) \hat{u}(s)+c_{s}. $$
(3.4)

Equation (3.3) is called the “characteristic equation” of (1.1). According to the analysis in the last section, studying the well-posedness of ODEs (1.1) essentially amounts to finding conditions that ensure the solution û of equation (3.3) is bounded on \([0,T]\).

As can be seen from (3.4), equation (3.3) is a Riccati equation. A Riccati equation can be solved in closed form once a particular solution of it is known. Otherwise, we introduce the following comparison theorem to obtain the boundedness of the solution of equation (3.3).

Lemma 3.1

Consider a differential equation on \([0,T]\)
$$ y_{t}=h+ \int_{t}^{T} F(s,y_{s})\,ds $$
(3.5)
and its upper/lower boundary equations
$$ \textstyle\begin{cases} \overline{y}_{t}=\overline{h}-C^{1}+\int_{t}^{T} [\overline{F}(s,\overline {y}_{s})+g_{s}^{1}]\,ds, \\ \underline{y}_{t}=\underline{h}+C^{2}+\int_{t}^{T} [\underline{F}(s,\underline {y}_{s})-g_{s}^{2}]\,ds, \end{cases} $$
(3.6)
where \(\overline{h}\), \(\underline{h}\), \(\overline{F}\), \(\underline{F}\) are the upper/lower boundary functions of h and F, which satisfy
$$\underline{h}\leq h\leq\overline{h}, \qquad \underline{F}\leq F \leq \overline{F}. $$
If the following three conditions all hold:
  1. (i)

    Equations (3.6) always have bounded solutions \(\overline {y}_{t}\), \(\underline{y}_{t} \), \(t\in[0,T]\).

     
  2. (ii)

    For every \(t\in[0,T]\), the function \(y\mapsto F(t,y)\) (resp. \(\underline{F}(y)\), \(\overline{F}(y)\)) is uniformly Lipschitz continuous on \(y\in[\underline{y}_{t},\overline{y}_{t}]\), with Lipschitz constant L.

     
  3. (iii)

    For any \(t\in[0,T]\), \(C^{i}\geq\int_{t}^{T} e^{-\int_{s}^{T} \alpha _{r} \,dr}g_{s}^{i}\,ds\) always holds, where α satisfies \(|\alpha|\leq L\).

     

Then equation (3.5) has a unique solution y satisfying \(\underline {y}\leq y \leq\overline{y}\).

Remark 3.2

Equations (3.6) are called the upper/lower boundary equations of ODE (3.5). Classical sufficient conditions for condition (iii) are \(C^{i}\geq\int_{0}^{T} e^{L(T-t)}(g^{i}_{t})^{+}\,dt\). In particular, this is satisfied if \(C^{i}=0\) and \(g^{i}\leq0\).

To apply Lemma 3.1, we denote the upper/lower bound of \(F(t,\hat{u}(t))\):
$$ \begin{aligned} \overline{F}\bigl(\hat{u}(t)\bigr)= \sup_{t\in[0,T]} F\bigl(t,\hat{u}(t)\bigr) =\overline {b}\bigl( \hat{u}(t)\bigr)^{2}+(\overline{a}+\overline{d})\hat{u}(t)+\overline{c}, \\ \underline{F}\bigl(\hat{u}(t)\bigr)= \inf_{t\in[0,T]} F\bigl(t, \hat{u}(t)\bigr)=\underline {b}\bigl(\hat{u}(t)\bigr)^{2}+( \underline{a}+\underline{d})\hat{u}(t)+\underline{c}, \end{aligned} $$
(3.7)
where \(\underline{\varphi}=\operatorname{inf}_{t\in[0,T]}{\varphi}_{t}\), \(\overline {\varphi}=\operatorname{sup}_{t\in[0,T]}{\varphi}_{t}\), \(\varphi=a,b,c,d\). We remark that \(\underline{F}(\hat{u}(t))\) and \(\overline{F}(\hat{u}(t))\) are deterministic functions of \(\hat{u}(t)\) that do not depend on t. Thus we have
$$\underline{F}\bigl(t,\hat{u}(t)\bigr) \leq F\bigl(t,\hat{u}(t)\bigr) \leq \overline {F}\bigl(t,\hat{u}(t)\bigr). $$

Case 1: Constant coefficients

We consider the following constant-coefficient case:
$$ \textstyle\begin{cases} X_{t}=x_{0}+\int_{0}^{t} ( aX_{s}+bY_{s}+f_{s})\,ds, \\ Y_{t}=HX_{T}+\int_{t}^{T} ( cX_{s}+dY_{s}+g_{s})\,ds, \end{cases}\displaystyle \quad0 \leq t \leq T, $$
(3.8)
where \(a,b,c,d \in R\).
It is obvious that \(\underline{F}(t,\hat{u}(t))=\overline{F}(t,\hat {u}(t))=F(\hat{u}(t))\), now equation (3.3) takes the form
$$ \hat{u}(t)=H+ \int_{t}^{T} \bigl[b\bigl(\hat{u}(s) \bigr)^{2}+(a+d)\hat{u}(s)+c\bigr]\,ds. $$
(3.9)
We have the following result.

Theorem 3.3

Assume that Assumption 2.1 holds and all coefficients are constants. Equation (3.9) has a bounded solution for arbitrary T if one of the following three situations occurs:
  1. (i)

    \(F(H)\geq0\), and F has a zero point in \([H,\infty)\).

     
  2. (ii)

    \(F(H)\leq0\), and F has a zero point in \((-\infty,H]\).

     
  3. (iii)

    \(b=0\).

     

Proof

(i) There exists \(\lambda\geq H\) such that \(F(\lambda)=0\). Note that \(F(y)\) is locally Lipschitz continuous in y and
$$H=H+ \int_{t}^{T}\bigl[F(H)-F(H)\bigr]\,ds, \qquad \lambda=\lambda+ \int_{t}^{T}F(\lambda)\,ds. $$
According to Lemma 3.1 and Remark 3.2,
$$\begin{aligned} H&=H+ \int_{t}^{T} \bigl[F(H)-F(H)\bigr] \,ds \\ &\leq \hat{u}(t)=H+ \int_{t}^{T} \bigl[b\bigl(\hat{u}(s) \bigr)^{2}+(a+d)\hat{u}(s)+c\bigr] \,ds \\ &\leq \lambda=\lambda+ \int_{t}^{T}F(\lambda)\,ds, \end{aligned} $$
thus \(\hat{u}(t)\in[H,\lambda]\).

(ii) can be proved similarly.

(iii) Equation (3.9) becomes a linear equation that can be solved explicitly:
$$\hat{u}(t)=H+ \int_{t}^{T} \bigl[(a+d)\hat{u}(s)+c\bigr]\,ds. $$
It is obvious that \(\hat{u}(t)\) is bounded on \([0,T]\). □
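Theorem 3.3 can also be checked numerically: in reversed time \(s=T-t\), equation (3.9) reads \(du/ds=F(u)\) with \(u(0)=H\). A minimal sketch, not from the paper (classical RK4 with an arbitrarily chosen step count), using the illustrative constants \(a=2\), \(b=2\), \(c=-3\), \(d=1\), \(H=1/2\), for which situation (ii) applies with \(\lambda=(-3-\sqrt{33})/4\leq H\):

```python
def riccati_backward(a, b, c, d, H, T, n=5000):
    """Integrate the characteristic equation (3.9) backward in time:
    in s = T - t it reads du/ds = F(u) = b u^2 + (a + d) u + c with
    u(0) = H.  Classical RK4; returns the whole trajectory of u-hat."""
    F = lambda u: b * u * u + (a + d) * u + c
    ds = T / n
    u, traj = H, [H]
    for _ in range(n):
        k1 = F(u)
        k2 = F(u + 0.5 * ds * k1)
        k3 = F(u + 0.5 * ds * k2)
        k4 = F(u + ds * k3)
        u += ds * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        traj.append(u)
    return traj
```

For these constants \(F(1/2)=-1<0\) and F vanishes at λ, so the trajectory decreases monotonically from H and settles near λ; numerically \(\hat u(t)\in[\lambda,H]\), as the proof of Theorem 3.3(ii) predicts.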

Theorem 3.3(iii) is easy to check, but (i) and (ii) are not directly connected with the coefficients. Next, we give some equivalence conditions.

According to Vieta’s theorem, Theorem 3.3(i) is equivalent to the following two situations:
  • \(b<0\), \(F(H)\geq0\),

  • \(b>0\), \(F(H)\geq0\), \((a+d)^{2}-4bc\geq0\), \(H\leq-\frac{a+d}{2b}\).

Similarly, Theorem 3.3(ii) is equivalent to another two situations:
  • \(b>0\), \(F(H)\leq0\),

  • \(b<0\), \(F(H)\leq0\), \((a+d)^{2}-4bc\geq0\), \(H\geq-\frac{a+d}{2b}\).

This provides a criterion for judging whether constant-coefficient ODEs are solvable (see Table 1). In Table 1,
$$F(H)=bH^{2}+(a+d)H+c. $$
Table 1

Situations matching Theorem 3.3

Case 1: \(b<0\), \(F(H)\geq0\), no additional assumptions.

Case 2: \(b>0\), \(F(H)\geq0\), \((a+d)^{2}-4bc\geq0\), \(H\leq-\frac{a+d}{2b}\).

Case 3: \(b>0\), \(F(H)\leq0\), no additional assumptions.

Case 4: \(b<0\), \(F(H)\leq0\), \((a+d)^{2}-4bc\geq0\), \(H\geq-\frac{a+d}{2b}\).
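The decision procedure behind Table 1 can be mechanized. A sketch, not from the paper (the function name `constant_case` is ours), returning the matching case number, 0 for the linear case \(b=0\) of Theorem 3.3(iii), or None when no criterion applies:

```python
def constant_case(a, b, c, d, H):
    """Return the Table 1 case (1-4) matched by constant coefficients,
    0 for the linear case b = 0 of Theorem 3.3(iii), None otherwise."""
    if b == 0:
        return 0
    FH = b * H * H + (a + d) * H + c      # F(H) as defined with Table 1
    disc = (a + d) ** 2 - 4 * b * c
    vertex = -(a + d) / (2 * b)           # vertex of the parabola F
    if b < 0 and FH >= 0:
        return 1
    if b > 0 and FH >= 0 and disc >= 0 and H <= vertex:
        return 2
    if b > 0 and FH <= 0:
        return 3
    if b < 0 and FH <= 0 and disc >= 0 and H >= vertex:
        return 4
    return None
```

For the coefficients of Example 1 below (\(a=2\), \(b=2\), \(c=-3\), \(d=1\), \(H=1/2\)) this returns case 3.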

Next, we focus on the connections between Theorem 3.3 and the monotonicity conditions. According to Peng and Wu [5] and Wu [7], under one of the following conditions, we can also obtain the well-posedness of ODEs (1.1) on \([0,T]\).

Lemma 3.4

(Monotonicity conditions)

ODEs (3.8) have a unique solution if one of the following cases holds:
  1. (i)
    $$ \begin{pmatrix} x & y \end{pmatrix} \begin{pmatrix} -c & -d \\ a & b \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} \leq-\beta_{1} \vert x \vert ^{2}-\beta_{2} \vert y \vert ^{2}, $$
    (3.10)
    where\(\beta_{1}\)and\(\beta_{2}\)are nonnegative constants. When\(\beta_{1} > 0\), \(H > 0\), then\(\beta_{2} \geq0\); when\(\beta_{2} > 0\), then\(\beta_{1} \geq0\), \(H \geq0\).
     
  2. (ii)
    $$ \begin{pmatrix} x & y \end{pmatrix} \begin{pmatrix} -c & -d \\ a & b \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} \geq\beta_{1} \vert x \vert ^{2}+\beta_{2} \vert y \vert ^{2}, $$
    (3.11)
    where\(\beta_{1}\)and\(\beta_{2}\)are nonnegative constants. When\(\beta_{1} > 0\), \(H < 0\), then\(\beta_{2} \geq0\); when\(\beta_{2} > 0\), then\(\beta_{1} \geq0\), \(H \leq0\).
     

According to Vieta’s theorem, Lemma 3.4 is equivalent to the following lemma.

Lemma 3.5

(Equivalence conditions)

  1. (i)
    A necessary and sufficient condition for Lemma 3.4(i) is
    $$ b< 0,\qquad H>0,\qquad (a-d)^{2}+4bc < 0. $$
    (3.12)
     
  2. (ii)
    A necessary and sufficient condition for Lemma 3.4(ii) is
    $$ b>0,\qquad H< 0,\qquad (a-d)^{2}+4bc < 0. $$
    (3.13)
     
As can be seen in Lemma 3.5(i),
$$(a-d)^{2}+4bc < 0. $$
Obviously, we then have
$$(a+d)^{2}-4bc>0, $$
together with \(b<0\), \(H>0\).
If \(F(H)\geq0\), Lemma 3.5(i) matches case 1 of Table 1; if \(F(H)\leq0\) and \(H\geq-\frac{a+d}{2b}\), Lemma 3.5(i) matches case 4 of Table 1; if \(F(H)\leq0\) and \(H\leq-\frac{a+d}{2b}\), there must exist \(\lambda\geq H\) such that \(F(\lambda)=0\). Thanks to \(H>0\), the following relation holds true:
$$\begin{aligned} 0&=0+ \int_{t}^{T} \bigl[F(0)-c\bigr] \,ds \\ &\leq \hat{u}(t)=H+ \int_{t}^{T} F\bigl(\hat{u}(t)\bigr) \,ds \\ &\leq \lambda=\lambda+ \int_{t}^{T}F(\lambda)\,ds. \end{aligned} $$

According to Theorem 3.3, the analysis above shows \(\hat{u}(t)\in [0,\lambda]\). Thus, Lemma 3.5(i) falls entirely within the framework of Theorem 3.3.

Similarly, Lemma 3.5(ii) can be derived from cases 2 and 3 of Table 1 and the case \(\hat{u}(t)\in[\lambda,0]\). It follows that the monotonicity conditions can be derived from Theorem 3.3; that is, the monotonicity conditions are a special case of the regular decoupling field method.

We now give an example whose coefficients match the assumptions of Theorem 3.3 but not the monotonicity conditions.

Example 1

Consider the following ODEs:
$$ \textstyle\begin{cases} X_{t}=1+\int_{0}^{t} (2X_{s}+2Y_{s}) \,ds, \\ Y_{t}=\frac{1}{2}X_{T}+\int_{t}^{T} (-3X_{s}+Y_{s}) \,ds, \end{cases}\displaystyle \quad0\leq t \leq T, $$
(3.14)
where \(a=2\), \(b=2\), \(c=-3\), \(d=1\), \(H=\frac{1}{2}\).

According to Lemma 3.4, H and b must have opposite signs to match the monotonicity conditions. Thus, the well-posedness of equations (3.14) cannot be proved within the framework of the monotonicity conditions.

Nevertheless, we have \(b=2>0\), and
$$F(H)=b{H}^{2}+(a+d)H+c=\frac{1}{2}+\frac{3}{2}-3=-1 < 0. $$

Then ODEs (3.14) match Theorem 3.3(ii) (case 3 of Table 1), and hence, by the analysis above, ODEs (3.14) have a unique solution.
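The conclusion of Example 1 can be double-checked by shooting: for the linear system (3.14) the map \(Y_{0}\mapsto Y_{T}-\frac{1}{2}X_{T}\) is affine, so two trial integrations determine the unique initial value \(Y_{0}\). A sketch, not from the paper (RK4 integrator; \(T=1\) chosen arbitrarily):

```python
def integrate(y0, T=1.0, n=2000):
    """RK4 for the system of Example 1: X' = 2X + 2Y and, since
    -Y' = -3X + Y, Y' = 3X - Y, with X(0) = 1 and Y(0) = y0."""
    def rhs(X, Y):
        return 2 * X + 2 * Y, 3 * X - Y
    h = T / n
    X, Y = 1.0, y0
    for _ in range(n):
        k1x, k1y = rhs(X, Y)
        k2x, k2y = rhs(X + 0.5 * h * k1x, Y + 0.5 * h * k1y)
        k3x, k3y = rhs(X + 0.5 * h * k2x, Y + 0.5 * h * k2y)
        k4x, k4y = rhs(X + h * k3x, Y + h * k3y)
        X += h * (k1x + 2 * k2x + 2 * k3x + k4x) / 6.0
        Y += h * (k1y + 2 * k2y + 2 * k3y + k4y) / 6.0
    return X, Y

def shoot(T=1.0):
    """phi(y0) = Y_T - X_T / 2 is affine in y0 for this linear system,
    so two trial integrations pin down the unique initial value."""
    def phi(y0):
        X, Y = integrate(y0, T)
        return Y - 0.5 * X
    p0, p1 = phi(0.0), phi(1.0)
    y0 = -p0 / (p1 - p0)        # root of the affine map p0 + (p1 - p0) y0
    return y0, phi(y0)
```

The residual at the computed \(Y_{0}\) is at the level of rounding error, confirming that the two-point condition \(Y_{T}=\frac{1}{2}X_{T}\) can be met.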

Case 2: Functional coefficients

In this part, we consider the case where the coefficients of ODEs (1.1) are functions defined on \([0,T]\). In this case, \(F(s,\hat{u}(s))\) takes the form (3.4), and Assumption 2.1 still holds. Note that \(\overline {F}(\hat{u}(t))\)/\(\underline{F}(\hat{u}(t))\) defined in (3.7) are regarded as upper/lower bounds of \(F(s,\hat{u}(s))\). In analogy to Theorem 3.3, we have the following result.

Theorem 3.6

Assume that Assumption 2.1 holds. Equation (3.3) has a bounded solution û for arbitrary T if one of the following three situations occurs:
  1. (i)

    \(\underline{F}(H)\geq0\), and \(\overline{F}\) has a zero point in \([H,\infty)\).

     
  2. (ii)

    \(\overline{F}(H)\leq0\), and \(\underline{F}\) has a zero point in \((-\infty,H]\).

     
  3. (iii)

    \(b_{s}=0\).

     

Proof

(i) The proof is the same as that of Theorem 3.3(i). As a result of
$$\underline{F}(y)\leq F(t,y) \leq\overline{F}(y), $$
we have \(\hat{u}(t)\in[H,\lambda]\).

(ii) can be proved similarly, and \(\hat{u}(t)\in[\lambda,H]\).

(iii) If \(b_{s}\equiv0\), there is a positive constant \(C_{0}\) such that
$$-C_{0}\bigl[ \vert y \vert +1\bigr] \leq\underline{F}(y)\leq\overline{F}(y) \leq C_{0}\bigl[ \vert y \vert +1\bigr]. $$

According to Theorem 3.3(iii), the analytic solutions of the upper/lower boundary equations can be derived, which means equation (3.3) has a bounded solution on \([0,T]\). □

From the two cases above, it is concluded that the regular decoupling field can be used to prove the well-posedness of two-point boundary value problems for ODEs (1.1), especially when ODEs (1.1) do not match the monotonicity conditions. In the next section, the linear transformation method is used to generalize the framework of the monotonicity conditions.

4 The linear transformation method

Consider the following homogeneous ODEs:
$$ \textstyle\begin{cases} dX_{t}=(b_{1}X_{t}+b_{2}Y_{t})\,dt, \\ -dY_{t}=(f_{1}X_{t}+f_{2}Y_{t})\,dt, \\ X_{0}=a, \qquad Y_{T}=HX_{T}. \end{cases}\displaystyle \quad 0\leq t \leq T, $$
(4.1)
If equations (4.1) do not meet the requirements of Theorem 3.3 or Lemma 3.4, consider the following transformation:
$$ \begin{pmatrix} \tilde{X}_{t} \\ \tilde{Y}_{t} \end{pmatrix} =A \begin{pmatrix} X_{t} \\ Y_{t} \end{pmatrix} = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} \begin{pmatrix} X_{t} \\ Y_{t} \end{pmatrix}, $$
where the transformation matrix \(A= \bigl( {\scriptsize\begin{matrix}{} a_{11} & a_{12} \cr a_{21} & a_{22} \end{matrix}} \bigr)\) is assumed invertible with \(a_{22}\neq0\) and \(a_{11}+a_{12}H\neq0\).
Then we have
$$ \textstyle\begin{cases} \tilde{X}_{t}=a_{11}X_{t}+a_{12}Y_{t},\\ \tilde{Y}_{t}=a_{21}X_{t}+a_{22}Y_{t}, \end{cases}\displaystyle \quad t\in[0,T]. $$
(4.2)
Substituting (4.1) into (4.2), we have
$$ \textstyle\begin{cases} d\tilde{X}_{t}=( \tilde{b}_{1} \tilde{X}_{t}+ \tilde{b}_{2} \tilde{Y}_{t})\,dt, \\ -d\tilde{Y}_{t}=( \tilde{f}_{1} \tilde{X}_{t}+ \tilde{f}_{2} \tilde{Y}_{t})\,dt, \\ \tilde{X}_{0}=\frac{ \vert A \vert }{a_{22}}a+\frac{a_{12}}{a_{22}}\tilde{Y}_{0},\qquad \tilde {Y}_{T}=\frac{a_{21}+a_{22}H}{a_{11}+a_{12}H}\tilde{X}_{T}, \end{cases}\displaystyle \quad 0\leq t \leq T, $$
(4.3)
where
$$ \textstyle\begin{cases} \tilde{b}_{1}= [a_{22}(a_{11}b_{1}-a_{12}f_{1})-a_{21}(a_{11}b_{2}-a_{12}f_{2})] / \vert A \vert , \\ \tilde{b}_{2}= [a_{11}(a_{11}b_{2}-a_{12}f_{2})-a_{12}(a_{11}b_{1}-a_{12}f_{1})] / \vert A \vert , \\ \tilde{f}_{1}= [a_{21}(a_{21}b_{2}-a_{22}f_{2})-a_{22}(a_{21}b_{1}-a_{22}f_{1})] / \vert A \vert , \\ \tilde{f}_{2}= [a_{12}(a_{21}b_{1}-a_{22}f_{1})-a_{11}(a_{21}b_{2}-a_{22}f_{2})] / \vert A \vert . \end{cases} $$
(4.4)
It is noted that in equations (4.3) the initial condition takes the form
$$\tilde{X}_{0}=C_{1}+C_{2}\tilde{Y}_{0}, \quad C_{1}=\frac{ \vert A \vert }{a_{22}}a, \qquad C_{2}=\frac{a_{12}}{a_{22}}. $$
Following Wu [10], Lemma 3.4 should be adjusted to match this new form of \(\tilde{X}_{0}\), where \(C_{2}\) is the coefficient of \(\tilde{Y}_{0}\) in \(\tilde{X}_{0}\).
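The coefficient formulas (4.4) can be verified mechanically: for \((\tilde X_{t},\tilde Y_{t})^{\top}=A(X_{t},Y_{t})^{\top}\), the derivatives \(\tilde X_{t}'\) and \(-\tilde Y_{t}'\) computed from (4.1) must agree with (4.3). A sketch, not from the paper (the test coefficients and matrix are arbitrary, not those of the examples):

```python
def transformed_coeffs(b1, b2, f1, f2, A):
    """Coefficients (4.4) of the system satisfied by (X~, Y~) = A (X, Y)."""
    (a11, a12), (a21, a22) = A
    detA = a11 * a22 - a12 * a21
    tb1 = (a22 * (a11 * b1 - a12 * f1) - a21 * (a11 * b2 - a12 * f2)) / detA
    tb2 = (a11 * (a11 * b2 - a12 * f2) - a12 * (a11 * b1 - a12 * f1)) / detA
    tf1 = (a21 * (a21 * b2 - a22 * f2) - a22 * (a21 * b1 - a22 * f1)) / detA
    tf2 = (a12 * (a21 * b1 - a22 * f1) - a11 * (a21 * b2 - a22 * f2)) / detA
    return tb1, tb2, tf1, tf2

def consistency_residual(b1, b2, f1, f2, A, X=0.7, Y=-1.3):
    """At an arbitrary state (X, Y): X~' = a11 X' + a12 Y' must equal
    b~1 X~ + b~2 Y~, and -Y~' = -(a21 X' + a22 Y') must equal
    f~1 X~ + f~2 Y~.  Returns the total mismatch (zero up to rounding)."""
    (a11, a12), (a21, a22) = A
    tb1, tb2, tf1, tf2 = transformed_coeffs(b1, b2, f1, f2, A)
    dX = b1 * X + b2 * Y            # X'  from (4.1)
    dY = -(f1 * X + f2 * Y)         # Y'  from (4.1)
    Xt, Yt = a11 * X + a12 * Y, a21 * X + a22 * Y
    r1 = (a11 * dX + a12 * dY) - (tb1 * Xt + tb2 * Yt)
    r2 = -(a21 * dX + a22 * dY) - (tf1 * Xt + tf2 * Yt)
    return abs(r1) + abs(r2)
```

The residual vanishes identically for any invertible A, since (4.4) is exactly the change of variables induced by A.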

Lemma 4.1

ODEs of the form (4.3) have a unique solution if one of the following cases holds:
  1. (i)
    $$ \begin{pmatrix} x & y \end{pmatrix} \begin{pmatrix} -c & -d \\ a & b \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} \leq-\beta_{1} \vert x \vert ^{2}-\beta_{2} \vert y \vert ^{2}, \qquad C_{2}\leq0, \qquad H\geq0, $$
    (4.5)
    where\(\beta_{1}\)and\(\beta_{2}\)are nonnegative constants. Also, \(\beta _{1}\), \(\beta_{2}\), \(C_{2}\), andHcannot all be 0 at the same time.
     
  2. (ii)
    $$ \begin{pmatrix} x & y \end{pmatrix} \begin{pmatrix} -c & -d \\ a & b \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} \geq\beta_{1} \vert x \vert ^{2}+\beta_{2} \vert y \vert ^{2}, \qquad C_{2}\geq0, \qquad H\leq0, $$
    (4.6)
    where\(\beta_{1}\)and\(\beta_{2}\)are nonnegative constants. Also, \(\beta _{1}\), \(\beta_{2}\), \(C_{2}\), andHcannot all be 0 at the same time.
     
Note that the coefficients after the transformation should satisfy the following inequality in order to meet the requirements of the monotonicity conditions. Denoting \(m=a_{11}/a_{12}\), \(n=a_{21}/a_{22}\), we have
$$ (m-n)^{2}(b_{1}-f_{2})^{2}+4 \bigl(b_{2}m^{2}+(b_{1}-f_{2})m-f_{1} \bigr) \bigl(b_{2}n^{2}+(b_{1}-f_{2})n-f_{1} \bigr)< 0. $$
(4.7)
Denote \(f(y)=b_{2}y^{2}+(b_{1}-f_{2})y-f_{1}\), then we have
$$ (m-n)^{2}(b_{1}-f_{2})^{2}+4f(m)f(n)< 0. $$
(4.8)
In analogy to Lemma 3.5, Lemma 4.1 is equivalent to the following lemma.

Lemma 4.2

Lemma 4.1holds if and only if one of the following two cases occurs:
  1. (i)

    \(\tilde{b}_{2}\leq0 \), \(C_{2}\leq0\), \(\tilde{H}\geq0\), \((m-n)^{2}(b_{1}-f_{2})^{2}+4f(m)f(n)<0\),

     
  2. (ii)

    \(\tilde{b}_{2}\geq0 \), \(C_{2}\geq0\), \(\tilde{H}\leq0\), \((m-n)^{2}(b_{1}-f_{2})^{2}+4f(m)f(n)<0\),

     
where\(m=\frac{a_{11}}{a_{12}}\), \(n=\frac{a_{21}}{a_{22}}\).

Since Lemma 4.2 is not convenient to check directly, we derive explicit criteria as follows.

Using (4.4) with \(a_{11}=ma_{12}\) and \(a_{21}=na_{22}\),
$$\begin{aligned}& \begin{aligned} \tilde{b}_{2}&=\bigl[a_{11}(a_{11}b_{2}-a_{12}f_{2})-a_{12}(a_{11}b_{1}-a_{12}f_{1}) \bigr] / \vert A \vert \\ &=\frac{[b_{2}m^{2}-(b_{1}+f_{2})m+f_{1}]a_{12}}{(m-n)a_{22}}, \end{aligned} \\& C_{2}=\frac{a_{12}}{a_{22}}, \qquad \tilde{H}=\frac{a_{21}+a_{22}H}{a_{11}+a_{12}H}=\frac {(n+H)a_{22}}{(m+H)a_{12}}. \end{aligned}$$
Hence the requirements \(\tilde{b}_{2}\leq0\), \(C_{2}\leq0\), \(\tilde{H}\geq0\) of Lemma 4.2(i) hold when
$$\frac{b_{2}m^{2}-(b_{1}+f_{2})m+f_{1}}{m-n} \geq 0, \qquad \frac{n+H}{m+H} \leq 0, \qquad \frac{a_{12}}{a_{22}} \leq 0, $$
and, similarly, the requirements \(\tilde{b}_{2}\geq0\), \(C_{2}\geq0\), \(\tilde{H}\leq0\) of Lemma 4.2(ii) hold under the same first two criteria with \(\frac{a_{12}}{a_{22}} \geq 0\).

In summary, we have the following result.

Theorem 4.3

On the curve of the function \(f(x)=b_{2}x^{2}+(b_{1}-f_{2})x-f_{1}\), if there are two points \((m,f(m))\) and \((n,f(n))\) matching the following criterion:
$$ \textstyle\begin{cases} (m-n)^{2}(b_{1}-f_{2})^{2}+4f(m)f(n) < 0, \\ [b_{2}m^{2}-(b_{1}+f_{2})m+f_{1}]/(m-n) \geq 0, \\ (n+H)/(m+H) \leq 0, \end{cases} $$
(4.9)
then by means of the linear transformation \(\bigl( {\scriptsize\begin{matrix}{} mc_{1} & c_{1} \cr nc_{2} & c_{2} \end{matrix}} \bigr)\), equations (4.1) match Lemma 4.2, where \(c_{1}\), \(c_{2}\) are nonzero constants in R. When \(c_{1}/c_{2} \leq0\), ODEs (4.3) match Lemma 4.2(i); when \(c_{1}/c_{2} \geq0\), ODEs (4.3) match Lemma 4.2(ii).
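Criterion (4.9) of Theorem 4.3 is easy to scan over candidate pairs \((m,n)\). A sketch, not from the paper (the helper name is ours):

```python
def matches_criterion(b1, b2, f1, f2, H, m, n):
    """Check the three conditions of (4.9) for a candidate pair (m, n),
    where f(y) = b2 y^2 + (b1 - f2) y - f1; assumes m != n, m + H != 0."""
    f = lambda y: b2 * y * y + (b1 - f2) * y - f1
    c1 = (m - n) ** 2 * (b1 - f2) ** 2 + 4 * f(m) * f(n) < 0
    c2 = (b2 * m * m - (b1 + f2) * m + f1) / (m - n) >= 0
    c3 = (n + H) / (m + H) <= 0
    return c1 and c2 and c3
```

For the data of Example 2 below (\(b_{1}=b_{2}=1\), \(f_{1}=2\), \(f_{2}=1\), \(H=-1\)), the pair \((m,n)=(2,-1)\) passes all three conditions.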

In the following example, the monotonicity conditions and the regular decoupling field method cannot be applied directly, but the well-posedness of the ODEs can be obtained by the linear transformation method discussed in this section.

Example 2

Consider the following linear ODEs:
$$ \textstyle\begin{cases} X_{t}=1+\int_{0}^{t} (X_{s}+Y_{s}) \,ds, \\ Y_{t}=-X_{T}+\int_{t}^{T} (2X_{s}+Y_{s}) \,ds, \end{cases}\displaystyle \quad0\leq t \leq T. $$
(4.10)
According to Lemma 4.2, ODEs (4.10) cannot match the monotonicity conditions. In addition, Theorem 3.3 cannot be applied directly. Here we use the linear transformation method to obtain the well-posedness of ODEs (4.10). According to Theorem 4.3, we need to find two points on the curve \(f(y)=y^{2}-2\) matching criterion (4.9), which here reads:
$$\textstyle\begin{cases} 4f(m)f(n) < 0,\\ (m^{2}-2m+2)/(m-n) \geq 0,\\ (n-1)/(m-1) \leq 0. \end{cases} $$
Taking the two points \((2,2)\) and \((-1,-1)\), which match criterion (4.9), the transformation matrix A is \(\bigl( {\scriptsize\begin{matrix}{} 2 & 1 \cr -1 & 1 \end{matrix}} \bigr)\). Substituting A into (4.3), the linear ODEs after the transformation are as follows:
$$ \textstyle\begin{cases} \tilde{X}_{t}=-3+\tilde{Y}_{0}+\int_{0}^{t} (\frac{1}{3}\tilde{X}_{s}+\frac {2}{3}\tilde{Y}_{s}) \,ds, \\ \tilde{Y}_{t}=-2\tilde{X}_{T}+\int_{t}^{T} (-\frac{1}{3}\tilde{X}_{s}+\frac {1}{3}\tilde{Y}_{s}) \,ds, \end{cases}\displaystyle \quad0\leq t \leq T. $$
(4.11)

It is easy to check that equations (4.11) match Lemma 4.1(ii), which means (4.11) has a unique solution.

5 Conclusion

In this paper, we introduce two methods to solve the two-point boundary value problems of ODEs (1.1). The first is the regular decoupling field method, which stems from the unified approach for FBSDEs; for ODEs, it admits a direct criterion that makes it easy to apply. Moreover, we prove that the monotonicity conditions are a special case of the regular decoupling field method. The second is the linear transformation method, which can be applied to cases where the monotonicity conditions and the regular decoupling field method cannot. Examples illustrate how these two methods develop the theory of two-point boundary value problems for ODEs, which has meaningful applications in optimal control and PDE theory. In addition, the linear transformation method can be generalized to stochastic cases, providing another way to study the well-posedness of FBSDEs; this is our future research direction and has some potential applications.


Acknowledgements

This work was supported by the Natural Science Foundation of China (61573217), the National High-level Personnel of Special Support Program and the Chang Jiang Scholar Program of Chinese Education Ministry.

Authors’ contributions

All authors have contributed equally in this paper. All authors read and approved the final manuscript.

Competing interests

The authors declare that they have no competing interests.

References

  1. Yong, J., Zhou, X.Y.: Stochastic controls: Hamiltonian systems and HJB equations. IEEE Trans. Autom. Control 46, 1846 (1999)
  2. Lakshmikantham, V., Murty, K.N., Turner, J.: Two-point boundary value problems associated with nonlinear fuzzy differential equations. Math. Inequal. Appl. 4, 527–533 (2001)
  3. Abbas, S., Benchohra, M., N’Guérékata, G.M.: Boundary value problems for differential equations with fractional order and nonlocal conditions. Surv. Math. Appl. 71, 2391–2396 (2009)
  4. Hu, Y., Peng, S.: Solution of forward–backward stochastic differential equations. Probab. Theory Relat. Fields 103, 273–283 (1995)
  5. Peng, S., Wu, Z.: Fully coupled forward–backward stochastic differential equations and applications to optimal control. SIAM J. Control Optim. 37, 825–843 (1999)
  6. Yong, J.: Finding adapted solutions of forward–backward stochastic differential equations: method of continuation. Probab. Theory Relat. Fields 107, 537–572 (1997)
  7. Wu, Z.: One kind of two point boundary value problems associated with ordinary differential equations and application. J. Shandong Univ. Nat. Sci. 32, 16–23 (1997)
  8. Ma, J., Wu, Z., Zhang, D.T., Zhang, J.F.: On well-posedness of forward–backward SDEs—a unified approach. Ann. Appl. Probab. 25, 2168–2214 (2015)
  9. Zhang, J.F.: The wellposedness of FBSDEs. Discrete Contin. Dyn. Syst., Ser. B 6, 927–940 (2006)
  10. Wu, Z.: Adapted solutions of forward–backward stochastic differential equations and their parameter dependence. Chin. Ann. Math., Ser. A 1, 55–62 (1998)

Copyright information

© The Author(s) 2018

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors and Affiliations

  1. School of Mathematics and Zhongtai Securities Institute for Financial Studies, Shandong University, Jinan, China
