1 Introduction

Let $A$ be an $n \times n$ matrix. Let $b_1$ and $b_2$ be two different vectors in $\mathbb{R}^n$ with $n \ge 1$, and write $B$ for the $n \times 2$ matrix $(b_1, b_2)$. We define

$$ U = \left\{ v = (v_1, v_2)^T \in \mathbb{R}^2 ;\ v_1 v_2 = 0 \right\} $$

and

$$ \mathcal{U} = \left\{ v(\cdot) : (0, +\infty) \to \mathbb{R}^2 \text{ measurable} ;\ v(t) \in U \text{ for almost every } t \in (0, +\infty) \right\} . $$

Consider the following controlled system:

$$ x'(t) = A x(t) + B u(t), \quad t \ge 0, \qquad x(0) = x_0 , $$
(1.1)

where the control function $u(\cdot) = (u_1(\cdot), u_2(\cdot))^T \in \mathcal{U}$. In this system we can rewrite $Bu(t)$ as $u_1(t) b_1 + u_2(t) b_2$, where $b_1$ and $b_2$ are treated as two different controllers, and $u_1(\cdot)$, $u_2(\cdot)$ are treated as controls. Here the controls $u_1(\cdot)$ and $u_2(\cdot)$ satisfy the following property:

$$ u_1(t) \, u_2(t) = 0 , \quad \text{for almost every } t \in [0, \infty) , $$
(1.2)

and system (1.1) is called a switching controlled system. Condition (1.2) ensures that, at almost every instant of time, at most one of the controllers $b_1$ and $b_2$ is active. Switching controlled systems of this kind model a large class of problems in applied science.
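To fix ideas, here is a minimal numerical sketch of a switching controlled system of the form (1.1); the matrix $A$ and the controllers $b_1$, $b_2$ are illustrative choices of ours, not data from the article:

```python
import numpy as np

# Illustrative data of our own choosing (not from the article): a 2 x 2
# matrix A and two controller directions b1, b2.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
b1 = np.array([1.0, 0.0])
b2 = np.array([0.0, 1.0])
B = np.column_stack([b1, b2])          # the n x 2 matrix (b1, b2)

def switching_control(t):
    """A control u(t) = (u1(t), u2(t))^T obeying (1.2): only b1 is active
    on [0, 1), only b2 afterwards, so u1(t) u2(t) = 0 at every instant."""
    return np.array([1.0, 0.0]) if t < 1.0 else np.array([0.0, 1.0])

def simulate(x0, T, dt=1e-3):
    """Forward-Euler integration of x'(t) = A x(t) + B u(t), x(0) = x0."""
    x, t = np.array(x0, dtype=float), 0.0
    while t < T:
        u = switching_control(t)
        assert u[0] * u[1] == 0.0      # the switching constraint (1.2)
        x = x + dt * (A @ x + B @ u)
        t += dt
    return x

xT = simulate(x0=np.zeros(2), T=2.0)
```

At each time step exactly one of the two controllers is active, which is the defining feature of the switching constraint (1.2).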

The purpose of this article is to study a time optimal control problem for the switching controlled system (1.1). We begin by introducing the problem to be studied. To this end, we define two sets:

$$ \tilde{U} = \left\{ v = (v_1, v_2)^T \in \mathbb{R}^2 ;\ v_1 v_2 = 0 \text{ and } \|v\|_2 \le 1 \right\} $$
(1.3)

and

$$ \mathcal{U}_{ad} = \left\{ u(\cdot) \in L^\infty(0, +\infty; \mathbb{R}^2) ;\ u(t) \in \tilde{U} \text{ for almost every } t \in (0, +\infty) \right\} . $$
(1.4)

Here, $\|\cdot\|_2$ stands for the Euclidean norm in $\mathbb{R}^2$. (We will use the notation $\langle \cdot, \cdot \rangle_2$ for the Euclidean inner product in $\mathbb{R}^2$.) Then the time optimal control problem studied in this article reads:

$$ (P): \quad \inf_{u(\cdot) \in \mathcal{U}_{ad}} \left\{ T ;\ x(T; x_0, u) = 0 \right\} . $$

Throughout this article, we denote by $x(\cdot\,; x_0, u)$, with $u(\cdot) = (u_1(\cdot), u_2(\cdot))^T$, the solution of Equation (1.1) corresponding to the initial datum $x_0$ and the control $u(\cdot)$. Consequently, $x(T; x_0, u)$ stands for the value of the solution $x(\cdot\,; x_0, u)$ at time $T$.
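For later use, recall the standard variation-of-constants representation of this solution:

```latex
x(t; x_0, u) \;=\; e^{At} x_0 \;+\; \int_0^{t} e^{A(t-s)} B \, u(s) \, ds .
```

In particular, the state at time $T$ depends affinely on the control $u(\cdot)$.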

In the problem $(P)$, the number

$$ T^* \triangleq \inf_{u(\cdot) \in \mathcal{U}_{ad}} \left\{ T ;\ x(T; x_0, u) = 0 \right\} $$

is called the optimal time, while a control $u^*(\cdot)$ in the set $\mathcal{U}_{ad}$ with the property that $x(T^*; x_0, u^*) = 0$ is called an optimal control. The problem asks for a control $u^*(\cdot)$ in the constraint control set $\mathcal{U}_{ad}$ that drives the solution $x(\cdot\,; x_0, u^*)$ from the initial state $x_0$ to the origin of $\mathbb{R}^n$ in the shortest time.

Next, we present the main results obtained in this study.

Theorem 1.1. The problem $(P)$ has at least one optimal control, provided that the Kalman rank condition holds for $A$ and $B$ and that $\operatorname{Re} \lambda \le 0$ for each eigenvalue $\lambda$ of $A$.

Theorem 1.2. When the Kalman rank condition holds for $A$ and $B$, any optimal control $u^*$ to $(P)$ has the bang-bang property:

$$ \| u^*(t) \|_2 = 1 , \quad \text{for almost every } t \in [0, T^*] . $$
(1.5)

Remark 1.3. (i) The Kalman rank condition is said to hold for $A$ and $B$ if and only if

$$ \operatorname{rank} \left( B, AB, \ldots, A^{n-1} B \right) = n . $$

(ii) Since any optimal control $u^*(\cdot)$ to the problem $(P)$ is a switching control, the statement that the bang-bang property (1.5) holds for $u^*(\cdot)$ is equivalent to the statement that, for almost every $t \in [0, T^*]$, $u^*(t)$ is one of the four vertices of the domain $\{ (v_1, v_2)^T \in \mathbb{R}^2 ;\ |v_1 + v_2| \le 1 \text{ and } |v_1 - v_2| \le 1 \}$.

(iii) The condition that $\operatorname{Re} \lambda \le 0$ for each eigenvalue $\lambda$ of $A$ guarantees the existence of a time optimal control under the control constraint $\| u(t) \|_2 \le 1$ for almost every $t$, even in the case where the switching constraint disappears (see [1, 2]).
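As a numerical illustration of the rank condition in part (i), here is a small check with matrices of our own choosing (not from the article):

```python
import numpy as np

def kalman_rank(A, B):
    """Rank of the controllability matrix (B, AB, ..., A^{n-1} B)."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.linalg.matrix_rank(np.hstack(blocks))

# Example matrices of our own choosing: A is nilpotent, so Re(lambda) = 0
# for both eigenvalues, and B = (b1, b2) with b1 = (0,1)^T, b2 = (1,0)^T.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0, 1.0],
              [1.0, 0.0]])
assert kalman_rank(A, B) == 2          # the rank condition holds

# With b1 = b2 = (1, 0)^T the pair is not controllable: the rank drops to 1.
B_bad = np.array([[1.0, 1.0],
                  [0.0, 0.0]])
assert kalman_rank(A, B_bad) == 1
```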

In the classical time optimal control problem, the control constraint set is convex, and the existence of an optimal control can be obtained by weak convergence methods. In our article, the control set of the problem $(P)$ loses convexity, so the problem cannot be treated by the methods used in most past studies (see [3–5]). Here, we use an idea from relaxed control theory to prove the existence theorem. Finally, we make use of the Pontryagin maximum principle and a unique continuation property to obtain the bang-bang property of optimal controls to this problem.

With regard to time optimal control problems governed by ordinary and partial differential equations without the switching constraint on controls, there is a large literature. We would like to mention the related articles [2, 6–9].

The rest of the article is structured as follows: Section 2 presents the proof of Theorem 1.1; Section 3 provides the proof of Theorem 1.2.

2 The existence of time optimal controls

We prove the existence result, namely, Theorem 1.1, as follows.

Proof. Write $\operatorname{co} \tilde{U}$ for the convex hull of the set $\tilde{U}$. Then it is clear that

$$ \operatorname{co} \tilde{U} = \left\{ (v_1, v_2)^T \in \mathbb{R}^2 ;\ |v_1 + v_2| \le 1 \text{ and } |v_1 - v_2| \le 1 \right\} . $$
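This identity is easy to verify by hand; as a sanity check (code of our own, not part of the proof), random convex combinations of points of $\tilde{U}$ stay inside the square above, whose four vertices all belong to $\tilde{U}$:

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)

def sample_U_tilde(rng):
    """A random point of U~: one coordinate vanishes, the other lies in [-1, 1]."""
    a = rng.uniform(-1.0, 1.0)
    return np.array([a, 0.0]) if rng.random() < 0.5 else np.array([0.0, a])

# Convex combinations of points of U~ stay inside the claimed square.
for _ in range(1000):
    p, q = sample_U_tilde(rng), sample_U_tilde(rng)
    lam = rng.random()
    v = lam * p + (1.0 - lam) * q
    assert abs(v[0] + v[1]) <= 1.0 + 1e-12 and abs(v[0] - v[1]) <= 1.0 + 1e-12

# Conversely, the vertices of the square (solutions of v1 + v2 = s, v1 - v2 = t
# with s, t in {-1, 1}) are (+-1, 0) and (0, +-1), which all belong to U~.
vertices = sorted(((s + t) / 2.0, (s - t) / 2.0)
                  for s, t in itertools.product([-1.0, 1.0], repeat=2))
assert vertices == [(-1.0, 0.0), (0.0, -1.0), (0.0, 1.0), (1.0, 0.0)]
assert all(v1 * v2 == 0.0 and v1 * v1 + v2 * v2 == 1.0 for v1, v2 in vertices)
```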

Now we define another constraint control set:

$$ \tilde{\mathcal{U}}_{ad} = \left\{ u(\cdot) \in L^\infty(0, +\infty; \mathbb{R}^2) ;\ u(t) \in \operatorname{co} \tilde{U} \text{ for almost every } t \in (0, +\infty) \right\} . $$

Since the Kalman rank condition holds and $\operatorname{Re} \lambda \le 0$ for each eigenvalue $\lambda$ of $A$, we can use the same argument as in [1, Theorem 2.6] to deduce that system (1.1) is exactly null controllable with the control constraint set $\tilde{\mathcal{U}}_{ad}$.

Next, we consider a new time optimal control problem:

$$ (\tilde{P}): \quad \inf_{u(\cdot) \in \tilde{\mathcal{U}}_{ad}} \left\{ T ;\ x(T; x_0, u) = 0 \right\} . $$

We denote by $\tilde{T}^*$ the optimal time for this problem.

Because the control set $\operatorname{co} \tilde{U}$ is convex, we can use the classical weak convergence method to prove that the problem $(\tilde{P})$ has at least one solution (see, for instance, [1, Theorem 3.1]). Namely, there exists at least one control $\tilde{u}(\cdot) \in \tilde{\mathcal{U}}_{ad}$ such that the corresponding solution $x(\cdot\,; x_0, \tilde{u})$ of Equation (1.1) satisfies $x(\tilde{T}^*; x_0, \tilde{u}) = 0$.

Let

$$ \mathcal{K} = \left\{ v(\cdot) \in \tilde{\mathcal{U}}_{ad} ;\ x(\tilde{T}^*; x_0, v) = 0 \right\} . $$

Then one can easily check that $\mathcal{K}$ is a convex, nonempty subset of $L^\infty(0, +\infty; \mathbb{R}^2)$; moreover, it is compact in the weak* topology. Therefore, we can apply the Krein–Milman theorem to obtain an extreme point $\tilde{u}^*(\cdot)$ of the set $\mathcal{K}$.

Now we claim that, for almost every $t \in [0, \tilde{T}^*]$, $\tilde{u}^*(t) = (\tilde{u}_1^*(t), \tilde{u}_2^*(t))^T$ belongs to $\tilde{U}$. Here is the argument: to prove that $\tilde{u}^*(t) \in \tilde{U}$, it suffices to show that, for almost every $t \in [0, \tilde{T}^*]$, the following two equalities hold:

$$ | \tilde{u}_1^*(t) + \tilde{u}_2^*(t) | = 1 $$

and

$$ | \tilde{u}_1^*(t) - \tilde{u}_2^*(t) | = 1 . $$

Seeking a contradiction, we suppose that the above statement were not true. Then there would exist a number $\varepsilon$ with $0 < \varepsilon < 1$ and a measurable subset $F \subset [0, \tilde{T}^*]$ of positive measure such that one of the following two statements holds:

$$ | \tilde{u}_1^*(t) + \tilde{u}_2^*(t) | \le 1 - \varepsilon , \quad \text{for each } t \in F , $$
(2.1)

or

$$ | \tilde{u}_1^*(t) - \tilde{u}_2^*(t) | \le 1 - \varepsilon , \quad \text{for each } t \in F . $$
(2.2)

In the case where (2.1) holds, we define a map $I_F : L^\infty(F) \to \mathbb{R}^n$ by setting

$$ I_F(\alpha(\cdot)) = \int_F e^{A(\tilde{T}^* - s)} B \, \bar{\alpha}(s) \, ds , $$

where $\bar{\alpha}(\cdot)$ is the vector-valued function over $F$ defined by $\bar{\alpha}(s) = (\alpha(s), \alpha(s))^T$ for almost every $s \in F$. Clearly, $I_F$ is a bounded linear operator from $L^\infty(F)$ to $\mathbb{R}^n$. Since $L^\infty(F)$ is an infinite dimensional space and $\mathbb{R}^n$ is a finite dimensional space, the kernel of $I_F$ is not trivial. Namely, there exists a function $\beta(\cdot)$ with the following properties: it belongs to $L^\infty(F)$; it is nontrivial; it satisfies $\| \beta \|_{L^\infty(F)} \le 1$; and $I_F(\beta(\cdot)) = 0$. Let $\bar{\beta}(s) = (\beta(s), \beta(s))^T$ over $F$. We extend $\bar{\beta}(\cdot)$ to $[0, +\infty)$ by setting it equal to $(0, 0)^T$ over $[0, +\infty) \setminus F$, and still denote the extension by $\bar{\beta}(\cdot)$. Then we construct two control functions as follows:

$$ v(t) = \tilde{u}^*(t) + \frac{\varepsilon}{2} \bar{\beta}(t) , \qquad w(t) = \tilde{u}^*(t) - \frac{\varepsilon}{2} \bar{\beta}(t) . $$
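The kernel argument above — a bounded linear map from an infinite dimensional space into $\mathbb{R}^n$ must have a nontrivial kernel — can be illustrated by a finite dimensional analogue (our own toy example):

```python
import numpy as np

# Finite-dimensional analogue of the kernel argument for I_F: a linear map
# from R^m into R^n with m > n always has a nontrivial kernel. Here m = 5
# sample values on F stand in for L^inf(F), and the 2 x 5 matrix M for I_F.
rng = np.random.default_rng(0)
M = rng.standard_normal((2, 5))

# The last right-singular vectors of M span its kernel; rescale so that the
# analogue of ||beta||_{L^inf(F)} <= 1 holds with equality.
beta = np.linalg.svd(M)[2][-1]
beta = beta / np.max(np.abs(beta))

assert np.linalg.norm(M @ beta) < 1e-10   # beta is a nontrivial kernel element
assert np.max(np.abs(beta)) == 1.0
```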

We will prove that both $v(\cdot)$ and $w(\cdot)$ belong to $\mathcal{K}$. Since

$$ \int_0^{\tilde{T}^*} e^{A(\tilde{T}^* - s)} B \, \bar{\beta}(s) \, ds = 0 $$

and $x(\tilde{T}^*; x_0, \tilde{u}^*) = 0$, it follows at once from the variation-of-constants formula that

$$ x(\tilde{T}^*; x_0, v) = x(\tilde{T}^*; x_0, w) = 0 . $$
(2.3)
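In more detail, the variation-of-constants formula, together with the linearity of the state in the control, gives

```latex
x\bigl(\tilde{T}^{*}; x_0, v\bigr)
 = e^{A\tilde{T}^{*}} x_0
   + \int_0^{\tilde{T}^{*}} e^{A(\tilde{T}^{*}-s)} B \, \tilde{u}^{*}(s) \, ds
   + \frac{\varepsilon}{2} \int_0^{\tilde{T}^{*}} e^{A(\tilde{T}^{*}-s)} B \, \bar{\beta}(s) \, ds
 = x\bigl(\tilde{T}^{*}; x_0, \tilde{u}^{*}\bigr) + 0 = 0 ,
```

and the same computation with $-\varepsilon/2$ in place of $\varepsilon/2$ handles $w(\cdot)$.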

Thus, it remains to show that $v(\cdot)$ and $w(\cdot)$ belong to $\tilde{\mathcal{U}}_{ad}$, namely, that for almost every $t \in [0, +\infty)$, $v(t)$ and $w(t)$ are in the set $\operatorname{co} \tilde{U}$. For $t \in [0, +\infty)$ there are only two possibilities: $t$ belongs either to $[0, +\infty) \setminus F$ or to $F$.

When $t \in [0, +\infty) \setminus F$, we have $\bar{\beta}(t) = 0$. Consequently, $v(t) = w(t) = \tilde{u}^*(t)$. Along with the fact that $\tilde{u}^*(\cdot)$ belongs to $\tilde{\mathcal{U}}_{ad}$, this indicates that $v(t) = w(t) \in \operatorname{co} \tilde{U}$ for almost every $t \in [0, +\infty) \setminus F$.

When $t \in F$, we observe that

$$ v(t) = (v_1(t), v_2(t))^T \triangleq \left( \tilde{u}_1^*(t) + \frac{\varepsilon}{2} \beta(t) ,\ \tilde{u}_2^*(t) + \frac{\varepsilon}{2} \beta(t) \right)^T $$
(2.4)

and

$$ w(t) \triangleq (w_1(t), w_2(t))^T = \left( \tilde{u}_1^*(t) - \frac{\varepsilon}{2} \beta(t) ,\ \tilde{u}_2^*(t) - \frac{\varepsilon}{2} \beta(t) \right)^T . $$

On the other hand, using (2.1) and the bound $\| \beta \|_{L^\infty(F)} \le 1$, one can easily check that

$$ | v_1(t) + v_2(t) | = \left| \tilde{u}_1^*(t) + \frac{\varepsilon}{2} \beta(t) + \tilde{u}_2^*(t) + \frac{\varepsilon}{2} \beta(t) \right| \le | \tilde{u}_1^*(t) + \tilde{u}_2^*(t) | + \varepsilon | \beta(t) | \le (1 - \varepsilon) + \varepsilon = 1 $$

and

$$ | v_1(t) - v_2(t) | = \left| \tilde{u}_1^*(t) + \frac{\varepsilon}{2} \beta(t) - \tilde{u}_2^*(t) - \frac{\varepsilon}{2} \beta(t) \right| = | \tilde{u}_1^*(t) - \tilde{u}_2^*(t) | \le 1 . $$

These, together with (2.4), yield that $v(t) \in \operatorname{co} \tilde{U}$ for almost every $t \in F$. Similarly, we can derive that $w(t) \in \operatorname{co} \tilde{U}$ for almost every $t \in F$.

Therefore, we have proved that, for almost every $t \in [0, +\infty)$, both $v(t)$ and $w(t)$ belong to the set $\operatorname{co} \tilde{U}$. Combined with (2.3), this shows that

both $v(\cdot)$ and $w(\cdot)$ belong to $\mathcal{K}$.
(2.5)

However, it is obvious that $\tilde{u}^*(t) = \frac{1}{2} v(t) + \frac{1}{2} w(t)$, and $v(\cdot) \ne w(\cdot)$ because $\beta(\cdot)$ is nontrivial. Along with (2.5), this contradicts the fact that $\tilde{u}^*(\cdot)$ is an extreme point of $\mathcal{K}$.

In the case where (2.2) holds, we can use the same argument as above, now taking $\bar{\beta}(s) = (\beta(s), -\beta(s))^T$, to reach a contradiction with the fact that $\tilde{u}^*(\cdot)$ is an extreme point of $\mathcal{K}$.

Thus, we have proved that $\tilde{u}^*(t) \in \tilde{U}$ for almost every $t \in [0, \tilde{T}^*]$. In summary, we conclude that the above claim stands.

Next, we define another control function $\bar{u}(\cdot)$ by setting

$$ \bar{u}(t) = \begin{cases} \tilde{u}^*(t) , & t \in [0, \tilde{T}^*] , \\ (0, 0)^T , & t \in (\tilde{T}^*, +\infty) . \end{cases} $$

By the above claim, we easily find that $\bar{u}(\cdot)$ belongs to $\mathcal{U}_{ad}$ and is an optimal control for the problem $(\tilde{P})$. Since $T^*$ is the optimal time of the problem $(P)$, from the facts that $x(\tilde{T}^*; x_0, \bar{u}) = 0$ and $\bar{u}(\cdot) \in \mathcal{U}_{ad}$, we deduce that

$$ T^* \le \tilde{T}^* . $$

However, it is clear that $\mathcal{U}_{ad} \subset \tilde{\mathcal{U}}_{ad}$. Thus, we necessarily have

$$ \tilde{T}^* \le T^* . $$

Therefore, it holds that

$$ \tilde{T}^* = T^* . $$

This indicates that $\bar{u}(\cdot)$ is a time optimal control for the problem $(P)$. Hence, we have completed the proof of Theorem 1.1.

3 The bang-bang property

This section is devoted to proving Theorem 1.2.

Proof. Let $u^*(\cdot) = (u_1^*(\cdot), u_2^*(\cdot))^T \in \mathcal{U}_{ad}$ be a time optimal control for the problem $(P)$. We aim to show that $u^*(\cdot)$ has the bang-bang property (1.5). By classical arguments, we can obtain Pontryagin's maximum principle for the problem $(P)$ (see [10, 11]). Namely, there exists a multiplier $\xi_0 \in \mathbb{R}^n$, with $\| \xi_0 \|_{\mathbb{R}^n} = 1$, such that the following maximum principle holds:

$$ \langle B^T \psi(t), u^*(t) \rangle_2 = \max_{v \in \tilde{U}} \langle B^T \psi(t), v \rangle_2 , \quad \text{for almost every } t \in [0, T^*] , $$
(3.1)

where ψ(t) is the solution of the following adjoint equation:

$$ \psi'(t) = - A^T \psi(t) , \quad t \in [0, T^*] , \qquad \psi(T^*) = \xi_0 . $$
(3.2)
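Since (3.2) is linear with constant coefficients, the adjoint state is given explicitly by

```latex
\psi(t) = e^{A^{T}(T^{*}-t)} \, \xi_0 , \qquad t \in [0, T^{*}] ,
```

so that $t \mapsto B^T \psi(t)$ is real analytic.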

Then, by the Kalman rank condition and the unique continuation property of the adjoint equation (3.2), we obtain that

$$ B^T \psi(t) \ne 0 , \quad \text{for almost every } t \in [0, T^*] . $$
(3.3)

Besides, it follows from (1.3), namely the definition of $\tilde{U}$, that $v \in \tilde{U}$ if and only if $-v \in \tilde{U}$. This, together with (3.3) and (3.1), immediately gives the inequality:

$$ \langle B^T \psi(t), u^*(t) \rangle_2 > 0 , \quad \text{for almost every } t \in [0, T^*] . $$
(3.4)
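This inequality can also be seen directly: every $v \in \tilde{U}$ has one vanishing component, so, writing $q = B^T \psi(t)$,

```latex
\max_{v \in \tilde{U}} \langle q, v \rangle_2
 = \max \Bigl\{ \max_{|a| \le 1} q_1 a , \; \max_{|b| \le 1} q_2 b \Bigr\}
 = \max \bigl\{ |q_1|, |q_2| \bigr\} > 0
 \quad \text{whenever } q \neq 0 .
```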

Next, we define subsets $E_k$, with $k = 1, 2, \ldots$, by setting

$$ E_k = \left\{ t \in [0, T^*] ;\ \| u^*(t) \|_2 \le 1 - \tfrac{1}{k} \right\} . $$

By contradiction, we suppose that $u^*(\cdot)$ did not have the bang-bang property, namely, that (1.5) did not hold for $u^*(\cdot)$. Then there would exist a natural number $k$ such that $m(E_k) > 0$. Therefore, we could find a number $C > 1$ (for instance, $C = (1 - 1/k)^{-1}$) such that

$$ C \, \| u^*(t) \|_2 \le 1 , \quad \text{for each } t \in E_k . $$

Now we construct another control $\bar{u}(\cdot)$ in the following manner:

$$ \bar{u}(t) = \begin{cases} u^*(t) , & \text{for almost every } t \in [0, T^*] \setminus E_k , \\ C \, u^*(t) , & \text{for each } t \in E_k . \end{cases} $$
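Note that multiplication by $C$ on $E_k$ preserves both the switching constraint and the norm constraint:

```latex
\bigl( C u_1^{*}(t) \bigr) \bigl( C u_2^{*}(t) \bigr) = C^{2} \, u_1^{*}(t) \, u_2^{*}(t) = 0 ,
\qquad
\| C u^{*}(t) \|_2 = C \, \| u^{*}(t) \|_2 \le 1 ,
\quad t \in E_k .
```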

It is obvious that $\bar{u}(\cdot) \in \mathcal{U}_{ad}$. However, by the construction of $\bar{u}(\cdot)$ and by (3.4), we can easily obtain the inequality:

$$ \langle B^T \psi(t), u^*(t) \rangle_2 < \langle B^T \psi(t), \bar{u}(t) \rangle_2 , \quad \text{for almost every } t \in E_k , $$

which contradicts (3.1), since $\bar{u}(t) \in \tilde{U}$ for each $t \in E_k$.

In summary, the proof of Theorem 1.2 is complete.