Infeasible and Critically Feasible Optimal Control

We consider optimal control problems involving two constraint sets: one consisting of linear ordinary differential equations with the initial and terminal states specified, and the other defined by control variables constrained by simple bounds. When the intersection of these two sets is empty, typically because the bounds on the control variables are too tight, the problem becomes infeasible. In this paper, we prove that, under a controllability assumption, the ``best approximation'' optimal control minimizing the distance (and thus finding the ``gap'') between the two sets is of bang--bang type, with the ``gap function'' playing the role of a switching function. The critically feasible control solution (obtained with the smallest control bound for which the problem is feasible) is also shown to be of bang--bang type. We present the full analytical solution of the critically feasible problem involving the (simple but rich enough) double integrator. We illustrate the overall results numerically on various challenging example problems.


Introduction
Optimal control problems are infinite-dimensional optimization problems involving processes evolving in time. Infeasibility in optimal control arises in many situations, most typically when resources for a process are overly limited: for example, an insufficient supply of insecticides for a dengue epidemic [30] or a highly restricted driving motor capacity of a vehicle [34]. Infeasibility can also arise when one aims to achieve initial or terminal states which are not realistic, or when there are state constraints which are too restrictive [36]. In this paper, we consider infeasible and critically feasible optimal control problems, where the dynamics are governed by linear ordinary differential (state) equations with initial and end states specified, and the control variables are constrained by simple bounds.
Infeasibility is also widely encountered in finite-dimensional optimization problems: errors in measurements, for example the noise in images taken during computed tomography, may give rise to an inconsistent set of equations in a pertaining optimization model, making the problem infeasible [31]. In [14], and in its extension [13], algorithms incorporating sequential quadratic programming methods are proposed for infeasible finite-dimensional nonconvex optimization problems. Under a set of conditions, these algorithms are shown to converge to an infeasible stationary point, minimizing a measure of infeasibility [13,14].
In their paper [6], Bauschke and Moursi study the Douglas-Rachford (DR) algorithm for finding a point in the intersection of two nonempty closed and convex sets in (possibly infinite-dimensional) Hilbert spaces. They show that, in the case when the intersection of the two sets is empty, i.e., when the problem is infeasible, the DR algorithm finds a pair of points in the respective sets which minimize (as a measure of infeasibility) the distance between the two sets; in other words, the DR algorithm finds the "gap" between the two constraint sets, assuming that the gap is attained. The work in [6] has been further generalized in [7]. Indeed, the authors in [7] proved that, under mild assumptions (see also [24]), the DR algorithm can find a generalized solution (also known as a normal solution; see [4, Definition 3.7]) for inconsistent convex optimization problems, i.e., when the solution set is empty.
The results in [7] are not only applicable to infinite-dimensional problems (e.g., optimal control problems); they also cover the case when one of the constraints is hard, that is, when a particular constraint must be satisfied. In that case, a solution satisfying the hard constraint, that is, an implementable solution, can be returned. In the present paper, we use the idea of minimizing the distance between the two sets, namely finding the "gap" between the two sets (assuming it is attained), as our motivation in finding a best approximation solution to infeasible optimal control problems, so that the optimal control we find is also implementable.
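The notion of a "gap" between two disjoint closed convex sets, and of a best approximation pair attaining it, can be illustrated in finite dimensions. The sketch below is our own toy example (not the Douglas-Rachford method of [6,7]): it alternates orthogonal projections between a box and a disjoint line in IR², which, by the classical Cheney-Goldstein result, converges to a best approximation pair; the distance between the limiting points is the gap. The particular sets and numbers are illustrative assumptions only.

```python
import numpy as np

# Two disjoint closed convex sets in R^2 (illustrative choices):
#   A: the line y = x + 3 (an affine set), B: the box [-1, 1]^2.
def proj_A(p):                       # orthogonal projection onto the line y = x + 3
    x, y = p
    return np.array([(x + y - 3) / 2, (x + y + 3) / 2])

def proj_B(p):                       # orthogonal projection onto the box
    return np.clip(p, -1.0, 1.0)

# Alternating projections (Cheney--Goldstein): converges to a best
# approximation pair (a*, b*) attaining the distance between A and B.
b = np.zeros(2)
for _ in range(50):
    a = proj_A(b)
    b = proj_B(a)

gap = np.linalg.norm(a - b)          # the "gap" between the two sets
print(a, b, gap)                     # a* = (-1.5, 1.5), b* = (-1, 1), gap = 1/sqrt(2)
```

Here the box plays the role of the hard constraint set: the point b* is the implementable element of the box closest to the affine set.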
The optimal control problems we consider have two constraint sets: one involves the ODE with specified initial and end states (this set is an affine subspace, which is closed and convex) and the other involves the box constraint on the control (this set is a box, which is also closed and convex). We pose the problem of finding a best approximation pair for the infeasible problem as one of minimizing the distance between these two constraint sets and finding the "gap". In practical optimal control problems, the control variable is expected to satisfy the simple bounds imposed on it; therefore, we regard the box as a hard constraint set.
A best approximation pair for Problem (Pf) below (see equation (10) below) can be expressed as a solution of a minimization problem which is strongly convex with respect to one of the variables (see Lemmas 1-2).
We prove that, under a controllability assumption, the control variable that belongs to the box and solves the best approximation problem is of bang-bang type, i.e., the value of the control variable switches between its lower and upper bounds. Interestingly, the sign of a gap function component determines which bound value the corresponding control variable component in the box must take; in other words, a gap function component plays the role of a switching function. We also formulate the problem of finding a critically feasible solution, i.e., a solution for the least bound on the control resulting in a nonempty intersection of the two constraint sets. We prove that the critically feasible optimal control is also of bang-bang type. For the case of a double integrator problem, which is often employed as part of case studies for optimal control, we derive the full analytical solution of the critically feasible optimal control problem.
For a numerical illustration of the results, both for the critically feasible and infeasible cases, we study example problems involving (i) a double integrator, (ii) a damped oscillator and (iii) a machine tool manipulator, in the order of increasing numerical difficulty.
The paper is organized as follows. In Section 2, we introduce the optimal control problem and define the two constraint sets, namely the affine space and the box. In Section 3, we define the problem of best approximation, provide the maximum principle, discuss controllability and existing results, and derive the first main result of the paper on infeasible problems in Theorem 3. In Section 4, we introduce the concept of critical feasibility and provide the second main result in Theorem 4. We also provide the full critically feasible solution for a problem involving the double integrator in Theorem 5. In Section 5, we carry out numerical experiments on various example problems to illustrate the results of the paper. Finally, in Section 6, we provide concluding remarks and comment on future lines of research.

Preliminaries
We consider optimal control problems where the aim is to find a control u which minimizes a general functional
$$\int_0^1 f(t, x(t), u(t))\,dt \eqno(1)$$
subject to the differential equation constraints
$$\dot{x}(t) = A(t)\,x(t) + B(t)\,u(t)\,, \quad \mbox{for a.e. } t \in [0,1], \eqno(2)$$
with ẋ := dx/dt, and the boundary conditions
$$x(0) = x_0\,, \qquad x(1) = x_f\,. \eqno(3)$$
In the optimal control problem above the time horizon is set to be [0, 1], but without loss of generality it can be taken as any interval [t_0, t_f], with t_0 and t_f specified. The integrand function f in (1) is assumed to be sufficiently regular. We define the state variable vector x : [0, 1] → IR^n with x(t) := (x_1(t), ..., x_n(t)) ∈ IR^n and the control variable vector u : [0, 1] → IR^m with u(t) := (u_1(t), ..., u_m(t)) ∈ IR^m. It is realistic, especially in practical situations, to consider restrictions on the values u is allowed to take. In many applications, it is common practice to impose simple bounds on the components of u(t); namely,
$$a_i(t) \le u_i(t) \le \bar{a}_i(t)\,, \quad \mbox{for a.e. } t \in [0,1],\ \ i = 1, \ldots, m, \eqno(4)$$
where, respectively, the lower and upper bound functions a_i, ā_i : [0, 1] → IR are continuous and a_i(t) < ā_i(t), for all t ∈ [0, 1], i = 1, ..., m. We define for convenience a := (a_1, ..., a_m) and ā := (ā_1, ..., ā_m), and write in concise form
$$a(t) \le u(t) \le \bar{a}(t)\,, \quad \mbox{for a.e. } t \in [0,1], \eqno(5)$$
an expression alternative but equivalent to (4).
The objective functional in (1) and the constraints in (2)-(3) and (4) can be put together to present the optimal control problem as follows.
$$\mbox{(P)}\qquad \begin{array}{rl} \min & \displaystyle\int_0^1 f(t, x(t), u(t))\,dt \\[4pt] \mbox{s.t.} & \dot{x}(t) = A(t)\,x(t) + B(t)\,u(t)\,, \quad x(0) = x_0\,,\ \ x(1) = x_f\,, \\[2pt] & a(t) \le u(t) \le \bar{a}(t)\,, \quad \mbox{for a.e. } t \in [0,1]. \end{array}$$
We split the constraints of Problem (P) into two sets:
$$A := \big\{ u \in L^2(0,1;{\rm I\!R}^m) \,:\, \exists\, x \mbox{ such that } \dot{x}(t) = A(t)\,x(t) + B(t)\,u(t)\,,\ x(0) = x_0\,,\ x(1) = x_f \big\}, \eqno(6)$$
$$B := \big\{ u \in L^2(0,1;{\rm I\!R}^m) \,:\, a(t) \le u(t) \le \bar{a}(t)\,, \mbox{ for a.e. } t \in [0,1] \big\}. \eqno(7)$$
We assume that the control system ẋ(t) = A(t)x(t) + B(t)u(t) is controllable (see the precise definition in Section 3.3). Then there exists a (possibly not unique) u(•) such that, when this u(•) is substituted, the boundary-value problem given in A has a solution x(•). In other words, A ≠ ∅. Also, clearly, B ≠ ∅. Recall that ϕ is affine, so the constraint set A is an affine subspace. We note that by [9, Corollary 1],
$$A \mbox{ is closed.} \eqno(8)$$
Given that B is a box, the constraints turn out to be two convex sets in Hilbert space. In particular, we note that B is closed in L^2(0, 1; IR^m). It will be convenient to use the expression
$$B(t)\,u(t) = \sum_{i=1}^m b_i(t)\,u_i(t)\,,$$
where b_i(t) is the ith column of the matrix B(t), interpreted as the column vector associated with the ith control component u_i.
If A ∩ B ≠ ∅, then one has a feasible LQ optimal control problem. The feasibility problem is posed as one of finding an element in A ∩ B, namely:
$$\mbox{find}\ \ u \in A \cap B\,. \eqno(9)$$
If, however, A ∩ B = ∅, then the problem is said to be infeasible. The feasibility problem in (9) obviously has no solution in this case, but in Section 3 we will pose the problem of finding (in some sense) a best approximation solution.

Best Approximation Solution to the Infeasible Problem
Consider the case when A ∩ B = ∅. We define a best approximation pair as (u*_A, u*_B) ∈ A × B which minimizes the squared distance between the two sets. Namely, (u*_A, u*_B) is in this case required to solve
$$\min_{(u_A,\,u_B)\,\in\, A\times B}\ \|u_A - u_B\|_{L^2}^2\,, \eqno(10)$$
where ‖•‖_{L^2} is the L^2 norm. In other words, we want to minimize the "gap" between the two sets. Observe that A − B is convex, closed (by, e.g., [5, Proposition 3.42]), and nonempty. Therefore, it follows from [2, Section 2] and the fact that A − B is closed that the minimum in (10) is attained. We define the gap (function) vector (see [6])
$$v := u_A^* - u_B^*\,. \eqno(12)$$
Using u_A = v + u_B and the definitions of A and B in (6)-(7), the problem in (10) can be rewritten in the format of a classical, or standard, optimal control problem as follows.
$$\mbox{(Pf)}\qquad \min_{(v,\,u_B)}\ \|v\|_{L^2}^2 \quad \mbox{subject to} \quad v + u_B \in A\,,\ \ u_B \in B\,.$$
Lemma 1 The constraint set of Problem (Pf) is convex and closed, both strongly and weakly.
Proof. The constraint set of Problem (Pf) can be written as follows:
$$D := \{(v, u_B) \,:\, v + u_B \in A\,,\ u_B \in B\} = \Psi^{-1}(A \times B)\,,$$
where A, B are as in (6) and (7), respectively, and the map Ψ is defined by Ψ(z, x) := (z + x, x). The map Ψ is a linear bijection which is continuous in L^2. The convexity of D now follows from the fact that Ψ^{-1} is linear and A × B (being the product of an affine set and a box) is convex. Note also that A × B is closed because each factor is closed. Indeed, A is closed by (8), and B is closed in L^2(0, 1; IR^m). ✷
Lemma 2 Problem (Pf) has a solution; moreover, its objective function is strongly convex in the variable v.
Proof. Note first that (Pf) can be equivalently written as having the objective function
$$h(v, u_B) := \|v\|_{L^2}^2 + \iota_B(u_B)\,,$$
where ι_B is the indicator function of the set B and B is as in (7). Now the second statement follows directly from the fact that h is strongly convex in the variable v. We proceed next to prove the first statement. Since the functions a, ā are continuous, the set B is bounded, and hence h is coercive in both variables. Consider the set D as in the proof of Lemma 1. The coerciveness of h allows us to find a closed ball B[0, R] such that a solution of (Pf) (if any) must be in D_0 := D ∩ B[0, R]. Since h_1 := ‖•‖²_{L²} is continuous and convex, it is weakly lower-semicontinuous. Recall that the set B is closed and convex, and hence weakly closed; therefore, the function h_2 := ι_B is weakly lower-semicontinuous. Altogether, h = h_1 + h_2 is weakly lower-semicontinuous. We can now consider the problem
$$\mbox{(PD)}\qquad \min_{(v,\,u_B)\,\in\, D_0}\ h(v, u_B)\,.$$
By construction, the solution set of Problem (PD) is S_f. Since h is weakly lower-semicontinuous and D_0 is weakly compact, Problem (PD) has a solution, and hence Problem (Pf) has (the same) solution(s). ✷

Maximum Principle for Problem (Pf)
In what follows, we derive the necessary conditions of optimality for Problem (Pf) using the maximum principle. Various forms of the maximum principle and their proofs can be found in a number of reference books; see, for example, [28, Theorem 1], [20, Chapter 7], [33, Theorem 6.4.1], [26, Theorem 6.37], and [17, Theorem 22.2]. We state the maximum principle suitably utilizing these references for our setting and notation.
The expression in (20) prompts two types of optimal control that are widely studied in the optimal control literature, as elaborated next. If v_i(t) ≠ 0 for a.e. t ∈ [t′, t″] ⊂ [0, 1], t′ < t″, then the optimal control in (20) is referred to as being of bang-bang type in the interval [t′, t″]. In this case, the optimal control might switch from u_{B,i}(t) = a_i(t) to u_{B,i}(t) = ā_i(t), or vice versa, at finitely many switching times in [t′, t″]. However, if v_i(t) = 0 for a.e. t ∈ [s′, s″] ⊂ [0, 1], s′ < s″, then the optimal control is said to be of singular type in the interval [s′, s″]. Note that in general the optimal control might also switch from a bang arc to a singular arc, and vice versa.

Bang-Bang and Singular Types of Optimal Control
The optimality conditions just derived in (19)-(20) for Problem (Pf) give rise to Theorem 3, stated further below. If the dynamical control system is controllable, a definition of which is provided next, then the theorem eliminates the possibility of singularity for u_i, i.e., the condition v_i(t) = 0 in (20) can hold only at isolated time instants, and expresses the optimal u_{B,i}(•) as a control of bang-bang type.
Before stating Theorem 3 on the best approximation solution we first discuss the concept of controllability and some existing results.

Controllability
The state equation, or the control system,
$$\dot{x}(t) = A(t)\,x(t) + B(t)\,u(t)\,, \eqno(21)$$
is said to be controllable on a finite interval [t_0, t_f] if, given any initial state x(t_0) = x_0, there exists a continuous control u(•) such that the corresponding solution of (21) satisfies x(t_f) = 0. The solution of the (uncontrolled) system ẋ(t) = A(t)x(t), with x(0) = x_0, is given by x(t) = Φ(t, 0) x_0, where Φ(•,•) is the state transition matrix. Theorem 1 states that the system in (21) is controllable on [t_0, t_f] if and only if the n × n (Gramian) matrix
$$W(t_0, t_f) := \int_{t_0}^{t_f} \Phi(t_0, \tau)\,B(\tau)\,B^T(\tau)\,\Phi^T(t_0, \tau)\,d\tau$$
is invertible. The matrix W(t_0, t_f) defined above is called the controllability Gramian, and in general it is not easy to compute, making Theorem 1 rather impractical. Hence, we present next a computable version of this result. Suppose that A(•) and B(•) are not only continuous but also "sufficiently" smooth. Let
$$K_0(t) := B(t)\,, \eqno(22a)$$
$$K_j(t) := A(t)\,K_{j-1}(t) - \dot{K}_{j-1}(t)\,, \quad j = 1, 2, \ldots \eqno(22b)$$
With these definitions, a (much more easily) computable version of Theorem 1 can be given (Theorem 2), stated in terms of the rank condition
$$\mbox{rank}\,\big[\,K_0(t_c)\ \ K_1(t_c)\ \cdots\ K_{n-1}(t_c)\,\big] = n\,, \quad \mbox{for some } t_c \in [t_0, t_f]. \eqno(23)$$
Checking (23) is in general far easier than checking the invertibility of W(t_0, t_f).
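For a concrete illustration (our own sketch, using the standard Gramian convention W(t_0, t_f) = ∫ Φ(t_0, τ)B(τ)B^T(τ)Φ^T(t_0, τ) dτ, which the reader should check against the definition above), consider the double integrator ẋ_1 = x_2, ẋ_2 = u on [0, 1]. Here Φ(0, τ) = e^{−Aτ} can be written down by hand, and trapezoidal quadrature confirms that the Gramian is invertible, hence the system is controllable on [0, 1]:

```python
import numpy as np

# Double integrator: A = [[0, 1], [0, 0]], B = [[0], [1]].
# With t0 = 0: Phi(0, tau) = expm(-A*tau) = [[1, -tau], [0, 1]], so
# Phi(0, tau) @ B = [[-tau], [1]] and the Gramian integrand is
# [[tau^2, -tau], [-tau, 1]].
taus = np.linspace(0.0, 1.0, 100001)
dt = taus[1] - taus[0]
integrand = np.array([[taus**2, -taus], [-taus, np.ones_like(taus)]])

# Trapezoidal rule, applied elementwise to the 2x2 matrix of integrands:
W = 0.5 * dt * (integrand[..., :-1] + integrand[..., 1:]).sum(axis=-1)

# Closed form: W(0, 1) = [[1/3, -1/2], [-1/2, 1]], det W = 1/12 > 0.
print(W, np.linalg.det(W))
```

The positive determinant certifies invertibility of W(0, 1), in agreement with the rank test (23) for this system.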
Component-wise Controllability. We call the control system in (21) controllable with respect to u_i on [t_0, t_f] if, given any initial state x(t_0) = x_0, there exists a continuous ith component u_i(•) of the control u(•) such that the corresponding solution of
$$\dot{x}(t) = A(t)\,x(t) + b_i(t)\,u_i(t) \eqno(24)$$
satisfies x(t_f) = 0. Then clearly Theorems 1 and 2, with B(t) replaced by b_i(t), hold for the system in (24), as this component-wise notion of controllability is stronger than the more general notion defined originally.

Best approximation solution
Next, we provide in a theorem the best approximation solution in the set B, in the case when the constraint sets A and B are disjoint.
Theorem 3 (Gap Vector and the Best Approximation Control in B) With the notation of Problem (Pf), assume that A ∩ B = ∅. Then the optimal gap vector is given by v(t) = −B^T(t) λ(t), for all t ∈ [0, 1], where λ(•) solves (15). Moreover, suppose that A(•) and B(•) are sufficiently smooth and that the control system (21) is controllable with respect to u_i on any [t′, t″] ⊂ [0, 1], t′ < t″. Then
$$u_{B,i}(t) = \left\{ \begin{array}{ll} \bar{a}_i(t)\,, & \mbox{if } v_i(t) > 0\,, \\ a_i(t)\,, & \mbox{if } v_i(t) < 0\,, \end{array} \right. \eqno(25)$$
for a.e. t ∈ [0, 1]. In other words, such u_{B,i} is of bang-bang type.
Proof. Let p_{i,0}(t) := b_i(t). Equations (26a)-(26c) can be rewritten as in (27a)-(27b), where the vectors p_{i,k}(t) are given by the recursion in (28a)-(28b). Note that p_{i,k}(t), k = 1, 2, ..., are the same as K_k(t), k = 1, 2, ..., in (22a)-(22b), but with B(t) replaced by b_i(t). From (27a)-(27b), one gets (29), where Q_{i,c}(t) is the matrix collecting the vectors p_{i,k}(t). Suppose that the control system is controllable with respect to u_i on [t′, t″]. Then, by Theorem 2, there exists some t_c ∈ [t′, t″] such that rank Q_{i,c}(t_c) = n. This implies from (29) that λ(t_c) = 0, which in turn implies, by the ODE in (15), that λ(t) = 0 for all t ∈ [0, 1]; this is not allowed by the maximum principle. Therefore one cannot have v_i(t) = 0 for a.e. t ∈ [t′, t″] ⊂ [0, 1], giving rise to (25). ✷
Remark 1 (The Best Approximation Control in A) Consider the expressions for the ith component of the optimal gap vector v(•) and the ith component of the best approximation control u_B(•), given as in (12) and (25), respectively. One can then simply express the ith component of the best approximation control in the affine set A as
$$u_{A,i}(t) = v_i(t) + u_{B,i}(t)\,,$$
for a.e. t ∈ [0, 1]. We observe that while v_i(•) as given in (18) is continuous, u_{A,i}(•) is piecewise continuous. ✷
Remark 2 (Time-invariant Systems) Suppose that the control system in (21) is time-invariant; namely, A(t) = A and B(t) = B, with A and B constant matrices, for all t ∈ [0, 1]. This is a widely encountered case in control theory, although the time-varying case is more general. We note that, in (28a)-(28b), ṗ_{i,k−1} = 0, and so we can write p_{i,k} = A^k b_i, k = 1, 2, .... In turn, the rank condition becomes
$$\mbox{rank}\,\big[\,b_i\ \ A\,b_i\ \cdots\ A^{n-1}\,b_i\,\big] = n\,. \eqno(32)$$
The condition in (32) for time-invariant control systems is referred to as the Kalman controllability rank condition [29] in control theory. In conclusion, for time-invariant systems, if the rank condition in (32) holds, then the control component u_{B,i} for the infeasible optimal control problem is of bang-bang type as given in (25). ✷
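The Kalman rank condition (32) is easy to check numerically. The snippet below is our own illustration: it forms the controllability matrix [b_i, A b_i, ..., A^{n−1} b_i] for the double integrator, which satisfies the condition, and for a decoupled system driven through only one state, which does not; both systems are illustrative choices.

```python
import numpy as np

def kalman_rank(A, b):
    """Rank of the controllability matrix [b, Ab, ..., A^{n-1} b]."""
    n = A.shape[0]
    cols, v = [], b.reshape(n, 1)
    for _ in range(n):
        cols.append(v)
        v = A @ v
    return np.linalg.matrix_rank(np.hstack(cols))

# Double integrator: controllable (rank 2 = n).
A1, b1 = np.array([[0.0, 1.0], [0.0, 0.0]]), np.array([0.0, 1.0])

# Two decoupled states with the control entering only the first: uncontrollable.
A2, b2 = np.eye(2), np.array([1.0, 0.0])

print(kalman_rank(A1, b1), kalman_rank(A2, b2))   # 2 and 1
```

For the uncontrollable pair, the second state evolves independently of the control, which is exactly what the rank deficiency detects.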

Critical Feasibility
Suppose that ā_i(t) = a > 0 and a_i(t) = −a for all t ∈ [0, 1] and i = 1, ..., m. Since it is assumed that A ≠ ∅, if a = ∞ or is large enough, the optimal control problem given in (1)-(4) is feasible, i.e., A ∩ B ≠ ∅. By the same token, if a is small enough, the problem is infeasible for some specified initial and terminal states, i.e., A ∩ B = ∅. In fact, from the geometry of the sets A and B_a, where B_a indicates the explicit dependence of B on a, there exists a critical bound a_c such that A ∩ B_a = ∅ for all a < a_c, since B_a is strictly contained in (or strictly smaller than) B_{a_c}. By this definition, when a = a_c we say that the problem is critically feasible.
We are interested in knowing when a problem becomes critically feasible. In other words, we want to find the smallest value a_c of a for which the problem is feasible. We can pose this question as a new (parametric) optimal control problem, where the parameter a is to be minimized subject to the constraint sets A and B_a:
$$\mbox{(Pcf)}\qquad \min_{a \,\ge\, 0,\ u}\ a \quad \mbox{subject to} \quad u \in A \cap B_a\,,$$
the optimal value of which will be a_c.

Remark 3
We observe that |u_i(t)| ≤ a, i = 1, ..., m, can be written as ‖u(t)‖_∞ ≤ a, where ‖•‖_∞ is the ℓ_∞-norm in IR^m. By also observing that the problem of "minimizing the value of the variable a subject to ‖u(t)‖_∞ ≤ a, for a.e. t ∈ [0, 1]," is equivalent to "minimizing the L^∞-norm of u," Problem (Pcf) can be re-written as follows.
$$\mbox{(Pcf1)}\qquad \min_{u \,\in\, A}\ \|u\|_{L^\infty}\,.$$
It is interesting to note that Problem (Pcf1) is a generalized form of the problem studied in [22]. In what follows we will use the procedure in [22]. ✷
Before we apply the maximum principle, it is convenient to re-write Problem (Pcf) as an optimal control problem in standard (or classical) form. First, we define a new state variable y(t) := a and a new control variable w(t) := u(t)/a.
Problem (Pcf) can then be re-cast using these new variables as Problem (Pcf2), with the associated Hamiltonian function defined in the standard way.
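Since the display for (Pcf2) is not reproduced above, a plausible standard-form statement, consistent with the substitutions y(t) := a and w(t) := u(t)/a and offered here only as a sketch, is:

```latex
\mbox{(Pcf2)}\qquad
\begin{array}{rl}
\min & y(1) \\[2pt]
\mbox{s.t.} & \dot{x}(t) = A(t)\,x(t) + y(t)\,B(t)\,w(t), \quad \mbox{for a.e. } t\in[0,1], \\[2pt]
 & \dot{y}(t) = 0, \\[2pt]
 & x(0) = x_0, \quad x(1) = x_f, \\[2pt]
 & -1 \le w_i(t) \le 1, \quad i = 1,\ldots,m, \ \mbox{for a.e. } t\in[0,1].
\end{array}
```

Here the constant state y carries the bound a, so minimizing y(1) minimizes the bound, and the control constraint becomes the fixed unit box for w.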
We show next that the solution of (Pcf) is of bang-bang type; namely, there is no nontrivial subinterval of [0, 1] where b_i^T(t)λ(t) vanishes almost everywhere.

Solution to the critically feasible problem
Theorem 4 (Critically Feasible Control) Suppose that the system and control matrices A(•) and B(•) are sufficiently smooth. Assume that, for some i = 1, ..., m, the control system (21) is controllable with respect to u_i on any [s′, s″] ⊂ [0, 1], s′ < s″. Then the ith component u_i(•) of the critically feasible control for the optimal control problem in (1)-(4), with ā_i(t) = a_c and a_i(t) = −a_c, is given as
$$u_i(t) = \left\{ \begin{array}{rl} a_c\,, & \mbox{if } b_i^T(t)\,\lambda(t) < 0\,, \\ -a_c\,, & \mbox{if } b_i^T(t)\,\lambda(t) > 0\,, \end{array} \right. \eqno(38)$$
for a.e. t ∈ [0, 1], where λ(•) solves (15). In other words, such u_i is of bang-bang type.

A double integrator problem
References [3,10] studied applications of splitting and projection methods to the feasible problem of finding the so-called minimum-energy control of the double integrator, stated as
$$\mbox{(PDI)}\qquad \begin{array}{rl} \min & \displaystyle\frac12 \int_0^1 u^2(t)\,dt \\[4pt] \mbox{s.t.} & \dot{x}_1(t) = x_2(t)\,, \quad \dot{x}_2(t) = u(t)\,, \\[2pt] & x(0) = (s_0, v_0)\,, \quad x(1) = (s_f, v_f)\,, \\[2pt] & |u(t)| \le a\,, \quad \mbox{for a.e. } t \in [0,1]. \end{array}$$
Although Problem (PDI) constitutes a relatively simple instance of an optimal control problem, a solution to it can, in general, only be found numerically. This is the first reason why we find it interesting. Secondly, (PDI) acts as a building block in, for example, the problem of finding cubic spline interpolants with constrained acceleration, an active area of research in numerical analysis and approximation theory. A much wider range of optimal control problems involving the double integrator has been studied in the relatively recent book [23]; however, it does not include Problem (PDI). Problem (PDI) is simple and yet rich enough for introducing and illustrating many basic and new concepts, and for testing new numerical approaches in optimal control; see, in addition to [3,10,23], also [11,21].
With a large enough a (so that the constraint |u(t)| ≤ a never becomes active for any t ∈ [0, 1]), Problem (PDI) can be solved analytically to find a cubic curve x_1(t) satisfying the initial position and velocity s_0 and v_0, and the terminal position and velocity s_f and v_f, respectively; see [3] for the working of such an unconstrained solution. A small enough a, on the other hand, restricts the values the function u can take, which rules out an analytical solution and necessitates a numerical procedure for finding an approximate solution. All of this testifies to the practical significance of such a simple-looking problem as (PDI).
Critical Bound a_c: Going From Feasible to Infeasible. For the numerical experiments in [3], the special case when s_0 = s_f = v_f = 0 and v_0 = 1 was considered. The feasible optimal control for this instance of Problem (PDI) is of the form given in (39), where 0 ≤ t_1 < t_2 ≤ 1 are the so-called junction times. When the value of a is too small, Problem (PDI) becomes inconsistent; namely, there exists a critical value a_c > 0 such that, when a < a_c, Problem (PDI) is infeasible. The control constraint is thus active when a ∈ [a_c, 4); this is the consistent, or feasible, case in which the bound on u is still active. If a = 4 or larger, then t_1 = 0 and t_2 = 1 and the solution is the same as in the case when u is unconstrained; in other words, when a ≥ 4 the bound constraint on u becomes superfluous. When a ∈ (a_c, 4), the solution u of Problem (PDI) given in (39) is continuous over the time horizon [0, 1]. On the other hand, when a = a_c, as stated in Theorem 4, the control solution has to be of bang-bang type, i.e., discontinuous. In Remark 2.1 of [3], the critical value was observed on the basis of the numerical experiments conducted, without further elaboration. It is also reported in the same remark that, when a = a_c, the unique feasible solution appears to be bang-bang; in particular, u(t) switches once from −a_c to a_c at the switching time, confirming our statement above that the optimal control u in this case is discontinuous.
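The threshold a = 4 can be sanity-checked by a direct calculation (our own derivation, not taken from [3]): for s_0 = s_f = v_f = 0 and v_0 = 1, the unconstrained minimum-energy control is linear in t, and imposing the boundary conditions gives u(t) = 6t − 4, whose largest magnitude on [0, 1] is |u(0)| = 4; hence any bound a ≥ 4 is inactive.

```python
import numpy as np

# Unconstrained minimum-energy control for the double integrator with
# x(0) = (0, 1), x(1) = (0, 0): u(t) = 6 t - 4 (our derivation).
t = np.linspace(0.0, 1.0, 100001)
u = 6.0 * t - 4.0

# Exact (polynomial) integration of x1' = x2, x2' = u:
x2 = 1.0 + 3.0 * t**2 - 4.0 * t      # x2(t) = v0 + integral of u
x1 = t + t**3 - 2.0 * t**2           # x1(t) = integral of x2

print(x1[-1], x2[-1], np.abs(u).max())   # 0.0, 0.0, 4.0
```

Both terminal conditions are met exactly, and the maximum control magnitude is 4, consistent with the claim that the bound is superfluous for a ≥ 4.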
How to Find the Solution for a_c and t_c. The Hamiltonian function H : IR³ × IR × IR³ → IR for Problem (Pcf2) emanating from Problem (PDI) is formed in the standard way, where (λ_1(t), λ_2(t), µ(t)) ∈ IR³ is the adjoint variable (or costate) vector satisfying λ̇_1(t) = 0 and λ̇_2(t) = −λ_1(t), together with the transversality conditions. This leads to the solutions
$$\lambda_1(t) = c_1\,, \qquad \lambda_2(t) = -c_1\,t - c_2\,, \eqno(43)$$
where c_1 and c_2 are unknown real constants.
The following is a straightforward corollary to Theorem 4 for the double integrator problem.
Corollary 1 (Critically Feasible Control) The critically feasible optimal control u_c for the double integrator problem is of bang-bang type with at most one switching; namely, u_c takes the values −a_c and a_c and switches between them at most once, at the switching time t_c.
Proof. The corollary follows from the expression in (38) in Theorem 4 and the linearity of λ_2 in (43) (which implies that λ_2 can change sign at most once). ✷
A similar line of proof with a < a_c (the infeasible case) results in the following corollary to Theorem 3.

Corollary 2 (Best Approximation Control in B)
The best approximation optimal control u_B for the double integrator problem is of bang-bang type with at most one switching; namely, u_B takes the values −a and a and switches between them at most once, at the switching time t_s.
The theorem we present below provides the full analytical solution to Problem (PDI) when a = a_c. Theorem 5 (Full Critically Feasible Solution to Problem (PDI)) gives the critical bound a_c and the control, with the parameter r and the switching time t_c specified in the following two cases.

Numerical Experiments
For computations numerically solving the three problems in Sections 5.1-5.3, we employ the AMPL-Ipopt computational suite: AMPL is an optimization modelling language [18] and Ipopt is an interior point optimization software [35] (version 3.12.13 is used here).
The suite is commonly utilized to solve discretized optimal control problems. We discretize the optimal control problems (Pf) and (Pcf) using the Euler scheme, with the number of time discretization nodes (or time partition points) set in most cases to 2000. The number of these nodes is increased (as reported in situ) only when a_c (in the case of a critically feasible solution) needs to be reported with higher accuracy. The Euler scheme is more suitable than higher-order Runge-Kutta discretizations for these problems: the solutions exhibit bang-bang controls, which make the state variable solutions only of class C^0. Numerical chatter is evident when a higher-order discretization scheme, such as the trapezoidal rule, is used. With 2000 grid points, the resulting large-scale finite-dimensional problems have about 6000 variables and 4000 constraints for the double integrator and the damped oscillator problems, and 16000 variables and 4000 constraints for the machine tool manipulator problem. We set the tolerance tol for Ipopt to 10^{−8} in all problems. We note that AMPL can also be paired with other optimization software, such as Knitro [15], SNOPT [19] or TANGO [1,8].
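To make the discretized problem concrete, the following minimal sketch (ours; it uses scipy's linprog rather than the paper's AMPL-Ipopt suite) solves an Euler discretization of Problem (Pcf) for the double integrator instance with x(0) = (0, 1) and x(1) = (0, 0). Eliminating the states leaves a linear program in (u_0, ..., u_{N−1}, a), whose optimal value approximates a_c = 1 + √2:

```python
import numpy as np
from scipy.optimize import linprog

# Euler discretization of x1' = x2, x2' = u on [0, 1] with N steps (h = 1/N),
# boundary conditions x(0) = (0, 1) and x(1) = (0, 0).  Eliminating the
# states leaves two equality constraints on u alone:
#   sum_j u_j         = -N     (from x2(1) = 0)
#   sum_j (N-1-j) u_j = -N^2   (from x1(1) = 0)
N = 1000
j = np.arange(N)

# Decision variables z = (u_0, ..., u_{N-1}, a); minimize a.
c = np.r_[np.zeros(N), 1.0]
A_eq = np.vstack([np.r_[np.ones(N), 0.0],
                  np.r_[(N - 1 - j).astype(float), 0.0]])
b_eq = np.array([-float(N), -float(N) ** 2])

# |u_j| <= a  <=>  u_j - a <= 0  and  -u_j - a <= 0.
A_ub = np.vstack([np.hstack([np.eye(N), -np.ones((N, 1))]),
                  np.hstack([-np.eye(N), -np.ones((N, 1))])])
b_ub = np.zeros(2 * N)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(None, None)] * N + [(0.0, None)])
print(res.fun)   # approximately a_c = 1 + sqrt(2) = 2.4142...
```

Because the optimum of a linear program is attained at a vertex, the computed control is bang-bang up to discretization, mirroring the theory.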
All three example problems in Sections 5.1-5.3 have a single control variable, and the constraint on the control is given as |u(t)| ≤ a, where a is a positive constant. The optimality condition (25) can then be written for this particular case as
$$u_B(t) = \left\{ \begin{array}{rl} a\,, & \mbox{if } v(t) > 0\,, \\ -a\,, & \mbox{if } v(t) < 0\,, \end{array} \right. \eqno(58)$$
for a.e. t ∈ [0, 1]. We will conveniently verify the optimality of the numerical results using (58).
First of all, we establish that the double integrator control system is controllable, since rank [b  Ab] = 2 = n. We have solved Problem (Pcf) to find the critically feasible solution depicted in Figure 1, where the solution curves for u_A, u_B and v are graphed. With 10000 time partition points, we obtained a_c ≈ 2.414 (2000 time partition points only yield a_c ≈ 2.4), which reconfirms the analytical solution a_c = 1 + √2 ≈ 2.4142 reported in Remark 5. We also observe (after zooming into the plot) that t_c ≈ 0.707, which agrees with t_c = 1/√2 ≈ 0.7071 in Remark 5 to three decimal places. The control u_A overlaps with u_B since, in the critically feasible case, A ∩ B ≠ ∅ and so the gap function v is the zero function. The graph of u = u_A = u_B in Figure 1a in turn verifies the analytical expression in (57). For the infeasible case, i.e., when a < a_c, it is no longer possible to obtain a solution analytically, even for the relatively simple-looking double integrator problem. In Figures 1b-1d, the solution plots for a = 2, 1.5 and 1 are shown, respectively. The solution for u_B is of bang-bang type with one switching, verifying Corollary 2. The role of v as a switching function is clear from these plots. We recall that v = −λ_2 by Theorem 3, and note that v appears linear in the plots since λ_2(t) is linear in t. The switching time for each case shown in Figures 1b-1d is found graphically as: (b) t_s ≈ 0.701, (c) t_s ≈ 0.693 and (d) t_s ≈ 0.685. Further numerical experiments with even smaller a suggest that, as a → 0, t_s → 2/3.
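The closed-form values a_c = 1 + √2 and t_c = 1/√2 can also be verified independently of the optimizer (our own check): integrating ẋ_1 = x_2, ẋ_2 = u exactly with the one-switch control u(t) = −a_c on [0, t_c) and +a_c on [t_c, 1] must return the state from (0, 1) to (0, 0).

```python
import numpy as np

a_c = 1.0 + np.sqrt(2.0)       # critical bound (analytical value)
t_c = 1.0 / np.sqrt(2.0)       # switching time (analytical value)

# Exact piecewise integration of x1' = x2, x2' = u, with x(0) = (0, 1),
# u = -a_c on [0, t_c) and u = +a_c on [t_c, 1]:
x2_tc = 1.0 - a_c * t_c
x1_tc = t_c - 0.5 * a_c * t_c**2
x2_1 = x2_tc + a_c * (1.0 - t_c)
x1_1 = x1_tc + x2_tc * (1.0 - t_c) + 0.5 * a_c * (1.0 - t_c)**2

print(x1_1, x2_1)              # both ~ 0: terminal condition x(1) = (0, 0) met
```

Up to floating-point rounding, both terminal components vanish, confirming that this bang-bang control with the stated a_c and t_c is feasible.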

Damped oscillator
While the ODE underlying the double integrator problem is z̈(t) = u(t), the ODE underlying the damped oscillator problem is z̈(t) + 2ζω_n ż(t) + ω_n² z(t) = u(t), with damping and stiffness terms added, where the parameter ω_n > 0 is the natural frequency and the parameter ζ ≥ 0 is the damping ratio of the system. When ζ = 0, the system is referred to as the (simple) harmonic oscillator. Defining the state variables x_1 := z and x_2 := ż (as in the case of the double integrator), one gets, for the case of the damped oscillator,
$$A = \left[ \begin{array}{cc} 0 & 1 \\ -\omega_n^2 & -2\,\zeta\,\omega_n \end{array} \right], \qquad B = \left[ \begin{array}{c} 0 \\ 1 \end{array} \right],$$
again using the notation in Problems (Pf) and (Pcf). As the time interval of the problem we take [0, 1], and set the boundary conditions to be the same as those of the double integrator problem: x(0) = (0, 1) and x(1) = (0, 0). We set the values of the parameters as ω_n = 20 and ζ = 0.1. First, we can assert that the damped oscillator control system is controllable since rank [b  Ab] = 2 = n. Numerical solutions to Problems (Pf) and (Pcf) are depicted in Figure 2: the critically feasible solution to (Pcf) appears in Figure 2a and the infeasible solutions to (Pf) appear in Figures 2b-2d. With 2 × 10^5 time partition points, we have obtained a_c ≈ 0.475, correct to three decimal places. No analytical solution is available. As expected from Theorem 4, the control u_B is of bang-bang type, and it overlaps with u_A. The control u_B appears to be periodic, with six switchings. Further experiments with various other boundary conditions result not only in different a_c but also in a different number of switchings; the control u_B, however, still appears to be periodic.
In Figures 2b-2d, we provide the respective solution plots for a = 0.4, 0.3 and 0.2. The solution for u_B is of bang-bang type, as asserted by Theorem 3. It is observed that not only does the control u_B appear to be periodic, but also the switching times seem to remain the same as those in the critically feasible solution in Figure 2a. The role of the gap function v as a switching function is clear from these plots, verifying (58).
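As an independent cross-check on a_c ≈ 0.475 (our own sketch, replacing the paper's Euler-Ipopt approach with an exact zero-order-hold discretization via the matrix exponential and a linear program), one can minimize the bound a over piecewise-constant controls; such controls can only overestimate a_c, and the estimate tightens as N grows:

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import linprog

wn, zeta, N = 20.0, 0.1, 500
A = np.array([[0.0, 1.0], [-wn**2, -2.0 * zeta * wn]])
B = np.array([[0.0], [1.0]])

# Exact zero-order-hold discretization over one step h = 1/N (Van Loan trick):
h = 1.0 / N
M = expm(np.block([[A, B], [np.zeros((1, 3))]]) * h)
Ad, Bd = M[:2, :2], M[:2, 2:]

# Terminal condition: x(1) = Ad^N x0 + sum_j Ad^{N-1-j} Bd u_j = 0, x0 = (0, 1).
cols = np.zeros((2, N))
P = np.eye(2)
for j in range(N - 1, -1, -1):          # invariant: P = Ad^{N-1-j}
    cols[:, j] = (P @ Bd).ravel()
    P = P @ Ad
b_eq = -(P @ np.array([0.0, 1.0]))      # after the loop, P = Ad^N

# Linear program: minimize a subject to the terminal equalities and |u_j| <= a.
c = np.r_[np.zeros(N), 1.0]
A_eq = np.hstack([cols, np.zeros((2, 1))])
A_ub = np.vstack([np.hstack([np.eye(N), -np.ones((N, 1))]),
                  np.hstack([-np.eye(N), -np.ones((N, 1))])])
res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * N), A_eq=A_eq, b_eq=b_eq,
              bounds=[(None, None)] * N + [(0.0, None)])
print(res.fun)   # close to the reported a_c of about 0.475
```

The zero-order-hold step is exact for the linear dynamics, so the only approximation is the restriction of u to piecewise-constant functions.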

Machine tool manipulator
A linear ODE model and an associated optimal control problem for a machine tool manipulator are described in [16]. Using the notation in Problems (Pf) and (Pcf), one has the system and control matrices A and B as given in [16]. Clearly, the control system has seven state variables and one control variable. In [16], the time interval for the dynamics is chosen to be [0, 0.0522], and the boundary conditions are imposed as x(0) = (0, 0, 0, 0, 0, 0, 0) and x(0.0522) = (0, 0.0027, 0, 0, 0.1, 0, 0). Moreover, the control variable is constrained as −2000 ≤ u(t) ≤ 2000, under which the problem is feasible.
A minimum-energy control model for this machine tool manipulator has also subsequently been studied in [9,10,12].
It can easily be verified that the machine tool manipulator control system is controllable, as rank [b  Ab  ···  A⁶b] = 7 = n. Numerical solutions to Problems (Pf) and (Pcf) are depicted in Figure 3: the critically feasible solution to (Pcf) is depicted in Figure 3a and the infeasible solutions to (Pf) appear in Figures 3b-3d. With 10000 time partition points, and implementing SNOPT (instead of Ipopt) with AMPL, we obtained a_c ≈ 1769.46; just on this occasion, Ipopt was not successful in obtaining a solution. As asserted by Theorem 4, the control u_B is of bang-bang type, and it overlaps with u_A. The control u_B appears to have five switchings.
In Figures 3b-3d, we provide the solutions for a = 1500, 1000 and 500.The solution for u B is of bang-bang type as asserted by Theorem 3. We observe that the number of switchings decreases with decreasing a: With a = 500, and by other experiments with a < 500, numerical solutions suggest that there is only one switching.The role of the gap function v as a switching function is clear from these plots for this example as well, verifying (58).

Conclusion
We have studied a class of infeasible and critically feasible optimal control problems and proved that the best approximation control in the box constraint set is of bang-bang type for each problem.We presented a full analytical solution for the critically feasible double integrator problem.We numerically illustrated these results on three increasingly difficult example problems.For numerical computations, we discretized the example problems and solved large-scale optimization problems using popular optimization software.
The numerical scheme described in this paper can further be improved: Since the solution structure is known to be of bang-bang type, one can solve problems discretized over a coarse time grid first, and then, once there is a rough idea about the number of switchings and the places of the switchings, a switching time parameterization technique (see [22,25,27]) can be implemented to find the switching times accurately.
The paper [7] motivated us to look at infeasible optimal control problems and to study the properties of the gap (function) vector. Reference [7] also studies, in a theoretical setting, an application of the Douglas-Rachford algorithm to infinite-dimensional infeasible optimization problems in Hilbert space. A next step would be to employ the Douglas-Rachford algorithm to solve the infeasible optimal control problems considered in the present paper. It would also be interesting to employ and test the Peaceman-Rachford algorithm [5, Section 26.4 and Proposition 28.8], which is another projection-type method, for the class of problems we have studied.
It would be interesting to extend the applications in this paper to the case of infeasible and critically feasible nonconvex optimal control problems, including those with state constraints, and carry out numerical experiments, although no theory is available yet for such more general classes of problems in infinite dimensions.
which is nothing but the case in part (b) of the theorem. Finally, one gets u(t) = sgn(c_1) a_c, so that v_f − v_0 = sgn(c_1) a_c, and thus a_c = |v_f − v_0|, as required by (51) and (52). Case (II): Suppose that c_1 = 0. Then λ_2(t) = −c_2, with c_2 ≠ 0, and by (44), u(t) = sgn(c_2) a_c for all t ∈ [0, 1] (no switching). The rest of the argument follows similarly to the case c_2 = 0 above, simply by replacing c_1 with c_2 in the expressions. This case also corresponds to, and proves, part (b) of the theorem. Case (III): Finally, suppose that λ_2(t) = −c_1 t − c_2, with both c_1 ≠ 0 and c_2 ≠ 0. Then, by Corollary 1, observing that λ_2(0) = −c_2,
Denote by S_f the set of solutions of Problem (Pf). Recall that, for any given subset C of a Hilbert space H, the indicator function of C, denoted by ι_C : H → IR ∪ {+∞}, is defined as ι_C(x) = 0 for every x ∈ C, and ι_C(x) = +∞ for every x ∉ C. We show in this section the main properties of Problem (Pf).
The set B is closed in L^2([0,1]; IR^m) because every sequence converging in L^2([0,1]; IR^m) has a subsequence converging a.e. in [0, 1]; the latter implies that every limit in the topology of L^2 must belong to B. Altogether, the set A × B is closed in L^2([0,1]; IR^m), and therefore D is closed because it is the preimage of a closed set under a continuous function. The fact that the closedness holds for both the strong and the weak topology follows from convexity. ✷

Φ(•,•) is the state transition matrix, or the fundamental matrix, of the differential equation. The system in (21) is controllable on [t_0, t_f] if and only if the n × n (Gramian) matrix W(t_0, t_f) is invertible.