A minimum-time obstacle-avoidance path planning algorithm for unmanned aerial vehicles

In this article, we present a new strategy to determine an unmanned aerial vehicle trajectory that minimizes its flight time in the presence of avoidance areas and obstacles. The method combines classical results from optimal control theory, i.e. the Euler-Lagrange Theorem and the Pontryagin Minimum Principle, with a continuation technique that dynamically adapts the solution curve to the presence of obstacles. We initially consider the two-dimensional path planning problem and then move to the three-dimensional one, and include numerical illustrations for both cases to show the efficiency of our approach.


Introduction
Unmanned aerial vehicle (UAV) path planning is a current research topic whose purpose is to make the UAV capable of performing a given mission autonomously. This topic has recently attracted the attention of many researchers due to the increasing number of potential civilian and military UAV applications, e.g. environmental monitoring, monitoring of areas affected by natural disasters such as earthquakes, floods, and fires, search and rescue operations in emergency situations, and remote sensing to create maps identifying areas of interest. Modern mission planning tools require the optimization of different UAV performance metrics, according to the given mission specifications, e.g. the flight time or the energy consumption in terms of fuel or battery power. A common issue in almost every path generation strategy is the presence of interdicted areas or obstacles in urban or orographic environments. Therefore, obstacle avoidance methods turn out to be crucial in UAV path planning (some details may be found in [1][2][3]). Recent techniques include cell decomposition, roadmap, artificial potential field, potential flow and optimal control methods, among others.
In cell decomposition methods, the operational area is partitioned into nonoverlapping, similarly shaped small regions called cells. The trajectory is generated by connecting the cell centres with straight lines from the starting point to the arrival point, using search algorithms like A*, PSO (Particle Swarm Optimization) and RRT (Rapidly-exploring Random Tree) [4][5][6][7]. The cells occupied by obstacles are excluded from the trajectory generation process. In roadmap methods, a network of straight lines connecting the starting point to the arrival point without passing through any obstacle (a roadmap) is generated. Then a search algorithm is employed to find the path which best fits the criteria required by the given mission [7,8]. In artificial potential field methods, the arrival point is treated as an attractive potential and the obstacles are treated as repulsive potentials. An artificial potential force is then computed and applied to the UAV as a control input. Therefore, the UAV is attracted towards the arrival point and repelled by the obstacles [9][10][11]. In potential flow methods, obstacle avoidance path planning is based on the concept that a uniform fluid flow creates a natural path around an obstacle [12].
Optimal control methods form an important framework to address obstacle avoidance path planning problems. They are divided into two main categories: direct methods and indirect methods [13][14][15]. We will recall the ideas behind them in the next section. In this paper, we focus our attention on the indirect approach, which relies on two results, the Euler-Lagrange Theorem and the Pontryagin Minimum Principle [13,16], stating necessary conditions for the optimal solution. Using these conditions, a Hamiltonian boundary value problem (BVP) is built, and among its solutions there is the optimal one. In more detail, the problem of our interest is the determination of a UAV trajectory that minimizes the associated flight time in the presence of avoidance areas and obstacles. An example addressing a similar problem via the indirect approach is [17], where the constrained optimal control equations have been cast as an unconstrained system formulated in such a way as to preserve the information on the constraints. However, the numerical solutions presented, obtained by means of the MATLAB code bvp4c, do not completely satisfy the constraints.
It should be noticed that the general purpose solvers available in most numerical computing environments often lack robustness and efficiency in handling the Hamiltonian BVPs emerging from the above procedure. A further delicate aspect concerns the choice of the initial guess to be used during the solution of the nonlinear problems arising from the discretization of the BVP, especially in presence of multiple local solutions. These drawbacks have somehow discouraged the study of indirect methods in favour of direct ones. Our approach with the indirect method is motivated by a consolidated experience in the computer simulation of Hamiltonian dynamical systems and in the numerical solution of BVPs [18][19][20][21][22][23][24][25]. We propose the use of the code bvptwp, available in MATLAB [26], Fortran and R [27][28][29][30], to efficiently solve BVPs arising from the application of indirect methods. Furthermore, we employ a new homotopy continuation technique to overcome the problem of selecting a suitable initial profile to get the optimal trajectory. The combination of these tools allows us to efficiently deal with several involved scenarios.
With this premise, the paper is organized as follows. In Section 2, we recall those concepts of optimal control theory that will be exploited later in deriving the Hamiltonian BVP. In Section 3, we study the two-dimensional path planning problem, the method we have devised to solve it, and the corresponding simulation results. Section 4 deals with a generalization to the three-dimensional path planning problem. Finally, in Section 5, we summarize our findings.

Optimal control problem definition
Let us consider a controlled dynamical system given by the set of ordinary differential equations

ẋ(t) = f(t, x(t), u(t)),   t ∈ [t_0, t_f],   (1a)

coupled with the separated boundary conditions

x(t_0) = x_0,   (1b)
Ψ(x(t_f)) = 0,   (1c)

where:
- t is the time variable; t_0 and t_f are the initial and final times, respectively (notice that t_f may be unknown since, for example, it is what we want to minimize);
- x(t) ∈ R^n is the vector of the state variables; x_0 and x(t_f) are the system initial and final states, respectively (notice that x(t_f) may be implicitly defined, since Ψ in (1c) may be a nonlinear function);
- u(t) ∈ R^m is the vector of the control variables, i.e. time functions provided as input to the system to control its evolution over time, hereafter simply referred to as the control;
- f : [t_0, t_f] × R^n × R^m → R^n is a suitably regular function defining the vector field of the controlled dynamical system.
The control u(t) influences the performance of the dynamical system by means of the functional

J(u) = φ(x(t_f)) + ∫_{t_0}^{t_f} L(t, x(t), u(t)) dt,   (2)

called the cost functional or performance index. This functional consists of the sum of two terms: the first one solely depends on the system final state, while the second one takes into account the overall system evolution over time. The integrand function L in the second term is called the Lagrangian function. An optimal control problem is an optimization problem which consists in determining the control u*(t) that minimizes or maximizes the performance index. Without loss of generality, we can always think of an optimal control problem as a minimization problem:

min_u J(u),

where the state variables x(t), for a given control u(t), must satisfy (1a)-(1c).

Methods for solving optimal control problems
The best known methods in the literature to solve an optimal control problem are divided into two categories: direct and indirect methods. The basic idea of direct methods consists in transforming the original problem into a standard nonlinear programming problem. This is achieved in two sequential phases, according to the discretize then optimize paradigm. First, the problem is discretized by introducing a temporal mesh and approximating the solution x(t) and the control u(t) by means of predetermined piecewise polynomial functions. Then, appropriate collocation conditions are imposed on them in order to obtain a solution that accurately approximates the continuous model. The collocation conditions define a finite set of equality constraints which, together with the cost functional, constitute a nonlinear programming problem. For more information on direct methods, we recommend a paper by Matthew Kelly [15], which covers all of the basics required to understand and implement direct collocation methods and collects a number of interesting references. Recent results about convergence of direct methods can be found in [31]. Interesting comparisons between direct and indirect methods are presented in [32,33].
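To make the discretize-then-optimize idea concrete, the following minimal Python/SciPy sketch applies trapezoidal direct collocation to a toy minimum-time steering problem (constant speed V, yaw control θ, no obstacles) of the kind studied later in this paper. The node count, solver choice and initial guess are illustrative assumptions, not the setup used in the paper:

```python
import numpy as np
from scipy.optimize import minimize

# Toy problem: steer (x, y) from (0, 0) to (10, 10) at constant speed V,
# minimizing the final time t_f. Dynamics: x' = V cos(th), y' = V sin(th).
V, N = 1.0, 21                                  # speed, number of mesh nodes
x0, y0, xf, yf = 0.0, 0.0, 10.0, 10.0

def unpack(z):
    # decision vector: states x (N), y (N), controls th (N), final time t_f
    return z[:N], z[N:2*N], z[2*N:3*N], z[-1]

def constraints(z):
    x, y, th, tf = unpack(z)
    h = tf / (N - 1)                            # uniform step on [0, t_f]
    # trapezoidal collocation defects (all must vanish at a feasible point)
    dx = x[1:] - x[:-1] - 0.5*h*(V*np.cos(th[:-1]) + V*np.cos(th[1:]))
    dy = y[1:] - y[:-1] - 0.5*h*(V*np.sin(th[:-1]) + V*np.sin(th[1:]))
    bcs = [x[0] - x0, y[0] - y0, x[-1] - xf, y[-1] - yf]
    return np.concatenate([dx, dy, bcs])

s = np.linspace(0.0, 1.0, N)
z0 = np.concatenate([x0 + (xf - x0)*s,          # straight-line initial guess
                     y0 + (yf - y0)*s,
                     0.6*np.ones(N), [15.0]])
res = minimize(lambda z: z[-1], z0, method="SLSQP",
               constraints={"type": "eq", "fun": constraints},
               options={"maxiter": 300})
print(res.x[-1])
```

Here the nonlinear programming problem is the pair (objective t_f, equality constraints from the collocation defects and boundary conditions); for this obstacle-free case the optimum is the straight line, so the computed t_f should approach √200 ≈ 14.14.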
Indirect methods, on the other hand, follow the optimize then discretize paradigm. They are based on the variational calculus and build a Hamiltonian BVP, whose set of solutions includes the optimal one. Below, we recall the resolution procedure.
Let us introduce the Hamiltonian function

H(t, x, u, λ) = L(t, x, u) + λ^T f(t, x, u),

where λ(t) = (λ_1(t), λ_2(t), ..., λ_n(t))^T is the vector of the so-called costate variables. Let us denote by H_x, H_u the gradient of H with respect to x and u, respectively, by u*(t) the optimal control and by x*(t) the corresponding state. Then the Euler-Lagrange theorem [16, pp. 45-56] states that, if f, L, φ and Ψ are sufficiently smooth, then λ(t) is the solution of the adjoint differential equation

λ̇ = -H_x(t, x*, u*, λ),   (3)

u*(t) and x*(t) satisfy the algebraic equation system

H_u(t, x*, u*, λ) = 0,   (4)

from which we derive the optimal control u*(t), and the transversality condition

(φ_x + Ψ_x^T ν - λ)^T |_{t_f} dx_f + H|_{t_f} dt_f = 0   (5)

holds, yielding the boundary conditions for the costate variables. Here, ν is the vector of Lagrange multipliers associated with (1c) and, to simplify the notation, we have set dx_f = dx(t_f) and denoted by H|_{t_f} the Hamiltonian evaluated at the final time. Equation (4) admits, in general, multiple solutions but, exploiting the Pontryagin Minimum Principle [16, pp. 95-101], the optimal control u*(t) must minimize the Hamiltonian function, evaluated at x*(t), among all the admissible controls u(t):

u*(t) = argmin_u H(t, x*(t), u(t), λ(t)).

Equations (3) and (4) are called the Euler-Lagrange equations. Once the optimal control u*(t) is determined from (4), its expression is plugged into (1a) and (3) to obtain a system of ordinary differential equations for the state and the costate variables

ẋ = f(t, x, u*),   λ̇ = -H_x(t, x, u*, λ),

with the boundary conditions given by (1b), (1c) and (5). Thus, we have got a Hamiltonian BVP that we can solve numerically. In our application context, the above BVP will be integrated with a set of constraints representing interdicted areas that the solution trajectory should not cross. A penalty function approach is then exploited to incorporate these additional requirements inside the original cost functional (2). As we will see in the next section, these penalty functions introduce singularities in the problem, so that its numerical solution has to be handled with care. Hereafter, we give a brief description of the code we have considered for this purpose.

Numerical solution of the boundary value problem
It is well known that the two most involved implementation issues in devising general purpose codes for BVPs are the mesh selection strategy and the efficient solution of the nonlinear system arising from the discretization of the continuous problem by means of a suitable numerical method. The presence of singularities makes such aspects even more relevant, since they heavily influence the behaviour of the resulting procedure. For our numerical simulations, we have considered the MATLAB environment [34] and the code bvptwp, which is a MATLAB translation of the Fortran codes twpbvpc, twpbvplc and acdcc [26,30,35]. The code bvptwp allows the user to choose between two techniques, one of which gets information about the conditioning of the problem (see [21,25] for a complete description). This particular feature is exploited in the mesh selection strategy, which is able to adapt the mesh points in order to cope with specific conditioning issues that may emerge during the solution of the BVP, especially when the trajectory gets close to a singular point. The code incorporates two deferred correction schemes, the former based on Mono-Implicit Runge-Kutta (MIRK) methods and the latter on Lobatto formulae. In our simulations, we have used the implementation based on Lobatto formulae, since they are more efficient for stiff problems and more appropriate for problems with a Hamiltonian structure. The MATLAB and Fortran releases of the codes are freely available at the URL [36] together with many test examples (see also [28]). An interface of the Fortran codes in the R environment is also available [29].
For comparison purposes we have also considered the MATLAB built-in functions bvp4c and bvp5c. The latter are very efficient for the solution of smooth problems, but suffer from a lack of robustness when applied to singularly perturbed problems (see Example 1 in Section 3.4).
Finally, it is worth mentioning that a BVP can admit more than one solution. In this case, these solutions represent local minima for the original problem, among which the global minimum is hidden. Since the solution calculated by a numerical method depends on the initial guess, particular attention must be paid to the choice of the latter. We give a detailed description of this aspect in the next section.

2D path planning problem
Let us turn to the problem of our interest, i.e. to determine a UAV trajectory that minimizes the flight time in the presence of avoidance areas. In general, the UAV motion is three-dimensional. However, some salient flight phases occur at a constant altitude, i.e. in a plane. Therefore, we first consider the two-dimensional case, and later we will move to the three-dimensional one.

Problem formulation
The problem we intend to solve is the determination of the two-dimensional trajectory that minimizes the UAV flight time from a starting point (x_0, y_0) to an end point (x_f, y_f) in the presence of avoidance areas. As a working assumption, we neglect the rigid body structure of the UAV, which we represent as a material point corresponding to its centre of mass. Furthermore, we assume that the UAV motion occurs with constant velocity modulus V. Without loss of generality, we suppose that the initial time is t_0 = 0. Thus, the UAV motion is described by the ordinary differential equations

ẋ = V cos θ,
ẏ = V sin θ,

with boundary conditions

x(0) = x_0,   y(0) = y_0,   x(t_f) = x_f,   y(t_f) = y_f,

where:
- x = x(t), y = y(t), respectively the abscissa and the ordinate of the UAV in an appropriate Cartesian reference, are the state variables;
- θ = θ(t) ∈ [−π, π], the yaw angle (Fig. 1), which describes the UAV rotation around the vertical axis passing through its centre of mass, is the control variable.
Hence, we consider neither the pitch angle, which describes the UAV rotation around the transverse axis passing through its centre of mass, the motion being two-dimensional, nor the roll angle, which describes the UAV rotation around the longitudinal axis passing through its centre of mass, since we are neglecting the rigid body structure of the UAV. For simplicity, we will not introduce constraints on the control variable.
The performance index to be minimized is the flight time (observe that the dependence of the final time t_f on the control is not explicit):

J(θ) = t_f = ∫_0^{t_f} 1 dt.

Regarding the avoidance areas, since we are working in the two-dimensional space R², we enclose them in ellipses in order to obtain sufficiently regular constraints. Therefore, the set of constraints with which we approximate the avoidance areas is

g_i(x, y) = ((x − x_i)/a_i)² + ((y − y_i)/b_i)² − 1 ≥ 0,   i = 1, ..., p,   (6)

where x_i, y_i, a_i and b_i are, respectively, the centre coordinates and the axis lengths of the i-th ellipse. Furthermore, we also consider ellipses whose first axis is rotated counterclockwise by an angle α with respect to the x-axis; for these, g_i is expressed in the rotated coordinates

x̃ = x cos α + y sin α,   ỹ = −x sin α + y cos α.
In conclusion, the problem of our interest is the following:

min_θ t_f,

where the state variables x(t) and y(t), given a control θ(t), must satisfy the equations

ẋ = V cos θ,   ẏ = V sin θ,

with the above boundary conditions, and the additional constraints

g_i(x(t), y(t)) ≥ 0,   i = 1, ..., p,   t ∈ [0, t_f].

This problem is a constrained optimal control problem because of the additional algebraic constraints (6), so we cannot solve it directly using the Euler-Lagrange Theorem and the Pontryagin Minimum Principle. Let us frame an auxiliary unconstrained optimal control problem, formulated in such a way as to preserve the information on the additional constraints, with which we approximate the above-mentioned problem.
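As a small illustration (our own sketch, not the code used in the paper), the constraint functions g_i, including the rotated-ellipse variant, can be implemented as follows; the function name and signature are assumptions made here:

```python
import numpy as np

def ellipse_constraint(x, y, xc, yc, a, b, alpha=0.0):
    """g_i(x, y): positive outside the ellipse, zero on its boundary,
    negative inside. alpha rotates the first axis counterclockwise."""
    # express the point in coordinates aligned with the ellipse axes
    xr = (x - xc)*np.cos(alpha) + (y - yc)*np.sin(alpha)
    yr = -(x - xc)*np.sin(alpha) + (y - yc)*np.cos(alpha)
    return (xr/a)**2 + (yr/b)**2 - 1.0

# a boundary point of an axis-aligned ellipse gives g = 0
print(ellipse_constraint(1.0, 0.0, 0.0, 0.0, 1.0, 0.5))  # → 0.0
```

The sign convention (g_i ≥ 0 in the admissible region) is the one required by (6).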
After that, we shall devise a continuation technique to solve the auxiliary problem.

Auxiliary problem formulation
For each constraint, let us define the function

h_i(x, y; k_i) = k_i / g_i(x, y),

where k_i > 0 is a parameter. The function h_i has a singularity on the boundary of the i-th ellipse, is positive outside the ellipse and negative inside it, and is close to zero outside a neighbourhood of the ellipse. We can adjust the size of this neighbourhood by tuning the parameter k_i. Then, let us define the function

P(x, y; k_1, ..., k_p) = Σ_{i=1}^p h_i(x, y; k_i),

and consider the auxiliary unconstrained optimal control problem

min_θ ∫_0^{t_f} (1 + P(x, y)) dt,   (7)

where the state variables x(t) and y(t), given a control θ(t), must satisfy the equations

ẋ = V cos θ,   ẏ = V sin θ.   (8)

Hereafter, to keep the notation simple, we will omit the explicit dependence of h_i, i = 1, ..., p, and P on the parameters k_1, ..., k_p. The new cost functional (7) approximates the flight time where P(x, y) ≈ 0, i.e. away from the ellipses; it is far greater than the flight time in a narrow neighbourhood of the ellipses, outside them, where P(x, y) ≫ 0; and it has singularities on the boundary of each ellipse. Apart from its regularity features outside the interdicted areas, the reason for choosing the penalty function P is elucidated in the example in Fig. 2. The left picture shows the level sets of the function P(x, y) corresponding to an ellipse of centre (0, 0), axes of length 2 and 1, whose first axis is rotated counterclockwise by an angle of π/4 radians with respect to the x-axis, and to a circumference of centre (2.5, 1) and radius 0.5, with k_1 = k_2 = 10⁻³. We see that P(x, y) assumes small (positive) values outside a narrow neighbourhood of the two ellipses, while it diverges to infinity when (x, y) approaches the boundary of each ellipse. This latter aspect is better visible in the right plot of Fig. 2, which shows the graph of the function P along the straight line joining the centres of the two ellipses, parameterized with respect to the abscissa (x = t, y = t/2.5, t ∈ [0, 1]) and confined to the segment QR lying in the fly zone.
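A minimal sketch of the penalty for axis-aligned ellipses follows; the circumference data and the value k = 10⁻³ are taken from the Fig. 2 example, while the function names are our own:

```python
import numpy as np

def g(x, y, xc, yc, a, b):
    # positive outside the ellipse, zero on the boundary, negative inside
    return ((x - xc)/a)**2 + ((y - yc)/b)**2 - 1.0

def P(x, y, ellipses, ks):
    # ellipses: list of (xc, yc, a, b); ks: list of barrier weights k_i > 0
    return sum(k / g(x, y, *e) for k, e in zip(ks, ellipses))

# circumference of centre (2.5, 1) and radius 0.5, k = 1e-3 (as in Fig. 2)
obs, ks = [(2.5, 1.0, 0.5, 0.5)], [1e-3]
print(P(0.0, -2.0, obs, ks))   # far from the obstacle: nearly zero
print(P(2.5, 1.501, obs, ks))  # just outside the boundary: large
```

Evaluating P at a far point and at a point just outside the boundary reproduces the qualitative behaviour described above: nearly zero away from the obstacle, diverging as the boundary is approached.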
By virtue of (7), the two asymptotes act as barriers preventing the UAV from crossing the ellipses while, at the same time, allowing the trajectory to approach the ellipses closely as long as |P(x, y)| ≪ 1. In conclusion, when we apply the Euler-Lagrange Theorem and the Pontryagin Minimum Principle to minimize the modified cost functional (7) with respect to the control, we get an optimal control which may make the UAV trajectory approach an ellipse very closely without touching it, so that the cost functional essentially returns the flight time, provided that the parameters k_i are chosen small enough.
However, when solving the BVP numerically, the above argument partially evaporates due to the discrete nature of the numerical solution. In fact, while a continuous trajectory (x(t), y(t)) crossing an ellipse would necessarily encounter a singular point of the cost functional, the same does not hold for the discrete orbit (x_j, y_j) ≈ (x(t_j), y(t_j)), unless the integration stepsize is chosen sufficiently small, which would dramatically affect the efficiency of the overall procedure. In order to prevent this situation from occurring, we have implemented a continuation technique, which generates a preliminary trajectory considering null axes for each ellipse. Subsequently, the axes are gradually increased until they reach their final values and, at each step, the solution obtained at the previous step is readapted in order to satisfy the constraints at the current step. A formal description of this continuation technique is reported below.

Hereafter we apply the Euler-Lagrange Theorem to the auxiliary problem (7)-(8). Defining the Hamiltonian function

H = 1 + P(x, y) + λ_1 V cos θ + λ_2 V sin θ,

the Euler-Lagrange equations become

λ̇_1 = −H_x = −∂P/∂x,
λ̇_2 = −H_y = −∂P/∂y,
H_θ = V(−λ_1 sin θ + λ_2 cos θ) = 0.   (9)

The optimal control is not uniquely defined by (9), since

cos θ = ±λ_1/√(λ_1² + λ_2²),   sin θ = ±λ_2/√(λ_1² + λ_2²)

both satisfy (9). Therefore, according to the Pontryagin Minimum Principle, we choose the optimal control corresponding to the negative solution, which minimizes the Hamiltonian. The transversality condition reads

(φ_x − λ)^T |_{t_f} dx_f + H(t_f) dt_f = 0.

Since the final state is given and φ = 0, we have that dx_f = dy_f = 0, and the transversality condition reduces to H(t_f) dt_f = 0. It has to be satisfied for all dt_f; thus we arrive at the additional boundary condition H(t_f) = 0 for the costate variables.
Let us recall that t_f = t_f(θ) is unknown: indeed, it is the performance index to be minimized. By setting s = t/t_f, the integration interval becomes [0, 1], t_f becomes a further state variable and we obtain the additional equation ṫ_f = 0. Therefore, the state and the costate variables are the solution of the system of ordinary differential equations

x′ = t_f V cos θ,
y′ = t_f V sin θ,
λ_1′ = −t_f ∂P/∂x,
λ_2′ = −t_f ∂P/∂y,
t_f′ = 0,   (10)

where ′ denotes differentiation with respect to s, coupled with the boundary conditions

x(0) = x_0,   y(0) = y_0,   x(1) = x_f,   y(1) = y_f,   (11)

and

H(1) = 0.   (12)
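For the obstacle-free case (P ≡ 0), the resulting BVP can be solved directly, e.g. with SciPy's solve_bvp, treating t_f as an unknown parameter. The following is our own minimal sketch of the indirect approach, not the bvptwp-based implementation used in the paper:

```python
import numpy as np
from scipy.integrate import solve_bvp

V = 1.0
x0, y0, xf, yf = 0.0, 0.0, 10.0, 10.0

def ode(s, w, p):
    # w = [x, y, lam1, lam2] on s in [0, 1]; p = [t_f]; P = 0 (no obstacles)
    lam1, lam2 = w[2], w[3]
    nrm = np.sqrt(lam1**2 + lam2**2)
    tf = p[0]
    return np.vstack([-tf*V*lam1/nrm,        # x' = t_f V cos(th), cos(th) = -lam1/|lam|
                      -tf*V*lam2/nrm,        # y' = t_f V sin(th), sin(th) = -lam2/|lam|
                      np.zeros_like(lam1),   # lam1' = -t_f dP/dx = 0
                      np.zeros_like(lam2)])  # lam2' = -t_f dP/dy = 0

def bc(wa, wb, p):
    # state boundary conditions plus H(t_f) = 1 - V |lam(t_f)| = 0
    H_end = 1.0 - V*np.sqrt(wb[2]**2 + wb[3]**2)
    return np.array([wa[0]-x0, wa[1]-y0, wb[0]-xf, wb[1]-yf, H_end])

s = np.linspace(0.0, 1.0, 11)
guess = np.vstack([x0 + (xf - x0)*s, y0 + (yf - y0)*s,
                   -0.5*np.ones_like(s), -0.5*np.ones_like(s)])
sol = solve_bvp(ode, bc, s, guess, p=[15.0])
print(sol.p[0])   # computed flight time; the straight-line value is sqrt(200)
```

The five boundary conditions match the four state conditions (11) plus (12); the computed t_f should reproduce the straight-line flight time √200 ≈ 14.14.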

The continuation technique
For each i = 1, ..., p and n = 0, 1, ..., N_i, let us define

g_i^{(n)}(x, y; N_i) = ((x − x_i)/a_i)² + ((y − y_i)/b_i)² − n/N_i.

The equation g_i^{(n)}(x, y; N_i) = 0 represents an ellipse of centre (x_i, y_i) and axes a_i √(n/N_i), b_i √(n/N_i). When n = 0, each ellipse shrinks to its centre, while its axes increase with n until, for n = N_i, they match the values a_i and b_i, yielding the original shape. Let us then define the functions

h_i^{(n)}(x, y; k_i, N_i) = k_i / g_i^{(n)}(x, y; N_i),
P^{(n)}(x, y; k_1, ..., k_p, N_1, ..., N_p) = Σ_{i=1}^p h_i^{(n)}(x, y; k_i, N_i),

and set N = max_i N_i; for the obstacles with N_i < N, we simply set g_i^{(n)}(x, y; N_i) = g_i(x, y) for n > N_i, so that h_i^{(n)}(x, y; k_i, N_i) = h_i(x, y) and P^{(N)}(x, y; k_1, ..., k_p, N_1, ..., N_p) = P(x, y). In the sequel, to simplify the notation, we will omit the explicit dependence of g_i^{(n)} and h_i^{(n)}, i = 1, ..., p, and P^{(n)}, n = 0, ..., N, on the parameters k_1, ..., k_p, N_1, ..., N_p. The continuation technique consists in solving the BVPs (10)-(12) with P(x, y) replaced by P^{(n)}(x, y), sequentially for n = 0, ..., N. The solution obtained after solving the n-th problem is used in the code as initial guess to solve the subsequent BVP. At the very first step (n = 0), the initial guess is just the straight line joining (x_0, y_0) and (x_f, y_f), i.e. the optimal trajectory in the absence of constraints. In principle, it may happen that an obstacle centre lies on the initial guess or is very close to it. In this case, the singularity introduced at that point prevents the continuation from starting, since P^{(0)}(x, y) is infinite there. There are several ways to overcome this issue. We have chosen to slightly move the obstacle centre along the line orthogonal to the initial guess, and to increase its axes in such a way that the new adjusted obstacle contains the original one.
The above procedure guarantees a fast convergence of the nonlinear solver embedded in the code towards the local solution which is closest to the initial profile given in input. From a geometrical viewpoint, the continuation procedure defines a sequence of homotopic solution curves in the phase plane, originating from the degenerate case where all the constraints reduce to points. As the ellipses expand, the singularities introduced in the functions P^{(n)}(x, y) adapt the corresponding trajectories in order to keep them outside the ellipses.
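The following Python sketch mimics the continuation idea for a single circular obstacle, using SciPy's solve_bvp as a stand-in for bvptwp: the obstacle grows from a point to full size over the continuation steps, and each solve is warm-started from the previous one. All numerical values (obstacle data, k, number of steps) are illustrative assumptions, not those of the paper:

```python
import numpy as np
from scipy.integrate import solve_bvp

V, k, Nsteps = 1.0, 1e-2, 20
x0, y0, xf, yf = 0.0, 0.0, 10.0, 10.0
xc, yc, r = 5.0, 4.5, 1.0          # circular obstacle, slightly off the straight line

def make_problem(frac):
    # frac = n/N: the obstacle grows from a point (frac = 0) to full size (frac = 1)
    def g(x, y):
        return ((x - xc)/r)**2 + ((y - yc)/r)**2 - frac
    def rhs(s, w, p):
        x, y, l1, l2 = w
        tf, nrm, gv = p[0], np.sqrt(l1**2 + l2**2), g(x, y)
        # gradient of P = k/g
        Px = -k * 2*(x - xc)/r**2 / gv**2
        Py = -k * 2*(y - yc)/r**2 / gv**2
        return np.vstack([-tf*V*l1/nrm, -tf*V*l2/nrm, -tf*Px, -tf*Py])
    def bc(wa, wb, p):
        H_end = 1.0 + k/g(wb[0], wb[1]) - V*np.sqrt(wb[2]**2 + wb[3]**2)
        return np.array([wa[0]-x0, wa[1]-y0, wb[0]-xf, wb[1]-yf, H_end])
    return rhs, bc, g

s = np.linspace(0.0, 1.0, 41)
w = np.vstack([x0 + (xf - x0)*s, y0 + (yf - y0)*s,
               -0.5*np.ones_like(s), -0.5*np.ones_like(s)])
p = [15.0]
for n in range(Nsteps + 1):
    rhs, bc, g = make_problem(n / Nsteps)
    sol = solve_bvp(rhs, bc, s, w, p=p, max_nodes=20000)
    s, w, p = sol.x, sol.y, sol.p      # warm start for the next step
tf = p[0]
print(tf)
```

For this configuration the computed t_f should be only slightly larger than the obstacle-free value √200 ≈ 14.14, and the final trajectory should satisfy g(x, y) > 0 at every mesh point, illustrating how the homotopy keeps each iterate outside the growing obstacle.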
Once the solution of the last BVP (n = N) has been determined, the state variables x(t) and y(t) give the UAV trajectory, and the costate variables λ_1(t) and λ_2(t) give the optimal control θ(t) ∈ [−π, π] via the relations

cos θ = −λ_1/√(λ_1² + λ_2²),   sin θ = −λ_2/√(λ_1² + λ_2²).

The values of the parameters k_i and N_i are empirical and based on the numerical simulations, though we have employed a dynamic selection strategy to speed up the overall procedure. Notice that, at each continuation step n, the parameters k_i and N_i are increased until the minimum of g_i^{(n+1)}(x, y), computed along the n-th solution trajectory (x_j^{(n)}, y_j^{(n)}), is greater than or equal to 0.02, for all i = 1, ..., p. This check ensures that the n-th solution trajectory lies outside, and not too close to, each ellipse at step n + 1. This is needed because, if the n-th solution trajectory crossed some ellipses at step n + 1, it would cross the singularities introduced along them, making the continuation get stuck. Furthermore, every time k_i is updated, the n-th problem has to be solved again, since the functions h_i^{(n)}(x, y) and P^{(n)}(x, y) depend on k_i. Therefore, k_i is increased only whenever N_i becomes a multiple of 10, in order to prevent the execution time from growing too much.
Finally, let us notice that the continuation technique described so far applies to all the obstacles simultaneously. As an alternative, the continuation can be applied to one obstacle at a time, starting from the one with the largest axes down to the one with the smallest axes. To achieve this, after sorting the obstacles as mentioned above, we define for each i = 1, ..., p the functions P_i^{(n)}(x, y), n = 0, ..., N_i, in which the obstacles 1, ..., i − 1 appear at full size and only the i-th one grows with n, so that P_p^{(N_p)}(x, y) = P(x, y). This variant then consists in solving the BVPs (10)-(12) with P(x, y) replaced by P_i^{(n)}(x, y), sequentially for i = 1, ..., p and, for each i, sequentially for n = 0, ..., N_i. The solution obtained after solving the n-th problem in the i-th iteration is used in the code as initial guess to solve:
- the subsequent BVP at step n + 1, if n < N_i;
- the first problem in the (i + 1)-th iteration, if n = N_i.
The initial guess for i = 1 and n = 0, as well as the parameter initialization and tuning, is as before.

Simulation results
In this subsection, we report two examples showing the results of the two-dimensional numerical simulations we have carried out. For the first example, we compare the results given by the two continuation strategies, in terms of the corresponding flight time, as well as the performance of the code bvptwp with that of the codes bvp4c and bvp5c. As comparison criteria, we have considered, for the computed solution of the last BVP in the continuation process, the number of points in the final output mesh (nMeshPoints) and the total number of function evaluations defining the ordinary differential equation system (nODEevals) and the boundary conditions (nBCevals). In addition, we have reported the execution time in seconds (time) for the overall continuation. All computations have been carried out on an Intel i7 quad-core CPU with 16GB of memory, running MATLAB R2020b.
For both examples, we assume that the UAV must move from the point (x_0, y_0) = (0, 0) to the point (x_f, y_f) = (10, 10) with constant velocity modulus V = 1, while the avoidance areas are disjoint and defined as follows.
-Example 1: Three circumferences having radius 1 and centres (2.2, 1.8), (3.8, 4.2) and (6.2, 3.8), respectively; an ellipse having centre (7.4, 7.8) and axes of length 2.25 and 1.5, whose first axis is rotated counterclockwise by an angle of π/4 radians with respect to the x-axis. The final parameter values of the continuation strategies for the codes bvptwp, bvp4c and bvp5c are reported in Table 1. The corresponding trajectory and the yaw angle that realizes it, expressed in radians, are shown in Figs. 3 and 4 for the two continuation strategies. Table 2 summarizes the performance of the code bvptwp in terms of the output parameters listed at the beginning of this subsection, and compares its behaviour with that of the MATLAB built-in codes bvp4c and bvp5c, showing its efficiency and robustness. This example has been deliberately constructed in such a way that the two continuation strategies lead to two different local minima of the underlying optimization problem, which is not the case for general problems. In the following example, the first continuation strategy even fails to converge, which makes the second strategy more robust, albeit a bit more expensive.

3D path planning problem
The three-dimensional problem is conceptually similar to the two-dimensional one, the ordinary differential equations governing the UAV motion now being

ẋ = V cos θ cos γ,
ẏ = V sin θ cos γ,
ż = V sin γ,

with the boundary conditions

x(0) = x_0,  y(0) = y_0,  z(0) = z_0,  x(t_f) = x_f,  y(t_f) = y_f,  z(t_f) = z_f,

where:
- x = x(t), y = y(t), z = z(t), respectively the abscissa, ordinate and altitude of the UAV in an appropriate Cartesian reference, are the state variables;
- θ = θ(t) ∈ [−π, π], γ = γ(t) ∈ [−π/2, π/2], respectively the yaw and pitch angles, are the control variables (see Fig. 1).
We still do not consider the roll angle, since we are neglecting the rigid body structure of the UAV, and we will not introduce constraints on the control variables. The performance index to be minimized is again the flight time, which is now a function of the pitch angle, too.
Regarding the avoidance areas, since we are working in the three-dimensional space R³, we enclose them in cylinders or ellipsoids in order to obtain sufficiently regular constraints. Therefore, the set of constraints with which we approximate the avoidance areas is

g_i(x, y) = ((x − x_i)/a_i)² + ((y − y_i)/b_i)² − 1 ≥ 0,   i = 1, ..., q,
g_j(x, y, z) = ((x − x_j)/a_j)² + ((y − y_j)/b_j)² + ((z − z_j)/c_j)² − 1 ≥ 0,   j = q + 1, ..., p,

where x_i, y_i, a_i and b_i are, respectively, the axis coordinates and the axis lengths of the i-th cylinder, and x_j, y_j, z_j, a_j, b_j and c_j are, respectively, the centre coordinates and the axis lengths of the j-th ellipsoid. Cylinders are suitable for areas to be avoided regardless of the altitude, while ellipsoids are suitable for avoidance areas of finite height. Thus, the optimal control problem in the three-dimensional case is the following:

min_{(θ, γ)} t_f,

where the state variables x(t), y(t) and z(t), given a control (θ(t), γ(t)), must satisfy the equations

ẋ = V cos θ cos γ,   ẏ = V sin θ cos γ,   ż = V sin γ,

with the above boundary conditions, and the additional constraints listed above. The formulation of the auxiliary unconstrained optimal control problem described in the previous section is then adapted in order to tackle the three-dimensional problem.
According to the Euler-Lagrange Theorem, to determine the optimal controls θ and γ, we consider the Hamiltonian function

H = 1 + P + λ_1 V cos θ cos γ + λ_2 V sin θ cos γ + λ_3 V sin γ,

and compute its stationary points with respect to the controls,

H_θ = V cos γ (−λ_1 sin θ + λ_2 cos θ) = 0,
H_γ = −V sin γ (λ_1 cos θ + λ_2 sin θ) + λ_3 V cos γ = 0,

from which we have that

cos θ = ±λ_1/√(λ_1² + λ_2²),   sin θ = ±λ_2/√(λ_1² + λ_2²),

and

cos γ = ±√(λ_1² + λ_2²)/√(λ_1² + λ_2² + λ_3²),   sin γ = ∓λ_3/√(λ_1² + λ_2² + λ_3²).

We choose the negative solution for the yaw angle θ, in order to be consistent with the two-dimensional case, and the positive and negative solutions for cos γ and sin γ, respectively, so that the Pontryagin Minimum Principle [16, pp. 95-101] is satisfied. The continuation technique remains identical to the two-dimensional case.
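The sign choices above can be packaged into a small helper (our own sketch; the function name is an assumption) recovering the optimal yaw and pitch from the costates. With these choices the velocity direction is exactly −λ/|λ|, which makes the Hamiltonian term λᵀf as negative as possible, as the Pontryagin Minimum Principle requires:

```python
import numpy as np

def controls_from_costates(l1, l2, l3):
    """Optimal (theta, gamma) from the costates: negative roots for
    cos(theta), sin(theta); positive root for cos(gamma), negative for sin(gamma)."""
    n12 = np.sqrt(l1**2 + l2**2)
    n123 = np.sqrt(l1**2 + l2**2 + l3**2)
    cos_t, sin_t = -l1/n12, -l2/n12
    cos_g, sin_g = n12/n123, -l3/n123
    return np.arctan2(sin_t, cos_t), np.arctan2(sin_g, cos_g)

theta, gamma = controls_from_costates(-1.0, -1.0, 0.0)
print(theta, gamma)   # level flight toward +x, +y: theta = pi/4, gamma = 0
```

A quick check that the resulting velocity direction (cos θ cos γ, sin θ cos γ, sin γ) equals −λ/|λ| confirms that the stationarity conditions above are satisfied with the chosen signs.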

Simulation results
Among the three-dimensional numerical simulations we have carried out, we report here an example, simulating an urban environment, where the two continuation strategies behave differently, again emphasizing that, for general and less involved problems, the two approaches lead to the same results. Finally, we consider a scenario reproducing an orographic environment by means of a set of ellipsoids.

Conclusions
We have considered the problem of determining a UAV trajectory that minimizes the flight time in the presence of avoidance areas and obstacles. Incorporating these constraints in the cost functional, with the aid of the Euler-Lagrange Theorem and the Pontryagin Minimum Principle, we have derived a boundary value problem with a Hamiltonian structure. The use of the code bvptwp, combined with a suitable continuation strategy to manage the additional constraints, has proven to be very efficient in comparison with the standard solvers available in MATLAB.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.