First-Order Necessary Conditions in Optimal Control

In an earlier analysis of strong variation algorithms for optimal control problems with endpoint inequality constraints, Mayne and Polak provided conditions under which accumulation points satisfy a condition requiring a certain optimality function, used in the algorithms to generate search directions, to be nonnegative for all controls. The aim of this paper is to clarify the nature of this optimality condition, which we call the first-order minimax condition, and of a related integrated form of the condition, which is also implicit in past algorithm convergence analysis. We consider these conditions separately, according to whether or not a pathwise state constraint is included in the problem formulation. When there are no pathwise state constraints, we show that the integrated first-order minimax condition is equivalent to the minimum principle and that the minimum principle (and the equivalent integrated first-order minimax condition) is strictly stronger than the first-order minimax condition. For problems with state constraints, we establish that the integrated first-order minimax condition and the minimum principle are once again equivalent. But, in the state constrained context, it is no longer the case that the minimum principle is stronger than the first-order minimax condition, or vice versa. An example confirms the perhaps surprising fact that the first-order minimax condition is a distinct optimality condition that can provide information, for problems with state constraints, in some circumstances when the minimum principle fails to do so.


Introduction
Convergence analysis of optimal control algorithms typically aims to demonstrate that accumulation points of sequences of control functions, generated by the algorithm under consideration, satisfy necessary conditions of optimality. The nature of the necessary conditions is not fixed, but depends on the algorithm and on the analytical techniques employed in the convergence analysis.
In a series of papers [6,9], Mayne and Polak, building on earlier work by Jacobson and Mayne [3], proposed algorithms (with accompanying convergence analysis) for solving optimal control problems with endpoint inequality constraints, based on strong variations. In consequence of the strong variations employed in the algorithms, one might expect that the necessary condition featuring in the convergence analysis would be the minimum principle. (This is on account of the fact that the minimum principle can be proved by consideration of strong variations of the control.) But, in fact, another condition was used in this earlier literature, a necessary condition of optimality that we, in this paper, shall call the first-order minimax condition. Necessary conditions similar to an integral version of the first-order minimax condition, but applicable to optimal control problems with pathwise state constraints (we shall refer to them as 'state constraint problems'), are implicit in the convergence analysis of feasible directions algorithms due to Pytlak and Vinter [11].
The first-order minimax condition (for state constraint-free problems) originates in a systematic approach to algorithm construction in nonlinear programming and optimal control, due to Polak [8]; the idea, in the case of optimal control problems, is to find, for a given control ū, an 'optimality function' u → θ(u, ū) with the property that arg min{u → θ(u, ū)} provides, in the event that min_u θ(u, ū) < 0, search directions for the reduction of cost and endpoint constraint violations. The convergence analysis involves showing that accumulation points ū satisfy min_u θ(u, ū) = 0. For the optimal control problems treated by Mayne and Polak in [6,9], 'min_u θ(u, ū) = 0' can be interpreted as our first-order minimax condition.
Investigating the strength of a necessary condition based on a particular computational scheme gives insights into pathological situations when the necessary condition is satisfied at some non-minimizing process, and hence into circumstances when the scheme might fail to yield a minimizer. This is the reason why, in this paper, we investigate the strength of two necessary conditions previously encountered in algorithm analysis, by comparing them with the minimum principle.
We provide a rather complete picture of the relations between the following necessary conditions of optimality, both for problems with and without pathwise state constraints: the first-order minimax condition (on which the algorithms in [6,9] are based), the integrated first-order minimax condition (implicit in the algorithm convergence analysis of [11]), and the minimum principle. For problems without state constraints, our investigations reveal that the minimum principle is a stronger necessary condition than the first-order minimax condition and that the integrated first-order minimax condition and the minimum principle are equivalent. An example is provided demonstrating that the minimum principle is strictly stronger than the first-order minimax condition.
When we pass to problems with pathwise state constraints, we find, once again, that the integrated first-order minimax condition and the minimum principle are equivalent. But we discover that the first-order minimax condition is neither stronger nor weaker than the minimum principle. Thus, the first-order minimax condition is revealed to be an optimality condition that is distinct from the minimum principle. An example illustrates how it can be used to show that a certain admissible process is not a minimizer, when the minimum principle fails to do so.
It is well known that first-order necessary conditions for the free right endpoint problem can be simply derived, by consideration of needle variations and elementary gradient calculations. When general endpoint constraints are present, the derivation of necessary conditions akin to the minimum principle necessitates the use of a more sophisticated analysis (based on separation of approximating cones of the reachable set [2], or non-smooth perturbation methods [7]). We observe, however, that when the endpoint constraints take the form of inequality constraints, the derivation of necessary conditions based on simple gradient calculations becomes once again possible. (The 'necessary conditions' referred to here are 'first-order minimax conditions.') There is a parallel here with the derivation of Fritz John-type first-order necessary conditions in nonlinear programming where, as is well known, the analysis is greatly simplified if the constraints are inequality constraints [5], not mixed inequality and equality constraints. A secondary purpose of this paper is to make explicit these simplifications, also in an optimal control context.

Problem Formulation
Consider the optimal control problem with endpoint inequality constraints:

(P): minimize g_0(x(1)) over absolutely continuous functions x and measurable functions u satisfying

ẋ(t) = f(x(t), u(t)) and u(t) ∈ Ω a.e. t ∈ [0, 1], x(0) = x_0, g_j(x(1)) ≤ 0 for j = 1, . . . , r.

The data comprise functions g_j : R^n → R, j = 0, . . . , r, and f : R^n × R^m → R^n, a point x_0 ∈ R^n and a closed set Ω ⊂ R^m.
A pair of functions (x, u) is called a process if x is absolutely continuous, u is Lebesgue measurable, ẋ(t) = f(x(t), u(t)) a.e., u(t) ∈ Ω a.e. and x(0) = x_0. If x also satisfies the right endpoint constraints in (P), we say that (x, u) is an admissible process. The first component x of a process (x, u) is called a state trajectory; the second component is called a control function.
We shall also consider a generalization of (P) that includes the pathwise state constraints h_k(x(t)) ≤ 0, for all t ∈ [0, 1] and k = 1, . . . , N_s, for given functions h_k : R^n → R, k = 1, . . . , N_s:

(S): minimize g_0(x(1)) over absolutely continuous functions x and measurable functions u satisfying the constraints of (P), together with h_k(x(t)) ≤ 0 for all t ∈ [0, 1], k = 1, . . . , N_s.

'Admissible processes for (S)' are understood in the obvious sense. The following hypotheses will be invoked:

(H1): The set Ω is compact, the functions g_j, j = 0, . . . , r, and h_k, k = 1, . . . , N_s, are continuously differentiable, and there exists c > 0 such that

Notation: In Euclidean space, the Euclidean length of a vector x is denoted by |x| and the closed unit ball is written B. Given numbers a and b, we write a ∨ b := max{a, b} and a ∧ b := min{a, b}. Given x ∈ L^∞([0, 1]; R^n), we write the L^∞ norm of x as ||x||_{L^∞}. NBV^+([0, 1]; R^{N_s}) denotes the space of N_s-tuples of (normalized) increasing functions ν = {ν_k} on [0, 1] such that each ν_k is right continuous on (0, 1). For each k, dν_k is the Stieltjes measure associated with ν_k. R_+ denotes [0, ∞).
For a given nominal admissible process (x̄, ū), S(t, s) is the transition matrix for the linear system ẏ(t) = ∇_x f(x̄(t), ū(t)) y(t), i.e., for each s ∈ [0, 1], t → S(t, s) is the unique solution on [0, 1] of the matrix differential equation (d/dt)S(t, s) = ∇_x f(x̄(t), ū(t)) S(t, s), S(s, s) = I.

Analytical Tools

1. Relaxation. The following optimal control problem is known as the relaxation of (P).
Relaxation Theorem: Take (x, u) to be any relaxed process for (P) and any δ > 0. Then, there exists a process (x′, u′) for (P) such that ||x − x′||_{L^∞} ≤ δ.

2. A Minimax Theorem.
Minimax theorems originate in the game theory literature. We shall make use of their important role also in variational analysis. The theorem asserts that, under suitable convexity/concavity and compactness hypotheses on a function F : X × Y → R, there exists (x*, y*) ∈ X × Y which is a saddle point for F, i.e., F(x*, y) ≤ F(x*, y*) ≤ F(x, y*) for all (x, y) ∈ X × Y. For a proof, see, e.g., [10, Thm. 3.4.6].
Proof Choose constants K_1, R > 0 and k with the following properties: We deduce from hypothesis (H2) and Gronwall's inequality that, for any process Using the facts that x_σ and x̄ are state trajectories with the same initial state, we can show that |Δf(s, u)| ds for each t ∈ [0, 1]. But then, by Gronwall's inequality, in which K := 2e^{kR}. We have confirmed property (a).
To complete the proof, it is required only to establish property (c). (The proof of property (b) follows from (c), when we substitute g_j in place of h_k in the analysis and select t′ = 1.) Take any index value k and t′ ≥ t. We have The second term on the right can be written ('integration by parts'). We know, however, that, under the differentiability hypotheses on the g_j's and x → f(x, u), there exists a modulus of continuity θ_1 (that does not depend on k or t′) such that Combining these relations and noting property (a), we find that, for some M > 0 that does not depend on i, Here, as before, σ′ = σ ∧ (t′ − t). We have confirmed property (c) with modulus of continuity θ(s). The preceding proposition provides estimates on solutions to the controlled differential equation ẋ = f(x, u) induced by needle variations. Similar analysis yields estimates on solutions induced by another kind of 'local' variation:

Proposition 3.2 Take a process (x̄, ū). Take any control function u ∈ U. For each σ ∈ [0, 1], define x_σ to be the solution of the differential equation (Notice that x_σ is the relaxed state trajectory corresponding to the relaxed control {(1 − σ, ū), (σ, u)}.) Then, there exist K > 0 (independent of σ) and a continuity modulus θ (i.e., a function θ : (0, ∞) → (0, ∞) such that lim_{s↓0} θ(s) = 0), such that (a): ||x_σ − x̄||_{L^∞} ≤ Kσ, (b): for any index value j ∈ {0, . . . , r} and process (x, u),

Necessary Conditions of Optimality (No State Constraints)
This section provides three sets of necessary conditions for an admissible process (x̄, ū) to be a minimizer for (P).

Theorem 4.1 (The First-Order Minimax Condition)
Let (x̄, ū) be a minimizer for (P). Assume (H1)-(H2) are satisfied. Then, for a.e. t ∈ [0, 1],

max_{j ∈ I(x̄)} ∇g_j(x̄(1)) · S(1, t)( f(x̄(t), u) − f(x̄(t), ū(t)) ) ≥ 0 for all u ∈ Ω,

in which I(x̄) denotes the index set of active endpoint constraint functions at x̄(1) (with the normalization below, 0 ∈ I(x̄)).

Proof We can always arrange, by adding a constant to g_0 if necessary, that g_0(x̄(1)) = 0. Since (x̄, ū) is a minimizer, we must have for all processes (x, u) for (P). (Otherwise, there would exist a process for which the cost is reduced and all the endpoint constraints are satisfied, a contradiction.) Assume that the assertions of the theorem are false. Since T is a set of full measure, there exist a time t ∈ T, u ∈ Ω and a number γ > 0 such that for all j ∈ I(x̄). Take σ_i ↓ 0 and, for each i, let u_i ∈ U be Recalling that g_0(x̄(1)) = 0 and that g_j(x̄(1)) ≤ 0 for j > 0, we deduce from Proposition 3.1 that there exists a modulus of continuity θ(.) such that, for each j ∈ {0, . . . , r} and i = 1, 2, . . . , By properties of Lebesgue points, there exists a modulus of continuity ψ : (0, ∞) → (0, ∞) (that does not depend on j) such that, for all i, It follows from (7), (8) and (9) that, for all i sufficiently large, Note, on the other hand, that for each j ∉ I(x̄), g_j(x̄(1)) < 0. So there exists γ_1 > 0 such that It follows that, for all i sufficiently large, This contradicts (6) when x = x_i. The proof is concluded.
Theorem 4.2 (The Integrated First-Order Minimax Condition)

Let (x̄, ū) be a minimizer for (P). Assume (H1)-(H2) are satisfied. Then

max_{j ∈ I(x̄)} ∫_0^1 ∇g_j(x̄(1)) · S(1, t)( f(x̄(t), u(t)) − f(x̄(t), ū(t)) ) dt ≥ 0 for all u ∈ U.

Proof Assume again, without loss of generality, that g_0(x̄(1)) = 0. Take any u ∈ U and σ ∈ [0, 1]. Let x_σ be the relaxed state trajectory of Proposition 3.2. We claim that This follows from the fact that x_σ is the state trajectory of a relaxed process for (P). Indeed if, to the contrary, there exists r̄ > 0 such that then, according to the relaxation theorem and in view of the continuity of the g_j's, we would be able to find an (unrelaxed) state trajectory x′ such that g_0(x′(1)) ∨ . . . ∨ g_r(x′(1)) < −r̄/2, in contradiction of the optimality of (x̄, ū). The claim is therefore correct.
Suppose that the assertions of the theorem are false. In view of the foregoing observations, we are justified in reproducing the analysis in the proof of the earlier theorem, but now based on the perturbation estimates provided by Proposition 3.2 in place of Proposition 3.1, to obtain a contradiction of (12), for σ > 0 sufficiently small. The assertions of the theorem must therefore be true.
Finally, we state a special case of the minimum principle, in a form that emphasizes its connection with the preceding two sets of necessary conditions.

Theorem 4.3 (The Minimum Principle)
Let (x̄, ū) be a minimizer for (P). Assume (H1)-(H2) are satisfied. Then, there exists λ = (λ_0, . . . , λ_r) ∈ Λ_r such that λ_j = 0 for j ∉ I(x̄) and

( Σ_j λ_j ∇g_j(x̄(1)) ) · S(1, t)( f(x̄(t), u) − f(x̄(t), ū(t)) ) ≥ 0 for all u ∈ Ω, a.e. t ∈ [0, 1].

We shall give an independent proof of this well-known optimality condition, as a byproduct of our investigations into the relationships between the first-order minimax condition, the integral first-order minimax condition and the minimum principle.

Comments.
1. The first-order minimax condition, originating in the earlier algorithm convergence analysis of Mayne and Polak [6,9], has the alternative expression:

max_{j ∈ I(x̄)} p_j(t) · ( f(x̄(t), u) − f(x̄(t), ū(t)) ) ≥ 0 for all u ∈ Ω, a.e. t ∈ [0, 1],

in which, for each j, p_j(t) := S^T(1, t)∇g_j(x̄(1)); it can be interpreted as the solution to the costate equation: −ṗ_j(t) = ∇_x f^T(x̄(t), ū(t)) p_j(t) a.e., p_j(1) = ∇g_j(x̄(1)).
2. The optimality condition of Theorem 4.3 can be equivalently written in terms of a single costate arc p(t) := S^T(1, t)( Σ_j λ_j ∇g_j(x̄(1)) ), thus:

p(t) · ( f(x̄(t), u) − f(x̄(t), ū(t)) ) ≥ 0 for all u ∈ Ω, a.e. t ∈ [0, 1].

By the properties of the transition matrix S(t, s), the costate arc p satisfies −ṗ(t) = ∇_x f^T(x̄(t), ū(t)) p(t) a.e., p(1) = Σ_j λ_j ∇g_j(x̄(1)). In this form, it will be recognized as the special case of the minimum principle, applied to problems with endpoint inequality constraints. (Note that the Lagrange multipliers satisfy the non-triviality condition λ ≠ 0, since λ ∈ Λ_r, and λ_j = 0 if g_j(x̄(1)) < 0.)

Relations Between the Necessary Conditions
The following theorem relates the necessary conditions of Sect. 4. (i): The integrated first-order minimax condition implies the first-order minimax condition. (ii): The integrated first-order minimax condition and the minimum principle are equivalent. (iii): The minimum principle (and therefore also the integral first-order minimax condition) is a strictly stronger necessary condition than the first-order minimax condition, in the sense that data can be chosen for problem (P) such that the integrated first-order minimax condition can be used to exclude a non-minimizer, but the first-order minimax condition fails to do so.
Proof (i): Suppose the minimax condition is not satisfied. Then, there exist u ∈ Ω and t ∈ (0, 1) such that t is a Lebesgue point of s → f(x̄(s), ū(s)), (d/dt)x̄(t) exists, and For arbitrary σ ∈ (0, 1 − t), define the 'needle variation' u_σ ∈ U according to (3). From the Lebesgue point property, we have that, for all j ∈ I(x̄), for σ sufficiently small. We have shown that the integrated first-order minimax condition is not satisfied. The proof of (i) is complete. (ii): Suppose the minimum principle is satisfied (with Lagrange multipliers λ = (λ_0, . . . , λ_r) ∈ Λ_r such that λ_j = 0 if j ∉ I(x̄)). Then, for any u ∈ U, This implies the existence of j̄ ∈ I(x̄) such that We have shown that the conditions of the minimum principle imply the conditions of the integral first-order minimax theorem. Now, suppose that the conditions of the integral first-order minimax theorem are satisfied. It is convenient to introduce, at this stage, the notation: E := {e : [0, 1] → R^n measurable : e(t) ∈ Δf(t, Ω) a.e. t ∈ [0, 1]}.
Notice that the integral first-order minimax condition can be expressed as We make use of the following representation of the convex hull of E: in which co denotes 'convex closure' w.r.t. the L^1 norm. Note the following properties of J and its domain (which we take to be co E × Λ_r): for any e ∈ E. This inequality tells us that for any control function u ∈ U. A standard 'needle variation' argument permits us to conclude from this last relation that This is the minimum principle condition, in which the endpoint constraint Lagrange multiplier vector is λ*. (iii): This assertion is confirmed by the example of Sect. 6.

Example One
Consider the following example of (P), in which the state and control dimensions are both 2 and there is one endpoint constraint. It follows that the first-order minimax condition is satisfied at (x̄, ū). (c): For u ∈ U as in part (a), we calculate We have shown that the integrated first-order minimax condition is violated.

State Constraints
Consider the state constrained problem (S) formulated in the introduction. For k = 1, . . . , N_s, A_k(x̄) will denote the set of times at which the k'th state constraint is active for the nominal process (x̄, ū), that is A_k(x̄) := {t ∈ [0, 1] : h_k(x̄(t)) = 0}. We derive similar necessary conditions of optimality to those of Sect. 4, but now allowing for state constraints.
Theorem 7.1 (The State Constrained First-Order Minimax Condition)

Let (x̄, ū) be a minimizer for (S). Assume (H1)-(H2) are satisfied. Then, for a.e. t ∈ [0, 1],

max{ max_{j ∈ I(x̄)} ∇g_j(x̄(1)) · S(1, t)( f(x̄(t), u) − f(x̄(t), ū(t)) ), max{ ∇h_k(x̄(t′)) · S(t′, t)( f(x̄(t), u) − f(x̄(t), ū(t)) ) : k ∈ {1, . . . , N_s}, t′ ∈ A_k(x̄), t′ ≥ t } } ≥ 0

for all u ∈ Ω.
Proof We may assume that g_0 satisfies g_0(x̄(1)) = 0. Since (x̄, ū) is a minimizer, we must have for all processes (x, u) for (S). Let T be the subset of points t ∈ [0, 1] with the following properties: t is a Lebesgue point of s → f(x̄(s), ū(s)), x̄ is differentiable at t with (d/dt)x̄(t) = f(x̄(t), ū(t)), and ū(t) ∈ Ω. Define, for arbitrary ε > 0 and k ∈ {1, . . . , N_s}, We claim it suffices to prove a modified version of the theorem, in which we require, for arbitrary t ∈ T and ε > 0, for all u ∈ Ω and t′ ≥ t. Indeed, if this modified condition were true for arbitrary ε > 0, then it would be valid for each ε = ε_i, i = 1, 2, . . . , where ε_i ↓ 0. This means that, for each t ∈ T and u ∈ Ω and every i, either there exists j(i) ∈ I(x̄) such that By extracting subsequences, we can arrange that j(i) = j̄ and k(i) = k̄ for all i and t_i → t̄ as i → ∞, for some j̄ ∈ I(x̄), k̄ ∈ {1, . . . , N_s} and t̄ ∈ A_k̄(x̄) ∩ [t, 1]. Since, for each k, we can deduce that either there exists j̄ ∈ I(x̄) such that These relations combine to yield the required (stronger) necessary condition of the theorem statement. Assume, for some ε > 0, the modified condition above is false; we show that this leads to a contradiction. Then, for some t ∈ T, u ∈ Ω and γ > 0, and Take σ_i ↓ 0 and, for each i, let u_i ∈ U be the control function Since g_j(x̄(1)) ≤ 0 for all j ∈ {0, . . . , r} and h_k(x̄(t)) ≤ 0 for all t ∈ [0, 1] and k ∈ {1, . . . , N_s}, we deduce from Proposition 3.1 that there exists a modulus of continuity θ(.) such that, for each j ∈ {0, . . . , r} and i = 1, 2, . . . , and, for each k ∈ {1, . . . , N_s} and σ_i: Since t is a Lebesgue point, there exists a further modulus of continuity ψ : (0, ∞) → (0, ∞) (that does not depend on j, k or t′ (≥ t)) such that and Since g_j(x̄(1)) ≤ 0 for each j, it follows from (19), (21), (22), (23) and (24) that, for all i sufficiently large, Note, on the other hand, that for each j ∉ I(x̄), g_j(x̄(1)) < 0.
So there exists γ_1 > 0 such that It follows from Proposition 3.1(a) that, for all i sufficiently large, Now take any k ∈ {1, . . . , N_s}. Since h_k(x̄(t)) ≤ 0 for all t ∈ [0, 1], we can deduce from (20), (22) and (24) that, for all i sufficiently large, Since x_i coincides with x̄ on [0, t] and h_k(x̄(t)) ≤ 0, we know also that Since h_k(x̄(t)) < −ε for all t ∈ [0, 1] such that t ∉ A_k^ε(x̄), we deduce from Proposition 3.1(a) that, for i sufficiently large, Relations (25), (27), (28), (29) and (30) combine to tell us that, for i sufficiently large, condition (17) is violated by the admissible state trajectory x_i. The validity of the assertions of the theorem follows from this contradiction.

Theorem 7.2 (The State Constrained Integral First-Order Minimax Condition)
Let (x̄, ū) be a minimizer for (S). Assume (H1)-(H2) are satisfied. Then

max{ max_{j ∈ I(x̄)} ∫_0^1 ∇g_j(x̄(1)) · S(1, t)( f(x̄(t), u(t)) − f(x̄(t), ū(t)) ) dt, max{ ∫_0^{t′} ∇h_k(x̄(t′)) · S(t′, t)( f(x̄(t), u(t)) − f(x̄(t), ū(t)) ) dt : k ∈ {1, . . . , N_s}, t′ ∈ A_k(x̄) } } ≥ 0 for all u ∈ U.

Proof We may assume, without loss of generality, that g_0(x̄(1)) = 0. We claim it suffices to prove a weaker form of the theorem, in which the inequality is replaced, for arbitrary ε > 0, by the weaker condition for all u ∈ U. (Here, the set A_k^ε(x̄) is as defined by (18).) Indeed if this weaker condition were true for arbitrary ε > 0 then it would be valid for ε = ε_i, i = 1, 2, . . . , where ε_i ↓ 0. This means that, for any u ∈ U, either there exists j(i) ∈ I(x̄) such that By extracting subsequences, we can arrange that there exist j̄ ∈ I(x̄), k̄ ∈ {1, . . . , N_s} and t̄ ∈ A_k̄(x̄) such that j(i) = j̄, k(i) = k̄ for all i; furthermore t_i → t̄. We deduce in the limit that either in which, we recall, j̄ ∈ I(x̄), k̄ ∈ {1, . . . , N_s} and t̄ ∈ A_k̄(x̄). These relations combine to yield the required (stronger) necessary condition of the theorem. So assume that the assertions of this weaker version of the theorem, in which A_k^ε(x̄) replaces A_k(x̄) for some arbitrary ε > 0, are false; we shall show that this leads to a contradiction. Take any u ∈ U. Under the contraposition hypothesis, there exists γ > 0 such that, for all σ ∈ (0, 1), Take σ_i ↓ 0. For each i, let x_i be the relaxed state trajectory corresponding to the relaxed control {(1 − σ_i, ū), (σ_i, u)}. We can deduce from the relaxation theorem and the optimality of (x̄, ū) that, for each i, Since g_j(x̄(1)) ≤ 0 for all j ∈ {0, . . . , r} and h_k(x̄(t)) ≤ 0 for all t ∈ [0, 1] and k ∈ {1, . . . , N_s}, we deduce from Proposition 3.2 that there exists a modulus of continuity θ(.) with the following properties: for any i, for j = 0, . . . , r and We conclude from (31) that, for all i sufficiently large, and σ_i^{−1} h_k(x_i(t′)) ≤ −γ/2, for k ∈ {1, . . . , N_s} and t′ ∈ A_k^ε(x̄).
Note, on the other hand, that for each j ∉ I(x̄), g_j(x̄(1)) < 0. It follows that there exists γ_1 > 0 such that Proposition 3.1 now tells us that, for all i sufficiently large, By the definition of A_k^ε(x̄), and consequently, We conclude from the preceding four relations that for i sufficiently large. This contradicts (32). The proof is concluded.
The following necessary condition is a version of the minimum principle for the state constrained problem (S), which makes its relation with the preceding first-order minimax theorem explicit.
2. The necessary condition provided by Theorem 7.2 is implicit in the convergence analysis of [11]. It can be expressed as Here, p_j(t) and p_k(t, s) are as defined in the preceding comment. 3. An equivalent version of the necessary condition of Theorem 7.3 is: There exist λ = {λ_j} ∈ (R_+)^{r+1} and non-decreasing functions of bounded variation ν_k ∈ BV(0, 1), k = 1, . . . , N_s, such that λ Here, p satisfies the 'measure driven' differential equation In this form, it will be recognized as a special case of the standard state constrained minimum principle ([10, Thm. 9.3.1]).
The following theorem relates the necessary conditions of Theorem 7.1 (the state constrained first-order minimax condition), Theorem 7.2 (the state constrained integral first-order minimax condition) and Theorem 7.3 (the state constrained minimum principle). (i): Data can be chosen for problem (S) and an admissible process (x̄, ū) such that the first-order minimax condition confirms that (x̄, ū) is not a minimizer, but the minimum principle fails to do so. The converse is also true: data can be chosen for problem (S) and an admissible process (x̄, ū) such that the minimum principle confirms that (x̄, ū) is not a minimizer, but the first-order minimax condition fails to do so. (ii): The integrated first-order minimax condition and the minimum principle are equivalent.

Comment:
The assertions of Theorem 7.4 remain true even if we replace the classical state constrained minimum principle by Arutyunov and Aseev's strengthened 'non-degenerate' state constrained minimum principle [4]. This is because, in Example Two (which establishes that, for some non-optimal admissible process (x̄, ū), the first-order minimax condition is not satisfied while the minimum principle is satisfied), both endpoints of x̄ are interior to the state constraint set. For such problems, the controllability hypotheses of [4] are automatically satisfied, because there are no nonzero normal vectors to the state constraint set at x̄(0) or x̄(1), and the assertions of both the original minimum principle and its non-degenerate form are the same.
Proof (i): The first assertion in (i) is already confirmed by Example One of Sect. 6, in the special case of problem (P) in which none of the state constraints is active. The second assertion is confirmed by Example Two in Sect. 8. (ii): First, suppose that the necessary condition of Theorem 7.3 (the minimum principle) is satisfied. Take any u ∈ U. Inserting u = u(t) into the inequality in part (b) of the theorem statement, integrating w.r.t. t and changing the order of integration give If the necessary condition of Theorem 7.2 is violated, we know that u ∈ U can be chosen such that, for some γ > 0, We deduce from the non-triviality of the Lagrange multipliers, together with the facts that the elements {λ_j} have support in I(x̄) and the measures dμ_k have support in A_k(x̄) for each k, that there exists ρ > 0 such that It follows from the preceding two relations that This contradicts (38). We have shown that the integral first-order minimax condition is satisfied. Now, suppose that the conditions of the integral first-order minimax theorem are satisfied. Define: Define also the function J: The integral first-order minimax condition can be expressed as Now, equip co E with its relative weak L^1 topology. We also take the topology on D to be the relative topology induced on this set by the product of the Euclidean topology and the weak* topology on C*([0, 1]; R^{N_s}). Since (with respect to these topologies) e → J(e, (λ, ν)) is continuous and linear for fixed (λ, ν), we can deduce from the preceding relation that max_{(λ,ν)∈D} J(e, (λ, ν)) ≥ 0 for all e ∈ co E.
We conclude that, for all u ∈ U,

∫_0^1 Δf(t, u(t)) · [ S^T(1, t)( Σ_{j ∈ I(x̄)} λ_j ∇g_j(x̄(1)) ) + Σ_{k=1}^{N_s} ∫_{[t,1]} S^T(s, t) ∇h_k(x̄(s)) dν_k(s) ] dt ≥ 0.

This relation will be recognized as the state constrained minimum principle condition. The proof is complete.
Since, in Example Two, we exhibit such a process (x′, u′), the first-order minimax condition is not satisfied at (x̄, ū).

Conclusions
It is important to investigate the strength of the optimality condition associated with a particular optimal control algorithm, because this gives insights into anomalous situations where the algorithm fails to generate minimizing processes. We have investigated two optimality conditions that have arisen in earlier convergence analysis, the first-order minimax condition and the integrated first-order minimax condition, and have compared them with the minimum principle. For problems without pathwise state constraints, we find that the minimum principle is strictly stronger than the first-order minimax condition, and that the minimum principle and the integrated first-order minimax condition are equivalent. For problems with state constraints, we have found once again that the minimum principle and the integrated first-order minimax condition are equivalent, but the (non-integrated) first-order minimax condition is neither strictly stronger nor strictly weaker than the minimum principle. We provide an example in which the first-order minimax condition can be used to exclude a non-minimizer, but the minimum principle fails to do so. This example establishes that the first-order minimax condition for state constrained problems is an independent necessary condition that can, in certain circumstances, supply more information than the minimum principle.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.