Introduction

As is well known, water is an essential asset for human life, but a scarce one. Climate change, pollution and the inefficient use of water all contribute to this scarcity. Bearing in mind that 80% of water use goes to agriculture and that this entails considerable waste, it is of foremost importance to find strategies for a sustainable use of water. Not surprisingly, we have recently witnessed an increasing interest in the study of water applications within the field of Mathematics; see, for example, [1,2,3, 6, 8,9,10,11, 14, 16] and references cited therein. Surprisingly, the literature relating water scheduling problems and optimal control is scarce.

Stochastic dynamic programming has been applied to solve optimal scheduling problems in several works. For example, [6] proposes to identify a lumped-parameter model based on data produced via simulation with a distributed-parameter model. Brown et al. in [3] present an irrigation scheduling decision support method. In [1] the authors focus on the study of optimal water reservoir management using non-linear one-hidden-layer networks.

An optimal control problem is considered in [14] to study irrigation plans maximizing the farmer's profit, taking into account the cost of water and the sale price of the crop. In [8] the aim is to maximize the biomass production at harvesting time, considering a constraint on the total amount of water used. The problem studied is a singular optimal control problem based on a simple crop irrigation model. In a situation of water scarcity, the authors also show that a strategy with a singular arc can be better than a simple bang-bang control. They illustrate their findings with numerical simulations. A different irrigation model is proposed in [2] to study, via an optimal control problem, the maximization of the biomass production at harvesting time, when a quota on the water used for irrigation is imposed. An interesting feature of this work is the introduction of a non-autonomous dynamical system with a state constraint and a non-smooth right-hand side given by threshold-based soil and crop water stress functions. In [16], an optimal control problem for cascaded irrigation canals is considered. The authors aim to ensure minimum water levels for irrigation demands while avoiding water overflows and even dam collapse. In the scope of [16], the optimal control development is not easy due to the structural complexities involving control gates and interconnected long-distance water delivery reaches, modeled by the Saint-Venant partial differential equations with conservation laws, wave superposition effects, coupling effects and strong non-linearities. In [11] a daily irrigation planning model for a crop field is developed with the help of optimal control theory. Such a model requires knowledge of weather data (temperature, rainfall and wind speed), the type of crop, the type of irrigation, the location, the humidity in the soil at the initial time and the type of soil. The main goal is to minimize the irrigation water while guaranteeing that the crop field is kept in a good state of preservation.

In this paper our aim is to study irrigation policies minimizing the use of water while making sure that the crop is kept in a healthy condition at all times. We propose an optimal control problem based on that developed in [9]. The aim is to minimize the amount of water used for irrigation during a fixed time interval [0, T]. The dynamics describes the variation of the water in the soil, x, which equals the difference between water gains, due to weather conditions and the flow rate of irrigation water, u, and water losses. The differential equation is written in modes, since the losses depend on whether or not the soil is at field capacity. This equation is coupled with a state constraint \(x(t) \ge x_{\min }\) to ensure the healthy growth of the crop; here \(x_{\min }\) is the hydrological need of the crop. The optimal control problem addressed is then a non-smooth optimal control problem with a state constraint. A remarkable feature of this problem is that different weather conditions produce different solutions. Thus, and differing from [9], our aim is to study different profiles for the optimal trajectories, under some mild assumptions ((A) and (B) below). Our starting point is a set of eight different profiles for optimal trajectories, here called scenarios, capturing basic features of possible optimal trajectories in a time interval. Since our problem is a singular optimal control problem with a state constraint, the study of optimal trajectories with and without boundary intervals is a main concern. Observe that, in contrast with [10], here we deal with a non-smooth formulation of the optimal control problem. Appealing to the Maximum Principle (e.g., [4, 15]) we study analytically the optimal solutions and their multipliers. A remarkable feature of our study is the assertion that the Maximum Principle for our problem holds in the normal form, i.e., the multiplier \(\lambda \) corresponding to the cost is not null; in this respect we refer the reader to, e.g., [5, 13] and references within.

To avoid leaving out optimal trajectories with profiles that can be seen as concatenations of those eight chosen ones, we also illustrate how such theoretical findings can help determine analytical solutions and multipliers for an extra scenario which is a composition of two of the eight scenarios.

Finally, we solve the problem numerically. We do so via the direct method: we first discretize the problem using the Euler method for the differential equation and we then solve the resulting large-scale optimization problem. Using three different sets of data for the predicted weather conditions, we present three cases with solutions exhibiting different profiles. Moreover, we (partially) validate such numerical findings using the theoretical characterization extracted from the Maximum Principle, illustrating the value of the analytical study previously done.

Our study here is a first step towards the determination of implementable solutions for the irrigation of healthy crops in agriculture that minimize the use of that precious but scarce resource called water.

This paper is organized as follows. In the “Irrigation Optimal Control Problem” Section, we revisit the model proposed in [9], where the dynamics is written with field capacity modes. We begin the “Necessary Conditions for (OCP)” Section by showing some details associated with the state inequality constraint, recalling some concepts and introducing some mild assumptions on our problem. Then, we analyse the conclusions of Theorem 9.3.1 in [15] when applied to our optimal control problem. In the “Eight Scenarios” Section, we present eight scenarios based on the types of optimal trajectories. For each scenario we apply the necessary conditions of the “Necessary Conditions for (OCP)” Section, characterizing analytically the solution and its multipliers. In “Approximation of Function g Values”, our starting point is a table with three different sets of function values, each set representing the difference between the daily precipitation and the daily estimated evapotranspiration for the type of crop. Each set of values corresponds to different weather conditions. Since each set only has daily information and we need estimates of these values at intermediate mesh points to solve our optimal control problem computationally, we approximate those sets of values using computational tools. Such approximations are then used in the “Validation” Section, where we solve the optimal control problem numerically for each of the three cases, via the direct method, and present the computed solutions. We (partially) validate the numerical solutions for the three considered cases by comparing them with the analytical ones computed in the “Eight Scenarios” Section. This paper finishes with a section devoted to conclusions and future work.

Irrigation Optimal Control Problem

Various approaches to improve the efficiency of irrigation in agriculture have been proposed in the literature. Here we focus on the optimal control approach proposed in [9]. The main idea of [9] is to determine the irrigation periods and the amount of water used so as to minimize the total amount of water used for irrigation over a certain period of time T, taking into account the water in the soil. They consider that the variation of water in the soil depends not only on the irrigation, but also on the evapotranspiration, water loss and precipitation. Mathematically, this perspective leads to the following dynamical system governing the variation of water in the soil:

$$\begin{aligned} \left\{ \begin{array}{l} \dot{x}(t)=u(t) + g(t) - loss(x(t)), ~\text { for all }t\ge 0,\\ x(0)=x_0, \end{array} \right. \end{aligned}$$
(S)

where

\(\triangleright \):

x(t) stands for the water in the soil at the instant t;

\(\triangleright \):

u(t) is the flow rate of water at the instant t;

\(\triangleright \):

g(t) is defined as

$$\begin{aligned} g(t)= rfall(t) - evtp(t), \end{aligned}$$
(1)

where rfall(t) is the daily precipitation and evtp(t) is the estimated evapotranspiration for the type of crop, at the instant t;

\(\triangleright \):

loss(x) represents the losses due to deep percolation. In [9], the function loss appearing in the dynamics of system (S) is defined as:

$$\begin{aligned} loss(x) = \left\{ \begin{array}{lll} \beta x, &{} \text { if } &{} x_{\min } \le x \le x_{FC},\\ x - x_{FC} + \beta x, &{} \text { if } &{} x \ge x_{FC}, \end{array} \right. \end{aligned}$$
(2)

where \(x_{FC}\) represents the amount of water retained in the soil after the soil has been drained, and \(\beta \in [0,\beta _{\max }]\) is a constant that represents the percentage of water losses due to run-off and deep infiltration. Note that \(x_{FC}\) depends on the type of soil.

We remark that the analytical expression of g is unknown. The precipitation data is collected from a weather station and the evapotranspiration is obtained as the product of the crop coefficient and the reference evapotranspiration. The latter is calculated according to [17], using data from the weather station. We emphasize that, in the future, we will use weather predictions.

In [9], the authors also argue that the amount of water in the soil x(t) needs to be kept at, or above, a certain level \(x_{\min }\) at all instants of time in order to guarantee the crop's growth. Such a requirement is translated into the state constraint:

$$\begin{aligned} x_{\min } \le x(t),\ \forall t\ge 0. \end{aligned}$$
(3)

It is no surprise that the control constraint

$$\begin{aligned} 0\le u(t)\le M,\ \forall t\ge 0, \end{aligned}$$
(4)

for some \(M>0\), is also imposed. The main focus of the current research is to determine the irrigation policy, i.e., the function u, so as to minimize the amount of water spent \(\displaystyle \int _0^T u(t)dt\) over a time interval [0, T] subject to (S) coupled with the constraints (3) and (4). In this paper, we also intend to characterize analytically the solutions under different specific scenarios. We obtain such characterizations by applying the well-known Maximum Principle. To simplify the analysis, we first reformulate the problem in the Mayer form, appealing to the usual state augmentation technique: we introduce a new variable z such that the cost function is now z(T), leading directly to the following optimal control problem:

$$\begin{aligned} {\left\{ \begin{array}{ll} \min \ \ \ &{} z(T)\\ \text {s. t. } &{}{\dot{x}}(t) = f\big (t, x(t),u(t)\big ),\ \text {a.e. } t\in [0,T], \\ &{}{\dot{z}}(t) = u(t),\ \text {a.e. } t\in [0,T], \\ &{}x(t) \ge x_{\min },\ \forall t\in [0,T], \\ &{}u(t) \in [0,M],\ \text {a.e. } t\in [0,T], \\ &{}x(0) = x_0 ,\\ &{}z(0) = 0, \end{array}\right. } \end{aligned}$$
(OCP)

where

$$\begin{aligned} f(t,x,u)= u + g(t) - loss(x). \end{aligned}$$
(5)

Observe that \(z(T)=\displaystyle \int _0^T u(t)dt\). For convenience, we summarize the data of (OCP) in Table 1.
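For later computational use, the dynamics (5) with the modal loss (2) can be coded directly. The following is a minimal MATLAB sketch; the parameter values and the function g below are illustrative placeholders, not the actual problem data of Table 1.

```matlab
% Minimal sketch of the right-hand side (5) with the modal loss (2).
% The values below are illustrative placeholders, not problem data.
beta = 0.25; xFC = 40;                  % hypothetical beta and x_FC
g    = @(t) 2*sin(t) - 4;               % stand-in for the (unknown) g
loss = @(x) beta*x + max(x - xFC, 0);   % (2): extra drainage only above x_FC
f    = @(t,x,u) u + g(t) - loss(x);     % (5): gains minus losses
```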

Observe that \(u\rightarrow f(t,x,u)\) is a smooth function. On the other hand, from (2) it is an easy task to see that \(x\rightarrow f(t,x,u)\) is Lipschitz continuous with constant \(\beta +1\) (as shown in [9]). If \(x\rightarrow f(t,x,u)\) were continuously differentiable, (OCP) would fall into the category of singular optimal control problems with a single first order state constraint. However, it is a simple task to see from the Basic Hypotheses and the definition of f (see (5)) that \((t,x)\rightarrow f(t,x,u)\) is merely Lipschitz continuous in \([0,T]\times \mathbb {R}\) for all \(u\in \mathbb {R}\) (see also [9]). It follows that (OCP) is indeed a non-smooth optimal control problem with a state constraint to which Theorem 9.3.1 in [15] applies.

Next, we analyse the conclusions of Theorem 9.3.1 in [15], when applied to (OCP). Let us also define the set of active constraints as

$$\begin{aligned} I(x)=\left\{ t\in [0,T]:~x_{\min }-x(t)=0\right\} . \end{aligned}$$

Consider the unmaximized Hamiltonian

$$\begin{aligned} H(t,x,z,p^x,p^z,u)=p^x\cdot f(t,x,u)+p^zu. \end{aligned}$$

If \(\left( {\bar{X}},{\bar{u}}\right) \) is a strong local minimizer for (OCP), then Theorem 9.3.1 in [15] asserts the existence of an absolutely continuous function \(p:[0,T]\rightarrow \mathbb {R}^2\), \(\lambda \ge 0\) and a measure \(\mu \in C^\oplus ([0,T])\) such that

(i):

\((p^x,p^z,\mu ,\lambda ) \ne 0\);

(ii):

\(-\dot{p}^x(t) \in \partial _x H\big (t,{\bar{x}}(t),{\bar{z}}(t),q^x(t),p^z(t),{\bar{u}}(t)\big ) \text { and } \dot{p}^z(t)=0 \quad \text {a.e.}\);

(iii):

\(q^x(T)=0\) and \(p^z(T)=-\lambda \);

(iv):

\(H\big (t,{\bar{x}}(t), {\bar{z}}(t), q^x(t),p^z(t),{\bar{u}}(t)\big )= \displaystyle \max _{u\in [0,M]} H\big (t,{\bar{x}}(t), {\bar{z}}(t), q^x(t),p^z(t),u\big )\);

(v):

\(\text {supp} \{ \mu \} \subset I({\bar{x}})\), where \(\text {supp}\) is the support of the measure \(\mu \);

where

$$\begin{aligned} q^x(t)=\left\{ \begin{array}{lll} p^x(t)-\displaystyle \int _{[0,t[} \mu (ds) &{} \text {for} &{} t\in [0,T[,\\ p^x(T)-\displaystyle \int _{[0,T]} \mu (ds)&{} \text {for} &{} t=T. \end{array}\right. \end{aligned}$$
(6)

A remarkable feature of (OCP) is that (i)–(v) hold with \(\lambda > 0\), as shown in [9]. As is well known, once we have the guarantee that \(\lambda \ne 0\), we can normalize all the multipliers and work with \(\lambda = 1\). For simplicity, this is the approach that we take next.

Starting with (ii), it is straightforward to see from the above that \(p^z(t)\equiv -1\). Recalling now the definition of Clarke sub-differential \(\partial _x H\) (see, e.g., [15]) and that of f, we get, from (ii),

$$\begin{aligned} \dot{p}^{x} (t) \in \left\{ \begin{array}{lll} \{ \beta q^{x} (t)\}, &{} \text { if } &{} x_{\min } \le \bar{x}(t) < x_{FC},\\ \, [\beta ,\beta + 1]\, q^{x} (t), &{} \text { if } &{} \bar{x}(t) = x_{FC},\\ \{ (\beta + 1)\, q^{x} (t)\}, &{} \text { if } &{} \bar{x}(t) > x_{FC}, \end{array} \right. \end{aligned}$$
(7)

The function \(q^x\) is a function of bounded variation. It follows from (v) that \(\mu \) is zero when the state constraint is not active, i.e., when \(x_{\min }-{\bar{x}}(t) <0\). It is known that the measure \(\mu \) is defined by a monotone non-negative function \(\nu \) of bounded variation with at most a countable number of jumps. The Lebesgue Decomposition Theorem asserts that \(\mu \) can be written as

$$\begin{aligned} \mu =\mu _a+\mu _\text {sing}+\mu _d, \end{aligned}$$
(8)

where \(\mu _a\) is an absolutely continuous measure with respect to the Lebesgue measure, \(\mu _\text {sing}\) is a singular measure with respect to the Lebesgue measure and \(\mu _d\) is a discrete measure. Although the presence of \(\mu _\text {sing}\) cannot be discarded in general, most practical problems exhibit “well behaved” measures \(\mu \) with \(\mu _\text {sing}=0\) (see, e.g., [7, 12]). On the other hand, the Radon–Nikodym Theorem asserts the existence of an integrable function \(\nu _a\) such that

$$\begin{aligned} \mu _a ([0,t[)=\int _{[0,t[}\mu _a (ds)=\int _0^t \nu _a(s)ds. \end{aligned}$$
(9)

By assumption (B), the set \( I({\bar{x}})\) does not include contact points. However, \( I({\bar{x}})\) may contain boundary intervals. If \(t \in [t_\text {in}, t_\text {out}] \subset I({\bar{x}})\), then \({\bar{x}}(t)=x_{\min }\) and \(\dot{{\bar{x}}}(t)=0\) for \(t \in \ ]t_\text {in}, t_\text {out}[\); since \(loss(x_{\min })=\beta x_{\min }\), the corresponding optimal control satisfies

$$\begin{aligned} {\bar{u}}(t)=\beta x_{\min }-g(t) \end{aligned}$$
(10)

for all \(t \in \ ]t_\text {in}, t_\text {out}[\). Controls satisfying (10) may only appear when \(\beta x_{\min }-g(t)\in [0,M]\). Regarding the control constraint \(\bar{u}(t)\in [0,M]\), some comments are called for. In most scenarios, M may be considered to be very large so that u is always less than M. This is the case in situations of “practical interest”. Even in cases of extreme drought, our choice of the value M is such that \(\beta x_{\min } - g(t) < M\) for all \(t \in [0,T]\). On the other hand, although the case of boundary controls being 0, i.e., \(\beta x_{\min }-g(t)=0\) for \(t \in \ ]t_\text {in}, t_\text {out}[\), is not to be overlooked, the case of interest is when the boundary control is itself a singular control with \(\beta x_{\min } - g(t) > 0\). So, here we refrain from considering boundary controls being 0, since this appears to be quite unreasonable, taking into account the physical meaning of our problem, and this case has not come up in our simulations, covering different sets of values for g. So, we assume that

$$\begin{aligned} \beta x_{\min } - g(t) \in \ ]0,M[\ \text { for all }\ t \in \ ]t_\text {in}, t_\text {out}[. \end{aligned}$$
(11)
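In practice, condition (11) is easy to check numerically on a candidate boundary window. A minimal MATLAB sketch follows; all the values below are illustrative placeholders, not problem data.

```matlab
% Sketch: verify condition (11) on a candidate boundary window ]tin, tout[.
% All values below are illustrative placeholders.
beta = 0.25; xmin = 28; M = 50;         % hypothetical data
g    = @(t) 2*sin(t) - 4;               % stand-in for the fitted g
tin  = 2.2; tout = 6.2;                 % candidate junction times
v = beta*xmin - g(linspace(tin, tout, 200));   % boundary control (10) sampled
assert(all(v > 0 & v < M), 'condition (11) fails on this window');
```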

Let us now consider conclusion (iv) of the above necessary conditions. Recalling that \(p^z(t)\equiv -1\) and rewriting (iv) we deduce that

$$\begin{aligned} (q^x(t)-1)({\bar{u}}(t)-u)\ge 0, \end{aligned}$$
(12)

for all \(t\in [0,T]\). Set \(\phi (t)=q^x(t)-1\). Then, from (12) we get the following characterization of the optimal control \({\bar{u}}\):

$$\begin{aligned} {\bar{u}}(t)= \left\{ \begin{array}{lll} 0, &{} \text { if } &{} \phi (t)<0,\\ u_{\text {sing}}(t), &{} \text { if } &{} \phi (t)=0,\\ M, &{} \text { if } &{} \phi (t)>0, \end{array} \right. \end{aligned}$$
(13)

where \(u_{\text {sing}}(t) \in \ ]0,M[\). If the switching function \(\phi \) is zero only at isolated instants of time, then \({\bar{u}}\) is a bang-bang control switching between the values 0 and M whenever \(\phi \) changes sign. Here, since we choose a large M, optimal controls taking the value M on some time interval are not to be expected. If the switching function is 0 on a time interval, then \({\bar{u}}\) is a singular control on this interval. However, the Maximum Principle does not provide any further information about singular controls.

In the following section, we compute the analytical solutions of \(p^x(t)\), \(q^x(t)\), \({\bar{u}}(t)\) and \({\bar{z}}(T)\) for almost every \(t \in [0,T]\) for the eight different scenarios that we describe next.

Before engaging in that discussion, it is worth mentioning that if the trajectory is above \(x_{\min }\) on [0, T], then \(\mu \) is zero, \(p^x(t) = q^x(t)\) for all t and \(p^x(T) = 0\). Thus, necessarily, \(q^x(t) \equiv 0\) and the switching function is always \(-1\) (so, the trajectory does not have singular arcs). This is in agreement with the physical interpretation of our problem: irrigation should only be active (i.e., \({\bar{u}}(t) \ne 0\)) when \({\bar{x}}(t) = x_{\min }\) and the weather conditions are not enough to keep the state away from the boundary of the state constraint.

Eight Scenarios

In this section we propose eight profiles for the optimal trajectories, under the basic assumptions (A) and (B). To each of them we apply the necessary conditions above in order to extract information on the solution and its multipliers. To avoid leaving out optimal trajectories with profiles that can be seen as concatenations of those eight chosen ones, we end this section discussing a concatenation of two of the eight proposed scenarios: the trajectory starts in between \(x_{\min }\) and \(x_{FC}\), increases up to \(x_{FC}\), remains there for some time interval, before dropping to \(x_{\min }\) where, again, it remains for some time.

Let us denote by \(\big ({\bar{X}},{\bar{u}}\big ) = \big ({\bar{x}},{\bar{z}},{\bar{u}}\big )\) the optimal solution for (OCP). We assume that \({\bar{x}}\) has at most one boundary arc in all eight scenarios. Junction times, as well as the instants of time when the trajectory reaches, or drops from, the threshold \(x_{FC}\), are of importance: \(0< t_a< t_b< t_c< t_d< t_e < T\). Junction points are easily recognised from the context.

Sketches of the eight scenarios of interest appear in Fig. 1. The optimal state trajectory x is not necessarily a composition of line segments; the plots are only schemes illustrating eight possible behaviours of x.

Fig. 1 (Colour only in online version) Schemes of possible trajectories for eight different scenarios.

As we have seen, the order of the state constraint is one. Recall that we consider here cases where the optimal trajectories have at most one single boundary arc. For trajectories with more than one boundary arc, we only have to compose scenarios of the “Eight Scenarios” Section and proceed to a composed analytical study, possibly subject to updates of the transversality condition associated with the multiplier \(q^x\); see an illustrative example of this in the “Concatenation of Scenarios - Example” Section.

Furthermore, in order to avoid repetitive calculations from now on, the trajectories of Fig. 1 with similar analytical solutions (with respect to the state variable x, the respective adjoint function and the control variable) are drawn with the same colour.

Recall that we do not consider scenarios where the initial and/or the final states touch the boundary at isolated points. Those cases, especially when the trajectory additionally has a boundary arc in ]0, T[, require a much more demanding and longer analysis. In spite of the interest of such trajectories from the point of view of the Maximum Principle, we opt to keep the focus on trajectories with boundary intervals satisfying assumption (B), since those are the problematic ones as far as irrigation is concerned. The analysis of trajectories with contact points at the initial and/or final points will be done elsewhere.

Analysis of the Scenarios

We now extract information from the necessary conditions (i)–(v), presented in the “Necessary Conditions for (OCP)” Section, for all the eight scenarios of interest. Recall that Scenarios 1, 3, 4, 6, 7 and 8 have a boundary arc on the interval \(]t_\text {in}, t_\text {out}[\). On such an interval, as seen above, the optimal control is \({\bar{u}}(t)= \beta x_{\min } - g(t) \in \ ]0,M[\) (see (11)). For each scenario with a boundary interval \([t_\text {in}, t_\text {out}]\), we extract the following information from the necessary conditions (i)–(v) of the “Necessary Conditions for (OCP)” Section:

1.:

whenever \(t_\text {in}>0\), \(q^x(t)=p^x(t)\) for all \(t\in [0, t_\text {in}[\), because \({\bar{x}}(t) >x_{\min }\) for all \(t\in [0,t_\text {in}[\) and \(\mu \left( [0, t_\text {in}[\right) =0\);

2.:

\(q^x(t)=1\) for all \(t\in \ ]t_\text {in},t_\text {out}[\), since the optimal control is a singular control in \(]t_\text {in},t_\text {out}[\) (see (12) and (13));

3.:

whenever \(t_\text {out}<T\), \(\mu \left( ]t_\text {out}, T]\right) =0\), because \({\bar{x}}(t) >x_{\min }\) for all \(t\in \ ]t_\text {out}, T]\);

4.:

the adjoint inclusion (ii) of “Necessary Conditions for (OCP)" Section reduces to \({\dot{p}}^x(t) = \beta q^x(t)\) a.e. \(t \in [t_\text {in},t_\text {out}]\).

It follows from 2. that \(q^x\) is absolutely continuous on \(]t_\text {in}, t_\text {out}[\) and that the decomposition of \(\mu \) in (8) reduces to \(\mu =\mu _a+\mu _d,\) where

$$\begin{aligned} \mu _d(A)=\eta _\text {in}\delta _{t_\text {in}}(A)+\eta _\text {out}\delta _{t_\text {out}}(A) ~\text { for any measurable set } A\subset [0,T], \end{aligned}$$
(14)

where \(\eta _\text {in}, \eta _\text {out}\ge 0\) and

$$\begin{aligned} \delta _{t}(s)=\left\{ \begin{array}{lll} 1, &{} \text { if } &{} s=t,\\ 0, &{} \text { if } &{} s\ne t. \end{array}\right. \end{aligned}$$

We know from (9) that \(\displaystyle \int _{[0,t[} \mu _a(ds)=\displaystyle \int _{0}^t \nu _a(s)ds\) for all \(t\in [0, T]\) and, from (v) of “Necessary Conditions for (OCP)" Section,

$$\begin{aligned} \nu _a(t)\big (x_{\min }-{\bar{x}}(t)\big ) = 0\text { for all } t\in [0, T]. \end{aligned}$$

From now on, we use the following notation

$$\begin{aligned} \gamma \left( {\tilde{t}}^-\right) = \lim \limits _{t\rightarrow {\tilde{t}}^-} \gamma (t) \quad \text {and} \quad \gamma \left( {\tilde{t}}^+\right) = \lim \limits _{t\rightarrow {\tilde{t}}^+} \gamma (t), \end{aligned}$$

where \(\gamma \) is a function and \({\tilde{t}}\) is an interior point of its domain. From (6) and (14), as well as from items 1., 2. and 3. of the current section, it follows that

$$\begin{aligned} q^x\left( t_\text {in}^+\right) = q^x\left( t_\text {in}^-\right) - \eta _\text {in},\quad q^x\left( t_\text {out}^+\right) =q^x\left( t_\text {out}^-\right) -\eta _\text {out} \end{aligned}$$
(15)

and

$$\begin{aligned} 1 = p^x(t) - \eta _\text {in} - \displaystyle \int _{t_\text {in}}^t \nu _a(s)ds \text { for all } t \in \ ]t_\text {in}, t_\text {out}[. \end{aligned}$$
(16)

Differentiating now (16), as well as taking into account 2. and 4. of the current section, we conclude that

$$\begin{aligned} \nu _a(t)=\beta , \quad \text { for all }t \in \ ]t_\text {in}, t_\text {out}[. \end{aligned}$$

Again from 2. and 4. of the current section, we obtain that

$$\begin{aligned} p^x(t)=p^x(t_\text {in})+\beta (t-t_\text {in})\text { for all } t \in \ ]t_\text {in}, t_\text {out}[. \end{aligned}$$
(17)

Furthermore, from (16) and (17), we also get that \(p^x(t_\text {in})=1+\eta _\text {in}\).

The remarkable feature of this situation is that we get the following characterization of \(\mu \): for any measurable set \(A \subset [0,T]\)

$$\begin{aligned} \mu (A) = \eta _\text {in}\delta _{t_\text {in}}(A) + \eta _\text {out}\delta _{t_\text {out}}(A) + \beta \displaystyle \int _{A} {\mathbf {1}}_{]t_\text {in}, t_\text {out}[}(t)dt, \end{aligned}$$
(18)

where \({\mathbf {1}}_A\) is the indicator function of the set A defined as

$$\begin{aligned} {\mathbf {1}}_A(t) := \left\{ \begin{array}{lll} 1, &{} \text { if } &{} t \in A, \\ 0, &{} \text { if } &{} t \notin A. \end{array} \right. \end{aligned}$$

For \(t \in [0,T]\), this implies that

$$\begin{aligned} \int _{[0,t[} \mu (ds) = \left\{ \begin{array}{lll} 0, &{} \text { if } &{} t \in [0,t_\text {in}[, \\ \eta _\text {in}, &{} \text { if } &{} t = t_\text {in}, \\ \eta _\text {in} + \beta (t-t_\text {in}), &{} \text { if } &{} t \in \ ]t_\text {in}, t_\text {out}[, \\ \eta _\text {in} + \eta _\text {out} + \beta (t_\text {out}-t_\text {in}), &{} \text { if } &{} t = t_\text {out}, \\ \eta _\text {in} + \eta _\text {out} + \beta (t_\text {out}-t_\text {in}), &{} \text { if } &{} t \in \ ]t_\text {out}, T]. \end{array}\right. \end{aligned}$$

It is important to emphasize that in Scenario 8 we have that \(t_\text {out} = T\). In this case, when \(t=T\), we obtain that

$$\begin{aligned} \displaystyle \int _{[0,T]} \mu (ds) = \eta _\text {in} + \eta _\text {out} + \beta (t_\text {out}-t_\text {in}). \end{aligned}$$

Recalling the decomposition (8), this means that \( \mu _\text {sing}\) vanishes on [0, T] and that \(\mu _d\) has possibly two atoms, one at \(t_\text {in}\) and another at \(t_\text {out}\). Moreover, taking into account (6) and the properties of \(\mu _a\), we deduce that

$$\begin{aligned} {\dot{p}}^x(t)={\dot{q}}^x(t)+\beta {\mathbf {1}}_{]t_\text {in}, t_\text {out}[}(t)\ \text { for a.e. } t\in [0,T]. \end{aligned}$$
(19)

Scenario 1

Let us assume that \(t_\text {in} = t_a\) and \(t_\text {out} = t_b\). For Scenario 1, we have that \(loss({\bar{x}}(t)) = \beta {\bar{x}}(t)\) for all \(t\in [0,T]\), since \({\bar{x}}(t) < x_{FC}\) for all \(t\in [0,T]\). Thus, the adjoint inclusion (ii) of the “Necessary Conditions for (OCP)” Section reduces to

$$\begin{aligned} {\dot{p}}^x(t)=\beta q^x(t)\ \text { for a.e. } t\in [0,T]. \end{aligned}$$
(20)

The optimal control is defined as in (10) for \(t \in \ ]t_\text {in},t_\text {out}[\). Recall that \(x_0>x_{\min }\) and that the optimal trajectory has one single boundary arc. More specifically, we consider the situation when

\(\triangleright \):

there exist \(t_\text {in},~t_\text {out} \in ]0,T[\) such that \(t_\text {in}<t_\text {out}\) and \({\bar{u}}(t)=\beta x_{\min }-g(t) >0\) for all \(t\in \ ]t_\text {in}, t_\text {out}[\);

\(\triangleright \):

\({\bar{x}}(t) > x_{\min }\) for all \(t\in [0,T]\backslash [t_\text {in}, t_\text {out}]\).

From (19) and (20), we get \({\dot{q}}^x(t)+\beta {\mathbf {1}}_{]t_\text {in}, t_\text {out}[}(t)= \beta q^x(t)\) for almost every \(t\in [0,T]\), leading to the adjoint equation

$$\begin{aligned} {\dot{q}}^x(t)=\beta q^x(t)-\beta {\mathbf {1}}_{]t_\text {in}, t_\text {out}[}(t)\ \text { for a.e. } t\in [0,T]. \end{aligned}$$
(21)

We now seek information on the optimal control on the intervals \([0,t_\text {in}[\) and \(]t_\text {out}, T]\). Starting with \([0,t_\text {in}[\), recall that

\(\triangleright \):

\(q^x\left( t_\text {in}^+\right) =1\) from item 2. of the current section;

\(\triangleright \):

\(\eta _\text {in}\ge 0\);

\(\triangleright \):

\(p^x(t_\text {in})=1+\eta _\text {in} \ge 1\);

\(\triangleright \):

\(p^x\) is continuous on \([0,t_\text {in}[\);

\(\triangleright \):

\(p^x(t)=q^x(t) = e^{\beta \left( t-t_\text {in}\right) }(1+\eta _\text {in})\) for \(t\in [0,t_\text {in}[\) (see 1. and (21));

\(\triangleright \):

\(\phi (t) = q^x(t) - 1\).

If \(\eta _\text {in}>0\), then \(\phi \) would be positive in a left neighbourhood of \(t_\text {in}\) and, consequently, we would have \({\bar{u}}(t) = M\) there. Such a situation is not realistic, since M is very large. Thus, we deduce that \(\eta _\text {in}=0\). It then follows, from the adjoint equation, that \(p^x(t) = q^x(t) = e^{\beta (t-t_\text {in})}\) for \(t \in [0,t_\text {in}[\). If \(\beta >0 \), it then follows that \(\phi (t) < 0\) and so \({\bar{u}}(t)=0\) for \(t \in [0,t_\text {in}[\).

We now turn to \(]t_\text {out}, T]\). Recall that \(q^x(T)=0\) and that (21) holds on \(]t_\text {out}, T]\). Then, we have \(q^x(t)=0\) for all \(t \in \ ]t_\text {out}, T]\). Since \(q^x\left( t_\text {out}^+\right) =0\), we deduce from the second equation in (15) that \(\eta _\text {out}=1\). Moreover, we have \(\phi (t)=-1<0\) and so \({\bar{u}}(t)=0\) for almost every \(t \in \ ]t_\text {out}, T]\). Summarizing, we have the following:

$$\begin{aligned} \circ \&\eta _\text {in}=0,\quad \eta _\text {out}=1;\nonumber \\ \circ \&p^x(t)=\left\{ \begin{array}{lll} e^{\beta (t-t_\text {in})}, &{} \text { if } &{} t \in [0,t_\text {in}[, \\ 1+\beta (t-t_\text {in}), &{} \text { if } &{} t \in [t_\text {in}, t_\text {out}], \\ 1 + \beta (t_\text {out}-t_\text {in}), &{} \text { if } &{} t \in \ ]t_\text {out}, T]; \\ \end{array}\right. \end{aligned}$$
(22)
$$\begin{aligned} \circ \&q^x(t)=\left\{ \begin{array}{lll} e^{\beta (t-t_\text {in})}, &{} \text { if } &{} t \in [0,t_\text {in}[,\\ 1, &{} \text { if } &{} t \in \ ]t_\text {in}, t_\text {out}[,\\ 0, &{} \text { if } &{} t \in \ ]t_\text {out}, T]; \end{array}\right. \end{aligned}$$
(23)
$$\begin{aligned} \circ \&{\bar{u}}(t)=\left\{ \begin{array}{lll} 0, &{} \text { if } &{} t \in [0,t_\text {in}[~\cup ~]t_\text {out}, T[ \text { a.e.},\\ \beta x_{\min } - g(t), &{} \text { if } &{} t\in \ ]t_\text {in}, t_\text {out}[ \text { a.e.}; \end{array}\right. \end{aligned}$$
(24)
$$\begin{aligned} \circ \&{\bar{z}}(T) = \beta x_{\min }(t_\text {out} - t_\text {in}) - \int _{t_\text {in}}^{t_\text {out}} g(s) ds. \end{aligned}$$
(25)

The optimal trajectory \({\bar{x}}(t)\) for all \(t \in [0,T]\) can now easily be calculated solving the respective differential equations.
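For reference, the closed-form expressions (22)–(25) are straightforward to evaluate numerically, which is useful for the validation carried out later. A minimal MATLAB sketch follows; the parameter values are illustrative placeholders and the names are ours.

```matlab
% Sketch: evaluate the Scenario 1 multipliers (22)-(23), control (24)
% and cost (25). All values below are illustrative placeholders.
beta = 0.25; xmin = 28; tin = 2.2; tout = 6.2;
g  = @(t) 2*sin(t) - 4;                              % stand-in for g
px = @(t) exp(beta*(t-tin)).*(t < tin) ...
        + (1 + beta*(t-tin)).*(t >= tin & t <= tout) ...
        + (1 + beta*(tout-tin)).*(t > tout);
qx = @(t) exp(beta*(t-tin)).*(t < tin) + 1*(t > tin & t < tout);
ub = @(t) (beta*xmin - g(t)).*(t > tin & t < tout);  % zero elsewhere, cf. (24)
zT = integral(@(s) beta*xmin - g(s), tin, tout);     % cost z(T), cf. (25)
```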

Scenario 2

Here we have \({\bar{x}}(t)>x_{\min }\) for all \(t\in [0,T]\). So, the state constraint is never active. Consequently, we deduce that

$$\begin{aligned} \mu (A)=0 \text { for any measurable set }A\subset [0,T] \end{aligned}$$

and \(q^x(t)=p^x(t)\) for all \(t\in [0,T]\). The special feature of this scenario is the fact that \({\bar{x}}\) remains at the threshold \(x_{FC}\) on some time interval. In this case the adjoint inclusion reduces to

$$\begin{aligned} {\dot{p}}^x(t)=\left\{ \begin{array}{lll} \beta p^x(t), &{} \text { if } &{} t \in [0,t_a] \cup [t_b,T] \text { a.e.,} \\ \zeta (t) p^x(t), &{} \text { if } &{} t \in [t_a,t_b] \text { a.e.,} \end{array}\right. \end{aligned}$$

where \(\zeta \) is a measurable function such that \(\beta \le \zeta (t)\le \beta +1\) for almost every \(t \in [t_a,t_b]\). Recalling that \(q^x(T) = 0\) (see item (iii) of the “Necessary Conditions for (OCP)” Section), it is then a simple matter to see that \(p^x(t) = q^x(t) = 0\) for all \(t\in [0,T]\). In this case, \(\phi (t) = -1 < 0\) on [0, T], which implies that the optimal control is \({\bar{u}}(t) = 0\) almost everywhere on [0, T] and so \(\bar{z}(T)=0\). The optimal trajectory \({\bar{x}}(t)\) for all \(t \in [0,T]\) can now easily be calculated solving the respective differential equations.

Scenario 3

Here set \(t_\text {in} = t_a\) and \(t_\text {out}=t_b\). This scenario is a mix of the two previous ones: we first have a boundary arc on \([t_\text {in}, t_\text {out}]\) and then another interval \([t_c,t_d]\), with \(t_\text {out}< t_c\), where \({\bar{x}}(t)=x_{FC}\). The interval \([t_c,t_d]\) makes the analysis of this scenario more delicate, as we show next.

Observe that on \(]t_\text {out}, T]\) we have

$$\begin{aligned} {\dot{q}}^x(t) = \zeta (t) q^x(t)\ \text { for a.e. } t \in \ ]t_\text {out},T], \end{aligned}$$

where

$$\begin{aligned} \zeta (t) = \beta \ \text { for a.e. } t \in \ ]t_\text {out},t_c[~\cup ~]t_d, T] \ \text { and }\ \zeta (t) \in [\beta , \beta +1]\ \text { for a.e. } t \in \ ]t_c,t_d[. \end{aligned}$$

Moreover, we know that \(q^x(T)=0\). Thus, we have

$$\begin{aligned} q^x(t) = 0\quad \text {for } t \in \ ]t_\text {out},T]. \end{aligned}$$

Consequently, as \(q^x\left( t_\text {out}^+\right) = 1 - \eta _\text {out} \le 1\), we have \(\eta _\text {out} = 1\) and \(\phi (t) = -1\) for \(t \in \ ]t_\text {out},T].\)

For \(t\in [0, t_\text {out}[\), \(p^x\) and \(q^x\) are as defined in Scenario 1 with \(\eta _\text {in}=0\) and \(\nu _a(t)=\beta \).

Concluding, we have that \(p^x(t)\), \(q^x(t)\), \({\bar{u}}(t)\) and \({\bar{z}}(T)\) are given by (22), (23), (24) and (25), respectively, for all \(t\in [0,T]\) with \(t_\text {in} = t_a\) and \(t_\text {out}=t_b\). The optimal trajectory \({\bar{x}}(t)\) for all \(t \in [0,T]\) can now easily be calculated solving the respective differential equations.

Scenario 4

Here we set \(t_\text {in} = t_b\) and \(t_\text {out}=t_c\). Once more, we have a boundary interval. In contrast with Scenario 3, here the optimal trajectory only takes the value \(x_{FC}\) at a single point \(t_a<t_\text {in}\).

The analysis for \(t\ge t_\text {in}\) follows exactly as in Scenario 1. Recall that we assume that the boundary control on \(]t_\text {in},t_\text {out}[\) is a singular control which takes values on ]0, M[. So, for \(t \in \ ]t_\text {in},t_\text {out}[\) we have that \(\mu \), \(q^x\) and \(p^x\) behave as in Scenario 1. Thus, we can write that \(q^x(t)=1\) for \(t \in \ ]t_\text {in},t_\text {out}[\) and (15)–(18) hold. Moreover, we have \(q^x(t)=0\) for \(t \in \ ]t_\text {out},T]\). So, we also deduce that \(\eta _\text {in}=0\) and \(\eta _\text {out}=1\). Note that we also have that \(p^x(t_\text {in})=1\), as in Scenario 1, since \(q^x(t)=p^x(t)\) for \(t<t_\text {in}\). Thus, the behaviour of \(q^x\) and \(p^x\) for \(t>t_\text {in}\) is exactly as in Scenario 1.

Let us now see what happens for \(t \le t_\text {in}\). It is a simple matter to see that the adjoint inclusion (ii) leads to

$$\begin{aligned} {\dot{p}}^x(t)=\left\{ \begin{array}{lll} (\beta +1) p^x(t), &{} \text { if } &{} t \in [0,t_a] \text { a.e.,}\\ \beta p^x(t), &{} \text { if } &{} t \in [t_a,t_\text {in}[ \text { a.e.} \end{array}\right. \end{aligned}$$

Solving the above differential equations in terms of \(p^x(t_\text {in})\) and recalling that \(p^x(t_\text {in}) = 1\), we have

$$\begin{aligned} p^x(t)=\left\{ \begin{array}{lll} e^{\beta (t-t_\text {in})}e^{t-t_a}, &{} \text { if } &{} t \in [0, t_a],\\ e^{\beta (t-t_\text {in})}, &{} \text { if } &{} t \in [t_a, t_\text {in}]. \end{array}\right. \end{aligned}$$

Since \(p^x(t) < 1\) for \(t \in [0, t_\text {in}[\), we have that \(\phi (t) < 0\) and \({\bar{u}}(t) = 0\) for \(t \in [0, t_\text {in}[\). Also \(q^x(t) = p^x(t)\) on \([0,t_\text {in}[\). We thus conclude that the optimal control and the optimal cost are given by (24) and (25) for \(t_\text {in} = t_b\) and \(t_\text {out}=t_c\). The optimal trajectory \({\bar{x}}(t)\) for all \(t \in [0,T]\) can now easily be calculated solving the respective differential equations. Summarizing, for \(t_\text {in} = t_b\) and \(t_\text {out}=t_c\), we have that

$$\begin{aligned} p^x(t)=\left\{ \begin{array}{lll} e^{\beta (t-t_\text {in})}e^{t-t_a}, &{} \text { if } &{} t \in [0,t_a[, \\ e^{\beta (t-t_\text {in})}, &{} \text { if } &{} t \in [t_a,t_\text {in}[, \\ 1+\beta (t-t_\text {in}), &{} \text { if } &{} t \in [t_\text {in}, t_\text {out}], \\ 1 + \beta (t_\text {out}-t_\text {in}), &{} \text { if } &{} t \in \ ]t_\text {out}, T], \end{array}\right. \end{aligned}$$

and

$$\begin{aligned} q^x(t)=\left\{ \begin{array}{lll} e^{\beta (t-t_\text {in})}e^{t-t_a}, &{} \text { if } &{} t \in [0,t_a[, \\ e^{\beta (t-t_\text {in})}, &{} \text { if } &{} t \in [t_a,t_\text {in}[, \\ 1, &{} \text { if } &{} t \in \ ]t_\text {in}, t_\text {out}[, \\ 0, &{} \text { if } &{} t \in \ ]t_\text {out}, T]. \end{array}\right. \end{aligned}$$
(26)

Scenario 5

In Scenario 5 we have that \({\bar{x}}(t)>x_{\min }\) for all \(t\in [0,T]\). Then, \(\mu =0\) and \(q^x(t)=p^x(t)\) for all \(t\in [0,T]\). It is a simple matter to see from (ii) and \(q^x(T)=p^x(T)=0\) that \(q^x(t) = p^x(t) = 0\) for all \(t\in [0,T]\). Thus, \(\phi (t)=-1<0\) and \({\bar{u}}(t)=0\) for all \(t\in [0,T]\). We thus conclude that the cost, \({\bar{z}}(T)\), is 0. The optimal trajectory \({\bar{x}}(t)\) for all \(t \in [0,T]\) can now easily be calculated solving the respective differential equations.

Scenario 6

In this scenario the optimal trajectory exhibits again a boundary arc. Once more, we set \(t_\text {in} = t_a\) and \(t_\text {out}=t_b\). On \([0, t_\text {out}[\), the analysis is exactly as in Scenario 1. Moreover, we have

$$\begin{aligned} {\dot{q}}^x(t)=\zeta (t) q^x(t)\ \text { for a.e. }\ t \in \ ]t_\text {out},T], \end{aligned}$$

where

$$\begin{aligned} \zeta (t)=\beta \ \text { for a.e. }\ t \in \ ]t_\text {out},t_c[~\cup ~]t_d, T] \ \text { and }\ \zeta (t)=\beta +1\ \text { for a.e. }\ t \in \ ]t_c,t_d[. \end{aligned}$$

Solving the differential equation and recalling that \(q^x(T)=0\), we have

$$\begin{aligned} q^x(t)=0 \text { for } t \in \ ]t_\text {out}, T]. \end{aligned}$$

From item 2. of the current section, we know that \(q^x\left( t_\text {out}^-\right) = 1\). Thus, it follows from \(q^x\left( t_\text {out}^+\right) =1-\eta _\text {out}\) that \(\eta _\text {out}=1\). Then, we conclude that \(p^x(t)\), \(q^x(t)\), \({\bar{u}}(t)\) and \({\bar{z}}(T)\) are given by (22), (23), (24) and (25), respectively, for all \(t \in [0,T]\) with \(t_\text {in} = t_a\) and \(t_\text {out}=t_b\). The optimal trajectory \({\bar{x}}(t)\) for all \(t \in [0,T]\) can now easily be calculated solving the respective differential equations.

Scenario 7

Observe that here \(t_\text {in}=0\) and \(t_\text {out}=t_a\). The analysis of \(q^x\) on \([0,t_{a}]\) follows as the analysis on \([t_\text {in},t_\text {out}]\) for Scenarios 1, 3, 4 and 6, with the exception that we cannot deduce that \(\eta _\text {in}\) is zero. Moreover, we have

$$\begin{aligned} p^x(t) = 1 + \eta _\text {in} + \beta t \text { for } t\in [0,t_{a}], \quad p^x(0) = 1 + \eta _\text {in}, \end{aligned}$$

and

$$\begin{aligned} p^x(t) = q^x(t) + \eta _\text {in} + \eta _\text {out} + \beta t_a \text { for } t \in \ ]t_{a},T]. \end{aligned}$$

Recall that \(q^x(t) = 1\) for \(t\in \ ]0,t_\text {out}[\) (see item 2. of the current section). Furthermore, from conditions (7) and (19), we know that

$$\begin{aligned} {\dot{q}}^x(t) = \left\{ \begin{array}{lll} \beta q^x(t) &{} \text { for a.e. } &{} t \in \ ]t_{a},t_b[,\\ (\beta +1) q^x(t) &{} \text { for a.e. } &{} t \in \ ]t_{b},T[. \end{array} \right. \end{aligned}$$

We also know that \(q^x(T) = 0\). Thus, we get

$$\begin{aligned} q^x(t)=\left\{ \begin{array}{lll} 1, &{} \text { if } &{} t \in \ ]0,t_\text {out}[\\ 0, &{} \text { if } &{} t \in \ ]t_\text {out}, T]. \end{array} \right. \end{aligned}$$

Concluding, we obtain that \(\eta _\text {out}=1\),

$$\begin{aligned} {\bar{u}}(t) = \left\{ \begin{array}{lll} \beta x_{\min } - g(t), &{} \text { if } &{} t \in \ ]0,t_\text {out}[ \text { a.e.},\\ 0, &{} \text { if } &{} t \in \ ]t_\text {out},T[ \text { a.e.}, \end{array} \right. \end{aligned}$$

and \({\bar{z}}(T)\) is given by (25) for \(t_\text {in}=0\) and \(t_\text {out}=t_a\). The optimal trajectory \({\bar{x}}(t)\) for all \(t \in [0,T]\) can now easily be calculated solving the respective differential equations.

Scenario 8

Finally, for this scenario we have \(t_\text {in} = t_b\) and \(t_\text {out} = T\). The behaviour of \(q^x\) and \(\mu \) on \(]t_b,T[\) is as in the previous scenarios, with the exception that here, due to \(q^x(T)=0\) and \(q^x(t)=1\) for \(t \in \ ]t_b,T[\), we know that \(\eta _\text {out}=1\). As in Scenario 1, we have \(\eta _\text {in}=0\). Clearly, we have

$$\begin{aligned} q^x(t)=p^x(t) \text { for all } t\in [0,t_b[. \end{aligned}$$

So, we have that

$$\begin{aligned} \begin{array}{l} q^x(t) = \left\{ \begin{array}{lll} e^{\beta (t-t_\text {in})}e^{t-t_a}, &{} \text { if } &{} t \in [0,t_a],\\ e^{\beta (t-t_\text {in})}, &{} \text { if } &{} t \in [t_a,t_\text {in}[,\\ 1, &{} \text { if } &{} t \in \ ]t_\text {in}, T[,\\ 0, &{} \text { if } &{} t = T, \end{array}\right. \end{array}, \ \begin{array}{l} p^x(t)=\left\{ \begin{array}{lll} e^{\beta (t-t_\text {in})}e^{t-t_a}, &{} \text { if } &{} t \in [0,t_a[, \\ e^{\beta (t-t_\text {in})}, &{} \text { if } &{} t \in [t_a,t_\text {in}[, \\ 1+\beta (t-t_\text {in}), &{} \text { if } &{} t \in [t_\text {in}, T]. \end{array}\right. \end{array} \end{aligned}$$

It is however a simple matter to see that

$$\begin{aligned} {\bar{u}}(t) = \left\{ \begin{array}{lll} 0, &{} \text { if } &{} t \in [0,t_b[ \text { a.e.},\\ \beta x_{\min } - g(t), &{} \text { if } &{} t \in \ ]t_b,T[ \text { a.e.}, \end{array} \right. \end{aligned}$$
(27)

and that \({\bar{z}}(T)\) is given by (25) for \(t_\text {in} = t_b\) and \(t_\text {out} = T\). The optimal trajectory \({\bar{x}}(t)\) for all \(t \in [0,T]\) can now easily be calculated solving the respective differential equations.

Concatenation of Scenarios - Example

The composition of two scenarios, from among those proposed at the beginning of the “Eight Scenarios” Section, can give rise to differences in the analysis when compared with the individual study of each scenario, whenever the latter has a boundary interval \([t_\text {in}, t_\text {out}]\). In order to illustrate the previous comment, we analyse a composition of Scenarios 1 and 2, where Scenario 2 occurs first. Thus, let us study the scenario whose features are the following:

$$\begin{aligned} {\left\{ \begin{array}{ll} \forall t \in [0,t_a[~\cup ~]t_b,t_d[~\cup ~]t_e,T],\ x_{\min }< {\bar{x}}(t) < x_{FC},\\ \forall t \in [t_a,t_b],\ {\bar{x}}(t) = x_{FC},\\ \forall t \in [t_d,t_e],\ {\bar{x}}(t) = x_{\min }. \end{array}\right. } \end{aligned}$$

Firstly, note that \(t_\text {in} = t_d\) and \(t_\text {out} = t_e\). For all \(t \in [t_c, T]\), with \(t_c \in \ ]t_b, t_\text {in}[\), the analysis is exactly the same as in Scenario 1. Consequently, we have that

$$\begin{aligned} q^x(t)=\left\{ \begin{array}{lll} e^{\beta (t-t_\text {in})}, &{} \text { if } &{} t \in [t_c,t_\text {in}[,\\ 1, &{} \text { if } &{} t \in \ ]t_\text {in}, t_\text {out}[,\\ 0, &{} \text { if } &{} t \in \ ]t_\text {out}, T], \end{array}\right. \end{aligned}$$

which implies that

$$\begin{aligned} q^x(t_c) = e^{\beta (t_c-t_\text {in})} \ne 0. \end{aligned}$$

However, the corresponding point in Scenario 2 is such that \(q^x(T) = 0\). Here lies the main difference, from which others follow. Thus, we now have to follow the argument carried out in the study of Scenario 2, but taking into account the update of the final condition of the multiplier \(q^x\): \(q^x(t_c) = e^{\beta \left( t_c-t_\text {in}\right) }.\) Notice that \(\mu \left( [0,t_\text {in}[\right) = 0\) and this implies that \(p^x(t) = q^x(t)\) for all \(t \in [0,t_\text {in}[.\) So, for almost all \(t \in \ ]t_b, t_c]\) we get

$$\begin{aligned} \left\{ \begin{array}{l} {\dot{q}}^x(t) = \beta q^x(t), \\ q^x(t_c) = e^{\beta \left( t_c-t_\text {in}\right) }, \end{array}\right. \Rightarrow ~ q^x(t) = p^x(t) = e^{\beta (t-t_\text {in})}. \end{aligned}$$

For almost all \(t \in [t_a, t_b]\) we have

$$\begin{aligned} \left\{ \begin{array}{l} {\dot{q}}^x(t) = \zeta (t) q^x(t), \\ q^x(t_b) = e^{\beta \left( t_b-t_\text {in}\right) }, \end{array}\right. \end{aligned}$$

where \(\zeta \) is a measurable function such that \(\zeta (t) \in [\beta , \beta +1]\) for almost all \(t \in [t_a,t_b]\), which implies that \(q^x(t) = p^x(t) = e^{\beta (t_b - t_\text {in}) - \int _{t}^{t_b} \zeta (s) ds}.\) Since \(\zeta (t) \in [\beta , \beta +1]\) for almost all \(t \in [t_a,t_b]\), then we get

$$\begin{aligned}&0< e^{\beta (t - t_\text {in}) + t - t_b} \le q^x(t) = p^x(t) \le e^{\beta (t - t_\text {in})}< 1\ \text { for a.a. }\ t \in [t_a, t_b]\\ \Rightarrow&~\phi (t) < 0 \ \text { for a.a. }\ t \in [t_a, t_b]\\ \Rightarrow&~{\bar{u}}(t) = 0 \ \text { for a.a. }\ t \in [t_a, t_b]. \end{aligned}$$

Finally, for almost all \(t \in [0,t_a[\), we have that

$$\begin{aligned} \left\{ \begin{array}{l} {\dot{q}}^x(t) = \beta q^x(t), \\ q^x(t_a) = K_a \in \ ]0,1[, \end{array}\right. \Rightarrow ~ q^x(t) = p^x(t) = K_a e^{\beta (t-t_a)}. \end{aligned}$$

Again, we get \(q^x(t) = p^x(t) \in \ ]0,1[\) which similarly implies that \({\bar{u}}(t) = 0\) for almost all \(t \in [0,t_a[\). Summarizing,

$$\begin{aligned} q^x(t)=\left\{ \begin{array}{lll} K_a e^{\beta (t-t_a)}, &{} \text { if } &{} t \in [0,t_a[,\\ {\tilde{\zeta }}(t), &{} \text { if } &{} t \in [t_a,t_b],\\ e^{\beta (t-t_\text {in})}, &{} \text { if } &{} t \in \ ]t_b,t_\text {in}[,\\ 1, &{} \text { if } &{} t \in \ ]t_\text {in}, t_\text {out}[,\\ 0, &{} \text { if } &{} t \in \ ]t_\text {out}, T], \end{array}\right. ~\Rightarrow ~ (24) \end{aligned}$$

and

$$\begin{aligned} p^x(t)=\left\{ \begin{array}{lll} K_a e^{\beta (t-t_a)}, &{} \text { if } &{} t \in [0,t_a[,\\ {\tilde{\zeta }}(t), &{} \text { if } &{} t \in [t_a,t_b],\\ e^{\beta (t-t_\text {in})}, &{} \text { if } &{} t \in \ ]t_b,t_\text {in}[,\\ 1 + \beta (t - t_\text {in}), &{} \text { if } &{} t \in [t_\text {in}, t_\text {out}],\\ 1 + \beta (t_\text {out} - t_\text {in}), &{} \text { if } &{} t \in \ ]t_\text {out}, T], \end{array}\right. \end{aligned}$$

where \(K_a, {\tilde{\zeta }}(t) \in \ ]0,1[\) for almost all \(t \in [t_a, t_b]\). From the analytical expression of \(q^x\), we conclude that we only obtain a singular arc for \(t \in \ ]t_\text {in}, t_\text {out}[\).

It is an easy task to see that the conclusions for the multipliers and the control hold if \(x_0\) takes any value equal to or above \(x_{\min }\).

Before ending this example let us suppose that \(t_\text {out} = T\) (the end state is on the boundary). In this case we get \(q^x(t) = 1\) for \(t \in \ ]t_\text {in}, T[\), \(q^x(T) = 0\) and \(p^x(t) = 1+\beta (t-t_\text {in})\) for \(t \in [t_\text {in}, T]\).

Remark 1

Observe that, to concatenate scenarios, the analysis applied to each one still holds, but the transversality condition associated with the multiplier \(q^x\) for the case under study has to be taken into account. In general, concatenations cannot be analysed through a strict mathematical composition; they should be carried out as a composed study, possibly subject to updates, like the one we illustrate above.

Approximation of Function g Values

In this section, we approximate the values of the function g for three different cases. This is needed, since we only have daily information with respect to the function g. The results obtained are then used in the numerical simulations reported in Sects. 6.1, 6.2 and 6.3.

Case 1

Consider that g takes the values of the second column of Table 2. Using a TI-nspire CX II-T CAS graphic calculator, we approximate these values by two different types of continuous functions: quartic polynomial and logistic (see Fig. 2). Since the error of the logistic regression is smaller, we choose the logistic function as the approximation of g for this case.

Fig. 2 (Colour only in online version) Quartic and logistic values, as well as g values, for case 1 (see second column of Table 2)

Table 2 Function g values for cases 1, 2 and 3

Case 2

Now, we assume that g takes the values of the third column of Table 2. Using again a TI-nspire CX II-T CAS graphic calculator, we approximate these values by two different types of continuous functions: quartic polynomial and sinusoidal (see Fig. 3). Since the error of the quartic regression is smaller, we choose the quartic polynomial as the approximation of g for this case.

Case 3

Finally, suppose that g takes the values of the fourth column of Table 2. Trying to approximate these values of g, by regression, with different types of continuous functions, with the help of a TI-nspire CX II-T CAS graphic calculator, we obtain large errors (see an example in Fig. 4). Thus, we decided to interpolate the values of the fourth column of Table 2, using the MATLAB function interp1 with the spline method and the vector \(xq = \left[ \begin{matrix} 0&0.1&0.2&\cdots&9 \end{matrix}\right] _{1\times 91}\) for the coordinates of the query points (see Fig. 4).
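For concreteness, this interpolation step can be reproduced as follows; g3vals below is placeholder data standing in for the (elided) fourth column of Table 2, so this is only a sketch of the call used.

```matlab
% Sketch: spline interpolation of daily g values (case 3).
% g3vals is placeholder data, NOT the fourth column of Table 2.
t_daily = 0:9;
g3vals  = [-2 -1 0 3 1 -2 -4 -1 2 0];          % hypothetical daily values
xq = 0:0.1:9;                                  % the 91 query points
g3 = interp1(t_daily, g3vals, xq, 'spline');   % MATLAB spline interpolation
```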

Fig. 3 (Colour only in online version) Quartic and sinusoidal values, as well as g values, for case 2 (see third column of Table 2)

Fig. 4 (Colour only in online version) Quartic and spline values, as well as g values, for case 3 (see fourth column of Table 2)

Fig. 5 (Colour only in online version) Numerical and analytical results associated with case 1.

Validation

As in [9], we solve (OCP) numerically by the direct method. We first discretize the problem using the Euler method and we then solve the finite-dimensional non-linear optimization problem given by (8) in the Section “Approximation of Function g Values” of [9]. Throughout the current section, the numerical solution of the optimization problem is obtained with the help of MATLAB (version R2019a). The MATLAB function fmincon is used with the following optimization options:

$$\begin{aligned} {\left\{ \begin{array}{ll} \triangleright \text { the termination tolerance on the function value is } 10^{-9};\\ \triangleright \text { the maximum number of iterations allowed is } 10^6;\\ \triangleright \text { the maximum number of function evaluations allowed is } 3\times 10^6;\\ \triangleright \text { the termination tolerance on state and control variables is } 10^{-4};\\ \triangleright \text { the optimization algorithm is \texttt {active-set}}. \end{array}\right. } \end{aligned}$$
(28)
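To fix ideas, the direct method just described can be sketched in MATLAB as follows. The parameter values below are illustrative placeholders (the actual data are those of Table 3), g stands in for the fitted approximation of the corresponding case, and the decision vector stacks the discretized state and control; this is a minimal sketch under those assumptions, not the exact code used.

```matlab
% Sketch of the direct method: Euler discretization of (OCP) + fmincon.
% All parameter values below are illustrative placeholders, not Table 3 data.
T = 9; theta = 0.1; N = round(T/theta);     % horizon and Euler grid
xmin = 28; xFC = 40; beta = 0.25;           % hypothetical soil/crop data
M = 50; x0 = 44;                            % control bound, initial state
g = @(t) 2*sin(t) - 4;                      % stand-in for the fitted g

nx = N + 1;                                 % w = [x_1..x_{N+1}; u_1..u_N]
cost = @(w) theta*sum(w(nx+1:end));         % z(T) ~ theta * sum of u_k
loss = @(x) beta*x + max(x - xFC, 0);       % modal loss (2)
tk = theta*(0:N-1)';

% Euler dynamics x_{k+1} = x_k + theta*f(t_k,x_k,u_k) as equality constraints
nonlcon = @(w) deal([], w(2:nx) - w(1:nx-1) ...
          - theta*(w(nx+1:end) + g(tk) - loss(w(1:nx-1))));

lb = [x0; xmin*ones(N,1); zeros(N,1)];      % x(0) = x0 fixed via its bounds
ub = [x0;  inf(N,1);      M*ones(N,1)];     % state constraint (3), control (4)
w0 = [x0*ones(nx,1); zeros(N,1)];           % initial guess

opts = optimoptions('fmincon','Algorithm','active-set', ...
    'FunctionTolerance',1e-9,'StepTolerance',1e-4, ...
    'MaxIterations',1e6,'MaxFunctionEvaluations',3e6);
w = fmincon(cost, w0, [], [], [], [], lb, ub, nonlcon, opts);
x_num = w(1:nx); u_num = w(nx+1:end);       % discrete state and control
```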

In what follows, we study three different cases. We consider the values of Table 3, where \(\theta \) denotes the discretization time step for the three numerical studies.

Table 3 Common values for all studied cases

Case 1

In the numerical simulations of the current subsection, we assume all the values of Table 3. We also consider that \(x_0=44.2362\) mm, \(\beta =0.25\) and that the function g takes the values of the second column of Table 2. Recall that we use the function \(g_2\), drawn in Fig. 2, as an approximation of g, as discussed in the “Case 1” Section.

For the discretization of (OCP), we work with 91 grid points, corresponding to a step size \(\theta = 0.1\). The numerical solutions we obtain using the MATLAB function fmincon with options (28) are presented in Fig. 5a. Notice that in Fig. 5a we also present the solutions obtained when we use the values of Table 2 instead of the logistic function \(g_2\); to do so, we consider the step size \(\theta = 1\).

This case is similar to Scenario 4. Consequently, for all \(t\in [0,T]\), we know that the analytical solutions of \({\bar{u}}(t)\), \(q^x(t)\) and \({\bar{x}}(t)\) with respect to (OCP) are, respectively, given by (24), (26) and

$$\begin{aligned} {\bar{x}}(t) = \left\{ \begin{array}{lll} e^{-(\beta + 1)t} \left( x_{0} + \displaystyle \int _{0}^{t} e^{(\beta + 1)s}\big (x_{FC} + g(s)\big ) ds \right) , &{} \text { if } &{} t \in [0,t_a[,\\ e^{-\beta t}\left( x_{FC}e^{\beta t_a} + \displaystyle \int _{t_a}^{t}e^{\beta s}g(s) ds \right) , &{} \text { if } &{} t \in [t_a,t_\text {in}[,\\ x_{\min }, &{} \text { if } &{} t \in [t_\text {in},t_\text {out}],\\ e^{-\beta t} \left( x_{\min }e^{\beta t_\text {out}} + \displaystyle \int _{t_\text {out}}^{t} e^{\beta s}g(s) ds \right) , &{} \text { if } &{} t\in \ ]t_\text {out},T]. \end{array} \right. \end{aligned}$$
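These branch formulas are easy to evaluate by quadrature. For instance, on the last branch one may compute, in MATLAB (the values below are illustrative placeholders and the names are ours):

```matlab
% Sketch: evaluate the analytical trajectory on ]tout, T] by quadrature.
% The values below are illustrative placeholders.
beta = 0.25; xmin = 28; tout = 6.2;
g    = @(s) 2*sin(s) - 4;                       % stand-in for the fitted g
I    = @(t) integral(@(s) exp(beta*s).*g(s), tout, t);            % scalar t
xbar = @(t) exp(-beta*t).*(xmin*exp(beta*tout) + arrayfun(I, t)); % vector t
```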

Observe that the numerical multiplier associated with the state variable x is not normalized (see Fig. 5), when \(\theta = 0.1\). Numerically, we got \(t_\text {in} = 2.2\) and \(t_\text {out} = 6.2\), when \(\theta = 0.1\). However, the precise value of \(t_a\), the instant of time where the state trajectory equals \(x_{FC}\), is unknown. We only know that \({\underline{t}}_{a} \le t_a \le {\overline{t}}_{a} < T\), where \({\underline{t}}_{a} = 0.1\) and \({\overline{t}}_{a}=0.2\), by Fig. 5a. To get an approximate value for \(t_a\) we search computationally for the minimum of

$$\begin{aligned} \sum _{i=1}^{N} \Big ( {\bar{x}}\big (\theta (i-1)\big )-x_{\text {num}}(i) \Big )^2 \end{aligned}$$

for all \(t_a \in \left\{ {\underline{t}}_{a} + \epsilon (j-1): j=1,\ldots ,{\tilde{N}}\right\} \), where \(x_{\text {num}}\) is the numerical solution for state variable x provided by MATLAB, \(\epsilon = 0.001\) and \({\tilde{N}} = 101\). Following this line of thought, we determine that \(t_a \simeq 0.153\). Thus, using these values, we are able to compare numerical and analytical solutions, as one can see in Fig. 5b.
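A minimal MATLAB sketch of this grid search follows; xbar_of_ta (the analytical trajectory on the \(\theta \)-grid for a trial \(t_a\)) and x_num (the fmincon solution) are dummy stand-ins here, not the original code or data.

```matlab
% Sketch of the grid search for t_a in case 1 (eps = 0.001, Ntilde = 101).
% xbar_of_ta and x_num are DUMMY stand-ins for illustration only.
tgrid = 0.1*(0:90);                          % the theta-grid on [0, 9]
xbar_of_ta = @(ta, t) 30 + abs(t - ta);      % placeholder, not the real x
x_num = xbar_of_ta(0.153, tgrid);            % placeholder numerical data
ta_grid = 0.1 + 0.001*(0:100);               % candidates in [0.1, 0.2]
err = arrayfun(@(ta) sum((xbar_of_ta(ta, tgrid) - x_num).^2), ta_grid);
[~, j] = min(err);
ta = ta_grid(j)                              % recovers t_a = 0.153 here
```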

Case 2

In the numerical simulations of the current subsection, we assume all the values of Table 3. We also consider that \(x_0 = 41.3485\) mm, \(\beta = 0.0839\) and that the function g takes the values of the third column of Table 2. Recall that we use the function \(g_1\), drawn in Fig. 3, as an approximation of g, as discussed in the “Case 2” Section.

Fig. 6 (Colour only in online version) Numerical and analytical results associated with case 2.

For the discretization of (OCP), we work with 91 grid points, corresponding to a step size \(\theta = 0.1\). The numerical solutions we obtain using the MATLAB function fmincon with options (28) are presented in Fig. 6a. In this figure we also present the solutions obtained when we use the values of the third column of Table 2 (\(\theta = 1\)).

This case is similar to Scenario 8. Consequently, considering \(t_\text {out} = T\), we know that the analytical solutions of \({\bar{u}}(t)\), \(q^x(t)\) and \({\bar{x}}(t)\) with respect to (OCP) are given by (27),

$$\begin{aligned} q^x(t) = \left\{ \begin{array}{lll} e^{\beta (t-t_\text {in})}, &{} \text { if } &{} t \in [0,t_\text {in}[,\\ 1, &{} \text { if } &{} t \in \ ]t_\text {in}, T[,\\ 0, &{} \text { if } &{} t = T, \end{array} \right. \end{aligned}$$

and

$$\begin{aligned} {\bar{x}}(t) = \left\{ \begin{array}{lll} e^{-\beta t}\left( x_0 + \displaystyle \int _{0}^{t}e^{\beta s}g(s) ds \right) , &{} \text { if } &{} t \in [0,t_\text {in}[,\\ x_{\min }, &{} \text { if } &{} t \in [t_\text {in},T], \end{array} \right. \end{aligned}$$

respectively, for all \(t\in [0,T]\). Numerically, we got \(t_\text {in} = 3.0\), when \(\theta = 0.1\). Observe that the numerical multiplier associated with the state variable x is not normalized (see Fig. 6), when \(\theta = 0.1\). With all this information, we can compare numerical and analytical solutions, as one can see in Fig. 6b.

Fig. 7 (Colour only in online version) Numerical and analytical results associated with case 3.

Case 3

In the numerical simulations reported here, we assume all the values of Table 3. We also consider that \(x_0=33.045\) mm, \(\beta =0.0997\) and that the function g takes the values of the fourth column of Table 2. Recall that we use the interpolation values, drawn in Fig. 4, as an approximation of g, as discussed in the “Case 3” Section.

For the discretization of (OCP), we work with 91 grid points, corresponding to a step size \(\theta = 0.1\). The numerical solutions we obtain using the MATLAB function fmincon with options (28) are presented in Fig. 7a. In this figure we also present the solutions obtained when we use the values of the fourth column of Table 2 (\(\theta = 1\)).

This case is similar to Scenario 5. Following the same line of thought as in the “Scenario 5” Section, we can state that \({\bar{x}}(t)\) is given by

$$\begin{aligned} \left\{ \begin{array}{lll} e^{-\beta t} \left( x_0 + \displaystyle \int _{0}^{t}e^{\beta s}g(s) ds \right) ,&{}\text {if} &{} t \in [0,t_a],\\ e^{-(\beta + 1)t} \left( x_{FC}e^{(\beta +1)t_a} + \displaystyle \int _{t_a}^{t}e^{(\beta + 1)s}\big (x_{FC} + g(s)\big ) ds \right) ,&{}\text {if} &{} t \in \ ]t_a,t_b[,\\ e^{-\beta t} \left( x_{FC}e^{\beta t_b} + \displaystyle \int _{t_b}^{t}e^{\beta s}g(s) ds \right) ,&{}\text {if} &{} t \in [t_b,t_c],\\ e^{-(\beta + 1)t} \left( x_{FC}e^{(\beta +1)t_c} + \displaystyle \int _{t_c}^{t}e^{(\beta + 1)s}\big (x_{FC} + g(s)\big ) ds \right) ,&{}\text {if} &{} t \in \ ]t_c,t_d[,\\ e^{-\beta t} \left( x_{FC}e^{\beta t_d} + \displaystyle \int _{t_d}^{t}e^{\beta s}g(s) ds \right) ,&{}\text {if} &{} t \in [t_d,T], \end{array} \right. \end{aligned}$$

and \({\bar{u}}(t)=q^x (t)=0\) for all \(t \in [0,T]\). We note that, for the computed solutions with \(\theta = 0.1\), we fail to obtain the exact instants of time where the state trajectory equals \(x_{FC}\). We only know that there are four such instants. Let us denote them by \(t_a\), \(t_b\), \(t_c\) and \(t_d\), where

$$\begin{aligned} {\left\{ \begin{array}{ll} 0< t_a< t_b< t_c< t_d < T = 9, \\ 0.8 = {\underline{t}}_a \le t_a \le {\overline{t}}_a = 0.9, \\ 2.8 = {\underline{t}}_b \le t_b \le {\overline{t}}_b = 2.9, \\ 7.0 = {\underline{t}}_c \le t_c \le {\overline{t}}_c = 7.1, \\ 8.5 = {\underline{t}}_d \le t_d \le {\overline{t}}_d = 8.6. \end{array}\right. } \end{aligned}$$

To get an approximate value for \(t_a\), \(t_b\), \(t_c\) and \(t_d\), we search computationally for the minimum of

$$\begin{aligned} \sum _{i=L_k}^{U_k} \Big ( {\bar{x}}\big (\theta (i-1)\big )-x_{\text {num}}(i) \Big )^2 \end{aligned}$$

for all \(t_k \in \left\{ {\underline{t}}_{k} + \epsilon (j-1): j=1,\ldots ,{\tilde{N}}\right\} \), where \(x_{\text {num}}\) is the numerical solution for state variable x provided by MATLAB, \(\epsilon = 0.001\), \({\tilde{N}} = 101\), \(k\in \{a,b,c,d\}\), \(L_k = \dfrac{{\underline{t}}_k}{\theta }+1\) and \(U_k = \dfrac{{\overline{t}}_k}{\theta }+1\). Following this line of thought, we determine that \(t_a \simeq 0.800,\ t_b \simeq 2.877,\ t_c \simeq 7.058\ \text { and }\ t_d \simeq 8.577.\) Also for this case, we can observe that the numerical multiplier associated with state variable x is not normalized when \(\theta = 0.1\) (see Fig. 7). Using all this information, we are now able to compare numerical and analytical solutions, as one can see in Fig. 7b.

Final Notes

Notice that the analytical expressions of Sects. 6.1, 6.2 and 6.3 depend on the weather conditions, the switching times and the times where the trajectory changes modes. It is important to highlight that we cannot obtain any analytical information about these time values. Thus, to proceed with our study, we consider that these numerical times are correct.

Also observe that the analytical and the numerical solutions are very close in Figs. 5b, 6b and 7b. Such closeness provides a partial validation of the numerical solutions through the analytical ones.

Conclusions and Future Work

The need to study irrigation strategies is of foremost importance nowadays, since 80% of the fresh water used on our planet goes to agriculture. So, in this paper we studied the daily irrigation problem of an agricultural crop, using optimal control.

We considered an optimal control problem whose dynamics is based on field capacity modes and which includes a state constraint. When studying non-smooth state constrained optimal control problems, it is very hard to get the analytical solution, and this was the focus of the current paper. To overcome this difficulty, we considered different basic profiles for the optimal trajectories. Under some mild assumptions, we applied the Maximum Principle with the purpose of obtaining the analytical solution for each one of the profiles. We emphasize that we could not obtain any analytical information on the switching times and the times where the trajectory changes modes. Since we intended to partially validate the numerical solutions through the analytical ones, we assumed that these numerical times are correct.

With this paper, we were able to better understand the behaviour of trajectories and controls. This work is a first step towards the definition of an automatic irrigation system guaranteeing the minimization of water consumption. Our results are important to validate our very simple model. To do that, we need to confront our results with experimental data, which we hope to do in the very near future in collaboration with the Centre for the Research and Technology of Agro-Environmental and Biological Sciences of Universidade de Trás-os-Montes e Alto Douro (UTAD) – Portugal.

Our future work will also focus on the determination of sub-optimal control strategies which can be experimentally implemented. We hope to test strategies based on model predictive control and robust control policies to enable us to deal with uncertainties due to unpredictable weather conditions. One aspect that will also be the focus of our attention will be the determination of the value function for our problem, using dynamic programming techniques.