Analytical Study for Different Extremal State Solutions of an Irrigation Optimal Control Problem with Field Capacity Modes

In this paper we study the problem of daily irrigation of an agricultural crop using optimal control. The dynamics is a model based on field capacity modes, where the state, x, represents the water in the soil and the control variable, u, is the flow rate of water from irrigation. The variation of water in the soil depends on the field capacity of the soil, x_FC, weather conditions, losses due to deep percolation and irrigation. Our goal is to minimize the amount of water used for irrigation while keeping the crop in a good state of preservation. To enforce this requirement, the state constraint x(t) ≥ x_min is coupled with the dynamics, where x_min is the hydrological need of the crop. Consequently, the problem under study is a state constrained optimal control problem. Under some mild assumptions, we consider several basic profiles for the optimal trajectories. Appealing to the Maximum Principle (MP), we characterize analytically the solution and its multipliers for each case. We then illustrate the analytical results by running some computational simulations, where the analytical information is used to partially validate the computed solutions.
The need to study irrigation strategies is of foremost importance nowadays since 80% of the fresh water used on our planet is used in agriculture.


Introduction
As is well known, water is an essential asset for human life, but a scarce one. Climate change, pollution and the inefficient use of water contribute to this scarcity. Bearing in mind that 80% of the water use is for agriculture and that this implies a considerable waste, it is of foremost importance to find strategies for a sustainable use of water. Not surprisingly, we have recently witnessed an increasing interest in the study of water applications within the field of mathematics; see, for example, [1-3, 6, 8-11, 14, 16] and the references cited therein. Surprisingly, the literature relating water scheduling problems and optimal control is scarce.
Stochastic dynamic programming has been applied to optimal scheduling problems in some works. For example, [6] proposes to identify a lumped-parameter model based on data produced via simulation with a distributed-parameter model. Brown et al. in [3] present an irrigation scheduling decision support method. In [1] the authors focus on the study of optimal water reservoir management using non-linear one-hidden-layer networks.
An optimal control problem is considered in [14] to study irrigation plans maximizing the farmers' profit, taking into account the cost of the water and the sale price of the crop. In [8] the aim is to maximize the biomass production at harvesting time, considering a constraint on the total amount of water used. The problem studied is a singular optimal control problem based on a simple crop irrigation model. In a situation of water scarcity, the authors also show that a strategy with a singular arc can be better than a simple bang-bang control. They illustrate their findings with numerical simulations. A different irrigation model is proposed in [2] to study, via optimal control, the maximization of the biomass production at harvesting time when a quota on the water used for irrigation is imposed. An interesting feature of this work is the introduction of a non-autonomous dynamical system with a state constraint and a non-smooth right-hand side, given by threshold-based soil and crop water stress functions. In [16], an optimal control problem for cascaded irrigation canals is considered. The authors intend to ensure both minimum water levels for irrigation demands and the avoidance of water overflows and, even, dam collapse. In the setting of [16], the optimal control development is not easy due to the structural complexities involving control gates and interconnected long-distance water delivery reaches, which are modeled by the Saint-Venant partial differential equations with conservation laws, wave superposition effects, coupling effects and strong non-linearities. In [11] a daily plan model for the irrigation of a crop field is developed with the help of optimal control theory. Such a model requires knowledge of weather data (temperature, rainfall and wind speed), the type of crop, the type of irrigation, the location, the humidity in the soil at the initial time and the type of soil.
The main goal consists in minimizing the irrigation water, guaranteeing that the field crop is kept in a good state of preservation.
In this paper our aim is to study irrigation policies minimizing the use of water while making sure that the crop is kept in a healthy condition at all times. We propose an optimal control problem based on that developed in [9]. The aim is to minimize the amount of water used for irrigation during a fixed time interval [0, T]. The dynamics translates the variation of the water in the soil, x, which equals the difference between water gains, due to weather conditions and the flow rate of irrigated water, u, and water losses. The differential equation is written in modes, because the losses depend on whether or not the soil is at field capacity. This equation is coupled with a state constraint x(t) ≥ x_min to ensure the healthy growth of the crop; here x_min is the hydrological need of the crop. The optimal control problem addressed is then a non-smooth optimal control problem with a state constraint. A remarkable feature of this problem is that different weather conditions produce different solutions. Thus, and differing from [9], our aim is to study different profiles for the optimal trajectories, under some mild assumptions ((A) and (B) below). Our starting point is eight different profiles for optimal trajectories, here called scenarios, capturing basic features of possible optimal trajectories in a time interval. Since our problem is a singular optimal control problem with a state constraint, the study of optimal trajectories with and without boundary intervals is a main concern. Observe that, in contrast with [10], here we deal with a non-smooth formulation of the optimal control problem. Appealing to the Maximum Principle (e.g., [4, 15]), we study analytically the optimal solutions and their multipliers.
A remarkable feature of our study is the assertion that the Maximum Principle for our problem holds in the normal form, i.e., the multiplier λ corresponding to the cost is not zero; in this respect we refer the reader to, e.g., [5, 13] and the references therein.
To avoid leaving out optimal trajectories with profiles that can be seen as concatenations of those eight chosen ones, we also illustrate how such theoretical findings can be of help to determine analytical solutions and multipliers for an extra scenario which is a composition of two of the eight scenarios.
Finally, we solve the problem numerically. We do so via the direct method: we first discretize the problem, using Euler's method for the differential equation, and then solve the resulting large-scale optimization problem. Using three different sets of data for the predicted weather conditions, we present three cases with solutions exhibiting different profiles. Moreover, we (partially) validate such numerical findings using the theoretical characterization extracted from the Maximum Principle, illustrating the value of the analytical study previously done.
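The direct method just described can be sketched in a few lines. Everything numeric below is a hypothetical stand-in (horizon, soil parameters, the weather term g, the loss model and the SLSQP solver), not the data or solver actually used in the paper:

```python
import numpy as np
from scipy.optimize import minimize

T, N = 9.0, 45                      # horizon and number of Euler steps (assumed)
DT = T / N
BETA, X_FC = 0.2, 40.0              # loss rate and field capacity (assumed)
X0, X_MIN, M = 30.0, 28.0, 50.0     # initial state, crop need, max flow rate

def g(t):
    """Stand-in for precipitation minus evapotranspiration."""
    return 2.0 * np.sin(2.0 * np.pi * t / T) - 1.0

def rollout(u):
    """Euler discretization of x' = g(t) + u(t) - loss(x(t))."""
    x = np.empty(N + 1)
    x[0] = X0
    for k in range(N):
        loss_k = BETA * x[k] + max(x[k] - X_FC, 0.0)   # assumed loss model
        x[k + 1] = x[k] + DT * (g(k * DT) + u[k] - loss_k)
    return x

def cost(u):
    """Total irrigation water, the quantity to be minimized."""
    return DT * np.sum(u)

# Large-scale NLP: one decision variable per Euler step, bounds u in [0, M]
# and the discretized state constraint x_k >= x_min at every grid point.
res = minimize(cost, np.full(N, 1.0), method='SLSQP',
               bounds=[(0.0, M)] * N,
               constraints={'type': 'ineq',
                            'fun': lambda u: rollout(u) - X_MIN})
```

Full irrigation u ≡ M is feasible for this data, so the discretized problem is well posed; the computed `res.x` can then be compared with the analytical profiles derived in the following sections.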
Our study here is a first step towards the determination of implementable solutions for the irrigation of healthy crops in agriculture that minimize the use of this precious but scarce resource: water. This paper is organized as follows. In the "Irrigation Optimal Control Problem" Section, we revisit the model proposed in [9], where the dynamics is written with field capacity modes. We begin the "Necessary Conditions for (OCP)" Section by showing some details associated with the state inequality constraint, recalling some concepts and introducing some mild assumptions on our problem. Then, we analyse the conclusions of Theorem 9.3.1 in [15] when applied to our optimal control problem. In the "Eight Scenarios" Section, we present eight scenarios based on the type of optimal trajectories. For each scenario we apply the necessary conditions of the "Necessary Conditions for (OCP)" Section, characterizing analytically the solution and its multipliers. In the "Approximation of Function g Values" Section, our starting point is a table with three different sets of function values, each set representing the difference between the daily precipitation and the daily estimated evapotranspiration for the type of crop. Each set of values corresponds to different weather conditions. Since each set only has daily information, and we need estimates of these values at intermediate mesh points in order to solve our optimal control problem computationally, we approximate those sets of values using computational tools. Such approximations are used in the "Validation" Section, where we solve the optimal control problem numerically for each of the three cases, via the direct method, and present the computed solutions. We (partially) validate the numerical solutions for the three cases by comparing them with the analytical ones computed in the "Eight Scenarios" Section. This paper finishes with a section devoted to conclusions and future work.

Irrigation Optimal Control Problem
Various approaches to improve the efficiency of irrigation in agriculture have been proposed in the literature. Here we focus on the optimal control approach proposed in [9]. The main idea of [9] is to determine the irrigation periods and the amount of water used so as to minimize the total amount of water used for irrigation over a certain period of time T, taking into account the water in the soil. The authors consider that the variation of water in the soil depends not only on the irrigation, but also on the evapotranspiration, water losses and precipitation. Mathematically, this perspective leads to a dynamical system (S) governing the variation of water in the soil, where x(t) stands for the water in the soil at the instant t; u(t) is the flow rate of water at the instant t; g(t) = rfall(t) − evtp(t) (see (1)), where rfall(t) is the daily precipitation and evtp(t) is the estimated evapotranspiration for the type of crop, at the instant t; and loss(x) represents the losses due to deep percolation. In [9], the function loss, which appears in the dynamics of system (S), equals βx below field capacity (see (2)), where x_FC represents the amount of water retained in the soil after the soil has drained, and β ∈ [0, β_max] is a constant representing the percentage of water losses due to run-off and deep infiltration. Note that x_FC depends on the type of soil.
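In summary, the model can be written as follows. The branch of loss above field capacity is our reading of [9] (the exact expression is not reproduced here), chosen to be consistent with the Lipschitz constant β + 1 invoked in the analysis below:

```latex
\begin{aligned}
  &\dot{x}(t) = g(t) + u(t) - \operatorname{loss}(x(t)), \qquad x(0) = x_0,
    && \text{(S)}\\
  &g(t) = \mathit{rfall}(t) - \mathit{evtp}(t),
    && \text{(1)}\\
  &\operatorname{loss}(x) =
    \begin{cases}
      \beta x, & x < x_{FC},\\
      \beta x + (x - x_{FC}), & x \ge x_{FC},
    \end{cases}
    && \text{(2)}
\end{aligned}
```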
We remark that the analytical expression of g is unknown. The precipitation data are collected from a weather station, and the evapotranspiration is obtained as the product of the crop's coefficient and the reference evapotranspiration. The latter is calculated according to [17], using data from the weather station. We emphasize that, in the future, we will use weather predictions.
In [9], the authors also argue that the amount of water in the soil, x(t), needs to be kept at, or above, a certain level x_min at all instants of time in order to guarantee the crop's growth. Such a requirement is translated by the state constraint x(t) ≥ x_min for all t ∈ [0, T]. It is no surprise that the control constraint u(t) ∈ [0, M], for some M > 0, is also imposed. The main focus of the current research is to determine the irrigation policy, i.e., the function u, so as to minimize the amount of water spent. In this paper, we also intend to characterize analytically the solutions under different specific scenarios. We get such characterizations by applying the well-known Maximum Principle. To simplify the analysis, we first reformulate the problem in the Mayer form, appealing to the usual state augmentation technique: we introduce a new variable z, with ż(t) = u(t) and z(0) = 0, so that the cost function is now z(T), leading directly to the optimal control problem (OCP). Observe that z(T) = ∫₀ᵀ u(t) dt. For convenience, we summarize the data of (OCP) in Table 1.
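The state augmentation can be sketched as follows; the parameter values are hypothetical, and the drainage branch of `loss` above field capacity is our reading of [9] rather than a formula reproduced from it:

```python
# Minimal sketch of the Mayer reformulation under assumed parameters.
BETA, X_FC = 0.2, 40.0

def loss(x):
    """Percolation losses: beta*x below field capacity, plus the
    surplus drainage once the soil is at (or above) capacity."""
    return BETA * x + max(x - X_FC, 0.0)

def F(t, X, u, g):
    """Augmented dynamics for X = (x, z): the running cost of u becomes
    the terminal cost z(T) through the extra equation z' = u."""
    x, z = X
    return (g(t) + u - loss(x), u)
```

With this augmentation, minimizing the integral of u is equivalent to minimizing the terminal value z(T) of the second component.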

Necessary Conditions for (OCP)
Before going any further, some words about the state constraint are called for. Observe that the state constraint appearing in the definition of (OCP) can be written as h(x(t)) ≤ 0, with h(x) = x_min − x. It is a simple matter to see that the state constraint is of first order (see [12]): indeed, the control u appears explicitly in the first time derivative of t ↦ h(x(t)). Setting X = (x, z) ∈ R², we say that (X̄, ū) is a strong local solution for (OCP) if (X̄, ū) satisfies all the conditions of (OCP) and it minimizes the cost over all the admissible processes (X, u) for (OCP) such that |X(t) − X̄(t)| ≤ ε for all t ∈ [0, T], for some ε > 0. Consider now t ∈ [0, T] when x̄ touches the boundary of the state constraint, i.e., the instants t when x̄(t) = x_min. If x̄(t) ≡ x_min for all t ∈ [t_in, t_out], where t_out − t_in ≥ a for some a > 0, then [t_in, t_out] is a boundary interval and the corresponding sub-arc of x̄ is called a boundary arc. The points t_in and t_out are called the entry time and the exit time of such a boundary arc. Furthermore, if there exists t̄ such that x̄(t̄) = x_min and x̄(t) > x_min for t ∈ [t̄ − ε, t̄[ ∪ ]t̄, t̄ + ε], for some ε > 0, then t̄ is called a contact time. Note that entry, exit and contact times are also called junction times. Here and throughout, we assume the following Basic Hypotheses:

(A) There exist C_g ∈ ]0, M[ and L_g > 0 such that |g(t)| ≤ C_g and |g(t) − g(s)| ≤ L_g |t − s| for all t, s ∈ [0, T].
(B) The state trajectory x̄ does not have any contact time in [0, T].

Table 1 summarizes the data of (OCP): x_0, the fixed initial state; x_min, the hydrological need of the specific crop; M, the maximum flow rate of water entering the system; and the functions g and loss, defined in (1) and (2), respectively.

In (A) we assume that g is bounded and Lipschitz continuous with respect to t. In (B) we assume that if the quantity of water in the soil reaches its minimum value, it does not immediately increase to a greater value. Both hypotheses are quite reasonable considering the physical meaning of the data of (OCP).
Observe that u ↦ f(t, x, u) is a smooth function. On the other hand, from (2) it is an easy task to see that x ↦ f(t, x, u) is Lipschitz continuous with constant β + 1 (as shown in [9]). If x ↦ f(t, x, u) were continuously differentiable, (OCP) would fall into the category of singular optimal control problems with a single first order state constraint. However, it is a simple task to see from the Basic Hypotheses and the definition of f that x ↦ f(t, x, u) fails to be differentiable at x = x_FC (see [9]). It follows that (OCP) is indeed a non-smooth optimal control problem with a state constraint, to which Theorem 9.3.1 in [15] applies.
Next, we analyse the conclusions of Theorem 9.3.1 in [15] when applied to (OCP). Let us also define the set of active constraints as I(x̄) = {t ∈ [0, T] : x̄(t) = x_min}.

Consider the unmaximized Hamiltonian H(t, x, p_x, p_z, u) = p_x (g(t) + u − loss(x)) + p_z u.
If (X̄, ū) is a strong local minimizer for (OCP), then Theorem 9.3.1 in [15] asserts the existence of an absolutely continuous function p = (p_x, p_z) : [0, T] → R², a non-negative Radon measure μ and a scalar λ ≥ 0, not all zero, satisfying (i) the nontriviality condition, (ii) the adjoint inclusion, (iii) the transversality condition, (iv) the Weierstrass (maximization) condition and (v) the support condition supp{μ} ⊂ I(x̄), where supp{μ} is the support of the measure μ. In what follows we write q_x(t) = p_x(t) − μ([0, t]) for the measure-adjusted adjoint variable. A remarkable feature of (OCP) is that (i)-(v) hold with λ > 0, as shown in [9]. As is well known, once we have the guarantee that λ ≠ 0, we can normalize all the multipliers and proceed to work with λ = 1. For simplicity, this is the approach that we take next.
Starting with (ii), it is straightforward to see from the above that p_z(t) ≡ −1. Recalling now the definition of the Clarke subdifferential ∂_x H (see, e.g., [15]) and that of f, we get, from (ii), ṗ_x(t) ∈ q_x(t) ∂ loss(x̄(t)) for almost every t ∈ [0, T]. The function q_x so defined is a function of bounded variation. It follows from (v) that μ is zero when the state constraint is not active, i.e., when h(x̄(t)) < 0. It is known that the measure μ is defined by a monotone non-negative function of bounded variation with at most a countable number of jumps. The Lebesgue Decomposition Theorem asserts that μ can be written as μ = μ_a + μ_sing + μ_d, where μ_a is an absolutely continuous measure with respect to the Lebesgue measure, μ_sing is a singular measure with respect to the Lebesgue measure and μ_d is a discrete measure.
Although the presence of μ_sing cannot be discarded in general, most practical problems exhibit "well behaved" measures μ with μ_sing = 0 (see, e.g., [7, 12]). On the other hand, the Radon-Nikodym Theorem asserts the existence of an integrable function ν_a such that μ_a(A) = ∫_A ν_a(t) dt for every measurable set A ⊂ [0, T]. By assumption (B), the set I(x̄) does not include contact points. However, I(x̄) may contain boundary intervals. If t ∈ [t_in, t_out] ⊂ I(x̄), then the derivative of x̄ vanishes on ]t_in, t_out[ and the corresponding optimal control satisfies (10): ū(t) = βx_min − g(t) for all t ∈ ]t_in, t_out[. Controls satisfying (10) may only appear when βx_min − g(t) ∈ [0, M].
Regarding the control constraint ū(t) ∈ [0, M], some comments are called for. In most scenarios, M may be considered to be very large, so that ū is always less than M. This is the case in situations of "practical interest". Even in cases of extreme drought, our choice of the value M is such that βx_min − g(t) < M for all t ∈ [0, T]. On the other hand, although the case of boundary controls being 0, i.e., βx_min − g(t) = 0 for t ∈ ]t_in, t_out[, is not to be overlooked, the case of interest is when the boundary control is itself a singular control with βx_min − g(t) > 0. So, here we refrain from considering boundary controls equal to 0, since this appears to be quite unreasonable taking into account the physical meaning of our problem, and this case has not come up in our simulations, which cover different sets of values for g. So, we assume (11): βx_min − g(t) ∈ ]0, M[ on boundary intervals. Let us now consider conclusion (iv) of the above necessary conditions. Recalling that p_z(t) ≡ −1 and rewriting (iv), we deduce that ū(t) maximizes u ↦ (q_x(t) − 1) u over [0, M] for almost every t; accordingly, we call φ(t) = q_x(t) − 1 the switching function (12). Then, from (12) we get the following characterization (13) of the optimal control: ū(t) = 0 if φ(t) < 0, ū(t) = u_sing(t) if φ(t) = 0 and ū(t) = M if φ(t) > 0, where u_sing(t) ∈ ]0, M[. If the switching function φ is zero only at isolated instants of time, then ū is a bang-bang control switching between the values 0 and M when φ goes from negative to positive values. Here, and since we choose a large M, optimal controls taking the value M on some time interval are not to be expected. If the switching function is 0 on a time interval, then ū is a singular control on this interval. However, the Maximum Principle does not provide any further information about singular controls. In the following section, we compute the analytical solutions for p_x(t), q_x(t), ū(t) and z̄(T), for almost every t ∈ [0, T], for the eight different scenarios that we describe next.
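The control law just derived is simple enough to state in code. The parameter values are hypothetical, phi stands for the switching function q_x − 1, and the singular value used at phi = 0 is the boundary control βx_min − g(t):

```python
BETA, X_MIN, M = 0.2, 28.0, 50.0   # hypothetical problem data

def boundary_control(t, g):
    """Singular control keeping x == x_min on a boundary arc:
    x' = 0 forces u = beta * x_min - g(t)."""
    return BETA * X_MIN - g(t)

def mp_control(phi, t, g):
    """u = 0 where phi < 0, u = M where phi > 0 and the singular
    (boundary) value where phi == 0, clipped to the admissible [0, M]."""
    if phi < 0.0:
        return 0.0
    if phi > 0.0:
        return M
    return min(max(boundary_control(t, g), 0.0), M)
```

Under assumption (11), the clipping at 0 and M is never active on a boundary arc, so the singular branch returns exactly βx_min − g(t).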
Before engaging in that discussion, it is worth mentioning that if the trajectory is above the boundary of the state constraint for all t, then μ = 0, q_x = p_x and p_x(T) = 0. Thus, necessarily, q_x(t) ≡ 0 and the switching function is always −1 (so the trajectory does not have singular arcs). This is in agreement with the physical interpretation of our problem: irrigation should only be active (i.e., ū(t) > 0) when x̄(t) = x_min and the weather conditions are not enough to keep the state away from the boundary of the state constraint.

Eight Scenarios
In this section we propose eight profiles for the optimal trajectories, under the basic assumptions (A) and (B). To each of them we apply the necessary conditions above in order to extract information on the solution and its multipliers. To avoid leaving out optimal trajectories with profiles that can be seen as concatenations of the eight chosen ones, we end this section discussing a concatenation of two of the eight proposed scenarios: the trajectory starts between x_min and x_FC, increases up to x_FC, remains there for some time interval, before dropping to x_min where, again, it remains for some time.
Let us denote by (X̄, ū) = ((x̄, z̄), ū) the optimal solution for (OCP). We assume that x̄ has at most one boundary arc in all the eight scenarios. Junction times, as well as the instants of time when the trajectory reaches, or drops from, the threshold x_FC, are of importance; junction points are easily recognised from the context. Sketches of the eight scenarios of interest appear in Fig. 1. The optimal state trajectory x̄ is not necessarily a composition of line segments: the plots are only schemes illustrating eight possible behaviours of x̄.
As we have seen, the order of the state constraint is one. Recall that we consider here cases where the optimal trajectories have at most one single boundary arc. For trajectories with more than one boundary arc, one only has to compose scenarios of the "Eight Scenarios" Section and proceed to a composed analytical study, possibly subject to updates of the transversality condition associated with the multiplier q_x; see an illustrative example of this in the "Concatenation of Scenarios - Example" Section. Furthermore, in order to avoid repetitive calculations, from now on the trajectories of Fig. 1 with similar analytical solutions, with respect to the state variable x, the respective adjoint function and the control variable, are drawn with the same colour.
Recall that we do not consider scenarios where the initial and/or final states touch the boundary at isolated points. Those cases, especially when the trajectory additionally has a boundary arc in ]0, T[, require a much more demanding and longer analysis. In spite of the interest of such trajectories from the point of view of the Maximum Principle, we opt to keep the focus on trajectories with boundary intervals satisfying assumption (B), since those are the problematic ones as far as irrigation is concerned. The analysis of trajectories with contact points at the initial and/or final times will be done elsewhere.

Analysis of the Scenarios
We now extract information from the necessary conditions (i)-(v), presented in the "Necessary Conditions for (OCP)" Section, for all the eight scenarios of interest. Recall that Scenarios 1, 3, 4, 6, 7 and 8 have a boundary arc on the interval ]t_in, t_out[. On such an interval, as seen above, the optimal control is ū(t) = βx_min − g(t) ∈ ]0, M[ (see (11)). For each scenario with a boundary interval [t_in, t_out], we extract the following information from the necessary conditions (i)-(v):

1. the boundary control is singular, so the switching function φ vanishes on ]t_in, t_out[;
2. q_x(t) = 1 for all t ∈ ]t_in, t_out[ (see (12) and (13));
3. whenever t_out < T, μ(]t_out, T]) = 0, because x̄(t) > x_min for all t ∈ ]t_out, T];
4. the adjoint inclusion (ii) of the "Necessary Conditions for (OCP)" Section reduces to ṗ_x(t) = β q_x(t) for almost every t ∈ ]t_in, t_out[.

It follows from 2. that q_x is absolutely continuous on ]t_in, t_out[ and that the discrete part of the decomposition of μ in (8) has at most two atoms (15): μ_d = η_in δ_{t_in} + η_out δ_{t_out}, where η_in, η_out ≥ 0, δ_t denotes the unit measure concentrated at t, q_x(t_in^+) = p_x(t_in) − η_in and q_x(t_out^+) = q_x(t_out^−) − η_out. We know from (9) that q_x(t) = p_x(t) − μ([0, t]) for all t. From now on, we use the notation γ(t̄^±) = lim_{t→t̄^±} γ(t), where γ is a function and t̄ is an interior point of its domain. From (6) and (14), as well as from items 1., 2. and 3. of the current section, it follows that (16): q_x(t) = p_x(t) − η_in − ∫_{t_in}^{t} ν_a(s) ds = 1 for t ∈ ]t_in, t_out[, and (17): q_x(t_in^+) = 1. Differentiating (16), and taking into account 2. and 4. of the current section, we conclude that ν_a(t) = β for all t ∈ ]t_in, t_out[.

Again from 2. and 4. of the current section, we obtain that p_x(t) = p_x(t_in) + β (t − t_in) on [t_in, t_out]. Furthermore, from (16) and (17), we also get that p_x(t_in) = 1 + η_in. The remarkable feature of this situation is that we get the following characterization (18) of μ: for any measurable set A ⊂ [0, T], μ(A) = ∫_A β 1_{[t_in, t_out]}(t) dt + η_in 1_A(t_in) + η_out 1_A(t_out), where 1_A is the indicator function of the set A, i.e., 1_A(t) = 1 if t ∈ A and 1_A(t) = 0 otherwise. It is important to emphasize that in Scenario 8 we have t_out = T; in this case, when t = T, we obtain q_x(T) = q_x(T^−) − η_out. Recalling the decomposition (8), all this means that μ_sing vanishes on [0, T] and that μ_d has possibly two atoms, one at t_in and another at t_out. Moreover, taking into account (6) and the properties of μ_a, we deduce that μ_a has density ν_a(t) = β 1_{]t_in, t_out[}(t) with respect to the Lebesgue measure.

Scenario 1
Let us assume that t_in = t_a and t_out = t_b, with 0 < t_in < t_out < T. For Scenario 1, we have loss(x̄(t)) = βx̄(t) for all t ∈ [0, T], since x̄(t) < x_FC for all t ∈ [0, T]. Thus, the adjoint inclusion (ii) of the "Necessary Conditions for (OCP)" Section reduces to (19): ṗ_x(t) = β q_x(t) for almost every t ∈ [0, T]. The optimal control is defined as in (10) for t ∈ ]t_in, t_out[. Recall that x_0 > x_min and that the optimal trajectory has one single boundary arc. We now seek information on the optimal control on the intervals [0, t_in[ and ]t_out, T]. Starting with [0, t_in[, recall that q_x(t_in^+) = 1, from item 2. of the current section. If η_in > 0, then φ would be positive just before t_in and, consequently, we would have ū(t) = M there. Such a situation is not realistic, since M is very large. Thus, we deduce that η_in = 0. It then follows, from the adjoint equation, that p_x(t) = q_x(t) = e^{β(t − t_in)} for t ∈ [0, t_in[; hence φ(t) < 0 and ū(t) = 0 for almost every t ∈ [0, t_in[. We now turn to ]t_out, T]. Recall that q_x(T) = 0 and that (19) holds on ]t_out, T]. Then we have q_x(t) = 0 for all t ∈ ]t_out, T]. Since q_x(t_out^+) = 0, we deduce from the second equation in (15) that η_out = 1. Moreover, we have φ(t) = −1 < 0, and so ū(t) = 0 for almost every t ∈ ]t_out, T]. Summarizing, we have the following:

• η_in = 0, η_out = 1;
• p_x(t) = e^{β(t − t_in)} on [0, t_in], p_x(t) = 1 + β(t − t_in) on ]t_in, t_out] and p_x(t) = 1 + β(t_out − t_in) on ]t_out, T]; (22)
• q_x(t) = e^{β(t − t_in)} on [0, t_in[, q_x(t) = 1 on ]t_in, t_out[ and q_x(t) = 0 on ]t_out, T]; (23)
• ū(t) = βx_min − g(t) on ]t_in, t_out[ and ū(t) = 0 elsewhere; (24)
• z̄(T) = ∫_{t_in}^{t_out} (βx_min − g(t)) dt. (25)

The optimal trajectory x̄(t), for all t ∈ [0, T], can now easily be calculated by solving the respective differential equations.
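As a sanity check, the Scenario 1 multipliers can be evaluated numerically. The values of β and of the junction times below are hypothetical; the piecewise formulas encode η_in = 0, η_out = 1, an exponential adjoint before the arc, an affine one on it and a constant one after it:

```python
import math

BETA, T_IN, T_OUT, T_END = 0.2, 2.0, 6.0, 9.0   # hypothetical values

def p_x(t):
    """Adjoint: exponential before the boundary arc (p' = beta*p),
    affine on it (p' = beta*q_x = beta), constant after it."""
    if t <= T_IN:
        return math.exp(BETA * (t - T_IN))
    if t <= T_OUT:
        return 1.0 + BETA * (t - T_IN)
    return 1.0 + BETA * (T_OUT - T_IN)

def q_x(t):
    """p_x minus the accumulated measure: equal to p_x before t_in,
    1 on the boundary arc and 0 after t_out."""
    if t < T_IN:
        return p_x(t)
    if t < T_OUT:
        return 1.0
    return 0.0

def phi(t):
    """Switching function phi = q_x - 1."""
    return q_x(t) - 1.0
```

As expected, phi is negative on [0, t_in[ and on ]t_out, T] (so ū = 0 there), vanishes on the boundary arc (where the singular control βx_min − g(t) applies), and q_x satisfies the transversality condition q_x(T) = 0.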

Scenario 2
Here we have x̄(t) > x_min for all t ∈ [0, T], so the state constraint is never active. Consequently, we deduce that μ = 0 and q_x(t) = p_x(t) for all t ∈ [0, T]. The special feature of this scenario is the fact that x̄ remains at the threshold x_FC on some time interval [t_a, t_b]. In this case the adjoint inclusion reduces to ṗ_x(t) = ζ(t) p_x(t), where ζ is a measurable function such that β ≤ ζ(t) ≤ β + 1 for almost every t ∈ [t_a, t_b].
Recalling that q_x(T) = 0 (see item (iii) of the "Necessary Conditions for (OCP)" Section), it is a simple matter to see that this implies p_x(t) = q_x(t) = 0 for all t ∈ [0, T]. Taking this into account, φ(t) = −1 < 0 on [0, T], which implies that the optimal control is ū(t) = 0 almost everywhere on [0, T], and so z̄(T) = 0. The optimal trajectory x̄(t), for all t ∈ [0, T], can now easily be calculated by solving the respective differential equations.

Scenario 3
Here set t_in = t_a and t_out = t_b. This scenario is a mix of the two previous ones: we have first a boundary arc on [t_in, t_out] and then another interval [t_c, t_d], with t_out < t_c, where x̄(t) = x_FC. The extra care needed to analyse this scenario comes from the interval [t_c, t_d], as we show next.
Observe that on ]t_out, T] we have q̇_x(t) = ṗ_x(t) = ζ(t) q_x(t), with ζ(t) ∈ [β, β + 1] as in Scenario 2. Moreover, we know that q_x(T) = 0. Thus, we have q_x(t) = 0 for all t ∈ ]t_out, T].
Consequently, as q_x(t_out^+) = 1 − η_out ≤ 1, we have η_out = 1 and φ(t) = −1 for t ∈ ]t_out, T]. For t ∈ [0, t_out[, p_x and q_x are as defined in Scenario 1, with η_in = 0 and ν_a(t) = β. Concluding, we have that p_x(t), q_x(t), ū(t) and z̄(T) are given by (22), (23), (24) and (25), respectively, for all t ∈ [0, T], with t_in = t_a and t_out = t_b. The optimal trajectory x̄(t), for all t ∈ [0, T], can now easily be calculated by solving the respective differential equations.

Scenario 4
Here we set t_in = t_b and t_out = t_c. Once more, we have a boundary interval. In contrast with Scenario 3, here the optimal trajectory only takes the value x_FC at a single point t_a < t_in.
The analysis for t ≥ t_in follows exactly as in Scenario 1. Recall that we assume that the boundary control on ]t_in, t_out[ is a singular control taking values in ]0, M[. So, for t ∈ ]t_in, t_out[, the multipliers μ, q_x and p_x behave as in Scenario 1. Thus, we can write q_x(t) = 1 for t ∈ ]t_in, t_out[, and (15)-(18) hold. Moreover, we have q_x(t) = 0 for t ∈ ]t_out, T]. So, we also deduce that η_in = 0 and η_out = 1. Note that we also have p_x(t_in) = 1, as in Scenario 1, since q_x(t) = p_x(t) for t < t_in. Thus, the behaviour of q_x and p_x for t > t_in is exactly as in Scenario 1.
Let us now see what happens for t ≤ t_in. It is a simple matter to see that the adjoint inclusion (ii) leads to ṗ_x(t) = β p_x(t) for almost every t ≤ t_in, since x̄(t) = x_FC only at the single instant t_a, which does not affect the solution of the differential equation. Solving this differential equation in terms of p_x(t_in), and recalling that p_x(t_in) = 1, we have p_x(t) = e^{β(t − t_in)} for t ∈ [0, t_in]. We thus conclude that the optimal control and the optimal cost are given by (24) and (25) for t_in = t_b and t_out = t_c. The optimal trajectory x̄(t), for all t ∈ [0, T], can now easily be calculated by solving the respective differential equations. Summarizing, for t_in = t_b and t_out = t_c, the conclusions (22)-(25) of Scenario 1 hold. (26)

Scenario 5
In Scenario 5 we have x̄(t) > x_min for all t ∈ [0, T]. Then μ = 0 and q_x(t) = p_x(t) for all t ∈ [0, T]. It is a simple matter to see from (ii) and q_x(T) = p_x(T) = 0 that q_x(t) = p_x(t) = 0 for all t ∈ [0, T]. Thus, φ(t) = −1 < 0 and ū(t) = 0 for all t ∈ [0, T]. We thus conclude that the cost, z̄(T), is 0. The optimal trajectory x̄(t), for all t ∈ [0, T], can now easily be calculated by solving the respective differential equations.

Scenario 6
In this scenario the optimal trajectory exhibits again a boundary arc. Once more, we set t_in = t_a and t_out = t_b. On [0, t_out[, the analysis is exactly as in Scenario 1. Moreover, on ]t_out, T] we have q̇_x(t) = ζ(t) q_x(t), where ζ(t) ∈ [β, β + 1] accounts for the interval where x̄(t) = x_FC. Solving this differential equation and recalling that q_x(T) = 0, we have q_x(t) = 0 for all t ∈ ]t_out, T]. From item 2. of the current section, we know that q_x(t_out^−) = 1. Thus, it follows from q_x(t_out^+) = 1 − η_out that η_out = 1. Then, we conclude that p_x(t), q_x(t), ū(t) and z̄(T) are given by (22), (23), (24) and (25), respectively, for all t ∈ [0, T], with t_in = t_a and t_out = t_b. The optimal trajectory x̄(t), for all t ∈ [0, T], can now easily be calculated by solving the respective differential equations.

Scenario 7
Observe that here t_in = 0 and t_out = t_a. The analysis of q_x on [0, t_out] follows the analysis on [t_in, t_out] for Scenarios 1, 3, 4 and 6, with the exception that we cannot deduce that η_in is zero. Moreover, we have q_x(t) = 1 for t ∈ ]0, t_out[ (see item 2. of the current section). Furthermore, from conditions (7) and (19), we know that q̇_x(t) = β q_x(t) for almost every t ∈ ]t_out, T]. We also know that q_x(T) = 0. Thus, we get q_x(t) = 0 for all t ∈ ]t_out, T], so that φ(t) = −1 and ū(t) = 0 on this interval. Concluding, we obtain η_out = 1, and z̄(T) is given by (25) for t_in = 0 and t_out = t_a. The optimal trajectory x̄(t), for all t ∈ [0, T], can now easily be calculated by solving the respective differential equations.

Scenario 8
Finally, for this scenario we have t_in = t_b and t_out = T. The behaviour of q_x and μ on ]t_b, T[ is as in the previous scenarios, with the exception that here, due to q_x(T) = 0 and q_x(t) = 1 for t ∈ ]t_b, T[, we know that η_out = 1. As in Scenario 1, we have η_in = 0. Clearly, on [0, t_in[ we have q_x(t) = p_x(t) and φ(t) < 0, so that ū(t) = 0 there, while ū(t) = βx_min − g(t) on ]t_in, T[. It is then a simple matter to see that z̄(T) is given by (25) for t_in = t_b and t_out = T. The optimal trajectory x̄(t), for all t ∈ [0, T], can now easily be calculated by solving the respective differential equations.

Concatenation of Scenarios -Example
The composition of two scenarios, from among those proposed at the beginning of the "Eight Scenarios" Section, can give rise to differences in the analysis when compared with the individual study of each scenario, whenever the latter has a boundary interval [t_in, t_out]. In order to illustrate the previous comment, we analyse a composition of Scenarios 1 and 2, where Scenario 2 happens earlier: the trajectory starts between x_min and x_FC, increases up to x_FC, remains there on [t_a, t_b], then decreases, reaching x_min at t_d, and stays on the boundary on [t_d, t_e]. Firstly, note that t_in = t_d and t_out = t_e. For all t ∈ [t_c, T], the analysis is exactly the same as that of Scenario 1; consequently, we have q_x(t) = p_x(t) = e^{β(t − t_in)} for t ∈ [t_c, t_in[, which implies that q_x(t_c) = e^{β(t_c − t_in)} > 0. However, the corresponding point in Scenario 2 is such that q_x(T) = 0. Here lies the main difference, which will originate others. Thus, we now have to follow the argument carried out in the study of Scenario 2, but taking into account the updated final condition of the multiplier q_x. For almost all t ∈ [t_a, t_b] we have ṗ_x(t) = ζ(t) p_x(t), where ζ is a measurable function such that ζ(t) ∈ [β, β + 1] for almost all t ∈ [t_a, t_b]; this implies that q_x = p_x remains positive on [t_a, t_b]. Finally, for almost all t ∈ [0, t_a[, we have ṗ_x(t) = β p_x(t), so that p_x(t) = K_a e^{β(t − t_a)}, where K_a = p_x(t_a) > 0. From the analytical expression of q_x, we conclude that we only obtain a singular arc for t ∈ ]t_in, t_out[. It is an easy task to see that the conclusions for the multipliers and the control hold if x_0 takes any value equal to or above x_min.
Before ending this example, let us suppose that t out = T (the end state is on the boundary). In this case we get q x (t) = 1 for t ∈ ]t in , T [, q x (T) = 0, and the corresponding expression for p x (t).

Remark 1
Observe that, when concatenating scenarios, the analysis applied to each one still holds, but the transversality condition associated with the multiplier q x for the case under study has to be taken into account. In general, concatenations cannot be analysed through a strict mathematical composition; they should be carried out as a combined study, possibly subject to updates, like the one illustrated above.

Approximation of Function g Values
In this section, we approximate the values of the function g for three different cases. This is needed since we only have daily information on g. The results obtained are then used in the numerical simulations reported in Sects. 6.1, 6.2 and 6.3.

Case 1
Consider that g takes the values of the second column of Table 2. Using a TI-Nspire CX II-T CAS graphing calculator, we approximate these values by two different types of continuous functions: a quartic polynomial and a logistic function (see Fig. 2). Since the error of the logistic regression is smaller, we choose the latter as the approximation of g in the "Case 1" Section.
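The same model comparison can be sketched in Python with scipy in place of the calculator's regression tool. The daily samples below are hypothetical stand-ins for the second column of Table 2 (not reproduced here); only the fitting procedure mirrors the one described above.

```python
import numpy as np
from scipy.optimize import curve_fit

days = np.arange(10)  # daily information: t = 0, 1, ..., 9
# hypothetical stand-ins for the second column of Table 2
g_vals = np.array([0.1, 0.15, 0.3, 0.8, 1.8, 3.0, 3.8, 4.2, 4.4, 4.5])

def logistic(t, c, a, b):
    # standard logistic model c / (1 + a e^{-bt})
    return c / (1.0 + a * np.exp(-b * t))

# quartic polynomial regression (degree 4) and logistic regression
p4 = np.polyfit(days, g_vals, 4)
popt, _ = curve_fit(logistic, days, g_vals,
                    p0=[g_vals.max(), 40.0, 1.0], maxfev=10000)

# compare residual sums of squares: the model with the smaller error is kept
rss_poly = np.sum((np.polyval(p4, days) - g_vals) ** 2)
rss_logi = np.sum((logistic(days, *popt) - g_vals) ** 2)
```

Whichever model attains the smaller residual sum of squares on the tabulated data plays the role of the approximation of g.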

Case 2
Now, we assume that g takes the values of the third column of Table 2. Using again a TI-Nspire CX II-T CAS graphing calculator, we approximate these values by two different types of continuous functions: a quartic polynomial and a sinusoidal function (see Fig. 3). Since the error of the quartic regression is smaller, we choose the quartic as the approximation of g in the "Case 2" Section.

Case 3
Finally, suppose that g takes the values of the fourth column of Table 2. Trying to approximate these values by regression with different types of continuous functions, with the help of a TI-Nspire CX II-T CAS graphing calculator, we obtain large errors (see an example in Fig. 4). Thus, we decide to interpolate the values of the fourth column of Table 2, using the MATLAB function interp1 with the spline method and the vector xq = [0, 0.1, 0.2, ..., 9] (of size 1 × 91) for the coordinates of the query points (see Fig. 4).
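For readers without MATLAB, this interpolation step can be reproduced in Python: scipy's CubicSpline with its default not-a-knot boundary condition computes the same kind of spline as interp1 with the 'spline' method. The daily values below are hypothetical stand-ins for the fourth column of Table 2.

```python
import numpy as np
from scipy.interpolate import CubicSpline

days = np.arange(10)  # known daily samples at t = 0, 1, ..., 9
# hypothetical stand-ins for the fourth column of Table 2
g_daily = np.array([2.0, 1.5, 0.0, 0.0, 3.2, 4.1, 0.5, 0.0, 1.1, 0.7])

xq = np.linspace(0.0, 9.0, 91)       # query points 0, 0.1, ..., 9 (1 x 91)
spline = CubicSpline(days, g_daily)  # not-a-knot cubic spline, as in interp1 'spline'
g_fine = spline(xq)                  # interpolated values on the fine grid
```

Unlike the regressions of Cases 1 and 2, the interpolant reproduces the tabulated daily values exactly at the original sample points.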

Validation
As in [9], we solve (OCP) numerically by the direct method. We first discretize the problem using the Euler method and then solve the finite-dimensional nonlinear optimization problem given by (8) in Section "Approximation of Function g Values" of [9]. Throughout the current section, the numerical solution of the optimization problem is obtained with the help of MATLAB (version R2019a). The MATLAB function fmincon is used with the following optimization options: the termination tolerance on the function value is 10⁻⁹; the maximum number of iterations allowed is 10⁶; the maximum number of function evaluations allowed is 3 × 10⁶; the termination tolerance on the state and control variables is 10⁻⁴; the optimization algorithm is active-set.
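A minimal sketch of this direct approach, with Python's scipy in place of fmincon: the dynamics below (a single smooth mode with linear losses at rate β) is a simplified stand-in for the field-capacity model of (OCP), and all parameter values are illustrative, not those of Table 3.

```python
import numpy as np
from scipy.optimize import minimize

T, N = 9.0, 90                        # horizon and number of Euler steps
theta = T / N                         # step size theta = 0.1, as in the simulations
beta, x_min, x_init = 0.1, 28.0, 35.0 # illustrative parameters (not Table 3 values)
g = 2.0 * np.ones(N)                  # illustrative weather-loss term

def rollout(u):
    """Euler discretization of the simplified dynamics x' = -g(t) - beta*x + u."""
    x = np.empty(N + 1)
    x[0] = x_init
    for k in range(N):
        x[k + 1] = x[k] + theta * (-g[k] - beta * x[k] + u[k])
    return x

cost = lambda u: theta * np.sum(u)  # minimize total irrigation water
# state constraint x(t) >= x_min, imposed at every grid point
state_cons = {"type": "ineq", "fun": lambda u: rollout(u) - x_min}

res = minimize(cost, x0=np.full(N, 5.0), method="SLSQP",
               bounds=[(0.0, 10.0)] * N, constraints=[state_cons],
               options={"maxiter": 500, "ftol": 1e-9})
x_opt = rollout(res.x)
```

In this toy problem the optimal policy lets x decay to x_min and then holds it there, mirroring the boundary-interval behaviour of the scenarios above.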
In what follows, we study three different cases. We consider the values of Table 3 and denote by θ the time step of the discretization in the three numerical studies.

Case 1
In the numerical simulations of the current subsection, we assume all the values of Table 3. We also consider that x 0 = 44.2362 mm, β = 0.25 and that the function g takes the values of the second column of Table 2. Recall that we use the function g 2 , drawn in Fig. 2, as the approximation of g, as discussed in the "Case 1" Section. For the discretization of (OCP), we work with 91 grid points, corresponding to a step size θ = 0.1. The numerical solutions obtained using the MATLAB function fmincon with options (28) are presented in Fig. 5a. Notice that in Fig. 5a we also present the solutions obtained when we use the values of Table 2 instead of the logistic function g 2 ; to do so, we consider the step size θ = 1.
This case is similar to Scenario 4. Consequently, for all t ∈ [0, T ], we know that the analytical solutions ū(t) and q x (t) of (OCP) are given by (24) and (26), respectively, with x̄(t) then obtained by solving the respective differential equation. Observe that the numerical multiplier associated with the state variable x is not normalized when θ = 0.1 (see Fig. 5). Numerically, we obtained t in = 2.2 and t out = 6.2 when θ = 0.1.
However, the precise value of t a , the instant of time at which the state trajectory equals x FC , is unknown. From Fig. 5a, we only know that 0.1 ≤ t a ≤ 0.2 < T .
To get an approximate value for t a , we search computationally for the minimum of an error criterion, and we determine that t a ≈ 0.153. Thus, using these values, we are able to compare the numerical and analytical solutions, as one can see in Fig. 5b.
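As an illustration of how such an instant can be pinned down from gridded data, the sketch below brackets the crossing of a hypothetical computed trajectory with x FC between two grid points and refines it with a root finder on a spline interpolant of the samples; all numbers are made up, and only the bracketing interval [0.1, 0.2] mirrors the text.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import brentq

x_FC = 40.0                          # hypothetical field capacity level
t_grid = np.linspace(0.0, 1.0, 11)   # first part of the grid, step theta = 0.1
# hypothetical computed state samples, crossing x_FC between t = 0.1 and t = 0.2
x_num = np.array([44.2, 40.9, 39.6, 39.0, 38.6, 38.3,
                  38.1, 38.0, 37.9, 37.85, 37.8])

interp = CubicSpline(t_grid, x_num)                   # smooth interpolant of the samples
t_a = brentq(lambda t: interp(t) - x_FC, 0.1, 0.2)    # refine within the bracket
```

The root finder returns a sub-grid estimate of the crossing time, in the same spirit as the computational search for t a described above.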

Case 2
In the numerical simulations of the current subsection, we assume all the values of Table 3. We also consider that x 0 = 41.3485 mm, β = 0.0839 and that the function g takes the values of the third column of Table 2. Recall that we use the function g 1 , drawn in Fig. 3, as the approximation of g, as discussed in the "Case 2" Section.
For the discretization of (OCP), we work with 91 grid points, corresponding to a step size θ = 0.1. The numerical solutions obtained using the MATLAB function fmincon with options (28) are presented in Fig. 6a. In this figure we also present the solutions obtained when we use the values of the third column of Table 2 instead of the function g 1 . This case is similar to Scenario 8. Consequently, considering t out = T , we know that the analytical solutions ū(t), q x (t) and x̄(t) of (OCP) are given by (27) for all t ∈ [0, T ]. Numerically, we obtained t in = 3.0 when θ = 0.1. Observe that the numerical multiplier associated with the state variable x is not normalized when θ = 0.1 (see Fig. 6). With all this information, we can compare the numerical and analytical solutions, as one can see in Fig. 6b.

Case 3
In the numerical simulations reported here, we assume all the values of Table 3. We also consider that x 0 = 33.045 mm, β = 0.0997 and that the function g takes the values of the fourth column of Table 2. Recall that we use the interpolated values, drawn in Fig. 4, as the approximation of g, as discussed in the "Case 3" Section. For the discretization of (OCP), we work with 91 grid points, corresponding to a step size θ = 0.1. The numerical solutions obtained using the MATLAB function fmincon with options (28) are presented in Fig. 7a. In this figure we also present the solutions obtained when we use the values of the fourth column of Table 2 instead of the interpolated values. This case is similar to Scenario 5. Following the same line of thought as in Sect. 4.1.5, we can state that x̄(t) is given by the corresponding expression, with ū(t) = q x (t) = 0 for all t ∈ [0, T ]. We note that, for the computed solutions with θ = 0.1, we fail to obtain the exact instants of time at which the state trajectory equals x FC .
We only know that there are four such instants. Let us denote them by t a , t b , t c and t d , where 0 < t a < t b < t c < t d < T = 9, with 0.8 ≤ t a ≤ 0.9, 2.8 ≤ t b ≤ 2.9 and 7.0 ≤ t c ≤ 7.1. To get approximate values for t a , t b , t c and t d , we search computationally for the minimum of an error criterion, with U k = t k /θ + 1. Following this line of thought, we determine that t a ≈ 0.800, t b ≈ 2.877, t c ≈ 7.058 and t d ≈ 8.577. Also for this case, we observe that the numerical multiplier associated with the state variable x is not normalized when θ = 0.1 (see Fig. 7). Using all this information, we are now able to compare the numerical and analytical solutions, as one can see in Fig. 7b.
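A sketch of how the four bracketing intervals can be read off a gridded trajectory: scan the samples for sign changes of x − x FC. The trajectory below is synthetic (a sine wave around a hypothetical x FC), chosen only so that it crosses the level exactly four times on ]0, 9].

```python
import numpy as np

x_FC = 40.0                          # hypothetical field capacity level
t = np.linspace(0.0, 9.0, 91)        # 91 grid points, step theta = 0.1
x = x_FC + 2.0 * np.sin(1.5 * t)     # synthetic trajectory with four crossings in ]0, 9]

s = np.sign(x - x_FC)
idx = np.where(s[:-1] * s[1:] < 0)[0]       # grid intervals containing a sign change
brackets = [(t[i], t[i + 1]) for i in idx]  # each pair bounds one crossing instant
```

Each bracket gives the lower and upper bounds for one of the instants, playing the role of the intervals such as 0.8 ≤ t a ≤ 0.9 above; a root finder can then refine each one.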

Final Notes
Notice that the analytical expressions of Sects. 6.1, 6.2 and 6.3 depend on the weather conditions, the switching times and the instants at which the trajectory changes mode. It is important to highlight that we cannot obtain any analytical information about these time values. Thus, to proceed with our study, we consider the numerical times to be correct.
Also observe that the analytical and the numerical solutions are very close in Figs. 5b, 6b and 7b. Such closeness provides a partial validation of the numerical solutions through the analytical ones.

Conclusions and Future Work
The need to study irrigation strategies is of foremost importance nowadays, since 80% of the fresh water used on our planet goes to agriculture. Hence, in this paper we studied the daily irrigation problem of an agricultural crop using optimal control.
We considered an optimal control problem whose dynamics is based on field capacity modes and which includes a state constraint. When studying non-smooth state-constrained optimal control problems, it is very hard to obtain the analytical solution, and this was the focus of the current paper. To overcome this difficulty, we considered different basic profiles for the optimal trajectories. Under some mild assumptions, we applied the Maximum Principle with the purpose of obtaining the analytical solution for each one of the profiles. We emphasize that we could not obtain any analytical information on the switching times or on the instants at which the trajectory changes mode. Since we intended to partially validate the numerical solutions through the analytical ones, we assumed that these numerical times are correct.
With this paper, we were able to better understand the behaviour of the trajectories and controls. This work is a first step towards the definition of an automatic irrigation system guaranteeing the minimization of water consumption. Our results are important to validate our very simple model. To do that, we need to confront our results with experimental data, which we hope to do in the very near future in collaboration with the Centre for the Research and Technology of Agro-Environmental and Biological Sciences of the Universidade de Trás-os-Montes e Alto Douro (UTAD), Portugal.
Our future work will also focus on the determination of sub-optimal control strategies that can be implemented experimentally. We hope to test strategies based on model predictive control and robust control policies, enabling us to deal with the uncertainties due to unpredictable weather conditions. Another focus of our attention will be the determination of the value function of our problem using dynamic programming techniques.