Before we solve the more interesting infinite horizon problem, we first solve its finite horizon truncations with horizon N.
We can prove the following form of the value function and optimal control.
Theorem 2
The following function \(V^N: \mathbb{R}_+\times \{0,\dots , N+1\}\rightarrow \mathbb{R}_+\) is the value function, while the following function \(S^N:\mathbb{R}_+\times \{0,\dots , N\}\rightarrow \mathbb{R}_+\) is the optimal control for the truncation of the initial problem with time horizon N:
$$\begin{aligned} V^N(x,0):={\left\{ \begin{array}{ll} V_0(x):=K_0+G_0x+\frac{H_0}{2} x^2 &{} \text { if }\,{\hat{x}_0\le x<\hat{x}_1},\\ \vdots \\ V_{N-1}(x):=K_{N-1}+G_{N-1}x+\frac{H_{N-1}}{2} x^2 &{} \text { if }\,{\hat{x}_{N-1}\le x<\hat{x}_N},\\ V_N(x):=K_N+G_Nx+\frac{H_N}{2} x^2 &{} \text { if }\,{\hat{x}_{N}\le x<\hat{y}_N},\\ U_N(x):=\sum \limits _{i=0}^N {\beta ^i}P(\hat{s}) &{} \text { if }\, {x\ge \hat{y}_N}; \end{array}\right. } \end{aligned}$$
(14)
with \(V^N(x,t)=V^{N-t}(x,0)\) for \(t\le N\) and \(V^N(x,N+1)=0\) and
$$\begin{aligned} S^N(x,0)={\left\{ \begin{array}{ll} S_0(x):=a_0x+b_0 &{} \text { if }\, \hat{x}_0\le x<\hat{x}_1,\\ \vdots \\ S_{N-1}(x):=a_{N-1}x+b_{N-1} &{} \text { if }\, \hat{x}_{N-1}\le x<\hat{x}_N,\\ S_N(x):=a_Nx+b_N &{} \text { if }\,\hat{x}_{N}\le x<\hat{y}_N,\\ \hat{s}:=\frac{A}{B} &{} \text { if }\, x\ge \hat{y}_N \end{array}\right. } \end{aligned}$$
(15)
with \(S^N(x,t)=S^{N-t}(x,0)\) for \(t\le {N}\), where the constants are
$$\begin{aligned} H_0= & {} -B{(1+\xi )}^2,\, G_0=A(1+\xi ), \, K_0=\hat{x}_{0}=0, \text { and }\nonumber \\ \hat{y}_N= & {} \frac{\hat{s}}{(1+\xi )} \sum \limits _{i=0}^{N} \frac{1}{{(1+ \xi )}^{i}}; \end{aligned}$$
(16)
$$\begin{aligned} H_{i+1}= & {} \frac{\beta B H_{i}{(1+\xi )}^2}{B-\beta H_{i}}, G_{i+1}=\frac{\beta (1+\xi )\left( BG_{i}-AH_{i}\right) }{B-\beta H_{i}}, \nonumber \\ K_{i+1}= & {} \beta K_{i} +\frac{{\left( A-\beta G_{i}\right) }^2}{2(B-\beta H_{i})}; \end{aligned}$$
(17)
$$\begin{aligned} a_{i+1}= & {} \frac{-\beta H_i (1+\xi )}{B-\beta H_{i}}, \ b_{i+1}=\frac{A-\beta G_i}{B-\beta H_{i}}, \ \hat{x}_{i+1}=\frac{b_{i}-b_{i+1}}{a_{i+1}-a_{i}}, \nonumber \\ \hat{y}_{i+1}= & {} \frac{\hat{y}_i+\hat{s}}{1+\xi }. \end{aligned}$$
(18)
The number i corresponds to the time to resource exhaustion for x in the interval \((\hat{x}_{i-1},\hat{x}_{i})\): for \(\hat{x}_{i-1}<x<\hat{x}_{i}\), the resource will be depleted in i stages. So, \(V_i\) and \(S_i\) correspond to time to resource exhaustion \(i+1\), \(\hat{x}_i\) is the highest state such that if \(x_0=\hat{x}_i\), then the optimal trajectory fulfils \(X(i)=0\), while \(\hat{y}_N\) is the lowest state such that if \(x_0=\hat{y}_N\), then \(\hat{s}\) is available in each of \(N+1\) stages. Equivalently, \(\hat{x}_i \) is the lowest state at which a control S with \(S(X(0),0)=a_i X(0)+b_i\), \(S(X(1),1)=a_{i-1} X(1)+b_{i-1}\), \(\dots \), \(S(X(i),i)=a_0 X(i)+b_0\) and \(S(X(k),k)=0\) for \(k>i\) can be admissible.
Equivalently, \(a_i\), \(b_i\) and \(\hat{x}_{i+1}\) can be rewritten as
$$\begin{aligned} a_{i}=-\frac{ H_{i} }{B(1+\xi )}, \ b_{i}=\frac{A(1+\xi )-G_{i}}{B(1+\xi )}, \ \hat{x}_{i+1}=\frac{G_{i+1}-G_i}{H_i-H_{i+1}}. \end{aligned}$$
(19)
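To make the recursion concrete, the following minimal Python sketch iterates the recurrences (16)-(18) and evaluates \(V^N(x,0)\) and \(S^N(x,0)\) piece by piece. It assumes the quadratic payoff \(P(s)=As-\frac{B}{2}s^2\) (consistent with the first order condition used in the proof below and with \(P(\hat{s})=\frac{A^2}{2B}\)); the section does not restate \(\beta \), so its value here is purely illustrative, chosen so that \(\beta (1+\xi )^2>1\) and \(\beta (1+\xi )<1\).

```python
# Parameters A, B, xi as in Fig. 1; beta is an illustrative assumption --
# how beta relates to the epsilon of the figures is not restated here.
A, B, xi, beta = 1000.0, 1.0, 0.02, 0.97
s_hat = A / B                        # the stationary control from Eq. (15)

def P(s):
    # Quadratic payoff inferred from the first order condition and P(s_hat).
    return A * s - B * s ** 2 / 2

def constants(N):
    """Iterate the recurrences (16)-(18) up to index N."""
    H, G, K = [-B * (1 + xi) ** 2], [A * (1 + xi)], [0.0]
    a, b = [1 + xi], [0.0]           # a_0 and b_0 follow from Eq. (19)
    x_hat, y_hat = [0.0], [s_hat / (1 + xi)]
    for i in range(N):
        d = B - beta * H[i]
        H.append(beta * B * H[i] * (1 + xi) ** 2 / d)
        G.append(beta * (1 + xi) * (B * G[i] - A * H[i]) / d)
        K.append(beta * K[i] + (A - beta * G[i]) ** 2 / (2 * d))
        a.append(-beta * H[i] * (1 + xi) / d)
        b.append((A - beta * G[i]) / d)
        x_hat.append((b[i] - b[i + 1]) / (a[i + 1] - a[i]))
        y_hat.append((y_hat[i] + s_hat) / (1 + xi))
    return H, G, K, a, b, x_hat, y_hat

def V_S(x, N):
    """Evaluate (V^N(x,0), S^N(x,0)) by locating the piece containing x."""
    H, G, K, a, b, x_hat, y_hat = constants(N)
    if x >= y_hat[N]:                # the resource is never depleted
        return sum(beta ** i for i in range(N + 1)) * P(s_hat), s_hat
    i = max(j for j in range(N + 1) if x_hat[j] <= x)
    return K[i] + G[i] * x + H[i] / 2 * x ** 2, a[i] * x + b[i]
```

For other time instants one can use the identities \(V^N(x,t)=V^{N-t}(x,0)\) and \(S^N(x,t)=S^{N-t}(x,0)\) from Theorem 2.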
We present the results of Theorem 2 (the optimal control \(S^N\) and the value function \(V^N\)) in Fig. 1. The graphs are drawn for the values \(A=1000, \, B=1,\ \epsilon =0.01, \ \xi =0.02\) and \(N=100\). Small diamonds correspond to the subsequent \(\hat{x}_i\) and to \(\hat{y}_{100}\), which are points of non-differentiability of \(S^N\). Both functions are continuous and non-decreasing in x. The consecutive \(\hat{x}_i\) correspond to consecutive numbers of time moments to depletion of the state variable, while above \(\hat{y}_{100}\) the resource is not depleted in the problem with time horizon 100.
To show how the optimal control and the value function at the initial time change as the time horizon increases, we illustrate them for the same values of parameters \(A=1000, \, B=1,\ \epsilon =0.01\) and \(\xi =0.02\) and for four values of \(N=0,1,10,100\) in Fig. 2 (the optimal control) and Fig. 3 (the value function). Small diamonds correspond to the subsequent \(\hat{x}_i\) and \(\hat{y}_i\); \(\hat{y}_{100}\) is out of the range. The non-differentiability of \(S^N\) is clearly visible, as is the differentiability of \(V^N\). As we can see, the optimal control is non-increasing in N, while the value function is non-decreasing in N.
To prove Theorem 2, we need the following sequence of Lemmata. The parts of the proofs which are elaborate but less interesting are moved to the Appendix.
Lemma 1
(a) \(\phi (x,S_N(x))\) is non-decreasing in x for all N and strictly increasing in x for \(N\ge 1\).
(b) If \(x \ge \hat{x}_{N-1}\) then \(\phi (x,S_N(x))\ge \hat{x}_{N-2}\) and if \(x \le \hat{x}_N\) then \(\phi (x,S_N(x))\le \hat{x}_{N-1}\).
Proof
(a) Since \(\phi (x,S_N(x))= ((1+\xi )-a_N)x-b_N\), the claim follows from the fact that \(a_N\le (1+\xi )\) for all N and \(a_N<(1+\xi )\) for \(N\ge 1\), resulting from Lemma 4(b) from the Appendix.
(b) This follows from the fact that \(\phi (\hat{x}_N,S_N(\hat{x}_N))= \hat{x}_{N-1}\) (Lemma 7(a) from the Appendix) together with the continuity and monotonicity of \(S^N(x,0)\) in x (Lemma 8(a) from the Appendix). \(\square \)
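Both parts of the lemma are easy to inspect numerically. A quick check, assuming the helper constants and the values of xi and s_hat from the sketch after Theorem 2 are in scope:

```python
# phi(x, s) = (1 + xi) * x - s, so x -> phi(x, S_N(x)) has slope (1+xi) - a_N.
H, G, K, a, b, x_hat, y_hat = constants(10)
print(all((1 + xi) - a[n] > 0 for n in range(1, 11)))        # part (a)
for n in range(2, 11):                                       # part (b)
    phi = ((1 + xi) - a[n]) * x_hat[n] - b[n]
    print(n, phi - x_hat[n - 1])   # ~0: phi(x_hat_N, S_N(x_hat_N)) = x_hat_{N-1}
```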
Lemma 2
For all x, \(V^N\) is concave and differentiable in x, and it is strictly concave for \(x<\hat{y}_N\).
Proof
This proof is based on basic properties of strictly concave functions and their derivatives or superdifferentials [an analogue of the properties of convex functions and their subdifferentials; see, e.g., Rockafellar (2015)].
Since, by Lemma 4 from the Appendix, \(H_i<0\), the functions \(V_i\) are strictly concave and differentiable. Note that \(U_i\) is constant for every i, so it is also concave.
Since \(V_i\) is strictly concave, \({\frac{\partial V_i}{\partial x}}\) is strictly decreasing; since, by Lemma 9(a) from the Appendix, \(V^N\) is continuous and, by Lemma 10 from the Appendix, \(V^\prime _i(\hat{x}_i)=V^\prime _{i+1}(\hat{x}_i)\), \(V^N(\cdot ,0)\) is differentiable for \(x<\hat{y}_N\) and its derivative is strictly decreasing on \((0,\hat{y}_N)\).
Since \(\frac{\partial V^N(\cdot ,0)}{\partial x}\) is strictly decreasing for \(x \le \hat{y}_N\), \(V^N(\cdot ,0)\) is strictly concave on the interval \([0, \hat{y}_N)\).
Since, by Lemma 10(b) from the Appendix, \({\frac{\partial V_N(\hat{y}_N,0)}{\partial x}}=0={\frac{\partial U_N(\hat{y}_N)}{\partial x}}\), we have \({\frac{\partial V^N(x,0)}{\partial x}}=0\) for \(x\ge \hat{y}_N\).
Since \(\frac{\partial V^N(\cdot ,0)}{\partial x}\) is non-increasing, \(V^N(\cdot ,0)\) is concave on the whole domain. \(\square \)
Lemma 3
(a) For any N, \(P(s)+\beta V^{N}\left( (1+\xi )x-s, 0\right) \) is strictly concave and differentiable in s and the supremum in the r.h.s. of the Bellman equation (9) is attained.
(b) If for some \(s\in [0,(1+\xi )x]\), \(\frac{\partial (P(s)+\beta V^{N}(\phi (x,s),0))}{\partial s}=0\), then s is the unique optimum of the right-hand side of the Bellman equation.
Proof
(a) Immediate by Lemma 2 and the boundedness of P and \(V^N\) from above.
(b) If a point fulfils the first order condition for optimization of a strictly concave function then it is the unique optimum. \(\square \)
Proof of Theorem 2
We prove the theorem inductively in two ways: by forward induction with respect to the horizon N and, within a fixed horizon N, by backward induction corresponding to the dynamic programming technique, which we rewrite as forward induction with respect to the time to resource exhaustion.
For \(N=0\) it can be easily verified that the value function
$$\begin{aligned} V^0(x,0)={\left\{ \begin{array}{ll} V_0(x):=(A-\frac{B}{2}(1+\xi )x)(1+\xi )x &{} \hat{x}_0\le x < \hat{y}_0, \\ U_0(x):=P(\hat{s})=\frac{A^2}{2B} &{} {x \ge \hat{y}_0}, \end{array}\right. } \end{aligned}$$
fulfils the Bellman equation (11) and there is a unique optimal control
$$\begin{aligned} S^0(x,0)={\left\{ \begin{array}{ll} S_0(x):=(1+\xi )x &{} \hat{x}_0\le x < \hat{y}_0, \\ \hat{s} &{} {x \ge \hat{y}_0}, \end{array}\right. } \end{aligned}$$
which fulfils the Bellman inclusion (12).
Assume that the value function and the optimal control are given by Eqs. (14) and (15) for N; we prove the same for \(N+1\).
The Bellman equation (11) has the form
$$\begin{aligned} V^{N+1}(x,t)=\sup \limits _{{s}\in {[0,(1+\xi )x]}} P(s)+\beta V^{N+1} \left( \phi (x,s),t+1\right) \text { for all } t \le N, \end{aligned}$$
(20)
while the Bellman inclusion, a necessary and sufficient condition for a control to be optimal, is
$$\begin{aligned} S^{N+1}(x,t)\in \mathop {{{\,\mathrm{Argmax}\,}}}\limits _{{s}\in {[0,(1+\xi )x]}} P(s)+\beta V^{N+1} \left( \phi (x,s),t+1\right) \text { for all } t \le N. \end{aligned}$$
(21)
By the Bellman optimality principle (Bellman 1957), at time \(t+1\), the solution has to coincide with the optimal solution of the N-horizon problem with the state resulting from the first decision. Since the only dependence on time in the functions of the model is through discounting, \(V^{N+1}(x,1)=V^{N}(x,0)\) and \(S^{N+1}(x,1)=S^{N}(x,0)\). By analogous reasoning, we have \(V^{N+1}(x,t+1)=V^{N}(x,t)\) and \(S^{N+1}(x,t+1)=S^{N}(x,t)\) for all \(t\le N\). Thus, we only have to check Eqs. (20) and (21) for \(t=0\).
So, Eqs. (20) and (21) can be rewritten as
$$\begin{aligned} V^{N+1}(x,0)= & {} \sup \limits _{{s}\in {[0,(1+\xi )x]}} P(s)+\beta V^{N} \left( \phi (x,s),0\right) , \end{aligned}$$
(22)
$$\begin{aligned} S^{N+1}(x,0)\in & {} \mathop {{{\,\mathrm{Argmax}\,}}}\limits _{{s}\in {[0,(1+\xi )x]}} P(s)+\beta V^{N} \left( \phi (x,s),0\right) . \end{aligned}$$
(23)
The maximum of the r.h.s. of Eq. (22) exists and is unique by Lemma 3. Whenever there exists a point in \([0,(1+\xi )x]\) at which the derivative of the r.h.s. of Eq. (22) is 0, it is the maximum, while if this zero-derivative point is greater than \((1+\xi )x\), the maximum is attained at \((1+\xi )x\).
We are going to locate the maximum. It depends on the interval to which x belongs. If \(x \in [\hat{x}_0, \hat{x}_1]\), then \(\phi (x,S_0(x))=0\), so \(V^N\left( \phi (x,S_0(x)),0\right) =0\).
By Lemma 1, if \(x\in [\hat{x}_{k+1},\hat{x}_{k+2})\), then \(V^N\left( \phi (x,S^N(x,0)),0\right) =V_{k}\left( \phi (x,S_{k+1}(x))\right) \).
So, if \(S_{k+1}(x)\) maximizes the r.h.s. of
$$\begin{aligned} V_{k+1}(x)=\sup \limits _{s\in [0,(1+\xi )x]}P(s)+\beta V_k(\phi (x,s)) \end{aligned}$$
(24)
then for this x, Eq. (22) reduces to Eq. (24). So, what remains to be proven is the fact that \(S_{k+1}\) is really the maximizer of the r.h.s. of Eq. (24) and that this equation is fulfilled. We do it by induction with respect to k.
By substitution, we get it for \(k=0\) if we use the auxiliary function \(V_{-1}\equiv 0\).
Now we assume that it is fulfilled for k and prove it for \(k+1\).
The first order condition for s to be optimal is
$$\begin{aligned} A-Bs-\beta G_{k}-\beta H_{k}((1+\xi )x-s)=0. \end{aligned}$$
By solving this equation for s, we get the optimal \(S_{k+1}\)
$$\begin{aligned} S_{k+1}(x)=a_{k+1}x+b_{k+1}=\frac{\beta H_{k}(1+\xi )x+\beta G_{k}-A}{\beta H_{k}-B} \end{aligned}$$
(25)
with the constants \(a_{k+1}=\frac{\beta H_{k}(1+\xi )}{\beta H_{k}-B}\) and \(b_{k+1}=\frac{\beta G_{k}-A}{\beta H_{k}-B}\).
Substituting the value from Eq. (25) into Eq. (24), we obtain \(V_{k+1}(x)=K_{k+1}+G_{k+1}x+\frac{H_{k+1}}{2} x^2\), with the recurrence equation for the constants as in Eq. (17).
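The algebra of this induction step can be verified symbolically. The following sketch (in Python with sympy) solves the first order condition and reads off the coefficients; it is a mechanical check of Eqs. (25) and (17), not a part of the proof:

```python
import sympy as sp

# P(s) = A*s - B*s**2/2 is inferred from the first order condition itself.
A, B, beta, xi, x, s = sp.symbols('A B beta xi x s', positive=True)
Hk, Gk, Kk = sp.symbols('H_k G_k K_k')

y = (1 + xi) * x - s                           # the next state phi(x, s)
rhs = A * s - B * s ** 2 / 2 + beta * (Kk + Gk * y + Hk * y ** 2 / 2)

s_opt = sp.solve(sp.Eq(sp.diff(rhs, s), 0), s)[0]
print(sp.simplify(                             # 0: s_opt agrees with Eq. (25)
    s_opt - (beta * Hk * (1 + xi) * x + beta * Gk - A) / (beta * Hk - B)))

V_next = sp.expand(rhs.subs(s, s_opt))         # quadratic in x
checks = [                                     # coefficients vs. Eq. (17)
    2 * V_next.coeff(x, 2) - beta * B * Hk * (1 + xi) ** 2 / (B - beta * Hk),
    V_next.coeff(x, 1) - beta * (1 + xi) * (B * Gk - A * Hk) / (B - beta * Hk),
    V_next.coeff(x, 0) - beta * Kk - (A - beta * Gk) ** 2 / (2 * (B - beta * Hk)),
]
print([sp.simplify(c) for c in checks])        # [0, 0, 0]
```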
So, what remains to be proven are two cases: \(x\in [\hat{x}_{N+1},\hat{y}_{N+1})\) and \(x\ge \hat{y}_{N+1}\).
In the latter case, obviously \(\phi (x,\hat{s})\ge \hat{y}_N\), so the Bellman equation (22) reduces to \(U_{N+1}(x)=\sup \nolimits _{s\in [0,(1+\xi )x]}P(s)+\beta V^{N}(\phi (x,s),0)=\sup \nolimits _{s\in [0,(1+\xi )x]}P(s)+\beta U_{N}(\phi (x,s))\), and it is fulfilled with \(s=\hat{s}\).
In the former case we have two sub-cases:
(i) If \(\phi (x,S_{N+1}(x))\in [\hat{x}_{N},\hat{y}_{N})\), then the Bellman equation (22) reduces to \(V_{N+1}(x)=\sup \nolimits _{s\in [0,(1+\xi )x]}P(s)+\beta V_{N}(\phi (x,s))\), and the reasoning is the same as for \(x\in [\hat{x}_{k},\hat{x}_{k+1})\) for \(k< N\).
(ii) If \(\phi (x,S_{N+1}(x))\ge \hat{y}_N\), then the Bellman equation (22) reduces to \(V_{N+1}(x)=\sup \nolimits _{s\in [0,(1+\xi )x]}P(s)+\beta U_{N}(\phi (x,s))\), and it is fulfilled with \(s=\hat{s}\).
Since \(S_{N+1}(\hat{y}_{N+1})=\hat{s}\), all the cases have been taken into account (by Lemma 8(b)).
All formulae in (19) are immediate by (17) and (18). \(\square \)
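As a numerical sanity check of Theorem 2, one can compare \(V^{N+1}(x,0)\) and \(S^{N+1}(x,0)\) with a brute force grid maximization of the r.h.s. of Eqs. (22) and (23). The sketch below assumes the helpers P, V_S and the parameters from the sketch after Theorem 2 are in scope:

```python
import numpy as np

def bellman_gap(x, N, grid=20001):
    """Gaps between V^{N+1}(x,0), S^{N+1}(x,0) and a grid maximization
    of the r.h.s. of Eqs. (22)-(23)."""
    s = np.linspace(0.0, (1 + xi) * x, grid)
    vals = P(s) + beta * np.array([V_S((1 + xi) * x - si, N)[0] for si in s])
    j = vals.argmax()
    v, s_opt = V_S(x, N + 1)
    return v - vals[j], s_opt - s[j]

for x in (20.0, 200.0, 2000.0, 20000.0):
    print(x, bellman_gap(x, 10))     # both gaps ~0, up to the grid resolution
```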
Limit properties of the finite horizon truncations of the problem
In this subsection, we examine the limit properties of the finite horizon truncations of the problem.
Let us introduce
$$\begin{aligned} \tilde{x}:=\frac{\hat{s}}{\xi } \text{ and } \tilde{k}=\frac{P(\hat{s})}{1-\beta }. \end{aligned}$$
(26)
The first interesting properties are the limits of \(V^N\) and \(S^N\), which for \(x<\tilde{x}\) are attained in finitely many steps.
Proposition 1
(a) If \(x < \tilde{x}\) then \(\exists N_x\) \(\forall N_1, N_2>N_x\), \(\forall y \le x\), \(V^{N_1}(y,0)=V^{N_2}(y,0)\) and \(S^{N_1}(y,0)=S^{N_2}(y,0)\).
(b) If \(x \ge \tilde{x}\) then \(\forall t\), \(\lim \nolimits _{N\rightarrow \infty } V^N(x,t)=\tilde{k}\) and \(\forall N, t\), \(S^N(x,t)=\hat{s}\).
Proof
Immediate. \(\square \)
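Although the proof is immediate, the proposition is easy to visualize numerically, since the pieces \(V_i\), \(S_i\) and the thresholds \(\hat{x}_i\) do not depend on the horizon. A small illustration, again reusing the helpers and illustrative parameters from the sketch after Theorem 2:

```python
x_tilde = s_hat / xi                 # = 50000 with the values used above
k_tilde = P(s_hat) / (1 - beta)
# (a): below x_tilde, value and control stop changing once N exceeds some N_x.
print([V_S(50.0, N) for N in (10, 20, 40)])      # three identical pairs
# (b): above x_tilde, the control is s_hat and the value tends to k_tilde.
for N in (50, 200, 800):
    v, s = V_S(2 * x_tilde, N)
    print(N, s == s_hat, v / k_tilde)            # the ratio tends to 1
```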
Another interesting issue is the limit of \(V^N(x_N,0)\) and \(S^N(x_N,0)\) for a sequence \(x_N \nearrow \tilde{x}\). To calculate these limits, we first need to check the convergence of the sequences \(H_N \), \(G_N\), \(K_N\), \(a_N\) and \(b_N\).
Proposition 2
(a) The limit of \(H_i\) is given by
$$\begin{aligned} \lim \limits _{i \rightarrow \infty } H_i={\left\{ \begin{array}{ll} \frac{B\left( 1-\beta {(1+\xi )}^2\right) }{\beta } &{} \text { for } \beta {(1+\xi )}^2> 1,\\ 0 &{} \text { for } \beta {(1+\xi )}^2\le 1. \end{array}\right. } \end{aligned}$$
(b) The limit of \(a_i\) is given by
$$\begin{aligned} \lim \limits _{i \rightarrow \infty } a_i={\left\{ \begin{array}{ll} \frac{\beta {(1+\xi )}^2-1}{\beta (1+\xi )} &{} \text { for } \beta {(1+\xi )}^2> 1,\\ 0 &{} \text { for } \beta {(1+\xi )}^2\le 1. \end{array}\right. } \end{aligned}$$
Proof
(a) Consider the recurrence relation for \(H_i\) given by (17).
By calculating the fixed points of this recurrence, we obtain two values: 0 and \(\frac{B(1-\beta {(1+\xi )}^2)}{\beta }\).
By Lemma 4, we know that \(H_i\) is increasing and bounded from above by 0. So the limit exists and it is non-positive. Consider the following cases.
Case 1 If \(\beta {(1+\xi )}^2>1\), then \(H_1<\frac{B \left( 1-\beta {(1+\xi )}^2\right) }{\beta }\).
Consider any sequence given by Eq. (17) without predetermined initial condition.
Let us denote it by \(\{h_i\}\). Such \(\{h_i\}\) is increasing for \(h_1<\frac{B\left( 1-\beta {(1+\xi )}^2\right) }{\beta }\) and decreasing for \(\frac{B\left( 1-\beta {(1+\xi )}^2\right) }{\beta }<h_1<0\). Since \(H_i\) starts below the negative fixed point and increases, it cannot cross this fixed point; so, 0 cannot be the limit of \(H_i\).
Therefore, in this case \(\lim \nolimits _{i \rightarrow \infty } H_i=\frac{B(1-\beta {(1+\xi )}^2)}{\beta }\).
Case 2 If \(\beta {(1+\xi )}^2\le 1\), then \(\frac{B\left( 1-\beta {(1+\xi )}^2\right) }{\beta }\ge 0\).
Either the nonzero fixed point is positive, and hence it cannot be the limit of the non-positive sequence \(H_i\), or it equals 0.
Therefore, in this case \(\lim \nolimits _{i \rightarrow \infty } H_i=0\).
(b) Immediate by substituting the limit of \(H_i\) obtained in (a) into \(a_i=\frac{-H_i}{B(1+\xi )}\). \(\square \)
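A direct iteration of the recurrence (17) illustrates the proposition. The following self-contained sketch uses the illustrative parameter values from the earlier sketches, for which \(\beta (1+\xi )^2>1\) (Case 1):

```python
A, B, xi, beta = 1000.0, 1.0, 0.02, 0.97   # beta*(1+xi)**2 > 1: Case 1
H = -B * (1 + xi) ** 2                     # H_0 from Eq. (16)
for _ in range(5000):
    H = beta * B * H * (1 + xi) ** 2 / (B - beta * H)
print(H, B * (1 - beta * (1 + xi) ** 2) / beta)          # part (a)
print(-H / (B * (1 + xi)),
      (beta * (1 + xi) ** 2 - 1) / (beta * (1 + xi)))    # part (b), via Eq. (19)
```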
Proposition 3
Consider \(F_i:=\frac{G_i}{H_i}\).
(a) The limit of \(F_i\) is given by \(\lim \limits _{i \rightarrow \infty } F_i=-\frac{A}{B\xi }\).
(b) The limit of \(G_i\) is given by
$$\begin{aligned}\lim \limits _{i \rightarrow \infty } G_i={\left\{ \begin{array}{ll} \frac{A(\beta {(1+\xi )}^2-1)}{\beta \xi } &{} \text { for } \beta {(1+\xi )}^2> 1,\\ 0 &{} \text { for } \beta {(1+\xi )}^2\le 1. \end{array}\right. } \end{aligned}$$
(c) The limit of \(b_i\) is given by
$$\begin{aligned}\lim \limits _{i \rightarrow \infty } b_i={\left\{ \begin{array}{ll} \hat{s} -\frac{A(\beta {(1+\xi )}^2-1)}{B \beta \xi (1+\xi )} &{} \text { for } \beta {(1+\xi )}^2> 1,\\ \hat{s} &{} \text { for } \beta {(1+\xi )}^2\le 1. \end{array}\right. } \end{aligned}$$
Proof
(a) We calculate the fixed point of the recurrence for \(F_i\), which is \(-\frac{\hat{s}}{\xi }\).
By Lemma 5 from the Appendix, \(F_i\) is decreasing.
Let us consider any sequence given by Eq. (33) without predetermined initial condition and denote it by \(\{f_i\}\).
If \(f_1>\frac{-\hat{s}}{\xi }\), then \(f_i\) is decreasing, while if \(f_1<\frac{-\hat{s}}{\xi }\), then \(f_i\) is increasing.
Therefore \(\lim \nolimits _{i \rightarrow \infty } F_i=\frac{-\hat{s}}{\xi }\).
(b) \(\lim \nolimits _{i \rightarrow \infty } G_i=\left( \lim \nolimits _{i \rightarrow \infty } H_i\right) \cdot \left( \lim \nolimits _{i \rightarrow \infty } F_i\right) \) since both \(H_i\) and \(F_i\) are convergent.
The result is immediate by Propositions 3(a) and 2(a).
(c) Immediate by (b) and Eq. (19). \(\square \)
Proposition 4
\(\lim \nolimits _{i \rightarrow \infty } \hat{y}_i=\lim \nolimits _{i \rightarrow \infty } \hat{x}_i=\tilde{x}\).
Proof
Immediate by the definitions of \(\hat{y}_i \) and \(\hat{x}_i\) given in Eqs. (16) and (18) and by the limits of \(a_i\) and \(b_i\). \(\square \)
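Propositions 3 and 4 can be checked numerically in the same way, iterating Eqs. (17) and (18) jointly; in Case 1 the convergence is slow (geometric with ratio \(1/(\beta (1+\xi )^2)\)), hence the large number of iterations. Again a self-contained sketch with the illustrative parameter values:

```python
A, B, xi, beta = 1000.0, 1.0, 0.02, 0.97   # again Case 1
c, s_hat = 1 + xi, A / B
H, G, a, b = -B * c ** 2, A * c, c, 0.0
x_hat, y_hat = 0.0, s_hat / c
for _ in range(2000):
    d = B - beta * H
    H1, G1 = beta * B * H * c ** 2 / d, beta * c * (B * G - A * H) / d
    a1, b1 = -beta * H * c / d, (A - beta * G) / d
    x_hat = (b - b1) / (a1 - a)            # Eq. (18)
    y_hat = (y_hat + s_hat) / c
    H, G, a, b = H1, G1, a1, b1
print(G, A * (beta * c ** 2 - 1) / (beta * xi))                  # Prop. 3(b)
print(b, s_hat - A * (beta * c ** 2 - 1) / (B * beta * xi * c))  # Prop. 3(c)
print(x_hat, y_hat, s_hat / xi)            # Prop. 4: all close to x_tilde
```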
Proposition 5
Consider a sequence \(x_N \nearrow \tilde{x}\).
Then \(\lim \nolimits _{N\rightarrow \infty }V^N(x_N,0)=\tilde{k}\) and \(\lim \nolimits _{N\rightarrow \infty }S^N(x_N,0)=\hat{s}\) and \(\lim \nolimits _{N\rightarrow \infty }\frac{\partial V^N(x_N,0)}{\partial x}=0\).
Proof
We know the limits of the sequences \(H_i\), \(G_i\), \(a_i\) and \(b_i\). To prove the result, we also need to prove the convergence of \(K_i\) and find its limit.
By Lemma 9(b), \(V^i\) is continuous at \(\hat{y}_i\), so \(K_i=\sum \nolimits _{j=0}^{i}\beta ^j P(\hat{s})-\frac{H_i}{2}(\hat{y}_i)^2 - G_i\hat{y}_i\). By taking the limit, we obtain that \(\lim \nolimits _{i \rightarrow \infty }K_i=\tilde{k}-\lim \nolimits _{i \rightarrow \infty }\frac{H_i}{2}(\tilde{x})^2 - \lim \nolimits _{i \rightarrow \infty }G_i\tilde{x}\).
Since \(\lim \nolimits _{N \rightarrow \infty } x_N=\tilde{x}\), there exists an increasing sequence of integers \(n_N\) such that for N large enough \(\hat{x}_{n_N}\le x_N <\tilde{x}\). By monotonicity of the functions \(V^i\), \(V^{n_N}(\hat{x}_{n_N},0)\le V^{n_N}(x_N, 0) \le V^{n_N}(\tilde{x},0)\). Taking the limit ends the proof for V. The proof for S is analogous.
Similarly for \(\frac{\partial V^N(x_N,0)}{\partial x}\), with the monotonicity of the derivative in the opposite direction, resulting from the concavity of \(V^N\). \(\square \)
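Proposition 5 can also be illustrated along a concrete sequence \(x_N\nearrow \tilde{x}\), reusing the helpers from the sketch after Theorem 2 (the choice of \(x_N\) below is arbitrary):

```python
x_tilde, k_tilde = s_hat / xi, P(s_hat) / (1 - beta)
for N in (100, 400, 1600):
    xN = x_tilde * (1 - 1.0 / N)       # one arbitrary choice of x_N
    v, s = V_S(xN, N)
    print(N, v / k_tilde, s / s_hat)   # both ratios tend to 1
```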