On temporal aggregators and dynamic programming

Abstract

This paper proposes dynamic programming tools for payoffs based on aggregating functions that depend on the current action and the future expected payoff. Regularity conditions on the aggregator are provided to establish existence, uniqueness and computation of the solution to the Bellman equation. Our setting allows us to encompass and generalize many previous results based upon additive or non-additive payoff functions.

Notes

  1. This approach is presented in Becker and Boyd [see Section 3.3.1. in Becker and Boyd (1997)].

  2. In Yao (2016), the aggregator and the recursive payoff are taken together as primitives.

  3. They consider an additive model where the constant discount rate is replaced by a function.

  4. In the existing literature, the aggregator A is generally defined on \(Z \times \mathbf {R}\). In this case, if A is increasing with respect to its second argument (an assumption that is made in this paper), such aggregator can be extended to a function \(Z \times \overline{\mathbf {R}} \rightarrow \overline{\mathbf {R}}\) by letting \(A(z,+\infty )=\lim _{v \rightarrow +\infty } A(z,v)\) and \(A(z,-\infty )=\lim _{v \rightarrow -\infty } A(z,v)\).

  5. The space \([-\infty ,+\infty ]\) is endowed with its standard compactification-topology: a neighborhood of \(x \in \mathbf {R}\) is standard, a neighborhood of \(-\infty \) contains \([-\infty ,y[\) for some \(y \in \mathbf {R}\), and a neighborhood of \(+\infty \) contains \(]y,+\infty ]\) for some \(y \in \mathbf {R}\).

  6. This boundedness condition holds, for example, when f is continuous.

  7. Note that Thompson aggregators A are usually defined on \(Z \times [0,+\infty [\), and not on \(Z \times [-\infty ,+\infty ]\) as in our paper. Such aggregators are easily extended to the setting of this paper, so that our main results apply to this class of aggregators.

  8. In the definition given by Marinacci and Montrucchio (2010), it is only required that there exists at least such an \(a_z\), but the authors prove uniqueness in Lemma 1 (page 1798).

  9. For some well-suited topology on \([0,1]^{\mathbf {N}}\) such that every neighborhood of 0 intersects \(]0,1]^{\mathbf {N}}\). This technical requirement then provides a meaning to \(\lim _{\varepsilon \rightarrow 0, \varepsilon \ne 0} f(\varepsilon )\), i.e., a limit whose existence is required in the proof of Theorem 4.2, \(\mathbf {R}\) being endowed with the standard metric.

  10. This is true, in particular, when \(A\) is uniformly continuous in \(v\).

  11. Actually, Kamihigashi considers a family of optimization problems, parametrized by \(L \in \{\mathop {\underline{\text {lim}}}\nolimits , \mathop {\overline{\text {lim}}}\nolimits \}\), and in this subsection, we only derive the supremum limit case.

  12. that is, \(x \precnsim _A y\) if \(x \precsim _A y\) is true and \(y \precsim _A x\) is false.

References

  • Alvarez, F., Stokey, N.: Dynamic programming with homogeneous functions. J. Econ. Theory 82, 167–189 (1998)

  • Becker, R.A., Boyd, J. III.: Capital Theory, Equilibrium Analysis and Recursive Utility. Blackwell, Hoboken (1997)

  • Boyd, J. III.: Recursive utility and the Ramsey problem. J. Econ. Theory 50, 326–345 (1990)

  • Duran, J.: On dynamic programming with unbounded returns. Econ. Theory 15, 339–352 (2000)

  • Jaśkiewicz, A., Matkowski, J., Nowak, A.S.: On variable discounting in dynamic programming: applications to resource extraction and other economic models. Ann. Oper. Res. 220, 263–278 (2014)

  • Kamihigashi, T.: Elementary results on solutions to the Bellman equation of dynamic programming: existence, uniqueness and convergence. Econ. Theory 56, 251–273 (2014)

  • Koopmans, T.: Stationary ordinal utility and impatience. Econometrica 28, 287–309 (1960)

  • Le Van, C., Morhaim, L.: Optimal growth models with bounded or unbounded returns: a unifying approach. J. Econ. Theory 105, 158–187 (2002)

  • Le Van, C., Vailakis, Y.: Recursive utility and optimal growth with bounded or unbounded returns. J. Econ. Theory 123, 187–209 (2005)

  • Marinacci, M., Montrucchio, L.: Unique solutions for stochastic recursive utilities. J. Econ. Theory 145, 1776–1804 (2010)

  • Martins-da-Rocha, V.F., Vailakis, Y.: Existence and uniqueness of a fixed point for local contractions. Econometrica 78, 1127–1141 (2010)

  • Rincon-Zapatero, J.P., Rodriguez-Palmero, C.: Existence and uniqueness of solutions to the Bellman equation in the unbounded case. Econometrica 71, 1519–1555 (2003)

  • Stokey, N., Lucas, R.E.: Optimal growth with many consumers. J. Econ. Theory 32, 139–171 (1984)

  • Stokey, N., Lucas, R.E., Prescott, E.: Recursive Methods in Economic Dynamics. Harvard University Press, Cambridge (1989)

  • Streufert, P.A.: Stationary recursive utility and dynamic programming under the assumption of biconvergence. Rev. Econ. Stud. 57, 79–97 (1990)

  • Streufert, P.A.: An abstract topological approach to dynamic programming. J. Math. Econ. 21, 59–88 (1992)

  • Yao, M.: Recursive utility and the solution to the Bellman equation. Manuscript, Keio University, No. DP2016-08 (2016)

Corresponding author

Correspondence to Philippe Bich.

Additional information

This research was completed thanks to the Novo Tempus research Grant, ANR-12-BSH1-0007, Program BSH1-2012. The authors would like to thank the participants of the conferences on recursive methods held in Glasgow in June 2014 and in Phoenix in March 2015, of the PET conference, Seattle, July 2014, and those of the SAET conference, Tokyo, August 2014.

Appendices

Appendix 1: Proof of Proposition 2.1

Let \(Z={\mathbf {R}}\). Define A on \(\mathbf {R} \times [-\infty ,+\infty [\) by \(A(z,v)=\frac{v}{2}+\frac{1}{2}+z\) if \(v \in ]-\infty ,1[\), \(A(z,v)=v+1+z\) if \(v \in [1,+\infty [\) and \(A(z,-\infty )=-\infty \).

For every \(z_0 \le 0\),

$$\begin{aligned} A(z_0,A(z_0,0))=A\left( z_0,\frac{1}{2}+z_0\right) =\frac{1}{2}\left( z_0+\frac{1}{2}\right) +\frac{1}{2}+z_0 =\frac{1}{2}+\frac{1}{4}+z_0\left( 1+\frac{1}{2}\right) , \end{aligned}$$

and iterating, we get:

$$\begin{aligned} A^n(z_0,\ldots ,z_0,0)=\sum _{k=1}^n \frac{1}{2^k}+z_0\sum _{k=0}^{n-1} \frac{1}{2^{k}}. \end{aligned}$$

In particular,

$$\begin{aligned} \lim _{n \rightarrow +\infty } A^n({\mathbf{0}}, 0)=1 \end{aligned}$$

with \(\mathbf{0}\) the null sequence, and

$$\begin{aligned} \lim _{n \rightarrow +\infty } A^n({-\mathbf{1}},0)=-1 \end{aligned}$$

with \({-\mathbf{1}}\) the constant sequence whose terms are all \(-1\).

And one also has

$$\begin{aligned} \lim _{n \rightarrow +\infty } A^n({\mathbf{1}}, 0)=\lim _{n \rightarrow +\infty } \left( \frac{1}{2}+2n+1\right) =+\infty \end{aligned}$$

with \(\mathbf{1}\) the constant sequence whose terms are all \(+1\).

Thus, \({\mathbf{1}}\), \(-{\mathbf{1}}\) and \({\mathbf{0}}\) belong to . In particular, if \(\precsim _A\) is the order defined by A on , and \(\precnsim _A\) is the strict order associated with \(\precsim _A\) (see Footnote 12), it is obtained that:

$$\begin{aligned} -{\mathbf{1}} \precnsim _A {\mathbf{0}} \precnsim _A {\mathbf{1}}. \end{aligned}$$
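The three limits above can be checked numerically. The sketch below (ours, not part of the original proof) simply iterates \(v \mapsto A(z,v)\) from \(v=0\) for a constant sequence with common term z; the function names are illustrative only.

```python
# Illustrative numerical check of the limits of A^n((z, ..., z), 0) for the
# aggregator A of Appendix 1, restricted to finite v.

def A(z, v):
    # A(z, v) = v/2 + 1/2 + z if v < 1, and v + 1 + z if v >= 1.
    return v / 2 + 1 / 2 + z if v < 1 else v + 1 + z

def iterate(z, n):
    """Compute A^n((z, ..., z), 0) = A(z, A(z, ..., A(z, 0)...))."""
    v = 0.0
    for _ in range(n):
        v = A(z, v)
    return v

for z in (0.0, -1.0, 1.0):
    print(z, iterate(z, 60))  # approx. 1 for z = 0, -1 for z = -1, large (diverging) for z = 1
```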

We now prove the Proposition by contradiction for the aggregator A defined above: assume that there exists an A-recursive function U which induces the same order. One should then have

$$\begin{aligned} U(-{\mathbf{1}})<U({\mathbf{0}}) <U({\mathbf{1}}). \end{aligned}$$

But since U is A-recursive,

$$\begin{aligned} A(0,U({\mathbf{0}}))=U({\mathbf{0}}). \end{aligned}$$

First assume that \(U({\mathbf{0}})\) is finite. Then, if \(U({\mathbf{0}}) \ge 1\), the last equality can be written

$$\begin{aligned} U({\mathbf{0}})+1=U({\mathbf{0}}), \end{aligned}$$

a contradiction. Otherwise, if \(U({\mathbf{0}}) < 1\), one gets

$$\begin{aligned} \frac{U({\mathbf{0}})}{2}+\frac{1}{2}=U({\mathbf{0}}) \end{aligned}$$

and \(U({\mathbf{0}})=1\), a contradiction. Consequently, \(U({\mathbf{0}})=+\infty \) or \(U({\mathbf{0}})=-\infty \), which contradicts \(U(-{\mathbf{1}})<U({\mathbf{0}}) <U(+{\mathbf{1}})\). This proves that we do not have \(-{\mathbf{1}} \precnsim _U {\mathbf{0}} \precnsim _U {\mathbf{1}}\), i.e., \(\precnsim _U\) and \(\precnsim _A\) are different.

Appendix 2: Proof of Proposition 3.1

1. Let \(Z={\mathbf {R}}\) and assume that there are no feasibility constraints. Define A by \(A(z,v)=\frac{v}{2}+\frac{1}{2}-(z)^2\) if \(v \in ]-\infty ,1[\), \(A(z,v)=v+1-(z)^2\) if \(v \in [1,+\infty [\), \(A(z,+\infty )=+\infty \) and \(A(z,-\infty )=-\infty \). Then,

The optimal value of \((\overline{P})\) is 1, and the maximum is reached only at \(\mathbf{0}\), the null sequence, i.e., the set of solutions is \( \text{ Sol } (\overline{P})=\{\mathbf{0}\}\).
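To see where this value comes from, here is a short check of ours restricted to constant sequences \((z,z,\ldots )\): starting from 0, every iterate remains below 1, so only the branch \(v \mapsto \frac{v}{2}+\frac{1}{2}-z^2\) is used and the iteration converges to its fixed point

$$\begin{aligned} v^{\infty }(z)=1-2z^2 \le 1, \end{aligned}$$

with equality if and only if \(z=0\), consistently with the value 1 of \((\overline{P})\) being reached only at the null sequence.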

Let us show that \({ \mathrm Sol} (P) \ne \{\mathbf{0}\}\).

If \({ \mathrm Sol} (P)= \emptyset \), then clearly \({ \mathrm Sol} (P) \ne \{\mathbf{0}\}\). If \({ \mathrm Sol} (P) \ne \emptyset \), the maximum of U is reached at some . Let us then prove by contradiction that is infinite.

Assume that is finite. Then, either and for all small enough values of \(a \in Z\), a contradiction. Or, for all small enough values of \(a \in Z\), , hence , another contradiction. So, is infinite.

If then U is constantly equal to \(-\infty \) on \(Z^{\mathbf {N}}\), and \({ \mathrm Sol} (P)=Z^{\mathbf {N}}\ne \{\mathbf{0}\}\). If , then for every \(a \in Z\), , and by iteration, for every sequence equal to except for a finite number of terms. Then, \({ \mathrm Sol} (P)\) is infinite and \({ \mathrm Sol} (P) \ne \{\mathbf{0}\}\).

2. We now consider a general aggregator A satisfying the Null Assumption. Thus, one has

since \(\bigl (z_0,z_1,\ldots ,z_{T-1},\underline{a},\underline{a},\ldots \bigr ) \in \Sigma \bigl (x_0\bigr )\).

To prove that such an inequality can be strict, consider an aggregator A such that \(A(z,+\infty )=+\infty \). For every A-recursive payoff U, one can construct a new recursive payoff \(\overline{U}\) as follows: fix , and define if but for a finite number of terms, and otherwise. Then, \(\overline{U}\) is an A-recursive payoff, and the value of (P) associated with this payoff is \(+\infty \), which is strictly larger than the value of \((\overline{P})\); this completes the proof.

Appendix 3: Proof of Proposition 3.2

Let \(x_0 \in X\). We first prove that for every If or then the inequality is true. Consequently, we can assume and .

Since U is A-recursive, for every and \(n \in \mathbf {N}\), we have Since U is bounded below by \(\underline{v}\), and from Increasing Monotonicity Assumption, we get, choosing any such that (this exists because )

Since , we get for every n. Let \(\varepsilon >0\). From Transversality Assumption (T1), we get that for n large enough,

In particular, for n large enough.

Now, for every n large enough such that (thus is real),

The above inequality is also true for , and passing to the supremum limit and taking \(\varepsilon \rightarrow 0\), we get

The same proof gives that for every ,

Thus, finally, for every ,

Appendix 4: Proof of Lemma 4.1

Let us show \(\text{(i) } \Leftrightarrow \text{(ii) }\) where

\(\text{(i) } \ \) :

There exists \(\delta : \mathbf {R} \rightarrow \mathbf {R} \cup \{+\infty \}\) that tends to \(0\) at \(0\) and such that, for every \((z,v,v') \in Z \times {\mathbf {R}}\times {\mathbf {R}}, \vert A(z,v)-A(z,v')\vert \le \delta (v-v')\), with convention (I), and

\(\text{(ii) }\) :

A is uniformly continuous in v.

Clearly \(\text{(i) } \Rightarrow \text{(ii) }\). Let us show that \(\text{(ii) } \Rightarrow \text{(i) }\). Assume that A is uniformly continuous in v and define for every \(x \in \mathbf {R}\):

$$\begin{aligned} \delta (x)=\sup _{(z,v) \in Z\times {\mathbf {R}}} \bigl \{ \vert A(z,v+x)-A(z,v)\vert \bigr \}, \end{aligned}$$

with convention (I), that is \(A(z,v+x)-A(z,v)\) is taken to be equal to 0 if \(A(z,v+x)=A(z,v)=+\infty \) or \(A(z,v+x)=A(z,v)=-\infty \). By definition of \(\delta \), Condition (i) is true for such \(\delta \), and the fact that \(\delta : \mathbf {R} \rightarrow \mathbf {R} \cup \{+\infty \}\) tends to \(0\) at \(0\) comes from Condition (ii).
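As a simple illustration (ours, not part of the original argument), consider the standard additive aggregator \(A(z,v)=u(z)+\beta v\) with \(\beta \in ]0,1[\): then

$$\begin{aligned} \vert A(z,v)-A(z,v')\vert =\beta \vert v-v'\vert , \end{aligned}$$

so that \(\delta (x)=\beta \vert x \vert \) satisfies Condition (i), and uniform continuity in v is immediate.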

Appendix 5: Proof of Proposition 4.1

Assume that the criterion of Proposition 4.1 is true, but that A is not compactly uniformly continuous in v. Thus, there exist a compact set K in Z, \(\varepsilon >0\), and a sequence \((z_n,v_n,v_n')\) in \(K \times \mathbf {R}^2\) with \(\mid v_n-v_n' \mid \le \frac{1}{n}\) and such that

$$\begin{aligned} \mid A(z_n,v_n)-A(z_n,v_n') \mid > \varepsilon . \end{aligned}$$
(4.3)

Since \(\mid v_n-v_n' \mid \le \frac{1}{n}\), we have two cases:

  1. (1)

    Either there is a subsequence of \((v_n,v_n')\) which converges. Then, up to a subsequence, we can assume that \((z_n,v_n,v_n')\) converges to some \((z,v,v) \in K \times \mathbf {R}^2\). Passing to the limit in Eq. (4.3), and from the continuity of A, we get \(\varepsilon \le 0\), a contradiction.

  2. (2)

    Or, \(\mid v_n \mid \) and \(\mid v_n' \mid \) tend to \(+\infty \). By assumption, for n large enough, we get

    $$\begin{aligned} \varepsilon < \mid A(z_n,v_n)-A(z_n,v_n') \mid \le f(z_n) \mid v_n-v_n' \mid \le f(z_n) \frac{1}{n}, \end{aligned}$$

    and passing to the limit, and from the boundedness of \(f(z_n)\), we get a contradiction.

Appendix 6: Proof of Proposition 4.2

From Property (3) of Thompson aggregators and monotonicity of A, we get that for every \(z \ge 0\) and every \(v' > v \ge 0\),

$$\begin{aligned} \displaystyle {\left| \frac{A(z,v')-A(z,v)}{v'-v} \right| \le \left| \frac{A(z,v)-A(z,0)}{v} \right| } \end{aligned}$$
(4.4)

and similarly, we get, for every \(z > 0\) and every \(v \ge a_z\),

$$\begin{aligned} \displaystyle {\left| \frac{A(z,v)-A(z,0)}{v-0} \right| \le \ \left| \frac{A(z,a_z)-A(z,0)}{a_z-0} \right| } \end{aligned}$$
(4.5)

This finally gives, for every \(z > 0\) and every \(v' > v \ge a_z\),

$$\begin{aligned} \mid A(z,v')-A(z,v) \mid \le (v'-v)\ \left| \frac{a_z-A(z,0)}{a_z} \right| \le (v'-v) \end{aligned}$$
(4.6)

which is still true for \(z=0\) by continuity of A.

Now, to prove that A is compactly uniformly continuous in v, assume by contradiction that there exist a compact set K in Z, \(\varepsilon >0\), and a sequence \((z_n,v_n,v_n')\) in \(K \times [0,+\infty [ \times [0,+\infty [\) with \(\mid v_n-v_n' \mid \le \frac{1}{n}\) and such that

$$\begin{aligned} \mid A(z_n,v_n)-A(z_n,v_n') \mid > \varepsilon . \end{aligned}$$

If \((v_n,v_n')\) has a bounded subsequence, then up to a subsequence, we can assume that \((z_n,v_n,v_n')\) converges to some \((z,v,v)\), and continuity of A gives, passing to the limit in the previous inequality, \(0 \ge \varepsilon \), a contradiction. Thus, \((v_n,v_n')\) has no bounded subsequence, and the two sequences \(v_n\) and \(v_n'\) tend to \(+\infty \).

Let \(a_n\) be such that \(A(z_n,a_n)=a_n\) (such an \(a_n\) is unique if \(z_n>0\) but it still exists when \(z_n=0\)).

Since \(z_n \in K\) and K is compact, the sequence \((z_n)\) is bounded: let z be an upper bound of K. Let us prove that \(a_n \) is bounded.

By contradiction, assume that \(\lim _{n \rightarrow +\infty } a_n=+\infty \). Define \(\eta >0\) and N such that \(n \ge N\) implies \(z_n <z+\eta \). By assumption, since \(z+\eta >0\), there exists a unique a such that \(A(z+\eta ,a)=a\), and we have \(A(z+\eta ,a')>a'\) for \(a'<a\) and \(A(z+\eta ,a')<a'\) for \(a'>a\) [the function \(\frac{A(z+\eta ,v)}{v}\) is indeed strictly decreasing for \(v \in [0,+\infty [\): see Lemma 1 in Marinacci and Montrucchio (2010)]. Thus, if \(a'=a_n >a\), \(A(z+\eta ,a_n) <a_n\). Let \(N' \in \mathbf {N}\) be such that for every \(n \ge N'\), \(a_n>a\). For every \(n \ge \max \{N,N'\}\), we have \(a_n>a\) and \(z_n <z+\eta \), thus by monotonicity of A, \(A(z_n,a_n) \le A(z+\eta ,a_n) <a_n\), which contradicts the definition of \(a_n\) and proves that \(a_n\) is bounded.

In particular, since \(a_n\) is bounded, permuting \(v_n\) and \(v_n'\) if necessary, we have \(v_n' \ge v_n>a_n\) for n large enough. Equation (4.6) gives, for n large enough

$$\begin{aligned} \varepsilon \le \mid A(z_n,v'_n)-A(z_n,v_n) \mid \le \frac{1}{n}, \end{aligned}$$
(4.7)

a contradiction for n large enough.

Appendix 7: Proof of Theorem 4.1

First, we prove the Theorem when Assumption (i) is true. In this proof, to simplify notation, we let \(w=\overline{w}\) and \(v^*=\overline{v}^*\). The proof is similar for \(v^*=\underline{v}^*\). Remark that from the Boundary condition and Uniform continuity in v, we get that for every \(z \in Z\), the mapping \(v \in [-\infty ,+\infty ] \rightarrow A(z,v)\) is continuous: this will be used in this proof. Let \(x_0 \in X\).

Step 1 First show \(Bv^*(x_0) \ge v^*(x_0)\).

First case If \(v^*(x_0)=-\infty \) then \(Bv^*(x_0) \ge v^*(x_0)\) is true.

Second case \(v^*(x_0)=+\infty \). Thus, by definition of \(v^*(x_0)\), for every \(K>0\), there exists a strictly increasing sequence \((T_n)\) of integers, and a sequence \((z_n)_{n \ge 0}\) of \(\Sigma (x_0)\) such that

$$\begin{aligned} \lim _{N \rightarrow +\infty } A^{T_N}((z_n)_{n \ge 0},0) \ge K. \end{aligned}$$

Now, by definition of \(Bv^*(x_0)\) and the definition of \(v^*\), we get in particular

$$\begin{aligned} Bv^*(x_0)\ge & {} A\Bigl (z_0,\mathop {\overline{\text {lim}}}\nolimits _{N \rightarrow +\infty } A^{T_N-1}((z_n)_{n \ge 1},0)\Bigr ) \\\ge & {} \mathop {\underline{\text {lim}}}\nolimits _{N \rightarrow +\infty } A\bigl (z_0,A^{T_N-1}((z_n)_{n \ge 1},0)\bigr ) \\= & {} \lim _{N \rightarrow +\infty } A^{T_N}((z_n)_{n \ge 0},0) \ge K. \end{aligned}$$

Here, we apply to the continuous mapping \(v \rightarrow A(z_0,v)\) the fact that for every continuous mapping f, \(f(\mathop {\overline{\text {lim}}}\nolimits _{n \rightarrow \infty } a_n) \ge \mathop {\underline{\text {lim}}}\nolimits _{n \rightarrow \infty } f(a_n)\).

Letting \(K \rightarrow +\infty \), this gives \(Bv^*(x_0)=+\infty =v^*(x_0)\).

Last case \(v^*(x_0)\) is finite. Similarly to the case above, this implies that for every \(\varepsilon >0\), there exists a strictly increasing sequence \((T_n)\) of integers, and a sequence \((z_n)_{n \ge 0} \in \Sigma ^0(x_0)\) such that \(\lim _{N \rightarrow +\infty } A^{T_N}((z_n)_{n \ge 0},0) \ge v^*(x_0)-\varepsilon \). The same proof as above (replacing K by \(v^*(x_0)-\varepsilon \)) finally gives

$$\begin{aligned} Bv^*(x_0) \ge v^*(x_0)-\varepsilon \end{aligned}$$

for every \(\varepsilon >0\). Thus, we get the inequality when \(\varepsilon \rightarrow 0\).

Step 2 Now show \(Bv^*(x_0) \le v^*(x_0)\).

If \(Bv^*(x_0)=-\infty \) then the inequality is true. Thus, consider the two other cases:

1) First case \(Bv^*(x_0) \in \mathbf {R}\).

Let \(\varepsilon >0\). By definition of \(Bv^*(x_0)\), there exists \(x_1 \in \Gamma (x_0)\) and \(z_0 \in \Omega (x_0,x_1)\) such that

$$\begin{aligned} Bv^*(x_0) \le A(z_0,v^*(x_1))+{\varepsilon }. \end{aligned}$$
(4.8)

We now show that in the three following subcases (\(v^*(x_1)=+\infty \), \(v^*(x_1)=-\infty \) and \(v^*(x_1) \in \mathbf {R}\)), we have \(v^*(x_0) \ge Bv^*(x_0) -2\varepsilon .\)

First subcase \(v^*(x_1)=+\infty \). Thus,

$$\begin{aligned} Bv^*(x_0) \le A(z_0,+\infty )+{\varepsilon }. \end{aligned}$$
(4.9)

By definition of \(v^*(x_1)=+\infty \), for every \(K>0\), there exists a strictly increasing sequence \((T_n)\) of integers, and a sequence \((z_n)_{n \ge 1}\) of \(\Sigma (x_1)\) such that

$$\begin{aligned} \lim _{N \rightarrow +\infty } A^{T_N}((z_n)_{n \ge 1},0) \ge K. \end{aligned}$$

From the definition of \(v^*(x_0)\), we have

$$\begin{aligned} v^*(x_0)&\ge \mathop {\overline{\text {lim}}}\nolimits _{N \rightarrow +\infty } A\bigl (z_0,A^{T_N}((z_n)_{n \ge 1},0)\bigr ) \\&\ge A\Bigl (z_0,\mathop {\underline{\text {lim}}}\nolimits _{N \rightarrow +\infty } A^{T_N}((z_n)_{n \ge 1},0)\Bigr ) \\&\ge A(z_0,K) \end{aligned}$$

Passing to the limit when K tends to \(+\infty \), by Boundary assumption, we get \(v^*(x_0) \ge A(z_0,+\infty )\). From Eq. (4.9), we get

$$\begin{aligned} v^*(x_0) \ge Bv^*(x_0) -\varepsilon . \end{aligned}$$

Second subcase \(v^*(x_1)=-\infty \). Thus,

$$\begin{aligned} Bv^*(x_0) \le A(z_0,-\infty )+{\varepsilon }. \end{aligned}$$
(4.10)

By definition of \(v^*(x_0)\) and Increasing Monotonicity assumption, we have \(v^*(x_0) \ge A(z_0,-\infty )\). From Eq. (4.10), we get \(v^*(x_0) \ge Bv^*(x_0)-{\varepsilon }.\)

Last subcase \(v^*(x_1) \in \mathbf {R}.\)

Since A is uniformly continuous in v, there exists \(\eta >0\) (now fixed) such that:

$$\begin{aligned} \forall (z_0,v,v') \in Z \times \mathbf {R}^2, \vert v-v'\vert \le \eta \Rightarrow \vert A(z_0,v)-A(z_0,v')\vert \le {\varepsilon }. \end{aligned}$$
(4.11)

By definition of \(v^*(x_1)\), there exists a strictly increasing sequence \((T_n)\) of integers, and a sequence \((z_n)_{n \ge 1}\) of \(\Sigma (x_1)\) such that

$$\begin{aligned} \lim _{N \rightarrow +\infty } A^{T_N}((z_n)_{n \ge 1},0) \ge v^*(x_1)-\eta . \end{aligned}$$

From the definition of \(v^*(x_0)\), we have

$$\begin{aligned} v^*(x_0)&\ge \mathop {\overline{\text {lim}}}\nolimits _{N \rightarrow +\infty } A\bigl (z_0,A^{T_N}((z_n)_{n \ge 1},0)\bigr ) \\&\ge A\Bigl (z_0,\mathop {\underline{\text {lim}}}\nolimits _{N \rightarrow +\infty } A^{T_N}((z_n)_{n \ge 1},0)\Bigr ) \\&\ge A(z_0,v^*(x_1)-\eta ) \ge A(z_0,v^*(x_1))-{\varepsilon }, \end{aligned}$$

the last inequality being a consequence of Eq. (4.11). From \(Bv^*(x_0) \le A(z_0,v^*(x_1))+{\varepsilon }\), we finally get \(v^*(x_0) \ge Bv^*(x_0)-2\varepsilon .\)

Finally, this inequality still holds in all the subcases above. Passing to the limit when \(\varepsilon \rightarrow 0\), we get \(v^*(x_0) \ge Bv^*(x_0)\).

2) Second case \(Bv^*(x_0)=+\infty \).

By definition of \(Bv^*(x_0)\), for every \(K>0\), there exists \(x_1 \in \Gamma (x_0)\) and \(z_0 \in \Omega (x_0,x_1)\) such that \(A(z_0,v^*(x_1)) \ge K\). This is exactly Eq. (4.8), in which \(Bv^*(x_0)\) is replaced by \(K+{\varepsilon }\). The same proof as in the first case can then be applied formally, with such a modification. This provides \(v^*(x_0) \ge K-\varepsilon \), which finally gives \(v^*(x_0)=+\infty \) when \(K \rightarrow +\infty \). Thus, \(Bv^*(x_0)=v^*(x_0)=+\infty \).

Now, we prove the theorem when Assumption (ii) is true [instead of Assumption (i)]. The only places where Assumption (i) was used in the proof above are:

  • to derive Eq. (4.11). But since \({\Sigma }(x_0)\) is compact, \(z_0\) in Eq. (4.11) is constrained to belong to a compact subset of Z, and thus the equation still holds from the Compact uniform continuity assumption in v.

  • to get that for every \(z \in Z\), the mapping \(v \in [-\infty ,+\infty ] \rightarrow A(z,v)\) is continuous, which is clearly true if A is only assumed to be compactly uniformly continuous in v (since continuity at \(v=+\infty \) or \(v=-\infty \) is a consequence of Boundary assumption).

Thus, the proof is unchanged when Assumption (ii) is assumed.

Appendix 8: Proof of Proposition 4.3

Let \(\delta \) be such that for every \((z,v,v') \in Z \times \mathbf {R}^2, \bigl \vert A(z,v)-A(z,v')\bigr \vert \le \delta (v-v')\), where \(\delta : \mathbf {R} \rightarrow \mathbf {R} \cup \{+\infty \}\) tends to 0 at 0. Existence of \(\delta \) follows from Lemma 4.1. Let \(x_0 \in X\). For every , by iteration and by Increasing Monotonicity, one obtains:

$$\begin{aligned}&A(z_0,A(z_{1},A(z_2, \ldots ,A(z_{n-1},\overline{v}(x_n))+\varepsilon _{n-1})+{\varepsilon _{n-2}}+\cdots )+{\varepsilon _{2}})+{\varepsilon _{1}}) \nonumber \\&\quad \le A(z_0,A(z_{1},A(z_2,\ldots , A(z_{n-1},\overline{v}(x_n)))+\cdots ))\nonumber \\&\qquad \cdots {}+\delta (\varepsilon _{1}+\delta (\varepsilon _{2}+\cdots +\delta (\varepsilon _{n-2}+\delta (\varepsilon _{n-1}))\cdots ). \end{aligned}$$
(4.12)

For every \(a>0\), define

$$\begin{aligned} V_a= & {} \Bigl \{(\varepsilon _n)_{n\in \mathbf {N}^{*}} \in [0,1]^{\mathbf {N}}: \forall (\varepsilon '_n)_{n\in \mathbf {N}^{*}} \in \prod _{i \in \mathbf {N}^{*}} [0,2 \varepsilon _i],\\&\quad \delta (\varepsilon '_1) \le a, \delta (\varepsilon '_2) \le \varepsilon _1, \ldots , \delta (\varepsilon '_{n-1}) \le \varepsilon _{n-2},\ldots \Bigr \}. \end{aligned}$$

This set is nonempty (it contains 0). Moreover, every \(V_a\) intersects \(]0,1]^{\mathbf {N}}\) (indeed, since \(\delta \) is continuous at 0, one can define some \((\varepsilon _n)_{n\in \mathbf {N}^{*}} \in V_a \cap ]0,1]^{\mathbf {N}}\) inductively). Consider on \([0,1]^{\mathbf {N}}\) the topology generated by this family of neighborhoods of 0. Then, every neighborhood of \(0\) intersects \(]0,1]^{\mathbf {N}}\). Moreover, for every \((\varepsilon _n)_{n\in \mathbf {N}^{*}} \in V_a\) and every integer \(n \ge 1\), \(\varepsilon _{n-2} +\delta (\varepsilon _{n-1}) \le \varepsilon _{n-2}+\varepsilon _{n-2}\), thus \(\delta (\varepsilon _{n-2}+\delta (\varepsilon _{n-1})) \le \varepsilon _{n-3}\). Iterating, one gets

$$\begin{aligned} \forall n \in \mathbf {N}, \delta (\varepsilon _{1}+\delta (\varepsilon _{2}+\cdots +\delta (\varepsilon _{n-2} +\delta (\varepsilon _{n-1}))\cdots ) \le a \end{aligned}$$

From Eq. (4.12) and the definition of f, taking the infimum with respect to n, and then the supremum, one obtains:

$$\begin{aligned} \forall \varepsilon \in V_a, f(\varepsilon ) \le f(0)+a, \end{aligned}$$

which proves the weak continuity of A and concludes the proof.
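To make the neighborhoods \(V_a\) concrete, here is an example of ours in the additive case where \(\delta (x)=\beta \vert x \vert \) for some \(\beta >0\): an element of \(V_a \cap ]0,1]^{\mathbf {N}}\) is obtained inductively by

$$\begin{aligned} \varepsilon _1=\min \Bigl \{1,\frac{a}{2\beta }\Bigr \}, \qquad \varepsilon _n=\min \Bigl \{1,\frac{\varepsilon _{n-1}}{2\beta }\Bigr \} \text{ for } n \ge 2, \end{aligned}$$

since then \(\delta (\varepsilon '_1) \le 2\beta \varepsilon _1 \le a\) and \(\delta (\varepsilon '_n) \le 2\beta \varepsilon _n \le \varepsilon _{n-1}\) for every \(\varepsilon '_n \in [0,2\varepsilon _n]\).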

Appendix 9: Proof of Proposition 4.4

Let \(x_0 \in X\). Since \(\Sigma (x_0)\) is compact (for the product topology), for every \(n \ge 0\) there exists \(K_n\), a compact subset of Z, such that for every (which implies ), we have \(z_n \in K_n\) for every \(n \ge 0\).

For every \(n \ge 0\), from compact uniform continuity in v, there exists \(\delta _n\) such that for every \((z,v,v') \in K_n \times \mathbf {R}^2, \bigl \vert A(z,v)-A(z,v')\bigr \vert \le \delta _n(v-v')\), where \(\delta _n: \mathbf {R} \rightarrow \mathbf {R} \cup \{+\infty \}\) tends to 0 at 0. For every , by iteration and by Increasing Monotonicity, one obtains:

$$\begin{aligned}&\displaystyle A(z_0,A(z_{1},A(z_2, \ldots ,A(z_{n-1},\overline{v}(x_n))+\varepsilon _{n-1})+{\varepsilon _{n-2}}+\cdots )+{\varepsilon _{2}})+{\varepsilon _{1}}) \nonumber \\&\quad \le A(z_0,A(z_{1},A(z_2,\ldots , A(z_{n-1},\overline{v}(x_n)))+\cdots ))\nonumber \\&\qquad \cdots {}+\delta _0(\varepsilon _{1}+\delta _1(\varepsilon _{2}+\cdots +\delta _{n-3}(\varepsilon _{n-2}+\delta _{n-2}(\varepsilon _{n-1}))\cdots ). \end{aligned}$$
(4.13)

For every \(a>0\), define

$$\begin{aligned} V_a= & {} \Bigl \{(\varepsilon _n)_{n\in \mathbf {N}^{*}} \in [0,1]^{\mathbf {N}}: \forall (\varepsilon '_n)_{n\in \mathbf {N}^{*}} \in \prod _{i \in \mathbf {N}^{*}} [0,2 \varepsilon _i],\\&\quad \delta _0(\varepsilon '_1) \le a, \delta _1(\varepsilon '_2) \le \varepsilon _1, \ldots , \delta _{n-2}(\varepsilon '_{n-1}) \le \varepsilon _{n-2},\ldots \Bigr \}. \end{aligned}$$

The end of the proof is similar to the proof of Proposition 4.3.

Appendix 10: Proof of Theorem 4.2

Consider \([\underline{v}, \overline{v}]\) such that \((\underline{v},\overline{v}) \in V^2\) satisfies Transversality, and let v be a fixed point of B on \([\underline{v}, \overline{v}]\), with \(\overline{v}(x_0)<+\infty \) for every \(x_0 \in X\).

Step 1 First prove that \(v \ge \overline{v}^*\).

Let \(x_0 \in X\).

If \(\overline{v}^*(x_0)=-\infty \) then \(v(x_0) \ge \overline{v}^*(x_0)\). Assume now that \(\overline{v}^*(x_0)>-\infty \). Then , where

By definition of B and since v is a fixed point of B, one has

$$\begin{aligned} \forall x_1 \in \Gamma (x_0), \forall z_0\in \Omega (x_0,x_1), v(x_0)=Bv(x_0) \ge A(z_0,v(x_1)). \end{aligned}$$
(4.14)

Similarly,

$$\begin{aligned} \forall x_2 \in \Gamma (x_1), \forall z_1\in \Omega (x_1,x_2), v(x_1)=Bv(x_1) \ge A(z_1,v(x_2)). \end{aligned}$$
(4.15)

and then, from Increasing Monotonicity, for every \(x_1 \in \Gamma (x_0), x_2 \in \Gamma (x_1), z_0\in \Omega (x_0,x_1), z_1\in \Omega (x_1,x_2)\), one has:

$$\begin{aligned} Bv(x_0) \ge A(z_0,A(z_1,v(x_2))) \end{aligned}$$
(4.16)

And by induction,

(4.17)

Since \(v \ge \underline{v}\) and by Increasing Monotonicity,

(4.18)

If there exists \(T \in \mathbf {N^*}\) such that , then \(v(x_0)=Bv(x_0)=+\infty \ge \overline{v}^*(x_0)\).

Assume now that for every integer \(T \in \mathbf {N^*}\), . Since \(\underline{v}\) satisfies Transversality assumption (T1), this implies for every integer T large enough.

Let . Now, we prove that . If , then it is true. Now assume that , that is . Then, (thus is real) for an infinite number of T. From Eq. (4.18) and for an infinite number of T:

(4.19)

Taking the supremum limit, Transversality Assumption (T1) implies

(4.20)

So, for every .

Taking the supremum on \(\Sigma (x_0)\),

$$\begin{aligned} v(x_0)=Bv(x_0) \ge \overline{v}^*(x_0). \end{aligned}$$
(4.21)

Step 2 Let us prove that \(v \le \underline{v}^*\).

Let \(x_0 \in X\). If \(v(x_0)=-\infty \), then \(v(x_0) \le \underline{v}^*(x_0)\). Since \(v(x_0) \le \overline{v}(x_0)<+\infty \), assume now that \(v(x_0) \in \mathbf {R}\).

For every integer n, let \(\varepsilon _n>0\). From the definition of Bv, there exists \(x_{1} \in \Gamma (x_0)\) and \(z_0 \in \Omega (x_0,x_1)\) (depending on \(\varepsilon _1\)) such that

$$\begin{aligned} v(x_0)=Bv(x_0) \le A(z_0,v(x_{1}))+{\varepsilon _1} \end{aligned}$$
(4.22)

where \(v(x_1) \le \overline{v}(x_1)<+\infty \) by assumption.

Similarly, there exists \(x_{2} \in \Gamma (x_{1})\) and \(z_1 \in \Omega (x_1,x_2)\) (depending on \(\varepsilon _1\) and \(\varepsilon _2\)) such that

$$\begin{aligned} v(x_{1}) \le A(z_{1},v(x_2))+{\varepsilon _2}. \end{aligned}$$
(4.23)

Then, by Increasing Monotonicity,

$$\begin{aligned} v(x_0) \le A(z_0,A(z_{1}, {v}(x_{2})) +\varepsilon _2)+{\varepsilon _1} \end{aligned}$$

By induction, one builds such that

$$\begin{aligned} \forall n \in \mathbf {N^*}, v(x_0)&\le A(z_0,A(z_{1},A(z_2,\ldots ,A(z_{n-1},\overline{v}(x_n))+\varepsilon _{n})\nonumber \\&\quad +{\varepsilon _{n-1}}+\cdots )+{\varepsilon _{2}})+{\varepsilon _{1}}. \end{aligned}$$
(4.24)

where we have used \(v \le \overline{v}<+\infty \) and the Increasing Monotonicity Assumption.

Take the infimum with respect to n and then the supremum with respect to ,

(4.25)

Then, passing to the limit when \((\varepsilon _n)_{n \ge 0}\) tends to 0, by Weak continuity,

(4.26)

Fix \(\varepsilon >0\). From Eq. (4.26), the supremum being finite or infinite, there exists such that for every integer \(n \ge 1\),

(4.27)

Now, \( v(x_0)>-\infty \) implies for every \(n \ge 1\), and by Transversality Assumption (T2), for n large enough.

If for n large enough, then , and passing to the supremum, we get \(v(x_0) \le \underline{v}^*(x_0)\), which ends the proof in this case.

Thus, we can assume that is infinite. In particular, from above, for \(n \in I\) large enough, is a real. From (T2), we have for n large enough.

Then, from Eq. (4.27), for \(n \in I\) large enough,

(4.28)

If (i.e., \(n \notin I\)), is also satisfied, and it is thus satisfied for every n large enough.

This implies \(v(x_0) \le \underline{v}^*(x_0)\) (take the infimum limit with respect to n, majorize by the supremum with respect to , and finally take the limit when \(\varepsilon \rightarrow 0\)).

This ends the proof of Assertion 1.a in Theorem 4.2.

Step 3 Let us prove Assertion 1.b. in Theorem 4.2. Let \(f \in V\) be such that \(f \le v\) and f satisfies (T1). Recall that \(v=\underline{v}^*=\overline{v}^*\).

Since \(f \le v\), B is non-decreasing and \(Bv=v\), we have \(B^nf \le v\) for every n, and thus, for every \(x_0 \in X\),

$$\begin{aligned} \mathop {\overline{\text {lim}}}\nolimits _{n \rightarrow +\infty } B^n{f}(x_0) \le v(x_0) \end{aligned}$$
(4.29)

From the definition of B, for any \(x_0 \in X\):

$$\begin{aligned} \forall x_1 \in \Gamma (x_0),\forall z_0 \in \Omega (x_0,x_1), \ B^2f(x_0) \ge A(z_0,Bf(x_1)). \end{aligned}$$
(4.30)

Similarly,

$$\begin{aligned} \forall x_2 \in \Gamma (x_1), \forall z_1 \in \Omega (x_1,x_2), \ Bf(x_1) \ge A(z_1,f(x_2)). \end{aligned}$$
(4.31)

By Increasing Monotonicity, for every \(x_1 \in \Gamma (x_0), x_2 \in \Gamma (x_1), z_0 \in \Omega (x_0,x_1)\) and \( z_1 \in \Omega (x_1,x_2)\), one has

$$\begin{aligned} B^2f(x_0) \ge A(z_0,A(z_1,f(x_2))) \end{aligned}$$
(4.32)

And by induction,

(4.33)

Take .

Since \(v(x_0)=\overline{v}^*(x_0)<+\infty \), , thus for n large enough.

If \(v(x_0)=-\infty \), \(\mathop {\overline{\text {lim}}}\nolimits _{n \rightarrow +\infty } B^n{f}(x_0) \ge v(x_0)\), and together with Eq. (4.29), \(\mathop {\overline{\text {lim}}}\nolimits _{n \rightarrow +\infty } B^n{f}(x_0)= v(x_0)\) is proved in this case.

Assume now that \(v(x_0)>-\infty \), and let us prove that for every

(4.34)

If for an infinite number of n, and Eq.  (4.34) is true.

Otherwise, if for n large enough, from above, we can assume that is finite for n large enough, and from Eq. (4.33)

for n large enough. Taking then the supremum limit, we get

Thus from (T1) Assumption satisfied by f, for every we get Eq.  (4.34), which is also true when , because then .

Taking the supremum for in 4.34, we finally get \(\mathop {\overline{\text {lim}}}\nolimits _{n \rightarrow +\infty } B^n{f}(x_0) \ge \underline{v}^*(x_0)=v(x_0)\), and from Eq.  (4.29), Assertion 1.b is proved.

Step 4. We finally prove Assertion 2. Assume that \(\underline{v}\) and \(\overline{v}\) above also satisfy \(\underline{v} \le B\underline{v}\) and \(B\overline{v} \le \overline{v}\). Then, from the Tarski fixed-point theorem applied on \([\underline{v}, \overline{v}]\), B admits a fixed point on \([\underline{v}, \overline{v}]\), and from the first part of Theorem 4.2, this fixed point is equal to the value function.
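As an illustration of the iterative computation behind Assertion 1.b (the convergence of \(B^n f\) to the value function), here is a small numerical sketch of ours. The state space, feasibility correspondence and aggregator below are invented for the example and are far more restrictive than the setting of the paper (finite states, bounded payoffs); the sketch only shows the monotone iteration of the Bellman operator B from a function \(f \le v\).

```python
# Illustrative value iteration B^n f on a toy finite-state problem with a
# non-additive aggregator A(z, v), non-decreasing and uniformly continuous in v.
import math

states = [0, 1, 2]
Gamma = {0: [0, 1], 1: [0, 1, 2], 2: [1, 2]}       # feasible successors (hypothetical)

def ret(x, y):
    # Current return z along the transition x -> y (hypothetical).
    return 1.0 - 0.5 * abs(x - y)

def A(z, v):
    # Toy aggregator: non-decreasing in v, not of the additive form u(z) + beta * v.
    return z + 0.9 * math.log(1.0 + max(v, 0.0))

def bellman(f):
    return {x: max(A(ret(x, y), f[y]) for y in Gamma[x]) for x in states}

f = {x: 0.0 for x in states}                        # starting guess f <= value function
for _ in range(200):
    f = bellman(f)
print(f)                                            # approximate fixed point of B
```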

Appendix 11: Another definition of biconvergence

To prove the equivalence between the two definitions, first assume that U is an A-recursive payoff satisfying Definition 4.4 of biconvergence (the original definition of Streufert). Take and \(\varepsilon >0\). By upper convergence, there exists an integer \(N_1>0\) such that for every \(N \ge N_1\),

thus for every \(\bigl (x'_{N+1},x'_{N+2},\ldots \bigr ) \in \prod _{t=N+1}^{+\infty } \Gamma ^t\bigl (x_0\bigr )\),

$$\begin{aligned} U\bigl (x_1,x_2,\ldots ,x_N,x'_{N+1},x'_{N+2},\ldots \bigr )-U\bigl (x_1,x_2,\ldots ,x_N,x_{N+1},x_{N+2},\ldots \bigr ) \le \varepsilon . \end{aligned}$$

Similarly, lower convergence implies that there exists an integer \(N_2>0\) such that for every \(N \ge N_2\) and every \(\bigl (x'_{N+1},x'_{N+2},\ldots \bigr ) \in \prod _{t=N+1}^{+\infty } \Gamma ^t\bigl (x_0\bigr )\), one has:

$$\begin{aligned} U\bigl (x_1,x_2,\ldots ,x_N,x'_{N+1},x'_{N+2},\ldots \bigr )-U\bigl (x_1,x_2,\ldots ,x_N,x_{N+1},x_{N+2},\ldots \bigr ) \ge -\varepsilon . \end{aligned}$$

Then, take \(N=\max \{N_1,N_2\}\) to get the other definition.

For the converse statement, take , \(\varepsilon >0\), and assume that there exists an integer \(N'>0\) such that for every \(N \ge N'\) and for every \(\bigl (x'_{N+1},x'_{N+2},\ldots \bigr ) \in \prod _{t=N+1}^{+\infty } \Gamma ^t\bigl (x_0\bigr )\), one has:

$$\begin{aligned} \bigl \vert U\bigl (x_1,x_2,\ldots ,x_N,x_{N+1},x_{N+2},\ldots \bigr )-U\bigl (x_1,x_2,\ldots ,x_N,x'_{N+1},x'_{N+2},\ldots \bigr )\bigr \vert \le \frac{\varepsilon }{2}. \end{aligned}$$
(4.35)

By definition,

because \((x_{N+1},x_{N+2},\ldots ) \in \prod _{s=N+1}^{+\infty }\Gamma ^s(x_0).\) In addition, there exists \((x'_{N+1},x'_{N+2},\ldots ) \in \prod _{s=N+1}^{+\infty }\Gamma ^s(x_0)\) such that

$$\begin{aligned} U\bigl (x_1,x_2,\ldots ,x_N,x'_{N+1},x'_{N+2},\ldots \bigr ) \ge \sup U\left( x_1,\ldots ,x_N, \prod _{s=N+1}^{+\infty }\Gamma ^s(x_0)\right) -\frac{\varepsilon }{2} \end{aligned}$$

From Eq. (4.35), we get

$$\begin{aligned} U\bigl (x_1,x_2,\ldots ,x_N,x_{N+1},x_{N+2},\ldots \bigr ) \ge \sup U\left( x_1,\ldots ,x_N, \prod _{s=N+1}^{+\infty }\Gamma ^s(x_0)\right) -{\varepsilon }. \end{aligned}$$

Finally, for every \(N \ge N'\),

$$\begin{aligned} \vert U\bigl (x_1,x_2,\ldots ,x_N,x_{N+1},x_{N+2},\ldots \bigr ) - \sup U\left( x_1,\ldots ,x_N, \prod _{s=N+1}^{+\infty }\Gamma ^s(x_0)\right) \vert \le {\varepsilon } \end{aligned}$$

and we get upper convergence. Similarly, we get lower convergence.

Appendix 12: Proof of Corollary 4.1

We first prove the following Lemma, which provides candidates satisfying the Transversality Assumption.

Lemma 4.2

Under the Null Assumption, let U be biconvergent over \(\prod _{t=1}^{+\infty }\Gamma ^t\bigl (x_0\bigr )\), with \(\Gamma \) being compact-valued. Then:

  1. (i)

    If A is upper semicontinuous, then satisfies Transversality Assumption \((T_1)\).

  2. (ii)

    If A is lower semicontinuous, then satisfies Transversality Assumption \((T_2)\).

  3. (iii)

    For any aggregator A, if , then \(\underline{v}\) satisfies Transversality Assumption \((T_1)\).

    Similarly, if , then \(\overline{v}\) satisfies Transversality Assumption \((T_2)\).

To prove this lemma, we use the following Claim:

Claim

  1. (i)

    If \(f:[-\infty ,+\infty ] \rightarrow [-\infty ,+\infty ]\) is an upper semicontinuous function and g is a real-valued function on a compact metric space M, then

    $$\begin{aligned} f(\inf _{x \in M} g(x)) \ge \inf _{x \in M} f(g(x)). \end{aligned}$$
  2. (ii)

    If \(f:[-\infty ,+\infty ] \rightarrow [-\infty ,+\infty ]\) is a lower semicontinuous function and g is a real-valued function on a compact metric space M, then

    $$\begin{aligned} f\Bigl (\sup _{x \in M} g(x)\Bigr ) \le \sup _{x \in M} f(g(x)). \end{aligned}$$

Proof

By definition, \(\inf _{x \in M} g(x)=\lim _{n \rightarrow +\infty } g(x_n)\) for some sequence \((x_n)\) in M. Without any loss of generality, since M is compact, one can assume that \((x_n)\) converges to some \(x \in M\). Since f is upper semicontinuous,

$$\begin{aligned} f(\inf _{x \in M} g(x)) =f(\lim _{n \rightarrow +\infty } g(x_n)) \ge \mathop {\overline{\text {lim}}}\nolimits _{n \rightarrow +\infty } f(g(x_n)) \ge \inf _{x \in M} f(g(x)). \end{aligned}$$

The proof is similar for (ii). \(\square \)

Now, we prove Lemma 4.2, part (i). Let . One has

This last quantity can be made as small as one wishes for T large enough by biconvergence, which proves that \(\underline{v}\) satisfies \((T_1)\).

The proof of Lemma 4.2, Part (ii) is similar.

Let us prove Lemma 4.2, Part (iii). To show that \(\underline{v}\) satisfies \((T_1)\), follow the proof above. Simply note that \(\underline{v}(x_T)\) can be written for some (depending on T), and we can mimic the proof above without using any infimum. This ends the proof of Lemma 4.2 (since the proof is similar for \(\overline{v}\)).

We finally prove Corollary 4.1.

Define, for every \(x_0 \in X\), \(\overline{v}(x_0)=\max U(\prod \limits _{t=1}^{+\infty }\Gamma ^t(x_0))\), and define \(\underline{v}(x_0)=\inf U(\prod \limits _{t=1}^{+\infty }\Gamma ^t(x_0))\). Let us show that one can apply Theorem 4.2 to prove that \(v^*\) is the unique solution to the Bellman equation.

First, \(\underline{v} \le \overline{v}<+\infty \) (by assumption), and, by definition, for every we have . By Lemma 4.2, \((\underline{v},\overline{v})\) satisfies Transversality Assumption. By Proposition 3.2, .

We shall prove that \(B\overline{v} \le \overline{v}\), and \( B\underline{v} \ge \underline{v}\). First, to prove \(B\overline{v} \le \overline{v}\), let \(x_0 \in X\).

$$\begin{aligned} B\overline{v}(x_0)&=\sup _{x_1\in \Gamma (x_0)}A(x_1,\overline{v}(x_1))=\sup _{x_1\in \Gamma (x_0)}A\Bigl (x_1,\max U\Bigl (\prod \limits _{t=1}^{+\infty }\Gamma ^t(x_1)\Bigr )\Bigr )\\&= \sup _{x_1\in \Gamma (x_0)} \sup _{x_{t+1}\in \Gamma ^t(x_1), \forall t\ge 1} A(x_1,U(x_2,x_3,\ldots ,x_n,\ldots ))\\&= \sup _{x_1\in \Gamma (x_0)} \sup _{x_{t+1}\in \Gamma ^t(x_1), \forall t\ge 1} U(x_1,x_2,\ldots ,x_n,\ldots )\\&\le \sup _{x_{t}\in \Gamma ^t(x_0), \forall t\ge 1} U(x_1,x_2,\ldots ,x_n,\ldots )=\overline{v}(x_0) \end{aligned}$$

Let us show that \(B\underline{v} \ge \underline{v}\).

$$\begin{aligned} B\underline{v}(x_0)&=\sup _{x_1\in \Gamma (x_0)}A(x_1,\underline{v}(x_1))=\sup _{x_1\in \Gamma (x_0)}A\Bigl (x_1,\inf U\Bigl (\prod \limits _{t=1}^{+\infty }\Gamma ^t(x_1)\Bigr )\Bigr )\\&\ge \sup _{x_1\in \Gamma (x_0)} \inf _{(x_2,\ldots ) \in \prod \limits _{t=1}^{+\infty }\Gamma ^t(x_1)} A(x_1,U(x_2,x_3,\ldots ))\ \\&\quad \text{(from } \text{ the } \text{ Claim } \text{ above) }\\&\ge \sup _{x_1\in \Gamma (x_0)} \inf _{(x_2,\ldots ) \in \prod \limits _{t=1}^{+\infty }\Gamma ^t(x_1)} U(x_1,x_2,x_3,\ldots )\ \ge \underline{v}(x_0) \end{aligned}$$

To be able to apply Theorem 4.2, we have finally to prove that A is weakly continuous at \({\overline{v}}\), i.e., that

is upper semicontinuous with respect to the sequence \(\varepsilon \) (here the topology considered on \([0,1]^{\mathbf {N}}\) is the standard product topology). Remark that for every integer \(n \in \mathbf {N}\) now fixed, \(A(z_0,A(z_{1},\ldots ,A(z_{n-1},\overline{v}(x_n))+\varepsilon _{n-1})+{\varepsilon _{n-2}}+\cdots )+{\varepsilon _{2}})+{\varepsilon _{1}})\) is an upper semicontinuous function of the \((n-1)\)-tuple \((\varepsilon _1,\ldots ,\varepsilon _{n-1}) \in \mathbf {R}^{n-1}\), from the upper semicontinuity of A, the upper semicontinuity of \(\overline{v}\) (a consequence of the Berge Maximum Theorem), and from Increasing Monotonicity. Thus, \(A(z_0,A(z_{1},\ldots ,A(z_{n-1},\overline{v}(x_n))+\varepsilon _{n-1})+{\varepsilon _{n-2}}+\cdots )+{\varepsilon _{2}})+{\varepsilon _{1}})\) is also an upper semicontinuous function with respect to the whole sequence \(\varepsilon \) (by definition of the product topology chosen on \([0,1]^{\mathbf {N}}\)). Passing to the infimum with respect to n, one obtains an upper semicontinuous function of \((\varepsilon _k)_{k \ge 1}\) as well. Consequently, and since the feasibility constraint has a closed and compact graph, from the Berge Maximum Theorem, we get that f is upper semicontinuous at \(0\), the null sequence (in fact everywhere).

Thus, we can apply Theorem 4.2, which proves that is the only fixed point of the Bellman operator.

Cite this article

Bich, P., Drugeon, JP. & Morhaim, L. On temporal aggregators and dynamic programming. Econ Theory 66, 787–817 (2018). https://doi.org/10.1007/s00199-017-1045-0
