Comparative impatience under random discounting


Abstract

The random discounting model has been used as a tractable model consistent with preference for flexibility. Taking Goldman (J. Econ. Theory 9:203–222, 1974) as an example, we illustrate that under random discounting, the average time preference and the preference for flexibility may conflict with each other, and their combined effect contributes to revealed impatience. To obtain sharp comparative statics, we ask under what kinds of probability shifts on discount factors one agent can be said to always exhibit more impatient choices than the other, even when both agents have a concern for flexibility. We provide a behavioral definition of impatience comparisons and show that the relative degree of impatience is measured by a probability shift of the random discount factor in the monotone likelihood ratio order.


Notes

  1. For example, Krusell and Smith (1998), Karni and Zilcha (2000), and Chatterjee et al. (2007) use the model to explain heterogeneity in a wealth distribution or a default rate observed in the data. In Goldman (1974), Diamond and Dybvig (1983), Lin (1996), and Camargo and Lester (2014), random discounting is applied to the modeling of liquidity motives.

  2. Alternatively, we can take \(\Delta (C\times C)\), which allows a correlation between consumption across time periods. Since we consider a random discounting model satisfying the separability across time, the agent cares only about marginal distributions of \(L \in \Delta (C\times C)\). Thus, the domain is effectively reduced to \(\Delta (C)\times \Delta (C)\subset \Delta (C\times C)\) with a suitable identification.

  3. In the case of continuous distributions, the same inequality is required for the corresponding density functions.

  4. For this step, we construct a finite menu \(\overline{M}\) which has the same cardinality as the union of supports of \(\mu ^1\) and \(\mu ^2\) and define M and \(M'\) as slight modifications of \(\overline{M}\). Finite Support is required only for this technical step.

  5. Since the former function converges to the latter function as \(\sigma \rightarrow 1\), \(\log c\) is identified as the case of \(\sigma =1\) in the subsequent analysis.

  6. If Assumption 3 is violated, the F.O.C. may not characterize the optimum. For example, if \(\mu \) is degenerate at 0, then the expected marginal utility from money is always positive, which leads to a corner solution at \(m^*=w\).
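To illustrate the last point in the notation of Appendix 2 (this is only an illustration, not part of the footnote): if \(\mu \) puts probability one on \(\alpha =0\), then for every \(m<w\) the realization \(\alpha =0\) lies in \(A_s(\theta ,p\tau )\) with \(\theta (0,p\tau )=0\), so Eq. (7) of Appendix 2 gives

$$\begin{aligned} \int \frac{\partial V}{\partial m}(m,\alpha )\,\mathrm {d}\mu (\alpha ) =(1-\tau )\,u^{\prime }\big (m+\tau (w-m)\big )>0, \end{aligned}$$

and the agent is pushed to the corner \(m^{*}=w\).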

References

  • Aliprantis, C.D., Border, K.C.: Infinite Dimensional Analysis, 3rd edn. Springer, Berlin (2006)

  • Athey, S.: Monotone comparative statics under uncertainty. Q. J. Econ. 117, 187-223 (2002)

  • Benoît, J.P., Ok, E.A.: Delay aversion. Theor. Econ. 2, 71-113 (2007)

  • Camargo, B., Lester, B.: Trading dynamics in decentralized markets with adverse selection. J. Econ. Theory 153, 534-568 (2014)

  • Chatterjee, S., Corbae, D., Nakajima, M., Rios-Rull, J.-V.: A quantitative theory of unsecured consumer credit with risk of default. Econometrica 75, 1525-1589 (2007)

  • Dekel, E., Lipman, B., Rustichini, A.: Representing preferences with a unique subjective state space. Econometrica 69, 891-934 (2001)

  • Dekel, E., Lipman, B., Rustichini, A.: Temptation-driven preferences. Rev. Econ. Stud. 76, 937-971 (2009)

  • Diamond, D.W., Dybvig, P.H.: Bank runs, deposit insurance, and liquidity. J. Polit. Econ. 91, 401-419 (1983)

  • Dillenberger, D., Lleras, J.S., Sadowski, P., Takeoka, N.: A theory of subjective learning. J. Econ. Theory 153, 287-312 (2014)

  • Fishburn, P.C., Porter, R.B.: Optimal portfolios with one safe and one risky asset: effects of changes in rate of return and risk. Manag. Sci. 22, 1064-1073 (1976)

  • Goldman, S.M.: Flexibility and the demand for money. J. Econ. Theory 9, 203-222 (1974)

  • Higashi, Y., Hyogo, K., Takeoka, N.: Subjective random discounting and intertemporal choice. J. Econ. Theory 144, 1015-1053 (2009)

  • Higashi, Y., Hyogo, K., Takeoka, N.: Stochastic endogenous time preference. J. Math. Econ. 51, 77-92 (2014)

  • Horowitz, J.K.: Comparative impatience. Econ. Lett. 38, 25-29 (1992)

  • Karlin, S., Rubin, H.: The theory of decision procedures for distributions with monotone likelihood ratio. Ann. Math. Stat. 27, 272-299 (1956)

  • Karni, E., Zilcha, I.: Saving behavior in stationary equilibrium with random discounting. Econ. Theory 15, 551-564 (2000)

  • Kreps, D.M.: A representation theorem for preference for flexibility. Econometrica 47, 565-578 (1979)

  • Kreps, D.M.: Static choice and unforeseen contingencies. In: Dasgupta, P., Gale, D., Hart, O., Maskin, E. (eds.) Economic Analysis of Markets and Games: Essays in Honor of Frank Hahn, pp. 259-281. MIT Press, Cambridge, MA (1992)

  • Krishna, V., Sadowski, P.: Dynamic preference for flexibility. Econometrica 82, 655-703 (2014)

  • Krusell, P., Smith, A.: Income and wealth heterogeneity in the macroeconomy. J. Polit. Econ. 106, 867-896 (1998)

  • Landsberger, M., Meilijson, I.: Demand for risky financial assets: a portfolio analysis. J. Econ. Theory 50, 204-213 (1990)

  • Lin, P.: Banking, incentive constraints, and demand deposit contracts with nonlinear returns. Econ. Theory 8, 27-39 (1996)

  • Rustichini, A.: Preference for flexibility in infinite horizon problems. Econ. Theory 20, 677-702 (2002)

  • Shaked, M., Shanthikumar, J.G.: Stochastic Orders. Springer, Berlin (2007)


Author information

Correspondence to Norio Takeoka.

Additional information

We would like to thank Itzhak Gilboa, Atsushi Kajii, Hiroyuki Ozaki, Larry Samuelson, the audiences at the Mathematical Economics Workshop 2011 (Doshisha University) and SWET 2012 (Kushiro Public University of Economics), and the seminar participants at Hitotsubashi University, Keio University, Kyoto University, Okayama University, Otaru University of Commerce, and Ryukoku University for their helpful comments. We express sincere gratitude to two anonymous referees whose suggestions on earlier drafts considerably improved the paper. Hyogo acknowledges financial support from Ryukoku University. Higashi gratefully acknowledges financial support from the Joint Research Program of KIER.

Appendices

Appendix 1: Proof of Theorem 1

Since \(\mu ^1 \) and \(\mu ^2\) have finite supports, their union is also a finite set. Denote the union by \(A:=\sigma (\mu ^1)\cup \sigma (\mu ^2) = \left\{ \alpha _1,\ldots ,\alpha _k,\ldots ,\alpha _n\right\} \), where we order the elements so that \(\alpha _{k+1}>\alpha _k\) for all \(k=1,\ldots ,n-1\).

(1) \(\Rightarrow \) (2) We wish to show that for all \(\alpha _j>\alpha _i\), it holds that \(\mu ^1(\alpha _j)\mu ^2(\alpha _i)\ge \mu ^1(\alpha _i)\mu ^2(\alpha _j)\). If \(\mu ^1(\alpha _i)\mu ^2(\alpha _j)=0\), this inequality always holds. Thus, we consider the case where \(\mu ^1(\alpha _i)\mu ^2(\alpha _j)\ne 0\). Seeking a contradiction, suppose that for some \(\alpha _j>\alpha _i\), \(\mu ^1(\alpha _j)\mu ^2(\alpha _i)< \mu ^1(\alpha _i)\mu ^2(\alpha _j)\). In the following, we will construct menus M and \(M'\) that satisfy \(M \rhd M'\), \(M\succ ^1 M'\), and \(M' \succ ^2 M\).

Without loss of generality, we assume \(u(\Delta (C))=[0,1]\). Define \(B\subset [0,1]^2\) by

$$\begin{aligned} B:=\left\{ (u_1, u_2)\in [0,1]^2\,\Big |\, \Vert (u_1,u_2)\Vert \le \frac{1}{2}\right\} , \end{aligned}$$

where \(\Vert \cdot \Vert \) is the Euclidean norm.

Note that for all \(\alpha _k\), there exists a unique point \(\big (u^k_1,u^k_2\big )\in B\) such that \(\Vert \big (u^k_1,u^k_2\big )\Vert = \frac{1}{2}\) and \(\big (1-\alpha _k\big )\big (u_1-u^k_1\big )+\alpha _k\big (u_2-u^k_2\big )=0 \) is a supporting hyperplane to B at \((u^k_1,u^k_2)\), that is, for all \(\big (u_1,u_2\big )\in B\) such that \(\big (u_1,u_2\big )\ne \big (u^k_1,u^k_2\big )\),

$$\begin{aligned} \big (1-\alpha _k\big )\big (u_1-u^k_1\big )+\alpha _k \big (u_2-u^k_2\big )<0. \end{aligned}$$

In particular, for all \(h\ne k\), we have

$$\begin{aligned} \big (1-\alpha _k\big )\big (u^{h}_1-u^{k}_1\big )+ \alpha _k\big (u^{h}_2-u^{k}_2\big )<0. \end{aligned}$$
(4)
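With the Euclidean norm, this supporting point can be written in closed form (the explicit expression is not needed in what follows, but it makes existence and uniqueness concrete):

$$\begin{aligned} \big (u^k_1,u^k_2\big )=\frac{1}{2}\,\frac{\big (1-\alpha _k,\ \alpha _k\big )}{\sqrt{(1-\alpha _k)^2+\alpha _k^2}}. \end{aligned}$$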

Lemma 1

For all \(k=1,\ldots ,n-1\),

$$\begin{aligned} u^{k+1}_1<u^{k}_1, \ \left( \text{ and } \text{ hence } u^{k+1}_2>u^{k}_2\right) . \end{aligned}$$

Proof

By (4), we have

$$\begin{aligned}&\left( 1-\alpha _k\right) \left( u^{k+1}_1-u^{k}_1\right) + \alpha _k\left( u^{k+1}_2-u^{k}_2\right) <0< \left( 1-\alpha _{k+1}\right) \left( u^{k+1}_1-u^{k}_1\right) \\&\quad +\, \alpha _{k+1}\left( u^{k+1}_2-u^{k}_2\right) . \end{aligned}$$

This implies that \(\big (\alpha _{k+1}-\alpha _k\big )\big (u^{k+1}_1-u^{k}_1\big ) < \big (\alpha _{k+1}-\alpha _k\big )\big (u^{k+1}_2-u^{k}_2\big )\), and hence \(u^{k+1}_1-u^{k}_1 < u^{k+1}_2-u^{k}_2\) since \(\alpha _{k+1}>\alpha _k\). If \(u^{k+1}_1\ge u^{k}_1\), then, since both points lie on the boundary of B, we must have \(u^{k+1}_2\le u^{k}_2\), which contradicts the above inequality.

Since

$$\begin{aligned} \frac{\mu ^2(\alpha _i)}{\mu ^2(\alpha _j)} \frac{\mu ^1(\alpha _j)}{\mu ^1(\alpha _i)}< 1, \end{aligned}$$

there exists \(\gamma \in (0,1)\) such that

$$\begin{aligned} \frac{\mu ^2(\alpha _i)}{\mu ^2(\alpha _j)} \frac{\mu ^1(\alpha _j)}{\mu ^1(\alpha _i)} <\gamma <1. \end{aligned}$$

For some \(\epsilon >0\) and \(\delta >0\), define

$$\begin{aligned}&\tilde{u}^{i}_1:= u^{i}_1+ \frac{\epsilon }{1-\alpha _i}\frac{\mu ^1(\alpha _j)}{\mu ^1(\alpha _i)}+\delta , \ \tilde{u}^{i}_2:= u^{i}_2+\delta ,\\&\tilde{u}^{j}_1:= u^{j}_1+\delta , \ \tilde{u}^{j}_2:= u^{j}_2+ \frac{\gamma \epsilon }{\alpha _j}+\delta . \end{aligned}$$

Since \(\mu ^1(\alpha _i)\ne 0\), \(1-\alpha _i>1-\alpha _j\ge 0\) and \(\alpha _j>\alpha _i\ge 0\), \(\tilde{u}^{i}_1\) and \(\tilde{u}^{j}_2\) are well defined. Moreover, for all \(k<i\) and \(k>j\), define

$$\begin{aligned} \tilde{u}^{k}_1:= u^{k}_1+\delta , \ \tilde{u}^{k}_2:= u^{k}_2+\delta . \end{aligned}$$

Since n is finite, we can find a sufficiently small \(\epsilon >0\) and \(\delta >0\) such that for all \(k\le i\) and \(k\ge j\),

$$\begin{aligned} u^{k-1}_1>\tilde{u}^{k}_1>u^{k}_1, \ u^{k+1}_2>\tilde{u}^{k}_2>u^{k}_2, \end{aligned}$$

and

$$\begin{aligned} (1-\alpha _h)\left( \tilde{u}^{k}_1-u^{h}_1\right) + \alpha _h \left( \tilde{u}^{k}_2-u^{h}_2\right) <0, \quad \text{ for } \text{ all } h\ne k. \end{aligned}$$
(5)

Since \(\{u^{k}_1,u^{k}_2\}\), \(k=1,\ldots ,n\), and \(\big \{\tilde{u}^{k}_1,\tilde{u}^{k}_2\big \}\), \(k\le i,k\ge j\), are subsets of [0, 1], there exist corresponding lotteries \(\ell ^{k}_1,\ell ^{k}_2,\tilde{\ell }^{k}_1,\tilde{\ell }^{k}_2\in \Delta (C)\) such that \(u\big (\ell ^k_1\big )=u^{k}_1\), \(u\big (\ell ^{k}_2\big )=u^{k}_2\), \(u\big (\tilde{\ell }^k_1\big )=\tilde{u}^{k}_1\), and \(u\big (\tilde{\ell }^{k}_2\big )=\tilde{u}^{k}_2\). By using these lotteries, consider a menu \(\overline{M}:= \left\{ \big (\ell ^k_1, \ell ^{k}_2\big )| k=1,\ldots ,n\right\} \). Finally, define

$$\begin{aligned} M:=\overline{M}\cup \left\{ \big (\tilde{\ell }^k_1,\tilde{\ell }^k_2\big )\,|\, k\le i\right\} \ \text{ and }\ M':= \overline{M}\cup \left\{ \big (\tilde{\ell }^k_1,\tilde{\ell }^k_2\big )\,|\, k\ge j\right\} . \end{aligned}$$

Now we show that the menus M and \(M'\) defined above yield the desired contradiction. Note first that

$$\begin{aligned} E(M){\setminus } M'&= \left\{ (\tilde{\ell }^k_1,\tilde{\ell }^k_2)\,|\, k\le i\right\} ,\\ E(M'){\setminus } M&= \left\{ (\tilde{\ell }^k_1,\tilde{\ell }^k_2)\,|\, k\ge j\right\} , \ \text{ and },\\ E(M)\cap E(M')&=\left\{ (\ell ^k_1,\ell ^k_2)\,|\, i<k<j\right\} . \end{aligned}$$

By definition of \(\tilde{u}^{k}_1\) and \(\tilde{u}^{k}_2\), for all \(k\le i\), \(k'\ge j\), and \(i<k''<j\),

$$\begin{aligned} \tilde{\ell }^{k}_1 \succ _c {\ell }^{k}_1\succ _c {\ell }^{k''}_1 \succ _c \tilde{\ell }^{k'}_1,\ \text{ and } ,\ \tilde{\ell }^{k'}_2 \succ _c {\ell }^{k'}_2\succ _c {\ell }^{k''}_2 \succ _c \tilde{\ell }^k_2. \end{aligned}$$

Thus we have \(M\rhd M'\).

Then, consider the ranking between M and \(M'\). First, we confirm that \(M \succ ^{1} M'\). It follows from (4) and (5) that for all sufficiently small \(\delta \),

$$\begin{aligned} W^1(M)&-W^1(M') \\&=\sum _{k\ge j}\mu ^1(\alpha _k) ((1-\alpha _k)u^{k}_1+\alpha _ku^{k}_2)+ \sum _{k\le i}\mu ^1(\alpha _k) ((1-\alpha _k)\tilde{u}^{k}_1+\alpha _k\tilde{u}^{k}_2)\\&\qquad -\sum _{k\ge j}\mu ^1(\alpha _k) ((1-\alpha _k)\tilde{u}^{k}_1+\alpha _k\tilde{u}^{k}_2)- \sum _{k\le i}\mu ^1(\alpha _k) ((1-\alpha _k)u^{k}_1+\alpha _ku^{k}_2)\\&=-\mu ^1(\alpha _j)\gamma \epsilon + \mu ^1(\alpha _i) \frac{\mu ^1(\alpha _j)}{\mu ^1(\alpha _i)}\epsilon +\delta \left( -\sum _{k\ge j}\mu ^1(\alpha _k) +\sum _{k\le i}\mu ^1(\alpha _k)\right) \\&=\mu ^1(\alpha _j)\epsilon (1-\gamma ) +\delta \left( -\sum _{k\ge j}\mu ^1(\alpha _k) +\sum _{k\le i}\mu ^1(\alpha _k)\right) >0. \end{aligned}$$

Finally, for all sufficiently small \(\delta \), we see that \(M'\succ ^{2} M\) because

$$\begin{aligned}&W^2(M')-W^2(M) \\&\quad =\sum _{k\ge j}\mu ^2(\alpha _k) ((1-\alpha _k)\tilde{u}^{k}_1+\alpha _k\tilde{u}^{k}_2)+ \sum _{k\le i}\mu ^2(\alpha _k) ((1-\alpha _k)u^{k}_1+\alpha _ku^{k}_2)\\&\qquad -\sum _{k\ge j}\mu ^2(\alpha _k) ((1-\alpha _k)u^{k}_1+\alpha _ku^{k}_2)- \sum _{k\le i}\mu ^2(\alpha _k) ((1-\alpha _k)\tilde{u}^{k}_1+\alpha _k\tilde{u}^{k}_2)\\&\quad =\mu ^2(\alpha _j)\gamma \epsilon -\mu ^2(\alpha _i) \frac{\mu ^1(\alpha _j)}{\mu ^1(\alpha _i)}\epsilon +\delta \left( \sum _{k\ge j}\mu ^2(\alpha _k) -\sum _{k\le i}\mu ^2(\alpha _k)\right) \\&\quad =\mu ^2(\alpha _j)\epsilon \left( \gamma - \frac{\mu ^2(\alpha _i)}{\mu ^2(\alpha _j)} \frac{\mu ^1(\alpha _j)}{\mu ^1(\alpha _i)} \right) +\delta \left( \sum _{k\ge j}\mu ^2(\alpha _k) -\sum _{k\le i}\mu ^2(\alpha _k)\right) >0. \end{aligned}$$

(2) \(\Rightarrow \) (1) We first show that if \(M\rhd M'\), \(V(M,\alpha )-V(M',\alpha )\) is single crossing in \(\alpha \).

Lemma 2

For any \(M\rhd M'\) and any i, if

$$\begin{aligned} V(M,\alpha _{i+1}) - V(M',\alpha _{i+1})\ge (>) 0, \end{aligned}$$

then

$$\begin{aligned} V(M,\alpha _{i}) - V(M',\alpha _{i})\ge (>) 0. \end{aligned}$$

Proof

Pick any \(M\rhd M'\) and, for each index, let

$$\begin{aligned} l^i\in \arg \max _{l\in M}\,(1-\alpha _i)u(l_1) + \alpha _i u(l_2) \end{aligned}$$

and

$$\begin{aligned} l^{'i}\in \arg \max _{l\in M'}\,(1-\alpha _i)u(l_1) + \alpha _i u(l_2), \end{aligned}$$

so that, in particular, \(l^{i+1}\) is an \(\alpha _{i+1}\)-optimal element of M.

If strict domination holds, the conclusion is immediate: if \(u(l_t^{i+1})> u(l_t^{'i})\) for \(t=1,2\), then \(V(M,\alpha _i)>V(M',\alpha _i)\), and if \(u(l_t^{'i})> u(l_t^{i+1})\) for \(t=1,2\), then \(V(M',\alpha _{i+1})>V(M,\alpha _{i+1})\). Thus, the remaining cases are those in which \((u(l_1^{i+1}),u(l_2^{i+1}))\) and \((u(l_1^{'i}),u(l_2^{'i}))\) are not comparable by strict dominance. Hence, it is enough to verify two cases: (i) \(l^{'i}\rhd l^{i+1}\) and (ii) \(l^{i+1}\rhd l^{'i}\).

Consider the first case: \(l^{'i}\rhd l^{i+1}\). We shall show \(V(M,\alpha _{i+1})\le V(M',\alpha _{i+1})\) and \(V(M,\alpha _i) \ge V(M',\alpha _i)\). Notice that \(l^{i+1}\in E(M)\) and \(l^{'i}\in E(M')\). Since \(E(M){\setminus } M' \rhd E(M)\cap E(M')\rhd E(M'){\setminus } M\), if either \(l^{i+1}\in E(M){\setminus } M'\) or \(l^{'i}\in E(M'){\setminus } M\), then we also have \(l^{i+1}\rhd l^{'i}\); together with \(l^{'i}\rhd l^{i+1}\), this yields \(u(l_1^{i+1}) = u(l_1^{'i})\) and \(u(l_2^{i+1}) = u(l_2^{'i})\), which implies \(V(M,\alpha _{i+1})\le V(M',\alpha _{i+1})\) and \(V(M,\alpha _i) \ge V(M',\alpha _i)\). If both \(l^{'i}\) and \(l^{i+1}\) are in \(E(M)\cap E(M')\), then again \(V(M,\alpha _{i+1})\le V(M',\alpha _{i+1})\) and \(V(M,\alpha _i) \ge V(M',\alpha _i)\), as desired.

Consider the second case: \(l^{i+1}\rhd l^{'i}\). For each i, let \(\Delta _i := \alpha _{i+1} - \alpha _i>0\). Then

$$\begin{aligned} V(M,\alpha _{i}) - V(M',\alpha _{i})&\ge (1-\alpha _{i})(u(l_1^{i+1})- u(l_1^{'i})) + \alpha _{i}(u(l_2^{i+1}) - u(l_2^{'i})) \nonumber \\&=(1-(\alpha _{i+1}-\Delta _i))(u(l_1^{i+1})- u(l_1^{'i})) + (\alpha _{i+1}-\Delta _i)(u(l_2^{i+1}) - u(l_2^{'i})) \nonumber \\&\ge V(M,\alpha _{i+1}) - V(M',\alpha _{i+1}) + \Delta _i\big (u(l_1^{i+1}) - u(l_1^{'i}) - u(l_2^{i+1}) + u(l_2^{'i})\big )\nonumber \\&\ge V(M,\alpha _{i+1}) - V(M',\alpha _{i+1}), \end{aligned}$$
(6)

where the last inequality uses \(l^{i+1}\rhd l^{'i}\).

Hence, the conclusion follows from inequalities (6).\(\square \)

Suppose that \(\mu ^1\ge _{MLR}\mu ^2\) and let

$$\begin{aligned} A_{-}:= & {} \left\{ \left. \alpha _{k}\in A\right| V(M,\alpha _{k})-V(M',\alpha _{k})<0\right\} , \\ A_{0}:= & {} \left\{ \left. \alpha _{j}\in A\right| V(M,\alpha _{j})-V(M',\alpha _{j})=0\right\} , \\ \text { and }A_{+}:= & {} \left\{ \left. \alpha _{i}\in A\right| V(M,\alpha _{i})-V(M',\alpha _{i})>0\right\} . \end{aligned}$$

Note that by Lemma 2, we have \(A_+ < A_0 < A_-\) and \(A = A_+ \cup A_0 \cup A_-\). By the property of the MLR, we also have

$$\begin{aligned} \sigma (\mu ^2){\setminus } \sigma (\mu ^1) <\sigma (\mu ^2)\cap \sigma (\mu ^1) <\sigma (\mu ^1){\setminus } \sigma (\mu ^2). \end{aligned}$$

Then, note that if \(\left( \sigma (\mu ^2){\setminus } \sigma (\mu ^1)\right) \cap A_-\ne \emptyset \), the orderings above imply \(\sigma (\mu ^1)\subset A_-\), and hence \(W^1(M) - W^1(M') < 0\); in this case, there is nothing to show. Similarly, if \(\left( \sigma (\mu ^1){\setminus } \sigma (\mu ^2)\right) \cap A_+\ne \emptyset \), then \(\sigma (\mu ^2)\subset A_+\), so \(W^2(M)-W^2(M')>0\) and the conclusion holds. Thus, for the remaining cases, suppose that both intersections are empty and let \(\bar{i}:= \sup \{i\,|\,\alpha _i\in A_+\cup A_0\}\). Since

$$\begin{aligned} W^2(M) - W^2(M') =&\sum _{i=1}^n(V(M,\alpha _i) - V(M',\alpha _i))\mu ^2(\alpha _i) \nonumber \\ =&\sum _{\alpha _i<\sigma (\mu ^1)}(V(M,\alpha _i) - V(M',\alpha _i))\mu ^2(\alpha _i) \nonumber \\&+ \sum _{\alpha _i\in \sigma (\mu ^1)\cap \sigma (\mu ^2)}(V(M,\alpha _i) - V(M',\alpha _i)) \frac{\mu ^2(\alpha _i)}{\mu ^1(\alpha _i)}\mu ^1(\alpha _i)\\ \ge&\sum _{\alpha _i\in \sigma (\mu ^1)\cap \sigma (\mu ^2)}(V(M,\alpha _i) - V(M',\alpha _i)) \frac{\mu ^2(\alpha _i)}{\mu ^1(\alpha _i)}\mu ^1(\alpha _i)\\ \ge&\sum _{\alpha _i\in \sigma (\mu ^1)\cap \sigma (\mu ^2)}(V(M,\alpha _i) - V(M',\alpha _i)) \frac{\mu ^2(\alpha _{\bar{i}})}{\mu ^1(\alpha _{\bar{i}})}\mu ^1(\alpha _i)\\ \ge&\sum _{\alpha _i\in \sigma (\mu ^1)\cap \sigma (\mu ^2)}(V(M,\alpha _i) - V(M',\alpha _i)) \frac{\mu ^2(\alpha _{\bar{i}})}{\mu ^1(\alpha _{\bar{i}})}\mu ^1(\alpha _i)\\&+\sum _{\alpha _i> \sigma (\mu ^2)}(V(M,\alpha _i) - V(M',\alpha _i))\frac{\mu ^2(\alpha _{\bar{i}})}{\mu ^1(\alpha _{\bar{i}})}\mu ^1(\alpha _i)\\ =&\frac{\mu ^2(\alpha _{\bar{i}})}{\mu ^1(\alpha _{\bar{i}})} (W^1(M) - W^1(M')), \end{aligned}$$

it follows that \(M\succsim ^1 M'\) implies \(M\succsim ^2 M'\), and likewise \(M\succ ^1 M'\) implies \(M\succ ^2 M'\).
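As an aside, condition (2) of Theorem 1 is straightforward to verify directly for finitely supported \(\mu ^1,\mu ^2\): it amounts to the pairwise inequality \(\mu ^1(\alpha _j)\mu ^2(\alpha _i)\ge \mu ^1(\alpha _i)\mu ^2(\alpha _j)\) for all \(\alpha _j>\alpha _i\) used at the start of the proof. A minimal sketch (not part of the original argument; the example distributions are arbitrary):

```python
def mlr_dominates(mu1, mu2):
    """mu1 >=_MLR mu2 for finitely supported distributions given as
    dicts mapping alpha -> probability (condition (2) of Theorem 1)."""
    support = sorted(set(mu1) | set(mu2))
    for i, a_i in enumerate(support):
        for a_j in support[i + 1:]:
            if mu1.get(a_j, 0.0) * mu2.get(a_i, 0.0) < mu1.get(a_i, 0.0) * mu2.get(a_j, 0.0):
                return False
    return True

# Example: a likelihood-ratio shift toward higher discount factors.
mu2 = {0.2: 0.5, 0.5: 0.3, 0.8: 0.2}
mu1 = {0.2: 0.2, 0.5: 0.3, 0.8: 0.5}
assert mlr_dominates(mu1, mu2) and not mlr_dominates(mu2, mu1)
```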

Appendix 2: Proof of Proposition 2

1.1 Ex-post choice

As a preliminary result, we follow Goldman's argument to characterize ex-post choices, assuming that u is a CRRA utility function.

Let \(U_k\) denote the partial derivative of U with respect to \(c_k\). For all consumption streams \((c_1,c_2)\), let the marginal rate of substitution at \((c_1,c_2)\) be denoted by

$$\begin{aligned} R\left( \frac{c_{2}}{c_{1}},\alpha \right) := \frac{U_{1}}{U_{2}}=\frac{1-\alpha }{ \alpha }\left( \frac{c_{2}}{c_{1}}\right) ^{\sigma }. \end{aligned}$$

Since the utility function over consumption streams is homothetic, this rate depends only on the consumption ratio, \(c_2/c_1\). Note also that \(R(\frac{c_{2}}{c_{1}},\alpha )\) is decreasing in \(\alpha \).

We explicitly calculate the ex-post optimal consumption \((c_{1},c_{2})\). As shown in Fig. 5, the consumer may change her initial plan \(\theta =\frac{b}{m}\) depending on the realized \(\alpha \). For \(\theta =\frac{b}{m}\), define

$$\begin{aligned} A_i(\theta )&:=\{\alpha \,|\, R(\theta ,\alpha )<1\}, \\ A_s(\theta ,p\tau )&:=\left\{ \alpha \,|\, R(\theta ,\alpha )>\frac{1}{p\tau }\right\} , \\ A(\theta ,p\tau )&:=[0,1]{\setminus } (A_i(\theta )\cup A_s(\theta ,p\tau )). \end{aligned}$$

If the realized \(\alpha \) belongs to region \(A_i(\theta )\), the consumer alters her initial plan and carries the unspent money into period 2. If \(\alpha \) belongs to \(A_s(\theta ,p\tau )\), she sells bonds and consumes more than m in period 1. Otherwise, she adheres to her initial plan.

If \(\alpha \in A_{i}(\theta )\), the optimality condition is

$$\begin{aligned} R\left( \frac{c_{2}}{c_{1}},\alpha \right) =1. \end{aligned}$$

Denote the optimal ratio \(c_{2}/c_{1}\) by \(\theta (\alpha ,1)\). That is, \(\theta (\alpha ,1)\) satisfies

$$\begin{aligned} \frac{1-\alpha }{\alpha }\theta (\alpha ,1)^{\sigma }=1\ \Leftrightarrow \theta (\alpha ,1)=\left( \frac{\alpha }{1-\alpha }\right) ^{\frac{1}{\sigma }}. \end{aligned}$$

Similarly, if \(\alpha \in A_{s}(\theta ,p\tau )\), the optimality condition is

$$\begin{aligned} R\left( \frac{c_{2}}{c_{1}},\alpha \right) =\frac{1}{p\tau }. \end{aligned}$$

Denote the optimal ratio \(c_{2}/c_{1}\) by \(\theta (\alpha ,p\tau )\), which satisfies

$$\begin{aligned} \frac{1-\alpha }{\alpha }\theta (\alpha ,p\tau )^{\sigma }=\frac{1}{p\tau }\ \Leftrightarrow \theta (\alpha ,p\tau )=\left( \frac{1}{p\tau }\right) ^{ \frac{1}{\sigma }}\left( \frac{\alpha }{1-\alpha }\right) ^{\frac{1}{\sigma } }. \end{aligned}$$

Therefore, the ex-post optimal consumption \((c_1,c_2)\) is given as follows: for given \((m, b)\) and \(\theta =\frac{b}{m}\),

$$\begin{aligned} (c_1,c_2)= \left\{ \begin{array}{ll} \displaystyle \left( \ \frac{pm+w-m}{p(1+\theta (\alpha ,1))} , \ \theta (\alpha ,1)c_1 \ \right) &{}\quad \text{ if } \alpha \in A_i(\theta )\\ \left( m, \ b\right) &{}\quad \text{ if } \alpha \in A(\theta ,p\tau )\\ \displaystyle \left( \ \frac{m+\tau (w-m)}{1+p\tau \theta (\alpha ,p\tau )}, \ \theta (\alpha ,p\tau )c_1\ \right) &{}\quad \text{ if } \alpha \in A_s(\theta ,p\tau ). \end{array}\right. \end{aligned}$$

Each region of \(\alpha \) is characterized specifically as follows:

Lemma 3

For any \(\theta =\frac{b}{m}\ge 0\),

$$\begin{aligned} A_{s}(\theta ,p\tau )=\left[ 0,\frac{p\tau \theta ^{\sigma }}{1+p\tau \theta ^{\sigma }}\right) ,\ A(\theta ,p\tau )=\left[ \frac{p\tau \theta ^{\sigma } }{1+p\tau \theta ^{\sigma }},\frac{\theta ^{\sigma }}{1+\theta ^{\sigma }} \right] ,\ A_{i}(\theta )=\left( \frac{\theta ^{\sigma }}{1+\theta ^{\sigma } },1\right] . \end{aligned}$$

Proof

Fix any \(\theta \ge 0\). Then,

$$\begin{aligned}&R(\theta ,\alpha )\le 1\ \Leftrightarrow \ \frac{1-\alpha }{\alpha } \theta ^{\sigma }\le 1\ \Leftrightarrow \ \alpha \ge \frac{\theta ^{\sigma }}{1+\theta ^{\sigma }}, \\&R(\theta ,\alpha )\ge \frac{1}{p\tau }\ \Leftrightarrow \ \frac{1-\alpha }{\alpha }\theta ^{\sigma }\ge \frac{1}{p\tau }\ \Leftrightarrow \ \alpha \le \frac{p\tau \theta ^{\sigma }}{1+p\tau \theta ^{\sigma }}. \end{aligned}$$

\(\square \)
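For concreteness, the case distinction above can be turned into a small computational sketch (not part of the original text). It assumes CRRA utility with parameter \(\sigma \), the identity \(b=(w-m)/p\) implicit in the budget expressions above, and illustrative parameter values with \(0<p<1\) and \(0<\tau <1\):

```python
def expost_consumption(m, alpha, w=1.0, p=0.8, tau=0.7, sigma=0.8):
    """Ex-post optimal (c1, c2) for given money holdings 0 < m < w and a
    realized 0 <= alpha < 1, following the case distinction above."""
    b = (w - m) / p
    theta = b / m
    a_s_sup = p * tau * theta**sigma / (1 + p * tau * theta**sigma)  # sup of A_s
    a_i_inf = theta**sigma / (1 + theta**sigma)                      # inf of A_i
    if alpha > a_i_inf:
        # A_i: carry unspent money into period 2; th = theta(alpha, 1).
        th = (alpha / (1 - alpha))**(1.0 / sigma)
        c1 = (p * m + w - m) / (p * (1 + th))
        return c1, th * c1
    if alpha < a_s_sup:
        # A_s: sell bonds and consume more than m today; th = theta(alpha, p*tau).
        th = (1.0 / (p * tau))**(1.0 / sigma) * (alpha / (1 - alpha))**(1.0 / sigma)
        c1 = (m + tau * (w - m)) / (1 + p * tau * th)
        return c1, th * c1
    return m, b  # A: stick to the initial plan
```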

The marginal utility from money is characterized as follows:

$$\begin{aligned} \frac{\partial V}{\partial m}(m,\alpha )=\left\{ \begin{array}{ll} \displaystyle U_{1}\frac{p-1}{p+p\theta (\alpha ,1)}+U_{2}\frac{\theta (\alpha ,1)(p-1)}{p+p\theta (\alpha ,1)} &{} \quad \text{ if } \alpha \in A_{i}(\theta )\\ \displaystyle U_{1}-\frac{1}{p}U_{2}&{}\quad \text{ if } \alpha \in A(\theta ,p\tau )\\ \displaystyle U_{1}\frac{1-\tau }{1+p\tau \theta (\alpha ,p\tau )}+U_{2}\frac{\theta (\alpha ,p\tau )(1-\tau )}{1+p\tau \theta (\alpha ,p\tau )}&{} \quad \text{ if } \alpha \in A_{s}(\theta ,p\tau ). \end{array} \right. \end{aligned}$$

Note that if \(\alpha \in A_{i}(\theta )\), \(U_{1}=U_{2}\), and if \(\alpha \in A_{s}(\theta ,p\tau )\), \(U_{2}=p\tau U_{1}\) from the optimal condition. Thus, we have

$$\begin{aligned}&U_{1}\frac{p-1}{p+p\theta (\alpha ,1)}+U_{2}\frac{\theta (\alpha ,1)(p-1)}{p+p\theta (\alpha ,1)}=\frac{p-1}{p}U_{1} \quad \text{ if } \alpha \in A_{i}(\theta ),\\&U_{1}\frac{1-\tau }{1+p\tau \theta (\alpha ,p\tau )}+U_{2}\frac{\theta (\alpha ,p\tau )(1-\tau )}{1+p\tau \theta (\alpha ,p\tau )}=(1-\tau )U_{1} \quad \text{ if } \alpha \in A_{s}(\theta ,p\tau ). \end{aligned}$$

Hence,

$$\begin{aligned} \frac{\partial V}{\partial m}(m,\alpha )= \left\{ \begin{array}{ll} \displaystyle \frac{p-1}{p}(1-\alpha )u^{\prime }(c_{1}) &{} \quad \text{ if } \alpha \in A_{i}(\theta )\\ \displaystyle (1-\alpha )u^{\prime }(m)-\frac{1}{p}\alpha u^{\prime }(b)&{} \quad \text{ if } \alpha \in A(\theta ,p\tau )\\ (1-\tau )(1-\alpha )u^{\prime }(c_{1})&{} \quad \text{ if } \alpha \in A_{s}(\theta ,p\tau ). \end{array} \right. \end{aligned}$$
(7)

It is straightforward that \(\frac{\partial V}{\partial m}(m,\alpha )\) is decreasing in \(\alpha \) on \(A(\theta , p\tau )\) because

$$\begin{aligned} \frac{\partial V}{\partial m}(m,\alpha )= u^{\prime }(m)-\alpha \left( u^{\prime }(m)+\frac{u^{\prime }(b)}{p}\right) . \end{aligned}$$

We investigate whether \(\frac{\partial V}{\partial m}(m,\alpha )\) is increasing or decreasing in \(\alpha \) on \(A_{i}(\theta )\) and \(A_{s}(\theta ,p\tau )\), respectively. First, take any \(\alpha \in A_i(\theta )\). Then

$$\begin{aligned} U_1=(1-\alpha )u^{\prime }(c_1)=(1-\alpha )u^{\prime }\left( \frac{pm+w-m}{ p(1+\theta (\alpha ,1))}\right) . \end{aligned}$$

By taking the derivative with respect to \(\alpha \),

$$\begin{aligned} \frac{\partial U_1}{\partial \alpha }&= -u^{\prime }(c_1)+(1-\alpha )u^{\prime \prime }(c_1) \frac{-(pm+w-m)p\theta ^{\prime }(\alpha ,1)}{ p^2(1+\theta (\alpha ,1))^2} \\&= -u^{\prime }(c_1)\left( 1-(1-\alpha )\frac{-c_1u^{\prime \prime }(c_1)}{ u^{\prime }(c_1)}\frac{\theta ^{\prime }(\alpha ,1)}{1+\theta (\alpha ,1)}\right) \\&= -u^{\prime }(c_1)\left( 1-(1-\alpha )\sigma \frac{\theta ^{\prime }(\alpha ,1) }{1+\theta (\alpha ,1)} \right) \\&= -u^{\prime }(c_1)\left( 1-\frac{1}{\alpha }\frac{\theta (\alpha ,1)}{ 1+\theta (\alpha ,1)} \right) . \end{aligned}$$

Thus,

$$\begin{aligned} \frac{\partial U_1}{\partial \alpha }<0&\Leftrightarrow 1-\frac{1}{\alpha } \frac{\theta (\alpha ,1)}{1+\theta (\alpha ,1)}>0 \\&\Leftrightarrow \alpha >\frac{\alpha ^{\frac{1}{\sigma }}}{(1-\alpha )^{\frac{1 }{\sigma }} +\alpha ^{\frac{1}{\sigma }}}=:h(\alpha ). \end{aligned}$$

Note that \(h(0)=0\), \(h(1/2)=1/2\), and \(h(1)=1\). Moreover, if \(\sigma <1\), then \(\lim _{\alpha \rightarrow 0}h^{\prime }(\alpha )= \lim _{\alpha \rightarrow 1}h^{\prime }(\alpha )=0\). If \(\sigma >1\), then \(\lim _{\alpha \rightarrow 0}h^{\prime }(\alpha )= \lim _{\alpha \rightarrow 1}h^{\prime }(\alpha )=\infty \). If \(\sigma =1\), \( h(\alpha )=\alpha \). Therefore,

$$\begin{aligned} \sigma <1&\Rightarrow \ \frac{\partial U_1}{\partial \alpha } \gtrless 0 \ \quad \text{ if } \alpha \gtrless \frac{1}{2}, \\ \sigma >1&\Rightarrow \ \frac{\partial U_1}{\partial \alpha } \lessgtr 0 \ \quad \text{ if } \alpha \gtrless \frac{1}{2}, \\ \sigma =1&\Rightarrow \ \frac{\partial U_1}{\partial \alpha }=0 \ \quad \text{ for } \text{ all } \alpha . \end{aligned}$$
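A compact way to restate this sign pattern is to substitute \(\theta (\alpha ,1)\) back into \(U_1\): writing \(K:=\frac{pm+w-m}{p}\), so that \(c_1=\frac{K}{1+\theta (\alpha ,1)}\),

$$\begin{aligned} U_1=(1-\alpha )\,c_1^{-\sigma } =K^{-\sigma }\left( (1-\alpha )^{\frac{1}{\sigma }}+\alpha ^{\frac{1}{\sigma }}\right) ^{\sigma }, \end{aligned}$$

which is symmetric around \(\alpha =1/2\) and is minimized there when \(\sigma <1\) and maximized there when \(\sigma >1\).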

Similarly, take any \(\alpha \in A_{s}(\theta ,p\tau )\). Then

$$\begin{aligned} U_{1}=(1-\alpha )u^{\prime }(c_{1})=(1-\alpha )u^{\prime }\left( \frac{ m+\tau (w-m)}{1+p\tau \theta (\alpha ,p\tau )}\right) . \end{aligned}$$

By taking the derivative,

$$\begin{aligned} \frac{\partial U_{1}}{\partial \alpha }&=-u^{\prime }(c_{1})+(1-\alpha )u^{\prime \prime }(c_{1})\frac{-(m+\tau (w-m))p\tau \theta ^{\prime }(\alpha ,p\tau )}{(1+p\tau \theta (\alpha ,p\tau ))^{2}} \\&=-u^{\prime }(c_{1})\left( 1-(1-\alpha )\frac{-c_{1}u^{\prime \prime }(c_{1})}{u^{\prime }(c_{1})}\frac{p\tau \theta ^{\prime }(\alpha ,p\tau )}{ 1+p\tau \theta (\alpha ,p\tau )}\right) \\&=-u^{\prime }(c_{1})\left( 1-(1-\alpha )\sigma \frac{p\tau \theta ^{\prime }(\alpha ,p\tau )}{1+p\tau \theta (\alpha ,p\tau )}\right) \\&=-u^{\prime }(c_{1})\left( 1-\frac{1}{\alpha }\frac{p\tau \theta (\alpha ,p\tau )}{1+p\tau \theta (\alpha ,p\tau )}\right) . \end{aligned}$$

Thus,

$$\begin{aligned} \frac{\partial U_{1}}{\partial \alpha }<0&\Leftrightarrow 1-\frac{1}{\alpha }\frac{p\tau \theta (\alpha ,p\tau )}{1+p\tau \theta (\alpha ,p\tau )}>0 \\&\Leftrightarrow \alpha >\frac{(p\tau )^{1-\frac{1}{\sigma }}\alpha ^{\frac{ 1}{\sigma }}}{(1-\alpha )^{\frac{1}{\sigma }}+(p\tau )^{1-\frac{1}{\sigma } }\alpha ^{\frac{1}{\sigma }}}=: g(\alpha ). \end{aligned}$$

Note that \(g(0)=0\), \(g(\frac{p\tau }{1+p\tau })=\frac{p\tau }{1+p\tau }\), and \(g(1)=1\). Moreover, if \(\sigma <1\), then \(\lim _{\alpha \rightarrow 0}g^{\prime }(\alpha )=\lim _{\alpha \rightarrow 1}g^{\prime }(\alpha )=0\). If \(\sigma >1\), then \(\lim _{\alpha \rightarrow 0}g^{\prime }(\alpha )=\lim _{\alpha \rightarrow 1}g^{\prime }(\alpha )=\infty \). If \(\sigma =1\), \( g(\alpha )=\alpha \). Therefore,

$$\begin{aligned} \sigma <1&\Rightarrow \ \frac{\partial U_{1}}{\partial \alpha }\gtrless 0\ \quad \text{ if } \alpha \gtrless \frac{p\tau }{1+p\tau }, \\ \sigma >1&\Rightarrow \ \frac{\partial U_{1}}{\partial \alpha }\lessgtr 0\ \quad \text{ if } \alpha \gtrless \frac{p\tau }{1+p\tau }, \\ \sigma =1&\Rightarrow \ \frac{\partial U_{1}}{\partial \alpha }=0\ \quad \text{ for } \text{ all } \alpha . \end{aligned}$$
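The same substitution works on \(A_{s}(\theta ,p\tau )\): with \(L:=m+\tau (w-m)\) and \(c_1=\frac{L}{1+p\tau \theta (\alpha ,p\tau )}\),

$$\begin{aligned} U_1=(1-\alpha )\,c_1^{-\sigma } =L^{-\sigma }\left( (1-\alpha )^{\frac{1}{\sigma }} +(p\tau )^{1-\frac{1}{\sigma }}\alpha ^{\frac{1}{\sigma }}\right) ^{\sigma }, \end{aligned}$$

whose unique interior critical point is \(\alpha =\frac{p\tau }{1+p\tau }\), a minimum when \(\sigma <1\) and a maximum when \(\sigma >1\), in line with the cases above.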

1.2 Characterization of optimal money holdings

In this subsection, we ensure that an optimal level of money holdings \(m^*\) is characterized by the F.O.C.,

$$\begin{aligned} \int \frac{\partial V}{\partial m}(m^*,\alpha )\,\mathrm {d}\mu (\alpha )=0. \end{aligned}$$

Goldman (1974) uses this characterization under the assumption that the distribution \(\mu \) of random discounting admits a density function. It is verified below that under Assumption 3, the first-order approach is valid for more general distributions (see footnote 6).

Lemma 4

(i) \(\frac{\partial V}{\partial m}(m,\alpha )\) is continuous in \(\alpha \).

(ii) \(\frac{\partial V}{\partial m}(m,\alpha )\) is continuous in m.

Proof

(i) It suffices to show that \(\frac{\partial V}{\partial m}(m,\alpha )\) is continuous on \(A_{i}(\theta )\cup A(\theta ,p\tau )\) and on \(A(\theta ,p\tau )\cup A_{s}(\theta ,p\tau )\), respectively. For the first part, we show that \(\frac{p-1}{p}U_{1}\) and \(U_{1}-\frac{1}{p}U_{2}\) coincide at \(\alpha =\frac{\theta ^{\sigma }}{ 1+\theta ^{\sigma }}\). At this \(\alpha \), \(U_{1}-\frac{1}{p}U_{2}=(1-\alpha )u^{\prime }(m)-\frac{1}{p}\alpha u^{\prime }(b)\). Take any sequence \(\alpha ^{n}\rightarrow \frac{\theta ^{\sigma }}{1+\theta ^{\sigma }}\) with \(\alpha ^{n}\in A_{i}(\theta )\). Since \(\alpha ^{n}\in A_{i}(\theta )\), \( U_{1}(c_{1}^{n})=U_{2}(c_{2}^{n})\), where

$$\begin{aligned} c_{1}^{n}=\frac{pm+w-m}{p(1+\theta (\alpha ^{n},1))},\ c_{2}^{n}=\theta (\alpha ^{n},1)c_{1}^{n}. \end{aligned}$$

Since

$$\begin{aligned} \theta (\alpha ^{n},1)=\left( \frac{\alpha ^{n}}{1-\alpha ^{n}}\right) ^{ \frac{1}{\sigma }}\rightarrow \left( \frac{\frac{\theta ^{\sigma }}{1+\theta ^{\sigma }}}{1-\frac{\theta ^{\sigma }}{1+\theta ^{\sigma }}}\right) ^{\frac{ 1}{\sigma }}=\theta , \end{aligned}$$

we have

$$\begin{aligned} c_{1}^{n}\rightarrow \frac{pm+w-m}{p(1+\theta )}=\frac{pm+w-m}{p(1+\frac{b}{m})}=m,\ \text{ and }\ c_{2}^{n}\rightarrow \theta m=\frac{b}{m}m=b. \end{aligned}$$

Thus,

$$\begin{aligned} \frac{p-1}{p}U_{1}(c_{1}^{n})=U_{1}(c_{1}^{n})-\frac{1}{p} U_{2}(c_{2}^{n})\rightarrow U_{1}(m)-\frac{1}{p}U_{2}(b)=(1-\alpha )u^{\prime }(m)-\frac{\alpha }{p}u^{\prime }(b), \end{aligned}$$

as desired. A similar argument can be applied to show the second part, continuity on \(A(\theta ,p\tau )\cup A_{s}(\theta ,p\tau )\).

(ii) Fix \(\alpha \in (0,1)\) arbitrarily. From Lemma 3 and (7),

$$\begin{aligned} \frac{\partial V}{\partial m}(m,\alpha )= \left\{ \begin{array}{ll} \displaystyle \frac{p-1}{p}(1-\alpha )u^{\prime }(c_{i1}) &{}\quad \text{ if } \frac{b}{m}=\theta \in \Theta _{i}(\alpha )\\ \displaystyle (1-\alpha )u^{\prime }(m)-\frac{1}{p}\alpha u^{\prime }(b)&{} \quad \text{ if } \frac{b}{m}=\theta \in \Theta (\alpha ,p\tau )\\ (1-\tau )(1-\alpha )u^{\prime }(c_{s1})&{} \quad \text{ if } \frac{b}{m}=\theta \in \Theta _{s}(\alpha , p\tau ), \end{array} \right. \end{aligned}$$
(8)

where

$$\begin{aligned} c_{i1}=\frac{pm+w-m}{p(1+\theta (\alpha ,1))}, \ c_{s1}=\frac{m+\tau (w-m)}{1+p\tau \theta (\alpha ,p\tau )} \end{aligned}$$

and

$$\begin{aligned} \Theta _{i}(\alpha )= & {} \left[ 0, \frac{\alpha ^{\frac{1}{\sigma }}}{(1-\alpha )^{\frac{1}{\sigma }}} \right) , \Theta (\alpha ,p\tau )=\left[ \frac{\alpha ^{\frac{1}{\sigma }}}{(1-\alpha )^{\frac{1}{\sigma }}}, \frac{\alpha ^{\frac{1}{\sigma }}}{(p\tau (1-\alpha ))^{\frac{1}{\sigma }}} \right] , \\ \Theta _{s}(\alpha ,p\tau )= & {} \left( \frac{\alpha ^{\frac{1}{\sigma }}}{(p\tau (1-\alpha ))^{\frac{1}{\sigma }}},\infty \right) . \end{aligned}$$

Since \(\frac{\partial V}{\partial m}(m,\alpha )\) is continuous in m on these subintervals, it suffices to show its continuity at the corner points of \(\Theta (\alpha ,p\tau )\).

Now assume that a sequence \(m^n\) with \(\frac{b^n}{m^n}\in \Theta _i(\alpha )\) converges to \(\bar{m}\) with \(\frac{\bar{b}}{\bar{m}}= \frac{\alpha ^{\frac{1}{\sigma }}}{(1-\alpha )^{\frac{1}{\sigma }}}\). Since \(\theta (\alpha ,1)=\frac{\alpha ^{\frac{1}{\sigma }}}{(1-\alpha )^{\frac{1}{\sigma }}}\),

$$\begin{aligned} c^n_{i1}=\frac{pm^n+w-m^n}{p(1+\theta (\alpha ,1))}\rightarrow \frac{p\bar{m}+w-\bar{m}}{p(1+\theta (\alpha ,1))}= \frac{p\bar{m}+w-\bar{m}}{p(1+\frac{\bar{b}}{\bar{m}})}=\bar{m}, \end{aligned}$$

and \(c^n_{i2}=\theta (\alpha ,1)c^n_{i1}=\frac{\bar{b}}{\bar{m}}c^{n}_{i1}\rightarrow \bar{b}\). Then, by using the F.O.C. (that is, \((1-\alpha )u'(c^n_{i1})=\alpha u'(c^n_{i2})\)), we have

$$\begin{aligned} \frac{\partial V}{\partial m}(m^n,\alpha )&= \frac{p-1}{p}(1-\alpha )u^{\prime }(c^n_{i1})= (1-\alpha )u'(c^n_{i1})-\frac{1}{p}(1-\alpha )u'(c^n_{i1})\\&= (1-\alpha )u'(c^n_{i1})-\frac{1}{p}\alpha u'(c^n_{i2})\\&\rightarrow (1-\alpha )u'(\bar{m})-\frac{1}{p}\alpha u'(\bar{b})= \frac{\partial V}{\partial m}(\bar{m},\alpha ), \end{aligned}$$

as desired. A similar argument can be applied to the other case.\(\square \)

Lemma 5

\(\int \frac{\partial V}{\partial m}(m,\alpha )\,\mathrm{d}\mu (\alpha )\) is continuous and strictly decreasing in m.

Proof

Continuity: Assume that \(m^n\rightarrow m\). We want to show that

$$\begin{aligned} \lim _{n\rightarrow \infty } \int \frac{\partial V}{\partial m}(m^n,\alpha )\, \mathrm{d}\mu (\alpha )= \int \lim _{n\rightarrow \infty }\frac{\partial V}{\partial m}(m^n,\alpha )\, \mathrm{d}\mu (\alpha ). \end{aligned}$$

By Lemma 4 (ii), \(\frac{\partial V}{\partial m}(m^n,\cdot )\) converges to \(\frac{\partial V}{\partial m}(m,\cdot )\) at each point \(\alpha \). Moreover, define

$$\begin{aligned} \underline{M}:= \left\{ (c_1,c_2)\in \mathbb {R}^2_+\,|\, c_1+c_2\le w\right\} \cap \left\{ (c_1,c_2)\in \mathbb {R}^2_+\,|\, c_1+p\tau c_2\le \tau w\right\} . \end{aligned}$$

Then, for all m, \(\underline{M}\subset M(m)\). Therefore, for all m, it is feasible to consume some positive level of \((c_1,c_2)\) that is uniformly bounded away from zero. Consequently, for all m and \(\alpha \), the marginal utility from money holdings, \(\frac{\partial V}{\partial m}(m,\alpha )\), is uniformly bounded from above. Thus, by the dominated convergence theorem (Aliprantis and Border 2006, p. 415, Theorem 11.21),

$$\begin{aligned} \lim _{n\rightarrow \infty } \int \frac{\partial V}{\partial m}(m^n,\alpha )\, \mathrm{d}\mu (\alpha )= \int \frac{\partial V}{\partial m}(m,\alpha )\, \mathrm{d}\mu (\alpha ), \end{aligned}$$

as desired.

Strictly decreasing: From (8),

$$\begin{aligned} \frac{\partial ^2 V}{\partial m^2}(m,\alpha )= \left\{ \begin{array}{ll} \displaystyle \frac{p-1}{p}(1-\alpha ) u''(c_{i1})\frac{p-1}{p(1+\theta (\alpha ,1))}<0 &{} \text{ if } \frac{b}{m}=\theta \in \Theta _{i}(\alpha )\\ \displaystyle (1-\alpha )u''(m)-\frac{1}{p}\alpha u''(b) \frac{-1}{p}<0&{} \text{ if } \frac{b}{m}=\theta \in \Theta (\alpha ,p\tau )\\ \displaystyle (1-\tau )(1-\alpha )u''(c_{s1}) \frac{1-\tau }{1+p\tau \theta (\alpha ,p\tau )}<0&{} \text{ if } \frac{b}{m}=\theta \in \Theta _{s}(\alpha , p\tau ) \end{array} \right. . \end{aligned}$$

Since \(\int \frac{\partial ^2 V}{\partial m^2}(m,\alpha )\,\mathrm{d}\mu (\alpha )<0\), \(\int \frac{\partial V}{\partial m}(m,\alpha )\,\mathrm{d}\mu (\alpha )\) is strictly decreasing in m.

The next lemma ensures that an optimal level of money holdings is characterized by the F.O.C.

Lemma 6

Under Assumption 3, there exists a unique \(m^*\) satisfying

$$\begin{aligned} \int \frac{\partial V}{\partial m}(m^*,\alpha )\,\mathrm {d}\mu (\alpha )=0. \end{aligned}$$

Proof

If \(m=0\), then \(\theta =\infty \), in which case we have \(A_i(\theta )=\emptyset \), \(A(\theta ,p\tau )=\{1\}\), and \(A_{s}(\theta ,p\tau )=[0,1)\). From (7), \(\frac{\partial V}{\partial m}(0,\alpha )>0\) on \(A_s(\theta ,p\tau )\). Thus, by Assumption 3, \(\int \frac{\partial V}{\partial m}(0,\alpha )\,\mathrm {d}\mu (\alpha )>0\). On the other hand, if \(m=w\) (or \(b=0\)), then \(\theta =0\), in which case we have \(A_i(\theta )=(0,1]\), \(A(\theta ,p\tau )=\{0\}\), and \(A_{s}(\theta ,p\tau )=\emptyset \). From (7), \(\frac{\partial V}{\partial m}(w,\alpha )<0\) on \(A_i(\theta )\). Again, by Assumption 3, \(\int \frac{\partial V}{\partial m}(w,\alpha )\,\mathrm {d}\mu (\alpha )<0\). Since \(\int \frac{\partial V}{\partial m}(m,\alpha )\,\mathrm {d}\mu (\alpha )\) is continuous in m by Lemma 5, the intermediate value theorem implies the existence of \(m^*\). Its uniqueness follows because \(\int \frac{\partial V}{\partial m}(m,\alpha )\,\mathrm {d}\mu (\alpha )\) is strictly decreasing in m by Lemma 5.

As shown in Lemma 5, we have \(\int \frac{\partial ^2 V}{\partial m^2}(m,\alpha )\,\mathrm{d}\mu (\alpha )<0\), that is, the second-order condition is satisfied. Therefore, Lemma 6 guarantees that \(m^*\) is an optimal level of money holdings if and only if it satisfies the F.O.C.
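The first-order characterization can also be illustrated numerically. The following sketch is not part of the original argument: it implements the piecewise marginal utility (7) with CRRA \(u^{\prime }(c)=c^{-\sigma }\) and the identity \(b=(w-m)/p\), and locates \(m^*\) by bisection, using Lemma 5 (the expected marginal utility is continuous and strictly decreasing in m). The parameter values are illustrative only and presume \(0<p<1\), \(0<\tau <1\), and a \(\mu \) that is not degenerate at \(\alpha =0\) or \(\alpha =1\) (our reading of Assumption 3; see footnote 6):

```python
def dV_dm(m, alpha, w, p, tau, sigma):
    """Marginal value of money, eq. (7), at 0 < m < w and 0 <= alpha < 1,
    with b = (w - m)/p and CRRA marginal utility u'(c) = c**(-sigma)."""
    b = (w - m) / p
    theta = b / m
    a_s_sup = p * tau * theta**sigma / (1 + p * tau * theta**sigma)  # sup of A_s
    a_i_inf = theta**sigma / (1 + theta**sigma)                      # inf of A_i
    if alpha > a_i_inf:      # A_i: money carried into period 2
        th = (alpha / (1 - alpha))**(1.0 / sigma)
        c1 = (p * m + w - m) / (p * (1 + th))
        return (p - 1) / p * (1 - alpha) * c1**(-sigma)
    if alpha < a_s_sup:      # A_s: bonds sold early
        th = (1.0 / (p * tau))**(1.0 / sigma) * (alpha / (1 - alpha))**(1.0 / sigma)
        c1 = (m + tau * (w - m)) / (1 + p * tau * th)
        return (1 - tau) * (1 - alpha) * c1**(-sigma)
    return (1 - alpha) * m**(-sigma) - alpha * b**(-sigma) / p      # A: no revision

def optimal_m(alphas, probs, w=1.0, p=0.8, tau=0.7, sigma=0.8, tol=1e-10):
    """Bisection on the F.O.C. of Lemma 6: E_mu[dV/dm](m*) = 0."""
    def foc(m):
        return sum(q * dV_dm(m, a, w, p, tau, sigma) for a, q in zip(alphas, probs))
    lo, hi = 1e-9 * w, (1 - 1e-9) * w
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if foc(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

# Log case (sigma = 1): a first-order stochastic dominance shift toward lower
# discount factors weakly raises the optimal money holdings (see Sect. 1.3 below).
alphas = [0.2, 0.4, 0.6, 0.8]
mu1 = [0.1, 0.2, 0.3, 0.4]   # mu1 >=_FSD mu2
mu2 = [0.4, 0.3, 0.2, 0.1]
assert optimal_m(alphas, mu1, sigma=1.0) <= optimal_m(alphas, mu2, sigma=1.0) + 1e-8
```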

1.3 The result

We show that if either (i) or (ii) holds, an optimal level of m increases when \(\mu ^1\) changes to \(\mu ^2\) with \(\mu ^1 \ge _{FSD} \mu ^2\). Let \(m^1\) be an optimal money holding at \(\mu ^1\), and \(\theta ^1\) be the corresponding ratio between \(m^1\) and \(b^1\). Then,

$$\begin{aligned} \frac{\mathrm {d}W^1(M(m^1))}{\mathrm {d}m}=\int _{[0,1]}\frac{\partial V}{\partial m}(m^1,\alpha )\,\mathrm {d}\mu ^1(\alpha ) =0. \end{aligned}$$

It is enough to show that the integrand \(\frac{\partial V}{\partial m}(m^1,\alpha ) \) is decreasing in \(\alpha \). Indeed, for any \(\mu ^2\) with \(\mu ^1\ge _{FSD}\mu ^2\), a distribution that is dominated in the FSD sense assigns a weakly higher expectation to any decreasing function, so

$$\begin{aligned} \frac{\mathrm {d}W^1(M(m^1))}{\mathrm {d}m}&=\int _{[0,1]} \frac{\partial V}{\partial m}(m^1,\alpha )\, \mathrm {d}\mu ^1(\alpha ) \le \int _{[0,1]}\frac{\partial V}{\partial m}(m^1,\alpha )\, \mathrm {d}\mu ^2(\alpha )\\&=\frac{\mathrm {d}W^2(M(m^1))}{\mathrm {d}m}. \end{aligned}$$

Since \(\frac{\mathrm {d}W^2(M(m^1))}{\mathrm {d}m}\ge 0\), we have \(m^1\le m^2\).

From (7), the integrand \(\frac{\partial V}{\partial m}(m^1,\alpha )\) is decreasing in \(\alpha \in A(\theta ^1 ,p\tau )\). Moreover, if \(\sigma =1\), \(U_{1}\) is constant in \(\alpha \in A_{i}(\theta ^1 )\cup A_{s}(\theta ^1 ,p\tau )\). Thus, the integrand \(\frac{\partial V}{\partial m}(m^1,\alpha )\) is constant in \(\alpha \in A_{i}(\theta ^1 )\cup A_{s}(\theta ^1 ,p\tau )\). In the case of \(\theta ^1 =1\), we have

$$\begin{aligned} \frac{p\tau (\theta ^1)^{\sigma }}{1+p\tau (\theta ^1)^{\sigma }}=\frac{p\tau }{ 1+p\tau }<\frac{(\theta ^1)^{\sigma }}{1+(\theta ^1)^{\sigma }}=\frac{1}{2}. \end{aligned}$$

This implies that \(A_{s}(\theta ^1 ,p\tau )\subsetneq [0,1/2]\) and \(A_{i}(\theta ^1 )=(1/2,1]\). Thus, if \(\sigma <1\), \((1-\tau )U_{1}\) is decreasing in \(\alpha \in A_{s}(\theta ^1 ,p\tau )\), and \(\frac{p-1}{p}U_{1}\) is decreasing in \(\alpha \in A_{i}(\theta ^1 )\).

Hence, the integrand \(\frac{\partial V}{\partial m}(m^1,\alpha )\) is decreasing on each subinterval, \(A_{s}(\theta ^1,p\tau )\), \(A(\theta ^1 ,p\tau )\), and \(A_i(\theta ^1)\). Moreover, Lemma 4(i) assures that the integrand \(\frac{\partial V}{\partial m}(m^1,\alpha )\) is a decreasing function over [0, 1].

Next we show the latter statement. Assume \(\sigma \ne 1\). Consider the following three cases: (a) \(\theta ^1 <1\). In this case, we have

$$\begin{aligned} \frac{p\tau (\theta ^1)^{\sigma }}{1+p\tau (\theta ^1)^{\sigma }}< \frac{(\theta ^1)^{\sigma }}{1+(\theta ^1)^{\sigma }}<\frac{1}{2}. \end{aligned}$$

Then, if \(\sigma <1\), \(\frac{p-1}{p}U_{1}\) is increasing in \(\alpha \in \left( \frac{(\theta ^1)^{\sigma }}{1+(\theta ^1)^{\sigma }},\frac{1}{2}\right) \), and if \(\sigma >1\), it is increasing in \(\alpha \in \big (\frac{1}{2},1\big ]\).

(b) \(\theta ^1 >1\). In this case, we have

$$\begin{aligned} \frac{p\tau }{1+p\tau }<\frac{p\tau (\theta ^1)^{\sigma }}{1+p\tau (\theta ^1)^{\sigma }}< \frac{(\theta ^1)^{\sigma }}{1+(\theta ^1)^{\sigma }}, \qquad \frac{1}{2}<\frac{(\theta ^1)^{\sigma }}{1+(\theta ^1)^{\sigma }}. \end{aligned}$$

Then, if \(\sigma <1\), \((1-\tau )U_{1}\) is increasing in \(\alpha \in \left( \frac{ p\tau }{1+p\tau },\frac{p\tau (\theta ^1)^{\sigma }}{1+p\tau (\theta ^1)^{\sigma }}\right) \), and if \(\sigma >1\), it is increasing in \(\alpha \in \big [0,\frac{ p\tau }{1+p\tau }\big )\).

(c) \(\theta ^1 =1\) and \(\sigma >1\). In this case, we have

$$\begin{aligned} \frac{p\tau (\theta ^1)^{\sigma }}{1+p\tau (\theta ^1)^{\sigma }}= \frac{p\tau }{1+p\tau }<\frac{(\theta ^1)^{\sigma }}{1+(\theta ^1)^{\sigma }}=\frac{1}{2}. \end{aligned}$$

Then, \((1-\tau )U_{1}\) is increasing in \(\alpha \in \big [0,\frac{p\tau }{ 1+p\tau }\big )\).

Now consider case (a) with \(\sigma <1\). As shown above, \(\frac{ p-1}{p}U_{1}\) is increasing in \(\alpha \in \left( \frac{(\theta ^1)^{\sigma }}{1+(\theta ^1)^{\sigma }},\frac{1}{2}\right) \). The following lemma ensures the desired result.

Lemma 7

If the interval \(\left( \frac{(\theta ^1)^{\sigma }}{1+(\theta ^1)^{\sigma }},\frac{1}{2}\right) \) and the support \(\sigma (\mu ^1)\) have an overlap, then there exists some \(\mu ^2\) with \(\mu ^1\ge _{FSD}\mu ^2\) such that

$$\begin{aligned} \frac{\mathrm {d}W^2(M(m^1))}{\mathrm {d}m}<0, \end{aligned}$$

and consequently, we have \(m^1>m^2\).

Proof

Let \(\overline{\alpha }\) be a point in the intersection. Take any point \(\underline{\alpha }\in \left( \frac{(\theta ^1)^{\sigma }}{1+(\theta ^1)^{\sigma }},\frac{1}{2}\right) \) with \(\overline{\alpha }>\underline{\alpha }\). Let \(F^1\) be the distribution function of \(\mu ^1\). Take any increasing function \(G:[\underline{\alpha },\overline{\alpha }]\rightarrow [0,1]\) such that

$$\begin{aligned} G(\underline{\alpha }) =F^1(\underline{\alpha }),\ G(\overline{\alpha })=F^1(\overline{\alpha }) ,\ \text{ and }\ G(\alpha )>F^1(\alpha ), \end{aligned}$$

for all \(\alpha \in (\underline{\alpha },\overline{\alpha })\). Such a function G exists because \(F^1(\underline{\alpha })<F^1(\overline{\alpha })\). Define a distribution function \(F^2\) by

$$\begin{aligned} F^2(\alpha ):= \left\{ \begin{array}{ll} G(\alpha ) &{} \quad \text{ if } \alpha \in [\underline{\alpha },\overline{\alpha }], \\ F^1(\alpha ) &{} \quad \text{ otherwise. } \end{array} \right. \end{aligned}$$

Let \(\mu ^2\) be a probability measure on [0, 1] associated with \(F^2\). Since \(F^2\ge F^1\), \(\mu ^1\ge _{FSD}\mu ^2\). Moreover, the interval \([\underline{\alpha },\overline{\alpha }]\) has the same probability with respect to both \(\mu ^1\) and \(\mu ^2\), and the conditional probability of \(\mu ^2\) on \([\underline{\alpha },\overline{\alpha }]\) is strictly dominated by that of \(\mu ^1\) in the FSD. Thus, by the property of the FSD,

$$\begin{aligned} 0 =&\,\frac{\mathrm {d}W^1(M(m^1))}{\mathrm {d}m} \\ =&\,\int _{A_{i}(\theta ^1)}\frac{p-1}{p}U_{1}\,\mathrm {d}\mu ^1(\alpha ) +\int _{A(\theta ^1 ,p\tau )}\left[ U_{1}-\frac{1}{p}U_{2}\right] \mathrm {d}\mu ^1(\alpha )\\&+\int _{A_{s}(\theta ^1 ,p\tau )}(1-\tau )U_{1}\, \mathrm {d}\mu ^1(\alpha ) \\ =&\,\mu ^1([\underline{\alpha },\overline{\alpha }]) \int _{[\underline{\alpha },\overline{\alpha }]}\frac{p-1}{p}U_{1}\, \mathrm {d}\mu ^1(\alpha | [\underline{\alpha },\overline{\alpha }]) +\int _{A_i(\theta ^1){\setminus } [\underline{\alpha },\overline{\alpha }]} \frac{p-1}{p}U_{1}\,\mathrm {d}\mu ^1(\alpha )\\&+\int _{A(\theta ^1 ,p\tau )}\left[ U_{1}-\frac{1}{p}U_{2}\right] \mathrm {d}\mu ^1(\alpha ) +\int _{A_{s}(\theta ^1 ,p\tau )}(1-\tau )U_{1}\, \mathrm {d}\mu ^1(\alpha ) \\ >&\, \mu ^2([\underline{\alpha },\overline{\alpha }]) \int _{[\underline{\alpha },\overline{\alpha }]}\frac{p-1}{p}U_{1}\, \mathrm {d}\mu ^2(\alpha | [\underline{\alpha },\overline{\alpha }]) +\int _{A_i(\theta ^1){\setminus } [\underline{\alpha },\overline{\alpha }]} \frac{p-1}{p}U_{1}\,\mathrm {d}\mu ^2(\alpha )\\&+\int _{A(\theta ^1 ,p\tau )}\left[ U_{1}-\frac{1}{p}U_{2}\right] \mathrm {d}\mu ^2(\alpha ) +\int _{A_{s}(\theta ^1 ,p\tau )}(1-\tau )U_{1}\, \mathrm {d}\mu ^2(\alpha ) \\ =&\, \int _{A_{i}(\theta ^1 )}\frac{p-1}{p}U_{1}\,\mathrm {d} \mu ^2(\alpha ) +\int _{A(\theta ^1 ,p\tau )}\left[ U_{1}-\frac{1}{p}U_{2}\right] \mathrm {d}\mu ^2(\alpha )\\&+\int _{A_{s}(\theta ^1 ,p\tau )}(1-\tau )U_{1}\, \mathrm {d}\mu ^2(\alpha ) \\ =&\, \frac{\mathrm {d}W^2(M(m^1))}{\mathrm {d}m}. \end{aligned}$$

\(\square \)

If \(\mu ^1\) has full support, its support overlaps with \(\left( \frac{(\theta ^1)^{\sigma }}{1+(\theta ^1)^{\sigma }},\frac{1}{2}\right) \). Thus, Lemma 7 implies the desired result in this case. A similar argument applies to the remaining cases.

Cite this article

Higashi, Y., Hyogo, K., Takeoka, N.: Comparative impatience under random discounting. Econ Theory 63, 621–651 (2017). https://doi.org/10.1007/s00199-015-0950-3
