Abstract
Under non-exponential discounting, we develop a dynamic theory for stopping problems in continuous time. Our framework covers discount functions that induce decreasing impatience. Due to the inherent time inconsistency, we look for equilibrium stopping policies, formulated as fixed points of an operator. Under appropriate conditions, fixed-point iterations converge to equilibrium stopping policies. This iterative approach corresponds to the hierarchy of strategic reasoning in game theory and provides “agent-specific” results: it assigns one specific equilibrium stopping policy to each agent according to her initial behavior. In particular, it leads to a precise mathematical connection between the naive behavior and the sophisticated one. Our theory is illustrated in a real options model.
References
Ainslie, G.: Picoeconomics: The Strategic Interaction of Successive Motivational States Within the Person. Cambridge University Press, Cambridge (1992)
Barberis, N.: A model of casino gambling. Manag. Sci. 58, 35–51 (2012)
Bass, R.F.: The measurability of hitting times. Electron. Commun. Probab. 15, 99–105 (2010)
Bayraktar, E., Huang, Y.-J.: On the multidimensional controller-and-stopper games. SIAM J. Control Optim. 51, 1263–1297 (2013)
Björk, T., Khapko, M., Murgoci, A.: On time-inconsistent stochastic control in continuous time. Finance Stoch. 21, 331–360 (2017)
Björk, T., Murgoci, A.: A theory of Markovian time-inconsistent stochastic control in discrete time. Finance Stoch. 18, 545–592 (2014)
Björk, T., Murgoci, A., Zhou, X.Y.: Mean–variance portfolio optimization with state-dependent risk aversion. Math. Finance 24, 1–24 (2014)
Borodin, A.N., Salminen, P.: Handbook of Brownian Motion—Facts and Formulae, 2nd edn. Probability and Its Applications. Birkhäuser Verlag, Basel (2002)
Bouchard, B., Touzi, N.: Weak dynamic programming principle for viscosity solutions. SIAM J. Control Optim. 49, 948–962 (2011)
Dong, Y., Sircar, R.: Time-inconsistent portfolio investment problems. In: Crisan, D., et al. (eds.) Stochastic Analysis and Applications 2014, pp. 239–281. Springer, Cham (2014)
Ebert, S., Strack, P.: Until the bitter end: on prospect theory in a dynamic context. Am. Econ. Rev. 105, 1618–1633 (2015)
Ekeland, I., Lazrak, A.: Being serious about non-commitment: subgame perfect equilibrium in continuous time. Tech. rep., University of British Columbia. Preprint (2006). Available online at arXiv:math/0604264 [math.OC]
Ekeland, I., Mbodji, O., Pirvu, T.A.: Time-consistent portfolio management. SIAM J. Financ. Math. 3, 1–32 (2012)
Ekeland, I., Pirvu, T.A.: Investment and consumption without commitment. Math. Financ. Econ. 2, 57–86 (2008)
Grenadier, S.R., Wang, N.: Investment under uncertainty and time-inconsistent preferences. J. Financ. Econ. 84, 2–39 (2007)
Hu, Y., Jin, H., Zhou, X.Y.: Time-inconsistent stochastic linear-quadratic control. SIAM J. Control Optim. 50, 1548–1572 (2012)
Karatzas, I., Shreve, S.E.: Methods of Mathematical Finance, corrected 3rd printing. Springer, New York (2001)
Karp, L.: Non-constant discounting in continuous time. J. Econ. Theory 132, 557–568 (2007)
Kent, J.: Some probabilistic properties of Bessel functions. Ann. Probab. 6, 760–770 (1978)
Laibson, D.: Golden eggs and hyperbolic discounting. Q. J. Econ. 112, 443–477 (1997)
Loewenstein, G., Prelec, D.: Anomalies in intertemporal choice: evidence and an interpretation. Q. J. Econ. 107, 573–597 (1992)
Loewenstein, G., Thaler, R.: Anomalies: intertemporal choice. J. Econ. Perspect. 3, 181–193 (1989)
Noor, J.: Decreasing impatience and the magnitude effect jointly contradict exponential discounting. J. Econ. Theory 144, 869–875 (2009)
Noor, J.: Hyperbolic discounting and the standard model: eliciting discount functions. J. Econ. Theory 144, 2077–2083 (2009)
Nutz, M.: Random \(G\)-expectations. Ann. Appl. Probab. 23, 1755–1777 (2013)
Øksendal, B., Sulem, A.: Applied Stochastic Control of Jump Diffusions, 2nd edn. Universitext. Springer, Berlin (2007)
Pedersen, J.L., Peskir, G.: Solving non-linear optimal stopping problems by the method of time-change. Stoch. Anal. Appl. 18, 811–835 (2000)
Pedersen, J.L., Peskir, G.: Optimal mean–variance selling strategies. Math. Financ. Econ. 10, 203–220 (2016)
Peskir, G., Shiryaev, A.: Optimal Stopping and Free-Boundary Problems. Lectures in Mathematics ETH Zürich. Birkhäuser Verlag, Basel (2006)
Pollak, R.A.: Consistent planning. Rev. Econ. Stud. 35, 201–208 (1968)
Prelec, D.: Decreasing impatience: a criterion for non-stationary time preference and “hyperbolic” discounting. Scand. J. Econ. 106, 511–532 (2004)
Stahl, D.: Evolution of smart-n players. Games Econ. Behav. 5, 604–617 (1993)
Stahl, D., Wilson, P.: Experimental evidence on players’ models of other players. J. Econ. Behav. Organ. 25, 309–327 (1994)
Strotz, R.H.: Myopia and inconsistency in dynamic utility maximization. Rev. Econ. Stud. 23, 165–180 (1955)
Taksar, M.I., Markussen, C.: Optimal dynamic reinsurance policies for large insurance portfolios. Finance Stoch. 7, 97–121 (2003)
Thaler, R.: Some empirical evidence on dynamic inconsistency. Econ. Lett. 8, 201–207 (1981)
Xu, Z.Q., Zhou, X.Y.: Optimal stopping under probability distortion. Ann. Appl. Probab. 23, 251–282 (2013)
Yong, J.: Time-inconsistent optimal control problems and the equilibrium HJB equation. Math. Control Relat. Fields 2, 271–329 (2012)
Acknowledgements
For thoughtful advice and comments, we thank Erhan Bayraktar, René Carmona, Samuel Cohen, Ivar Ekeland, Paolo Guasoni, Jan Obłój, Traian Pirvu, Ronnie Sircar and Xunyu Zhou, as well as seminar participants at Florida State University, Princeton University and the University of Oxford. Special gratitude goes to Erhan Bayraktar for bringing this problem to the first author's attention, and to Traian Pirvu for introducing the authors to each other. We are also grateful to two anonymous referees, whose critical comments substantially improved the paper.
Y.-J. Huang is supported in part by the National Science Foundation (Grant DMS-1715439) and the University of Colorado (11003573). A. Nguyen-Huu is supported in part by the Energy and Prosperity Chair.
Appendices
Appendix A: Proofs for Sect. 3
Throughout this appendix, we constantly use the notation
A.1 Proof of Proposition 3.11
Fix \((t,x)\in\mathbb{X}\). We deal with the two cases \(\widetilde{\tau}(t,x) = 0\) and \(\widetilde{\tau}(t,x) = 1\) separately. If \(\widetilde{\tau}(t,x) = 0\), i.e., \(\widetilde{\tau}_{t,x} = t\), then by (2.6),
which implies \((t,x)\in S_{\widetilde{\tau}}\cup I_{\widetilde{\tau}}\). We then conclude from (3.6) that
If \(\widetilde{\tau}(t,x) =1\), then \({\mathcal {L}}^{*}\widetilde{\tau}(t,x) = {\mathcal {L}}\widetilde{\tau}(t,x) = \inf\{s\ge t :\widetilde{\tau}_{s,X^{t,x}_{s}}=s \}\). By (2.5) and (2.4), \(\widetilde{\tau}_{s,X^{t,x}_{s}}=s\) means that
which is equivalent to
where the second equality follows from (2.7). As a result, we can conclude that \({\mathcal {L}}^{*}\widetilde{\tau}(t,x) = \inf\{s\ge t: \delta(s-t) g(X^{t,x}_{s})= Z^{t,x}_{s}\}=\widetilde{\tau}_{t,x}\). This together with (2.6) shows that
which implies \((t,x)\in I_{\widetilde{\tau}}\cup C_{\widetilde{\tau}}\). By (3.6), we have
We therefore have \(\Theta\widetilde{\tau}(t,x) = \widetilde{\tau}(t,x)\) for all \((t,x)\in\mathbb{X}\), i.e., \(\widetilde{\tau}\in{\mathcal {E}}(\mathbb{X})\).
A.2 Derivation of Proposition 3.13
To prove the technical result in Lemma A.1 below, we need to introduce shifted random variables as in Nutz [25]. Recall from Sect. 2 that \(\Omega\) is the canonical path space. For any \(t\ge0\) and \(\omega\in\Omega\), we define the concatenation of \(\omega\) and \(\tilde{\omega}\in\Omega\) at time \(t\) by
For any \({\mathcal {F}}_{\infty}\)-measurable random variable \(\xi:\Omega \to\mathbb{R}\), we define the shifted random variable \([\xi]_{t,\omega}:\Omega\to\mathbb{R}\), which is \({\mathcal {F}}^{t}_{\infty}\)-measurable, by
Given \(\tau\in{\mathcal {T}}\), we write \(\omega\otimes_{\tau(\omega )}\tilde{\omega}\) as \(\omega\otimes_{\tau}\tilde{\omega}\), and \([\xi]_{\tau(\omega ),\omega} (\tilde{\omega})\) as \([\xi]_{\tau,\omega} (\tilde{\omega})\). A detailed analysis of shifted random variables can be found in [4, Appendix A]; Proposition A.1 there implies that for fixed \((t,x)\in\mathbb{X}\), any \(\theta\in{\mathcal {T}}_{t}\) and \({\mathcal {F}}^{t}_{\infty}\)-measurable \(\xi\) with \(\mathbb {E}^{t,x}[|\xi|]<\infty\) satisfy
Lemma A.1
For any \(\tau\in{\mathcal {T}}(\mathbb{X})\) and \((t,x)\in\mathbb {X}\), define \(t_{0}:= {\mathcal {L}}^{*}\tau_{1}(t,x)\in{\mathcal {T}}_{t}\) and \(s_{0} := {\mathcal {L}}^{*}\tau(t,x)\in{\mathcal {T}}_{t}\), with \(\tau_{1}\) as in (A.1). If \(t_{0}\le s_{0}\), then for a.e. \(\omega\in\{t < t_{0}\}\),
Proof
For a.e. \(\omega\in\{t< t_{0}\}\in{\mathcal {F}}_{t}\), we deduce from \(t_{0} (\omega) = {\mathcal {L}}^{*}\tau_{1}(t,x) (\omega)>t\) that \(\tau_{1}(s,X^{t,x}_{s}(\omega))=1\) for all \(s\in(t,t_{0}(\omega))\). In view of (A.1) and (3.6), this implies \((s,X^{t,x}_{s}(\omega))\notin S_{\tau}\) for all \(s\in (t,t_{0}(\omega))\). Thus,
For any \(s\in(t,t_{0}(\omega))\), note that
for all \(\tilde{\omega}\in\Omega\). As \(t_{0}\le s_{0}\), a similar calculation gives
We thus conclude from (A.3) that
where the second line holds because \(\delta\) is decreasing and also \(\delta\) and \(g\) are both nonnegative. On the other hand, by (A.2), it holds a.s. that
Note that we used the countability of ℚ to obtain the above almost sure statement. This together with (A.4) shows that it holds a.s. that
Since our sample space \(\Omega\) is the canonical space for Brownian motion with the right-continuous Brownian filtration \(\mathbb{F}\), the martingale representation theorem holds under the current setting. This implies that every martingale has a continuous version. Let \((M_{s})_{s\ge t}\) be the continuous version of the martingale \((\mathbb{E}^{t,x}[\delta(s_{0}-t_{0}) g(X_{s_{0}})\, | \,{\mathcal {F}}^{t}_{s}])_{s\ge t}\). Then (A.5) immediately implies that it holds a.s. that
Also, using the right-continuity of \(M\) and (A.2), one can show that for any \(\tau\in{\mathcal {T}}_{t}\), we have \(M_{\tau}= \mathbb{E}^{t,x}[\delta(s_{0}-t_{0}) g(X_{s_{0}})\, | \, {\mathcal {F}}^{t}_{\tau}]\) a.s. Now we can take some \(\Omega^{*}\in{\mathcal {F}}_{\infty}\) with \(\mathbb{P}[\Omega^{*}] =1\) such that (A.6) holds true and \(M_{t_{0}}(\omega) = \mathbb{E}^{t,x}[\delta(s_{0}-t_{0}) g(X_{s_{0}})\, | \,{\mathcal {F}}^{t}_{t_{0}}](\omega)\) for all \(\omega\in\Omega^{*}\). For any \(\omega\in\Omega^{*}\cap\{t< t_{0}\}\), take \(({k_{n}})\subseteq\mathbb {Q}\) such that \(k_{n} >t\) and \(k_{n}\uparrow t_{0}(\omega)\). Then (A.6) implies that \(g(X^{t,x}_{k_{n}}(\omega)) \le M_{k_{n}}(\omega),\ \forall n\in\mathbb{N}\). As \(n\to\infty\), we obtain from the continuity of \(s\mapsto X_{s}\) and \(z\mapsto g(z)\) and the left-continuity of \(s\mapsto M_{s}\) that \(g(X^{t,x}_{t_{0}}(\omega)) \le M_{t_{0}}(\omega) = \mathbb{E}^{t,x}[\delta(s_{0}-t_{0}) g(X_{s_{0}})\, | \,{\mathcal {F}}^{t}_{t_{0}}](\omega)\). □
Now we are ready to prove Proposition 3.13.
Proof of Proposition 3.13
We prove (3.10) by induction. We know that the result holds for \(n=0\) by (3.9). Now assume that (3.10) holds for \(n=k\in\mathbb{N}\cup\{0\}\), and we intend to show that (3.10) also holds for \(n=k+1\). Recall the notation in (A.1). Fix \((t,x)\in\ker(\tau_{k+1})\), i.e., \(\tau_{k+1}(t,x)=0\). If \({\mathcal {L}}^{*}\tau_{k+1}(t,x) = t\), then \((t,x)\) belongs to \(I_{\tau_{k+1}}\). By (3.6), we get \(\tau _{k+2}(t,x) = \Theta\tau_{k+1}(t,x)= \tau_{k+1}(t,x) =0\), i.e., \((t,x)\in\ker(\tau_{k+2})\) as desired. We therefore assume below that \({\mathcal {L}}^{*}\tau_{k+1}(t,x)>t\).
By (3.6), \(\tau_{k+1}(t,x)=0\) implies
Let \(t_{0} := {\mathcal {L}}^{*}\tau_{k+1}(t,x)\) and \(s_{0} := {\mathcal {L}}^{*}\tau_{k}(t,x)\). Under the induction hypothesis that \(\ker(\tau_{k})\subseteq\ker(\tau_{k+1})\), we have \(t_{0}\le s_{0}\) as \(t_{0}\) and \(s_{0}\) are hitting times to \(\ker(\tau_{k+1})\) and \(\ker(\tau_{k})\), respectively; see (3.5). Using (A.7), \(t_{0}\le s_{0}\), Assumption 3.12 and \(g\) being nonnegative, we obtain
where the third line follows from the tower property of conditional expectations and the fourth is due to Lemma A.1. This implies \((t,x)\notin C_{\tau_{k+1}}\) and thus
That is, \((t,x)\in\ker(\tau_{k+2})\). Thus, we conclude that \(\ker (\tau_{k+1})\subseteq\ker(\tau_{k+2})\) as desired.
It remains to show that \(\tau_{0}\) defined in (3.7) is a stopping policy. Observe that for any \((t,x)\in\mathbb{X}\), \(\tau_{0}(t,x) =0\) if and only if \(\Theta^{n}\tau (t,x)=0\), i.e., \((t,x)\in\ker(\Theta^{n}\tau)\), for \(n\) large enough. This together with (3.10) implies that
Hence \(\tau_{0}:\mathbb{X}\to\{0,1\}\) is Borel-measurable and thus an element in \({\mathcal {T}}(\mathbb{X})\). □
A.3 Proof of Proposition 3.15
Fix \((t,x)\in\ker(\widetilde{\tau})\). Since \(\widetilde{\tau}(t,x)=0\), i.e., \(\widetilde{\tau}_{t,x}=t\), (2.5), (2.4) and (2.6) imply
This shows that \((t,x)\in S_{\widetilde{\tau}}\cup I_{\widetilde{\tau}}\). Thus we have \(\ker(\widetilde{\tau})\subseteq S_{\widetilde{\tau}}\cup I_{\widetilde{\tau}}\). It follows that
where the last equality follows from (3.6).
A.4 Derivation of Theorem 3.16
Lemma A.2
Suppose Assumption 3.12 holds and \(\tau\in{\mathcal {T}}(\mathbb{X})\) satisfies (3.9). Then \(\tau_{0}\) defined in (3.7) satisfies
Proof
We use the notation in (A.1). Recall that we have \(\ker(\tau _{n})\subseteq\ker(\tau_{n+1})\) for all \(n\in\mathbb{N}\) and \(\ker(\tau_{0}) = \bigcup_{n\in\mathbb {N}}\ker(\tau_{n})\) from Proposition 3.13. By (3.5), this implies that \(({\mathcal {L}}^{*}\tau_{n}(t,x))_{n\in \mathbb{N}}\) is a nonincreasing sequence of stopping times and
It remains to show that \({\mathcal {L}}^{*}\tau_{0}(t,x) \ge t_{0}\). We deal with the following two cases.
(i) On \(\{\omega\in\Omega: {\mathcal {L}}^{*}\tau_{0}(t,x)(\omega)=t\}\): By (3.5), there exists a sequence \((t_{m})_{m\in\mathbb{N}}\) in \(\mathbb{R}_{+}\), depending on \(\omega\in \Omega\), such that \(t_{m}\downarrow t\) and \(\tau_{0}(t_{m}, X^{t,x}_{t_{m}}(\omega)) = 0\) for all \(m\in\mathbb{N}\). For each \(m\in \mathbb{N}\), by the definition of \(\tau_{0}\) in (3.7), there exists \(n^{*}\in\mathbb{N}\) large enough such that \(\tau_{n^{*}}(t_{m}, X^{t,x}_{t_{m}}(\omega)) = 0\), which implies \({\mathcal {L}}^{*}\tau_{n^{*}}(t,x)(\omega)\le t_{m}\). Since \(({\mathcal {L}}^{*}\tau_{n}(t,x))_{n\in\mathbb{N}}\) is nonincreasing, we have \(t_{0}(\omega)\le{\mathcal {L}}^{*}\tau_{n^{*}}(t,x)(\omega)\le t_{m}\). With \(m\to\infty\), we obtain \(t_{0}(\omega)\le t={\mathcal {L}}^{*}\tau_{0}(t,x)(\omega)\).
(ii) On \(\{\omega\in\Omega: {\mathcal {L}}^{*}\tau_{0}(t,x)(\omega)>t\} \): Set \(s_{0}:={\mathcal {L}}^{*}\tau_{0}(t,x)\) and focus on the value of \(\tau_{0}(s_{0}(\omega), X^{t,x}_{s_{0}}(\omega))\). If \(\tau _{0}(s_{0}(\omega), X^{t,x}_{s_{0}}(\omega))=0\), then by (3.7) there exists \(n^{*}\in\mathbb{N}\) large enough such that \(\tau_{n^{*}}(s_{0}(\omega), X^{t,x}_{s_{0}}(\omega))=0\). Since \(({\mathcal {L}}^{*}\tau_{n}(t,x))_{n\in \mathbb{N}}\) is nonincreasing, \(t_{0}(\omega)\le {\mathcal {L}}^{*}\tau_{n^{*}}(t,x)(\omega)\le s_{0}(\omega)\) as desired. If \(\tau_{0}(s_{0}(\omega), X^{t,x}_{s_{0}}(\omega))=1\), then by (3.5), there exists a sequence \((t_{m})_{m\in\mathbb{N}}\) in \(\mathbb{R}_{+}\), depending on \(\omega\in\Omega\), such that \(t_{m}\downarrow s_{0}(\omega)\) and \(\tau_{0}(t_{m}, X^{t,x}_{t_{m}}(\omega)) = 0\) for all \(m\in\mathbb{N}\). Then we can argue as in case (i) to show that \(t_{0}(\omega)\le s_{0}(\omega)\) as desired. □
Now we are ready to prove Theorem 3.16.
Proof of Theorem 3.16
By Proposition 3.13, \(\tau_{0}\in{\mathcal {T}}(\mathbb{X})\) is well defined. For simplicity, we use the notation in (A.1). Fix \((t,x)\in\mathbb{X}\). If \(\tau_{0}(t,x)=0\), then (3.7) gives \(\tau_{n}(t,x)=0\) for \(n\) large enough. Since \(\tau_{n}(t,x) = \Theta\tau_{n-1}(t,x)\), we deduce from “\(\tau_{n}(t,x)=0\) for \(n\) large enough” and (3.6) that \((t,x)\in S_{\tau_{n-1}}\cup I_{\tau _{n-1}}\) for \(n\) large enough. That is, \(g(x)\ge\mathbb{E}^{t,x}[\delta({\mathcal {L}}^{*}\tau_{n-1}(t,x)-t) g(X_{{\mathcal {L}}^{*}\tau_{n-1}(t,x)})]\ \hbox{for $n$ large enough}\). With \(n\to\infty\), the dominated convergence theorem and Lemma A.2 yield
which shows that \((t,x)\in S_{\tau_{0}}\cup I_{\tau_{0}}\). We then deduce from (3.6) and \(\tau_{0}(t,x)=0\) that \(\Theta\tau_{0}(t,x) = \tau_{0}(t,x)\). On the other hand, if \(\tau _{0}(t,x)=1\), then (3.7) gives \(\tau_{n}(t,x)=1\) for \(n\) large enough. Since \(\tau_{n}(t,x) = \Theta \tau_{n-1}(t,x)\), we deduce from “\(\tau_{n}(t,x)=1\) for \(n\) large enough” and (3.6) that \((t,x)\in C_{\tau_{n-1}}\cup I_{\tau_{n-1}}\) for \(n\) large enough. That is, \(g(x)\le\mathbb{E}^{t,x}[\delta ({\mathcal {L}}^{*}\tau_{n-1}(t,x)-t) g(X_{{\mathcal {L}}^{*}\tau_{n-1}(t,x)})]\ \hbox{for }n\hbox{ large enough} \). With \(n\to\infty\), the dominated convergence theorem and Lemma A.2 yield
which shows that \((t,x)\in C_{\tau_{0}}\cup I_{\tau_{0}}\). We then deduce from (3.6) and \(\tau_{0}(t,x)=1\) that \(\Theta\tau_{0}(t,x) = \tau_{0}(t,x)\). We therefore conclude that \(\tau_{0}\in{\mathcal {E}}(\mathbb{X})\). □
Appendix B: Proofs for Sect. 4
B.1 Derivation of Proposition 4.1
In the classical case of exponential discounting, (2.7) ensures that for all \(s\ge0\),
which shows that \((\delta(s)v(X_{s}^{x}))_{s\ge0}\) is a supermartingale. Under hyperbolic discounting (4.1), we have \(\delta(r_{1})\delta(r_{2}) \leq\delta(r_{1}+r_{2})\) for all \(r_{1},r_{2}\ge0\), so the first equality in the above equation fails and \((\delta(s)v(X_{s}^{x}))_{s\ge0}\) need not be a supermartingale.
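The two features of hyperbolic discounting invoked here can be checked numerically. The following standalone sketch (not part of the argument; the value \(\beta=2\) is an arbitrary choice) verifies both the subadditivity \(\delta(r_{1})\delta(r_{2})\le\delta(r_{1}+r_{2})\) and the decreasing-impatience property for \(\delta(t)=1/(1+\beta t)\):

```python
# Standalone numerical check of two properties of the hyperbolic
# discount function delta(t) = 1/(1 + beta*t) from (4.1).
# The choice beta = 2 is arbitrary.

def delta(t, beta=2.0):
    return 1.0 / (1.0 + beta * t)

grid = [0.1 * k for k in range(51)]  # t-values in [0, 5]

# (a) Subadditivity delta(r1)*delta(r2) <= delta(r1+r2): the reason the
#     first equality from the exponential case fails here.
subadditive = all(
    delta(r1) * delta(r2) <= delta(r1 + r2) + 1e-15
    for r1 in grid for r2 in grid
)

# (b) Decreasing impatience: for a fixed delay sigma, the discount
#     ratio delta(t+sigma)/delta(t) increases in t, i.e., a fixed
#     delay hurts less the further it lies in the future.
sigma = 1.0
ratios = [delta(t + sigma) / delta(t) for t in grid]
decreasing_impatience = all(a <= b + 1e-15 for a, b in zip(ratios, ratios[1:]))

print(subadditive, decreasing_impatience)
```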
To overcome this, we introduce an auxiliary value function: for \((s,x)\in\mathbb{R}^{2}_{+}\),
By definition, \(V(0,x)=v(x)\), and \((V(s,X^{x}_{s}) )_{s\ge0}\) is a supermartingale as \(V(s,X^{x}_{s})\) is equal to the right-hand side of (B.1).
Proof of Proposition 4.1
Recall that \(X_{s}=|W_{s}|\) for a one-dimensional Brownian motion \(W\). Let \(y\in\mathbb{R}\) be the initial value of \(W\) and define \(\bar{V}(s,y) := V(s,|y|)\). The associated variational inequality for \(\bar{V}(s,y)\) is the following: for \((s,y)\in[0,\infty)\times\mathbb{R}\),
Taking \(s\mapsto b(s)\) as the free boundary to be determined, we can rewrite (B.2) as
Following [27], we propose the ansatz \(w(s,y)=\frac{1}{\sqrt{1+\beta s}} h(\frac{y}{\sqrt{1+\beta s}})\). Equation (B.3) then becomes a one-dimensional free boundary problem, namely
As the variable \(s\) does not appear in the above ODE, take \(b(s) = \alpha\sqrt{1+ \beta s}\) for some \(\alpha\ge0\). The general solution of the differential equation in the first line of (B.4) is
We then have
To find the parameters \(c_{1}, c_{2}\) and \(\alpha\), we equate the values of \(w(s,y)\) and its partial derivatives on both sides of the free boundary. This yields the equations
The first equation implies \(c_{2}=0\). Then, these equations together yield \(\alpha= 1/\sqrt{\beta}\) and \(c_{1} = \alpha e^{-1/2}\). Thus we obtain
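For concreteness, plugging \(\alpha=1/\sqrt{\beta}\) back into the parametrization \(b(s)=\alpha\sqrt{1+\beta s}\) makes the free boundary explicit:

```latex
b(s) \;=\; \frac{1}{\sqrt{\beta}}\,\sqrt{1+\beta s}
     \;=\; \sqrt{\frac{1+\beta s}{\beta}}
     \;=\; \sqrt{\frac{1}{\beta}+s},
```

which is precisely the threshold appearing in the stopping time \(\tau^{*}_{y}=\inf\{s\ge0: |W^{y}_{s}|\ge\sqrt{1/\beta+s}\}\) of the verification argument below.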
Note that \(w(s,y)> \frac{|y|}{1+ \beta s}\) for \(|y|<\sqrt{1/\beta +s}\). Indeed, by defining the function
and observing that \(h(0)>0\), \(h(\sqrt{1/\beta+s})=0\) and \(h'(y)<\frac {1}{1+\beta s} -\frac{1}{1+\beta s}=0\) for all \(y\in(0,\sqrt{1/\beta+s})\), we conclude that \(h(y)>0\) for all \(y\in[0,\sqrt{1/\beta+s})\), i.e., \(w(s,y)> \frac{|y|}{1+ \beta s}\) for \(|y|<\sqrt{1/\beta+s}\). Also note that \(w\) is \({\mathcal {C}}^{1,1}\) on \([0,\infty)\times\mathbb{R}\) and \({\mathcal {C}}^{1,2}\) on the domain \(\{(s,y)\in[0,\infty)\times\mathbb{R} : |y|<\sqrt{1/\beta+s }\}\). Moreover, by (B.5), \(w_{s}(s,y) + \frac {1}{2}w_{yy}(s,y)<0\) for \(|y|>\sqrt{1/\beta+ s}\). We then conclude from a standard verification theorem (see e.g. [26, Theorem 3.2]) that \(\bar{V}(s,y) = w(s,y)\) is a smooth solution of (B.3). This implies that \((\bar{V}(s,W^{y}_{s}))_{s\ge0}\) is a supermartingale, and \((\bar{V}(s\wedge\tau^{*}_{y},W^{y}_{s\wedge \tau^{*}_{y}}))_{s\ge0}\), with \(\tau^{*}_{y} := \inf\{s\ge0: |W^{y}_{s}|\ge\sqrt{1/\beta+s}\}\), is a true martingale.
It then follows from standard arguments that \(\tau^{*}_{y}\) is the smallest optimal stopping time for \(\bar{V}(0,y)\). As a consequence, \(\hat{\tau}_{x}:=\inf\{s\ge0: X^{x}_{s}\ge \sqrt{1/\beta+s}\}\) is the smallest optimal stopping time for (4.2). In view of Proposition 2.2, \(\widetilde{\tau}_{x} = \hat{\tau}_{x}\). □
Remark B.1
With \(X\) being reflected at the origin, it is expected that the variational inequality of the value function \(V(s,x)\) should admit a Neumann boundary condition at \(x=0\). This is not explicitly seen in (B.2) because of the change of variable \(\bar{V}(s,y) := V(s,|y|)\) in the second line of the proof above, which shifts our analysis to a Brownian motion with no reflection at the origin. In fact, one may check directly from (B.5) that \(V(s,x) = \bar{V}(s,x) = w(s,x)\) indeed satisfies the Neumann boundary condition \(V_{x}(s,0+)=0\) for all \(s\ge0\).
B.2 Proof of Lemma 4.3
First, we prove that \(E\) is totally disconnected. If \(\ker(\tau)=[a,\infty)\), then \(E=\emptyset\) and there is nothing to prove. Assume that there exists \(x^{*}> a\) such that \(x^{*}\notin\ker(\tau)\). Define
We claim that \(\ell=u=x^{*}\). Assume to the contrary that \(\ell< u\). Then \(\tau(x)=1\) for all \(x\in(\ell,u)\). Thus, given \(y\in(\ell,u)\), \({\mathcal {L}}^{*}\tau(y) = T^{y} :=\inf\{ s\ge0 : X^{y}_{s} \notin(\ell,u)\}>0\) and
Since \(X_{s}=|W_{s}|\) for a one-dimensional Brownian motion \(W\) and \(0<\ell <y<u\), by the optional sampling theorem, \(\mathbb{P}[X_{T^{y}}=\ell] = \mathbb{P}[W^{y}_{s}\ \hbox{hits }\ell\hbox{ before hitting }u] = \frac{u-y}{u-\ell}\) and \(\mathbb{P}[X_{T^{y}}=u]=\mathbb{P}[W^{y}_{s}\ \hbox{hits }u\hbox{ before hitting }\ell] =\frac{y-\ell}{u-\ell}\). Alternatively, one may evaluate \(\mathbb{P}[X_{T^{y}}=\ell]\) and \(\mathbb{P}[X_{T^{y}}= u]\) directly by using the fact that the scale function of a one-dimensional Bessel process is the identity mapping (see e.g. [8, Part I, Chap. 6, Sect. 15]). This together with (B.6) gives \(J(y; {\mathcal {L}}^{*}\tau(y)) < y\). This implies \(y\in S_{\tau}\), and thus \(\Theta\tau(y)=0\) by (3.12). Then \(\Theta\tau(y)\neq\tau(y)\), a contradiction to \(\tau\in{\mathcal {E}}(\mathbb{R}_{+})\). This already implies that \(E\) is totally disconnected, and thus \(\overline{\ker(\tau)}=[a,\infty)\). The rest of the proof follows from Lemma 4.2.
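As an independent sanity check on the exit probabilities \(\mathbb{P}[X_{T^{y}}=\ell]=(u-y)/(u-\ell)\), one can solve the exact discrete analogue: for a symmetric simple random walk on \(\{\ell,\dots,u\}\), first-step analysis yields a tridiagonal linear system whose solution is the same linear function. The sketch below (function names are ours, purely illustrative) solves that system in exact rational arithmetic:

```python
# The exit probabilities above follow from optional sampling: since W
# is a martingale, h(y) := P_y[hit l before u] is harmonic, giving
# h(y) = (u - y)/(u - l).  As an independent check, we solve the
# discrete analogue for a symmetric simple random walk on {l, ..., u}:
# h(y) = (h(y-1) + h(y+1))/2 with h(l) = 1, h(u) = 0, via the Thomas
# algorithm on the tridiagonal system, in exact rational arithmetic.

from fractions import Fraction

def exit_prob_walk(l, u):
    """P[hit l before u] for each start y = l+1, ..., u-1, exactly."""
    n = u - l - 1                 # number of interior states
    a = [Fraction(-1, 2)] * n     # sub-diagonal
    b = [Fraction(1)] * n         # diagonal
    c = [Fraction(-1, 2)] * n     # super-diagonal
    d = [Fraction(0)] * n
    d[0] += Fraction(1, 2)        # boundary h(l) = 1 enters the first row
    for i in range(1, n):         # forward elimination
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    h = [Fraction(0)] * n         # back substitution (h(u) = 0)
    h[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        h[i] = (d[i] - c[i] * h[i + 1]) / b[i]
    return h                      # h[k] corresponds to y = l + 1 + k

l, u = 2, 9
probs = exit_prob_walk(l, u)
formula = [Fraction(u - y, u - l) for y in range(l + 1, u)]
print(probs == formula)
```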
B.3 Proof of Lemma 4.4
(i) Given \(a\ge0\), it is obvious from the definition that \(\eta (0,a)\in(0,a)\) and \(\eta(a,a)=a\). Fix \(x\in(0,a)\) and let \(f^{x}_{a}\) denote the density of \(T^{x}_{a}\). We obtain
Since \(T^{x}_{a}\) is the first hitting time of a one-dimensional Bessel process, we compute its Laplace transform by using [19, Theorem 3.1] (or [8, Part II, Sect. 3, Formula 2.0.1]), as
Here, \(I_{\nu}\) denotes the modified Bessel function of the first kind. Thanks to the above formula with \(\lambda=\sqrt{2\beta s}\), we obtain from (B.7) that
It is then obvious that \(x\mapsto\eta(x,a)\) is strictly increasing. Moreover,
which shows the strict convexity.
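The monotonicity and convexity just established can also be seen numerically. The sketch below is our own reconstruction, not a quotation of (B.8): it assumes the classical Laplace transform \(\mathbb{E}^{x}[e^{-\lambda T^{x}_{a}}]=\cosh(x\sqrt{2\lambda})/\cosh(a\sqrt{2\lambda})\) for the hitting time of level \(a\) by reflected Brownian motion, combined with the elementary identity \(\frac{1}{1+\beta t}=\int_{0}^{\infty}e^{-u(1+\beta t)}\,du\); the value \(\beta=1\) is arbitrary.

```python
# Hedged numerical check of the claims in (i).  Assuming the cosh
# formula for the Laplace transform of T_a^x, one gets the integral
# representation (our reconstruction, with g(a) = a):
#   eta(x, a) = a * int_0^inf e^{-u} cosh(x sqrt(2 beta u))
#                                  / cosh(a sqrt(2 beta u)) du.

import math

def eta(x, a, beta=1.0, n=20_000, umax=40.0):
    """Trapezoidal quadrature of the integral representation above."""
    h = umax / n
    total = 0.0
    for k in range(n + 1):
        u = k * h
        w = 0.5 if k in (0, n) else 1.0
        r = math.sqrt(2.0 * beta * u)
        total += w * math.exp(-u) * math.cosh(x * r) / math.cosh(a * r)
    return a * h * total

a = 1.0
xs = [0.1 * k for k in range(11)]            # x on a grid in [0, a]
vals = [eta(x, a) for x in xs]

increasing = all(v1 < v2 for v1, v2 in zip(vals, vals[1:]))
convex = all(vals[i - 1] + vals[i + 1] - 2 * vals[i] > 0
             for i in range(1, len(vals) - 1))
endpoint = abs(vals[-1] - a) < 1e-5          # eta(a, a) = a
print(increasing, convex, endpoint)
```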
(ii) This follows from (B.8) and the dominated convergence theorem.
(iii) We first prove the desired result with \(x^{*}(a)\in(0,a)\), and then upgrade it to \(x^{*}(a)\in(0,a^{*})\). Fix \(a\ge0\). In view of the properties in (i), we observe that the two curves \(y=\eta(x,a)\) and \(y=x\) intersect at some \(x^{*}(a)\in(0,a)\) if and only if \(\eta_{x}(a,a)>1\). Define \(k(a):=\eta_{x}(a,a)\). By (B.8),
Thus we see that \(k(0)=0\) and \(k(a)\) is strictly increasing on \((0,\infty)\) since for any \(a>0\),
By numerical computation, \(k(1/\sqrt{\beta}) =\int_{0}^{\infty}e^{-s} \sqrt{2s}\tanh(\sqrt{2s}) ds \approx 1.07461 >1\). It follows that there must exist \(a^{*}\in(0,1/\sqrt{\beta })\) such that \(k(a^{*})=\eta_{x}(a^{*},a^{*})=1\). Monotonicity of \(k(a)\) then gives the desired result.
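The quoted constant can be reproduced with elementary quadrature (a standalone check; the cutoff and step size below are ad hoc but more than sufficient, since the integrand decays like \(e^{-s}\sqrt{2s}\)):

```python
# Reproducing the value quoted above:
#   k(1/sqrt(beta)) = int_0^inf e^{-s} sqrt(2 s) tanh(sqrt(2 s)) ds
# via plain trapezoidal quadrature on [0, 60].  The integrand is an
# analytic function of s (r*tanh(r) is a function of r^2 = 2s), so
# the trapezoid rule converges at its usual O(h^2) rate.

import math

def integrand(s):
    r = math.sqrt(2.0 * s)
    return math.exp(-s) * r * math.tanh(r)

n, smax = 200_000, 60.0
h = smax / n
# integrand(0) = 0, so the left endpoint contributes nothing.
val = h * (sum(integrand(k * h) for k in range(1, n))
           + 0.5 * integrand(smax))
print(round(val, 5))
```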
Now, for any \(a> a^{*}\), we intend to upgrade the previous result to \(x^{*}(a)\in(0,a^{*})\). Fix \(x\ge0\). By the definition of \(\eta\) and (ii), on the domain \(a\in[x,\infty )\), the mapping \(a\mapsto \eta(x,a)\) must either first increase and then decrease to 0, or directly decrease to 0. From (B.8), we have
with \(k\) as in (B.9). Recalling \(k(a^{*})=1\), we have \(\eta _{a}(a^{*},a^{*})=0\). Notice that
where the second line follows from \(\tanh(x)\le1\) for \(x\ge0\) and \(a^{*}\in(0,1/\sqrt{\beta})\). Since \(\eta_{a}(a^{*},a^{*})=0\) and \(\eta_{aa}(a^{*},a^{*})<0\), we conclude that on the domain \(a\in[a^{*},\infty)\), the mapping \(a\mapsto\eta(a^{*},a)\) decreases to 0. On the other hand, for any \(a>a^{*}\), since \(\eta(a^{*},a) < \eta(a^{*},a^{*})= a^{*}\), we must have \(x^{*}(a)< a^{*}\).
Huang, Y.-J., Nguyen-Huu, A.: Time-consistent stopping under decreasing impatience. Finance Stoch. 22, 69–95 (2018). https://doi.org/10.1007/s00780-017-0350-6
Keywords
- Time inconsistency
- Optimal stopping
- Hyperbolic discounting
- Decreasing impatience
- Subgame-perfect Nash equilibrium
- Iterative approach