
Time-consistent stopping under decreasing impatience


Under non-exponential discounting, we develop a dynamic theory for stopping problems in continuous time. Our framework covers discount functions that induce decreasing impatience. Due to the inherent time inconsistency, we look for equilibrium stopping policies, formulated as fixed points of an operator. Under appropriate conditions, fixed-point iterations converge to equilibrium stopping policies. This iterative approach corresponds to the hierarchy of strategic reasoning in game theory and provides “agent-specific” results: it assigns one specific equilibrium stopping policy to each agent according to her initial behavior. In particular, it leads to a precise mathematical connection between the naive behavior and the sophisticated one. Our theory is illustrated in a real options model.


Fig. 1


References

  1. Ainslie, G.: Picoeconomics: The Strategic Interaction of Successive Motivational States Within the Person. Cambridge University Press, Cambridge (1992)

  2. Barberis, N.: A model of casino gambling. Manag. Sci. 58, 35–51 (2012)

  3. Bass, R.F.: The measurability of hitting times. Electron. Commun. Probab. 15, 99–105 (2010)

  4. Bayraktar, E., Huang, Y.-J.: On the multidimensional controller-and-stopper games. SIAM J. Control Optim. 51, 1263–1297 (2013)

  5. Björk, T., Khapko, M., Murgoci, A.: On time-inconsistent stochastic control in continuous time. Finance Stoch. 21, 331–360 (2017)

  6. Björk, T., Murgoci, A.: A theory of Markovian time-inconsistent stochastic control in discrete time. Finance Stoch. 18, 545–592 (2014)

  7. Björk, T., Murgoci, A., Zhou, X.Y.: Mean–variance portfolio optimization with state-dependent risk aversion. Math. Finance 24, 1–24 (2014)

  8. Borodin, A.N., Salminen, P.: Handbook of Brownian Motion—Facts and Formulae, 2nd edn. Probability and Its Applications. Birkhäuser Verlag, Basel (2002)

  9. Bouchard, B., Touzi, N.: Weak dynamic programming principle for viscosity solutions. SIAM J. Control Optim. 49, 948–962 (2011)

  10. Dong, Y., Sircar, R.: Time-inconsistent portfolio investment problems. In: Crisan, D., et al. (eds.) Stochastic Analysis and Applications 2014, pp. 239–281. Springer, Cham (2014)

  11. Ebert, S., Strack, P.: Until the bitter end: on prospect theory in a dynamic context. Am. Econ. Rev. 105, 1618–1633 (2015)

  12. Ekeland, I., Lazrak, A.: Being serious about non-commitment: subgame perfect equilibrium in continuous time. Preprint, University of British Columbia (2006). Available online at arXiv:math/0604264 [math.OC]

  13. Ekeland, I., Mbodji, O., Pirvu, T.A.: Time-consistent portfolio management. SIAM J. Financ. Math. 3, 1–32 (2012)

  14. Ekeland, I., Pirvu, T.A.: Investment and consumption without commitment. Math. Financ. Econ. 2, 57–86 (2008)

  15. Grenadier, S.R., Wang, N.: Investment under uncertainty and time-inconsistent preferences. J. Financ. Econ. 84, 2–39 (2007)

  16. Hu, Y., Jin, H., Zhou, X.Y.: Time-inconsistent stochastic linear-quadratic control. SIAM J. Control Optim. 50, 1548–1572 (2012)

  17. Karatzas, I., Shreve, S.E.: Methods of Mathematical Finance, corrected 3rd printing. Springer, New York (2001)

  18. Karp, L.: Non-constant discounting in continuous time. J. Econ. Theory 132, 557–568 (2007)

  19. Kent, J.: Some probabilistic properties of Bessel functions. Ann. Probab. 6, 760–770 (1978)

  20. Laibson, D.: Golden eggs and hyperbolic discounting. Q. J. Econ. 112, 443–477 (1997)

  21. Loewenstein, G., Prelec, D.: Anomalies in intertemporal choice: evidence and an interpretation. Q. J. Econ. 107, 573–597 (1992)

  22. Loewenstein, G., Thaler, R.: Anomalies: intertemporal choice. J. Econ. Perspect. 3, 181–193 (1989)

  23. Noor, J.: Decreasing impatience and the magnitude effect jointly contradict exponential discounting. J. Econ. Theory 144, 869–875 (2009)

  24. Noor, J.: Hyperbolic discounting and the standard model: eliciting discount functions. J. Econ. Theory 144, 2077–2083 (2009)

  25. Nutz, M.: Random \(G\)-expectations. Ann. Appl. Probab. 23, 1755–1777 (2013)

  26. Øksendal, B., Sulem, A.: Applied Stochastic Control of Jump Diffusions, 2nd edn. Universitext. Springer, Berlin (2007)

  27. Pedersen, J.L., Peskir, G.: Solving non-linear optimal stopping problems by the method of time-change. Stoch. Anal. Appl. 18, 811–835 (2000)

  28. Pedersen, J.L., Peskir, G.: Optimal mean–variance selling strategies. Math. Financ. Econ. 10, 203–220 (2016)

  29. Peskir, G., Shiryaev, A.: Optimal Stopping and Free-Boundary Problems. Lectures in Mathematics ETH Zürich. Birkhäuser Verlag, Basel (2006)

  30. Pollak, R.A.: Consistent planning. Rev. Econ. Stud. 35, 201–208 (1968)

  31. Prelec, D.: Decreasing impatience: a criterion for non-stationary time preference and “hyperbolic” discounting. Scand. J. Econ. 106, 511–532 (2004)

  32. Stahl, D.: Evolution of smart-n players. Games Econ. Behav. 5, 604–617 (1993)

  33. Stahl, D., Wilson, P.: Experimental evidence on players’ models of other players. J. Econ. Behav. Organ. 25, 309–327 (1994)

  34. Strotz, R.H.: Myopia and inconsistency in dynamic utility maximization. Rev. Econ. Stud. 23, 165–180 (1955)

  35. Taksar, M.I., Markussen, C.: Optimal dynamic reinsurance policies for large insurance portfolios. Finance Stoch. 7, 97–121 (2003)

  36. Thaler, R.: Some empirical evidence on dynamic inconsistency. Econ. Lett. 8, 201–207 (1981)

  37. Xu, Z.Q., Zhou, X.Y.: Optimal stopping under probability distortion. Ann. Appl. Probab. 23, 251–282 (2013)

  38. Yong, J.: Time-inconsistent optimal control problems and the equilibrium HJB equation. Math. Control Relat. Fields 3, 271–329 (2012)



Acknowledgements

For thoughtful advice and comments, we thank Erhan Bayraktar, René Carmona, Samuel Cohen, Ivar Ekeland, Paolo Guasoni, Jan Obłój, Traian Pirvu, Ronnie Sircar and Xunyu Zhou, as well as seminar participants at Florida State University, Princeton University and the University of Oxford. Special gratitude goes to Erhan Bayraktar for bringing this problem to the first author’s attention, and to Traian Pirvu for introducing the authors to each other. We are also grateful to two anonymous referees, whose critical comments substantially improved the paper.

Author information

Corresponding author

Correspondence to Yu-Jui Huang.

Additional information

Y.-J. Huang is supported in part by the National Science Foundation (DMS-1715439) and the University of Colorado (11003573). A. Nguyen-Huu is supported in part by the Energy and Prosperity Chair.


Appendix A: Proofs for Sect. 3

Throughout this appendix, we frequently use the notation

$$ \tau_{n} := \Theta^{n} \tau,\qquad n\in\mathbb{N},\quad \hbox{for any}\ \tau\in{\mathcal {T}}(\mathbb{X}). \tag{A.1} $$

A.1 Proof of Proposition 3.11

Fix \((t,x)\in\mathbb{X}\). We deal with the two cases \(\widetilde{\tau}(t,x) = 0\) and \(\widetilde{\tau}(t,x) = 1\) separately. If \(\widetilde{\tau}(t,x) = 0\), i.e., \(\widetilde{\tau}_{t,x} = t\), then by (2.6),

$$g(x) =\sup_{\tau\in{\mathcal {T}}_{t}}\mathbb{E}^{t,x}[\delta(\tau-t) g(X_{\tau})] \ge\mathbb{E}^{t,x}\big[\delta\big({\mathcal {L}}^{*}\widetilde{\tau}(t,x)-t\big) g(X_{{\mathcal {L}}^{*}\widetilde{\tau}(t,x)})\big], $$

which implies \((t,x)\in S_{\widetilde{\tau}}\cup I_{\widetilde{\tau}}\). We then conclude from (3.6) that

$$\Theta\widetilde{\tau}(t,x)= \left\{ \textstyle\begin{array}{ll} 0, \quad & \text{ if }(t,x)\in S_{\widetilde{\tau}} \\ \widetilde{\tau}(t,x), \quad & \text{ if }(t,x)\in I_{\widetilde {\tau}} \end{array}\displaystyle \right\} \ = \widetilde{\tau}(t,x). $$

If \(\widetilde{\tau}(t,x) =1\), then \({\mathcal {L}}^{*}\widetilde{\tau}(t,x) = {\mathcal {L}}\widetilde{\tau}(t,x) = \inf\{s\ge t :\widetilde{\tau}_{s,X^{t,x}_{s}}=s \}\). By (2.5) and (2.4), \(\widetilde{\tau}_{s,X^{t,x}_{s}}=s\) means that

$$g\big(X^{t,x}_{s}(\omega)\big) = \mathop{\rm ess\, sup}_{\tau\in {\mathcal {T}}_{s}}\mathbb{E}^{s,X^{t,x}_{s}(\omega)}[\delta(\tau-s) g(X_{\tau})], $$

which is equivalent to

$$\begin{aligned} \delta(s-t) g\big(X^{t,x}_{s}(\omega)\big) &= \delta(s-t) \mathop {\rm ess\, sup}_{\tau\in{\mathcal {T}}_{s}}\mathbb {E}^{s,X^{t,x}_{s}(\omega)}[\delta(\tau-s) g(X_{\tau})]\\ &= \mathop{\rm ess\, sup}_{\tau\in{\mathcal {T}}_{s}}\mathbb {E}^{s,X^{t,x}_{s}(\omega)}[\delta(\tau-t) g(X_{\tau})] = Z^{t,x}_{s}(\omega), \end{aligned}$$

where the second equality follows from (2.7). As a result, we can conclude that \({\mathcal {L}}^{*}\widetilde{\tau}(t,x) = \inf\{s\ge t: \delta(s-t) g(X^{t,x}_{s})= Z^{t,x}_{s}\}=\widetilde{\tau}_{t,x}\). This together with (2.6) shows that

$$\begin{aligned} \mathbb{E}^{t,x}\big[\delta\big({\mathcal {L}}^{*}\widetilde{\tau}(t,x)-t\big) g(X_{{\mathcal {L}}^{*}\widetilde{\tau}(t,x)})\big] &= \mathbb{E}^{t,x}[\delta(\widetilde{\tau}_{t,x}-t) g(X_{\widetilde{\tau}_{t,x}})]\ge g(x), \end{aligned}$$

which implies \((t,x)\in I_{\widetilde{\tau}}\cup C_{\widetilde{\tau}}\). By (3.6), we have

$$\Theta\widetilde{\tau}(t,x)= \left\{ \textstyle\begin{array}{ll} \widetilde{\tau}(t,x), \quad & \text{ if }(t,x)\in I_{\widetilde {\tau}} \\ 1,\quad & \text{ if }(t,x)\in C_{\widetilde{\tau}} \end{array}\displaystyle \right\} \ = \widetilde{\tau}(t,x). $$

We therefore have \(\Theta\widetilde{\tau}(t,x) = \widetilde{\tau}(t,x)\) for all \((t,x)\in\mathbb{X}\), i.e., \(\widetilde{\tau}\in{\mathcal {E}}(\mathbb{X})\).

A.2 Derivation of Proposition 3.13

To prove the technical result in Lemma A.1 below, we need to introduce shifted random variables as in Nutz [25]. Recall from Sect. 2 that \(\Omega\) is the canonical path space. For any \(t\ge0\) and \(\omega\in\Omega\), we define the concatenation of \(\omega\) and \(\tilde{\omega}\in\Omega\) at time \(t\) by

$$(\omega\otimes_{t}\tilde{\omega})_{s} := \omega_{s} \mathbf {1}_{[0,t)}(s) + \big(\tilde{\omega}_{s}-(\tilde{\omega}_{t} - \omega _{t})\big) \mathbf{1}_{[t,\infty)} (s),\quad s\ge0. $$

For any \({\mathcal {F}}_{\infty}\)-measurable random variable \(\xi:\Omega \to\mathbb{R}\), we define the shifted random variable \([\xi]_{t,\omega}:\Omega\to\mathbb{R}\), which is \({\mathcal {F}}^{t}_{\infty}\)-measurable, by

$$[\xi]_{t,\omega} (\tilde{\omega}):= \xi(\omega\otimes_{t} \tilde{\omega})\quad\forall\tilde{\omega}\in\Omega. $$

Given \(\tau\in{\mathcal {T}}\), we write \(\omega\otimes_{\tau(\omega )}\tilde{\omega}\) as \(\omega\otimes_{\tau}\tilde{\omega}\), and \([\xi]_{\tau(\omega ),\omega} (\tilde{\omega})\) as \([\xi]_{\tau,\omega} (\tilde{\omega})\). A detailed analysis of shifted random variables can be found in [4, Appendix A]; Proposition A.1 there implies that for fixed \((t,x)\in\mathbb{X}\), any \(\theta\in{\mathcal {T}}_{t}\) and \({\mathcal {F}}^{t}_{\infty}\)-measurable \(\xi\) with \(\mathbb {E}^{t,x}[|\xi|]<\infty\) satisfy

$$ \mathbb{E}^{t,x}[\xi\, |\, {\mathcal {F}}^{t}_{\theta}](\omega) = \mathbb{E}^{t,x}\big[[\xi]_{\theta,\omega}\big]\quad \hbox{for a.e. $\omega\in\Omega$}. \tag{A.2} $$

Lemma A.1

For any \(\tau\in{\mathcal {T}}(\mathbb{X})\) and \((t,x)\in\mathbb {X}\), define \(t_{0}:= {\mathcal {L}}^{*}\tau_{1}(t,x)\in{\mathcal {T}}_{t}\) and \(s_{0} := {\mathcal {L}}^{*}\tau(t,x)\in{\mathcal {T}}_{t}\), with \(\tau_{1}\) as in (A.1). If \(t_{0}\le s_{0}\), then for a.e. \(\omega\in\{t < t_{0}\}\),

$$g\big(X^{t,x}_{t_{0}}(\omega)\big) \le\mathbb{E}^{t,x}[\delta (s_{0}-t_{0})g(X_{s_{0}})\, |\,{\mathcal {F}}^{t}_{t_{0}}] (\omega). $$


Proof

For a.e. \(\omega\in\{t< t_{0}\}\in{\mathcal {F}}_{t}\), we deduce from \(t_{0}(\omega) = {\mathcal {L}}^{*}\tau_{1}(t,x)(\omega)>t\) that \(\tau_{1}(s,X^{t,x}_{s}(\omega))=1\) for all \(s\in(t,t_{0}(\omega))\). In view of (A.1) and (3.6), this implies \((s,X^{t,x}_{s}(\omega))\notin S_{\tau}\) for all \(s\in(t,t_{0}(\omega))\). Thus,

$$\begin{aligned} g\big(X^{t,x}_{s}(\omega)\big) \le \mathbb{E}^{s,X^{t,x}_{s}(\omega)}\big[\delta\big({\mathcal {L}}^{*}\tau(s,X_{s})-s\big)g(X_{{\mathcal {L}}^{*}\tau(s,X_{s})})\big]\quad \forall s\in\big(t,t_{0}(\omega)\big). \end{aligned} \tag{A.3}$$

For any \(s\in(t,t_{0}(\omega))\), note that

$$[t_{0}]_{s,\omega} (\tilde{\omega})= t_{0}(\omega\otimes_{s}\tilde{\omega}) = {\mathcal {L}}^{*}\tau_{1}(t,x)(\omega\otimes_{s}\tilde{\omega})= {\mathcal {L}}^{*}\tau_{1}\big(s,X^{t,x}_{s}(\omega)\big)(\tilde{\omega}) $$

for all \(\tilde{\omega}\in\Omega\). As \(t_{0}\le s_{0}\), a similar calculation gives

$$[s_{0}]_{s,\omega} (\tilde{\omega})= {\mathcal {L}}^{*}\tau\big(s,X^{t,x}_{s}(\omega)\big)(\tilde{\omega}). $$

We thus conclude from (A.3) that

$$\begin{aligned} g\big(X^{t,x}_{s}(\omega)\big) &\le\mathbb{E}^{s,X^{t,x}_{s}(\omega)}\big[\delta([s_{0}]_{s,\omega}-s)g([X_{s_{0}}]_{s,\omega})\big] \\ &\le\mathbb{E}^{s,X^{t,x}_{s}(\omega)}\big[\delta([s_{0}]_{s,\omega}-[t_{0}]_{s,\omega})g([X_{s_{0}}]_{s,\omega})\big]\quad\forall s\in\big(t,t_{0}(\omega)\big), \end{aligned} \tag{A.4}$$

where the second line holds because \(\delta\) is decreasing and both \(\delta\) and \(g\) are nonnegative. On the other hand, by (A.2), it holds a.s. that

$$\begin{aligned} & \mathbb{E}^{t,x}[\delta(s_{0}-t_{0}) g(X_{s_{0}})\, | \, {\mathcal {F}}^{t}_{s}](\omega) \\ &= \mathbb{E}^{t,x}\left[\delta([s_{0}]_{s,\omega}-[t_{0}]_{s,\omega}) g([X^{t,x}_{s_{0}}]_{s,\omega})\right]\quad \forall s\ge t, s\in \mathbb{Q}. \end{aligned}$$

Note that we used the countability of ℚ to obtain the above almost sure statement. This together with (A.4) shows that it holds a.s. that

$$ g\big(X^{t,x}_{s}(\omega)\big)\ \mathbf{1}_{(t,t_{0}(\omega))\cap\mathbb{Q}}(s) \le\mathbb{E}^{t,x}[\delta(s_{0}-t_{0}) g(X_{s_{0}})\, | \,{\mathcal {F}}^{t}_{s}](\omega)\ \mathbf{1}_{(t,t_{0}(\omega))\cap\mathbb{Q}}(s). \tag{A.5} $$

Since our sample space \(\Omega\) is the canonical space for Brownian motion with the right-continuous Brownian filtration \(\mathbb{F}\), the martingale representation theorem holds under the current setting. This implies that every martingale has a continuous version. Let \((M_{s})_{s\ge t}\) be the continuous version of the martingale \((\mathbb{E}^{t,x}[\delta(s_{0}-t_{0}) g(X_{s_{0}})\, | \,{\mathcal {F}}^{t}_{s}])_{s\ge t}\). Then (A.5) immediately implies that it holds a.s. that

$$ g\big(X^{t,x}_{s}(\omega)\big)\ \mathbf{1}_{(t,t_{0}(\omega))\cap\mathbb{Q}}(s) \le M_{s}(\omega)\ \mathbf{1}_{(t,t_{0}(\omega))\cap\mathbb{Q}}(s). \tag{A.6} $$

Also, using the right-continuity of \(M\) and (A.2), one can show that for any \(\tau\in{\mathcal {T}}_{t}\), we have \(M_{\tau}= \mathbb{E}^{t,x}[\delta(s_{0}-t_{0}) g(X_{s_{0}})\, | \, {\mathcal {F}}^{t}_{\tau}]\) a.s. Now we can take some \(\Omega^{*}\in{\mathcal {F}}_{\infty}\) with \(\mathbb{P}[\Omega^{*}] =1\) such that (A.6) holds true and \(M_{t_{0}}(\omega) = \mathbb{E}^{t,x}[\delta(s_{0}-t_{0}) g(X_{s_{0}})\, | \,{\mathcal {F}}^{t}_{t_{0}}](\omega)\) for all \(\omega\in\Omega^{*}\). For any \(\omega\in\Omega^{*}\cap\{t< t_{0}\}\), take \(({k_{n}})\subseteq\mathbb {Q}\) such that \(k_{n} >t\) and \(k_{n}\uparrow t_{0}(\omega)\). Then (A.6) implies that \(g(X^{t,x}_{k_{n}}(\omega)) \le M_{k_{n}}(\omega),\ \forall n\in\mathbb{N}\). As \(n\to\infty\), we obtain from the continuity of \(s\mapsto X_{s}\) and \(z\mapsto g(z)\) and the left-continuity of \(s\mapsto M_{s}\) that \(g(X^{t,x}_{t_{0}}(\omega)) \le M_{t_{0}}(\omega) = \mathbb{E}^{t,x}[\delta(s_{0}-t_{0}) g(X_{s_{0}})\, | \,{\mathcal {F}}^{t}_{t_{0}}](\omega)\). □

Now we are ready to prove Proposition 3.13.

Proof of Proposition 3.13

We prove (3.10) by induction. We know that the result holds for \(n=0\) by (3.9). Now assume that (3.10) holds for \(n=k\in\mathbb{N}\cup\{0\}\), and we intend to show that (3.10) also holds for \(n=k+1\). Recall the notation in (A.1). Fix \((t,x)\in\ker(\tau_{k+1})\), i.e., \(\tau_{k+1}(t,x)=0\). If \({\mathcal {L}}^{*}\tau_{k+1}(t,x) = t\), then \((t,x)\) belongs to \(I_{\tau_{k+1}}\). By (3.6), we get \(\tau _{k+2}(t,x) = \Theta\tau_{k+1}(t,x)= \tau_{k+1}(t,x) =0\), i.e., \((t,x)\in\ker(\tau_{k+2})\) as desired. We therefore assume below that \({\mathcal {L}}^{*}\tau_{k+1}(t,x)>t\).

By (3.6), \(\tau_{k+1}(t,x)=0\) implies

$$ g(x)\ge\mathbb{E}^{t,x}\big[\delta\big({\mathcal {L}}^{*}\tau_{k}(t,x)-t\big) g(X_{{\mathcal {L}}^{*}\tau_{k}(t,x)})\big]. \tag{A.7} $$

Let \(t_{0} := {\mathcal {L}}^{*}\tau_{k+1}(t,x)\) and \(s_{0} := {\mathcal {L}}^{*}\tau_{k}(t,x)\). Under the induction hypothesis that \(\ker(\tau_{k})\subseteq\ker(\tau_{k+1})\), we have \(t_{0}\le s_{0}\) as \(t_{0}\) and \(s_{0}\) are hitting times to \(\ker(\tau_{k+1})\) and \(\ker(\tau_{k})\), respectively; see (3.5). Using (A.7), \(t_{0}\le s_{0}\), Assumption 3.12 and \(g\) being nonnegative, we obtain

$$\begin{aligned} g(x) &\ge \mathbb{E}^{t,x}[\delta(s_{0}-t) g(X_{s_{0}})] \\ & \ge\mathbb{E}^{t,x}[\delta(t_{0}-t)\delta(s_{0}-t_{0}) g(X_{s_{0}})]\\ &=\mathbb{E}^{t,x}\big[\delta(t_{0}-t)\mathbb{E}^{t,x}[\delta (s_{0}-t_{0})g(X_{s_{0}})\ |\ {\mathcal {F}}^{t}_{t_{0}}]\big]\\ &\ge\mathbb{E}^{t,x}[\delta(t_{0}-t)g(X_{t_{0}})], \end{aligned}$$

where the third line follows from the tower property of conditional expectations and the fourth is due to Lemma A.1. This implies \((t,x)\notin C_{\tau_{k+1}}\) and thus

$$\tau_{k+2}(t,x) = \left\{ \textstyle\begin{array}{ll} 0 \quad & \text{ for }(t,x)\in S_{\tau_{k+1}} \\ \tau_{k+1} (t,x) \quad & \text{ for }(t,x)\in I_{\tau_{k+1}} \end{array}\displaystyle \right\} = 0. $$

That is, \((t,x)\in\ker(\tau_{k+2})\). Thus, we conclude that \(\ker (\tau_{k+1})\subseteq\ker(\tau_{k+2})\) as desired.

It remains to show that \(\tau_{0}\) defined in (3.7) is a stopping policy. Observe that for any \((t,x)\in\mathbb{X}\), \(\tau_{0}(t,x) =0\) if and only if \(\Theta^{n}\tau (t,x)=0\), i.e., \((t,x)\in\ker(\Theta^{n}\tau)\), for \(n\) large enough. This together with (3.10) implies that

$$\{(t,x)\in\mathbb{X}: \tau_{0}(t,x)=0 \} = \bigcup_{n\in\mathbb {N}}\ker(\Theta^{n}\tau)\in{\mathcal {B}}(\mathbb{X}). $$

Hence \(\tau_{0}:\mathbb{X}\to\{0,1\}\) is Borel-measurable and thus an element in \({\mathcal {T}}(\mathbb{X})\). □

A.3 Proof of Proposition 3.15

Fix \((t,x)\in\ker(\widetilde{\tau})\). Since \(\widetilde{\tau}(t,x)=0\), i.e., \(\widetilde{\tau}_{t,x}=t\), (2.5), (2.4) and (2.6) imply

$$g(x) = \sup_{\tau\in{\mathcal {T}}_{t}} \mathbb{E}^{t,x}[\delta(\tau -t) g(X_{\tau})] \ge\mathbb{E}^{t,x} \big[\delta\big({\mathcal {L}}^{*}\widetilde{\tau}(t,x)-t\big) g(X_{{\mathcal {L}}^{*}\widetilde{\tau}(t,x)})\big]. $$

This shows that \((t,x)\in S_{\widetilde{\tau}}\cup I_{\widetilde{\tau}}\). Thus we have \(\ker(\widetilde{\tau})\subseteq S_{\widetilde{\tau}}\cup I_{\widetilde{\tau}}\). It follows that

$$\ker(\widetilde{\tau}) = \big(\ker(\widetilde{\tau})\cap S_{\widetilde{\tau}}\big) \cup \big(\ker(\widetilde{\tau})\cap I_{\widetilde{\tau}}\big) \subseteq S_{\widetilde{\tau}} \cup \big(\ker(\widetilde{\tau})\cap I_{\widetilde{\tau}}\big) = \ker (\Theta\widetilde{\tau}), $$

where the last equality follows from (3.6).

A.4 Derivation of Theorem 3.16

Lemma A.2

Suppose Assumption  3.12 holds and \(\tau\in{\mathcal {T}}(\mathbb{X})\) satisfies (3.9). Then \(\tau_{0}\) defined in (3.7) satisfies

$${\mathcal {L}}^{*}\tau_{0}(t,x) = \lim_{n\to\infty} {\mathcal {L}}^{*}\Theta ^{n}\tau(t,x)\quad\forall(t,x)\in\mathbb{X}. $$


Proof

We use the notation in (A.1). Recall that we have \(\ker(\tau_{n})\subseteq\ker(\tau_{n+1})\) for all \(n\in\mathbb{N}\) and \(\ker(\tau_{0}) = \bigcup_{n\in\mathbb{N}}\ker(\tau_{n})\) from Proposition 3.13. By (3.5), this implies that \(({\mathcal {L}}^{*}\tau_{n}(t,x))_{n\in\mathbb{N}}\) is a nonincreasing sequence of stopping times and

$${\mathcal {L}}^{*}\tau_{0}(t,x) \le t_{0}:=\lim_{n\to\infty} {\mathcal {L}}^{*}\tau_{n}(t,x). $$

It remains to show that \({\mathcal {L}}^{*}\tau_{0}(t,x) \ge t_{0}\). We deal with the following two cases.

(i) On \(\{\omega\in\Omega: {\mathcal {L}}^{*}\tau_{0}(t,x)(\omega)=t\}\): By (3.5), there exists a sequence \((t_{m})_{m\in\mathbb{N}}\) in \(\mathbb{R}_{+}\), depending on \(\omega\in \Omega\), such that \(t_{m}\downarrow t\) and \(\tau_{0}(t_{m}, X^{t,x}_{t_{m}}(\omega)) = 0\) for all \(m\in\mathbb{N}\). For each \(m\in \mathbb{N}\), by the definition of \(\tau_{0}\) in (3.7), there exists \(n^{*}\in\mathbb{N}\) large enough such that \(\tau_{n^{*}}(t_{m}, X^{t,x}_{t_{m}}(\omega)) = 0\), which implies \({\mathcal {L}}^{*}\tau_{n^{*}}(t,x)(\omega)\le t_{m}\). Since \(({\mathcal {L}}^{*}\tau_{n}(t,x))_{n\in\mathbb{N}}\) is nonincreasing, we have \(t_{0}(\omega)\le{\mathcal {L}}^{*}\tau_{n^{*}}(t,x)(\omega)\le t_{m}\). With \(m\to\infty\), we obtain \(t_{0}(\omega)\le t={\mathcal {L}}^{*}\tau_{0}(t,x)(\omega)\).

(ii) On \(\{\omega\in\Omega: {\mathcal {L}}^{*}\tau_{0}(t,x)(\omega)>t\} \): Set \(s_{0}:={\mathcal {L}}^{*}\tau_{0}(t,x)\) and focus on the value of \(\tau_{0}(s_{0}(\omega), X^{t,x}_{s_{0}}(\omega))\). If \(\tau _{0}(s_{0}(\omega), X^{t,x}_{s_{0}}(\omega))=0\), then by (3.7) there exists \(n^{*}\in\mathbb{N}\) large enough such that \(\tau_{n^{*}}(s_{0}(\omega), X^{t,x}_{s_{0}}(\omega))=0\). Since \(({\mathcal {L}}^{*}\tau_{n}(t,x))_{n\in \mathbb{N}}\) is nonincreasing, \(t_{0}(\omega)\le {\mathcal {L}}^{*}\tau_{n^{*}}(t,x)(\omega)\le s_{0}(\omega)\) as desired. If \(\tau_{0}(s_{0}(\omega), X^{t,x}_{s_{0}}(\omega))=1\), then by (3.5), there exists a sequence \((t_{m})_{m\in\mathbb{N}}\) in \(\mathbb{R}_{+}\), depending on \(\omega\in\Omega\), such that \(t_{m}\downarrow s_{0}(\omega)\) and \(\tau_{0}(t_{m}, X^{t,x}_{t_{m}}(\omega)) = 0\) for all \(m\in\mathbb{N}\). Then we can argue as in case (i) to show that \(t_{0}(\omega)\le s_{0}(\omega)\) as desired. □

Now we are ready to prove Theorem 3.16.

Proof of Theorem 3.16

By Proposition 3.13, \(\tau_{0}\in{\mathcal {T}}(\mathbb{X})\) is well defined. For simplicity, we use the notation in (A.1). Fix \((t,x)\in\mathbb{X}\). If \(\tau_{0}(t,x)=0\), then (3.7) gives \(\tau_{n}(t,x)=0\) for \(n\) large enough. Since \(\tau_{n}(t,x) = \Theta\tau_{n-1}(t,x)\), we deduce from “\(\tau_{n}(t,x)=0\) for \(n\) large enough” and (3.6) that \((t,x)\in S_{\tau_{n-1}}\cup I_{\tau _{n-1}}\) for \(n\) large enough. That is, \(g(x)\ge\mathbb{E}^{t,x}[\delta({\mathcal {L}}^{*}\tau_{n-1}(t,x)-t) g(X_{{\mathcal {L}}^{*}\tau_{n-1}(t,x)})]\ \hbox{for $n$ large enough}\). With \(n\to\infty\), the dominated convergence theorem and Lemma A.2 yield

$$g(x)\ge\mathbb{E}^{t,x}\big[\delta\big({\mathcal {L}}^{*}\tau _{0}(t,x)-t\big) g(X_{{\mathcal {L}}^{*}\tau_{0}(t,x)})\big], $$

which shows that \((t,x)\in S_{\tau_{0}}\cup I_{\tau_{0}}\). We then deduce from (3.6) and \(\tau_{0}(t,x)=0\) that \(\Theta\tau_{0}(t,x) = \tau_{0}(t,x)\). On the other hand, if \(\tau _{0}(t,x)=1\), then (3.7) gives \(\tau_{n}(t,x)=1\) for \(n\) large enough. Since \(\tau_{n}(t,x) = \Theta \tau_{n-1}(t,x)\), we deduce from “\(\tau_{n}(t,x)=1\) for \(n\) large enough” and (3.6) that \((t,x)\in C_{\tau_{n-1}}\cup I_{\tau_{n-1}}\) for \(n\) large enough. That is, \(g(x)\le\mathbb{E}^{t,x}[\delta ({\mathcal {L}}^{*}\tau_{n-1}(t,x)-t) g(X_{{\mathcal {L}}^{*}\tau_{n-1}(t,x)})]\ \hbox{for }n\hbox{ large enough} \). With \(n\to\infty\), the dominated convergence theorem and Lemma A.2 yield

$$g(x)\le\mathbb{E}^{t,x}\big[\delta\big({\mathcal {L}}^{*}\tau _{0}(t,x)-t\big) g(X_{{\mathcal {L}}^{*}\tau_{0}(t,x)})\big], $$

which shows that \((t,x)\in C_{\tau_{0}}\cup I_{\tau_{0}}\). We then deduce from (3.6) and \(\tau_{0}(t,x)=1\) that \(\Theta\tau_{0}(t,x) = \tau_{0}(t,x)\). We therefore conclude that \(\tau_{0}\in{\mathcal {E}}(\mathbb{X})\). □
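To make the fixed-point iteration behind Proposition 3.13 and Theorem 3.16 concrete, here is a minimal discrete-time sketch, not the paper's continuous-time operator: a reflecting random walk on \(\{0,\dots,10\}\) with payoff \(g(x)=x\), hyperbolic discount \(\delta(n)=1/(1+\beta n)\), and a one-step-ahead analogue of the operator \(\Theta\) in (3.6). All parameters and the discrete operator itself are illustrative assumptions.

```python
import numpy as np

# Toy analogue of the iteration tau_{n+1} = Theta(tau_n): reflecting random
# walk on {0, ..., N}, payoff g(x) = x, hyperbolic discount 1/(1 + beta*n).
N, beta = 10, 0.5
g = np.arange(N + 1, dtype=float)

def delta(n):
    return 1.0 / (1.0 + beta * n)

P = np.zeros((N + 1, N + 1))            # transition matrix of the walk
P[0, 1] = P[N, N - 1] = 1.0             # reflection at both ends
for x in range(1, N):
    P[x, x - 1] = P[x, x + 1] = 0.5

def continuation_value(policy, x, horizon=2000):
    """E[delta(T) g(X_T)], where T = first n >= 1 with policy(X_n) = 0 (stop)."""
    cont = (policy == 1)                # states where the policy continues
    dist = P[x].copy()                  # law of X_1
    total = 0.0
    for n in range(1, horizon + 1):
        total += delta(n) * (dist * ~cont) @ g   # mass stopping at time n
        dist = (dist * cont) @ P                 # surviving mass steps on
        if dist.sum() < 1e-12:
            break
    return total

def Theta(policy):
    """One-step-ahead analogue of (3.6): stop iff stopping now does at least
    as well as continuing and then following `policy`."""
    return np.array([0 if g[x] >= continuation_value(policy, x) else 1
                     for x in range(N + 1)])

tau = np.zeros(N + 1, dtype=int)        # initial policy: stop everywhere
for _ in range(50):
    new = Theta(tau)
    if np.array_equal(new, tau):        # fixed point reached: equilibrium
        break
    tau = new
```

In this toy the iteration stabilizes after two steps at the policy that continues only at the reflecting boundary \(x=0\), mirroring how reflection makes continuation valuable near the origin in the real options example of Sect. 4.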

Appendix B: Proofs for Sect. 4

B.1 Derivation of Proposition 4.1

In the classical case of exponential discounting, (2.7) ensures that for all \(s\ge0\),

$$ \delta(s)v(X_{s}^{x})=\sup_{\tau\in{\mathcal {T}}}\mathbb{E}^{X^{x}_{s}}[\delta(s+\tau) g(X_{\tau})]= \sup_{\tau\in{\mathcal {T}}_{s}}\mathbb{E}^{x}[\delta(\tau) g(X_{\tau}) \, | \, {\mathcal {F}}_{s}], \tag{B.1} $$

which shows that \((\delta(s)v(X_{s}^{x}))_{s\ge0}\) is a supermartingale. Under hyperbolic discounting (4.1), we have \(\delta(r_{1})\delta(r_{2}) \leq\delta(r_{1}+r_{2})\) for all \(r_{1},r_{2}\ge0\), since \((1+\beta r_{1})(1+\beta r_{2}) = 1+\beta(r_{1}+r_{2})+\beta^{2}r_{1}r_{2} \ge 1+\beta(r_{1}+r_{2})\). Hence the first equality in the above equation fails, and \((\delta(s)v(X_{s}^{x}))_{s\ge0}\) need no longer be a supermartingale.

To overcome this, we introduce an auxiliary value function: for \((s,x)\in\mathbb{R}^{2}_{+}\),

$$\begin{aligned} V(s,x)&:=\sup_{\tau\in{\mathcal {T}}}\mathbb{E}^{x}[\delta(s+\tau) g(X_{\tau})] =\sup_{\tau\in{\mathcal {T}}}\mathbb{E}^{x}\bigg[\frac {X_{\tau}}{1+\beta(s+\tau)}\bigg]. \end{aligned}$$

By definition, \(V(0,x)=v(x)\), and \((V(s,X^{x}_{s}) )_{s\ge0}\) is a supermartingale as \(V(s,X^{x}_{s})\) is equal to the right-hand side of (B.1).

Proof of Proposition 4.1

Recall that \(X_{s}=|W_{s}|\) for a one-dimensional Brownian motion \(W\). Let \(y\in\mathbb{R}\) be the initial value of \(W\) and define \(\bar{V}(s,y) := V(s,|y|)\). The associated variational inequality for \(\bar{V}(s,y)\) is the following: for \((s,y)\in[0,\infty)\times\mathbb{R}\),

$$ \min\left\{w_{s}(s,y)+\frac{1}{2}w_{yy}(s,y),\ w(s,y)-\frac{|y|}{1+\beta s}\right\} =0. \tag{B.2} $$

Taking \(s\mapsto b(s)\) as the free boundary to be determined, we can rewrite (B.2) as

$$ \textstyle\begin{cases} \hbox{$w_{s}(s,y) + \frac{1}{2}w_{yy}(s,y)=0$ and $ w(s,y)>\frac{|y|}{1+\beta s}$} \quad & \hbox{for}\ |y|< b(s),\\ w(s,y)=\frac{|y|}{1+ \beta s} \quad &\hbox{for}\ |y|\ge b(s). \end{cases} \tag{B.3} $$

Following [27], we propose the ansatz \(w(s,y)=\frac{1}{\sqrt{1+\beta s}} h(\frac{y}{\sqrt{1+\beta s}})\). Equation (B.3) then becomes a one-dimensional free boundary problem, namely

$$ \left\{ \textstyle\begin{array}{ll} \hbox{$-\beta zh'(z)+h''(z)=\beta h(z)$ and $h(z)>|z|$} \quad \,& \hbox{for}\ |z|< \frac{b(s)}{\sqrt{1+\beta s}},\\ h(z)=|z| \quad \,& \hbox{for} \ |z|\ge\frac{b(s)}{\sqrt{1+\beta s}}. \end{array}\displaystyle \right. \tag{B.4} $$

As the variable \(s\) does not appear in the above ODE, take \(b(s) = \alpha\sqrt{1+ \beta s}\) for some \(\alpha\ge0\). The general solution of the differential equation in the first line of (B.4) is

$$h(z) = e^{\frac{\beta}{2}z^{2}}\bigg(c_{1}+c_{2} \sqrt {\frac{2}{\beta}}\int_{0}^{\sqrt{{\beta}/{2}}\ z} e^{-u^{2}}du\bigg),\quad (c_{1}, c_{2}) \in\mathbb{R}^{2}\; . $$

We then have

$$w(s,y) = \textstyle\begin{cases} \frac{e^{\frac{\beta y^{2}}{2(1+\beta s)}}}{\sqrt{1+\beta s}}\bigg(c_{1} + c_{2} \sqrt{\frac{2}{\beta}}\int_{0}^{\frac{\sqrt{{\beta }/{2}} }{\sqrt{1+\beta s}}y} e^{-u^{2}}du\bigg), \quad & |y|< \alpha\sqrt{1+\beta s},\\ \frac{|y|}{1+ \beta s}, & |y|\ge\alpha\sqrt{1+\beta s}. \end{cases} $$

To find the parameters \(c_{1}, c_{2}\) and \(\alpha\), we equate the values of \(w(s,y)\) and its partial derivatives on both sides of the free boundary. This yields the equations

$$ \alpha= e^{\frac{\beta}{2}\alpha^{2}}\bigg(c_{1} \pm c_{2}\sqrt{\frac {2}{\beta}}\int_{0}^{\sqrt{{\beta}/{2}}\ \alpha}e^{-u^{2}}du\bigg) \qquad \text{and}\qquad \alpha^{2}\beta+ c_{2} =1. $$

Since the first equation must hold for both choices of sign, it forces \(c_{2}=0\). The second equation then yields \(\alpha= 1/\sqrt{\beta}\), and the first gives \(c_{1} = \alpha e^{-\beta\alpha^{2}/2} = \alpha e^{-1/2}\). Thus we obtain

$$ w(s,y) = \textstyle\begin{cases} \frac{1}{\sqrt{\beta}\sqrt{1+\beta s}}\exp(\frac{1}{2}(\frac{\beta y^{2}}{1+\beta s}-1)), \quad & |y|< \sqrt{1/\beta+s},\\ \frac{|y|}{1+ \beta s}, & |y|\ge\sqrt{1/\beta+s}. \end{cases} \tag{B.5} $$

Note that \(w(s,y)> \frac{|y|}{1+ \beta s}\) for \(|y|<\sqrt{1/\beta +s}\). Indeed, by defining the function

$$h(y) := \frac{1}{\sqrt{\beta}\sqrt{1+\beta s}}\exp\bigg(\frac {1}{2}\Big(\frac{\beta y^{2}}{1+\beta s}-1\Big)\bigg)-\frac {y}{1+\beta s} $$

and observing that \(h(0)>0\), \(h(\sqrt{1/\beta+s})=0\) and \(h'(y)<\frac {1}{1+\beta s} -\frac{1}{1+\beta s}=0\) for all \(y\in(0,\sqrt{1/\beta+s})\), we conclude that \(h(y)>0\) for all \(y\in[0,\sqrt{1/\beta+s})\), i.e., \(w(s,y)> \frac{|y|}{1+ \beta s}\) for \(|y|<\sqrt{1/\beta+s}\). Also note that \(w\) is \({\mathcal {C}}^{1,1}\) on \([0,\infty)\times\mathbb{R}\) and \({\mathcal {C}}^{1,2}\) on the domain \(\{(s,y)\in[0,\infty)\times\mathbb{R} : |y|<\sqrt{1/\beta+s }\}\). Moreover, by (B.5), \(w_{s}(s,y) + \frac {1}{2}w_{yy}(s,y)<0\) for \(|y|>\sqrt{1/\beta+ s}\). We then conclude from a standard verification theorem (see e.g. [26, Theorem 3.2]) that \(\bar{V}(s,y) = w(s,y)\) is a smooth solution of (B.3). This implies that \((\bar{V}(s,W^{y}_{s}))_{s\ge0}\) is a supermartingale, and \((\bar{V}(s\wedge\tau^{*}_{y},W^{y}_{s\wedge \tau^{*}_{y}}))_{s\ge0}\), with \(\tau^{*}_{y} := \inf\{s\ge0: |W^{y}_{s}|\ge\sqrt{1/\beta+s}\}\), is a true martingale.

It then follows from standard arguments that \(\tau^{*}_{y}\) is the smallest optimal stopping time for \(\bar{V}(0,y)\). As a consequence, \(\hat{\tau}_{x}:=\inf\{s\ge0: X^{x}_{s}\ge \sqrt{1/\beta+s}\}\) is the smallest optimal stopping time for (4.2). In view of Proposition 2.2, \(\widetilde{\tau}_{x} = \hat{\tau}_{x}\). □
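As a sanity check on the verification step, the interior expression in (B.5) can be confirmed symbolically: it solves the heat-type equation, pastes \(C^{1}\)-smoothly at the free boundary, and satisfies the Neumann condition discussed in Remark B.1. This is a sketch assuming the sympy library.

```python
import sympy as sp

s, y = sp.symbols('s y', nonnegative=True)
beta = sp.Symbol('beta', positive=True)

# interior expression of w in (B.5)
w = sp.exp((beta * y**2 / (1 + beta * s) - 1) / 2) \
    / (sp.sqrt(beta) * sp.sqrt(1 + beta * s))

# w solves w_s + w_yy/2 = 0 in the continuation region
pde = sp.simplify(sp.diff(w, s) + sp.diff(w, y, 2) / 2)

b = sp.sqrt(1 / beta + s)                  # free boundary y = sqrt(1/beta + s)
value_gap = sp.simplify(w.subs(y, b) - b / (1 + beta * s))              # continuity
slope_gap = sp.simplify(sp.diff(w, y).subs(y, b) - 1 / (1 + beta * s))  # smooth fit
neumann = sp.diff(w, y).subs(y, 0)         # Neumann condition at the origin
```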

Remark B.1

With \(X\) being reflected at the origin, it is expected that the variational inequality of the value function \(V(s,x)\) should admit a Neumann boundary condition at \(x=0\). This is not explicitly seen in (B.2) because of the change of variable \(\bar{V}(s,y) := V(s,|y|)\) in the second line of the proof above, which shifts our analysis to a Brownian motion with no reflection at the origin. In fact, one may check directly from (B.5) that \(V(s,x) = \bar{V}(s,x) = w(s,x)\) indeed satisfies the Neumann boundary condition \(V_{x}(s,0+)=0\) for all \(s\ge0\).

B.2 Proof of Lemma 4.3

First, we prove that \(E\) is totally disconnected. If \(\ker(\tau)=[a,\infty)\), then \(E=\emptyset\) and there is nothing to prove. Assume that there exists \(x^{*}> a\) such that \(x^{*}\notin\ker(\tau)\). Define

$$ \ell:= \sup\{b\in\ker(\tau) : b< x^{*}\}\qquad \mbox{and}\qquad u:=\inf\{b\in\ker(\tau) : b>x^{*}\}. $$

We claim that \(\ell=u=x^{*}\). Assume to the contrary that \(\ell< u\). Then \(\tau(x)=1\) for all \(x\in(\ell,u)\). Thus, given \(y\in(\ell,u)\), \({\mathcal {L}}^{*}\tau(y) = T^{y} :=\inf\{ s\ge0 : X^{y}_{s} \notin(\ell,u)\}>0\) and

$$\begin{aligned} J\big(y; {\mathcal {L}}^{*}\tau(y)\big) = \mathbb{E}^{y}\left[\frac{X_{T^{y}}}{1+\beta T^{y}}\right] < \mathbb{E}^{y}[X_{T^{y}}] = \ell\mathbb{P}[X_{T^{y}}=\ell]+ u\mathbb{P}[X_{T^{y}}=u]. \end{aligned} \tag{B.6}$$

Since \(X_{s}=|W_{s}|\) for a one-dimensional Brownian motion \(W\) and \(0<\ell <y<u\), by the optional sampling theorem, \(\mathbb{P}[X_{T^{y}}=\ell] = \mathbb{P}[W^{y}_{s}\ \hbox{hits }\ell\hbox{ before hitting }u] = \frac{u-y}{u-\ell}\) and \(\mathbb{P}[X_{T^{y}}=u]=\mathbb{P}[W^{y}_{s}\ \hbox{hits }u\hbox{ before hitting }\ell] =\frac{y-\ell}{u-\ell}\). Alternatively, one may evaluate \(\mathbb{P}[X_{T^{y}}=\ell]\) and \(\mathbb{P}[X_{T^{y}}= u]\) directly by using the fact that the scale function of a one-dimensional Bessel process is the identity mapping (see e.g. [8, Part I, Chap. 6, Sect. 15]). This together with (B.6) gives \(J(y; {\mathcal {L}}^{*}\tau(y)) < y\). This implies \(y\in S_{\tau}\), and thus \(\Theta\tau(y)=0\) by (3.12). Then \(\Theta\tau(y)\neq\tau(y)\), a contradiction to \(\tau\in{\mathcal {E}}(\mathbb{R}_{+})\). This already implies that \(E\) is totally disconnected, and thus \(\overline{\ker(\tau)}=[a,\infty)\). The rest of the proof follows from Lemma 4.2.
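The hitting probabilities used above can be cross-checked in a discrete analogue (a sketch with illustrative barriers): for a symmetric \(\pm1\) walk, \(h(x)=\mathbb{P}_{x}[\hbox{hit } \ell \hbox{ before } u]\) solves \(h(x)=\frac{1}{2}h(x-1)+\frac{1}{2}h(x+1)\) with \(h(\ell)=1\), \(h(u)=0\), giving the same linear scale function.

```python
import numpy as np

# Discrete check of P[hit l before u] = (u - y)/(u - l) for a symmetric walk.
l, u = 2, 7                      # illustrative integer barriers
n = u - l - 1                    # interior states l+1, ..., u-1
A = np.eye(n)
b = np.zeros(n)
for i, x in enumerate(range(l + 1, u)):
    if x - 1 == l:
        b[i] += 0.5              # stepping down exits at l (counts as a hit)
    else:
        A[i, i - 1] = -0.5
    if x + 1 < u:
        A[i, i + 1] = -0.5       # stepping up to u contributes probability 0
h = np.linalg.solve(A, b)        # h[i] = P_{l+1+i}[hit l before u]
expected = np.array([(u - x) / (u - l) for x in range(l + 1, u)])
```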

B.3 Proof of Lemma 4.4

(i) Given \(a\ge0\), it is obvious from the definition that \(\eta (0,a)\in(0,a)\) and \(\eta(a,a)=a\). Fix \(x\in(0,a)\) and let \(f^{x}_{a}\) denote the density of \(T^{x}_{a}\). We obtain

$$\begin{aligned} \mathbb{E}^{x}\left[\frac{1}{1+\beta T^{x}_{a}}\right] &= \int _{0}^{\infty}\frac{1}{1+\beta t}f^{x}_{a}(t)dt = \int_{0}^{\infty}\int _{0}^{\infty}e^{-(1+\beta t)s}f^{x}_{a}(t)ds\ dt \\ &= \int_{0}^{\infty}e^{-s}\bigg(\int_{0}^{\infty}e^{-\beta st}f^{x}_{a}(t)dt\bigg)\ ds \\ & = \int_{0}^{\infty}e^{-s} \mathbb{E}^{x}[e^{-\beta sT^{x}_{a}}] ds. \end{aligned}$$
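The Fubini step above rests on the elementary identity \(\frac{1}{1+\beta t} = \int_{0}^{\infty} e^{-s} e^{-\beta s t}\,ds\), applied pathwise with \(t\) replaced by \(T^{x}_{a}\). A quick numerical sanity check of this identity (the values of \(\beta\) and \(t\) are arbitrary illustrations):

```python
import math

def laplace_side(beta, t, upper=50.0, n=50_000):
    """Composite Simpson's rule for the integral ∫_0^∞ e^{-s} e^{-beta*s*t} ds."""
    h = upper / n
    f = lambda s: math.exp(-s * (1.0 + beta * t))
    total = f(0.0) + f(upper)
    for k in range(1, n):
        total += (4 if k % 2 else 2) * f(k * h)
    return total * h / 3.0

for beta, t in [(0.5, 1.0), (1.0, 2.0), (2.0, 0.3)]:
    assert abs(laplace_side(beta, t) - 1.0 / (1.0 + beta * t)) < 1e-8
print("identity 1/(1+beta*t) = ∫ e^{-s} e^{-beta*s*t} ds verified")
```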

Since \(T^{x}_{a}\) is the first hitting time of a one-dimensional Bessel process, we compute its Laplace transform by using [19, Theorem 3.1] (or [8, Part II, Sect. 3, Formula 2.0.1]), as

$$ \mathbb{E}^{x}[e^{-\frac{\lambda^{2}}{2} T^{x}_{a}}] = \frac{\sqrt{x} I_{-\frac{1}{2}}(x\lambda)}{\sqrt{a} I_{-\frac{1}{2}}(a\lambda)}= \cosh(x\lambda)\operatorname {sech}(a\lambda)\qquad \hbox{for}\ x\le a. $$

Here, \(I_{\nu}\) denotes the modified Bessel function of the first kind. Thanks to the above formula with \(\lambda=\sqrt{2\beta s}\), we obtain from (B.7) that

$$ \eta(x,a) = a \int_{0}^{\infty}e^{-s} \cosh(x\sqrt{2\beta s})\operatorname{sech}(a\sqrt{2\beta s}) ds. $$

It is then obvious that \(x\mapsto\eta(x,a)\) is strictly increasing. Moreover,

$$\eta_{xx}(x,a) = 2a\beta \int_{0}^{\infty}e^{-s} s\cosh(x\sqrt {2\beta s})\operatorname{sech}(a\sqrt{2 \beta s}) ds>0 \qquad \hbox{for}\ x\in[0,a], $$

which shows the strict convexity.
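The properties claimed in part (i) can be verified numerically from the representation (B.8). A sketch in Python using composite Simpson quadrature; \(\beta=1\) and \(a=1\) are illustrative choices, not values from the paper:

```python
import math

def eta(x, a, beta=1.0, upper=50.0, n=50_000):
    """eta(x,a) = a * ∫_0^∞ e^{-s} cosh(x√(2βs)) sech(a√(2βs)) ds (Simpson)."""
    h = upper / n
    def f(s):
        r = math.sqrt(2.0 * beta * s)
        return math.exp(-s) * math.cosh(x * r) / math.cosh(a * r)
    total = f(0.0) + f(upper)
    for k in range(1, n):
        total += (4 if k % 2 else 2) * f(k * h)
    return a * total * h / 3.0

a = 1.0
assert abs(eta(a, a) - a) < 1e-8          # eta(a, a) = a
assert 0.0 < eta(0.0, a) < a              # eta(0, a) lies in (0, a)
xs = [0.1 * i for i in range(11)]
vals = [eta(x, a) for x in xs]
diffs = [v2 - v1 for v1, v2 in zip(vals, vals[1:])]
assert all(d > 0 for d in diffs)                          # strictly increasing in x
assert all(d2 > d1 for d1, d2 in zip(diffs, diffs[1:]))   # strictly convex in x
print("eta(., a) is strictly increasing and convex on [0, a]")
```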

(ii) This follows from (B.8) and the dominated convergence theorem.

(iii) We first prove the desired result with \(x^{*}(a)\in(0,a)\), and then upgrade it to \(x^{*}(a)\in(0,a^{*})\). Fix \(a\ge0\). In view of the properties in (i), we observe that the two curves \(y=\eta(x,a)\) and \(y=x\) intersect at some \(x^{*}(a)\in(0,a)\) if and only if \(\eta_{x}(a,a)>1\). Define \(k(a):=\eta_{x}(a,a)\). By (B.8),

$$ k(a)=a\int_{0}^{\infty}e^{-s} \sqrt{2\beta s}\tanh(a\sqrt{2\beta s}) ds. $$

Thus we see that \(k(0)=0\) and \(k(a)\) is strictly increasing on \((0,\infty)\), since for any \(a>0\),

$$k'(a)=\int_{0}^{\infty}e^{-s} \sqrt{2\beta s}\bigg(\tanh(a\sqrt{2\beta s})+\frac {a\sqrt{2\beta s}}{\cosh^{2}(a\sqrt{2\beta s})}\bigg) ds >0. $$

By numerical computation, \(k(1/\sqrt{\beta}) =\int_{0}^{\infty}e^{-s} \sqrt{2s}\tanh(\sqrt{2s}) ds \approx 1.07461 >1\). It follows that there must exist \(a^{*}\in(0,1/\sqrt{\beta })\) such that \(k(a^{*})=\eta_{x}(a^{*},a^{*})=1\). Monotonicity of \(k(a)\) then gives the desired result.
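The numerical value \(k(1/\sqrt{\beta})\approx1.07461\) and the resulting threshold \(a^{*}\) are easy to reproduce. The following sketch takes \(\beta=1\) (so \(1/\sqrt{\beta}=1\)) as an illustrative normalization and locates \(a^{*}\) by bisection:

```python
import math

def k(a, beta=1.0, upper=50.0, n=20_000):
    """k(a) = a * ∫_0^∞ e^{-s} √(2βs) tanh(a√(2βs)) ds via Simpson's rule."""
    h = upper / n
    def f(s):
        r = math.sqrt(2.0 * beta * s)
        return math.exp(-s) * r * math.tanh(a * r)
    total = f(0.0) + f(upper)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(i * h)
    return a * total * h / 3.0

assert abs(k(1.0) - 1.07461) < 1e-3     # the value quoted in the text
assert k(0.1) < k(0.5) < k(1.0)         # k is strictly increasing

# bisection for a* in (0, 1/sqrt(beta)) with k(a*) = 1
lo, hi = 0.0, 1.0
for _ in range(40):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if k(mid) < 1.0 else (lo, mid)
a_star = 0.5 * (lo + hi)
assert abs(k(a_star) - 1.0) < 1e-6
print(f"a* = {a_star:.4f} for beta = 1")
```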

Now, for any \(a> a^{*}\), we upgrade the previous result to \(x^{*}(a)\in(0,a^{*})\). Fix \(x\ge0\). By the definition of \(\eta\) and (ii), on the domain \(a\in[x,\infty )\), the mapping \(a\mapsto \eta(x,a)\) either first increases and then decreases to 0, or decreases to 0 directly. From (B.8), we have

$$\eta_{a}(x,x) = 1-x\int_{0}^{\infty}e^{-s} \sqrt{2\beta s}\tanh(x\sqrt {2\beta s}) ds = 1-k(x), $$

with \(k\) as in (B.9). Recalling \(k(a^{*})=1\), we have \(\eta _{a}(a^{*},a^{*})=0\). Notice that

$$\begin{aligned} \eta_{aa}(a^{*},a^{*}) &= -\frac{2}{a^{*}}k(a^{*}) -2 \beta a^{*} + a^{*}\int _{0}^{\infty}4\beta s e^{-s}\tanh^{2}(a^{*}\sqrt{2\beta s}) ds\\ &\le -\frac{2}{a^{*}} + 2\beta a^{*} < 0, \end{aligned}$$

where the second line follows from \(\tanh(x)\le1\) for \(x\ge0\) and \(a^{*}\in(0,1/\sqrt{\beta})\). Since \(\eta_{a}(a^{*},a^{*})=0\) and \(\eta_{aa}(a^{*},a^{*})<0\), we conclude that on the domain \(a\in[a^{*},\infty)\), the mapping \(a\mapsto\eta(a^{*},a)\) decreases to 0. On the other hand, for any \(a>a^{*}\), since \(\eta(a^{*},a) < \eta(a^{*},a^{*})= a^{*}\), we must have \(x^{*}(a)< a^{*}\).
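This final monotonicity argument can also be illustrated numerically: with the same integral representations for \(\eta\) and \(k\) as above (again under the illustrative choice \(\beta=1\)), one locates \(a^{*}\) and observes that \(a\mapsto\eta(a^{*},a)\) decreases and stays strictly below \(a^{*}\) for \(a>a^{*}\), which is what forces \(x^{*}(a)<a^{*}\). A sketch:

```python
import math

def simpson(f, upper=50.0, n=20_000):
    """Composite Simpson's rule for ∫_0^upper f(s) ds."""
    h = upper / n
    total = f(0.0) + f(upper)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(i * h)
    return total * h / 3.0

def eta(x, a, beta=1.0):
    return a * simpson(lambda s: math.exp(-s)
                       * math.cosh(x * math.sqrt(2.0 * beta * s))
                       / math.cosh(a * math.sqrt(2.0 * beta * s)))

def k(a, beta=1.0):
    return a * simpson(lambda s: math.exp(-s) * math.sqrt(2.0 * beta * s)
                       * math.tanh(a * math.sqrt(2.0 * beta * s)))

# locate a* with k(a*) = 1 by bisection (beta = 1)
lo, hi = 0.0, 1.0
for _ in range(40):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if k(mid) < 1.0 else (lo, mid)
a_star = 0.5 * (lo + hi)

# a -> eta(a*, a) decreases on [a*, infinity) and stays below a* ...
vals = [eta(a_star, a_star + 0.1 * j) for j in range(1, 8)]
assert all(v < a_star for v in vals)
assert all(v1 > v2 for v1, v2 in zip(vals, vals[1:]))
# ... so the fixed point x*(a) of x -> eta(x, a) must lie below a*
print("eta(a*, a) < a* and decreasing for a > a*")
```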


Cite this article

Huang, YJ., Nguyen-Huu, A. Time-consistent stopping under decreasing impatience. Finance Stoch 22, 69–95 (2018).


Keywords

  • Time inconsistency
  • Optimal stopping
  • Hyperbolic discounting
  • Decreasing impatience
  • Subgame-perfect Nash equilibrium
  • Iterative approach

Mathematics Subject Classification (2010)

  • 60G40
  • 91B06

JEL Classification

  • C61
  • D81
  • D90
  • G02