
The optimal stopping problem revisited

Regular Article · Statistical Papers

Abstract

We consider an optimal stopping time problem related to many models found in real options problems. We present analytical solutions for a broad class of gain functions, under quite general assumptions on the model. In addition, an extensive and general sensitivity analysis is provided.


Figures 1–5 appear in the full article.

References

  • Alvarez LH (1999) Optimal exit and valuation under demand uncertainty: a real options approach. Eur J Oper Res 114(2):320–329

  • Arkin V (2015) Threshold strategies in optimal stopping problem for one-dimensional diffusion processes. Theory Probab Appl 59(2):311–319

  • Belomestny D, Rüschendorf L, Urusov MA (2010) Optimal stopping of integral functionals and a “no-loss” free boundary formulation. Theory Probab Appl 54(1):14–28

  • Bronstein AL, Hughston LP, Pistorius MR, Zervos M (2006) Discretionary stopping of one-dimensional Itô diffusions with a staircase reward function. J Appl Probab 43(4):984–996

  • Chevalier E, Vath VL, Roch A, Scotti S (2015) Optimal exit strategies for investment projects. J Math Anal Appl 425(2):666–694

  • Chronopoulos M, Hagspiel V, Fleten SE (2015) Stepwise investment and capacity sizing under uncertainty. OR Spectr 39(2):447–472

  • Dayanik S (2008) Optimal stopping of linear diffusions with random discounting. Math Oper Res 33(3):645–661

  • Dayanik S, Egami M (2012) Optimal stopping problems for asset management. Adv Appl Probab 44(3):655–677

  • Dayanik S, Karatzas I (2003) On the optimal stopping problem for one-dimensional diffusions. Stoch Process Appl 107(2):173–212

  • Décamps JP, Villeneuve S (2007) Optimal dividend policy and growth option. Financ Stoch 11(1):3–27

  • Dixit A (1989) Entry and exit decisions under uncertainty. J Polit Econ 97(3):620–638

  • Dixit A, Pindyck R (1994) Investment under uncertainty. Princeton University Press, Princeton

  • Filippov AF (2013) Differential equations with discontinuous righthand sides: control systems, vol 18. Springer Science & Business Media, Berlin

  • Guerra M, Nunes C, Oliveira C (2016) Exit option for a class of profit functions. Int J Comput Math 94(11):2178–2193

  • Hagspiel V, Huisman KJ, Kort PM, Nunes C (2016) How to escape a declining market: capacity investment or exit? Eur J Oper Res 254(1):40–50

  • Huisman KJ, Kort PM (2002) Strategic technology investment under uncertainty. OR Spectr 24(1):79–98

  • Johnson TC (2015) The solution of some discretionary stopping problems. IMA J Math Control Inf 34(3):717–744

  • Johnson TC, Zervos M (2007) The solution to a second order linear ordinary differential equation with a non-homogeneous term that is a measure. Stoch Int J Probab Stoch Process 79(3–4):363–382

  • Kensinger JW (1988) The capital investment project as a set of exchange options. Managerial Financ 14(2/3):16–27

  • Knudsen TS, Meister B, Zervos M (1998) Valuation of investments in real assets with implications for the stock prices. SIAM J Control Optim 36(6):2082–2102

  • Kort PM (1998) Optimal R&D investments of the firm. OR Spectr 20(3):155–164

  • Kulatilaka N, Trigeorgis L (2004) The general flexibility to switch: real options revisited. In: Real options and investment under uncertainty: classical readings and recent contributions, 1st edn. MIT Press, Cambridge, pp 179–198

  • Lamberton D, Zervos M (2013) On the optimal stopping of a one-dimensional diffusion. Electron J Probab 18(34):1–49

  • McDonald R, Siegel D (1985) Investment and the valuation of firms when there is an option to shut down. Int Econ Rev 26:331–349

  • Peskir G, Shiryaev A (2006) Optimal stopping and free-boundary problems. Lectures in Mathematics ETH Zürich. Birkhäuser Verlag, Basel

  • Revuz D, Yor M (2013) Continuous martingales and Brownian motion, vol 293. Springer Science & Business Media, Berlin

  • Rüschendorf L, Urusov MA (2008) On a class of optimal stopping problems for diffusions with discontinuous coefficients. Ann Appl Probab 18(3):847–878

  • Schwartz ES, Trigeorgis L (2004) Real options and investment under uncertainty: classical readings and recent contributions. MIT Press, Cambridge

  • Stokey NL (2016) Wait-and-see: investment options under policy uncertainty. Rev Econ Dyn 21:246–265

  • Trigeorgis L (1996) Real options: managerial flexibility and strategy in resource allocation. MIT Press, Cambridge

  • Villeneuve S (2007) On threshold strategies and the smooth-fit principle for optimal stopping problems. J Appl Probab 44(1):181–198


Author information

Correspondence to Carlos Oliveira.

Additional information


Manuel Guerra was partially supported by the Project CEMAPRE - UID/MULTI/00491/2013 financed by FCT/MEC through national funds. Cláudia Nunes was partially supported by the Project CEMAT - UID/MULTI/04621/2013 financed by FCT/MEC through national funds. Cláudia Nunes gratefully acknowledges the financial support of Fundação para a Ciência e Tecnologia (FCT, Portugal) through the research project PTDC/EGE-ECO/30535/2017 (Finance Analytics Using Robust Statistics). Carlos Oliveira was supported by the Fundação para a Ciência e Tecnologia (FCT) under Grant SFRH/BD/102186/2014.

Appendices

A Proofs

1.1 A.1 Preliminaries

We start by stating an auxiliary result that will be useful to prove the remaining results in the paper.

Proposition 6

Let X be defined as in (2) and \(g:]0,\infty [\rightarrow [0,+\infty ]\) be a Borel measurable function, such that \(\int _0^{\infty }g(x)dx>0\) (i.e., g is not almost everywhere zero). If \(d_1=d_2\) or \(d_1=\overline{d_2}\in \mathbb {C}{\setminus }\mathbb {R}\), then

$$\begin{aligned} E_x\left[ \int _0^{+\infty }e^{-rt}g(X(t))dt\right] =\infty . \end{aligned}$$

Proof

Using Fubini’s Theorem and equalities in (8), we have:

$$\begin{aligned} E_x\left[ \int _0^{+\infty }e^{-rt}g(X(t))dt\right] =&\int _0^{+\infty }e^{-rt}E_x\left[ g\left( X(t)\right) \right] dt\\ =&\int _0^{+\infty }e^{\frac{\sigma ^2}{2}d_1d_2t}\int _{-\infty }^{+\infty }g\left( xe^{-\frac{\sigma ^2}{2}\left( d_1+d_2\right) t+\sigma w}\right) \\&\quad \times \frac{e^{-\frac{w^2}{2t}}}{\sqrt{2\pi t}}dwdt. \end{aligned}$$

Using the change of variable \(w=\frac{1}{\sigma }\log \frac{y}{x}+\sigma \frac{d_1+d_2}{2}t\), it is possible to obtain

$$\begin{aligned} E_x\left[ \int _0^{+\infty }e^{-rt}g(X(t))dt\right]&=\int _0^{+\infty }\frac{1}{\sqrt{2\pi t}}e^{-\frac{\sigma ^2}{2}\left( \frac{d_2-d_1}{2}\right) ^2t}\nonumber \\&\quad \times \int _{0}^{+\infty }\frac{g\left( y\right) }{\sigma y}e^{-\frac{1}{2t\sigma ^2}(\log \frac{y}{x})^2-\frac{d_1+d_2}{2}\log \frac{y}{x}}dydt. \end{aligned}$$
(29)

Since, for \(t\ge 1\), we have \(e^{-\frac{1}{2t\sigma ^2}\left( \log \frac{y}{x}\right) ^2}\ge e^{-\frac{1}{2\sigma ^2}\left( \log \frac{y}{x}\right) ^2}\), it follows that

$$\begin{aligned} E_x\left[ \int _0^{+\infty }e^{-rt}g(X(t))dt\right] \ge A\int _1^{+\infty }\frac{1}{\sqrt{2\pi t}}e^{-\frac{\sigma ^2}{2}\left( \frac{d_1 - d_2}{2}\right) ^2t}dt, \end{aligned}$$

where \(A=\int _0^{+\infty }\frac{g(y)}{\sigma y}e^{-\frac{1}{2\sigma ^2}\left( \log \frac{y}{x}\right) ^2-\frac{d_1+d_2}{2}\log \left( \frac{y}{x}\right) }dy>0\). This proves the proposition: \(d_1=d_2\) or \(d_1=\overline{d_2} \) means that \((d_1-d_2)^2 \le 0\), so the exponent above is non-negative and therefore \(\int _1^{+\infty }\frac{1}{\sqrt{2\pi t}}e^{-\frac{\sigma ^2}{2}\left( \frac{d_1 - d_2}{2}\right) ^2t}dt = + \infty \). \(\square \)
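The parametrization (8) is not reproduced in this excerpt; the proofs above only use the relations \(d_1d_2=-2r/\sigma ^2\) and \(d_1+d_2=(\sigma ^2-2\alpha )/\sigma ^2\). A minimal numerical sketch of the two regimes in Proposition 6, assuming these relations and arbitrary parameter values:

```python
import numpy as np

# d_1, d_2 are assumed to be the roots of d^2 - S*d + P = 0 with
# S = d_1 + d_2 = (sigma^2 - 2*alpha)/sigma^2 and
# P = d_1 * d_2 = -2*r/sigma^2, the relations used in the proof above.
def roots_d(r, alpha, sigma):
    S = (sigma**2 - 2*alpha) / sigma**2
    P = -2*r / sigma**2
    return np.roots([1.0, -S, P])

# Real case: a positive discount rate gives two real roots of opposite signs.
d = roots_d(r=0.05, alpha=0.02, sigma=0.3)
assert np.all(np.isreal(d)) and d.min() < 0 < d.max()

# Conjugate case d_1 = conj(d_2): requires r below a negative threshold.
dc = roots_d(r=-0.01, alpha=0.1, sigma=0.5)
sq = (dc[0] - dc[1])**2   # (d_1 - d_2)^2 is then a non-positive real number
assert np.iscomplex(dc).all() and sq.real < 0
```

In the conjugate case the exponent \(-\frac{\sigma ^2}{2}\left( \frac{d_1-d_2}{2}\right) ^2t\) is non-negative, so the \(t\)-integral diverges, as claimed.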

1.2 A.2 Proof of Proposition 4

Fix a point \(a \in ]0,+\infty [\) and consider the initial value problem defined by (11) and (14). Using the change of variable \(y=\ln \left( \frac{x}{a} \right) \) and \(u(y)=v(a e^y)\), the solution of this problem can be obtained from the solution of the equation

$$\begin{aligned} -\tilde{{\mathcal {L}}}u(y)-\varPi (a e^y)=0, \end{aligned}$$
(30)

with initial conditions

$$\begin{aligned} u(0) = {\hat{v}}_1, \qquad u'(0) = a {\hat{v}}_2 . \end{aligned}$$

Here \(\tilde{ {\mathcal {L}}} u(y)\) is defined as in Eq. (6). Defining the vector \(w(y)=(u(y), u'(y))^T \), where the superscript \(^T\) denotes the transpose, we may represent the ODE (30) as \(w'(y)=Aw(y)+b(y)\), where \(b:]0,\infty [\rightarrow \mathbb {R}^{2}\) is a vector function and A is a constant \(2 \times 2\) matrix, defined as follows:

$$\begin{aligned} b(y)= \left( 0, -\frac{2}{\sigma ^2}\varPi (ae^y)\right) ^T; \quad A=\left( \! \begin{array}{cc} 0 &{} 1 \\ \frac{2r}{\sigma ^2}&{}\frac{\sigma ^2-2\alpha }{\sigma ^2} \end{array} \!\right) =\left( \! \begin{array}{cc} 0 &{} 1 \\ -d_1d_2&{}d_1+d_2 \end{array} \!\right) . \end{aligned}$$

The last equality follows from the parametrization defined in (8). Furthermore, straightforward calculations lead to the fundamental matrix

$$\begin{aligned} e^{yA}=\left( \! \begin{array}{cc} \displaystyle \frac{d_2e^{yd_1}-d_1e^{yd_2}}{d_2-d_1} &{}\displaystyle \frac{e^{yd_2}-e^{yd_1}}{d_2-d_1} \\ \displaystyle -d_1d_2\frac{e^{yd_2}-e^{yd_1}}{d_2-d_1}&{}\displaystyle \frac{d_2e^{yd_2}-d_1e^{yd_1}}{d_2-d_1} \end{array} \!\right) . \end{aligned}$$

The solution for this system is given by \(w(y)=e^{yA}w(0){+}\int _0^ye^{(y-s)A}b(s)ds\). Returning to the original variables, we get the expressions (16)–(17).
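A small numerical sanity check (not from the paper) of the closed-form fundamental matrix: for arbitrary real \(d_1\ne d_2\), the expression above should agree with the power series \(\sum _k (yA)^k/k!\).

```python
import numpy as np

# Arbitrary sample roots; A is the companion matrix from the proof above.
d1, d2 = -1.5, 2.0
A = np.array([[0.0, 1.0], [-d1*d2, d1 + d2]])

def expm_series(M, terms=60):
    # Truncated power series for the matrix exponential.
    out, term = np.eye(2), np.eye(2)
    for k in range(1, terms):
        term = term @ M / k
        out += term
    return out

def expm_closed(y):
    # The closed form e^{yA} displayed above.
    e1, e2 = np.exp(y*d1), np.exp(y*d2)
    return np.array([[d2*e1 - d1*e2, e2 - e1],
                     [-d1*d2*(e2 - e1), d2*e2 - d1*e1]]) / (d2 - d1)

y = 0.7
err = np.max(np.abs(expm_series(y*A) - expm_closed(y)))
assert err < 1e-10
```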

1.3 A.3 Proof of Proposition 2

Let v be a solution of the HJB equation. Then,

$$\begin{aligned} -\mathcal{L}v(x)=rv(x) - \alpha x v^\prime (x) - \frac{1}{2}\sigma ^2x^2 v^{\prime \prime }(x) \ge \varPi (x) \end{aligned}$$

for almost every \(x \in ]0,+\infty [\).

Fix \(t>0\), and let \(\tau \in {\mathcal {S}}\) be any stopping time. Since the function \(v'\) is continuous in \(]0,\infty [\), the process \(\int _0^{t\wedge \tau }e^{-rs}v'(X(s))dW(s)\) is a martingale (we set \(a \wedge b=\min (a,b)\)). Consequently:

$$\begin{aligned} E_x\left[ \int _0^{t\wedge \tau }e^{-rs}v'(X(s))dW(s)\right] =0, \quad \text {for all }x>0, \tau \in {\mathcal {S}}. \end{aligned}$$

Since v is continuously differentiable with absolutely continuous derivative, it can be written as the difference of two convex functions. Hence the Itô–Tanaka formula applies (see Revuz and Yor 2013, Theorem VI.1.5), and we obtain

$$\begin{aligned} 0&\le E_x \left[ e^{-r(t\wedge \tau )} v(X(t\wedge \tau )) \right] = v(x) + E_x \left[ \int _0^{t\wedge \tau } e^{-rs} {\mathcal {L}} v(X(s)) ds \right] \le \nonumber \\&\le v(x) - E_x \left[ \int _0^{t\wedge \tau } e^{-rs} \varPi (X(s)) ds \right] . \end{aligned}$$
(31)

The expressions \(\int _0^{t\wedge \tau } e^{-rs} \varPi ^+(X(s)) ds\), \(\int _0^{t\wedge \tau } e^{-rs} \varPi ^-(X(s)) ds\) are monotonically increasing with respect to t, and \(E_x \left[ \int _0^{t\wedge \tau } e^{-rs} \varPi ^+(X(s)) ds \right] \le E_x \left[ \int _0^{+\infty } e^{-rs} \varPi ^+(X(s)) ds \right] <\infty \). Therefore, the monotone convergence theorem guarantees that

$$\begin{aligned} \lim _{t \rightarrow +\infty } E_x \left[ \int _0^{t \wedge \tau } e^{-rs} \varPi (X(s)) ds \right] = E_x \left[ \int _0^\tau e^{-rs} \varPi (X(s)) ds \right] \in [-\infty , + \infty [ . \end{aligned}$$

Thus, the inequality (31) implies

$$\begin{aligned} v(x) \ge E_x \left[ \int _0^{\tau } e^{-rs} \varPi (X(s)) ds \right] =J(x,\tau ). \end{aligned}$$

Since this holds for arbitrary \(\tau \in {\mathcal {S}}\), it shows that \(v(x) \ge V(x)\).

1.4 A.4 Proof of Proposition 3

It can be checked that the argument in the proof of Proposition 2 holds in the present case and, therefore,

$$\begin{aligned} J(x,\tau ) \le v(x) \qquad \forall x \in ]a,b[, \ \tau \in {\mathcal {S}}_{[a,b]} . \end{aligned}$$

Let

$$\begin{aligned} \tau _0 = \inf \left\{ t\ge 0: v(X(t)) = 0 \right\} . \end{aligned}$$

It is clear that \(\tau _0 \in {\mathcal {S}}_{[a,b]}\). To finish the proof, we only need to show that \(J(x,\tau _0) = v(x)\) for every \(x \in ]a,b[\). By definition of \(\tau _0\), we have \(e^{-r(t\wedge \tau _0)}v(X(t\wedge \tau _0))=0\) whenever \(t\wedge \tau _0=\tau _0\); therefore:

$$\begin{aligned} E_x \left[ e^{-r(t \wedge \tau _0)} v(X(t \wedge \tau _0)) \right] = E_x \left[ e^{-rt} v(X(t)) {\mathcal {I}}_{\{ \tau _0>t\}} \right] \end{aligned}$$

Since \(v(X(s))>0\) whenever \(s < \tau _0\), the argument used in the proof of Proposition 2 shows that

$$\begin{aligned} E_x \left[ e^{-rt} v(X(t)) {\mathcal {I}}_{\{ \tau _0>t\}} \right] = v(x) - E_x \left[ \int _0^{t \wedge \tau _0} e^{-rs}\varPi (X(s)) ds \right] \end{aligned}$$

for every \(t \in ]0,+\infty [\), and \(\lim \limits _{t \rightarrow + \infty } E_x \left[ \int _0^{t \wedge \tau _0} e^{-rs}\varPi (X(s)) ds \right] = E_x \left[ \int _0^{\tau _0} e^{-rs}\varPi (X(s)) ds \right] \).

We extend the function v to the interval \(]0,+\infty [\) by setting \(v(x)=0\) for every \(x \in ]0,a] \cup [b,+\infty [\). Since the extended function is continuous and vanishes outside the compact interval [a, b], there is a constant \(K<+\infty \) such that

$$\begin{aligned} {v(x) \le K x^{\frac{d_1+d_2}{2}}, \qquad \forall x \in ]0,+\infty [ .} \end{aligned}$$

Therefore,

$$\begin{aligned} E_x \left[ e^{-rt} v(X(t)) {\mathcal {I}}_{\{ \tau _0>t\}} \right]&< K E_x \left[ e^{-rt} X(t)^{\frac{d_1+d_2}{2}} \right] \\&= K x^{\frac{d_1+d_2}{2}} E \left[ e^{-rt} e^{ \left( (\alpha - \frac{\sigma ^2}{2}) t + \sigma W_t \right) \frac{d_1+d_2}{2}} \right] \\&= K x^{\frac{d_1+d_2}{2}} E \left[ e^{\frac{\sigma ^2}{2} \left( d_1d_2 - \frac{(d_1+d_2)^2}{4} \right) t} e^{ -\frac{1}{2} \sigma ^2 \left( \frac{d_1+d_2}{2} \right) ^2 t + \sigma \frac{d_1+d_2}{2}W_t } \right] \\&= K x^{\frac{d_1+d_2}{2}} e^{-\frac{\sigma ^2}{2} \frac{(d_1-d_2)^2}{4} t} , \end{aligned}$$

where \(\{W_t:t\ge 0\}\) is a standard Brownian motion. This shows that \(\lim \limits _{t \rightarrow +\infty }E_x \left[ e^{-rt} v(X(t)) {\mathcal {I}}_{\{ \tau _0>t\}} \right] = 0\), and, therefore \(v(x) = E_x \left[ \int _0^{ \tau _0} e^{-rs}\varPi (X(s)) ds \right] \), which concludes the proof.
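The chain of equalities above rests on completing the square in the Gaussian exponent: with \(\theta =(d_1+d_2)/2\), one has \(-r+(\alpha -\frac{\sigma ^2}{2})\theta +\frac{\sigma ^2}{2}\theta ^2=\frac{\sigma ^2}{2}\left( d_1d_2-\frac{(d_1+d_2)^2}{4}\right) =-\frac{\sigma ^2}{8}(d_1-d_2)^2<0\). A quick numerical confirmation, assuming arbitrary parameter values and taking \(d_1,d_2\) as the roots of \(\frac{\sigma ^2}{2}d(d-1)+\alpha d-r=0\) (the form consistent with \(d_1d_2=-2r/\sigma ^2\) and \(d_1+d_2=(\sigma ^2-2\alpha )/\sigma ^2\) used in this appendix):

```python
import math

r, alpha, sigma = 0.05, 0.02, 0.3   # assumed sample values

# d_1, d_2: roots of (sigma^2/2)*d*(d-1) + alpha*d - r = 0
a2, a1, a0 = sigma**2/2, alpha - sigma**2/2, -r
disc = math.sqrt(a1*a1 - 4*a2*a0)
d1, d2 = (-a1 - disc)/(2*a2), (-a1 + disc)/(2*a2)

theta = (d1 + d2)/2
lhs = -r + (alpha - sigma**2/2)*theta + sigma**2*theta**2/2
rhs = sigma**2/2 * (d1*d2 - (d1 + d2)**2/4)
assert abs(lhs - rhs) < 1e-12 and rhs < 0   # strictly negative: decay in t
```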

1.5 A.5 Proof of Lemma 1

We prove assertions (i) and (iii) simultaneously.

If \(x_{1l}=0\), then \(\beta =0\), and Remark 3 implies that

$$\begin{aligned} \lim _{b \downarrow \beta } \inf \{ x>b : v_b(x)< 0 \} = x_{1r} < \gamma . \end{aligned}$$

Suppose now that \(x_{1l}>0\), and, therefore, \(\beta < x_{1l}\). For every \(b \in ]\beta , x_{1l}[\), let \(c(b) = \inf \{ x>b : v_b(x) < 0 \}\). By Remark 5, the function \(b \mapsto c(b)\) is monotonically decreasing in \(]\beta ,x_{1l}[\) and therefore, \(\lim \limits _{b \downarrow \beta } c(b)\) exists (possibly, infinite). If \(\lim \limits _{b \downarrow \beta } c(b) \le x_{2r}\), then Remark 6 states that \(\lim \limits _{b \downarrow \beta } c(b) < \gamma \). If \(\lim \limits _{b \downarrow \beta } c(b) > x_{2r}\), then Remark 3 implies that for every \(b \in ]\beta , x_{1l}[\), we have \(v_b(c(b)) = 0\), \(v_b'(c(b)) < 0\). Therefore, due to Remark 2, the inequality \(v_{c(b)} (b) < v_b(b) =0\) must hold. This shows that \(c(b) < \gamma \) and therefore, since b is arbitrary, \(\lim \limits _{b \downarrow \beta } c(b) \le \gamma \).

Now, suppose that \(\beta >0\). If \(\lim \nolimits _{b \downarrow \beta } c(b) = + \infty \), then the equality in (i) holds trivially. Thus, we assume that \(\lim \nolimits _{b \downarrow \beta } c(b) = c < + \infty \). Since \(\lim \nolimits _{b \downarrow \beta } v_b(x) = v_\beta (x)\) uniformly with respect to x on compact subintervals of \(]0,+\infty [\), it follows that \(0 = \lim \nolimits _{b \downarrow \beta } v_b(c(b)) = v_\beta (c)\). Since \(v_\beta \) is non-negative (Remark 6), it follows that \(v_\beta '(c) = 0\), and therefore \(v_\beta = v_c\). Since for every \({{\tilde{c}}} >c\) we have \(v_{{\tilde{c}}}(x) > v_c(x)\) for every \(x \in ]0,c]\) (Remark 5) and \(v_{{\tilde{c}}}(x) >0\) for every \(x \in [x_{2l},{\tilde{c}}[\) (Remark 3), it follows that \(\gamma =c\).

The proof of assertion (ii) is analogous.

1.6 A.6 Proof of Lemma 2

To prove assertion (i):

Using the equalities (18)–(19), a simple computation shows that condition (25) is equivalent to

$$\begin{aligned} v_b(c) = v_b^\prime (c) = 0, \end{aligned}$$

i.e., \(v_b \equiv v_c\). Therefore, Lemma 1 states that if \(\beta >0\) and \(\gamma < +\infty \), then \(\beta \), \(\gamma \) satisfy the conditions in (25).

Let \(b \in ]0, x_{1l}[\) and \(c\in ]x_{2r},+\infty [\) be constants satisfying the conditions in (25). Since \(v_b(c) =v_c(b)= 0\), Remark 5 implies that \(\beta \le b\) and \(\gamma \ge c\).

Suppose that \(\beta < b\). By Remarks 3 and 5, there must be some \(x>b\) such that \(v_b(x) < 0\). Due to Remark 3, x must lie in the interval \(]x_{1r},x_{2l}[\), but this case is excluded by Remark 4, and therefore \(\beta = b\). A similar argument shows that \(\gamma = c\), and the proof of assertion (i) is complete.
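For illustration (none of the following data come from the paper), the pair \((\beta ,\gamma )\) characterized by assertion (i) can be computed numerically. As in the proof above, the conditions in (25) are equivalent to \(v_b(c)=v_b'(c)=0\), which, by (32) and the derivative formula in A.6, reduce to \(\int _b^c s^{-d_1-1}\varPi (s)ds=\int _b^c s^{-d_2-1}\varPi (s)ds=0\). We take the sample profit function \(\varPi (s)=-(s-1)(s-3)\) and parameters \(\alpha =0.02\), \(\sigma =0.2\), \(r=0.18\), for which \(d_1=-3\) and \(d_2=3\):

```python
# Pi(s) = -(s-1)*(s-3): negative on ]0,1[, positive on ]1,3[, negative
# on ]3,+inf[.  With d_1 = -3, d_2 = 3, both integral conditions have
# closed-form antiderivatives, and (beta, gamma) is found by nested
# bisection.  All choices here are assumed sample data.

def H1(s):   # antiderivative of s^{-d_1-1} * Pi(s) = s^2 * Pi(s)
    return -s**5/5 + s**4 - s**3

def H2(s):   # antiderivative of s^{-d_2-1} * Pi(s) = s^{-4} * Pi(s)
    return 1/s - 2/s**2 + 1/s**3

def bisect(f, lo, hi, n=200):
    for _ in range(n):
        mid = (lo + hi)/2
        if f(lo)*f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi)/2

def b_of_c(c):
    # H2 is strictly decreasing on ]0,1[, so solve H2(b) = H2(c) there.
    return bisect(lambda b: H2(b) - H2(c), 1e-6, 1.0)

# Outer equation: H1(c) - H1(b(c)) = 0 for c in ]3,+inf[.
gamma = bisect(lambda c: H1(c) - H1(b_of_c(c)), 3.05, 4.5)
beta = b_of_c(gamma)

assert 0 < beta < 1 and 3 < gamma < 4
assert abs(H1(gamma) - H1(beta)) < 1e-8 and abs(H2(gamma) - H2(beta)) < 1e-8
```

With these data the solver returns \(\beta \approx 0.75\) and \(\gamma \approx 3.6\), with \(0<\beta <x_{1l}=1\) and \(\gamma >x_{2r}=3\) as the lemma requires.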

To prove assertion (ii):

We can write equality (18) in the form:

$$\begin{aligned} v_a(x) = \frac{-2 x^{d_2}}{\sigma ^2 (d_2-d_1)} \int _a^x \left( 1 - \left( \frac{s}{x} \right) ^{d_2-d_1} \right) s^{-d_2-1} \varPi (s) ds . \end{aligned}$$
(32)

Breaking the interval \([a,+\infty [\) into intervals where \(\varPi \) does not change sign and using the Lebesgue monotone convergence theorem, we see that

$$\begin{aligned} \lim _{x \rightarrow +\infty } \int _a^x \left( 1 - \left( \frac{s}{x} \right) ^{d_2-d_1} \right) s^{-d_2-1} \varPi (s) ds = \int _a^{+\infty } s^{-d_2-1} \varPi (s) ds \end{aligned}$$
(33)

for every \(a \in ]0,+\infty [\).
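For a concrete sense of the limit (33), consider again the assumed sample data \(\varPi (s)=-(s-1)(s-3)\) with \(d_1=-3\), \(d_2=3\) (so \(d_2-d_1=6\)); both sides then have closed forms, and the gap decays like 1/x:

```python
# int_a^x (1 - (s/x)^6) s^{-4} Pi(s) ds  ->  int_a^infty s^{-4} Pi(s) ds
# as x -> +infty, for the sample Pi(s) = -(s-1)*(s-3).

def H1(s):   # antiderivative of s^2 * Pi(s)
    return -s**5/5 + s**4 - s**3

def H2(s):   # antiderivative of s^{-4} * Pi(s)
    return 1/s - 2/s**2 + 1/s**3

def lhs(a, x):
    # (1 - (s/x)^6) s^{-4} Pi(s) = s^{-4} Pi(s) - x^{-6} s^2 Pi(s)
    return (H2(x) - H2(a)) - x**(-6) * (H1(x) - H1(a))

a = 0.5
limit = -H2(a)          # H2(s) -> 0 as s -> +infty
gaps = [abs(lhs(a, x) - limit) for x in (10.0, 100.0, 1000.0)]
assert gaps[0] > gaps[1] > gaps[2] and gaps[2] < 0.01
```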

Suppose that \(\beta >0\) and \(\gamma = + \infty \). Since \(v_\beta \) is nonnegative, the equalities (32), (33) imply that \(\int _\beta ^{+\infty } s^{-d_2-1}\varPi (s) ds \le 0\). If \(\int _\beta ^{+\infty } s^{-d_2-1}\varPi (s) ds < 0\), then there are constants \(\varepsilon >0\), \(c< +\infty \) such that \(v_b(x) >0\) for every \(x>c\), \(b< \beta + \varepsilon \). However, Lemma 1 states that \(\lim \limits _{b \downarrow \beta } \inf \{ x>b: v_b(x) < 0 \} = +\infty \). This shows that \(\int _\beta ^{+\infty } s^{-d_2-1}\varPi (s) ds = 0\).

For each \(b > \beta \), let \(c(b) = \inf \{ x> b : v_b(x) < 0 \} \). By Remark 5, we have

$$\begin{aligned} 0&= v_b(b) > v_{c(b)}(b) =\frac{-2 b^{d_1}}{\sigma ^2 (d_2-d_1)} \left( \int _b^{c(b)} s^{-d_1-1} \varPi (s) ds - b^{d_2-d_1}\right. \\&\quad \left. \int _b^{c(b)} s^{-d_2-1} \varPi (s) ds \right) . \end{aligned}$$

Lemma 1 states that \(\lim \limits _{b \downarrow \beta } c(b) = \gamma = + \infty \). Hence,

$$\begin{aligned} \int _\beta ^{+ \infty } s^{-d_1-1} \varPi (s) ds \ge \beta ^{d_2-d_1} \int _\beta ^{+ \infty } s^{-d_2-1} \varPi (s) ds = 0 . \end{aligned}$$

Now, let \(b \in ]0, x_{1l}[\) be a constant satisfying (21). Since for every sufficiently small \(\varepsilon >0\), we have

$$\begin{aligned} \int _{b+\varepsilon }^{+\infty } s^{-d_2-1}\varPi (s) ds >0 , \end{aligned}$$

the equalities (32) and (33) show that \(v_{ b+\varepsilon }(x) <0\) for every sufficiently large x. Thus, \(\beta \le b\). The equalities (32) and (33) also show that

$$\begin{aligned} \lim \limits _{x \rightarrow + \infty } \frac{v_b(x)}{x^{d_2}} = 0 . \end{aligned}$$
(34)

Therefore, if there is some \(x \in ]b,+\infty [ \) such that \(v_b(x) <0\), the function \(x \mapsto \frac{v_b(x)}{x^{d_2}}\) must have a global minimizer in \(]b,+\infty [\). Using equalities (18)–(19), we see that

$$\begin{aligned} \frac{d}{dx} \left( \frac{v_b(x)}{x^{d_2}} \right) = \frac{-2}{\sigma ^2 x^{d_2-d_1+1}} \int _b^x s^{-d_1-1} \varPi (s) ds . \end{aligned}$$

Thus, Assumption 1 implies that if c is a minimizer of \( \frac{v_b(x)}{x^{d_2}} \), then \(c > x_{2r}\) and \(\int _b^c s^{-d_1-1}\varPi (s) ds =0\). However, this implies that \(\int _b^{+\infty } s^{-d_1-1} \varPi (s) ds < 0\), contradicting (21). Hence, \(v_b\) must be nonnegative and therefore \(\beta = b\).
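The cancellation behind the derivative formula above can be checked by finite differences on assumed sample data (\(\varPi (s)=-(s-1)(s-3)\), \(d_1=-3\), \(d_2=3\), \(\sigma =0.2\), \(b=0.5\)), writing \(v_b\) in the form (32):

```python
# Finite-difference check of d/dx ( v_b(x)/x^{d_2} )
# = -2/(sigma^2 * x^{d_2-d_1+1}) * int_b^x s^{-d_1-1} Pi(s) ds,
# for the sample Pi(s) = -(s-1)*(s-3) with d_1 = -3, d_2 = 3.

sigma2 = 0.04                                     # sigma^2, assumed
def H1(s): return -s**5/5 + s**4 - s**3           # antiderivative of s^2*Pi
def H2(s): return 1/s - 2/s**2 + 1/s**3           # antiderivative of s^{-4}*Pi

b = 0.5
def phi(x):          # v_b(x) / x^{d_2}, from (32)
    return -2/(6*sigma2) * ((H2(x) - H2(b)) - x**(-6)*(H1(x) - H1(b)))

def phi_prime(x):    # the claimed closed form, d_2 - d_1 + 1 = 7
    return -2/(sigma2 * x**7) * (H1(x) - H1(b))

x, h = 2.0, 1e-6
fd = (phi(x + h) - phi(x - h)) / (2*h)
assert abs(fd - phi_prime(x)) < 1e-5
```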

The proof of statement (iii) is analogous to the proof of statement (ii).

The statement (iv) is straightforward: by the statements (i) and (ii), \(\beta >0\) implies \(\int _0^{+\infty } s^{-d_2-1}\varPi (s) ds < 0 \), and, by the statements (i) and (iii), \(\gamma < + \infty \) implies \(\int _0^{+\infty } s^{-d_1-1}\varPi (s) ds < 0 \). Hence, \(\beta = 0\) and \(\gamma = +\infty \) imply (28). Conversely, if (28) holds, then none of the equalities in conditions (25), (21) or (23) may hold.

1.7 A.7 Proof of Theorem 1

Suppose that there are constants \(b \in ]0, x_{1l}[\), \(c \in ]x_{2r},+\infty [\) satisfying conditions in (25). By Lemma 2, \(\beta = b\), \(\gamma =c\), and the function V defined by (26) is a Carathéodory solution of the HJB equation (10). Therefore, Proposition 2 states that V is an upper bound for the value function (4).

For every \(n \in {\mathbb {N}}\), let

$$\begin{aligned} b_n= \beta + \frac{1}{n} , \qquad c_n = \inf \{ x >b_n: v_{b_n}(x) <0 \} . \end{aligned}$$

By Proposition 3, the function

$$\begin{aligned} V_n(x) = \left\{ \begin{array}{ll} v_{b_n}(x) , &{} \text {for } x \in [b_n,c_n]\\ 0 , &{} \text {for } x \in ]0,b_n] \cup [c_n, +\infty [ \end{array} \right. , \end{aligned}$$

coincides with the function \(V_{[b_n,c_n]}(x)= \sup \nolimits _{\tau \in {\mathcal {S}}_{[b_n,c_n]}} J(x,\tau ) \). Since \({\mathcal {S}}_{[b_n,c_n]} \subset {\mathcal {S}}\), \(V_n\) is a lower bound for the value function (4). By Lemma 1, \(\lim V_n =V\) and therefore the assertion (iii) of the theorem holds. Notice that the argument above does not require \(\gamma < +\infty \). Hence, it can be used verbatim to prove assertion (i). If we set

$$\begin{aligned} c_n = \gamma - \frac{1}{n}, \qquad b_n = \sup \{ x< c_n: v(x) <0 \} , \end{aligned}$$

then the argument above proves assertion (ii).

To prove assertion (iv), we note that if there are no constants b, c as in (i)–(iii), then Lemma 2 states that \(\beta = 0\) and \(\gamma = +\infty \). Suppose, without loss of generality, that \(x_{1l} >0\), and pick a sequence \({\tilde{a}}_n < x_{1l}\), converging to zero. Due to Equations (28), (32) and (33), for each \(n \in {\mathbb {N}}\) there is some \(c_n < +\infty \) such that \(v_{{\tilde{a}}_n}(x) < 0 \) for every \(x \in {]}c_n,+\infty [\). Hence, there is a sequence \(b_n\) such that \(\lim b_n=+\infty \) and \(v_{{\tilde{a}}_n}(b_n) < 0 \). If, on the one hand, \(v_{b_n}(x) >0\) for every \(x \in ]{\tilde{a}}_n,b_n[\), then we set \(a_n = \sup \{x<b_n : v_{b_n}(x) < 0 \} \le {\tilde{a}}_n\), and, consequently, \(v_{[a_n,b_n]}(x) = v_{b_n}(x) >0\) holds for every \(x \in ]a_n,b_n[\). If, on the other hand, there is some \(x \in ]{\tilde{a}}_n, b_n[\) such that \(v_{b_n}(x) \le 0\), then we set \(a_n={\tilde{a}}_n\). Combining the expressions in Equation (20) and the inequalities \(v_{[a_n,b_n]}(b_n)=0>v_{a_n}(b_n)\) and \(v_{[a_n,b_n]}(a_n)=0>v_{b_n}(a_n)\), it follows that \(v_{[a_n,b_n]}'(a_n) >0\) and \(v_{[a_n,b_n]}'(b_n) <0\). Therefore, Remark 2 implies that

$$\begin{aligned}&v_{[a_n,b_n]}(x)> v_{a_n}(x) \qquad \text {for every } x > a_n, \end{aligned}$$
(35)
$$\begin{aligned}&v_{[a_n,b_n]}(x)> v_{b_n}(x) \qquad \text {for every } x < b_n. \end{aligned}$$
(36)

If \(x_{2r}=+\infty \), then the inequality (36) implies that \(v_{[a_n,b_n]}(x)>0\) for every \(x \in [x_{2l},b_n[\) (if \(b_n>x_{2l}\)). If \(x_{2r}<+\infty \), then we can choose \(b_n>x_{2r}\) and Remark 3 states that \(v_{b_n}(x) >0\) for every \(x \in [x_{2l},b_n[\). In both cases, Remark 3 states that \(v_{a_n}(x) >0\) for every \(x \in ]a_n,x_{1r}]\). Hence, Remark 4 guarantees that \(v_{[a_n,b_n]}(x)>0\) for every \(x \in ]x_{1r},x_{2l}[\), and therefore \(v_{[a_n,b_n]}\) is strictly positive in \(]a_n,b_n[\).

Fix a sequence \(\left\{ [a_n,b_n] \right\} _{n \in {\mathbb {N}}}\) as above. Due to Remark 2, the sequence

$$\begin{aligned} v_n(x) = \left\{ \begin{array}{ll} v_{[a_n,b_n]}(x) , &{} \text {for } x \in [a_n,b_n] , \\ 0 , &{} \text {for } x \in ]0,a_n] \cup [b_n,+\infty [ \end{array} \right. \end{aligned}$$

is monotonically increasing. By Proposition 3, \(v_n(x) = \sup \nolimits _{\tau \in {\mathcal {S}}_{[a_n,b_n]}}J(x,\tau ) < v_p^+(x)\). Therefore, Assumption 2 implies that \(v_n(x)\) is a bounded monotonic sequence and hence it converges. Since solutions of Equation (11) depend continuously on boundary conditions, the convergence of \(v_n(x)\) and \(v'_n(x)\) is uniform on compact intervals. Therefore, \(v(x) = \lim \limits _{n \rightarrow \infty } v_n(x)\) is a positive Carathéodory solution of Equation (11) and therefore it is a solution of the HJB equation (10), which, by Proposition 2, is an upper bound for the value function. Since \(v_n(x) = \sup \nolimits _{\tau \in {\mathcal {S}}_{[a_n,b_n]}}J(x,\tau ) \le \sup \nolimits _{\tau \in {\mathcal {S}}}J(x,\tau )\), v must coincide with the value function.

To see that v coincides with the function defined in Equation (27), use Equation (16) and the boundary conditions \(v_{[a_n,b_n]}(a_n) = v_{[a_n,b_n]}(b_n) = 0\) to obtain

$$\begin{aligned} v_{[a_n,b_n]}'(a_n) = \frac{2}{\sigma ^2} \frac{\int _{a_n}^{b_n} (b_n^{d_2} s^{-d_2-1} - b_n^{d_1} s^{-d_1-1}) \varPi (s) ds}{a_n \left( \left( \frac{b_n}{a_n} \right) ^{d_2} - \left( \frac{b_n}{a_n} \right) ^{d_1} \right) } . \end{aligned}$$

Substituting in Equation (20) and rearranging, we obtain

$$\begin{aligned} v_{[a_n,b_n]}(x) =&\frac{\left( \frac{x}{a_n} \right) ^{d_2} - \left( \frac{x}{a_n} \right) ^{d_1}}{d_2-d_1} \frac{2}{\sigma ^2} \frac{\int _{a_n}^{b_n} (b_n^{d_2} s^{-d_2-1} - b_n^{d_1} s^{-d_1-1}) \varPi (s) ds}{\left( \frac{b_n}{a_n} \right) ^{d_2} - \left( \frac{b_n}{a_n} \right) ^{d_1} }\\&- \frac{2}{(d_2-d_1)\sigma ^2} \int _{a_n}^x \left( x^{d_2}s^{-d_2-1} - x^{d_1}s^{-d_1-1} \right) \varPi (s) ds \\ =&\frac{2}{(d_2-d_1) \sigma ^2} \frac{1- \left( \frac{x}{b_n} \right) ^{d_2-d_1}}{1- \left( \frac{a_n}{b_n} \right) ^{d_2-d_1}} \Bigg ( x^{d_1} \int _{a_n}^x \left( 1- \left( \frac{a_n}{s} \right) ^{d_2-d_1}\right) s^{-d_1-1} \varPi (s) ds \\&+ x^{d_2} \int _x^{b_n} \left( 1- \left( \frac{s}{b_n} \right) ^{d_2-d_1}\right) s^{-d_2-1} \varPi (s) ds \Bigg ), \end{aligned}$$

for every \(x \in [a_n,b_n]\). For every \(x \in ]0,+\infty [\) (fixed), the Lebesgue monotone convergence theorem shows that

$$\begin{aligned} \lim _{n \rightarrow \infty } \int _{a_n}^x \left( 1- \left( \frac{a_n}{s} \right) ^{d_2-d_1}\right) s^{-d_1-1} \varPi ^+(s) ds&= \int _0^x s^{-d_1-1} \varPi ^+(s) ds ,\\ \lim _{n \rightarrow \infty } \int _x^{b_n} \left( 1- \left( \frac{s}{b_n} \right) ^{d_2-d_1}\right) s^{-d_2-1} \varPi ^+(s) ds&= \int _x^{+\infty } s^{-d_2-1} \varPi ^+(s) ds , \end{aligned}$$

and the same holds for \(\varPi ^-\). Assumption 2 guarantees that the integrals with \(\varPi ^+\) are finite. Hence,

$$\begin{aligned}&\lim _{n \rightarrow \infty } v_{[a_n,b_n]}(x)\\&\quad = \frac{2}{(d_2-d_1) \sigma ^2} \left( x^{d_1} \int _0^x s^{-d_1-1} \varPi (s) ds + x^{d_2} \int _x^{+ \infty } s^{-d_2-1} \varPi (s) ds \right) , \end{aligned}$$

and the proof is complete.

1.8 A.8 Proof of Proposition 1

Proposition 6 states that if \(v_p^+(x)<+\infty \) for some x, then \(d_1<d_2\). Hence, the argument used to prove assertion (iv) of Theorem 1 can be easily adapted to prove Proposition 1.

1.9 A.9 Proof of Lemma 3

For each a, b, d such that \(0<a<b < +\infty \) and \(d \ne 0\), let

$$\begin{aligned} f_{a,b,d}(x) = \ln \frac{x}{a} - \frac{\ln b - \ln a}{b^d - a^d} \left( x^d - a^d \right) . \end{aligned}$$

Notice that \(f_{a,b,d}\) is continuous, and for \(d>0\):

$$\begin{aligned}&f_{a,b,d}(x) < 0 \qquad \text {for } x \in ]0,a[ \cup ]b,+\infty [, \\&f_{a,b,d}(x) > 0 \qquad \text {for } x \in ]a,b[ . \end{aligned}$$

For \(d<0\), the signs above are reversed.
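The sign pattern of \(f_{a,b,d}\) is easy to confirm numerically on sample values (all assumed):

```python
import math

def f(a, b, d, x):
    # f_{a,b,d}(x) = ln(x/a) - (ln b - ln a)/(b^d - a^d) * (x^d - a^d)
    return math.log(x/a) - (math.log(b) - math.log(a))/(b**d - a**d)*(x**d - a**d)

a, b = 1.0, 3.0
for d in (2.0, -2.0):
    inside  = [f(a, b, d, x) for x in (1.5, 2.0, 2.5)]         # x in ]a,b[
    outside = [f(a, b, d, x) for x in (0.5, 0.9, 3.5, 10.0)]   # x outside [a,b]
    if d > 0:
        assert all(v > 0 for v in inside) and all(v < 0 for v in outside)
    else:
        assert all(v < 0 for v in inside) and all(v > 0 for v in outside)
```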

Suppose that the assumptions of statement (i) hold.

Since \(\int _b^c s^{-d_2-1} \varPi (s) ds = 0\), we see that \(\int _b^c s^{-d_2-1} \varPi (s) \ln s \, ds = \int _b^c s^{-d_2-1} \varPi (s) \left( \ln s +C \right) \, ds\), for every constant \(C \in {\mathbb {R}}\). If \(c \le x_{2r}\), this implies

$$\begin{aligned} \int _b^c s^{-d_2-1} \varPi (s) \ln s \, ds = \int _b^c s^{-d_2-1} \varPi (s) \ln \frac{s}{x_{1l}} \, ds > 0 , \end{aligned}$$

since the integrand of the integral on the right-hand side is non-negative and differs from zero on a set of positive measure. If \(c > x_{2r}\), then

$$\begin{aligned}&\int _b^c s^{-d_2-1} \varPi (s) \ln s \, ds \\&\quad = \int _b^c s^{-d_2-1} \varPi (s) \left( f_{x_{1l},x_{2r},d_2-d_1} (s) + \frac{\ln x_{2r} - \ln x_{1l}}{x_{2r}^{d_2-d_1} - x_{1l}^{d_2-d_1}}s^{d_2-d_1} \right) \, ds\\&\quad = \int _b^c s^{-d_2-1} \varPi (s) f_{x_{1l},x_{2r},d_2-d_1} (s) \, ds + \frac{\ln x_{2r} - \ln x_{1l}}{x_{2r}^{d_2-d_1} - x_{1l}^{d_2-d_1}} \int _b^c s^{-d_1-1} \varPi (s) \, ds\\&\quad \ge \int _b^c s^{-d_2-1} \varPi (s) f_{x_{1l},x_{2r},d_2-d_1} (s) \, ds , \end{aligned}$$

and again the integrand in the last expression is a non-negative function which differs from zero on a set of positive measure.

The proof of statement (ii) is analogous.

1.10 A.10 Proof of Lemma 4

Let \(f_{x_1,x_2,d}\) be as in the proof of Lemma 3. By the same argument as there, we see that for any constant \(k \in ]0,+\infty [\):

$$\begin{aligned}&\int _b^c s^{-d_1-1} \varPi (s) \ln s \, ds + k^{d_2-d_1} \int _b^c s^{-d_2-1} \varPi (s) \ln s \, ds \\&\quad = \int _b^c s^{-d_1-1} \varPi (s) \left( f_{x_1,x_2,d_1-d_2}(s) + \frac{k^{d_2-d_1}}{s^{d_2-d_1}} f_{x_1,x_2,d_2-d_1}(s) \right) \, ds . \end{aligned}$$

A simple computation shows that for \(d>0\):

$$\begin{aligned}&f_{x_1,x_2,-d}(s) + \frac{k^d}{s^d} f_{x_1,x_2,d}(s) \\&\quad = \frac{1}{d s^d} \left( (s^d + k^d) \ln \frac{s^d}{x_1^d} - (x_2^d + k^d) \frac{\ln x_2^d - \ln x_1^d}{x_2^d - x_1^d}(s^d - x_ 1^d) \right) . \end{aligned}$$

This last expression is equal to zero if \(s=x_1\) or \(s= x_2\). It is a strictly concave function of \(s^d\) for \(s \in ]0,k[\), and strictly convex for \(s \in ]k,+\infty [\). It follows that

$$\begin{aligned}&f_{x_1,x_2,d_1-d_2}(s) + \frac{c^{d_2-d_1}}{s^{d_2-d_1}} f_{x_1,x_2,d_2-d_1}(s)>0 \qquad \text {for } s \in ]x_1,x_2[, \\&f_{x_1,x_2,d_1-d_2}(s) + \frac{c^{d_2-d_1}}{s^{d_2-d_1}} f_{x_1,x_2,d_2-d_1}(s)<0 \qquad \text {for } s \in [b,x_1[ \cup ]x_2,c], \\&f_{x_1,x_2,d_1-d_2}(s) + \frac{b^{d_2-d_1}}{s^{d_2-d_1}} f_{x_1,x_2,d_2-d_1}(s) <0 \qquad \text {for } s \in ]x_1,x_2[, \\&f_{x_1,x_2,d_1-d_2}(s) + \frac{b^{d_2-d_1}}{s^{d_2-d_1}} f_{x_1,x_2,d_2-d_1}(s) >0 \qquad \text {for } s \in [b,x_1[ \cup ]x_2,c], \end{aligned}$$

and the result follows.

\(\square \)
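A quick numerical spot-check of the four sign claims in the last display, on sample values \(x_1=1\), \(x_2=3\), \(b=0.5\), \(c=5\), \(d_2-d_1=6\) (all chosen arbitrarily with \(b<x_1<x_2<c\)):

```python
import math

def f(a, bb, d, x):
    # f_{a,bb,d}(x) as defined in the proof of Lemma 3
    return math.log(x/a) - (math.log(bb) - math.log(a))/(bb**d - a**d)*(x**d - a**d)

x1, x2, b, c, d = 1.0, 3.0, 0.5, 5.0, 6.0   # assumed sample values

def g(k, s):
    # f_{x1,x2,d_1-d_2}(s) + (k/s)^{d_2-d_1} f_{x1,x2,d_2-d_1}(s)
    return f(x1, x2, -d, s) + (k/s)**d * f(x1, x2, d, s)

inner, outer = (1.5, 2.0, 2.5), (0.7, 0.9, 3.5, 4.5)
assert all(g(c, s) > 0 for s in inner) and all(g(c, s) < 0 for s in outer)
assert all(g(b, s) < 0 for s in inner) and all(g(b, s) > 0 for s in outer)
```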

B Auxiliary results for the sensitivity analysis

In this section we present some trivial calculations in order to obtain the monotonicity of the thresholds \(\beta \) and \(\gamma \) in Sects. 4.1, 4.2 and 4.3.

In Sect. 4.1, we note that the threshold \(\beta \) is the unique solution of equation (21). Excluding singular cases, we assume that \(\int _\beta ^{+\infty } s^{-d_1-1} \varPi (s) ds >0\), and, therefore, in view of the Implicit Function Theorem and the chain rule, it follows that:

$$\begin{aligned} \frac{\partial \beta }{\partial \alpha } =&\frac{\partial \beta }{\partial d_2} \frac{\partial d_2}{\partial \alpha } = \frac{\partial \beta }{\partial d_2} \frac{-d_2}{\sqrt{\left( \frac{\sigma ^2}{2} - \alpha \right) ^2 + 2 \sigma ^2 r}},\\ \frac{\partial \beta }{\partial \sigma ^2} =&\frac{\partial \beta }{\partial d_2} \frac{\partial d_2}{\partial \sigma ^2} = \frac{\partial \beta }{\partial d_2} \frac{\alpha d_2-r}{\sigma ^2 \sqrt{\left( \frac{\sigma ^2}{2} - \alpha \right) ^2 + 2 \sigma ^2 r}},\\ \frac{\partial \beta }{\partial r} =&\frac{\partial \beta }{\partial d_2} \frac{\partial d_2}{\partial r} = \frac{\partial \beta }{\partial d_2} \frac{1}{\sqrt{\left( \frac{\sigma ^2}{2} - \alpha \right) ^2 + 2 \sigma ^2 r}}, \end{aligned}$$

where \(\frac{\partial \beta }{\partial d_2} = - \frac{\int _\beta ^{+\infty } s^{-d_2-1} \varPi (s) \ln s \, ds}{\beta ^{-d_2-1} \varPi (\beta )} \). By Lemma 3, this expression is strictly positive, and therefore the derivatives of \(\beta \) with respect to the parameters have the signs:

$$\begin{aligned} \mathrm {sg}\left( \frac{\partial \beta }{\partial \alpha } \right) = - \mathrm {sg}(d_2), \qquad \mathrm {sg}\left( \frac{\partial \beta }{\partial \sigma ^2} \right) = \mathrm {sg}(\alpha d_2 - r), \qquad \frac{\partial \beta }{\partial r} >0. \end{aligned}$$
(37)
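The expression for \(\frac{\partial \beta }{\partial d_2}\) above is the Implicit Function Theorem applied to equation (21). As a sketch of that step, suppose (21) can be written as \(F(\beta ,d_2)=0\) with \(F(x,d) = \int _x^{+\infty } s^{-d-1} \varPi (s) \, ds\) (a reconstruction consistent with the surrounding formulas, not quoted from Sect. 4.1); then

$$\begin{aligned} \frac{\partial \beta }{\partial d_2} = - \frac{\partial F/\partial d}{\partial F/\partial x} \bigg |_{(\beta ,d_2)} = - \frac{-\int _\beta ^{+\infty } s^{-d_2-1} \varPi (s) \ln s \, ds}{-\beta ^{-d_2-1} \varPi (\beta )} = - \frac{\int _\beta ^{+\infty } s^{-d_2-1} \varPi (s) \ln s \, ds}{\beta ^{-d_2-1} \varPi (\beta )}, \end{aligned}$$

which is the expression used above. The analogous computation for \(\gamma \) in Sect. 4.2 runs the same way with the integral taken over \(]0,\gamma [\).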

In Sect. 4.2, we see that the threshold \(\gamma \) is the unique solution of equation (23). Excluding singular cases, we assume that \(\int _0^\gamma s^{-d_2-1} \varPi (s) ds >0\). Therefore, the argument above shows that

$$\begin{aligned} \frac{\partial \gamma }{\partial \alpha } =&\frac{\int _0^\gamma s^{-d_1-1} \varPi (s) \ln s \, ds}{\gamma ^{-d_1-1} \varPi (\gamma )} \frac{d_1}{\sqrt{\left( \frac{\sigma ^2}{2} - \alpha \right) ^2 + 2 \sigma ^2 r}} ;\\ \frac{\partial \gamma }{\partial \sigma ^2} =&\frac{\int _0^\gamma s^{-d_1-1} \varPi (s) \ln s \, ds}{\gamma ^{-d_1-1} \varPi (\gamma )} \frac{r - \alpha d_1}{\sigma ^2 \sqrt{\left( \frac{\sigma ^2}{2} - \alpha \right) ^2 + 2 \sigma ^2 r}} ;\\ \frac{\partial \gamma }{\partial r} =&\frac{\int _0^\gamma s^{-d_1-1} \varPi (s) \ln s \, ds}{\gamma ^{-d_1-1} \varPi (\gamma )} \frac{-1}{\sqrt{\left( \frac{\sigma ^2}{2} - \alpha \right) ^2 + 2 \sigma ^2 r}} ; \end{aligned}$$

and the derivatives of the threshold \(\gamma \) with respect to the parameters have the signs:

$$\begin{aligned} \mathrm {sg}\left( \frac{\partial \gamma }{\partial \alpha } \right) = \mathrm {sg}(d_1), \qquad \mathrm {sg}\left( \frac{\partial \gamma }{\partial \sigma ^2} \right) = \mathrm {sg}(r - \alpha d_1), \qquad \frac{\partial \gamma }{\partial r} <0. \end{aligned}$$
(38)

Finally, in Sect. 4.3 the thresholds \(\beta \in ]0,x_{1l}[\), \(\gamma \in ]x_{2r},+\infty [\) are the unique solutions of Eq. (25), and, therefore, we can use the Implicit Function Theorem and the chain rule to derive:

$$\begin{aligned} \frac{\partial \beta }{\partial r}&= \frac{-\beta ^{d_2+1} \left( I_1 + \gamma ^{d_2-d_1} I_2 \right) }{\left( \gamma ^{d_2-d_1} - \beta ^{d_2-d_1} \right) \varPi (\beta ) \sqrt{\left( \frac{\sigma ^2}{2} - \alpha \right) ^2 + 2 \sigma ^2 r}};\\ \frac{\partial \gamma }{\partial r}&= \frac{-\gamma ^{d_2+1} \left( I_1 + \beta ^{d_2-d_1} I_2 \right) }{\left( \gamma ^{d_2-d_1} - \beta ^{d_2-d_1} \right) \varPi (\gamma ) \sqrt{\left( \frac{\sigma ^2}{2} - \alpha \right) ^2 + 2 \sigma ^2 r}}; \\ \frac{\partial \beta }{\partial \alpha }&= \frac{\beta ^{d_2+1} \left( I_1 d_1 + \gamma ^{d_2-d_1} I_2 d_2 \right) }{\left( \gamma ^{d_2-d_1} - \beta ^{d_2-d_1} \right) \varPi (\beta ) \sqrt{\left( \frac{\sigma ^2}{2} - \alpha \right) ^2 + 2 \sigma ^2 r}};\\ \frac{\partial \gamma }{\partial \alpha }&= \frac{\gamma ^{d_2+1} \left( I_1 d_1 + \beta ^{d_2-d_1} I_2 d_2 \right) }{\left( \gamma ^{d_2-d_1} - \beta ^{d_2-d_1} \right) \varPi (\gamma ) \sqrt{\left( \frac{\sigma ^2}{2} - \alpha \right) ^2 + 2 \sigma ^2 r}};\\ \frac{\partial \beta }{\partial \sigma ^2}&= \frac{\beta ^{d_2+1} \left( I_1 (r-\alpha d_1) + \gamma ^{d_2-d_1} I_2 (r - \alpha d_2 ) \right) }{\left( \gamma ^{d_2-d_1} - \beta ^{d_2-d_1} \right) \varPi (\beta ) \sqrt{\left( \frac{\sigma ^2}{2} - \alpha \right) ^2 + 2 \sigma ^2 r}}; \\ \frac{\partial \gamma }{\partial \sigma ^2}&= \frac{\gamma ^{d_2+1} \left( I_1 (r-\alpha d_1) + \beta ^{d_2-d_1} I_2 (r - \alpha d_2 ) \right) }{\left( \gamma ^{d_2-d_1} - \beta ^{d_2-d_1} \right) \varPi (\gamma ) \sqrt{\left( \frac{\sigma ^2}{2} - \alpha \right) ^2 + 2 \sigma ^2 r}}, \end{aligned}$$

where \( I_i = \int _\beta ^\gamma s^{-d_i-1} \varPi (s) \ln s \, ds\), \(i=1,2\). Thus, Lemma 4 shows that \(\frac{\partial \beta }{\partial r} >0\) and \(\frac{\partial \gamma }{\partial r} <0\), that is, an increase of the discount rate leads to an earlier exercise of the option.

Concerning the effect of changes in the parameters \(\alpha \), \(\sigma ^2\), notice that

$$\begin{aligned} \mathrm {sg}\left( \frac{\partial \beta }{\partial \alpha } \right)&= -\mathrm {sg}\left( I_1 d_1 + \gamma ^{d_2-d_1} I_2 d_2\right) , \\ \mathrm {sg}\left( \frac{\partial \beta }{\partial \sigma ^2} \right)&= -\mathrm {sg}\left( I_1 (r - \alpha d_1) + \gamma ^{d_2-d_1} I_2 (r - \alpha d_2) \right) ,\\ \mathrm {sg}\left( \frac{\partial \gamma }{\partial \alpha } \right)&= -\mathrm {sg}\left( I_1 d_1 + \beta ^{d_2-d_1} I_2 d_2\right) ,\\ \mathrm {sg}\left( \frac{\partial \gamma }{\partial \sigma ^2} \right)&= -\mathrm {sg}\left( I_1 (r - \alpha d_1) + \beta ^{d_2-d_1} I_2 (r - \alpha d_2) \right) . \end{aligned}$$
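For a concrete gain function these sign combinations can be evaluated by quadrature. The sketch below uses hypothetical inputs only (\(\varPi \equiv 1\), \(\beta =1.5\), \(\gamma =3\), \(d_1=0.5\), \(d_2=2\) are arbitrary choices, not values from the paper); with \(\beta >1\) both integrands are positive on \([\beta ,\gamma ]\), so \(I_1, I_2 > 0\) and the combination \(I_1 d_1 + \gamma ^{d_2-d_1} I_2 d_2\) is positive.

```python
import math

def midpoint_integral(f, a, b, n=4000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

def sg(x, tol=1e-12):
    """Sign function: -1, 0 or 1."""
    return 0 if abs(x) < tol else (1 if x > 0 else -1)

def d_beta_d_alpha_sign(d1, d2, beta, gamma, payoff):
    """sg(dbeta/dalpha) = -sg(I1*d1 + gamma^(d2-d1)*I2*d2), with
    I_i = integral over [beta, gamma] of s^(-d_i-1) * payoff(s) * ln(s)."""
    I = [midpoint_integral(lambda s, d=d: s**(-d - 1) * payoff(s) * math.log(s),
                           beta, gamma)
         for d in (d1, d2)]
    return -sg(I[0] * d1 + gamma**(d2 - d1) * I[1] * d2)

# Hypothetical inputs: payoff Π ≡ 1, β = 1.5, γ = 3, d1 = 0.5, d2 = 2.
# Both integrals are positive here, so the sign of ∂β/∂α is -1.
print(d_beta_d_alpha_sign(0.5, 2.0, 1.5, 3.0, lambda s: 1.0))  # → -1
```

The other three sign formulas follow the same pattern, swapping \(\gamma ^{d_2-d_1}\) for \(\beta ^{d_2-d_1}\) or the coefficients \(d_i\) for \(r-\alpha d_i\).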

Cite this article

Guerra, M., Nunes, C. & Oliveira, C. The optimal stopping problem revisited. Stat Papers 62, 137–169 (2021). https://doi.org/10.1007/s00362-019-01088-w
