Abstract
We consider an optimal stopping time problem related to many models found in real options problems. We present analytical solutions for a broad class of gain functions, under quite general assumptions on the model. An extensive and general sensitivity analysis is also provided.
References
Alvarez LH (1999) Optimal exit and valuation under demand uncertainty: a real options approach. Eur J Oper Res 114(2):320–329
Arkin V (2015) Threshold strategies in optimal stopping problem for one-dimensional diffusion processes. Theory Probab Appl 59(2):311–319
Belomestny D, Rüschendorf L, Urusov MA (2010) Optimal stopping of integral functionals and a “no-loss” free boundary formulation. Theory Probab Appl 54(1):14–28
Bronstein AL, Hughston LP, Pistorius MR, Zervos M (2006) Discretionary stopping of one-dimensional Itô diffusions with a staircase reward function. J Appl Probab 43(4):984–996
Chevalier E, Vath VL, Roch A, Scotti S (2015) Optimal exit strategies for investment projects. J Math Anal Appl 425(2):666–694
Chronopoulos M, Hagspiel V, Fleten SE (2015) Stepwise investment and capacity sizing under uncertainty. OR Spectr 39(2):447–472
Dayanik S (2008) Optimal stopping of linear diffusions with random discounting. Math Oper Res 33(3):645–661
Dayanik S, Egami M (2012) Optimal stopping problems for asset management. Adv Appl Probab 44(3):655–677
Dayanik S, Karatzas I (2003) On the optimal stopping problem for one-dimensional diffusions. Stoch Process Appl 107(2):173–212
Décamps JP, Villeneuve S (2007) Optimal dividend policy and growth option. Financ Stoch 11(1):3–27
Dixit A (1989) Entry and exit decisions under uncertainty. J Polit Econ 97(3):620–638
Dixit A, Pindyck R (1994) Investment under uncertainty. Princeton University Press, Princeton
Filippov AF (2013) Differential equations with discontinuous righthand sides: control systems, vol 18. Springer Science & Business Media, Berlin
Guerra M, Nunes C, Oliveira C (2016) Exit option for a class of profit functions. Int J Comput Math 94(11):2178–2193
Hagspiel V, Huisman KJ, Kort PM, Nunes C (2016) How to escape a declining market: Capacity investment or exit? Eur J Oper Res 254(1):40–50
Huisman KJ, Kort PM (2002) Strategic technology investment under uncertainty. OR Spectr 24(1):79–98
Johnson TC (2015) The solution of some discretionary stopping problems. IMA J Math Control Inf 34(3):717–744
Johnson TC, Zervos M (2007) The solution to a second order linear ordinary differential equation with a non-homogeneous term that is a measure. Stoch Int J Probab Stoch Process 79(3–4):363–382
Kensinger JW (1988) The capital investment project as a set of exchange options. Managerial Financ 14(2/3):16–27
Knudsen TS, Meister B, Zervos M (1998) Valuation of investments in real assets with implications for the stock prices. SIAM J Control Optim 36(6):2082–2102
Kort PM (1998) Optimal R&D investments of the firm. OR Spectr 20(3):155–164
Kulatilaka N, Trigeorgis L (2004) The general flexibility to switch: real options revisited. Real options and investment under uncertainty: classical readings and recent contributions, 1st edn. MIT Press, Cambridge, pp 179–198
Lamberton D, Zervos M (2013) On the optimal stopping of a one-dimensional diffusion. Electron J Probab 18(34):1–49
McDonald R, Siegel D (1985) Investment and the valuation of firms when there is an option to shut down. Int Econ Rev 26:331–349
Peskir G, Shiryaev A (2006) Optimal stopping and free-boundary problems. Lectures in Mathematics ETH Zürich. Birkhäuser Verlag, Basel
Revuz D, Yor M (2013) Continuous martingales and Brownian motion, vol 293. Springer Science & Business Media, Berlin
Rüschendorf L, Urusov MA (2008) On a class of optimal stopping problems for diffusions with discontinuous coefficients. Ann Appl Probab 18(3):847–878
Schwartz ES, Trigeorgis L (2004) Real options and investment under uncertainty: classical readings and recent contributions. MIT Press, Cambridge
Stokey NL (2016) Wait-and-see: investment options under policy uncertainty. Rev Econ Dyn 21:246–265
Trigeorgis L (1996) Real options: managerial flexibility and strategy in resource allocation. MIT Press, Cambridge
Villeneuve S (2007) On threshold strategies and the smooth-fit principle for optimal stopping problems. J Appl Probab 44(1):181–198
Manuel Guerra was partially supported by the Project CEMAPRE - UID/MULTI/00491/2013 financed by FCT/MEC through national funds. Cláudia Nunes was partially supported by the Project CEMAT - UID/MULTI/04621/2013 financed by FCT/MEC through national funds. Cláudia Nunes gratefully acknowledges the financial support of Fundação para a Ciência e Tecnologia (FCT – Portugal), through the research project PTDC/EGE-ECO/30535/2017 (Finance Analytics Using Robust Statistics). Carlos Oliveira was supported by the Fundação para a Ciência e Tecnologia (FCT) under Grant SFRH/BD/102186/2014.
Appendices
A Proofs
1.1 A.1 Preliminaries
We start by stating an auxiliary result that will be useful in proving the remaining results in the paper.
Proposition 6
Let X be defined as in (2) and \(g:]0,\infty [\rightarrow [0,+\infty ]\) be a Borel measurable function, such that \(\int _0^{\infty }g(x)dx>0\) (i.e., g is not almost everywhere zero). If \(d_1=d_2\) or \(d_1=\overline{d_2}\in \mathbb {C}{\setminus }\mathbb {R}\), then
Proof
Using Fubini’s Theorem and equalities in (8), we have:
Using the change of variable \(w=\frac{1}{\sigma }\log \frac{y}{x}+\sigma \frac{d_1+d_2}{2}t\), it is possible to obtain
Additional calculations allow us to obtain:
where \(A=\int _0^{+\infty }\frac{g(y)}{\sigma y}e^{-\frac{1}{2\sigma ^2}\left( \log \frac{y}{x}\right) ^2-\frac{d_1+d_2}{2}\log \left( \frac{y}{x}\right) }dy>0\). This proves the proposition, since \(d_1=\overline{d_2} \) means that \((d_1-d_2)^2 \le 0\) and therefore \(\int _1^{+\infty }\frac{1}{\sqrt{2\pi t}}e^{-\frac{\sigma ^2}{2}( d_1 - d_2)^2 t}dt = + \infty \). \(\square \)
1.2 A.2 Proof of Proposition 4
Fix a point \(a \in ]0,+\infty [\), and consider the initial value problem defined by (11) and (14). Using the change of variable \(y=\ln \left( \frac{x}{a} \right) \) and \(u(y)=v(a e^y)\), the solution of the problem above can be obtained from the solution of the equation
with initial conditions
Here \(\tilde{ {\mathcal {L}}} u(y)\) is defined as in Eq. (6). Defining the vector \(w(y)=(u(y), u'(y))^T \), where the superscript \(^T\) denotes the transpose, we may represent the ODE (30) as \(w'(y)=Aw(y)+b(y)\), where \(b:]0,\infty [\rightarrow \mathbb {R}^{2}\) is a vector function and A is a constant \(2 \times 2\) matrix, defined as follows:
The last equality follows from the parametrization defined in (8). Furthermore, straightforward calculations lead to the fundamental matrix
The solution of this system is given by \(w(y)=e^{yA}w(0)+\int _0^ye^{(y-s)A}b(s)ds\). Returning to the original variables, we get the expressions (16)–(17).
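As a sanity check on the variation-of-constants formula, the following sketch evaluates \(w(y)=e^{yA}w(0)+\int _0^y e^{(y-s)A}b(s)ds\) numerically and compares it with a direct integration of \(w'(y)=Aw(y)+b(y)\). The matrix A, the forcing term b, and the initial condition are arbitrary illustrative choices, not the parametrization (8) used in the paper.

```python
import numpy as np

# Variation-of-constants solution of w' = A w + b(y), checked against
# a direct Runge-Kutta integration.  All data below are illustrative.
A = np.array([[0.0, 1.0], [2.0, -1.0]])    # constant 2x2 matrix
w0 = np.array([1.0, 0.0])                  # initial conditions (u(0), u'(0))
b = lambda s: np.array([0.0, np.exp(-s)])  # inhomogeneous term

vals, vecs = np.linalg.eig(A)              # A has distinct real eigenvalues
vecs_inv = np.linalg.inv(vecs)

def expA(t):
    """Matrix exponential e^{tA} via the eigendecomposition of A."""
    return (vecs @ np.diag(np.exp(vals * t)) @ vecs_inv).real

def w(y, n=4000):
    """w(y) = e^{yA} w(0) + int_0^y e^{(y-s)A} b(s) ds (trapezoid rule)."""
    s = np.linspace(0.0, y, n + 1)
    g = np.array([expA(y - si) @ b(si) for si in s])
    integral = (g[0] / 2 + g[1:-1].sum(axis=0) + g[-1] / 2) * (s[1] - s[0])
    return expA(y) @ w0 + integral

def rk4(y_end, steps=2000):
    """Fixed-step RK4 integration of w' = A w + b(y) for comparison."""
    f = lambda y, wv: A @ wv + b(y)
    h, wv, y = y_end / steps, w0.copy(), 0.0
    for _ in range(steps):
        k1 = f(y, wv); k2 = f(y + h / 2, wv + h / 2 * k1)
        k3 = f(y + h / 2, wv + h / 2 * k2); k4 = f(y + h, wv + h * k3)
        wv = wv + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4); y += h
    return wv

assert np.allclose(w(1.0), rk4(1.0), atol=1e-5)
```

The two routes agree to quadrature accuracy, which is exactly what the formula asserts: the homogeneous flow \(e^{yA}\) plus the convolution of the flow with the forcing term.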
1.3 A.3 Proof of Proposition 2
Let v be a solution of the HJB equation. Then,
for almost every \(x \in ]0,+\infty [\).
Fix \(t>0\), and let \(\tau \in {\mathcal {S}}\) be any stopping time. Since the function \(v'\) is continuous in \(]0,\infty [\), the process \(\int _0^{t\wedge \tau }e^{-rs}v'(X(s))dW(s)\) is a martingale (we set \(a \wedge b=\min (a,b)\)). Consequently:
Since v is continuously differentiable with absolutely continuous derivative, it can be written as the difference of two convex functions. Hence the Itô–Tanaka formula applies (see Revuz and Yor (2013), Theorem VI.1.5), and we obtain
The expressions \(\int _0^{t\wedge \tau } e^{-rs} \varPi ^+(X(s)) ds\), \(\int _0^{t\wedge \tau } e^{-rs} \varPi ^-(X(s)) ds\) are monotonically increasing with respect to t, and \(E_x \left[ \int _0^{t\wedge \tau } e^{-rs} \varPi ^+(X(s)) ds \right] \le E_x \left[ \int _0^{+\infty } e^{-rs} \varPi ^+(X(s)) ds \right] <\infty \). Therefore, the monotone convergence theorem guarantees that
Thus, the inequality (31) implies
Since this holds for arbitrary \(\tau \in {\mathcal {S}}\), it shows that \(v(x) \ge V(x)\).
1.4 A.4 Proof of Proposition 3
It can be checked that the argument in the proof of Proposition 2 holds in the present case and, therefore,
Let
It is clear that \(\tau _0 \in {\mathcal {S}}_{[a,b]}\). To finish the proof, we only need to show that \(J(x,\tau _0) = v(x)\) for every \(x \in ]a,b[\). By definition of \(\tau _0\), \(e^{-r(t\wedge \tau _0)}v(X(t\wedge \tau _0))=0\) whenever \(t\wedge \tau _0=\tau _0\); therefore:
Since \(v(X(s))>0\) whenever \(s < \tau _0\), the argument used in the proof of Proposition 2 shows that
for every \(t \in ]0,+\infty [\), and \(\lim \limits _{t \rightarrow + \infty } E_x \left[ \int _0^{t \wedge \tau _0} e^{-rs}\varPi (X(s)) ds \right] = E_x \left[ \int _0^{\tau _0} e^{-rs}\varPi (X(s)) ds \right] \).
We extend the function v to the interval \(]0,+\infty [\) by setting \(v(x)=0\) for every \(x \in ]0,a] \cup [b,+\infty [\). Thus, there is a constant \(K<+\infty \) such that
Therefore,
where \(\{W_t:t\ge 0\}\) is a standard Brownian motion. This shows that \(\lim \limits _{t \rightarrow +\infty }E_x \left[ e^{-rt} v(X(t)) {\mathcal {I}}_{\{ \tau _0>t\}} \right] = 0\), and, therefore \(v(x) = E_x \left[ \int _0^{ \tau _0} e^{-rs}\varPi (X(s)) ds \right] \), which concludes the proof.
1.5 A.5 Proof of Lemma 1
We prove assertions (i) and (iii) simultaneously.
If \(x_{1l}=0\), then \(\beta =0\), and Remark 3 implies that
Suppose now that \(x_{1l}>0\), and, therefore, \(\beta < x_{1l}\). For every \(b \in ]\beta , x_{1l}[\), let \(c(b) = \inf \{ x>b : v_b(x) < 0 \}\). By Remark 5, the function \(b \mapsto c(b)\) is monotonically decreasing in \(]\beta ,x_{1l}[\), and therefore \(\lim \limits _{b \downarrow \beta } c(b)\) exists (possibly infinite). If \(\lim \limits _{b \downarrow \beta } c(b) \le x_{2r}\), then Remark 6 states that \(\lim \limits _{b \downarrow \beta } c(b) < \gamma \). If \(\lim \limits _{b \downarrow \beta } c(b) > x_{2r}\), then Remark 3 implies that for every \(b \in ]\beta , x_{1l}[\), we have \(v_b(c(b)) = 0\), \(v_b'(c(b)) < 0\). Therefore, due to Remark 2, the inequality \(v_{c(b)} (b) < v_b(b) =0\) must hold. This shows that \(c(b) < \gamma \) and therefore, since b is arbitrary, \(\lim \limits _{b \downarrow \beta } c(b) \le \gamma \).
Now, suppose that \(\beta >0\). If \(\lim \nolimits _{b \downarrow \beta } c(b) = + \infty \), then the equality in (i) holds trivially. Thus, we assume that \(\lim \nolimits _{b \downarrow \beta } c(b) = c < + \infty \). Since \(\lim \nolimits _{b \downarrow \beta } v_b(x) = v_\beta (x)\) uniformly with respect to x on compact subintervals of \(]0,+\infty [\), it follows that \(0 = \lim \nolimits _{b \downarrow \beta } v_b(c(b)) = v_\beta (c)\). Since \(v_\beta \) is non-negative (Remark 6), it follows that \(v_\beta '(c) = 0\), and therefore \(v_\beta = v_c\). Since for every \({{\tilde{c}}} >c\) we have \(v_{{\tilde{c}}}(x) > v_c(x)\) for every \(x \in ]0,c]\) (Remark 5) and \(v_{{\tilde{c}}}(x) >0\) for every \(x \in [x_{2l},{\tilde{c}}[\) (Remark 3), it follows that \(\gamma =c\).
The proof of assertion (ii) is analogous.
1.6 A.6 Proof of Lemma 2
To prove assertion (i):
Using the equalities (18)–(19), a simple computation shows that condition (25) is equivalent to
i.e., \(v_b \equiv v_c\). Therefore, Lemma 1 states that if \(\beta >0\) and \(\gamma < +\infty \), then \(\beta \), \(\gamma \) satisfy the conditions in (25).
Let \(b \in ]0, x_{1l}[\) and \(c\in ]x_{2r},+\infty [\) be constants satisfying the conditions in (25). Since \(v_b(c) =v_c(b)= 0\), Remark 5 implies that \(\beta \le b\) and \(\gamma \ge c\).
Suppose that \(\beta < b\). By Remarks 3 and 5, there must be some \(x>b\) such that \(v_b(x) < 0\). Due to Remark 3, x must lie in the interval \(]x_{1r},x_{2l}[\), but this case is excluded by Remark 4, and therefore \(\beta = b\). A similar argument shows that \(\gamma = c\), and the proof of assertion (i) is complete.
To prove assertion (ii):
We can write equality (18) in the form:
Breaking the interval \([a,+\infty [\) into intervals where \(\varPi \) does not change sign and using the Lebesgue monotone convergence theorem, we see that
for every \(a \in ]0,+\infty [\).
Suppose that \(\beta >0\) and \(\gamma = + \infty \). Since \(v_\beta \) is nonnegative, the equalities (32), (33) imply that \(\int _\beta ^{+\infty } s^{-d_2-1}\varPi (s) ds \le 0\). If \(\int _\beta ^{+\infty } s^{-d_2-1}\varPi (s) ds < 0\), then there are constants \(\varepsilon >0\), \(c< +\infty \) such that \(v_b(x) >0\) for every \(x>c\), \(b< \beta + \varepsilon \). However, Lemma 1 states that \(\lim \limits _{b \downarrow \beta } \inf \{ x>b: v_b(x) < 0 \} = +\infty \). This shows that \(\int _\beta ^{+\infty } s^{-d_2-1}\varPi (s) ds = 0\).
For each \(b > \beta \), let \(c(b) = \inf \{ x> b : v_b(x) < 0 \} \). By Remark 5, we have
Lemma 1 states that \(\lim \limits _{b \downarrow \beta } c(b) = \gamma = + \infty \). Hence,
Now, let \(b \in ]0, x_{1l}[\) be a constant satisfying (21). Since for every sufficiently small \(\varepsilon >0\), we have
the equalities (32) and (33) show that \(v_{ b+\varepsilon }(x) <0\) for every sufficiently large x. Thus, \(\beta \le b\). The equalities (32) and (33) also show that
Therefore, if there is some \(x \in ]b,+\infty [ \) such that \(v_b(x) <0\), the function \(x \mapsto \frac{v_b(x)}{x^{d_2}}\) must have a global minimizer in \(]b,+\infty [\). Using equalities (18)–(19), we see that
Thus, Assumption 1 implies that if c is a minimizer of \( \frac{v_b(x)}{x^{d_2}} \), then \(c > x_{2r}\) and \(\int _b^c s^{-d_1-1}\varPi (s) ds =0\). However, this implies that \(\int _b^{+\infty } s^{-d_1-1} \varPi (s) ds < 0\). Hence, \(v_b\) must be nonnegative and therefore \(\beta = b\).
The proof of statement (iii) is analogous to the proof of statement (ii).
The statement (iv) is straightforward: by the statements (i) and (ii), \(\beta >0\) implies \(\int _0^{+\infty } s^{-d_2-1}\varPi (s) ds < 0 \), and, by the statements (i) and (iii), \(\gamma < + \infty \) implies \(\int _0^{+\infty } s^{-d_1-1}\varPi (s) ds < 0 \). Hence, \(\beta = 0\) and \(\gamma = +\infty \) imply (28). Conversely, if (28) holds, then none of the equalities in conditions (25), (21) or (23) may hold.
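To make condition (21) concrete, consider the hypothetical profit function \(\varPi (s)=s-K\) with \(K>0\) and \(d_2>1\) (these choices are illustrative and not taken from the paper). Then \(\int _b^{+\infty } s^{-d_2-1}\varPi (s) ds = \frac{b^{1-d_2}}{d_2-1} - \frac{K b^{-d_2}}{d_2}\), and setting this to zero gives the threshold \(\beta = K(d_2-1)/d_2\). The sketch below verifies the closed form and recovers the same root by bisection.

```python
# Illustrative check of the threshold condition (21),
#   int_beta^oo s^{-d2-1} Pi(s) ds = 0,
# for the hypothetical profit function Pi(s) = s - K.  For d2 > 1 the
# integral evaluates in closed form to
#   F(b) = b**(1-d2)/(d2-1) - K*b**(-d2)/d2,
# whose unique positive root is beta = K*(d2-1)/d2.
K, d2 = 2.0, 3.0

def F(b):
    return b ** (1 - d2) / (d2 - 1) - K * b ** (-d2) / d2

beta = K * (d2 - 1) / d2          # closed-form threshold: 4/3

# bisection on [1e-6, 100], where F changes sign exactly once
lo, hi = 1e-6, 100.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if F(lo) * F(mid) <= 0:
        hi = mid
    else:
        lo = mid

assert abs(F(beta)) < 1e-12
assert abs(0.5 * (lo + hi) - beta) < 1e-9
```

The same pattern (root-finding on the integral condition) applies to condition (23) for \(\gamma \), with the weight \(s^{-d_1-1}\) and the integral taken over \(]0,\gamma [\).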
1.7 A.7 Proof of Theorem 1
Suppose that there are constants \(b \in ]0, x_{1l}[\), \(c \in ]x_{2r},+\infty [\) satisfying conditions in (25). By Lemma 2, \(\beta = b\), \(\gamma =c\), and the function V defined by (26) is a Carathéodory solution of the HJB equation (10). Therefore, Proposition 2 states that V is an upper bound for the value function (4).
For every \(n \in {\mathbb {N}}\), let
By Proposition 3, the function
coincides with the function \(V_{[b_n,c_n]}(x)= \sup \nolimits _{\tau \in {\mathcal {S}}_{[b_n,c_n]}} J(x,\tau ) \). Since \({\mathcal {S}}_{[b_n,c_n]} \subset {\mathcal {S}}\), \(V_n\) is a lower bound for the value function (4). By Lemma 1, \(\lim V_n =V\) and therefore the assertion (iii) of the theorem holds. Notice that the argument above does not require \(\gamma < +\infty \). Hence, it can be used verbatim to prove assertion (i). If we set
then the argument above proves assertion (ii).
To prove assertion (iv), we note that if there are no constants b, c as in (i)–(iii), then Lemma 2 states that \(\beta = 0\) and \(\gamma = +\infty \). Suppose, without loss of generality, that \(x_{1l} >0\), and pick a sequence \({\tilde{a}}_n < x_{1l}\), converging to zero. Due to Equations (28), (32) and (33), for each \(n \in {\mathbb {N}}\) there is some \(c_n < +\infty \) such that \(v_{{\tilde{a}}_n}(x) < 0 \) for every \(x \in {]}c_n,+\infty [\). Hence, there is a sequence \(b_n\) such that \(\lim b_n=+\infty \) and \(v_{{\tilde{a}}_n}(b_n) < 0 \). If, on the one hand, \(v_{b_n}(x) >0\) for every \(x \in ]{\tilde{a}}_n,b_n[\), then we set \(a_n = \sup \{x<b_n : v_{b_n}(x) < 0 \} \le {\tilde{a}}_n\), and, consequently, \(v_{[a_n,b_n]}(x) = v_{b_n}(x) >0\) holds for every \(x \in ]a_n,b_n[\). If, on the other hand, there is some \(x \in ]{\tilde{a}}_n, b_n[\) such that \(v_{b_n}(x) \le 0\), then we set \(a_n={\tilde{a}}_n\). Combining the expressions in Equation (20) and the inequalities \(v_{[a_n,b_n]}(b_n)=0>v_{a_n}(b_n)\) and \(v_{[a_n,b_n]}(a_n)=0>v_{b_n}(a_n)\), it follows that \(v_{[a_n,b_n]}'(a_n) >0\) and \(v_{[a_n,b_n]}'(b_n) <0\). Therefore, Remark 2 implies that
If \(x_{2r}=+\infty \), then the inequality (36) implies that \(v_{[a_n,b_n]}(x)>0\) for every \(x \in [x_{2l},b_n[\) (if \(b_n>x_{2l}\)). If \(x_{2r}<+\infty \), then we can choose \(b_n>x_{2r}\), and Remark 3 states that \(v_{b_n}(x) >0\) for every \(x \in [x_{2l},b_n[\). In both cases, Remark 3 states that \(v_{a_n}(x) >0\) for every \(x \in ]a_n,x_{1r}]\). Hence, Remark 4 guarantees that \(v_{[a_n,b_n]}(x)>0\) for every \(x \in ]x_{1r},x_{2l}[\), and therefore \(v_{[a_n,b_n]}\) is strictly positive in \(]a_n,b_n[\).
Fix a sequence \(\left\{ [a_n,b_n] \right\} _{n \in {\mathbb {N}}}\) as above. Due to Remark 2, the sequence
is monotonically increasing. By Proposition 3, \(v_n(x) = \sup \nolimits _{\tau \in {\mathcal {S}}_{[a_n,b_n]}}J(x,\tau ) < v_p^+(x)\). Therefore, Assumption 2 implies that \(v_n(x)\) is a bounded monotonic sequence and hence it converges. Since solutions of Equation (11) depend continuously on boundary conditions, the convergence of \(v_n(x)\) and \(v'_n(x)\) is uniform on compact intervals. Therefore, \(v(x) = \lim \limits _{n \rightarrow \infty } v_n(x)\) is a positive Carathéodory solution of Equation (11), hence a solution of the HJB equation (10), which, by Proposition 2, is an upper bound for the value function. Since \(v_n(x) = \sup \nolimits _{\tau \in {\mathcal {S}}_{[a_n,b_n]}}J(x,\tau ) \le \sup \nolimits _{\tau \in {\mathcal {S}}}J(x,\tau )\), v must coincide with the value function.
To see that v coincides with the function defined in Equation (27), use Equation (16) and the boundary conditions \(v_{[a_n,b_n]}(a_n) = v_{[a_n,b_n]}(b_n) = 0\) to obtain
Substituting in Equation (20) and rearranging, we obtain
for every \(x \in [a_n,b_n]\). For every \(x \in ]0,+\infty [\) (fixed), the Lebesgue monotone convergence theorem shows that
and the same holds for \(\varPi ^-\). Assumption 2 guarantees that the integrals with \(\varPi ^+\) are finite. Hence,
and the proof is complete.
1.8 A.8 Proof of Proposition 1
Since Proposition 6 states that if \(v_p^+(x)<+\infty \) for any x, then \(d_1<d_2\), the argument used to prove assertion (iv) of Theorem 1 can be easily adapted to prove Proposition 1.
1.9 A.9 Proof of Lemma 3
For each a, b, d such that \(0<a<b < +\infty \) and \(d \ne 0\), let
Notice that \(f_{a,b,d}\) is continuous, and for \(d>0\):
For \(d<0\), the signs of \(f_{a,b,d}\) above are reversed.
Suppose that the assumptions of statement (i) hold.
Since \(\int _b^c s^{-d_2-1} \varPi (s) ds = 0\), we see that \(\int _b^c s^{-d_2-1} \varPi (s) \ln s \, ds = \int _b^c s^{-d_2-1} \varPi (s) \left( \ln s +C \right) \, ds\), for every constant \(C \in {\mathbb {R}}\). If \(c \le x_{2r}\), this implies
since the integrand of the right-hand side integral is non-negative and is not zero on a set of positive measure. If \(c > x_{2r}\), then
and again the integrand in the last expression is a non-negative function which differs from zero on a set of positive measure.
The proof of statement (ii) is analogous.
1.10 A.10 Proof of Lemma 4
Let \(f_{x_1,x_2,d}\) be as in the proof of Lemma 3. Using the argument in the proof of Lemma 3, we see that for any constant \(k \in ]0,+\infty [\):
A simple computation shows that for \(d>0\):
This last expression is equal to zero if \(s=x_1\) or \(s= x_2\). It is a strictly concave function of \(s^d\) for \(s \in ]0,k[\), and strictly convex for \(s \in ]k,+\infty [\). It follows that
and the result follows.
\(\square \)
B Auxiliary results for the sensitivity analysis
In this section we present some straightforward calculations used to establish the monotonicity of the thresholds \(\beta \) and \(\gamma \) with respect to the model parameters in Sects. 4.1, 4.2 and 4.3.
In Sect. 4.1, we note that the threshold \(\beta \) is the unique solution of equation (21). To exclude singular cases, we assume that \(\int _\beta ^{+\infty } s^{-d_1-1} \varPi (s) ds >0\); therefore, in view of the Implicit Function Theorem and the chain rule, it follows that:
where \(\frac{\partial \beta }{\partial d_2} = - \frac{\int _\beta ^{+\infty } s^{-d_2-1} \varPi (s) \ln s \, ds}{\beta ^{-d_2-1} \varPi (\beta )} \). By Lemma 3, this expression is strictly positive, and therefore the derivatives of \(\beta \) with respect to the parameters have the signs:
In Sect. 4.2, we can see that the threshold \(\gamma \) is the unique solution of equation (23). Excluding singular cases, we assume that \(\int _0^\gamma s^{-d_2-1} \varPi (s) ds >0\). Therefore, the argument above shows that
and the derivatives of the threshold \(\gamma \) with respect to the parameters have the signs:
Finally, in Sect. 4.3 the thresholds \(\beta \in ]0,x_{1l}[\), \(\gamma \in ]x_{2r},+\infty [\) are the unique solutions of Eq. (25), and, therefore, we can use the Implicit Function Theorem and the chain rule to derive:
where \( I_i = \int _\beta ^\gamma s^{-d_i-1} \varPi (s) \ln s \, ds\), \(i=1,2\). Thus, Lemma 4 shows that \(\frac{\partial \beta }{\partial r} >0\) and \(\frac{\partial \gamma }{\partial r} <0\), that is, an increase of the discount rate leads to an earlier exercise of the option.
Concerning the effect of changes in the parameters \(\alpha \), \(\sigma ^2\), notice that
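As an illustration of these sensitivity formulas, take again the hypothetical profit function \(\varPi (s)=s-K\) with the arbitrary values \(K=2\), \(d_2=3\) (none of this is from the paper). Equation (21) then gives \(\beta (d_2)=K(d_2-1)/d_2\), so \(\partial \beta /\partial d_2 = K/d_2^2 > 0\), and the implicit-function expression \(\partial \beta /\partial d_2 = - \int _\beta ^{+\infty } s^{-d_2-1} \varPi (s) \ln s \, ds \big / \beta ^{-d_2-1} \varPi (\beta )\) must reproduce this value. A minimal numerical sketch:

```python
import math
import numpy as np

# Sensitivity check for d(beta)/d(d2) with the hypothetical profit
# function Pi(s) = s - K (K = 2, d2 = 3; illustrative values only).
# Closed form from condition (21): beta(d2) = K*(d2-1)/d2, so the
# exact derivative is K/d2**2, to be matched by the implicit-function
# formula  -(int_beta^oo s^{-d2-1} Pi(s) ln s ds) / (beta^{-d2-1} Pi(beta)).
K, d2 = 2.0, 3.0
beta = K * (d2 - 1) / d2

def Pi(s):
    return s - K

# numerator integral via the substitution s = e^u (trapezoid rule);
# the integrand decays like s^{-3} ln s, so truncation at 1e6 is safe
u = np.linspace(math.log(beta), math.log(1e6), 200_001)
s = np.exp(u)
g = s ** (-d2 - 1) * Pi(s) * np.log(s) * s   # extra s = ds/du
h = u[1] - u[0]
I = h * (g[0] / 2 + g[1:-1].sum() + g[-1] / 2)

implicit = -I / (beta ** (-d2 - 1) * Pi(beta))
exact = K / d2 ** 2                          # derivative of beta(d2)

assert implicit > 0
assert abs(implicit - exact) < 1e-4
```

The positive sign matches Lemma 3: an increase in \(d_2\) pushes the exit threshold \(\beta \) upward in this example.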
Cite this article
Guerra, M., Nunes, C. & Oliveira, C. The optimal stopping problem revisited. Stat Papers 62, 137–169 (2021). https://doi.org/10.1007/s00362-019-01088-w