Appendix A. Derivation of the property \(\left( \beta _{1}-1\right) \left( 1-\beta _{2}\right) =-\frac{\delta }{r}\beta _{1}\beta _{2}\)
From the definition of the roots \(\beta _{1}\) and \(\beta _{2}\) in relationship (2), we have \(\beta _{1}\beta _{2}=-\frac{2r}{\sigma ^{2}}\) and \(\beta _{1}+\beta _{2}=-\frac{r-\delta -\sigma ^{2}/2}{\sigma ^{2}/2}.\) It follows that \(\left( \beta _{1}-1\right) \left( 1-\beta _{2}\right) =\beta _{1}+\beta _{2}-1-\beta _{1}\beta _{2}= \frac{2\delta }{\sigma ^{2}}\) or equivalently
$$\begin{aligned} (\beta _{1}-1)(1-\beta _{2})=\frac{2\delta }{\sigma ^{2}}=-\frac{\delta }{r} \beta _{1}\beta _{2}\text { as }\beta _{1}\beta _{2}=-\frac{2r}{\sigma ^{2}}. \end{aligned}$$
\(\square \)
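As a quick numerical sanity check (not part of the derivation), the short Python sketch below recovers \(\beta _{1}\) and \(\beta _{2}\) from the quadratic implied by the stated sum and product of the roots and verifies the identity; the parameter values are arbitrary illustrations.

```python
import numpy as np

# Roots beta_2 < 0 < 1 < beta_1 of the quadratic implied by the stated sum and product:
# (sigma^2/2) b^2 + (r - delta - sigma^2/2) b - r = 0.
r, delta, sigma = 0.06, 0.04, 0.25          # illustrative parameters
beta2, beta1 = np.sort(np.roots([sigma**2 / 2, r - delta - sigma**2 / 2, -r]))

print(beta1 * beta2, -2 * r / sigma**2)                         # product of the roots
print((beta1 - 1) * (1 - beta2), 2 * delta / sigma**2)          # identity of Appendix A
print((beta1 - 1) * (1 - beta2), -(delta / r) * beta1 * beta2)  # equivalent form
```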
Appendix B. Derivation of the reward function and its properties
We want to compute
$$\begin{aligned} H(x_{0},z_{0})=\frac{E_{0}^{Q}\left[ x_{\tau _{0}}e^{-r\tau _{0}}\right] }{ \delta }. \end{aligned}$$
Let \(\lambda >0\) and observe that if process x starts at \(x_{0}\) and process \(x^{\prime }\) starts at \(\lambda x_{0}\), then we have \(x^{\prime }\equiv \lambda x\). It follows that if the counting process z (resp. \( z^{\prime }\)) associated with x (resp. \(x^{\prime }\)) starts at \(z_{0}>0\,\) (resp. \(\lambda z_{0}\)), then by linearity, we have \(z^{\prime }\equiv \lambda z\): process \(z^{\prime }\) hits 0 exactly when process z hits 0. This implies that \(H(\lambda x_{0},\lambda z_{0})=\lambda H(x_{0},z_{0})\), so H is homogeneous of degree 1 and we can write
$$\begin{aligned} H(x,z)=zh(u), \end{aligned}$$
where \(u=\frac{x}{z}\) for some smooth function h. Furthermore, for \(t<\tau _{0}\), process H is a martingale so its drift process must be equal to 0, and H must satisfy the following HJB equation
$$\begin{aligned} rH(x,z)=(r-\delta )xH_{1}(x,z)+\frac{\sigma ^{2}}{2} x^{2}H_{11}(x,z)-xH_{2}(x,z), \end{aligned}$$
(13)
with boundary condition \(H(0,z)=0\) (if x starts at 0 it stays at 0, z never reaches 0 and the discounted payoff vanishes). Then, h satisfies the following reduced HJB equation
$$\begin{aligned} (r+u)h(u)=u(u+r-\delta )h^{\prime }(u)+\frac{\sigma ^{2}}{2}u^{2}h^{\prime \prime }(u). \end{aligned}$$
(14)
Let us define the auxiliary function g such that \(g(u)=u^{-\beta }h(u)\) where \(\beta \in \{\beta _{1},\beta _{2}\}\). It is easy to check that g satisfies the following ODE
$$\begin{aligned} \frac{\sigma ^{2}}{2}ug^{\prime \prime }(u)+(u+r-\delta +\beta \sigma ^{2})g^{\prime }(u)+(\beta -1)g(u)=0. \end{aligned}$$
Next, set \(v=-\frac{2}{\sigma ^{2}}u\), and define a new auxiliary function \( \varphi \) such that \(\varphi (v)=g\left( u\right) \). Function \(\varphi \) is the solution of the following Kummer equation
$$\begin{aligned} v\varphi ^{\prime \prime }(v)+(b-v)\varphi ^{\prime }(v)-a\varphi (v)=0, \end{aligned}$$
(15)
where \(a=\beta -1\) and \(b=1+\beta -{\overline{\beta }}\), with \(\overline{\beta }_{1}=\beta _{2}\) and \({\overline{\beta }}_{2}=\beta _{1}.\) We are interested in the solutions defined for \(v\le 0.\) One solution is the Kummer function M whose representation is given by
$$\begin{aligned} M(a,b,z)=\frac{1}{B(a,b-a)}\int \nolimits _{0}^{1}e^{zt}t^{a-1}(1-t)^{b-a-1}dt, \text { for }b>a>0, \end{aligned}$$
where B denotes the Beta function. See for instance Abramowitz and Stegun [3]. Next we show that if \(\varphi \) is a solution of Eq. (15) for \(z\ge 0\) with parameters (a, b), then \({\widehat{\varphi }}\) defined by \({\widehat{\varphi }}(v)=e^{v}\varphi (-v)\) for \(v\le 0\) is a solution of Eq. (15) for \(v\le 0\) with parameters \(\left( b-a,b\right) \). Setting \(z=-v\), we have \(\varphi ^{\prime }(z)=e^{z}\left[ {\widehat{\varphi }}(-z)-{\widehat{\varphi }}^{\prime }(-z)\right] \) and \( \varphi ^{\prime \prime }(z)=e^{z}\left[ {\widehat{\varphi }}(-z)-2\widehat{ \varphi }^{\prime }(-z)+{\widehat{\varphi }}^{\prime \prime }(-z)\right] \). It is easy to check that \({\widehat{\varphi }}\) satisfies:
$$\begin{aligned} z{\widehat{\varphi }}^{\prime \prime }(z)+(b-z){\widehat{\varphi }}^{\prime }(z)-(b-a){\widehat{\varphi }}(z)=0. \end{aligned}$$
The other independent solution of Eq. (15) defined for \(z\le 0\) is given by
$$\begin{aligned} e^{z}U(b-a,b,-z)=\frac{1}{\Gamma (b-a)}\int \nolimits _{0}^{\infty }e^{z(1+t)}t^{b-a-1}(1+t)^{a-1}dt,\text { with }b-a>0, \end{aligned}$$
where U denotes the Tricomi (confluent hypergeometric) function.
Kummer’s Transformation
$$\begin{aligned} M(a,b,z)= & {} e^{z}M(b-a,b,-z) \\ U(a,b,z)= & {} z^{1-b}U(1+a-b,2-b,z). \end{aligned}$$
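As a hedged illustration (not part of the proof), the following Python sketch checks the Euler integral representation of M and Kummer's first transformation numerically, assuming SciPy's convention `hyp1f1(a, b, z)` \(=M(a,b,z)\); the parameter values are arbitrary.

```python
import numpy as np
from scipy import special, integrate

def M_integral(a, b, z):
    """Kummer M(a, b, z) via the Euler integral representation (valid for b > a > 0)."""
    val, _ = integrate.quad(lambda t: np.exp(z * t) * t**(a - 1) * (1 - t)**(b - a - 1), 0, 1)
    return val / special.beta(a, b - a)

a, b, z = 1.3, 3.1, -2.0                      # illustrative values with b > a > 0
print(M_integral(a, b, z), special.hyp1f1(a, b, z))                       # integral representation
print(special.hyp1f1(a, b, z), np.exp(z) * special.hyp1f1(b - a, b, -z))  # Kummer's transformation
```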
In order to have a well-defined solution, we need to choose \(\beta =\beta _{1}\) so that
$$\begin{aligned} a=\beta _{1}-1>0\text { and }b=1+\beta _{1}-\beta _{2}>a. \end{aligned}$$
To sum up, one solution of the homogeneous Eq. (14) is
$$\begin{aligned} u^{\beta _{1}}\varphi \left( -\frac{2}{\sigma ^{2}}u\right)= & {} u^{\beta _{1}}M\left( \beta _{1}-1,1+\beta _{1}-\beta _{2},-\frac{2}{\sigma ^{2}}u\right) \\= & {} \frac{u^{\beta _{1}}}{B(\beta _{1}-1,2-\beta _{2})}\int \nolimits _{0}^{1}e^{- \frac{2u}{\sigma ^{2}}t}t^{\beta _{1}-2}(1-t)^{1-\beta _{2}}dt. \end{aligned}$$
Another independent solution of (14) is \(u^{\beta _{1}}e^{-\frac{2}{\sigma ^{2}}u}U(2-\beta _{2},1+\beta _{1}-\beta _{2},\frac{2}{\sigma ^{2}}u).\) One can check that around 0, we have
$$\begin{aligned} u^{\beta _{1}}e^{-\frac{2}{\sigma ^{2}}u}U\left( 2-\beta _{2},1+\beta _{1}-\beta _{2},\frac{2}{\sigma ^{2}}u\right) \underset{0}{\sim }\left( \frac{\sigma ^{2}}{2} \right) ^{\beta _{1}-\beta _{2}}\frac{\Gamma (\beta _{1}-\beta _{2})}{\Gamma (2-\beta _{2})}u^{\beta _{2}}, \end{aligned}$$
which is not bounded around 0. It follows that \(h(u)=Ku^{\beta _{1}}M(\beta _{1}-1,1+\beta _{1}-\beta _{2},-\frac{2}{\sigma ^{2}}u)\) for some constant \(K>0\) to be determined. \(\square \)
Boundary Conditions: At \(z=0\), we have \(\tau _{0}=0\), so value matching requires \(H(x,0)=\frac{x}{\delta }\), that is \(\underset{u\rightarrow \infty }{\lim }\frac{h(u)}{u}=\frac{1}{\delta }\). For large x, an asymptotic approximation of the Kummer function (see Abramowitz and Stegun [3]) is
$$\begin{aligned} M(a,b,-x)\underset{\infty }{\sim }\frac{\Gamma (a)x^{-a}}{B(a,b-a)} (1-(b-a-1)ax^{-1}+o(x^{-1})), \end{aligned}$$
(16)
so in particular
$$\begin{aligned} \underset{u\rightarrow \infty }{\lim }\text { }u^{\beta _{1}-1}M(\beta _{1}-1,1+\beta _{1}-\beta _{2},-\frac{2}{\sigma ^{2}}u)=\left( \frac{\sigma ^{2}}{2}\right) ^{\beta _{1}-1}\frac{\Gamma (\beta _{1}-1)}{B(\beta _{1}-1,2-\beta _{2})}. \end{aligned}$$
This implies that \(K=\frac{1}{\delta }\left( \frac{\sigma ^{2}}{2}\right) ^{1-\beta _{1}}\frac{B(\beta _{1}-1,2-\beta _{2})}{\Gamma (\beta _{1}-1)}.\) Finally, we obtain that
$$\begin{aligned} H(x,I)=\frac{x}{\delta }\frac{\left( \frac{2x}{\sigma ^{2}I}\right) ^{\beta _{1}-1}}{\Gamma (\beta _{1}-1)}\int \nolimits _{0}^{1}e^{-\frac{2x}{\sigma ^{2}I}t}t^{\beta _{1}-2}(1-t)^{1-\beta _{2}}dt. \end{aligned}$$
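This closed form can be checked numerically. The sketch below (an illustration under arbitrary parameters, using SciPy's `hyp1f1` for M) evaluates H both through the integral above and through \(H(x,I)=I\,K\,u^{\beta _{1}}M(\beta _{1}-1,1+\beta _{1}-\beta _{2},-\frac{2}{\sigma ^{2}}u)\) with the constant K found above, and verifies the degree-one homogeneity used in the derivation.

```python
import numpy as np
from scipy import special, integrate

r, delta, sigma, I = 0.06, 0.04, 0.25, 1.0   # illustrative parameters
beta2, beta1 = np.sort(np.roots([sigma**2 / 2, r - delta - sigma**2 / 2, -r]))

def H_integral(x, z):
    """H(x, z) from the closed-form integral representation above."""
    X = 2 * x / (sigma**2 * z)
    val, _ = integrate.quad(
        lambda t: np.exp(-X * t) * t**(beta1 - 2) * (1 - t)**(1 - beta2), 0, 1)
    return (x / delta) * X**(beta1 - 1) * val / special.gamma(beta1 - 1)

def H_kummer(x, z):
    """Same function written as z*K*u^{beta_1}*M(beta_1-1, 1+beta_1-beta_2, -2u/sigma^2)."""
    u = x / z
    K = (sigma**2 / 2)**(1 - beta1) * special.beta(beta1 - 1, 2 - beta2) \
        / (delta * special.gamma(beta1 - 1))
    return z * K * u**beta1 * special.hyp1f1(beta1 - 1, 1 + beta1 - beta2, -2 * u / sigma**2)

x = 1.5
print(H_integral(x, I), H_kummer(x, I))                  # the two representations agree
print(2 * H_integral(x, I), H_integral(2 * x, 2 * I))    # homogeneity of degree one
```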
Using relationship (16) together with the identity \((\beta _{1}-1)(1-\beta _{2})=\frac{2\delta }{\sigma ^{2}}\) of “Appendix A”, we obtain that
$$\begin{aligned} \begin{array}{lll} R(x,I) &{} \underset{x\rightarrow \infty }{\sim } &{} \frac{x}{\delta } -\tau _{c}\left(\frac{x}{\delta }-\frac{(1-\beta _{2})(\beta _{1}-1)}{\delta } \frac{\sigma ^{2}}{2}I\right) \\ &{} \underset{x\rightarrow \infty }{\sim } &{} \frac{1-\tau _{c}}{\delta } x+\tau _{c}I. \end{array} \end{aligned}$$
Then, it follows that
$$\begin{aligned} H_{1}(x,I)= & {} \frac{1}{\delta }\frac{\left( \frac{2x}{\sigma ^{2}I}\right) ^{\beta _{1}-1}}{\Gamma (\beta _{1}-1)}\int \nolimits _{0}^{1}e^{-\frac{2x}{ \sigma ^{2}I}t}t^{\beta _{1}-2}(1-t)^{-\beta _{2}}(1-\beta _{2}t)dt>0 \\ H_{2}(x,I)= & {} -\frac{(1-\beta _{2})}{\delta }\frac{\frac{x}{I}\left( \frac{2x }{\sigma ^{2}I}\right) ^{\beta _{1}-1}}{\Gamma (\beta _{1}-1)} \int \nolimits _{0}^{1}e^{-\frac{2x}{\sigma ^{2}I}t}t^{\beta _{1}-1}(1-t)^{-\beta _{2}}dt<0 \\ H_{11}(x,I)= & {} \frac{\beta _{2}(\beta _{2}-1)}{\delta }\frac{\frac{1}{x} \left( \frac{2x}{\sigma ^{2}I}\right) ^{\beta _{1}-1}}{\Gamma (\beta _{1}-1)} \int \nolimits _{0}^{1}e^{-\frac{2x}{\sigma ^{2}I}t}t^{\beta _{1}}(1-t)^{-\beta _{2}-1}dt>0 \\ H_{12}(x,I)= & {} -\frac{\beta _{2}(\beta _{2}-1)}{\delta }\frac{\frac{1}{I} \left( \frac{2x}{\sigma ^{2}I}\right) ^{\beta _{1}-1}}{\Gamma (\beta _{1}-1)} \int \nolimits _{0}^{1}e^{-\frac{2x}{\sigma ^{2}I}t}t^{\beta _{1}}(1-t)^{-\beta _{2}-1}dt<0 \\ H_{22}(x,I)= & {} \frac{\beta _{2}(\beta _{2}-1)}{\delta }\frac{\frac{x}{I^{2}} \left( \frac{2x}{\sigma ^{2}I}\right) ^{\beta _{1}-1}}{\Gamma (\beta _{1}-1)} \int \nolimits _{0}^{1}e^{-\frac{2x}{\sigma ^{2}I}t}t^{\beta _{1}}(1-t)^{-\beta _{2}-1}dt>0. \end{aligned}$$
Note that \(H_{11}(x,I)H_{22}(x,I)=H_{12}^{2}(x,I)\), which also follows from the degree-one homogeneity of H (Euler's relations \(xH_{11}+IH_{12}=0\) and \(xH_{12}+IH_{22}=0\)). \(\square \)
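A finite-difference check of these signs and of the determinant identity is sketched below; it reuses the Kummer-function form of H and arbitrary parameters, and is only an illustration, not a proof.

```python
import numpy as np
from scipy import special

r, delta, sigma, I = 0.06, 0.04, 0.25, 1.0   # illustrative parameters
beta2, beta1 = np.sort(np.roots([sigma**2 / 2, r - delta - sigma**2 / 2, -r]))

def H(x, z):
    # H(x, z) = z * K * u^{beta_1} * M(beta_1 - 1, 1 + beta_1 - beta_2, -2u/sigma^2), u = x/z
    u = x / z
    K = (sigma**2 / 2)**(1 - beta1) * special.beta(beta1 - 1, 2 - beta2) \
        / (delta * special.gamma(beta1 - 1))
    return z * K * u**beta1 * special.hyp1f1(beta1 - 1, 1 + beta1 - beta2, -2 * u / sigma**2)

x, h = 1.5, 1e-3                             # evaluation point and finite-difference step
H11 = (H(x + h, I) - 2 * H(x, I) + H(x - h, I)) / h**2
H22 = (H(x, I + h) - 2 * H(x, I) + H(x, I - h)) / h**2
H12 = (H(x + h, I + h) - H(x + h, I - h) - H(x - h, I + h) + H(x - h, I - h)) / (4 * h**2)
print(H11 > 0, H12 < 0, H22 > 0)
print(H11 * H22, H12**2)                     # equal up to finite-difference error
```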
Effective Earning Function. Applying Itô’s lemma, for all \( 0\le t\le s\) we have
$$\begin{aligned} R(x_{s},I)e^{-rs}=R(x_{t},I)e^{-rt}+\int \nolimits _{t}^{s}(\mathcal {A} R(x_{u},I)-rR(x_{u},I))e^{-ru}du+\int \nolimits _{t}^{s}\sigma x_{u}R_{1}(x_{u},I)e^{-ru}dw_{u}^{Q}, \end{aligned}$$
where \(\mathcal {A}R(x_{u},I)=(r-\delta )xR_{1}(x,I)+\frac{\sigma ^{2}}{2} x^{2}R_{11}(x,I)\) denotes the Dynkin operator. Then, observe that \(\left| R_{1}(x_{u},I)\right| \le \frac{1}{\delta }\) as we prove in “Appendix C” that \(H_{1}\le \frac{1}{\delta }\). This implies that \(t\mapsto \int \nolimits _{0}^{t}\sigma x_{u}R_{1}(x_{u},I)e^{-ru}dw_{u}^{Q}\) is a martingale so that
$$\begin{aligned} E_{t}\left[ R(x_{s},I)e^{-rs}\right] =R(x_{t},I)e^{-rt}+E_{t}^{Q}\left[ \int \nolimits _{t}^{s}(\mathcal {A}R(x_{u},I)-rR(x_{u},I))e^{-ru}du\right] . \end{aligned}$$
Then, recall that \(R(x,I)=\frac{x}{\delta }-\tau _{c}H(x,I);\) using relationship (13), we find that
$$\begin{aligned} \mathcal {A}R(x,I)-rR(x,I)=-(1-\tau _{c})x-\tau _{c}(x+xH_{2}(x,I)), \end{aligned}$$
so for \(0\le t\le s\), we obtain
$$\begin{aligned} R(x_{t},I)=E_{t}^{Q}\left[ \int \nolimits _{t}^{s}\left[ (1-\tau _{c})x_{u}+\tau _{c}(x_{u}+x_{u}H_{2}(x_{u},I))\right] e^{-r(u-t)}du\right] +E_{t}^{Q}\left[ R(x_{s},I)e^{-r(s-t)}\right] , \end{aligned}$$
As \(0\le R(x,I)\le \frac{x}{\delta }\) for all \(x\ge 0\), we have \(0\le E_{t}^{Q}\left[ R(x_{s},I)e^{-r(s-t)}\right] \le \frac{x_{t}}{\delta } e^{-\delta (s-t)}\); letting s go to \(\infty \) leads to
$$\begin{aligned} R(x_{t},I)=E_{t}^{Q}\left[ \int \nolimits _{t}^{\infty }\left[ (1-\tau _{c})x_{u}+\tau _{c}x_{u}(1+H_{2}(x_{u},I))\right] e^{-r(u-t)}du\right] . \end{aligned}$$
\(\square \)
Properties of the Effective Earning Function. From relationship (5), clearly, \(e(0,I)=0\), and from the expression of \(H_{2}\) and relationship (16) we find that \(e(x,I)\underset{x\rightarrow \infty }{\sim }(1-\tau _{c})x\). Then,
$$\begin{aligned} e^{\prime }(x,I)=1+\tau _{c}(H_{2}(x,I)+xH_{12}(x,I)). \end{aligned}$$
Again using relationship (16) and the expressions for \(H_{2}\) and \(H_{12}\) we find that \({\lim \nolimits _{x\rightarrow \infty } }\) \( H_{2}(x,I)=-1\) and \({\lim \nolimits _{x\rightarrow \infty } }\) \(xH_{12}(x,I)=0\) , so that \({\lim \nolimits _{x\rightarrow \infty } }\) \(e^{\prime }(x,I)=1-\tau _{c}\). Finally, we have
$$\begin{aligned} e^{\prime \prime }(x,I)=\tau _{c}(2H_{12}(x,I)+xH_{112}(x,I)). \end{aligned}$$
Then, rewriting \(H_{12}\) in a more compact way as
$$\begin{aligned} H_{12}(x,I)=-K\left( \frac{2x}{\sigma ^{2}I}\right) ^{\beta _{1}-1}M\left( \beta _{1}+1,1+\beta _{1}-\beta _{2},-\frac{2x}{\sigma ^{2}I}\right) , \end{aligned}$$
where \(K=\frac{\beta _{2}(\beta _{2}-1)}{\delta }\frac{B(\beta _{1}+1,-\beta _{2})}{\Gamma (\beta _{1}-1)}\frac{1}{I}>0,\) we obtain that
$$\begin{aligned} e^{\prime \prime }(x,I)= & {} \tau _{c}(2H_{12}(x,I)+xH_{112}(x,I)) \\= & {} -\tau _{c}K\left( \frac{2x}{\sigma ^{2}I}\right) ^{\beta _{1}-1}\left[ (\beta _{1}+1)M\left( \beta _{1}+1,1+\beta _{1}-\beta _{2},-\frac{2x}{\sigma ^{2}I} \right) \right. \\&-\left. \frac{2x}{\sigma ^{2}I}M^{\prime }\left( \beta _{1}+1,1+\beta _{1}-\beta _{2},-\frac{2x}{\sigma ^{2}I}\right) \right] . \end{aligned}$$
Using the fact that \(zM^{\prime }(a,b,z)=a\left[ M(a+1,b,z)-M(a,b,z)\right] , \) we obtain that
$$\begin{aligned} e^{\prime \prime }(x,I)=-\tau _{c}(\beta _{1}+1)K\left( \frac{2x}{\sigma ^{2}I}\right) ^{\beta _{1}-1}M\left( \beta _{1}+2,1+\beta _{1}-\beta _{2},-\frac{2x}{\sigma ^{2}I}\right) <0, \end{aligned}$$
since \(\beta _{1}+2>0\) and \(1+\beta _{1}-\beta _{2}>0\), we have \(M(\beta _{1}+2,1+\beta _{1}-\beta _{2},-\frac{2x}{\sigma ^{2}I})>0\). We conclude that e is concave in x, and since \(\underset{x\rightarrow \infty }{\lim }\) \(e^{\prime }(x,I)=1-\tau _{c}>0\), function \(e^{\prime }\) must be positive. \(\square \)
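The sketch below illustrates these properties numerically for one arbitrary parameter set: it builds \(e(x,I)=(1-\tau _{c})x+\tau _{c}x(1+H_{2}(x,I))\) using the same rewriting of \(H_{2}\) in terms of M as was done for \(H_{12}\) above, checks by finite differences that \(e^{\prime }\) decreases towards \(1-\tau _{c}\) with \(e^{\prime \prime }<0\), and compares with the closed form for \(e^{\prime \prime }\). It is an illustration, not a proof.

```python
import numpy as np
from scipy import special

r, delta, sigma, I, tau_c = 0.06, 0.04, 0.25, 1.0, 0.3   # illustrative parameters
beta2, beta1 = np.sort(np.roots([sigma**2 / 2, r - delta - sigma**2 / 2, -r]))
b = 1 + beta1 - beta2

def H2(x, z):
    # H_2 rewritten with the Kummer function, as done for H_12 in the text
    X = 2 * x / (sigma**2 * z)
    return -(1 - beta2) / delta * (x / z) * X**(beta1 - 1) \
        * special.beta(beta1, 1 - beta2) / special.gamma(beta1 - 1) \
        * special.hyp1f1(beta1, b, -X)

def e(x):
    return (1 - tau_c) * x + tau_c * x * (1 + H2(x, I))

h = 1e-3
for x in (0.5, 1.0, 2.0, 5.0):
    e1 = (e(x + h) - e(x - h)) / (2 * h)
    e2 = (e(x + h) - 2 * e(x) + e(x - h)) / h**2
    print(x, e1, e2)                 # e' decreases towards 1 - tau_c = 0.7 and e'' < 0

x, X = 1.0, 2 * 1.0 / (sigma**2 * I)
K = beta2 * (beta2 - 1) / delta * special.beta(beta1 + 1, -beta2) / special.gamma(beta1 - 1) / I
print(-tau_c * (beta1 + 1) * K * X**(beta1 - 1) * special.hyp1f1(beta1 + 2, b, -X))  # closed-form e''(1, I)
```

The last printed value should match the finite-difference second derivative reported for \(x=1.0\) in the loop.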
Auxiliary Function \(\Phi \). Given a and b positive, let us define for \(z\ge 0\)
$$\begin{aligned} \Phi (z,a,b)= & {} \frac{z^{a}}{\Gamma (a)}\int \nolimits _{0}^{1}e^{-zt}t^{a-1}(1-t)^{b}dt \\= & {} \frac{1}{\Gamma (a)}\int \nolimits _{0}^{z}e^{-u}u^{a-1}\left( 1-\frac{u}{z} \right) ^{b}du. \end{aligned}$$
Clearly, \(\Phi \) is increasing in z and using Lebesgue monotone convergence theorem, we obtain that \({\lim \nolimits _{z\rightarrow \infty } }\) \(\Phi (z,a,b)=1\). In particular, note that for all \(z>0\), we have \(\Phi (z,a,b)<1\).
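A minimal numerical illustration of \(\Phi \) (arbitrary positive parameters a and b) is:

```python
import numpy as np
from scipy import special, integrate

def Phi(z, a, b):
    """Phi(z, a, b) as defined above, for a > 0, b > 0 and z >= 0."""
    val, _ = integrate.quad(lambda t: np.exp(-z * t) * t**(a - 1) * (1 - t)**b, 0, 1)
    return z**a * val / special.gamma(a)

a, b = 1.6, 2.2                      # illustrative parameters
for z in (1.0, 5.0, 25.0, 125.0):
    print(z, Phi(z, a, b))           # increasing in z and bounded above by 1
```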
Appendix C. Existence and properties of the investment trigger
Existence and Uniqueness of Investment Trigger \(x^{*}\). Define auxiliary function \(\Psi \) as
$$\begin{aligned} \Psi (x)= & {} \beta _{1}R(x,I)-xR_{1}(x,I) \\= & {} (\beta _{1}-1)\frac{x}{\delta }-\frac{\tau _{c}}{\delta }\left( \frac{ \sigma ^{2}}{2}\right) ^{-\beta _{1}}\frac{x^{\beta _{1}+1}I^{-\beta _{1}}}{ \Gamma (\beta _{1}-1)}\int \nolimits _{0}^{1}e^{-\frac{2x}{\sigma ^{2}I} t}t^{\beta _{1}-1}(1-t)^{1-\beta _{2}}dt. \end{aligned}$$
We want to show that the equation \(\Psi (x)=\beta _{1}I\) has a unique root \( x^{*}>0\). \(\Psi \) is a smooth function and
$$\begin{aligned} \Psi ^{\prime }(x)= & {} \frac{\beta _{1}-1}{\delta }-\frac{\tau _{c}}{\delta } \left( \frac{\sigma ^{2}}{2}\right) ^{-\beta _{1}}\frac{\left( \frac{x}{I} \right) ^{\beta _{1}}}{\Gamma (\beta _{1}-1)}\int \nolimits _{0}^{1}e^{-\frac{ 2x}{\sigma ^{2}I}t}t^{\beta _{1}-1}(1-t)^{1-\beta _{2}}\left( \beta _{1}+1-\frac{2 }{\sigma ^{2}}\frac{x}{I}t\right) dt \\= & {} \frac{\beta _{1}-1}{\delta }-\frac{\tau _{c}}{\delta }\left( \frac{\sigma ^{2}}{2}\right) ^{-\beta _{1}}\frac{\left( \frac{x}{I}\right) ^{\beta _{1}}}{ \Gamma (\beta _{1}-1)}\int \nolimits _{0}^{1}e^{-\frac{2x}{\sigma ^{2}I} t}t^{\beta _{1}-1}(1-t)^{-\beta _{2}}\left[ (1-t)+(1-\beta _{2})t\right] dt, \\ \Psi ^{\prime \prime }(x)= & {} -\frac{\tau _{c}}{\delta }\left( \frac{\sigma ^{2}}{2}\right) ^{-\beta _{1}}\frac{x^{\beta _{1}-1}I^{-\beta _{1}}}{\Gamma (\beta _{1}-1)}\int \nolimits _{0}^{1}e^{-\frac{2x}{\sigma ^{2}I} t}t^{\beta _{1}-1}(1-t)^{-\beta _{2}}\left[ (1-t)+(1-\beta _{2})t\right] \left( \beta _{1}-\frac{2}{\sigma ^{2}}\frac{ x}{I}t\right) dt \\= & {} \beta _{2}(1-\beta _{2})\frac{\tau _{c}}{\delta }\left( \frac{\sigma ^{2}}{2} \right) ^{-\beta _{1}}\frac{x^{\beta _{1}-1}I^{-\beta _{1}}}{\Gamma (\beta _{1}-1)}\int \nolimits _{0}^{1}e^{-\frac{2x}{\sigma ^{2}I}t}t^{\beta _{1}+1}(1-t)^{-\beta _{2}-1}dt, \end{aligned}$$
where the second expressions for \(\Psi ^{\prime }\) and \(\Psi ^{\prime \prime }\) follow from an integration by parts. Since \(\beta _{2}<0\), \(\Psi ^{\prime \prime }\) is negative, so \(\Psi ^{\prime }\) is decreasing, and observe that \({\lim \nolimits _{x\rightarrow \infty } }\) \(\Psi ^{\prime }(x)=\frac{\beta _{1}-1}{\delta }-\tau _{c}\frac{\Gamma (\beta _{1}) }{\Gamma (\beta _{1}-1)\delta }=\frac{\beta _{1}-1}{\delta }(1-\tau _{c})>0.\) This implies that \(\Psi ^{\prime }\) is positive: \(\Psi \) is strictly increasing with \(\Psi (0)=0\) and \(\underset{x\rightarrow \infty }{\lim }\) \( \Psi (x)=\infty \). Therefore, the equation \(\Psi (x)=\beta _{1}I\) has a unique root \(x^{*}\). Then, totally differentiating relationship (6) with respect to \(\tau _{c}\), we find that
$$\begin{aligned} \Psi ^{\prime }(x^{*})\frac{\partial x^{*}}{\partial \tau _{c}}- \frac{x^{*}}{\delta }\left( \frac{\sigma ^{2}}{2}\right) ^{-\beta _{1}} \frac{\left( \frac{x^{*}}{I}\right) ^{\beta _{1}}}{\Gamma (\beta _{1}-1)} \int \nolimits _{0}^{1}e^{-\frac{2x^{*}}{\sigma ^{2}I}t}t^{\beta _{1}-1}(1-t)^{1-\beta _{2}}dt=0, \end{aligned}$$
so \(\frac{\partial x^{*}}{\partial \tau _{c}}>0\). Finally, we show that \( \left( 1-\tau _{c}\right) x^{*}\) is decreasing in \(\tau _{c}.\) To see this, set \(y^{*}=\left( 1-\tau _{c}\right) x^{*}\) and observe that \( y^{*}\) satisfies
$$\begin{aligned} \Psi \left( \frac{y^{*}}{1-\tau _{c}}\right) =\beta _{1}I. \end{aligned}$$
Differentiating with respect to \(\tau _{c}\,\)leads to
$$\begin{aligned}&\frac{1}{1-\tau _{c}}\Psi ^{\prime }(x^{*})\frac{\partial y^{*}}{ \partial \tau _{c}}\nonumber \\&\quad =\frac{x^{*}}{1-\tau _{c}}\left( -\Psi ^{\prime }(x^{*})+(1-\tau _{c})\frac{1}{\delta }\left( \frac{\sigma ^{2}}{2} \right) ^{-\beta _{1}}\frac{\left( \frac{x^{*}}{I}\right) ^{\beta _{1}}}{ \Gamma (\beta _{1}-1)}\int \nolimits _{0}^{1}e^{-\frac{2x^{*}}{\sigma ^{2}I }t}t^{\beta _{1}-1}(1-t)^{1-\beta _{2}}dt\right) . \end{aligned}$$
Since \(\Psi ^{\prime }\) is decreasing, the term in parentheses on the RHS of the previous equality is an increasing function of \(x^{*}.\) Given what precedes, letting \(x^{*}\) go to infinity, this term converges to \(-\frac{\beta _{1}-1}{\delta } (1-\tau _{c})+\frac{\beta _{1}-1}{\delta }(1-\tau _{c})=0.\) We conclude that the RHS is always negative. As \(\Psi ^{\prime }(x^{*})>0,\) it must be the case that \(\frac{\partial y^{*}}{\partial \tau _{c}}<0.\) \(\square \)
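These comparative statics can be illustrated numerically: the sketch below solves \(\Psi (x)=\beta _{1}I\) with a bracketing root finder for several tax rates and prints \(x^{*}\) and \((1-\tau _{c})x^{*}\); the parameters are arbitrary and the code is only an illustration of the argument above.

```python
import numpy as np
from scipy import special, integrate, optimize

r, delta, sigma, I = 0.06, 0.04, 0.25, 1.0    # illustrative parameters
beta2, beta1 = np.sort(np.roots([sigma**2 / 2, r - delta - sigma**2 / 2, -r]))

def Psi(x, tau_c):
    """Psi(x) = beta_1 R(x, I) - x R_1(x, I), written with the integral above."""
    X = 2 * x / (sigma**2 * I)
    J, _ = integrate.quad(lambda t: np.exp(-X * t) * t**(beta1 - 1) * (1 - t)**(1 - beta2), 0, 1)
    return (beta1 - 1) * x / delta - tau_c / delta * (sigma**2 / 2)**(-beta1) \
        * x**(beta1 + 1) * I**(-beta1) * J / special.gamma(beta1 - 1)

def x_star(tau_c):
    # Psi increases from 0 to infinity, so the root of Psi(x) = beta_1 * I is unique
    return optimize.brentq(lambda x: Psi(x, tau_c) - beta1 * I, 1e-9, 50.0)

for tau_c in (0.1, 0.2, 0.3, 0.4):
    xs = x_star(tau_c)
    print(tau_c, xs, (1 - tau_c) * xs)   # x* increases and (1 - tau_c) x* decreases in tau_c
```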
Proof of Proposition 2
Step 1: \(x^{*}<{\overline{x}}\). As \(\frac{1-\tau _{c}}{ \delta }{\overline{x}}=\frac{\beta _{1}I}{\beta _{1}-1}\), in order to show that \(x^{*}<{\overline{x}}\), it is enough to show that \(\Psi ({\overline{x}} )>\beta _{1}I,\) or equivalently
$$\begin{aligned} (\beta _{1}-1)\frac{{\overline{x}}}{\delta }\left( 1-\tau _{c}\frac{(\frac{2}{ \sigma ^{2}}\frac{{\overline{x}}}{I})^{\beta _{1}}}{\Gamma (\beta _{1})} \int \nolimits _{0}^{1}e^{-\frac{2{\overline{x}}}{\sigma ^{2}I}t}t^{\beta _{1}-1}(1-t)^{1-\beta _{2}}dt\right) >\beta _{1}I, \end{aligned}$$
which is equivalent to showing that
$$\begin{aligned} (\beta _{1}-1)\frac{{\overline{x}}}{\delta }\left( 1-\tau _{c}\Phi (\frac{2 {\overline{x}}}{\sigma ^{2}I},\beta _{1},1-\beta _{2})\right) >\beta _{1}I, \end{aligned}$$
where function \(\Phi \) is defined at the end of “Appendix B”. As \((\beta _{1}-1)\frac{{\overline{x}}}{\delta }=\frac{\beta _{1}I}{1-\tau _{c}}\) and \(\Phi <1\), the left-hand side is larger than \(\beta _{1}I\frac{1-\tau _{c}}{1-\tau _{c}}=\beta _{1}I\), which completes the proof. \(\square \)
Step 2: \(\frac{1-\tau _{c}}{\delta }x^{*}>\frac{\beta _{1}}{ \beta _{1}-1}I_{E}(x^{*},I).\) \(x^{*}\) satisfies the value matching and smooth pasting conditions
$$\begin{aligned} A(x^{*})^{\beta _{1}}= & {} \frac{1-\tau _{c}}{\delta }x^{*}-I_{E}(x^{*},I) \\ \beta _{1}A(x^{*})^{\beta _{1}-1}= & {} \frac{1}{\delta }-\tau _{c}H_{1}(x^{*},I). \end{aligned}$$
Eliminating constant A leads to
$$\begin{aligned} \frac{1-\tau _{c}}{\delta }x^{*}=\frac{\beta _{1}I_{E}(x^{*},I)}{ \beta _{1}-1}+\frac{\tau _{c}}{\beta _{1}-1}\left( \frac{x^{*}}{\delta } -x^{*}H_{1}(x^{*},I)\right) , \end{aligned}$$
which implies that \(\frac{1-\tau _{c}}{\delta }x^{*}>\frac{\beta _{1}}{ \beta _{1}-1}I_{E}(x^{*},I)\) as \(\frac{x^{*}}{\delta }-x^{*}H_{1}(x^{*},I)>0.\) To see the last inequality, recall that \(H_{11}>0,\) so for \(x<M\), we have \(H_{1}(x,I)<H_{1}(M,I).\) As M goes to infinity, we have
$$\begin{aligned} \underset{M\rightarrow \infty }{\lim }\text { }H_{1}(M,I)= & {} \underset{ M\rightarrow \infty }{\lim }\text { }\frac{1}{\delta }\frac{\left( \frac{2M}{ \sigma ^{2}I}\right) ^{\beta _{1}-1}}{\Gamma (\beta _{1}-1)} \int \nolimits _{0}^{1}e^{-\frac{2M}{\sigma ^{2}I}t}t^{\beta _{1}-2}(1-t)^{-\beta _{2}}(1-\beta _{2}t)dt \\= & {} \underset{M\rightarrow \infty }{\lim }\text { }\frac{1}{\delta }\left[ \Phi (\frac{2M}{\sigma ^{2}I},\beta _{1}-1,-\beta _{2})-(\beta _{1}-1)\beta _{2} \frac{\sigma ^{2}I}{2M}\Phi (\frac{2M}{\sigma ^{2}I},\beta _{1},-\beta _{2}) \right] =\frac{1}{\delta }, \end{aligned}$$
where function \(\Phi \) is defined at the end of “Appendix B”. This shows that for all \(x>0,\) \(H_{1}(x,I)<\frac{1}{\delta }.\) \(\square \)
Appendix D. Verification theorem
Let F denote the proposed optimal solution defined as
$$\begin{aligned} F(x)=\left\{ \begin{array}{l} (x/x^{*})^{\beta _{1}}[R(x^{*},I)-I]\text {, if }x\le x^{*} \\ R(x,I)-I\text {, if }x\ge x^{*}, \end{array} \right. \end{aligned}$$
where \(x^{*}\) is defined in relationship (6) and set \(\tau ^{*}=\inf \) \(\{t\ge 0\), \(x_{t}=x^{*}\}.\)
Step 1: For all \(x\in \mathbb {R}_{+}\), \(F(x)\ge R(x,I)-I\). It is enough to show that for all \(x\le x^{*}\) function \(\varphi \) is positive where \(\varphi (x)=F(x)-R(x,I)+I.\) As R is a concave function and \(\beta _{1}>1\), we deduce that \(\varphi \) is a convex function. Furthermore, we have \(\varphi ^{\prime }(x^{*})=0\) (smooth pasting condition) so \( \varphi ^{\prime }\) is negative on \(\left[ 0,x^{*}\right] \), i.e. \( \varphi \) is decreasing on \(\left[ 0,x^{*}\right] \) and since \(\varphi (x^{*})=0\) (value matching condition), the desired result follows.
Step 2: \(e(x^{*},I)\ge rI\). Consider stopping time \( {\widetilde{\tau }}=\inf \) \(\{t\ge 0\), \(x_{t}=\widetilde{x}\}\) for some \( \widetilde{x}>0\). We show that we must have \(\widetilde{x}\le x^{*}\) so that the value function \(\widetilde{F}\) satisfies \(\widetilde{F}(x)\ge R(x,I)-I\) on \([0,\widetilde{x}]\), where \(\widetilde{F}(x_{0})=E_{0}\left[ (R(x_{{\widetilde{\tau }}},I)-I)e^{-r{\widetilde{\tau }}}\right] \). A standard computation leads to
$$\begin{aligned} \widetilde{F}(x)=\left\{ \begin{array}{l} (x/\widetilde{x})^{\beta _{1}}[R(\widetilde{x},I)-I]\text {, if }x\le \widetilde{x} \\ R(x,I)-I\text {, if }x\ge \widetilde{x}. \end{array} \right. \end{aligned}$$
First of all, observe that if \(R(\widetilde{x},I)\le I,\) then for all \( x\ge 0\), we have \(\widetilde{F}(x)\ge R(x,I)-I\). Assume now \(\widetilde{x}\) is large enough so \(R(\widetilde{x},I)>I\). For \(z\ge 0\), consider function \( {\overline{\Phi }}_{A}\) defined as \({\overline{\Phi }}_{A}(z)=Az^{\beta _{1}}-R(z,I)+I\) where \(A>0\). \({\overline{\Phi }}_{A}\) is a convex function and \({\overline{\Phi }}_{A}^{\prime }(z)=\beta _{1}Az^{\beta _{1}-1}-R_{1}(z,I)\). Since \({\overline{\Phi }}_{A}^{\prime }(0)=-\frac{1}{\delta }\) and \(\underset{ \infty }{\lim }\) \({\overline{\Phi }}_{A}^{\prime }=\infty \) (recall \(R_{1}\) is bounded), we deduce \({\overline{\Phi }}_{A}\) admits a minimum at \({\overline{z}} >0\) that satisfies \(\beta _{1}A({\overline{z}})^{\beta _{1}-1}-R_{1}(\overline{ z},I)=0\). Using the implicit function theorem and rearranging terms, we obtain that
$$\begin{aligned} \frac{\partial {\overline{z}}}{\partial A}=-\frac{\beta _{1}({\overline{z}} )^{\beta _{1}-1}}{\beta _{1}(\beta _{1}-1)A({\overline{z}})^{\beta _{1}-2}-R_{11}({\overline{z}},I)}<0. \end{aligned}$$
\(\overline{ \Phi } _{A}\) is decreasing on \(\left[ 0,{\overline{z}}\right] \) and \(\overline{ \Phi }_{A}({\overline{z}})=\frac{\beta _{1}I-\Psi ({\overline{z}})}{\beta _{1}}\) where \(\Psi (z)=\beta _{1}R(z,I)-zR_{1}(z,I).\) In “Appendix C”, we show that \( \Psi \) is increasing with \(\Psi (x^{*})=\beta _{1}I\). This implies that if \({\overline{z}}<x^{*}\), then the equation \({\overline{\Phi }}_{A}(x)=0\) has no root; if \({\overline{z}}=x^{*}\), then \(x^{*}\) is the unique root of the equation \({\overline{\Phi }}_{A}(x)=0\); and if \({\overline{z}}>x^{*}\), the equation \({\overline{\Phi }}_{A}(x)=0\) has two roots denoted \(\widetilde{x} _{1} \) and \(\widetilde{x}_{2}\) with \(0<\widetilde{x}_{1}<{\overline{z}}< \widetilde{x}_{2}\). Next, since \(\frac{\partial {\overline{z}}}{\partial A}<0\) we deduce that when \({\overline{z}}>x^{*}\), \({\overline{\Phi }}_{A}(z)< {\overline{\Phi }}_{A^{*}}(z)\) for all \(z\ge 0\) where \({\overline{\Phi }} _{A^{*}}\) admits a minimum at \({\overline{z}}=x^{*}\). It follows that we must have \(0<\widetilde{x}_{1}<x^{*}<\widetilde{x}_{2}\). Since \( {\overline{\Phi }}_{A} \) is decreasing on \(\left[ 0,{\overline{z}}\right] ,\) \( \widetilde{x}\le {\overline{z}}\) and \({\overline{\Phi }}_{A}(\widetilde{x})=0\), we can claim that if \(\widetilde{x}\le x^{*}\), then \(\widetilde{F} (x)>R(x,I)-I\) on \([0,\widetilde{x});\) however, if \(\widetilde{x}>x^{*}\), then the latter result does not hold as for \(\varepsilon >0\) sufficiently small we have \(\widetilde{F}(\widetilde{x}-\varepsilon )<R(\widetilde{x} -\varepsilon ,I)-I.\)
Finally, note that using relationship (4), we can write
$$\begin{aligned} E_{0}^{Q}[(R(x_{\tau },I)-I)e^{-r\tau }]= & {} E_{0}^{Q}\left[ \int \nolimits _{\tau }^{\infty }[e(x_{s},I)-rI]e^{-rs}ds\right] \\= & {} R(x_{0},I)-I+E_{0}^{Q}\left[ \int \nolimits _{0}^{\tau }[rI-e(x_{s},I)]e^{-rs}ds\right] . \end{aligned}$$
Function e is positive, increasing in x with \(\underset{x\rightarrow \infty }{\lim }\) \(e(x,I)=\infty \); define \({\widehat{x}}=\inf \) \(\{x\ge 0\), \( e(x,I)=rI\}\) and \({\widehat{\tau }}=\inf \) \(\{t\ge 0\), \(x_{t}={\widehat{x}}\}\). If \({\widehat{x}}\) is such that \(R({\widehat{x}},I)\le I\), then since R is increasing in x and \(R(x^{*},I)>I\), we must have \({\widehat{x}}\le x^{*}\). Assume now that \(R({\widehat{x}},I)>I.\) It follows that the corresponding value function \({\widehat{F}}\) satisfies
$$\begin{aligned} {\widehat{F}}(x_{0})=R(x_{0},I)-I+E_{0}^{Q}\left[ \int \nolimits _{0}^{\widehat{ \tau }}[rI-e(x_{s},I)]e^{-rs}ds\right] . \end{aligned}$$
For all \(x_{0}\le {\widehat{x}}\), we have \({\widehat{F}}(x_{0})\ge R(x_{0},I)-I \) as for all \(0\le s\le {\widehat{\tau }}\), \(rI-e(x_{s},I)\ge 0 \). Given what precedes, this implies that we must have \({\widehat{x}}\le x^{*}\) and since e is increasing in x, we find that \(rI\le e(x^{*},I)\).
Step 3: Define process M as \(M_{t}=e^{-rt}F(x_{t})\). F is continuously differentiable, \(C^{2}\) except at \(x=x^{*}\), and \( {\lim \nolimits _{x\rightarrow (x^{*})^{-}} }\) \(F^{\prime \prime }(x)\) and \( {\lim \nolimits _{x\rightarrow (x^{*})^{+}} }\) \(F^{\prime \prime }(x)\) exist and are bounded. In addition, for all \(x\in \mathbb {R}_{+}\), the relationship
$$\begin{aligned} F^{\prime }(x)=F^{\prime }(0)+\int \nolimits _{0}^{x}F^{\prime \prime }(z)dz, \end{aligned}$$
holds, so \(F^{\prime }\,\)is absolutely continuous on \(\mathbb {R}_{+}\). We can use the generalized Itô’s rule (see Karatzas and Shreve [23], problem 7.3, p 219) and write for all \(t\ge 0\)
$$\begin{aligned} M_{t}=F(x_{0})+\int \nolimits _{0}^{t}e^{-rs}[(-rF(x_{s})+\mathcal {A} F(x_{s}))ds+\sigma x_{s}F^{\prime }(x_{s})dw_{s}^{Q}], \end{aligned}$$
where \(\mathcal {A}F\) denotes the Dynkin operator. Then observe that
$$\begin{aligned} F^{\prime }(x)=\left\{ \begin{array}{l} \beta _{1}Ax^{\beta _{1}-1}\text {, if }x\le x^{*} \\ R_{1}(x,I)\text {, if }x\ge x^{*}. \end{array} \right. \end{aligned}$$
Since \(\beta _{1}>1\) and \(R_{1}(x,I)\le \frac{1}{\delta }\) (as \(H_{1}\ge 0\), see “Appendix B”), \(F^{\prime }\) is a bounded function so that the stochastic integral \(t\mapsto \int \nolimits _{0}^{t}\sigma x_{s}F^{\prime }(x_{s})e^{-rs}dw_{s}^{Q}\) is a martingale. Furthermore, we have
$$\begin{aligned} -rF(x)+\mathcal {A}F(x)=\left\{ \begin{array}{l} 0,\text { if }x\le x^{*} \\ rI-e(x,I),\text { if }x\ge x^{*}\text { (see ``Appendix B'').} \end{array} \right. \end{aligned}$$
Thus for all \(x\ge 0\) we have \(-rF(x)+\mathcal {A}F(x)\le 0\) where we used the fact that \(rI\le e(x^{*},I)\) and e is increasing in x. We conclude that M is a supermartingale.
Step 4: For any stopping time \(\tau \), note that \(e^{-r\tau }[R(x_{\tau },I)-I]\le e^{-r\tau }F(x_{\tau })=M_{\tau }\) so that
$$\begin{aligned} E_{0}[(R(x_{\tau },I)-I)e^{-r\tau }]\le E_{0}[M_{\tau }]\le M_{0}=F(x_{0}). \end{aligned}$$
This implies that \(\underset{\tau \ge 0}{\sup }\) \(E_{0}[(R(x_{\tau },I)-I)e^{-r\tau }]\le F(x_{0})\) with equality for \(\tau =\tau ^{*}\). \(\square \)
Appendix E. Additional tax credit
Existence of \(x^{*}(Q)\) as a Decreasing Function of Q. The existence and uniqueness of threshold \(x^{*}(Q)\) can be proved following the same steps as in the case \(Q=0\). Then, we show that \( \frac{\partial x^{*}}{\partial Q}<0.\) Given what precedes, the investment trigger \(x^{*}(.)\) satisfies
$$\begin{aligned} \frac{1-\tau _{c}}{\delta }x^{*}(Q)=\frac{\beta _{1}I}{\beta _{1}-1} -\tau _{c}\frac{\beta _{1}H(x^{*}(Q),I+Q)-x^{*}(Q)H_{1}(x^{*}(Q),I+Q)}{\beta _{1}-1}. \end{aligned}$$
(17)
Totally differentiating relationship (17) leads to
$$\begin{aligned} \begin{array}{ll} &{}\left[ \frac{1-\tau _{c}}{\delta }+\tau _{c}\frac{(\beta _{1}-1)H_{1}(x^{*}(Q),I+Q)-x^{*}(Q)H_{11}(x^{*}(Q),I+Q)}{\beta _{1}-1}\right] \frac{\partial x^{*}}{\partial Q}\\ &{} =-\tau _{c}\frac{ \beta _{1}H_{2}(x^{*}(Q),I+Q)-x^{*}(Q)H_{12}(x^{*}(Q),I+Q)}{ \beta _{1}-1}. \end{array} \end{aligned}$$
The RHS of the equation is negative since \(H_{2}>0\) and \(H_{12}<0.\) Next we show that auxiliary function \(\Lambda \) is positive with \(\Lambda (x)=(\beta _{1}-1)H_{1}(x,I)-xH_{11}(x,I).\) Observe that \(\Lambda (x)=-x^{\beta _{1}}\Delta ^{\prime }(x)\) with
$$\begin{aligned} \Delta (x)= & {} x^{1-\beta _{1}}H_{1}(x,I) \\= & {} \frac{1}{\delta }\frac{\left( \frac{2}{\sigma ^{2}I}\right) ^{\beta _{1}-1}}{\Gamma (\beta _{1}-1)}\int \nolimits _{0}^{1}e^{-\frac{2}{\sigma ^{2}} \frac{x}{I}t}t^{\beta _{1}-2}(1-t)^{-\beta _{2}}(1-\beta _{2}t)dt>0. \end{aligned}$$
Clearly \(\Delta \,\)is a decreasing function so \(\Delta ^{\prime }(x)<0\) and therefore \(\Lambda \) is positive. We conclude that \(\frac{\partial x^{*} }{\partial Q}<0\). Finally, relationship (17) can be rewritten
$$\begin{aligned} (\beta _{1}-1)\frac{x^{*}(Q)}{\delta }\left( 1-\tau _{c}\Phi (\frac{2}{ \sigma ^{2}}\frac{x^{*}(Q)}{I+Q},\beta _{1},1-\beta _{2})\right) =\beta _{1}I, \end{aligned}$$
where function \(\Phi \) is defined at the end of “Appendix B”. Since \(\underset{ Q\rightarrow \infty }{\lim }\) \(\Phi (\frac{2}{\sigma ^{2}}\frac{x^{*}(Q) }{I+Q},\beta _{1},1-\beta _{2})=0\), the desired result follows. \(\square \)
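A numerical illustration of this comparative static (arbitrary parameters, SciPy root finder): the sketch below solves the rewritten trigger equation for several values of Q and shows \(x^{*}(Q)\) decreasing towards \(\frac{\beta _{1}\delta }{\beta _{1}-1}I\), the value obtained when \(\Phi \) vanishes.

```python
import numpy as np
from scipy import special, integrate, optimize

r, delta, sigma, I, tau_c = 0.06, 0.04, 0.25, 1.0, 0.3    # illustrative parameters
beta2, beta1 = np.sort(np.roots([sigma**2 / 2, r - delta - sigma**2 / 2, -r]))

def Phi(z, a, b):
    val, _ = integrate.quad(lambda t: np.exp(-z * t) * t**(a - 1) * (1 - t)**b, 0, 1)
    return z**a * val / special.gamma(a)

def x_star(Q):
    """Root of the rewritten trigger equation of Appendix E."""
    eq = lambda x: (beta1 - 1) * x / delta \
        * (1 - tau_c * Phi(2 * x / (sigma**2 * (I + Q)), beta1, 1 - beta2)) - beta1 * I
    return optimize.brentq(eq, 1e-9, 10.0)

for Q in (0.0, 0.5, 1.0, 5.0, 50.0):
    print(Q, x_star(Q))                               # decreasing in Q
print("Q -> infinity limit:", beta1 * delta * I / (beta1 - 1))
```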
\(x^{*}\)as a Convex Function of Q. Let \(\lambda \in \left[ 0,1\right] \). For \(\left( Q_{1},Q_{2}\right) \in \mathbb {R} _{+}^{2}\), if \(\lambda x^{*}(Q_{1})+(1-\lambda )x^{*}(Q_{2})\le x^{*}(\lambda Q_{1}+(1-\lambda )Q_{2})\), we have:
$$\begin{aligned} F(\lambda x^{*}(Q_{1})+(1-\lambda )x^{*}(Q_{2}))\le & {} \lambda F(x^{*}(Q_{1}))+(1-\lambda )F(x^{*}(Q_{2}))\text { (}F\text { is convex)} \\= & {} \lambda R(x^{*}(Q_{1}),Q_{1})+(1-\lambda )R(x^{*}(Q_{2}),Q_{2}) \\\le & {} R(\lambda x^{*}(Q_{1})+(1-\lambda )x^{*}(Q_{2}),\lambda Q_{1}+(1-\lambda )Q_{2})\text {,} \end{aligned}$$
as the function \(\left( x,Q\right) \mapsto R(x,I+Q)\) is jointly concave. This implies that \(x^{*}(\lambda Q_{1}+(1-\lambda )Q_{2})\le \lambda x^{*}(Q_{1})+(1-\lambda )x^{*}(Q_{2}),\) which leads to a contradiction. \(\square \)
Appendix F. Assets in place
Existence and Uniqueness of Investment Trigger \(x_{k}^{*}\). For convenience, set \({\widehat{x}}_{k}^{*}=(1+k)x_{k}^{*}\) and define the auxiliary function \(\Psi (\cdot ;k)\) as
$$\begin{aligned} \Psi (z;k)=\frac{\beta _{1}-1}{\delta }\left( 1-\frac{k}{1+k}(1-\tau _{c})\right) z-\frac{\tau _{c}}{\delta }\left( \frac{\sigma ^{2}}{2}\right) ^{-\beta _{1}}\frac{z^{\beta _{1}+1}I^{-\beta _{1}}}{\Gamma (\beta _{1}-1)} \int \nolimits _{0}^{1}e^{-\frac{2z}{\sigma ^{2}I}t}t^{\beta _{1}-1}(1-t)^{1-\beta _{2}}dt. \end{aligned}$$
We want to show that the equation \(\Psi (z;k)=\beta _{1}I\) has a unique root \( {\widehat{x}}_{k}^{*}>0\). \(\Psi \) is a smooth function with \(\Psi (0;k)=0\) and using results from “Appendix C”, we have \(\frac{\partial ^{2}\Psi (z;k)}{\partial z^{2}}<0\), which implies that \(\frac{\partial \Psi }{\partial z}\) is decreasing. Then, observe that \({\lim \nolimits _{z\rightarrow \infty } }\) \(\frac{\partial \Psi }{\partial z}(z;k)=\frac{\beta _{1}-1}{ \delta }\left( 1-\frac{k}{1+k}(1-\tau _{c})\right) -\tau _{c}\frac{\beta _{1}-1}{\delta }=\frac{\beta _{1}-1}{\delta (1+k)}(1-\tau _{c})>0.\) This implies that for all \(z\ge 0\), \(\frac{\partial \Psi }{\partial z}(z;k)> \frac{\beta _{1}-1}{\delta (1+k)}(1-\tau _{c})>0\): \(\Psi \) is strictly increasing in z with \(\Psi (0;k)=0\) and \({\lim \nolimits _{z\rightarrow \infty } }\) \(\Psi (z;k)=\infty \). Therefore, the equation \(\Psi (z;k)=\beta _{1}I\) has a unique root \({\widehat{x}}_{k}^{*}\). Then, totally differentiating relationship \(\Psi ((1+k)x_{k}^{*};k)=\beta _{1}I\) with respect to k, we find that
$$\begin{aligned} (1+k)\frac{\partial \Psi }{\partial z}((1+k)x_{k}^{*};k)\frac{\partial x_{k}^{*}}{\partial k}=-x_{k}^{*}\frac{\partial \Psi }{\partial z} ((1+k)x_{k}^{*};k)-\frac{\partial \Psi }{\partial k}((1+k)x_{k}^{*};k). \end{aligned}$$
Then, as \(\frac{\partial \Psi }{\partial k}((1+k)x_{k}^{*};k)=-\frac{ \beta _{1}-1}{\delta (1+k)}(1-\tau _{c})x_{k}^{*}\), we have
$$\begin{aligned} (1+k)\frac{\partial \Psi }{\partial z}((1+k)x_{k}^{*};k)\frac{\partial x_{k}^{*}}{\partial k}=-x_{k}^{*}\frac{\partial \Psi }{\partial z} ((1+k)x_{k}^{*};k)+\frac{\beta _{1}-1}{\delta (1+k)}(1-\tau _{c})x_{k}^{*}. \end{aligned}$$
Then, since \(\frac{\partial \Psi }{\partial z}>0\) and \(\frac{\partial \Psi }{ \partial z}(z;k)>\frac{\beta _{1}-1}{\delta (1+k)}(1-\tau _{c})\), we conclude that \(\frac{\partial x_{k}^{*}}{\partial k}<0\). Since we must have \( x_{k}^{*}\ge \frac{\beta _{1}\delta }{\beta _{1}-1}I\), we deduce that \({\lim \nolimits _{k\rightarrow \infty } }\) \(x_{k}^{*}\) exists and is strictly positive. It follows that \({\lim \nolimits _{k\rightarrow \infty } }\) \( {\widehat{x}}_{k}^{*}=\infty \). Then, we use relationship (16) with \(a=\beta _{1}\) and \(b-a-1=1-\beta _{2}\) to obtain an asymptotic expansion of \(\Psi ({\widehat{x}}_{k}^{*};k)\) and find that
$$\begin{aligned} \frac{\beta _{1}-1}{\delta }\left( 1-\frac{k}{1+k}(1-\tau _{c})\right) {\widehat{x}}_{k}^{*}-\frac{\tau _{c}}{\delta }(\beta _{1}-1)\left({\widehat{x}} _{k}^{*}-\frac{\sigma ^{2}}{2}I\beta _{1}(1-\beta _{2})\right)\underset{ k\rightarrow \infty }{\sim }\beta _{1}I, \end{aligned}$$
or equivalently
$$\begin{aligned} \frac{1-\tau _{c}}{1+k}{\widehat{x}}_{k}^{*}\underset{k\rightarrow \infty }{\sim }\frac{\beta _{1}I\delta }{\beta _{1}-1}-\tau _{c}I\frac{\sigma ^{2}}{2 }\beta _{1}(1-\beta _{2}). \end{aligned}$$
Finally, as \(\frac{{\widehat{x}}_{k}^{*}}{1+k}=x_{k}^{*}\) and \(\frac{ \sigma ^{2}}{2}(\beta _{1}-1)(1-\beta _{2})=\delta \), we find that \(\underset{k\rightarrow \infty }{\lim }\) \(x_{k}^{*}=\frac{\beta _{1}\delta }{\beta _{1}-1}I\). \(\square \)
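Numerically (with arbitrary parameters), solving \(\Psi (z;k)=\beta _{1}I\) for \({\widehat{x}}_{k}^{*}\) and dividing by \(1+k\) illustrates both the monotonicity in k and the limit; the sketch below is only an illustration of the argument above.

```python
import numpy as np
from scipy import special, integrate, optimize

r, delta, sigma, I, tau_c = 0.06, 0.04, 0.25, 1.0, 0.3    # illustrative parameters
beta2, beta1 = np.sort(np.roots([sigma**2 / 2, r - delta - sigma**2 / 2, -r]))

def Psi(z, k):
    """Psi(z; k) as defined in Appendix F."""
    X = 2 * z / (sigma**2 * I)
    J, _ = integrate.quad(lambda t: np.exp(-X * t) * t**(beta1 - 1) * (1 - t)**(1 - beta2), 0, 1)
    return (beta1 - 1) / delta * (1 - k / (1 + k) * (1 - tau_c)) * z \
        - tau_c / delta * (sigma**2 / 2)**(-beta1) * z**(beta1 + 1) * I**(-beta1) \
        * J / special.gamma(beta1 - 1)

def x_k_star(k):
    z_hat = optimize.brentq(lambda z: Psi(z, k) - beta1 * I, 1e-9, 100.0)
    return z_hat / (1 + k)                                # x_k* = hat{x}_k* / (1 + k)

for k in (0.0, 1.0, 5.0, 20.0, 100.0):
    print(k, x_k_star(k))                                 # decreasing in k
print("k -> infinity limit:", beta1 * delta * I / (beta1 - 1))
```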
Appendix G. Loss corporation
Existence and Uniqueness of Investment Trigger \(x^{**}\). Define auxiliary function \({\overline{\Psi }}\) as
$$\begin{aligned} {\overline{\Psi }}(x)=(\beta _{1}-1)\frac{\tau _{c}x}{\delta }-\frac{\tau _{c} }{\delta }\left( \frac{\sigma ^{2}}{2}\right) ^{-\beta _{1}}\frac{x^{\beta _{1}+1}Q^{-\beta _{1}}}{\Gamma (\beta _{1}-1)}\int \nolimits _{0}^{1}e^{-\frac{ 2x}{\sigma ^{2}Q}t}t^{\beta _{1}-1}(1-t)^{1-\beta _{2}}dt. \end{aligned}$$
(18)
We want to show that the equation \({\overline{\Psi }}(x)=\beta _{1}(K+\frac{y}{ r})\) has a unique root \(x^{**}>0\). Following the same steps as in “Appendix C”, one can show that \({\overline{\Psi }}^{\prime \prime }(x)<0\). Thus, \({\overline{\Psi }}^{\prime }\) is decreasing with \({\lim \nolimits _{ x\rightarrow \infty } }\) \({\overline{\Psi }}^{\prime }(x)=(\beta _{1}-1) \frac{\tau _{c}}{\delta }-\tau _{c}\frac{\Gamma (\beta _{1})}{\Gamma (\beta _{1}-1)\delta }=0\). This implies that \({\overline{\Psi }}^{\prime }\) is positive: \({\overline{\Psi }}\) is strictly increasing with \({\overline{\Psi }} (0)=0\) and using relationship (16) we find that \({\lim \nolimits _{x\rightarrow \infty } }\) \({\overline{\Psi }}(x)=\beta _{1}\tau _{c}Q>\beta _{1}(K+\frac{y}{r}).\) Therefore, the equation \({\overline{\Psi }} (x)=\beta _{1}(K+\frac{y}{r})\) has a unique root \(x^{**}>0\). Finally, evaluating relationship (18) at \(x=x^{**}\) and totally differentiating with respect to Q leads to
$$\begin{aligned} {\overline{\Psi }}^{\prime }(x^{**})\frac{\partial x^{**}}{ \partial Q}+\frac{\partial {\overline{\Psi }}(x^{**})}{\partial Q}=0. \end{aligned}$$
Set \(\Xi (z)=z^{\beta _{1}}\int \nolimits _{0}^{1}e^{-zt}t^{\beta _{1}-1}(1-t)^{1-\beta _{2}}dt\) so that \({\overline{\Psi }}(x)=(\beta _{1}-1) \frac{\tau _{c}x}{\delta }-\frac{\tau _{c}}{\delta }x\frac{\Xi (\frac{2x}{ \sigma ^{2}Q})}{\Gamma (\beta _{1}-1)}\). It is easy to verify that \(\Xi ^{\prime }>0\), which implies that \(\frac{\partial {\overline{\Psi }}}{\partial Q}>0\). It follows easily that \(\frac{\partial x^{**}}{\partial Q}<0\) . Finally, the proof of the convexity of \(x^{**}\) in Q is the same as in “Appendix E”. \(\square \)
Appendix H. Average time until tax carryforward provision exhaustion
Laplace Transform of Stopping Time \(\tau _{0}\). Let \( F_{a}(x_{0},z_{0})=E_{0}^{P}\left[ e^{-a\tau _{0}}\right] \). Let \(\lambda >0\) and observe that if process x starts at \(x_{0}\) and process \(x_{\lambda }\) starts at \(\lambda x_{0}\), then we have \(x_{\lambda }\equiv \lambda x\). Furthermore, if process \(z_{\lambda }\) associated with \(x_{\lambda }\) starts at \(\lambda z_{0}\), then \(z_{\lambda }\equiv \lambda z\), so that \(z_{\lambda }\) hits 0 exactly when z hits 0. It follows that \(F_{a}(\lambda x_{0},\lambda z_{0})=F_{a}(x_{0},z_{0})\), i.e., we can write \( F_{a}(x_{0},z_{0})=f_{a}(u_{0})\) where \(f_{a}\) is a smooth function and \(u= \frac{x}{z}\). Then, \(f_{a}\) satisfies the following HJB
$$\begin{aligned} af_{a}(u)=u(u+\mu )f_{a}^{\prime }(u)+\frac{\sigma ^{2}}{2} u^{2}f_{a}^{\prime \prime }(u), \end{aligned}$$
with \({\lim \nolimits _{\infty } }\) \(f_{a}=1\) and \(\underset{0}{\lim }\) \( f_{a}=0.\) Set \(y=\frac{1}{u}\) and \(g_{a}(y)=f_{a}(\frac{1}{y})\); function \( g_{a}\) satisfies the following ODE:
$$\begin{aligned} ag_{a}(y)=((\sigma ^{2}-\mu )y-1)g_{a}^{\prime }(y)+\frac{\sigma ^{2}}{2} y^{2}g_{a}^{\prime \prime }(y), \end{aligned}$$
with \({\lim \limits _{0} }\) \(g_{a}=1\) and \(\underset{\infty }{\lim }\) \( g_{a}=0.\) Recall that \(\alpha _{1,a}\) and \(\alpha _{2,a}\) are defined by relationship (12); set \(a_{0}=\frac{\sigma ^{2}}{2}-\mu ,\) \( a_{1}=1\), \(a_{2}=\frac{\sigma ^{2}}{2}\) and note that \(\frac{a_{0}}{a_{2}} =1+\alpha _{1,a}+\alpha _{2,a}\). As a goes to 0, we have
$$\begin{aligned} \alpha _{1,a}\underset{0}{\sim }1-\frac{a_{0}}{a_{2}}-\frac{a}{a_{0}-a_{2}} \text { and }\alpha _{2,a}\underset{0}{\sim }\frac{a}{a_{0}-a_{2}}. \end{aligned}$$
It follows that \(E^{P}[\tau _{0}]=-\frac{\partial g_{a}(y)}{\partial a} _{\mid _{a=0}}\). Following the same path as for the derivation of function H, it is easy to check that
$$\begin{aligned} g_{a}(y)=\underset{y^{*}\rightarrow 0}{\lim }\text { }g_{a}(y;y^{*})=\left( \frac{y}{y^{*}}\right) ^{\alpha _{2,a}}\frac{\int _{0}^{1}e^{- \frac{2t}{\sigma ^{2}y}}t^{-\alpha _{2,a}-1}(1-t)^{\alpha _{1,a}}dt}{ \int _{0}^{1}e^{-\frac{2t}{\sigma ^{2}y^{*}}}t^{-\alpha _{2,a}-1}(1-t)^{\alpha _{1,a}}dt}. \end{aligned}$$
Observe that
$$\begin{aligned} (y^{*})^{\alpha _{2,a}}\int _{0}^{1}e^{-\frac{2t}{\sigma ^{2}y^{*}} }t^{-\alpha _{2,a}-1}(1-t)^{\alpha _{1,a}}dt=\int _{0}^{\frac{1}{y^{*}} }e^{-\frac{2s}{\sigma ^{2}}}s^{-\alpha _{2,a}-1}(1-y^{*}s)^{\alpha _{1,a}}ds, \end{aligned}$$
so that by Lebesgue dominated convergence theorem, we have \({\lim \nolimits _{y^{*}\rightarrow 0} }\) \((y^{*})^{\alpha _{2,a}}\int _{0}^{1}e^{- \frac{2t}{\sigma ^{2}y^{*}}}t^{-\alpha _{2,a}-1}(1-t)^{\alpha _{1,a}}dt=\int _{0}^{\infty }e^{-\frac{2s}{\sigma ^{2}}}s^{-\alpha _{2,a}-1}ds=(\frac{2}{\sigma ^{2}})^{\alpha _{2,a}}\Gamma (-\alpha _{2,a}).\) It follows that
$$\begin{aligned} g_{a}(y)=\frac{1}{\Gamma (-\alpha _{2,a})}\left( \frac{\sigma ^{2}y}{2} \right) ^{\alpha _{2,a}}\int _{0}^{1}e^{-\frac{2t}{\sigma ^{2}y}}t^{-\alpha _{2,a}-1}(1-t)^{\alpha _{1,a}}dt, \end{aligned}$$
so that
$$\begin{aligned} f_{a}(u)= & {} \frac{1}{\Gamma (-\alpha _{2,a})}\left( \frac{2u}{\sigma ^{2}} \right) ^{-\alpha _{2,a}}\int _{0}^{1}e^{-\frac{2ut}{\sigma ^{2}}}t^{-\alpha _{2,a}-1}(1-t)^{\alpha _{1,a}}dt \\ f_{a}^{\prime }(u)= & {} \frac{2\alpha _{1,a}}{\sigma ^{2}\Gamma (-\alpha _{2,a})}\left( \frac{2u}{\sigma ^{2}}\right) ^{-\alpha _{2,a}-1}\int _{0}^{1}e^{-\frac{2ut}{\sigma ^{2}}}t^{-\alpha _{2,a}}(1-t)^{\alpha _{1,a}-1}dt>0. \end{aligned}$$
Then fix \(y^{*}>0\) and define
$$\begin{aligned} A(y)=\int _{0}^{1}e^{-\frac{2t}{\sigma ^{2}y}}(1-t)^{-\alpha _{1}-\alpha _{2}} \left[ 1+\left( \frac{2(1-t)}{\sigma ^{2}y}+1-\alpha _{1}-\alpha _{2}\right) \ln t\right] dt, \end{aligned}$$
and \(B(u)=A(1/u)\). It follows that
$$\begin{aligned} g_{a}(y;y^{*})= & {} \left( 1+\frac{a}{a_{0}-a_{2}}\ln \frac{y}{y^{*}} \right) \left( \frac{-1+\frac{a}{a_{0}-a_{2}}A(y)}{-1+\frac{a}{a_{0}-a_{2}} A(y^{*})}\right) +o(a) \\= & {} 1+\frac{a}{a_{0}-a_{2}}\left( \ln \frac{y}{y^{*}}+A(y^{*})-A(y)\right) +o(a). \end{aligned}$$
We deduce that \(E^{P}[\tau _{0}]=-\frac{1}{(\alpha _{1}+\alpha _{2})\frac{ \sigma ^{2}}{2}}\left( \ln \frac{y}{y^{*}}+A(y^{*})-A(y)\right) \). Then
$$\begin{aligned} \begin{array}{lll} A(y^{*}) &{} \underset{0}{\sim } &{} \frac{2}{_{\sigma ^{2}}}\frac{1}{ y^{*}}\int _{0}^{1}e^{-\frac{2t}{\sigma ^{2}y^{*}}}(1-t)^{1-\alpha _{1}-\alpha _{2}}\ln tdt \\ &{} \underset{0}{\sim } &{} \frac{2}{\sigma ^{2}}\int _{0}^{1/y^{*}}e^{- \frac{2s}{\sigma ^{2}}}(1-sy^{*})^{1-\alpha _{1}-\alpha _{2}}(\ln s+\ln y^{*})ds \\ &{} \underset{0}{\sim } &{} \ln y^{*}+\frac{2}{\sigma ^{2}} \int _{0}^{\infty }e^{-\frac{2s}{\sigma ^{2}}}\ln sds \\ &{} \underset{0}{\sim } &{} \ln y^{*}-\ln \frac{2}{\sigma ^{2}}-\gamma , \text { where }\gamma \text { denotes Euler's constant.} \end{array} \end{aligned}$$
The desired result follows easily. \(\square \)
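The closed-form expression above is delicate to evaluate directly; as a rough, illustrative cross-check of what \(\tau _{0}\) represents, the sketch below estimates \(E^{P}[\tau _{0}]\) by Monte Carlo under the dynamics implied by the HJB for \(f_{a}\) (namely \(dx=\mu x\,dt+\sigma x\,dw^{P}\) and \(dz=-x\,dt\)). The drift, volatility, starting values, time step and horizon are arbitrary choices; paths not exhausted before the horizon are discarded, which biases the estimate slightly, and the deterministic benchmark is only indicative.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.03, 0.25                       # illustrative P-dynamics
x0, z0 = 1.0, 5.0                            # initial earnings flow and carryforward stock
dt, n_paths, n_steps = 1 / 252, 5_000, 252 * 80

x = np.full(n_paths, x0)
used = np.zeros(n_paths)                     # integral of x_s ds, i.e. z0 - z_t
tau = np.full(n_paths, np.nan)
for i in range(1, n_steps + 1):
    used += x * dt
    hit = np.isnan(tau) & (used >= z0)       # carryforward exhausted on this step
    tau[hit] = i * dt
    if not np.isnan(tau).any():
        break
    x *= np.exp((mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths))

print("share of paths exhausted:", np.mean(~np.isnan(tau)))
print("Monte Carlo E[tau_0]    :", np.nanmean(tau))
print("sigma -> 0 benchmark    :", np.log(1 + mu * z0 / x0) / mu)   # deterministic exhaustion time
```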