
A Multidimensional Problem of Optimal Dividends with Irreversible Switching: A Convergent Numerical Scheme


Abstract

In this paper we study the optimal dividend-payment problem, in which the objective is to maximize the expected discounted sum of dividends, in a multidimensional setting of n associated insurance companies whose surplus follows an n-dimensional compound Poisson process. The general manager of the companies may, at any time, exercise an irreversible switch into another regime; we also take into account an expected discounted value at ruin. This multidimensional dividend problem is a mixed singular-control/optimal-stopping problem. We prove that the optimal value function is a viscosity solution of the associated Hamilton–Jacobi–Bellman (HJB) equation and that it can be characterized as the smallest viscosity supersolution. The main contribution of the paper is a numerical method that approximates the optimal value function, locally uniformly, by an increasing sequence of sub-optimal value functions of admissible strategies defined on an n-dimensional grid. As a numerical example, we present the optimal time of merger for two insurance companies.
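For orientation, in the notation used in the appendix the value of an admissible strategy \(\pi =({\mathbf {L}},{\overline{\tau }})\) (a dividend-payment process \({\mathbf {L}}\) and a switching time \({\overline{\tau }}\)) and the optimal value function take, schematically, the following form, where \(\tau \) is the ruin time, c the discount rate, \({\mathbf {a}}\) the vector of dividend weights, f the value obtained at the switch and \(\upsilon \) the penalty paid at ruin (the precise definitions and admissibility conditions are those of Section 2 of the paper):

$$\begin{aligned} J(\pi ;{\mathbf {x}})={\mathbb {E}}_{{\mathbf {x}}}\left( \int _{0-}^{{\overline{\tau }}\wedge \tau }e^{-cs}\,{\mathbf {a}}\cdot d{\mathbf {L}}_{s}+e^{-c{\overline{\tau }}}f({\mathbf {X}}_{{\overline{\tau }}}^{{\mathbf {L}}})I_{\{{\overline{\tau }}<\tau \}}-e^{-c\tau }\upsilon ({\mathbf {X}}_{\tau -}^{{\mathbf {L}}},{\mathbf {X}}_{\tau -}^{{\mathbf {L}}}-{\mathbf {X}}_{\tau }^{{\mathbf {L}}})I_{\{\tau \le {\overline{\tau }}\}}\right) ,\qquad V({\mathbf {x}})=\sup \nolimits _{\pi }J(\pi ;{\mathbf {x}}). \end{aligned}$$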


References

  1. Albrecher, H., Azcue, P., Muler, N.: Optimal dividend strategies for two collaborating insurance companies. Adv. Appl. Probab. 49(2), 515–548 (2017)

  2. Asmussen, S., Taksar, M.: Controlled diffusion models for optimal dividend pay-out. Insur. Math. Econ. 20, 1–15 (1997)

  3. Avram, F., Palmowski, Z., Pistorius, M.: On the optimal dividend problem for a spectrally negative Lévy process. Ann. Appl. Probab. 17, 156–180 (2007)

  4. Avram, F., Palmowski, Z., Pistorius, M.: On Gerber–Shiu functions and optimal dividend distribution for a Lévy risk process in the presence of a penalty function. Ann. Appl. Probab. 25(4), 1868–1935 (2015)

  5. Azcue, P., Muler, N.: Optimal reinsurance and dividend distribution policies in the Cramér–Lundberg model. Math. Financ. 15, 261–308 (2005)

  6. Azcue, P., Muler, N.: Stochastic Optimization in Insurance: A Dynamic Programming Approach. SpringerBriefs in Quantitative Finance. Springer, New York (2014)

  7. Azcue, P., Muler, N.: Optimal dividend payment and regime switching in a compound Poisson risk model. SIAM J. Control Optim. 53(5), 3270–3298 (2015)

  8. Barles, G., Souganidis, P.E.: Convergence of approximation schemes for fully nonlinear second order equations. Asymptot. Anal. 4, 271–283 (1991)

  9. Budhiraja, A., Ross, K.: Convergent numerical scheme for singular stochastic control with state constraints in a portfolio selection problem. SIAM J. Control Optim. 45(6), 2169–2206 (2007)

  10. Crandall, M.G., Lions, P.L.: Viscosity solutions of Hamilton–Jacobi equations. Trans. Am. Math. Soc. 277, 1–42 (1983)

  11. Czarna, I., Palmowski, Z.: De Finetti’s dividend problem and impulse control for a two-dimensional insurance risk process. Stoch. Models 27, 220–250 (2011)

  12. De Finetti, B.: Su un’impostazione alternativa della teoria collettiva del rischio. Trans. XVth Int. Cong. Actuar. 2, 433–443 (1957)

  13. Dickson, D.C.M., Waters, H.R.: Some optimal dividends problems. ASTIN Bull. 34, 49–74 (2004)

  14. Fleming, W.H., Soner, H.M.: Controlled Markov Processes and Viscosity Solutions. Springer, New York (1993)

  15. Gerber, H.U.: Entscheidungskriterien für den zusammengesetzten Poisson-Prozeß. Mitt. Ver. Schweiz. Vers. Math. 69, 185–228 (1969)

  16. Gerber, H.U., Shiu, E.S.W.: On the time value of ruin. N. Am. Actuar. J. 2(1), 48–78 (1998)

  17. Gerber, H.U., Shiu, E.S.W.: On the merger of two companies. N. Am. Actuar. J. 10(3), 60–67 (2006)

  18. Gerber, H.U., Lin, X.S., Yang, H.: A note on the dividends-penalty identity and the optimal dividend barrier. ASTIN Bull. 36, 489–503 (2006)

  19. Kushner, H.J., Dupuis, P.: Numerical Methods for Stochastic Control Problems in Continuous Time. Springer, New York (2001)

  20. Kushner, H.J., Martins, L.F.: Numerical methods for stochastic singular control problems. SIAM J. Control Optim. 29, 1443–1475 (1991)

  21. Loeffen, R.L.: On optimality of the barrier strategy in de Finetti’s dividend problem for spectrally negative Lévy processes. Ann. Appl. Probab. 18, 1669–1680 (2008)

  22. Loeffen, R.L., Renaud, J.F.: De Finetti’s optimal dividends problem with an affine penalty function at ruin. Insur. Math. Econ. 46, 98–108 (2010)

  23. Ly Vath, V., Pham, H., Villeneuve, S.: A mixed singular/switching control problem for a dividend policy with reversible technology investment. Ann. Appl. Probab. 18, 1164–1200 (2008)

  24. Schmidli, H.: Stochastic Control in Insurance. Springer, New York (2008)

  25. Souganidis, P.E.: Approximation schemes for viscosity solutions of Hamilton–Jacobi equations. J. Differ. Equ. 59, 1–43 (1985)

  26. Thonhauser, S., Albrecher, H.: Dividend maximization under consideration of the time value of ruin. Insur. Math. Econ. 41, 163–184 (2007)


Corresponding author

Correspondence to Pablo Azcue.


Appendix

This section contains the proofs of all the lemmas.

Proof of Lemma 3.3

Let us extend the function g to \({\mathbf {R}}^{n}\) as \(g({\mathbf {x}})=0\) for \({\mathbf {x}}\notin {\mathbf {R}}_{+}^{n}\) and the function \(\upsilon \) to \({\mathbf {R}}^{n}\times {\mathbf {R}}_{+}^{n}\) as \(\upsilon ({\mathbf {x}},\varvec{\alpha })=0\) for \(({\mathbf {x}},\varvec{\alpha })\in \left( {\mathbf {R}}^{n}\times {\mathbf {R}}_{+}^{n}\right) \setminus B\), where B is defined in (2.4). Using the expressions in (2.1) and the change-of-variables formula for finite-variation processes, and writing \({\mathbf {z}}_{s}={\mathbf {X}}_{s}^{{\mathbf {L}}}\) and \({\breve{\mathbf {z}}}_{s}={\check{\mathbf {X}}}_{s}^{{\mathbf {L}}}\), we can write

$$\begin{aligned} \begin{array} [c]{l} g({\mathbf {z}}_{\tau })e^{-c\tau }-g({\mathbf {x}})\\ \begin{array} [c]{ll} &{}= \int \nolimits _{0}^{\tau }\mathbf {p\cdot }\nabla g({\mathbf {z}}_{s-} )e^{-cs}ds-c\int \nolimits _{0}^{\tau }g({\mathbf {z}}_{s-})e^{-cs}ds\\ &{}\quad -\int \nolimits _{0}^{\tau }e^{-cs}\left( \nabla g({\mathbf {z}}_{s-} )\mathbf {\cdot }d{\mathbf {L}}_{s}^{c}\right) +\sum \limits _{{\mathbf {L}}_{s} \ne {\mathbf {L}}_{s-},~s\le \tau }\left( g({\mathbf {z}}_{s})-g({\breve{\mathbf {z}} }_{s})\right) e^{-cs}\\ &{} \quad + \sum \limits _{{\breve{\mathbf {z}}}_{s}\ne {\mathbf {z}}_{s-},~s \le \tau }\left( g({\breve{\mathbf {z}}}_{s})-g({\mathbf {z}}_{s-})\right) e^{-cs}. \end{array} \end{array} \end{aligned}$$
(7.1)

Note that \({\mathbf {z}}_{s}\in {\mathbf {R}}_{+}^{n}\) for \(s\le \tau \), except in the case \(\tau =\tau ^{{\mathbf {L}}}\). Since \({\mathbf {z}}_{s}={\breve{\mathbf {z}}}_{s}-\Delta {\mathbf {L}}_{s}\),

$$\begin{aligned}&-\int \nolimits _{0}^{\tau }e^{-cs}\nabla g({\mathbf {z}}_{s-})\mathbf {\cdot }d{\mathbf {L}}_{s}^{c}+\sum \limits _{{\mathbf {L}}_{s}\ne {\mathbf {L}}_{s-},s\le \tau }\left( g({\mathbf {z}}_{s})-g({\breve{\mathbf {z}}}_{s})\right) e^{-cs}\nonumber \\&\quad = -\int \nolimits _{0}^{\tau }e^{-cs}\nabla g({\mathbf {z}}_{s-})\mathbf {\cdot }d{\mathbf {L}}_{s}^{c} -\sum \limits _{{\mathbf {L}}_{s}\ne {\mathbf {L}}_{s-},s\le \tau }e^{-cs}\left( \int \nolimits _{0}^{1}\left( \nabla g\left( {\breve{\mathbf {z}}}_{s}-\gamma \Delta {\mathbf {L}}_{s}\right) \mathbf {\cdot }\Delta {\mathbf {L}} _{s}\right) d\gamma \right) \nonumber \\&\quad = -\int _{0-}^{\tau }e^{-cs}{\mathbf {a}}\cdot d{\mathbf {L}}_{s}+\int \nolimits _{0}^{\tau }e^{-cs}\left( {\mathbf {a}}-\nabla g({\mathbf {z}} _{s-})\right) \mathbf {\cdot }d{\mathbf {L}}_{s}^{c}\nonumber \\&\qquad + \sum \limits _{{\mathbf {L}}_{s}\ne {\mathbf {L}}_{s-},s\le \tau }e^{-cs} \int \nolimits _{0}^{1}\left( {\mathbf {a}}-\nabla g\left( {\breve{\mathbf {z}}} _{s}-\gamma \Delta {\mathbf {L}}_{s}\right) \right) \mathbf {\cdot }\Delta {\mathbf {L}}_{s}d\gamma . \end{aligned}$$
(7.2)

Since

$$\begin{aligned} M_{1}(t)= & {} \sum \limits _{{\breve{\mathbf {z}}}\left( s-\right) \ne {\mathbf {z}} _{s-},s\le t}\left( g({\breve{\mathbf {z}}}_{s})-g({\mathbf {z}}_{s-})\right) e^{-cs}\nonumber \\&\quad -\lambda \int \limits _{0}^{t}e^{-cs}\int \limits _{{\mathbf {R}}_{+}^{n} }\left( g({\mathbf {z}}_{s-}-\varvec{\alpha })-g({\mathbf {z}}_{s-})\right) dF(\varvec{\alpha })ds \end{aligned}$$
(7.3)

and

$$\begin{aligned} M_{2}(t)=\sum \limits _{{\breve{\mathbf {z}}}\left( s-\right) \ne {\mathbf {z}} _{s-},s\le t}-\upsilon ({\breve{\mathbf {z}}}_{s-},{\mathbf {z}}(s-)-{\breve{\mathbf {z}}}_{s})e^{-cs}+\lambda \int \limits _{0}^{t}e^{-cs}\int \limits _{{\mathbf {R}} _{+}^{n}}\upsilon ({\mathbf {z}}_{s-},\varvec{\alpha })dF(\varvec{\alpha })ds\nonumber \\ \end{aligned}$$
(7.4)

are martingales with zero expectation, we have from (7.1) and (7.2)

$$\begin{aligned}{}\begin{array}[c]{l} (g({\mathbf {z}}_{\tau })I_{\{\tau <\tau ^{{\mathbf {L}}}\}}-\upsilon ({\mathbf {z}} _{\tau -},{\mathbf {z}}_{\tau -}-{\mathbf {z}}_{\tau })I_{\{\tau =\tau ^{{\mathbf {L}}} \}})e^{-c\tau }-g({\mathbf {x}})\\ \begin{array} [c]{ll} &{}= (g({\mathbf {z}}_{\tau })-\upsilon ({\mathbf {z}}_{\tau -},{\mathbf {z}}_{\tau -}-{\mathbf {z}}_{\tau }))e^{-c\tau }-g({\mathbf {x}})\\ &{}= \int \nolimits _{0}^{\tau }{\mathcal {L}}(g)({\mathbf {z}}_{s-})e^{-cs}ds-\int _{0-}^{\tau }e^{-cs}{\mathbf {a}}\cdot d{\mathbf {L}}_{s}\\ &{}\quad +\int \nolimits _{0}^{\tau }e^{-cs}\left( {\mathbf {a}}-\nabla g({\mathbf {z}} _{s-})\right) \mathbf {\cdot }d{\mathbf {L}}_{s}^{c}\\ &{}\quad + \sum \limits _{{\mathbf {L}}_{s}\ne {\mathbf {L}}_{s-},s\le \tau }e^{-cs} \int \nolimits _{0}^{1}\left( {\mathbf {a}}-\nabla g\left( {\breve{\mathbf {z}}} _{s}-\gamma \Delta {\mathbf {L}}_{s}\right) \mathbf {\cdot }\Delta {\mathbf {L}} _{s}\right) d\gamma +M(\tau ); \end{array} \end{array} \end{aligned}$$

where \(M(t)=M_{1}(t)+M_{2}(t)\). \(\square \)
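For the reader's convenience, (7.1) is an application of the change-of-variables formula for finite-variation processes: for a continuously differentiable function g and a right-continuous finite-variation process \({\mathbf {z}}\) with continuous part \({\mathbf {z}}^{c}\),

$$\begin{aligned} e^{-ct}g({\mathbf {z}}_{t})-g({\mathbf {z}}_{0})=-c\int _{0}^{t}e^{-cs}g({\mathbf {z}}_{s-})\,ds+\int _{0}^{t}e^{-cs}\,\nabla g({\mathbf {z}}_{s-})\cdot d{\mathbf {z}}_{s}^{c}+\sum \limits _{s\le t}\left( g({\mathbf {z}}_{s})-g({\mathbf {z}}_{s-})\right) e^{-cs}. \end{aligned}$$

In the proof above, \(d{\mathbf {z}}_{s}^{c}={\mathbf {p}}\,ds-d{\mathbf {L}}_{s}^{c}\), which produces the first and third terms of (7.1), while the jumps of \({\mathbf {z}}\) (coming from the claims and from the lump dividend payments) produce the two sums.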

In order to prove Lemma 3.6, we will use a technical lemma in which we construct a sequence of smooth functions that approximate a (possibly non-smooth) viscosity supersolution. This allows us to apply Lemma 3.3 to a smooth approximation instead of to the viscosity supersolution itself; this is necessary because the amount of time the controlled process spends at non-differentiable points of the viscosity supersolution could have positive Lebesgue measure. We omit the proof of this lemma because it is similar to that of the one-dimensional version given in Lemma 4.1 of [6]; the result is obtained by standard convolution arguments, using the continuity of the function \({\mathcal {R}}\).
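For reference, the standard convolution construction alluded to here has the schematic form below; the precise shift, kernel and scaling needed to guarantee properties (a)–(e) of Lemma 7.1 are those of Lemma 4.1 of [6] and may differ from this generic version:

$$\begin{aligned} {\overline{u}}_{m}({\mathbf {x}})=\int _{{\mathbf {R}}^{n}}{\overline{u}}({\mathbf {x}}-{\mathbf {y}})\,\phi _{m}({\mathbf {y}})\,d{\mathbf {y}}, \end{aligned}$$

where \(\phi _{m}\ge 0\) is a \(C^{\infty }\) kernel with \(\int _{{\mathbf {R}}^{n}}\phi _{m}=1\) and support shrinking to \(\{{\mathbf {0}}\}\) as \(m\rightarrow \infty \); smoothness of \({\overline{u}}_{m}\) and the locally uniform convergence in (d) are then standard properties of mollifications.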

Lemma 7.1

Fix \({\mathbf {x}}^{0}\) in the interior of \({\mathbf {R}}_{+}^{n}\) and let \({\overline{u}}\) be a supersolution of (3.1) satisfying the growth condition (2.14). We can find a sequence of functions \({\overline{u}} _{m}:{\mathbf {R}}_{+}^{n}\rightarrow {\mathbf {R}}\) such that:

(a) \({\overline{u}}_{m}\) is continuously differentiable and \({\overline{u}} _{m}\ge {\overline{u}}\ge f.\)

(b) \({\overline{u}}_{m}\) satisfies the growth condition (2.14).

(c) \({\mathbf {p}}{\mathbf {\cdot }}\nabla {\overline{u}}_{m}\) \(\le \left( c+\lambda \right) {\overline{u}}_{m}+\lambda \left| {\overline{u}}({\mathbf {0}})\right| +\lambda {\mathbb {E}}\left( \left| \upsilon ({\mathbf {0}},{\mathbf {U}} _{1})\right| \right) \) in \({\mathbf {R}}_{+}^{n}\) and \({\mathbf {a}} -\nabla {\overline{u}}_{m}\le {\mathbf {0}}\).

(d) \({\overline{u}}_{m}\) \(\searrow \) \({\overline{u}}\) uniformly on compact sets in \({\mathbf {R}}_{+}^{n}\) and \(\nabla {\overline{u}}_{m}\) converges to \(\nabla {\overline{u}}\) a.e. in \({\mathbf {R}}_{+}^{n}\).

(e) There exists a sequence \(c_{m}\) with \(\lim \limits _{m\rightarrow \infty }c_{m}=0\) such that

$$\begin{aligned} \sup \nolimits _{{\mathbf {x}}\in [{\mathbf {0}},{\mathbf {x}}^{0}]}{\mathcal {L}} ({\overline{u}}_{m})\left( {\mathbf {x}}\right) \le c_{m}. \end{aligned}$$

Proof of Lemma 3.6

Consider the process \({\mathbf {z}}_{s}={\mathbf {X}}_{s}^{{\mathbf {L}}}\) defined in (2.3), write \(\tau =\tau ^{{\mathbf {L}}}\) and take \(\widetilde{\tau }={\overline{\tau }}\wedge \tau \). Consider also the functions \({\overline{u}}_{m}\) defined on \({\mathbf {R}}_{+}^{n}\) in Lemma 7.1. Applying Lemma 3.3 up to time \({\widetilde{\tau }}\wedge t\), we get from Lemma 7.1(a) and (c) that

$$\begin{aligned}{}\begin{array}[c]{l} {\overline{u}}_{m}({\mathbf {z}}_{t})e^{-ct}I_{\{t<{\widetilde{\tau }}\}} +e^{-c{\overline{\tau }}}f({\mathbf {z}}_{{\overline{\tau }}})I_{\{t\wedge {\widetilde{\tau }}={\overline{\tau }},{\overline{\tau }}<\tau \}}-e^{-c{\overline{\tau }} }\upsilon \left( {\mathbf {z}}_{\tau \mathbf {-}},{\mathbf {z}}_{\tau \mathbf {-} }-{\mathbf {z}}_{\tau }\right) I_{\{t\wedge {\widetilde{\tau }}=\tau \}}-\overline{u}_{m}({\mathbf {x}})\\ \begin{array} [c]{ll} &{}\le {\overline{u}}_{m}({\mathbf {z}}_{t})e^{-ct}I_{\{t<{\widetilde{\tau }} \}}+e^{-c{\overline{\tau }}}{\overline{u}}_{m}({\mathbf {z}}_{{\overline{\tau }} })I_{\{t\wedge {\widetilde{\tau }}={\overline{\tau }},{\overline{\tau }}<\tau \}}-e^{-c{\overline{\tau }}}\upsilon \left( {\mathbf {z}}_{\tau \mathbf {-} },{\mathbf {z}}_{\tau \mathbf {-}}-{\mathbf {z}}_{\tau }\right) I_{\{t\wedge {\widetilde{\tau }}=\tau \}}-{\overline{u}}_{m}({\mathbf {x}})\\ &{}\le \int \nolimits _{0}^{t\wedge {\widetilde{\tau }}}{\mathcal {L}}({\overline{u}} _{m})({\mathbf {z}}_{s-})e^{-cs}ds-\int _{0-}^{t\wedge {\widetilde{\tau }}} e^{-cs}{\mathbf {a}}\cdot d{\mathbf {L}}_{s}+M(t\wedge {\widetilde{\tau }}), \end{array} \end{array}\nonumber \\ \end{aligned}$$
(7.5)

where M(t) is a zero-expectation martingale. Since \({\mathbf {L}}_{s}\) is non-decreasing, the monotone convergence theorem gives

$$\begin{aligned} \begin{array} [c]{l} \lim \limits _{t\rightarrow \infty }{\mathbb {E}}_{{\mathbf {x}}}\left( \int _{0-}^{t\wedge {\widetilde{\tau }}}e^{-cs}{\mathbf {a}}\cdot d{\mathbf {L}} _{s}+e^{-c{\overline{\tau }}}f({\mathbf {z}}_{{\overline{\tau }}})I_{\{t\wedge {\widetilde{\tau }}={\overline{\tau }},{\overline{\tau }}<\tau \}}-e^{-c{\overline{\tau }} }\upsilon \left( {\mathbf {z}}_{\tau \mathbf {-}},{\mathbf {z}}_{\tau \mathbf {-} }-{\mathbf {z}}_{\tau }\right) I_{\{t\wedge {\widetilde{\tau }}=\tau \}}\right) \\ =J(\pi ;{\mathbf {x}}). \end{array} \end{aligned}$$

From Lemma 7.1(c), we have

$$\begin{aligned}&-\left( c+\lambda \right) {\overline{u}}_{m}({\mathbf {x}})+{\overline{u}} _{m}(0)\lambda F({\mathbf {x}})-\lambda {\mathbb {E}}\left( \left| \upsilon ({\mathbf {0}},{\mathbf {U}}_{1})\right| \right) \nonumber \\&\quad \le {\mathcal {L}} ({\overline{u}}_{m})({\mathbf {x}})\le \lambda {\overline{u}}_{m}({\mathbf {x}} )+\lambda \left| {\overline{u}}({\mathbf {0}})\right| +\lambda {\mathbb {E}}\left( \left| \upsilon ({\mathbf {0}},{\mathbf {U}}_{1})\right| \right) -{\mathcal {R}}({\mathbf {x}}). \end{aligned}$$
(7.6)

By Lemma 7.1(b), (c) and the inequality \({\mathbf {z}}_{s}\le {\mathbf {x}}+{\mathbf {p}}s,\) there exists \(d_{0}\) large enough such that

$$\begin{aligned} {\overline{u}}_{m}({\mathbf {z}}_{s})\le {\overline{u}}_{m}({\mathbf {x}}+{\mathbf {p}} s)\le d_{0}e^{\frac{c}{2n}\sum _{i=1}^{n}\frac{x_{i}+p_{i}s}{p_{i}}} =d_{0}h_{0}({\mathbf {x}})e^{\frac{c}{2}s} \end{aligned}$$
(7.7)

and

$$\begin{aligned} -\upsilon ({\mathbf {z}}_{s-},\varvec{\alpha })\le S({\mathbf {z}}_{s-})\le d_{0}h_{0}({\mathbf {x}})e^{\frac{c}{2}s}\text { for }\left( {\mathbf {z}} _{s-}-\varvec{\alpha }\right) \notin {\mathbf {R}}_{+}^{n}, \end{aligned}$$
(7.8)

where \(h_{0}\) and S are defined in (2.15) and Proposition 2.4, respectively. Therefore, from (7.6), there exists \(d_{1}\) large enough such that

$$\begin{aligned} e^{-cs}\left| {\mathcal {L}}({\overline{u}}_{m})\left( {\mathbf {z}}_{s-}\right) \right| \le d_{1}e^{-\frac{c}{2}s}. \end{aligned}$$
(7.9)

Hence, using the bounded convergence theorem,

$$\begin{aligned} \lim \limits _{t\rightarrow \infty }{\mathbb {E}}_{{\mathbf {x}}}\left( \int \nolimits _{0}^{t\wedge {\widetilde{\tau }}}{\mathcal {L}}({\overline{u}}_{m} )({\mathbf {z}}_{s-})e^{-cs}ds\right) ={\mathbb {E}}_{{\mathbf {x}}}\left( \int \nolimits _{0}^{{\widetilde{\tau }}}{\mathcal {L}}({\overline{u}}_{m} )({\mathbf {z}}_{s-})e^{-cs}ds\right) .\qquad \end{aligned}$$
(7.10)

From (7.5) and (7.10), we get

$$\begin{aligned} \lim \limits _{t\rightarrow \infty }{\mathbb {E}}_{{\mathbf {x}}}\left( \overline{u}_{m}({\mathbf {z}}_{t})e^{-ct}I_{\{t<{\widetilde{\tau }}\}}\right) -\overline{u}_{m}({\mathbf {x}})\le {\mathbb {E}}_{{\mathbf {x}}}\left( \int \nolimits _{0} ^{{\widetilde{\tau }}}{\mathcal {L}}({\overline{u}}_{m})({\mathbf {z}}_{s-} )e^{-cs}ds\right) -J(\pi ;{\mathbf {x}}).\nonumber \\ \end{aligned}$$
(7.11)

By (7.7),

$$\begin{aligned} \lim \limits _{t\rightarrow \infty }{\mathbb {E}}_{{\mathbf {x}}}\left( \overline{u}_{m}({\mathbf {z}}_{t})e^{-ct}I_{\{t<{\widetilde{\tau }}\}}\right) =0. \end{aligned}$$
(7.12)

Let us prove now that

$$\begin{aligned} \limsup \limits _{m\rightarrow \infty }{\mathbb {E}}_{{\mathbf {x}}}\left( \int \nolimits _{0}^{{\widetilde{\tau }}}{\mathcal {L}}({\overline{u}}_{m} )({\mathbf {z}}_{s-})e^{-cs}ds\right) \le 0. \end{aligned}$$
(7.13)

Given any \(\varepsilon >0\), from (7.9), we can find T large enough such that

$$\begin{aligned} {\mathbb {E}}_{{\mathbf {x}}}\left( \int \nolimits _{T\wedge {\widetilde{\tau }} }^{{\widetilde{\tau }}}\left| {\mathcal {L}}({\overline{u}}_{m})({\mathbf {z}} _{s-})\right| e^{-cs}ds\right) \le \frac{2d_{1}}{c}\left( e^{-\frac{c}{2} T}\right) <\frac{\varepsilon }{2}. \end{aligned}$$
(7.14)

For \(s\le T\) we have \({\mathbf {z}}_{s-}\in [{\mathbf {0}},{\mathbf {x}}+{\mathbf {p}}T]\); then, from Lemma 7.1(e), we can find \(m_{0}\) large enough such that for any \(m\ge m_{0}\)

$$\begin{aligned} \int \nolimits _{0}^{T}{\mathcal {L}}({\overline{u}}_{m})({\mathbf {z}}_{s-} )e^{-cs}ds\le c_{m}\int \nolimits _{0}^{T}e^{-cs}ds\le \frac{c_{m}}{c}\le \frac{\varepsilon }{2} \end{aligned}$$

and so we have (7.13). Thus, from (7.11) and using (7.12) and (7.13), we obtain

$$\begin{aligned} {\overline{u}}({\mathbf {x}})=\lim \nolimits _{m\rightarrow \infty }{\overline{u}} _{m}({\mathbf {x}})\ge J(\pi ;{\mathbf {x}})\text {. } \end{aligned}$$
(7.15)

\(\square \)

Proof of Lemma 4.2

Suppose that \(\tilde{k}=\infty \). With

$$\begin{aligned} k_{l}:={\mathbf {m}}{\mathbf {\cdot }}{\mathbf {1}}+(l-1)n+1, \end{aligned}$$

there are at least \(i_{l}\ge l\) control actions \({\mathbf {E}}_{0}\) in \(\left( s_{1},s_{2},\ldots ,s_{k_{l}}\right) \). Let us consider the non-decreasing sequence \((j_{l})_{l}\) defined as

$$\begin{aligned} j_{l}:=\max \{j:\tau _{j}\le t_{k_{l}}\}, \end{aligned}$$

we have \(t_{k_{l}}\ge \tau _{j_{l}}+(i_{l}-j_{l})\delta \). If \(\lim _{l\rightarrow \infty }(i_{l}-j_{l})=\infty \), then

$$\begin{aligned} \lim \nolimits _{l\rightarrow \infty }t_{k_{l}}\ge \lim \nolimits _{l\rightarrow \infty }\tau _{j_{l}}+(i_{l}-j_{l})\delta \ge \lim \nolimits _{l\rightarrow \infty }(i_{l}-j_{l})\delta =\infty ; \end{aligned}$$

if not, \(\lim _{l\rightarrow \infty }j_{l}=\infty \) and so

$$\begin{aligned} \lim \nolimits _{l\rightarrow \infty }t_{k_{l}}\ge \lim \nolimits _{l\rightarrow \infty }\tau _{j_{l}}+(i_{l}-j_{l})\delta \ge \lim \nolimits _{l\rightarrow \infty }\tau _{j_{l}} \end{aligned}$$

and since \(\lim _{l\rightarrow \infty }\tau _{j_{l}}=\lim _{i\rightarrow \infty }\tau _{i}=\infty \) a.s., we have the result. \(\square \)

Proof of Lemma 4.4

It is straightforward that \(T_{0}\), \(T_{i}\), \(T_{s}\) and \(T\) are non-decreasing and that

$$\begin{aligned} \sup \nolimits _{{\mathbf {m}}\in {\mathbf {N}}_{0}^{n}}\left| T(w_{1} )({\mathbf {m}})-T(w_{2})({\mathbf {m}})\right| \le \sup \nolimits _{{\mathbf {m}} \in {\mathbf {N}}_{0}^{n}}\left| w_{1}({\mathbf {m}})-w_{2}({\mathbf {m}} )\right| . \end{aligned}$$

Also, given a function \(w:{\mathbf {N}}_{0}^{n}\rightarrow {\mathbf {R}}\), it is immediate that \(T_{i}(w)\) and \(T_{s}(w)\) can be written as linear combinations of the values \(w({\mathbf {m}})\) plus a constant. Let us prove now that

$$\begin{aligned} T_{0}(w)({\mathbf {m}})=e^{-(c+\lambda )\delta }w({\mathbf {m}}+{\mathbf {1}} )+\sum \limits _{0\le {\mathbf {k}}\le {\mathbf {m}}}a_{1}({\mathbf {k}},{\mathbf {m}} )w({\mathbf {k}})+a_{2}({\mathbf {m}})\text {,} \end{aligned}$$

where

$$\begin{aligned} a_{1}({\mathbf {k}},{\mathbf {m}})&= I_{\{{\mathbf {k}}\le {\mathbf {m}}-{\mathbf {1}}\}}\int \limits _{0}^{\delta }\lambda e^{-(c+\lambda )t}\left( F(g^{\delta }\left( {\mathbf {m}}-{\mathbf {k}}\right) +t{\mathbf {p}})-F(g^{\delta }\left( {\mathbf {m}}-{\mathbf {k}}-{\mathbf {1}}\right) +t{\mathbf {p}})\right) dt\\&\quad +I_{\{{\mathbf {k}}\le {\mathbf {m}},\,{\mathbf {k}}\nleqslant {\mathbf {m}}-{\mathbf {1}}\}}\int \limits _{0}^{\delta }\lambda e^{-(c+\lambda )t}\left( F(g^{\delta }\left( {\mathbf {m}}-{\mathbf {k}}\right) +t{\mathbf {p}})-F({\mathbf {0}}\vee \left( g^{\delta }\left( {\mathbf {m}}-{\mathbf {k}}-{\mathbf {1}}\right) +t{\mathbf {p}}\right) )\right) dt \end{aligned}$$

and

$$\begin{aligned} a_{2}({\mathbf {m}})&= \sum \limits _{0\le {\mathbf {k}}\le {\mathbf {m}}-{\mathbf {1}}}\int \limits _{0}^{\delta }\lambda e^{-(c+\lambda )t}\left( \int \limits _{g^{\delta }\left( {\mathbf {m}}-{\mathbf {k}}-{\mathbf {1}}\right) +t{\mathbf {p}}}^{g^{\delta }\left( {\mathbf {m}}-{\mathbf {k}}\right) +t{\mathbf {p}}}{\mathbf {a}}\cdot (g^{\delta }\left( {\mathbf {m}}-{\mathbf {k}}\right) +t{\mathbf {p}}-\varvec{\alpha })\,dF(\varvec{\alpha })\right) dt\\&\quad +\sum \limits _{{\mathbf {k}}\le {\mathbf {m}},\,{\mathbf {k}}\nleqslant {\mathbf {m}}-{\mathbf {1}}}\int \limits _{0}^{\delta }\lambda e^{-(c+\lambda )t}\left( \int \limits _{{\mathbf {0}}\vee \left( g^{\delta }\left( {\mathbf {m}}-{\mathbf {k}}-{\mathbf {1}}\right) +t{\mathbf {p}}\right) }^{g^{\delta }\left( {\mathbf {m}}-{\mathbf {k}}\right) +t{\mathbf {p}}}{\mathbf {a}}\cdot (g^{\delta }\left( {\mathbf {m}}-{\mathbf {k}}\right) +t{\mathbf {p}}-\varvec{\alpha })\,dF(\varvec{\alpha })\right) dt\\&\quad -\int \limits _{0}^{\delta }e^{-(c+\lambda )t}{\mathcal {R}}(g^{\delta }({\mathbf {m}})+t{\mathbf {p}})\,dt. \end{aligned}$$

Given \({\mathbf {m}}\in {\mathbf {N}}_{0}^{n}\), \(\varvec{\alpha } \in {\mathbf {R}}_{+}^{n}\) and \(0<t\le \delta \) such that \({\mathbf {0}}\le g^{\delta }({\mathbf {m}})+t{\mathbf {p}}-\varvec{\alpha }\), let us define

$$\begin{aligned} {\mathbf {k}}:=\rho ^{\delta }(g^{\delta }({\mathbf {m}})+t{\mathbf {p}}-\varvec{\alpha }), \end{aligned}$$

and so \({\mathbf {k}}\le {\mathbf {m}}.\)

If \({\mathbf {k}}\le {\mathbf {m}}-{\mathbf {1}}\),

$$\begin{aligned} g^{\delta }({\mathbf {k}})\le g^{\delta }({\mathbf {m}})+t{\mathbf {p}}-\varvec{\alpha }<g^{\delta }\left( {\mathbf {k}}+{\mathbf {1}}\right) \le g^{\delta }({\mathbf {m}}) \end{aligned}$$

which implies

$$\begin{aligned} {\mathbf {0}}<g^{\delta }\left( {\mathbf {m}}-{\mathbf {k}}-{\mathbf {1}}\right) +t{\mathbf {p}}<\varvec{\alpha }\le g^{\delta }\left( {\mathbf {m}}-{\mathbf {k}} \right) +t{\mathbf {p}}. \end{aligned}$$

If \({\mathbf {k}}\le {\mathbf {m}}\) with \({\mathbf {k}}\nleqslant {\mathbf {m}}-{\mathbf {1}} \),

$$\begin{aligned} g^{\delta }({\mathbf {k}})\le g^{\delta }({\mathbf {m}})+t{\mathbf {p}}-\varvec{\alpha }<g^{\delta }\left( {\mathbf {k}}+{\mathbf {1}}\right) \wedge \left( g^{\delta }({\mathbf {m}})+t{\mathbf {p}}\right) \end{aligned}$$

and so

$$\begin{aligned} \left( g^{\delta }\left( {\mathbf {m}}-{\mathbf {k}}-{\mathbf {1}}\right) +t{\mathbf {p}}\right) \vee {\mathbf {0}}<\varvec{\alpha }\le g^{\delta }\left( {\mathbf {m}}-{\mathbf {k}}\right) +t{\mathbf {p}}. \end{aligned}$$

Then, we can write

$$\begin{aligned} \begin{array} [c]{l} {\mathcal {I}}^{\delta }(w)({\mathbf {m}})\\ \begin{array} [c]{ll} &{}= \sum \limits _{0\le {\mathbf {k}}\le {\mathbf {m}}-{\mathbf {1}}}w({\mathbf {k}})\int _{0}^{\delta }\lambda e^{-(c+\lambda )t}\left( {\textstyle \int \nolimits _{g^{\delta }\left( {\mathbf {m}}-{\mathbf {k}} -{\mathbf {1}}\right) +t{\mathbf {p}}}^{g^{\delta }\left( {\mathbf {m}}-{\mathbf {k}} \right) +t{\mathbf {p}}}} dF(\varvec{\alpha })\right) dt\\ &{}\quad +\sum \limits _{0\le {\mathbf {k}}\le {\mathbf {m}}-{\mathbf {1}}}\int _{0}^{\delta }\lambda e^{-(c+\lambda )t}\left( {\textstyle \int \nolimits _{g^{\delta }\left( {\mathbf {m}}-{\mathbf {k}} -{\mathbf {1}}\right) +t{\mathbf {p}}}^{g^{\delta }\left( {\mathbf {m}}-{\mathbf {k}} \right) +t{\mathbf {p}}}} {\mathbf {a}}\cdot \left( g^{\delta }({\mathbf {m}}-{\mathbf {k}})+t{\mathbf {p}} -\varvec{\alpha }\right) dF(\varvec{\alpha })\right) dt\\ &{}\quad +\sum \limits _{{\mathbf {k}}\le {\mathbf {m}},{\mathbf {k}}\nleqslant {\mathbf {m}}-{\mathbf {1}} }w({\mathbf {k}})\int _{0}^{\delta }\lambda e^{-(c+\lambda )t}\left( {\textstyle \int \nolimits _{\left( g^{\delta }\left( {\mathbf {m}}-{\mathbf {k}} -{\mathbf {1}}\right) +t{\mathbf {p}}\right) \vee {\mathbf {0}}}^{g^{\delta }\left( {\mathbf {m}}-{\mathbf {k}}\right) +t{\mathbf {p}}}} dF(\varvec{\alpha })\right) dt\\ &{}\quad +\sum \limits _{{\mathbf {k}}\le {\mathbf {m}},{\mathbf {k}}\nleqslant {\mathbf {m}}-{\mathbf {1}} }\int _{0}^{\delta }\lambda e^{-(c+\lambda )t}\left( {\textstyle \int \nolimits _{\left( g^{\delta }\left( {\mathbf {m}}-{\mathbf {k}} -{\mathbf {1}}\right) +t{\mathbf {p}}\right) \vee {\mathbf {0}}}^{g^{\delta }\left( {\mathbf {m}}-{\mathbf {k}}\right) +t{\mathbf {p}}}} {\mathbf {a}}\cdot \left( g^{\delta }({\mathbf {m}}-{\mathbf {k}})+t{\mathbf {p}} -\varvec{\alpha }\right) dF(\varvec{\alpha })\right) dt. \end{array} \end{array} \end{aligned}$$

Therefore, from (4.2), we have the result. \(\square \)
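As a rough illustration of how an operator of the form \(T_{0}\) can be evaluated numerically, the following sketch specializes the formula above to \(n=1\), with weight \(a=1\), no penalties (\({\mathcal {R}}\equiv 0\) and \(\upsilon \equiv 0\)), exponentially distributed claim sizes and the grid \(g^{\delta }(m)=m\,p\,\delta \). The grid choice, the claim distribution and all parameter values are assumptions made here for illustration only and are not taken from the paper.

```python
import numpy as np

# Hypothetical parameters for a 1-D toy version of the operator T_0:
# claim arrival rate lam, discount rate c, premium rate p,
# Exp(beta) claim sizes, and grid step delta (grid points x_m = m*p*delta).
lam, c, p, beta, delta = 1.0, 0.1, 1.5, 1.0, 0.05

def F(x):
    """Claim-size cdf, here Exponential(beta)."""
    return 1.0 - np.exp(-beta * x) if x > 0.0 else 0.0

def _time_quad(f, n=400):
    """Crude trapezoidal quadrature of a scalar function f over [0, delta]."""
    t = np.linspace(0.0, delta, n)
    vals = np.array([f(ti) for ti in t])
    return float(np.sum(0.5 * (vals[1:] + vals[:-1])) * (t[1] - t[0]))

def a1(k, m):
    """Coefficient of w(k): discounted weight of the event that the first claim
    arrives before delta and the post-claim surplus, rounded down to the grid,
    equals k*p*delta."""
    hi = (m - k) * p * delta
    lo = (m - k - 1) * p * delta
    return _time_quad(lambda t: lam * np.exp(-(c + lam) * t)
                      * (F(hi + t * p) - F(max(lo + t * p, 0.0))))

def mean_overshoot(lo, hi):
    """E[(hi - U) 1_{lo < U <= hi}] for U ~ Exp(beta), in closed form."""
    lo = max(lo, 0.0)
    return (hi - lo) * np.exp(-beta * lo) - (np.exp(-beta * lo) - np.exp(-beta * hi)) / beta

def a2(m):
    """Expected discounted amount paid out when the post-claim surplus falls
    strictly between grid points and is rounded down (penalty terms dropped)."""
    total = 0.0
    for k in range(m + 1):
        hi = (m - k) * p * delta
        lo = (m - k - 1) * p * delta
        total += _time_quad(lambda t: lam * np.exp(-(c + lam) * t)
                            * mean_overshoot(lo + t * p, hi + t * p))
    return total

def T0(w, m):
    """Evaluate the toy analogue of T_0(w)(m); w is a vector of grid values."""
    no_claim = np.exp(-(c + lam) * delta) * w[min(m + 1, len(w) - 1)]  # truncate at grid top
    with_claim = sum(a1(k, m) * w[k] for k in range(m + 1))
    return no_claim + with_claim + a2(m)

# Example: one application of the operator to the zero grid function.
w = np.zeros(50)
print([round(T0(w, m), 5) for m in (0, 10, 49)])
```

The quadrature rule and the exponential claim distribution are chosen only to keep the sketch self-contained; any claim-size distribution function F (and any quadrature scheme) could be substituted.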

Proof of Lemma 4.6

The proof of this lemma is a discrete version of that of Lemma 3.6. Assume that \(\pi =({\mathbf {L}},{\overline{\tau }})\in \Pi _{g^{\delta }({\mathbf {m}})}^{\delta }\). For any \(\omega =(\tau _{i},{\mathbf {U}}_{i})_{i\ge 1}\), consider the sequence \({\mathbf {s}}=(s_{k})_{k=1,\ldots ,{\tilde{k}}}\), with \(s_{k}\in {\mathcal {E}}\), corresponding to \(\pi \), together with \({\mathbf {m}}^{k}\), \({\mathbf {y}}^{k}\) and the times \(t_{k}\) and \(\Delta _{k}\), as defined in Section 4. Let \(\left( \kappa _{l}\right) _{l\ge 1}\) be the indices k of the sequence \({\mathbf {s}}=(s_{k})_{k=1,\ldots ,{\tilde{k}}}\) at which \(s_{k}\) is either \({\mathbf {E}}_{s}\) or \({\mathbf {E}}_{0}\). If the sequence stops at \({\tilde{k}}=\kappa _{l_{0}}<\infty \), we define

$$\begin{aligned} \kappa _{l}=\kappa _{l_{0}}\text { for }l\ge l_{0},\text { }t_{\kappa _{l_{0}+j} }=t_{\kappa _{l_{0}}}+\Delta _{\kappa _{l_{0}}}\text { for }j\ge 1; \end{aligned}$$

and if \({\tilde{k}}=\infty \) we put \(l_{0}=\infty \). Consider the case in which the process goes to ruin at \(\kappa _{l}\), that is, \({\mathbf {y}}^{\kappa _{l}}\notin {\mathbf {R}}_{+}^{n}\); then the surplus prior to ruin is \({\mathbf {y}}^{\kappa _{l}}+{\mathbf {U}}\) and the penalty paid at ruin is \(\upsilon ({\mathbf {y}}^{\kappa _{l}}+{\mathbf {U}},{\mathbf {U}})\), where \({\mathbf {U}}\) is the last jump of the uncontrolled process. So we define, for \(l\ge 1\),

$$\begin{aligned} H(l)=w({\mathbf {m}}^{1+\kappa _{l}})I_{\{s_{\kappa _{l}}={\mathbf {E}}_{0} \}}I_{\{{\mathbf {y}}^{\kappa _{l}}\in {\mathbf {R}}_{+}^{n}\}}-\upsilon ({\mathbf {y}}^{\kappa _{l}}+{\mathbf {U}},{\mathbf {U}})I_{\{s_{\kappa _{l}} ={\mathbf {E}}_{0}\}}I_{\{{\mathbf {y}}^{\kappa _{l}}\notin {\mathbf {R}}_{+}^{n} \}}+f(g^{\delta }\left( {\mathbf {m}}^{\kappa _{l}}\right) )I_{\{s_{\kappa _{l} }={\mathbf {E}}_{s}\}}\text {.} \end{aligned}$$

If we put \(H(0)=w({\mathbf {m}})\), \(\kappa _{0}=0\) and \(t_{0}=0\), then, using \(\left( T_{i}(w)-w\right) _{i=1,\ldots ,n}\le 0\), we have

$$\begin{aligned} \begin{array} [c]{lll} e^{-ct_{\kappa _{l+1}}}H(l)-w({\mathbf {m}}) &{} = &{} \sum \limits _{j=1}^{l}\left( e^{-ct_{\kappa _{j+1}}}H(j)-e^{-ct_{\kappa _{j}}}H(j-1)\right) \\ &{} = &{} \sum \limits _{j=1}^{l}I_{\{\kappa _{j+1}\ne \kappa _{j}\}}\left( e^{-ct_{\kappa _{j+1}} }H(j)-e^{-ct_{\kappa _{j}}}H(j-1)\right) \\ &{} = &{} \sum \limits _{j=1}^{l}I_{\{\kappa _{j+1}\ne \kappa _{j}\}}\left( e^{-ct_{1+\kappa _{j-1} }}\left( \sum \nolimits _{k=1+\kappa _{j-1}}^{\kappa _{j}-1}\left( w({\mathbf {m}}^{k+1} )-w({\mathbf {m}}^{k})\right) \right) \right) \\ &{} &{} +\sum \limits _{j=1}^{l}I_{\{\kappa _{j+1}\ne \kappa _{j}\}}\left( e^{-ct_{\kappa _{j+1}} }H(j)-e^{-ct_{\kappa _{j}}}w({\mathbf {m}}^{\kappa _{j}})\right) \\ &{} \le &{} \sum \limits _{j=1}^{l}I_{\{\kappa _{j+1}\ne \kappa _{j}\}}\left( \sum \limits _{k=1+\kappa _{j-1}}^{\kappa _{j}-1}e^{-ct_{1+\kappa _{j-1}}}\left( \sum \limits _{i=1}^{n}\left( -a_{i}p_{i}\delta \right) I_{\{s_{k}={\mathbf {E}}_{i}\}}\right) \right) \\ &{} &{} +\sum \limits _{j=1}^{l}I_{\{\kappa _{j+1}\ne \kappa _{j}\}}\left( e^{-ct_{\kappa _{j+1}} }H(j)-e^{-ct_{\kappa _{j}}}w({\mathbf {m}}^{\kappa _{j}})\right) ; \end{array} \end{aligned}$$
(7.16)

and since \(T_{0}(w)-w\le 0\) and \(T_{s}(w)-w\le 0,\) if \(\kappa _{j+1}\ne \kappa _{j}\),

$$\begin{aligned} \begin{array} [c]{l} {\mathbb {E}}\left( \left. e^{-ct_{\kappa _{j+1}}}H(j)-e^{-ct_{\kappa _{j}} }w({\mathbf {m}}^{\kappa _{j}})\right| {\mathcal {F}}_{t_{\kappa _{j}}}\right) \\ \begin{array}[c]{ll} &{}= {\mathbb {E}}\left( \left. (e^{-ct_{\kappa _{j+1}}}H(j)-e^{-ct_{\kappa _{j}} }w({\mathbf {m}}^{\kappa _{j}}))I_{\{s_{\kappa _{j}} ={\mathbf {E}}_{0}\}}\right| {\mathcal {F}}_{t_{\kappa _{j}}}\right) \\ &{}\quad +\,I_{\{s_{\kappa _{j}}={\mathbf {E}}_{s} \}}e^{-ct_{\kappa _{j}}}\left( f(g^{\delta }({\mathbf {m}}^{\kappa _{j} }))-w({\mathbf {m}}^{\kappa _{j}})\right) \\ &{}\le {\mathbb {E}}\left( \left. e^{-ct_{\kappa _{j+1}}}I_{\{s_{\kappa _{j} }={\mathbf {E}}_{0}\}}(w({\mathbf {m}}^{1+\kappa _{j}})I_{\{{\mathbf {y}}^{\kappa _{j} }\in {\mathbf {R}}_{+}^{n}\}}-\upsilon ({\mathbf {y}}^{\kappa _{j}}+{\mathbf {U}} ,{\mathbf {U}})I_{\{{\mathbf {y}}_{\kappa _{j}}\notin {\mathbf {R}}_{+}^{n} \}})\right| {\mathcal {F}}_{t_{\kappa _{j}}}\right) \\ &{}\quad -\,e^{-ct_{\kappa _{j}}}w({\mathbf {m}}^{\kappa _{j}})I_{\{s_{\kappa _{j} }={\mathbf {E}}_{0}\}}\\ &{}= e^{-ct_{\kappa _{j}}}I_{\{s_{\kappa _{j}}={\mathbf {E}}_{0}\}}\left( T_{0}(w)\left( {\mathbf {m}}^{\kappa _{j}}\right) -w({\mathbf {m}}^{\kappa _{j} })\right) \\ &{}\quad -\,e^{-ct_{\kappa _{j}}}I_{\{s_{\kappa _{j}}={\mathbf {E}}_{0}\}}\int \limits _{0}^{\delta } {\textstyle \int \limits _{\varvec{\alpha }\in [{\mathbf {0}},{\mathbf {z}} _{j}(t)]}} \lambda e^{-(c+\lambda )t}{\mathbf {a}}\cdot \left( {\mathbf {z}}_{j} (t)-\varvec{\alpha }-\left\langle {\mathbf {z}}_{j}(t)-\varvec{\alpha }\right\rangle ^{\delta }\right) dF(\varvec{\alpha })dt\\ &{}\le -e^{-ct_{\kappa _{j}}}I_{\{s_{\kappa _{j}}={\mathbf {E}}_{0}\}} \int \limits _{0}^{\delta } {\textstyle \int \limits _{\varvec{\alpha }\in [{\mathbf {0}},{\mathbf {z}} _{j}(t)]}} \lambda e^{-(c+\lambda )t}{\mathbf {a}}\cdot \left( {\mathbf {z}}_{j} (t)-\varvec{\alpha }-\left\langle {\mathbf {z}}_{j}(t)-\varvec{\alpha }\right\rangle ^{\delta }\right) dF(\varvec{\alpha })dt\text {,} \end{array} \end{array} \end{aligned}$$
(7.17)

where \({\mathbf {z}}_{j}(t)=g^{\delta }({\mathbf {m}}^{\kappa _{j}})+t{\mathbf {p}}\). From (7.16) and (7.17), and writing \({\mathbf {x}}=g^{\delta }({\mathbf {m}})\in {\mathcal {G}}^{\delta }\) for the initial surplus, we have

$$\begin{aligned} \lim \sup _{l\rightarrow \infty }{\mathbb {E}}_{{\mathbf {x}}}\left( e^{-ct_{\kappa _{l+1}}}H(l)-w({\mathbf {m}})\right) \le -{\mathbb {E}}_{{\mathbf {x}}}\left( \int _{0-}^{{\overline{\tau }}\wedge \tau _{L}}e^{-cs}{\mathbf {a}}\cdot d{\mathbf {L}} _{s}\right) . \end{aligned}$$

Then,

$$\begin{aligned} w({\mathbf {m}})\ge J(\pi ;g^{\delta }({\mathbf {m}}))+\lim \sup _{l\rightarrow \infty }{\mathbb {E}}_{{\mathbf {x}}}\left( I_{\{l\le l_{0}\}}e^{-ct_{1+\kappa _{l}} }w({\mathbf {m}}^{1+\kappa _{l}})I_{\{{\mathbf {y}}^{\kappa _{l}}\in {\mathbf {R}}_{+} ^{n}\}}\right) . \end{aligned}$$

Since

$$\begin{aligned} g^{\delta }({\mathbf {m}}^{1+\kappa _{l}})\le g^{\delta }\left( {\mathbf {m}} +\rho ^{\delta }(t_{1+\kappa _{l}}{\mathbf {p}})\right) \end{aligned}$$

and w satisfies the growth condition (4.7), there exists d large enough such that

$$\begin{aligned}&\limsup _{l\rightarrow \infty }{\mathbb {E}}_{{\mathbf {x}}}\left( I_{\{l\le l_{0}\}}e^{-ct_{1+\kappa _{l}}}w({\mathbf {m}}^{1+\kappa _{l}})I_{\{{\mathbf {y}}^{\kappa _{l}}\in {\mathbf {R}}_{+}^{n}\}}\right) \\&\quad \le d\lim _{l\rightarrow \infty }{\mathbb {E}}_{{\mathbf {x}}}\left( I_{\{l\le l_{0}\}}e^{-ct_{1+\kappa _{l}}}e^{c\delta {\mathbf {m}}{\mathbf {\cdot }}{\mathbf {1}}/\left( 2n\right) }e^{\frac{c}{2}t_{1+\kappa _{l}}}\right) =0; \end{aligned}$$

so we have the result. \(\square \)

Proof of Lemma 5.1

(1) Take the \({\mathcal {G}}^{\delta }\)-optimal strategy \(\pi _{g^{\delta }({\mathbf {m}})}^{\delta }\in \Pi _{g^{\delta }({\mathbf {m}})}^{\delta }\) and define \({\overline{\pi }}_{g^{\delta }({\mathbf {m}}+{\mathbf {e}}_{i})}\in \Pi _{g^{\delta }({\mathbf {m}}+{\mathbf {e}}_{i})}^{\delta }\) by applying first the control action \({\mathbf {E}}_{i}\) and then the \({\mathcal {G}}^{\delta }\)-optimal strategy \(\pi _{g^{\delta }({\mathbf {m}})}^{\delta }\). The value function of this strategy is given by

$$\begin{aligned} a_{i}p_{i}\delta +v^{\delta }({\mathbf {m}}), \end{aligned}$$

so we obtain the first inequality of the lemma. Now, take the \({\mathcal {G}}^{\delta }\)-optimal strategy \(\pi _{g^{\delta }\left( {\mathbf {m}}+{\mathbf {1}}\right) }^{\delta }\in \Pi _{g^{\delta }\left( {\mathbf {m}}+{\mathbf {1}}\right) }^{\delta }\) and define \({\overline{\pi }}_{g^{\delta }({\mathbf {m}})}\in \Pi _{g^{\delta }({\mathbf {m}})}^{\delta }\) by applying first the control action \({\mathbf {E}}_{0}\) and then the \({\mathcal {G}}^{\delta }\)-optimal strategy \(\pi _{g^{\delta }\left( {\mathbf {m}}+{\mathbf {1}}\right) }^{\delta }\). Hence, we obtain the second inequality from

$$\begin{aligned} v^{\delta }\left( {\mathbf {m}}+{\mathbf {1}}\right) e^{-(c+\lambda )\delta }\le T_{0}(v^{\delta })\left( {\mathbf {m}}\right) \le T(v^{\delta })\left( {\mathbf {m}}\right) =v^{\delta }\left( {\mathbf {m}}\right) . \end{aligned}$$

(2) In order to avoid any confusion, in the remainder of the proof we attach a superscript \(\delta \) to the control actions in \({\mathcal {G}}^{\delta }\). Note first that, given any surplus in \({\mathbf {R}}_{+}^{n}\), the strategy of paying dividends in such a way that the surplus goes to the nearest smaller point in \({\mathcal {G}}^{2\delta }\) corresponds to going first to the nearest smaller point in \({\mathcal {G}}^{\delta }\) and then applying (possibly) a combination of control actions \({\mathbf {E}}_{i}^{\delta }\). Consider \(\pi _{2g^{\delta }({\mathbf {m}})}\in \Pi _{g^{2\delta }\left( {\mathbf {m}}\right) }^{2\delta }\) given by the random sequence \({\mathbf {s}}=(s_{k})_{k=1,\ldots ,{\tilde{k}}}\) with

$$\begin{aligned} s_{k}\in {\mathcal {E}}^{2\delta }=\left\{ {\mathbf {E}}_{s}^{2\delta },\left( {\mathbf {E}}_{i}^{2\delta }\right) _{i=1,\ldots ,n},{\mathbf {E}}_{0}^{2\delta }\right\} . \end{aligned}$$

We can see that \(\pi _{2g^{\delta }({\mathbf {m}})}\) also belongs to \(\Pi _{2g^{\delta }({\mathbf {m}})}^{\delta }\) by rewriting the sequence as follows: if \(s_{k}={\mathbf {E}}_{i}^{2\delta }\), we replace it by the pair \({\mathbf {E}}_{i}^{\delta },{\mathbf {E}}_{i}^{\delta }\); if \(s_{k}={\mathbf {E}}_{s}^{2\delta }\), we replace it by \({\mathbf {E}}_{s}^{\delta }\); and if \(s_{k}={\mathbf {E}}_{0}^{2\delta }\), we replace it

  • either by \({\mathbf {E}}_{0}^{\delta },{\mathbf {E}}_{0}^{\delta }\), if the next jump of the uncontrolled process arrives at time \(\tau >2\delta \);

  • or by \({\mathbf {E}}_{0}^{\delta },{\mathbf {E}}_{0}^{\delta }\), followed by a possible combination of control actions \({\mathbf {E}}_{i}^{\delta }\), if it arrives at time \(\tau \in (\delta ,2\delta ]\), so that the surplus goes to the nearest smaller point in \({\mathcal {G}}^{2\delta }\);

  • or by \({\mathbf {E}}_{0}^{\delta }\), followed by a possible combination of control actions \({\mathbf {E}}_{i}^{\delta }\), if it arrives at time \(\tau \le \delta \), so that again the surplus goes to the nearest smaller point in \({\mathcal {G}}^{2\delta }\).

    So we have the result.

\(\square \)

Proof of Lemma 5.3

Let us first prove that

$$\begin{aligned} \begin{array} [c]{l} \left| V^{\delta _{k}}({\mathbf {y}})-V^{\delta _{k}}({\mathbf {x}})\right| \\ \le \frac{2}{{\hat{p}}}V^{\delta _{k}}\left( \left\langle {\mathbf {x}}\vee {\mathbf {y}} \right\rangle ^{\delta _{k}}\right) \left( \frac{e^{(c+\lambda )\delta _{k}}-1}{\delta _{k} }\right) \left\| \left\langle {\mathbf {y}}\right\rangle ^{\delta _{k}}-\left\langle {\mathbf {x}}\right\rangle ^{\delta _{k}}\right\| _{1}+2\delta _{k} {\mathbf {a}}\cdot {\mathbf {p}}, \end{array} \end{aligned}$$
(7.18)

for any \({\mathbf {x}}\) and \({\mathbf {y}}\) in \({\mathbf {R}}_{+}^{n}\). Assume first that \({\mathbf {y}}>{\mathbf {x}}\). From Lemma 5.1, we have

$$\begin{aligned}&V^{\delta _{k}}(g^{\delta _{k}}\left( {\mathbf {m}}+{\mathbf {e}}_{i}\right) )-V^{\delta _{k}}(g^{\delta _{k}}({\mathbf {m}}))\\&\quad \le V^{\delta _{k}}(g^{\delta _{k} }\left( {\mathbf {m}}+{\mathbf {1}}\right) )-V^{\delta _{k}}(g^{\delta _{k} }({\mathbf {m}}))\le V^{\delta _{k}}(g^{\delta _{k}}({\mathbf {m}}))(e^{(c+\lambda )\delta _{k}}-1). \end{aligned}$$

Let us call \({\mathbf {m}}_{{\mathbf {y}}}=\rho ^{\delta _{k}}({\mathbf {y}})\) and \({\mathbf {m}}_{{\mathbf {x}}}=\rho ^{\delta _{k}}({\mathbf {x}})\). Then,

$$\begin{aligned} \begin{array} [c]{lll} V^{\delta _{k}}({\mathbf {y}})-V^{\delta _{k}}({\mathbf {x}}) &{} \le &{} V^{\delta _{k} }\left( g^{\delta _{k}}({\mathbf {m}}_{{\mathbf {y}}})\right) -V^{\delta _{k}}\left( g^{\delta _{k} }\left( {\mathbf {m}}_{{\mathbf {x}}}\right) \right) +{\mathbf {a}}\cdot ({\mathbf {y}} -g^{\delta _{k}}({\mathbf {m}}_{{\mathbf {y}}}))\\ &{} \le &{} \left( \frac{e^{(c+\lambda )\delta _{k}}-1}{\delta _{k}}\right) V^{\delta _{k} }({\mathbf {y}})\sum \nolimits _{i=1}^{n}\frac{g_{i}^{\delta _{k}}\left( {\mathbf {m}} _{{\mathbf {y}}}-{\mathbf {m}}_{{\mathbf {x}}}\right) }{p_{i}}+\delta _{k} {\mathbf {a}}\cdot {\mathbf {p}}\\ &{} \le &{} \left( \frac{e^{(c+\lambda )\delta _{k}}-1}{{\hat{p}}\delta _{k}}\right) V^{\delta _{k}}({\mathbf {y}})\left\| g^{\delta _{k}}\left( {\mathbf {m}} _{{\mathbf {y}}}-{\mathbf {m}}_{{\mathbf {x}}}\right) \right\| _{1}+\delta _{k}{\mathbf {a}}\cdot {\mathbf {p}}. \end{array} \end{aligned}$$

Consider now arbitrary \({\mathbf {x}}\) and \({\mathbf {y}}\) in \({\mathbf {R}}_{+}^{n}\) and set \({\mathbf {m}}_{0}=\rho ^{\delta _{k}}({\mathbf {x}}\wedge {\mathbf {y}})\); then

$$\begin{aligned} \begin{array} [c]{l} \left| V^{\delta _{k}}({\mathbf {y}})-V^{\delta _{k}} ({\mathbf {x}})\right| \\ \begin{array} [c]{ll} &{}\le V^{\delta _{k}}({\mathbf {y}})-V^{\delta _{k}}({\mathbf {x}}\wedge {\mathbf {y}})+V^{\delta _{k}}({\mathbf {x}})-V^{\delta _{k}}({\mathbf {x}} \wedge {\mathbf {y}})\\ &{}\le \frac{1}{{\hat{p}}}V^{\delta _{k}}({\mathbf {x}}\vee {\mathbf {y}})\left( \frac{e^{(c+\lambda )\delta _{k}}-1}{\delta _{k}}\right) \left( \left\| g^{\delta _{k} }\left( {\mathbf {m}}_{{\mathbf {y}}}-{\mathbf {m}}_{0}\right) \right\| _{1}+\left\| g^{\delta _{k}}\left( {\mathbf {m}}_{{\mathbf {x}}}-{\mathbf {m}} _{0}\right) \right\| _{1}\right) +2\delta _{k}{\mathbf {a}} \cdot {\mathbf {p}}\\ &{}\le \frac{2}{{\hat{p}}}V^{\delta _{k}}({\mathbf {x}}\vee {\mathbf {y}})\left( \frac{e^{(c+\lambda )\delta _{k}}-1}{\delta _{k}}\right) \left\| g^{\delta _{k}}\left( {\mathbf {m}}_{{\mathbf {y}}}-{\mathbf {m}}_{{\mathbf {x}}}\right) \right\| _{1}+2\delta _{k}{\mathbf {a}}\cdot p. \end{array} \end{array} \end{aligned}$$

Therefore we have (7.18).

By definitions (4.9) and (5.1), and since \(T_{i}\left( v^{\delta _{k}}\right) \le v^{\delta _{k}}\),

$$\begin{aligned} \begin{array} [c]{lll} {\overline{V}}({\mathbf {y}})-{\overline{V}}({\mathbf {x}}) &{} \ge &{} \overline{V}({\mathbf {y}})-V^{\delta _{k}}({\mathbf {y}})+{\mathbf {a}}\cdot g^{\delta _{k} }\left( \rho ^{\delta _{k}}({\mathbf {y}})-\rho ^{\delta _{k}}({\mathbf {x}})\right) \\ &{} &{}\quad +\,{\mathbf {a}}\cdot ({\mathbf {y}}-g^{\delta _{k}}(\rho ^{\delta _{k}} ({\mathbf {y}})-\rho ^{\delta _{k}}({\mathbf {x}}))+{\mathbf {x}})+V^{\delta _{k} }({\mathbf {x}})-{\overline{V}}({\mathbf {x}}); \end{array} \end{aligned}$$

taking the limit as k goes to infinity, we obtain the first of the two Lipschitz inequalities.

We can write, from (7.18),

$$\begin{aligned} \begin{array} [c]{lll} {\overline{V}}({\mathbf {y}})-{\overline{V}}({\mathbf {x}}) &{} = &{} \overline{V}({\mathbf {y}})-V^{\delta _{k}}({\mathbf {y}})+V^{\delta _{k}}({\mathbf {y}} )-V^{\delta _{k}}({\mathbf {x}})+V^{\delta _{k}}({\mathbf {x}})-\overline{V}({\mathbf {x}})\\ &{} \le &{} {\overline{V}}({\mathbf {y}})-V^{\delta _{k}}({\mathbf {y}})+\frac{2}{{\hat{p}} }{\overline{V}}({\mathbf {y}})\left( \frac{e^{(c+\lambda )\delta _{k}}-1}{\delta _{k} }\right) \left\| g^{\delta _{k}}\left( \rho ^{\delta _{k}}({\mathbf {y}})-\rho ^{\delta _{k}}({\mathbf {x}})\right) \right\| _{1}\\ &{} &{}\quad +\,2\delta _{k}{\mathbf {a}}\cdot {\mathbf {p}}+V^{\delta _{k}}({\mathbf {x}} )-{\overline{V}}({\mathbf {x}}); \end{array} \end{aligned}$$

taking the limit as k goes to infinity, we obtain the second of the two Lipschitz inequalities. \(\square \)
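To see why the estimates above turn into Lipschitz-type bounds in the limit, note the elementary limits

$$\begin{aligned} \lim _{\delta _{k}\rightarrow 0}\frac{e^{(c+\lambda )\delta _{k}}-1}{\delta _{k}}=c+\lambda \qquad \text {and}\qquad \lim _{\delta _{k}\rightarrow 0}2\delta _{k}\,{\mathbf {a}}\cdot {\mathbf {p}}=0, \end{aligned}$$

so that, heuristically, the factor multiplying the \(\left\| \cdot \right\| _{1}\)-distance between the grid projections of \({\mathbf {x}}\) and \({\mathbf {y}}\) converges to \(\frac{2(c+\lambda )}{{\hat{p}}}\) times the corresponding value of \({\overline{V}}\), while the additive error vanishes.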


Cite this article

Azcue, P., Muler, N. A Multidimensional Problem of Optimal Dividends with Irreversible Switching: A Convergent Numerical Scheme. Appl Math Optim 83, 1613–1649 (2021). https://doi.org/10.1007/s00245-019-09602-0
