Cramér–Lundberg model for some classes of extremal Markov sequences

The classical Cramér–Lundberg model was the first attempt to describe the financial condition of an insurance company. Incomes are approximated by a steady stream of money, and insurance payments are not limited and can take any value from zero to infinity. The company does not invest any part of its money and has no employees, shareholders, or enterprise maintenance costs. Many modifications of the Cramér–Lundberg model cover at least some of the problems described here, but they usually require insight into the internal financial policy of the insurance company. We propose another modification, based on Markov processes defined by generalized convolutions. Thanks to generalized convolutions, we can stochastically approximate the internal financial policy of the company from publicly available data. In this paper, we focus on computing the ruin probability in the Cramér–Lundberg model over an infinite time horizon for Markov processes whose transition probabilities are defined by generalized convolutions, in particular by the α-convolution, the maximal convolution, and the Kendall convolution.


Introduction
The classical Cramér-Lundberg risk model was introduced by Lundberg [19] in 1903 and developed by Cramér [7] and his Stockholm School at the beginning of the XX century. Rich information on actuarial risk theory, nonlife insurance models, and financial models can be found in a large number of papers and books (e.g., [2,5,8,10,11,21,22,23,24]). We refer the interested reader to the series of P. Embrechts' papers and especially to the book of Embrechts, Klüppelberg, and Mikosch [9]. Among many interesting results, we can find there very important modifications of Cramér-Lundberg models for heavy-tailed distributions. This means that, in fact, the original simple Cramér-Lundberg model rather describes the state of the water supply in an underground tank when the temperature and humidity in the cave are constant and water drips from the ceiling of the cave with constant intensity. Animals come to this tank with constant intensity and drink as much water as they need, as long as their needs can be described by an exponential distribution (which is a rather reasonable assumption).
There are many modifications of the Cramér-Lundberg model; some of them can be found in [9]. In these modifications, some of the defects are eliminated. However, this usually means that much more information about the insurance company policy is required for the description, whereas no insurance company wants to make such information publicly available. In the next section, we propose a family of models based on a special class of Markov chains. The secret information of the insurance company can be encoded in the transition probabilities. This approach may be more convenient than taking into consideration various company policy elements.
The paper is organized as follows: In Section 2, we present our model. The considered random walks are very special: the transition probabilities are defined by an Urbanik generalized convolution. The basics of generalized convolutions are described in Section 3, and the construction of the random walk with respect to a generalized convolution is given in Section 4. The detailed calculations are given in the last three sections for the following examples: a random walk with respect to the stable convolution, the max-convolution, and the Kendall random walk.
B.H. Jasiulis-Gołdyn, A. Lechańska, and J.K. Misiewicz

Notation. By N_0 we denote the set of natural numbers including zero. By P_+ we denote the set of probability measures on the positive half-line [0, ∞). If λ_n converges weakly to λ, then we write λ_n → λ. For simplicity, we denote by T_a the rescaling (dilation) operator defined by T_0 λ = δ_0 and (T_a λ)(A) = λ(A/a) for all Borel sets A and a ≠ 0.

Description of the proposed model
In our model, we assume that the insurance company invests at least part of its money and has employees and shareholders that have income; at each moment when an insurance payment request comes, the company calculates the total claims amount, subtracts from this all costs, and adds benefits. Thus the corrected cost of the total outcome for claims is not just a simple sum of the X_k defined on (A, A, P). In fact, in this model the financial situation of the company after paying claim X_k can even be better than before. A rich collection of generalized convolutions and freedom in choosing the claim distribution λ give the possibility of adjusting the model to the real situation without precise information about the company's activities.
We propose the following structure of the model.
(a) Claim times: the claims occur at the random instants of time S_n = T_1 + ··· + T_n, where the interarrival times T_1, T_2, T_3, ... are i.i.d. random variables with exponential distribution with ET_k = 1/λ.
(b) Claim arrival process: the number of claims in the time interval [0, t] is the Poisson process with parameter λ > 0 defined by N_t = sup{n ≥ 1: S_n < t}.
(c) Cumulated claims process: the total amount of money spent on the first n claims, corrected by the part of the incomes other than premium and/or some of the costs, is a discrete-time Markov process {X_n, n ∈ N_0}, which is a ⋄-Lévy process with step distribution U_i ~ μ ∈ P_+ (see Sections 3 and 4) and transition probabilities P_n(x, •). The sequence (U_i) is i.i.d. We denote the cumulative distribution function of the measure μ by F, its density by f, and the generalized characteristic function by Φ_μ, and we set H(t) = Φ_μ(t^{−1}).
(d) Cumulated income units: the total insurance premium collected by the company up to the moment of the nth claim, corrected by part of the cost of the company activity and/or part of the income from the investments, is a discrete-time Markov process {Y_n, n ∈ N_0}, which is a ⋄-Lévy process with step distribution V_i ~ ν ∈ P_+ (see Sections 3 and 4) and transition probabilities Q_n(x, •). We denote the cumulative distribution function of the measure ν by G, its density by g, and the generalized characteristic function by Φ_ν, and we set J(t) = Φ_ν(t^{−1}).
(e) Independence assumption: the processes {N_t, t ≥ 0}, {X_n, n ∈ N_0}, and {Y_n, n ∈ N_0} are defined on (A, A, P) and are independent.
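The claim-time mechanism in (a)–(b) is easy to simulate. The following Python sketch (ours, not from the paper; the function names are our own) draws the claim instants S_n from i.i.d. exponential interarrival times and counts N_t; by construction N_t has the Poisson(λt) distribution, so its empirical mean should be close to λt.

```python
import random

def claim_times(lam, horizon, rng):
    """Claim instants S_n = T_1 + ... + T_n in [0, horizon], where the
    interarrival times T_k are i.i.d. exponential with E T_k = 1/lam."""
    times, s = [], 0.0
    while True:
        s += rng.expovariate(lam)   # T_k ~ Exp(lam)
        if s >= horizon:
            return times
        times.append(s)

def n_t(lam, t, rng):
    """N_t = number of claims in [0, t]; it is Poisson(lam * t) distributed."""
    return len(claim_times(lam, t, rng))

# Empirical mean of N_t over many runs should be close to lam * t = 20.
rng = random.Random(0)
lam, t = 2.0, 10.0
mean_claims = sum(n_t(lam, t, rng) for _ in range(5000)) / 5000
```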
The risk process is defined as

R_t = u ⊕ Y_{N_t} − X_{N_t}, t ≥ 0,

where u ⊕ Y_n is the Markov process {Y_n, n ∈ N_0} with the starting point moved to u > 0 in the generalized convolution sense (see, e.g., [4]), ⊕ denotes adding in the generalized convolution sense, and X_n ~ μ^{⋄n}, Y_n ~ ν^{⋄n}, respectively (for the notation ⋄, see Section 3). Let Q_t(u) denote the probability that the insurance company with the initial capital u > 0 goes bankrupt by time t. Since the changes in the process {R_t, t ≥ 0} can occur only at the moments S_n, n ≥ 0, we see that

Q_t(u) = P(∃ n ≤ N_t: X_n > u ⊕ Y_n).

Calculating the same probability in the unbounded time horizon, we see that

Q_∞(u) = P(∃ n ∈ N_0: X_n > u ⊕ Y_n).

Notice that Q_∞(u) does not depend on the process {N_t}. This is natural since in our case this process describes only the moments of claim arrivals, and {N_t} is independent of the processes {X_n} and {Y_n}. Every continuous-time Markov chain taking values in (the whole!) N_0 would give the same result. For brevity, we introduce the notation δ(u) = 1 − Q_∞(u) for the probability that ruin will not occur.

Basic information about generalized convolutions

Following Urbanik (see [25,26,28,29,30]), we give the following:

DEFINITION 1. A commutative and associative P_+-valued binary operation ⋄ defined on P_+^2 = P_+ × P_+ is called a generalized convolution if for all λ, λ_1, λ_2 ∈ P_+ and a ≥ 0 the following conditions are satisfied:
(i) δ_0 ⋄ λ = λ;
(ii) (pλ_1 + (1 − p)λ_2) ⋄ λ = p(λ_1 ⋄ λ) + (1 − p)(λ_2 ⋄ λ) for all p ∈ [0, 1];
(iii) (T_a λ_1) ⋄ (T_a λ_2) = T_a(λ_1 ⋄ λ_2);
(iv) if λ_n → λ, then λ_n ⋄ η → λ ⋄ η for all η ∈ P_+ and λ_n ∈ P_+;
(v) there exists a sequence (c_n)_{n∈N} of positive numbers such that the sequence T_{c_n} δ_1^{⋄n} converges to a measure different from δ_0.

We call the pair (P_+, ⋄) a generalized convolution algebra. A continuous mapping h: P_+ → R such that h(pλ + (1 − p)ν) = p h(λ) + (1 − p) h(ν) and h(λ ⋄ ν) = h(λ)h(ν) for all λ, ν ∈ P_+ and p ∈ (0, 1) is called a homomorphism of (P_+, ⋄).
Every convolution algebra (P_+, ⋄) admits two trivial homomorphisms, h ≡ 1 and h ≡ 0. We say that a generalized convolution ⋄ is regular if it admits a nontrivial homomorphism. If the generalized convolution is regular, then its homomorphism is uniquely determined in the sense that if h_1, h_2 are homomorphisms of (P_+, ⋄), then there exists c > 0 such that h_1(λ) = h_2(T_c λ) (for details, see [25,26,28,29,30]). It was also shown in [25,26,28,29,30] that the generalized convolution is regular if and only if there exists a function Φ, unique up to a scale, assigning to each λ ∈ P_+ a function Φ_λ such that for all λ, ν, λ_n ∈ P_+ the following conditions are satisfied:
1. Φ_{pλ+(1−p)ν} = pΦ_λ + (1 − p)Φ_ν for p ∈ [0, 1];
2. Φ_{T_a λ}(t) = Φ_λ(at) for a ≥ 0;
3. Φ_{λ⋄ν} = Φ_λ Φ_ν;
4. the uniform convergence of Φ_{λ_n} on every compact set to a function Φ is equivalent to the existence of λ ∈ P_+ such that Φ = Φ_λ and λ_n → λ.
The function Φ_λ is called the ⋄-generalized characteristic function of the measure λ. Let Ω(t) = h(δ_t). By properties 1 and 2 of the characteristic function we see that Φ_λ(t) = ∫_0^∞ Ω(ts) λ(ds), and thus the function Ω is called the kernel of the generalized characteristic function (similarly as the function e^{it} is the kernel of the Fourier transform, that is, of the classical characteristic function).
3.0. The classical convolution, denoted by *, is given by δ_a * δ_b = δ_{a+b}. Here we have Ω(t) = e^{−t} if we consider this convolution on P_+ and Ω(t) = e^{it} if we consider it on the whole line.

3.1. The symmetric convolution on P_+ is defined by δ_a ⊕ δ_b = ½ δ_{|a−b|} + ½ δ_{a+b}. The kernel of the generalized characteristic function here is Ω(t) = cos(t).

3.2. By the stable convolution *_α for α > 0 we mean the following: δ_a *_α δ_b = δ_{(a^α + b^α)^{1/α}}. The kernel of the generalized characteristic function here is Ω(t) = e^{−t^α}.

3.3. The maximal convolution (∞-convolution) on P_+ is given by δ_a ∨ δ_b = δ_{max(a,b)}. This convolution admits the existence of a characteristic function, but its kernel is not continuous: Ω(t) = 1_{[0,1]}(t).

3.4. The Kendall convolution △_α, α > 0, is defined for x ∈ [0, 1] by δ_x △_α δ_1 = x^α π_{2α} + (1 − x^α) δ_1, where π_{2α} is the Pareto measure with density 2αx^{−2α−1} 1_{[1,∞)}(x). The kernel of the generalized characteristic function here is given by Ω(t) = (1 − t^α)_+, where a_+ = a for a ≥ 0 and a_+ = 0 otherwise.

3.5. The Kingman convolution ⊗_{ω_s} on P_+, s > −1/2, is defined so that δ_a ⊗_{ω_s} δ_b is the distribution of (a² + b² + 2abθ_s)^{1/2}, where θ_s is absolutely continuous with the density function f_s. The kernel of the generalized characteristic function here is given by the Bessel function of the first kind with parameter connected with s.

3.6. For every p ≥ 2 and properly chosen c > 0, the corresponding function is the kernel of a Kendall-type (see [20]) generalized convolution, defined for x ∈ [0, 1] by a formula in which λ_1, λ_2 are probability measures absolutely continuous with respect to the Lebesgue measure and independent of x; for example, for c = (p − 1)^{−1} both measures can be written explicitly.

Random walk with respect to a generalized convolution

All the information contained in this section comes from [4], where the Lévy processes with respect to generalized convolutions were defined and studied. It was shown there that each such process is a Markov process (in the classical sense) with transition probabilities defined by the generalized convolution. We consider here only discrete-time stochastic processes of this kind.
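On point masses, the examples above reduce to simple arithmetic. The following minimal Python illustration (our own helper names, not from the paper) shows "addition" of δ_a and δ_b under the classical, stable, and maximal convolutions, and that *_α interpolates between ordinary addition (α = 1) and the maximum (α → ∞).

```python
def classical(a, b):
    """delta_a * delta_b = delta_{a+b}."""
    return a + b

def stable(a, b, alpha):
    """delta_a *_alpha delta_b = delta_{(a^alpha + b^alpha)^(1/alpha)}."""
    return (a ** alpha + b ** alpha) ** (1.0 / alpha)

def maximal(a, b):
    """delta_a (max-convolution) delta_b = delta_{max(a, b)}."""
    return max(a, b)

# The stable convolution interpolates between ordinary addition (alpha = 1)
# and the maximum (alpha -> infinity):
three_four = stable(3.0, 4.0, 2.0)      # (9 + 16) ** 0.5 = 5.0
almost_max = stable(3.0, 4.0, 200.0)    # very close to max(3, 4) = 4
```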

4.1.
A discrete-time stochastic process {X_n, n ∈ N_0} is a random walk with respect to the generalized convolution ⋄ with step distribution μ if it is the Markov process with the transition probabilities

P_{k,n}(x, A) = (δ_x ⋄ μ^{⋄(n−k)})(A), k ≤ n.

The consistency of this definition and the existence of a random walk with respect to a generalized convolution were shown in [4]. Notice that in the case of the classical convolution, it is the simple random walk with step distribution μ and can be simply represented as X_n = U_1 + ··· + U_n, where U_1, U_2, ... are i.i.d. random variables with distribution μ.

There are only two cases where the generalized convolution is representative, that is, there exists a sequence of functions f_n such that X_n = f_n(U_1, ..., U_n) for i.i.d. steps (U_k). For other generalized convolutions, rewriting the convolution in the language of the corresponding independent random variables is more complicated (if possible at all) and requires some additional random variables. For example:

Lith. Math. J., 63(3):272–290, 2023.

4.4. For the Kendall convolution △_α and x ∈ [0, 1], the measure δ_x △_α δ_1 is the distribution of the random variable Π_{2α}^{1{Q < x^α}}, where Q has the uniform distribution on [0, 1], Π_{2α} has the Pareto distribution with density π_{2α} described in Example 3.4, and Q and Π_{2α} are independent.

4.5. For the Kingman convolution and a, b > 0, we can define δ_a ⊗_{ω_s} δ_b as the distribution of the random variable (a² + b² + 2abθ_s)^{1/2}, where θ_s is absolutely continuous with the density function f_s described in Example 3.5.

Model for the *_α random walk
For the *_α generalized convolution on P_+, we have X_n = (U_1^α + ··· + U_n^α)^{1/α}, where (U_k) are i.i.d. random variables with cumulative distribution function F_U responsible for the damage claim values. By F = F_{U^α} we denote the cumulative distribution function of U^α. We assume also that m_α = EU_1^α < ∞. We assume here that the variables V_k, responsible for the insurance premium during the time T_k, are i.i.d. with the cumulative distribution function F_V(x) = 1 − e^{−γx^α} for x > 0. This assumption seems natural since this is the distribution with the lack of memory property (see [15]) for the *_α-convolution. Consequently, Y_n = (V_1^α + ··· + V_n^α)^{1/α}, and the variables V_k^α are exponentially distributed with parameter γ.

Now we have

R_{S_n}^α = u^α + V_1^α + ··· + V_n^α − (U_1^α + ··· + U_n^α).

We want to calculate the ruin probability (see [2,9]) for the insurance company with initial capital u by time t. Since the ruin can occur only at the arrival moments of the claims, that is, at the moments of jumps of the Poisson process N_t, it suffices to consider R_{S_n}. Basically, we can calculate the function δ(u^α) following the classical calculations; in the last step of these calculations, we substitute u^α + β^α y = z. Next we calculate the derivative of both sides of this equality with respect to u^α. Integrating both sides of the resulting equality over the set [0, t] with respect to the measure with the density function αu^{α−1} for u > 0, we obtain two integrals on the right-hand side; we denote the first of them by I_1 and the second by I_2. In the second integral, we change the order of integration and then substitute u^α − x = r; the last equality is obtained by integrating by parts, using F(0) = 0. To calculate δ(0), notice first that the ruin probability for the insurance company with infinite initial capital is zero; thus δ(∞) = 1, and consequently δ(0) = 1 − γm_α/β^α. For convenience in further calculations, we substitute t^α = z and pass to the Laplace–Stieltjes transform, with Ḡ(x) = 1 − F(x) denoting the survival function. Since the Laplace–Stieltjes transform uniquely determines a function, we finally obtain that in this case the ruin probability for the insurance company with initial capital u equals Q_∞(u) = 1 − δ(u^α) with the function δ obtained from equation (*).
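The value δ(0) = 1 − γm_α/β^α can be checked by simulation. The sketch below is ours (with the normalization β = 1 as an assumption) and works in α-th powers, where the model becomes an ordinary random walk: premiums V_k^α are Exp(γ) by the lack-of-memory assumption, claims U_k^α have mean m_α, and ruin means the walk ever goes negative. With U_k^α ~ Exp(2), so m_α = 1/2, and γ = 1, the predicted ruin probability from u = 0 is γm_α = 1/2.

```python
import random

def ruin_prob_from_zero(gamma, claim_alpha_sampler, n_paths=10000, n_steps=300, seed=0):
    """Monte Carlo estimate of the ruin probability Q_inf(0) for the *_alpha
    model written in alpha-th powers: the surplus after n claims is
        sum_{k<=n} (V_k^alpha - U_k^alpha)   (initial capital u = 0),
    with V_k^alpha ~ Exp(gamma) (lack-of-memory premium) and U_k^alpha drawn
    by claim_alpha_sampler.  Ruin = the walk ever drops below 0; the infinite
    horizon is truncated at n_steps, which is harmless because the positive
    drift makes late ruin exponentially unlikely."""
    rng = random.Random(seed)
    ruined = 0
    for _ in range(n_paths):
        s = 0.0
        for _ in range(n_steps):
            s += rng.expovariate(gamma) - claim_alpha_sampler(rng)
            if s < 0.0:
                ruined += 1
                break
    return ruined / n_paths

# Claims with U_k^alpha ~ Exp(2), so m_alpha = 1/2; premiums with gamma = 1.
# The formula delta(0) = 1 - gamma * m_alpha predicts Q_inf(0) = 1/2.
est = ruin_prob_from_zero(1.0, lambda rng: rng.expovariate(2.0))
```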

Model for ∞-generalized convolution
For the random walk with respect to the ∞-convolution on P_+, we have X_n = max{U_1, ..., U_n} and Y_n = max{V_1, ..., V_n}, where (U_k) and (V_k) are independent sequences of i.i.d. positive random variables with distributions μ and ν and cumulative distribution functions F and G, respectively. Consequently, X_n has the cumulative distribution function F^n, Y_n has the cumulative distribution function G^n, and u ⊕ Y_n = max{u, Y_n} has the cumulative distribution function G(x)^n 1_{[u,∞)}(x). The first safety condition for the insurance company is ER_t > 0; thus we need to calculate EX_t and E(u ⊕ Y_t), where X_t = Σ_{n≥0} X_n 1_{{N_t=n}} and analogously for u ⊕ Y_t.
We have

EX_t = ∫_0^∞ (1 − e^{−λtF̄(x)}) dx,

where F̄ = 1 − F is the survival function of U_k. To calculate E(u ⊕ Y_t), notice first that the variable u ⊕ Y_n takes the value u with probability G(u)^n, and thus

E(u ⊕ Y_t) = u + ∫_u^∞ (1 − e^{−λtḠ(x)}) dx

with the same notation Ḡ = 1 − G. Consequently, the first safety condition in the case of the ∞-generalized convolution is E(u ⊕ Y_t) > EX_t with both expectations given above. Usually, we take the random variables V_k with a distribution having the lack of memory property, which in the case of the ∞-convolution is given by the cumulative distribution function G(x) = 1_{(a,∞)}(x) for some a > 0 (see [15] for details). In this case, we have E(u ⊕ Y_t) = u ∨ a.
Consequently, the first safety condition for the ∞-convolution is u ∨ a > EX_t. Calculating the probability that the company will not go bankrupt in the unbounded time horizon, we consider two cases. If the insurance premium has the distribution with the lack of memory property, that is, G(x) = 1_{(a,∞)}(x), then ruin occurs as soon as some claim U_k exceeds u ∨ a, so that δ(u) = lim_{n→∞} F(u ∨ a)^n. We see that bankruptcy in the unbounded time horizon is certain if only the random variables U_k can take any positive value, that is, if F(x) < 1 for all x > 0. However, if the largest possible claim is not larger than u ∨ a, then bankruptcy is impossible, and δ(u) = 1.
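For the memoryless premium G(x) = 1_{(a,∞)}(x), the survival probability over the first n claims is F(u ∨ a)^n, since ruin happens exactly when some claim exceeds the constant level u ∨ a. A small Monte Carlo check of this closed form (our own code; the uniform claim distribution is an assumption for illustration):

```python
import random

def survival_after_n_claims(u, a, claim_sampler, n, n_paths=20000, seed=0):
    """Max-convolution model with the memoryless premium V_k = a: the level
    u (+) Y_n = max(u, a) never grows, so the company survives the first n
    claims iff every claim U_k <= max(u, a)."""
    rng = random.Random(seed)
    level = max(u, a)
    ok = sum(
        1 for _ in range(n_paths)
        if all(claim_sampler(rng) <= level for _ in range(n))
    )
    return ok / n_paths

# Claims uniform on (0, 2): F(max(u, a)) = F(1.5) = 0.75, so the closed form
# for survival after n = 5 claims is 0.75 ** 5.
est = survival_after_n_claims(1.5, 1.0, lambda r: r.uniform(0.0, 2.0), 5)
```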
If we assume that the cumulative distribution functions F and G are not trivial, then we obtain a renewal-type equation for δ(u); if, moreover, the distribution functions F and G have densities f and g, then differentiating both sides of this equation, we obtain a differential equation for δ.
Example. Assume that for 0 < a < b, we have suitable distribution functions F and G. Then we compute δ(u) for u ∈ (0, a) and for u ∈ (a, b); if u > b, then evidently δ(u) = 1. Since in our case the function δ is continuous, we have δ(0) = 1 − a/b, and thus finally we obtain δ on the whole half-line.

Model for the Kendall random walk

As we have seen in the previous section, the renewal process based on the max-generalized convolution is rather trivial. For example, in the case of the very natural step distribution F(x) = 1_{(1,∞)}(x), it does not move at all. The Kendall convolution generalizes the max-convolution in the sense that the Kendall convolution of two nonnegative random variables equals their maximum with positive probability and otherwise is larger than the maximum. For this generalized convolution, we do not get a trivial process.
The Kendall random walk, that is, a random walk {X_n, n ∈ N_0} with respect to the Kendall convolution △_α for fixed α > 0, can be described by the recursive construction given below. We can obtain explicit formulas for X_n, but in addition to the sequence (U_n), we also need two sequences of random variables (catalyzers of △_α-adding): a sequence (Q_n) of i.i.d. random variables uniform on [0, 1] and a sequence (Π_n) of i.i.d. random variables with the Pareto distribution π_{2α}, where all these sequences are independent. Then the Kendall random walk has the following representation: X_0 ≡ 0 and

X_{n+1} = max(X_n, U_{n+1}) · Π_{n+1}^{1{Q_{n+1} < θ_{n+1}}}, where θ_{n+1} = (min(X_n, U_{n+1})/max(X_n, U_{n+1}))^α.

This representation is especially helpful if we want to make a computer simulation of the Kendall random walk. For calculations, however, it is more convenient to use the Markov property and the transition probabilities P_{k,n}(dx) given in Definition 2.
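A simulation sketch of this construction (our reading of the recursion, so the helper names and details are our own): given the state x and a new step u, put m = max(x, u) and z = (min(x, u)/m)^α; the next state is m with probability 1 − z and m·Π with probability z, where Π ~ π_{2α} is sampled by inversion of the Pareto c.d.f.

```python
import random

def kendall_step(x, u, alpha, rng):
    """One transition of the Kendall random walk: with m = max(x, u) and
    z = (min(x, u) / m) ** alpha, return m with probability 1 - z and
    m * P with probability z, where P ~ pi_{2 alpha} is Pareto with density
    2*alpha * t**(-2*alpha - 1) on [1, inf), sampled by inversion."""
    m = max(x, u)
    if m == 0.0:
        return 0.0
    z = (min(x, u) / m) ** alpha
    if rng.random() < z:
        pareto = (1.0 - rng.random()) ** (-1.0 / (2.0 * alpha))
        return m * pareto
    return m

def kendall_walk(steps, alpha, rng):
    """X_0 = 0 and X_{n+1} = kendall_step(X_n, U_{n+1}, ...)."""
    x, path = 0.0, [0.0]
    for u in steps:
        x = kendall_step(x, u, alpha, rng)
        path.append(x)
    return path

rng = random.Random(0)
# From x = 0.5 with step u = 1 and alpha = 1: z = 0.5, so the chain should
# stay exactly at max(x, u) = 1 about half of the time.
one_step = [kendall_step(0.5, 1.0, 1.0, rng) for _ in range(20000)]
frac_at_max = sum(1 for r in one_step if r == 1.0) / len(one_step)
path = kendall_walk([0.3, 0.9, 0.2, 1.5, 0.7], 0.75, rng)
```

Note that the walk is nondecreasing: each transition returns max(x, u) times a factor ≥ 1.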
Recall that for the Kendall convolution, the generalized characteristic function has the kernel Ω(t) = (1 − t^α)_+. We consider here two Markov chains {X_n : n ∈ N_0} and {Y_n : n ∈ N_0} with transition probabilities given respectively as follows.
Lemma 1. For all x, y, t ≥ 0 and μ, ν ∈ P_+, we have the following two equalities.

Proof. For the second formula of the first equality, it suffices to apply Ψ(x/t) = (1 − x^α/t^α)_+. The second equality is proved directly for v < t; the last formula is equivalent to the previous one. □

Inversion formula and cumulative distribution functions
The generalized characteristic function for the Kendall convolution is the Williamson integral transform

Φ_μ(t) = ∫_0^∞ (1 − (xt)^α)_+ μ(dx).

Notice that the Williamson transform (see [1,14,16,17,32]) is easy to invert. If μ has a cumulative distribution function F, then for H(t) := Φ_μ(1/t) we have

H(t) = ∫_0^t (1 − x^α t^{−α}) dF(x) = F(t) − t^{−α} ∫_0^t x^α dF(x).

Differentiating both sides with respect to t, we obtain

H′(t) = α t^{−α−1} ∫_0^t x^α dF(x),

and thus also

F(t) = H(t) + t H′(t)/α.

Applying this technique to the c.d.f. F_n of the Kendall random walk X_n with step variables (U_k), for which Φ_{X_n}(1/t) = H(t)^n, we see that

F_n(t) = H(t)^n + n t H(t)^{n−1} H′(t)/α.

Since X_t = Σ_{n=0}^∞ X_n 1_{{N_t=n}}, we see that the c.d.f. F_t of X_t is given by

F_t(x) = Σ_{n=0}^∞ e^{−λt} (λt)^n/n! · F_n(x).

The cumulative distribution function F_{v,n}(t) of v ⊕ X_n is given in Lemma 2. We also need to calculate the cumulative distribution function G_{u,n} of the variable u ⊕ Y_n, where Y_n is the Kendall random walk with steps (V_k), i.i.d. random variables with distribution ν and distribution function G. We see that u ⊕ Y_n has the distribution δ_u △_α ν^{△_α n} and the generalized characteristic function Ω(u/t) J(t)^n. This distribution has an atom at the point u of the weight G_{u,n}(u+) = J(u)^n and an absolutely continuous part with the corresponding density. Consequently, the distribution of u ⊕ Y_t has an atom at u of the weight G_{u,t}(u+) = e^{−λt(1−J(u))}.
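As a numerical sanity check of the inversion formula F(t) = H(t) + tH′(t)/α (our own code; the uniform step distribution is an assumption chosen because H then has a closed form), the sketch below computes H for μ uniform on [0, 1] and recovers F with a finite-difference derivative:

```python
def H_uniform(t, alpha):
    """H(t) = Phi_mu(1/t) = int_0^1 (1 - (x/t)^alpha)_+ dx for mu uniform
    on [0, 1]; the integrand vanishes for x > t, giving a closed form."""
    if t <= 0.0:
        return 0.0
    s = min(t, 1.0)
    # int_0^s (1 - (x/t)^alpha) dx = s - s^(alpha+1) / ((alpha+1) * t^alpha)
    return s - s ** (alpha + 1.0) / ((alpha + 1.0) * t ** alpha)

def recover_F(t, alpha, h=1e-5):
    """Inversion formula F(t) = H(t) + t * H'(t) / alpha, with H' replaced
    by a central finite difference."""
    dH = (H_uniform(t + h, alpha) - H_uniform(t - h, alpha)) / (2.0 * h)
    return H_uniform(t, alpha) + t * dH / alpha
```

For the uniform distribution the recovered c.d.f. should satisfy F(t) ≈ t on (0, 1) and F(t) ≈ 1 for t > 1, for any α > 0.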

First safety condition for the insurance company
In the classical theory, the first safety condition for the insurance company states that ER_t > 0 for all t > 0.
In our case, we have ER_t = E(u ⊕ Y_t) − EX_t. First, we calculate EX_t^α, assuming that the distribution of U_1 is absolutely continuous with respect to the Lebesgue measure (if this is not the case, then we add the atomic part). In a similar way, for an absolutely continuous distribution of V_1, we obtain EY_t^α. If we consider ν as the distribution with the lack of memory property for the Kendall convolution (see [15]), then for some c > 0 we have an explicit form of G, and, assuming that 1_{[a,b]} ≡ 0 for a > b, we can compute E(u ⊕ Y_t)^α. Consequently, for u > c^{−1} = ((α + 1)/α) ∫ x dG(x), which is a natural assumption since the initial capital should be significant, we obtain E(u ⊕ Y_t)^α expressed through u^α e^{−λt(cu)^{−α}}, u^α, and λt c^{−α}.
For μ with the lack of memory property with parameter c > 0 in the Kendall convolution algebra, we obtain the analogous expression for EX_t^α. Comparing the two expressions, we conclude that ER_t > 0, that is, the first safety condition holds.
Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third-party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/
