Fitting phase-type scale mixtures to heavy-tailed data and distributions

We consider the fitting of heavy-tailed data and distributions, with special attention to distributions with a non-standard shape in the "body" of the distribution. To this end we consider a recently introduced dense class of heavy-tailed distributions, employing an EM algorithm to obtain the maximum likelihood estimates of its parameters. We present methods for fitting to observed data, histograms and censored data, as well as to theoretical distributions. Numerical examples are provided with simulated data and a benchmark reinsurance dataset. We empirically demonstrate that our model can provide excellent fits to heavy-tailed data and distributions under minimal assumptions.


INTRODUCTION
In this paper we consider maximum likelihood estimation for a dense class of nonnegative heavy-tailed distributions, referred to as scale mixtures of phase-type distributions (NPH), which was defined in [3]. Distributions in the NPH class allow for the simultaneous modelling of the "body" and the "tail" of general distributions, which are assumed to be absolutely continuous and nonnegative, while their "tails" are assumed to belong to some general class of heavy-tailed distributions, such as the Regularly Varying or Weibullian classes. These very general assumptions allow us to fit heavy-tailed distributions which may look distinctively different from distributions usually found in catalogues.
Apart from providing an adequate description of data, distributions from NPH can also be seen as infinite-dimensional phase-type distributions, which to all intents and purposes are as tractable as their finite-dimensional counterparts. Much of the machinery available for finite-dimensional phase-type distributions is also applicable to the extended class. Algorithms are for example available for the exact calculations of properties related to renewal theory, random walks (ladder processes) and ruin probabilities (see [3] for details).
The maximum likelihood estimation will be carried out with an EM algorithm, similarly to the case of finite-dimensional phase-type distributions [2]. The main challenge we face is the algorithmic implementation resulting from the extension to infinite dimensions: we cannot make a prefixed cut-off in the number of dimensions, as this would be equivalent to a light-tailed estimation. We shall see, however, that it is possible to reduce the formulas to simple (one-dimensional) infinite series which can be numerically evaluated to a specified degree of precision. We are also able to simplify certain expressions of the original paper [2], obtaining explicit expressions which allow for improved numerical performance.
The rest of the paper is organized as follows. In Section 3 we develop the main algorithm for estimating independent and identically distributed data sampled from an NPH distribution. We show that the algorithm essentially uses the empirical cumulative distribution function, and that a significant increase in speed may be obtained by grouping the data into intervals on parts of the support and considering the corresponding histograms. In Section 4 we adjust the algorithm to cope with the presence of left-, right- or interval-censored data. Section 5 provides an EM algorithm for adjusting an NPH distribution to a theoretical distribution function F, which in turn is equivalent to finding the distribution in the NPH class of a given order which minimizes the Kullback-Leibler distance to F. In Section 6 we provide numerical examples with simulated data, real data and fits to theoretical distributions. There, we highlight the minimal assumptions needed for efficiently fitting general Regularly Varying and Weibullian distributions.

PHASE-TYPE DISTRIBUTIONS AND THE NPH CLASS
Before proceeding with a more detailed account, we first provide some background on phase-type distributions and the extended class NPH of scale mixtures of phase-type distributions.
The class PH of phase-type distributions consists of distributions of (random) times until a finite-state Markov jump process exits a set of transient states. This can be made precise by letting E = {1, 2, ..., p, p + 1} denote the state space of a Markov jump process {X_t}_{t≥0}, where states 1, 2, ..., p are transient and p + 1 is absorbing. The intensity matrix for {X_t}_{t≥0} can then be written in the form

Λ = ( T  t ; 0  0 ),

where T is a p × p sub-intensity matrix, t is a p-dimensional column vector and 0 is the p-dimensional row vector of zeroes. We follow the convention that matrices are written in capital bold and their elements with the corresponding minuscule (like A = {a_ij}), bold minuscule Greek letters (like α) are row vectors, while bold minuscule Roman letters (like t) are column vectors. Their dimensions are usually clear from the context and left unspecified unless necessary. Let α_i = P(X_0 = i), ∑_{i=1}^p α_i = 1 and α = (α_1, ..., α_p). Then we say that the time to absorption τ = inf{s ≥ 0 : X_s = p + 1} has a phase-type distribution with representation PH_p(α, T). Since t = −T e, where e denotes the column vector of ones, the distribution of τ is fully specified in terms of α and T. For further background on phase-type distributions we refer e.g. to [?], [4], [7] or [8].
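The density f(y) = α e^{Ty} t and survival function α e^{Ty} e of a phase-type distribution are straightforward to evaluate numerically. The following is a minimal sketch (not part of the paper; the two-phase parameters at the end are purely illustrative), assuming SciPy is available:

```python
import numpy as np
from scipy.linalg import expm

def ph_density(y, alpha, T):
    """Phase-type density f(y) = alpha * exp(T y) * t, with exit vector t = -T e."""
    t = -T @ np.ones(T.shape[0])
    return float(alpha @ expm(T * y) @ t)

def ph_survival(y, alpha, T):
    """Phase-type survival function P(tau > y) = alpha * exp(T y) * e."""
    return float(alpha @ expm(T * y) @ np.ones(T.shape[0]))

# A hypothetical two-phase example (illustrative parameters only)
alpha = np.array([0.7, 0.3])
T = np.array([[-2.0, 1.0],
              [0.0, -3.0]])
```

For p = 1 this reduces to the exponential distribution, which gives a quick sanity check of the formulas.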
The class of phase-type distributions is widely used in Applied Probability, where it often provides exact (or even explicit) solutions in complex stochastic models. This is for example the case for ruin probabilities in risk theory or waiting time distributions for queues. Any distribution with support on the positive real numbers may be approximated arbitrarily closely by a phase-type distribution. In spite of this denseness property, phase-type distributions are all light-tailed, and consequently any approximating (finite-dimensional) phase-type distribution will not be able to capture a possible heavy-tailed behaviour.
In [3] the dense class, NPH, of genuinely heavy-tailed distributions is proposed in terms of infinite-dimensional phase-type distributions with a finite number of parameters. The idea is very simple and goes as follows. Let N be a discrete random variable with support on {s_i : i ∈ N, s_i > 0} and distribution π = {π_i}_{i≥1}, where π_i = P(N = s_i). If N, for example, is a discretization of a continuous distribution G at step length ∆, then s_i = i∆ and π_i = G(i∆) − G((i − 1)∆). Let τ ∼ PH_p(α, T) be independent of N and set Y = Nτ. When sampling from Y, we first draw an index i (referred to as the level) from π and then a phase-type random variable from PH_p(α, T/s_i). Hence, the random variable Y = Nτ may be seen as a scale mixture of phase-type distributions, so we often refer to N as the scaling random variable and its distribution π as the scaling distribution of the NPH. Writing T_i = T/s_i and t_i = −T_i e, the density of Y can be written as

f_Y(y) = ∑_{i=1}^∞ π_i α e^{T_i y} t_i.

The distribution of Y can also be seen as an infinite-dimensional phase-type distribution, since we can write f_Y(y) = (π ⊗ α) e^{Γy} γ, where Γ is the block-diagonal matrix with blocks T_1, T_2, ... and γ is the column vector obtained by stacking t_1, t_2, ...; similarly, the survival function is (π ⊗ α) e^{Γy} e, where e is now the infinite-dimensional column vector of ones. We notice that the exponential of the infinite-dimensional matrix Γ is well defined since it is a bounded operator (as the sequence {s_i : s_i > 0} is bounded away from zero). We shall write Y ∼ NPH_p(π, α, T).
While the support of π is important, we will not denote it explicitly in the parametrisation of Y.
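In practice the infinite series defining f_Y must be truncated. The following is a minimal sketch of the truncated density, assuming a truncation level K has been chosen so that the neglected mass ∑_{i>K} π_i is negligible (the function name and calling convention are ours, not the paper's):

```python
import numpy as np
from scipy.linalg import expm

def nph_density(y, pi, s, alpha, T):
    """Truncated NPH density: sum_i pi_i * alpha exp(T_i y) t_i, T_i = T / s_i.
    `pi` and `s` hold the first K mixing probabilities and scaling points."""
    t = -T @ np.ones(T.shape[0])
    total = 0.0
    for p_i, s_i in zip(pi, s):
        # level-i phase-type density: (1/s_i) * alpha exp(T y / s_i) t
        total += p_i * float(alpha @ expm(T * (y / s_i)) @ t) / s_i
    return total
```

With a single atom at s_1 = 1 this reduces to the underlying phase-type density, which gives an easy consistency check.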
A key feature of the NPH class is that it contains a rich variety of heavy-tailed distributions. For instance, if π has unbounded support, then Y has a heavy-tailed distribution (cf. [11]). In the regularly varying case, the tail of Y largely follows that of N: Breiman's lemma implies that if N has a regularly varying distribution with tail index −α < 0, then the tail of Y will also be regularly varying with the same index. More precisely,

P(Y > y) ∼ E(τ^α) P(N > y),  y → ∞.

A similar feature occurs for the class of Weibullian distributions [1], defined as the collection of nonnegative distributions having a survival function which is asymptotically proportional to x^γ e^{−λx^p} for some λ > 0, p > 0. If the parameter p ∈ (0, 1), then the distribution is heavy-tailed (and light-tailed otherwise). Since a PH distribution is Weibullian with parameter p = 1, Lemma 2.1 of [1] implies that if the scaling distribution π is Weibullian with parameter p_1, then Y ∼ NPH(π, α, T) is Weibullian with parameter p_1/(1 + p_1).

ESTIMATION
In this section we address the problem of fitting an NPH distribution to a data set. We assume that y_1, y_2, ..., y_M form an i.i.d. sample from NPH_p(π(θ), α, T), where π(θ) is some parametric distribution describing the distribution of N. We assume that the support of π(θ) does not depend on θ. We shall estimate the parameters θ, α and T.
Hence, with probability π_i(θ), y_n is the realization of the i-th level phase-type distribution PH_p(α, T/s_i), but neither the level i nor the actual realization of the underlying Markov jump process is observable. Thus, we resort to the EM algorithm for approximating the maximum likelihood estimators. To this end we first attend to the corresponding estimation problem for complete data.
Assume that apart from y_1, ..., y_M we have also observed all levels and all underlying Markov jump processes. Let I_n denote the level of the phase-type distribution of the n-th path, and let M_i = ∑_{n=1}^M 1{I_n = i} denote the number of i-level processes in the data. Next consider the Markov jump process {J_u^{(n)}}_{u≥0} underlying the n-th phase-type distribution (which generates the data point y_n), and let B_i^k be the number of i-level processes that are initiated in state k. Define Z_i^k as the total time all underlying i-level Markov jump processes spend in state k, let N_i^{kl} denote the total number of jumps from state k to state l within all i-level Markov jump processes and, finally, let N_i^k be the number of i-level processes that exit to the absorbing state from state k. Then the complete data likelihood is easily seen to be (see e.g. [2] or [?] for further comments on this)

L_c(θ, α, T) = ∏_{i≥1} π_i(θ)^{M_i} ∏_{k=1}^p α_k^{B_i^k} ∏_{k≠l} (t_{kl}/s_i)^{N_i^{kl}} ∏_{k=1}^p (t_k/s_i)^{N_i^k} ∏_{k=1}^p e^{t_{kk} Z_i^k / s_i}.

In order to calculate the maximum likelihood estimator (θ̂, α̂, T̂), which is the point that maximizes the likelihood (or log-likelihood) function, we calculate first-order partial derivatives with respect to t_{kl} and t_k, while we use the method of Lagrange multipliers for α_i and π_i(θ) due to the constraints that they sum to one.
Consider the parameter t_k, k = 1, ..., p. Equating the corresponding partial derivative of the complete data log-likelihood to zero yields

t̂_k = ∑_i N_i^k / ∑_i (Z_i^k / s_i),

and, similarly, t̂_{kl} = ∑_i N_i^{kl} / ∑_i (Z_i^k / s_i) for the transition rates. Regarding α, consider the Lagrange function ℓ_c + µ(∑_k α_k − 1), where µ is a Lagrange multiplier. Equating the partial derivative with respect to α_k to zero and summing over k yields µ = −M, and hence

α̂_k = ∑_i B_i^k / M.

Concerning π(θ), it depends on the particular form of the discrete distribution whether it can be estimated explicitly or numerically. We shall consider some particular examples.
We now consider the case of incomplete data. We shall employ the EM algorithm, which optimizes the incomplete data likelihood L using the complete data likelihood L_c (or its logarithm ℓ_c) in the following way. Here f_Y(y; θ, α, T) denotes the density function of Y ∼ NPH_p(π(θ), α, T). 0: Initialize with some "arbitrary" (θ_0, α_0, T_0) and let n = 0.
The likelihood increases in each iteration, and the procedure hence converges (possibly to a local maximum or saddle point, though).
We notice that the actual calculations in the EM algorithm only involve the complete data likelihood. From the actual form of the log-likelihood (2) we see that it is a linear function of the sufficient statistics, and the conditional expected value of the log-likelihood given the data will then be the log-likelihood function with the sufficient statistics replaced by their conditional expectations given the data. Hence the M-step is trivial, since we only have to plug in these conditional expectations instead of the sufficient statistics, which are not available. Concerning the E-step we proceed as follows. First we consider one (generic) data point (M = 1) and let y = y_1. We need to calculate the conditional expected values of the sufficient statistics given Y = y. All distributions and expectations are under P = P_Ψ, Ψ = (θ, α, T), but we will omit the index in order to ease the exposition.
Concerning L_i, it equals one if i is the chosen level and zero otherwise. Let I denote the random variable indicating the chosen level. Then

E(L_i | Y = y) = P(I = i | Y = y) = π_i(θ) α e^{T_i y} t_i / f_Y(y; θ, α, T).

The formula for E(N_i^{kl} | Y = y) is derived in a similar way, resulting in

E(N_i^{kl} | Y = y) = (π_i(θ) t_{kl} / s_i) ∫_0^y (e_l' e^{T_i(y−u)} t_i)(α e^{T_i u} e_k) du / f_Y(y; θ, α, T).

Finally, the conditional expectations of the remaining sufficient statistics B_i^k and Z_i^k are obtained by entirely similar arguments.
The parameter θ needs to be treated separately on a case-by-case basis, and finding the next iterate θ^{(n+1)} generally involves a numerical procedure: we need to maximize ∑_i E(L_i | Y = y) log π_i(θ) over θ. In general, for M > 1 data points we simply sum the previous formulas with arguments y_j, j = 1, ..., M. For integrals of the form J(y) = ∫_0^y e^{T(y−u)} t α e^{Tu} du we have that (see [12])

exp( [ T  tα ; 0  T ] y ) = [ e^{Ty}  J(y) ; 0  e^{Ty} ].

Thus a simple (and numerically efficient) way of obtaining both exp(Ty) and J(y) is by calculating the matrix exponential on the left-hand side.
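The block-matrix identity above can be sketched as follows (a minimal illustration, assuming SciPy; the function name and the parameters in the test are ours):

```python
import numpy as np
from scipy.linalg import expm

def expm_and_integral(T, alpha, y):
    """Return (exp(T y), J(y)), where J(y) = int_0^y exp(T(y-u)) t alpha exp(T u) du,
    via a single matrix exponential of the block matrix [[T, t alpha], [0, T]]."""
    p = T.shape[0]
    t = -T @ np.ones(p)                       # exit rate vector t = -T e
    A = np.block([[T, np.outer(t, alpha)],
                  [np.zeros((p, p)), T]])
    E = expm(A * y)
    return E[:p, :p], E[:p, p:]               # top-left: exp(Ty); top-right: J(y)
```

Since the top-left block reproduces exp(Ty), the integral J(y) comes essentially for free from the same matrix exponential.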
The EM-algorithm can now be stated as follows.
subject to ∑_i π_i(θ) = 1, and let θ̂ be the argument at which the maximum is attained.
Remark 3.3. Consider the E-step of Theorem 3.2. Suppose that there are repeated values among the data points, so that among the data y_1, ..., y_M there are D different values ỹ_1, ..., ỹ_D, and that ỹ_i appears k_i ≥ 1 times in the original data. Then k_1 + · · · + k_D = M, and the sums over j = 1, ..., M in the expected values can be reduced to weighted sums of fewer terms: for example, a sum of the form ∑_{j=1}^M g(y_j) reduces to ∑_{d=1}^D k_d g(ỹ_d). This shows that the EM algorithm of Theorem 3.2 can also be used to estimate an NPH distribution when data are represented by a histogram rather than the raw data. This is particularly suitable when the amount of data is large, since the histogram may then adequately represent the target distribution.
Remark 3.4. In order to reduce the computational burden, we may replace the raw data by a histogram in regions of the support where data gather densely. Heavy-tailed data with support on [0, ∞) will typically concentrate in an interval [0, T) (the "body" of the distribution), while they will be more scarce in [T, ∞) (the "tail"). Now assume that M is large and that the support for the data may be split into [0, T) and [T, ∞) for some T > 0 such that the concentration of data points in [0, T) is high. Partition [0, T) into subintervals [T_i, T_{i+1}), i = 0, ..., K − 1, with T_0 = 0 and T_K = T, and let k_i be the number of data points falling in [T_i, T_{i+1}). Then we may use the repeated data reduction of Remark 3.3 for the interval [0, T) by treating the midpoints (T_i + T_{i+1})/2, i = 0, ..., K − 1, as data points with counts k_i. In fact we might choose any point in [T_i, T_{i+1}) as a representative, including any data point falling in this interval; however, if we choose the left endpoints T_i of the intervals as representatives, we have to make special arrangements regarding the first interval in order not to provoke an atom at zero.
Concerning the tail, [T, ∞), we continue to use the raw data, since the scattering in the tail is typically too diffuse to be represented by a histogram without too much loss of information.
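The body/tail reduction just described can be sketched as follows (the threshold `T`, number of body bins `K` and midpoint representatives correspond to Remarks 3.3-3.4; the function name is ours):

```python
import numpy as np

def hybrid_reduce(y, T, K):
    """Reduce raw data to (points, counts): the body [0, T) is binned into K
    equal subintervals represented by their midpoints, while tail observations
    >= T are kept as raw values with multiplicity counts."""
    body, tail = y[y < T], y[y >= T]
    counts, edges = np.histogram(body, bins=K, range=(0.0, T))
    mids = 0.5 * (edges[:-1] + edges[1:])
    keep = counts > 0                          # drop empty body bins
    pts_tail, cnt_tail = np.unique(tail, return_counts=True)
    points = np.concatenate([mids[keep], pts_tail])
    weights = np.concatenate([counts[keep], cnt_tail])
    return points, weights
```

The EM sums over data points are then replaced by the corresponding weighted sums over `points` with weights `weights`.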
Example 3.5. Let c > 0 and let N be supported on {e^{ic} : i ∈ N}, with π_i(θ) = P(N = e^{ic}) = e^{−θc(i−1)} − e^{−θci}; that is, the index of N is geometrically distributed with parameter e^{−θc}. This distribution corresponds to the discretization of a Pareto distribution over the geometric grid {e^{ic} : i ∈ N}. The advantage of representing the scaling Pareto distribution in this form is that the function of θ to be maximized in the M-step has an explicit solution. The selected support of N is also convenient for implementation purposes: in practice, the infinite series defining the estimators can only be computed up to a finite number of terms, and since the sequence π_i(θ) decays geometrically fast to 0, only a modest number of terms is needed to attain a specified precision.
Since the distance between consecutive points in the support of N is unbounded, the distribution of N is not regularly varying and Breiman's lemma no longer applies. Nevertheless, the tail probability of Y oscillates between two regularly varying functions, so this model can provide an accurate approximation to any regularly varying distribution [11].
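The discretized-Pareto scaling law of Example 3.5 and the corresponding truncation level can be sketched as follows (helper names are ours; the pmf follows from discretizing the Pareto survival function x^{−θ} over the grid e^{ic}):

```python
import numpy as np

def discretized_pareto_pmf(i, theta, c):
    """P(N = e^{i c}), i = 1, 2, ...: discretizing the Pareto survival function
    x^(-theta) over the geometric grid e^c, e^{2c}, ... gives a geometric pmf
    with ratio r = e^(-theta c)."""
    r = np.exp(-theta * c)
    return (1.0 - r) * r ** (i - 1)

def truncation_level(theta, c, eps):
    """Smallest K with neglected tail mass P(N > e^{K c}) = e^(-theta c K) < eps."""
    return int(np.ceil(-np.log(eps) / (theta * c)))
```

Because the tail mass decays geometrically, a tolerance of, say, 10^{-8} is reached after only a few dozen terms for moderate θ and c.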

CENSORING
In certain situations some data may be censored. Again we first consider the case of a single data point taken from a realization of Y ∼ NPH_p(π(θ), α, T). The data point is right censored at t if the only knowledge about Y is that Y > t, left censored at t if Y ≤ t, and interval censored at (s, t] if Y ∈ (s, t]. Left censoring is a special case of interval censoring with s = 0, while right censoring can be obtained by fixing s and letting t → ∞. Formulas for right censoring will, however, appear as part of the derivation of interval censoring.
The EM algorithm works in entirely the same way as for uncensored data, with the only difference that we are no longer observing a data point Y = y but Y ∈ (s, t]. This only changes the E-step, where we now have to calculate the conditional expectations of the sufficient statistics given Y ∈ (s, t]. Concerning the levels we have E(L_i 1{Y > t}) = π_i(θ) α e^{T_i t} e, and thus

E(L_i | Y ∈ (s, t]) = π_i(θ) α (e^{T_i s} − e^{T_i t}) e / ∑_j π_j(θ) α (e^{T_j s} − e^{T_j t}) e.
For right censored data the corresponding expressions are obtained by letting t → ∞, so that E(L_i | Y > s) = π_i(θ) α e^{T_i s} e / ∑_j π_j(θ) α e^{T_j s} e. The rest of the formulas are derived as for censored phase-type distributions (see [9]) conditionally on the level i, which is attained with probability π_i(θ) and has parameters α, T_i. For more than one data point, the data are split into a group of uncensored data and further groups for the different types of censored data. The conditional expectations are then calculated for all data points subject to their group classification, and all conditional expectations of the same kind (jumps, occupation times, etc.) are then summed over all data. This amounts to the E-step of an EM algorithm, the rest of which is identical to Theorem 3.2.
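The E-step level weights for an interval-censored observation can be sketched as follows (a minimal illustration, assuming SciPy; the function name is ours):

```python
import numpy as np
from scipy.linalg import expm

def level_posteriors_interval(s, t, pi, scales, alpha, T):
    """E(L_i | Y in (s, t]) for an interval-censored observation: proportional
    to pi_i * alpha (exp(T_i s) - exp(T_i t)) e, with T_i = T / s_i."""
    e = np.ones(T.shape[0])
    w = np.array([p * float(alpha @ (expm(T * (s / si)) - expm(T * (t / si))) @ e)
                  for p, si in zip(pi, scales)])
    return w / w.sum()
```

Left censoring is the special case s = 0 (the first exponential is then the identity matrix), in line with the remarks above.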
Remark 4.1. In Remark 3.4 we suggested the use of a histogram for representing the "body" of the distribution if the amount of data is large. Interval censoring provides a feasible alternative in this same direction.

FITTING TO A KNOWN DISTRIBUTION
It may be of interest to approximate a given (heavy-tailed) distribution H by a distribution G ∈ NPH if, for example, methods for calculating the ruin probability based on the claim size distribution H are not available. In [2] it was shown how the EM algorithm can be modified in order to fit phase-type distributions to a given distribution with non-negative support. The EM algorithm then converges to a limit which minimizes the Kullback-Leibler distance between phase-type distributions of a specified order and the given distribution.
The idea is to let the number of data points M → ∞, so that the distribution H is seen as the empirical distribution function of a data set with M = +∞. If we let M → ∞, the sums appearing in the E-step converge to their integral counterparts; for example (see Theorem 3.2),

(1/M) ∑_{j=1}^M α_k e_k' exp(T_i y_j) t_i / f_Y(y_j; θ, α, T)  →  ∫_0^∞ α_k e_k' exp(T_i y) t_i / f_Y(y; θ, α, T) dH(y).
Similarly, it is not difficult to see that the remaining E-step sums converge to integrals of the form

∫_0^∞ α e^{T_i y} e_k t_k / f_Y(y; θ, α, T) dH(y).
Concerning θ, the function to be maximized in the M-step is, for finite M, an average over the data points; as M → ∞ this average converges to the corresponding integral with respect to H, which is then the function to be maximized. Let θ̂ denote the argument which maximizes this function.
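The E-step integrals with respect to H can be approximated by standard quadrature. Below is a minimal sketch for the level-weight integral ∫ π_i f_i(y) / f_Y(y) dH(y), assuming H has a density `h_density` and that truncating the integration range at `upper` is acceptable (all helper names are ours):

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import quad

def level_weight(i, pi, scales, alpha, T, h_density, upper):
    """Quadrature approximation of the E-step integral
    int_0^upper pi_i f_i(y) / f_Y(y) dH(y), with f_i the level-i PH density."""
    t = -T @ np.ones(T.shape[0])
    def f_level(y, si):
        return float(alpha @ expm(T * (y / si)) @ t) / si
    def f_Y(y):
        return sum(p * f_level(y, si) for p, si in zip(pi, scales))
    integrand = lambda y: pi[i] * f_level(y, scales[i]) / f_Y(y) * h_density(y)
    val, _ = quad(integrand, 0.0, upper, limit=200)
    return val
```

With a single level the integrand collapses to the density of H, so the weight is simply H(upper), which provides a quick sanity check.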

EXAMPLES
In this section we provide five examples. In the first example we consider the simplest case of simulated data from a scale mixture of exponential distributions, where the scaling distribution has a Pareto type of tail. In the second example we fit NPH distributions to a real data set (Danish reinsurance data of fire claims), while in the last three examples we consider the fitting of NPH distributions to the theoretical log-gamma, Weibull and lognormal distributions, respectively.
Example 6.1 (Erlang distributions). A q-dimensional Erlang distribution, ER_q(λ), is a phase-type distribution with canonical representation in which the initial distribution is concentrated in the first state and the sub-intensity matrix has −λ on the diagonal and λ on the superdiagonal. Hence, the corresponding NPH distribution is a scale mixture of Erlang distributions, the maximum likelihood estimator for λ is easily calculated in closed form, and the maximum likelihood estimate for θ has the general form discussed in Section 3. The E-step is similar to that of the general model; in particular, the conditional expectations of the sufficient statistics Z_i and L_i take the same form as in the previous section. We present a small simulation study for the case q = 1 (corresponding to an infinite-dimensional hyperexponential distribution), with s_i = i and scaling distribution π_i(θ) = i^{−θ}/ζ(θ), where ζ is the Riemann zeta function. This distribution is known as the zeta or discrete Pareto distribution, since its tail resembles that of a Pareto distribution. In order to find the EM estimate θ^{(n+1)}, we have to maximize the function ∑_i E(L_i | data) log π_i(θ). Differentiating with respect to θ and rearranging implies that we have to solve the resulting score equation in θ, which is done numerically using a simple Newton-Raphson procedure. The results of the simulation study, shown in Table 1, reveal that the EM algorithm is able to recover the underlying structure of the data and that the estimation, as expected, improves with larger sample sizes.
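The zeta M-step just described can be sketched as a root-finding problem. The following is a minimal illustration (we use Brent's method on the score rather than Newton-Raphson, and a numerical derivative of log ζ; the helper names and bracketing interval are ours):

```python
import numpy as np
from scipy.special import zeta
from scipy.optimize import brentq

def zeta_mstep(weights, levels):
    """M-step for the zeta scaling law pi_i(theta) = i^(-theta) / zeta(theta):
    maximizing sum_i w_i log pi_i(theta) amounts to solving
        sum_i w_i log i + W * (log zeta)'(theta) = 0,   W = sum_i w_i,
    where w_i = E(L_i | data) are the E-step level weights."""
    W = weights.sum()
    S = float(weights @ np.log(levels))
    h = 1e-6
    def dlogzeta(th):                     # central-difference derivative of log zeta
        return (np.log(zeta(th + h)) - np.log(zeta(th - h))) / (2 * h)
    score = lambda th: S + W * dlogzeta(th)
    return brentq(score, 1.0001, 50.0)    # score is negative near 1, positive for large theta
```

Feeding in the exact expected weights of a zeta(2) distribution should recover θ ≈ 2, which gives a convenient self-consistency check.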
Table 1: EM estimates of (λ, θ) for infinite-dimensional hyperexponential distributions with varying parameters and sample sizes.
Example 6.2. We consider 2167 reinsurance data points for Danish fire insurance claims above 1 million DKK. These data have been widely studied in Extreme Value Theory [6, 10]. The data correspond to claims in millions of Danish kroner for the period 1980-1993, the amounts being adjusted for inflation to 1985 prices. We subtracted 1 (million) from all data in order to shift the support to [0, ∞), which is the natural support for phase-type distributions. Of the 2167 data points, 519 are repeated values, so only 1648 values are distinct.
We propose an NPH model for the data by mixing a five-dimensional phase-type distribution with an N that is assumed to follow the discretized Pareto distribution of Example 3.5.
Just above 90% of the data is below 5, while less than 10% falls between 5 and the maximum observation of 262.2504. Thus we select the "body part" of the distribution as the densely populated region [0, 5] while the much more diffuse region [5, ∞) is then considered the "tail".
First we test the difference in performance between the EM-algorithm using raw data and one using the hybrid method proposed in Remark 3.4.
We divide [0, 5] into 100 subintervals of equal size, thus treating losses within the same 50,000 DKK interval as identical, and choose the centres of the intervals as data points. In the tail we have 186 data points, of which 8 are repeated values, so a total of 178 distinct values. This amounts to a total of 278 distinct data points. The result of the experiment is the following: while the discretization of [0, 5] does not appear to have any effect at all on the estimation, as compared with the EM algorithm on the raw data, the speed was increased by a factor 5.75, which is more or less consistent with the execution times depending linearly on the number of data points, in which case we should have expected an increase in speed by a factor 1648/278 ≈ 5.92.
In all the EM algorithms which follow we have used the hybrid method of Remark 3.4, with [0, 5] divided into 100 equally sized subintervals.
We now investigate the effect that the choice of the parameter c (see Example 3.5) has on the estimation. Intuitively, the smaller we choose the discretization steps of the geometric progression, the better the estimation. Fixing the dimension of the phase-type distribution at five, we select three different values for c: 1, 1/2 and 1/4. To obtain the EM estimates we randomly selected various sets of initial values and iterated the EM until the relative change in the log-likelihood was smaller than 10^{−8}, keeping the model with the largest likelihood. The estimates are as follows. That the estimates for (α, T) look distinct is because phase-type representations are by no means unique; however, the estimated densities, which are plotted in Figure 1 against the histogram of the data, are almost identical (actually indistinguishable in the plotted range [0, 6]). It is clear that the different phases do not have a physical interpretation but are merely dummy (or black-box) states used for the sole purpose of obtaining a proper adjustment of the distribution. In order to inspect the tail behaviour we plotted the log-survival function in Figure 2, in two different ranges, so as to examine the tail well beyond the largest data value (which is 262.2504). The different values for θ (obtained as a consequence of the different choices of c) result only in slightly different tail behaviour for very large claim sizes. We conclude that the estimation does not depend significantly on the choice of c among c = 1, 1/2, 1/4, and this robustness can probably be extrapolated to any "reasonable" range of c. Finally, we compare the fit for c = 1/4 to those obtained by [6] and [10]. The first reference fits a Generalized Pareto Distribution via maximum likelihood estimation, while the second employs the Hill estimator.
The implementation of the methods described above involves the selection of an appropriate threshold, which should be chosen by the modeller. According to [10], values in the interval [1.40, 1.46] produce excellent fits, the value recommended by the same author being 1.45.
We finally compare our estimated model with c = 1/4 to an NPH model where we fix θ = 1.45, as recommended by [10]. This can be done by running the EM algorithm in the usual way but avoiding any adjustments in θ. The results of the estimation are given next. Again, this representation does not resemble any of the previous estimates (due to non-uniqueness of representations), but from Figures 3 and 4 it is clear that there is no distinguishable difference between the densities for the fixed and EM-adjusted tails in the range [0, 6], where the main body of the distribution is situated. The tail behaviour also looks quite similar, though the EM-fitted tail seems to be slightly heavier than the fixed one.
Example 6.3 (Fitting to a theoretical distribution: Log-gamma). We consider a shifted log-gamma distribution, so that its support is [0, ∞), which is the natural support for the class NPH. This distribution is regularly varying with parameter β. For this example we choose the parameters α = β = 2 for the target distribution, with the purpose of analysing a distribution with a mode away from 0 and a moderately heavy tail. We consider an NPH model where the underlying phase-type distribution has five phases and the scaling distribution is that of Example 3.5 with c = 1. With this example we want to test whether general Regularly Varying distributions can be correctly fitted with the general model suggested in Example 3.5. We employed Quasi-Monte Carlo ideas to approximate the integrals in the EM steps and iterated the algorithm until the relative error was smaller than 10^{−9}. The results are given below:
θ̂ = 1.6031,  α̂ = (0.5717, 0.0330, 0.0000, 0.3954, 0.0000).
The densities of the log-gamma distribution and its NPH approximation are plotted in Figure 5; they are almost indistinguishable from each other in the body region. The tail behaviour is correctly captured by the NPH model, as seen in Figure 6.
The shape of the tail of the NPH approximation is very close to that of the log-gamma distribution, although the NPH estimate has a somewhat heavier tail (the estimated regularly varying parameter was θ̂ = 1.6031).
Example 6.4 (Fitting to a theoretical distribution: Weibull). Next we move away from the Regularly Varying case and consider instead the Weibullian case. As a target distribution we consider a classical two-parameter Weibull with λ = 1 and p = 1/2, so its density is e^{−√x}/(2√x), x ≥ 0. For adjusting this model we consider an NPH family of distributions where the phase-type part has five phases and the scaling distribution is supported on the geometric progression e^c, e^{2c}, e^{3c}, ... with c = 1, taken as a discretization of a classical two-parameter Weibull distribution. Notice that the target distribution is in the same two-parameter family of Weibull distributions. The results are given below. The agreement between the Weibull distribution and its NPH approximation is very good in the body of the distribution. The approximation of the tail is also particularly good for values up to 100, which correspond to probabilities of order 10^{−4}. The NPH-adjusted model is Weibullian with parameter p_1/(1 + p_1) ≈ 1.1673/2.1673 ≈ 0.5386; recall that the parameter of the target distribution is 0.5.
Example 6.5 (Fitting to a theoretical distribution: Lognormal). Finally we consider a Lognormal distribution with location parameter µ = 0 and dispersion parameter σ² = 1. The lognormal distribution is heavy-tailed, but its survival function ultimately decays faster than any Regularly Varying function (though slower than a Weibullian one). We consider the lognormal case more difficult because a scaling distribution π(θ) for which the NPH model has the same tail behaviour as the lognormal distribution is unknown. Therefore, we consider two alternative NPH models to adjust the data.
For both models we have selected a phase-type part with eight phases, and the scaling distribution is chosen as a discretization of a Lognormal distribution LN(µ, σ²) supported on the set {s_i = e^i : i = 0, 1, ...}. The difference between the two models is that in the first we let the EM algorithm estimate the values of the parameters µ and σ, while in the second we keep them fixed at 0 and 1 respectively. The results for the first model are given below:
µ̂ = 0.1909,  σ̂ = 0.5979,  α̂ = (0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0439, 0.1360, 0.8201).
Figure 9 shows the density of the Lognormal distribution with parameters (0, 1) together with its NPH approximation with EM adjustment.
From Figure 9 we can observe that the adjustment of the body of the distribution by the first model is excellent, but the tail probabilities differ. In the lognormal case, the heaviness of the distribution is mostly determined by the parameter σ: the larger σ, the heavier the tail. In our estimation we obtained σ̂ = 0.5979, which suggests a lighter tail, and this is confirmed in the last panel of Figure 10.
For our second model we obtained a very poor fit, but this is somewhat expected, since the tail of the NPH model is very different from the tail of its scaling distribution (just as in the Regularly Varying and Weibullian cases). In fact, [11] demonstrates that the tail probability of the NPH model is significantly heavier than, and different from, that of the scaling distribution.

CONCLUSIONS
In this paper we have introduced a new methodology for adjusting phase-type scale mixture distributions to heavy-tailed data. While the model only requires the specification of the dimension of the underlying phase-type distribution and of a parametric family of discrete scaling distributions, the suggested algorithm simultaneously fits both the body and the tail of general distributions.
Since the class of NPH distributions is genuinely heavy-tailed (if N has unbounded support) and dense in the class of heavy-tailed distributions with support on R₊, we may in principle approximate any heavy-tailed distribution (or data set) arbitrarily closely by an NPH distribution. In particular, in the regularly varying and Weibullian cases the aforementioned approximation is not only a limiting (denseness) statement but can be effectively carried out in practice.