1 Introduction

Distributions of many economic variables are characterized by heavy right tails. Such tails are often modelled in economics and other fields of science using the Pareto distribution, which was originally introduced in the late nineteenth century by Vilfredo Pareto in the context of modelling income and wealth distributions (Pareto 1897). Since then, the Pareto distribution has become the most popular model for describing top income and wealth values (see, e.g. Drăgulescu and Yakovenko 2001; Kleiber and Kotz 2003; Clementi and Gallegati 2005; Klass et al. 2006; Cowell and Flachaire 2007; Cowell and Victoria-Feser 2007; Ogwang 2011; Alfons et al. 2013).Footnote 1 However, the model is also heavily used in several other areas of economics to model the right-hand tails of fluctuations in stock prices (Lauridsen 2000; Gabaix et al. 2003, 2006; Balakrishnan et al. 2008), exchange rates (Wagner and Marsh 2005), firm sizes (Axtell 2001; Luttmer 2007), city sizes (Soo 2005), countries’ interactions in international trade (Hinloopen and van Marrewijk 2012), CEO compensation (Gabaix and Landier 2008), supply of regulations (Mulligan and Shleifer 2005), tourist visits (Ulubaşoğlu and Hazari 2004), claims in actuarial problems (Ramsay 2003), macroeconomic disasters (Barro and Jin 2011), and macroeconomic fluctuations (Gaffeo et al. 2003). In addition, the Pareto distribution appears widely in physics, biology, earth and planetary sciences, computer science, and other disciplines (Newman 2005).

The maximum likelihood estimator (MLE) for the shape parameter of the Pareto distribution (also known as the Pareto tail index or the Pareto exponent) was introduced by Hill (1975) and is referred to as Hill’s estimator.Footnote 2 If the Pareto distribution is the true model for a given sample, then one can safely estimate the Pareto tail index using the MLE, which has the optimal asymptotic variance. However, in the presence of data contamination or when the sample deviates from the Pareto model, the MLE is not robust and becomes severely biased (Victoria-Feser and Ronchetti 1994; Finkelstein et al. 2006). To make matters worse, even small errors in the estimation of the Pareto exponent can produce large errors in the estimation of quantities derived from it, such as extreme quantiles, upper-tail probabilities and mean excess functions (Brazauskas and Serfling 2000). Similarly, inequality measures computed for data simulated from the Pareto model are strongly affected by even small or moderate data contamination (Cowell and Victoria-Feser 1996).

In recent years, a number of appealing robust estimators for the Pareto exponent have been proposed. These estimators perform better than the MLE in the presence of outliers, while retaining high asymptotic relative efficiency (ARE) with respect to the MLE.Footnote 3 Although the asymptotic properties of most of these estimators are well known, their performance in the small-sample setting is less clear. However, as observed recently by Beran and Schell (2012), researchers and practitioners studying problems such as operational risk assessment, reinsurance and natural disasters often have to fit heavy-tailed models to sparse samples with the number of observations ranging from 20 to at most 50. In another context, Barro and Jin (2011) estimated the upper-tail exponent of the distribution of macroeconomic disasters using samples of only 21–22 observations. Soo (2005) applied the Pareto model to the distribution of cities for a number of countries; in the case of 22 countries the number of observations was less than 50, and in four cases it was even less than 20. A recent study by Ogwang (2011), which analyses the Pareto behaviour of the top Canadian wealth distribution, is based on a rather small sample of about one hundred observations. Therefore, it seems that in practical applications the Pareto tail index is indeed quite often estimated from sparse data.

The existing literature that examines the small-sample performance of alternative robust estimators for the Pareto exponent is fairly small (see Brazauskas and Serfling 2001b; Huisman et al. 2001; Wagner and Marsh 2004; Finkelstein et al. 2006; Alfons et al. 2010). In addition, none of the existing studies compares all of the most popular robust estimators for the Pareto tail index. The present paper fills the gap in the literature by providing an extensive comparison of the small-sample properties of the most popular robust estimators for the Pareto tail index. We investigate the properties of the estimators by Monte Carlo simulations under various data contaminations and model deviations, which produce outliers that can be found in real data sets. In particular, the paper compares the optimal bias-robust estimator (OBRE) (Hampel et al. 1986; Victoria-Feser and Ronchetti 1994), the weighted maximum likelihood estimator (WMLE) (Dupuis and Morgenthaler 2002; Dupuis and Victoria-Feser 2006), the generalized median estimator (GME) (Brazauskas and Serfling 2000, 2001a), the partial density component estimator (PDCE) (Vandewalle et al. 2007) and the probability integral transform statistic estimator (PITSE) (Finkelstein et al. 2006).Footnote 4 The OBRE, WMLE and PDCE have been applied in robust modelling of income distribution (Cowell and Victoria-Feser 2007, 2008; Alfons et al. 2013). The OBRE has been also recently applied to study the distribution of large macroeconomic contractions (Brzezinski 2015).

It is worth noting here that an alternative approach to modelling extreme economic events relies on the generalized Pareto distribution, which is a three-parameter variant of the classical two-parameter Pareto distribution. However, this paper focuses solely on the latter distribution. A comparison of robust estimators for the generalized Pareto model can be found in Ruckdeschel and Horbenko (2013).

The remainder of the paper is organized as follows. Alternative robust estimators for the Pareto tail index, as well as the MLE treated as the benchmark in our study, are described in Sect. 2. Section 3 presents the Monte Carlo design and discusses the results of our Monte Carlo simulations. Section 4 applies the estimators to real income distribution data from the European Union Statistics on Income and Living Conditions (EU-SILC), while Sect. 5 concludes and gives recommendations for practice.

2 Alternative estimators for the Pareto tail index

2.1 The MLE

The classical (or type I) Pareto distribution \(P(x_{0}, \alpha )\) is defined in terms of its cumulative distribution function as follows

$$\begin{aligned} F_\alpha (x)=1-(x_{0}/x)^{\alpha },x\ge x_{0}>0, \end{aligned}$$
(1)

where \(x_{0}\) is a scale parameter and \(\alpha > 0\) is the Pareto tail index describing the shape of the distribution. It is a heavy-tailed distribution, with the right tail becoming heavier for smaller values of the Pareto tail index. The literature offers various methods to estimate the value of the cut-off \(x_{0}\), above which the Pareto model can be fitted to data. However, as Gabaix (2009) observes, in practice \(x_{0}\) is usually set using visual goodness of fit or by assuming that a fixed proportion of top observations (e.g. 5 %) in a given data set follows a Pareto model. A robust statistical procedure for choosing \(x_{0}\), based on the robust prediction error criterion, was proposed by Dupuis and Victoria-Feser (2006). In this paper, \(x_{0}\) is estimated as the first-order statistic of the sample drawn from the Pareto model.
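As an illustration (not part of the original study), a sample from \(P(x_{0},\alpha )\) can be drawn by inverse-transform sampling from the CDF in Eq. (1), with \(x_{0}\) then estimated as the sample minimum; function and variable names below are our own:

```python
import numpy as np

def rpareto(n, alpha, x0=1.0, seed=None):
    """Draw n observations from P(x0, alpha) by inverse-transform
    sampling: if u ~ U(0, 1), then x0 * (1 - u)**(-1/alpha) has CDF (1)."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=n)
    return x0 * (1.0 - u) ** (-1.0 / alpha)

sample = rpareto(1000, alpha=2.0, x0=1.0, seed=42)
x0_hat = sample.min()  # x0 estimated as the first-order statistic
```

The inverse-CDF form follows from solving \(F_\alpha (x)=u\) for x in Eq. (1).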

The simulation study presented in this paper uses the MLE for the Pareto tail index as a non-robust benchmark, which allows for a better evaluation of the properties of the robust estimators. We also use the MLE as a starting value in the numerical procedures used to compute some of the robust estimators compared in this study.

For a random sample of n observations, \(x_{1}, \ldots , x_{n}\), the MLE for parameter \(\alpha \) in (1) is given by

$$\begin{aligned} \hat{{\alpha }}_{MLE} =\frac{1}{n^{-1}\sum \nolimits _{i=1}^{n}{\log x_{i}-\log x_{0}}}. \end{aligned}$$
(2)

Actually, the paper uses the unbiased (and asymptotically equivalent) version of the MLE, which is defined as (Kleiber and Kotz 2003, p. 84)

$$\begin{aligned} \hat{{\alpha }}_{MLU} =\left( {1-\frac{2}{n}} \right) \hat{{\alpha }}_{MLE}. \end{aligned}$$
(3)
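For concreteness, Eqs. (2) and (3) can be sketched in a few lines of Python (function names are ours, not from the paper):

```python
import numpy as np

def pareto_mle(x, x0):
    """Hill/maximum likelihood estimator of alpha, Eq. (2)."""
    x = np.asarray(x, dtype=float)
    return 1.0 / np.mean(np.log(x) - np.log(x0))

def pareto_mle_unbiased(x, x0):
    """Unbiased, asymptotically equivalent version, Eq. (3)."""
    return (1.0 - 2.0 / len(x)) * pareto_mle(x, x0)
```

For a sample whose mean log-excess over \(x_{0}\) equals 1, Eq. (2) returns exactly 1, and Eq. (3) shrinks it by the factor \(1-2/n\).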

The remainder of this section briefly introduces the most popular robust estimators for \(\alpha \). Detailed discussions of these estimators, including their asymptotic properties, are offered in the original papers that introduced them. For all estimators under discussion, except for the PDCE, the trade-off between robustness and efficiency is regulated by a tuning constant, which can be chosen with reference to the estimator’s asymptotic properties. A comparison of the OBRE, GME and PITSE in terms of the upper breakdown point (UBP) and gross error sensitivity (GES) is presented in Finkelstein et al. (2006).

2.2 Optimal B-robust estimator

In the context of robust measurement of income inequality, Victoria-Feser and Ronchetti (1994) introduced the optimal B-robust estimator (OBRE) for the Pareto model, which is an M-estimator with minimal asymptotic covariance matrix. The class of OBREs was defined by Hampel et al. (1986) in terms of the influence function (IF), which allows for assessing the robustness of an estimator for a parametric model. The IF can be defined in the following way.

Let \(F_{\theta }\) be a parametric model with density \(f_{\theta }\), where the unknown parameters belong to some parameter space \(\Theta \subseteq \mathfrak {R}^{p}\). For a sample of n observations, \(x_{1}, \ldots , x_{n}\), the empirical distribution function \(F_{n}(x)\) is

$$\begin{aligned} F_{n}(x)=\frac{1}{n}\sum \limits _{i=1}^{n}{\delta _{x_{i}} (x)} , \end{aligned}$$
(4)

where \(\delta _{x_{i}}(x)\) denotes a point mass at \(x_{i}\). For a parametric model \(F_{\theta }\), \(\theta \in \Theta \subseteq \mathfrak {R}^{p}\), and an estimator \(T_{n}\) of \(\theta \), treated as a functional of the empirical distribution function, i.e. \(T(F_{n})=T_{n}(x_{1}, {\ldots },x_{n})\), the IF is defined as

$$\begin{aligned} \hbox {IF}(x;T,F_\theta )=\mathop {\lim }\limits _{\varepsilon \rightarrow 0} \frac{T\left( (1-\varepsilon )F_\theta +\varepsilon \delta _{x}\right) -T(F_\theta )}{\varepsilon }. \end{aligned}$$
(5)

The IF describes the effect of a small contamination \((\varepsilon \delta _{x})\) at a point x on the estimate of \(T_{n}\), standardized by the mass of the contamination. The linear approximation \(\varepsilon ~\hbox {IF}(x; T; F_{\theta })\) therefore measures the asymptotic bias of the estimator caused by the contamination. In the case of the MLE, the IF is proportional to the score function \(s(x;\theta )=\frac{\partial }{\partial \theta }\log f_\theta (x)\), which for the Pareto distribution is \(s(x;\alpha )=1/\alpha -\log x+\log x_{0}\). Since this function is unbounded in x, the MLE for \(\alpha \) is not robust. A robust estimator possessing a bounded IF is called B-robust (or bias-robust).

The OBRE is the solution \(T_{n}\) of the system of equations

$$\begin{aligned} \sum \limits _{i=1}^{n}{\psi \left( x_{i};T_{n}\right) =0} \end{aligned}$$
(6)

for some function \({\varvec{\psi }}\). The OBRE is the optimal M-estimator with minimum asymptotic covariance matrix under the constraint that it has a bounded IF. Victoria-Feser and Ronchetti (1994) use the so-called standardized version of the OBRE, which for a given bound c on the IF is defined implicitly by the solution \(\hat{{\theta }}\) in

$$\begin{aligned} \sum \limits _{i=1}^{n}{\psi (x_{i};\theta )} =\sum \limits _{i=1}^{n}{\left\{ {s(x_{i};\theta )-a(\theta )} \right\} W_{c}(x_{i};\theta )=0} \end{aligned}$$
(7)

with

$$\begin{aligned} W_{c}(x;\theta )=\min \left\{ {1;\frac{c}{\left\| {A(\theta )\left[ {s(x;\theta )-a(\theta )} \right] } \right\| }} \right\} , \end{aligned}$$
(8)

where \(\left\| \cdot \right\| \) denotes the Euclidean norm, and the matrix \(A (\theta )\) and vector \(a (\theta )\) are defined implicitly by

$$\begin{aligned}&E\left[ {\psi (x_{i};\theta )\psi (x_{i};\theta )^{T}} \right] =\left[ {A(\theta )^{T}A(\theta )} \right] ^{-1}, \end{aligned}$$
(9)
$$\begin{aligned}&E\left[ {\psi (x_{i};\theta )} \right] =0. \end{aligned}$$
(10)

For efficiency reasons, the OBRE uses the score as the \(\psi \) function for the bulk of the data and truncates the score only if a robustness constant c is exceeded. The robustness weights \(W_{c}\) given in Eq. (8) are attributed to each observation in order to downweight observations deviating from the assumed model. The matrix \(A (\theta )\) and vector \(a (\theta )\) can be considered as Lagrange multipliers for the constraints implied by the bounded IF and by the condition of Fisher consistency, \(T(F_{\theta }) = \theta \). The bound c regulates the trade-off between efficiency and robustness: for small c the OBRE is more robust but less efficient, and vice versa for large c. If \(c = \infty \), then the OBRE is equivalent to the MLE. Simulations in this paper were performed using \(c = (1.63, 2.73)\), which, for the Pareto model, gives a more robust but only moderately efficient OBRE (78 % ARE) in the case of the smaller c and an efficient (94 % ARE) but less robust estimator in the case of the higher c.Footnote 5

The OBRE is computationally complex as one has to solve (7) under (9) and (10). An iterative algorithm to compute OBRE was proposed by Victoria-Feser and Ronchetti (1994); see also Bellio (2007).

2.3 Weighted maximum likelihood estimator

Dupuis and Victoria-Feser (2006) introduced another robust M-estimator for the Pareto tail index, which belongs to the class of WMLE of Dupuis and Morgenthaler (2002). For a parametric model \(F_{\theta }\) with density \(f_{\theta }\), where for simplicity \(\theta \) is assumed to be one-dimensional, and a random sample of n observations, \(x_{1},\ldots , x_{n}\), the WMLE is defined as the solution \(\hat{{\theta }}\) in \(\theta \) of

$$\begin{aligned} \sum \limits _{i=1}^{n}{\psi (x_{i};\theta )=\sum \limits _{i=1}^{n}{w(x_{i};\theta )\frac{\partial }{\partial \theta }\log f_\theta (x_{i})} =0} , \end{aligned}$$
(11)

where \(w(x;\theta )\) is a weight function with values in [0,1]. Dupuis and Victoria-Feser (2006) propose to use a weighting scheme based on the Pareto quantile plot (see, e.g., Beirlant et al. 1996). The Pareto quantile plot shows that for the Pareto model (1) with tail index \(\alpha \) and for \(x > x_{0}\), there is a linear relationship between the logarithm of x and the logarithm of the survival function

$$\begin{aligned} \log \left( {\frac{x}{x_{0}}} \right) =-\frac{1}{\alpha }\log (1-F_\alpha (x)),x>x_{0}. \end{aligned}$$
(12)

Let \(x_{[i]}^{*}\), \(i = 1,{\ldots }, k\), be the ordered k largest observations and \(Y_{i}=\log (x_{[i]}^{*}/x_{0})\) be the logarithms of the relative excesses. For the Pareto model, the \(Y_{i}\) may be predicted by \(\hat{{Y}}_{i}=-1/\hat{{\alpha }}\log [(k+1-i)/(k+1)]\), where \(\hat{{\alpha }}\) is an estimator of \(\alpha \). The variance of \(Y_{i}\) may be estimated by \(\hat{{\sigma }}_{i}^{2}=\sum \nolimits _{j=1}^{i}{1/[\hat{{\alpha }}^{2}(k-i+j)^{2}]}\). Using the standardized residuals defined as \(r_{i}=(Y_{i}-\hat{{Y}}_{i})/\hat{{\sigma }}_{i}\), Dupuis and Victoria-Feser (2006) propose a Huber-type weight function in (11), which downweights observations deviating from the Pareto model according to the size of the residuals \(r_{i}\), i.e.

$$\begin{aligned} w(x_{[i]}^{*};\alpha )=\left\{ {\begin{array}{ll} 1,&{}\quad \text {if } \left| {r_{i}} \right| <c, \\ {c}/{\left| {r_{i}} \right| },&{}\quad \text {if } \left| {r_{i}} \right| \ge c, \\ \end{array}} \right. \end{aligned}$$
(13)

with \(\alpha \) estimated by the WMLE and where c is a constant regulating the robustness-efficiency trade-off.
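The residuals and weights leading to Eq. (13) can be sketched as follows; this is our own illustration, which assumes the k largest observations and a current estimate \(\hat{{\alpha }}\) are given, and uses an arbitrary choice of the constant c:

```python
import numpy as np

def wmle_weights(x_top, x0, alpha_hat, c=1.5):
    """Huber-type weights of Eq. (13) from Pareto quantile-plot residuals.

    x_top: the k largest observations; alpha_hat: a current estimate of
    the tail index; c: robustness constant (an arbitrary choice here)."""
    x_sorted = np.sort(np.asarray(x_top, dtype=float))  # x*_[1] <= ... <= x*_[k]
    k = len(x_sorted)
    i = np.arange(1, k + 1)
    Y = np.log(x_sorted / x0)                               # log relative excesses
    Y_hat = -np.log((k + 1.0 - i) / (k + 1.0)) / alpha_hat  # predicted values
    # estimated standard deviations: sigma_i^2 = sum_{j=1}^i 1/[a^2 (k-i+j)^2]
    sigma = np.array([np.sqrt(sum(1.0 / (alpha_hat**2 * (k - ii + j)**2)
                                  for j in range(1, ii + 1))) for ii in i])
    r = (Y - Y_hat) / sigma                                 # standardized residuals
    return np.where(np.abs(r) < c, 1.0, c / np.abs(r))
```

When the data lie exactly on the Pareto quantile-plot line implied by \(\hat{{\alpha }}\), all residuals are zero and every weight equals 1.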

The WMLE is not in general unbiased, but the first-order bias-corrected WMLE with weights defined by (13) is derived by Dupuis and Victoria-Feser (2006) as \(\tilde{\alpha }=\hat{{\alpha }}-B(\hat{{\alpha }})\), where \(\hat{{\alpha }}\) is the WMLE as defined in (11) and

$$\begin{aligned}&\!\!\!B\left( \hat{{\alpha }}\right) \nonumber \\&=\frac{-\sum \limits _{i=1}^{k}{\left( {w\left( x_{[i]}^{*};\alpha \right) {\partial \log f\left( x_{[i]}^{*};\alpha \right) }\Big /{\partial \alpha }} \right) } \left| {_{\hat{{\alpha }}}} \right. \left( F_{\hat{{\alpha }}} \left( x_{[i]}^*\right) -F_{\hat{{\alpha }}} \left( x_{[i-1]}^*\right) \right) }{\sum \limits _{i=1}^{k}{\left( {{\partial w\left( x_{[i]}^{*};\alpha \right) }\Big /{\partial \alpha }{\partial \log f\left( x_{[i]}^{*};\alpha \right) }\Big /{\partial \alpha {+}w\left( x_{[i]}^{*};\alpha \right) {\partial ^{2}\log f\left( x_{[i]}^{*};\alpha \right) }\Big /{\partial ^{2}\alpha }}} \right) } \left| {_{\hat{{\alpha }}}} \right. \left( F_{\hat{{\alpha }}} \left( x_{[i]}^*\right) {-}F_{\hat{{\alpha }}} \left( x_{[i-1]}^*\right) \right) },\nonumber \\ \end{aligned}$$
(14)

with \(x_{[0]}^{*}\) set to \(x_{0}\).

Dupuis and Victoria-Feser (2006) have shown in simulations that in the small-sample setting the WMLE does not achieve high relative efficiency. For example, the relative efficiency of the WMLE for samples of 100 observations is at most 81 %. Other estimators that we compare in this paper do not suffer from this problem. For this reason, we include the WMLE in our comparison only for the case of ARE = 78 %, while other robust estimators for the Pareto tail index are compared also for the case of ARE = 94 %. The constant c that regulates the trade-off between efficiency and robustness was estimated for the WMLE by simulation performed independently for each sample size used in our Monte Carlo comparison.

2.4 Generalized median estimators

Another class of robust estimators for the Pareto tail index was developed by Brazauskas and Serfling (2000, 2001a). Consider a sample \(x_{1},\ldots ,x_{n}\) drawn from \(P(x_{0}, \alpha )\). For a sample of size n and a given choice of integer \(k \ge 1\), the GME is defined as the median of the evaluations \(h(x_{i_{1}} ,\ldots ,x_{i_{k}})\) of a given kernel \(h (X_{1}, {\ldots }, X_{k})\) over all \(\left( {\begin{array}{l} n \\ k \\ \end{array}} \right) \) subsets of observations taken k at a time, where \(\{i_{1}, {\ldots }, i_{k}\}\) is a set of distinct indices from \(\{1, {\ldots }, n\}\). In particular, Brazauskas and Serfling (2000, 2001a) define the GME for the Pareto tail index as

$$\begin{aligned} \hat{{\alpha }}_{GM} =\text {Median}\left\{ h\left( x_{i_{1}} ,\ldots ,x_{i_{k}}\right) \right\} , \end{aligned}$$
(15)

with two choices of kernel \(h (X_{1}, {\ldots }, X_{k})\):

$$\begin{aligned} h^{(1)}(X_{1},\ldots ,X_{k})=\frac{1}{C_{k}}\frac{1}{k^{-1}\sum \nolimits _{j=1}^{k}{\log X_{j}-\log \min \{X_{1},\ldots ,X_{k}\}}} \end{aligned}$$
(16)

and

$$\begin{aligned} h^{(2)}\left( X_{1},\ldots ,X_{k};x_{[1]}\right) =\frac{1}{C_{n,k}} \frac{1}{k^{-1}\sum \nolimits _{j=1}^{k}{\log X_{j}-\log x_{[1]}}}, \end{aligned}$$
(17)

where \(C_{k}\) and \(C_{n,k}\) are multiplicative median-unbiasing factors. The choice of these kernels is motivated by relative efficiency considerations: \(h^{(1)}\) is the MLE based on a particular subsample, while \(h^{(2)}\) is a modification of the MLE that always uses the minimum of the full sample instead of the minimum of the particular subsample. The estimators corresponding to \(h^{(1)}\) and \(h^{(2)}\) are denoted, respectively, by \(\hat{{\alpha }}_\mathrm{{GME}}^{(1)}\) and \(\hat{{\alpha }}_\mathrm{{GME}}^{(2)}\). Brazauskas and Serfling (2001a, b) show that in the case of contamination at high quantiles \(\hat{{\alpha }}_\mathrm{{GME}}^{(2)}\) significantly outperforms \(\hat{{\alpha }}_\mathrm{{GME}}^{(1)}\) with respect to asymptotic efficiency, even in the small-sample setting. Since this paper focuses on upper contamination, only \(\hat{{\alpha }}_\mathrm{{GME}}^{(2)}\) will be examined in our experiments.Footnote 6 The multiplicative median-unbiasing factor for \(\hat{{\alpha }}_\mathrm{{GME}}^{(2)}\) is defined as

$$\begin{aligned} C_{n,k} =\frac{\text {Median}\left( (1-k/n)\chi _{2k}^{2}+(k/n)\chi _{2(k-1)}^{2}\right) }{2k}, \end{aligned}$$
(18)

where \(\chi _{d}^{2}\) denotes the Chi-squared distribution with d degrees of freedom. In our Monte Carlo simulations, we use \(\hat{{\alpha }}_\mathrm{{GME}}^{(2)}\) with \(k = 2\) and \(k = 5\), which correspond, respectively, to ARE = 78 % and ARE = 94 %.
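The following sketch (ours, not the authors' code) illustrates \(\hat{{\alpha }}_\mathrm{{GME}}^{(2)}\); it reads Eq. (18) as the median of a two-component chi-square mixture and approximates that median by simulation, so for serious use the exact \(C_{n,k}\) values given by Brazauskas and Serfling (2001a) should be preferred:

```python
import numpy as np
from itertools import combinations

def gme2(x, k=2, n_mc=100_000, seed=0):
    """Generalized median estimator with kernel h^(2), Eqs. (15) and (17)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    log_min = np.log(x.min())  # log x_[1]: minimum of the full sample
    # Approximate C_{n,k} of Eq. (18): empirical median of the chi-square
    # mixture (our approximation), divided by 2k
    rng = np.random.default_rng(seed)
    pick = rng.uniform(size=n_mc) < k / n
    draws = np.where(pick,
                     rng.chisquare(2 * (k - 1), size=n_mc),
                     rng.chisquare(2 * k, size=n_mc))
    C_nk = np.median(draws) / (2.0 * k)
    # Median over all n-choose-k subsets of the kernel evaluations; the
    # constant 1/C_{n,k} factors out of the median
    evals = [1.0 / (np.mean(np.log(sub)) - log_min)
             for sub in combinations(x, k)]
    return np.median(evals) / C_nk
```

Because \(1/C_{n,k}\) is a positive constant, it can be applied after taking the median, as done above; the enumeration of subsets makes this sketch practical only for small n and k.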

2.5 Probability integral transform statistic

Finkelstein et al. (2006) noticed that since the distribution function of the Pareto model (1) is continuous and strictly increasing, the random variables \(F_\alpha (x_{1}),\ldots ,F_\alpha (x_{n})\) form a random sample from the uniform distribution on the interval (0,1). They observed that even an infinite contamination has a bounded effect on data transformed in this way. Their robust estimator of the Pareto tail index is defined with the help of the following statistic

$$\begin{aligned} G_{n,t} (\beta )=n^{-1}\sum \limits _{j=1}^{n}{\left( {\frac{x_{0}}{x_{j}}} \right) }^{\beta t}, \end{aligned}$$
(19)

where \(t > 0\) is a parameter regulating the trade-off between efficiency and robustness. When \(\beta = \alpha \), \((x_{0}/x_{i})^{\alpha }=1-F_\alpha (x_{i})\) is a random variable with the uniform distribution. Denoting a random sample from the uniform distribution by \(u_{1}\),...,\(u_{n}\), and knowing that, by the strong law of large numbers, \(\Pr (\mathop {\lim }\nolimits _{n\rightarrow \infty } n^{-1}\sum \nolimits _{j=1}^{n}{u_{j}^{t}} =1/(t+1))=1\), the PITSE, \(\hat{{\alpha }}_{PITSE}\), is defined as the solution of the equation

$$\begin{aligned} G_{n,t} (\beta )=\frac{1}{t+1}. \end{aligned}$$
(20)

The balance between efficiency and robustness can be regulated by setting the appropriate value of the parameter t. By taking t close to 0, the ARE of the PITSE can be made arbitrarily close to 1; for higher values of t, the PITSE gains robustness but loses relative efficiency. Simulations in this paper use \(t = 0.324\) and \(t = 0.883\), which correspond, respectively, to 94 and 78 % ARE.

As stressed by Finkelstein et al. (2006), the PITSE is both conceptually and computationally simpler than other robust estimators for the Pareto tail index. Its computation requires only solving Eq. (20), which for a given data set and value of t has exactly one solution. This relative computational simplicity of the PITSE can be considered an argument in its favour, especially if the results of our comparison suggest that it delivers a satisfactory degree of protection against data contamination and model deviation.
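Since each term \((x_{0}/x_{j})^{\beta t}\) is non-increasing in \(\beta \), the statistic \(G_{n,t}\) is monotone in \(\beta \) and Eq. (20) can be solved by simple bisection; a minimal sketch of ours:

```python
import numpy as np

def pitse(x, x0, t=0.324):
    """PITSE: solve G_{n,t}(beta) = 1/(t+1), Eqs. (19)-(20), by bisection.

    t controls the efficiency-robustness trade-off; 0.324 is one of the
    two values used in the paper's simulations."""
    x = np.asarray(x, dtype=float)
    target = 1.0 / (t + 1.0)

    def g(beta):
        return np.mean((x0 / x) ** (beta * t)) - target

    lo, hi = 1e-8, 1.0
    while g(hi) > 0:          # expand until the unique root is bracketed
        hi *= 2.0
    for _ in range(200):      # bisection on the monotone function G
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

At \(\beta \rightarrow 0\) the statistic equals 1, which exceeds \(1/(t+1)\), while for large \(\beta \) it falls below it (for \(n > t+1\)), so the bracketing loop always terminates.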

2.6 Partial density component estimator

Vandewalle et al. (2007) introduced a robust estimator for the tail index of Pareto-type distributions based on the so-called partial density component estimation, which extends the integrated squared error approach (Scott 2001, 2004).Footnote 7 In general, the approach of Vandewalle et al. (2007) uses a minimum distance criterion based on integrated squared error as a measure of discrepancy between the estimated density function and the true but unknown density. More specifically, they use the approach of Scott (2001, 2004), who considered estimation of mixture models by this method. Given the unknown true density f, and a model \(f_{\theta }\), the goal is to find a fully data-based estimate of the distance between the two densities using the integrated squared error criterion. Therefore, the estimated parameter \(\hat{{\theta }}\) is given by

$$\begin{aligned} \hat{{\theta }}=\arg \mathop {\min }\limits _\theta \left[ {\int {\left( f_\theta (x)-f(x)\right) ^{2}dx}} \right] . \end{aligned}$$
(21)

For a sample of size n drawn from a model with density \(f_{\theta }\), the criterion can be shown to be equivalent to

$$\begin{aligned} \hat{{\theta }}=\arg \mathop {\min }\limits _\theta \left[ {\int {f_\theta ^{2}(x)dx-\frac{2}{n}\sum \limits _{i=1}^{n}{f_\theta (x_{i})}}} \right] . \end{aligned}$$
(22)

Following Scott (2004), Vandewalle et al. (2007) make use of the fact that in the derivation of (22) only f is assumed to be a genuine density function, not necessarily the model \(f_{\theta }\). Hence, an incomplete mixture model \(wf_{\theta }\) can also be considered

$$\begin{aligned} \hat{{\theta }}^{w}=\arg \mathop {\min }\limits _{\theta ,w} \left[ {w^{2}\int {f_\theta ^{2}(x)dx-\frac{2w}{n}\sum \limits _{i=1}^{n}{f_\theta (x_{i})}}} \right] , \end{aligned}$$
(23)

where the parameter w may be interpreted, with some restrictions, as a measure of the uncontaminated proportion of the sample. It is estimated by

$$\begin{aligned} \hat{{w}}=\frac{n^{-1}\sum \nolimits _{i=1}^{n}{f_{\hat{{\theta }}} (x_{i})}}{\int {f_{\hat{{\theta }}}^{2}(x)dx}}. \end{aligned}$$
(24)

For the strict Pareto model with density \(f_{\alpha }(x)=\alpha x_{0}^{\alpha }x^{-(\alpha +1)}\), the integral \(\int _{x_{0}}^{\infty } {f_{\alpha }^{2}(x)dx} \) can be calculated easily in closed form as \(\alpha ^{2}/[(2\alpha +1)x_{0}]\). Therefore, the so-called PDCE for the Pareto model is defined as

$$\begin{aligned} \hat{{\alpha }}_{PDCE} =\arg \mathop {\min }\limits _\alpha \left[ {\hat{{w}}^{2}\frac{\alpha ^{2}}{(2\alpha +1)x_{0}}-\frac{2\hat{{w}}}{n}\sum \limits _{i=1}^{n}{f_\alpha (x_{i})}} \right] . \end{aligned}$$
(25)
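A rough sketch of the PDCE (our own, by grid search rather than a numerical optimizer): for each candidate \(\alpha \), w is profiled out via Eq. (24) and the criterion in (25) is evaluated; the grid bounds and resolution are arbitrary choices of ours:

```python
import numpy as np

def pdce(x, x0, alphas=None):
    """Partial density component estimator, Eqs. (23)-(25), by grid search."""
    x = np.asarray(x, dtype=float)
    if alphas is None:
        alphas = np.linspace(0.01, 10.0, 2000)  # arbitrary search grid
    best_alpha, best_crit = None, np.inf
    for a in alphas:
        f = a * x0**a * x ** (-(a + 1.0))          # Pareto density f_alpha
        int_f2 = a**2 / ((2.0 * a + 1.0) * x0)     # closed-form integral of f^2
        w = np.mean(f) / int_f2                    # profiled weight, Eq. (24)
        crit = w**2 * int_f2 - 2.0 * w * np.mean(f)  # objective of Eq. (25)
        if crit < best_crit:
            best_alpha, best_crit = a, crit
    return best_alpha
```

With w profiled out, the objective reduces to \(-(n^{-1}\sum f_{\alpha }(x_{i}))^{2}/\int f_{\alpha }^{2}\), so minimizing it amounts to matching the fitted density to the data in the integrated-squared-error sense.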

3 Monte Carlo comparison

3.1 Simulation design

In most economic and other applications, the estimated Pareto tail index either has a direct economic interpretation or is used to calculate some other index (e.g. an inequality measure) of interest. Obtaining an unbiased estimate of the Pareto tail index is therefore crucial. From this perspective, our Monte Carlo comparison focuses on the bias of the alternative estimators. The performance of the estimators is assessed in terms of the percentage relative bias (RB) and the percentage relative root-mean-square error (RRMSE). For a given true value of the Pareto exponent, \(\alpha \), the relative bias of an estimator is given by

$$\begin{aligned} \hbox {RB}=\frac{100}{\alpha }\frac{1}{m}\sum \limits _{i=1}^{m}(\hat{\alpha }_{i} -\alpha ), \end{aligned}$$
(26)

where \(\hat{{\alpha }}_{i}\) is the estimated value of the Pareto tail index for the i-th \((i = 1,\ldots m)\) simulated sample and m is the number of simulations. The relative root-mean-square error is defined as

$$\begin{aligned} \mathrm{RRMSE}=\frac{100}{\alpha }\sqrt{\frac{1}{m}\sum \limits _{i=1}^{m}{\left( \hat{{\alpha }}_{i}- \alpha \right) ^{2}}}. \end{aligned}$$
(27)

Both measures are routinely used to assess the accuracy and precision of an estimator; the smaller the values of each measure in absolute terms, the better the estimator. The RB measures the extent of the bias of an estimator, while the RRMSE takes into account both the bias and the dispersion of an estimator.
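Eqs. (26) and (27) translate directly into code (a sketch of ours):

```python
import numpy as np

def rb(estimates, alpha):
    """Percentage relative bias, Eq. (26)."""
    estimates = np.asarray(estimates, dtype=float)
    return 100.0 / alpha * np.mean(estimates - alpha)

def rrmse(estimates, alpha):
    """Percentage relative root-mean-square error, Eq. (27)."""
    estimates = np.asarray(estimates, dtype=float)
    return 100.0 / alpha * np.sqrt(np.mean((estimates - alpha) ** 2))
```

For example, estimates of 2.2 and 1.8 for a true \(\alpha = 2\) have zero RB (the errors cancel) but a 10 % RRMSE (the dispersion remains).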

The data sets simulated from the Pareto distribution \(P (1, \alpha )\) are contaminated in two ways. Both methods of contamination have previously been used in the literature and rely on introducing “upper” outliers, which are more relevant in practical economic applications. First, following Brazauskas and Serfling (2001b), we draw contaminated data from the following model

$$\begin{aligned} F=(1-\varepsilon )P(1,\alpha )+\varepsilon P(1000,\alpha ), \end{aligned}$$
(28)

where \(\varepsilon = 0.05\), 0.1 is the proportion of contamination and \(\alpha = 1, 2, 3\).Footnote 8 This way of introducing “outliers” to the data allows us to study how the compared estimators are affected by model deviation. Second, we multiply by 10 a fixed proportion (1, 2, 5 and 10 %) of randomly selected observations simulated from \(P (1, \alpha )\). This corresponds to the “decimal point error”: a situation in which a person coding or cleaning the data inadvertently puts the decimal point in the wrong place and thus multiplies an observation by a factor of 10 (Cowell and Victoria-Feser 1996). We compare the performance of the estimators in two cases with respect to the ARE, setting it to 78 and 94 %.Footnote 9 The former case gives more protection against outliers at the cost of an efficiency loss; the latter gives more preference to efficiency, but offers only moderate robustness. The number of Monte Carlo simulations is 2,500 for each combination of parameters, sample sizes (ranging from 20 to 200), contamination types and AREs. This number was chosen as a trade-off between the need to reduce simulation variability and the required computation time, which is longer for some of the more complex estimators, such as the OBRE.
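Both contamination schemes can be simulated as follows (our sketch; the mixture (28) selects the contaminating component independently for each observation, while the decimal-point scheme multiplies a fixed share of observations by 10, as described above):

```python
import numpy as np

def _rpareto(n, alpha, x0, rng):
    # inverse-CDF draw from P(x0, alpha)
    return x0 * (1.0 - rng.uniform(size=n)) ** (-1.0 / alpha)

def contaminate_mixture(n, alpha, eps, seed=None):
    """Draw from F = (1 - eps) P(1, alpha) + eps P(1000, alpha), Eq. (28)."""
    rng = np.random.default_rng(seed)
    x = _rpareto(n, alpha, 1.0, rng)
    outlier = rng.uniform(size=n) < eps   # Bernoulli component selection
    x[outlier] = _rpareto(outlier.sum(), alpha, 1000.0, rng)
    return x

def contaminate_decimal(n, alpha, frac, seed=None):
    """'Decimal point error': multiply a fixed fraction of observations by 10."""
    rng = np.random.default_rng(seed)
    x = _rpareto(n, alpha, 1.0, rng)
    idx = rng.choice(n, size=int(round(frac * n)), replace=False)
    x[idx] *= 10.0
    return x
```

The estimators from Sect. 2 can then be applied to the returned samples to reproduce comparisons of the kind reported below.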

3.2 Monte Carlo results

Tables 1 and 2 give results for the uncontaminated Pareto distribution, with estimators computed for ARE = 94 % (Table 1) and ARE = 78 % (Table 2). We do not present results for the PDCE with very small samples (20 and 40 observations, and in some cases even more), because in this setting the minimization procedure used to compute the estimator did not converge (or diverged) in a significant number of replications. However, the performance of the PDCE is much worse than that of other estimators even in larger samples (100, 200). The bias of the PDCE in uncontaminated samples decreases very slowly with increasing sample size, and it is still noticeable (in the range from 4 to 8 %) even in samples of 200 observations. In the case of contaminated samples, the PDCE displays acceptable properties only for the biggest sample size studied (200 observations). Thus, the first recommendation of our study is to avoid the PDCE in practical small-sample settings \((n < 200)\), when alternative robust estimators can be used.Footnote 10

The GME has the smallest bias in the uncontaminated case, but its performance in terms of the RRMSE is similar to that of the other robust estimators, especially for larger samples. The other compared estimators (the OBRE, WMLE and PITSE) have significant biases in very small samples, which disappear only in samples of 100–200 observations. The ranking of the estimators is similar for both levels of the ARE studied.

Results for the contaminated Pareto models \(F=(1-\varepsilon )P(1,\alpha )+\varepsilon P(1000,\alpha )\), with \(\varepsilon = 0.05\), 0.1, are presented in Tables 3, 4, 5 and 6. We first discuss the results for the smaller degree of contamination (Tables 3, 4). We can observe that the MLE performs badly for all sample sizes according to both evaluation criteria, with values reaching (in absolute terms) more than 50 % for \(\alpha = 3\). Interestingly, the performance of the MLE deteriorates significantly as \(\alpha \) rises. All robust estimators provide at least some protection against contamination, which seems to be independent of the value of \(\alpha \). For this reason, the biggest gains from using robust estimators are observed for \(\alpha = 3\). In the case of the higher ARE (Table 3), the OBRE, PITSE and GME perform similarly for all sample sizes. For higher robustness and lower ARE (Table 4), when the WMLE is also included in the comparison, we can observe that the WMLE performs worse than the alternatives, especially in terms of the RRMSE. In this case, the OBRE, PITSE and GME provide a similar, and higher, level of protection than the WMLE. For the former estimators, moving from higher efficiency and lower robustness to lower efficiency and higher robustness reduces the RRMSE from about 17–20 % to about 11–12 % (for a sample size of 200).

Table 3 Simulation results for the Pareto tail index with data drawn from a contaminated Pareto distribution \(F=0.95P(1,\alpha )+0.05P(1000,\alpha )\), ARE = 94 %
Table 4 Simulation results for the Pareto tail index with data drawn from a contaminated Pareto distribution \(F=0.95P(1,\alpha )+0.05P(1000,\alpha )\), ARE = 78 %
Table 5 Simulation results for the Pareto tail index with data drawn from a contaminated Pareto distribution \(F=0.9P(1,\alpha )+0.1P(1000,\alpha )\), ARE = 94 %
Table 6 Simulation results for the Pareto tail index with data drawn from a contaminated Pareto distribution \(F=0.9P(1,\alpha )+0.1P(1000,\alpha )\), ARE = 78 %

The results for the higher degree of contamination \((\varepsilon = 0.1)\) are shown in Tables 5, 6. This type of contamination is rather extreme and, not surprisingly, it renders the MLE useless. For example, the values of both evaluation criteria exceed 65 % for \(\alpha = 3\). The performance of the OBRE, PITSE and GME is again roughly similar in the case of the higher ARE. Results for the case of lower ARE and higher robustness reveal an interesting behaviour of the WMLE. For small sample sizes \((n < 100)\), the WMLE performs substantially worse than the alternatives, for \(n = 100\) it performs comparably, while for \(n = 200\) it gives slightly better results than the other robust estimators. This behaviour is likely caused by the first-order bias correction term (14), which works poorly in small samples but does a much better job in samples of at least 100 observations. The results in Table 6 provide the strongest evidence for the power of robust estimators. Using them instead of the MLE reduces the RRMSE from more than 67 % to about 18–20 % in the case of \(\alpha = 3\) and \(n = 200\).

Tables 7, 8, 9, 10, 11, 12, 13 and 14 present results for Pareto distributions contaminated by multiplying a randomly chosen 1 % (Tables 7, 8), 2 % (Tables 9, 10), 5 % (Tables 11, 12) and 10 % (Tables 13, 14) of observations by 10. In the case of the smallest degree of data contamination, all robust estimators, with the exception of the PDCE, perform slightly better than the MLE, but only for \(\alpha = 3\) and \(n = 200\). A bigger advantage of the robust estimators is visible for the moderate (2 %) degree of contamination. In this case (Tables 9, 10), the OBRE, PITSE and GME perform similarly and significantly better than the MLE, but only for the bigger sample sizes (100, 200) and \(\alpha > 1\). For these values of n and \(\alpha \), the WMLE, which is included only in the comparison of estimators with ARE = 78 %, has a significantly higher RRMSE than the other robust alternatives (besides the PDCE).
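The multiplicative contamination scheme can also be stated in a few lines of code. The sketch below is again only an illustration of the design, not the study's actual code: a randomly chosen fraction of observations is multiplied by 10, which adds \(\ln 10 \approx 2.3\) to each affected log-observation and therefore pulls the MLE downward.

```python
import math
import random

def rpareto(n, alpha, xm=1.0):
    """Draw n Pareto(xm, alpha) variates by inverse-CDF sampling."""
    return [xm * random.random() ** (-1.0 / alpha) for _ in range(n)]

def hill_mle(sample, xm=1.0):
    """MLE (Hill) estimator of the tail index with known scale xm."""
    return len(sample) / sum(math.log(x / xm) for x in sample)

def contaminate(sample, frac, factor=10.0):
    """Multiply a randomly chosen fraction of the observations by `factor`."""
    k = int(round(frac * len(sample)))
    idx = set(random.sample(range(len(sample)), k))
    return [x * factor if i in idx else x for i, x in enumerate(sample)]

random.seed(1)
alpha, n, reps = 3.0, 200, 500
clean_est, contam_est = [], []
for _ in range(reps):
    x = rpareto(n, alpha)
    clean_est.append(hill_mle(x))
    contam_est.append(hill_mle(contaminate(x, frac=0.02)))
mean_clean = sum(clean_est) / reps
mean_contam = sum(contam_est) / reps
print(f"clean: {mean_clean:.2f}, 2% contaminated: {mean_contam:.2f}")
```

With 2 % contamination (4 out of 200 observations) the log-sum grows by about \(4 \ln 10 \approx 9.2\) on average, so the MLE falls from roughly 3 to roughly 2.6, a milder but still clearly visible bias than under the mixture contamination above.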

Table 7 Simulation results for the Pareto tail index with data drawn from a Pareto distribution \(P(1,\alpha )\) with randomly chosen 1 % of observations multiplied by 10, ARE = 94 %
Table 8 Simulation results for the Pareto tail index with data drawn from a Pareto distribution \(P(1,\alpha )\) with randomly chosen 1 % of observations multiplied by 10, ARE = 78 %
Table 9 Simulation results for the Pareto tail index with data drawn from a Pareto distribution \(P(1,\alpha )\) with randomly chosen 2 % of observations multiplied by 10, ARE = 94 %
Table 10 Simulation results for the Pareto tail index with data drawn from a Pareto distribution \(P(1,\alpha )\) with randomly chosen 2 % of observations multiplied by 10, ARE = 78 %
Table 11 Simulation results for the Pareto tail index with data drawn from a Pareto distribution \(P(1,\alpha )\) with randomly chosen 5 % of observations multiplied by 10, ARE = 94 %
Table 12 Simulation results for the Pareto tail index with data drawn from a Pareto distribution \(P(1,\alpha )\) with randomly chosen 5 % of observations multiplied by 10, ARE = 78 %
Table 13 Simulation results for the Pareto tail index with data drawn from a Pareto distribution \(P(1,\alpha )\) with randomly chosen 10 % of observations multiplied by 10, ARE = 94 %
Table 14 Simulation results for the Pareto tail index with data drawn from a Pareto distribution \(P(1,\alpha )\) with randomly chosen 10 % of observations multiplied by 10, ARE = 78 %

In the case of a large degree of contamination (5 %), presented in Tables 11, 12, we observe that for ARE = 94 % (Table 11) the robust estimators outperform the MLE for samples of 40 or more observations and for \(\alpha > 1\). All robust estimators, except for the PDCE, which performs well only for a sample size of 200, display a similar, if rather small, improvement over the MLE. When the more robust versions of the estimators are considered (Table 12), the protection against outliers is greater, but again only for \(\alpha > 1\). The OBRE, PITSE and GME perform similarly and markedly better than the WMLE and PDCE. The WMLE gives a much smaller RB than the MLE, but no or only a very small improvement in terms of RRMSE. Finally, Tables 13, 14 present results for the extreme case of 10 % contamination. In the case of higher efficiency (Table 13), the PITSE seems to be the best choice, at least when \(\alpha > 1\). When the less efficient but more robust versions of the estimators are considered (Table 14), the OBRE, PITSE and GME provide a significant improvement (especially in terms of RRMSE) over the MLE when \(\alpha > 1\). For \(n < 200\), the WMLE usually performs worse than most of the other robust estimators; only for \(n = 200\) does it give comparable or even slightly better results than the alternatives.

The main results of our Monte Carlo study can be summarized as follows. The PDCE and WMLE are not reliable in small samples and can be considered only when the sample size is at least 200. The remaining estimators—the OBRE, PITSE and GME—offer, in general, a comparable level of protection against data contamination or model deviation. Since the PITSE is the simplest of these from the computational point of view, it appears to be the best choice for estimating the Pareto tail index in small samples.

4 Empirical application

In this section, we apply the compared estimators to a real income distribution data set taken from the European Union Statistics on Income and Living Conditions (EU-SILC) database. The EU-SILC is an annual survey providing harmonized micro-data on income, poverty, social exclusion and living conditions for all the EU member states.Footnote 11 We focus on the distribution of disposable equivalized incomes for Belgium in 2005.Footnote 12 This data set was previously used by Alfons et al. (2013) in the context of robust estimation of the Gini index of inequality from survey data. The need for robust estimation arises in this context because survey samples may contain extreme observations, which have a large influence on estimates of many standard inequality measures (Cowell and Flachaire 2007). In the presence of extreme outliers, both estimation and inference for inequality indices can be unreliable. Extreme observations or outliers may appear in survey samples due to errors in data collection or data coding. On the other hand, they may also be non-representative unique observations that belong to the true distribution at the population level. In both cases, outliers can severely affect estimation and inference for inequality measures, so robust methods may deliver more reliable results.

The fit of the Pareto model to the Belgian income distribution in 2005, using the MLE and the robust estimators, is shown on a log–log plot in Fig. 1.Footnote 13 In order to stay within our small-sample setting, we apply the estimators to the 40 highest incomes in the data set. The figure confirms the observation of Alfons et al. (2013) that the data set at hand contains one extreme outlier, which can have a disproportionately high influence on the estimate of a population parameter of interest. The MLE is very seriously affected by the presence of the outlier. All robust estimators perform much better than the MLE, with the PITSE having a small edge over the OBRE and the WMLE (the latter two estimators produce almost identical estimates, which are indistinguishable in the figure). The GME does a slightly worse job.

Fig. 1 The empirical complementary cumulative distribution function, P(x), and the Pareto models fitted with the various estimators of the tail index, for the 40 highest disposable equivalized incomes in Belgium in 2005 (EU-SILC data)
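A log–log plot of this kind can be built directly from the empirical complementary CDF. The sketch below uses synthetic Pareto data rather than the EU-SILC incomes: for a Pareto sample, \(\ln P(X > x) = -\alpha \ln x\) above the scale parameter, so the points fall near a straight line whose slope is \(-\alpha \) (a simple least-squares slope through the log–log points is shown only as a rough check, not as one of the estimators compared in the paper).

```python
import math
import random

def ccdf_loglog(sample):
    """(log x, log P(X > x)) points of the empirical complementary CDF."""
    xs = sorted(sample)
    n = len(xs)
    # Drop the largest observation, whose empirical CCDF would be zero
    return [(math.log(x), math.log((n - i - 1) / n))
            for i, x in enumerate(xs[:-1])]

def ols_slope(points):
    """Least-squares slope through a list of (u, v) points."""
    n = len(points)
    mu = sum(u for u, _ in points) / n
    mv = sum(v for _, v in points) / n
    num = sum((u - mu) * (v - mv) for u, v in points)
    den = sum((u - mu) ** 2 for u, _ in points)
    return num / den

random.seed(2)
sample = [random.random() ** (-1.0 / 2.0) for _ in range(2000)]  # Pareto(1, 2)
slope = ols_slope(ccdf_loglog(sample))
print(f"log-log CCDF slope: {slope:.2f}")  # near -alpha = -2
```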

Let us now assume that we are interested in measuring income inequality among the rich and that the rich are represented in our sample by the 40 top income observations.Footnote 14 Since the total sample size for the Belgian data from 2005 is 5,133, the rich defined in this way constitute about 0.8 % of the total sample. The Gini inequality index for our 40 highest income observations, \(\hat{{G}}\), computed nonparametrically, is 0.6481. However, the same index computed excluding the outlying highest income, \(\hat{{G}}_{E}\), is only 0.2468. This shows that the influence of the extreme outlier on statistics computed from tail observations can indeed be very high. Table 15 presents values of the Gini index for the rich implied by the Pareto models fitted with the different estimators of the Pareto tail index.Footnote 15
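The sensitivity of the nonparametric Gini index to a single outlier can be illustrated with a small sketch. The data below are synthetic (a Pareto tail sample with one artificial extreme value), not the Belgian EU-SILC observations; the Gini function implements the standard order-statistic formula \(G = \sum _{i=1}^{n}(2i-n-1)x_{(i)} / (n\sum _i x_i)\).

```python
import random

def gini(values):
    """Nonparametric Gini index via the order-statistic formula."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    return sum((2 * (i + 1) - n - 1) * x for i, x in enumerate(xs)) / (n * total)

random.seed(3)
# 40 synthetic "top incomes" from a Pareto(1, 3) tail
incomes = [random.random() ** (-1.0 / 3.0) for _ in range(40)]
g_without = gini(incomes)
g_with = gini(incomes + [100.0 * max(incomes)])  # add one extreme outlier
print(f"Gini without outlier: {g_without:.3f}, with outlier: {g_with:.3f}")
```

As in the Belgian data, a single extreme observation is enough to inflate the index dramatically, which is why the parametric, robustly fitted alternatives below are of interest.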

Table 15 Pareto tail indices estimated using MLE and robust estimators and implied Gini indices, 40 highest disposable equivalized incomes for Belgium in 2005 (EU-SILC data)

All parametric estimates of the Gini index are much smaller than the nonparametric estimate \(\hat{{G}}\), which is severely distorted by the presence of the outlier. However, the parametric estimate implied by the MLE (0.3309) is still much higher than the nonparametric estimate computed for the data set excluding the outlier, \(\hat{{G}}_{E}\). In general, all parametric estimates of the Gini implied by the robust estimators of the tail index are similar and much closer to \(\hat{{G}}_{E}\). The PITSE, however, reconstructs the value of the Gini index closest to \(\hat{{G}}_{E}\), and the variability of the Gini implied by this estimator is comparable to or slightly smaller than that of the other estimators. This evidence confirms the conclusion from our simulation study that the PITSE should be the preferred choice in applied work using Pareto tail modelling in small samples.
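For a pure Pareto distribution with tail index \(\alpha > 1\), the Gini index has the well-known closed form \(G = 1/(2\alpha - 1)\), so each estimate \(\hat{\alpha }\) maps directly to an implied Gini. (The values in Table 15 may be computed with a refinement of this mapping for the rich subpopulation; the sketch below shows only the basic closed form and its inverse.)

```python
def pareto_gini(alpha):
    """Closed-form Gini index of a Pareto(xm, alpha) distribution, alpha > 1."""
    return 1.0 / (2.0 * alpha - 1.0)

def implied_alpha(gini_value):
    """Invert the mapping: the tail index implying a given Gini."""
    return (1.0 / gini_value + 1.0) / 2.0

# Under this mapping, the MLE-implied Gini of 0.3309 quoted above
# corresponds to a tail index of roughly 2.01
print(f"{implied_alpha(0.3309):.2f}")
```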

5 Conclusions

The classical Pareto distribution is widely used in many areas of economics and other sciences to model the right tail of heavy-tailed distributions. Since the most popular method of estimating the shape parameter (the Pareto tail index) of this distribution—the maximum likelihood estimation—is non-robust to model deviation and data contamination, several robust approaches have been proposed in the literature. In this paper, we have provided an extensive Monte Carlo comparison of the small-sample performance of the most popular robust estimators for the Pareto tail index.

The main conclusions from our simulation study are the following.Footnote 16 First, the MLE indeed performs unreliably under even a moderate degree of model deviation or data contamination. Our simulations also suggest that the performance of the MLE deteriorates significantly as the value of the Pareto tail index rises. Second, there are computational problems with the PDCE for small samples \((n \le 80)\). The performance of the PDCE is similar to that of the other robust estimators only for the largest sample size in our study (200 observations). For these reasons, we recommend that the PDCE be avoided in practical small-sample settings \((n < 200)\). Third, the WMLE usually performs worse than most of the other robust estimators, but shows good results in samples of size 200. Therefore, this estimator should only be used in sufficiently large samples. Fourth, the OBRE, PITSE and GME offer a similar level of protection in most of the studied settings. Taking into account that the PITSE is the simplest estimator from the computational point of view, while both remaining alternatives (especially the OBRE) are much more complex computationally, the PITSE seems to offer the desired compromise between ease of use and power to protect against outliers in the small-sample setting.