Estimation Using Non-parametric Approaches

Estimation of tail distributions and extreme quantiles is important in areas such as risk management in finance and insurance, in relation to extreme or catastrophic events. The main difficulty from the statistical perspective is that the available data on which to base the estimates are very sparse, which calls for tailored estimation methods. In this chapter, we survey currently used parametric and non-parametric methods, and provide some perspectives on how to move forward with non-parametric kernel-based estimation.


Introduction
This chapter presents a position survey of the overall objectives and specific challenges that define the state of the art in tail distribution and extreme quantile estimation, covering currently used parametric and non-parametric approaches and their application to financial risk measurement. What is envisioned is an enhanced non-parametric estimation method based on the Extreme Value Theory approach. The compounding challenges of current practice are addressed, such as the choice of the threshold above which observations are treated as extreme, and bandwidth selection from a bias-reduction perspective. The application of the kernel estimation approach, and the use of Expected Shortfall as a coherent risk measure in place of Value at Risk, are presented. The extension to multivariate data is addressed and its challenges identified.
Overview of the Following Sections. Financial risk measures are presented in Sect. 2. Section 3 covers Extreme Value Theory; Sect. 4, parametric and semi-parametric estimation methods; Sect. 5, non-parametric estimation methods; and Sect. 6, the perspectives opened by the identified challenges in estimating the presented financial risk measures.

Financial Risk Measures
The Long Term Capital Management collapse and the 1998 Russian debt crisis, the Latin American and Asian currency crises and, more recently, the U.S. mortgage credit market turmoil, followed by the bankruptcy of Lehman Brothers and the world's biggest-ever trading loss at Société Générale, are some examples of financial disasters during the last twenty years. In response to serious financial crises, like the 2007-2008 global financial crisis, regulators have become more concerned about the protection of financial institutions against catastrophic market risks. We recall that market risk is the risk that the value of an investment will decrease due to movements in market factors. The difficulty of modelling these rare but extreme events has been greatly reduced by recent advances in Extreme Value Theory (EVT). Value at Risk (VaR) and the related concept of Expected Shortfall (ES) have been the primary tools for measuring risk exposure in the financial services industry for over two decades. Additional literature can be found in [39] for quantitative risk management, and in [42] or [25] for the application of EVT in insurance, finance and other fields.

Value at Risk
Consider the loss X of a portfolio over a given time period δ; VaR is then a risk statistic that measures the risk of holding the portfolio for the time period δ. Assume that X has a cumulative distribution function (cdf) F_X. We define VaR at level α ∈ (0, 1) as

VaR_α(X) = F_X^←(α) = inf{x ∈ ℝ : F_X(x) ≥ α},

where F_X^← is the generalized inverse of the cdf F_X. Typical values of α are 0.95 and 0.99, while δ usually is 1 day or 10 days. Value-at-Risk (VaR) has become a standard measure for risk management and is also recommended in the Basel II accord. For an overview of VaR in a more economic setting we refer to [37] and [23]. Despite its widespread use, VaR has been criticized for failing to distinguish between light and heavy losses beyond the VaR. Additionally, the traditional VaR method has been criticized for violating the requirement of sub-additivity [4]. Artzner et al. [4] introduced a set of axioms (translation invariance, sub-additivity, positive homogeneity and monotonicity) that a sensible risk measure should satisfy; any risk measure which satisfies these axioms is said to be coherent. A related concept to VaR, which accounts for the tail mass, is the conditional tail expectation (CVaR), or Expected Shortfall (ES). ES is the average loss conditional on the VaR being exceeded and gives risk managers additional valuable information about the tail risk of the distribution. Due to its usefulness as a risk measure, in 2013 the Basel Committee on Bank Supervision even proposed replacing VaR with ES to measure market risk exposure.

Conditional Value at Risk or Expected Shortfall
Acerbi and Tasche proved in [1] that CVaR satisfies the above axioms and is therefore a coherent risk measure. In the case of a continuous random variable, Conditional Value-at-Risk can be derived from VaR as ES_α(X) = E[X | X ≥ VaR_α(X)]. Another possibility to calculate CVaR is to use Acerbi's integral formula:

ES_α(X) = (1/(1 − α)) ∫_α^1 VaR_u(X) du.

Estimating ES from the empirical distribution is generally more difficult than estimating VaR due to the scarcity of observations in the tail. As in most risk applications, we do not need to focus on the entire distribution. Extreme value theory is then a practical and useful tool for modeling and quantifying risk. Value at Risk and extreme value theory are covered well in most books on risk management and VaR in particular (also ES, to a much smaller extent), see for example [33,37,39], and [22]. Vice versa, VaR is treated in some extreme value theory literature, such as [26] and [17].
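To make the two measures concrete, here is a minimal sketch that computes the empirical VaR and ES of a loss sample. The helper name `var_es` and the heavy-tailed Student-t toy losses are illustrative choices, not part of the chapter:

```python
import numpy as np

def var_es(losses, alpha):
    """Empirical VaR and ES at level alpha; losses are recorded as positive values."""
    losses = np.sort(np.asarray(losses, dtype=float))
    # VaR: empirical alpha-quantile of the loss distribution
    var = np.quantile(losses, alpha)
    # ES: mean of the losses at or beyond the VaR (the tail average)
    es = losses[losses >= var].mean()
    return var, es

rng = np.random.default_rng(0)
losses = rng.standard_t(df=3, size=100_000)  # heavy-tailed toy losses
v, e = var_es(losses, 0.99)                  # ES is never below VaR at the same level
```

Note how ES always dominates VaR at the same level, reflecting that it averages over the whole tail rather than reading off a single quantile.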

Extreme Value Theory: Two Main Approaches
Extreme value theory (EVT) is the theory of modelling and measuring events which occur with very small probability. More precisely, given a sample X_1, ..., X_n of n random variables independently and identically distributed with distribution function F(·), we want to estimate the real number x_{p_n} defined by

x_{p_n} = F̄^←(p_n),

where p_n is a known sequence and F̄^←(u) = inf{x ∈ ℝ : F̄(x) ≤ u} is the generalized inverse of the survival function F̄(·) = 1 − F(·). Note that x_{p_n} is the quantile of order 1 − p_n of the cumulative distribution function F. A problem similar to the estimation of x_{p_n} is the estimation of "small probabilities" p_n, or the estimation of the tail distribution. In other words, for a sequence of fixed reals (c_n), we want to estimate the probability p_n defined by p_n = P(X > c_n), with c_n > X_{n,n}.
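The case c_n > X_{n,n} is exactly where purely empirical estimates break down: beyond the sample maximum, the empirical tail probability is identically zero, so some form of extrapolation is needed. A small sketch with toy Pareto data (our own choice) makes this visible:

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.pareto(2.0, size=1000)   # heavy-tailed toy sample

c = x.max() * 2.0                # a level c_n beyond the largest observation
p_hat = np.mean(x > c)           # empirical estimate of P(X > c)
# p_hat is exactly zero beyond the sample maximum, although the true
# Pareto tail probability at c is strictly positive -- EVT provides the
# model-based extrapolation that the empirical distribution cannot.
```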
The main result of extreme value theory states that the tails of all distributions fall into one of three categories, regardless of the overall shape of the distribution. Two main approaches are used for implementing EVT in practice: the block maxima approach and the peaks-over-threshold (POT) approach.

Block Maxima Approach
The Fisher and Tippett [29] and Gnedenko [30] theorems are the fundamental results in EVT. The theorems state that the maximum of a sample of properly normalized independent and identically distributed random variables converges in distribution to one of the three possible distributions: the Weibull, Gumbel or the Fréchet.

Theorem 1 (Fisher, Tippett, Gnedenko).
Let X_1, ..., X_n be i.i.d. with distribution function F, and let X_{1,n} ≤ ... ≤ X_{n,n} denote the order statistics. If there exist two sequences a_n > 0 and b_n, and a real γ, such that, as n → +∞,

P((X_{n,n} − b_n)/a_n ≤ x) → H_γ(x),

where

H_γ(x) = exp(−(1 + γx)^{−1/γ}) for γ ≠ 0 (with 1 + γx > 0), and H_0(x) = exp(−e^{−x}),

then we say that F is in the domain of attraction of H_γ and denote this by F ∈ DA(H_γ). The distribution function H_γ(·) is called the Generalized Extreme Value distribution (GEV).
This law depends only on the parameter γ, called the tail index. The associated density is shown in Fig. 1 for different values of γ. According to the sign of γ, we define three domains of attraction: γ < 0 corresponds to the Weibull domain, γ = 0 to the Gumbel domain, and γ > 0 to the Fréchet domain. The Weibull distribution clearly has a finite right endpoint; this is typically the case for distributions of mortality and of insurance/reinsurance claims, see [20]. The Fréchet tail is thicker than the Gumbel's. It is well known that the distributions of return series in most financial markets are heavy tailed (fat tails). The term "fat tails" can have several meanings, the most common being "extreme outcomes occur more frequently than predicted by the normal distribution". The block maxima approach is based on using the maximum (or minimum) values of the observations within blocks of constant length. For a sufficiently large number k of blocks, the resulting peak values of these k blocks of equal length can be used for estimation. The procedure is rather wasteful of data, and a relatively large sample is needed for an accurate estimate.
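The block maxima procedure can be sketched in a few lines: split the sample into blocks, keep each block maximum, and fit a GEV to the maxima. The block size, the Student-t toy data, and the use of `scipy.stats.genextreme` (whose shape parameter is c = −γ, so Fréchet-type tails correspond to c < 0) are our own assumptions:

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(1)
x = rng.standard_t(df=4, size=50_000)   # heavy-tailed toy returns

# Block maxima: split the sample into blocks and keep each block maximum
block_size = 500
maxima = x.reshape(-1, block_size).max(axis=1)

# Fit the GEV to the block maxima by maximum likelihood.
# scipy parameterizes the shape as c = -gamma.
c, loc, scale = genextreme.fit(maxima)
gamma_hat = -c
```

The data cost is visible here: 50,000 observations are reduced to only 100 maxima before any estimation takes place, which is why the approach is described as wasteful.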

Peaks Over Threshold (POT) Approach
The POT (Peaks-Over-Threshold) approach consists of using the generalized Pareto distribution (GPD) to approximate the distribution of excesses over a threshold. The approach was originally suggested by hydrologists. It is generally preferred in practice and forms the basis of our approach below. The two EVT approaches are connected by the Pickands-Balkema-de Haan theorem presented in [5,40].

Theorem 2 (Pickands-Balkema-de Haan). For a large class of underlying distribution functions F,

lim_{u → x_F} sup_{0 ≤ x < x_F − u} |F_u(x) − G_{γ,σ(u)}(x)| = 0,

where F_u(x) = P(X − u ≤ x | X > u) is the distribution of excesses over the threshold u, x_F is the (possibly infinite) right endpoint of F, and G_{γ,σ} is the Generalized Pareto Distribution (GPD) defined as

G_{γ,σ}(x) = 1 − (1 + γx/σ)^{−1/γ} for γ ≠ 0, and G_{0,σ}(x) = 1 − e^{−x/σ},

with x ≥ 0 when γ ≥ 0, and 0 ≤ x ≤ −σ/γ when γ < 0.

This means that the conditional excess distribution function F_u, for u large, is well approximated by a Generalized Pareto Distribution. Note that the tail index γ is the same for both the GPD and GEV distributions. The scale parameter σ and the tail index γ are the fundamental parameters governing the extreme behaviour of the distribution, and the effectiveness of EVT in forecasting depends upon their reliable and accurate estimation. By incorporating information about the tail through our estimates of γ and σ, we can obtain VaR and ES estimates even beyond the reach of the empirical distribution.
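The GPD approximation of Theorem 2 can be sketched directly with scipy. The threshold choice (the empirical 95% quantile), the Student-t toy data, and fixing the GPD location at zero via `floc=0` (so the fitted distribution models the excesses themselves) are our own assumptions:

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(2)
x = rng.standard_t(df=3, size=100_000)  # toy losses with tail index gamma = 1/3

u = np.quantile(x, 0.95)                # threshold: empirical 95% quantile
excesses = x[x > u] - u                 # peaks over the threshold

# Fit the GPD to the excesses by maximum likelihood; floc=0 keeps the
# location at zero, since excesses start at the threshold by construction.
gamma_hat, _, sigma_hat = genpareto.fit(excesses, floc=0)
```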

Parametric and Semi-parametric Estimation Methods
The problem of estimating the tail index γ has been widely studied in the literature. The most standard methods are of course the method of moments and maximum likelihood. Unfortunately, there is no explicit form for the parameter estimates, but numerical methods provide good approximations. More generally, the two common approaches to estimating the tail index are:
- Parametric models (e.g., maximum likelihood estimation of the GPD parameters);
- Semi-parametric models (e.g., the Hill estimator).

Semi-parametric Estimation
The best known estimator for the tail index γ > 0 of fat-tailed distributions is without contest the Hill estimator [31]. The formal definition of fat-tailed distributions comes from regular variation. The cumulative distribution is in the Fréchet domain of attraction if and only if, as x → ∞, the tail is asymptotically Pareto-distributed:

F̄(x) ≈ A x^{−τ},

where A > 0 and τ = 1/γ. Based on this approximation, the Hill estimator is written in terms of the order statistics X_{1,n} ≤ ... ≤ X_{n,n} as

γ̂_H = (1/k_n) Σ_{i=1}^{k_n} (log X_{n−i+1,n} − log X_{n−k_n,n}),

where k_n is a sequence such that 1 ≤ k_n ≤ n. Other estimators of this index have been proposed by Beirlant et al. [6,7], using an exponential regression model to reduce the bias of the Hill estimator, and by [28], who introduce a least squares estimator. The use of a kernel in the Hill estimator has been studied by Csörgő et al. [18]. An effective estimator of the extreme value index has been proposed by Falk and Marohn in [27]. A more detailed list of works on the estimation of the extreme value index is found in [19]. Note that the Hill estimator is sensitive to the choice of the threshold u = X_{n−k_n,n} (or of the number of excesses k_n) and is only valid for fat-tailed data.
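The Hill estimator above is a one-liner over sorted data. In this sketch, the exact-Pareto toy sample (with true γ = 0.5) and the choice k = 2000 are our own assumptions:

```python
import numpy as np

def hill_estimator(x, k):
    """Hill estimator of the tail index gamma > 0 from the k largest order statistics."""
    xs = np.sort(np.asarray(x, dtype=float))   # X_{1,n} <= ... <= X_{n,n}
    top = xs[-k:]                              # the k largest observations
    threshold = xs[-k - 1]                     # X_{n-k,n}
    return np.mean(np.log(top) - np.log(threshold))

rng = np.random.default_rng(3)
x = 1.0 + rng.pareto(2.0, size=50_000)         # exact Pareto tail, gamma = 0.5
gamma_hat = hill_estimator(x, k=2000)
```

Rerunning with different values of k illustrates the sensitivity noted above: for exact Pareto data the estimate is stable in k, but for data that are only asymptotically Pareto the estimate drifts as k grows and non-tail observations enter the average.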

Parametric Estimation
The principle of POT is to approximate the survival function of the excess distribution by a GPD, after estimating its parameters from the distribution of excesses over a threshold u, in the following two steps.

- First step - Parameter estimation. Let Y_1, . . . , Y_{N_n} be the exceedances over a chosen threshold u_n. The distribution of excesses F_{u_n} is given by

F_{u_n}(y) = P(X − u_n ≤ y | X > u_n) = (F(u_n + y) − F(u_n)) / (1 − F(u_n)),

and then the tail of the distribution F of the extreme observations is given by

F̄(u_n + y) = F̄(u_n) F̄_{u_n}(y).

The distribution of excesses F_{u_n} is approximated by G_{γ,σ(u_n)}, and the first step consists in estimating the parameters of this last distribution using the sample (Y_1, . . . , Y_{N_n}). The parameter estimation can be done using MLE. Different methods have been proposed to estimate the parameters of the GPD.
Other estimation methods are presented in [26]. The Probability Weighted Moments (PWM) method proposed by Hosking and Wallis [32] for γ < 1/2 was extended by Diebolt et al. [21] by a generalization of PWM estimators for γ < 3/2, as for many applications, e.g., in insurance, distributions are known to have a tail index larger than 1.
- Second step - Quantile estimation. In order to estimate the extreme quantile x_p defined by F̄(x_p) = p, we estimate F̄(u) by its empirical counterpart N_u/n and approximate F_{u_n} by the Generalized Pareto Distribution GPD(γ̂_n, σ̂_n) fitted in the first step. Then, for the threshold u = X_{n−k,n}, the extreme quantile is estimated by

x̂_p = X_{n−k,n} + (σ̂_n/γ̂_n) ((np/k)^{−γ̂_n} − 1).

The application of POT involves a number of challenges. The early stage of data analysis is very important to determine whether the data have the fat tail needed to apply the EVT results. Also, the parameter estimates of the limiting GPD depend on the number of extreme observations used. The threshold should be chosen large enough to satisfy the conditions permitting the application of the theory (u tends towards infinity), while at the same time leaving sufficient observations for the estimation. A high threshold generates few excesses, thereby inflating the variance of the parameter estimates. Lowering the threshold necessitates using observations that are no longer in the tail, which entails an increase in the bias.
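The two steps combine into a short extreme-quantile estimator. In this sketch, the Pareto toy data (true γ = 0.5, so the true quantile at p = 10⁻⁴ is p^{−1/2} = 100), the value k = 5000, and the use of `scipy.stats.genpareto` for the first step are our own assumptions:

```python
import numpy as np
from scipy.stats import genpareto

def pot_quantile(x, k, p):
    """POT estimate of the extreme quantile x_p from the k largest observations."""
    xs = np.sort(np.asarray(x, dtype=float))
    u = xs[-k - 1]                            # threshold X_{n-k,n}
    excesses = xs[-k:] - u
    # First step: fit the GPD to the excesses (location fixed at zero)
    gamma, _, sigma = genpareto.fit(excesses, floc=0)
    n = len(xs)
    # Second step: x_p = u + (sigma/gamma) * ((n*p/k)**(-gamma) - 1)
    return u + sigma / gamma * ((n * p / k) ** (-gamma) - 1.0)

rng = np.random.default_rng(4)
x = 1.0 + rng.pareto(2.0, size=100_000)       # Pareto tail, gamma = 0.5
q = pot_quantile(x, k=5000, p=1e-4)           # a quantile beyond the sample range
```

Note that p = 10⁻⁴ with n = 100,000 targets a quantile near the edge of the sample; the same code extrapolates to p below 1/n, where the empirical quantile is undefined.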

Non-parametric Estimation Methods
A main argument for using non-parametric estimation methods is that no specific assumptions on the distribution of the data are made a priori. That is, model specification bias can be avoided. This is relevant when there is limited information about the 'theoretical' data distribution, when the data can potentially contain a mix of variables with different underlying distributions, or when no suitable parametric model is available. In the context of extreme value distributions, the GPD and GEV distributions discussed in Sect. 3 are appropriate parametric models for the univariate case. However, for the multivariate case there is no general parametric form.
We restrict the discussion here to one particular form of non-parametric estimation, kernel density estimation [44]. Classical kernel estimation performs well when the data is symmetric, but has problems when there is significant skewness [9,24,41].
A common way to deal with skewness is transformation kernel estimation [45], which we will discuss with some details below. The idea is to transform the skew data set into another variable that has a more symmetric distribution, and allows for efficient classical kernel estimation.
Another issue for kernel density estimation is boundary bias. This arises because standard kernel estimates do not take knowledge of the domain of the data into account, and therefore the estimate does not reflect the actual behaviour close to the boundaries of the domain. We will also review a few bias correction techniques [34].
Even though kernel estimation is non-parametric with respect to the underlying distribution, there is a parameter that needs to be decided. This is the bandwidth (scale) of the kernel function, which determines the smoothness of the density estimate. We consider techniques intended for constant bandwidth [35], and also take a brief look at variable bandwidth kernel estimation [36]. In the latter case, the bandwidth and the location is allowed to vary such that bias can be reduced compared with using fixed parameters.
Kernel density estimation can be applied to any type of application and data, but some examples where it is used for extreme value distributions are given in [8,9]. A non-parametric method to estimate the VaR in extreme quantiles, based on transformed kernel estimation (TKE) of the cdf of losses, was proposed in [3]. A kernel estimator of conditional ES is proposed in [13,14,43].
In the following subsections, we start by defining the classical kernel estimator, then we describe a selection of measures that are used for evaluating the quality of an estimate and are needed, e.g., in the algorithms for bandwidth selection. Finally, we go into the different subareas of kernel estimation mentioned above in more detail.

Classical Kernel Estimation
Expressed in words, a classical kernel estimator approximates the probability density function associated with a data set through a sum of identical, symmetric kernel density functions that are centered at each data point. Then the sum is normalized to have total probability mass one.
We formalize this in the following way: Let k(·) be a bounded and symmetric probability distribution function (pdf), such as the normal distribution pdf or the Epanechnikov pdf, which we refer to as the kernel function.
Given a sample of n independent and identically distributed observations X_1, . . . , X_n of a random variable X with pdf f_X(x), the classical kernel estimator is given by

f̂_X(x) = (1/n) Σ_{i=1}^{n} k_b(x − X_i),

where k_b(·) = (1/b) k(·/b) and b is the bandwidth. Similarly, the classical kernel estimator for the cumulative distribution function (cdf) is given by

F̂_X(x) = (1/n) Σ_{i=1}^{n} K_b(x − X_i),

where K(·) is the cdf corresponding to the pdf k(·), and K_b(·) = K(·/b).
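The estimator f̂_X is a few lines of vectorized code. In this sketch, the Gaussian kernel, the standard-normal toy data, and the fixed bandwidth b = 0.3 are our own choices:

```python
import numpy as np

def kde(x_grid, data, b):
    """Classical kernel density estimate with a Gaussian kernel and bandwidth b."""
    u = (x_grid[:, None] - data[None, :]) / b
    # f_hat(x) = (1/n) * sum_i k_b(x - X_i), with k_b(v) = (1/b) k(v/b)
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(data) * b * np.sqrt(2 * np.pi))

rng = np.random.default_rng(5)
data = rng.normal(size=2000)
grid = np.linspace(-4, 4, 81)
f_hat = kde(grid, data, b=0.3)   # one smooth bump per data point, averaged
```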

Selected Measures to Evaluate Kernel Estimates
A measure that we would like to minimize for the kernel estimate is the mean integrated square error (MISE),

MISE(b) = E[ ∫_Ω (f̂_X(x) − f_X(x))² dx ],

where Ω is the domain of support of f_X(x), and the argument b is included to show that minimizing MISE is one criterion for bandwidth selection. However, MISE can only be computed when the true density f_X(x) is known. MISE can be decomposed into two terms: the integrated square bias

∫_Ω (E[f̂_X(x)] − f_X(x))² dx,

and the integrated variance

∫_Ω Var(f̂_X(x)) dx.

To understand these expressions, we first need to understand that f̂_X is a random variable that changes with each sample realization. To illustrate what this means, we work through an example.
Example 1. Let X be uniformly distributed on Ω = [0, 1], and let k(·) be the Gaussian kernel. Then f_X(x) = 1 on Ω. For each kernel function centered at some data point y we have that

E[f̂_X(x)] = ∫_0^1 k_b(x − y) dy,

since the data points are uniformly distributed on [0, 1]. Applying this to the kernel estimator, we get the integrated square bias

∫_0^1 ( ∫_0^1 k_b(x − y) dy − 1 )² dx.

By first using that Var(X) = E[X²] − E[X]², we get the following expression for the integrated variance:

(1/n) ∫_0^1 ( ∫_0^1 k_b(x − y)² dy − ( ∫_0^1 k_b(x − y) dy )² ) dx.

The integrals are evaluated for the Gaussian kernel, and the results are shown in Fig. 2. The bias, which is largest at the boundaries, is minimized when the bandwidth is very small, but a small bandwidth also leads to a large variance. Hence, MISE is minimized by a bandwidth that provides a trade-off between bias and variance. To simplify the analysis, MISE is often replaced with the asymptotic MISE approximation (AMISE). This holds under certain conditions involving the sample size and the bandwidth. The bandwidth depends on the sample size, and we can write b = b(n). We require b(n) ↓ 0 as n → ∞, while nb(n) → ∞ as n → ∞. Furthermore, we need f_X(x) to be twice continuously differentiable. We then have [44] the asymptotic approximation

AMISE(b) = R(k)/(nb) + (1/4) b⁴ m₂(k)² R(f_X''),

where R(g) = ∫ g(x)² dx and m_p(k) = ∫ x^p k(x) dx. The bandwidth that minimizes AMISE can be analytically derived to be

b_opt = ( R(k) / (m₂(k)² R(f_X'') n) )^{1/5},

leading to AMISE(b_opt) = O(n^{−4/5}). The optimal bandwidth can then be calculated for different kernel functions. We have, e.g., for the Gaussian kernel [11], using R(k) = 1/(2√π) and m₂(k) = 1,

b_opt^G = ( 1 / (2√π R(f_X'') n) )^{1/5}.

The difficulty in using AMISE is that the norm of the second derivative of the unknown density needs to be estimated. This will be further discussed under the subsection on bandwidth selectors. We also mention the skewness γ_X of the data, which is a measure that can be used to see if the data is suitable for classical kernel estimation. We estimate it as

γ̂_X = ( (1/n) Σ_{i=1}^{n} (X_i − X̄)³ ) / ( (1/n) Σ_{i=1}^{n} (X_i − X̄)² )^{3/2},

where X̄ is the sample mean. It was shown in [44], see also [41], that minimizing the integrated square error (ISE) for a specific sample is equivalent to minimizing the cross-validation function

CV(b) = ∫ f̂_X(x)² dx − (2/n) Σ_{i=1}^{n} f̂_{−i}(x_i),

where f̂_{−i}(·) is the kernel estimator obtained when leaving the observation x_i out.
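The cross-validation function has a closed form for the Gaussian kernel, since the convolution of two Gaussians is again Gaussian, which makes a grid search over bandwidths straightforward. In this sketch, the toy normal data and the bandwidth grid are our own choices:

```python
import numpy as np

def cv_score(data, b):
    """Least-squares cross-validation score CV(b) for a Gaussian-kernel KDE.

    CV(b) = int f_hat^2 dx - (2/n) * sum_i f_hat_{-i}(x_i); both terms are
    closed-form for the Gaussian kernel.
    """
    n = len(data)
    d = data[:, None] - data[None, :]
    # int f_hat^2 dx: pairwise Gaussian convolutions, variance 2*b^2
    term1 = np.exp(-d**2 / (4 * b**2)).sum() / (n**2 * b * 2 * np.sqrt(np.pi))
    # leave-one-out term: kernel sums with the diagonal (self-term) excluded
    k = np.exp(-d**2 / (2 * b**2)) / (b * np.sqrt(2 * np.pi))
    loo = (k.sum(axis=1) - k.diagonal()) / (n - 1)
    return term1 - 2.0 * loo.mean()

rng = np.random.default_rng(6)
data = rng.normal(size=400)
bandwidths = np.linspace(0.05, 1.0, 20)
scores = [cv_score(data, b) for b in bandwidths]
b_cv = bandwidths[int(np.argmin(scores))]   # CV-selected bandwidth
```

The selected bandwidth trades off the two MISE terms discussed above: tiny bandwidths are penalized through the first (variance-dominated) term, huge ones through the poor leave-one-out fit.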
Other useful measures of the goodness of fit are also discussed in [41].

Bias-Corrected Kernel Estimation
As was illustrated in Example 1, boundaries where the density does not go to zero generate bias. This happens because the kernel functions cross the boundary, and some of the mass ends up outside the domain. We want the kernel method to satisfy E[f̂_X(x)] = f_X(x) in all of the support Ω of the density function, but this condition does not hold at boundaries unless the density is zero there. An overview of the topic, and of simple boundary correction methods, is given in [34]. By employing a linear bias correction method, we can make the moments of order 0 and 1 satisfy the consistency requirements m_0 = 1 (total probability mass) and m_1 = 0, such that the expectation is consistent to order b² at the boundary. A general linear correction method for a density supported on Ω = [x_0, ∞), shown to perform well in [34], replaces the kernel by

k̃(u) = ((a_2 − a_1 u) / (a_0 a_2 − a_1²)) k(u),

for a kernel function centered at the location x, where u denotes the scaled kernel argument. The coefficients a_j = a_j(b, x) are the partial moments of the kernel over the part of its support that lies inside the domain,

a_j(b, x) = ∫_{(x_0 − x)/b}^{z} u^j k(u) du,

where z is the end point of the support of the kernel function. An example with modified Gaussian kernels close to a boundary is shown in Fig. 3. At the boundary, the amplitude of the kernels becomes higher to compensate for the mass loss, while away from the boundary they resume their normal shape and size. The kernel functions closest to the boundary become negative in a small region, but this does not affect the consistency of the estimate. A more recent bias correction method is derived in [38], based on ideas from [15]. This type of correction is applied to the kernel estimator for the cdf, and can be seen as a Taylor expansion. It also improves the capturing of valleys and peaks in the distribution function, compared with the classical kernel estimator. It requires that the density is four times differentiable, that the kernel is symmetric, and, at least for the theoretical derivations, that the kernel is compactly supported on [−1, 1].
The overall bias of the estimator is O(b 4 ) as compared with O(b 2 ) for the linear correction method, while the variance is similar to what is achieved with the uncorrected estimator. This boundary correction approach is used for estimating extreme value distributions in [9].
The parameter λ is kernel dependent, and should be chosen such that AMISE is minimized, but according to [15], the estimator is not that sensitive to the choice. An explicit expression for AMISE with this correction is derived in [38], and is also cited in [9], where the value λ = 0.0799 is also given as an (approximate) minimizer of the variance for the Epanechnikov kernel.
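Returning to the linear correction, here is a hedged sketch for data on [0, ∞) with a Gaussian kernel, following the Jones-style construction with the partial moments taken at the evaluation point (for the Gaussian these moments have closed forms). The exponential toy data and the bandwidth are our own assumptions:

```python
import numpy as np
from scipy.stats import norm

def boundary_kde(x, data, b):
    """Linearly boundary-corrected Gaussian KDE for data supported on [0, inf).

    Each kernel evaluation is reweighted by (a2 - a1*u)/(a0*a2 - a1^2),
    where a_j are partial moments of the kernel over the part of its
    support inside the domain; this restores m0 = 1 and m1 = 0 near the
    boundary, giving O(b^2) bias there instead of O(1).
    """
    x = np.atleast_1d(x).astype(float)
    c = x / b                              # distance to the boundary in bandwidths
    # Partial moments of the standard Gaussian on (-inf, c]
    a0 = norm.cdf(c)
    a1 = -norm.pdf(c)
    a2 = norm.cdf(c) - c * norm.pdf(c)
    u = (x[:, None] - data[None, :]) / b   # scaled kernel argument, u <= c
    w = (a2[:, None] - a1[:, None] * u) / (a0 * a2 - a1**2)[:, None]
    return (w * norm.pdf(u)).mean(axis=1) / b

rng = np.random.default_rng(7)
data = rng.exponential(size=5000)          # true density exp(-x), so f(0) = 1
f0 = boundary_kde(0.0, data, b=0.2)[0]     # corrected estimate at the boundary
```

Without the correction, the estimate at x = 0 would be close to 0.5 (half the kernel mass leaks out of the domain); the reweighting restores it to near the true value 1. Far from the boundary the weights tend to 1 and the classical estimator is recovered.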

Transformation Kernel Estimation
The objective in transformation kernel estimation is to find a transformation of the random variable X, which for example has a right-skewed distribution, into a symmetric random variable Y. Then classical kernel estimation can be successfully applied to Y. The transformation function T(·) should be monotonically increasing; for a right-skewed true density it should also be concave. It also needs to have at least one continuous derivative. The transformation is applied to the original data to generate a transformed data sample Y_i = T(X_i), i = 1, . . . , n. For the pdfs of the two random variables it holds that f_X(x) = f_Y(T(x)) T'(x), and for the cdfs we have that F_X(x) = F_Y(T(x)). We apply the kernel density estimator to the transformed data, leading to the following estimator for the original density:

f̂_X(x) = (1/n) Σ_{i=1}^{n} k_b(T(x) − T(X_i)) T'(x).

Several different transformation classes have been proposed for heavy-tailed data. The shifted power transformation family was proposed in [45]:

T(x) = sign(λ_2)(x + λ_1)^{λ_2} for λ_2 ≠ 0, and T(x) = ln(x + λ_1) for λ_2 = 0,

where λ_1 > − min(x_i) and λ_2 ≤ 1. An algorithm for choosing the transformation parameters is given in [10]. First a restriction is made to parameters λ_1, λ_2 that give close to zero skewness for the transformed data. Then the AMISE of the classical kernel estimation for the density f_Y(y) is minimized, assuming an asymptotically optimal bandwidth. This is equivalent to minimizing R(f_Y''). As we do not know the true density, an estimator is needed. The estimator suggested in [10] is

R̂(f_Y'') = (1/(n² c⁵)) Σ_{i=1}^{n} Σ_{j=1}^{n} (k * k)⁽⁴⁾((y_i − y_j)/c),

where the convolution k * k(u) = ∫ k(u − s) k(s) ds, and c is the bandwidth used in this estimate. The Möbius-like mapping introduced in [16] takes data in Ω_X = [0, ∞) and maps it to Ω_Y = [−1, 1).
The scale M is determined by minimizing R(f_Y''). Given a scale M, α is determined such that no probability mass spills over at the right boundary; that is, the resulting density does not have mass at (or beyond) infinity. A modified Champernowne distribution transformation is derived in [12], with transformation function

T(x) = ((x + c)^α − c^α) / ((x + c)^α + (M + c)^α − 2c^α), x ≥ 0,

where M can be chosen as the median of the data, and α and c are found by maximizing a log-likelihood function, see [12]. So far, we have only considered the possibility of performing one transformation, but one can also transform the data iteratively, or perform two specific consecutive transformations. Doubly transformed kernel estimation is discussed, e.g., in [9]. The idea is to first transform the data to something close to uniform, and then to apply an inverse beta transformation. This makes the final distribution close to a beta distribution, for which the optimal bandwidth can easily be computed.
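A minimal sketch of transformation kernel estimation, using the λ₂ = 0 member of the shifted power family with λ₁ = 0 (i.e. T(x) = log x); the lognormal toy data and the fixed bandwidth are our own assumptions:

```python
import numpy as np

def transformed_kde(x, data, b):
    """Transformation KDE with T(x) = log(x) for right-skewed positive data.

    Estimate the density of Y = log(X) with a classical Gaussian kernel,
    then map back via f_X(x) = f_Y(log x) * T'(x), with T'(x) = 1/x.
    """
    y = np.log(data)                    # transformed (roughly symmetric) sample
    u = (np.log(x)[:, None] - y[None, :]) / b
    f_y = np.exp(-0.5 * u**2).sum(axis=1) / (len(data) * b * np.sqrt(2 * np.pi))
    return f_y / x                      # back-transform: multiply by T'(x) = 1/x

rng = np.random.default_rng(8)
data = rng.lognormal(size=5000)         # right-skewed, heavy-tailed toy sample
x = np.array([0.5, 1.0, 2.0, 5.0])
f_hat = transformed_kde(x, data, b=0.25)
```

For lognormal data the transformed sample is exactly normal, so a single fixed bandwidth works well on the transformed scale, whereas a classical KDE on the original scale would oversmooth the peak or undersmooth the tail.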

Bandwidth Selection
As briefly mentioned in Sects. 5.2 and 5.4, the choice of bandwidth b in kernel estimation has a significant impact on the quality of the estimator, but choosing an appropriate bandwidth requires the use of one estimator or another. The rule-of-thumb bandwidth estimator of Silverman [44] is often cited, but it assumes that the underlying density can be approximated by a normal density, and is hence not appropriate for heavy-tailed, right-skewed distributions. Many bandwidth selection methods use a normal reference at some step in the process [11], but this introduces a parametric step into the non-parametric estimation. An interesting alternative is the Improved Sheather-Jones method [11]: in the chain of pilot bandwidths b*_ℓ that it builds, we instead assume that b*_{ℓ+1} = b*_ℓ for some large enough ℓ; the experiments in [11] indicate that ℓ = 5 should be enough. We then get a non-linear equation to solve for the bandwidth. Using this relation, no assumptions on the true distribution are made, and this bandwidth selector is shown to perform well also for non-normal distributions.

More Challenges in Estimating the Risk Measures: Financial Time Series and the Multivariate Case
A Dynamic Approach. Highlighting the underlying assumptions is relevant for understanding model uncertainty when estimating rare or extreme events. In the two preceding sections, VaR and ES were estimated under the assumption that the distribution of asset returns does not change over time: when applying the POT approach to the returns in order to calculate these risk measures, their distribution was assumed to be stationary. A dynamic model which captures current risk is more realistic. EVT can also be applied on top of a stochastic time series model. Such dynamic models use an ARCH/GARCH-type process along with POT to model VaR and ES, which then depend on and change with the fluctuations of the market. This approach, studied in [2], reflects two stylized facts exhibited by most financial return series, namely stochastic volatility and the fat-tailedness of conditional return distributions over short time horizons.
The Multivariate Case for EVT. When estimating the VaR of a multi-asset portfolio, correlations between assets often become more positive and stronger during financial crises. Assuming that the variables are independent and identically distributed is therefore a strong hypothesis. Portfolio losses are the result not only of the individual assets' performance but also, very importantly, of the interaction between assets. Hence, from the accuracy point of view, we would ideally prefer a multivariate approach. An extension of the univariate EVT models using a dependence structure leads to a parametric model and is then expected to be less efficient for scarce data. A non-parametric approach should therefore be preferred to estimate portfolio tail risk. Transformation kernel density estimation is used in [8] for studying multivariate extreme value distributions in temperature measurement data. Future directions involve applying this type of methodology to real and simulated portfolio data.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.