Abstract
The purpose of this chapter is to examine in depth the new algorithm for counting with dependence, first assuming a hierarchy in the dependence structure and then approaching the problem in very high-dimensional cases. The hierarchical structure leads to a set of distorted distributions, selected by a random matrix representing the arrival policy of the random event, while the tractability of high-dimensional problems requires working under approximation assumptions, resulting in a limiting copula-based counting approach.
Notes
- 1.
For a basic introduction on random matrix theory, we refer to the brief introductory material of Chap. 3.
- 2.
The matrix A represents a hidden hierarchy among the variables since it selects only some of the combinatorial distributions, i.e., the distorted ones. This hidden hierarchy represents the effect of the hierarchical structure on the arrival policy.
- 3.
See Chap. 2 for details on the hierarchical Archimedean class of copula functions.
- 4.
We refer to \(D_d(k, n)\) as the number of combinatorial distributions in the corresponding sub-arrival matrix \(p_{k,n}\), i.e., the most probable ways to distribute the integer k into n places.
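As an illustration only (not taken from the chapter), the ways to distribute an integer k into n places can be enumerated with a standard stars-and-bars sketch; the function name `compositions` is hypothetical:

```python
from itertools import combinations
from math import comb

def compositions(k, n):
    """Enumerate all ways to distribute the integer k into n
    (possibly empty) places, as tuples of nonnegative counts."""
    # Stars-and-bars: choose n-1 bar positions among k+n-1 slots.
    for bars in combinations(range(k + n - 1), n - 1):
        last, parts = -1, []
        for b in bars:
            parts.append(b - last - 1)  # stars between consecutive bars
            last = b
        parts.append(k + n - 2 - last)  # stars after the last bar
        yield tuple(parts)

dists = list(compositions(3, 2))
# Their total count is the binomial coefficient C(k+n-1, n-1).
assert len(dists) == comb(3 + 2 - 1, 2 - 1)
```

The full count grows quickly, which is one reason the chapter restricts attention to the distorted (most probable) distributions rather than all of them.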
- 5.
The rule which explains how the coordinates are generated is presented in the following Corollary.
- 6.
Following this calling order, the d.c.d. considered here is the second one, i.e., j = 2.
- 7.
In this case we refer to a special case of the CHC copula, i.e., a perfectly homogeneous clusterized copula restricted to the distorted distributions.
- 8.
We work here with incremental conditional probabilities, i.e., \(p_z^{(i,u)}=\mathbb {P}((z-1)x<L^{(i,u)}\leq zx\mid L^{(i,u)} \geq (z-1)x)\).
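A minimal sketch of this construction, under the assumption that the conditioning event is the loss exceeding the previous level (a discrete-hazard reading; the function and its arguments are hypothetical, not from the chapter):

```python
import numpy as np

def incremental_cond_probs(cdf, x, zmax):
    """Incremental conditional probabilities computed from a loss cdf F:
    p_z = (F(z*x) - F((z-1)*x)) / (1 - F((z-1)*x)),  z = 1..zmax."""
    p = []
    for z in range(1, zmax + 1):
        lo, hi = cdf((z - 1) * x), cdf(z * x)
        denom = 1.0 - lo
        p.append((hi - lo) / denom if denom > 0 else 0.0)
    return np.array(p)

# Toy example: exponential loss cdf F(t) = 1 - exp(-t).
F = lambda t: 1.0 - np.exp(-t)
p = incremental_cond_probs(F, x=0.5, zmax=4)
# The memoryless exponential gives equal increments 1 - exp(-x).
```

The exponential toy case makes the "incremental" interpretation visible: each step conditions on having survived the previous loss level.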
- 9.
We observe that the column dimension of \(\bar {\mathbf {A}}^{(s)}\) is equal to that of \(\bar {\mathbf {A}}\), but the row dimension is stochastic.
- 10.
The rule which explains how the coordinates are generated is presented in the following Corollary.
- 11.
We speak of a weighted mean since we weight by the frequency of each scenario with respect to the total number of trials.
- 12.
The pictures are generated by interpolation and smoothing techniques in order to represent the discrete distribution as a continuous one.
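A minimal sketch of this rendering idea, assuming linear interpolation and a moving-average smoother (the chapter's actual plotting technique is not specified, and the binomial pmf here is only a stand-in for a discrete counting distribution):

```python
import numpy as np
from math import comb

# Toy discrete counting distribution on k = 0..n (binomial pmf).
n, q = 20, 0.3
k = np.arange(n + 1)
pmf = np.array([comb(n, i) * q**i * (1 - q)**(n - i) for i in k])

# Interpolate onto a fine grid, then apply a short moving-average
# pass: this mimics drawing the discrete distribution as a curve.
grid = np.linspace(0, n, 401)
dense = np.interp(grid, k, pmf)
smooth = np.convolve(dense, np.ones(5) / 5.0, mode="same")
```

The smoothing changes only the visual presentation, not the underlying discrete probabilities, which is the sense of the note.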
- 13.
This evidence recalls the two-bulk configuration of the spectra of square symmetric random matrices with iid entries (see Mehta 18).
- 14.
This ratio seems to be relevant in the results concerning the limiting spectral distribution of a random matrix.
- 15.
The Marčenko–Pastur distribution is the limiting distribution of the eigenvalues of a square symmetric random matrix with iid entries. We compute this limiting distribution on the pseudo-spectra range, and for this reason we call it the pseudo-Marčenko–Pastur distribution.
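For reference, the standard Marčenko–Pastur density can be sketched as follows; this is the textbook form with aspect ratio and variance parameters, not the chapter's pseudo-spectra variant:

```python
import numpy as np

def marchenko_pastur_pdf(x, ratio, sigma2=1.0):
    """Marčenko–Pastur density with aspect ratio 0 < ratio <= 1 and
    entry variance sigma2, supported on [lm, lp]."""
    lm = sigma2 * (1 - np.sqrt(ratio)) ** 2  # lower edge of support
    lp = sigma2 * (1 + np.sqrt(ratio)) ** 2  # upper edge of support
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    inside = (x > lm) & (x < lp)
    xi = x[inside]
    out[inside] = np.sqrt((lp - xi) * (xi - lm)) / (2 * np.pi * sigma2 * ratio * xi)
    return out

xs = np.linspace(0.0, 4.5, 2000)
f = marchenko_pastur_pdf(xs, ratio=0.5)
mass = float(f.sum() * (xs[1] - xs[0]))  # Riemann check: mass ~ 1
```

Restricting such a density to the pseudo-spectra range, as the note describes, would amount to evaluating it only on that range and renormalizing.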
- 16.
We omit the comparison of the pseudo-spectra distribution in the undistorted case and the (NClayton, id) case because their dependence structures are very similar to Fig. 5.12b.
- 17.
This is a limiting result which holds true for n → +∞, but it is accepted for large but finite n. The dimension n which allows one to classify it as large depends on the application at hand.
- 18.
There is no confusion here between the default and the loss cdf since they are the same, except that the support of the first is \(\mathbb {N}\) while that of the second is \(\mathbb {R}\).
- 19.
We point out that if we count fatal events corresponding to defaults, we recover the default distribution, while if we count losses, we recover the loss distribution.
- 20.
The tolerance is defined as the maximum acceptable distance of the elements of a cluster from the centroid of the same cluster (see Bernardi and Romagnoli 2).
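The tolerance-based grouping can be illustrated with a greedy sketch; the actual clustering method of Bernardi and Romagnoli (2) may differ, and the function `tolerance_clusters` is hypothetical:

```python
import numpy as np

def tolerance_clusters(points, delta):
    """Greedy clustering sketch: a point joins the first cluster whose
    centroid lies within Euclidean distance delta (the tolerance);
    otherwise it opens a new cluster."""
    centroids, members = [], []
    for p in points:
        for i, c in enumerate(centroids):
            if np.linalg.norm(p - c) <= delta:
                members[i].append(p)
                centroids[i] = np.mean(members[i], axis=0)  # refresh centroid
                break
        else:
            centroids.append(np.asarray(p, dtype=float))
            members.append([p])
    return centroids, members

pts = np.array([[0.0], [0.05], [1.0], [1.02]])
cents, groups = tolerance_clusters(pts, delta=0.09)
# Two well-separated groups emerge at tolerance 0.09.
```

The value δ = 0.09 matches the tolerance used later in the chapter's numerical illustrations.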
- 21.
In the Clayton PL model, we recover a pdf with a single peak near zero, reflecting the lower tail dependence of this copula function; here we cannot observe this peak due to approximation issues.
- 22.
We choose a properly defined unit of measure for the losses.
- 23.
If the different ways to distribute the events are equally probable, we must consider the average of the copula values. Otherwise, we can weight the values coherently with our confidence assumptions.
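A toy illustration of the two rules (the numbers are invented, not from the chapter):

```python
import numpy as np

cop_vals = np.array([0.12, 0.18, 0.15])  # copula value per distribution
weights = np.array([0.5, 0.3, 0.2])      # confidence weights (sum to 1)

equal = cop_vals.mean()                       # equally probable case
weighted = np.average(cop_vals, weights=weights)  # confidence-weighted case
```

Both reduce to the same answer when the weights are uniform, which is why the equally probable case is stated as a plain average.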
- 24.
The cardinality of the groups justifies the large portfolio assumption since it is greater than the threshold of 20 (see Schönbucher 19).
- 25.
We express the dependence level in terms of Kendall's tau to compare the Clayton and the Gaussian cases directly.
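The comparison works because Kendall's tau has a closed form in both families; a sketch of the standard inversions (function names are hypothetical):

```python
from math import sin, pi

def clayton_theta_from_tau(tau):
    """Clayton dependence parameter implied by Kendall's tau:
    tau = theta / (theta + 2)  =>  theta = 2 tau / (1 - tau)."""
    return 2.0 * tau / (1.0 - tau)

def gaussian_rho_from_tau(tau):
    """Gaussian-copula correlation implied by Kendall's tau:
    tau = (2 / pi) * arcsin(rho)  =>  rho = sin(pi * tau / 2)."""
    return sin(pi * tau / 2.0)

theta = clayton_theta_from_tau(0.5)  # -> 2.0
rho = gaussian_rho_from_tau(0.5)     # -> sin(pi/4) ~ 0.707
```

Fixing tau and mapping it into each family's own parameter puts the two models on the same dependence scale.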
- 26.
We say that the class is large if its cardinality is greater than or equal to a fixed bound.
- 27.
This is a second level counting variable since it is linked to a second level dependence structure or a dependence structure within a class.
- 28.
In the case of an empty set, we delete the corresponding vector element. For example, if the set of elements equal to one in the sth group and for the zth c.c.d. is empty, we associate only \(u_{s,2}^z\) with \(u_{s,w}^z,\forall w\) and \(v_{s,2}^z\) with \(v_{s,w}^z,\forall w\).
- 29.
Here the compatibility is with respect to the groups' cardinalities. These c.c.d.s are ordered with respect to some specified criterion.
- 30.
We group with a clustering method based on the Euclidean metric and a small tolerance, e.g., δ = 0.09.
- 31.
We observe that in the first scenario the mean whole loss cdf computed with the HYC approach coincides with the HLC mean whole loss cdf, since in this case all classes are large.
- 32.
We speak of a partial granularity error since we compare the PL with the HYC whole loss cdf, which corrects the granularity only for small classes. The proper granularity error can be recovered by comparing the PL and the CHC whole loss cdfs.
- 33.
This means that the evidence explained here holds both for a concentration toward the riskiest class and for one toward the safest class.
References
Barlow, R. E., & Proschan, F. (1975). Statistical Theory of Reliability and Life Testing: Probability Models. New York: Holt, Rinehart and Winston.
Bernardi, E., & Romagnoli, S. (2011). Computing the Volume of an high-dimensional semi-unsupervised Hierarchical copula. International Journal of Computer Mathematics, 88(12), 2591–2607.
Bernardi, E., & Romagnoli, S. (2012). Limiting Loss distribution on a Hierarchical copula-based model. International Review of Applied Financial Issues and Economics, 4(2), 126–145.
Bernardi, E., & Romagnoli, S. (2013). A Clusterized copula-based Probability Distribution of a counting variable for high-dimensional problems. The Journal of Credit Risk, 9(2), 3–26.
Bernardi, E., & Romagnoli, S. (2015). A copula-based hierarchical hybrid loss distribution. Statistics and Risk Modeling, 32, 73–87.
Bernardi, E., & Romagnoli, S. (2016). Distorted copula-based probability distribution of a counting hierarchical variable: A credit risk application. International Journal of Information Technology & Decision Making, 15, 285–310.
Cifuentes, A., & O’Connor, G. (1996). The binomial expansion method applied to CBO/CLO analysis. Moody’s Special Report, December.
Cifuentes, A., Efrat, I., Glouck, J., & Murphy, E. (1999). Buying and selling credit risk: a perspective on credit-linked obligations. In J. Gregory (Ed.). Credit Derivatives. London: Risk Book, pp. 112–123.
Emmer, S., & Tasche, D. (2006). Calculating credit risk capital charges with the one-factor model, Journal of Risk, 7, 85–101.
Gordy, M. (2003). A risk-factor model foundation for ratings-based bank capital rules. Journal of Financial Intermediation, 12(3), 199–232.
Gordy, M. (2004). Granularity adjustment in portfolio credit risk measurement. In G. Szego (Ed.), Risk Measures for the 21st Century. Chichester: Wiley.
Hofert, M., & Mächler, M. (2011). Nested Archimedean Copulas meet R: The nacopula package. Journal of Statistical Software, in press.
Joe, H. (1997). Multivariate Models and Dependence Concepts. London: Chapman & Hall.
Lindskog, F., & McNeil, A. J. (2003). Common Poisson shock models: applications to insurance and credit risk modelling. ASTIN Bulletin, 33(2), 209–238.
Marshall, A. W., & Olkin, I. (1967a). A generalized bivariate exponential distribution. Journal of Applied Probability, 4, 291–302.
Marshall, A. W., & Olkin, I. (1967b). A multivariate exponential distribution. Journal of American Statistics Association, 62, 30–44.
Marshall, A. W., & Olkin, I. (1988). Families of multivariate distributions. Journal of the American Statistical Association, 83, 834–841.
Mehta, M. L. (1991). Random Matrices (3rd ed.). New York: Academic Press.
Schönbucher, P. (2003). Credit Derivatives Pricing Models: Model, Pricing and Implementation. Chichester: Wiley.
Schönbucher, P. J. (2004). Taken to the Limit: Simple and Not-so-simple Loan Loss Distributions, Wilmott magazine.
Vasicek, O. (1987). Probability of loss on loan portfolio. Working paper, KMV Corporation.
Vasicek, O. (1991). Limiting loan loss distributions. Working paper, KMV Corporation.
Vasicek, O. (1997). The loan loss distribution. Working paper, KMV Corporation.
Vasicek, O. (2002). Loan portfolio value. Risk, 15(2), 160–162.
Wang, L., Bo, L., & Jiao, L. (2006). A modified K-means clustering with a density-sensitive distance metric. Rough sets and knowledge technology. Lecture Notes in Computer Science, 4062, 544–551.
© 2021 The Author(s), under exclusive license to Springer Nature Switzerland AG
Bernardi, E., Romagnoli, S. (2021). A New Copula-Based Approach for Counting: The Distorted and the Limiting Case. In: Counting Statistics for Dependent Random Events. Springer, Cham. https://doi.org/10.1007/978-3-030-64250-1_5
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-64249-5
Online ISBN: 978-3-030-64250-1
eBook Packages: Mathematics and Statistics (R0)