## 1 Introduction

Multivariate Bernoulli variables and their dependence structure are widely studied in the statistical literature, see e.g., [1]. A part of the literature focuses on exchangeable Bernoulli variables for their importance in applications, such as credit risk modeling [6], and for the De Finetti representation theorem, which describes the dependence structure in a very simple way. Nevertheless, the De Finetti theorem holds only for infinite sequences of exchangeable variables (see, e.g., [3]).

A novel representation of the class of multivariate Bernoulli variables with some given moments is provided in [4]. If we consider the Fréchet class of d-dimensional Bernoulli variables with given one-dimensional means $$(p_1, \ldots , p_d)$$, i.e., $${\mathcal {F}}(p_1, \ldots , p_d)$$, we can use this representation to investigate the dependence structure of the class. In fact, in [4], the probability mass functions belonging to $${\mathcal {F}}(p_1, \ldots , p_d)$$ are represented as points in a convex hull whose generators are mass functions belonging to the same class. The generators of the class can be found explicitly, although no general analytical expression for them is available. This representation is general and allows us both to easily generate a sample of mass functions in the class and to find bounds for the other moments of the distribution. It is worth noting that this method puts no restriction on the number of variables; the range of applications is limited only by the computational effort required, because the number of generators increases very quickly with the dimension of the multivariate Bernoulli variable. This limitation disappears if we consider the class of exchangeable Bernoulli variables: Fontana et al. [5] analytically find the convex hull generators for the class of exchangeable Bernoulli variables with given mean. This analytical representation holds for any finite sequence of exchangeable Bernoulli variables, thus in a more general framework than the De Finetti representation theorem, and the analytical solution allows us to work in any dimension.

The aim of this paper is to investigate some Fréchet classes and to compare the entire Fréchet class with the subclass of exchangeable random variables. For this reason, we assume that the vector has identically distributed Bernoulli margins, i.e., they all have the same mean. This analysis is computational, because we have an analytical solution only under the assumption of exchangeability. As a consequence, it cannot be carried out in very high dimension because of the computational effort required. Nevertheless, we work in a truly multidimensional setting, since we reach dimension six. Our comparison between the whole Fréchet class and the set of exchangeable variables makes it possible to draw some conclusions about the limitations that may derive from the assumption of exchangeability in applications.

The paper is organized as follows. After a preliminary section, we restate the theoretical results in [4], but using the approach adopted in [5] to focus on the exchangeable case. We also recall the analytical construction in the exchangeable case. In Sect. 4, we investigate two special cases. For each case, we find the generators of the class and we provide bounds for the first four cross-moments, since they drive dependence. In particular, we exhibit the correlation bounds, to underline the admissible strength of linear dependence in the class. We also consider the value at risk (VaR) of sums of Bernoulli variables. This measure is important in credit risk, where Bernoulli variables are indicators of default; in fact, the VaR indicates the possible loss of a portfolio with dependent obligors. Interestingly, we find that the bounds for the VaR remain the same if we restrict to the subclass of exchangeable variables; therefore, they can be computed on this subclass, where they are available analytically.

## 2 Preliminaries

Let $${\mathbb {F}}_d$$ be the set of d-dimensional distributions which have Bernoulli univariate marginal distributions. Let us consider the Fréchet class $${\mathcal {F}}(p_1, \dots , p_d)\subseteq {\mathbb {F}}_d$$ of distribution functions in $${\mathbb {F}}_d$$ which have Bernoulli marginal distributions $$B(p_i), 0<p_i<1, i\in \{1,\ldots ,d\}$$. If $$\varvec{X}=(X_1, \dots , X_d)$$ is a random vector with joint distribution in $${\mathcal {F}}(p_1, \dots , p_d)$$, we denote

• Its cumulative distribution function by $$F_{\mathcal {X}_d}$$ and its probability mass function (pmf) by $$f_{\mathcal {X}_d}$$, where $$\mathcal {X}_d=\{0, 1\}^d$$;

• The column vectors which contain the values of $$F_{\mathcal {X}_d}$$ and $$f_{\mathcal {X}_d}$$ over $$\mathcal {X}_d$$ by $$\varvec{F}_{\mathcal {X}_d}=(F_{\mathcal {X}_d}(\varvec{x}):\varvec{x}\in \mathcal {X}_d)$$ and $$\varvec{f}_{\mathcal {X}_d}=(f_{\mathcal {X}_d}(\varvec{x}):\varvec{x}\in \mathcal {X}_d)$$, respectively; we make the non-restrictive hypothesis that the set $$\mathcal {X}_d$$ of $$2^d$$ binary vectors is ordered according to the reverse-lexicographical criterion. For example, $$\mathcal {X}_2=\{00, 10, 01, 11\}$$ and $$\mathcal {X}_3=\{000, 100, 010, 110, 001, 101, 011, 111\}$$;

• The marginal cumulative distribution function and the marginal mass function of $$X_i$$ by $$F_{\mathcal {X}_d,i}$$ and $$f_{\mathcal {X}_d,i}$$, respectively, $$i\in \{1,\ldots ,d\}$$;

• The values $$f_{\mathcal {X}_d,i}(0) \equiv F_{\mathcal {X}_d,i}(0)$$ and $$f_{\mathcal {X}_d,i}(1)$$ by $$q_i$$ and $$p_i$$, respectively, $$i\in \{1,\ldots ,d\}$$.

We observe that $$q_i=1-p_i$$ and that the expected value of $$X_i$$ is $$p_i$$, $${{\,\mathrm{E}\,}}(X_i)=p_i$$, $$i\in \{1,\ldots ,d\}$$.

Let now $${\mathcal {E}}_d(p)$$ be the class of d-dimensional exchangeable Bernoulli distributions with mean p. If $$\varvec{X}=(X_1, \dots , X_d)$$ is a random vector with joint distribution in $${\mathcal {E}}_d(p)$$, then $$f_{\mathcal {X}_d}(\varvec{x})=f_{\mathcal {X}_d}(\sigma (\varvec{x}))$$ for any $$\sigma \in {\mathcal {P}}_d$$, where $${\mathcal {P}}_d$$ is the set of permutations on $$\{1,\ldots , d\}$$. Thus, any mass function f in $${\mathcal {E}}_d(p)$$ is given by $$f_{i}:=f_{\mathcal {X}_d}(\varvec{x})$$ if $$\varvec{x}=(x_{1},\ldots ,x_{d})\in {\mathcal {X}}_d$$ and $$\#\{x_{j}:x_{j}=1\}=i$$. Therefore, we identify a mass function $$f_{\mathcal {X}_d}$$ in $${\mathcal {E}}_d(p)$$ with the corresponding vector $$\varvec{f}:=(f_0, \ldots , f_d)$$. The simplest example of an exchangeable distribution is the case of independence, which is linked to the binomial distribution.

Furthermore, since the moments depend only on their order, we use $$\mu _{{\alpha }}$$ to denote a moment of order $$\alpha =\text {ord}(\varvec{\alpha })=\sum _{i=1}^d\alpha _i$$, where $$\varvec{\alpha }\in {\mathcal {X}}_d$$. For example, we have $$\mu _1=p$$. We also observe that the correlation $$\rho$$ between two Bernoulli variables $$X_i \sim B(p)$$ and $$X_j \sim B(p)$$ is related to the second-order moment $$\mu _{2}= {{\,\mathrm{E}\,}}[X_i X_j]$$ as follows:

\begin{aligned} \mu _2=\rho pq+p^2. \end{aligned}
(1)
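Eq. (1) can be used in both directions: given $$\rho$$ it returns the second-order moment, and given $$\mu_2$$ the implied correlation. A minimal sketch in Python (the function names are ours):

```python
# Eq. (1): mu_2 = rho*p*q + p^2 for two Bernoulli(p) variables, with q = 1 - p.
def mu2_from_rho(p, rho):
    """Second-order moment E[X_i X_j] implied by the correlation rho."""
    q = 1.0 - p
    return rho * p * q + p * p

def rho_from_mu2(p, mu2):
    """Correlation implied by the second-order moment mu_2."""
    q = 1.0 - p
    return (mu2 - p * p) / (p * q)

p, rho = 0.2, 0.5
mu2 = mu2_from_rho(p, rho)
assert abs(rho_from_mu2(p, mu2) - rho) < 1e-12  # round trip recovers rho
```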

## 3 Theoretical Background

Building on the results in [4, 5], in this section we represent the Fréchet class of multivariate d-dimensional Bernoulli distributions with given margins, $$d \ge 2$$, as the points of a convex polytope. We recall that a polytope (or more specifically a d-polytope) is the convex hull of a finite set of points in $$\mathbb {R}^d$$, called the extremal points of the polytope. We say that a set of k points is affinely independent if no point can be expressed as an affine combination of the others. For example, three points are affinely independent if they are not on the same line, four points are affinely independent if they are not on the same plane, and so on. The convex hull of $$k+1$$ affinely independent points is called a simplex or k-simplex. For example, the line segment joining two points is a 1-simplex, the triangle defined by three points is a 2-simplex, and the tetrahedron defined by four points is a 3-simplex. A complete reference on computational geometry is [2].

The representation of $${\mathcal {F}}(p_1, \ldots , p_d)$$ as a convex polytope holds for any $$\varvec{p}$$, with the drawback that the search for the generators is computationally challenging in high dimension. This limitation is not present in the class of exchangeable d-dimensional Bernoulli distributions with given margins, for which we have an analytical expression of the convex polytope generators.

Let $$f_{\mathcal {X}_d}$$ be a multivariate d-dimensional Bernoulli distribution with margins p, i.e., $$f_{\mathcal {X}_d} \in \mathcal {F}(p_1,\dots ,p_d)$$.

Using the conditions on the mean values, we can write any density vector $$\varvec{f}_{\mathcal {X}_d}$$ in $$\mathcal {F}(p_1,\ldots ,p_d)$$ as the solution of a linear system. Formally, since

\begin{aligned} {{\,\mathrm{E}\,}}(X_i) = \sum _{\varvec{x}\in \mathcal {X}_d} x_i f_{\mathcal {X}_d}(\varvec{x}), \end{aligned}

we have

\begin{aligned} {\left\{ \begin{array}{ll} {\sum }_{\varvec{x}\in \mathcal {X}_d} x_i f_{\mathcal {X}_d}(\varvec{x})=p_i \\ 1- {\sum }_{\varvec{x}\in \mathcal {X}_d} x_i f_{\mathcal {X}_d}(\varvec{x})=q_i, \end{array}\right. } \end{aligned}

Let $$\gamma _i=p_i/q_i$$; then $$\gamma _i q_i - p_i=0$$, and we can write

\begin{aligned}&\gamma _i (1- \sum _{\varvec{x}\in \mathcal {X}_d} x_i f_{\mathcal {X}_d}(\varvec{x}))- \sum _{\varvec{x}\in \mathcal {X}_d} x_if_{\mathcal {X}_d}(\varvec{x}) = 0,\nonumber \\&\sum _{\varvec{x}\in \mathcal {X}_d} (\gamma _i(1- x_i) - x_i)f_{\mathcal {X}_d}(\varvec{x}) = 0. \end{aligned}
(2)

The d equations in Eq. (2) form a linear system. Let H be its coefficient matrix. The rows of H are $$(\gamma _i (\varvec{1}-\varvec{x}_i)^{\top } - \varvec{x}_i^{\top })$$, $$i\in \{1,\ldots ,d\}$$, where $$\varvec{1}$$ is the vector with all elements equal to 1 and $$\varvec{x}_i$$ is the projection vector which collects the i-th component of each $$\varvec{x}\in \mathcal {X}_d$$, $$i\in \{1,\ldots ,d\}$$; e.g., in the bivariate case $$\varvec{x}_1=(0, 1,0,1)$$ and $$\varvec{x}_2=(0, 0,1,1)$$.

The densities $$\varvec{f}_{\mathcal {X}_d}$$ in $$\mathcal {F}(p_1,\ldots ,p_d)$$ are the non-negative solutions of the system $$H\varvec{z}=0$$ whose components sum up to one.

All the non-negative, normalized solutions of $$H\varvec{z}=0$$ are elements of the convex polytope $${\mathcal {P}}=\{\varvec{z}\in \mathbb {R}^{ 2^d}: \sum _{i=1}^{2^d}z_i=1, H \varvec{z}=0,\, I\varvec{z}\ge 0\}$$, where I is the $$2^d\times 2^d$$ identity matrix. Each point in the polytope is a convex combination of a set of generators, which are referred to as extremal densities of the linear system. We denote them by $${\varvec{R}}_{\mathcal {X}_d}^{(i)}$$, $$i=1,\dots , n_{\mathcal {P}}$$, where $$n_{\mathcal {P}}$$ is the number of generators, which depends on d and $$\varvec{p}$$.
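As a small illustration, the system $$H\varvec{z}=0$$ can be assembled directly from Eq. (2). The sketch below (helper names are ours, not from [4]) builds H for the bivariate case and checks that, for $$p_1=p_2=1/2$$, the upper and lower Fréchet bounds are normalized solutions:

```python
# Build the coefficient matrix H of Eq. (2) for d = 2 and verify that the
# Fréchet bounds solve H z = 0. X_2 is ordered reverse-lexicographically:
# {00, 10, 01, 11}.
def build_H(p):
    d = len(p)
    # k-th point of X_d in reverse-lex order: bit i of k is the i-th component
    pts = [[(k >> i) & 1 for i in range(d)] for k in range(2 ** d)]
    H = []
    for i in range(d):
        g = p[i] / (1.0 - p[i])                 # gamma_i = p_i / q_i
        H.append([g * (1 - x[i]) - x[i] for x in pts])
    return H

H = build_H((0.5, 0.5))
f_upper = [0.5, 0.0, 0.0, 0.5]   # upper Fréchet bound for p = 1/2
f_lower = [0.0, 0.5, 0.5, 0.0]   # lower Fréchet bound for p = 1/2
for f in (f_upper, f_lower):
    assert abs(sum(f) - 1.0) < 1e-12            # normalized
    assert all(abs(sum(h * z for h, z in zip(row, f))) < 1e-12 for row in H)
```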

Using the above arguments, [4] proved the following theorem.

### Theorem 1

Let $$f_{\mathcal {X}_d}$$ be a multivariate d-dimensional Bernoulli distribution, $$f_{\mathcal {X}_d}\in {\mathbb {F}}_d$$. Then, $$f_{\mathcal {X}_d}$$ is a mass function with margins $$(p_1,\dots ,p_d)$$, i.e., $$f_{\mathcal {X}_d} \in \mathcal {F}(p_1,\dots ,p_d)$$, if and only if there exist $$\lambda _i \ge 0$$, $$i\in \{1,\dots , n_{\mathcal {P}}\}$$, $$\sum _{i=1}^{n_{\mathcal {P}}}\lambda _i=1$$, such that

\begin{aligned} \varvec{f}_{\mathcal {X}_d}=\sum _{i=1}^{n_{\mathcal {P}}}\lambda _i\varvec{R}_{\mathcal {X}_d}^{(i)}, \end{aligned}
(3)

where $$\varvec{R}_{\mathcal {X}_d}^{(i)}=(R_{\mathcal {X}_d}^{(i)}(\varvec{x}), \varvec{x}\in \mathcal {X}_d)\in \mathcal {F}(p_1,\ldots ,p_d)$$ are the extremal points of the polytope $${\mathcal {P}}=\{\varvec{z}\in \mathbb {R}^{ 2^d}: \sum _{i=1}^{2^d}z_i=1, H \varvec{z}=0,\, I\varvec{z}\ge 0\}$$ and $$n_{\mathcal {P}}$$ is the number of extremal points of $$\mathcal {P}$$.

To find the extremal densities, i.e., the generators of $$\mathcal {F}(p_1,\ldots ,p_d)$$, we have to find the extremal solutions of a homogeneous system. As the dimension of the system increases, the number of extremal solutions becomes huge, leading to computational difficulties. These difficulties disappear when we consider the class $$\mathcal {E}_d(p)$$ of exchangeable Bernoulli variables, for which we have the analytical expression of the extremal densities. Since $$f_{\mathcal {X}_d}(\varvec{x})=f_{\mathcal {X}_d}(\sigma (\varvec{x}))$$ for any $$\sigma \in {\mathcal {P}}_d$$, any mass function $$f_{\mathcal {X}_d}$$ in $${\mathcal {E}} _d(p)$$ is given by $$f_{i}:=f_{\mathcal {X}_d}(\varvec{x})$$ if $$\varvec{x} =(x_{1},\ldots ,x_{d})\in {\mathcal {X}}_{d}$$ and $$\#\{x_{j}:x_{j}=1\}=i$$. Using this fact, we can define a one-to-one correspondence between $${\mathcal {E}}_d(p)$$ and the class of the distributions of their sums.

Let $${\mathcal {S}}_d(p)$$ be the class of distributions $$p_S$$ on $$\{0,\ldots , d\}$$ of the sums $$S_d=\sum _{i=1}^dX_i$$ with $$\varvec{X}\in {\mathcal {E}}_d(p)$$. Let $$p_S(j)=p_j=P(S_d=j)$$ and $$\varvec{p}_S=(p_0,\ldots , p_d)$$.

The map

\begin{aligned} \begin{aligned} E: {\mathcal {E}}_d(p)&\rightarrow {\mathcal {S}}_d(p) \\ f_{j}&\mapsto p_j={\left( {\begin{array}{c}d\\ j\end{array}}\right) }f_j, \end{aligned} \end{aligned}
(4)

is a one-to-one correspondence between $${\mathcal {E}}_d(p)$$ and $${\mathcal {S}} _d(p)$$. Notice that the pmf $$\varvec{f}_I$$ of independent Bernoulli variables is exchangeable, i.e., $$\varvec{f}_I\in {\mathcal {E}}_d(p)$$, and the map E sends $$\varvec{f}_I$$ to the binomial distribution.
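The map E of Eq. (4) is easy to implement, and the observation above gives a natural sanity check: it sends the independent pmf to the binomial. A minimal sketch (function names are ours):

```python
from math import comb

def E_map(f):
    """Map E of Eq. (4): exchangeable pmf (f_0, ..., f_d) -> sum pmf (p_0, ..., p_d)."""
    d = len(f) - 1
    return [comb(d, j) * f[j] for j in range(d + 1)]

# Independence: f_j = p^j (1-p)^(d-j), so E sends f to the Binomial(d, p) pmf.
d, p = 4, 0.3
f_indep = [p ** j * (1 - p) ** (d - j) for j in range(d + 1)]
p_S = E_map(f_indep)
binom = [comb(d, j) * p ** j * (1 - p) ** (d - j) for j in range(d + 1)]
assert all(abs(a - b) < 1e-12 for a, b in zip(p_S, binom))
assert abs(sum(p_S) - 1.0) < 1e-12   # a pmf must sum to one
```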

Therefore, we have

\begin{aligned} {\mathcal {E}}_d(p)\leftrightarrow {\mathcal {S}}_d(p). \end{aligned}
(5)

Fontana et al. [5] proved that the class of distributions $${\mathcal {S}}_d(p)$$ coincides with the entire class of discrete distributions on $$\{0,\ldots ,d\}$$ with mean dp, say $${\mathcal {D}}_d(dp)$$. This fact is useful to simplify the search for the generators of $${\mathcal {E}}_d(p)$$.

Therefore, the three classes $${\mathcal {E}}_d(p)$$, $${\mathcal {S}}_d(p)$$ and $${\mathcal {D}}_d(dp)$$ are essentially the same class, i.e.,

\begin{aligned} {\mathcal {E}}_d(p)\leftrightarrow {\mathcal {S}}_d(p)\equiv {\mathcal {D}}_d(dp) \end{aligned}
(6)

Thanks to the above correspondence, to find the generators of $${\mathcal {S}}_d(p)$$ we can look for the generators of $${\mathcal {D}}_d(dp)$$, which simplifies the search. The generators we find are in one-to-one correspondence with the generators of $${\mathcal {E}}_d(p)$$.

Using the equivalence $${\mathcal {S}}_d(p)\equiv {\mathcal {D}}_d(dp)$$ stated in [5], a pmf in $${\mathcal {S}}_d(p)$$ is a pmf on $$\{0,\ldots ,d\}$$ with mean pd. Thanks to the map E in Eq. (4), this is also equivalent to finding a set of conditions that the pmf of a multivariate Bernoulli variable has to satisfy to be in $${\mathcal {E}}_d(p)$$. Following the approach developed in the proof of Theorem 1, the conditions are homogeneous equations whose unknowns are the values of a pmf in $${\mathcal {D}}_d(dp)$$.

### Proposition 1

Let Y be a discrete random variable defined over $$\{0,\ldots ,d\}$$ and let $$p_Y$$ be its pmf. Then,

\begin{aligned} Y\in {\mathcal {S}}_d(p) \Longleftrightarrow \sum _{j=0}^d(j-pd)p_Y(j)=0. \end{aligned}

Using Proposition 1, we can find all generators of $${\mathcal {S}}_d(p)$$. Thanks to the map E, that is equivalent to finding all the generators of $${\mathcal {E}}_d(p)$$.

We have to find the normalized extremal points of the convex cone

\begin{aligned} {\mathcal {C}}_p=\left\{ \varvec{z}\in {\mathbb {R}}^{d+1}: \sum _{j=0}^da_jz_j=0,\, I \varvec{z}\ge 0\right\} , \end{aligned}
(7)

where $$a_j=j-pd$$ and I is the $$(d+1)\times (d+1)$$ identity matrix. The following proposition, proved in [5], provides the analytical expression of the extremal points in $${\mathcal {S}}_d(p)$$.

### Proposition 2

The extremal points of the convex cone $${\mathcal {C}}_p$$ in (7) are

\begin{aligned} p_{j_1,j_2}(y)=\left\{ \begin{array}{ll} \frac{j_2-pd}{j_2-j_1} &{}\quad y=j_1 \\ \frac{pd-j_1}{j_2-j_1} &{}\quad y=j_2 \\ 0 &{}\quad \mathrm{otherwise} \end{array} \right. , \end{aligned}
(8)

with $$j_1=0,1,\ldots , j_1^{M}$$ and $$j_2=j_2^m, j_2^m+1, \ldots , d$$, where $$j_1^M$$ is the largest integer less than pd and $$j_2^m$$ is the smallest integer greater than pd.

If pd is an integer, the extremal points also include

\begin{aligned} p_{pd}(y)=\left\{ \begin{array}{ll} 1 &{} \quad y=pd \\ 0 &{} \quad \mathrm{otherwise} \end{array} \right. . \end{aligned}
(9)

A corollary of the above proposition gives the number of extremal (ray) densities.

### Corollary 1

• If pd is not an integer, there are $$n_p=(j_1^M+1)(d-j_1^M)$$ extremal densities.

• If pd is an integer, there are $$n_p=d^2p(1-p)+1$$ extremal densities.
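Proposition 2 and Corollary 1 are straightforward to code and cross-check. The sketch below (our own helper, using a float tolerance to detect integer pd) generates the extremal pmfs and verifies the counts for one integer and one non-integer case:

```python
from math import floor, ceil

def extremal_points(d, p):
    """Extremal pmfs of S_d(p) as in Proposition 2, Eqs. (8)-(9)."""
    m = p * d
    pts = []
    if abs(m - round(m)) < 1e-9:       # pd integer: include the degenerate pmf (9)
        k = int(round(m))
        j1_M, j2_m = k - 1, k + 1
        e = [0.0] * (d + 1)
        e[k] = 1.0
        pts.append(e)
    else:
        j1_M, j2_m = floor(m), ceil(m)
    for j1 in range(0, j1_M + 1):      # two-point pmfs of Eq. (8)
        for j2 in range(j2_m, d + 1):
            e = [0.0] * (d + 1)
            e[j1] = (j2 - m) / (j2 - j1)
            e[j2] = (m - j1) / (j2 - j1)
            pts.append(e)
    return pts

# d = 5, p = 1/5: pd = 1 is an integer, so n_p = d^2 p(1-p) + 1 = 5.
pts = extremal_points(5, 0.2)
assert len(pts) == 5
for e in pts:                          # every extremal pmf is normalized with mean pd = 1
    assert abs(sum(e) - 1.0) < 1e-9
    assert abs(sum(j * e[j] for j in range(6)) - 1.0) < 1e-9

# d = 3, p = 1/2: pd = 1.5 is not an integer, so n_p = (j1_M + 1)(d - j1_M) = 4.
assert len(extremal_points(3, 0.5)) == 4
```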

### 3.1 Moments, Quantiles and their Bounds

This section focuses on the problem of finding bounds for the moments of multivariate Bernoulli variables in $$\mathcal {F}(p_1,\dots ,p_d)$$ and in $$\mathcal {E}_d(p)$$. Given $$f_{\mathcal {X}_d} \in \mathcal {F}(p_1,\dots ,p_d)$$, from Theorem 1 we observe that each moment $${{\,\mathrm{E}\,}}(\varvec{X}^{\varvec{\alpha }})$$, $$\varvec{\alpha }\in \mathcal {X}_d$$, can be computed as

\begin{aligned} \varvec{\mu }= M^{\otimes d} \varvec{f}_{\mathcal {X}_d} = M^{\otimes d} R_{\mathcal {X}_d} \varvec{\lambda }. \end{aligned}

We denote by $$A_{\mathcal {X}_d}$$ the matrix whose columns contain all the moments of the extremal mass functions, $$A_{\mathcal {X}_d}=M^{\otimes d} R_{\mathcal {X}_d}$$, where $$R_{\mathcal {X}_d}$$ is the ray matrix, whose columns are the extremal densities. Let $$A_{k\mathcal {X}_d}=\left( M^{\otimes d}\right) _k R_{\mathcal {X}_d}$$, where $$\left( M^{\otimes d}\right) _k$$ is the sub-matrix of $$M^{\otimes d}$$ obtained by selecting the rows corresponding to the k-order moments. We observe that the columns of the matrix $$A_{k\mathcal {X}_d}$$ contain the k-order moments of the extremal mass functions; hence, the bounds for the k-th order moments are attained on the extremal densities.
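To make the matrix notation concrete, the following sketch computes $$\varvec{\mu }= M^{\otimes 2} \varvec{f}$$ for the upper Fréchet bound in $$\mathcal {F}(1/2,1/2)$$. The text does not spell out M, so we take M to be the $$2\times 2$$ matrix with entries $$M[\alpha ][x]=x^{\alpha }$$ (rows indexed by $$\alpha$$, columns by x), which is consistent with $${{\,\mathrm{E}\,}}(X^{\alpha })=\sum _x x^{\alpha }f(x)$$; this choice is our assumption.

```python
def kron(A, B):
    """Kronecker product of two matrices given as nested lists."""
    mB, nB = len(B), len(B[0])
    return [[A[i][j] * B[r][s] for j in range(len(A[0])) for s in range(nB)]
            for i in range(len(A)) for r in range(mB)]

M = [[1, 1], [0, 1]]              # M[alpha][x] = x**alpha, alpha, x in {0, 1}
M2 = kron(M, M)                   # moment matrix for d = 2, reverse-lex ordering
f_upper = [0.5, 0.0, 0.0, 0.5]    # pmf over X_2 = {00, 10, 01, 11}
mu = [sum(m * z for m, z in zip(row, f_upper)) for row in M2]
# mu = (mu_00, mu_10, mu_01, mu_11) = (1, E[X_1], E[X_2], E[X_1 X_2])
assert mu == [1.0, 0.5, 0.5, 0.5]
```

The last row of $$M^{\otimes 2}$$ selects the second-order moment, here $$\mu _2=1/2$$, the largest value attainable in this class.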

### Proposition 3

For each $$\varvec{\alpha }\in \mathcal {X}_d$$, $$\Vert \varvec{\alpha }\Vert _0=k$$, the k-order moment $$\varvec{\mu }_k^{(\varvec{\alpha })}$$ must satisfy the following bounds:

\begin{aligned} \min A_{k\mathcal {X}_d}^{(\varvec{\alpha })} \le \varvec{\mu }_k^{(\varvec{\alpha })} \le \max A_{k\mathcal {X}_d}^{(\varvec{\alpha })} \end{aligned}

where $$A_{k\mathcal {X}_d}^{(\varvec{\alpha })}$$ is the row of the matrix $$A_{k\mathcal {X}_d}$$ such that $$\varvec{\mu }_k^{(\varvec{\alpha })}=A_{k\mathcal {X}_d}^{(\varvec{\alpha })} \varvec{\lambda }$$.

Important special cases are the second-order moments which allow us to find bounds for correlations:

### Proposition 4

The correlations $$\rho _{ij}$$ must satisfy the following bounds:

\begin{aligned} \frac{\min A_{2\mathcal {X}_d}^{(\varvec{\alpha })}-p_ip_j}{\sqrt{p_iq_ip_jq_j}} \le \rho _{ij} \le \frac{\max A_{2\mathcal {X}_d}^{(\varvec{\alpha })}-p_ip_j}{\sqrt{p_iq_ip_jq_j}}, \end{aligned}

where $$A_{2\mathcal {X}_d}^{(\varvec{\alpha })}$$ is the row of the matrix $$A_{2\mathcal {X}_d}$$ such that $$\varvec{\mu }_2^{(\varvec{\alpha })}=A_{2\mathcal {X}_d}^{(\varvec{\alpha })} \varvec{\lambda }$$ and $$\{i,j\}=\{k: \alpha _k=1\}$$.

As we observed, for a given p, the class of exchangeable multivariate pmfs $$\mathcal {E}_d(p)$$ is a subclass of the Fréchet class $$\mathcal {F}(p,\ldots , p)$$ where all margins are equal to p. For the sake of simplicity, we denote $$\mathcal {F}(p,\ldots , p)$$ by $$\mathcal {F}_d(p)$$.

If we consider the class $$\mathcal {E}_d(p)$$ of exchangeable distributions, the moments depend only on their order. Therefore, as said in the preliminaries, we use $$\mu _{{\alpha }}$$ to denote a moment of order $$\alpha$$. Since $$\mathcal {E}_d(p) \subset \mathcal {F}_d(p)$$, the above bounds still hold; within $$\mathcal {E}_d(p)$$, the minimum and maximum moments are attained on the ray densities of $$\mathcal {E}_d(p)$$. We expect the bounds for the moments of the variables in $$\mathcal {E}_d(p)$$ to be more binding than the bounds for the moments in $$\mathcal {F}_d(p)$$. We computationally investigate this aspect on some cases in Sect. 4. There is no particular relation between the extremal densities of the Fréchet class $${\mathcal {F}}_d(p)$$ and the extremal densities of the exchangeable class $${\mathcal {E}}_d(p)$$, apart from the fact that the extremal density $$f^U$$ (the upper Fréchet bound) defined as

\begin{aligned} f^U(x_1,\ldots ,x_d)= {\left\{ \begin{array}{ll} 1-p &{}\text {if}\;\, x_1=\ldots =x_d=0 \\ p &{}\text {if}\;\, x_1=\ldots =x_d=1 \\ 0 &{}\text {otherwise} \end{array}\right. } \end{aligned}

belongs to both sets of extremal densities. An example of extremal densities is provided in Sect. 4.
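As an illustration of moment bounds on the exchangeable class, the sketch below computes the second-order moment on the four extremal sum-pmfs of $${\mathcal {S}}_3(1/2)$$ (hard-coded from Proposition 2 with $$pd=1.5$$) and the implied correlation bounds via Eq. (1). It uses the identity $$\mu _k={{\,\mathrm{E}\,}}[\binom{S}{k}]/\binom{d}{k}$$, which holds for exchangeable Bernoulli vectors but is not stated explicitly in the text; the resulting lower bound $$-1/(d-1)=-1/3$$ is our own computation, not a quoted table value.

```python
from math import comb

# Extremal sum-pmfs of S_3(1/2) from Proposition 2 (j1 in {0,1}, j2 in {2,3}):
ext = [
    [0.25, 0.0,  0.75, 0.0],   # p_{0,2}
    [0.5,  0.0,  0.0,  0.5],   # p_{0,3} = image of the upper Fréchet bound
    [0.0,  0.5,  0.5,  0.0],   # p_{1,2}
    [0.0,  0.75, 0.0,  0.25],  # p_{1,3}
]

def moment(pmf, k):
    """mu_k = E[C(S, k)] / C(d, k) for an exchangeable vector with sum pmf."""
    d = len(pmf) - 1
    return sum(comb(j, k) * pmf[j] for j in range(d + 1)) / comb(d, k)

d, p = 3, 0.5
mu2 = [moment(e, 2) for e in ext]
rho_min = (min(mu2) - p * p) / (p * (1 - p))   # Eq. (1) inverted
rho_max = (max(mu2) - p * p) / (p * (1 - p))
assert abs(rho_min - (-1.0 / 3.0)) < 1e-9      # -1/(d-1), attained on p_{1,2}
assert abs(rho_max - 1.0) < 1e-9               # attained on p_{0,3}
```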

The class $$\mathcal {E}_d(p)$$ is of interest in several fields, including finance, where exchangeable Bernoulli variables are used to model indicators of default of the obligors in a credit risk portfolio. In this framework, the distribution of the number of defaults, i.e., the sum of the components of an exchangeable multivariate Bernoulli variable, is studied. Among the quantities of interest are the quantiles of the distribution, $$q_{\alpha }$$. For some levels $$\alpha$$, the quantiles are measures of risk, often referred to as value at risk ($$\text {VaR}_{\alpha }$$).

### Definition 1

Let Y be a random variable with finite mean. Then, the value at risk at level $$\alpha$$, $$\text {VaR}_{\alpha }$$, is defined by

\begin{aligned} q_{\alpha }(Y)=\inf \{y\in {\mathbb {R}}:P(Y\le y)\ge \alpha \} \end{aligned}
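Definition 1 translates directly into code for a discrete distribution on $$\{0,\ldots ,d\}$$. A minimal sketch (the function name is ours), checked on a Binomial(4, 0.3) sum:

```python
from math import comb

def var_alpha(pmf, alpha):
    """q_alpha(Y) = inf{y : P(Y <= y) >= alpha} for a pmf on {0, ..., d}."""
    cdf = 0.0
    for y, mass in enumerate(pmf):
        cdf += mass
        if cdf >= alpha:
            return y
    return len(pmf) - 1   # guard against floating-point rounding in the last step

d, p = 4, 0.3
pmf = [comb(d, j) * p ** j * (1 - p) ** (d - j) for j in range(d + 1)]
# P(S <= 2) ~ 0.9163 < 0.95 while P(S <= 3) ~ 0.9919 >= 0.95, so q_0.95 = 3
assert var_alpha(pmf, 0.95) == 3
```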

In [5], the authors prove that the bounds of the quantiles of a distribution $$p_S\in {\mathcal {S}}_d(p)$$ are reached on the ray densities and they analytically find them. In particular, they prove the following.

### Proposition 5

Let us consider the class $${\mathcal {S}}_d(p)$$. Let $$j_1^p=\frac{(p-(1-\alpha ))d}{\alpha }$$, $$j_1^M$$ be the largest integer less than pd and $$j_2^m$$ be the smallest integer greater than pd.

1. If $$j_1^p<0$$, $$\min {q}_{\alpha }(R_{(j_1,j_2)})=0$$ and $$\max {q}_{\alpha }(R_{(j_1,j_2)})=j_2^*$$, where $$j_2^*$$ is the largest integer smaller than $$\frac{pd}{ 1-\alpha }$$.

2. If $$0\le j_1^p\le j_1^M$$, $$\min {q}_{ \alpha }(R_{(j_1,j_2)})=j_1^*$$, where $$j_1^*$$ is the smallest integer greater than or equal to $$j_1^p$$, and $$\max {q}_{\alpha }(R_{(j_1,j_2)})=d$$.

3. If $$j_1^p>j_1^M$$, $$\min {q}_{ \alpha }(R_{(j_1,j_2)})=j_2^m=j_1^M+1$$ and $$\max {q} _{\alpha }(R_{(j_1,j_2)})=d$$. In this case, if pd is an integer, $$j_1^M+1=pd$$.

The proof of the above proposition relies on the analytical expression of the extremal densities of the convex polytope $${\mathcal {S}}_d(p)$$. For this reason, the assumption of exchangeability does not affect these bounds. Precisely, let $$\varvec{X}\in \mathcal {F}_d(p)$$; then $$S_X=\sum _{i=1}^d X_i\in {\mathcal {D}}_d(dp)\equiv \mathcal {S}_d(p)$$. Therefore, the quantile of $$S_X$$ is the quantile of a distribution in the class $${\mathcal {S}}_d(p)$$ and satisfies the bounds in Proposition 5. This fact is of interest in credit risk, since it states that the assumption of exchangeability does not affect the bounds of the value at risk.
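The third case of Proposition 5 can be verified numerically. For $$d=5$$, $$p=1/5$$ and $$\alpha =0.95$$, $$j_1^p=(p-(1-\alpha ))d/\alpha \approx 0.79 > j_1^M=0$$, so the proposition predicts $$\min q_{\alpha }=j_1^M+1=pd=1$$ and $$\max q_{\alpha }=d=5$$. The sketch below hard-codes the five extremal sum-pmfs of $${\mathcal {S}}_5(1/5)$$ from Proposition 2 and evaluates the quantile of Definition 1 on each (helper name ours):

```python
# Extremal sum-pmfs of S_5(1/5): pd = 1 is an integer, so the degenerate pmf
# at 1 plus p_{0,j2} for j2 = 2, ..., 5 (Corollary 1 gives n_p = 5).
ext = [
    [0.0,     1.0, 0.0, 0.0,     0.0,  0.0],  # degenerate at pd = 1
    [0.5,     0.0, 0.5, 0.0,     0.0,  0.0],  # p_{0,2}
    [2.0/3.0, 0.0, 0.0, 1.0/3.0, 0.0,  0.0],  # p_{0,3}
    [0.75,    0.0, 0.0, 0.0,     0.25, 0.0],  # p_{0,4}
    [0.8,     0.0, 0.0, 0.0,     0.0,  0.2],  # p_{0,5}
]

def var_alpha(pmf, alpha):
    """q_alpha as in Definition 1, for a pmf on {0, ..., d}."""
    cdf = 0.0
    for y, mass in enumerate(pmf):
        cdf += mass
        if cdf >= alpha:
            return y
    return len(pmf) - 1

quantiles = [var_alpha(e, 0.95) for e in ext]
assert min(quantiles) == 1   # attained on the degenerate extremal pmf
assert max(quantiles) == 5   # = d, attained on p_{0,5}
```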

## 4 Computational Results for Some Fréchet Classes

This section explores some Fréchet classes for given one-dimensional marginal probabilities. To make comparisons between the general case and the exchangeable case, we choose two Fréchet classes of d-dimensional Bernoulli variables with identically distributed one-dimensional margins. We consider the classes $${\mathcal {F}}_d\left( \frac{1}{2}\right)$$ and $${\mathcal {F}}_d\left( \frac{1}{5}\right)$$ and their subclasses $${\mathcal {E}}_d\left( \frac{1}{2}\right)$$ and $${\mathcal {E}}_d\left( \frac{1}{5}\right)$$, for $$d=2,\ldots ,5$$. Table 1 provides the number of extremal points for each class and exhibits the computational effort necessary to work in the general case and high dimension.

The case $$d=2$$ is analytical: the extremal densities are the upper and lower Fréchet bounds, as proved in [4]. The same extremal densities generate $${\mathcal {E}}_2(\frac{1}{2})$$; in fact, in the bivariate case, the condition of equal margins implies exchangeability.

As a simple example, for case $$d=3$$ and $$p=1/2$$, we provide the extremal densities of the Fréchet class $${\mathcal {F}}_3\left( \frac{1}{2}\right)$$ (Table 2) and the extremal densities of the exchangeable class $${\mathcal {E}}_3\left( \frac{1}{2}\right)$$ (Table 3). As we already pointed out, the upper Fréchet bound ($$\varvec{R}_{\mathcal {F}}^{(5)}$$ in Table 2 and $$\varvec{R}_{\mathcal {E}}^{(2)}$$ in Table 3) belongs to both classes.

As can be seen, the number of generators of the whole Fréchet class increases very quickly, while the number of generators of the subclass of exchangeable variables is much smaller. This means that working under the assumption of exchangeability is far easier. The following two sections explore how binding the assumption of exchangeability can be in terms of dependence flexibility. To do this, we find the bounds for the cross-moments of the entire Fréchet class and of the exchangeable subclass, to consider both linear and nonlinear dependence. We also consider the VaR of the sums, whose bounds, as discussed in Sect. 3.1, are not affected by the assumption of exchangeability.

### 4.1 The Class $${\mathcal {F}}_d\left( \frac{1}{2}\right)$$

In this section, we consider the case $$p=\frac{1}{2}$$ and $$d=2,\ldots , 6$$.

Table 4 reports the bounds for moments of order $$2,\ldots ,6$$, both for the Fréchet class $${\mathcal {F}}_d\left( \frac{1}{2}\right)$$ and for the exchangeable class $${\mathcal {E}}_d\left( \frac{1}{2}\right)$$, $$d=2,\ldots ,6$$.

We conclude this section with the bounds for the value at risk $$\hbox {VaR}_{0.95}$$ of the sums, i.e., the quantile $$q_{0.95}$$ of the distribution of $$S_X=X_1+\ldots +X_d$$, where $$\varvec{X}$$ has pmf in $${\mathcal {F}}_d(\frac{1}{2})$$. The bounds, in Table 5, remain the same if we assume that $$\varvec{X}$$ has pmf in $${\mathcal {E}}_d(\frac{1}{2})$$. Notice that the maximum VaR is always the dimension d; this is probably due to the fact that the marginal probability $$p=\frac{1}{2}$$ is quite large. The results in [5], where marginal default probabilities are small, support this interpretation.

### 4.2 The Class $${\mathcal {F}}_d\left( \frac{1}{5}\right)$$

In this section, we consider the case $$p=\frac{1}{5}$$ and $$d=2,\ldots , 6$$.

Table 6 reports the bounds for moments of order $$2,\ldots ,6$$, both for the Fréchet class $${\mathcal {F}}_d\left( \frac{1}{5}\right)$$ and for the exchangeable class $${\mathcal {E}}_d\left( \frac{1}{5}\right)$$, $$d=2,\ldots ,6$$.

We conclude this section with the bounds for the value at risk $$\hbox {VaR}_{0.95}$$ of the sums, i.e., the quantile $$q_{0.95}$$ of the distribution of $$S_X$$, where $$\varvec{X}$$ has pmf in $$\mathcal {F}_d\left( \frac{1}{5}\right)$$. The bounds are in Table 7. Also in this case, the bounds remain the same if we assume that $$\varvec{X}$$ has pmf in $$\mathcal {E}_d\left( \frac{1}{5}\right)$$ and the maximum VaR is always the dimension d.