Some properties of the unified skew-normal distribution

For the family of multivariate probability distributions variously denoted as unified skew-normal, closed skew-normal and other names, a number of properties are already known, but many others, even some basic ones, are not. The present contribution aims to fill some of these gaps. Specifically, the moments up to the fourth order are obtained, and from these the expressions of Mardia's measures of multivariate skewness and kurtosis. Other results concern the log-concavity of the distribution and its closure with respect to conditioning on intervals.

1 The unified skew-normal distribution

Early development, applications and some open problems
In recent years, there has been a vigorous impulse in the development of flexible parametric families of distributions. This activity is especially lively and stimulating in the multivariate setting, corresponding to the ever-increasing availability and treatment of multivariate data in applied work.
An active direction of research within this process is represented by a family of continuous distributions which has originated as a generalization of the multivariate skew-normal (SN) distribution, which itself is a generalization of the classical normal distribution; for a review of the SN distribution and its ramifications, see Azzalini & Capitanio (2014). The generalization we are concerned with has originated from multiple independent sources, with some differences in the technical development, but with common underlying structure, as explained later. Specifically, González-Farías et al. (2004a) and González-Farías et al. (2004b) have developed the 'closed skew-normal distribution'. Motivated by Bayesian inference considerations, Liseo & Loperfido (2003) have presented the 'hierarchical skew-normal'. Another related construction is the 'fundamental skew-normal' proposed by Arellano-Valle & Genton (2005), who also consider a second version of closed skew-normal.
The interconnections among these apparently separate formulations have been examined by Arellano-Valle & Azzalini (2006), showing their essential equivalence, as well as the presence of overparameterizations in some cases. To accomplish their project, they introduced a unifying version which embraces the above-recalled specific proposals, removing at the same time the overparameterizations present in some of them.

Main properties of the SUN family
We summarize the main facts about the SUN family; this term is used to embrace also the closed skew-normal and other essentially equivalent classes, provided a suitable parameterization is adopted. The notation here is the one of Subsection 7.1.2 of Azzalini & Capitanio (2014), which is largely the same as that of Arellano-Valle & Azzalini (2006), with minor variations.
For positive integers d and m, consider the (d + m)-dimensional normal random variable

X = (X_0⊤, X_1⊤)⊤ ∼ N_{d+m}(0, Ω∗),   Ω∗ = [ Ω̄  Δ ; Δ⊤  Γ ],   (1)

where Ω∗ is a full-rank correlation matrix, X_0 is d-dimensional and X_1 is m-dimensional. Define Z to be a d-dimensional random variable with the same distribution as (X_0 | X_1 + τ > 0), where τ = (τ_1, …, τ_m)⊤ and the notation X_1 + τ > 0 means that the inequality sign must hold component-wise for each one of the m components. Next, introduce the transformed variable Y = ξ + ωZ, where ξ = (ξ_1, …, ξ_d)⊤ and ω is a d × d diagonal matrix with positive diagonal elements ω_1, …, ω_d, and denote Ω = ωΩ̄ω. It can be shown that the density of Y at x ∈ R^d is

f(x) = φ_d(x − ξ; Ω) Φ_m(τ + Δ⊤Ω̄^{-1}z; Γ − Δ⊤Ω̄^{-1}Δ) / Φ_m(τ; Γ),   z = ω^{-1}(x − ξ),   (2)

where φ_h(u; Σ) and Φ_h(u; Σ) denote the N_h(0, Σ) density function and distribution function at u ∈ R^h, respectively, for any symmetric (h × h) positive-definite matrix Σ. In this case, we shall write Y ∼ SUN_{d,m}(ξ, Ω, Δ, τ, Γ).
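As a concrete illustration of the conditioning construction just described, the following Python sketch draws from a SUN distribution by rejection; all parameter values (Ω∗, τ, ξ, ω) are arbitrary choices for illustration, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (hypothetical) parameters: d = 2, m = 2.
d, m = 2, 2
# Omega* is a full-rank correlation matrix of order d + m, partitioned
# between the X0-block (first d components) and the X1-block (last m).
Omega_star = np.array([[1.0, 0.3, 0.4, 0.2],
                       [0.3, 1.0, 0.1, 0.3],
                       [0.4, 0.1, 1.0, 0.2],
                       [0.2, 0.3, 0.2, 1.0]])
tau = np.array([0.5, -0.2])
xi = np.array([1.0, -1.0])
omega = np.diag([2.0, 0.5])

# Representation by conditioning: Z has the distribution of
# (X0 | X1 + tau > 0), with the inequality holding component-wise;
# then Y = xi + omega Z.
X = rng.multivariate_normal(np.zeros(d + m), Omega_star, size=100_000)
keep = np.all(X[:, d:] + tau > 0, axis=1)
Z = X[keep, :d]
Y = xi + Z @ omega   # omega is diagonal, so this scales the columns of Z
```

Note that the acceptance rate equals the probability of the conditioning event, so plain rejection is viable only when that probability is not too small.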
The SUN family enjoys numerous formal properties. For instance, we have already anticipated in Subsection 1.1 that this family is closed with respect to convolution. Many other interesting facts hold, but it would take too much space to review all such properties here, and we only recall those which are required for the subsequent development; additional information is summarized in Section 7.1 of Azzalini & Capitanio (2014). A key fact is the expression of the moment generating function of (2),

M(t) = exp(ξ⊤t + ½ t⊤Ωt) Φ_m(τ + Δ⊤ωt; Γ) / Φ_m(τ; Γ),   t ∈ R^d,   (3)

or, equivalently, of the cumulant generating function K(t) = log M(t); the form given by Arellano-Valle & Azzalini (2006) is essentially as in González-Farías et al. (2004a) and González-Farías et al. (2004b), up to a change of parameterization. From this expression, many other results can be derived. One of them is the rule for obtaining the distribution of an affine transformation: if a is a p-vector and A is a full-rank d × p matrix, then

a + A⊤Y ∼ SUN_{p,m}(a + A⊤ξ, A⊤ΩA, Δ_A, τ, Γ),   (4)

where Δ_A = Diag(A⊤ΩA)^{-1/2} A⊤ωΔ, using the notation Diag(M) to denote the diagonal matrix formed by the diagonal elements of a square matrix M, as in Mardia et al. (1979, p. 455). Clearly, (4) can be used to compute the distribution of p-dimensional marginals. Another result to be used in our development is the expression of the distribution function, which has been given in Lemma 2.2.1 of González-Farías et al. (2004b). Since we adopt the SUN formulation for the reasons discussed by Arellano-Valle & Azzalini (2006), we shall use the equivalent expression, given by Azzalini & Bacchieri (2010),

P{Y ≤ y} = Φ_{d+m}((z⊤, τ⊤)⊤; Ω̃) / Φ_m(τ; Γ),   Ω̃ = [ Ω̄  −Δ ; −Δ⊤  Γ ],   z = ω^{-1}(y − ξ).   (5)

There exist two stochastic representations of the SUN distribution, or equivalently two constructive ways to generate a random variable Y with density (2). The first of these is essentially the above-described process leading from the normal variable X in (1) to the variable Y, via the intermediate variable Z. This is denoted 'representation by conditioning', since it operates through the condition X_1 + τ > 0.
The other stochastic representation is of convolution type, that is, as the distribution of the sum of two independent random variables. Specifically, from the above-defined quantities, introduce Ψ̄_Δ = Ω̄ − ΔΓ^{-1}Δ⊤, and the two independent variables U_0 ∼ N_d(0, Ψ̄_Δ) and U_{1,−τ}, which is obtained by the component-wise truncation below −τ of a variate U_1 ∼ N_m(0, Γ). Then Y ∼ SUN_{d,m}(ξ, Ω, Δ, τ, Γ) can be expressed via the so-called additive representation

Y = ξ + ω(U_0 + ΔΓ^{-1}U_{1,−τ}),   (6)

which will play a key role in our development. For a detailed discussion of the interplay of these two stochastic representations, see Section 2.1 of Arellano-Valle & Azzalini (2006).
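The additive representation gives a second route to simulation, in which only the m-dimensional truncated normal component needs ad hoc handling. In this sketch the parameter blocks are illustrative choices forming a positive-definite Ω∗.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative (hypothetical) parameters, d = 2, m = 2.
Omega_bar = np.array([[1.0, 0.3], [0.3, 1.0]])
Delta = np.array([[0.4, 0.2], [0.1, 0.3]])
Gamma = np.array([[1.0, 0.2], [0.2, 1.0]])
tau = np.array([0.5, -0.2])
xi = np.array([1.0, -1.0])
omega = np.diag([2.0, 0.5])

# Psi_bar_Delta = Omega_bar - Delta Gamma^{-1} Delta^T
Psi_bar = Omega_bar - Delta @ np.linalg.solve(Gamma, Delta.T)

n = 100_000
U0 = rng.multivariate_normal(np.zeros(2), Psi_bar, size=n)

# U_{1,-tau}: N_m(0, Gamma) truncated component-wise below -tau,
# generated here by simple rejection (adequate for small m).
cand = rng.multivariate_normal(np.zeros(2), Gamma, size=10 * n)
U1 = cand[np.all(cand > -tau, axis=1)][:n]

# Additive representation: Y = xi + omega (U0 + Delta Gamma^{-1} U_{1,-tau}).
coef = np.linalg.solve(Gamma, Delta.T)   # Gamma^{-1} Delta^T, acts on rows of U1
Z = U0 + U1 @ coef
Y = xi + Z @ omega
```

The sampler never discards U_0 draws, which is the practical advantage of this representation over plain rejection when d is large and m is small.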
Although the moment generating function M(t) has been known since the early work on this theme, it has not translated into decisive advances in the computation of moments and cumulants. Most of the available results for E{Y} and var{Y} are limited in some way or another. For instance, the results in Section 3 of Gupta et al. (2004) refer to the case m = d, and even so they employ very involved auxiliary functions. For the case where Γ is a diagonal matrix, Arellano-Valle & Azzalini (2006) provide explicit expressions for the expected value and the variance matrix, applicable for all d and m.
To our knowledge, the general expression of E{Y} has been obtained by Azzalini & Bacchieri (2010). This expression involves the following quantities: τ_{−j} denotes the vector obtained by removing the j-th component of τ, for j = 1, …, m; Γ_{−j} is the (m − 1) × (m − 1) matrix obtained by removing the j-th row and column of Γ; γ_{−j} denotes the j-th column of Γ deprived of its j-th element; finally, Γ̄_{−j} = Γ_{−j} − γ_{−j}γ_{−j}⊤. Then the mean value can be written as

E{Y} = ξ + ωΔ ∇Φ_m(τ; Γ) / Φ_m(τ; Γ),   (7)

where ∇Φ_m is the m-vector with j-th element

φ(τ_j) Φ_{m−1}(τ_{−j} − γ_{−j}τ_j; Γ̄_{−j}).   (8)

An expression of type (7) or (8) can be regarded as 'essentially explicit', at least for moderate values of m, even if it involves the multivariate normal distribution function Φ_m. The phrase 'essentially explicit' seems justified in the light of the current advances in computing Φ_m, similarly to the process which, a few decades ago, has led to consider 'explicit' an expression involving the univariate normal distribution function Φ.
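The gradient entering the mean can be coded directly from the marginal-conditional factorization of the normal distribution; the sketch below does so for an illustrative (hypothetical) parameter set and then assembles the mean value, with a finite-difference check of the gradient.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal as mvn

def grad_Phi_m(tau, Gamma):
    """j-th element: phi(tau_j) times Phi_{m-1} of the conditional law."""
    m = len(tau)
    g = np.empty(m)
    for j in range(m):
        idx = [i for i in range(m) if i != j]
        gam = Gamma[idx, j]                          # gamma_{-j}
        if m == 1:
            tail = 1.0                               # Phi_0 interpreted as 1
        else:
            cov = Gamma[np.ix_(idx, idx)] - np.outer(gam, gam)
            tail = mvn.cdf(tau[idx] - gam * tau[j],
                           mean=np.zeros(m - 1), cov=cov)
        g[j] = norm.pdf(tau[j]) * tail
    return g

# Illustrative parameters (d = 2, m = 2).
xi = np.array([1.0, -1.0])
omega = np.diag([2.0, 0.5])
Delta = np.array([[0.4, 0.2], [0.1, 0.3]])
Gamma = np.array([[1.0, 0.2], [0.2, 1.0]])
tau = np.array([0.5, -0.2])

P0 = mvn.cdf(tau, mean=np.zeros(2), cov=Gamma)
EY = xi + omega @ Delta @ grad_Phi_m(tau, Gamma) / P0   # the mean vector
```

This is the sense in which the expression is 'essentially explicit': a handful of calls to Φ_m and Φ_{m−1} suffice.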
Some intermediate expressions of the SUN variance matrix have been provided by Gupta & Aziz (2012) and Gupta et al. (2013), where the word 'intermediate' reflects the presence in their result of the matrix of the second derivatives of Φ m . Since these second derivatives have been provided in an explicit form only for some special sub-cases of the SUN family, the question of the general expression of the SUN variance matrix appears to be open. This is the problem to be tackled in our next section, followed by consideration of higher order moments.

The variance matrix
We compute the variance matrix var{Y} using second-order differentiation of the cumulant generating function (3). Write

∇K(t) = ξ + Ωt + P(t)^{-1} ∂P(t)/∂t,   (9)

where P(t) = Φ_m(τ + Δ⊤ωt; Γ). The only non-obvious ingredients are the partial derivatives of P, which hinge on the derivatives of Φ_m with respect to the components of its argument u(t) = τ + Δ⊤ωt, with u_j(t) = τ_j + Δ_j⊤ωt, where Δ_j is the j-th column of Δ, for j = 1, …, m. For notational simplicity, we focus on j = 1, since the other terms are analogous. Write the joint m-normal density involved by P as the product of the first marginal component times the conditional density of the other components, leading to

P(t) = ∫_{−∞}^{u_1(t)} φ(x_1) Φ_{m−1}(u_{−1}(t) − μ_{−1}(x_1); Γ̄_{−1}) dx_1,   (10)

where x_{−1} is the (m − 1)-vector obtained by removing the first component of x, μ_{−1}(x_1) = γ_{−1}x_1 denotes the mean value of the conditional normal distribution when the first component of N_m(0, Γ) is fixed at x_1, and Γ̄_{−1} denotes the corresponding variance matrix; we have used the quantities introduced in connection with (7). Therefore

∂P/∂u_1 = φ(u_1(t)) Φ_{m−1}(u_{−1}(t) − γ_{−1}u_1(t); Γ̄_{−1}),   (11)

where the term Φ_{m−1}(·; ·) must be interpreted as 1 when m = 1. This convention will apply also to subsequent expressions. Application of (11) with the other values of the subscript j produces the entire gradient of P. Next, evaluation of the gradient (9) at t = 0 delivers the mean vector (7).
The second derivative of K(t) is obtained by differentiation of (9), yielding

∇²K(t) = Ω + P(t)^{-1} ∇²P(t) − P(t)^{-2} ∇P(t) ∇P(t)⊤,   (12)

where two generic entries of the final two terms are of the type

∂² log P / ∂t_j ∂t_k = −P^{-2} (∂P/∂t_j)(∂P/∂t_k) + P^{-1} ∂²P/∂t_j∂t_k.   (13)

The first summand on the right side is the product of quantities of type (11). For the second summand, consider first the case with j ≠ k, and follow a logic similar to the one used for (10), but now separate out two components. Focusing on the first two components, for notational simplicity, write

∂²P/∂u_1∂u_2 = φ_2(u_{1:2}(t); Γ_{1:2}) Φ_{m−2}(u_{−(1:2)}(t) − μ_{−(1:2)}(u_{1:2}(t)); Γ̄_{−(1:2)}),   (14)

where u_{1:2} = (u_1, u_2)⊤, Γ_{1:2} is the submatrix of Γ formed by its top-left 2 × 2 block, and so on, in the same logic and notational scheme used before.
Here we have implicitly assumed that m ≥ 2. This is a legitimate assumption, since the case m = 1 corresponds to the ESN distribution, for which var{Y} has been given by Capitanio et al. (2003), along with other moment-related results.
When j = k, take j = k = 1 for simplicity of notation, and write the corresponding term, where (∂P/∂t_1) is built up from (11). Consider its core part (∂P/∂u_1) and take the successive derivative

∂²P/∂u_1² = φ(u_1(t)) { −u_1(t) Φ_{m−1}(v_1(t); Γ̄_{−1}) − γ_{−1}⊤ ∇Φ_{m−1}(v_1(t); Γ̄_{−1}) },   (15)

where v_1(t) = u_{−1}(t) − γ_{−1}u_1(t) and ∇Φ_{m−1} denotes the gradient of Φ_{m−1} with respect to its first argument. Hence the second derivative (∂²P/∂u_1²) evaluated at t = 0 follows and, similarly to earlier expressions, it must be replicated for the other values of j. Finally, as a general expression encompassing all terms in matrix notation, we arrive at

var{Y} = Ω + ωΔHΔ⊤ω = Σ,   (16)

say, where H is the m × m matrix formed by the elements, other than the factors ωΔ, given in (11), (14) and (15), evaluated at t = 0. Even if we have derived (16) under the assumption that m ≥ 2, a subsequent inspection has shown that the expression remains valid provided the above derivatives are computed setting the φ_{m−1} and Φ_{m−1} terms equal to 1 when m = 1, since we then recover the known expressions for the ESN distribution. With this convention, (16) holds for all m.
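Since all the ingredients of the variance stem from derivatives of K(t), the result can be checked numerically: the sketch below approximates var{Y} by a central-difference Hessian of K(t) at t = 0, with the cumulant generating function coded from the form (3); all parameter values are illustrative.

```python
import numpy as np
from scipy.stats import multivariate_normal as mvn

# Illustrative parameters (d = 2, m = 2).
xi = np.array([1.0, -1.0])
omega = np.diag([2.0, 0.5])
Omega_bar = np.array([[1.0, 0.3], [0.3, 1.0]])
Delta = np.array([[0.4, 0.2], [0.1, 0.3]])
Gamma = np.array([[1.0, 0.2], [0.2, 1.0]])
tau = np.array([0.5, -0.2])
Omega = omega @ Omega_bar @ omega

log_Phi_tau = np.log(mvn.cdf(tau, mean=np.zeros(2), cov=Gamma))

def K(t):
    """Cumulant generating function of the SUN distribution."""
    u = tau + Delta.T @ omega @ t
    return (xi @ t + 0.5 * t @ Omega @ t
            + np.log(mvn.cdf(u, mean=np.zeros(2), cov=Gamma)) - log_Phi_tau)

# Central-difference Hessian of K at t = 0 approximates var{Y}.
h, d = 0.05, 2
Var = np.empty((d, d))
for i in range(d):
    for j in range(d):
        ei = h * np.eye(d)[i]
        ej = h * np.eye(d)[j]
        Var[i, j] = (K(ei + ej) - K(ei - ej)
                     - K(ej - ei) + K(-ei - ej)) / (4 * h * h)
```

The numerical Hessian is symmetric by construction and its diagonal is dominated by that of Ω, in agreement with the inequality Ω > Σ noted in Proposition 1.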
Starting from a different motivation, expressions for the derivatives of Φ_m similar to those obtained above have been presented in Lemma 2.3 of Arellano-Valle et al. (2013). Their motivation was the computation of the mean value and the variance matrix of the truncated multivariate normal distribution, which are given in their Lemma 2.2. Taking into account the additive representation (6) of a SUN variable, those expressions could also be used to derive the lower-order SUN moments.

Higher-order moments
While in principle one could consider successive differentiations of K (t ) to compute higher-order moments, this process becomes algebraically cumbersome. We therefore follow another route, based on the additive representation (6).
Our plan of work is as follows. A preliminary step is the development of various expressions concerning moments of the sum of two independent random vectors, which are presented separately in an appendix. Simplification can be obtained by using the fact that one of the components of (6) is a zero-mean normal variable. On this front, we benefit from the extensive literature on computational methods for the moments of a multivariate truncated normal distribution. Combining representation (6) with these results for the moments of a truncated normal variable, we obtain expressions for the desired SUN moments.
For a p-dimensional random variable X, define its moments up to the fourth order as

μ_1(X) = E{X},   μ_2(X) = E{X X⊤},   μ_3(X) = E{(X ⊗ X) X⊤} = E{(X X⊤) ⊗ X},   μ_4(X) = E{(X ⊗ X)(X ⊗ X)⊤} = E{(X X⊤) ⊗ (X X⊤)},

provided the involved expected values exist. The equivalence of the various expressions for a given moment follows from standard properties of the Kronecker product. The vec operator stacks the columns of a matrix in a single vector. Also, the following notation will be used, adopted from Magnus & Neudecker (1979) and Neudecker & Wansbeek (1983). For arbitrary natural numbers s, p, q, denote by e_{i:s} the i-th s-dimensional unit vector, formed by all 0's except a 1 in the i-th position, and from here define the commutation matrix

K_{pq} = Σ_{i=1}^{p} Σ_{j=1}^{q} (e_{i:p} e_{j:q}⊤) ⊗ (e_{j:q} e_{i:p}⊤).

For algebraic convenience, we rewrite (6) in an equivalent form. Introduce the quantities

Λ = ωΔΓ^{-1},   Ψ = ωΨ̄_Δω = Ω − ΛΓΛ⊤,   (17)

the underlying normal variable

Z_0 = (V⊤, W⊤)⊤ ∼ N_{d+m}(0, blockdiag(I_d, Γ)),   (18)

and U = (W | W + τ > 0), so that

X = ΛU + Ψ^{1/2}V ∼ SUN_{d,m}(0, Ω, Δ, τ, Γ),   (19)

where we denote by Ψ^{1/2} the unique symmetric square root of Ψ; however, it would make no difference if another square root of Ψ were considered. The fact that Ψ > 0 follows from the assumption that Ω∗ is a full-rank correlation matrix.
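Sample counterparts of the arrays μ_1, …, μ_4 can be formed directly with Kronecker-product arithmetic; the arrangement below (μ_3 of order p² × p, μ_4 of order p² × p²) is one standard convention, and the data are simulated standard normal purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated data: n draws of a p-dimensional standard normal vector.
n, p = 50_000, 3
X = rng.standard_normal((n, p))

mu1 = X.mean(axis=0)                 # E{X}
mu2 = X.T @ X / n                    # E{X X^T},                    p   x p
# E{(X kron X) X^T}: entry ((i, j), k) estimates E{X_i X_j X_k}
mu3 = np.einsum('ni,nj,nk->ijk', X, X, X).reshape(p * p, p) / n
# E{(X kron X)(X kron X)^T} = E{(X X^T) kron (X X^T)},         p^2 x p^2
XX = np.einsum('ni,nj->nij', X, X).reshape(n, p * p)
mu4 = XX.T @ XX / n
```

For N_p(0, I_p) data these estimates should be close to μ_1 = 0, μ_2 = I_p, μ_3 = 0 and the known fourth-order normal moments, e.g. E{X_1⁴} = 3.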

Proposition 1 (SUN moments) Consider X = ΛU + Ψ^{1/2}V ∼ SUN_{d,m}(0, Ω, Δ, τ, Γ) as in (19), and denote by μ_k(U), k = 1, …, 4, the moments of the truncated normal component U, with Σ_u = var{U}. Then, in particular,

μ_1(X) = Λμ_1(U),   Σ = var{X} = ΛΣ_uΛ⊤ + Ψ = Ω − Λ(Γ − Σ_u)Λ⊤,   (20)

and the corresponding expressions for μ_3(X) and μ_4(X) follow in the same way, by combining the moments of U with those of V.
The inequality Ω > Σ holds, in the sense that the difference of the matrices is positive definite.
Proof. Make use of Proposition A.5 in the Appendix for the moments of a linear combination of two independent multivariate variables, combined with the expressions in Proposition A.3 for the moments of V under the assumption V ∼ N_d(0, I_d). After some algebraic simplifications, one arrives at the stated expressions. The term Γ − Σ_u of (20) is a positive definite matrix, taking into account (A.3) in the appendix, which in the present case holds in the strict version of the matrix inequality. This implies that Ω > Σ.

QED
The expressions μ_k(X) given in Proposition 1 refer to a SUN variable X with location parameter ξ = 0. For the general case with arbitrary ξ, consider the shifted variable Y = ξ + X and use expressions (A.5)–(A.8) in the appendix. Another remark is that Proposition 1 includes an expression for var{X} alternative to (16).
The actual usage of the expressions provided in Proposition 1 requires knowledge of the moments of the truncated normal component, μ_k(U). In general, these moments are not amenable to explicit treatment, and one must resort to numerical computation. As already mentioned, there exists a vast literature concerned with this problem, and an exhaustive review would take far too much space; we therefore indicate only some recent results, referring the reader to the references quoted therein for earlier developments. Among the more recent proposals, we mention the methods for computing these moments presented by Arismendi (2013) and Kan & Robotti (2017). For the latter approach, there exist publicly available computing routines written by the authors in the Matlab language. A corresponding version of these routines is available in the R computing environment via either of the packages mnormt (Azzalini & Genz, 2020) and MomTrunc (Galarza et al., 2020).
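For moderate m, a crude Monte Carlo version of the truncated normal moments is easy to sketch and can serve as a sanity check for the dedicated routines just mentioned, which are far more efficient and accurate; the parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# U is N_m(0, Gamma) truncated component-wise below -tau (illustrative values).
Gamma = np.array([[1.0, 0.2], [0.2, 1.0]])
tau = np.array([0.5, -0.2])

draws = rng.multivariate_normal(np.zeros(2), Gamma, size=500_000)
U = draws[np.all(draws > -tau, axis=1)]

mu1_U = U.mean(axis=0)            # E{U}
mu2_U = U.T @ U / len(U)          # E{U U^T}
var_U = mu2_U - np.outer(mu1_U, mu1_U)
```

That Γ − var{U} comes out positive definite here reflects the variance reduction caused by the truncation, the fact used in the proof of Proposition 1.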
We now want to obtain expressions for Mardia's measures of multivariate skewness and kurtosis, denoted β_{1,d} and β_{2,d} as in the original publications of Mardia (1970, 1974), apart from the symbol d adopted here to denote the dimensionality. To simplify the algebraic work, it is convenient to operate with a suitably transformed variable, exploiting the invariance of Mardia's measures with respect to non-singular affine transformations. For a random variable Y ∼ SUN_{d,m}(ξ, Ω, Δ, τ, Γ), consider again its representation (19) and introduce additionally μ = E{Y} = ξ + μ_1(X), where X is as in Proposition 1. Ruling out degenerate cases, Σ = var{Y} = var{X} is non-singular. Consider then any non-singular d × d matrix C such that Σ = CC⊤; although not strictly necessary, a common choice is to set C = Σ^{1/2}, the unique symmetric positive-definite square root of Σ. Next, introduce the standardized variable

Z̃ = C^{-1}(Y − μ),   (21)

such that E{Z̃} = 0 and var{Z̃} = I_d. On setting Λ̃ = C^{-1}Λ and Ψ̃ = C^{-1}Ψ(C^{-1})⊤, where Λ and Ψ are given in (17), and with a matching definition of Ψ̃^{1/2} = C^{-1}Ψ^{1/2}, we also note that

Z̃ =_d Λ̃{U − μ_1(U)} + Ψ̃^{1/2}V,   (22)

where =_d means identically distributed. The reason for introducing the variable Z̃ is represented by the following fact. For a standardized variable X*, say, having zero mean vector and identity variance matrix, Mardia's measures can be conveniently computed using the expressions given by Kollo & Srivastava (2005), namely

β_{1,d} = tr{μ_3(X*)⊤ μ_3(X*)},   β_{2,d} = tr{μ_4(X*)}.

The next statement presents the evaluation of these expressions for the SUN variable Z̃.
Proposition 2 For the random variable Z̃ specified as in (21) or, equivalently, as in (22), the following expected values hold:

Proof. The expressions of μ_3(Z̃) and μ_4(Z̃) follow directly from Proposition 1, by using it with the terms X, Λ, Ψ specified as Z̃, Λ̃, Ψ̃. Therefore, we concentrate on the derivation of β_{1,d} and β_{2,d} only. An algebraically convenient route to obtain these quantities is from the stochastic representation in (21). Denote by Y′ = μ + CZ̃′ an independent replicate of Y = μ + CZ̃, so that Mardia's measure of skewness can be expressed as

β_{1,d} = E[{(Y − μ)⊤Σ^{-1}(Y′ − μ)}³] = E[(Z̃⊤Z̃′)³].

First, introduce the matrices M_00 = Λ⊤Σ^{-1}Λ, M_01 = Λ⊤Σ^{-1}Ψ^{1/2}, M_10 = Ψ^{1/2}Σ^{-1}Λ = M_01⊤, and M_11 = Ψ^{1/2}Σ^{-1}Ψ^{1/2}. Then, with U_0 = U − μ_1(U), expand

(Y − μ)⊤Σ^{-1}(Y′ − μ) = U_0⊤M_00U_0′ + U_0⊤M_01V′ + V⊤M_10U_0′ + V⊤M_11V′.

Take into account that U_0 and U_0′ are independent and identically distributed (i.i.d.) random vectors with mean zero, as well as that V and V′ are i.i.d. random vectors with spherical normal distribution, so that all the terms involving odd functions of V or V′ have zero expectation.
Consider also that U 0 , U ′ 0 , V and V ′ are mutually independent. Then, by taking the expectation and removing the terms with zero mean, we have

Now use the equality tr(EFGH) = vec(E⊤)⊤ (H⊤ ⊗ F) vec(G), which holds for conformable matrices E, F, G, H,
where μ_3(U_0′) = μ_3(U_0), since U_0′ and U_0 are i.i.d. variables. Proceeding in a similar way for the measure of kurtosis, we have

β_{2,d} = E[{(Y − μ)⊤Σ^{-1}(Y − μ)}²] = E[(Z̃⊤Z̃)²],

where, in the expansion of the expected value, the terms with zero expectation have been removed, namely those associated with odd functions of V. The remaining expected values can be worked out recalling that powers of quadratic forms can be expressed as traces of matrix products, combined with properties of the trace of products of matrices, specifically that tr(K_{pq}(P⊤ ⊗ Q)) = tr(P⊤Q), as stated by Theorem 3.1, item (xiii), of Magnus & Neudecker (1979) and, in case p = q, tr(P ⊗ Q) = tr(P) tr(Q) and tr(P⊤Q) = vec(P)⊤vec(Q). We then obtain

β_{2,d} = tr{(M_00 ⊗ M_00) μ_4(U_0)} + 2 tr{M_10 μ_2(U_0) M_10⊤ μ_2(V)} + 2 tr{M_00 μ_2(U_0)} tr{M_11 μ_2(V)} + 2 tr{M_01 μ_2(V) M_01⊤ μ_2(U_0)} + tr{(M_11 ⊗ M_11) μ_4(V)}.   QED
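The population quantities above have straightforward sample counterparts, convenient for checking algebra of this kind: standardize the data, then average (z_i⊤z_j)³ over all pairs and (z_i⊤z_i)² over the sample. The data below are simulated N_d(0, I_d), for which β_{1,d} = 0 and β_{2,d} = d(d + 2).

```python
import numpy as np

rng = np.random.default_rng(3)

def mardia_measures(X):
    """Sample versions of Mardia's beta_{1,d} and beta_{2,d}."""
    n, d = X.shape
    S = np.cov(X.T, bias=True)
    C = np.linalg.cholesky(S)                     # S = C C^T
    Z = (X - X.mean(axis=0)) @ np.linalg.inv(C).T
    G = Z @ Z.T                                   # G[i, j] = z_i' z_j
    b1 = np.mean(G ** 3)                          # skewness measure
    b2 = np.mean(np.diag(G) ** 2)                 # kurtosis measure
    return b1, b2

X = rng.standard_normal((2000, 2))
b1, b2 = mardia_measures(X)
```

With d = 2 the reference normal values are β_{1,2} = 0 and β_{2,2} = 8; the sample versions converge to these as n grows.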

Log-concavity of the SUN distribution
The SN distribution is known to be log-concave, even in its extended version, ESN; see Azzalini & Regoli (2012) for a proof. Since the ESN distribution corresponds to the SUN with m = 1, it is natural to investigate the same property for a general value of m. An often-employed definition of log-concavity of a distribution in the continuous case requires that the logarithm of its density function is a concave function. In the more specialized literature, the concept of log-concavity is expressed via the corresponding probability measure P, by requiring that

P{λA + (1 − λ)B} ≥ P{A}^λ P{B}^{1−λ}   (23)

for any two Borel sets A and B, and for any 0 < λ < 1, where λA + (1 − λ)B denotes the Minkowski combination of the two sets. For general information on this theme, we refer to Chapter 2 of Dharmadhikari & Joag-dev (1988) and Chapter 4 of Prékopa (1995), which provide extensive compendia of a vast literature.
Established results ensure the equivalence of the definition of log-concavity based on the density function and the one in (23); see Theorem 4.2.1 of Prékopa (1995) and Theorem 2.8 of Dharmadhikari & Joag-dev (1988). Moreover, the corresponding distribution function is also a log-concave function; see Theorem 4.2.4 of Prékopa (1995).

Proposition 3 The SUN distribution is log-concave.
Proof. The proof is based on the additive representation in the form (19), which involves the underlying variable Z_0 indicated in (18). For the multivariate normal distribution, log-concavity is a well-known fact. Next, recall Theorem 9 of Horrace (2005), which ensures log-concavity of a normal distribution subject to one-sided truncation. In our case the truncation operates on the variable Z_0 = (V⊤, W⊤)⊤ in the form W + τ > 0. Since U has the same distribution as (W | W + τ > 0), this establishes log-concavity of the distribution of (V⊤, U⊤)⊤. A variable Y ∼ SUN_{d,m}(ξ, Ω, Δ, τ, Γ) can be obtained from (V⊤, U⊤)⊤ via the affine transformation Y = ξ + Ψ^{1/2}V + ΛU, in the notation of (19). Preservation of log-concavity under an affine transformation has been proved by Henningsson & Åström (2006). Strictly speaking, their statement refers to a transformation involving a square matrix, having dimension d + m in our notation, but it is easy to see that this fact extends to reduced-dimension transformations, since one can think of a full-rank transformation to an augmented variable of dimension d + m, followed by marginalization to extract the Y component. Since marginalization preserves log-concavity, as stated for instance by Theorem 4.2.2 of Prékopa (1995), this concludes the proof.
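Proposition 3 lends itself to a quick numerical spot-check: along any line in R^d, log f computed from the density (2) must have non-positive second differences. The density below is coded from that standard form, with illustrative parameter values.

```python
import numpy as np
from scipy.stats import multivariate_normal as mvn

# Illustrative parameters (d = 2, m = 2).
xi = np.array([1.0, -1.0])
omega = np.diag([2.0, 0.5])
Omega_bar = np.array([[1.0, 0.3], [0.3, 1.0]])
Delta = np.array([[0.4, 0.2], [0.1, 0.3]])
Gamma = np.array([[1.0, 0.2], [0.2, 1.0]])
tau = np.array([0.5, -0.2])

Omega = omega @ Omega_bar @ omega
A = np.linalg.solve(Omega_bar, Delta)       # Omega_bar^{-1} Delta
Gamma_cond = Gamma - Delta.T @ A            # Gamma - Delta' Omega_bar^{-1} Delta
log_Phi_tau = np.log(mvn.cdf(tau, mean=np.zeros(2), cov=Gamma))

def logpdf(y):
    """log of the SUN density in its standard form."""
    z = np.linalg.solve(omega, y - xi)
    return (mvn.logpdf(y, mean=xi, cov=Omega)
            + np.log(mvn.cdf(tau + A.T @ z, mean=np.zeros(2), cov=Gamma_cond))
            - log_Phi_tau)

# Second differences of log f along an arbitrary line: concavity requires
# them to be non-positive (up to numerical noise in the Phi_m evaluations).
direction = np.array([1.0, 0.7])
grid = np.linspace(-3.0, 3.0, 41)
vals = np.array([logpdf(xi + s * direction) for s in grid])
d2 = vals[2:] - 2 * vals[1:-1] + vals[:-2]
```

Such a check of course proves nothing, but it is a useful safeguard when coding SUN densities.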

Conditional density generated by interval selection
For a random variable Y ∼ SUN_{d,m}(ξ, Ω, Δ, τ, Γ), consider a partition of Y and of its associated quantities, as follows:

Y = (Y_1⊤, Y_2⊤)⊤,   ξ = (ξ_1⊤, ξ_2⊤)⊤,   ω = blockdiag(ω_1, ω_2),   Δ = (Δ_1⊤, Δ_2⊤)⊤,   (24)

where Y_1 and Y_2 have dimension d_1 and d_2, with a corresponding partition in blocks for the scale matrix Ω which appears in (1) and (2). In Proposition 2.3.2 of González-Farías et al. (2004b), it is proved that the conditional distribution of Y_2 given Y_1 = y_1, for any vector y_1 ∈ R^{d_1}, is still of SUN type. Here, we want to examine another conditional distribution of Y_2, namely the one which arises when the conditioning event on Y_1 is instead an orthant-type interval of the form (Y_1 + y_1 > 0), where the inequality sign holds for each variable component, or some similar orthant-type condition.

Proposition 4
If Y ∼ SUN_{d,m}(ξ, Ω, Δ, τ, Γ), with elements partitioned as indicated in (24), then the conditional distribution of Y_2 given the event (Y_1 + y_1 > 0) is again of SUN type, with parameters as indicated in (25), where z̃_1 = ω_1^{-1}(ξ_1 + y_1) and the inequality sign must be understood to hold for each component of Y_1 if d_1 > 1. In the case where the inequality sign is reversed, the corresponding conclusion (26) holds.
Proof. Recall formula (4) for computing the distribution of an affine transformation of a SUN variable. Using this transformation rule, the distribution of Y_1 is obtained. Then, on setting z_2 = ω_2^{-1}(y_2 − ξ_2), the conditional distribution function of (Y_2 | Y_1 + y_1 > 0) evaluated at y_2 ∈ R^{d_2} is

P{Y_2 ≤ y_2 | Y_1 + y_1 > 0} = P{Y_2 ≤ y_2, Y_1 + y_1 > 0} / P{Y_1 + y_1 > 0},   (27)

where each ingredient can be expressed through F_X(·), the distribution function of a SUN variable X, given by (5). Using again formula (4) for the distributions of Y_1 and of (Y_1, Y_2), the two ingredients of (27) follow. Taking the ratio of the last two expressions, we obtain the distribution function (28) of a SUN variable with the parameters indicated in (25).
For statement (26), notice that the event {Y_1 + y_1 < 0} coincides with {−Y_1 + (−y_1) > 0}, and apply (25) to the distribution of (−Y_1, Y_2) with y_1 replaced by −y_1. The distribution of (−Y_1, Y_2) is essentially given by (28), up to a change of location and scale. QED Clearly, the special case of Proposition 4 with m = 1 applies to the extended SN distribution. By following the same logic as in Proposition 4, it is conceptually simple, although algebraically slightly intricate, to write the conditional distribution of (Y_2 | Y_1 ∈ I), where I is an event specified by mixed-direction inequalities on the components of Y_1.

A.1 On moments of a variable after a selection
The following results are presumably well-known. However, since we are not aware of similarly explicit statements in the literature, they are included here.
Proof. Using Lemma A.1 with h equal to the identity function and to the function selecting the generic entry of WW⊤, write the corresponding expressions for the first two moments, which lead to the decomposition (A.4); then var{W} − var{U} is non-negative definite, since the two summands of (A.4) are non-negative definite matrices. Consider now the case when var{W} > 0 and 0 < π < 1. For an arbitrary vector a ∈ R^m, define W_a = a⊤W and U_a = a⊤U. Provided a ≠ 0, the condition var{W} > 0 ensures that var{W_a} = a⊤var{W}a > 0, which means that W_a is a non-degenerate variable. To show that U_a is also a non-degenerate variable, assume that the opposite holds, which means that there exists a vector a such that U_a = a⊤U ≡ b, for some constant b. Then a contradiction follows, where the last equality uses the condition π > 0. Since we have obtained a contradiction, U_a cannot be degenerate and var{U} > 0. By a similar argument and the condition π < 1, we can establish that var{U^c} > 0. Therefore the term var{U} + var{U^c} in (A.4) is positive definite, while the final summand of (A.4) is at least positive semidefinite, implying that var{W} − var{U} > 0.

A.2 On moments of multivariate normal variables
The proof of statements (i) and (ii) is direct. For (iii) and (iv), see for instance Theorem 4.1 (i) and Lemma 4.1 (ii) of Magnus & Neudecker (1979).

A.3 On moments of the sum of two independent multivariate random variables
Proposition A.4 Let X = AU + BV, where A ∈ R^{p×q} and B ∈ R^{p×r} are constant matrices, and U ∈ R^q and V ∈ R^r are independent random vectors. If the required moments exist, then Proof. The proof of μ_1(X) is trivial. To obtain the other moments, first note that leading to μ_2(X). Also, where we have used This leads to μ_3(X). Finally, where we used the fact that, if C and D are p × q and s × t matrices, respectively, then the following equalities hold: (D ⊗ C) = K_{sp}(C ⊗ D)K_{qt}, K_{ps}(D ⊗ C) = (C ⊗ D)K_{qt}, (D ⊗ C)K_{tq} = K_{sp}(C ⊗ D).