
Moments and random number generation for the truncated elliptical family of distributions


This paper proposes an algorithm to generate random numbers from any member of the truncated multivariate elliptical family of distributions with a strictly decreasing density generating function. Building on the ideas of Neal (Ann. Stat. 31(3):705–767, 2003) and Ho et al. (J. Stat. Plan. Inference 142(1):25–40, 2012), we construct an efficient sampling method by means of a slice sampling algorithm with Gibbs sampler steps. We also provide a faster approach to approximate the first and second moments of truncated multivariate elliptical distributions, in which Monte Carlo integration is used for the truncated partition and explicit expressions for the non-truncated part (Galarza et al., J. Multivar. Anal. 189:104944, 2022). Examples and an application to environmental spatial data illustrate its usefulness. The methods are freely available in the new R library relliptical.




  • Andersen, M., Goedman, R., Grothendieck, G., et al.: Ryacas: R interface to the YACAS computer algebra system. R package version (2020)

  • Bertolacci, M.: armspp: Adaptive rejection Metropolis sampling (ARMS) via ’Rcpp’. R package version 0.0.2 (2019)

  • Besag, J., Green, P.J.: Spatial statistics and Bayesian computation. J. Roy. Stat. Soc.: Ser. B (Methodol.) 55(1), 25–37 (1993)

  • Brent, R.P.: Algorithms for Minimization Without Derivatives. Prentice-Hall, Englewood Cliffs, New Jersey (2013)

  • Damien, P., Walker, S.G.: Sampling truncated normal, beta, and gamma densities. J. Comput. Graph. Stat. 10(2), 206–215 (2001)

  • De Alencar, F.H., Galarza, C.E., Matos, L.A., et al.: Finite mixture modeling of censored and missing data using the multivariate skew-normal distribution. Adv. Data Anal. Classif. (2021)

  • Delyon, B., Lavielle, M., Moulines, E.: Convergence of a stochastic approximation version of the EM algorithm. Ann. Stat. 27(1), 94–128 (1999)

  • Dempster, A.P., Laird, N.M., Rubin, D.B.: Maximum likelihood from incomplete data via the EM algorithm. J. Roy. Stat. Soc.: Ser. B (Methodol.) 39(1), 1–22 (1977)

  • Diggle, P.J., Ribeiro, P.J.: Model-based Geostatistics. Springer, New York (2007)

  • Fang, K.W., Kotz, S., Ng, K.W.: Symmetric Multivariate and Related Distributions. Chapman and Hall/CRC (2018)

  • Fridley, B.L., Dixon, P.: Data augmentation for a Bayesian spatial model involving censored observations. Environmetrics 18(2), 107–123 (2007)

  • Galarza, C.E., Kan, R., Lachos, V.H.: MomTrunc: Moments of folded and doubly truncated multivariate distributions. R package version 5.97 (2021)

  • Galarza, C.E., Lachos, V.H., Bourguignon, M.: A skew-t quantile regression for censored and missing data. Stat 10(1), e379 (2021)

  • Galarza, C.E., Lin, T.I., Wang, W.L., et al.: On moments of folded and truncated multivariate Student-t distributions based on recurrence relations. Metrika 84(6), 825–850 (2021)

  • Galarza, C.E., Matos, L.A., Castro, L.M., et al.: Moments of the doubly truncated selection elliptical distributions with emphasis on the unified multivariate skew-t distribution. J. Multivar. Anal. 189, 104944 (2022)

  • Galarza, C.E., Matos, L.A., Lachos, V.H.: An EM algorithm for estimating the parameters of the multivariate skew-normal distribution with censored responses. METRON 80, 231–253 (2022)

  • Gelfand, A.E., Smith, A.F.: Sampling-based approaches to calculating marginal densities. J. Am. Stat. Assoc. 85(410), 398–409 (1990)

  • Gelfand, A.E., Smith, A.F., Lee, T.M.: Bayesian analysis of constrained parameter and truncated data problems using Gibbs sampling. J. Am. Stat. Assoc. 87(418), 523–532 (1992)

  • Geman, S., Geman, D.: Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Trans. Pattern Anal. Mach. Intell. PAMI-6(6), 721–741 (1984)

  • Gilks, W.R., Wild, P.: Adaptive rejection sampling for Gibbs sampling. J. Roy. Stat. Soc.: Ser. C (Appl. Stat.) 41(2), 337–348 (1992)

  • Gilks, W.R., Best, N.G., Tan, K.K.: Adaptive rejection Metropolis sampling within Gibbs sampling. J. Roy. Stat. Soc.: Ser. C (Appl. Stat.) 44(4), 455–472 (1995)

  • Gómez, E., Gómez-Villegas, M., Marín, J.M.: A multivariate generalization of the power exponential family of distributions. Commun. Stat.-Theory Methods 27(3), 589–600 (1998)

  • Hadfield, J.: MCMCglmm: MCMC generalised linear mixed models. R package version 2.34 (2022)

  • Ho, H.J., Lin, T.I., Wang, W.L., et al.: TTmoment: Sampling and calculating the first and second moments for the doubly truncated multivariate t distribution. R package version 1.0 (2015)

  • Ho, H.J., Lin, T.I., Chen, H.Y., et al.: Some results on the truncated multivariate t distribution. J. Stat. Plan. Inference 142(1), 25–40 (2012)

  • Kan, R., Robotti, C.: On moments of folded and truncated multivariate normal distributions. J. Comput. Graph. Stat. 26(4), 930–934 (2017)

  • Lachos, V.H., Matos, L.A., Barbosa, T.S., et al.: Influence diagnostics in spatial models with censored response. Environmetrics 28(7), e2464 (2017)

  • Lachos, V.H., Matos, L.A., Castro, L.M., et al.: Flexible longitudinal linear mixed models for multiple censored responses data. Stat. Med. 38(6), 1074–1102 (2019)

  • Martino, L., Read, J., Luengo, D.: Independent doubly adaptive rejection Metropolis sampling within Gibbs sampling. IEEE Trans. Signal Process. 63(12), 3123–3138 (2015)

  • Martino, L., Yang, H., Luengo, D., et al.: A fast universal self-tuned sampler within Gibbs sampling. Digit. Signal Process. 47, 68–83 (2015)

  • Matos, L.A., Prates, M.O., Chen, M.H., et al.: Likelihood-based inference for mixed-effects models with censored response using the multivariate-t distribution. Stat. Sin. 23(3), 1323–1345 (2013)

  • Matos, L.A., Castro, L.M., Lachos, V.H.: Censored mixed-effects models for irregularly observed repeated measures with applications to HIV viral loads. TEST 25(4), 627–653 (2016)

  • Mattos, T.B., Lachos, V.H., Castro, L.M., et al.: Extending multivariate Student’s-t semiparametric mixed models for longitudinal data with censored responses and heavy tails. Stat. Med. 41(19), 3696–3719 (2022)

  • Meyer, R., Cai, B., Perron, F.: Adaptive rejection Metropolis sampling using Lagrange interpolation polynomials of degree 2. Comput. Stat. Data Anal. 52(7), 3408–3423 (2008)

  • Morán-Vásquez, R.A., Ferrari, S.L.: New results on truncated elliptical distributions. Commun. Math. Stat. 9, 299–313 (2021)

  • Muirhead, R.J.: Aspects of Multivariate Statistical Theory, vol. 197. Wiley, New York (2009)

  • Nash, J.C., Varadhan, R., Grothendieck, G.: optimx: Expanded replacement and extension of the ‘optim’ function. R package version 2020-4.2 (2020)

  • Neal, R.M.: Slice sampling. Ann. Stat. 31(3), 705–767 (2003)

  • Olivari, R.C., Zhong, K., Garay, A.M., et al.: ARpLMEC: Censored mixed-effects models with different correlation structures. R package version 2.4.1 (2022)

  • Ordoñez, J.A., Bandyopadhyay, D., Lachos, V.H., et al.: Geostatistical estimation and prediction for censored responses. Spatial Stat. 23, 109–123 (2018)

  • Pan, Y., Pan, J.: roptim: An R package for general purpose optimization with C++. R package version 0.1.6 (2022)

  • R Core Team: R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria (2021)

  • Robert, C.P.: Simulation of truncated normal variables. Stat. Comput. 5(2), 121–125 (1995)

  • Robert, C.P., Casella, G.: Introducing Monte Carlo Methods with R, vol. 18. Springer, New York (2010)

  • Swendsen, R.H., Wang, J.S.: Nonuniversal critical dynamics in Monte Carlo simulations. Phys. Rev. Lett. 58(2), 86 (1987)

  • Tallis, G.M.: The moment generating function of the truncated multi-normal distribution. J. Roy. Stat. Soc.: Ser. B (Methodol.) 23(1), 223–229 (1961)

  • Valeriano, K.A., Ordoñez, A., Galarza, C.E., et al.: RcppCensSpatial: Spatial estimation and prediction for censored/missing responses. R package version 0.3.0 (2022)

  • Wei, G.C., Tanner, M.A.: A Monte Carlo implementation of the EM algorithm and the poor man’s data augmentation algorithms. J. Am. Stat. Assoc. 85(411), 699–704 (1990)

  • Wilhelm, S.: tmvtnorm: Truncated multivariate normal and Student t distribution. R package version 1.5 (2022)

  • Zirschky, J.H., Harris, D.J.: Geostatistical analysis of hazardous waste site data. J. Environ. Eng. 112(4), 770–784 (1986)



The research of Katherine A. L. Valeriano was supported by CAPES. Larissa A. Matos acknowledges support from FAPESP-Brazil (Grant 2020/16713-0).

Author information

Authors and Affiliations


Corresponding author

Correspondence to Christian E. Galarza.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


Appendix A: Further results for some multivariate elliptical distributions

1.1 The multivariate Pearson VII distribution

A random vector \({\textbf{X}}\in {\mathbb {R}}^p\) is said to have a multivariate Pearson VII distribution with location parameter \(\varvec{\mu }\in {\mathbb {R}}^p\), positive-definite scale matrix \(\varvec{\Sigma }\in {\mathbb {R}}^{p\times p}\), and extra parameters \(m>p/2\) and \(\nu >0\), if its pdf is given by

$$\begin{aligned} f_{{\textbf{X}}}({\textbf{x}})= & {} \frac{\Gamma (m)}{(\pi \nu )^{p/2} \Gamma (m - p/2)} \,\,\vert \varvec{\Sigma }\vert ^{-1/2} \\{} & {} \times \left( 1 + \frac{1}{\nu }({\textbf{x}}- \varvec{\mu })^\top \varvec{\Sigma }^{-1} ({\textbf{x}}- \varvec{\mu }) \right) ^{-m}, \end{aligned}$$

with \({\textbf{x}}\in {\mathbb {R}}^p\). The random vector \({\textbf{X}}\) can also be represented as a scale mixture of normal (SMN) distributions, i.e., \({\textbf{X}}= \varvec{\mu }+ U^{-1/2}{\textbf{Z}}\), where \({\textbf{Z}}\) has a p-variate normal distribution with mean \({{\textbf {0}}}\in {\mathbb {R}}^p\) and variance-covariance matrix \(\varvec{\Sigma }\in {\mathbb {R}}^{p\times p}\). Here, U follows a Gamma distribution with shape parameter \(m - p/2\) and rate parameter \(\nu /2\), independent of \({\textbf{Z}}\). This implies that \({\textbf{X}}\mid (U = u) \sim N_p(\varvec{\mu }, u^{-1}\varvec{\Sigma })\) and \(U\sim Gamma(m-p/2, \nu /2)\).
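As a quick numerical illustration (not part of the paper; the parameter values are arbitrary), the SMN representation can be used to draw Pearson VII variates and check the covariance formula \(\textrm{Cov}({\textbf{X}}) = \nu \varvec{\Sigma }/(2m-p-2)\) stated next:

```python
import numpy as np

rng = np.random.default_rng(0)
p, m, nu = 2, 4.0, 3.0
Sigma = np.array([[1.0, 0.2], [0.2, 1.0]])
L = np.linalg.cholesky(Sigma)

n = 200_000
# U ~ Gamma(shape = m - p/2, rate = nu/2); numpy parameterizes by scale = 1/rate
U = rng.gamma(shape=m - p / 2, scale=2.0 / nu, size=n)
Z = rng.standard_normal((n, p)) @ L.T   # Z ~ N_p(0, Sigma)
X = Z / np.sqrt(U)[:, None]             # mu = 0 here

print(np.cov(X.T))   # approx nu * Sigma / (2m - p - 2) = 0.75 * Sigma
```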

Therefore, the mean and the variance-covariance matrix of \({\textbf{X}}\) are

$$\begin{aligned} {\mathbb {E}}({\textbf{X}})&= {\mathbb {E}}({\mathbb {E}}({\textbf{X}}\mid U)) = \varvec{\mu }, \quad m> \frac{p + 1}{2},\\ \textrm{Cov}({\textbf{X}})&= \textrm{Cov}({\mathbb {E}}({\textbf{X}}\mid U)) + {\mathbb {E}}(\textrm{Cov}({\textbf{X}}\mid U)) \\&= {\mathbb {E}}(U^{-1})\varvec{\Sigma }= \frac{\nu \varvec{\Sigma }}{2m - p - 2}, \quad m > \frac{p + 2}{2}. \end{aligned}$$

1.1.1 Marginal and conditional distribution

Now suppose that the vector \({\textbf{X}}\) is partitioned into two random vectors \({\textbf{X}}_1\in {\mathbb {R}}^{p_1}\) and \({\textbf{X}}_2\in {\mathbb {R}}^{p_2}\), such that \(p=p_1+p_2\), and consider the partition of \(\varvec{\mu }\) and \(\varvec{\Sigma }\) used in Proposition 1, i.e.,

$$\begin{aligned} {\textbf{X}}= \left( \begin{array}{c} {\textbf{X}}_1 \\ {\textbf{X}}_2 \end{array}\right) , \, \varvec{\mu }= \left( \begin{array}{c} \varvec{\mu }_1 \\ \varvec{\mu }_2 \end{array}\right) \,\, \text{ and } \,\, \varvec{\Sigma }= \left( \begin{array}{cc} \varvec{\Sigma }_{11} &{} \varvec{\Sigma }_{12} \\ \varvec{\Sigma }_{21} &{} \varvec{\Sigma }_{22} \end{array}\right) . \end{aligned}$$

First, notice that \(({\textbf{X}}-\varvec{\mu })^\top \varvec{\Sigma }^{-1}({\textbf{X}}-\varvec{\mu }) = \delta _1({\textbf{X}}_1) + \delta _{2.1}({\textbf{X}}_{2.1})\), where \(\delta _1({\textbf{X}}_1) = ({\textbf{X}}_1-\varvec{\mu }_1)^\top \varvec{\Sigma }_{11}^{-1}({\textbf{X}}_1-\varvec{\mu }_1)\), \(\delta _{2.1}({\textbf{X}}_{2.1}) = ({\textbf{X}}_2 - \varvec{\mu }_{2.1})^\top \varvec{\Sigma }_{2.1}^{-1}({\textbf{X}}_2 - \varvec{\mu }_{2.1})\), \(\varvec{\mu }_{2.1} = \varvec{\mu }_2 + \varvec{\Sigma }_{21}\varvec{\Sigma }_{11}^{-1}({\textbf{X}}_1 - \varvec{\mu }_1)\) and \(\varvec{\Sigma }_{2.1}=\varvec{\Sigma }_{22} - \varvec{\Sigma }_{21}\varvec{\Sigma }_{11}^{-1}\varvec{\Sigma }_{12}\). By the results above, the marginal pdf of \({\textbf{X}}_1\) is given by

$$\begin{aligned} f_{{\textbf{X}}_1}({\textbf{x}}_1)&= \int _{{\mathbb {R}}^{p_2}} f_{{\textbf{X}}}({\textbf{x}}) \textrm{d}{\textbf{x}}_2 \\&= \frac{\Gamma (m)}{(\pi \nu )^{p/2} \Gamma (m-p/2)} \,\vert \varvec{\Sigma }\vert ^{-1/2} \\&\quad \int _{{\mathbb {R}}^{p_2}} \left( 1 + \frac{\delta _1( {\textbf{x}}_1)}{\nu } + \frac{\delta _{2.1}( {\textbf{x}}_{2.1})}{\nu } \right) ^{-m} \textrm{d}{\textbf{x}}_2 \\&= \frac{\Gamma (m)}{(\pi \nu )^{p/2}\Gamma (m-p/2)} \,\vert \varvec{\Sigma }\vert ^{-1/2} \left( 1 + \frac{\delta _1( {\textbf{x}}_1)}{\nu }\right) ^{-m} \\&\quad \int _{{\mathbb {R}}^{p_2}} \left( 1 + \frac{ \delta _{2.1}( {\textbf{x}}_{2.1}) }{\nu + \delta _1( {\textbf{x}}_1)}\right) ^{-m} \textrm{d}{\textbf{x}}_2 \\&= \frac{\Gamma (m - p_2/2)}{(\pi \nu )^{p_1/2} \Gamma (m - p/2)} \,\vert \varvec{\Sigma }_{11} \vert ^{-1/2} \\&\quad \left( 1 + \frac{\delta _1( {\textbf{x}}_1)}{\nu }\right) ^{-(m-p_2/2)}, \quad {\textbf{x}}_1\in {\mathbb {R}}^{p_1}. \end{aligned}$$

Hence, the marginal distribution of \({\textbf{X}}_1\) is also Pearson VII with parameters \(\varvec{\mu }_1\), \(\varvec{\Sigma }_{11}\), \(m-p_2/2\) and \(\nu \), i.e., \({\textbf{X}}_1\sim \text{ PVII}_{p_1}(\varvec{\mu }_1, \varvec{\Sigma }_{11}, m-p_2/2, \nu )\). On the other hand, the conditional pdf of \({\textbf{X}}_2 \mid ({\textbf{X}}_1 = {\textbf{x}}_1)\) is given by

$$\begin{aligned} f_{{\textbf{X}}_2 \vert {\textbf{X}}_1}( {\textbf{x}}_2 \mid {\textbf{x}}_1)&= \frac{f_{{\textbf{X}}}( {\textbf{x}}_1, {\textbf{x}}_2)}{f_{{\textbf{X}}_1}( {\textbf{x}}_1)} \\&= \frac{\Gamma (m) \,\vert \varvec{\Sigma }_{2.1} \vert ^{-1/2}}{(\pi (\nu +\delta _1( {\textbf{x}}_1)))^{p_2/2} \Gamma (m-p_2/2)} \\&\quad \left( 1 + \frac{\delta _{2.1}( {\textbf{x}}_{2.1})}{\nu + \delta _1( {\textbf{x}}_1)}\right) ^{-m}, \end{aligned}$$

\( {\textbf{x}}_1\in {\mathbb {R}}^{p_1}, \, {\textbf{x}}_2\in {\mathbb {R}}^{p_2}\). Therefore, the conditional distribution is also Pearson VII, with parameters \(\varvec{\mu }_{2.1}\), \(\varvec{\Sigma }_{2.1}\), m, and \(\nu + \delta _1( {\textbf{x}}_1)\), i.e., \({\textbf{X}}_2 \mid ({\textbf{X}}_1= {\textbf{x}}_1) \sim \text{ PVII}_{p_2}(\varvec{\mu }_{2.1}, \varvec{\Sigma }_{2.1}, m, \nu +\delta _1( {\textbf{x}}_1))\).
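The marginal closure property above can be verified by direct numerical integration. The following sketch (illustrative only, with \(p_1 = p_2 = 1\) and arbitrary parameter values) integrates the bivariate Pearson VII density over \(x_2\) and compares the result with the \(\text{PVII}_1(\mu _1, \Sigma _{11}, m - 1/2, \nu )\) density:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gammaln

m, nu = 2.5, 3.0
s11, s12, s22 = 1.0, 0.2, 1.0
det = s11 * s22 - s12**2
o11, o12, o22 = s22 / det, -s12 / det, s11 / det   # entries of Sigma^{-1}

def pvii1(x, s2, mm, vv):
    # univariate Pearson VII pdf with location 0
    c = np.exp(gammaln(mm) - gammaln(mm - 0.5)) / np.sqrt(np.pi * vv * s2)
    return c * (1 + x * x / (vv * s2)) ** (-mm)

def joint(x1, x2):
    # bivariate Pearson VII pdf with location 0 (p = 2)
    q = o11 * x1**2 + 2 * o12 * x1 * x2 + o22 * x2**2
    c = np.exp(gammaln(m) - gammaln(m - 1)) / (np.pi * nu * np.sqrt(det))
    return c * (1 + q / nu) ** (-m)

x1 = 0.7
marg = quad(lambda x2: joint(x1, x2), -np.inf, np.inf)[0]
print(marg, pvii1(x1, s11, m - 0.5, nu))   # the two values should agree
```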

1.1.2 Existence of its truncated moments

Let \({\textbf{X}}\sim \text{ PVII}_p(\varvec{\mu }, \varvec{\Sigma }, m, \nu ), m> p/2, \nu > 0\), and let \({A} \subseteq {\mathbb {R}}^p\) be a truncation region of interest. Then, the expectation and the variance-covariance matrix of \({\textbf{X}}\) given \({\textbf{X}}\in {A}\) exist in the following cases:

  • If \({A} = {\mathbb {R}}^p\) or A is unbounded (at most one finite limit in each dimension), then the expectation exists for \(m > (p+1)/2\) and the covariance matrix exists for \(m > (p+2)/2\), as in the untruncated case.

  • If A is bounded (all truncation points are finite), then \({\mathbb {E}}({\textbf{X}}\,\vert \, {\textbf{X}}\in {A})\) and \(\textrm{Cov}({\textbf{X}}\,\vert \, {\textbf{X}}\in {A})\) exist for all \(m > p/2\), since the truncated distribution has bounded support.

  • If \({\textbf{X}}\) can be partitioned into two random vectors \({\textbf{X}}_1\in {\mathbb {R}}^{p_1}\) and \({\textbf{X}}_2\in {\mathbb {R}}^{p_2}\) such that the truncation region associated with \({\textbf{X}}_1\) (say, \({A}_1\)) is bounded, then by the previous item \({\mathbb {E}}({\textbf{X}}_1 \,\vert \, {\textbf{X}}\in {A})\) and \(\textrm{Cov}({\textbf{X}}_1 \,\vert \, {\textbf{X}}\in {A})\) exist for all \(m > p/2\) and \(\nu > 0\). On the other hand, it follows from Fubini’s theorem that \({\mathbb {E}}({\textbf{X}}_2 \,\vert \, {\textbf{X}}\in {A})\) exists if and only if \({\mathbb {E}}({\textbf{X}}_2 \,\vert \, {\textbf{X}}_1)\) exists, which occurs for all \(m > (p_2 + 1)/2\). Note that the existence of \({\mathbb {E}}({\textbf{X}}_2 \,\vert \, {\textbf{X}}_1)\) also implies the existence of \(\textrm{Cov}({\textbf{X}}_1,{\textbf{X}}_2 \,\vert \, {\textbf{X}}\in {A})\). Additionally, \(\textrm{Cov}({\textbf{X}}_2 \,\vert \, {\textbf{X}}\in {A})\) exists if and only if \(\textrm{Cov}({\textbf{X}}_2 \,\vert \, {\textbf{X}}_1)\) exists, which holds for all \(m > (p_2 + 2)/2\).

Remark: Equivalently, \({\mathbb {E}}({\textbf{X}}\,\vert \, {\textbf{X}}\in {A})\) exists for every admissible \(m > p/2\) whenever at least one dimension has both truncation limits finite. Moreover, if at least two dimensions have finite limits, then \(\textrm{Cov}({\textbf{X}}\,\vert \, {\textbf{X}}\in {A})\) exists for all \(m > p/2\).

In order to illustrate the result, consider \({\textbf{X}}\sim \text{ PVII}_2 (\varvec{\mu }, \varvec{\Sigma }, m, \nu )\), with \(\nu = 1\), \(\varvec{\mu }= {{\textbf {0}}}\), and \(\varvec{\Sigma }= \left( \begin{array}{cc} 1 &{} 0.20 \\ 0.20 &{} 1 \end{array}\right) \). We are interested in the behavior of the elements of \({\mathbb {E}}({\textbf{X}}\,\vert \, {\textbf{X}}\in {A})\) and \(\textrm{Cov}({\textbf{X}}\,\vert \, {\textbf{X}}\in {A})\) for \({A} = \{ {\textbf{x}}\in {\mathbb {R}}^2: {{\textbf {a}}}< {\textbf{x}}< {{\textbf {b}}}\}\) in the following three scenarios:

  (a) \(m = 2\), \({{\textbf {b}}}= (\infty , \infty )^\top \);

  (b) \(m = 1.40\), \({{\textbf {b}}} = (0.80, \infty )^\top \);

  (c) \(m = 2\), \({{\textbf {b}}} = (0.80, \infty )^\top \);

and lower limit \({{\textbf {a}}} = (-0.80, -0.60)^\top \) for all scenarios. Figure 5 displays the trace evolution of the MC estimates for the mean and variance-covariance elements \(\mu _1\), \(\mu _2\), \(\sigma _{11}\), \(\sigma _{12}\) and \(\sigma _{22}\) for each case. The red dashed line represents the value for the parameter estimated via MC with \(10^6\) samples, and we refer to this value as the “true value”.

Fig. 5

Trace plots of the evolution of the MC estimates for the mean and variance-covariance elements of \({\textbf{X}}\mid ({\textbf{X}}\in A)\) under scenarios a), b) and c). The red dashed line represents the true estimated value computed using numerical methods

For the first case, \(m = 2 > (p+1)/2 = 3/2\) but \(m \le (p+2)/2 = 2\), so only the first moment exists. Accordingly, the first row of Fig. 5 shows that only the estimates of \(\mu _1\) and \(\mu _2\) converge to their true values as the sample size increases. In the second scenario (middle row), all elements converge except \(\sigma _{22}\). This happens because the truncation limits for the first variable are finite and \(m > (p_2 + 1)/2 = 1\), but \(m \le (p_2 + 2)/2 = 3/2\). In the last case, scenario (c), convergence is attained for all parameters, since the condition \(m > (p_2 + 2)/2 = 3/2\) holds. Note that even with 2000 MC simulations there is still considerable variability in the chains.
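A check along the lines of scenario (c) can be sketched as follows (an illustrative rejection-sampling scheme via the SMN representation, not the paper's code; since \(m = 2 > (p_2+2)/2 = 3/2\), all the moments below exist):

```python
import numpy as np

rng = np.random.default_rng(1)
p, m, nu = 2, 2.0, 1.0
Sigma = np.array([[1.0, 0.2], [0.2, 1.0]])
L = np.linalg.cholesky(Sigma)
a = np.array([-0.8, -0.6])
b = np.array([0.8, np.inf])

n = 400_000
# Pearson VII via SMN: U ~ Gamma(shape = m - p/2, rate = nu/2)
U = rng.gamma(shape=m - p / 2, scale=2.0 / nu, size=n)
X = (rng.standard_normal((n, p)) @ L.T) / np.sqrt(U)[:, None]
keep = X[((X > a) & (X < b)).all(axis=1)]   # retain draws inside A

print(keep.mean(axis=0))   # MC estimate of E(X | X in A)
print(np.cov(keep.T))      # MC estimate of Cov(X | X in A)
```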

1.2 The multivariate slash distribution

A random vector \({\textbf{X}}\in {\mathbb {R}}^p\) has a multivariate slash distribution with location parameter \(\varvec{\mu }\in {\mathbb {R}}^p\), positive-definite scale matrix \(\varvec{\Sigma }\in {\mathbb {R}}^{p\times p}\), and \(\nu >0\) degrees of freedom, denoted by \({\textbf{X}}\sim \text{ SL}_p (\varvec{\mu }, \varvec{\Sigma }, \nu )\), if its pdf is given by

$$\begin{aligned} f_{\textbf{X}}({\textbf{x}}) = \nu \int _0^1 u^{\nu -1} \phi _p\left( {\textbf{x}}; \varvec{\mu }, u^{-1}\varvec{\Sigma }\right) \textrm{d}u, \quad {\textbf{x}}\in {\mathbb {R}}^p, \end{aligned}$$

where \(\phi _p({\textbf{x}}; \varvec{\mu }, \varvec{\Sigma })\) is the pdf of a p-variate normal distribution with mean \(\varvec{\mu }\) and covariance matrix \(\varvec{\Sigma }\). We denote its pdf by \(SL_p({\textbf{x}}; \varvec{\mu }, \varvec{\Sigma }, \nu )\), which can be evaluated by numerical methods, e.g., using the R function integrate. The random vector \({\textbf{X}}\) also admits an SMN representation, that is, \({\textbf{X}}= \varvec{\mu }+ U^{-1/2}{\textbf{Z}}\), where U and \({\textbf{Z}}\) are independent with \(\textrm{Beta}(\nu , 1)\) and \(N_p({{\textbf {0}}},\varvec{\Sigma })\) distributions, respectively. Therefore, the mean and variance-covariance matrix of the random vector \({\textbf{X}}\) are given by

$$\begin{aligned} {\mathbb {E}}({\textbf{X}})&= {\mathbb {E}}\left( {\mathbb {E}}({\textbf{X}}\mid U) \right) = {\mathbb {E}}(\varvec{\mu }) = \varvec{\mu }.\\ \textrm{Cov}({\textbf{X}})&= \textrm{Cov}({\mathbb {E}}({\textbf{X}}\mid U)) + {\mathbb {E}}(\textrm{Cov}({\textbf{X}}\mid U))\\&= {\mathbb {E}}(U^{-1})\varvec{\Sigma }= \frac{\nu }{\nu -1}\varvec{\Sigma }, \quad \nu >1. \end{aligned}$$
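A short simulation (illustrative only; the parameter values are arbitrary) confirms the variance inflation factor \(\nu /(\nu - 1)\) via the SMN representation:

```python
import numpy as np

rng = np.random.default_rng(2)
p, nu = 2, 3.0
Sigma = np.array([[1.0, 0.7], [0.7, 1.0]])
L = np.linalg.cholesky(Sigma)

n = 200_000
U = rng.beta(nu, 1.0, size=n)                              # mixing variable
X = (rng.standard_normal((n, p)) @ L.T) / np.sqrt(U)[:, None]  # mu = 0

print(np.cov(X.T))   # approx nu/(nu - 1) * Sigma = 1.5 * Sigma
```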

1.2.1 Marginal and conditional distribution

Considering a partition in the same manner as used for the Pearson VII distribution, the marginal pdf of \({\textbf{X}}_1\) is given by

$$\begin{aligned} f_{{\textbf{X}}_1}({\textbf{x}}_1)&= \int _{{\mathbb {R}}^{p_2}} f_{{\textbf{X}}}({\textbf{x}}) d{\textbf{x}}_2 \\&= \int _{{\mathbb {R}}^{p_2}} \nu \int _0^1 u^{\nu - 1} \phi _p\left( {\textbf{x}}; \varvec{\mu }, u^{-1}\varvec{\Sigma }\right) \textrm{d}u \, d{\textbf{x}}_2 \\&= \nu \int _{{\mathbb {R}}^{p_2}} \int _0^1 u^{\nu -1} \phi _{p_1}\left( {\textbf{x}}_1; \varvec{\mu }_{1}, u^{-1}\varvec{\Sigma }_{11}\right) \\&\quad \times \phi _{p_2}\left( {\textbf{x}}_2; \varvec{\mu }_{2.1}, u^{-1}\varvec{\Sigma }_{2.1}\right) \textrm{d}u \,\textrm{d}{\textbf{x}}_2 \\&= \nu \int _0^1 u^{\nu -1} \phi _{p_1}\left( {\textbf{x}}_1; \varvec{\mu }_{1}, u^{-1}\varvec{\Sigma }_{11}\right) \\&\quad \times \int _{{\mathbb {R}}^{p_2}} \phi _{p_2}\left( {\textbf{x}}_2; \varvec{\mu }_{2.1}, u^{-1}\varvec{\Sigma }_{2.1}\right) \textrm{d}{\textbf{x}}_2\,\textrm{d}u \\&= \nu \int _0^1 u^{\nu -1} \phi _{p_1}\left( {\textbf{x}}_1; \varvec{\mu }_{1}, u^{-1}\varvec{\Sigma }_{11}\right) \textrm{d}u. \end{aligned}$$

Thus, \({\textbf{X}}_1\in {\mathbb {R}}^{p_1}\) follows a slash distribution with location parameter \(\varvec{\mu }_1\in {\mathbb {R}}^{p_1}\), scale matrix \(\varvec{\Sigma }_{11}\in {\mathbb {R}}^{p_1\times p_1}\), and \(\nu >0\) degrees of freedom. On the other hand, the conditional pdf of \({\textbf{X}}_2\mid ({\textbf{X}}_1={\textbf{x}}_1)\) is given by

$$\begin{aligned}&f_{{\textbf{X}}_2\mid {\textbf{X}}_1}({\textbf{x}}_2\mid {\textbf{x}}_1)\\&\quad = \frac{f_{{\textbf{X}}}({\textbf{x}}_1,{\textbf{x}}_2)}{f_{{\textbf{X}}_1}({\textbf{x}}_1)}\\&\quad = \frac{\nu }{f_{{\textbf{X}}_1}({\textbf{x}}_1)}\int _0^1 u^{\nu -1} \phi _p\left( {\textbf{x}}; \varvec{\mu },u^{-1}\varvec{\Sigma }\right) \textrm{d}u \\&\quad = \frac{\nu }{f_{{\textbf{X}}_1}({\textbf{x}}_1)}\int _0^1 u^{\nu -1} \phi _{p_1}\left( {\textbf{x}}_1; \varvec{\mu }_1,u^{-1}\varvec{\Sigma }_{11}\right) \\&\qquad \times \phi _{p_2}\left( {\textbf{x}}_2; \varvec{\mu }_{2.1},u^{-1}\varvec{\Sigma }_{2.1}\right) \textrm{d}u. \end{aligned}$$

Hence, the slash distribution is not closed under conditioning. Nevertheless, the pdf of \({\textbf{X}}_2 \,\vert \, ({\textbf{X}}_1={\textbf{x}}_1)\) belongs to the elliptical family of distributions with dgf \(g(t) = \int _0^1 u^{\nu +p/2-1} \exp \{-u(t + \delta _1({\textbf{x}}_1))/2\} \, \textrm{d}u\), i.e., \({\textbf{X}}_2 \,\vert \, ({\textbf{X}}_1={\textbf{x}}_1) \sim \text{ E }\ell (\varvec{\mu }_{2.1}, \varvec{\Sigma }_{2.1}, \nu ; g)\). To determine the mean of the random vector \({\textbf{X}}_2 \,\vert \, ({\textbf{X}}_1={\textbf{x}}_1)\), we compute the conditional expected value of the ith element of \({\textbf{X}}_2\) as follows

$$\begin{aligned}&{\mathbb {E}}({\textbf{X}}_{2i}\mid {\textbf{X}}_1={\textbf{x}}_1)\\&\quad = \int _{{\mathbb {R}}^{p_2}} x_{2i} f_{{\textbf{X}}_2\mid {\textbf{X}}_1}({\textbf{x}}_2\mid {\textbf{x}}_1) \textrm{d}{\textbf{x}}_2 \\&\quad = \frac{\nu }{f_{{\textbf{X}}_1}({\textbf{x}}_1)}\\&\quad \int _{{\mathbb {R}}^{p_2}} x_{2i} \int _0^1 u^{\nu -1} \phi _{p_1}\left( {\textbf{x}}_1; \varvec{\mu }_1,u^{-1}\varvec{\Sigma }_{11}\right) \\&\qquad \times \phi _{p_2}\left( {\textbf{x}}_2; \varvec{\mu }_{2.1},u^{-1}\varvec{\Sigma }_{2.1}\right) \textrm{d}u\, \textrm{d}{\textbf{x}}_2 \\&\quad = \frac{\nu }{f_{{\textbf{X}}_1}({\textbf{x}}_1)} \int _0^1 u^{\nu -1} \phi _{p_1}\left( {\textbf{x}}_1; \varvec{\mu }_1,u^{-1}\varvec{\Sigma }_{11}\right) \\&\qquad \times \int _{{\mathbb {R}}^{p_2}} x_{2i} \phi _{p_2}\left( {\textbf{x}}_2; \varvec{\mu }_{2.1},u^{-1}\varvec{\Sigma }_{2.1}\right) d{\textbf{x}}_2 \, \textrm{d}u \\&\quad = \frac{\mu _{2.1}^{(i)} \nu }{f_{{\textbf{X}}_1}({\textbf{x}}_1)} \int _0^1 u^{\nu -1} \phi _{p_1}\left( {\textbf{x}}_1; \varvec{\mu }_1,u^{-1}\varvec{\Sigma }_{11}\right) \textrm{d}u \\&\quad = \mu _{2.1}^{(i)}, \quad \forall i, \nu >0, \end{aligned}$$

where \(\mu _{2.1}^{(i)}\) represents the ith element of the vector \(\varvec{\mu }_{2.1}\), and \({\mathbb {E}}({\textbf{X}}_2 \,\vert \, {\textbf{X}}_1={\textbf{x}}_1) = \varvec{\mu }_{2.1}\). Now, to compute the elements of the variance-covariance matrix of the conditional random vector, we first determine \({\mathbb {E}}(X_{2i}X_{2j} \,\vert \, {\textbf{X}}_1={\textbf{x}}_1)\) for all \(i,j=1,\ldots ,p_2\), as

$$\begin{aligned}&{\mathbb {E}}(X_{2i}X_{2j}\mid {\textbf{X}}_1={\textbf{x}}_1)\\&\quad = \int _{{\mathbb {R}}^{p_2}} x_{2i}x_{2j} f_{{\textbf{X}}_2\mid {\textbf{X}}_1}({\textbf{x}}_2\mid {\textbf{x}}_1) d{\textbf{x}}_2 \\&\quad = \frac{\nu }{f_{{\textbf{X}}_1}({\textbf{x}}_1)} \int _{{\mathbb {R}}^{p_2}} x_{2i} x_{2j} \int _0^1 u^{\nu -1} \phi _{p_1}\left( {\textbf{x}}_1; \varvec{\mu }_1,u^{-1}\varvec{\Sigma }_{11}\right) \\&\qquad \times \phi _{p_2}\left( {\textbf{x}}_2; \varvec{\mu }_{2.1},u^{-1}\varvec{\Sigma }_{2.1}\right) \textrm{d}u \,\textrm{d}{\textbf{x}}_2 \\&\quad = \frac{\nu }{f_{{\textbf{X}}_1}({\textbf{x}}_1)} \int _0^1 u^{\nu -1} \phi _{p_1}\left( {\textbf{x}}_1; \varvec{\mu }_1,u^{-1}\varvec{\Sigma }_{11}\right) \\&\qquad \times \int _{{\mathbb {R}}^{p_2}} x_{2i} x_{2j} \phi _{p_2}\left( {\textbf{x}}_2; \varvec{\mu }_{2.1},u^{-1}\varvec{\Sigma }_{2.1}\right) \textrm{d}{\textbf{x}}_2 \,\textrm{d}u \\&\quad = \frac{\nu }{f_{{\textbf{X}}_1}({\textbf{x}}_1)} \int _0^1 u^{\nu -1} \phi _{p_1}\left( {\textbf{x}}_1; \varvec{\mu }_1,u^{-1}\varvec{\Sigma }_{11}\right) \\&\qquad \times \left( u^{-1}\sigma _{2.1}^{(ij)} + \mu _{2.1}^{(i)}\mu _{2.1}^{(j)} \right) \textrm{d}u \\&\quad = \frac{\sigma _{2.1}^{(ij)} \nu }{f_{{\textbf{X}}_1}({\textbf{x}}_1)} \int _0^1 u^{\nu -2} \phi _{p_1}\left( {\textbf{x}}_1; \varvec{\mu }_1,u^{-1}\varvec{\Sigma }_{11}\right) \textrm{d}u + \mu _{2.1}^{(i)}\mu _{2.1}^{(j)} \\&\quad = \frac{\nu }{\nu -1}\left( \frac{{SL}_{p_1}({\textbf{x}}_1; \varvec{\mu }_1,\varvec{\Sigma }_{11}, \nu -1)}{{SL}_{p_1}({\textbf{x}}_1; \varvec{\mu }_1,\varvec{\Sigma }_{11}, \nu )}\right) \sigma _{2.1}^{(ij)} + \mu _{2.1}^{(i)}\mu _{2.1}^{(j)}, \quad \nu >1, \end{aligned}$$

where \(\sigma _{2.1}^{(ij)}\) is the (ij)th element of the matrix \(\varvec{\Sigma }_{2.1}\). From these results, we have that

$$\begin{aligned}&\textrm{Cov}(X_{2i}, X_{2j}\mid {\textbf{X}}_1={\textbf{x}}_1)\\&\quad = \frac{\nu }{\nu -1}\left( \frac{{SL}_{p_1}({\textbf{x}}_1; \varvec{\mu }_1,\varvec{\Sigma }_{11}, \nu -1)}{{SL}_{p_1}({\textbf{x}}_1; \varvec{\mu }_1,\varvec{\Sigma }_{11}, \nu )}\right) \sigma _{2.1}^{(ij)}, \end{aligned}$$

\(\nu >1\). Therefore, the covariance matrix of the random vector \({\textbf{X}}_2\mid ({\textbf{X}}_1={\textbf{x}}_1)\) will be given by

$$\begin{aligned}&\textrm{Cov}({\textbf{X}}_2\mid {\textbf{X}}_1={\textbf{x}}_1)\\&\quad = \frac{\nu }{\nu -1}\left( \frac{{SL}_{p_1}({\textbf{x}}_1; \varvec{\mu }_1,\varvec{\Sigma }_{11}, \nu -1)}{{SL}_{p_1}({\textbf{x}}_1; \varvec{\mu }_1,\varvec{\Sigma }_{11}, \nu )}\right) \varvec{\Sigma }_{2.1}. \end{aligned}$$
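This conditional covariance formula can be checked numerically in the simplest case \(p_1 = p_2 = 1\). The sketch below (illustrative only; parameter values are arbitrary) evaluates both sides by quadrature, with the slash densities computed from their mixture representation:

```python
import numpy as np
from scipy.integrate import quad

mu1, mu2 = 0.0, 0.0
s11, s12, s22 = 1.0, 0.7, 1.0
nu, x1 = 3.0, 0.5
mu21 = mu2 + s12 / s11 * (x1 - mu1)   # conditional location mu_{2.1}
s21 = s22 - s12**2 / s11              # conditional scale sigma_{2.1}

def sl1(x, s2, v):
    # univariate slash pdf: v * int_0^1 u^{v-1} phi(x; 0, s2/u) du
    f = lambda u: u**(v - 1) * np.sqrt(u / (2 * np.pi * s2)) \
        * np.exp(-u * x * x / (2 * s2))
    return v * quad(f, 0, 1)[0]

# Right-hand side: nu/(nu-1) * SL(x1; nu-1)/SL(x1; nu) * sigma_{2.1}
formula = nu / (nu - 1) * sl1(x1, s11, nu - 1) / sl1(x1, s11, nu) * s21

# Left-hand side, directly: E(X2^2 | x1) - mu_{2.1}^2 from the joint density
det = s11 * s22 - s12**2
o11, o12, o22 = s22 / det, -s12 / det, s11 / det   # entries of Sigma^{-1}

def joint(x2):
    q = o11 * x1**2 + 2 * o12 * x1 * x2 + o22 * x2**2
    g = lambda u: u**nu / (2 * np.pi * np.sqrt(det)) * np.exp(-u * q / 2)
    return nu * quad(g, 0, 1)[0]

m2 = quad(lambda x2: x2**2 * joint(x2), -np.inf, np.inf, limit=200)[0] \
     / sl1(x1, s11, nu)
direct = m2 - mu21**2
print(formula, direct)   # the two values should agree
```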
Table 3 Median of the CPU time (in seconds) based on 100 simulations

Appendix B: CPU time to compute moments from truncated distributions

A complementary study to Simulation study II (Sect. 4) examined the computational time required by our method to estimate the first two moments and the variance-covariance matrix of a p-variate random vector for different distributions in the truncated elliptical family, with \(p=50\) and 100. As in Simulation study II, we considered 10%, 20%, and 40% doubly truncated variables in each case.

Table 3 shows the median CPU time (in seconds) needed by the function mvtelliptical to compute the first two moments and the covariance matrix. We considered a TMVN distribution, a truncated contaminated normal with \(\nu =1/2\) and \(\rho =1/5\), a truncated Pearson VII with parameters \(m=55\) and \(\nu =3\), a truncated slash with \(\nu =2\) degrees of freedom, and a truncated power exponential distribution with kurtosis \(\beta =1/2\). In each case, our method was run with \(n=10^4\) and \(10^5\) samples and \(thinning=3\). Notice that the times needed by the algorithm for the TMVN, TMVT, and truncated Pearson VII distributions are similar and depend only on the number of truncated variables and the number of samples used in the approximation. Our method requires more time to compute moments of the truncated contaminated normal distribution because the algorithm relies on a numerical method to invert the dgf. It is also worth noting that there is no time difference between computing the moments of a truncated slash distribution with five or ten doubly truncated variables; this occurs because the function used to approximate the integral in the dgf is more time-consuming when \(\nu + p/2 - 1\) is not an integer. Finally, computing the moments of the truncated power exponential distribution required approximately the same time for random vectors of equal length regardless of the number of doubly truncated variables: in this case, the method samples the whole vector at once, so the number of truncated variables makes no difference.

Appendix C: The relliptical R package

The relliptical package offers random number generation from members of the truncated multivariate elliptical family of distributions, such as the truncated versions of the normal, Student-t, Pearson VII, slash, logistic, and Kotz-type distributions, among others. Other distributions can be specified through their density-generating function. The package also computes the first two moments (and the covariance matrix) for some particular distributions. The available functions are described next.

1.1 Random number generator

The package's main function for random number generation is rtelliptical, which implements the methods described in Sect. 3 of the main document; its signature is the following.

figure c

In this function, \(n \ge 1\) is the number of observations to be sampled, nu is the additional parameter or vector of parameters depending on the distribution of \({\textbf{X}}\), mu is the location parameter, Sigma is the positive-definite scale matrix, and lower and upper are the lower and upper truncation points, respectively. The truncated normal, Student-t, power exponential, Pearson VII, slash, and contaminated normal distributions can be specified through the argument dist.

The following examples illustrate the function rtelliptical, for drawing samples from truncated bivariate distributions with location parameter \(\varvec{\mu }=(0,0)^\top \), scale matrix elements \(\sigma _{11} = \sigma _{22} = 1\), and \(\sigma _{12} = \sigma _{21} = 0.70\), and truncation region \({A}=\{{\textbf{x}}: {{\textbf {a}}}< {\textbf{x}}< {{\textbf {b}}}\}\), with \({{\textbf {a}}}=(-2,-2)^\top \) and \({{\textbf {b}}}=(3,2)^\top \). The distributions considered are the predefined ones in the package.

  • Truncated normal

    figure d
  • Truncated Student-t with \(\nu =3\) degrees of freedom

    figure e
  • Truncated power exponential with \(\beta =2\)

    figure f
  • Truncated Pearson VII with parameters \(m=5/2\) and \(\nu =3\)

    figure g
  • Truncated slash with \(\nu = 1.5\) degrees of freedom

    figure h
  • Truncated contaminated normal with \(\nu = 0.70\) and \(\rho = 0.20\)

    figure i
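Since the code listings above are rendered as figures, the following is a hedged sketch of what the six calls presumably look like. The argument order and names follow the description given earlier (n, mu, Sigma, lower, upper, dist, nu), but the dist labels ("Normal", "t", "PE", "PVII", "Slash", "CN") are assumptions and may differ from the released relliptical API.

```r
# Hedged sketch of the calls behind figures d-i; the dist labels are
# assumptions, not necessarily the exact strings accepted by the package.
mu    <- c(0, 0)
Sigma <- matrix(c(1, 0.70, 0.70, 1), 2, 2)  # sigma11 = sigma22 = 1, sigma12 = 0.70
lower <- c(-2, -2)
upper <- c(3, 2)

if (requireNamespace("relliptical", quietly = TRUE)) {
  library(relliptical)
  X1 <- rtelliptical(1e4, mu, Sigma, lower, upper, dist = "Normal")
  X2 <- rtelliptical(1e4, mu, Sigma, lower, upper, dist = "t",     nu = 3)
  X3 <- rtelliptical(1e4, mu, Sigma, lower, upper, dist = "PE",    nu = 2)
  X4 <- rtelliptical(1e4, mu, Sigma, lower, upper, dist = "PVII",  nu = c(5/2, 3))
  X5 <- rtelliptical(1e4, mu, Sigma, lower, upper, dist = "Slash", nu = 1.5)
  X6 <- rtelliptical(1e4, mu, Sigma, lower, upper, dist = "CN",    nu = c(0.70, 0.20))
}
```

Each call returns an \(n \times 2\) matrix of draws from the corresponding truncated bivariate distribution.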
Fig. 6
figure 6

Scatterplot and marginal histograms for the \(n = 10^4\) observations sampled for some bivariate truncated elliptical distributions

Note that no additional arguments are passed for the TMVN distribution. In contrast, for the truncated contaminated normal and Pearson VII distributions, nu is a vector of length two, while for the remaining distributions this parameter is a non-negative scalar. An important remark is that closed-form expressions exist to compute \(\kappa _y = g^{-1}(y)\) for the normal, Student-t, power exponential, and Pearson VII distributions; however, the contaminated normal and slash distributions require numerical methods for this purpose. The value is calculated as the root of \(g(t) - y = 0, t \ge 0\): through the Newton–Raphson algorithm for the contaminated normal, and through Brent's method (Brent 2013), a combination of linear interpolation, inverse quadratic interpolation, and bisection, for the slash distribution.
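The numerical inversion can be illustrated with a self-contained toy example. Base R's uniroot implements Brent's method; here we invert the strictly decreasing dgf \(g(t) = e^{-t}\), whose exact inverse \(\kappa_y = -\log y\) lets us verify the result. This is an illustration of the technique only, not the package's internal routine.

```r
# Toy illustration of inverting a strictly decreasing dgf numerically.
# uniroot() implements Brent's method (linear interpolation, inverse
# quadratic interpolation, and bisection).
g <- function(t) exp(-t)                       # toy dgf with a known inverse
g_inv <- function(y, t_max = 50) {
  # root of g(t) - y = 0 on [0, t_max]; valid for y in (g(t_max), 1]
  uniroot(function(t) g(t) - y, lower = 0, upper = t_max, tol = 1e-12)$root
}
kappa <- g_inv(0.2)
abs(kappa - (-log(0.2)))  # agrees with the closed-form inverse
```

In the package, the same root-finding idea is applied to the contaminated normal and slash dgfs, for which no closed-form inverse is available.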

Fig. 7
figure 7

Sample autocorrelation plots of \(X_1\) and \(X_2\) sampled from the bivariate truncated elliptical distributions in Fig. 6

This function also allows generating random numbers from truncated elliptical distributions not covered by the dist argument, by supplying the dgf through either the expr or the gFun argument. The easiest way is to pass the dgf expression to expr as a character string. The notation used in expr must be understood by the Ryacas0 package (Andersen et al. 2020) and the R environment; for instance, for the dgf \(g(t) = e^{-t}\), the user must provide the corresponding character expression. In this case, the algorithm tries to derive a closed-form expression for the inverse of g(t); this is not always possible, in which case a warning message is returned. On the other hand, if the expression is too complex to pass through expr, the user may provide a custom R function via the gFun argument. By default, its inverse is approximated numerically; however, the user may also supply the inverse through the ginvFun argument to save computation time. When gFun is provided, the dist and expr arguments are ignored.

For example, to generate samples from the bivariate truncated logistic distribution with the same parameters as before, whose dgf is \(g(t)=e^{-t}/(1+e^{-t})^2, t\ge 0\), we can run the following code.

figure k
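As the listing is rendered as a figure, here is a hedged reconstruction of the logistic example using the gFun route; the expr route would instead pass the same formula as a character string understood by Ryacas0. Argument names follow the text and may differ slightly from the released API.

```r
# Hedged sketch of the truncated bivariate logistic example; the dgf is
# supplied as an R function via the gFun argument.
g_logis <- function(t) exp(-t) / (1 + exp(-t))^2   # logistic dgf, t >= 0

if (requireNamespace("relliptical", quietly = TRUE)) {
  X <- relliptical::rtelliptical(
    n = 1e4, mu = c(0, 0),
    Sigma = matrix(c(1, 0.70, 0.70, 1), 2, 2),
    lower = c(-2, -2), upper = c(3, 2),
    gFun = g_logis)
}
g_logis(0)  # = 1/4, the maximum of this dgf on t >= 0
```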

Another distribution belonging to the elliptical family is the Kotz-type distribution with parameters \(r>0, s>0\), and \(2N + p > 2\), whose dgf is \(g(t)=t^{N-1} e^{-r t^s}, t\ge 0\) (Fang et al. 2018). For this distribution, g(t) is not strictly decreasing for all parameter values; however, it is for \((2-p)/2 < N \le 1\). Hence, our proposal works for \(r > 0\), \(s > 0\), and \((2 - p)/2 < N \le 1\). For this kind of more complex dgf, it is advisable to pass it through the gFun argument as an R function (with the other parameters held at fixed values). In the following example, we draw samples from a bivariate Kotz-type distribution with the same settings as before and extra parameters \(r = 2, s = 1/4\), and \(N = 1/2\).

figure l
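A hedged sketch of that call follows; the Kotz-type parameters are fixed inside the dgf function, as the text advises, and the argument names are assumed from the earlier description.

```r
# Hedged sketch of the Kotz-type example with r = 2, s = 1/4, N = 1/2;
# for p = 2, (2 - p)/2 < N <= 1 holds, so this g is strictly decreasing.
g_kotz <- function(t, r = 2, s = 1/4, N = 1/2) t^(N - 1) * exp(-r * t^s)

if (requireNamespace("relliptical", quietly = TRUE)) {
  X <- relliptical::rtelliptical(
    n = 1e4, mu = c(0, 0),
    Sigma = matrix(c(1, 0.70, 0.70, 1), 2, 2),
    lower = c(-2, -2), upper = c(3, 2),
    gFun = g_kotz)
}
```

Since both factors \(t^{N-1}\) and \(e^{-r t^s}\) are decreasing for \(N \le 1\), their product is strictly decreasing on \(t > 0\), as required by the sampler.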

Figure 6 shows the scatterplots and marginal histograms of the \(n = 10^4\) observations sampled from each of the truncated bivariate distributions referred to above.

As mentioned by Robert and Casella (2010) and Ho et al. (2012), the slice sampling algorithm with Gibbs steps generates each sample conditioned on the previous values, resulting in a sequence of correlated draws, so it is essential to analyze the dependence induced by the proposed algorithm. Figure 7 displays the autocorrelation plots for each of the distributions; the autocorrelation drops quickly and becomes negligible at large lags, evidencing good mixing and fast convergence in these examples. If necessary, initial observations can be discarded through the burn-in argument. Finally, autocorrelation can be reduced further via the thinning argument: thinning consists of keeping only every kth point of the sample, which lowers the autocorrelation of the draws produced by the Gibbs sampler. Naturally, this value must be an integer greater than or equal to 1.
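The effect of thinning can be demonstrated on any autocorrelated chain. The following self-contained sketch uses a synthetic AR(1) sequence (not the package's sampler) whose lag-1 autocorrelation is \(\phi\); keeping every 3rd draw reduces it to roughly \(\phi^3\), which is the same mechanism the thinning argument applies to the Gibbs output.

```r
# Self-contained illustration of thinning on a synthetic AR(1) chain with
# lag-1 autocorrelation phi = 0.8; thinning by k reduces it to about phi^k.
set.seed(1)
n   <- 1e5
phi <- 0.8
x   <- numeric(n)
for (i in 2:n) x[i] <- phi * x[i - 1] + rnorm(1)

thin <- function(x, k) x[seq(1, length(x), by = k)]  # keep every k-th draw
acf1 <- function(x) cor(x[-length(x)], x[-1])        # lag-1 autocorrelation

acf1(x)           # close to phi = 0.8
acf1(thin(x, 3))  # close to phi^3 = 0.512
```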

1.2 Mean and variance-covariance matrix computation

Algorithm 2, for the distributions detailed in Subsection 4.1, is available through the function mvtelliptical, whose signature, together with its default values, is the following.

figure m

The arguments lower and upper are the lower and upper truncation points of length p, respectively, mu is the location parameter of length p, Sigma is the \(p\times p\) positive-definite scale matrix, and nu is the additional parameter or vector of parameters, depending on the dgf g. The argument dist indicates the distribution to be used. The arguments n, the burn-in argument, and thinning control the Monte Carlo approximation: n is the number of samples to be generated, the burn-in argument sets the number of initial samples to be discarded, and thinning is a factor for reducing autocorrelation between observations.
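For concreteness, here is a hedged sketch of a call to mvtelliptical for a doubly truncated bivariate normal with the parameter values used in the earlier examples. The argument names follow the text above, and the output component names (EY, VarY) are assumptions about the returned list, since the return structure is not described here.

```r
# Hedged sketch of a mvtelliptical call; output component names are
# assumptions, not documented parts of the released API.
lower <- c(-2, -2); upper <- c(3, 2)
mu    <- c(0, 0)
Sigma <- matrix(c(1, 0.70, 0.70, 1), 2, 2)

if (requireNamespace("relliptical", quietly = TRUE)) {
  out <- relliptical::mvtelliptical(
    lower = lower, upper = upper, mu = mu, Sigma = Sigma,
    dist = "Normal", n = 1e4, thinning = 3)
  out$EY    # approximated mean vector   (assumed component name)
  out$VarY  # approximated covariance    (assumed component name)
}
```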

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Valeriano, K.A.L., Galarza, C.E. & Matos, L.A. Moments and random number generation for the truncated elliptical family of distributions. Stat Comput 33, 32 (2023).



Keywords

  • Elliptical distributions
  • Slice sampling algorithm
  • Truncated distributions
  • Truncated moments