
Mixtures of multivariate contaminated normal regression models


Abstract

Mixtures of regression models (MRMs) are widely used to investigate the relationship between variables coming from several unknown latent homogeneous groups. Usually, the conditional distribution of the response in each mixture component is assumed to be (multivariate) normal (MN-MRM). To robustify the approach with respect to possible elliptical heavy-tailed departures from normality, due to the presence of mild outliers, the multivariate contaminated normal MRM is introduced here. In addition to the parameters of the MN-MRM, each mixture component has a parameter controlling the proportion of outliers and one specifying the degree of contamination with respect to the response variable(s). Crucially, these parameters do not have to be specified a priori, which adds flexibility to the approach. Furthermore, once the model is estimated and the observations are assigned to the groups, a finer intra-group classification into typical points and (mild) outliers can be obtained directly. Identifiability conditions are provided, an expectation-conditional maximization (ECM) algorithm is outlined for parameter estimation, and various implementation and operational issues are discussed. Properties of the estimators of the regression coefficients are evaluated through Monte Carlo experiments and compared with those of other procedures. The performance of this novel family of models is also illustrated on artificial and real data, with particular emphasis on applications in allometric studies.




Acknowledgements

The authors acknowledge the financial support from the grant “Finite mixture and latent variable models for causal inference and analysis of socio-economic data” (FIRB 2012-Futuro in Ricerca) funded by the Italian Government (RBFR12SHVV).

Author information

Correspondence to Antonio Punzo.

Appendix A: Updates in the first CM-step

The estimates of \(\pi _j, {\varvec{B}}_j, \varvec{\varSigma }_j\), and \(\alpha _j\), \(j=1,\ldots ,k\), at the first CM-step of the \(\left( r+1\right) \)th iteration of the ECM algorithm require the maximization of

$$\begin{aligned} Q\left( \varvec{\vartheta }_1|\varvec{\vartheta }^{\left( r\right) }\right) =Q_1\left( \varvec{\pi }|\varvec{\vartheta }^{\left( r\right) }\right) +Q_2\left( \varvec{\alpha }|\varvec{\vartheta }^{\left( r\right) }\right) +Q_3\left( {\varvec{B}},\varvec{\varSigma }|\varvec{\vartheta }^{\left( r\right) }\right) , \end{aligned}$$
(14)

where

$$\begin{aligned} Q_1\left( \varvec{\pi }|\varvec{\vartheta }^{\left( r\right) }\right)&=\sum _{i=1}^{n}\sum _{j=1}^{k}z_{ij}^{\left( r\right) }\ln \pi _j,\\ Q_2\left( \varvec{\alpha }|\varvec{\vartheta }^{\left( r\right) }\right)&=\sum _{i=1}^{n}\sum _{j=1}^{k}z_{ij}^{\left( r\right) }\left[ u_{ij}^{\left( r\right) }\ln \alpha _j+\left( 1-u_{ij}^{\left( r\right) }\right) \ln \left( 1-\alpha _j\right) \right] ,\\ Q_3\left( {\varvec{B}},\varvec{\varSigma }|\varvec{\vartheta }^{\left( r\right) }\right)&=-\frac{1}{2}\sum _{i=1}^n\sum _{j=1}^k\Biggl \{z_{ij}^{\left( r\right) }\ln \left| \varvec{\varSigma }_j\right| \\&\quad +z_{ij}^{\left( r\right) }\left( u_{ij}^{\left( r\right) }+\frac{1-u_{ij}^{\left( r\right) }}{\eta _j^{\left( r\right) }}\right) \delta \left( {\varvec{x}}_i,\varvec{\mu }\left( {\varvec{x}}_i;{\varvec{B}}_j\right) ;\varvec{\varSigma }_j\right) \Biggr \}. \end{aligned}$$

Terms that do not depend on the parameters of interest have been removed from \(Q_3\). Because the three terms on the right-hand side of (14) have zero cross-derivatives, they can be maximized separately.

A.1 Update of \(\varvec{\pi }\)

The maximum of \(Q_1\left( \varvec{\pi }|\varvec{\vartheta }^{\left( r\right) }\right) \) with respect to \(\varvec{\pi }\), subject to the constraints on those parameters, is obtained by maximizing the augmented function

$$\begin{aligned} \sum _{i=1}^n\sum _{j=1}^kz_{ij}^{\left( r\right) }\ln \pi _j-\lambda \left( \sum _{j=1}^k\pi _j-1\right) , \end{aligned}$$
(15)

where \(\lambda \) is a Lagrangian multiplier. Setting the derivative of equation (15) with respect to \(\pi _j\) equal to zero and solving for \(\pi _j\) yields

$$\begin{aligned} \pi _j^{\left( r+1\right) }=\displaystyle \sum _{i=1}^nz_{ij}^{\left( r\right) }\Big /n. \end{aligned}$$
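
To make the update concrete, here is a minimal NumPy sketch (the function and array names are illustrative, not from the paper; `z` collects the posterior membership probabilities \(z_{ij}^{\left( r\right) }\) produced by the preceding E-step):

```python
import numpy as np

def update_pi(z):
    """Mixing-proportion update: pi_j^(r+1) = (sum_i z_ij^(r)) / n.

    z : (n, k) array of posterior membership probabilities from the E-step.
    Returns a length-k vector summing to 1.
    """
    return z.sum(axis=0) / z.shape[0]
```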

A.2 Update of \(\varvec{\alpha }\)

The updates for \(\varvec{\alpha }\) can be obtained through the first partial derivatives

$$\begin{aligned} \frac{\partial Q_2\left( \varvec{\alpha }|\varvec{\vartheta }^{\left( r\right) }\right) }{\partial \alpha _j} = \frac{\displaystyle \sum _{i=1}^n z_{ij}^{\left( r\right) }u_{ij}^{\left( r\right) }-\alpha _j\sum _{i=1}^n z_{ij}^{\left( r\right) }}{\alpha _j\left( 1-\alpha _j\right) }, \qquad j=1,\ldots ,k. \end{aligned}$$
(16)

Equating (16) to zero yields

$$\begin{aligned} \alpha _j^{\left( r+1\right) }=\displaystyle \sum _{i=1}^n z_{ij}^{\left( r\right) }u_{ij}^{\left( r\right) }\Bigg /\displaystyle \sum _{i=1}^n z_{ij}^{\left( r\right) }, \qquad j=1,\ldots ,k. \end{aligned}$$
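
Continuing the illustrative sketch above, with `u` collecting the expected typical-point indicators \(u_{ij}^{\left( r\right) }\) from the E-step:

```python
def update_alpha(z, u):
    """Update obtained by equating (16) to zero:
    alpha_j^(r+1) = sum_i z_ij*u_ij / sum_i z_ij.

    z, u : (n, k) arrays of the E-step quantities z_ij and u_ij.
    """
    return (z * u).sum(axis=0) / z.sum(axis=0)
```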

A.3 Update of \({\varvec{B}}\) and \(\varvec{\varSigma }\)

Using properties of trace and transpose, the updates for \({\varvec{B}}\) can be obtained through the first partial derivatives

$$\begin{aligned}&\frac{\partial Q_3\left( {\varvec{B}},\varvec{\varSigma }|\varvec{\vartheta }^{\left( r\right) }\right) }{ \partial {\varvec{B}}_j'} \nonumber \\&\quad = \frac{\partial \left\{ -\displaystyle \frac{1}{2}\displaystyle \sum _{i=1}^nz_{ij}^{\left( r\right) }\left( u_{ij}^{\left( r\right) }+\frac{1-u_{ij}^{\left( r\right) }}{\eta _j^{\left( r\right) }}\right) \left[ {\varvec{y}}_i-\varvec{\mu }\left( {\varvec{x}}_i;{\varvec{B}}_j\right) \right] '\varvec{\varSigma }_j^{-1}\left[ {\varvec{y}}_i-\varvec{\mu }\left( {\varvec{x}}_i;{\varvec{B}}_j\right) \right] \right\} }{\partial {\varvec{B}}_j'}\nonumber \\&\quad = \frac{\partial \left[ -\displaystyle \frac{1}{2}\displaystyle \sum _{i=1}^nz_{ij}^{\left( r\right) }\left( u_{ij}^{\left( r\right) }+\frac{1-u_{ij}^{\left( r\right) }}{\eta _j^{\left( r\right) }}\right) \left( -{\varvec{y}}_i'\varvec{\varSigma }_j^{-1}{\varvec{B}}_j'{\varvec{x}}_i^*-{\varvec{x}}_i^{*'}{\varvec{B}}_j\varvec{\varSigma }_j^{-1}{\varvec{y}}_i+{\varvec{x}}_i^{*'}{\varvec{B}}_j\varvec{\varSigma }_j^{-1}{\varvec{B}}_j'{\varvec{x}}_i^*\right) \right] }{\partial {\varvec{B}}_j'} \nonumber \\&\quad = \frac{\partial \left\{ \displaystyle \frac{1}{2}\displaystyle \sum _{i=1}^nz_{ij}^{\left( r\right) }\left( u_{ij}^{\left( r\right) }+\frac{1-u_{ij}^{\left( r\right) }}{\eta _j^{\left( r\right) }}\right) \left[ \text{ tr }\left( {\varvec{B}}_j'{\varvec{x}}_i^*{\varvec{y}}_i'\varvec{\varSigma }_j^{-1}\right) +\text{ tr }\left( \left( \varvec{\varSigma }_j^{-1}{\varvec{y}}_i{\varvec{x}}_i^{*'}\right) '{\varvec{B}}_j'\right) -\text{ tr }\left( {\varvec{B}}_j'{\varvec{x}}_i^*{\varvec{x}}_i^{*'}{\varvec{B}}_j\varvec{\varSigma }_j^{-1}\right) \right] \right\} }{\partial {\varvec{B}}_j'}\nonumber \\&\quad =\displaystyle \frac{1}{2}\displaystyle \sum _{i=1}^nz_{ij}^{\left( r\right) }\left( u_{ij}^{\left( r\right) }+\frac{1-u_{ij}^{\left( r\right) }}{\eta _j^{\left( r\right) }}\right) \left( 2\varvec{\varSigma }_j^{-1}{\varvec{y}}_i{\varvec{x}}_i^{*'}-2\varvec{\varSigma }_j^{-1}{\varvec{B}}_j'{\varvec{x}}_i^*{\varvec{x}}_i^{*'}\right) , \qquad j=1,\ldots ,k. \end{aligned}$$
(17)

Equating (17) to the null matrix yields

$$\begin{aligned} {\varvec{B}}_j^{\left( r+1\right) }=\left[ \sum _{i=1}^n z_{ij}^{\left( r\right) }\left( u_{ij}^{\left( r\right) }+\frac{1-u_{ij}^{\left( r\right) }}{\eta _j^{\left( r\right) }}\right) {\varvec{x}}_i^*{\varvec{x}}_i^{*'}\right] ^{-1}\left[ \sum _{i=1}^n z_{ij}^{\left( r\right) }\left( u_{ij}^{\left( r\right) }+\frac{1-u_{ij}^{\left( r\right) }}{\eta _j^{\left( r\right) }}\right) {\varvec{x}}_i^*{\varvec{y}}_i'\right] , \qquad j=1,\ldots ,k. \end{aligned}$$
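
This update can be read as a weighted least-squares fit with observation weights \(w_{ij}^{\left( r\right) }=z_{ij}^{\left( r\right) }\big (u_{ij}^{\left( r\right) }+\big (1-u_{ij}^{\left( r\right) }\big )/\eta _j^{\left( r\right) }\big )\). A sketch for a single component \(j\), under the assumption that \(\varvec{\mu }\left( {\varvec{x}}_i;{\varvec{B}}_j\right) ={\varvec{B}}_j'{\varvec{x}}_i^*\) with \({\varvec{x}}_i^*\) the covariate vector augmented by a leading 1:

```python
def update_B_j(X_star, Y, z_j, u_j, eta_j):
    """Weighted least-squares update for B_j.

    X_star : (n, p+1) design matrix with a leading column of ones.
    Y      : (n, d) matrix of responses.
    z_j, u_j : length-n E-step quantities for component j.
    eta_j  : current degree-of-contamination estimate for component j.
    Returns B_j as a (p+1, d) coefficient matrix.
    """
    w = z_j * (u_j + (1.0 - u_j) / eta_j)      # combined weights w_ij
    XtWX = X_star.T @ (w[:, None] * X_star)    # sum_i w_i x*_i x*_i'
    XtWY = X_star.T @ (w[:, None] * Y)         # sum_i w_i x*_i y_i'
    return np.linalg.solve(XtWX, XtWY)
```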

Finally, the updates for \(\varvec{\varSigma }\) can be obtained through the first partial derivatives

$$\begin{aligned} \frac{\partial Q_3\left( {\varvec{B}},\varvec{\varSigma }|\varvec{\vartheta }^{\left( r\right) }\right) }{\partial \varvec{\varSigma }_j^{-1}}= \frac{1}{2}\sum _{i=1}^n z_{ij}^{\left( r\right) } \left\{ \varvec{\varSigma }_j- \left( u_{ij}^{\left( r\right) }+\frac{1-u_{ij}^{\left( r\right) }}{ \eta _j^{\left( r\right) }}\right) \left[ {\varvec{y}}_i-\varvec{\mu }\left( {\varvec{x}}_i;{\varvec{B}}_j^{\left( r+1\right) }\right) \right] \left[ {\varvec{y}}_i-\varvec{\mu }\left( {\varvec{x}}_i;{\varvec{B}}_j^{\left( r+1\right) }\right) \right] '\right\} , \qquad j=1,\ldots ,k. \end{aligned}$$
(18)

Equating (18) to the null matrix yields

$$\begin{aligned} \varvec{\varSigma }_j^{\left( r+1\right) }= \frac{1}{n_j^{\left( r\right) }}\sum _{i=1}^nz_{ij}^{\left( r\right) } \left( u_{ij}^{\left( r\right) }+\frac{1-u_{ij}^{\left( r\right) }}{\eta _j^{\left( r \right) }}\right) \left[ {\varvec{y}}_i-\varvec{\mu }\left( {\varvec{x}}_i;{\varvec{B}}_j^{\left( r+1\right) }\right) \right] \left[ {\varvec{y}}_i-\varvec{\mu }\left( {\varvec{x}}_i;{\varvec{B}}_j^{\left( r+1\right) }\right) \right] ', \qquad j=1,\ldots ,k, \end{aligned}$$

where \(n_j^{\left( r\right) }=\sum _{i=1}^n z_{ij}^{\left( r\right) }\).
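
A corresponding sketch of the scatter update; in a complete ECM implementation these four updates would alternate with the E-step and with the second CM-step for \(\eta _j\) (not shown here):

```python
def update_Sigma_j(X_star, Y, z_j, u_j, eta_j, B_j_new):
    """Scatter-matrix update for Sigma_j: weighted residual
    cross-products normalized by n_j = sum_i z_ij.
    Returns a (d, d) matrix.
    """
    R = Y - X_star @ B_j_new                   # residuals y_i - mu(x_i; B_j)
    w = z_j * (u_j + (1.0 - u_j) / eta_j)      # same weights as in update_B_j
    return (w[:, None] * R).T @ R / z_j.sum()  # sum_i w_i r_i r_i' / n_j
```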


About this article


Cite this article

Mazza, A., Punzo, A. Mixtures of multivariate contaminated normal regression models. Stat Papers (2017). https://doi.org/10.1007/s00362-017-0964-y


Keywords

  • Contaminated normal distribution
  • Mixtures of regression models
  • Model-based clustering