
Finite Mixture of Linear Regression Models: An Adaptive Constrained Approach to Maximum Likelihood Estimation

  • Conference paper
  • In: Soft Methods for Data Science (SMPS 2016)

Part of the book series: Advances in Intelligent Systems and Computing (AISC, volume 456)


Abstract

To overcome the problems caused by the unboundedness of the likelihood, constrained approaches to maximum likelihood estimation for finite mixtures of univariate and multivariate normals have been proposed in the literature. Their main drawback is that they require prior knowledge of the variance and covariance structure. We propose a fully data-driven constrained method for estimating mixtures of linear regression models. The method requires no prior knowledge of the variance structure, is invariant under changes of scale in the data, and is straightforward to implement in standard routines.
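The full paper is not reproduced here, so purely as orientation, the sketch below illustrates the classical device the abstract alludes to: an EM algorithm for a mixture of linear regressions whose M-step keeps every component variance within a fixed ratio of the largest one (a Hathaway-type constraint), which is one standard way to keep the likelihood bounded. The function name `em_mixreg_constrained`, the constant `c`, and the two-line toy data are illustrative assumptions; this is not the authors' adaptive, data-driven constraint.

```python
# Minimal sketch: constrained EM for a K-component mixture of linear regressions.
# The ratio bound c (an assumption here) forces sigma2_k >= c * max_j sigma2_j,
# preventing any component variance from collapsing to zero.
import numpy as np

def em_mixreg_constrained(X, y, K=2, c=0.1, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    resp = rng.dirichlet(np.ones(K), size=n)   # random initial responsibilities
    beta = np.zeros((K, p))
    sigma2 = np.ones(K)
    pi = np.full(K, 1.0 / K)
    for _ in range(n_iter):
        # M-step: weighted least squares, variance, and mixing weight per component
        for k in range(K):
            w = resp[:, k]
            W = w[:, None] * X
            beta[k] = np.linalg.solve(X.T @ W + 1e-8 * np.eye(p), W.T @ y)
            res = y - X @ beta[k]
            sigma2[k] = (w * res**2).sum() / w.sum()
            pi[k] = w.mean()
        # enforce the relative variance constraint sigma2_k >= c * max(sigma2)
        sigma2 = np.maximum(sigma2, c * sigma2.max())
        # E-step: posterior component probabilities under normal errors
        dens = np.empty((n, K))
        for k in range(K):
            res = y - X @ beta[k]
            dens[:, k] = pi[k] * np.exp(-0.5 * res**2 / sigma2[k]) / np.sqrt(2 * np.pi * sigma2[k])
        resp = dens / dens.sum(axis=1, keepdims=True)
    return beta, sigma2, pi

if __name__ == "__main__":
    # toy usage: two regression lines with opposite slopes
    rng = np.random.default_rng(1)
    x = rng.uniform(-2, 2, size=300)
    X = np.column_stack([np.ones_like(x), x])
    z = rng.integers(0, 2, size=300)
    y = np.where(z == 0, 1 + 2 * x, -1 - 2 * x) + rng.normal(scale=0.3, size=300)
    print(em_mixreg_constrained(X, y))
```

In this sketch the ratio bound c is fixed by hand and therefore still encodes an assumption about the variance structure; the contribution described in the abstract is precisely to tune such a constraint adaptively from the data, so that no prior knowledge of the variance structure is needed and the estimator remains invariant under changes of scale.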



Author information

Corresponding author: Roberto Di Mari.


Copyright information

© 2017 Springer International Publishing Switzerland

About this paper

Cite this paper

Di Mari, R., Rocci, R., Gattone, S.A. (2017). Finite Mixture of Linear Regression Models: An Adaptive Constrained Approach to Maximum Likelihood Estimation. In: Ferraro, M., et al. Soft Methods for Data Science. SMPS 2016. Advances in Intelligent Systems and Computing, vol 456. Springer, Cham. https://doi.org/10.1007/978-3-319-42972-4_23


  • DOI: https://doi.org/10.1007/978-3-319-42972-4_23


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-42971-7

  • Online ISBN: 978-3-319-42972-4

  • eBook Packages: Engineering (R0)
