The minimum S-divergence estimator under continuous models: the Basu–Lindsay approach

  • Regular Article · Statistical Papers

Abstract

Robust inference based on the minimization of statistical divergences has proved to be a useful alternative to classical maximum likelihood based techniques. Recently, Ghosh et al. (A Generalized Divergence for Statistical Inference, 2013a) proposed a general class of divergence measures for robust statistical inference, named the S-divergence family. Ghosh (Sankhya A, doi:10.1007/s13171-014-0063-2, 2014) discussed its asymptotic properties under discrete models. In the present paper, we develop the asymptotic properties of the minimum S-divergence estimators under continuous models. Here we use the Basu–Lindsay approach (Ann Inst Stat Math 46:683–705, 1994) of smoothing the model densities, which, unlike previous approaches, avoids many of the complications of kernel bandwidth selection. Extensive simulation studies and real data examples illustrate the performance of the resulting estimators in terms of both efficiency and robustness.
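To make the construction concrete, the following is a minimal numerical sketch (not the authors' implementation) of a minimum S-divergence estimator under the Basu–Lindsay approach for a univariate normal model. It uses the S-divergence in the form S_{(α,λ)}(g,f) = (1/A)∫f^{1+α} − ((1+α)/(AB))∫f^B g^A + (1/B)∫g^{1+α}, with A = 1 + λ(1−α) and B = α − λ(1−α), assuming A ≠ 0 and B ≠ 0 (the boundary cases require limiting forms). Following the Basu–Lindsay idea, the data-based kernel density estimate and the model density are smoothed with the same Gaussian kernel; for the normal model the smoothed model density is available in closed form. The bandwidth rule, optimizer, and all function and parameter names below are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of a minimum S-divergence estimator under the
# Basu-Lindsay approach for a univariate N(mu, sigma^2) model.
# Assumes A != 0 and B != 0; all names here are illustrative.
import numpy as np
from scipy import stats, optimize, integrate

def s_divergence(g, f, alpha, lam, grid):
    """S_(alpha,lambda)(g, f) for densities tabulated on `grid`."""
    A = 1.0 + lam * (1.0 - alpha)
    B = alpha - lam * (1.0 - alpha)
    integrand = (f ** (1 + alpha)) / A \
        - (1 + alpha) / (A * B) * (f ** B) * (g ** A) \
        + (g ** (1 + alpha)) / B
    return integrate.trapezoid(integrand, grid)

def msde_normal(x, alpha=0.5, lam=0.0, h=None):
    n = len(x)
    if h is None:
        h = 1.06 * np.std(x, ddof=1) * n ** (-0.2)  # rule-of-thumb bandwidth
    grid = np.linspace(x.min() - 4 * h, x.max() + 4 * h, 2000)
    # Kernel density estimate g_n^* with a Gaussian kernel of std h
    g = stats.gaussian_kde(x, bw_method=h / np.std(x, ddof=1))(grid)

    def objective(theta):
        mu, log_sigma = theta
        sigma = np.exp(log_sigma)  # keep sigma positive
        # Model smoothed with the SAME kernel: N(mu, sigma^2 + h^2);
        # the Gaussian kernel is "transparent" for the normal model.
        f = stats.norm.pdf(grid, mu, np.sqrt(sigma ** 2 + h ** 2))
        return s_divergence(g, f, alpha, lam, grid)

    res = optimize.minimize(objective,
                            x0=[np.median(x), np.log(np.std(x, ddof=1))],
                            method="Nelder-Mead")
    return res.x[0], np.exp(res.x[1])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # 95% N(0,1) data contaminated with 5% gross outliers near 10
    x = np.concatenate([rng.normal(0, 1, 95), rng.normal(10, 1, 5)])
    mu_hat, sigma_hat = msde_normal(x, alpha=0.5, lam=0.0)
    print(mu_hat, sigma_hat)  # should remain close to (0, 1)
```

Because the same kernel smooths both the data and the model, the bandwidth h acts only as a fixed smoothing parameter and need not shrink with the sample size, which is the feature of the Basu–Lindsay approach highlighted in the abstract.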


References

  • Altman T, Leger C (1994) Cross-validation, the bootstrap, and related methods for tuning parameter selection. Technical Report, Cornell University Library

  • Basu A, Harris IR, Hjort NL, Jones MC (1998) Robust and efficient estimation by minimising a density power divergence. Biometrika 85:549–559

  • Basu A, Lindsay BG (1994) Minimum disparity estimation for continuous models: efficiency, distributions and robustness. Ann Inst Stat Math 46:683–705

  • Basu A, Shioya H, Park C (2011) Statistical inference: the minimum distance approach. Chapman & Hall/CRC, Boca Raton

  • Beran RJ (1977) Minimum Hellinger distance estimates for parametric models. Ann Stat 5:445–463

  • Bickel PJ (1978) Some recent developments in robust statistics. Presented at the 4th Australian statistical conference

  • Bregman LM (1967) The relaxation method of finding the common point of convex sets and its application to the solution of problems in convex programming. USSR Comput Math Math Phys 7:200–217

  • Brown LD, Hwang JTG (1993) How to approximate a histogram by a normal density. Am Stat 47(4):251–255

  • Burbea J, Rao CR (1982) Entropy differential metric, distance and divergence measures in probability spaces: a unified approach. J Multivar Anal 12:575–596

  • Cressie N, Read TRC (1984) Multinomial goodness-of-fit tests. J R Stat Soc B 46:440–464

  • Csiszár I (1963) Eine informationstheoretische Ungleichung und ihre Anwendung auf den Beweis der Ergodizität von Markoffschen Ketten [An information-theoretic inequality and its application to the proof of the ergodicity of Markov chains]. Publ Math Inst Hungar Acad Sci 3:85–107

  • Donoho DL, Liu RC (1988) The "automatic" robustness of minimum distance functionals. Ann Stat 16:552–586

  • Fang Y, Wang B, Feng Y (2013) Tuning parameter selection in regularized estimations of large covariance matrices. J Stat Comput Simul 84(7):1597–1607

  • Ghosh A (2014) Asymptotic properties of minimum S-divergence estimator for discrete models. Sankhya A. doi:10.1007/s13171-014-0063-2

  • Ghosh A, Basu A (2013) Robust estimation for independent non-homogeneous observations using density power divergence with applications to linear regression. Electron J Stat 7:2420–2456

  • Ghosh A, Basu A (2015a) Robust estimation in generalised linear models: the density power divergence approach. TEST. doi:10.1007/s11749-015-0445-3

  • Ghosh A, Basu A (2015b) Robust Bayes estimation using the density power divergence. Ann Inst Stat Math. doi:10.1007/s10463-014-0499-0

  • Ghosh A, Basu A (2015c) Robust estimation for non-homogeneous data and the selection of the optimal tuning parameter: the density power divergence approach. J Appl Stat. doi:10.1080/02664763.2015.1016901

  • Ghosh A, Basu A, Pardo L (2015) On the robustness of a divergence based test of simple statistical hypotheses. J Stat Plan Inference 161:91–108

  • Ghosh A, Harris IR, Maji A, Basu A, Pardo L (2013a) A Generalized Divergence for Statistical Inference. Technical Report (BIRU/2013/3), Bayesian and Interdisciplinary Research Unit, Indian Statistical Institute, India

  • Ghosh A, Maji A, Basu A (2013b) Robust inference based on divergences in reliability systems. In: Frenkel I, Karagrigoriou A, Lisnianski A, Kleyner A (eds) Applied reliability engineering and risk analysis: probabilistic models and statistical inference (dedicated to the centennial of the birth of Boris Gnedenko). Wiley, New York

  • Hong C, Kim Y (2001) Automatic selection of the tuning parameter in the minimum density power divergence estimation. J Korean Stat Soc 30:453–465

  • Jones MC, Hjort NL, Harris IR, Basu A (2001) A comparison of related density-based minimum divergence estimators. Biometrika 88(3):865–873

  • Kawano S (2014) Selection of tuning parameters in bridge regression models via Bayesian information criterion. Stat Pap 55(4):1207–1223

  • Kullback S, Leibler RA (1951) On information and sufficiency. Ann Math Stat 22:79–86

  • Landaburu E, Morales D, Pardo L (2005) Divergence-based estimation and testing with misclassified data. Stat Pap 46(3):397–409

  • Lee T, Lee S (2009) Consistency of minimizing a penalized density power divergence estimator for mixing distribution. Stat Pap 50(1):67–80

  • Lindsay BG (1994) Efficiency versus robustness: the case for minimum Hellinger distance and related methods. Ann Stat 22:1081–1114

  • Martin N, Pardo L (2008) Minimum phi-divergence estimators for log-linear models with linear constraints and multinomial sampling. Stat Pap 49(1):15–36

  • Menendez M, Morales D, Pardo L, Salicru M (1995) Asymptotic behavior and statistical applications of divergence measures in multinomial populations: a unified study. Stat Pap 36(1):1–29

  • Menendez M, Morales D, Pardo L, Vajda I (1998) Two approaches to grouping of data and related disparity statistics. Commun Stat-Theory Methods 27(3):609–633

  • Menendez M, Pardo L, Pardo MC (2009) Preliminary phi-divergence test estimators for linear restrictions in a logistic regression model. Stat Pap 50(2):277–300

  • Minami M, Eguchi S (2002) Robust blind source separation by beta divergence. Neural Comput 14(8):1859–1886

  • Pardo JA, Pardo L, Pardo MC (2006) Minimum Φ-divergence estimator in logistic regression models. Stat Pap 47(1):91–108

  • Park H, Sakaori F, Konishi S (2014) Robust sparse regression and tuning parameter selection via the efficient bootstrap information criteria. J Stat Comput Simul 84(7):1597–1607

  • Patra S, Maji A, Basu A, Pardo L (2013) The power divergence and the density power divergence families: the mathematical connection. Sankhya B 75:16–28

  • Pearson K (1900) On the criterion that a given system of deviations from the probable in the case of a correlated system of variables is such that it can be reasonably supposed to have arisen from random sampling. Philos Mag 50:157–175

  • Rieder H (1994) Robust asymptotic statistics. Springer, New York

  • Scott DW (2001) Parametric statistical modeling by minimum integrated square error. Technometrics 43:274–285

  • Simpson DG (1987) Minimum Hellinger distance estimation for the analysis of count data. J Am Stat Assoc 82:802–807

  • Simpson DG (1989) Hellinger deviance test: efficiency, breakdown points, and examples. J Am Stat Assoc 84:107–113

  • Stigler SM (1977) Do robust estimators work with real data? Ann Stat 5:1055–1098

  • Tamura RN, Boos DD (1986) Minimum Hellinger distance estimation for multivariate location and covariance. J Am Stat Assoc 81:223–229

  • Toma A, Broniatowski M (2010) Dual divergence estimators and tests: robustness results. J Multivar Anal 102:20–36

  • Warwick J, Jones MC (2005) Choosing a robustness tuning parameter. J Stat Comput Simul 75:581–588

  • Wied D, Weißbach R (2012) Consistency of the kernel density estimator: a survey. Stat Pap 53(1):1–21

Acknowledgments

The authors thank the editor and two anonymous referees for several useful suggestions and comments that led to an improved version of the manuscript.

Author information

Correspondence to Ayanendranath Basu.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 139 KB)

About this article

Cite this article

Ghosh, A., Basu, A. The minimum S-divergence estimator under continuous models: the Basu–Lindsay approach. Stat Papers 58, 341–372 (2017). https://doi.org/10.1007/s00362-015-0701-3
