
Statistical inference based on bridge divergences

  • Arun Kumar Kuchibhotla
  • Somabha Mukherjee
  • Ayanendranath Basu

Abstract

M-estimators offer simple robust alternatives to the maximum likelihood estimator. The density power divergence (DPD) and the logarithmic density power divergence (LDPD) measures provide two classes of robust M-estimators which contain the MLE as a special case. In each of these families, the robustness of the estimator is achieved through a density power down-weighting of outlying observations. Even though both families have proved useful in robust inference, the relation and hierarchy between them are yet to be fully established. In this paper, we present a generalized family of divergences that provides a smooth bridge between the DPD and the LDPD measures. This family helps to clarify and settle several long-standing issues in the relation between DPD and LDPD, and is a useful tool in several areas of statistical inference in its own right.
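
For readers who want the objects named in the abstract in symbols, the two families are commonly written as follows. This is the standard formulation from Basu et al. (1998) and Jones et al. (2001), not a reproduction of the paper's own notation, and the bridge family itself is defined in the paper. For a data density g, a model density f, and a tuning parameter \alpha > 0,

  DPD:   d_\alpha(g, f) = \int f^{1+\alpha} - (1 + 1/\alpha) \int g f^{\alpha} + (1/\alpha) \int g^{1+\alpha},

  LDPD:  \log \int f^{1+\alpha} - (1 + 1/\alpha) \log \int g f^{\alpha} + (1/\alpha) \log \int g^{1+\alpha},

i.e., the LDPD replaces each integral in the DPD by its logarithm with the same coefficients. Both recover maximum likelihood as \alpha -> 0, and for \alpha > 0 each observation enters the estimating equations through f(X_i)^\alpha, which down-weights points with small model density.

The following Python sketch illustrates the resulting M-estimation for a normal model under contamination. It is a minimal illustration under the assumptions stated in the comments, not the authors' code; the function names (dpd_objective, ldpd_objective) and the choice \alpha = 0.5 are ours.

import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Contaminated sample: 95% N(0, 1), 5% outliers from N(8, 1).
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 95), rng.normal(8.0, 1.0, 5)])
alpha = 0.5  # larger alpha = stronger down-weighting, lower efficiency

def unpack(theta):
    # parameterize sigma through its log so the optimizer is unconstrained
    return theta[0], np.exp(theta[1])

def int_f_1plus_alpha(theta):
    # closed form of \int f_theta^{1+alpha} dx for the N(mu, sigma^2) density
    _, sigma = unpack(theta)
    return (2.0 * np.pi * sigma**2) ** (-alpha / 2.0) / np.sqrt(1.0 + alpha)

def mean_f_alpha(theta):
    # empirical version of \int g f_theta^alpha: the mean of f_theta(X_i)^alpha
    mu, sigma = unpack(theta)
    return np.mean(norm.pdf(x, loc=mu, scale=sigma) ** alpha)

def dpd_objective(theta):
    # empirical DPD objective; the \int g^{1+alpha} term is dropped (free of theta)
    return int_f_1plus_alpha(theta) - (1.0 + 1.0 / alpha) * mean_f_alpha(theta)

def ldpd_objective(theta):
    # empirical LDPD objective: each integral replaced by its logarithm
    return np.log(int_f_1plus_alpha(theta)) - (1.0 + 1.0 / alpha) * np.log(mean_f_alpha(theta))

for name, obj in [("DPD", dpd_objective), ("LDPD", ldpd_objective)]:
    res = minimize(obj, x0=[np.median(x), 0.0], method="Nelder-Mead")
    mu_hat, sigma_hat = unpack(res.x)
    print(f"{name}: mu_hat = {mu_hat:.3f}, sigma_hat = {sigma_hat:.3f}")

On this sample both estimates stay close to (mu, sigma) = (0, 1), whereas the sample mean is pulled to roughly 0.4 by the outliers; this density power down-weighting is the robustness mechanism the abstract describes, and the paper's bridge family interpolates between the two objectives above.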

Keywords

Divergence · Robustness · M-estimators

Notes

Acknowledgements

The authors gratefully acknowledge the comments of two anonymous referees as well as the members of the editorial board, which led to a significantly improved version of the paper. The authors are indebted to Srijata Samanta of the University of Florida for her contribution toward Remark 13.

Supplementary material

  • Supplementary material 1: 10463_2018_665_MOESM1_ESM.fig (14.2 MB)
  • Supplementary material 2: 10463_2018_665_MOESM2_ESM.xls (28 KB)
  • Supplementary material 3: 10463_2018_665_MOESM3_ESM.pdf (531 KB)
  • Supplementary material 4: 10463_2018_665_MOESM4_ESM.fig (622 KB)
  • Supplementary material 5: 10463_2018_665_MOESM5_ESM.fig (621 KB)
  • Supplementary material 6: 10463_2018_665_MOESM6_ESM.xls (64 KB)


Copyright information

© The Institute of Statistical Mathematics, Tokyo 2018

Authors and Affiliations

  • Arun Kumar Kuchibhotla (1)
  • Somabha Mukherjee (1)
  • Ayanendranath Basu (2)

  1. University of Pennsylvania, Philadelphia, USA
  2. Indian Statistical Institute, Kolkata, India
