Abstract
Under standard regularity conditions, the maximum likelihood estimator (MLE) is asymptotically the most efficient estimator at the model. In practice, however, the hypothesized model rarely holds exactly, and small departures from it are to be expected; yet classical estimators such as the MLE can be extremely sensitive to such departures and to noise in the data. Among robust estimators (parametric inference techniques designed to withstand model misspecification and data contamination), minimum distance estimators have become quite popular in recent times. In particular, density-based distances under the umbrella of the Bregman divergence have been shown to offer several advantages. Here we consider an extension of the ordinary Bregman divergence and investigate the scope of parametric estimation under continuous models using this extended divergence proposal. Many of our illustrations are based on the GSB divergence, a particular member of the extended Bregman family, which appears to hold high promise for robust inference. To establish the usefulness of the proposed minimum distance estimation procedure, we provide detailed theoretical investigations followed by substantial numerical verification.
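The paper's formulas are not reproduced on this page. As background only, the sketch below illustrates the general idea of robust minimum distance estimation using the ordinary density power divergence, a well-known member of the Bregman class, rather than the authors' extended GSB procedure. The normal model, the simulated 5% contamination, and the tuning value `alpha = 0.5` are all assumptions made for illustration.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def dpd_objective(theta, x, alpha):
    """Empirical density power divergence objective for a normal model.

    Minimizing  integral(f_theta^(1+alpha)) - (1+alpha)/alpha * mean(f_theta(x)^alpha)
    over theta gives the minimum DPD estimate; alpha > 0 trades efficiency
    for robustness (alpha -> 0 recovers maximum likelihood).
    """
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)  # parametrize on log scale to keep sigma > 0
    # Closed form of integral N(x; mu, sigma)^(1+alpha) dx for the normal family
    int_term = (2.0 * np.pi * sigma**2) ** (-alpha / 2.0) / np.sqrt(1.0 + alpha)
    emp_term = (1.0 + alpha) / alpha * np.mean(norm.pdf(x, mu, sigma) ** alpha)
    return int_term - emp_term

# Simulated data: 95 clean N(0,1) observations plus 5 gross outliers near 10
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 95), rng.normal(10.0, 1.0, 5)])

# Robust fit: minimize the DPD objective (Nelder-Mead, no gradients needed)
res = minimize(dpd_objective, x0=[np.median(x), 0.0],
               args=(x, 0.5), method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
```

Because observations receive weight proportional to `f_theta(x)^alpha`, the outliers near 10 are effectively downweighted, so `mu_hat` stays close to 0 while the raw sample mean is pulled upward by the contamination.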
![](http://media.springernature.com/m312/springer-static/image/art%3A10.1007%2Fs13571-024-00333-z/MediaObjects/13571_2024_333_Fig1_HTML.png)
![](http://media.springernature.com/m312/springer-static/image/art%3A10.1007%2Fs13571-024-00333-z/MediaObjects/13571_2024_333_Fig2_HTML.png)
Funding
The authors did not receive support from any organization for the submitted work.
Author information
Authors and Affiliations
Corresponding author
Ethics declarations
Conflicts of interest
The authors have no competing interests to declare that are relevant to the content of this article.
Cite this article
Basak, S., Basu, A. The Extended Bregman Divergence and Parametric Estimation in Continuous Models. Sankhya B (2024). https://doi.org/10.1007/s13571-024-00333-z