Theory and Decision, Volume 40, Issue 2, pp 191–214

Intrinsic losses

  • Christian P. Robert

DOI: 10.1007/BF00133173

Cite this article as:
Robert, C.P. Theor Decis (1996) 40: 191. doi:10.1007/BF00133173


Since the choice of a particular loss function strongly influences the resulting inference, it seems necessary to rely on “intrinsic” losses, rather than on classical losses such as the squared error loss, when no information is available about the decision-maker's utility function. Because this setting closely parallels the derivation of noninformative priors in Bayesian analysis, we first recall the conditions of that derivation and deduce from them some requirements on intrinsic losses. It then appears that these loss functions should depend only on the sampling distribution and should be independent of the parameterization of the distribution; the resulting estimators are therefore transformation equivariant. We study the properties of two natural intrinsic losses, namely the entropy and Hellinger losses, and show that they can be expressed in closed form for exponential families. Moreover, the entropy loss also provides analytic expressions for Bayes estimators under conjugate priors; the derivation of Bayes estimators associated with the Hellinger loss is more cumbersome, as shown in the Poisson and Gamma cases, although it leads to similar estimators.
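
For reference (the notation below is ours, not quoted from the article), the entropy (Kullback-Leibler) loss and the Hellinger loss between the sampling distribution at the true value $\theta$ and at an estimate $\delta$ are usually written as

$$ \mathrm{L}_e(\theta,\delta) = \mathbb{E}_\theta\!\left[\log\frac{f(x\mid\theta)}{f(x\mid\delta)}\right], \qquad \mathrm{L}_H(\theta,\delta) = \frac{1}{2}\,\mathbb{E}_\theta\!\left[\left(\sqrt{\frac{f(x\mid\delta)}{f(x\mid\theta)}}-1\right)^{2}\right], $$

where $f(\cdot\mid\theta)$ denotes the sampling density. As a small numerical sketch of the closing remark of the abstract (our own illustration, not code from the paper, with arbitrary hyperparameters a, b and observation x), the Python fragment below compares the two Bayes estimators of a Poisson mean under a conjugate Gamma(a, b) prior: under the entropy loss the estimator reduces to the posterior mean (a + x)/(b + 1), whereas under the Hellinger loss it has to be obtained by numerical minimization of the posterior expected loss.

    import numpy as np
    from scipy import integrate, optimize, stats

    # Numerical sketch (not from the paper): Bayes estimators of a Poisson mean
    # under the entropy (Kullback-Leibler) and Hellinger losses, with a
    # conjugate Gamma(a, b) prior.  Hyperparameters and the observation are
    # arbitrary illustrative choices.
    a, b = 2.0, 1.0        # Gamma(a, b) prior on the Poisson mean theta
    x = 4                  # observed Poisson count

    # By conjugacy, the posterior is Gamma(a + x, b + 1).
    post = stats.gamma(a + x, scale=1.0 / (b + 1))

    def post_expect(g):
        """Posterior expectation of g(theta) by numerical integration."""
        val, _ = integrate.quad(lambda t: g(t) * post.pdf(t), 0, np.inf)
        return val

    # Entropy loss for Poisson: L_e(theta, d) = theta*log(theta/d) - theta + d;
    # its posterior expected loss is minimized at the posterior mean.
    entropy_bayes = post_expect(lambda t: t)   # equals (a + x) / (b + 1)

    # Hellinger loss for Poisson: L_H(theta, d) = 1 - exp(-(sqrt(theta) - sqrt(d))**2 / 2);
    # no closed-form minimizer here, so minimize the posterior expected loss numerically.
    def expected_hellinger(d):
        return post_expect(lambda t: 1.0 - np.exp(-0.5 * (np.sqrt(t) - np.sqrt(d)) ** 2))

    res = optimize.minimize_scalar(expected_hellinger, bounds=(1e-6, 50.0), method="bounded")

    print("entropy-loss Bayes estimator (posterior mean):", entropy_bayes)
    print("Hellinger-loss Bayes estimator (numerical):   ", res.x)

The two printed values can be compared directly; the abstract notes that the two losses lead to similar estimators in such conjugate settings.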

Key words

Utility theory · non-informative prior · distributional distance · entropy · Hellinger distance · conjugate prior · Fisher information · exponential families · Bayes estimator

Copyright information

© Kluwer Academic Publishers 1996

Authors and Affiliations

  • Christian P. Robert
    1. INSEE, Centre de Recherche en Economie et Statistique, Malakoff Cedex, France