- Cite this article as:
- Robert, C.P. Theor Decis (1996) 40: 191. doi:10.1007/BF00133173
Since the choice of a particular loss function strongly influences the resulting inference, it seems necessary to rely on “intrinsic” losses when no information is available about the decision-maker's utility function, rather than to resort to classical losses such as the squared-error loss. Since this setting is quite similar to the derivation of noninformative priors in Bayesian analysis, we first recall the conditions of that derivation and deduce from them some requirements on intrinsic losses. It then appears that these loss functions should depend only on the sampling distribution and should be independent of the parameterization of the distribution; the resulting estimators are therefore transformation equivariant. We study the properties of two natural intrinsic losses, namely the entropy and Hellinger losses, and show that they can be expressed in closed form for exponential families. Moreover, the entropy loss also provides analytic expressions for Bayes estimators under conjugate priors; the derivation of Bayes estimators associated with the Hellinger loss is more cumbersome, as shown in the Poisson and Gamma cases, while leading to similar estimators.
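As an illustration of the closed-form expressions mentioned above, the sketch below evaluates the two intrinsic losses for the simplest exponential-family example, the normal location family N(θ, σ²) with known σ. The standard closed forms are KL(θ₁, θ₂) = (θ₁ − θ₂)²/(2σ²) for the entropy loss and 1 − exp(−(θ₁ − θ₂)²/(8σ²)) for the squared Hellinger loss; the function names and the reparameterization check are illustrative choices, not taken from the paper.

```python
import math

def entropy_loss(theta1, theta2, sigma=1.0):
    """Entropy (Kullback-Leibler) loss between N(theta1, sigma^2)
    and N(theta2, sigma^2): (theta1 - theta2)^2 / (2 sigma^2)."""
    return (theta1 - theta2) ** 2 / (2.0 * sigma ** 2)

def hellinger_loss(theta1, theta2, sigma=1.0):
    """Squared Hellinger loss between the same two normals:
    1 - exp(-(theta1 - theta2)^2 / (8 sigma^2))."""
    return 1.0 - math.exp(-((theta1 - theta2) ** 2) / (8.0 * sigma ** 2))

# Parameterization invariance: both losses depend only on the sampling
# distributions, so computing them through a reparameterization
# eta = exp(theta) (a hypothetical example) gives the same value.
def entropy_loss_eta(eta1, eta2, sigma=1.0):
    return entropy_loss(math.log(eta1), math.log(eta2), sigma)

theta1, theta2 = 1.3, 0.4
assert math.isclose(entropy_loss(theta1, theta2),
                    entropy_loss_eta(math.exp(theta1), math.exp(theta2)))
```

Both losses are functions of the two fitted densities alone, which is the invariance property the abstract attributes to intrinsic losses; squared-error loss, by contrast, changes under the map θ ↦ exp(θ).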