# The Generalized HH-Space as Parameter Space

• Siegfried Gabler
Part of the Lecture Notes in Statistics book series (LNS, volume 64)

## Abstract

We have seen that the computation of a general minimax strategy with regard to the parameter space
$$\Theta = \left\{ \theta \in \mathbb{R}^{N} :\sum\limits_{i=1}^{N} \frac{1}{p_{i}}\Bigl(\theta_{i} - p_{i}\sum\limits_{j=1}^{N}\theta_{j}\Bigr)^{2} \leqslant c^{2} \right\}$$
with c > 0 in $D_1$ or in $D_{1u}$ is often not feasible. The classical minimax criterion, as an optimal decision rule, appears too unmanageable to yield feasible solutions. This changes at once if we are interested not only in the maximum of the risk on $\Theta$ but also in the other extrema of the risk on $\Theta$, as indicated in 1.8. The conditional minimax approach of 1.9 is another way of obtaining feasible solutions. Mathematically, both approaches lead to similar results and can be treated together. Without difficulty we can generalize the parameter space to the generalized HH-space
$$\Theta = \left\{ \theta \in \mathbb{R}^{N} :\theta' V \theta \leqslant c^{2} \right\}$$
where V is a nonnegative definite symmetric matrix of rank N − H with VQ = 0, and Q is an N × H matrix of rank H. For H = 1 the HH-space is an example of such a $\Theta$: we set $V = \operatorname{diag}(1/p_1, \dots, 1/p_N) - ee'$, where $e = (1, \dots, 1)'$, and $Q = (p_1, \dots, p_N)'$.
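The H = 1 construction above can be checked numerically: with $V = \operatorname{diag}(1/p_1,\dots,1/p_N) - ee'$ and $Q = (p_1,\dots,p_N)'$, one verifies that VQ = 0, that V is nonnegative definite of rank N − 1, and that $\theta'V\theta$ equals the weighted sum of squares defining the HH-space. A minimal sketch (the sample size N, the random p and θ, and the tolerance are illustrative assumptions, not from the text; p is normalized to sum to 1 as in the single-sample setting):

```python
import numpy as np

N = 4
rng = np.random.default_rng(0)
p = rng.random(N)
p /= p.sum()                      # assume the p_i sum to 1
e = np.ones(N)

# V = diag(1/p_1, ..., 1/p_N) - ee',  Q = p = (p_1, ..., p_N)'
V = np.diag(1.0 / p) - np.outer(e, e)

# VQ = 0: V annihilates Q
assert np.allclose(V @ p, 0.0)

# rank N - H with H = 1
assert np.linalg.matrix_rank(V) == N - 1

# nonnegative definite: all eigenvalues of the symmetric V are >= 0
assert np.all(np.linalg.eigvalsh(V) >= -1e-10)

# theta' V theta = sum_i (1/p_i) (theta_i - p_i * sum_j theta_j)^2
theta = rng.standard_normal(N)
t = theta.sum()
assert np.allclose(theta @ V @ theta, np.sum((theta - p * t) ** 2 / p))

print("all checks passed")
```

The nonnegative definiteness follows from the Cauchy–Schwarz inequality, $(\sum_i \theta_i)^2 \le (\sum_i \theta_i^2/p_i)(\sum_i p_i)$, with equality exactly when $\theta \propto p$, which also accounts for the rank deficit of 1.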

## Keywords

Parameter Space · Linear Estimator · Regular Matrix · Optimal Decision Rule · Minimax Estimator
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.