Equivariance and Invariance for Optimal Designs in Generalized Linear Models Exemplified by a Class of Gamma Models

The main intention of the present work is to outline the concept of equivariance and invariance in the design of experiments for generalized linear models and to demonstrate its usefulness. In contrast with linear models, pairs of transformations have to be employed for generalized linear models. These transformations act simultaneously on the experimental settings and on the location parameters in the linear component. Then, the concept of equivariance provides a tool to transfer locally optimal designs from one experimental region to another when the nominal values of the parameters are changed accordingly. The stronger concept of invariance requires a whole group of equivariant transformations. It can be used to characterize optimal designs which reflect the symmetries resulting from the group actions. The general concepts are illustrated by models with gamma distributed response and a canonical link. There, for a given transformation of the experimental settings, the transformation of the parameters is not unique and may be chosen to be nonlinear in order to fully exploit the model structure. In this case, we can derive invariant maximin efficient designs for the D- and the IMSE-criterion.


Introduction
Generalized linear models are a powerful tool to analyze data for which the standard linear model approach is not adequate. The idea of generalized linear models goes back to Nelder and Wedderburn [28], and their concept is comprehensively presented in the monograph by McCullagh and Nelder [27]. The statistical analysis is well developed in generalized linear models, and there is also a considerable amount of literature on optimal design in this situation (see, e.g., Atkinson and Woods [3] and the literature cited therein).
In generalized linear models, the performance of a design depends not only on the experimental settings but, in contrast with linear models, also on the values of the underlying parameters. Even more crucially, the solutions for optimal designs also depend on the parameters. As those are commonly unknown at the design step, nominal values of these parameters have to be specified prior to the experiment, which leads to the concept of locally optimal designs (see Chernoff [5]). This approach has been frequently employed (see [1,13,14,36,41-43], among others), and it provides at least a benchmark for the quality of a design.
To overcome the parameter dependence, robust criteria have been proposed which either impose a prior weight on the parameters (Bayesian design, see, e.g., Atkinson et al. [2], ch. 18) or choose a minimax approach over a parameter region of interest (maximin efficiency, see, e.g., [8,10,17,22]).
The construction of optimal designs for generalized linear models is difficult, and often numerical algorithms are employed to find a solution. To reduce the complexity of the search for a good design, one can make use of symmetries ("invariance") in the design problem which can be described by transformations of the experimental settings and the location parameters in the linear component. The concept of invariance or, more specifically, equivariance with respect to transformations has long been used in statistical analysis and dates back to Pitman [30] (see, e.g., Lehmann [24, ch. 6], for a comprehensive description). The underlying idea of transformations which are conformable with the model ("reparameterization") has been successfully adapted to optimal design theory in linear models. In contrast with equivariance in statistical analysis, the parameter values do not play a role in optimal designs for linear models. Therefore, only transformations of the experimental settings have to be considered there. With these transformations, optimal designs may be first determined on a standardized experimental region and then transferred to more general regions as long as the transformation is order preserving with respect to the design criterion (see, e.g., Heiligers and Schneider [19]). This covers, for example, the situation of D-optimal designs for polynomial regression on an arbitrary (multivariate) interval.
The stronger concept of invariance requires a whole group of equivariant transformations. In linear models, invariance has been widely used to characterize optimal designs which reflect the symmetries resulting from the group actions (see Pukelsheim [33, ch. 5], or Schwabe [38, ch. 3]). These groups of transformations may cover reflections and rotations for quantitative variables as well as permutations of levels and factors for categorical variables and combinations thereof. In the context of generalized linear models, however, the concept of invariance is not well established. This seems to be mainly due to the fact that local optimality criteria lack symmetries, in general, because they depend on the parameter values. Therefore, we have to account for this dependence by also transforming the parameters similar to the situation in statistical analysis.
For the underlying concept of equivariance, we thus need a pair of transformations which acts simultaneously on the experimental settings and on the parameter values. The most famous representative of this concept is known as the canonical transformation defined in Ford et al. [13]. But the motivation there is different from our approach exhibited in Radloff and Schwabe [34]. The canonical transformation starts with a standardization of the nominal value of the parameters. This standardization is compensated by an associated transformation of the experimental settings which leaves the value of the linear component unchanged. In contrast with that, we start with a transformation of the experimental settings as in linear models. This transformation is conformable ("linearly equivariant," see Schwabe [38, ch. 3]) with the regression functions in the sense that it results in a linear reparameterization of the linear component. This linear reparameterization might be the associated action on the parameter values as in the canonical transformation, but we allow for more general, even nonlinear, transformations of the parameters. Moreover, in its standard formulation, the canonical transformation deals with one quantitative explanatory variable with a straight line relationship for the linear component. A generalization of the canonical transformation to multiple explanatory variables is given in Sitter and Torsney [40]. In our approach, there is no restriction on the number of explanatory variables, on their impact on the linear component described by the regression functions, or on whether they are quantitative or categorical.
For the concept of invariance in generalized linear models, symmetries are also required in the parameters which concur with the symmetries in the experimental settings. This requirement is hardly met in the case of local optimality, but Bayesian or maximin efficiency criteria can incorporate symmetries in their prior or in their parameter region of interest (see Radloff and Schwabe [34]).
Based on this approach, we develop in the following the concept of equivariance and invariance in generalized linear models and their application to optimal designs step-by-step and illustrate each step by a running example of gamma models with canonical link functions. This kind of gamma model is chosen because it exhibits an additional scaling property which provides a more complex, nonlinear symmetry structure.
The paper is organized as follows. In Sect. 2, we introduce the model assumptions and the design criteria. In Sect. 3, we discuss the concept of equivariance under standard linear transformations of the parameters and show how optimal designs can be transferred from one experimental region to another. In Sect. 4, the concept of equivariance is extended to nonlinear transformations of the parameters. In Sect. 5, the general concept of invariance is introduced and optimal designs are obtained for various situations. Finally, Sect. 6 concludes the paper with a short discussion and an outlook.

Basics: Model Specification, Information, and Design
We consider a response variable Y for which the dependence on a (potentially multidimensional) covariate x can be described by a generalized linear model. This means that the distribution of Y comes from a given exponential family and the mean μ = E(Y) is related to a linear component f(x)^T β, where f(x) = (f_0(x), ..., f_{p-1}(x))^T is a vector of known regression functions f_0(x), ..., f_{p-1}(x) and β = (β_0, ..., β_{p-1})^T is a p-dimensional vector of parameters β_0, ..., β_{p-1} to be estimated. Traditionally the link function maps the mean to the linear component (see McCullagh and Nelder [27, ch. 2]). For analytical purposes, however, it is more convenient to describe the dependence of the mean on the linear component, μ(x; β) = η(f(x)^T β), where η is the inverse of the link function. For example, for the log link η is the exponential and for the inverse link η is the reciprocal function. As a particular case and for illustrative purposes, we consider gamma models. Such models are frequently used in engineering applications. For example, in Dette et al. [9] a gamma model is considered in a thermal spraying process. Further applications in the fields of ecology, medicine, and psychology can be found in Gea-Izquierdo and Cañellas [15], Grover et al. [18], and Ng and Cribbie [29]. In a gamma model, the response Y is gamma distributed. One possibility to parameterize its density is given by f_Y(y) = y^{κ-1} exp(-y/θ)/(θ^κ Γ(κ)), y > 0, where κ > 0 and θ > 0 denote the shape and scale parameters, respectively. In this case, the expectation of Y is given by μ = κθ. In order to end up with a one-parametric exponential family, we suppose that the shape parameter κ is a fixed nuisance parameter (see Atkinson and Woods [3]). For example, κ = 1 gives the family of exponential distributions, or for fixed integer κ one obtains a family of certain Erlang distributions.
For the link function, we assume the inverse link κ/μ = f(x)^T β. Alternatively, the log link is frequently used in gamma models (see, e.g., Ford et al. [13]). However, the inverse link appears to be more suitable for illustrative purposes. Moreover, the inverse link is equal to the canonical link -κ/μ = f(x)^T β up to the minus sign (see McCullagh and Nelder [27, ch. 3]). This means that all subsequent results will also be valid for the canonical link, and the minus sign is suppressed for notational reasons.
The inverse η of the inverse link is given by η(z) = κ/z, which itself is equal to the inverse -κ/z of the canonical link up to the minus sign. Then, the responses Y_i of a sample Y_1, ..., Y_n with covariates x_1, ..., x_n are gamma distributed with means μ_i = η(f(x_i)^T β) and common shape parameter κ.
In an experimental design setup, the covariates x_i may be chosen by the experimenter from an experimental region X over which the model under consideration is assumed to be valid. For gamma distributed responses, as an additional side condition, the means μ_i have to be positive (μ(x_i; β) > 0). This implies the natural restriction on the parameter region B of potential values for the parameter vector β that for every β ∈ B the linear component has to be positive (f(x)^T β > 0) for all x ∈ X. Further note that for reasons of parameter identifiability the regression functions f_0, ..., f_{p-1} are assumed to be linearly independent on the experimental region X.
The aim of experimental designs is to optimize the performance of statistical analysis. The contribution of an observation Y_i to the performance is measured in terms of its information. In the present generalized linear models framework, for a single observation at an experimental setting x the elemental information matrix is given by M(x; β) = λ(f(x)^T β) f(x) f(x)^T (3) (see Fedorov and Leonov [12] or Atkinson and Woods [3]), where λ is a positive valued function which is called the intensity function. Note that through the intensity function the elemental information depends on the parameter vector β.
In generalized linear models, the intensity is given by λ(z) = η'(z)^2 / Var(Y). In the case of a canonical link, we have Var(Y) = η'(f(x)^T β), and the intensity reduces to the variance, λ = η'. In particular, in the gamma model with inverse link the intensity function is λ(z) = κ/z^2 (5) because the minus sign in the inverse of the link function does not affect the intensity (cf. Gaffke et al. [14]). The (per experiment) Fisher information of n independent observations Y_i at experimental settings x_i is then given by M(x_1, ..., x_n; β) = Σ_{i=1}^n λ(f(x_i)^T β) f(x_i) f(x_i)^T. The aim of finding an exact optimal design x*_1, ..., x*_n is to optimize the Fisher information in a certain sense because its inverse is proportional to the asymptotic covariance matrix of the maximum likelihood estimator for β (see Fahrmeir and Kaufmann [11]).
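As a small numerical sketch (not part of the original text), the intensity and the elemental information matrix of the gamma model with inverse link can be written down directly; here the simple linear regression f(x) = (1, x)^T from the running example and the nominal value β = (1, 2)^T are illustrative assumptions.

```python
import numpy as np

# Intensity for the gamma model with inverse (canonical) link: lambda(z) = kappa / z^2.
def intensity(z, kappa=1.0):
    return kappa / z ** 2

# Elemental information matrix M(x; beta) = lambda(f(x)^T beta) * f(x) f(x)^T,
# here for the simple linear regression f(x) = (1, x)^T.
def elemental_info(x, beta, kappa=1.0):
    f = np.array([1.0, x])
    return intensity(f @ beta, kappa) * np.outer(f, f)

beta = np.array([1.0, 2.0])      # illustrative nominal parameter value
M_x = elemental_info(0.5, beta)  # linear component 1 + 2*0.5 = 2, intensity 1/4
```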
As this discrete optimization problem is too difficult, in general, we will deal with approximate (continuous) designs ξ in the spirit of Kiefer [23] (see also Silvey [39, p. 15]) throughout the remainder of the present paper. An approximate design ξ is defined on the experimental region X by mutually distinct support points x_1, ..., x_m and corresponding weights w_1, ..., w_m > 0 such that Σ_{i=1}^m w_i = 1. In terms of an exact design, the support points x_i may be interpreted as the distinct experimental settings and the weights w_i as their corresponding relative frequencies in the sample. The relaxation of an approximate design is then that the weights w_i may be chosen continuously and need not be multiples of 1/n. The standardized (per observation) information matrix of a design ξ is defined by M(ξ; β) = Σ_{i=1}^m w_i λ(f(x_i)^T β) f(x_i) f(x_i)^T. Design optimization is now concerned with finding an approximate design ξ* which minimizes a convex real-valued criterion function Φ of the Fisher information M(ξ; β) of the design ξ. A design ξ* will then be called Φ-optimal when it minimizes Φ(ξ), i.e., Φ(ξ*) = min_ξ Φ(ξ). As the information matrix depends on the parameter vector β, the obtained design ξ* is locally Φ-optimal at a given parameter value β [5] and may change with β. To avoid the parameter dependence, so-called robust versions of the criteria can be considered like "Bayesian" criteria which involve a weighting measure ("prior") on the parameters (see Atkinson et al. [2, ch. 18]) or "minimax" criteria which aim at minimizing the worst case scenario for the parameter settings (see the "standardized minimax" criteria in [8]). In the following, we will focus on the local D- and IMSE-criteria and the corresponding maximin efficiency ("standardized maximin") criteria.
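The standardized information matrix of an approximate design is a weighted sum of elemental information matrices; the following sketch (an illustration added here, again assuming f(x) = (1, x)^T, κ = 1, and an arbitrary nominal β) computes it for the equal-weight two-point design on the endpoints of [0, 1].

```python
import numpy as np

def intensity(z):                      # gamma model with inverse link, kappa = 1
    return 1.0 / z ** 2

# Standardized information matrix of an approximate design:
# M(xi; beta) = sum_i w_i * lambda(f(x_i)^T beta) * f(x_i) f(x_i)^T
def info_matrix(support, weights, beta, f=lambda x: np.array([1.0, x])):
    p = len(beta)
    M = np.zeros((p, p))
    for x, w in zip(support, weights):
        fx = f(x)
        M += w * intensity(fx @ beta) * np.outer(fx, fx)
    return M

beta = np.array([1.0, 1.0])
M = info_matrix([0.0, 1.0], [0.5, 0.5], beta)  # equal weights on the endpoints of [0, 1]
```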
The D-criterion is the most commonly used design criterion. It is related to the estimation of the model parameters β and aims at minimizing the determinant of the asymptotic covariance matrix, Φ(M) = det(M^{-1}) for positive definite information matrix M, and Φ(M) = ∞ for singular M. A design ξ* is then called locally D-optimal at β when det(M(ξ*; β)^{-1}) = min_ξ det(M(ξ; β)^{-1}). The D-criterion can be motivated by the fact that it measures the (squared) volume of the asymptotic confidence ellipsoid of the maximum likelihood estimator for β. However, its popularity predominantly stems from its nice analytic properties.
Note that in the present situation the property of M(ξ ; β) being nonsingular does not depend on the value of the parameter vector β because the intensity λ(f(x) T β) is greater than zero for all x ∈ X and all β ∈ B.
The definition of the IMSE-criterion (alternatively also called I-, V-, or Q-optimality in the literature) is based on the estimation (prediction) of the mean response μ(x; β). It aims at minimizing the average asymptotic variance of the predicted mean response μ̂(x) = μ(x; β̂), where averaging is taken with respect to a standardized measure ν on X (see Li and Deng [25,26]). For a generalized linear model, the asymptotic variance is given by aVar(μ̂(x)) = η'(f(x)^T β)^2 f(x)^T M(ξ; β)^{-1} f(x) for all x ∈ X. For a canonical link, we have λ = η' and hence aVar(μ̂(x)) = λ(f(x)^T β)^2 f(x)^T M(ξ; β)^{-1} f(x). The integrated mean-squared error (IMSE) is then defined as the average prediction variance with respect to a given standardized measure ν on the experimental region X (ν(X) = 1). By a standard method to express the IMSE-criterion (see, e.g., Li and Deng [26]), the asymptotic variance can be rewritten as a trace. Hence, the IMSE is given by IMSE(ξ; β, ν) = trace(V(β; ν) M(ξ; β)^{-1}), where V(β; ν) = ∫ λ(f(x)^T β)^2 f(x) f(x)^T ν(dx) denotes a weighted "moment" matrix with respect to the measure ν. Note that the leading term under the integral in V(β; ν) differs from that in the virtual information matrix M(ν; β) by replacing the intensity λ by λ^2. Moreover, in contrast with the D-criterion, the IMSE-criterion does not solely depend on the information matrix M(ξ; β), but also depends through the weighting matrix V(β; ν) explicitly on the parameter vector β and additionally on the measure ν as a supplementary argument. The IMSE-criterion is thus defined by Φ(M; β, ν) = trace(V(β; ν) M^{-1}). A design ξ* is then called locally IMSE-optimal with respect to ν at β when trace(V(β; ν) M(ξ*; β)^{-1}) = min_ξ trace(V(β; ν) M(ξ; β)^{-1}).
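The IMSE value trace(V(β; ν) M(ξ; β)^{-1}) is easy to evaluate once M and V are assembled; the sketch below (added for illustration, assuming f(x) = (1, x)^T, κ = 1, β = (1, 1)^T, and a discrete uniform weighting measure ν on the endpoints {0, 1}) does exactly that.

```python
import numpy as np

def intensity(z):
    return 1.0 / z ** 2

f = lambda x: np.array([1.0, x])
beta = np.array([1.0, 1.0])

def info_matrix(support, weights):
    M = np.zeros((2, 2))
    for x, w in zip(support, weights):
        fx = f(x)
        M += w * intensity(fx @ beta) * np.outer(fx, fx)
    return M

# Weighting matrix V(beta; nu) = integral of lambda(f^T beta)^2 f f^T d(nu),
# here for a discrete uniform measure nu on the endpoints {0, 1}.
def weighting_matrix(points, nu_weights):
    V = np.zeros((2, 2))
    for x, w in zip(points, nu_weights):
        fx = f(x)
        V += w * intensity(fx @ beta) ** 2 * np.outer(fx, fx)
    return V

M = info_matrix([0.0, 1.0], [0.5, 0.5])
V = weighting_matrix([0.0, 1.0], [0.5, 0.5])
imse = np.trace(V @ np.linalg.inv(M))   # IMSE-criterion value of this design
```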
To avoid the parameter dependence of an optimal design under local criteria, we will also consider as "robust" alternatives maximin efficiency criteria which are also called standardized optimality criteria (see Dette et al. [10]). For this, we first have to introduce the concept of efficiency. Let the local criterion Φ_β at β depend homogeneously on the information matrix, i.e., Φ_β(ξ) = φ(M(ξ; β)) for some function φ on the set of positive definite matrices satisfying φ(cM) = c^{-1} φ(M) for c > 0 (cf. Pukelsheim [33, ch. 5], for the related concept of information functions). Then, the efficiency of a design ξ (locally at β) is defined by eff(ξ; β) = Φ_β(ξ*_β)/Φ_β(ξ), where ξ*_β is the Φ-optimal design (locally at β). Maximin efficiency then aims at maximizing the worst efficiency inf_{β ∈ B'} eff(ξ; β) over a given subset B' of interest of the parameter region B. In order to arrive at a minimization problem, we define the maximin efficiency criterion by the inverse relation Φ_ME(ξ) = sup_{β ∈ B'} Φ_β(ξ)/Φ_β(ξ*_β). (14) Note that Φ_ME is convex if, for all β, the local criteria Φ_β are convex. For maximin D-efficiency, we have to choose the homogeneous version Φ_β(ξ) = (det(M(ξ; β)))^{-1/p} of the local D-criterion (see [33, ch. 6]) to get the maximin D-efficiency criterion Φ_{D-ME}(ξ) = sup_{β ∈ B'} (det(M(ξ*_β; β))/det(M(ξ; β)))^{1/p}, where ξ*_β denotes the locally D-optimal design at β. The D-efficiency can then be interpreted as the proportion of observations required under the D-optimal design ξ*_β to obtain the same value of the determinant as for design ξ. For example, an efficiency of 0.5 means that with a D-optimal design ξ*_β only half as many observations as for ξ are necessary to get the same precision. A design ξ* is then called maximin D-efficient on B' when Φ_{D-ME}(ξ*) = min_ξ Φ_{D-ME}(ξ).
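The local D-efficiency (det M(ξ; β)/det M(ξ*_β; β))^{1/p} can be sketched numerically; the following added illustration (assuming f(x) = (1, x)^T, κ = 1, β = (1, 1)^T, and using the balanced endpoint design as the locally D-optimal reference from the running example) compares it with an unbalanced two-point design.

```python
import numpy as np

def intensity(z):
    return 1.0 / z ** 2

f = lambda x: np.array([1.0, x])

def info_matrix(support, weights, beta):
    M = np.zeros((2, 2))
    for x, w in zip(support, weights):
        fx = f(x)
        M += w * intensity(fx @ beta) * np.outer(fx, fx)
    return M

# Local D-efficiency eff(xi; beta) = (det M(xi; beta) / det M(xi*; beta))^(1/p),
# comparing an unbalanced two-point design with the balanced (locally D-optimal) one.
beta = np.array([1.0, 1.0])
p = 2
M_opt = info_matrix([0.0, 1.0], [0.5, 0.5], beta)
M_xi = info_matrix([0.0, 1.0], [0.7, 0.3], beta)
eff = (np.linalg.det(M_xi) / np.linalg.det(M_opt)) ** (1 / p)
```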
The local IMSE-criterion is already homogeneous because it is a linear criterion. Thus, the maximin IMSE-efficiency criterion can be defined directly as Φ_{IMSE-ME}(ξ; ν) = sup_{β ∈ B'} trace(V(β; ν) M(ξ; β)^{-1}) / trace(V(β; ν) M(ξ*_β; β)^{-1}), where ξ*_β denotes the locally IMSE-optimal design at β. A design ξ* is then called maximin IMSE-efficient with respect to ν on B' when Φ_{IMSE-ME}(ξ*; ν) = min_ξ Φ_{IMSE-ME}(ξ; ν).
In particular, for the gamma model with inverse link we have λ(z) = κ/z^2 (see (5)), which implies that M(ξ; β) = κ Σ_{i=1}^m w_i f(x_i) f(x_i)^T / (f(x_i)^T β)^2 and V(β; ν) = κ^2 ∫ f(x) f(x)^T / (f(x)^T β)^4 ν(dx). Hence, in both the D- and the IMSE-criterion the shape parameter κ occurs only as a factor which does not affect the optimization problem. Without loss of generality, we may thus assume κ = 1 in the remainder of the text.
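That κ only rescales the criteria can be checked numerically; in the added sketch below (assuming f(x) = (1, x)^T, β = (1, 1)^T, and the equal-weight endpoint design), det(M) scales with κ^p, so the ranking of designs is unchanged.

```python
import numpy as np

f = lambda x: np.array([1.0, x])
beta = np.array([1.0, 1.0])

def info_matrix(kappa):
    # intensity lambda(z) = kappa / z^2; equal-weight design on {0, 1}
    M = np.zeros((2, 2))
    for x in (0.0, 1.0):
        fx = f(x)
        M += 0.5 * (kappa / (fx @ beta) ** 2) * np.outer(fx, fx)
    return M

# kappa enters the information matrix only as a factor, so det(M) scales
# with kappa^p (here p = 2) and criterion rankings are unaffected.
M1, M5 = info_matrix(1.0), info_matrix(5.0)
ratio = np.linalg.det(M5) / np.linalg.det(M1)   # = kappa^p = 5^2
```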

Equivariance
Invariance and equivariance play an important role for optimal design in linear models. However, these concepts can also be applied in the context of generalized linear models as established in Radloff and Schwabe [34].
The essential idea of equivariance in the design setup is to transfer an already known optimal design on a given (standardized) experimental region to another experimental region of interest by a suitable transformation while keeping the model structure unchanged. The most prominent approach of this kind is the method of canonical transformation propagated by Ford et al. [13].
Throughout we accompany each conceptual step by a simple running example (Example 1). We start with a one-to-one transformation g : X → Z which maps the experimental region X onto a potentially different region Z.
Example 1 Let X = [0, 1] be the unit interval. Then, the shift and scale transformation g(x) = a + cx with c > 0 maps X one-to-one onto the interval Z = [a, b], where b = a + c.
The next ingredient connects the transformation g with the vector of regression functions: f is said to be linearly equivariant with respect to g if there exists a (nonsingular) matrix Q_g such that f(g(x)) = Q_g f(x) for all x ∈ X, which will be assumed to hold throughout the remainder of this text.
Example (Example 1 continued) Let f(x) = (1, x)^T be the vector of regression functions for a simple one-dimensional linear regression, p = 2, such that the linear component is f(x)^T β = β_0 + β_1 x. Then, for g(x) = a + cx the transformation matrix Q_g has rows (1, 0) and (a, c), since f(g(x)) = (1, a + cx)^T = Q_g (1, x)^T.
In contrast with the situation in linear models, additionally a transformation g̃ : B → B̃ of the parameter vector β is required in the present setup of generalized linear models. This approach of equivariance with respect to a pair (g, g̃) of transformations of the settings x and the parameters β, respectively, is in accordance with the general concept of equivariance in statistical analysis (see, e.g., Lehmann [24, ch. 6]).
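The linear equivariance f(g(x)) = Q_g f(x) of Example 1 can be verified mechanically; the sketch below (an added illustration with arbitrary shift a = 2 and scale c = 3) checks it on a grid.

```python
import numpy as np

# Shift-and-scale transformation g(x) = a + c*x from Example 1 and the
# matrix Q_g with f(g(x)) = Q_g f(x) for f(x) = (1, x)^T.
a, c = 2.0, 3.0
g = lambda x: a + c * x
f = lambda x: np.array([1.0, x])
Q_g = np.array([[1.0, 0.0],
                [a,   c]])

checks = all(np.allclose(f(g(x)), Q_g @ f(x)) for x in np.linspace(0.0, 1.0, 11))
```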
A natural choice for the transformation g̃ is a reparameterization which leaves the value of the linear component unchanged, f(g(x))^T g̃(β) = f(x)^T β for all x ∈ X, that is, g̃(β) = Q_g^{-T} β.
Example (Example 1 continued) For g(x) = a + cx and simple linear regression f(x) = (1, x)^T, the transformation matrix for the parameter vector is Q_g^{-T}, with rows (1, -a/c) and (0, 1/c). If g is chosen in such a way that β̃ = g̃(β) = (0, 1)^T, i.e., c = β_1 and a = β_0, then g represents essentially the canonical transformation used in Ford et al. [13].
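The reparameterization g̃(β) = Q_g^{-T} β and the canonical-transformation special case can both be checked numerically, as in this added sketch (the values a = 2, c = 3 and β = (0.7, 1.3)^T are arbitrary illustrations).

```python
import numpy as np

a, c = 2.0, 3.0
f = lambda x: np.array([1.0, x])
g = lambda x: a + c * x
Q_g = np.array([[1.0, 0.0], [a, c]])

# Reparameterization g~(beta) = Q_g^{-T} beta leaves the linear component
# unchanged: f(g(x))^T g~(beta) = f(x)^T beta for all x.
g_tilde = lambda beta: np.linalg.solve(Q_g.T, beta)

beta = np.array([0.7, 1.3])
same = all(np.isclose(f(g(x)) @ g_tilde(beta), f(x) @ beta) for x in (0.0, 0.4, 1.0))

# Canonical-transformation choice a = beta_0, c = beta_1 standardizes beta
# to (0, 1)^T; here b = (2, 3)^T matches a = 2, c = 3 above.
b = np.array([2.0, 3.0])
beta_std = np.linalg.solve(Q_g.T, b)
```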
Note that for each pair (g, g̃) of transformations the mean response and the intensity remain unchanged, μ(g(x); g̃(β)) = μ(x; β) and λ(f(g(x))^T g̃(β)) = λ(f(x)^T β). Having this in mind, we study how these transformations act on a design and its information matrix: For a design ξ with support points x_i and corresponding weights w_i, i = 1, ..., m, we denote by ξ_g its image under the transformation g, i.e., ξ_g has support points z_i = g(x_i) with weights w_i, i = 1, ..., m, respectively, and is hence a design on Z. Then, for the associated information matrices we obtain M(ξ_g; g̃(β)) = Q_g M(ξ; β) Q_g^T (17) (see Radloff and Schwabe [34]). In short, the pair (g, g̃) of simultaneous transformations induces the transformation M(ξ; β) → Q_g M(ξ; β) Q_g^T of the information matrix.
Example (Example 1 continued) Let ξ be supported on the endpoints x_1 = 0 and x_2 = 1 of the experimental region X = [0, 1] with corresponding weights w_1 = 1 - w and w_2 = w, respectively. For the gamma model with simple linear regression, f(x) = (1, x)^T, denote by λ_0 = λ(β_0) and λ_1 = λ(β_0 + β_1) the intensities at the support points 0 and 1. The information matrix of ξ is given by M(ξ; β), with rows ((1 - w)λ_0 + wλ_1, wλ_1) and (wλ_1, wλ_1). For g(x) = a + cx, the induced design ξ_g is supported on the endpoints z_1 = a and z_2 = b of the induced experimental region Z = [a, b] with weights 1 - w at a and w at b. Under β̃ = g̃(β), the intensities at a and b are λ_0 and λ_1, respectively, and the information matrix of ξ_g is M(ξ_g; β̃) = Q_g M(ξ; β) Q_g^T.
The final step is the equivariance of the criterion Φ. In analogy to the terminology in Heiligers and Schneider [19] for linear models, we will call a convex optimality criterion Φ equivariant with respect to a transformation g if Φ preserves the ordering under the transformation g, i.e., for any two designs ξ_1 and ξ_2 the relation Φ(ξ_1) ≤ Φ(ξ_2) implies Φ((ξ_1)_g) ≤ Φ((ξ_2)_g). In the present situation of generalized linear models, more care has to be taken, since in addition the parameter vector β and eventually some supplementary arguments have to be changed in the criterion during the transformation. We therefore introduce a second criterion function Ψ = Ψ_{g,g̃} for the designs on Z which may depend on the transformations g and g̃. Then, we will call a pair of criteria Φ and Ψ equivariant with respect to the pair (g, g̃) of transformations when the ordering is preserved, i.e., the relation Φ(ξ_1) ≤ Φ(ξ_2) implies Ψ((ξ_1)_g) ≤ Ψ((ξ_2)_g). With these definitions, we obtain the following result that in the case of equivariance the optimality of designs is preserved under transformations.
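The identity M(ξ_g; g̃(β)) = Q_g M(ξ; β) Q_g^T from (17) can be verified directly for the running example; the sketch below (added here, with the arbitrary choices a = 2, c = 3, β = (1, 1)^T) builds both sides.

```python
import numpy as np

def intensity(z):
    return 1.0 / z ** 2

f = lambda x: np.array([1.0, x])

def info_matrix(support, weights, beta):
    M = np.zeros((2, 2))
    for x, w in zip(support, weights):
        fx = f(x)
        M += w * intensity(fx @ beta) * np.outer(fx, fx)
    return M

# Check M(xi_g; g~(beta)) = Q_g M(xi; beta) Q_g^T for the shift-and-scale map.
a, c = 2.0, 3.0
Q_g = np.array([[1.0, 0.0], [a, c]])
beta = np.array([1.0, 1.0])
beta_t = np.linalg.solve(Q_g.T, beta)                 # g~(beta) = Q_g^{-T} beta

M_xi = info_matrix([0.0, 1.0], [0.5, 0.5], beta)      # design on [0, 1]
M_xig = info_matrix([a, a + c], [0.5, 0.5], beta_t)   # image design on [a, a + c]
identity_holds = np.allclose(M_xig, Q_g @ M_xi @ Q_g.T)
```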

Theorem 1 Let the pair of criteria Φ and Ψ be equivariant with respect to the pair (g, g̃) of transformations. If ξ* is Φ-optimal, then its image (ξ*)_g is Ψ-optimal.
We will now establish that the D- and IMSE-criteria are equivariant if simultaneously the parameter vector β and potential supplementary arguments are transformed. By (17), we obtain for the D-criterion det(M(ξ_g; g̃(β))) = det(Q_g)^2 det(M(ξ; β)). (18) Let Φ be the local D-criterion at β and Ψ be the local D-criterion at g̃(β); then the pair (Φ, Ψ) is equivariant under simultaneous transformation of β, and by Theorem 1 the locally D-optimal design can be transferred.

Corollary 1 If ξ* is locally D-optimal on X at β, then (ξ*)_g is locally D-optimal on Z at β̃ = g̃(β).
Example (Example 1 continued) For the gamma model with simple linear regression, f(x) = (1, x)^T, the locally D-optimal design ξ* on the unit interval X = [0, 1] is supported by the endpoints x_1 = 0 and x_2 = 1 and assigns equal weights w* = 1/2 to these endpoints for any value of the parameter vector β ∈ B (see Gaffke et al. [14]). Then, for any other interval Z = [a, b] as the experimental region we may consider the transformation g(x) = a + cx, c = b - a, together with g̃(β) = Q_g^{-T} β. By Corollary 1, the design (ξ*)_g which assigns equal weights w* = 1/2 to the endpoints z_1 = a and z_2 = b of the experimental region Z is locally D-optimal for any value of the parameter vector β̃ = g̃(β) ∈ B̃ = g̃(B).
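Local D-optimality of the balanced endpoint design can also be confirmed numerically via a Kiefer-Wolfowitz-type equivalence check: the sensitivity ψ(x) = λ(f(x)^T β) f(x)^T M^{-1} f(x) must not exceed p on X, with equality at the support points. The following added sketch (assuming β = (1, 1)^T, κ = 1) performs this check on a grid.

```python
import numpy as np

def intensity(z):
    return 1.0 / z ** 2

f = lambda x: np.array([1.0, x])
beta = np.array([1.0, 1.0])

# Equal-weight design on the endpoints of [0, 1]
M = np.zeros((2, 2))
for x in (0.0, 1.0):
    fx = f(x)
    M += 0.5 * intensity(fx @ beta) * np.outer(fx, fx)
M_inv = np.linalg.inv(M)

# Sensitivity function of the D-criterion; the design is locally D-optimal
# iff psi(x) <= p on X with equality at the support points.
def psi(x):
    fx = f(x)
    return intensity(fx @ beta) * fx @ M_inv @ fx

grid = np.linspace(0.0, 1.0, 1001)
max_psi = max(psi(x) for x in grid)   # attained at the support points 0 and 1
```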
In the situation of Example 1, the locally D-optimal design does not depend on the parameter β. This will typically not hold true, if the underlying model for the linear component becomes more complex.

Example 2 We consider the gamma model with the linear component f(x)^T β = β_0 + β_1 x_1 + β_2 x_2 in two covariates x_1 and x_2 on the experimental region X = [0, 1]^2. The parameter region B consists of all β = (β_0, β_1, β_2)^T for which the linear component is positive on X. This region, depicted in the left panel of Fig. 1, constitutes a cone in the three-dimensional Euclidean space.
According to Burridge and Sebastiani [4], the minimally supported design ξ* which assigns equal weights w*_i = 1/3 to the support points x_i, i = 1, 2, 3, is locally D-optimal at β when β satisfies β_0^2 - β_1 β_2 ≤ 0. The subset B_1 of these β in B is shown in the right panel of Fig. 1. Now equivariance can be used to find D-optimal designs for other parameter values different from those in B_1. For this, we use transformations which map the experimental region onto itself, Z = X: g_2(x_1, x_2) = (1 - x_1, 1 - x_2), g_3(x_1, x_2) = (1 - x_1, x_2), and g_4(x_1, x_2) = (x_1, 1 - x_2). Here g_3 and g_4 represent the reflection with respect to the first and second covariate x_1 and x_2, respectively, and g_2 is the simultaneous reflection with respect to both covariates. Alternatively, g_2 can also be described as a rotation by 180 degrees. We also introduce g_1 = id as the identity mapping.
The regression function f(x) = (1, x_1, x_2)^T is linearly equivariant with respect to these transformations with corresponding matrices Q_{g_2}, with rows (1, 0, 0), (1, -1, 0), (1, 0, -1); Q_{g_3}, with rows (1, 0, 0), (1, -1, 0), (0, 0, 1); and Q_{g_4}, with rows (1, 0, 0), (0, 1, 0), (1, 0, -1). For each g_k, k = 2, 3, 4, the corresponding parameter transformation is given by g̃_k(β) = Q_{g_k}^{-T} β. Because g_k maps the experimental region X onto itself, also the related transformation g̃_k maps the parameter region onto itself, B̃ = B.
Starting from the parameter subregion B_1, where the design ξ* is locally D-optimal, we can define parameter subregions B_k = g̃_k(B_1) induced by the transformations g_k, k = 2, 3, 4. These subregions are characterized explicitly in Table 1 by the inequalities in the last column, and they are also shown in the right panel of Fig. 1. All these subregions constitute cones. Now by equivariance we can conclude that the designs ξ*_k = (ξ*)_{g_k} are locally D-optimal at β for β ∈ B_k. The results are explicitly stated in Table 1. Note that the same optimal designs have been obtained before in Idais [20] by a straightforward application of the celebrated Kiefer-Wolfowitz equivalence theorem. Further note that the interior region shown in the right panel of Fig. 1 contains those values for the parameter vector β for which locally D-optimal designs are supported on all four vertices and the corresponding weights depend on the values of β (see Idais [20]).
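The reflection matrices of Example 2 can be checked against the defining property f(g_k(x)) = Q_{g_k} f(x); the added sketch below assumes the unit square as experimental region and the reflections as described in the text.

```python
import numpy as np

f = lambda x: np.array([1.0, x[0], x[1]])

# Reflections of the unit square from Example 2 and matrices Q_k
# with f(g_k(x)) = Q_k f(x).
g2 = lambda x: np.array([1 - x[0], 1 - x[1]])     # simultaneous reflection
g3 = lambda x: np.array([1 - x[0], x[1]])         # reflection in x1
g4 = lambda x: np.array([x[0], 1 - x[1]])         # reflection in x2
Q2 = np.array([[1.0, 0, 0], [1, -1, 0], [1, 0, -1]])
Q3 = np.array([[1.0, 0, 0], [1, -1, 0], [0, 0, 1]])
Q4 = np.array([[1.0, 0, 0], [0, 1, 0], [1, 0, -1]])

rng = np.random.default_rng(0)
pts = rng.random((20, 2))                          # random points in [0, 1]^2
ok = all(np.allclose(f(g(x)), Q @ f(x))
         for g, Q in ((g2, Q2), (g3, Q3), (g4, Q4)) for x in pts)
```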
Next we investigate equivariance for the IMSE-criterion. There also the supplementary argument of the weighting measure ν has to be transformed into its image measure ν_g under g. Similar to the information matrix in (17), the weighting matrix V is equivariant under the transformations g and g̃ in the case of a generalized linear model with canonical link, V(g̃(β); ν_g) = Q_g V(β; ν) Q_g^T. This implies trace(V(g̃(β); ν_g) M(ξ_g; g̃(β))^{-1}) = trace(V(β; ν) M(ξ; β)^{-1}). (21) Let Φ be the local IMSE-criterion at β with respect to ν and Ψ be the local IMSE-criterion at g̃(β) with respect to ν_g; then the pair (Φ, Ψ) is equivariant under simultaneous transformation of β and the supplementary argument ν, and by Theorem 1 the locally IMSE-optimal design can be transferred.
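The equivariance of the weighting matrix, V(g̃(β); ν_g) = Q_g V(β; ν) Q_g^T, can be verified for the running example; the sketch below (added here, with a discrete weighting measure ν on {0, 1} and the arbitrary choices a = 2, c = 3, β = (1, 1)^T) does so.

```python
import numpy as np

def intensity(z):
    return 1.0 / z ** 2

f = lambda x: np.array([1.0, x])

def weighting_matrix(points, nu_w, beta):
    V = np.zeros((2, 2))
    for x, w in zip(points, nu_w):
        fx = f(x)
        V += w * intensity(fx @ beta) ** 2 * np.outer(fx, fx)
    return V

# Check V(g~(beta); nu_g) = Q_g V(beta; nu) Q_g^T for a discrete weighting
# measure nu on {0, 1} and its image nu_g on {a, a + c}.
a, c = 2.0, 3.0
Q_g = np.array([[1.0, 0.0], [a, c]])
beta = np.array([1.0, 1.0])
beta_t = np.linalg.solve(Q_g.T, beta)

V = weighting_matrix([0.0, 1.0], [0.5, 0.5], beta)
V_g = weighting_matrix([a, a + c], [0.5, 0.5], beta_t)
holds = np.allclose(V_g, Q_g @ V @ Q_g.T)
```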

Corollary 2
If ξ* is locally IMSE-optimal on X at β with respect to ν, then (ξ*)_g is locally IMSE-optimal on Z at β̃ = g̃(β) with respect to ν_g.
Table 1 Minimally supported locally D-optimal designs and optimality regions for the two-factor gamma model.
Note that the results of Corollaries 1 and 2 hold not only for any generalized linear model, but also, more generally, for all models where the elemental information matrix is of the form (3) (see, e.g., Schmidt and Schwabe [37], for further examples).
Example (Example 1 continued) In order to apply the equivariance result of Corollary 2 to the gamma model with simple linear regression, f(x) = (1, x)^T, the locally IMSE-optimal design ξ* on the unit interval X = [0, 1] has to be determined first.
Proposition 1 For the one-factor gamma model with simple linear regression f(x)^T β = β_0 + β_1 x on the experimental region X = [0, 1], locally IMSE-optimal designs can be found which are supported on the endpoints 0 and 1 of the experimental region.
Locally optimal weights 1 - w* at 0 and w* at 1, respectively, depend on the weighting measure and are given in parts (a)-(c) of the proposition for the respective measures ν. The proof of Proposition 1 is given in the "Appendix." Note that in Proposition 1 the locally optimal weights may depend on the weighting measure ν used. In particular, for the two measures in Proposition 1 (b) and (c), which are concentrated on the endpoints and the midpoint, respectively, the locally optimal weights at 0 and 1 are interchanged. For the continuous uniform measure (Proposition 1 (a)), equal weights, w* = 1/2, are assigned to both endpoints, and the (locally) IMSE-optimal design does not depend on the value of the parameter vector β. Now equivariance can be employed to obtain locally IMSE-optimal designs for any other interval Z = [a, b] as the experimental region. We again use the transformation g(x) = a + cx, c = b - a, together with g̃(β) = Q_g^{-T} β. Let ξ* be the locally IMSE-optimal design of Proposition 1 at β with respect to one of the given weighting measures ν. Then, by Corollary 2, the design (ξ*)_g is the locally IMSE-optimal design at β̃ = g̃(β) with respect to ν_g.
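Part (a) of Proposition 1 can be illustrated numerically: for the uniform weighting measure the IMSE-optimal weight at 1 comes out as 1/2 regardless of β. The added sketch below (assuming κ = 1 and the arbitrary nominal value β = (1, 1)^T; the integral over ν is approximated by a grid average) searches over the weight w.

```python
import numpy as np

def intensity(z):
    return 1.0 / z ** 2

f = lambda x: np.array([1.0, x])
beta = np.array([1.0, 1.0])
xgrid = np.linspace(0.0, 1.0, 501)   # quadrature grid for the uniform measure nu

def imse(w):
    # design with weight 1 - w at 0 and w at 1; uniform weighting measure on [0, 1]
    M = (1 - w) * intensity(f(0.0) @ beta) * np.outer(f(0.0), f(0.0)) \
        + w * intensity(f(1.0) @ beta) * np.outer(f(1.0), f(1.0))
    M_inv = np.linalg.inv(M)
    vals = [intensity(f(x) @ beta) ** 2 * f(x) @ M_inv @ f(x) for x in xgrid]
    return np.mean(vals)             # average prediction variance

ws = np.linspace(0.05, 0.95, 91)
w_star = ws[np.argmin([imse(w) for w in ws])]   # close to 1/2, as in Proposition 1 (a)
```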
In order to obtain locally optimal designs at a given value of β̃ on the transformed design region Z, the inverse transformations g^{-1} of g and g̃^{-1}(β̃) = Q_g^T β̃ of g̃, respectively, have to be used. We give this general result only for the case of the D- and the IMSE-criterion.

Corollary 3 Let the equivariance conditions be fulfilled.
(a) The design (ξ*)_g is locally D-optimal on Z at β̃ if ξ* is locally D-optimal on X at β = g̃^{-1}(β̃). (b) The design (ξ*)_g is locally IMSE-optimal on Z at β̃ with respect to ν if ξ* is locally IMSE-optimal on X at β = g̃^{-1}(β̃) with respect to ν_{g^{-1}}.
Example (Example 1 continued) By Corollary 3, we can obtain locally IMSE-optimal designs for the one-factor gamma model with simple linear regression f(z)^T β̃ = β̃_0 + β̃_1 z on a given interval Z = [a, b] with respect to suitably specified weighting measures ν_Z. The inversely transformed parameter vector β = g̃^{-1}(β̃) is given by β = Q_g^T β̃ = (β̃_0 + a β̃_1, c β̃_1)^T with c = b - a. By Corollary 3 and Proposition 1, the optimal designs are supported on the endpoints a and b of the interval, and the optimal weights 1 - w* at a and w* at b, respectively, can be obtained as (a) 1 - w* = w* = 1/2 for ν_Z the uniform (Lebesgue) measure on the interval [a, b], (b) the weights of Proposition 1 (b) evaluated at β = g̃^{-1}(β̃) for ν_Z the discrete uniform measure on the endpoints a and b, and (c) the weights of Proposition 1 (c) evaluated at β = g̃^{-1}(β̃) for ν_Z the one-point measure on the midpoint (a + b)/2 of the experimental region.
The continuous uniform measure in (a) is the common choice for the IMSE-criterion. The discrete uniform measure in (b) lays equal interest in the extreme values of the experimental region and may also be applied for the restricted experimental region X = {a, b} which can be used to describe two groups "a" and "b." In that case, the IMSE-optimal weights are proportional to the standard deviations λ_x^{1/2} = 1/(f(x)^T β̃), x = a, b, in the groups, in accordance with known results on A-optimality for group means. The one-point measure in (c) coincides with the c-criterion for estimating the mean response at the midpoint of the interval.
Note that the D- and IMSE-criteria are equivariant with respect to any transformation g of x for which the regression function f is linearly equivariant, f(g(x)) = Q_g f(x), and the corresponding transformation g̃(β) = Q_g^{-T} β of β. For other criteria, additional requirements may have to be fulfilled by the transformations to obtain equivariance results. For example, in the case of Kiefer's class of Φ_q-criteria (including the A-criterion), the transformation matrix Q_g should be orthogonal or, at least, satisfy that Q_g^T Q_g is a multiple of the p × p identity matrix. For the equivariance of maximin efficiency criteria, we require additionally that the underlying local criteria are multiplicatively equivariant with respect to (g, g̃), which means that for every β ∈ B there is a constant c > 0 such that Φ_{g̃(β)}(ξ^g) = c Φ_β(ξ) uniformly in ξ. Then, for the corresponding maximin efficiency criterion, we get the equivariance relation (22), where it is used that, by Theorem 1, the image of the locally optimal design at β under g is locally optimal at g̃(β). By (18), the homogeneous version Φ_β(ξ) = (det(M(ξ; β)))^{-1/p} of the local D-criterion is multiplicatively equivariant with c = det(Q_g)^{-2/p} > 0. Accordingly, the local IMSE-criterion is multiplicatively equivariant with c = 1 by (21). Hence, both the maximin D-efficiency criterion and the maximin IMSE-efficiency criterion retain their value under the transformation and are thus equivariant.
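As a quick plausibility check, the transformation pair (g, g̃) and the determinant factor det(Q_g)² can be verified numerically. The following sketch (my own toy example, not taken from the paper) uses the one-factor gamma model with f(x) = (1, x)^T, intensity λ(η) = 1/η², the affine map g(x) = a + cx, and the induced matrix Q_g = [[1, 0], [a, c]]:

```python
# Numerical sketch (example values are mine): check f(g(x)) = Q_g f(x),
# invariance of the linear component under (g, gtilde), and
# det(M(xi^g; gtilde(beta))) = det(Q_g)^2 * det(M(xi; beta))
# for the one-factor gamma model with f(x) = (1, x)^T, lambda(eta) = 1/eta^2.

def lam(eta):            # intensity of the gamma model with canonical link
    return 1.0 / eta**2

def info(design, beta):  # 2x2 information matrix M(xi; beta)
    m = [[0.0, 0.0], [0.0, 0.0]]
    for x, w in design:
        f = (1.0, x)
        eta = f[0]*beta[0] + f[1]*beta[1]
        for i in range(2):
            for j in range(2):
                m[i][j] += w * lam(eta) * f[i] * f[j]
    return m

def det2(m):
    return m[0][0]*m[1][1] - m[0][1]*m[1][0]

a, c = 2.0, 3.0                       # g(x) = a + c x maps [0, 1] onto [2, 5]
beta = (1.0, 0.5)                     # nominal values with positive linear component
# f(g(x)) = (1, a + c x)^T = Q_g f(x) with Q_g = [[1, 0], [a, c]]
Qinv_T = [[1.0, -a/c], [0.0, 1.0/c]]  # Q_g^{-T}
beta_t = (Qinv_T[0][0]*beta[0] + Qinv_T[0][1]*beta[1],
          Qinv_T[1][0]*beta[0] + Qinv_T[1][1]*beta[1])

xi  = [(0.0, 0.5), (1.0, 0.5)]        # equal weights at the endpoints of [0, 1]
xig = [(a + c*x, w) for x, w in xi]   # transformed design on [2, 5]

# the linear component (and hence the intensity) is unchanged:
for (x, _), (z, _) in zip(xi, xig):
    assert abs((beta[0] + beta[1]*x) - (beta_t[0] + beta_t[1]*z)) < 1e-12
# det(Q_g) = c, so the determinants differ exactly by the factor c^2:
ratio = det2(info(xig, beta_t)) / det2(info(xi, beta))
print(ratio, c**2)
```

For the homogeneous D-criterion det(M)^{-1/p} with p = 2, this ratio of c² translates into the multiplicative constant c = det(Q_g)^{-2/p} = c^{-1} stated above.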
Corollary 4 Let the local criteria be multiplicatively equivariant with respect to (g, g̃).
(a) If ξ* is maximin D-efficient on B′, then (ξ*)^g is maximin D-efficient on B̃′ = g̃(B′).
(b) If ξ* is maximin IMSE-efficient with respect to ν on B′, then (ξ*)^g is maximin IMSE-efficient with respect to ν^g on B̃′ = g̃(B′).
Example (Example 1 continued) In the gamma model with simple linear regression on [0, 1], the design ξ* which assigns equal weights 1/2 to both endpoints 0 and 1 is both locally D-optimal and, by Proposition 1, locally IMSE-optimal with respect to the uniform measure ν on [0, 1]. Further maximin D- and IMSE-efficient designs are derived in Sect. 5.
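The equal-weight property in this example can be checked numerically. In the following sketch (example values are mine), det M for a two-point design on {0, 1} factorizes as w(1 − w)λ_0λ_1, so the D-optimal weight is 1/2 for every admissible β:

```python
# Small numerical check (not from the paper): for the gamma model with
# f(x) = (1, x)^T and intensity lambda(eta) = 1/eta^2 on {0, 1}, det(M)
# factorizes as w*(1-w)*lambda_0*lambda_1, so the locally D-optimal
# two-point design has equal weights 1/2 for every beta in B.

def det_info(w, beta):
    b0, b1 = beta
    lam0 = 1.0 / b0**2          # intensity at x = 0
    lam1 = 1.0 / (b0 + b1)**2   # intensity at x = 1
    # M = (1-w)*lam0*f(0)f(0)^T + w*lam1*f(1)f(1)^T with f(x) = (1, x)^T
    m00 = (1 - w)*lam0 + w*lam1
    m01 = w*lam1
    m11 = w*lam1
    return m00*m11 - m01*m01    # equals w*(1-w)*lam0*lam1

for beta in [(1.0, 0.5), (1.0, 5.0), (2.0, -1.0)]:
    grid = [i/1000 for i in range(1, 1000)]
    w_star = max(grid, key=lambda w: det_info(w, beta))
    print(beta, w_star)   # the maximizing weight is 1/2 in all cases
```

The factorization shows why the D-optimal weights do not depend on β here, while the IMSE-optimal weights of Proposition 1 (b) and (c) do.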

Extended Equivariance
The concept of equivariance can be extended when the structure of the intensity function is compatible with some transformation of the parameters. More specifically, we will consider situations where the intensity function λ is multiplicatively equivariant with respect to a transformation g̃_0 of β, i.e., there exists a constant c_0 > 0 such that λ(f(x)^T g̃_0(β)) = c_0 λ(f(x)^T β) for all x ∈ X. For example, in the gamma model with inverse link we have λ(f(x)^T c̃β) = c̃^{-2} λ(f(x)^T β) for any scaling factor c̃ (see Idais and Schwabe [21] for some specific models). Hence, the intensity function is multiplicatively equivariant with respect to any transformation g̃_0(β) = c̃β which scales all components of the parameter vector β simultaneously by the same factor c̃ > 0, and the multiplicative factor is c_0 = c̃^{-2} > 0. Note that the scaling g̃_0 retains the positivity of the linear component, f(x)^T g̃_0(β) = c̃ f(x)^T β > 0, for the scaled vector g̃_0(β) = c̃β, c̃ > 0. Thus, the maximal region B of parameter values β such that the linear component f(x)^T β is positive constitutes a cone in the p-dimensional Euclidean space, i.e., for each vector β ∈ B and every positive scale factor c̃ > 0 the scaled vector c̃β also lies in B.
Another, more basic example arises in Poisson regression with canonical log link when the value β_0 of the intercept parameter is changed to β̃_0. The corresponding transformation of β can be described by the (affine) linear mapping g̃_0(β) = β + (β̃_0 − β_0) e_1, where e_1 denotes the first unit vector of appropriate length p. Then, the intensity function λ(z) = exp(z) is multiplicatively equivariant with respect to g̃_0 with multiplicative factor c_0 = exp(β̃_0 − β_0) > 0. This has been implicitly applied in the literature when concluding that optimal designs do not depend on the value β_0 of the intercept parameter (see, e.g., Russell et al. [36]).
To embed these transformations g̃_0 into the concept of equivariance of Sect. 3, we combine them with the identity mapping g = id on the experimental region X. Then, the multiplicative equivariance obviously carries over from the intensity to the information matrix.
To transfer optimal designs by Theorem 1, it remains to show that the criteria under consideration are order preserving with respect to transformations which act multiplicatively on the intensity. By Lemma 1, we directly get det(M(ξ; g̃_0(β))) = c_0^p det(M(ξ; β)) and, hence, the equivariance of the D-criterion.
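The scaling behavior of the intensity and its consequence for the determinant can be illustrated as follows (a hedged sketch with my own example values, for the gamma intensity λ(η) = 1/η² and p = 2):

```python
# Hedged numerical sketch (example values are mine): in the gamma model with
# inverse link, lambda(eta) = 1/eta^2, so scaling beta by ctilde scales the
# intensity by c0 = ctilde^{-2} and hence det(M) by c0^p.

def lam(eta):
    return 1.0 / eta**2

def info(design, beta):  # p = 2, f(x) = (1, x)^T
    m = [[0.0, 0.0], [0.0, 0.0]]
    for x, w in design:
        f = (1.0, x)
        eta = beta[0] + beta[1]*x
        for i in range(2):
            for j in range(2):
                m[i][j] += w * lam(eta) * f[i] * f[j]
    return m

def det2(m):
    return m[0][0]*m[1][1] - m[0][1]*m[1][0]

beta, ct = (1.0, 0.5), 3.0
beta_s = (ct*beta[0], ct*beta[1])          # gtilde_0(beta) = ctilde * beta
xi = [(0.0, 0.4), (1.0, 0.6)]              # an arbitrary design on {0, 1}

# lambda(ctilde * eta) = ctilde^{-2} * lambda(eta) pointwise ...
assert abs(lam(ct*1.5) - ct**-2 * lam(1.5)) < 1e-12
# ... hence det(M(xi; ctilde*beta)) = (ctilde^{-2})^p * det(M(xi; beta)):
ratio = det2(info(xi, beta_s)) / det2(info(xi, beta))
print(ratio, ct**(-4))
```

Since the factor c_0^p is the same for every design ξ, the ordering of designs under the D-criterion is unchanged, which is exactly what Corollary 5 (a) exploits.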

Corollary 5 If the intensity function λ is multiplicatively equivariant with respect to g̃_0, then:
(a) A locally D-optimal design ξ* at β is also locally D-optimal at β̃ = g̃_0(β).
(b) If, additionally, η is multiplicatively equivariant, a locally IMSE-optimal design ξ* at β with respect to ν is also locally IMSE-optimal at β̃ = g̃_0(β) with respect to ν.
When a whole family of such transformations g̃_0 is available, such as scaling by c̃ > 0 in the gamma model with inverse link or shifting the intercept in Poisson regression, then we can use the result of Corollary 5 to reduce the number of parameters in the optimization problem. To this end, we first solve the optimization problem for a standardized parameter setting and then transfer the obtained optimal design to a general parameter vector by a suitable choice of the transformation g̃_0. For example, in the gamma model one component of the parameter vector can be set equal to 1 for standardization. Then, a general parameter vector can be obtained by choosing c̃ equal to the nominal value of the component of the parameter vector used for standardization. Similarly, in Poisson regression, we can first set the intercept parameter equal to 0 and then transfer the optimal design to the parameter vector with given nominal value β_0.
By combination of the transformation g̃_0 with the linear transformations of the preceding Sect. 3, we get an extension of Corollaries 1 and 2 by Theorem 1.

Corollary 6 If the intensity function λ is multiplicatively equivariant with respect to g̃_0, then:
(a) If ξ* is locally D-optimal on X at β, then (ξ*)^g is locally D-optimal on Z at β̃ = g̃_0(Q_g^{-T} β).
(b) If ξ* is locally IMSE-optimal on X at β with respect to ν and if, additionally, η is multiplicatively equivariant, then (ξ*)^g is locally IMSE-optimal on Z at β̃ = g̃_0(Q_g^{-T} β) with respect to ν^g.
This result indicates that for a given transformation g of x the associated transformation g̃(β) = g̃_0(Q_g^{-T} β) of β need not be unique. Moreover, we may let the transformation g̃_0 = g̃_{0,β} depend on the parameter vector β, where the intensity function λ is multiplicatively equivariant with respect to g̃_{0,β} for any β. Then, also the multiplicative factor c_0 = c_{0,β} will depend on β, so that λ(f(x)^T g̃_{0,β}(β)) = c_{0,β} λ(f(x)^T β). In combination with the linear transformation of Sect. 3, this leads to a nonlinear transformation g̃(β) = g̃_{0,β}(Q_g^{-T} β) of the parameter vector β such that the information matrix is equivariant with respect to the pair (g, g̃) of transformations. For the gamma model with inverse link, this can be accomplished by choosing the scaling factor c̃ = c̃_β in dependence on β.
The result of Corollary 6 carries over directly to the nonlinear transformation, when g̃_0 is replaced by g̃_{0,β}.
The standardization with respect to the intercept can be extended to more complex models.
Similar results hold for IMSE-optimality.
For maximin efficiency criteria, we additionally allow here that the multiplicative factor in the equivariance of the underlying local criteria may depend on the parameter β, c = c_β. This does not affect the arguments in (22), and hence, the resulting maximin efficiency criteria remain equivariant. The homogeneous version of the local D-criterion and the local IMSE-criterion are multiplicatively equivariant with c_β = c_{0,β}^{-1} det(Q_g)^{-2/p} > 0 and c_β = c_{0,β} > 0, respectively. Hence, for both the maximin D-efficiency and the maximin IMSE-efficiency criterion, the value is not changed under the transformation. These criteria are thus equivariant, and the result of Corollary 4 remains valid, so that maximin efficient designs can also be transferred for nonlinear transformations g̃(β) = g̃_{0,β}(Q_g^{-T} β) when the intensity function is multiplicatively equivariant with respect to g̃_{0,β} for all β.

Invariance
While equivariance can be used to transfer optimal designs, the concept of invariance allows reduction in the complexity of finding optimal designs by exploiting symmetries (see, e.g., Schwabe [38, ch. 3] in the case of linear models). As in linear models, we need a (finite) group G of transformations g which map the experimental region X onto itself. For each of these transformations g, the regression functions f are assumed to be linearly equivariant, f(g(x)) = Q_g f(x). For generalized linear models, we require additionally that the corresponding transformations g̃ of β also constitute a group G̃ such that the set (G, G̃) of pairs (g, g̃) of transformations shares the group structure. This requirement is automatically fulfilled for the linear transformations g̃(β) = Q_g^{-T} β, because the transformation matrices Q_g, g ∈ G, constitute a group with respect to matrix multiplication. For extended equivariance (Sect. 4), the factors c_{0,β} also have to share the group property. This holds in the gamma model for rescaling by c̃_β which leaves the value of the standardized component of the parameter vector unchanged. Similarly, for Poisson regression, standardization of the intercept to 0 preserves the group structure.
The final ingredient for invariance is that the optimality criterion Φ is invariant with respect to the group G of transformations, i.e., Φ(ξ^g) = Φ(ξ) for all g ∈ G and any design ξ. Then, we can make use of convexity arguments to improve designs by symmetrization. For this, define by ξ̄ = (1/|G|) Σ_{g∈G} ξ^g the symmetrized version of a design ξ with respect to the group G, where |G| denotes the number of elements in the (finite) group G. Note that ξ̄ is itself a design and that ξ̄ is invariant with respect to G, i.e., ξ̄^g = ξ̄ for all g ∈ G. If Φ is invariant and convex, we obtain Φ(ξ̄) ≤ (1/|G|) Σ_{g∈G} Φ(ξ^g) = Φ(ξ), where the inequality follows from convexity and the equality from invariance. From this majorization property, we can conclude that the designs which are invariant with respect to G constitute an essentially complete class with regard to Φ. This means that we can confine the search for a Φ-optimal design to the class of invariant designs.
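The symmetrization step can be illustrated numerically. The following sketch (my own toy example) uses the one-factor model with constant intensity, which corresponds to the invariant case β_1 = 0 up to a proportionality factor, and the convex criterion Φ = −log det M:

```python
# Hedged sketch (my own toy example): symmetrizing a design over the
# reflection group G = {id, g}, g(x) = 1 - x, cannot worsen an invariant
# convex criterion. Here Phi = -log det M with constant intensity.
import math

def info(design):
    m = [[0.0, 0.0], [0.0, 0.0]]
    for x, w in design:
        f = (1.0, x)
        for i in range(2):
            for j in range(2):
                m[i][j] += w * f[i] * f[j]
    return m

def phi(design):  # convex criterion -log det M
    m = info(design)
    return -math.log(m[0][0]*m[1][1] - m[0][1]*m[1][0])

xi = [(0.0, 0.7), (1.0, 0.3)]                      # asymmetric design
xi_g = [(1.0 - x, w) for x, w in xi]               # reflected design
xi_bar = [(x, w/2) for x, w in xi + xi_g]          # symmetrized version

assert abs(phi(xi_g) - phi(xi)) < 1e-12            # invariance of Phi
print(phi(xi_bar) <= phi(xi) + 1e-12)              # majorization holds
```

Here ξ̄ is the equal-weight design on {0, 1}, and the printed comparison reflects exactly the majorization Φ(ξ̄) ≤ Φ(ξ) used above.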

Theorem 2 If Φ is invariant (with respect to G) and convex, then there exists an invariant design ξ* (with respect to G) which is Φ-optimal over all designs.
The class of invariant designs is often much smaller than the class of all designs, and optimization can be simplified. Invariant designs are uniform on orbits O x = {g(x); g ∈ G} ⊂ X , i.e., all x in the same orbit have the same weight. In particular, for an invariant design, either all x in an orbit O are included with weight w O or the whole orbit is not in the support of the design. For optimization in the class of invariant designs, it remains to find the optimal orbits and the corresponding optimal weights which is often a much easier task than to optimize over all possible designs.
Example (Example 1 continued) For the reflection group G = {id, g}, g(x) = 1 − x on [0, 1], the orbits are all of the form {x, 1 − x} for x < 1/2 and {1/2} for x = 1/2, respectively. In the one-factor gamma model with simple linear regression, it is known that the optimal designs are supported at the endpoints 0 and 1 (see Gaffke et al. [14]). Thus, the only remaining orbit for an optimal design is {0, 1}, and hence, there is only one invariant design which assigns equal weights 1/2 to each endpoint. This design is optimal with respect to each convex invariant criterion.
In the case of local optimality criteria, the requirement of invariance is rather restrictive. In particular, the local parameter β has to be invariant under all transformations, i.e., g̃(β) = β for all g ∈ G. This condition typically holds only for a few values of β.
Example (Example 1 continued) For the one-factor gamma model with simple linear regression under the reflection g(x) = 1 − x, the parameter β is only invariant if β 1 = 0, i.e., there is no effect of the covariate x. The invariant design which assigns equal weights 1/2 to the endpoints is locally optimal at β for β 1 = 0.
Example (Example 2 continued) For the two-factor gamma model with multiple linear regression on [0, 1]², the reflections g_2, g_3, and g_4 are all self-inverse, and the composition of any two of them yields the third one. Together with the identity g_1 = id, the reflections constitute a group G = {g_1, g_2, g_3, g_4} of transformations. Locally optimal designs are supported at the vertices of [0, 1]² (see Gaffke et al. [14]). The vertices all lie on one orbit, and hence the unique invariant design on the vertices assigns equal weights 1/4 to each of the vertices. Under the group G, the parameter vector β is only invariant if β_1 = β_2 = 0, i.e., if neither covariate x_1 nor x_2 has an effect. Thus, the invariant design which assigns equal weights 1/4 to the vertices is locally optimal at β only in the case β_1 = β_2 = 0.
Note that in both examples above locally optimal designs are obtained for the situation of constant intensity λ. In that case, the information matrix is proportional to that in the corresponding linear model with the same linear component. Hence, the locally optimal design coincides with the optimal design in the linear model (see Cox [6,Section 4]).
In more complex situations, however, invariance may be helpful for local optimality at certain parameter values which are invariant with respect to g̃ for all g ∈ G. To this end, first note that in the case of a finite group G of transformations g the corresponding transformation matrices Q_g are unimodular, i.e., |det(Q_g)| = 1 (see Schwabe [38, ch. 3]). For the IMSE-criterion, we additionally require that the weighting measure ν is invariant with respect to G, i.e., ν^g = ν for all g ∈ G.
Corollary 7 If g̃(β) = β for all g ∈ G, then there exists a locally D-optimal design ξ* at β which is invariant with respect to G.
If, additionally, ν is invariant with respect to G, then there exists a locally IMSE-optimal design ξ* at β with respect to ν which is invariant with respect to G.
Example (Example 2 continued) In the two-factor gamma model on [0, 1]², we consider nominal parameter values β with β_1 = 0, i.e., where the first covariate x_1 has no effect. Such parameter vectors are invariant with respect to the linear transformation g̃_3(β) = (β_0 + β_1, −β_1, β_2)^T associated with the reflection g_3(x) = (1 − x_1, x_2)^T of the first covariate x_1. As the transformation g_3 is self-inverse, together with the identity id it constitutes a group G_3 = {id, g_3}. Then, the local D-criterion at such β with β_1 = 0 is invariant with respect to G_3. By Corollary 7, a locally D-optimal design can be found in the class of designs which are invariant with respect to G_3. Moreover, here we can also restrict attention to designs supported by the vertices. With respect to G_3, the relevant orbits are then {x_1, x_2} and {x_3, x_4}, and invariant designs on the vertices have equal weights w at x_1 and x_2 and equal weights 1/2 − w at x_3 and x_4, respectively. We will denote such designs by ξ̄_w. The optimization problem for a locally D-optimal design thus reduces to finding the optimal weight w*. Note that for β_1 = 0 the intensities on the orbits are constant, i.e., λ_1 = λ_2 and λ_3 = λ_4, where again λ_i denotes the intensity at x_i. For the designs ξ̄_w, the determinant of the information matrix can then be calculated explicitly in the case β_1 = 0. The optimal weight w* can be determined by straightforward computations for β_2 ≠ 0, and w* = 1/4 results for β_2 = 0 or β_2 = −2β_0. The dependence of the optimal weight w* on γ_2 is shown in Fig. 4. The resulting invariant design ξ̄_{w*} is locally D-optimal at β with β_1 = 0. An analogous result holds for β_2 = 0, when the reflection g_4 of the second covariate x_2 is used instead of g_3.
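The optimization over the invariant designs ξ̄_w can be reproduced by a simple grid search. In the following sketch, the vertex labels x_1 = (0, 0), x_2 = (1, 0), x_3 = (0, 1), x_4 = (1, 1) are my assumption, and the gamma intensity λ(η) = 1/η² is used:

```python
# Numerical sketch (vertex labels are my assumption): for invariant designs
# xi_bar_w with weight w at each of (0,0), (1,0) and 1/2 - w at each of
# (0,1), (1,1), maximize det(M) over w when beta_1 = 0 in the gamma model
# with f(x) = (1, x1, x2)^T and lambda(eta) = 1/eta^2.

def det3(m):
    return (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
          - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
          + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))

def det_info(w, beta):
    pts = [((0.0, 0.0), w), ((1.0, 0.0), w),
           ((0.0, 1.0), 0.5 - w), ((1.0, 1.0), 0.5 - w)]
    m = [[0.0]*3 for _ in range(3)]
    for (x1, x2), wt in pts:
        f = (1.0, x1, x2)
        eta = beta[0] + beta[1]*x1 + beta[2]*x2
        lam = 1.0 / eta**2
        for i in range(3):
            for j in range(3):
                m[i][j] += wt * lam * f[i] * f[j]
    return det3(m)

for b2 in [0.0, 1.0, 3.0]:
    beta = (1.0, 0.0, b2)                  # beta_1 = 0, beta_0 = 1
    grid = [i/2000 for i in range(1, 1000)]
    w_star = max(grid, key=lambda w: det_info(w, beta))
    print(b2, round(w_star, 3))            # w* = 0.25 for b2 = 0
```

For β_2 = 0 the intensity is constant and the grid search returns the uniform weight 1/4; for β_2 ≠ 0 the optimal weight shifts toward the orbit with the larger intensity.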

Example 3
In the two-factor gamma model on [0, 1]², there are further symmetries which can be employed. In particular, we may consider parameter vectors β with equal slopes, i.e., β_1 = β_2 = β for some β, when both covariates x_1 and x_2 have an effect of the same size. These values of the parameter vector are invariant with respect to the linear transformation g̃_5(β) = (β_0, β_2, β_1)^T associated with the permutation g_5(x) = (x_2, x_1)^T of the covariates. The transformation g_5 is self-inverse and constitutes together with the identity id a group G_5 = {id, g_5}. Because locally optimal designs are supported by the vertices of [0, 1]², there are only three relevant orbits, {x_1}, {x_2, x_3}, and {x_4}. Optimal invariant designs can thus be characterized by two weights, w*_1 assigned to x_1 and w*_4 assigned to x_4, while the remaining equal weights w*_2 = w*_3 = (1 − w*_1 − w*_4)/2 are assigned to each of x_2 and x_3. For the local D-criterion, optimal weights have been obtained in Gaffke et al. [14, Theorem 4.3]. There it was shown that minimally supported designs are locally D-optimal at β with β_1 = β_2 = β for β > β_0 or −β_0/2 < β ≤ β_0/3, with weights w*_2 = w*_3 = 1/3 and w*_1 = 1/3 or w*_4 = 1/3, respectively. In the intermediate case β_0/3 < β ≤ β_0, the locally D-optimal designs are supported on all four vertices, with weights depending only on the ratio γ = β/β_0. In particular, uniform weights w*_i = 1/4 are again seen to be optimal in the case β = 0 of constant intensity, as indicated in Fig. 4 by the vertical and horizontal dashed lines at γ_2 = β_2 = 0 and w* = 0.25, respectively.
Also for IMSE-optimality, the optimal weights depend only on γ = β/β_0 by the scaling property of Sect. 4. The locally optimal weights can only be determined numerically. For selected values of γ, we present some numerical solutions in Table 2 for the case of the uniform weighting measure ν on [0, 1]², which is invariant with respect to g_5. These results were obtained by the method of augmented Lagrange multipliers implemented in the R package Rsolnp [35]. Similar to the D-criterion, the locally IMSE-optimal designs are seen to be minimally supported on x_1, x_2, and x_3 when the standardized effect γ = β/β_0 is sufficiently large, but the optimal weights vary, in contrast to the local D-criterion. All four vertices are required for smaller values of γ ≥ 0. Note that this parameter region is considerably larger here than for the D-criterion. In the case γ = 0, the optimal weights are again uniform on all vertices. Moreover, by the additional reflection g_2(x) = (1 − x_1, 1 − x_2)^T, the optimal weights can be transferred from γ > 0 to γ < 0 by the nonlinear transformation g̃_2 described in Sect. 4. For example, in the last column of Table 2, the locally IMSE-optimal design at β̃ = (1, −3/7, −3/7)^T is obtained from the locally IMSE-optimal design at β = (1, 3, 3)^T by g_2 and the corresponding (nonlinear) transformation g̃_2(β), which results in γ̃ = −γ/(1 + 2γ).
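This nonlinear transfer can be retraced numerically. In the following sketch, the matrix representation of g̃_2 (apply Q_g^{-T}, then rescale so that the intercept is standardized to 1) is my reconstruction from the text:

```python
# Hedged check (my reconstruction): for g2(x) = (1 - x1, 1 - x2) we have
# f(g2(x)) = Q f(x) with Q = [[1,0,0],[1,-1,0],[1,0,-1]], so
# Q^{-T} beta = (b0 + b1 + b2, -b1, -b2), which is then rescaled by
# ctilde = b0 / (b0 + b1 + b2) to standardize the intercept.

def lin(beta, x):            # linear component f(x)^T beta
    return beta[0] + beta[1]*x[0] + beta[2]*x[1]

beta = (1.0, 3.0, 3.0)       # gamma = beta_1 / beta_0 = 3
b = (beta[0] + beta[1] + beta[2], -beta[1], -beta[2])   # Q^{-T} beta
ct = beta[0] / b[0]                                     # rescaling factor
beta_t = tuple(ct*v for v in b)                         # gtilde_2(beta)
print(beta_t)                # intercept 1, slopes -3/7 each

# the linear component at g2(x) equals ctilde times the original one,
# so the intensity lambda = 1/eta^2 is only scaled by ctilde^{-2}:
for x in [(0.0, 0.0), (1.0, 0.0), (0.3, 0.8)]:
    g2x = (1 - x[0], 1 - x[1])
    assert abs(lin(beta_t, g2x) - ct*lin(beta, x)) < 1e-12

gamma = beta[1] / beta[0]
print(abs(beta_t[1]/beta_t[0] - (-gamma/(1 + 2*gamma))) < 1e-12)
```

Starting from γ = 3, this reproduces β̃ = (1, −3/7, −3/7)^T and the stated relation γ̃ = −γ/(1 + 2γ) = −3/7.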
We now turn to maximin efficiency, where invariance can become a powerful tool. For this, we additionally require that the subregion B′ of interest is also invariant with respect to the pairs (g, g̃) of transformations, i.e., g̃(B′) = B′ for all g ∈ G.
As mentioned in Sect. 3, the homogeneous version of the D-criterion and the IMSE-criterion are multiplicatively equivariant. Hence, by (22), both maximin D- and IMSE-efficiency are invariant with respect to any group G of transformations satisfying the conditions of this section.
Corollary 8 If g̃(B′) = B′ for all g ∈ G, then there exists a maximin D-efficient design ξ* on B′ which is invariant with respect to G.
If, additionally, ν is invariant with respect to G, then there exists a maximin IMSE-efficient design ξ* on B′ with respect to ν which is invariant with respect to G.
Example (Example 1 continued) In the one-factor gamma model with simple linear regression on [0, 1], the invariant design ξ̄ which assigns equal weights 1/2 to the endpoints is both maximin D-efficient and maximin IMSE-efficient with respect to the uniform measure ν on [0, 1] on B, as has already been pointed out at the end of Sect. 3.
However, in contrast with the local criteria, there is no general majorization argument available for maximin efficiency criteria which allows restriction of the support of an optimal design to the extremal points of the experimental region. Therefore, to keep the argumentation simple and to concentrate on the concept of invariance, we deliberately confine the support of the designs under consideration to these endpoints. Then, with respect to the reflection g(x) = 1 − x, the only invariant design, which assigns equal weights 1/2 to the endpoints, is maximin efficient for any invariant criterion. In particular, this design is maximin IMSE-efficient on B with respect to any invariant weighting measure ν as specified in Proposition 1.
Although, generally, the determination of the efficiencies requires the knowledge of all locally optimal designs, the maximin efficient design may be constructed without this information as the above example shows. This result can be extended to more complex models.
Example (Example 2 continued) In the two-factor gamma model on [0, 1]² with multiple regression, we first consider maximin efficiency on the region B of all possible values of the parameter vector. This region is invariant under the transformations associated with the group G = {g_1, . . . , g_4} of reflections of the covariates. As in the case of the one-factor gamma model, we deliberately confine the support of the designs to the vertices x_1, . . . , x_4 of the experimental region to keep the argumentation simple. Then, there is only one orbit, which contains all vertices, and the only invariant design with respect to G is the uniform design ξ̄ on the vertices which assigns equal weight 1/4 to each vertex. Hence, the design ξ̄ is maximin efficient on B for any invariant criterion with respect to G.
This result carries over to any parameter subregion B′ which is invariant with respect to G. For example, if the intercept β_0 is restricted to a subset B_0 of its marginal region (0, ∞) or, more specifically, set to a fixed value (B_0 = {β_0}), while the slopes may vary across their corresponding (conditional) regions, then the resulting subregion B′ = {β ∈ B; β_0 ∈ B_0} is invariant with respect to the rescaled transformations g̃ associated with the transformations g ∈ G. Hence, the uniform design ξ̄ is also maximin efficient on B′ for any invariant criterion with respect to G. In particular, this holds for the reduced parameter region C displayed in Fig. 3 when β_0 = 1 is fixed.
Invariance can also be employed in cases where there are fewer symmetries, and thus, there is more than one orbit, so that the weights of the orbits still have to be optimized.
To make use of the symmetries with respect to the transformations g_2 and g_5 jointly, we consider the group G′ = {id, g_2, g_5, g_6} generated by them, where the composition g_6 of g_2 and g_5 is the reflection at the secondary diagonal of the unit square X, g_6(x) = (1 − x_2, 1 − x_1)^T. We again restrict attention to the vertices of the experimental region. Then there are just two orbits, {x_1, x_4} and {x_2, x_3}, of the group G′. The invariant designs ξ̄_w can thus be characterized by the weight w assigned to each of the settings x_1 and x_4 in the first orbit, 0 < w < 1/2, while weight 1/2 − w is assigned to each of the settings x_2 and x_3 in the second orbit. Design optimization is then reduced to determining the optimal weight w.
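The orbit structure under G′ can be computed with a few lines of code (a small sketch; the vertex labels x_1 = (0, 0), x_2 = (1, 0), x_3 = (0, 1), x_4 = (1, 1) are my assumption):

```python
# Small sketch (vertex labels are my assumption): compute the orbits of the
# vertices of [0, 1]^2 under the group G' = {id, g2, g5, g6}, where
# g2(x) = (1 - x1, 1 - x2), g5(x) = (x2, x1), g6(x) = (1 - x2, 1 - x1).
transforms = [lambda x: x,
              lambda x: (1 - x[0], 1 - x[1]),
              lambda x: (x[1], x[0]),
              lambda x: (1 - x[1], 1 - x[0])]

vertices = [(0, 0), (1, 0), (0, 1), (1, 1)]
orbits = set()
for v in vertices:
    orbits.add(frozenset(g(v) for g in transforms))
print(sorted(sorted(o) for o in orbits))
# two orbits: {(0, 0), (1, 1)} and {(0, 1), (1, 0)}
```

This confirms the two orbits {x_1, x_4} and {x_2, x_3}, so an invariant design is fully specified by the single weight w.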

Discussion
In this article, we present an outline of the concepts of equivariance and invariance in the design of experiments for generalized linear models. In contrast with the well-known results in linear models, where only the experimental settings are transformed, we have to consider pairs of transformations in generalized linear models which act simultaneously on the experimental settings and on the location parameters in the linear component. We focus on local optimality and maximin efficiency for the common D- and IMSE-criteria, which allow a wide range of transformations of the experimental settings such as scaling, permutations, or reflections. As in linear models, the transformation of the experimental settings has to act in a linear way on the regression functions of the linear component. The parameters can then be transformed linearly in such a way that the value of the linear component and, hence, of the intensity is not changed (see Radloff and Schwabe [34]). Besides this natural choice, nonlinear transformations of the parameters may also be employed if additional properties of the intensity function can be used. We illustrate this feature by the gamma model with inverse link, for which the intensity is only scaled by a multiplicative factor based on the parameter. This scaling does not affect standardized design criteria like maximin efficiency, and invariance can also be used here. In Table 3, we exhibit which concepts of equivariance and invariance can be used under the model conditions of linear and generalized linear models and, in particular, for the gamma model with canonical (inverse) link.
The general results on equivariance and invariance in generalized linear models can be extended in a straightforward manner to other model specifications in which the intensity depends only on the linear component, as in censoring (see Schmidt and Schwabe [37] for examples). How far the results on nonlinear transformations can be extended, however, depends on the structure of the intensity function. For other optimality criteria, such as the A-, the E- or, more generally, Kiefer's Φ_q-criteria, which are based on the eigenvalues of the information matrix, the use of equivariance and invariance is limited, because additional structures of the transformations would be required, like orthogonality of the transformation matrices Q_g.

[Table 3 Transformations for equivariance (Equiv.) and invariance (Inv.) for the linear model, the GLM, and the gamma model with inverse link; "+" indicates whether the property is required / can be used]
For the case of maximin efficiency in the gamma model, it would also be desirable to obtain majorization results such as those we have found for local optimality. These would allow us to restrict the search for the optimal experimental settings to the extremal points of the experimental region. However, the findings in Gaffke et al. [14] do not carry over, because the arguments used there are of a local nature and do not work uniformly on the parameter region. Alternatively, equivalence theorems could be employed for establishing maximin efficiency (see Pronzato and Pázman [31, ch. 8]), but in their formulation these theorems require that the minimal efficiency is attained inside the parameter region, which is violated in our examples. It thus remains an open problem whether the restriction to the extremal points of the experimental region can be justified.
The concepts of equivariance and invariance can further be extended to models with random effects (see Graßhoff et al. [16] and Debusho and Haines [7] for the estimation of population parameters, and Prus and Schwabe [32] for individual prediction in linear mixed models).