1 Introduction

In this study, we address a multi-objective optimization problem that consists in maximizing a minimum design efficiency. In the optimal design literature, several approaches can be classified as maxi-min efficiency criteria. The most common is the standardized maxi-min criterion, introduced to tackle the problem of parameter uncertainty. That issue, however, is not considered in this work, because it has already been studied extensively (see, for instance, Chen et al. 2015; Dette and Biedermann 2003; Nyquist 2013; Fackle-Fornius et al. 2015; Dette et al. 2007, among others); furthermore, parameter uncertainty is not easily interpretable as a multi-task problem. Examples of maxi-min efficiency criteria that can be interpreted as multi-objective problems are, instead: the SMV-criterion (proposed by Dette (1997)), which aims at an accurate estimation of each of the model parameters, taking into account their different scales (see also López-Fidalgo and Tommasi 2004 and the references therein), and the extensions of the T- and KL-criteria (proposed by Atkinson and Fedorov (1975) and Tommasi et al. (2016), respectively) to handle model uncertainty. Another interesting application might be the identification of a single optimal design for model identification, precise parameter estimation and accurate prediction; this multiple objective could be achieved by maximizing the minimum efficiency across three criteria reflecting these distinct goals.

The maxi-min approach arises naturally when we wish to protect against the worst-case scenario; however, it is difficult to compute the corresponding optimal design (the maxi-min efficiency design) because this criterion is not differentiable. Consequently, a standard directional derivative argument cannot be applied to check whether a given design is optimal: the directional derivative involves an unknown measure; see, for instance, Wong (1992) and Atkinson and Fedorov (1975).

In addition, the construction of the maxi-min efficiency design is far from straightforward. It is frequently found numerically through some algorithm, but there is then no way to prove that the returned design is really the optimum.

The main contribution of this study is to prove the equivalence between the maxi-min efficiency approach and a Bayesian criterion for a specific prior, which is differentiable. Hence, the directional derivative of the Bayesian criterion can be used to check for minimum-efficiency optimality. Note that the Bayesian criterion is another kind of multi-objective optimality function, being a convex combination of different quantities. The connection between maxi-min efficiency and Bayesian optimum designs has already been explored by other authors; see, for instance, Schervish (1995), Müller and Pázman (1998) and Dette et al. (2007). Other versions of the equivalence theorem exist, but they are tailored to specific problems; for instance, Dette and Biedermann (2003) and Berger et al. (2000) consider parameter uncertainty in a non-linear model under the D-criterion.

In this study, we prove a more general version of the equivalence theorem, as it covers any multi-objective problem that can be expressed as a minimum design efficiency (for arbitrary component-wise criteria). Furthermore, following ideas similar to those in Chen et al. (2017), we provide a method to determine the prior probability that matches the maxi-min efficiency criterion with Bayesian optimality; this makes the application of the equivalence theorem possible.

The paper is organized as follows. In Sect. 2, we recall some background information and the notation used. In Sect. 3, we state the equivalence theorem and the rule to determine the prior probability that makes the minimum efficiency and the Bayesian criteria equivalent. Section 4 concerns a pair of illustrative examples. Section 5 provides some conclusions, and Appendix A includes the proofs of the theoretical results.

2 Background and notation

In this section, we introduce the main ideas of optimal experimental design and the notation used in what follows.

Let us assume that \(f(y,x,\theta )\) is a statistical model that describes the response Y at the experimental condition x, which may be chosen in a compact set \({{\mathcal {X}}}\) and \(\theta \in \Theta \subseteq \mathrm{I\!R}^p\) denotes a \(p\times 1\) parameter vector.

An approximate design is a probability measure on the design space \({\mathcal {X}}\) with a finite support, i.e.

$$\begin{aligned} \xi = \left\{ \begin{array}{cccc} x_1 & x_2 & \cdots & x_r\\ \xi (x_1) & \xi (x_2) & \cdots & \xi (x_r) \end{array}\right\} , \end{aligned}$$

where \(\xi (x_i)\approx n_i/n,\) and \(n_i\) is the number of observations to be taken at the experimental condition \(x_i\), \(i=1, \dots , r\).

The aim is to find a design \(\xi ^*_\theta \) maximizing (minimizing) a concave (convex) optimality criterion \(\Phi (\xi ;\theta )\), a function from the space of all designs \(\Xi \) to the real line. An optimal design \(\xi ^*_\theta \) may thus be sought according to several criteria reflecting different inferential goals: parameter estimation, prediction or model discrimination. Many optimality criteria for the precise estimation of \(\theta \) are concave (or convex) functions of the information matrix of a design \(\xi \in \Xi \), i.e. \(\Phi (\xi ;\theta )=\Phi [M(\xi ,\theta )]\), where

$$\begin{aligned} M(\xi ,\theta )=\int _{{\mathcal {X}}} \textrm{E}_Y \left\{ \frac{\partial \log f(y,x,\theta )}{\partial \theta }\, \frac{\partial \log f(y,x,\theta )}{\partial \theta ^T} \right\} \, d\xi (x). \end{aligned}$$
(1)
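For a nonlinear regression model with homoscedastic Gaussian errors, the expectation in (1) reduces to the outer product of the gradient of the mean response, so that \(M(\xi ,\theta )\) is a weighted sum of rank-one matrices over the support points. The following sketch illustrates this computation numerically; the exponential mean response and the design used here are illustrative choices, not taken from this paper.

```python
import numpy as np

# Sketch of the information matrix (1) for a nonlinear regression model with
# unit-variance Gaussian errors, where the expected score outer product
# reduces to grad_eta(x) grad_eta(x)^T.  The exponential mean response
# eta(x, (a, b)) = a * exp(-b * x) and the design are illustrative choices.

def grad_eta(x, theta):
    a, b = theta
    return np.array([np.exp(-b * x), -a * x * np.exp(-b * x)])

def info_matrix(support, weights, theta):
    # M(xi, theta) = sum_i w_i g(x_i) g(x_i)^T
    return sum(w * np.outer(g, g)
               for w, g in ((w, grad_eta(x, theta))
                            for x, w in zip(support, weights)))

M = info_matrix(support=[0.0, 0.33, 1.0], weights=[0.4, 0.4, 0.2],
                theta=(1.0, 3.0))
```

As a sum of positive-semidefinite rank-one terms, the resulting matrix is symmetric and positive semidefinite, as an information matrix must be.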

If \(\Phi (\xi ;\theta )\) is a non-negative concave function, then a measure of the goodness of a design \(\xi \) with respect to the optimal design \(\xi ^*_\theta \) is the following efficiency function:

$$\begin{aligned} 0\le \textrm{Eff}(\xi ,\theta ) =\frac{\Phi (\xi ,\theta )}{\Phi (\xi ^*_\theta ,\theta )} \le 1. \end{aligned}$$
(2)

If \(\Phi (\xi ;\theta )\) is convex, then the ratio in Eq. (2) should be inverted, i.e. \(\textrm{Eff}(\xi ,\theta )={\Phi (\xi ^*_\theta ,\theta )}/{\Phi (\xi ,\theta )}\).

3 Minimum efficiency and pseudo-Bayesian criteria

Let \(\Phi _i(\xi ;\theta _i)\), \(i=1,\ldots ,k\), be k different concave optimality criteria that reflect distinct goals and possibly depend on an unknown parameter vector \(\theta _i\). Let \(\theta _{0i}\) be a guessed value for \(\theta _i\); thus, \(\xi _i^*=\xi _{i;\theta _{0i}}^*=\arg \max _{\xi \in \Xi } \Phi _i(\xi ;\theta _{0i})\) are the locally optimal designs. When we are interested in a compromise design that is ‘good’ for all the different criteria, we need to combine \(\Phi _i(\xi ;\theta _i)\), \(i=1,\ldots ,k\), into a multi-objective criterion. To this aim, as suggested by Dette (1997), we first standardize the criteria \(\Phi _i(\xi )=\Phi _i(\xi ;\theta _{0i})\), obtaining their efficiency functions: \(\textrm{Eff}_i(\xi )={\Phi _i(\xi )}/{\Phi _i(\xi _{i}^*)}\), \(i=1,\ldots ,k\).
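In code, the standardization step is just an element-wise division of criterion values by the corresponding optimal values; the numbers below are invented placeholders, not values from the examples of this paper.

```python
import numpy as np

# Sketch of the standardization step: criterion values Phi_i(xi) are divided
# by the optimal values Phi_i(xi_i*).  All numbers are invented placeholders.
phi_vals = np.array([2.0, 5.0, 1.2])   # Phi_i(xi) for a candidate design xi
phi_opt  = np.array([4.0, 5.0, 3.0])   # Phi_i(xi_i*) at the local optima
effs = phi_vals / phi_opt              # Eff_i(xi), each in [0, 1]
```

Standardization puts heterogeneous criteria on the common scale \([0,1]\), which is what makes their comparison and combination meaningful.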

An easy way of combining the standardized criteria is through a linear combination. If we have some prior knowledge about the relative importance of the criteria \(\Phi _i(\xi )\), \(i=1,\ldots ,k\), we might compute a Bayesian optimum design by maximizing the following criterion:

$$\begin{aligned} \Phi _B(\xi ;\pi )\!=\!\sum _{i=1}^{k} \pi _i\cdot \textrm{Eff}_{i}(\xi ),\quad 0\le \pi _i\le 1,\;\; \displaystyle \sum _{i=1}^{k}\pi _i =1, \end{aligned}$$
(3)

where \(\pi ^T=(\pi _1,\ldots ,\pi _k)\) is a prior probability on the set \(\{1,\ldots ,k\}\). For an application of this criterion, see for instance Tommasi and López-Fidalgo (2010).
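As a toy numerical illustration of (3), the criterion is simply a prior-weighted average of the component efficiencies; both the efficiencies and the prior below are invented placeholders, not values from this paper.

```python
import numpy as np

# Sketch of the Bayesian compound criterion (3): a convex combination of
# standardized efficiencies.  All numbers are illustrative placeholders.
eff = np.array([0.70, 0.68, 0.87, 0.68])   # Eff_i(xi) for i = 1, ..., 4
pi  = np.array([0.00, 0.55, 0.10, 0.35])   # prior probability on {1, ..., 4}

assert np.isclose(pi.sum(), 1.0) and np.all(pi >= 0)
phi_B = float(pi @ eff)                    # Phi_B(xi; pi)
```

Being a convex combination, \(\Phi _B\) always lies between the smallest and the largest component efficiency.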

A design \(\xi _{\pi }^*\) is Bayesian optimal if and only if \(\partial \Phi _B(\xi _{\pi }^*,{\bar{\xi }};\pi )\le 0\) for any \({\bar{\xi }}\), where

$$\begin{aligned} \partial \Phi _B(\xi _{\pi }^*,{\bar{\xi }};\pi )=\int _{{\mathcal {X}}} \sum _{i=1}^{k} \pi _i \frac{\partial \Phi _{i}(\xi _{\pi }^*,\xi _x)}{\Phi _i(\xi _i^*)}\, {\bar{\xi }}(dx) \end{aligned}$$

is the directional derivative of criterion (3) at \(\xi _{\pi }^*\) in the direction of \({\bar{\xi }}-\xi _{\pi }^*\), and \(\partial \Phi _{i}(\xi _{\pi }^*,\xi _x) \) denotes the directional derivative of the component-wise criterion \(\Phi _i(\cdot )\) at \(\xi _{\pi }^*\) in the direction of \(\xi _x-\xi _{\pi }^*\). It is easy to prove that \(\xi _{\pi }^*\) is a Bayesian optimal design if and only if it satisfies the following inequality:

$$\begin{aligned} \sum _{i=1}^{k} \pi _i \frac{\partial \Phi _{i}(\xi _{\pi }^*,\xi _x)}{\Phi _i(\xi _i^*)}\le 0,\qquad x\in \mathcal{X}, \end{aligned}$$
(4)

and that

$$\begin{aligned} \sum _{i=1}^{k} \pi _i \frac{\partial \Phi _{i}(\xi _{\pi }^*,\xi _x)}{\Phi _i(\xi _i^*)}=0 \quad \text{ at the support points of } \xi _{\pi }^*. \end{aligned}$$
(5)

When we are unable to provide a prior distribution \(\pi \), another possibility for taking into account all the objectives represented by the k different criteria is the following minimum efficiency criterion:

$$\begin{aligned} \Phi (\xi )=\min _{i\in \{1,\ldots ,k\}} \textrm{Eff}_i(\xi )= \bigg [\max _{i \in \{1, \ldots , k\}} \frac{1}{\textrm{Eff}_{i}(\xi )}\bigg ]^{-1}. \end{aligned}$$

This multi-objective optimality function, differently from the previous one, is not differentiable; thus, the computation of \(\Phi \)-optimal designs is far from straightforward.

A design \(\xi ^*\) is a maxi-min efficiency design if and only if

$$\begin{aligned} \xi ^* = \arg \max _{\xi } \min _{i \in \{1, \ldots , k\}} \textrm{Eff}_{i}(\xi ) = \arg \min _{\xi } \max _{i \in \{1, \ldots , k\}} \frac{1}{\textrm{Eff}_{i}(\xi )}. \end{aligned}$$

From the last equation, \(\xi ^*\) is also the design that minimizes the maximum inefficiency optimality criterion:

$$\begin{aligned} \Phi ^{-1}(\xi )=\max _{i \in \{1, \ldots , k\}} \frac{1}{\textrm{Eff}_{i}(\xi )}. \end{aligned}$$
(6)
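In code, the two formulations are reciprocals of one another; the component efficiencies below are invented for illustration only.

```python
# Sketch of the duality between the minimum-efficiency criterion Phi and the
# maximum-inefficiency criterion (6); the efficiencies are made up.
effs = [0.60, 0.50, 0.55, 0.72]

phi = min(effs)                        # Phi(xi): minimum efficiency
phi_inv = max(1.0 / e for e in effs)   # Phi^{-1}(xi): maximum inefficiency
```

Maximizing `phi` over designs is therefore the same problem as minimizing `phi_inv`, which is the formulation used in the propositions below.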

We find maxi-min efficiency designs by minimizing \(\Phi ^{-1}(\xi )\); to this end, we state the following propositions:

Proposition 1

The maximum inefficiency criterion \(\Phi ^{-1}(\xi )\) is a convex function.

The proof is straightforward: each \(1/\textrm{Eff}_i(\xi )\) is convex, being the reciprocal of a positive concave function, and \(\Phi ^{-1}(\xi )\) is the pointwise maximum of these convex functions.

Proposition 2

The directional derivative of \(\Phi ^{-1}(\xi )\) at \(\xi \) in the direction of \({\bar{\xi }}-\xi \) is

$$\begin{aligned} \partial \Phi ^{-1}(\xi ;{\bar{\xi }})=\max _{e_i\in {{\mathcal {C}}}(\xi )} \int _{{\mathcal {X}}} \psi (x,e_i,\xi ){\bar{\xi }}(d x), \end{aligned}$$

where \(e_i\) denotes the i-th canonical vector of the Euclidean space,

$$\begin{aligned} {{\mathcal {C}}}(\xi ) =\left\{ e_i:\;i=\arg \max _{j\in \{1,\ldots ,k\}} \frac{1}{\textrm{Eff}_{j}(\xi )}\right\} = \left\{ e_i:\;i=\arg \min _{j\in \{1,\ldots ,k\}} \textrm{Eff}_{j}(\xi )\right\} , \end{aligned}$$

and \(\displaystyle \psi (x,e_i,\xi ) = -\Phi _{i}(\xi _i^*) \,\frac{\partial \Phi _{i}(\xi ,\xi _x)}{\Phi _{i}^2(\xi )}\).

The proof is deferred to Appendix A.
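The active set \({{\mathcal {C}}}(\xi )\) can be computed by collecting the indices attaining the minimum efficiency; a minimal sketch with invented efficiencies (note that the indices in the code are 0-based):

```python
import numpy as np

# Sketch: the set C(xi) of Proposition 2 collects the canonical vectors of
# the indices attaining the minimum efficiency (equivalently, the maximum
# inefficiency).  The efficiencies are invented; indices are 0-based here.
effs = np.array([0.60, 0.50, 0.55, 0.50])
active = np.flatnonzero(np.isclose(effs, effs.min()))
```

Using `np.isclose` rather than exact equality matters in practice, since efficiencies computed numerically tie only up to rounding error.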

3.1 Equivalence theorem

Bayesian optimum designs are usually found by applying standard algorithms, because the equivalence inequality (4) is completely known. Maxi-min efficiency designs are difficult to determine because the criterion is not differentiable (the corresponding equivalence inequality depends on an unknown measure); see, for instance, Wong (1992). See also Chen et al. (2017) and Dette and Biedermann (2003) for an equivalence theorem for the standardized maxi-min D-optimality criterion. In this section, we provide a new formulation of the equivalence theorem, which establishes a connection between \(\Phi _B(\xi ;\pi )\) and \(\Phi (\xi )\).

Theorem 3

(Equivalence Theorem) A design \(\xi ^*\) is a maxi-min efficiency design if and only if there exists a probability distribution \(\pi ^*\) on the index set

$$\begin{aligned} {{\mathcal {I}}}(\xi ^*)=\left\{ i: \;i=\arg \min _{j\in \{1,\ldots , k\}} \textrm{Eff}_j(\xi ^*) \right\} , \end{aligned}$$
(7)

such that \(\xi ^*\) is a Bayesian optimum design for the prior distribution \(\pi ^*\), that is, if and only if \(\xi ^*\) fulfils the following inequality,

$$\begin{aligned} \sum _{i\in {{\mathcal {I}}}(\xi ^*)} \pi _i^* \,\frac{\partial \Phi _{i}(\xi ^*,\xi _x)}{\Phi _{i}(\xi _i^*)}\le 0 ,\qquad x\in \mathcal{X}. \end{aligned}$$
(8)

The detailed proof of the equivalence theorem is deferred to Appendix A. In addition, from (5), we can state the following corollary:

Corollary 3.1

The quantity \(\sum _{i\in {{\mathcal {I}}}(\xi ^*)} \pi _i^* \,\frac{\partial \Phi _{i}(\xi ^*,\xi _x)}{\Phi _{i}(\xi _i^*)}\) attains its maximum value of zero at every support point of \(\xi ^*\).

The equivalence between the minimum efficiency and the Bayesian optimality criteria can be used to check whether a design is optimal with respect to criterion (6). Recently, several algorithms have been applied to construct optimal designs numerically; see, for instance, Dette et al. (2003), who apply the Nelder–Mead algorithm, Chen et al. (2015, 2020), where the authors use particle swarm optimization, or Belmiro et al. (2015), where a semi-infinite programming based algorithm is considered. These algorithms return a solution based on a suitable stopping rule; however, it is necessary to check the equivalence inequality to prove that an ‘optimum’ has actually been reached. We follow the same idea as in Chen et al. (2017, page 87). Given a solution \(\xi _s^*\) of a numerical procedure, from the equivalence inequality (8) with \(\xi _s^*\) in place of \(\xi ^*\), we can compute the prior distribution \(\pi ^*\) by solving the minimization problem

$$\begin{aligned} \min _{\pi _i\in [0;1],\sum _{i\in {{\mathcal {I}}}(\xi _s^*)}\pi _i=1} \sum _{x\in {{\mathcal {S}}}_{\xi _s^*}} \left[ \sum _{i=1}^{k} \pi _i \frac{\partial \Phi _{i}(\xi _s^*,\xi _x)}{\Phi _i(\xi _i^*)}\right] ^2, \end{aligned}$$
(9)

where \({{\mathcal {S}}}_{\xi _s^*}\) denotes the support of \(\xi _s^*\) and \({{\mathcal {I}}}(\xi _s^*)\) is the set defined in (7) with \(\xi ^*\) replaced by \(\xi _s^*\). Problem (9) follows from the equivalence theorem, by which the weighted sum of the component-wise criteria's derivatives must be zero at each support point of the optimal design; the weights can therefore be chosen by minimizing the sum of squares of these expressions over all the support points.
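Problem (9) is a small quadratic program over the probability simplex and can be solved with any constrained optimizer. A sketch using SciPy follows; the matrix `D`, whose rows are indexed by the support points and whose columns by the active indices in \({{\mathcal {I}}}(\xi _s^*)\), is a synthetic placeholder, not a derivative matrix computed from this paper's examples.

```python
import numpy as np
from scipy.optimize import minimize

# Sketch of (9): D[x, i] holds the standardized directional derivative
# dPhi_i(xi_s*, xi_x) / Phi_i(xi_i*) at support point x for active index i;
# we seek simplex weights pi minimizing sum_x (D @ pi)[x]**2.
# D is a synthetic placeholder, not taken from the paper's examples.
D = np.array([[ 0.9, -1.1],
              [-0.6,  0.8],
              [ 0.3, -0.4]])
k = D.shape[1]

res = minimize(lambda pi: np.sum((D @ pi) ** 2),
               x0=np.full(k, 1.0 / k),
               bounds=[(0.0, 1.0)] * k,
               constraints=[{'type': 'eq', 'fun': lambda pi: pi.sum() - 1.0}])
pi_star = res.x
```

With bounds and an equality constraint, `minimize` defaults to SLSQP, which is adequate for a problem of this size; a dedicated quadratic-programming solver would also work.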

Given a design \(\xi _s^*\), using the solution of (9) we can check whether \(\xi _s^*\) really is an optimal design by evaluating the equivalence inequality (8).
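Numerically, this check amounts to evaluating the weighted sensitivity function on a fine grid of the design space; the sketch below uses a synthetic sensitivity function with "support points" at \(x=1\) and \(x=4\), purely to illustrate the pattern of the check, not a function derived from this paper's models.

```python
import numpy as np

# Sketch of the equivalence check (8): the weighted sensitivity function
# s(x) = sum_i pi_i* dPhi_i(xi_s*, xi_x) / Phi_i(xi_i*) must be <= 0 on the
# whole design space and equal to 0 at the support points (Corollary 3.1).
# s below is a synthetic stand-in with "support points" at x = 1 and x = 4.
xs = np.linspace(0.0, 5.0, 1001)
s = -0.01 * (xs - 1.0) ** 2 * (xs - 4.0) ** 2

is_optimal = bool(np.all(s <= 1e-9))
```

A small positive tolerance is used because, with numerically computed derivatives, the maximum of the sensitivity function at the support points is zero only up to rounding error.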

Remark 1

At the optimal design, the value of (9) should be zero (except for rounding approximations).

4 Illustrative examples

The first example of this section underlines the difficulty of finding a maxi-min efficiency design when the search is performed step-by-step by comparing the k efficiencies. This leads to the conclusion that suitable optimization algorithms should be applied, and their numerical solutions should then be checked for optimality through the equivalence inequality (8). This procedure is followed in Example 4.2.

4.1 SMV-optimum designs in biology immunoassays

In biology, immunoassays are usually performed to quantify the concentration of an analyte. In this example, the SMV-optimality criterion is applied to the four-parameter logistic model, which is the most frequently used model for symmetric immunoassay data,

$$\begin{aligned} y=\theta _1+\frac{\theta _2-\theta _1}{1+ \displaystyle \left( \frac{x}{\theta _4}\right) ^{\theta _3}}+\varepsilon , \quad x\in \mathcal{X}=[0,\infty ), \end{aligned}$$
(10)

where y is the response at the concentration x, \(\varepsilon \sim N(0;\sigma ^2)\) is a random error, and \(\theta _1>0\), \(\theta _2>0\), \(\theta _3\in \mathrm{I\!R}\), and \(\theta _4>0\) are unknown parameters.

The SMV-optimality criterion, proposed by Dette (1997),

$$\begin{aligned} \Phi _{SMV}(\xi )=\max _{i \in \{1, \ldots , 4\}}\frac{e_i^T M^{-1}\!(\xi ,\theta _0)\,e_i}{e_i^T M^{-}\!(\xi _i^*,\theta _0)\,e_i} \end{aligned}$$

is an example of maximum inefficiency criterion (6), where \(k=4\) is the dimension of \(\theta =(\theta _1,\theta _2,\theta _3,\theta _4)\); \(\theta _0\) is a guessed value for \(\theta \); \(\Phi _i(\xi )\) is given by

$$\begin{aligned} \Phi _i(\xi )= \left\{ \begin{array}{ll} [{e}_i^T {M}^{-}\!(\xi ,\theta _0)\,{e}_i]^{-1} & \text{ if } {e}_i\in \textrm{Range}[{M}(\xi ,\theta _0)]\\ 0 & \text{ otherwise }\\ \end{array} \right. ,\qquad i=1,\ldots ,4, \end{aligned}$$

where \(M(\xi ,\theta )\) is the information matrix (1) for model (10), and \(e_i\), \(i=1,\ldots ,4\), are the vectors of the canonical basis of \(\mathrm{I\!R}^4\).

In this example, the design space is restricted to \({{\mathcal {X}}} = [0, 5]\), \(\theta _0=(1,2,1,1)\), and the gradient in (1) is

$$\begin{aligned} \frac{\partial \log f(y,x,\theta )}{\partial \theta }=\left( 1-\frac{1}{1+x},\frac{1}{1+x},-\frac{x\log {[x+10^{-6}]}}{(1+x)^2}, \frac{x}{(1+x)^2} \right) ^T, \end{aligned}$$

where the third component has been slightly modified for computational reasons.
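As a sanity check, the stated gradient can be compared with central finite differences of the mean response (10) at \(\theta _0=(1,2,1,1)\); the sketch below evaluates both at an interior point \(x>0\), where the \(10^{-6}\) regularization of the logarithm is unnecessary.

```python
import numpy as np

# Sketch: verify the gradient of the four-parameter logistic mean response
# at theta0 = (1, 2, 1, 1) against central finite differences.

def eta(x, th):
    t1, t2, t3, t4 = th
    return t1 + (t2 - t1) / (1.0 + (x / t4) ** t3)

def grad_eta(x):
    # Analytic gradient at theta0 = (1, 2, 1, 1), valid for x > 0
    return np.array([1.0 - 1.0 / (1.0 + x),
                     1.0 / (1.0 + x),
                     -x * np.log(x) / (1.0 + x) ** 2,
                     x / (1.0 + x) ** 2])

theta0 = np.array([1.0, 2.0, 1.0, 1.0])
x, h = 0.7, 1e-6
num_grad = np.array([(eta(x, theta0 + h * e) - eta(x, theta0 - h * e)) / (2.0 * h)
                     for e in np.eye(4)])
```

The two gradients agree to within the finite-difference error, confirming the analytic expressions used in the information matrix.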

The procedure followed to find the optimal design is quite cumbersome, but the prior probabilities that solve (9) enable us to identify the right maxi-min efficiency design. We first search for designs that have the same efficiencies for a pair of indices. Let \(I=\{i_1,\ldots ,i_l\}\), with \(l=2,\ldots ,k\), be an index set. For instance, for \(I=\{2,4\}\) we obtain the design

$$\begin{aligned} \xi _1^{(2, 4)} = \left\{ \begin{array}{cccc} 0 & 1.321 & 2.756 & 5\\ 0.212 & 0.221 & 0.395 & 0.173 \end{array}\right\} , \end{aligned}$$

which has the same efficiency, 0.2116, for both standardized criteria 2 and 4, but the efficiencies of the other criteria are smaller (0.1139 and 0.0525 for standardized criteria 1 and 3, respectively). Thus, \(\xi _1^{(2, 4)}\) is not a minimum efficiency design and is discarded. In particular, one of the other efficiencies is very low; after a new search, the design

$$\begin{aligned} \xi _2^{(2, 4)} = \left\{ \begin{array}{cccc} 0 & 0.649 & 1.192 & 5\\ 0.139 & 0.574 & 0.115 & 0.172 \end{array}\right\} , \end{aligned}$$

attains a common efficiency of 0.1386 for indices 2 and 4, and efficiencies for the other criteria that are not as low as in the previous case (0.1696 and 0.1140 for indices 1 and 3, respectively). However, once again one of the efficiencies is smaller than that of the indices in I, and \(\xi _2^{(2, 4)}\) is discarded as well.

After some attempts, we finally find the design

$$\begin{aligned} \xi _3^{(2, 4)} = \left\{ \begin{array}{cccc} 0 & 0.101 & 1.244 & 5\\ 0.478 & 0.171 & 0.242 & 0.109 \end{array}\right\} , \end{aligned}$$

giving the same efficiency, 0.4778, for indices 2 and 4, which is smaller than the other efficiencies, 0.5734 and 0.5879 (for indices 1 and 3, respectively); therefore, \(\xi _3^{(2, 4)}\) is a candidate design for Bayesian optimality. To prove this, it is necessary to identify a prior distribution on I, \(\pi = \{\pi _2, \pi _4\}\), such that \(\xi _3^{(2, 4)}\) is Bayesian optimal for \(\pi \). To find such a distribution, we employ condition (9). The weights minimizing this expression are \(\pi _2=0.608\) and \(\pi _4=1-\pi _2\), but the minimum value obtained with these weights is 3.547, which is far from zero. Thus, this design cannot be Bayesian optimal (nor, consequently, maxi-min efficient).

We obtain similar results for every pair \((i_1, i_2)\) of indices. Thus, the search proceeds among designs producing equal efficiencies for three of the component-wise criteria. We first look for designs producing a common efficiency for the indices in \(I = \{1, 2, 4\}\); Table 1 lists some designs satisfying this condition; however, none of them has the remaining efficiency larger than this common value.

Table 1 Designs giving the same efficiency for \(i\in \{1,2,4\}\)

The same happens for the triplets of indices \(\{1,3,4\}\) and \(\{1,2,3\}\). For the set \(I=\{2,3,4\}\), instead, we find some designs with the same common efficiency, which is smaller than that for \(i=1\); see Table 2.

Table 2 Designs giving the same efficiency for \(i\in \{2,3,4\}\) and a larger one for \(i=1\)

However, none of them can be Bayesian optimal, because no distribution of weights on I attains a minimum value of zero in (9). Finally, for the same index set, we obtain the design

$$\begin{aligned} \xi ^* = \left\{ \begin{array}{cccc} 0 & 0.126 & 1.279 & 5\\ 0.497 & 0.114 & 0.241 & 0.148 \end{array}\right\} , \end{aligned}$$

with efficiencies \(\{0.5963, 0.4970, 0.4970, 0.4970\}\). Setting \(\xi ^*_s=\xi ^*\) in (9), we obtain the solution \(\pi ^*=\{0, 0.493, 0.054, 0.453\}\) with a minimum value of \(6.644\times 10^{-4}\); hence, \(\xi ^*\) turns out to be Bayesian optimal for \(\pi ^*\).

4.2 Maxi-min optimal discriminating designs in toxicology studies

In toxicology studies, we may have a continuous response and several possible models for the true mean response. As in Dette et al. (2010), we assume the following rival models for the mean response of the outcome Y:

$$\begin{aligned} \eta _1(x,\theta )&= a e^{-b x}; \quad \theta =(a,b)^T\!, \; a>0,\, b>0, \\ \eta _2(x,\theta )&= a e^{-b x^d}; \quad \theta =(a,b,d)^T\!, \; a>0,\, b>0,\, d\ge 1, \\ \eta _3(x,\theta )&= a\big [c-(c-1) e^{-b x}\big ]; \quad \theta =(a,b,c)^T\!, \; a>0,\, b>0,\, c\in [0,1], \\ \eta _4(x,\theta )&= a\Big [c-(c-1) e^{-b x^d}\Big ]; \quad \theta =(a,b,c,d)^T\!, \; a>0,\, b>0,\, c\in [0,1],\, d\ge 1, \end{aligned}$$

and the following criterion for discriminating between pairs of models:

$$\begin{aligned} \min _{i\in \{1,2,3,4\}} \textrm{Eff}_{i}(\xi )=\min \Big \{\textrm{Eff}^{2-1}(\xi ), \textrm{Eff}^{3-1}(\xi ), \textrm{Eff}^{4-2}(\xi ), \textrm{Eff}^{4-3}(\xi )\Big \}, \end{aligned}$$
(11)

where index i denotes 4 different pairwise comparisons: \(\eta _1\) vs \(\eta _2\), \(\eta _1\) vs \(\eta _3\), \(\eta _2\) vs \(\eta _4\), and \(\eta _3\) vs \(\eta _4\), respectively. In other terms, for a fixed value \(\theta _0\),

$$\begin{aligned} \textrm{Eff}_{i}(\xi )=\frac{\min _\xi {\textbf{e}}_i^T {\textbf{M}}_i^{-}(\xi ,\theta _0) {\textbf{e}}_i}{{\textbf{e}}_i^T {\textbf{M}}_i^{-1}(\xi ,\theta _0) {\textbf{e}}_i}, \end{aligned}$$

with

$$\begin{aligned} {\textbf{e}}_i=\left\{ \begin{matrix} e_3 \in \mathrm{I\!R}^3 & \text{ for } i=1,2\\ e_3 \in \mathrm{I\!R}^4 & \text{ for } i=3\\ e_4 \in \mathrm{I\!R}^4 & \text{ for } i=4 \end{matrix} \right. \quad \textrm{and} \quad {\textbf{M}}_i(\xi ,\theta _0)= \left\{ \begin{matrix} M_2(\xi ,\theta _0) & \text{ for } i=1\\ M_3(\xi ,\theta _0) & \text{ for } i=2\\ M_4(\xi ,\theta _0) & \text{ for } i=3,4 \end{matrix}\right. , \end{aligned}$$

where \(e_j\), \(j=3,4\), is the j-th vector of the canonical basis of the Euclidean space, and \(M_{j}(\xi ,\theta _0)\) is the information matrix (1) for the mean response \(\eta _j(x,\theta )\), \(j=1,2,3,4\). Dette et al. (2003) found the maxi-min efficiency designs using a numerical procedure based on the Nelder–Mead algorithm. Setting \(\theta _0=(1,3,0,1)^T\) (see Table 3 in Dette et al. 2003), the authors found the following numerical solution:

$$\begin{aligned} \xi _s^* = \left\{ \begin{array}{cccc} 0 & .105 & .44 & 1\\ .141 & .233 & .199 & .427 \end{array}\right\} , \end{aligned}$$

for which \(\textrm{Eff}_{1}(\xi _s^*)=.705\), \(\textrm{Eff}_{2}(\xi _s^*)=\textrm{Eff}_{4}(\xi _s^*)=.682\) and \(\textrm{Eff}_{3}(\xi _s^*)=.871\), and hence \({{\mathcal {I}}}(\xi _s^*)=\{2;4\}\) and \(\pi _1^*=\pi _3^*=0\). From (9), where

$$\begin{aligned} \Phi _i(\xi )= \left\{ \begin{array}{ll} [{\textbf{e}}_i^T {\textbf{M}}_i^{-}\!(\xi ,\theta _0)\,{\textbf{e}}_i]^{-1} & \text{ if } {\textbf{e}}_i\in \textrm{Range}[{\textbf{M}}_i(\xi ,\theta _0)]\\ 0 & \text{ otherwise }\\ \end{array} \right. ,\qquad i=1,\ldots ,4, \end{aligned}$$
(12)

and

$$\begin{aligned} \partial \Phi _i(\xi ;\xi _x)=\frac{[{\textbf{e}}_i^T {\textbf{M}}_i^{-1}\!(\xi ,\theta _0)\,\nabla \eta (x,\theta _0)]^{2}-{\textbf{e}}_i^T {\textbf{M}}_i^{-1}\!(\xi ,\theta _0)\,{\textbf{e}}_i}{[{\textbf{e}}_i^T {\textbf{M}}_i^{-1}\!(\xi ,\theta _0)\,{\textbf{e}}_i]^2}, \end{aligned}$$

with \(\nabla \eta (x,\theta )= [\frac{\partial \eta (x,\theta )}{\partial \theta _1},\ldots ,\frac{\partial \eta (x,\theta )}{\partial \theta _4}]^T\), we obtain \(\pi _2^*=.574\) and \(\pi _4^*=1-\pi _2^*\). Figure 1, which shows the sensitivity function on the left-hand side of (8), proves that the numerical solution \(\xi _s^*\) actually is a maxi-min efficiency design.

Fig. 1 Sensitivity function

5 Conclusions and discussion

In practice, obtaining an optimal design that accounts for several goals or experimenter interests is a difficult task. There is a large literature on different approaches, usually focused on specific situations. In this study, we consider a quite general setting: we aim at finding a maxi-min efficiency design, which maximizes the minimum of the efficiencies of several component-wise criteria (reflecting different tasks). This multi-objective criterion depends on some nominal values of the parameters; therefore, a sensitivity analysis to assess this dependence is advisable.

We provide theoretical results, including an equivalence theorem which states that the maxi-min efficiency design is Bayesian optimal for a specific prior distribution on the set of the component-wise criteria. Furthermore, a method to identify this prior distribution is given. This is important for two reasons.

  1. (i)

    It enables the application of the equivalence theorem: since the prior probability can be determined, the optimality of a particular design, e.g. one found by running an algorithm, can be checked through the theorem.

  2. (ii)

    This prior distribution tells the practitioner the weight that the optimal design assigns to each component-wise criterion. Notice that if a criterion receives no weight, this does not mean that the optimal design performs poorly for that criterion; quite the opposite, its efficiency with respect to that specific component-wise criterion will be higher than the efficiencies for the criteria with positive weights.