
Optimal approximate designs for comparison with control in dose-escalation studies


Abstract

Consider an experiment in which a new drug is tested for the first time on human subjects, namely healthy volunteers. Such experiments are often performed as dose-escalation studies: a set of increasing doses is preselected; individuals are grouped into cohorts; and in each cohort, dose number i can be administered only if dose number \(i-1\) has already been tested in the previous cohort. If an adverse effect of a dose is observed, the experiment is stopped, and thus, no subjects are exposed to higher doses. In this paper, we assume that the response is affected both by the dose or placebo effects and by the cohort effects. We provide optimal approximate designs for estimating the effects of the drug doses compared with the placebo with respect to selected optimality criteria (E-, MV- and LV-optimality). In particular, we prove the optimality of the so-called Senn designs with respect to all of the studied optimality criteria, and we provide optimal extensions of these designs for selected criteria.


References

  • Atkinson AC (2009) Commentary on Designs for dose-escalation trials with quantitative responses. Stat Med 28(30):3739–3741

  • Atkinson AC, Donev A, Tobias R (2007) Optimum experimental designs, with SAS. Oxford University Press, New York

  • Bailey RA (2009a) Author’s rejoinder to commentaries on Designs for dose-escalation trials with quantitative responses. Stat Med 28(30):3759–3760

  • Bailey RA (2009b) Designs for dose-escalation trials with quantitative responses. Stat Med 28(30):3721–3738

  • Bechhofer RE, Tamhane AC (1981) Incomplete block designs for comparing treatments with a control: general theory. Technometrics 23:45–57

  • Berman A, Plemmons RJ (1979) Nonnegative Matrices in the Mathematical Sciences. Academic Press, New York

  • Eccleston JA, Hedayat A (1974) On the theory of connected designs: characterization and optimality. Ann Stat 2(6):1238–1255

  • Giovagnoli A, Wynn HP (1985) Schur-optimal continuous block designs for treatments with a control. In: Le Cam LM, Olshen RA (eds) Proceedings of the Berkeley Conference in Honor of Jerzy Neyman and Jack Kiefer. Wadsworth, California, pp 651–666

  • Haines LM, Clark AE (2014) The construction of optimal designs for dose-escalation studies. Stat Comput 24(1):101–109

  • Harman R, Sagnol G (2015) Computing D-optimal experimental designs for estimating treatment contrasts under the presence of a nuisance time trend. In: Steland A, Rafajlowicz E, Szajowski K (eds) Stochastic Models, Statistics and Their Applications. Springer, Wroclaw, pp 83–91

  • Harman R, Bachratá A, Filová L (2016) Construction of efficient experimental designs under multiple resource constraints. Appl Stoch Model Bus 32:3–17

  • Harville DA (1997) Matrix algebra from a statistician’s perspective. Springer-Verlag, New York

  • Kunert J, Martin RJ, Eccleston J (2010) Optimal block designs comparing treatments with a control when the errors are correlated. J Stat Plan Infer 140:2719–2738

  • Majumdar D (1996) Optimal and efficient treatment-control designs. In: Ghosh S, Rao CR (eds) Handbook of statistics 13: design and analysis of experiments. North Holland, Amsterdam, pp 1007–1053

  • Majumdar D, Notz WI (1983) Optimal incomplete block designs for comparing treatments with a control. Ann Stat 11:258–266

  • Morgan JP, Wang X (2011) E-optimality in treatment versus control experiments. J Stat Theory Pract 5(1):99–107

  • Notz WI (1985) Optimal designs for treatment-control comparisons in the presence of two-way heterogeneity. J Stat Plan Infer 12:61–73

  • Pukelsheim F (1980) On linear regression designs which maximize information. J Stat Plan Infer 4(4):339–364

  • Pukelsheim F (2006) Optimal design of experiments. SIAM, Philadelphia

  • Pukelsheim F, Studden WJ (1993) E-optimal designs for polynomial regression. Ann Stat 21(1):402–415

  • Rosa S, Harman R (2016) Optimal approximate designs for estimating treatment contrasts resistant to nuisance effects. Stat Pap 57:1077–1106

  • Senn S (2009) Commentary on Designs for dose-escalation trials with quantitative responses. Stat Med 28:3754–3758

  • Senn S, Amin D, Bailey RA, Bird SM, Bogacka B, Colman P, Garrett A, Grieve A, Lachmann P (2007) Statistical issues in first-in-man studies. J Roy Stat Soc A Sta 170(3):517–579

  • Senn SJ (1997) Statistical issues in drug development. Wiley, Chichester


Acknowledgements

This research was supported by the VEGA 1/0521/16 Grant of the Slovak Scientific Grant Agency. The research of the first author was additionally supported by the UK/214/2016 Grant of the Comenius University in Bratislava. The authors would like to thank two anonymous referees who helped to improve the paper.

Author information


Correspondence to Samuel Rosa.

Appendix

Proof of Theorem 1

The Senn design \(\xi \) satisfies

$$\begin{aligned} r=(1/2,(2n)^{-1}1_n^T)^T,\quad X=(2n)^{-1}(1_n, I_n)^T\end{aligned}$$

and thus, \(Z=(2n)^{-1}I_n\). Note that \(\xi \) is connected [see Eccleston and Hedayat (1974)] and therefore feasible. Then, Eq. (4) yields that \(N_A=(4n)^{-1}I_n\), whose smallest eigenvalue is \(1/(4n)\). The proof of Theorem 6 by Rosa and Harman (2016) shows that when there are no design constraints, the optimal value of the E-criterion is \(1/[4(v-1)]\), where v is the number of treatments. In our model, \(v=n+1\), and thus, the E-optimal value is \(1/(4n)\), which the Senn design attains.
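As a numerical sanity check (a pure-Python sketch, not part of the paper; the cohort count n = 4 is arbitrary), one can build the Senn design, recover \(r\) and \(Z\), and confirm that \(N_A = (2n)^{-1}I_n - nZZ^T\) (cf. (3)) reduces to \((4n)^{-1}I_n\):

```python
n = 4
# Senn design: cohort j administers placebo (dose 0) and dose j, each with weight 1/(2n)
xi = [[0.0] * n for _ in range(n + 1)]  # rows: doses 0..n, columns: cohorts 1..n
for j in range(n):
    xi[0][j] = 1.0 / (2 * n)      # placebo in cohort j+1
    xi[j + 1][j] = 1.0 / (2 * n)  # dose j+1 in cohort j+1

r = [sum(row) for row in xi]      # treatment weights: r = (1/2, (2n)^{-1} 1_n^T)^T
assert abs(r[0] - 0.5) < 1e-12 and all(abs(ri - 1 / (2 * n)) < 1e-12 for ri in r[1:])

Z = [row[:] for row in xi[1:]]    # rows for doses 1..n, i.e. Z = (2n)^{-1} I_n
# N_A = (2n)^{-1} I_n - n Z Z^T
NA = [[(1 / (2 * n) if i == k else 0.0)
       - n * sum(Z[i][j] * Z[k][j] for j in range(n))
       for k in range(n)] for i in range(n)]
for i in range(n):
    for k in range(n):
        expected = 1 / (4 * n) if i == k else 0.0
        assert abs(NA[i][k] - expected) < 1e-12
# N_A is diagonal, so its smallest eigenvalue is its smallest diagonal entry: 1/(4n)
print(min(NA[i][i] for i in range(n)))
```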

Now, consider the converse part. For a feasible design \(\xi \) to be E-optimal, the smallest eigenvalue of its information matrix must equal that of the Senn design, which coincides with the optimal value of the E-optimality criterion in the model without the design constraints. Thus, such a \(\xi \) is E-optimal in the model without design constraints, and from Lemma 1, it follows that \(\xi \) must satisfy

$$\begin{aligned} r_0(\xi ) = 1/2, \quad r_i(\xi ) = 1/(2n), \quad i=1,\ldots ,n; \end{aligned}$$
(7)

hence, \(r=(1/2,(2n)^{-1}1_n^T)^T\).

Observe that \(N_A(\xi ) = C = (2n)^{-1}I_n - nZZ^T\), as defined in (3). Then, the smallest eigenvalue \(\lambda _{\min }\) of \(N_A(\xi )\) satisfies \(\lambda _{\min } = 1/(2n) - n\mu _{\max }\), where \(\mu _{\max }\) is the largest eigenvalue of \(ZZ^T\). Let us denote the columns of matrix \(Z=(z_{ij})\) by \(z_1, \ldots , z_n\). Then,

$$\begin{aligned} \mu _{\max } = \max _{ ||u ||=1} u^TZZ^Tu = \max _{ ||u ||=1} ||Z^Tu ||^2 = \max _{ ||u ||=1} \sum _{j=1}^{n} ( z_j^Tu )^2, \end{aligned}$$

and for the particular choice of \(u=n^{-1/2}1_n\), we obtain

$$\begin{aligned} \mu _{\max } \ge \frac{1}{n} \sum _{j=1}^{n} ( z_j^T1_n )^2 = \frac{1}{n} \sum _{j=1}^{n} \left( \sum _{i=1}^n z_{ij}\right) ^2 = \frac{1}{n} \sum _{j=1}^{n} q_j^2, \end{aligned}$$

where \(q_j:\,=\sum _{i=1}^{n}z_{ij}, j=1,\ldots ,n\). From (i), it follows that \(q_j = n^{-1} - \xi (0,j)\), and (7) yields \(\sum _j q_j = 1- 1/2 = 1/2\). Therefore, \(\mu _{\max }\) is not smaller than the optimal value of the following convex optimization problem:

$$\begin{aligned} \min _{q\ge 0} \frac{1}{n} \sum _{j=1}^{n} q_j^2 \quad \text {subject to}\quad \sum _{j=1}^{n} q_j = \frac{1}{2}. \end{aligned}$$
(8)

Because the objective function of (8) is strictly convex on the affine hyperplane defined by the equality constraint and invariant with respect to permutations of \(q_1, \ldots , q_n\), the unique optimal solution \(q^*\) of (8) satisfies \(q_1^* = \cdots = q_{n}^* = 1/(2n)\). Hence, the optimal value of (8) is \(1/(4n^2)\). It follows that if q does not satisfy \(q_1 = \cdots = q_{n} = 1/(2n)\), then \(\mu _{\max } > 1/(4n^2)\). Therefore, the smallest eigenvalue \(\lambda _{\min }\) of \(N_A(\xi )\) for such \(\xi \) satisfies \(\lambda _{\min } < 1/(4n)\), and thus, \(\xi \) is not E-optimal. Hence, any E-optimal design \(\xi \) must satisfy \(q_1 = \cdots = q_n = 1/(2n)\), i.e., \(\xi (0,1) = \cdots = \xi (0,n) = 1/(2n)\).
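Problem (8) is easy to probe numerically (an illustrative sketch, not from the paper; n = 5 is arbitrary): the uniform vector \(q_j = 1/(2n)\) attains the value \(1/(4n^2)\) and is never beaten by random feasible points of the simplex.

```python
import random

n = 5

def obj(q):
    # objective of (8): (1/n) * sum of squares
    return sum(x * x for x in q) / n

uniform = [1.0 / (2 * n)] * n            # the claimed unique minimizer
assert abs(obj(uniform) - 1.0 / (4 * n * n)) < 1e-12

random.seed(0)
for _ in range(1000):
    w = [random.random() for _ in range(n)]
    q = [0.5 * x / sum(w) for x in w]    # random feasible point: q >= 0, sum q = 1/2
    assert obj(q) >= obj(uniform) - 1e-12
```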

From (7) and (i), \(\xi \) needs to satisfy \(r_i(\xi ) = 1/(2n)\) for \(i>0\) and \(s_k(\xi )=\sum _i \xi (i,k) = 1/n\) for all k. Because \(\xi (n,k)\) can be nonzero only for \(k=n\), we obtain \(\xi (n,n) = 1/(2n)\). Since \(\xi (0,n)=1/(2n)\), (ii) yields \(\xi (i,n) = 0\) for \(0<i<n\). Then, \(\xi (n-1,k)\) is nonzero only for \(k=n-1\), which implies \(\xi (n-1,n-1) = 1/(2n), \xi (i,n-1) = 0\) for \(0<i<n-1\), and so forth. It follows that the unique E-optimal standard design for given n is the Senn design. \(\square \)

Proof of Theorem 2

Let \(\xi \) be a design that satisfies the conditions of Theorem 2. First, observe that \(\xi \) is connected [see Eccleston and Hedayat (1974)] and therefore feasible. The partitions of \(M_\tau \) defined in (3) satisfy \(z=(2(n+1))^{-1}1_{n+1}, X1_{n+1}=r= \big (1/2,(2n)^{-1}1_n^T\big )^T, Z1_{n+1} = (2n)^{-1} 1_n, X^T1_{n+1}=(n+1)^{-1} 1_{n+1}\) and \(Z^T1_n = (2(n+1))^{-1} 1_{n+1}\).

Thus, \(N_A = C = (2n)^{-1} I_n - (n+1)ZZ^T\). Moreover, \(N_A 1_n = (4n)^{-1} 1_n.\) That is, \(1_n\) is an eigenvector of \(N_A\) with eigenvalue \(\lambda _1=1/(4n)\). Analogous to the proof of Theorem 1, \(\lambda _1\) is equal to the optimal smallest eigenvalue in the model without constraints; see Rosa and Harman (2016). Therefore, to prove that \(\xi \) is E-optimal, it suffices to prove that \(\lambda _1\) is the smallest eigenvalue of \(N_A\). The smallest eigenvalue of \(N_A\) satisfies \(\lambda _{\min }(N_A) = 1/(2n) - (n+1)\mu _{\max }\), where \(\mu _{\max }\) is the largest eigenvalue of \(ZZ^T\).

Let \({\hat{Z}}=\big (2nZ^T, (n+1)^{-1} 1_{n+1}\big )\). The matrix \({\hat{Z}}\) satisfies \({\hat{Z}}1_{n+1} = 1_{n+1}\) and \(1_{n+1}^T{\hat{Z}} = 1_{n+1}^T\), i.e., \({\hat{Z}}\) is doubly stochastic. Using the Birkhoff-von Neumann theorem, we obtain that

$$\begin{aligned} {\hat{Z}} = \sum _{\pi \in {\mathfrak {P}}_{n+1}} \alpha _\pi P_\pi , \end{aligned}$$

where \(\alpha _\pi \ge 0\) for all \(\pi , \sum _\pi \alpha _\pi = 1, {\mathfrak {P}}_{n+1}\) denotes the set of all permutations of \(n+1\) elements and \(P_\pi \) is the permutation matrix given by \(\pi \). Let us partition the matrices \(P_\pi \) as \(P_\pi = \big ( {\tilde{P}}_\pi , v_\pi \big )\), where \({\tilde{P}}_\pi \) is an \((n+1) \times n\) matrix. Then,

$$\begin{aligned} 2nZ^T= \sum _{\pi \in {\mathfrak {P}}_{n+1}} \alpha _\pi {\tilde{P}}_\pi , \quad Z = \frac{1}{2n} \sum _{\pi \in {\mathfrak {P}}_{n+1}} \alpha _\pi {\tilde{P}}_\pi ^T, \quad \frac{1}{n+1}1_{n+1} = \sum _{\pi \in {\mathfrak {P}}_{n+1}} \alpha _\pi v_\pi . \end{aligned}$$

For any \(i \in \{1,\ldots ,n+1\}\), let \({\mathfrak {P}}_{n+1}^{(i)}\) be the set of all permutations \(\pi \) of \(n+1\) elements, the corresponding matrices of which have their last column equal to \(e_i\), i.e., \(\pi (n+1)=i\). Then, because the last column of \({\hat{Z}}\) is \((n+1)^{-1}1_{n+1}\), we have \(\sum _{\pi \in {\mathfrak {P}}_{n+1}^{(i)}}\alpha _\pi = 1/(n+1)\).
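The Birkhoff–von Neumann step can be illustrated by greedily peeling permutation matrices off a doubly stochastic matrix (a brute-force sketch for tiny matrices; the helper `birkhoff` and the matrix `Zhat` below are hypothetical illustrations, not the \({\hat{Z}}\) of the proof):

```python
from itertools import permutations

def birkhoff(D, tol=1e-12):
    """Greedy Birkhoff decomposition of a small doubly stochastic matrix:
    repeatedly find a permutation supported on positive entries and subtract it."""
    m = len(D)
    D = [row[:] for row in D]
    decomposition = []  # list of (weight alpha_pi, permutation pi) pairs
    while max(max(row) for row in D) > tol:
        perm = next(p for p in permutations(range(m))
                    if all(D[i][p[i]] > tol for i in range(m)))
        w = min(D[i][perm[i]] for i in range(m))
        decomposition.append((w, perm))
        for i in range(m):
            D[i][perm[i]] -= w  # each step zeroes at least one entry
    return decomposition

Zhat = [[0.5, 0.25, 0.25], [0.25, 0.5, 0.25], [0.25, 0.25, 0.5]]  # doubly stochastic
parts = birkhoff(Zhat)
assert abs(sum(w for w, _ in parts) - 1.0) < 1e-9  # the weights alpha_pi sum to 1
# the weighted permutation matrices reconstruct Zhat
recon = [[sum(w for w, p in parts if p[i] == j) for j in range(3)] for i in range(3)]
assert all(abs(recon[i][j] - Zhat[i][j]) < 1e-9 for i in range(3) for j in range(3))
```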

The largest eigenvalue \(\mu _{\max }\) of \(ZZ^T\) is identical to the largest eigenvalue of \(Z^TZ\). Therefore, it satisfies

$$\begin{aligned} \begin{aligned} \mu _{\max }&= \max _{ ||u ||=1} u^TZ^TZ u = \max _{ ||u ||=1} ||Z u ||^2\\&= \frac{1}{4n^2} \max _{ ||u ||=1} \Big ||\sum _{\pi \in {\mathfrak {P}}_{n+1}} \alpha _\pi {\tilde{P}}_\pi ^Tu \Big ||^2 \\&= \frac{1}{4n^2} \max _{ ||u ||=1} \Big ||\sum _{i=1}^{n+1} \sum _{\pi \in {\mathfrak {P}}_{n+1}^{(i)}} \alpha _\pi {\tilde{P}}_\pi ^Tu \Big ||^2 \\&\le \frac{n+1}{4n^2} \max _{ ||u ||=1} \sum _{i=1}^{n+1} \Big ||\sum _{\pi \in {\mathfrak {P}}_{n+1}^{(i)}} \alpha _\pi {\tilde{P}}_\pi ^Tu \Big ||^2, \end{aligned} \end{aligned}$$

where the last inequality follows from the fact that \(||\sum _{i=1}^{k} x_i ||^2 \le k \sum _{i=1}^{k} ||x_i ||^2\) for any vectors \(x_1, \ldots , x_k\) of the same dimension.
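The norm inequality invoked in the last step, \(||\sum _{i=1}^{k} x_i ||^2 \le k \sum _{i=1}^{k} ||x_i ||^2\), can be checked numerically (an illustrative sketch; k, the dimension and the random vectors are arbitrary):

```python
import random

random.seed(1)
k, dim = 6, 4
xs = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(k)]
s = [sum(x[d] for x in xs) for d in range(dim)]       # the vector sum
lhs = sum(v * v for v in s)                           # ||sum x_i||^2
rhs = k * sum(sum(v * v for v in x) for x in xs)      # k * sum ||x_i||^2
assert lhs <= rhs + 1e-12
```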

Let \(i \in \{1,\ldots ,n+1\}\). Then, the set of the extremal points of the convex set

$$\begin{aligned} {\mathfrak {K}}_i:\,=\left\{ \sum _{\pi \in {\mathfrak {P}}_{n+1}^{(i)}} \alpha _\pi {\tilde{P}}_\pi ^Tu \,\Big \vert \, \alpha _{\pi } \ge 0 \text { for all } \pi \in {\mathfrak {P}}_{n+1}^{(i)}, \sum _{\pi \in {\mathfrak {P}}_{n+1}^{(i)}} \alpha _\pi = \frac{1}{n+1} \right\} \end{aligned}$$

is a subset of \(\{ (n+1)^{-1}{\tilde{P}}_{\pi }^T u \,\vert \, \pi \in {\mathfrak {P}}_{n+1}^{(i)} \}\), and the convex function \(||x ||^2, x \in {\mathfrak {K}}_i\), attains its maximum in at least one of the extremal points of \({\mathfrak {K}}_i\). It follows that

$$\begin{aligned} \Big ||\sum _{\pi \in {\mathfrak {P}}_{n+1}^{(i)}} \alpha _\pi {\tilde{P}}_\pi ^Tu \Big ||^2 \le \max _{ \pi _i \in {\mathfrak {P}}_{n+1}^{(i)} } \Big ||\frac{1}{n+1} {\tilde{P}}_{\pi _i}^Tu \Big ||^2 = \frac{1}{(n+1)^2} \max _{ \pi _i \in {\mathfrak {P}}_{n+1}^{(i)} } ||{\tilde{P}}_{\pi _i}^Tu ||^2, \end{aligned}$$

and thus,

$$\begin{aligned} \mu _{\max } \le \frac{1}{4n^2(n+1)} \max _{ ||u ||=1} \max _{ \pi _1 \in {\mathfrak {P}}_{n+1}^{(1)},\ldots , \pi _{n+1} \in {\mathfrak {P}}_{n+1}^{(n+1)} } \sum _{i=1}^{n+1} ||{\tilde{P}}_{\pi _i}^Tu ||^2. \end{aligned}$$

Consider \(\pi _1 \in {\mathfrak {P}}_{n+1}^{(1)},\ldots , \pi _{n+1} \in {\mathfrak {P}}_{n+1}^{(n+1)}\) and \(u \in {\mathbb {R}}^{n+1}, ||u ||=1\), that maximize the expression \(\sum _{i=1}^{n+1} ||{\tilde{P}}_{\pi _i}^Tu ||^2\), and let us denote \(v_i=(v_{i,1},\ldots ,v_{i,n})^T:\,={\tilde{P}}^T_{\pi _i} u, \) \(i=1,\ldots ,n+1\). Then, \(\mu _{\max }\) satisfies

$$\begin{aligned} \mu _{\max } \le \frac{1}{4n^2(n+1)} \sum _{i=1}^{n+1} \sum _{j=1}^n v_{i,j}^2 = \frac{1}{4n^2(n+1)} \sum _{i=1}^{n+1} S, \end{aligned}$$

where S is the sum of squares of all elements of the vectors \(v_1,\ldots ,v_{n+1}\). For each \(i \in \{1,\ldots ,n+1\}\), the vector \(v_i={\tilde{P}}_{\pi _i}^Tu\) consists of all elements of u except \(u_i\); therefore, \(u_i^2\) occurs n times in S. Hence, \(S= nu_1^2 + \cdots + nu_{n+1}^2 = n ||u ||^2 = n\), and it follows that \(\mu _{\max } \le (4n(n+1))^{-1}\). Consequently, the smallest eigenvalue of \(N_A\) is at least \( 1/(2n) - (n+1)/(4n(n+1)) = 1/(4n)\), and thus, \(\lambda _1\) is indeed the smallest eigenvalue of \(N_A\).

For the converse part, note that similar to the proof of Theorem 1, the optimal value of the E-criterion is \(1/(4n)\) and Lemma 1 yields that any E-optimal design \(\xi \) must satisfy (7). Let \(\xi \) be a design that satisfies (7), but that does not satisfy \(\xi (0,k) = 1/(2(n+1)), k=1,\ldots ,n+1.\) Let \(q_j:\,= 1/(n+1) - \xi (0,j), j=1,\ldots ,n+1\); hence, \(\sum _j q_j = 1-1/2=1/2\). Then, analogous to the proof of Theorem 1, the largest eigenvalue \(\mu _{\max }\) of \(ZZ^T\) is not smaller than the optimal value of the following convex optimization problem:

$$\begin{aligned} \min _{q \ge 0} \frac{1}{n} \sum _{j=1}^{n+1} q_j^2 \quad \text {subject to}\quad \sum _{j=1}^{n+1} q_j = \frac{1}{2}. \end{aligned}$$

This problem has a unique optimal solution \(q^*\) satisfying \(q_1^* = \cdots = q_{n+1}^* = (2(n+1))^{-1}\), and the optimal value is \((4n(n+1))^{-1}\). It follows that if q does not satisfy \(q_1 = \cdots = q_{n+1} = (2(n+1))^{-1}\), then \(\mu _{\max } > (4n(n+1))^{-1}\). Therefore, the smallest eigenvalue \(\lambda _{\min }\) of \(N_A\) for such \(\xi \) satisfies \(\lambda _{\min } < (4n)^{-1}\), and thus, \(\xi \) is not E-optimal. \(\square \)

In the proof of Theorem 3, Theorems 7.21 and 7.23 by Pukelsheim (2006) will be used. Here, we state them as Lemmas 2 and 3.

Lemma 2

A feasible design \(\xi \) with moment matrix M and information matrix \(N_A\) is E-optimal for estimating \(A^T\beta \) if and only if there exist a generalized inverse G of M and a nonnegative definite matrix \(E\) with \(\mathrm {tr}(E)=1\) such that

$$\begin{aligned} \mathrm {tr}(M({\tilde{\xi }})GAN_AEN_AA^TG^T) \le \lambda _{\min }(N_A) \text { for all } {\tilde{\xi }} \in \varXi . \end{aligned}$$
(9)

Lemma 3

Let \(\xi \) be a feasible design for \(A^T\beta \) with information matrix \(N_A\), and let \(h, ||h ||= 1\), be an eigenvector of \(N_A\) corresponding to the smallest eigenvalue of \(N_A\). Then, \(\xi \) is E-optimal for \(A^T\beta \) and \(E=hh^T\) satisfies (9) if and only if \(\xi \) is c-optimal, where \(c=Ah\). If the smallest eigenvalue of \(N_A\) has multiplicity 1, then \(\xi \) is E-optimal for \(A^T\beta \) if and only if it is c-optimal, where \(c=Ah\).

Proof of Theorem 3

Recall that \(A^T=(Q^T,0_{n \times (t+1)})\) and \(Q^T=(-1_n,I_n)\). Let \({\tilde{c}}:\,=(-1,n^{-1}1_n^T,0_{t+1}^T)^T\). For the (standard) Senn design, we have \(N_A=(4n)^{-1}I_n\). Let us choose

$$\begin{aligned} G=\begin{bmatrix} M_\tau ^-&- M_\tau ^-M_{12}M_{22}^- \\ -M_{22}^- M_{12}^TM_\tau ^-&M_{22}^- + M_{22}^-M_{12}^TM_\tau ^-M_{12}M_{22}^- \end{bmatrix}, \end{aligned}$$
(10)

where

$$\begin{aligned} M_\tau ^- = \begin{bmatrix} 0&0_n^T\\ 0_n&4nI_n \end{bmatrix} \quad \text { and }\quad M_{22}^- = \begin{bmatrix} 0&0_n^T\\ 0_n&nI_n \end{bmatrix}, \end{aligned}$$
(11)

and \(E=1_n1_n^T/n = hh^T\), where \(h=n^{-1/2}1_n\). The matrix G is indeed a generalized inverse of M; see Theorem 9.6.1 by Harville (1997). Then, it is possible to verify that there is equality in (9) for any permissible design \({\tilde{\xi }}\). Therefore, Lemma 3 implies that any E-optimal standard design (i.e., Senn design) is also c-optimal, where \(c= n^{-1/2}A1_n\). This is equivalent to optimality for \(n^{-1}A1_n={\tilde{c}}\).

Let \(\xi \) be an E-optimal extended design for comparing the treatments with the placebo, with its moment matrix M and information matrix \(N_A\). Then, \(h=n^{-1/2}1_{n}\) is an eigenvector of \(N_A\) corresponding to the smallest eigenvalue \(\lambda _{\min }=1/(4n)\) (see the proof of Theorem 2). The left-hand side of the normality inequality (9) for \(E=hh^T\) and a given \({\tilde{\xi }} \in \varXi \) becomes

$$\begin{aligned} \mathrm {tr}({\tilde{M}}GAN_Ahh^TN_AA^TG^T)=\lambda _{\min }^2h^TA^TG^T{\tilde{M}}GAh = \frac{1}{16n^3}1_n^TA^TG^T{\tilde{M}}GA1_n, \end{aligned}$$
(12)

where \({\tilde{M}}:\,=M({\tilde{\xi }})\). Moreover, \(A1_n=(-n,1_n^T,0_{n+2}^T)^T\).

Recall the partitioning of \(M_\tau \) and X defined in (3). Let the generalized inverse G of M be given by (10), where

$$\begin{aligned} M_\tau ^-=\begin{bmatrix} 0&0_n^T \\ 0_n&C^{-1} \end{bmatrix}. \end{aligned}$$

Then,

$$\begin{aligned}GA1_n = \begin{bmatrix} 0 \\ C^{-1}1_n \\ 0 \\ -(n+1)Z^TC^{-1}1_n \end{bmatrix}. \end{aligned}$$

Since \(C^{-1}=N_A^{-1}\), we obtain \(C^{-1}1_n = 4n1_n\). Then, \(GA1_n=(0, 4n1_n^T, 0, -(n+1)4n 1_n^T Z )^T = (0, 4n1_n^T, 0, -2n1_{n+1}^T )^T\) because \(Z^T1_n =(2(n+1))^{-1} 1_{n+1}\).

Let us partition \({\tilde{M}}=M({\tilde{\xi }})\) using \({\tilde{M}}_{11}, {\tilde{M}}_{12}, {\tilde{M}}_{22}\) as in (2), and let us denote \({\tilde{Z}}=Z({\tilde{\xi }})\) and \({\tilde{r}}=r({\tilde{\xi }})\). Then, (12) becomes

$$\begin{aligned}\begin{aligned} \frac{1}{16n^3}&\left( \left( 0,4n1_n^T\right) {\tilde{M}}_{11}\begin{pmatrix} 0 \\ 4n1_n \end{pmatrix} + (0,-2n1_{n+1}^T){\tilde{M}}_{22}\begin{pmatrix} 0 \\ -2n1_{n+1} \end{pmatrix} \right. \\&\left. + 2 (0,4n1_n^T) {\tilde{M}}_{12} \begin{pmatrix} 0 \\ -2n1_{n+1} \end{pmatrix} \right) , \end{aligned} \end{aligned}$$

which is equal to

$$\begin{aligned} \frac{1}{16n^3}\left( 16n^2 \sum _{i>0}{\tilde{r}}_i + 4n^2 -16n^2\sum _{i>0}{\tilde{r}}_i \right) = \frac{1}{4n} =\lambda _{\min }, \end{aligned}$$

because \({\tilde{Z}}1_{n+1}=({\tilde{r}}_1, \ldots , {\tilde{r}}_n)^T\). Therefore, the left-hand side of (9) is always equal to the right-hand side. Hence, Lemma 3 yields that \(\xi \) is \({\tilde{c}}\)-optimal. \(\square \)

Proof of Proposition 2

By (4), the off-diagonal elements of the information matrix \(N_A(\xi )\) are non-positive. For any matrix H with non-positive off-diagonal elements, Theorem 6.2.3 of Berman and Plemmons (1979) yields that the following statements are equivalent: \(\text {(D}_{16}\text {)}\) every real eigenvalue of H is positive, and \(\text {(N}_{38}\text {)}\) every element of \(H^{-1}\) is nonnegative. Since \(N_A(\xi )\) is a non-singular information matrix, it is positive definite, and thus, \(N_A(\xi )\) satisfies \(\text {(D}_{16}\text {)}\). Therefore, every element of \(N_A^{-1}(\xi )\) is nonnegative. \(\square \)
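The M-matrix fact used in this proof can be illustrated numerically (a sketch; the tridiagonal matrix `H` and the helper `inverse` are hypothetical examples, not from the paper): a matrix with non-positive off-diagonal entries and positive eigenvalues has a nonnegative inverse.

```python
def inverse(a):
    """Gauss-Jordan inverse of a small square matrix (no singularity checks)."""
    m = len(a)
    aug = [row[:] + [1.0 if i == j else 0.0 for j in range(m)]
           for i, row in enumerate(a)]
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(aug[r][col]))  # partial pivoting
        aug[col], aug[piv] = aug[piv], aug[col]
        p = aug[col][col]
        aug[col] = [v / p for v in aug[col]]
        for r in range(m):
            if r != col:
                f = aug[r][col]
                aug[r] = [v - f * w for v, w in zip(aug[r], aug[col])]
    return [row[m:] for row in aug]

# positive definite with non-positive off-diagonal entries (an M-matrix)
H = [[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 2.0]]
Hinv = inverse(H)
assert all(v >= -1e-12 for row in Hinv for v in row)  # every element nonnegative
```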

Proof of Theorem 4

Let \(\xi \) be a standard design and let \(k \in \{1,\ldots ,n\}\). The covariance matrix of the least squares estimator of the contrasts of interest, \(\mathrm {Var}_\xi (\widehat{Q^T\tau })\), is proportional to \(N_A^{-1}(\xi )=Q^TM_\tau ^-Q\). The variance calculated from the first k cohorts under a given design \(\xi \), \(\mathrm {Var}_k(\widehat{\tau _k - \tau _0})\), can be obtained from a stage-k 'design' \(\xi ^{(k)}\), given by deleting all the trials in cohorts \(k+1,\ldots ,n\) from \(\xi \): for any i, \(\xi ^{(k)}(i,j) = \xi (i,j)\) for \(j\le k\) and \(\xi ^{(k)}(i,j) = 0\) for \(j>k\). Note that for \(k<n\), \(\xi ^{(k)}\) is not a proper approximate design, because its elements do not sum to 1; the trials in the future cohorts are ignored.

The stage-k moment matrix \(M^{(k)}\) given by \(\xi ^{(k)}\) satisfies \(M_{11}^{(k)}=\mathrm {diag}(r_0,\ldots ,r_k,0_{n-k}^T)\),

$$\begin{aligned} M_{12}^{(k)}=\begin{bmatrix} \begin{bmatrix} r_0 \\ \vdots \\ r_k \end{bmatrix}&\begin{bmatrix} \xi (0,1)&\xi (0,2)&\ldots&\xi (0,k) \\ \xi (1,1)&\xi (1,2)&\ldots&\xi (1,k) \\ 0&\xi (2,2)&\ldots&\xi (2,k) \\ \vdots&\vdots&\vdots \\ 0&0&\ldots&\xi (k,k) \end{bmatrix}&0_{(k+1) \times (t-k)} \\ 0_{n-k}&0_{(n-k) \times k}&0_{(n-k) \times (t-k)} \end{bmatrix} \end{aligned}$$

and

$$\begin{aligned} M_{22}^{(k)} = \begin{bmatrix} 1&n^{-1}1_k^T&0_{t-k}^T\\ n^{-1}1_k&n^{-1}I_k&0_{k \times (t-k)} \\ 0_{(t-k)}&0_{(t-k) \times k}&0_{(t-k) \times (t-k)} \end{bmatrix}, \end{aligned}$$

where \(t=n\), because we are considering standard designs. Then, the latest variance \(\mathrm {Var}_k(\widehat{\tau _k - \tau _0})\) is proportional to \(d_k(\xi ):\,=(e_{k+1}^T- e_1^T, 0_{t+1}^T) \big (M^{(k)}\big )^- (e_{k+1}^T- e_1^T, 0_{t+1}^T)^T\), where \(e_1\) and \(e_{k+1}\) are elementary unit vectors of length \(n+1\).

To calculate \(d_k(\xi )\), we disregard cohorts \(k+1,\ldots ,n\), and we are not allowed to use doses \(k+1,\ldots ,n\) in the first k cohorts. Such a model coincides with model (1) with \(n=k\) and a design \(\xi '\) obtained by restricting \(\xi \) to doses \(0,\ldots ,k\) and cohorts \(1,\ldots ,k\). Then, the latest variance under \(\xi \) is proportional to the inverse of the value of the c-optimality criterion for \(\xi '\), which is \(\varPhi _c(\xi ') = \big ( c^TM^-(\xi ') c \big )^{-1}\) for \(c^T=(e_{n+1}^T-e_1^T,0_{n+1}^T)\). Therefore, without loss of generality, we may assume that \(k=n\) and prove that the Senn design attains the highest value of the c-optimality criterion, where \(c^T=(e_{n+1}^T-e_1^T,0_{n+1}^T)\).

The General Equivalence Theorem in the case of c-optimality becomes [Corollary 5.1 by Pukelsheim (1980)]: Let \({\mathfrak {M}}\) be a set of competing moment matrices. The moment matrix \(M \in {\mathfrak {M}}\) is c-optimal in \({\mathfrak {M}}\) if and only if there exists a generalized inverse G of M such that \(c^TG^TBGc \le c^TM^-c\) for all \(B \in {\mathfrak {M}}\).

Now, let \(\xi \) be a Senn design, let \(c=(e_{n+1}^T-e_1^T,0_{n+1}^T)^T\), and let us denote \(M:\,= M(\xi )\). Then, \(r=2^{-1}\big (1,n^{-1}1_n^T\big )^T, X=(2n)^{-1}\big (1_n, I_n\big )^T\),

$$\begin{aligned} M_\tau = \frac{1}{4n} \begin{bmatrix} n&-1_n^T\\ -1_n&I_n \end{bmatrix}, \text { and let } M_\tau ^- = \begin{bmatrix} 0&0_n^T\\ 0_n&4nI_n \end{bmatrix}. \end{aligned}$$

Thus, \(c^TM^- c = (e_{n+1}-e_1)^TM_\tau ^- (e_{n+1}-e_1) = 4n\).
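The choice of the generalized inverse can be checked numerically (a pure-Python sketch, not from the paper; n = 3 is arbitrary): \(M_\tau ^- = \mathrm {diag}(0, 4nI_n)\) indeed satisfies \(M_\tau M_\tau ^- M_\tau = M_\tau \) for the Senn design's \(M_\tau \), and \(c^TM_\tau ^-c = 4n\) for \(c=e_{n+1}-e_1\).

```python
n = 3
# M_tau = (4n)^{-1} [[n, -1_n^T], [-1_n, I_n]] for the Senn design
M = [[(n if i == 0 and j == 0 else
       -1.0 if (i == 0) != (j == 0) else
       1.0 if i == j else 0.0) / (4 * n)
      for j in range(n + 1)] for i in range(n + 1)]
# candidate generalized inverse: diag(0, 4n I_n)
G = [[4.0 * n if i == j and i > 0 else 0.0 for j in range(n + 1)]
     for i in range(n + 1)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

MGM = matmul(matmul(M, G), M)
# generalized inverse property: M G M = M
assert all(abs(MGM[i][j] - M[i][j]) < 1e-12
           for i in range(n + 1) for j in range(n + 1))

c = [-1.0] + [0.0] * (n - 1) + [1.0]  # e_{n+1} - e_1
val = sum(c[i] * G[i][j] * c[j] for i in range(n + 1) for j in range(n + 1))
assert abs(val - 4 * n) < 1e-12       # c^T M_tau^- c = 4n
```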

Let G be given by (10), where \(M_{22}^-\) is given by (11). It follows that \(Gc = \big (0_n^T, 4n, 0_n^T, -2n\big )^T\), and therefore, for any feasible design \(\xi '\) satisfying conditions (i), (ii), we have

$$\begin{aligned} \begin{aligned} c^TG^TM(\xi ')Gc&= \begin{bmatrix} 0_n^T&4n&0_n^T&-2n \end{bmatrix} \begin{bmatrix} \mathrm {diag}(r')&r'&X' \\ {r'}^T&1&n^{-1}1_n^T \\ {X'}^T&n^{-1}1_n&n^{-1}I_n \end{bmatrix} \begin{bmatrix} 0_n \\ 4n \\ 0_n \\ -2n \end{bmatrix} \\&= (4n)^2 r_n' + (-2n)^2 \frac{1}{n} + 2 \times 4n (-2n) \xi '(n,n) = 4n, \end{aligned} \end{aligned}$$

because \(\xi '(n,n) = r_n'\). Therefore, any design satisfying conditions (i), (ii) satisfies the desired inequality \(4n \le c^TM^-c = 4n\). Hence, for any k, the Senn design attains the minimum possible latest variance. \(\square \)

Proof of Theorem 5

For the first n latest variances, the argument is the same as in the standard design case; see the proof of Theorem 4. Optimality of \(\xi \) with respect to the \((n+1)\)-st latest variance can be proved by using the General Equivalence Theorem for c-optimality, \(c^T=(e^T_{n+1}-e^T_1, 0_{n+1}^T)\).

Design \(\xi \) satisfies

$$\begin{aligned}X=\frac{1}{2(n+1)}\begin{bmatrix} 1_{n-1}^T&1_2^T\\ I_{n-1}&0_{(n-1)\times 2} \\ 0_{n-1}^T&1_2^T\end{bmatrix},\quad r=\begin{bmatrix} \frac{1}{2} \\ \frac{1}{2(n+1)}1_{n-1} \\ \frac{1}{n+1} \end{bmatrix}.\end{aligned}$$

Then,

$$\begin{aligned}M_\tau =\begin{bmatrix} 1/4&-\frac{1}{4(n+1)}1_{n-1}^T&-\frac{1}{2(n+1)} \\ -\frac{1}{4(n+1)}1_{n-1}&\frac{1}{4(n+1)}I_{n-1}&0_{n-1} \\ -\frac{1}{2(n+1)}&0_{n-1}^T&\frac{1}{2(n+1)} \end{bmatrix} \end{aligned}$$

and

$$\begin{aligned}M_\tau ^- =2(n+1)\begin{bmatrix} 0&0_{n-1}^T&0 \\ 0_{n-1}&2I_{n-1}&0_{n-1} \\ 0&0_{n-1}^T&1 \end{bmatrix} \end{aligned}$$

is a generalized inverse of \(M_\tau \). Hence, \(c^TM^- c = (e^T_{n+1}-e^T_1) M_\tau ^- (e_{n+1}-e_1) = 2(n+1)\). Furthermore,

$$\begin{aligned} M_{22}^- = \begin{bmatrix} 0&0_{n+1}^T\\ 0_{n+1}&(n+1)I_{n+1} \end{bmatrix} \end{aligned}$$

is a generalized inverse of \(M_{22}\). By choosing the generalized inverse G of M given by (10), after some calculations similar to the proof of Theorem 4, the normality inequality becomes \(c^TG^TM(\xi ')Gc = 2(n+1) \le c^TM^-c = 2(n+1)\) for any design \(\xi '\) satisfying (i), (ii). \(\square \)

Proof of Theorem 6

Let \(\xi \) be a standard design, and let us denote \(v_i(\xi ):\,=\mathrm {Var}_\xi (\widehat{\tau _i-\tau _0}), i=1,\ldots ,n\). Then, \(\varPsi (\xi ) = \max _i v_i(\xi )\). Theorem 4 states that the Senn design \(\xi _S\) is LV-optimal, i.e., \(d_k(\xi ) \ge d_k(\xi _S)\) for all \(k=1,\ldots ,n\). If \(k=n\), then the latest variance and the final variance (the variance when the entire design is carried out) coincide. It follows that \(v_n(\xi ) \ge v_n(\xi _S)\).

In the proof of Theorem 1, we can observe that \(Q^TM_\tau ^-(\xi _S)Q = 4nI_n\). It follows that \(v_1(\xi _S) = \cdots = v_n(\xi _S) \propto 4n\), and thus, \(\varPsi (\xi _S) = v_n(\xi _S)\). Because \(v_n(\xi ) \ge v_n(\xi _S)\), we obtain \(\varPsi (\xi ) \ge v_n(\xi ) \ge \varPsi (\xi _S)\). \(\square \)


Cite this article

Rosa, S., Harman, R. Optimal approximate designs for comparison with control in dose-escalation studies. TEST 26, 638–660 (2017). https://doi.org/10.1007/s11749-017-0529-3

