## Abstract

Hierarchical random effect models are used for different purposes in clinical research and other areas. In general, the main focus is on population parameters related to the expected treatment effects or group differences among all units of an upper level (e.g. subjects in many settings). Optimal designs for the estimation of population parameters are well established for many models. However, optimal designs for the prediction of the effects of the individual units may differ. Several settings are identified in which individual prediction is of interest. In this paper, we determine optimal designs for individual predictions, e.g. in multi-cluster trials or in trials that investigate a new treatment in a number of different subpopulations, and compare them to a conventional balanced design with respect to treatment allocation. Our investigations show that, in the case of uncorrelated cluster intercepts and cluster treatment effects, the optimal allocations are far from balanced if the treatment effects vary strongly as compared to the residual error, and more subjects should be recruited to the active (new) treatment. Nevertheless, the efficiency loss may be limited, resulting in a moderate sample size increase when individual predictions are foreseen with a balanced allocation.


## 1 Introduction

Hierarchical random effect models are used for different purposes. Common applications, e.g. in clinical research, are random effects meta-analyses of several clinical trials using individual patient data (IPD), multi-centre trials assuming random centre effects, or mixed/random effects models for repeated measurements (MMRM) for longitudinal data within subjects. Clinical trials or meta-analyses of clinical trials may involve several populations or subgroups of patients (e.g. defined by genetic biomarkers), where in some settings between-population variability may be modelled using a hierarchical approach.

Precision medicine aims at investigating new treatments in different subpopulations of patients, e.g. with specific tumour subtypes, in order to detect potentially differential treatment effects indicating the application in specific subpopulations. Basket trials investigate a new treatment in a number of different groups of patients, where the treatment is compared to a control in each of the different subpopulations. If, for example, two binary or dichotomized genetic biomarkers identify biomarker positive or negative patients, the whole patient population could be split into four disjoint subsets of patients. However, far more subpopulations of interest may be identified, especially in oncological settings, where one treatment may be investigated in a large number of different patient populations. Furthermore, in many oncological indications, biomarker defined subgroups might be rather small and early phase clinical trials aim at exploring different subpopulations that may be highly heterogeneous with respect to the response to treatment, since depending on the individual biomarker status, different treatment effects are often expected. In these difficult settings with a limited number of trial participants available, high design efficiency is an important prerequisite for a successful development of new targeted drugs in small populations.

Further applications arise in observational studies, where data on quality parameters of drugs come from hierarchical settings with multiple layers, i.e. with multiple sources of variability according to the underlying manufacturing process, e.g. given by manufacturing sites, production batches and samples within batches. In general, the main focus is on population parameters related to the expected treatment effects or group differences among all units of an upper level (e.g. trials in IPD meta-analyses, centres in multi-centre trials, patients in longitudinal trials, batches in quality control, etc.). Several authors considered optimal designs for the estimation of population parameters (expected values of random effects) in similar models; see e.g. [2,3,4,5,6, 8]. However, prediction of the outcome in the individual units may also be of interest in several settings, such as treatment effects in single clusters to assess the qualification of individual clinics, manufacturing sites in manufacturing control, or treatment effects in different subpopulations of patients. In these cases, the following questions arise: whether optimal designs for population parameters can also be used for individual predictions of random effects, whether and to which extent optimal designs for individual predictions differ from those for population parameters, and which efficiency loss (or sample size increase) is to be anticipated if another, conventional design is chosen.

In this paper, we investigate optimal designs for random effects to be applied, e.g. in basket trials (or multi-cluster trials), investigating a number of disjoint patient populations and compare them to a conventional balanced design with respect to treatment allocation. Nevertheless, the results are applicable to very different settings as outlined above.

The structure of the paper is as follows: In Sect. 2, a hierarchical model is specified and the best linear unbiased predictions of the individual parameters (intercepts and treatment effects for the individual units or clusters) are derived. In Sect. 3, analytical results are presented to characterize optimal designs for prediction of the cluster specific effects. The results are illustrated by some numerical examples. The paper is concluded by a short discussion.

## 2 Model Specification

We consider a randomized comparative trial with *K* different clusters (e.g. subpopulations). In all of these clusters, individuals are allocated to two treatment groups. In the first group (denoted by \(x=1\)), the individuals receive an active treatment while in the second group (denoted by \(x=0\)) a placebo or a control treatment is applied.

We consider *K* clusters \(i=1,\ldots ,K\). Denote by \(\mu _i\) and \(\alpha _i\) the intercept (mean response at placebo or control) and the effect of the active treatment (compared to placebo or control), respectively, in cluster *i*, which both may vary across the clusters. The response \(Y_{ij}\) of an individual \(j=1,\ldots ,N_i\) in cluster *i* can be described as

$$Y_{ij}=\mu _i+\alpha _i x_{ij}+\varepsilon _{ij},\quad j=1,\ldots ,N_i,\ i=1,\ldots ,K, \qquad (1)$$

where \(x_{ij}\) is equal to 1, if the individual belongs to the treatment group, and \(x_{ij}\) is equal to 0 for the control (or placebo) group and \(\varepsilon _{ij}\) denotes the random variation in the response of the individuals. The individual variations \(\varepsilon _{ij}\) are assumed to have zero mean and to be homoscedastic with common variance \(\sigma ^2\).

The cluster specific intercepts \(\mu _{i}\) and treatment effects \(\alpha _{i}\) are assumed to be random with (unknown) expected values \(E(\mu _{i})=\mu\) and \(E(\alpha _{i})=\alpha\) characterizing the mean intercept and mean treatment effect across the clusters and covariance structure \({\mathrm {Cov}}((\mu _i, \alpha _i)^\top )=\sigma ^2{\mathbf {D}}\) for some \(2\times 2\) positive definite dispersion matrix \({\mathbf {D}}\). All random effects and all individual variations are assumed to be uncorrelated.

For the sake of simplicity, we further assume that the total number *N* of individuals per cluster is the same for all clusters (\(N_i=N\)) and that the allocation rate is constant across the clusters, i.e. the number *n* of individuals in the treatment group is the same for all clusters. The design problem can then be formulated in terms of finding the optimal allocation rate \(w=n/N\) for the treatment group.

Because of exchangeability of the individuals within each cluster, we may sort them in the analysis regardless of randomization in such a way that the first *n* individuals \(j=1,\ldots ,n\) to be analyzed receive the active treatment and the remaining \(N-n\) individuals \(j=n+1,\ldots ,N\) are in the control group. Then the experimental settings \(x_{ij}\) in (1) can be specified by

$$x_{ij}=x_j=\left\{ \begin{array}{ll} 1, &{} j=1,\ldots ,n\\ 0, &{} j=n+1,\ldots ,N \end{array}\right.$$

and are independent of the cluster *i*.

Hence, the multi-cluster model (1) can be identified as a particular case of the random coefficient regression model

$$Y_{ij}={\mathbf {f}}(x_{ij})^\top \varvec{\beta }_i+\varepsilon _{ij} \qquad (2)$$

investigated by [7] when the regression functions and the cluster parameters are specified by \({\mathbf {f}}(x)=(1, x)^\top\) and \(\varvec{\beta }_i=(\mu _i,\alpha _i)^\top\), respectively. In general, this model can be written in vector notation as

$${\mathbf {Y}}_{i}={\mathbf {F}}\varvec{\beta }_i+\varepsilon _{i},\quad i=1,\ldots ,K, \qquad (3)$$

where \({\mathbf {Y}}_{i}=(Y_{i1},\ldots ,Y_{iN})^\top\) and \(\varepsilon _{i}=(\varepsilon _{i1},\ldots ,\varepsilon _{iN})^\top\) are the *N*-dimensional vectors of observations and individual variations at cluster *i*, respectively, and \({\mathbf {F}}=({\mathbf {f}}(x_1),\ldots , {\mathbf {f}}(x_N))^\top\) is the within cluster design matrix which is equal across all clusters.

For the present multi-cluster model, the design matrix \({\mathbf {F}}\) simplifies to

$${\mathbf {F}}=\begin{pmatrix} {\mathbb {1}}_{n} &{} {\mathbb {1}}_{n}\\ {\mathbb {1}}_{N-n} &{} {\mathbf {0}}_{\,N-n} \end{pmatrix},$$

where \({\mathbb {1}}_\ell\) and \({\mathbf {0}}_{\,\ell }\) denote the \(\ell\)-dimensional vectors with all entries equal to 1 and 0, respectively.

According to Corollary 1 of [7] (see also [2]), in model (2) the best linear unbiased predictors (BLUPs)

$$\hat{\varvec{\beta }}_i=\hat{\varvec{\beta }}_0+\big ({\mathbb {I}}_2+({\mathbf {D}}\,{\mathbf {F}}^\top {\mathbf {F}})^{-1}\big )^{-1}\big (\hat{\varvec{\beta }}_{i;{\mathrm {within}}}-\hat{\varvec{\beta }}_0\big ) \qquad (4)$$

of the random parameters \(\varvec{\beta }_i\) are weighted combinations of the estimates \(\hat{\varvec{\beta }}_{i;{\mathrm {within}}}=({\mathbf {F}}^\top {\mathbf {F}})^{-1}{\mathbf {F}}^\top {\mathbf {Y}}_i\) based only on the observations within cluster *i* and the best linear unbiased estimator (BLUE) \(\hat{\varvec{\beta }}_0=({\mathbf {F}}^\top {\mathbf {F}})^{-1}{\mathbf {F}}^\top \bar{{\mathbf {Y}}}\) of the population parameter \(\varvec{\beta }_0=E(\varvec{\beta }_i)=(\mu , \alpha )^\top\), where \(\bar{{\mathbf {Y}}}=\frac{1}{K}\sum _{i=1}^{K}{\mathbf {Y}}_i\) is the mean observational vector averaged across the clusters.

We additionally assume that the cluster intercepts \(\mu _i\) and the cluster treatment effects \(\alpha _i\) are uncorrelated for all clusters, i.e. \({\mathbf {D}}={\mathrm {diag}}(u,v)\), where \(u=\sigma _{\mu }^2/\sigma ^2>0\) and \(v=\sigma _{\alpha }^2/\sigma ^2>0\) are the variance ratios of the intercepts and the treatment effects in relation to the observational variance of the individuals.

With the common notations \(\bar{Y}_{i\,\cdot }^{(T)}=\frac{1}{n}\sum _{j=1}^nY_{ij}\) and \(\bar{Y}_{i\,\cdot }^{(C)}=\frac{1}{N-n}\sum _{j=n+1}^NY_{ij}\) for the mean response in the treatment (“*T*”) and the control (“*C*”) group in cluster *i*, \(\bar{Y}_{\cdot \,\cdot }^{(T)} =\frac{1}{K}\sum _{i=1}^K\bar{Y}_{i\,\cdot }^{(T)}\) and \(\bar{Y}_{\cdot \,\cdot }^{(C)}=\frac{1}{K}\sum _{i=1}^K\bar{Y}_{i\,\cdot }^{(C)}\) for the overall mean of the treatment and the control group, respectively, the BLUPs for the cluster parameters \(\hat{\varvec{\beta }}_i=({\hat{\mu }}_i,{\hat{\alpha }}_i)^\top\) of the random intercepts and the random treatment effects \({\varvec{\beta }}_i=({\mu }_i,{\alpha }_i)^\top\) in model (1) can be written as weighted averages

$${\hat{\mu }}_i=c\,\bar{Y}_{i\,\cdot }^{(C)}+(1-c)\,\bar{Y}_{\cdot \,\cdot }^{(C)}+c_0\big (\bar{Y}_{i\,\cdot }^{(T)}-\bar{Y}_{\cdot \,\cdot }^{(T)}\big ), \qquad (5)$$

where \(c_0=\frac{nu}{(Nu+1)(nv+1)-n^2uv}\) and \(c=\frac{u(N-n)(nv+1)}{(Nu+1)(nv+1)-n^2uv}\), and

$${\hat{\alpha }}_i=c_1\,\bar{Y}_{i\,\cdot }^{(T)}+(1-c_1)\,\bar{Y}_{\cdot \,\cdot }^{(T)}-\big ((1-c_2)\,\bar{Y}_{i\,\cdot }^{(C)}+c_2\,\bar{Y}_{\cdot \,\cdot }^{(C)}\big ) \qquad (6)$$

with weights \(c_1=\frac{(Nu+1)nv-n^2uv}{(Nu+1)(nv+1)-n^2uv}\) and \(c_2=\frac{Nu+nv+1}{(Nu+1)(nv+1)-n^2uv}\).

The derivation of formulas (5) and (6) is deferred to “Appendix”.
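The weighted-average representation can also be checked numerically. The sketch below (all sample values are hypothetical) computes the weights \(c_0\), *c*, \(c_1\), \(c_2\) and compares the resulting prediction \({\hat{\alpha }}_i\) with the matrix shrinkage form \(\hat{\varvec{\beta }}_i=\hat{\varvec{\beta }}_0+({\mathbb {I}}_2+({\mathbf {D}}{\mathbf {F}}^\top {\mathbf {F}})^{-1})^{-1}(\hat{\varvec{\beta }}_{i;{\mathrm {within}}}-\hat{\varvec{\beta }}_0)\); the particular arrangement of the group means in `blup_alpha` is one algebraic form consistent with these weights, not a quotation from the text, and the function names are illustrative only.

```python
# Numerical cross-check of the BLUP weights c0, c, c1, c2 and of the
# treatment-effect BLUP. Sample values (N = 4, n = 2, u = v = 0.5 and
# the group means) are hypothetical.

def blup_weights(N, n, u, v):
    """Weights appearing in the BLUP formulas for mu_i and alpha_i."""
    det = (N * u + 1) * (n * v + 1) - n ** 2 * u * v
    c0 = n * u / det
    c = u * (N - n) * (n * v + 1) / det
    c1 = ((N * u + 1) * n * v - n ** 2 * u * v) / det
    c2 = (N * u + n * v + 1) / det
    return c0, c, c1, c2

def blup_alpha(yiT, yiC, ybarT, ybarC, N, n, u, v):
    """Predicted cluster treatment effect as a weighted average of the
    cluster and overall group means (one arrangement of the weights)."""
    _, _, c1, c2 = blup_weights(N, n, u, v)
    return c1 * yiT + (1 - c1) * ybarT - ((1 - c2) * yiC + c2 * ybarC)

def blup_alpha_matrix(yiT, yiC, ybarT, ybarC, N, n, u, v):
    """Same prediction via the 2x2 shrinkage matrix M = (B + I)^{-1} B,
    where B = D F'F, F'F = [[N, n], [n, n]] and D = diag(u, v)."""
    b = [[N * u, n * u], [n * v, n * v]]                       # B = D F'F
    cm = [[b[0][0] + 1, b[0][1]], [b[1][0], b[1][1] + 1]]      # B + I
    det = cm[0][0] * cm[1][1] - cm[0][1] * cm[1][0]
    cinv = [[cm[1][1] / det, -cm[0][1] / det],
            [-cm[1][0] / det, cm[0][0] / det]]                 # (B + I)^{-1}
    m = [[sum(cinv[i][k] * b[k][j] for k in range(2)) for j in range(2)]
         for i in range(2)]                                    # M
    # within-cluster estimates and population BLUE in terms of group means
    beta_within = (yiC, yiT - yiC)
    beta_0 = (ybarC, ybarT - ybarC)
    diff = (beta_within[0] - beta_0[0], beta_within[1] - beta_0[1])
    return beta_0[1] + m[1][0] * diff[0] + m[1][1] * diff[1]
```

Both routes agree to machine precision, which can serve as a quick sanity check when implementing the predictors.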

To measure the quality of a design, we will use the mean squared error (MSE) matrix of the BLUP \(\hat{\varvec{\beta }}=\left( \hat{\varvec{\beta }}_1^\top ,\ldots , \hat{\varvec{\beta }}_K^\top \right) ^\top\) of the complete vector \(\varvec{\beta }=(\varvec{\beta }_1^\top ,\ldots , \varvec{\beta }_K^\top )^\top\) of all random parameters, which can be computed by means of the following formula:

$${\mathrm {MSE}}_{\hat{\varvec{\beta }}}=\sigma ^2\Big (\big ({\mathbb {I}}_K-\tfrac{1}{K}{\mathbb {1}}_K{\mathbb {1}}_K^\top \big )\otimes \big ({\mathbf {F}}^\top {\mathbf {F}}+{\mathbf {D}}^{-1}\big )^{-1}+\tfrac{1}{K}{\mathbb {1}}_K{\mathbb {1}}_K^\top \otimes \big ({\mathbf {F}}^\top {\mathbf {F}}\big )^{-1}\Big ) \qquad (7)$$

(see [7], Corollary 2), where \({\mathbb {I}}_\ell\) denotes the \(\ell \times \ell\) identity matrix and \(\otimes\) is the symbol for the Kronecker product of matrices or vectors.

Further denote by \(\varvec{\alpha }=\left( \alpha _1,\ldots , \alpha _K\right) ^\top\) the vector of treatment effects for all clusters. Then \(\varvec{\alpha }= \left( {\mathbb {I}}_K\otimes (0,\, 1) \right) \varvec{\beta }\) and, hence,

$${\mathrm {MSE}}_{\hat{\varvec{\alpha }}}=\big ({\mathbb {I}}_K\otimes (0,\,1)\big )\,{\mathrm {MSE}}_{\hat{\varvec{\beta }}}\,\big ({\mathbb {I}}_K\otimes (0,\,1)\big )^\top \qquad (8)$$

for the MSE matrix of the BLUP \(\hat{\varvec{\alpha }}=({\hat{\alpha }}_1,\ldots , {\hat{\alpha }}_K)^\top\) of \(\varvec{\alpha }\). Using this and formula (7), we obtain the MSE matrix

$${\mathrm {MSE}}_{\hat{\varvec{\alpha }}}=\sigma ^2\Big (\frac{v(Nu+1)}{(Nu+1)(nv+1)-n^2uv}\big ({\mathbb {I}}_K-\tfrac{1}{K}{\mathbb {1}}_K{\mathbb {1}}_K^\top \big )+\frac{N}{Kn(N-n)}\,{\mathbb {1}}_K{\mathbb {1}}_K^\top \Big ). \qquad (9)$$

Note that the MSE matrix for \(\hat{\varvec{\alpha }}\) is completely symmetric; hence, the MSE of \({\hat{\alpha }}_i\) attains the same value for all cluster treatment effects.

## 3 Optimal Design

As individuals are interchangeable within treatment groups, we may define an exact within cluster design

$$\xi =\begin{pmatrix} T &{} C\\ n &{} N-n \end{pmatrix} \qquad (10)$$

by the allocation numbers *n* and \(N-n\) to the treatment and control groups *T* and *C*, respectively.

For analytical purposes, we generalize this to the definition of an approximate design:

$$\xi =\begin{pmatrix} T &{} C\\ w &{} 1-w \end{pmatrix}, \qquad (11)$$

where \(w=\frac{n}{N}\) is the allocation rate to the treatment group and \(1-w=\frac{N-n}{N}\) is the allocation rate to the control group. For finding an optimal design, only the optimal allocation rate \(w^*\) to the treatment group has to be determined.

For an approximate design, the definition of the MSE matrix (9) of the BLUP \(\hat{\varvec{\alpha }}\) is extended in a straightforward manner and can be rewritten (neglecting \(\sigma ^2\)) as

$${\mathrm {MSE}}_{\hat{\varvec{\alpha }}}(w)=\frac{v(Nu+1)}{(Nu+1)(Nwv+1)-N^2w^2uv}\big ({\mathbb {I}}_K-\tfrac{1}{K}{\mathbb {1}}_K{\mathbb {1}}_K^\top \big )+\frac{1}{NKw(1-w)}\,{\mathbb {1}}_K{\mathbb {1}}_K^\top \qquad (12)$$

in terms of the allocation rate *w*.

For the assessment of the MSE matrix, we focus on the *A*-optimality criterion for the cluster treatment effects \(\varvec{\alpha }\), which averages the mean squared errors of the predicted cluster treatment effects \({\hat{\alpha _i}}\). More specifically, the *A*-criterion \(\Phi _{A,\alpha }\) is the trace of the MSE matrix of the prediction \(\hat{\varvec{\alpha }}\) of the cluster treatment effects. For an approximate design, we obtain

$$\Phi _{A,\alpha }(w)=\frac{(K-1)\,v(Nu+1)}{(Nu+1)(Nwv+1)-N^2w^2uv}+\frac{1}{Nw(1-w)} \qquad (13)$$

for the criterion function \(\Phi _{A,\alpha }\) in terms of the allocation rate *w*.

Alternatively, one may consider the MV-optimality criterion for the cluster treatment effects \(\varvec{\alpha }\), which measures the maximal MSE over all predicted cluster treatment effects \({\hat{\alpha }}_i\). More specifically, the MV-criterion \(\Phi _{{\mathrm{MV}},\alpha }\) is the maximal diagonal entry of the MSE matrix of the prediction \(\hat{\varvec{\alpha }}\) of the cluster treatment effects. Because the MSE is equal for all \({\hat{\alpha _i}}\), the MV-criterion is related to the *A*-criterion by \(\Phi _{{\mathrm{MV}},\alpha }=\frac{1}{K}\Phi _{A,\alpha }\), and hence, the *A*-optimal designs are also MV-optimal.

Because, in general, there is no explicit solution for the optimal allocation rate minimizing (13), we give insight into the qualitative behaviour by some numerical examples below.

It is worthwhile mentioning that the criterion (13) is convex, and therefore, an optimal exact design may be obtained by choosing the best of the two exact designs adjacent to an optimal approximate design.
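Since \(w^*\) has no closed form in general, a small numerical sketch can locate it and exploit the convexity argument for rounding to an exact design. The closed-form expression used for \(\Phi _{A,\alpha }\) below is an assumption consistent with the trace of the MSE matrix for \(n=wN\); the function and parameter names (`phi_A`, `optimal_w`, `optimal_n`, `grid`) are illustrative only.

```python
# Minimal sketch: grid minimisation of the A-criterion and rounding of
# the approximate optimum to an exact design (hypothetical helper names).
import math

def phi_A(w, K, N, u, v):
    """A-criterion (trace of the MSE matrix of alpha-hat, sigma^2 omitted)."""
    between = (K - 1) * v * (N * u + 1) / (
        (N * u + 1) * (N * w * v + 1) - (N * w) ** 2 * u * v)
    within = 1.0 / (N * w * (1.0 - w))
    return between + within

def optimal_w(K, N, u, v, grid=10_000):
    """Grid approximation of the optimal allocation rate w* in (0, 1)."""
    ws = [(i + 1) / (grid + 2) for i in range(grid)]
    return min(ws, key=lambda w: phi_A(w, K, N, u, v))

def optimal_n(K, N, u, v):
    """Best exact allocation: by convexity of the criterion, one of the
    two integer neighbours of w* N is optimal."""
    w_star = optimal_w(K, N, u, v)
    lo, hi = math.floor(w_star * N), math.ceil(w_star * N)
    candidates = [n for n in (lo, hi) if 0 < n < N]
    return min(candidates, key=lambda n: phi_A(n / N, K, N, u, v))
```

For vanishing treatment-effect variability the grid optimum reproduces the balanced allocation \(w^*=0.5\), while large values of *v* push the optimum towards the treatment group, in line with the examples below.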

### Example 1

For illustrative purposes, we consider a numerical example with \(K=16\) clusters and \(N=4\) individuals in each cluster, corresponding, for example, to a phase II trial investigating a potentially heterogeneous treatment effect in a population that is subdivided by four different dichotomous biomarkers. Obviously, the resulting sample size per subgroup will be small, which results in a high need for optimal designs.

Figure 1 exhibits the behaviour of the optimal allocation rate \(w^*\) to the treatment group in dependence on the variance ratio *v* of the treatment effects for fixed values 0.01, 0.1, 0.25, 0.5 and 1.5 of the variance ratio *u* of the intercepts. For ease of presentation, we plot the optimal allocation rate \(w^*\) against the rescaled variance ratio \(r_v=v/(1+v)\), in the spirit of an intra-class correlation, in order to cover all possible values of the treatment effects variance ratio by the finite interval \((0, 1)\): \(r_v\) is monotonically increasing in *v*, and \(r_v\rightarrow 0\) and \(r_v\rightarrow 1\) relate to \(v\rightarrow 0\) and \(v\rightarrow \infty\), respectively. The figure shows that for fixed values of *u* the optimal allocation rate \(w^*\) is equal to 0.5 for \(v\rightarrow 0\) and increases with increasing values of the variance ratio *v* of the treatment effects. The lines associated with the different values of *u* appear in descending order, which means that the optimal allocation rates decrease when the variance ratio *u* of the intercepts gets larger.

The next figure (Fig. 2) shows the behaviour of the optimal allocation rate in dependence on the variance ratio *u* of the intercepts for fixed values 0.01, 0.1, 0.2, 0.5 and 2 of the variance ratio *v* of the treatment effects, where again the variance ratio is rescaled (\(r_u=u/(u+1)\)). Also here it can be seen that the optimal allocation rate decreases with increasing values of *u* and increases with increasing values of *v*.

Figures 3 and 4 present the efficiency of the equal allocation rate \(w_0=0.5\) which is optimal in the fixed effects model (\(u=v=0\)). The efficiency for the *A*-criterion (*A*-efficiency) has been computed using the common formula

$${\mathrm {eff}}_{A}(w_0)=\frac{\Phi _{A,\alpha }(w^*)}{\Phi _{A,\alpha }(w_0)}. \qquad (14)$$

Note that for the MV-criterion the MV-efficiency, which is defined analogously, coincides with the *A*-efficiency. The efficiencies decrease with increasing values of the variance of the treatment effects and increase with increasing values of the variance of the intercepts.
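The efficiency and the implied relative sample-size increase \(1/{\mathrm {eff}}_A-1\) of the balanced design can be computed as in the following self-contained sketch; the closed form of \(\Phi _{A,\alpha }\) is again an assumption consistent with the MSE matrix, and the function names are illustrative.

```python
# Sketch: A-efficiency of the balanced allocation w0 = 0.5 relative to
# a grid-approximated optimum, and the implied sample-size inflation.

def phi_A(w, K, N, u, v):
    """A-criterion (trace of the MSE matrix of alpha-hat, sigma^2 omitted)."""
    between = (K - 1) * v * (N * u + 1) / (
        (N * u + 1) * (N * w * v + 1) - (N * w) ** 2 * u * v)
    return between + 1.0 / (N * w * (1.0 - w))

def a_efficiency(w0, K, N, u, v, grid=10_000):
    """Efficiency of allocation rate w0: Phi_A(w*) / Phi_A(w0) <= 1."""
    ws = [(i + 1) / (grid + 2) for i in range(grid)]
    w_star = min(ws, key=lambda w: phi_A(w, K, N, u, v))
    return phi_A(w_star, K, N, u, v) / phi_A(w0, K, N, u, v)

# Relative sample-size increase needed by the balanced design:
# 1 / a_efficiency(0.5, K, N, u, v) - 1
```

An efficiency close to 1 indicates that the balanced design is nearly optimal, while, say, an efficiency of 0.87 would translate into roughly a 15% larger sample size for the balanced allocation.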

### Example 2

Extending the illustrative example, we further consider trials with \(K=16\) or \(K=32\) clusters and \(N=6\) or \(N=4\) individuals in each cluster. Figure 5 exhibits the behaviour of the optimal allocation rate \(w^*\) to the treatment group in dependence on the variance ratio \(q=v/u\) for fixed values 0.01, 0.1, 0.25, 0.5 and 1.5 of the variance ratio *u* of the intercepts. For ease of presentation, we again plot the optimal allocation rate \(w^*\) against the rescaled variance ratio \(r_q=q/(1+q)\). As can be observed from the graphics, the dependence on the variance ratio *q* is more pronounced for \(K=32\) and \(N=4\) than for \(K=16\) and \(N=6\).

Figure 6 presents the efficiency of the equal allocation rate \(w_0=0.5\), which is optimal in the fixed effects model. The efficiency is more sensitive to the variance ratio in the case \(K=32\) and \(N=4\) than for \(K=16\) and \(N=6\).

## 4 Discussion

In the present paper, we focused on *A*- and MV-optimality for prediction of the cluster treatment effects \(\varvec{\alpha }\). Alternatively, one may be tempted to employ the *D*-optimality criterion \(\Phi _{D,\alpha }\) which is defined as the log determinant of the MSE matrix of the prediction \(\hat{\varvec{\alpha }}\) of the cluster treatment effects \(\varvec{\alpha }\),

$$\Phi _{D,\alpha }=\log \det \big ({\mathrm {MSE}}_{\hat{\varvec{\alpha }}}\big ). \qquad (15)$$

The *D*-criterion is commonly used in the case of estimation of fixed effects and measures essentially the volume of a simultaneous confidence ellipsoid for the parameters of interest taking correlations between the parameter estimates into account. In the present setting of prediction of cluster treatment effects, especially when considering treatment effects in patient subpopulations, stand-alone interpretations for the given clusters are targeted and the simultaneous prediction ellipsoid corresponding to the *D*-criterion appears less relevant for the clinical interpretation. From a practical point of view, simultaneous prediction intervals based on a Bonferroni-like approach would better fit to the intended application. The corresponding *R*-optimality criterion (see e.g. [1]) measures the volume of these multidimensional intervals and is defined as the product of the diagonal elements of the MSE matrix. In the present situation of equal diagonal elements, the *R*-criterion \(\Phi _{R,\alpha }\) is related to the *A*- and MV-criterion by \(\Phi _{R,\alpha }=(\Phi _{{\mathrm{MV}},\alpha })^K=(\frac{1}{K}\Phi _{A,\alpha })^K\), and hence, the *A*-optimal designs are also *R*-optimal.

As illustrated in the examples, the larger the between-unit (between-cluster) variability of the treatment effects, the more the optimal weight deviates from equal allocation, especially if the variance of the treatment effects is large compared to the variance of the intercepts of the units. Increasing heterogeneity of the treatment effect decreases the precision of the design that is optimal for the overall population parameters: a balanced design is far from optimal if the treatment effects vary strongly as compared to the residual error, and more subjects should be recruited to the active (new) treatment in multi-cluster trials. Nevertheless, the efficiency loss may still be limited, as in the examples, resulting in a total sample size increase of about 10–20% in the considered scenarios if individual predictions are foreseen with a less efficient balanced allocation. If the between-unit variability of the treatment effects is considered to be small, equal allocation may suffice. However, using the results given in this paper, specific settings with different expectations can be assessed properly, in order to make optimal use of a limited number of patients or sample units when predicting random effects of individual units. Basket trials with few patients in a number of biomarker-defined subpopulations could thus be designed as efficiently as possible, facilitating the development of new targeted drugs.

## Change history

### 12 July 2021

A Correction to this paper has been published: https://doi.org/10.1007/s42519-021-00205-z

## References

1. Dette H (1997) Designing experiments with respect to ‘standardized’ optimality criteria. J R Stat Soc B 59:97–110

2. Fedorov V, Jones B (2005) The design of multicentre trials. Stat Methods Med Res 14:205–248

3. Fedorov V, Leonov S (2013) Optimal design for nonlinear response models. CRC Press, Boca Raton

4. Lemme F, van Breukelen GJP, Berger MPF (2015) Efficient treatment allocation in two-way nested designs. Stat Methods Med Res 24:494–512

5. Liski E, Luoma A, Mandal N, Sinha B (1998) Optimal designs for prediction in random coefficient linear regression models. J Combin Inf Syst Sci (J. N. Srivastava Felicitation Volume) 23:1–16

6. Liski E, Mandal N, Shah K, Sinha B (2002) Topics in optimal designs. Lecture notes in statistics, vol 163. Springer, New York

7. Prus M, Schwabe R (2016) Optimal designs for the prediction of individual parameters in hierarchical models. J R Stat Soc B 78:175–191

8. Schwabe R, Schmelter T (2008) On optimal designs in random intercept models. Tatra Mt Math Publ 39:145–153

## Funding

Open Access funding enabled and organized by Projekt DEAL.


## Additional information

### Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This research has been partially supported by Grant SCHW 531/16 of the German Research Foundation (DFG).

The original online version of this article was revised due to a retrospective Open Access order.

## Appendix: Derivations of the Formulas for the BLUPs for the Cluster Parameters


We use formula (4) for computing the BLUPs. First note

$${\mathbf {F}}^\top {\mathbf {F}}=\begin{pmatrix} N &{} n\\ n &{} n \end{pmatrix},\qquad \hat{\varvec{\beta }}_{i;{\mathrm {within}}}=\big (\bar{Y}_{i\,\cdot }^{(C)},\;\bar{Y}_{i\,\cdot }^{(T)}-\bar{Y}_{i\,\cdot }^{(C)}\big )^\top ,\qquad \hat{\varvec{\beta }}_0=\big (\bar{Y}_{\cdot \,\cdot }^{(C)},\;\bar{Y}_{\cdot \,\cdot }^{(T)}-\bar{Y}_{\cdot \,\cdot }^{(C)}\big )^\top$$

and

$$\big ({\mathbb {I}}_2+({\mathbf {D}}\,{\mathbf {F}}^\top {\mathbf {F}})^{-1}\big )^{-1}=\frac{1}{(Nu+1)(nv+1)-n^2uv}\begin{pmatrix} Nu(nv+1)-n^2uv &{} nu\\ nv &{} (Nu+1)nv-n^2uv \end{pmatrix}.$$

This implies

$${\hat{\mu }}_i=c\,\bar{Y}_{i\,\cdot }^{(C)}+(1-c)\,\bar{Y}_{\cdot \,\cdot }^{(C)}+c_0\big (\bar{Y}_{i\,\cdot }^{(T)}-\bar{Y}_{\cdot \,\cdot }^{(T)}\big )$$

and

$${\hat{\alpha }}_i=c_1\,\bar{Y}_{i\,\cdot }^{(T)}+(1-c_1)\,\bar{Y}_{\cdot \,\cdot }^{(T)}-\big ((1-c_2)\,\bar{Y}_{i\,\cdot }^{(C)}+c_2\,\bar{Y}_{\cdot \,\cdot }^{(C)}\big )$$

with constants \(c_0\), *c*, \(c_1\), and \(c_2\) as given in (5) and (6).

## Rights and permissions

**Open Access** This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

## About this article

### Cite this article

Prus, M., Benda, N. & Schwabe, R. Optimal Design in Hierarchical Random Effect Models for Individual Prediction with Application in Precision Medicine.
*J Stat Theory Pract* **14**, 24 (2020). https://doi.org/10.1007/s42519-020-00090-y
