On Ensembles, I-Optimality, and Active Learning

We consider the active learning problem for a supervised learning model: That is, after training a black box model on a given dataset, we determine which (large batch of) unlabeled candidates to label in order to improve the model further. We concentrate on the large-batch case, because this is most aligned with most machine learning applications, and because it is more theoretically rich. Our approach blends three ideas: (1) We quantify model uncertainty with jackknife-like 50-percent sub-samples ("half-samples"). (2) To select which n of C candidates to label, we consider (a rank-(M − 1) estimate of) the associated C × C prediction covariance matrix, which has good properties. (3) Our algorithm works only indirectly with this covariance matrix, using a linear-in-C object. We illustrate by fitting a deep neural network to about 20 percent of the CIFAR-10 image dataset. The statistical efficiency we achieve is 3× that of random selection.


Introduction
In part, machine learning (ML) technology has developed in response to the large data volumes of online systems, so-called big data. In parallel, physical laboratories have applied automation to scale up the range and throughput of their experiments. DNA encoded libraries [28] enable in vitro selection of molecules of interest. Gregoire et al. [17] develop high-throughput experiments in order to discover new materials. Gongara et al. [16] apply such methods to additive manufacturing to optimize macroscopic structural properties. Such combinatorial experiments combine two technologies: high-throughput fabrication and high-throughput measurement.
Such high volumes of experimental data are good candidates for machine learning technology. Further, they suggest an iterative experimentation loop: fabricate, measure, model, predict, select, and validate by fabrication and measurement. Contemporary deep neural network (DNN) technology is readily useful for the modeling and prediction steps. Active learning, that is, ML algorithms that can actively query for additional observations, has potential for the selection and validation steps. And high-throughput fabrication implies that any proposed new data ("selection") are likely to be fabricated and measured in batches of substantial size.
This work considers active learning for such high-throughput experiments. We abstract two key properties: batches and continuous measurements. As we do so, we implicitly explore the boundary between the statistical theory of experimental design and a relatively contemporary ML technology, deep neural networks (DNNs). To this end, as we attend to notions of statistical efficiency, we consider primarily objects in the prediction domain, not the parameter domain.
Our key contributions revolve around three interconnected ideas: (1) We represent prediction uncertainty by a jackknife-like method called here half-sampling. Even in the canonical case where we predict a scalar, this representation of uncertainty is multivariate. (2) In a principled way, we construct batches of new candidates for labeling. This update-before-label property exploits the multivariate representation of uncertainty. Unlike other batch active learning algorithms, which require something ad hoc in order to achieve diverse samples, our algorithm, with its multivariate representation of uncertainty, intrinsically ensures diversity. (3) We quantify the benefit of active learning by comparing sample sizes required to achieve the same global precision; this is a computational version of relative statistical efficiency. Each of these ideas has roots in statistical theory.
By way of introduction, we briefly recap some linear model theory, optimal experimental design, and active learning. In Sect. 2, we discuss model ensembles. This motivates Sect. 3, where we introduce a jackknife-like approach to model uncertainty, interesting in its own right. To quantify the benefit of using model uncertainty, we develop an active learning algorithm in Sects. 4 and 5.
As our exposition proceeds, we are aware of two audiences. For statistics researchers, please note that two of our efficiency claims, in Sects. 3.5 and 6, are based on simulations. These are perhaps amenable to theoretical analysis, so we anticipate that this work may attract further research on this important topic from the statistical community. For ML researchers and practitioners, we have included a little more statistical background than might otherwise be expected, and for this audience, we have endeavored to make our statistical arguments more accessible, if perhaps less formal.

Supervised Training
We have a training dataset S consisting of N i.i.d. data points, S = {(x i , y i ) ∶ i = 1, 2, … , N}, where x i ∈ ℝ D are D-dimensional features. For classification, y i denotes one of K classes, y i ∈ {1, 2, … , K} . For regression, y i ∈ ℝ.
Our development and algorithm emphasize regression; our example (Sect. 6) involves classification. A note on index notation: Observed, that is, labeled data (x i , y i ) are typically indexed by the letters h and i. Candidates for labeling are known by their features x a and are typically indexed by the letters a, b, and c. Subsets of data are denoted by S, with subscripts j, m, and m ′ . We are not aware of any statement in the following that depends on these details of notation, but hope this i-vs-a-vs-j convention may assist the reader in inferring the author's implied context. At any rate, in this paper, AL algorithms propose a batch {x a , a ∈ S C } , #S C = n , for which the labels {y a , a ∈ S C } are then requested.

Linear Models and Experimental Design
Consider the regression problem and linear models:

y_i = x_i^⊤ β + ε_i,  E{ε_i} = 0,  Cov{ε_h, ε_i} = σ² δ_hi,

where δ_hi is Kronecker's delta. The ordinary least squares estimate of β is β̂ = (X^⊤X)^{-1}X^⊤y. Experimental design is based on such linear models. The theory underlying, say, Box and Hunter [2,3] and Hunter [4] is that of orthogonal arrays, arrays such that X^⊤X is diagonal, and for which all of the diagonal elements of (X^⊤X)^{-1} are in some sense small.
The algorithm-oriented branch of experimental design theory, optimal design, emphasizes the computational problem of making good design matrices X . Optimal design can proceed in settings where orthogonal arrays are unavailable.
Optimal experimental design needs to map the design or feature matrix X into a scalar, to enable the scoring, ranking, and selection of better designs. Consider the error ellipse defined by this equation in β: (β̂ − β)^⊤(X^⊤X)(β̂ − β) = constant. Its (squared) volume is proportional to det((X^⊤X)^{-1}), which is Wald's (1943) D-optimality criterion and is to be minimized.
In the prediction domain, y_a is predicted as ŷ_a = x_a^⊤β̂, with a squared standard error of σ² x_a^⊤(X^⊤X)^{-1}x_a. G-optimality [22] also acts in the prediction domain; it seeks to minimize max(diag(X(X^⊤X)^{-1}X^⊤)), while I-optimality [40] minimizes the average prediction error. A numerical calculation of I-optimality involves minimizing this criterion:

(1/#S_G) ∑_{a ∈ S_G} x_a^⊤(X^⊤X)^{-1}x_a,

the summation taken over some integration grid {x_a, a ∈ S_G} with #S_G elements.
The more standard implementation of I-optimality minimizes instead this criterion, which is free of the numerically enumerated integration grid S_G:

trace((X^⊤X)^{-1} B),  B = ∫ x x^⊤ dμ(x),

the integration taking place over a specified experimental domain, which is ultimately represented by the constant matrix B. In the following, we implement a version of I-optimality that depends on a numerical integration grid like S_G.
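The equivalence of the grid-based and moment-matrix forms of the I-criterion is easy to check numerically. The sketch below is a minimal numpy illustration, with a random design and grid as stand-ins for real data: averaging x_a^⊤(X^⊤X)^{-1}x_a over a grid equals trace((X^⊤X)^{-1}B) when B is that grid's moment matrix.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(25, 4))     # design matrix (illustrative)
G = rng.normal(size=(500, 4))    # integration grid S_G (illustrative)
A_inv = np.linalg.inv(X.T @ X)

# grid version of the I-criterion: average prediction variance over S_G
i_grid = np.mean(np.einsum('aj,jk,ak->a', G, A_inv, G))

# moment-matrix version: trace((X'X)^{-1} B), with B the grid's moment matrix
B = G.T @ G / G.shape[0]
i_trace = np.trace(A_inv @ B)

assert np.isclose(i_grid, i_trace)
```

The two forms agree exactly (up to floating point) whenever B is the average of x x^⊤ over the same grid.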

Rank-1 Updates
Optimal experimental design is inherently computationally intensive, and techniques that reduce computational burden are therefore quite attractive. One class of such techniques updates the inverse matrix (X^⊤X)^{-1} rather than recalculating it from scratch. An example is the rank-1 update of Sherman-Morrison (1949): If X_1 = (X_0^⊤, x)^⊤, the one-more-row version of X_0, then

(X_1^⊤X_1)^{-1} = (X_0^⊤X_0)^{-1} − (X_0^⊤X_0)^{-1} x x^⊤ (X_0^⊤X_0)^{-1} / (1 + x^⊤(X_0^⊤X_0)^{-1}x).    (1)

An analogous result updates when a row x is deleted:

(X_0^⊤X_0)^{-1} = (X_1^⊤X_1)^{-1} + (X_1^⊤X_1)^{-1} x x^⊤ (X_1^⊤X_1)^{-1} / (1 − x^⊤(X_1^⊤X_1)^{-1}x).    (2)

Equation (2) has at least a pair of uses: (a) When x = x_i, the i-th row in X_1, then Eq. (2), or its second term, can be fashioned as a measure of influence for observation x_i. (b) There is a class of optimal design algorithms that alternately add and delete rows to matrices like X_1 and X_0. The canonical exchange algorithm is that of Fedorov [15]. Equations (1) and (2) can be applied in tandem to accelerate such algorithms.
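As a concrete check of the add-a-row and delete-a-row updates, the following numpy sketch (function names are our own, for illustration) applies both and compares them against inverses recomputed from scratch.

```python
import numpy as np

def sm_add_row(A_inv, x):
    """Rank-1 Sherman-Morrison update of (X0'X0)^{-1} after appending row x."""
    Ax = A_inv @ x
    return A_inv - np.outer(Ax, Ax) / (1.0 + x @ Ax)

def sm_delete_row(A_inv, x):
    """Inverse update after deleting row x from X1."""
    Ax = A_inv @ x
    return A_inv + np.outer(Ax, Ax) / (1.0 - x @ Ax)

rng = np.random.default_rng(0)
X0 = rng.normal(size=(20, 4))
x = rng.normal(size=4)
A0_inv = np.linalg.inv(X0.T @ X0)
A1_inv = sm_add_row(A0_inv, x)

X1 = np.vstack([X0, x])
assert np.allclose(A1_inv, np.linalg.inv(X1.T @ X1))   # matches direct inverse
assert np.allclose(sm_delete_row(A1_inv, x), A0_inv)   # (2) undoes (1)
```

Each update costs O(D²) rather than the O(D³) of a fresh inversion, which is the point of the exchange-algorithm acceleration mentioned above.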
Sherman-Morrison is the rank-1 version of Woodbury's (1950) rank-k result. The author is not aware of Woodbury's update being used for optimal design algorithms, but this potential remains.
In the prediction domain, of particular interest are the prediction variances and covariances of candidates. An analog to (1) can update prediction variances and covariances:

x_a^⊤(X_1^⊤X_1)^{-1}x_b = x_a^⊤(X_0^⊤X_0)^{-1}x_b − (x_a^⊤(X_0^⊤X_0)^{-1}x)(x^⊤(X_0^⊤X_0)^{-1}x_b) / (1 + x^⊤(X_0^⊤X_0)^{-1}x).    (3)

This makes precise an often-cited intuition about experimental design: prediction variances are most reduced near where the new observations are added. Further, such updates as (3) work intrinsically to discourage the repeated selection of any particular x; once selected, prediction variances nearby to x are reduced, so points further from x represent relatively better opportunities for variance reduction.
If we define the covariance function V_0[a, b] = x_a^⊤(X_0^⊤X_0)^{-1}x_b, (3) can be re-expressed as

V_1[a, b] = V_0[a, b] − V_0[a, x]V_0[x, b] / (1 + V_0[x, x]).    (4)

This remains a rank-1 update, involving the outer product of the column V_0[⋅, x] with itself. Note the following is the analog to (2):

V_0[a, b] = V_1[a, b] + V_1[a, x]V_1[x, b] / (1 − V_1[x, x]).    (5)

The utility of updates (4) and (5) derives from four properties: (1) They operate in the prediction domain, making them more suitable for deep neural network applications. (2) They do not require the label y associated with new observation x. This update-before-label property helps us select batches of new observations. (3) V_0 and V_1 can be used to quantify useful design criteria, e.g., G-optimality minimizes quantities like max(diag(V_1)), while I-optimality minimizes objective functions of the form trace(V_1). (4) These updates work intrinsically to ensure diversity among selected candidates.
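Update (4) can be exercised directly on a candidate covariance matrix. In this minimal sketch (our own notation; the newly labeled point is itself one of the candidates, so its column of V_0 supplies everything the rank-1 update needs), we verify the update against a from-scratch recomputation and confirm that no candidate's variance increases.

```python
import numpy as np

def update_pred_cov(V0, x_idx):
    """Eq. (4): rank-1 downdate of the candidates' prediction covariance
    matrix after labeling the candidate at index x_idx."""
    v = V0[:, x_idx]
    return V0 - np.outer(v, v) / (1.0 + V0[x_idx, x_idx])

rng = np.random.default_rng(1)
X0 = rng.normal(size=(30, 3))              # current design
Xc = rng.normal(size=(10, 3))              # candidate features
V0 = Xc @ np.linalg.inv(X0.T @ X0) @ Xc.T
V1 = update_pred_cov(V0, 0)                # pretend candidate 0 gets labeled

# compare with recomputing from the augmented design
X1 = np.vstack([X0, Xc[0]])
V1_direct = Xc @ np.linalg.inv(X1.T @ X1) @ Xc.T
assert np.allclose(V1, V1_direct)
# no prediction variance ever increases after adding an observation
assert np.all(np.diag(V1) <= np.diag(V0) + 1e-12)
```

Note the update never touches the label, only the feature vector, which is the update-before-label property used below.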
We pause to recognize two open issues. (a) V is a C × C matrix. When the number of candidates C to label is not small, V does not appear scalable. This we address in Sect. 5, algorithm 2. (b) We have not specified how to calculate the initial value of V . This we address in Sect. 4, which in turn depends importantly on Sect. 3.

Active Learning
Active learning (AL) is the machine learning specialty that addresses the problem of which additional candidates to label for training. Settles [35] offers a still well-cited survey of this field. Lewis and Gale [26] describe the classic sequential algorithm, and Seung et al. [36] present what has come to be called the query-by-committee algorithm. Cohn et al. [10] implement AL for statistical models, while Schohn and Cohn [34], Tong and Koller [42], and Tong and Chang [41] interweave AL with support vector machines.
For batch labeling problems, Brinker [5] and Xu et al. [48] build on Tong and Koller [42] by explicitly incorporating a diversity measure; Brinker uses a minimax correlation, Xu et al. a Kullback-Leibler density distance. Working within the framework of logistic regression, Hoi et al. [20] consider the parameters' Hessian matrix. Guo and Schuurmans [18] propose the entropy of any proposed batch, working around the problem of not knowing labels by an "optimistic" heuristic. Zhou and Sun [49] extend margin sampling to the batch case; their approach (manifold-preserving graph reduction, MPGR) uses the distances between feature vectors of nearest-neighbor observations.
In the context of language models, Hartford et al. [19] adapt AL algorithms to increase observations of rare categories. Oversampling of rare cases is relatively robust to diversity issues. Hu et al. [21] consider small-batch active learning on graph neural networks, and in particular on the problem of transferring or borrowing knowledge from labeled graphs to unlabeled ones. They seek to label the candidates with maximum entropy-most uncertainty-in their predicted labels.

Ensembles
Given input features x , let us consider DNN predictions of y given x from a model f (x), say. In particular, we are interested in an ensemble of such neural network models, indexed by m: {f m (⋅) ∶ m = 1, 2, … , M}.
Ensembles are sets of predictive models. Their predictions are typically averaged together to achieve better results than that of any single ensemble member. DNNs can be sensitive to the starting point of their internal coefficients (their "weights"); ensembling several such models mitigates this sensitivity. Ensembles are naturally trained in parallel, which makes them quite amenable to cloud-based approaches to model fitting. For online applications, the extra computations that ensembles involve make them rather unattractive; in the present context, supporting experiments performed in physical laboratories, the extra computation of ensembles is much less of an issue. Research continues to reduce ensemble computation; see Wenzel et al. [46] and Singh and Jaggi (2020) and references therein.
Cross-validation, e.g., Allen [1] and Stone [39], is one popular approach to ensemble making: A training set is partitioned into M mutually exclusive folds, and model m is trained on the M − 1 folds {1, 2, ..., M} ⧵ {m} , resulting in M different models. Cross-validation solves a particular problem, quantifying the effect of fitting and over-fitting by comparing predictions to out-of-sample labels. For this reason, cross-validation-based ensembles are rather popular in practice.
Breiman (1996) offers an alternative form of ensemble making, bagging, whereby each model is fit to a with-replacement (bootstrap) sample of training data. Breiman observes that bagging helps most with unstable models, of which DNNs are an example.
Perrone and Cooper (1992) recognize that for neural networks, ensembles work better (a) when each individual ensemble member predicts well, and (b) when the ensemble members correlate less with one another. The latter has come to be called ensemble diversity.
Building on Liu's (1998) negative correlation learning, Brown et al. [6] propose a term penalizing the systematic agreement of the predictions of different ensemble members. Mariet et al. [29] propose an analogous term to inhibit the correlation among a DNN's internal coefficients (the so-called weights).
Lakshminarayanan et al. (2017) combine neural network ensembles with adversarial training to fit a two-output model, one that jointly predicts the prediction mean function and the prediction variance function.
By data structure, ensembles would seem to be useful for estimating a prediction's uncertainty. Section 2.1 shows the limitations of this idea, while Sect. 3 constructs a new class of ensembles to estimate uncertainty better.

Minimum Variance Ensemble Weights
Perrone and Cooper (1992) calculate variance-minimizing weights on ensemble members, which we briefly recap and slightly expand.
Consider an ensemble of M models, each of which predicts the same quantity, i.e., has the same estimand. Let us postulate an M × M matrix U of the covariances of the predictions among the M ensemble members. We want weights w, which sum to one, and minimize the variance of the combined predictions:

w* = argmin_{w: 1^⊤w = 1} w^⊤Uw = U^{-1}1 / (1^⊤U^{-1}1).

When U is diagonal, w* is proportional to the reciprocals of U's diagonal elements; this recapitulates a well-known rule of thumb.
Of course, ensembles constructed by symmetric processes such as bootstrapping and cross-validation do not benefit from calculating optimal weights, since such symmetry implies equal weights. In particular, if U_ij = constant for all i ≠ j, then the off-diagonal elements of U^{-1} also equal a constant. That fact, and that the diagonal elements are also constant, together imply the row sums U^{-1}1 are constant too.
Note, however, that w* sets a lower limit on the variance achievable from reweighting the ensemble with covariance U: 1/(1^⊤U^{-1}1), one that is quite informative: A symmetric ensemble's covariance matrix is proportional to its correlation matrix, U = σ²[(1 − ρ)I + ρ 1 1^⊤], where ρ is the (constant) correlation between any pair of ensemble members m, m′. In this case, its inverse is also symmetric, so it follows that the lower bound on the ensemble variance is

1/(1^⊤U^{-1}1) = σ²[(1 − ρ)/M + ρ].    (6)

As M → ∞, the lower bound approaches ρσ², not 0, so this relationship limits the practical benefits of reducing variance by increasing ensemble size.
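A short numpy sketch illustrates the minimum-variance weights and the lower bound (6) for a symmetric ensemble (M, ρ, and σ² are arbitrary illustrative values):

```python
import numpy as np

def min_var_weights(U):
    """w* = U^{-1}1 / (1' U^{-1} 1); achieved variance is 1/(1' U^{-1} 1)."""
    ones = np.ones(U.shape[0])
    Ui1 = np.linalg.solve(U, ones)
    denom = ones @ Ui1
    return Ui1 / denom, 1.0 / denom

M, rho, sigma2 = 10, 0.5, 1.0
U = sigma2 * ((1 - rho) * np.eye(M) + rho * np.ones((M, M)))
w, v = min_var_weights(U)

assert np.allclose(w, np.full(M, 1 / M))                # symmetry -> equal weights
assert np.isclose(v, sigma2 * ((1 - rho) / M + rho))    # lower bound (6)
```

With ρ = 0.5 and M = 10, the achieved variance is 0.55σ², already close to the M → ∞ limit of ρσ² = 0.5σ².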

Prediction Correlations for Cross-Validation and Bagging
Consider M i.i.d. folds fully and equally partitioning the training data. Suppose the estimand is the population average, μ = E{y}, estimated by the overall average ȳ = ∑_{m=1}^{M} ȳ_m / M, the average over all M folds. Consider two cross-validation samples, ȳ^(m) and ȳ^(m′), each based on the M − 1 folds excluding folds m and m′, respectively. Because the two samples share M − 2 of their M − 1 folds, their correlation is (M − 2)/(M − 1). This correlation generalizes to model families amenable to convex optimization, that converge to unique solutions, that is, the stable models in the sense of Breiman (1996). (Section 3.4.2 also offers an analysis in the framework of stable models.) A simple approximation allows us to estimate the correlation between two bootstrap samples. For observations i = 1, 2, …, N, define two N-vectors of weights, w_1 and w_2, as i.i.d. draws from the Poisson distribution with mean rate λ; λ = 1 is the standard bootstrap. The correlation between the two resulting weighted estimates is λ/(1 + λ).
So for the standard bootstrap with λ = 1, this correlation is 0.5. For the 2× bootstrap, which uses bootstrap samples of size 2N and λ = 2, the inter-ensemble correlation grows to 2/3. Again, this correlation generalizes to model families amenable to convex optimization, i.e., stable models.
Using the value of ρ = 0.5, one can observe that an ensemble size of M = 10 achieves a variance only 10 percent higher than the lower bound in (6); this reproduces a rule of thumb that Breiman (1996) observed empirically: "[M]ost of the improvement us[es] only 10 bootstrap replicates. More than 25 bootstrap replicates is love's labor lost."
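These correlations are easy to confirm by simulation. The sketch below draws Poisson(λ) weight vectors for two bootstrap replicates of a sample mean and estimates their correlation; λ/(1 + λ) predicts 0.5 at λ = 1 and 2/3 at λ = 2. (Sample sizes, replication counts, and the helper name are arbitrary illustrative choices.)

```python
import numpy as np

def boot_corr(lam, N=100, reps=8000, seed=0):
    """Correlation between two Poisson-weighted bootstrap means of the same data."""
    rng = np.random.default_rng(seed)
    m1, m2 = np.empty(reps), np.empty(reps)
    for r in range(reps):
        y = rng.normal(size=N)
        w1 = rng.poisson(lam, size=N)      # independent weight vectors,
        w2 = rng.poisson(lam, size=N)      # same underlying data y
        m1[r] = (w1 * y).sum() / max(w1.sum(), 1)
        m2[r] = (w2 * y).sum() / max(w2.sum(), 1)
    return np.corrcoef(m1, m2)[0, 1]

c1, c2 = boot_corr(1.0), boot_corr(2.0)
assert abs(c1 - 1 / 2) < 0.05    # standard bootstrap: lambda/(1+lambda) = 1/2
assert abs(c2 - 2 / 3) < 0.05    # 2x bootstrap: 2/3
```

The simulated correlations land within sampling noise of the λ/(1 + λ) prediction.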

Zero-Correlation Ensembles?
Given the lower bound in (6) of ρσ², can ensembles be formed with correlations of ρ = 0? Were that achieved, the squared standard errors gained by ensembles would become proportional to 1/M: the more ensemble members, the better.
Note that there are two effects here: For an ensemble member m using weights w_m on observations in S to estimate the prediction function f_m(⋅), let S_m = {i : w_m[i] = 1} denote the subset of observations selected. Then

Var{T(S_m)} ≈ σ_T² / #S_m,    (8)

Corr{T(S_m), T(S_{m′})} ≈ #(S_m ∩ S_{m′}) / √(#S_m ⋅ #S_{m′}).    (9)

(8) quantifies the efficiency of ensemble member m, proportional to the amount of data, while (9) quantifies ensemble diversity.
So, of course, one can achieve zero correlations, trivially, by defining ensembles that have no observations in common. But (8) tells us how such a practice would be highly inefficient.
Orthogonal arrays (OAs) are matrices with a finite set of symbols; two-level orthogonal arrays consist of two symbols, {0, 1} or {−1, 1}, say. The defining property of an orthogonal array is that for any pair of columns, m, m ′ , all combinations of symbol pairs occur with equal frequency. For an M-row two-level orthogonal array Z encoded with ±1, orthogonality implies that Z ⊤ Z = MI.
OA(M) signifies an orthogonal array of M rows; two-level OAs have M − 1 columns. In this paper, we use only two-level OAs with M = 2^k, k = 4, 5, 6, and 7. Plackett and Burman (1946) construct many two-level OAs whose row counts are integer multiples of 4. Under the term "fractional factorial designs," OAs are typically the concluding topic in an undergraduate class in experimental design for engineers (Box, Hunter, and Hunter [4]; Montgomery [30]).
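For M = 2^k, the Sylvester (doubling) construction of Hadamard matrices provides such arrays; dropping the constant column leaves the M − 1 orthogonal ±1 columns. A minimal sketch (the helper name is ours, for illustration):

```python
import numpy as np

def sylvester_oa(k):
    """Two-level OA with M = 2^k rows: build a Sylvester Hadamard matrix,
    then drop the constant first column, leaving M-1 balanced +/-1 columns."""
    H = np.array([[1]])
    for _ in range(k):
        H = np.block([[H, H], [H, -H]])
    return H[:, 1:]

Z = sylvester_oa(4)                  # M = 16 rows, 15 columns
M = Z.shape[0]
assert Z.shape == (16, 15)
assert np.array_equal(Z.T @ Z, M * np.eye(15, dtype=int))   # Z'Z = M I
assert np.all(Z.sum(axis=0) == 0)    # each column: half +1, half -1
```

The Z^⊤Z = MI check is exactly the orthogonality property invoked above, and the zero column sums are the balance property used in Sect. 3.4.1.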
Obviously, applying w m ∈ {−1, +1} requires some interpretation. This is the topic of Sect. 3, to which we now turn.

Signed Weights and Half-Samples
We interpret the signed weights w_m of Sect. 2.3 as follows: We denote our training data by S; a given signed weight N-vector w splits S into two half-samples, S_+ = {i : w[i] = +1} and S_− = {i : w[i] = −1}, and we assume the sign of each w[i] is assigned at random. Let us denote a model trained on a dataset S_j ⊆ S by f(⋅|S_j), and define the half-difference

d(x) = (f(x|S_+) − f(x|S_−)) / 2.
In the following, d(x)² is treated as an estimate of Var{f(x|S)}, with one degree of freedom.

File Shards
In what follows, we break the training set S into mutually exclusive, exhaustive, equally sized, i.i.d. partitions called shards. Our resampling scheme is in terms of these shards. When implemented in a computer file system, large datasets often consist of multiple physical files, shards. A computationally convenient interpretation of half-sampling is that each half-sample uses half the shards. A recognizably natural practice is for the number of shards to be M = 2 k for some integer k; k = 5 to 12 give shard counts ranging from 32 to 4096. Using a two-level OA ensures that each shard is used in exactly half the samples S j .
For each observation i ∈ S, it is convenient to assign it to a single shard σ(i) ∈ {0, 1, …, M − 1}; each half-sample S_j then comprises exactly half the shards. For observations in approximately random order, a common sharding function is σ(i) = (i − 1) mod M, which assigns observations to shards as most card games deal out cards into players' hands.
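The sharding function and the OA-guided assignment of shards to half-samples might be sketched as follows (the helper names are ours, for illustration only):

```python
import numpy as np

def shard_of(i, M):
    """Deal observation i (1-indexed) into one of M shards, card-game style."""
    return (i - 1) % M

def half_sample_mask(j, Z, shard):
    """Boolean mask of observations in half-sample S_{j+}: those whose shard
    carries +1 in column j of the two-level OA Z."""
    plus_shards = np.where(Z[:, j] == 1)[0]
    return np.isin(shard, plus_shards)

# toy example: N = 32 observations, M = 8 shards, Sylvester-constructed OA
H = np.array([[1]])
for _ in range(3):
    H = np.block([[H, H], [H, -H]])
Z = H[:, 1:]                                    # 8 x 7 orthogonal array
N, M = 32, 8
shard = np.array([shard_of(i, M) for i in range(1, N + 1)])
mask = half_sample_mask(0, Z, shard)
assert mask.sum() == N // 2    # each half-sample holds exactly half the data
```

Because the OA columns are balanced, every half-sample uses exactly M/2 shards, hence exactly half the observations when shards are equally sized.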

Half-Sampling and the Jackknife
Half-sampling is a particular form of the jackknife of Quenouille [33] and Tukey [43]. In our use, half-samples (a) are not exhaustive of all possible half-samples and (b) are guided by an orthogonal array. For these (arguably second-order) distinctions, we find it appropriate to designate this jackknife-like scheme by its own term, hence half-sampling. Half-sampling is random to the extent that the initial assignment of observations (y_i, x_i) to shards can be considered random.
Half-samples have a minor, if intriguing, property. The variance among random half-samples is an unbiased estimate of the variance of estimates based on the whole sample.
As above, denote all available observations by S, which has N observations, and denote the j-th half-sample by S_j, which has N_j = N/2 observations. Denote the estimand by θ = E{T(S)}. Of course, this cannot be calculated directly because θ is unknown. However,

E{(T(S) − θ)²} = E{(T(S_j) − T(S))²}.    (11)

Note that the right hand side of (11) is estimable, and of course, it estimates E{(T(S) − θ)²}, exactly the squared standard error we want to estimate. This derivation has a geometric version, presented in Fig. 1a.
Many ML practitioners find N_j > N/2 of special interest, and, indeed, this case is the more traditional use of the jackknife. Consider a random subset S_j ⊂ S, consisting of #S_j = N_j < N observations, where N_j is now not necessarily equal to N/2. As before,

E{(T(S) − θ)²} = (N_j / (N − N_j)) E{(T(S_j) − T(S))²}.

The geometric argument for this calculation is presented in Fig. 1b.
In our applications below, we consistently use half-sampling, so N j = N∕2 uniformly.
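The unbiasedness claim of (11) is easy to verify by simulation for the simplest statistic, the sample mean, where the mean squared difference between half-sample and full-sample estimates matches Var{T(S)} = σ²/N. A minimal sketch, with arbitrary simulation settings:

```python
import numpy as np

rng = np.random.default_rng(7)
N, reps = 64, 6000
sq_diffs = np.empty(reps)
for r in range(reps):
    y = rng.normal(size=N)                # sigma^2 = 1
    half = rng.permutation(N)[: N // 2]   # a random half-sample
    Tj, TS = y[half].mean(), y.mean()     # statistic T = sample mean
    sq_diffs[r] = (Tj - TS) ** 2

# E{(T(S_j) - T(S))^2} should match Var{T(S)} = 1/N, per (11)
assert abs(sq_diffs.mean() - 1 / N) < 0.2 / N
```

The same experiment with N_j ≠ N/2 reproduces the rescaled version, with the factor N_j/(N − N_j) restoring unbiasedness.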

Orthogonal Arrays and Half-Samples
The orthogonality property ensures that these M − 1 columns yield half-differences with approximately zero correlation, that is, for j ≠ k, Corr{d(x|j), d(x|k)} ≈ 0.
The case for zero correlation has two elements, the assertion of additive attribution and a heuristic argument in favor of additive attribution. These are the subjects of Sects. 3.4.1 and 3.4.2, respectively.

Additive Attribution and Zero Correlation
Consider two pairs of half-samples, (S_{j+}, S_{j−}) and (S_{m+}, S_{m−}), j ≠ m. Additive attribution means that any estimator of interest T(S_j) can be decomposed into a sum of its shards' contributions:

T(S_j) = ∑_{s ∈ S_j} L(s)

for some function L(s) of shard s. For Sect. 3.1, recall that we are interested in the half-differences d_j = (T(S_{j+}) − T(S_{j−}))/2:

d_j = (1/2) ∑_s ε_j(s) L(s),

where ε_j is the j-th column in the orthogonal array and ε_j(s) = +1 when s ∈ S_{j+} and ε_j(s) = −1 when s ∈ S_{j−}. Consider now the covariance of such differences:

Cov{d_j, d_m} = (1/4) ∑_{s_1} ∑_{s_2} ε_j(s_1) ε_m(s_2) Cov{L(s_1), L(s_2)}.

Because shards are i.i.d., Cov{L(s_1), L(s_2)} = C, a constant, for all s_1 ≠ s_2, and for the same reason Var{L(s)} = V is also constant. Splitting the double sum into its s_1 = s_2 and s_1 ≠ s_2 terms, it simplifies as follows:

Cov{d_j, d_m} = (1/4) [ (V − C) ∑_s ε_j(s) ε_m(s) + C ∑_{s_1} ε_j(s_1) ∑_{s_2} ε_m(s_2) ].

Because orthogonal arrays are balanced, ∑_s ε_j(s) = 0, and the C term becomes zero. By the defining property of two-level orthogonal arrays, ∑_s ε_j(s) ε_m(s) = 0, so the (V − C) term is also zero.

Heuristic for Additive Attribution
In this section, we make the case for additive attribution. Our development uses linear approximations reminiscent of maximum likelihood theory, and implicitly assumes that small changes in the underlying data induce approximately linear changes in the fitted parameters.
Our heuristic assumes we can uniquely fit a model by maximizing an objective function L_S(θ) with respect to parameters θ. Further, we assume L_S(⋅) is a sum over i.i.d. shards indexed by t:

L_S(θ) = ∑_{t ∈ S} L_t(θ).

For dataset S, define θ_S as the solution to this equation in θ: ∇L_S(θ) = 0. Now let us consider the dataset S without exactly one shard, S ⧵ u, denoted more compactly as −u, with solution θ_{−u} such that ∇L_{−u}(θ_{−u}) = 0.
Because −u is only a small perturbation of S, our heuristic assumes we can approximate θ_{−u} linearly:

0 = ∇L_{−u}(θ_{−u})
  = ∇L_S(θ_{−u}) − ∇L_u(θ_{−u})
  ≈ ∇L_S(θ_S) + [∇L_S∇^⊤](θ_S)(θ_{−u} − θ_S) − ∇L_u(θ_{−u})
  = [∇L_S∇^⊤](θ_S)(θ_{−u} − θ_S) − ∇L_u(θ_{−u})
  ≈ [∇L_S∇^⊤](θ_S)(θ_{−u} − θ_S) − ∇L_u(θ_S)
  = H_S(θ_{−u} − θ_S) − ∇L_u(θ_S).    (15)

The second line recognizes that L_S = L_{−u} + L_u, the latter term specific to shard u. The third line expands ∇L_S(⋅) linearly about θ_S; in it, the hessian ∇L_S∇^⊤ evaluated at θ_{−u} is approximated by evaluating it at θ_S. The fourth line notes that θ_S solves ∇L_S(θ_S) = 0. The fifth line approximates the u-shard-specific gradient ∇L_u(⋅) evaluated at θ_{−u} with one evaluated nearby at θ_S. The last line merely shifts notation, writing H_S for the hessian. Expression (15) suggests this approximation:

θ_{−u} ≈ θ_S + H_S^{-1} ∇L_u(θ_S).    (16)

Note that H_S is the sum over all shards, so H_S is rather big, at least compared to ∇L_u(⋅), which is based on only one shard. For this reason, approximation (16) approximates θ_{−u} by only a small shift from θ_S. Now consider a generic estimator T based on parameters θ_{−u}. By approximation (16),

T(θ_{−u}) ≈ T(θ_S) + [∇T(θ_S)]^⊤ H_S^{-1} ∇L_u(θ_S).    (17)

If we sum the u-specific term of the latter expression over all shards u ∈ S, it sums to zero, because ∑_{u ∈ S} ∇L_u(θ_S) = ∇L_S(θ_S) = 0. Thus T responds to the removal (or inclusion) of shard u by an approximately additive, shard-specific increment, which is the additive attribution asserted in Sect. 3.4.1.

Recap
Sections 3.4.1 and 3.4.2 help motivate how half-differences guided by orthogonal arrays might plausibly achieve nearly zero correlation. Their common framework assumes that the equation ∇L S ( ) = 0 yields a unique solution. Of course, this assumption is in substantial tension with our primary application of interest, deep neural networks, which are quite sensitive to their initial starting point.
This same tension shapes the implementation of half-sampling for DNNs: All half-samples are given the same initial starting point, the parameters to which the full training set S has converged. Computationally, this can be accomplished through checkpoints. Checkpoints consist of recording parameter states before an algorithm has converged. Usually recorded as insurance against computer crashes, checkpoints allow restarting a computation from an intermediate state rather than at the beginning. In our applications, we use checkpoints to reduce the sensitivity to the initial values of parameters, which are often initialized by pseudo-random values, giving them instead the parameters θ_S to which the fully trained model has converged. Half-samples thereby move away from θ_S as a result of (randomly) subsetting the underlying training data and not from the more arbitrary mechanism of resetting the initial starting point.
To conclude, Sects. 3.4.1 and 3.4.2 argue that the half-differences d(x|j), j = 1, 2, …, M − 1, are approximately uncorrelated, so the mean square term in the left hand side of

(1/(M − 1)) ∑_{j=1}^{M−1} d(x|j)² ≈ Var{f(x|S)}    (19)

estimates the uncertainty of f(x|S) and does so with M − 1 degrees of freedom. (The half-normal plots of Daniel [11] depend on similar contrasts and a similar relationship.) The result (19) gives an interesting estimate of model uncertainty in its own right. Our basic plan is to apply Eqs. (19) and (4) to achieve a batch-based active learning algorithm. The next Sect. 3.5 is a slight detour from this effort, an attempt to quantify the benefit of implementing half-sampling by orthogonal arrays rather than by random selection.
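A small simulation illustrates (19) in the simplest possible setting, the sample mean, where the OA-guided half-differences are exactly uncorrelated: their mean square recovers Var{T(S)} = σ²/N. (The OA construction and simulation sizes are illustrative choices.)

```python
import numpy as np

# Sylvester construction of a 16-run two-level orthogonal array
H = np.array([[1]])
for _ in range(4):
    H = np.block([[H, H], [H, -H]])
Z = H[:, 1:]                               # 16 x 15, entries +/-1

rng = np.random.default_rng(3)
M, N, reps = 16, 160, 2000                 # 10 observations per shard
est = np.empty(reps)
for r in range(reps):
    y = rng.normal(size=N)                 # statistic T = mean; Var{T(S)} = 1/N
    shard = np.arange(N) % M
    shard_means = np.array([y[shard == s].mean() for s in range(M)])
    d = Z.T @ shard_means / M              # half-differences d(x|j), j = 1..M-1
    est[r] = np.mean(d ** 2)               # (1/(M-1)) sum_j d(x|j)^2

assert abs(est.mean() - 1 / N) < 0.2 / N   # matches Var{T(S)} on average
```

Each replicate supplies an uncertainty estimate with M − 1 = 15 degrees of freedom; averaging over replicates confirms (19) is unbiased in this setting.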

Orthogonal Arrays and Efficiency
Intuitively, the careful balancing among shards achieved by orthogonal arrays should be somehow better than half-sampling by random selection. Here we assess the magnitude of this benefit by simulation. We construct orthogonal arrays of size m = 2^k, k ∈ {4, 5, 6, 7, 8}, which we assess for p ∈ {0, 1, 2, 4, 8} boolean features. The results are presented in Fig. 2. Note that the largest p considered is 8, while the smallest m is 16; this boundary is chosen to reduce the probability of random half-samples becoming fatally collinear to something manageably small and ignorable.
For 80,000 simulations, Fig. 2 plots the relative statistical efficiencies of orthogonal arrays versus random half-sampling. The quantity estimated is the average prediction variance of a linear model at the q = (1, 1, …, 0)-corner, where ∑ q_i = p. Figure 2 plots the ratio of two prediction variances: the numerator is the median prediction variance from random halves, and the denominator is that estimated from orthogonal array-based halves. The group labeled p = 1 + 0 corresponds to an intercept and no two-level features, p = 1 + 1 to an intercept and one two-level feature, and so on. (The median prediction variance is chosen to mitigate the problem of right-tailed outliers among the random half-samples.) Note that when p/n is small, the benefit of orthogonal arrays is small also. When p/n is larger, by eye one can see the relative efficiency become roughly 1 + p/n. Plotted by a red "|" symbol, a curve-fitting exercise empirically suggests the relative statistical efficiency can be usefully approximated as (n − 1)/(n − 1 − p). According to the uppermost right point pair, this appears to underestimate the relative efficiency gain for the highest p/n, where it approaches a relative efficiency of 2×. Note that, in practice, DNNs often fit models in this range, with relatively high p-to-n ratios, that is, relatively high ratios of parameter count to observation count. For this reason, the gain in efficiency using orthogonal arrays would seem especially useful for a parameter-rich model.
The results reported in this section seem amenable to theoretical analysis, and we encourage further research into quantifying more precisely the benefit of using OAs for half-sampling.

Considerations for Active Learning Algorithms
In this section, we describe broadly our approach to batch active learning. In Sect. 5, we become more precise in specifying our algorithm. Section 6 works out an example and calculates some relative statistical efficiencies.
Suppose we have C candidates for labeling; these are external and prescribed, known by their features {x_c, c ∈ C_0}. Values of C = #C_0 ≈ 10^6-10^7 are common enough. We want to determine which subset of size n, n ≪ C, might best improve model prediction error, or best improve model prediction error and something else, a yield or other measure of economic consequence.
Our approach combines three ideas.

Idea#1 Suppose we have the C × C prediction covariance matrix V. For any candidate c, we can update V by Eq. (4) to V_c, say. We choose the candidate that minimizes trace(V_c). As we accept (greedily) a candidate c, by Eq. (4) we update V.

Idea#2 Before any candidate selection can proceed, we need an initial estimate of V, V_0, say. We form the following C × (M − 1) matrix D: D[c, j] = d(x_c|j)/√(M − 1), so that V_0 = DD^⊤. (Note that, in spite of being motivated by Eqs. (6)-(8), this estimate V_0 involves no matrix inversion.)

Idea#3 For even moderate C, V has on the order of C² elements, awkwardly large. However, by idea#2, V is of rank M − 1, and we are able to select from and update directly with operations on the C × (M − 1) matrix D instead.

Fig. 2 The simulated relative efficiency of orthogonal arrays relative to random half-sampling. Blue diamonds plot the simulation-estimated RE50 (median relative efficiency); the symbol "|" plots the heuristic approximation (n − 1)/(n − 1 − p)
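These ideas combine into a greedy selection loop that never materializes V. The sketch below is our own illustrative reading of the scheme, not the paper's actual Algorithm 2: it scores every candidate by the trace reduction of Eq. (4), computed from a C × (M − 1) factor D with V = DD^⊤, and then downdates D by a rank-1 operation so the factored form tracks the updated covariance exactly.

```python
import numpy as np

def greedy_select(D, n):
    """Greedy I-optimal batch selection on the C x (M-1) factor D, where
    V = D D^T; the C x C matrix V is never formed.  Each step picks the
    candidate maximizing sum_a V[a,c]^2 / (1 + V[c,c]) -- the trace
    reduction implied by Eq. (4) -- then downdates D in place."""
    D = np.array(D, dtype=float)
    chosen = []
    for _ in range(n):
        A = D.T @ D                              # (M-1) x (M-1) Gram matrix
        diag = np.einsum('cj,cj->c', D, D)       # V[c,c] for every candidate
        num = np.einsum('cj,jk,ck->c', D, A, D)  # sum_a V[a,c]^2
        score = num / (1.0 + diag)
        score[chosen] = -np.inf                  # never pick a candidate twice
        c = int(np.argmax(score))
        chosen.append(c)
        g = D[c].copy()
        gg = g @ g
        beta = (1.0 - 1.0 / np.sqrt(1.0 + gg)) / gg
        D -= beta * np.outer(D @ g, g)           # rank-1 downdate: V <- V_c
    return chosen, D

rng = np.random.default_rng(5)
D0 = rng.normal(size=(200, 7))                   # C = 200 candidates, M-1 = 7
V0 = D0 @ D0.T
sel, D1 = greedy_select(D0, 1)
c = sel[0]
Vc = V0 - np.outer(V0[:, c], V0[:, c]) / (1 + V0[c, c])
assert np.allclose(D1 @ D1.T, Vc)                # downdate reproduces Eq. (4)
```

Every step costs O(C M²) rather than O(C²), which is the linear-in-C property claimed by idea#3; the downdate factor (I − β g g^⊤) is the symmetric square root of the Eq. (4) correction, so D D^⊤ equals the updated covariance exactly.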

Idea#1: The Prediction Covariance Matrix
Consider two candidates a ≠ b ∈ C 0 , with feature vectors x a and x b , respectively, and unobserved labels y a and y b .
We have a model f(⋅|S) that predicts f(x_a|S) and f(x_b|S), respectively. Let us suppose we can estimate the prediction covariance V[a, b] = Cov(ŷ(x_a), ŷ(x_b)); the diagonal entry V[a, a] is merely the squared standard error of prediction ŷ(x_a). It is obvious, in a way approaching tautological, that Corr(ŷ(x_a), ŷ(x_b)) = V[a, b]/√(V[a, a] V[b, b]). In this way, V captures a sense in which observing x_a might make observing x_b unnecessary. This issue we call the problem of near duplicates: When x_b ≈ x_a, V[a, a] ≈ V[b, b] ≈ V[a, b] and Corr(ŷ(x_a), ŷ(x_b)) → 1, and there is not much incremental improvement in uncertainty from observing both a and b beyond that from just observing one of a or b.
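The near-duplicates effect can be seen numerically. Below is a minimal sketch with made-up covariance values (not from the paper's experiments), converting a small V into prediction correlations:

```python
import numpy as np

# Hypothetical 3x3 prediction covariance matrix V for candidates a, b, d:
# a and b are near-duplicates (high covariance); d is unrelated.
V = np.array([
    [1.00, 0.98, 0.05],   # V[a, a], V[a, b], V[a, d]
    [0.98, 1.00, 0.05],
    [0.05, 0.05, 1.00],
])

# Prediction correlation: Corr[i, j] = V[i, j] / sqrt(V[i, i] * V[j, j])
sd = np.sqrt(np.diag(V))
corr = V / np.outer(sd, sd)

print(corr[0, 1])  # near 1: observing a makes observing b nearly redundant
print(corr[0, 2])  # near 0: d remains informative
```

With unit diagonal the correlation matrix coincides with V itself; the point is only that a correlation near 1 flags the redundancy.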
By the Sherman-Morrison update in Eq. (4), we can reflect the consequence of including c as V_c = V − V[⋅, c] V[⋅, c]⊤/(1 + V[c, c]). We now make three observations regarding V: (1) First, various optimal design criteria are functions of V. In particular, its largest diagonal element, max(diag(V)), corresponds to the G-optimal criterion. In similar vein, mean(diag(V)) gives the I-optimal criterion, tr(V)/C.
(2) Second, the resulting decrease in total prediction variance, tr(V) − tr(V_c) = [(a) V[c, c]² + (b) Σ_{b≠c} V[b, c]²]/(1 + V[c, c]), is the influence of including candidate c upon the mean squared prediction error of all the other candidates. When (a) dominates, the I-optimal criterion prescribes labeling the most uncertain candidates ("direct observation"). When (b) dominates, the I-optimal criterion prioritizes those candidates whose predictions are more thickly correlated with many other candidates ("interpolation").
(3) Note that this Sherman-Morrison update can be calculated without actually needing to observe y(x_c) ("update before label"). This update-before-label property makes V attractive for designing supplemental batches, because we can reflect the impact of including x_c without observing its associated label/response until later.
By these three properties, V emerges as a principled object for constructing supplemental batches.
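The update and the (a)/(b) decomposition of the trace decrease can be checked numerically. The following is an illustrative sketch with a random rank-deficient covariance (all dimensions and values are ours, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative rank-deficient covariance V = D D^T / (M - 1), as in idea #2.
C, M = 8, 5
D = rng.standard_normal((C, M - 1))
V = D @ D.T / (M - 1)

c = 3  # candidate under consideration
v = V[:, c]

# Sherman-Morrison-style update reflecting the inclusion of candidate c.
V_c = V - np.outer(v, v) / (1.0 + V[c, c])

# The trace decrease splits into (a) a "direct observation" term and
# (b) an "interpolation" term over all other candidates.
direct = V[c, c] ** 2 / (1.0 + V[c, c])
interp = sum(V[b, c] ** 2 for b in range(C) if b != c) / (1.0 + V[c, c])

assert np.isclose(np.trace(V) - np.trace(V_c), direct + interp)
```

Note that no label y(x_c) appears anywhere: the update uses features only, which is the "update before label" property.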
(Of course, for batches of size 1, one can and should update models to reflect the newly observed label. We strictly limit our proposal to active learning contexts where the batch size is intrinsically bigger than 1 by the nature of the laboratory setup.) The case against using V is computational: it is a C × C matrix. We mitigate this cost by idea#3. Viswanathan et al. [44] likewise manage and manipulate a C × C precision matrix, in their case by using sparse matrix representations; their approach is qualitatively different from that presented here.

Idea#2: The Contrast Matrix Δ
For any given column j of an orthogonal array, we have two half-samples, S_{j+} and S_{j−}, so we fit two predictive models, f(x|S_{j+}) and f(x|S_{j−}). A natural common estimate is their average, (f(x|S_{j+}) + f(x|S_{j−}))/2. Further, their (signed) half-difference, Δ[x, j] = (f(x|S_{j+}) − f(x|S_{j−}))/2, estimates in some sense the sensitivity of this estimate to perturbing the data. In particular, |Δ[x, j]| estimates, with one degree of freedom, the standard error of f(x|S) ≈ the standard error of (f(x|S_{j+}) + f(x|S_{j−}))/2.
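A toy sketch of this construction follows, with a trivial mean-predictor standing in for the fitted model f; the shard data, the 4-run orthogonal array, and the predictor are all illustrative assumptions, not the paper's actual setup:

```python
import numpy as np

rng = np.random.default_rng(1)

# M = 4 shards and a 4-run two-level orthogonal array (Hadamard-type,
# all-ones column dropped): rows = shards, columns = OA contrast columns.
M = 4
oa = np.array([
    [+1, +1, +1],
    [+1, -1, -1],
    [-1, +1, -1],
    [-1, -1, +1],
])

shards = [rng.standard_normal(50) + 2.0 for _ in range(M)]  # data shards

def fit_predict(half):
    """Stand-in 'model': predict the grand mean of the selected shards."""
    return np.concatenate(half).mean()

# For each OA column j, fit on the two half-samples S_{j+} and S_{j-}
# and record the signed half-difference Delta[j].
delta = np.empty(M - 1)
for j in range(M - 1):
    plus = [shards[m] for m in range(M) if oa[m, j] > 0]
    minus = [shards[m] for m in range(M) if oa[m, j] < 0]
    delta[j] = (fit_predict(plus) - fit_predict(minus)) / 2.0

# Pooling the M - 1 one-degree-of-freedom contrasts gives the
# variance estimate of idea #2.
var_hat = (delta ** 2).sum() / (M - 1)
print(var_hat)
```

Each OA column splits the shards into two equal halves, and distinct columns are orthogonal, which is the balance property the efficiency argument relies on.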
Properties 1 and 2 give us an estimate of the important prediction covariance matrix V. Properties 3 and 4 suggest this estimate has high statistical efficiency, as is typical when using two-level orthogonal arrays.

Idea#3: Updating Δ, not V
Our initial prediction covariance matrix is V = ΔΔ⊤/(M − 1). Were we to include candidate c, the updated matrix would be V_c = V − V[⋅, c] V[⋅, c]⊤/(1 + V[c, c]). In this section, we work out that V_c = Δ_c Δ_c⊤/(M − 1), where Δ_c = Δ H_c for an (M − 1) × (M − 1) matrix H_c. With such a matrix H_c in hand, we can form algorithms that operate on C × (M − 1) matrices like Δ, linear in the number of candidates C, rather than C × C matrices like V, which are quadratic in C.
It is convenient to shift notation: Let δ_c⊤ = Δ[c, ⋅] denote row c of Δ, and define H_c^0 = I − δ_c δ_c⊤/‖δ_c‖²; the latter is a projection matrix with exactly one zero eigenvalue, that associated with δ_c/‖δ_c‖. In contrast, H_c = I − (1 − (1 + V[c, c])^{−1/2}) δ_c δ_c⊤/‖δ_c‖² does not fully zero out the eigenvalue associated with the unit vector δ_c/‖δ_c‖, but H_c does move its associated eigenvalue away from 1 and toward 0.
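A numerical sketch of idea #3, with H_c written explicitly as a shrinkage of the projection onto row c of Δ (this explicit form is our reconstruction of the garbled original), checks that updating Δ reproduces the Sherman-Morrison update of V exactly:

```python
import numpy as np

rng = np.random.default_rng(2)

# Sketch: update the C x (M-1) contrast matrix D directly, instead of
# the C x C covariance V. Dimensions here are illustrative.
C, M = 6, 5
D = rng.standard_normal((C, M - 1))
V = D @ D.T / (M - 1)

c = 2
d = D[c]                          # row of D for candidate c (delta_c)
s = 1.0 / np.sqrt(1.0 + V[c, c])  # shrink factor on the d-direction
H = np.eye(M - 1) - (1.0 - s) * np.outer(d, d) / (d @ d)

D_c = D @ H                       # linear-in-C update
V_c = D_c @ D_c.T / (M - 1)

# Must agree with the direct Sherman-Morrison update of V.
V_sm = V - np.outer(V[:, c], V[:, c]) / (1.0 + V[c, c])
assert np.allclose(V_c, V_sm)
```

The eigenvalue of H along the d-direction is exactly s = (1 + V[c, c])^{−1/2}, strictly between 0 and 1, matching the "reduced toward 0 but not zeroed" behavior described above.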
We close this section with one more observation. The I-optimal criterion can be defined operationally by tr(V)/C = tr(Δ⊤Δ)/(C(M − 1)); both Δ⊤Δ and Δ_c⊤Δ_c = H_c⊤(Δ⊤Δ)H_c are (M − 1) × (M − 1) matrices. In this way, the (H, Δ)-representation enables us to avoid calculating C × C objects.
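A small check that the I-optimal criterion can be computed from the (M − 1) × (M − 1) Gram matrix alone, never forming the C × C matrix (dimensions are illustrative; V is built below only to verify the identity):

```python
import numpy as np

rng = np.random.default_rng(3)
C, M = 1000, 5
D = rng.standard_normal((C, M - 1))

# I-optimal criterion without forming the C x C matrix V:
# tr(V)/C = tr(D^T D) / (C (M - 1)), and D^T D is only (M-1) x (M-1).
i_crit_small = np.trace(D.T @ D) / (C * (M - 1))

V = D @ D.T / (M - 1)   # built only to check the identity; avoided in practice
assert np.isclose(np.trace(V) / C, i_crit_small)
```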

Algorithms
Let us now assemble ideas #1, #2, and #3 into working algorithms. For clarity, we define algorithm 1, which uses ideas #1 and #2, then add in idea #3 to form algorithm 2.
For a criterion, we use that of I-optimality, which works to minimize tr{V}. In principle, one could apply instead G-optimality, which minimizes max(diag{V}). However, G-optimality has performed too poorly empirically to include in this investigation. Further, for our applications, the boundaries of protein, molecule, and alloy space are not well defined, and G-optimality is a poor match conceptually.
The algorithms we present are greedy algorithms. This is because of the rather large number C of candidates we screen. In principle, one could implement exchange algorithms instead, alternately adding and deleting candidates, but we have not explored such an implementation yet.
Algorithm 1 assumes we can calculate and hold in memory the C × C matrix V, based on the half-sample C × (M − 1) matrix Δ. It then proceeds to include candidates one by one ("greedily") until the batch quota is filled.
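A compact sketch of a greedy I-optimal selector in the spirit of algorithm 1; the paper's actual algorithm may differ in details, and the function, variable names, and toy dimensions below are ours:

```python
import numpy as np

rng = np.random.default_rng(4)

def greedy_batch(V, n):
    """Greedy I-optimal selection (sketch of algorithm 1): repeatedly
    pick the candidate whose Sherman-Morrison inclusion most reduces
    tr(V), then update V, until n candidates are chosen."""
    V = V.copy()
    chosen = []
    for _ in range(n):
        # Trace decrease for each c: sum_b V[b, c]^2 / (1 + V[c, c]).
        gain = (V ** 2).sum(axis=0) / (1.0 + np.diag(V))
        gain[chosen] = -np.inf          # never pick a candidate twice
        c = int(np.argmax(gain))
        chosen.append(c)
        v = V[:, c]
        V -= np.outer(v, v) / (1.0 + V[c, c])
    return chosen, V

# Toy covariance from a random contrast matrix (illustrative only).
C, M = 30, 9
chosen, V_final = greedy_batch(
    (D := rng.standard_normal((C, M - 1))) @ D.T / (M - 1), n=5)
print(chosen)
```

Algorithm 2 would replace the C × C update inside the loop with the Δ H_c update of idea #3; the selection logic is otherwise the same.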

Example
For DNNs, it may fairly be said that ideas #1 and #2, which underlie algorithms 1 and 2, are mere heuristics, extrapolated from a linear context to a nonlinear one. In this section, we work out an example using a particular neural network and see what can be achieved in practice.
For training, we use the first 12,288 (= 96 × 128) images, and for candidates the second 12,288 images, of the 50,000-image training set. For a test set, we use the standard 10,000-image CIFAR-10 test set.
For comparison, we compute two reference models, updating with no new data and with all C = 12,288 candidates, both using 24 iterations ("Update Epochs = 24"). In addition, we take 512 random without-replacement samples of 768, 2 × 768 = 1536, and 3 × 768 = 2304 additional cases, respectively. These latter are essentially simulations of 512 different random case selections. Each random 2× sample extends a 1× sample; likewise, each 3× sample extends a 2× sample.
We evaluate average prediction variance on the 10,000 images in the CIFAR-10 test data. Of course, random selection of candidates imposes random distributions on any resulting prediction variance on the test data. Figure 3 presents the cumulative distribution functions (and averages) of the prediction variances for random selections of n = 768, n = 1536, and n = 2304 (1×, 2×, and 3×, respectively). 87 percent of the time, algorithm 2 using OA(64) outperforms a similar amount of random data (labeled "random 1× (n = 768)").
We also measure how far each method moves the model from "no new data" toward "all C candidates." Presented as the rightmost column of Table 1, this is the percentage as quantified by the "ave var" column (and the x-axis in Fig. 3). Algorithm 2 moves the average prediction variance for OA(32) and OA(64) 38.8 and 42.4 percent of the way toward using all 12,288 candidates, respectively, while 1× random sampling moves only 30 percent, 2× random sampling 36 percent, OA(16) 35 percent, and 3× random sampling 42 percent.
These results indicate that, on average, algorithm 2 using OA(64) performs equivalently to three times as much data selected at random (labeled "random 3× (n = 2304)"). The relative efficiency of half-sampling with OA(32) is between 2× and 3×. So larger ensembles buy some efficiency, but with decreasing marginal rates of return. Based on Table 1, it becomes quite difficult to recommend OA(128) over OA(64).
Note that the model employed by this example, a convolutional neural network, is quite non-linear, yet we achieve substantial gain from algorithm 2. This should be reassuring: Idea#1 in algorithms 1 and 2, which is motivated by linear model theory, seems to work effectively in this nonlinear context.

Summary
The problem we consider is that of supplementing training data for a machine learning model.
For our purposes, there is a pre-existing model, of a modern ML type, a deep neural network, say, fit using a current dataset, so there is some ability to make model predictions. For candidate c drawn from C potential candidates, we have available the associated feature vector x_c but not the associated label/response y_c. Consistent with many big data applications, any data supplement likely needs to consist of a batch of substantial size, drawn from an even larger set of potential candidates. Common also with many ML models, we balance statistical considerations with those of computational feasibility, both in terms of computing complexity and data object size.

[Figure 3 caption fragment: ... OA(32), and OA(16), all with n = 768, respectively; and including no additional data (n = 0). CDFs (left to right): dark red, n = 2304 additional random candidates (without replacement); red, n = 1536 additional random candidates; orange, n = 768 additional random candidates. Vertical dashed lines denote the average values of the respective CDFs.]
In general, we pursue a paradigm of multivariate model uncertainty, recognizing early that merely adding error bars to our model predictions is insufficient. This is because of the near-duplicates problem: Two cases close together in feature space may share about the same prediction and the same, larger error bars. If we observe only one of them, we may gain enough information about the other that observing both becomes wasteful. This redundancy can be detected, for instance, were we to have available the prediction covariance matrix; this would signal near-duplicates by the high correlation between the two candidates' predictions.
Our approach is built on three ideas. The first consists in recognizing the theoretical centrality of the C × C prediction covariance matrix V. V is quite suitable for this batch supplement problem, even, and especially, for modern ML models such as deep neural networks, for four reasons: (1) I-optimality: Consistent with such semi-parametric models, V is defined in the prediction domain.
(2) Update before label: V does not require observing the label/response y c in order to be updated by the prospect of including candidate c, with features x c , in the supplement. (3) Training case influence: When constructed from a training set, V can be used to quantify for each observation the impact on overall model fit. (4) Theory generalization: The underlying theory for V is comfortably compatible with linear model theory and classical experimental design ideas.
The statistically efficient estimation of V defines the second idea for our approach, called here half-sampling. Half-sampling strongly resembles a fifty-percent jackknife. This work focuses on deep neural networks; contemporary DNNs approach interpolators/memorizers to such an extent that without-replacement sampling seems preferable to with-replacement bootstrap resampling. As presented here, half-sampling depends on defining mutually exclusive, equally sized data shards, and on the careful balancing properties of a two-level orthogonal array. (Note that when shards correspond to physical computer files, half-sampling reduces file reads by half.) The use of the orthogonal array buys something substantial: relative efficiencies approaching 2× when the ratio of parameter count to sample size becomes large, exactly the case inhabited by many applications of deep neural networks. Also, note that half-sampling admits highly parallel model fitting. Thus, half-sampling adapts to ML-type models in four ways: without-replacement sampling, file sharding, orthogonal-array-based sample balancing, and parallel computing.
Our third idea recognizes that the C × C matrix V is, for C even moderately sized, awkwardly large. We therefore reformulate our algorithm 1 into algorithm 2, which operates on the linear-in-C matrix Δ.
The resulting matrix Δ of prediction contrasts inherits the well-known statistical efficiency properties of two-level orthogonal arrays: (1) For every candidate, each half-sample contrast uses all the training data. (2) The (signed) weights on each shard are equal in absolute value. (3) Between any two half-samples, the weights are uncorrelated. In this sense, the construction of Δ has high statistical efficiency and estimates V = ΔΔ⊤/(M − 1) with M − 1 degrees of freedom. Balancing between computational burden and efficiency gain, we recommend around M = 64, that is, OA(64), which gives 63 degrees of freedom for estimating prediction covariances.
Our main result is that statistically efficient estimates of error buy something: for the problem of supplementing training data, half-sampling and algorithm 2 combine to give three times ("3×") the statistical efficiency of random without-replacement case selection. So for each case proposed by half-sampling and algorithm 2, the reduction in average prediction variance is on average about the same as from selecting 3 random cases, 50 percent beyond the 2× rule of thumb associated with active learning performance. To our knowledge, this is the first use of the concept of relative statistical efficiency for assessing active learning algorithms.
We give due diligence to computational issues: (1) Fitting by half-sampling can be accelerated by warm starts, reduced file reads, and parallel computation. (2) By using algorithm 2, we can avoid direct calculation of the C × C prediction covariance matrix V. Instead, we maintain a C × (M − 1) matrix, essentially a data table with M − 1 features for C candidates. (3) We quantify the precision-computation trade-offs in Appendix A and Sect. 6.
The present work is limited in several directions: (a) Our framework is developed by analogy with linear models and continuous responses (continuous labels, a.k.a. "regression"). In contrast, much of ML involves categorical labels. (b) Further, our efficiency claim is worked out only for one dataset, CIFAR-10. This places this work in substantial tension with much of the ML literature, which favors extensive empirical exercises over common task frameworks [14]. (c) Our proposed algorithms are "greedy", and the benefit of non-greedy, exchange-type algorithms has not yet been explored. (d) Finally, each of our particular claims (orthogonal arrays to implement half-sampling, the efficiency gains from Sherman-Morrison batch construction, even the number of shards M used to implement half-sampling), although theoretically motivated, is fundamentally an empirical or simulation result. For these reasons, we find it appropriate to invite further study, empirical and theoretical, in other experimental settings, for other model families, and with additional datasets.

A Sample Sizes for Variances
For histograms, one rule of thumb for recommended sample size is n = 30. The detailed rationale for this is as follows: (a) The proxy for the precision of a histogram is the precision of its spread, e.g., its standard deviation. (b) Based on Gaussian theory, the delta method, and the chi-square distribution with ν degrees of freedom, an approximate 1 − α confidence interval for a standard deviation s is s(1 + z_{α/2}/√(2ν))^{±1}. Since the standard Gaussian deviate z_{α/2} = 1.96 ≈ 2 for 95 percent confidence intervals, this expression can be usefully approximated by s(1 + 2/√(2ν))^{±1} = s(1 + √(2/ν))^{±1}.
Each incremental bit of relative precision increases sample size requirements by 4×, a manifestation of the familiar square root rule for confidence intervals.
In Sect. 6, the recommended OA(64) gives 2.5 bits relative precision, about one part in six.
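The arithmetic for OA(64) can be checked directly; the "bits" measure below is taken to be −log2 of the relative precision √(2/ν):

```python
import math

# Numeric check of the appendix approximation: with nu degrees of freedom,
# the 95% CI for a standard deviation is roughly s * (1 + sqrt(2/nu))^(+/-1),
# so sqrt(2/nu) is the relative precision.
def relative_precision(nu):
    return math.sqrt(2.0 / nu)

# OA(64) gives nu = 63 degrees of freedom.
rp = relative_precision(63)
bits = -math.log2(rp)
print(rp, bits)   # about one part in six, about 2.5 bits
```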