1 Introduction

Supervised machine learning models require enough labeled data to obtain good generalization performance. For many practical applications, such as medical diagnosis or video topic prediction, labeling data can be expensive or time-consuming (Settles 2012). Often in these settings unlabeled data is abundant. In active learning an algorithm chooses unlabeled samples for labeling (Cohn et al. 1994). The idea is that models can perform better with less labeled data if the labeled data is chosen carefully instead of randomly. Active learning makes the most of a small labeling budget and can reduce labeling costs.

Several works use upper bounds on the expected loss to motivate particular active learning strategies (Gu and Han 2012; Ganti and Gray 2012; Gu et al. 2012, 2014; Wang and Ye 2013). We study pool-based active learners that choose queries that explicitly minimize generalization bounds and investigate the relation between bounds and active learning performance. We evaluate generalization with respect to the surrogate loss in the classification setting and use the kernel regularized least squares model (Rifkin et al. 2003), a popular model in active learning (Huang et al. 2010; Wang and Ye 2013). Our focus is on active learners that select a batch of queries in a single shot (Contardo et al. 2017). This means that there is no label information available at the time the batch of queries is determined. Since the active learners have only unlabeled data at their disposal, they aim to select the most representative subset of the unlabeled pool. This is different from batch-mode or sequential active learning, where after requesting labels from the oracle the algorithm has to determine new queries, creating a feedback loop. The advantage of zero-shot active learning is that all queries can be computed ahead of time, and collected labels do not have to be fed into the active learner.

For applications this can be very convenient: it simplifies the annotation setup. Furthermore, active learning algorithms may require substantial amounts of time to compute the next query. In situations where annotations have to be done by domain experts whose time is costly, this can be impractical. For example, if we were to apply active learning to the problem of Esteva et al. (2017), who built a deep learning model to classify skin cancer, sequential or batch-mode active learning strategies would usually train a model as an intermediate step before being able to determine the next query. For deep models this could take several hours. With zero-shot active learning the dermatologist can annotate all queries without waiting even once.

Another example where requesting labels is costly is personalized machine learning models, such as for movie recommendation. Here applications may ask feedback from end-users to improve their service. This problem can also be studied using the active learning framework (Harpale and Yang 2008). Asking end-users for feedback usually interrupts their activity in the application. Therefore, we may only interrupt the user a limited number of times. Using zero-shot active learning, users only have to be interrupted once and can answer multiple queries without waiting for new queries to be determined.

The Maximum Mean Discrepancy (MMD) is used for batch-mode active learning by Chattopadhyay et al. (2012) to match the marginal distribution of the selected samples to the marginal distribution of all unlabeled samples. This active learner has been shown to minimize a generalization bound (Wang and Ye 2013). The MMD is a divergence measure (Gretton et al. 2012) which is closely related to the Discrepancy divergence measure of Mansour et al. (2009); both have been used in domain adaptation (Huang et al. 2007; Cortes and Mohri 2014).

Using the Discrepancy, we show that we can get a tighter worst-case generalization bound than the MMD in the realizable setting. Tighter bounds are generally considered better as they estimate the expected loss more accurately. One might therefore expect the Discrepancy to lead to better queries in active learning.

We show, however, that the Discrepancy and MMD generalization bounds can be derived, using a probabilistic analysis, from pessimistic assumptions. We subsequently apply the principle of maximum entropy to derive probabilistic assumptions that are more optimistic, inspiring us to introduce the Nuclear Discrepancy (ND) bound. Under these optimistic assumptions the ND provides a tighter bound on the expected loss than the MMD, while the Discrepancy bound is the loosest.

We compare the active learning performance of the proposed ND bound to the existing MMD and Discrepancy bounds. Our hypothesis is that we often find ourselves in a more optimistic average-case scenario than a worst-case scenario. To this end we empirically study the behavior of the active learners on 13 datasets, and we investigate whether probabilistic assumptions or worst-case assumptions better model the observed behavior in our experiments.

In the realizable setting a model from the model class can perfectly predict the ground-truth labels; there is no model mismatch or model misspecification. For this setting we show that the tightness relations between the generalization bounds are strict. As such, for the realizable case, our theory gives the strongest predictions for the ranking of the active learners in terms of performance. In the agnostic case, where no such model may exist, the tightness relations can change, which renders our theory less applicable. We perform experiments in both settings to see the effect of the theoretical assumptions not being fulfilled.

We study the realizable setting since it is more amenable to theoretical analysis. This setting is often studied in active learning and is still a topic of active investigation (Tosh and Dasgupta 2017). The agnostic case is much harder to analyze in general. To illustrate this, we remark that it has been observed that if a model class is sufficiently misspecified, active learning can even decrease model performance (Settles 2011; Attenberg and Provost 2011; Loog and Yang 2016; Yang and Loog 2018).

These counter-intuitive behaviors underline the need for further theoretical studies. We believe that improving our understanding of simpler active learning settings (the realizable case) will contribute to an improved understanding of more difficult active learning settings (the agnostic case).

To this end, our study provides new quantitative tightness relations between the MMD, Discrepancy and ND bounds under different probabilistic assumptions. We investigate the connection between bound tightness and active learning performance. Our most important conclusion is that not only bound tightness is important for performance, but that appropriate assumptions are equally important.

1.1 Overview and contributions

First we discuss related work in Sect. 2. In Sect. 3 we describe the considered active learning setting and notation. We present our theoretical results regarding the MMD and Discrepancy in Sect. 4. In Sect. 5 we motivate our novel Nuclear Discrepancy bound. We evaluate the proposed active learners experimentally in Sect. 6. In Sect. 7 we give a discussion and in Sect. 8 we give the conclusions of this work. All proofs, additional background theory and experimental results are given in the Appendix. The main contributions of this work are:

  1. An improved MMD bound for active learning and a more informed way to choose the kernel of the MMD in the context of learning.

  2. A proof that the Discrepancy bound on the worst-case loss is tighter than the MMD bound.

  3. A probabilistic interpretation of the MMD bound.

  4. The Nuclear Discrepancy (ND) bound that provides the tightest bound on the expected loss under probabilistic assumptions that follow from the principle of maximum entropy.

  5. A probabilistic analysis that explains the differences in empirical performance (in terms of the mean squared error) achieved by the active learners.

In Table 1 we give a visual summary of our work. It lists all formal results and the sections in which they can be found. It also shows the relation between the theory and the experiments, and the main findings of the experiments.

Table 1 Visual summary of our work

2 Related work

Many active learning methods have been proposed; Settles (2012) provides an excellent introduction and overview. Our work is related to active learning methods that select representative samples (Xu et al. 2003). Most active learning strategies of this kind are combined with an uncertainty criterion (Xu et al. 2003; Chattopadhyay et al. 2012; Wang and Ye 2013; Huang et al. 2010), and often the representative component is used to diversify queries when chosen in batches in order to avoid redundancy (Xu et al. 2003; Wang and Ye 2013). This is different from our considered setting: since there is no labeled data and we have to choose all queries in one shot, our only option is to select representative samples, because uncertainty criteria can only be computed if some labels are known.

A well-known concept closely related to our work is that of (Transductive or) Optimal Experimental Design (Yu et al. 2006). Here also no labeled data is required to select queries in the case of the linear regression model. These methods aim to minimize some form of posterior variance of the model. A closely related statistical approach relies on maximization of the Fisher Information to reduce model uncertainty (Hoi et al. 2006). However, these approaches often require one to explicitly specify a noise model (such as Gaussian i.i.d. noise), while in this work we consider deterministic labeling functions.

Our work is motivated by several active learners that minimize generalization bounds. Gu and Han (2012) use the Transductive Rademacher Complexity generalization bound to perform active learning on graphs. Gu et al. (2012) show that the strategy of Yu et al. (2006) also minimizes a generalization bound, and extend the method to work with a semi-supervised model. Ganti and Gray (2012) introduce an active learning strategy that uses importance weighting to ensure asymptotic consistency of the actively learned model. Their strategy minimizes a generalization bound for the squared loss under some conditions on the data distribution. Gu et al. (2014) introduce a strategy that minimizes a generalization bound on the risk for logistic regression. Wang and Ye (2013) also use a generalization bound based on the MMD to perform active learning; we describe this work in more detail later when discussing all methods that use the MMD.

Many theoretical active learning works motivate algorithms by generalization bounds; for example, one of the first active learning algorithms, ‘CAL’ (Cohn et al. 1994), and its agnostic generalization \(A^2\) (Balcan et al. 2009) have been thoroughly analyzed using generalization bounds by making use of the Disagreement Coefficient (Hanneke 2007). Most of these theoretical works consider worst-case performance guarantees, where the distribution is chosen by an adversary subject to constraints. Balcan and Urner (2016) provide a short and concise overview of these and other recent theoretical active learning works. In contrast with our work, these algorithms consider generalization in terms of the zero-one loss instead of the squared loss and do not apply to one-shot active learning.

A straightforward approach to one-shot active learning is through clustering: cluster the data and request the labels of the cluster centers (Bodó et al. 2011; Hu et al. 2010; Zhu et al. 2008; Nguyen and Smeulders 2004). However, unlike our work, these methods are not motivated by generalization bounds. Obtaining bounds for such approaches may be difficult because the clustering algorithm and the machine learning model may rely on different assumptions. To still obtain bounds, one can instead use the clustering algorithm itself to provide predictions for new samples (Urner et al. 2013). Instead, we stick to the regularized least squares model and use the MMD and Discrepancy to get bounds for this model. Our approach can be used to derive bounds and corresponding active learning strategies for any kernelized \(L_2\) regularized model; however, in this work we focus only on the squared loss.

Our work is closely related to that of Chattopadhyay et al. (2012): we use a greedy version of their proposed active learning algorithm. Chattopadhyay et al. (2012) were the first to use the MMD for active learning in a batch-mode setting. An in-depth empirical analysis shows that the MMD outperforms other active learning criteria as judged by the zero-one error when used with kernelized SVMs. They show that the MMD can easily be combined with uncertainty-based active learning approaches and transfer learning. Since we consider one-shot active learning we do not consider the uncertainty-based component of their algorithm. In follow-up work, active learning and transfer learning are solved jointly using the MMD (Chattopadhyay et al. 2013).

Our theoretical analysis of the MMD bound extends the analysis of Wang and Ye (2013). Wang and Ye (2013) show that active learning by minimization of the MMD and the empirical risk can be seen as minimizing a generalization bound on the true risk. They introduce an active learner that balances exploration (distribution matching using MMD) with exploitation (a form of uncertainty sampling). They show empirically that their proposed algorithm is competitive with several other active learning strategies as evaluated by the zero-one error using kernelized SVMs.

We build upon the generalization bound of Wang and Ye (2013) and improve it. Their bound considers the underlying distribution of the unlabeled pool and the labeled (queried) sample; however, this is problematic because the labeled sample is non-i.i.d. due to the dependence among the queries chosen by the active learner. We resolve this issue and introduce an additional term \(\eta \) that measures the error of approximating the worst-case loss function.

Mansour et al. (2009) introduce the Discrepancy generalization bound for domain adaptation with general loss functions. In follow-up work, Cortes and Mohri (2014) contrast the Discrepancy with the MMD generalization bound: they argue that the Discrepancy is favorable from a theoretical point of view because it takes the loss function and hypothesis set of the model into account, while the MMD does not. This means that the MMD bound for an SVM and a regularized least squares model would be exactly the same, while the Discrepancy bound specializes to the chosen model and surrogate loss. They derive an efficient domain adaptation algorithm and empirically show that the Discrepancy improves upon the MMD in several regression adaptation tasks.

Prior to our work, the Discrepancy measure (Cortes and Mohri 2014) has not yet been used to perform active learning. We show that by choosing the kernel for the MMD carefully, we can adapt the MMD to take the hypothesis set and loss into account, addressing one of the theoretical limitations of the MMD identified by Cortes and Mohri (2014). Under these conditions we find that we can compare the MMD and Discrepancy bounds in terms of tightness. This quantitative comparison of these bounds is novel and was not considered before.

Germain et al. (2013) adapt the Discrepancy for the zero-one loss to a PAC-Bayes setting in order to do domain adaptation. Their analysis is specifically for the zero-one loss, while we consider the squared loss. Their PAC-Bayes framework is significantly different from our analysis: instead of minimizing a surrogate loss, they use a Gibbs classifier, and they minimize bounds on the expected risk directly. This involves a non-convex optimization problem. Instead, we simply minimize the empirical risk and consider deterministic models, similar to most PAC-style analyses. This makes our analysis simpler. Furthermore, they propose a framework to jointly minimize the empirical risk and the domain divergence. To this end, their algorithm requires labeled data, which is unavailable in zero-shot active learning, making it unsuitable for our setting.

In Cortes et al. (2019) a new domain adaptation algorithm based on a new divergence measure, the Generalized Discrepancy, is introduced. The algorithm consists of two stages: first it minimizes the Discrepancy, afterward it minimizes the empirical risk and the Generalized Discrepancy jointly. The strategy of Cortes et al. (2019) is difficult to apply to active learning for two reasons. First of all, their algorithm requires labeled data to minimize the empirical risk and the Generalized Discrepancy jointly, which is impossible in our zero-shot active learning setting. Second, their algorithm requires i.i.d. samples from the unlabeled pool to estimate the hyperparameter r. This would require costly random queries in the active learning setting. For these reasons, we believe their algorithm is more suitable for a joint active learning and domain adaptation setting (such as considered by Chattopadhyay et al. (2013)) where more labeled data is available.

Our theoretical analysis is substantially different from the analysis of Cortes et al. (2019). Because Cortes et al. (2019) use labeled data, they can make a more accurate characterization of possible worst-case scenarios, refining the worst-case scenario of the Discrepancy to obtain tighter bounds. We take an orthogonal approach: we consider probabilistic generalization bounds that hold in expectation. Instead of considering a worst case, we make probabilistic assumptions to get to a plausible average case. Cortes et al. (2019) compare the Generalized Discrepancy and Discrepancy bounds in terms of tightness. We compare the tightness of the bounds of the MMD, Discrepancy and Nuclear Discrepancy. We show several orderings of the tightness of the bounds under different probabilistic assumptions, while Cortes et al. (2019) take only a worst-case approach.

In summary, our work differs from previous works by considering, instead of a worst-case analysis (Cortes et al. 2019; Cortes and Mohri 2014), a probabilistic analysis of generalization bounds. Unlike most other works that use generalization bounds for domain adaptation (Cortes et al. 2019; Cortes and Mohri 2014; Germain et al. 2013), we use bounds to perform active learning. For the MMD active learner, studied by Chattopadhyay et al. (2012) and Wang and Ye (2013), we give new theoretical results: an improved bound for active learning and a principled way to choose the kernel for the MMD. We give new quantitative comparisons of bound tightness for the MMD and Discrepancy in multiple settings, while before these bounds were compared only qualitatively (Cortes and Mohri 2014). Furthermore, we study the novel question: how does bound tightness relate to active learning performance?

3 Setting and notation

Let \({\mathcal {X}} = {\mathbb {R}}^d\) denote the input space and \({\mathcal {Y}}\) the output space. Like Cortes and Mohri (2014) we assume there is a function \(f: {\mathcal {X}} \rightarrow {\mathcal {Y}}\) that determines the outputs and there is an unknown distribution with density P over \({\mathcal {X}}\) from which we get an independent and identically distributed (i.i.d.) unlabeled sample \({\hat{P}}= (x'_1,\ldots ,x'_{n_{\hat{P}}}) \in {\mathcal {X}}^{n_{\hat{P}}}\). We study single-shot batch active learners that, given the unlabeled pool \({\hat{P}}\), select a batch \({\hat{Q}}_n \subset {\hat{P}}\) of n samples before observing any labels. The active learner submits the batch to the labeling oracle, which provides the labels of the batch. A kernel regularized least squares (KRLS) model is trained on \({\hat{Q}}^{\text {lab}}_n\), where \({\text {lab}}\) indicates a labeled dataset.

We take the kernel of the model K to be positive semi-definite (PSD), and denote the reproducing kernel Hilbert space (RKHS) as \({\mathcal {H}}\) where \(||h||_K\) denotes the norm in \({\mathcal {H}}\). A model corresponds to \(h \in {\mathcal {H}}\) and is obtained by minimizing

$$\begin{aligned} L_{\hat{Q}}(h,f) + \mu ||h||_K^2 \end{aligned}$$

for \(h \in {\mathcal {H}}\) when trained on \(\hat{Q}^{\text {lab}}\), where we follow the convention of Cortes and Mohri (2014). \(L_{\hat{Q}}(h,f)\) is the average empirical loss of h on \({\hat{Q}}\) with outputs given by f:

$$\begin{aligned} L_{\hat{Q}}(h,f) = \frac{1}{n_{{\hat{Q}}}} \sum _{x \in {\hat{Q}}} l(h(x),f(x)), \end{aligned}$$

where \(l: {\mathbb {R}} \times {\mathbb {R}} \rightarrow {\mathbb {R}}\) is a loss function. For KRLS l is the squared loss: \(l(h(x),f(x)) = (h(x) - f(x))^2\), then \(L_{\hat{Q}}(h,f)\) is the mean squared error (MSE) on \({\hat{Q}}\). Model complexity is controlled by the regularization parameter \(\mu > 0\). We choose

$$\begin{aligned} H = \left\{ h \in {\mathcal {H}}: ||h||_K \le \varLambda = \frac{f_{\text {max}}}{\sqrt{\mu }}\right\} \end{aligned}$$

as our hypothesis set where \(f_{\text {max}} = \sup _{x \in {\mathcal {X}}}{|f(x)|}\). Training KRLS always leads to a solution \(h \in H\) (Mohri et al. 2012, Lemma 11.1).
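As an illustration of this model, the regularized objective above has the familiar closed-form kernel ridge regression solution. The following sketch is our own minimal illustration (not code from the paper) of training KRLS on a labeled batch with a Gaussian kernel; all function names and data are hypothetical.

```python
import numpy as np

def gaussian_kernel(A, B, sigma):
    """Gaussian kernel matrix with entries exp(-||a - b||^2 / (2 sigma^2))."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-d2 / (2 * sigma**2))

def train_krls(X_Q, y_Q, sigma, mu):
    """Minimize L_Q(h, f) + mu ||h||_K^2 over h(x) = sum_i alpha_i K(x, x_i).
    Setting the gradient to zero gives (K + n mu I) alpha = y."""
    n = len(X_Q)
    K = gaussian_kernel(X_Q, X_Q, sigma)
    return np.linalg.solve(K + n * mu * np.eye(n), y_Q)

def predict(alpha, X_Q, X_new, sigma):
    """Evaluate the trained model h on new inputs."""
    return gaussian_kernel(X_new, X_Q, sigma) @ alpha

# Hypothetical usage on random data.
rng = np.random.default_rng(0)
X_Q, y_Q = rng.normal(size=(20, 5)), rng.normal(size=20)
alpha = train_krls(X_Q, y_Q, sigma=1.0, mu=0.1)
mse_on_Q = np.mean((predict(alpha, X_Q, X_Q, sigma=1.0) - y_Q) ** 2)
```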

In classification we are typically interested in the zero-one error (accuracy); however, our study focuses on the squared loss (the surrogate loss). We use the squared loss because we can relate the bounds of the MMD, Nuclear Discrepancy and Discrepancy in closed form and compare them quantitatively. Since our goal is to investigate the correlation between bound tightness and performance, this is essential to our study.

We have made the standard assumption that the data comes from an unknown distribution P. The goal of the active learner is to choose a batch of queries in such a way as to minimize the expected loss of the model under this distribution P:

$$\begin{aligned} L_P(h,f) = \int _{{\mathcal {X}}} (h(x)-f(x))^2 P(x) dx. \end{aligned}$$
(1)

Ideally we would want to train our model on \({\hat{P}}^{\text {lab}}\), since small \(L_{\hat{P}}(h,f)\) will lead to small \(L_P(h,f)\) if the model complexity is appropriate, as illustrated by the following theorem (Mohri et al. 2012, p. 240).

Theorem 1

(Generalization bound Squared Loss (Mohri et al. 2012)) Let l be the squared loss. For any \(\delta > 0\), with probability at least \(1-\delta \) over an i.i.d. sample \(\hat{P}\) of size \(n_{\hat{P}}\) from P, the following inequality holds for all \(h \in {H}\):

$$\begin{aligned} L_P(h,f) \le L_{\hat{P}}(h,f) + 4 M R_m(H) + M^2 \sqrt{\frac{\log (\frac{1}{\delta })}{2 n_{\hat{P}}}} \end{aligned}$$
(2)

Here \(R_m(H)\) is the Rademacher complexity of the hypothesis set H, and M is a constant such that \(|h(x) - f(x)| \le M\) for all \(x \in {\mathcal {X}}\) and all \(h \in H\).

If the model complexity is appropriate, \(R_m(H)\) will be small. The third term is small when the pool \({\hat{P}}\) is large. If both of these criteria are met, it is unlikely that we overfit, as reflected by a tight bound. Then training on \(\hat{P}^{\text {lab}}\) will likely minimize \(L_P(h,f)\).

Ideally we would train on \(\hat{P}^{\text {lab}}\); however, since we only have access to the unlabeled sample \({\hat{P}}\), this is impossible. Therefore we upper bound \(L_{{\hat{P}}}(h,f)\) instead. This upper bound is minimized by the active learners. The studied bounds are of the form

$$\begin{aligned} L_{{\hat{P}}}(h,f) \le L_{{\hat{Q}}}(h,f) + \text {obj}({\hat{P}},{\hat{Q}}) + \eta . \end{aligned}$$

Due to training, \(L_{{\hat{Q}}}(h,f)\) will be relatively small. The term \(\eta \) is a constant that cannot be minimized during active learning since it depends on \({\hat{P}}^{\text {lab}}\). However, if the model misspecification is small, \(\eta \) will be small. Therefore we ignore this term during active learning; this is also (sometimes implicitly) done in other works (Huang et al. 2007; Chattopadhyay et al. 2012; Cortes and Mohri 2014). Thus the active learners choose the batch \({\hat{Q}}\) to minimize \(\text {obj}({\hat{P}},{\hat{Q}})\). This objective can be the \(\text {MMD}\), \(\text {disc}\) or \(\text {disc}_N\), which will be introduced in the following sections. This term measures the similarity between the unlabeled pool \({\hat{P}}\) and the batch \({\hat{Q}}\). Minimizing it leads to selecting a representative sample.

We consider two settings. In the agnostic setting binary labels are used, i.e., \({\mathcal {Y}} = \{-1,+1\}\), and generally we have \(f \notin H\). In the realizable setting \(f \in H\), so a model of our hypothesis set can perfectly reproduce the labels as there is no model misspecification. In this case \({\mathcal {Y}}\) is a subset of \({\mathbb {R}}\). In the realizable setting \(\eta \) can become zero under some conditions, which allows us to compare the tightness of the bounds and enables our probabilistic analysis.

\(K(x,x')\) indicates the kernel function between x and \(x'\). We mainly use the Gaussian kernel \(K(x,x') = \exp (-{||x - x'||_2^2}/({2\sigma ^2}))\) where \(\sigma \), the bandwidth, is a hyperparameter of the kernel. For the MMD we require a second PSD kernel, \(K_{{\mathcal {L}}}\). We indicate its RKHS and bandwidth (for a Gaussian kernel) by \({\mathcal {H}}_{\mathcal {L}}\) and \(\sigma _{{\mathcal {L}}}\), respectively. All vectors are column vectors. \(X_{\hat{P}}\) and \(X_{\hat{Q}}\) are the \(n_{\hat{P}}\times d\) and \(n_{\hat{Q}}\times d\) matrices of the sets \(\hat{P}\) and \(\hat{Q}\).

4 Analysis of existing bounds

First we provide an improved MMD generalization bound for active learning, which is inspired by Cortes et al. (2019). Then we review a bound in terms of the Discrepancy of Cortes et al. (2019) and we review how to compute the Discrepancy quantity (Mansour et al. 2009). We show that the MMD can be computed using a novel eigenvalue analysis, thereby making the MMD and Discrepancy bounds comparable. We wrap up the section with a probabilistic interpretation of both bounds. As a roadmap for the reader we give an overview of the tightness relations in Table 1, which will be proven in this section and the next.

4.1 Improved MMD bound for active learning

The MMD measures the similarity between the two unlabeled samples \({\hat{Q}}\) and \({\hat{P}}\). Using this criterion we give a generalization bound similar to the one given by Wang and Ye (2013) suitable for active learning. The empirical MMD quantity is given by

$$\begin{aligned} \text {MMD}(\hat{P},\hat{Q}) = \max _{{\tilde{l}} \in {H_{{\mathcal {L}}}}} \left( \frac{1}{n_{\hat{P}}} \sum _{x \in {\hat{P}}} {\tilde{l}}(x) - \frac{1}{n_{\hat{Q}}} \sum _{x \in {\hat{Q}}} {\tilde{l}}(x) \right) . \end{aligned}$$

Here \({\tilde{l}}\) is the worst-case function from a set of functions \(H_{{{\mathcal {L}}}}\). We take the standard choice \(H_{{\mathcal {L}}} = \{ h \in {\mathcal {H}}_{{\mathcal {L}}}: ||h||_{K_{{\mathcal {L}}}} \le \varLambda _{{\mathcal {L}}}\}\). In Appendix A.1 we revisit how to compute the MMD quantity. We extend the technique of Cortes et al. (2019) to give a generalization bound in terms of the MMD. To get a bound for the MMD we approximate the loss function \(g(h,f)(x) = l(h(x),f(x))\) using \(H_{{{\mathcal {L}}}}\).
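For concreteness, the empirical MMD above equals \(\varLambda_{{\mathcal {L}}}\) times the RKHS distance between the mean embeddings of \({\hat{P}}\) and \({\hat{Q}}\), which can be computed from kernel matrices in the standard way (Gretton et al. 2012). The sketch below is our own illustration with a Gaussian kernel \(K_{{\mathcal {L}}}\); variable names are hypothetical.

```python
import numpy as np

def gaussian_kernel(A, B, sigma):
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-d2 / (2 * sigma**2))

def mmd(X_P, X_Q, sigma_L, Lambda_L):
    """Lambda_L * || mean embedding of P_hat - mean embedding of Q_hat ||_{H_L}."""
    sq = (gaussian_kernel(X_P, X_P, sigma_L).mean()
          - 2 * gaussian_kernel(X_P, X_Q, sigma_L).mean()
          + gaussian_kernel(X_Q, X_Q, sigma_L).mean())
    return Lambda_L * np.sqrt(max(sq, 0.0))  # guard against tiny negative round-off
```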

Proposition 1

(Agnostic MMD worst case bound) Let l be any loss function \(l: {\mathbb {R}} \times {\mathbb {R}} \rightarrow {\mathbb {R}}\). Then for all \(h \in H\) and any labeling function \(f: {\mathcal {X}} \rightarrow {\mathcal {Y}}\) we have

$$\begin{aligned} L_{\hat{P}}(h,f) \le L_{\hat{Q}}(h,f) + \text {MMD}({\hat{P}},{\hat{Q}}) + \eta _{\text {MMD}}, \end{aligned}$$
(3)

where \(\eta _{\text {MMD}} = 2 \min _{{\tilde{l}} \in H_{{\mathcal {L}}}} \max _{h \in H, x \in {\hat{P}}} |g(h,f)(x) - {\tilde{l}}(x)|\).

Here \(\eta _{\text {MMD}}\) measures the approximation error since we may have that \(g(h,f) \notin H_{{\mathcal {L}}}\).

Our MMD bound above differs in two aspects from the bound of Wang and Ye (2013). Wang and Ye (2013) estimate the MMD between the distributions P and Q. However, to estimate the MMD between distributions i.i.d. samples are required (Gretton et al. 2012, Appendix A.2). The sample \({\hat{Q}}\) is not i.i.d. since it is chosen by an active learner.

Our bound allows for non-i.i.d. samples since it estimates the MMD between empirical samples and is therefore better suited for active learning. The second novelty is that we measure the error of approximating the loss function g(h,f) using the term \(\eta _{\text {MMD}}\). This allows us to adjust the MMD to the hypothesis set H and loss l, similar to the Discrepancy measure of Cortes and Mohri (2014). We give the theorem below with a small proof sketch for the simplified case of the linear kernel. See the Appendix for the full proof.

Theorem 2

(Adjusted MMD) Let l be the squared loss and assume \(f \in H\) (realizable setting). If \(K_{{\mathcal {L}}}(x_i,x_j) = K(x_i,x_j)^2\) and \(\varLambda _{{\mathcal {L}}} = 4 \varLambda ^2\), then \(g(h,f) \in H_{{\mathcal {L}}}\) and thus \(\eta _{\text {MMD}} = 0\).

Proof sketch Here we give a proof sketch for the case where K is the linear kernel: \(K(x_i,x_j) = x_i^T x_j\). Then \(h(x) = w_h^T x\) and \(f(x) = w_f^T x\), and \(g(h,f) = ((w_f - w_h)^T x)^2\) is a quadratic function of x. The feature map of the kernel \(K_{{\mathcal {L}}}(x_i,x_j) = K(x_i,x_j)^2\) consists of all monomials of degree 2 (Shawe-Taylor and Cristianini 2004, chap. 9.1). Therefore \(H_{{\mathcal {L}}}\) can be used to model any quadratic function such as g(h,f). Thus if \(\varLambda _{{\mathcal {L}}}\) is chosen appropriately we have \(g(h,f) \in H_{{\mathcal {L}}}\).

Corollary 1

Let l be the squared loss and \(f \in H\) and let K be a Gaussian kernel with bandwidth \(\sigma \). If \(K_{{\mathcal {L}}}\) is a Gaussian kernel with bandwidth \(\sigma _{{\mathcal {L}}} = \frac{\sigma }{\sqrt{2}}\) and \(\varLambda _{{\mathcal {L}}} = 4\varLambda ^2\) then \(\eta _{\text {MMD}} = 0\).

Compared to other works, Theorem 2 gives a more informed way to choose the MMD kernel in the context of learning. Typically, a Gaussian kernel is used for the MMD with \(\sigma _{{\mathcal {L}}} = \sigma \). However, Corollary 1 shows that if \(\sigma _{{\mathcal {L}}} = \sigma \), we may have that \(\eta _{\text {MMD}} \ne 0\) even in the realizable setting, since \(\sigma _{{\mathcal {L}}}\) is too large: the true loss function g(h,f) is less smooth than the functions in \(H_{{\mathcal {L}}}\). This is undesirable since \(\eta _{\text {MMD}}\) cannot be minimized during active learning. Our choice for \(\sigma _{{\mathcal {L}}}\) is preferable, as it ensures \(\eta _{\text {MMD}} = 0\) in the realizable setting.
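A quick numerical sanity check of this kernel choice: squaring a Gaussian kernel with bandwidth \(\sigma\) yields a Gaussian kernel with bandwidth \(\sigma/\sqrt{2}\), since \(\exp(-d^2/(2\sigma^2))^2 = \exp(-d^2/(2(\sigma/\sqrt{2})^2))\). The snippet below (our own sketch, not from the paper) verifies this on random data.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))
sigma = 1.3

d2 = np.sum(X**2, 1)[:, None] + np.sum(X**2, 1)[None, :] - 2 * X @ X.T
K = np.exp(-d2 / (2 * sigma**2))                    # kernel K with bandwidth sigma
K_L = np.exp(-d2 / (2 * (sigma / np.sqrt(2))**2))   # kernel K_L with bandwidth sigma / sqrt(2)

# K_L(x, x') equals K(x, x')^2, the condition of Theorem 2.
assert np.allclose(K_L, K**2)
```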

4.2 Discrepancy bound

The Discrepancy is defined as

$$\begin{aligned} \text {disc}(\hat{P},\hat{Q}) = \max _{h,h' \in H} |L_{\hat{P}}(h',h) - L_{\hat{Q}}(h',h)|. \end{aligned}$$
(4)

Observe it depends on H and l and therefore automatically adjusts to the loss and hypothesis set. We give a bound of Cortes et al. (2019) in terms of the Discrepancy.

Theorem 3

(Agnostic Discrepancy worst case bound (Cortes et al. 2019)) Assume that for all \(x \in {\mathcal {X}}\) and for all \(h \in H\) that \(l(h(x),f(x)) \le C\) and let l be the squared loss. Then for all \(h \in H\) and any labeling function \(f: {\mathcal {X}} \rightarrow {\mathcal {Y}}\) we have

$$\begin{aligned} L_{\hat{P}}(h,f) \le L_{\hat{Q}}(h,f) + \text {disc}(\hat{P},\hat{Q}) + \eta _{\text {disc}}, \end{aligned}$$

where \(\eta _{\text {disc}} = 4 C \min _{{\tilde{f}} \in H} \max _{x \in {\hat{P}}} |{\tilde{f}}(x) - f(x)|\).

Here \(\eta _{\text {disc}}\) measures the model misspecification. In the realizable setting, \(f \in H\), and \(\eta _{\text {disc}} = 0\).

4.3 Eigenvalue analysis

We show the relation between the Discrepancy and MMD using a novel eigenvalue analysis. To this end we introduce the matrix \(M_{{\hat{P}},{\hat{Q}}}\) to compute the Discrepancy.

$$\begin{aligned} M_{{\hat{P}},{\hat{Q}}} = \frac{1}{n_{\hat{P}}} X_{\hat{P}}^T X_{\hat{P}}- \frac{1}{n_{\hat{Q}}} X_{\hat{Q}}^T X_{\hat{Q}}. \end{aligned}$$

For notational convenience we will often write M instead of \(M_{{\hat{P}},{\hat{Q}}}\). The matrix M measures the difference between two sets of samples using their second-order moments. Considering its kernelized version, such a comparison can implicitly take higher-order moments into account as well. In particular, for a Gaussian kernel all moments of the samples are compared and we have that \(M=0\) only if \({\hat{P}}={\hat{Q}}\).

In the following we will look at the eigendecomposition of M. Since M is the difference between two covariance matrices, it can have positive and negative eigenvalues. A positive (negative) eigenvalue means that in the direction of the corresponding eigenvector \({\hat{P}}\) has more (less) variance than \({\hat{Q}}\). Recall that in active learning, our aim is to approximate \({\hat{P}}\) using representative samples \({\hat{Q}}\), and thus small absolute eigenvalues are desirable, because this would indicate that in the direction of the corresponding eigenvector \({\hat{P}}\) is well approximated by \({\hat{Q}}\).

Theorem 4

(Discrepancy computation (Mansour et al. 2009)) Assume K is the linear kernel, \(K(x_i,x_j) = x_i^T x_j\), and l is the squared loss, then

$$\begin{aligned} \text {disc}(\hat{P},\hat{Q}) = 4 \varLambda ^2 \max _i |\lambda _i| = 4 \varLambda ^2 ||\lambda ||_\infty . \end{aligned}$$
(5)

where \(\lambda _i\) are the eigenvalues of M, and \(\lambda \) is the vector of eigenvalues of M.

Note that \(h'\) will later play the role of f, the true labeling function. The theorem shows that in the worst case, the h and \(h'\) that maximize the Discrepancy in Eq. 4 are chosen exactly in the direction where \({\hat{Q}}\) and \({\hat{P}}\) differ most, i.e., the direction of the largest absolute eigenvalue. Cortes and Mohri (2014) show that we can replace M by \(M_K\) to compute the Discrepancy for any PSD kernel.

Before we can give our main result we require some additional notation. Assume that the eigenvalues \(\lambda _i\) of \(M\) are ordered by absolute value, where \(|\lambda _1|\) is the largest absolute eigenvalue. \(\lambda \) indicates the vector of eigenvalues, with \(r = \text {rank}(M)\) non-zero eigenvalues. \(e_i\) is the normalized (unit-length) eigenvector corresponding to \(\lambda _i\). By carefully analyzing the relationship between M and the feature map of the squared kernel, we can show that the MMD can be computed as follows.

Theorem 5

(MMD Computation) Let \(K_{{\mathcal {L}}}(x_i,x_j) = K(x_i,x_j)^2\) and \(\varLambda _{{\mathcal {L}}} = 4 \varLambda ^2\), then

$$\begin{aligned} \text {MMD}({\hat{P}},{\hat{Q}}) = 4 \varLambda ^2 ||\lambda ||_2. \end{aligned}$$
(6)

This theorem shows that the MMD measures differences between the samples \({\hat{Q}}\) and \({\hat{P}}\) differently than the Discrepancy. The Discrepancy only measures similarity along one dimension, namely the direction where the samples differ the most. The MMD considers all dimensions to compare the samples \({\hat{Q}}\) and \({\hat{P}}\). Due to the square in the Euclidean norm, the MMD gives more weight in the comparison to directions that differ more.
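To make the eigenvalue picture concrete, the sketch below (our own illustration for the linear kernel, with hypothetical data) builds \(M_{\hat{P},\hat{Q}}\), computes \(\text{disc} = 4\varLambda^2 ||\lambda||_\infty\) (Theorem 4) and \(\text{MMD} = 4\varLambda^2 ||\lambda||_2\) (Theorem 5), and cross-checks the latter against the kernel-mean computation with \(K_{{\mathcal {L}}}(x,x') = (x^T x')^2\).

```python
import numpy as np

rng = np.random.default_rng(1)
X_P = rng.normal(size=(200, 5))                  # unlabeled pool P_hat
X_Q = X_P[rng.choice(200, 20, replace=False)]    # a candidate batch Q_hat
Lambda = 1.0

M = X_P.T @ X_P / len(X_P) - X_Q.T @ X_Q / len(X_Q)
lam = np.linalg.eigvalsh(M)                      # eigenvalues of the symmetric matrix M

disc = 4 * Lambda**2 * np.max(np.abs(lam))       # Theorem 4: 4 Lambda^2 ||lambda||_inf
mmd = 4 * Lambda**2 * np.linalg.norm(lam, 2)     # Theorem 5: 4 Lambda^2 ||lambda||_2

# Cross-check of Theorem 5: with K_L(x, x') = (x^T x')^2 and Lambda_L = 4 Lambda^2,
# the squared mean-embedding distance equals the squared Frobenius norm of M.
K_L = lambda A, B: (A @ B.T) ** 2
sq = K_L(X_P, X_P).mean() - 2 * K_L(X_P, X_Q).mean() + K_L(X_Q, X_Q).mean()
assert np.isclose(mmd, 4 * Lambda**2 * np.sqrt(sq))
```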

Corollary 2

Under the conditions of Theorem 2, \(\text {disc}({\hat{P}},{\hat{Q}}) \le \text {MMD}({\hat{P}},{\hat{Q}})\).

Under these conditions the Discrepancy bound (Theorem 3) is tighter than the MMD bound (Proposition 1), since \(\eta _{\text {MMD}} = \eta _{\text {disc}} = 0\). Since the Discrepancy bound is tighter, one may expect that active learning by minimization of the Discrepancy may result in better active learning queries than minimization of the MMD, in particular if \(\eta _{\text {MMD}}\) and \(\eta _{\text {disc}}\) are small or zero.

4.4 Probabilistic analysis

We show the MMD can provide a tighter bound on the expected loss under certain probabilistic assumptions. From this point on we assume the conditions of Theorem 2 and take h to be the model trained on the set \({\hat{Q}}\), and f to be the true labeling function. In addition, define \(u = h-f\) and \(U = \{ u \in {\mathcal {H}} : ||u||_K \le 2\varLambda \}\) and let \({\bar{u}}_i = u^T e_i\), where \(e_i\) is the eigenvector of M.

Then \(||u||_K = ||{\bar{u}}||_K \le 2\varLambda \), since \({\bar{u}}\) is a rotated version of u. It is more convenient to work with \({\bar{u}}\), since then the matrix M diagonalizes: \(u^T M u = \sum _i {\bar{u}}_i^2 \lambda _i\).

The difference u is the unknown error our trained model h makes compared with the true model f. By making different probabilistic assumptions about the distribution of u we can arrive at different bounds. We now provide the building block for our probabilistic bounds. By noting that \(L_{\hat{P}}(h,f) - L_{\hat{Q}}(h,f) = u^T M u\) and by making use of the triangle inequality, we find the following.

Lemma 1

(Probabilistic bound) Assume u is distributed according to a pdf p(u) over U. Then

$$\begin{aligned} \mathbb {E}_{u} L_{{\hat{P}}}(h,f) \le \mathbb {E}_{u} L_{{\hat{Q}}}(h,f) + \mathbb {E}_{u} G(u,M), \end{aligned}$$
(7)

where we defined \(G(u,M) = \sum _i {\bar{u}}_i^2 |\lambda _i|\).

Observe that G(u, M) is a weighted sum, where each \(|\lambda _i|\) is weighted by \({\bar{u}}_i^2\). Recall that \(L_{{\hat{Q}}}(h,f)\) is generally small due to the training procedure of the model; thus generally \(\mathbb {E}_{u} L_{{\hat{Q}}}(h,f)\) will be small as well. Therefore we focus our probabilistic analysis on the term \(\mathbb {E}_{u} G(u,M)\). By giving bounds on this quantity, we derive several probabilistic bounds that hold in expectation w.r.t. u.
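The quantity \(G(u,M)\) is straightforward to compute once u is known, as in the realizable experiments of Sect. 6.3. Below is a minimal sketch for the linear kernel, assuming u is represented as the weight-vector difference of the trained and true models (our own illustration, not the authors' code).

```python
import numpy as np

def G(u, M):
    """G(u, M) = sum_i (u^T e_i)^2 |lambda_i| over the eigenpairs (lambda_i, e_i) of M."""
    lam, E = np.linalg.eigh(M)   # columns of E are the eigenvectors e_i
    u_bar = E.T @ u              # coordinates of u in the eigenbasis of M
    return float(np.sum(u_bar**2 * np.abs(lam)))

def G_terms(u, M):
    """Per-eigenvalue contributions u_bar_i^2 |lambda_i| (the decomposition shown in Fig. 2),
    ordered from the largest to the smallest absolute eigenvalue."""
    lam, E = np.linalg.eigh(M)
    u_bar = E.T @ u
    order = np.argsort(-np.abs(lam))
    return (u_bar**2 * np.abs(lam))[order]
```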

The Discrepancy can be interpreted as putting all probability mass on \(u=2\varLambda e_1\).

Proposition 2

(Worst case: Probabilistic Discrepancy) Let the pdf be \(p(u) = \delta (u - 2\varLambda e_1)\), where \(\delta (x)\) is the Dirac delta distribution. Then

$$\begin{aligned} \mathbb {E}_{u} L_{{\hat{P}}}(h,f) \le \mathbb {E}_{u} L_{{\hat{Q}}}(h,f) + \text {disc}(\hat{P},\hat{Q}) \end{aligned}$$
(8)

Only one \(u \in U\) can be observed under this pdf. This is a worst-case distribution because this p(u) maximizes \(\mathbb {E}_{u} G(u,M)\). The Discrepancy assumes that the model error u points exactly in the direction that causes us to make the biggest error on \({\hat{P}}\). Under this distribution the Discrepancy gives a tighter bound on the expected loss than the MMD because of Corollary 2. Under a different p(u) the MMD bound is tighter.

Theorem 6

(Pessimistic case: Probabilistic MMD) Let p(u) be a pdf on \(U_s\) such that

$$\begin{aligned} \mathbb {E}_{u} {\bar{u}}_i^2 = {4\varLambda ^2} {|\lambda _i|} \left( \sqrt{r} ||\lambda ||_2\right) ^{-1}, \end{aligned}$$
(9)

then

$$\begin{aligned} \mathbb {E}_{u} L_{{\hat{P}}}(h,f) \le \mathbb {E}_{u} L_{{\hat{Q}}}(h,f) + \frac{1}{\sqrt{r}} \text {MMD}(\hat{P},\hat{Q}) \le \mathbb {E}_{u} L_{{\hat{Q}}}(h,f) + \text {disc}(\hat{P},\hat{Q}). \end{aligned}$$

Unlike for the distribution of the Discrepancy, for the above p(u) it is possible to observe different model errors u. However, the model error u in this case is biased: Equation 9 suggests that u is more likely to point in the direction of eigenvectors with large absolute eigenvalues. This assumption is pessimistic since large absolute eigenvalues can contribute more to \(\mathbb {E}_{u} G(u,M)\). Another way to interpret this is that model errors are more likely to occur in directions where \({\hat{Q}}\) and \({\hat{P}}\) differ more. Because \({\hat{Q}}\) and \({\hat{P}}\) differ more in those directions, these model errors can count more towards the MSE on \({\hat{P}}\).

For this p(u) the MMD bound is tighter. If the probabilistic assumption of the MMD is more accurate, we can expect that the MMD active learner will yield better active learning queries than the Discrepancy.

5 Nuclear discrepancy

In this section we motivate the optimistic probabilistic assumption that leads to the Nuclear Discrepancy (ND) bound. First, let us introduce the Nuclear Discrepancy quantity

$$\begin{aligned} \text {disc}_N(\hat{P},\hat{Q}) = {4 \varLambda ^2} ||\lambda ||_1. \end{aligned}$$

In the absence of any prior knowledge, we choose the pdf p(u) according to the well established principle of maximum entropy. This principle dictates that in case nothing is known about a distribution, the distribution with the largest entropy should be chosen (Jaynes 1957). Accordingly, we choose p(u) uniform over U, which leads to the following.

Theorem 7

(Optimistic case: Probabilistic ND) Let p(u) be uniform over all \(u \in U_s\), then

$$\begin{aligned} \mathbb {E}_{u} L_{{\hat{P}}}(h,f) \le \mathbb {E}_{u} L_{{\hat{Q}}}(h,f) + \frac{1}{r+2} \text {disc}_N(\hat{P},\hat{Q}). \end{aligned}$$

In addition we have that \(\text {disc}_N(\hat{P},\hat{Q}) \le {\sqrt{r}}~\text {MMD}(\hat{P},\hat{Q}) \le {r}~\text {disc}(\hat{P},\hat{Q})\).

Under the uniform distribution, u is unbiased: each direction for the model error is equally likely. This is more optimistic than the assumption of the MMD, where u was biased towards directions that could cause larger errors on \({\hat{P}}\). Because now u is not biased, \(\mathbb {E}_{u} G(u,M)\) is smaller under this p(u) than under the distributions of Proposition 2 and Theorem 6, and so this p(u) is more optimistic. The Nuclear Discrepancy (ND) owes its name to the fact that it is proportional to the nuclear matrix norm of M.

An appealing property of this choice of p(u) is that, given a fixed \({\hat{P}}\), any choice of \({\hat{Q}}\) does not influence p(u). For the Discrepancy and the MMD, choosing a different \({\hat{Q}}\) leads to a different p(u). Thus choosing queries changes p(u) and thereby also implicitly the distribution of h and f. Instead, for the ND, our queries do not influence the distribution of h and f. This assumption seems reasonable, since f is usually assumed to be fixed and independent of our actions.

Under the uniform distribution the ND provides the tightest bound on the expected loss, while the MMD bound is looser, and the Discrepancy bound is the loosest. Therefore, if this probabilistic assumption is the most accurate, minimization of the Nuclear Discrepancy may lead to the best queries for active learning, followed by the MMD and Discrepancy, in that order.
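Since all three quantities are norms of the same eigenvalue vector, the chain \(\text{disc}_N \le \sqrt{r}\,\text{MMD} \le r\,\text{disc}\) of Theorem 7 reduces to standard inequalities between the \(\ell_1\), \(\ell_2\) and \(\ell_\infty\) norms of \(\lambda\). The sketch below (our own illustration with hypothetical data) computes the objectives of all three bounds and checks the chain.

```python
import numpy as np

def bound_quantities(M, Lambda):
    """disc, MMD and disc_N computed from the eigenvalues of M (Theorems 4, 5 and Sect. 5)."""
    lam = np.linalg.eigvalsh(M)
    lam = lam[np.abs(lam) > 1e-12]           # keep the r non-zero eigenvalues
    r = len(lam)
    disc = 4 * Lambda**2 * np.linalg.norm(lam, np.inf)
    mmd = 4 * Lambda**2 * np.linalg.norm(lam, 2)
    disc_N = 4 * Lambda**2 * np.linalg.norm(lam, 1)
    return disc, mmd, disc_N, r

rng = np.random.default_rng(2)
X_P = rng.normal(size=(300, 6))
X_Q = X_P[rng.choice(300, 15, replace=False)]
M = X_P.T @ X_P / len(X_P) - X_Q.T @ X_Q / len(X_Q)

disc, mmd, disc_N, r = bound_quantities(M, Lambda=1.0)
# Norm inequalities behind Theorem 7: ||.||_1 <= sqrt(r) ||.||_2 <= r ||.||_inf.
assert disc_N <= np.sqrt(r) * mmd + 1e-9
assert np.sqrt(r) * mmd <= r * disc + 1e-9
```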

6 Experiments

We explain the setup and baselines; afterward we review our main results for the realizable setting. We discuss the results and examine the probabilistic assumptions empirically. Somewhat similar results are observed in the agnostic setting, which we briefly discuss. An additional experiment investigates the influence of subsampling of datasets on our results. This subsampling experiment and all results of the agnostic case are discussed in detail in the Appendix.

6.1 Experimental setup and baselines

An overview of the experimental procedure is given in Algorithm 1. A training set (\(65\%\)) and test set (\(35\%\)) are used; the training set corresponds to \({\hat{P}}\) and we indicate the test set by \(\hat{T}\). We use the active learners to select batches of size \(n=1,2,\ldots ,50\). For computational reasons we select batches in a sequential greedy fashion. Initially, at \(t=0\), the batch is empty: \({\hat{Q}}_0 = \emptyset \). In iteration \(1 \le t \le n\) the active learner selects a sample \(x_{t}\) from the unlabeled pool \({\hat{U}}_{t-1} = {\hat{P}}\setminus {\hat{Q}}_{t-1}\) according to \(x_{t} = {{\,\mathrm{arg\,min}\,}}_{s \in {\hat{U}}_{t-1}} \text {obj} ({\hat{P}}, {\hat{Q}}_{t-1} \cup s)\). We perform the experiments multiple times to ensure significance of the results. We call each repetition a run, and for each run a new training and test split is used. During one run, we evaluate each active learner using the described procedure of Algorithm 1.

Algorithm 1 Overview of the experimental procedure
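The greedy selection step above can be sketched as follows; this is our own minimal illustration of the loop in Algorithm 1 (not the authors' code), with the objective passed in as a function, here instantiated with the Nuclear Discrepancy for a linear kernel.

```python
import numpy as np

def greedy_batch(X_P, obj, n):
    """Greedily grow Q_hat: at step t, add the pool sample minimizing obj(P_hat, Q_hat + {x})."""
    selected = []                            # indices of Q_hat within X_P
    remaining = list(range(len(X_P)))        # indices of the unlabeled pool U_hat
    for _ in range(n):
        scores = [obj(X_P, X_P[selected + [i]]) for i in remaining]
        best = remaining[int(np.argmin(scores))]
        selected.append(best)
        remaining.remove(best)
    return selected

def nd_obj(X_P, X_Q, Lambda=1.0):
    """Nuclear Discrepancy objective 4 Lambda^2 ||lambda||_1 for the linear kernel."""
    M = X_P.T @ X_P / len(X_P) - X_Q.T @ X_Q / len(X_Q)
    return 4 * Lambda**2 * np.sum(np.abs(np.linalg.eigvalsh(M)))

# Hypothetical usage on random data.
rng = np.random.default_rng(3)
X_pool = rng.normal(size=(100, 4))
batch_indices = greedy_batch(X_pool, nd_obj, n=10)
```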

As baseline we use random sampling and a greedy version of the state-of-the-art MMD active learner (Chattopadhyay et al. 2012; Wang and Ye 2013). We compare the baselines with our novel active learners: the Discrepancy active learner and the Nuclear Discrepancy active learner.

The methods are evaluated on 13 datasets that originate either from the UCI Machine Learning repository (Lichman 2013) or were provided by Cawley and Talbot (2004). See Appendix E for the dataset names and characteristics. Furthermore, we perform an experiment on the image dataset MNIST. The MNIST dataset (LeCun et al. 1998) consists of images of handwritten digits of size \(28\times 28\) pixels. By treating each pixel as a feature, the dimensionality of this dataset is 784, which is relatively high. Like Yang and Loog (2018) we construct 3 difficult binary classification problems: 3vs5, 7vs9 and 5vs8.

To make datasets conform to the realizable setting we use the approach of Cortes and Mohri (2014): we fit a model of our hypothesis set to the whole dataset and use its outputs as labels.
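A minimal sketch of this relabeling step, assuming a KRLS model with a Gaussian kernel (the function name is ours, not from the paper):

```python
import numpy as np

def make_realizable_labels(X, y, sigma, mu):
    """Fit KRLS to the whole dataset and return its real-valued outputs as new labels,
    so that the labeling function lies in the hypothesis set (no misspecification)."""
    d2 = np.sum(X**2, 1)[:, None] + np.sum(X**2, 1)[None, :] - 2 * X @ X.T
    K = np.exp(-d2 / (2 * sigma**2))
    alpha = np.linalg.solve(K + len(X) * mu * np.eye(len(X)), y)
    return K @ alpha  # outputs of the fitted model, used as the labels f(x)
```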

To set reasonable hyperparameters we use a procedure similar to that of Gu et al. (2012). Before any experiments are performed, we use labeled data for model selection to determine the hyperparameters (\(\sigma \) and \(\mu \) of the KRLS model). This can be motivated by the fact that in practice a related task or dataset may be available from which a rough estimate of the hyperparameter settings can be obtained. This procedure makes sure \(\eta _{\text {MMD}}\) and \(\eta _{\text {disc}}\) are small in the agnostic setting.

Recall that the active learners minimize bounds on \(L_{\hat{P}}(h,f)\). Therefore the active learners implicitly also minimize a bound on \(L_P(h,f)\); see Theorem 1. By choosing hyperparameters as described above, we ensure that the Rademacher complexity term \(R_m(H)\) is not too large and that we do not overfit. We measure performance on an independent test set in order to get an unbiased estimate of \(L_P(h,f)\).

To aid reproducibility we give all hyperparameters and additional details in Appendix E. We set \(\sigma _{{\mathcal {L}}}\) according to our analysis in Corollary 1.

6.2 Realizable setting

First we benchmark the active learners in the realizable setting. In this setting we are assured that \(\eta = 0\) in all bounds and therefore we eliminate unexpected effects that can arise due to model misspecification. We study this scenario to validate our theoretical results and gain more insight; furthermore, note that this scenario is also studied in domain adaptation (Cortes and Mohri 2014).

Fig. 1 Learning curves for several datasets in the realizable setting. Results are averaged over 100 runs. The MSE is measured with respect to random sampling (lower is better)

Several learning curves are shown in Fig. 1, all curves can be found in Appendix H.1. The MSE of the active learner minus the mean performance (per query) of random sampling is displayed on the y-axis (lower is better). The curve is averaged over 100 runs. Error bars represent the \(95\%\) confidence interval of the mean computed using the standard error.

We summarize results on all datasets using the Area Under the (mean squared error) Learning Curve (AULC) in Table 2. The AULC is a different metric than the well-known AUROC or AUPRC measures. The AUROC measure summarizes the performance of a model for different misclassification costs (type I and type II costs) and the AUPRC is useful when one class is more important than the other, such as in object detection.

By contrast, the AULC is specifically suited to active learning, and summarizes the performance of an active learning algorithm for different labeling budgets (O’Neill et al. 2017; Huijser and van Gemert 2017; Settles and Craven 2008). A low AULC is obtained when an active learner quickly learns a model with low MSE. A method in the table is shown in bold if it is the best method (as judged by the mean) or if it is not significantly worse than the best method (as judged by the t-test).
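For reference, the AULC can be computed by integrating the per-budget MSE values over the learning curve; the sketch below assumes one MSE value per batch size \(n = 1, \ldots, 50\) and uses a simple trapezoidal rule (our own illustration, the exact convention may differ).

```python
import numpy as np

def aulc(mse_per_budget):
    """Area under the MSE learning curve over unit-spaced budgets (lower is better)."""
    mse = np.asarray(mse_per_budget, dtype=float)
    return float(np.sum((mse[:-1] + mse[1:]) / 2.0))  # trapezoidal rule
```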

Significance of improvements is judged by a paired two-tailed t-test (significance level \(p = 0.05\)). We may use a paired test since during one run all active learners are evaluated using the same training and test split.

In the majority of the cases the MMD improves upon the Discrepancy (see Table 2). The results on the ringnorm dataset are remarkable: here the Discrepancy sometimes performs worse than random sampling, see Fig. 1. We observe that generally the Discrepancy performs the worst. These results illustrate that tighter worst-case bounds do not guarantee improved performance. The proposed ND active learner significantly improves upon the MMD in 9 out of the 13 datasets tested. Here we counted MNIST once; we remark that on all its subproblems the ND improves significantly upon the MMD. This provides evidence that the proposed method can also deal with high-dimensional datasets. In cases where the ND does not perform best, it ties with the MMD or the Discrepancy. The ND never performs significantly worse. This ranking of the methods exactly corresponds to the order of the bounds given by Theorem 7 under our optimistic probabilistic assumptions. This supports our hypothesis that we find ourselves more often in a more optimistic average-case scenario.

Table 2 Area Under the mean squared error Learning Curve (AULC) for the strategies in the realizable setting, averaged over 100 runs
Fig. 2 Decomposition of the sum G(u, M) during active learning for several datasets. EV1 indicates the contribution of \(\lambda _1\), EV2-9 indicates the summed contributions of \(\lambda _2,\ldots ,\lambda _9\), etc. Averaged over 100 runs of the random active learner. \(\lambda _1\) in most cases contributes little and in general all \(\lambda _i\) contribute to G(u, M). This supports the optimistic probabilistic assumptions

6.3 Decomposition of probabilistic bounds

Since we are in the realizable setting we can compute \(u = h-f\) with the true labeling function f and our trained model h. Thus we can compute each term in the sum of G(u, M) in (7) during the experiments. We show the contribution of each eigenvalue to G(u, M). In Fig. 2 we show this decomposition using a stacked bar chart during several active learning experiments of the baseline active learner ‘Random’. Here EV1 indicates the largest absolute eigenvalue; its contribution is given by \({\bar{u}}_1^2|\lambda _1|\) (see also (7)). EV2-9 indicates the summed contribution \(\sum _{i=2}^9 {\bar{u}}_i^2|\lambda _i|\), etc. The mean contributions over 100 runs are shown.

Observe that the contribution of \(|\lambda _1|\) to G(u, M) is often small; it is shown by the small white bar at the bottom of the bar chart. Therefore the Discrepancy active learner chooses suboptimal samples: its strategy is optimal for a worst-case scenario \(G(u,M) = 4\varLambda ^2 |\lambda _1|\) that is very rare. We observe that typically all \(\lambda _i\) contribute to G(u, M), supporting our probabilistic assumption.

6.4 Agnostic setting

For completeness, we briefly mention the agnostic setting; for all details see Appendix F. In the agnostic setting the rankings of the methods can change and performance differences become less significant. The ND still improves more upon the MMD than the reverse; however, the trend is less significant. Because our assumption \(\eta = 0\) is violated, our theoretical analysis is less applicable.

For the MNIST experiments, however, we find that the results for some subproblems almost coincide with the realizable setting: apparently, for the MNIST dataset the model misspecification is very small. This may be because the dataset has a relatively high dimensionality.

6.5 Influence of subsampling

We briefly mention an additional experiment that we have performed on the splice dataset to see how subsampling affects performance. To this end we measure the performance while we vary the size of the pool \({\hat{P}}\) by changing the amount of subsampling. This investigates how the proposed methods would perform for larger-scale problems. For all details please see Appendix G; here we summarize our findings.

For small pool sizes all active learners experience a drop in performance. We find that the larger the pool, the better the performance, up to some point at which the performance levels off. The experiment provides evidence that if finer subsampling or larger datasets are used, the methods typically improve in performance up to a point where it levels off.

7 Discussion

In the experiments we have observed that in the realizable setting the order of the bounds under our more optimistic probabilistic assumptions gives the best indication of active learning performance. The empirical decomposition of G(u, M) during the experiments also supports our hypothesis that we generally find ourselves in a more optimistic scenario rather than a worst-case scenario.

Still, it is meaningful to look at worst-case guarantees, but only when the worst case can reasonably be expected to occur. The worst case assumed by the Discrepancy can never occur in the realizable setting, and we believe it is also highly unlikely in the agnostic setting. The strength of our probabilistic approach is that it considers all scenarios equally and does not focus too much on specific scenarios, making the strategy more robust.

Our work illustrates that the order of the bounds can change under varying conditions and thus tightness of bounds is not the whole story. The conditions under which the bounds hold are equally important, and should reflect the mathematical setting as much as possible. For example, in a different setting where an adversary would pick u, the Discrepancy active learner would be the most appropriate. This insight illustrates that active learning performance can be improved not only by obtaining tighter bounds, but also by finding more appropriate assumptions for (bound-based) active learners.

Our work supports the idea of Germain et al. (2013), who introduce a probabilistic version of the Discrepancy bound for the zero-one loss (Ben-David et al. 2010). Our conclusions also support the direction Cortes et al. (2019) take: by using more accurate assumptions to better characterize the worst-case scenario, performance may be improved.

In our study we have focused on minimizing the mean squared error. It would be interesting to investigate the extension of the Nuclear Discrepancy to other loss functions, in particular the zero-one loss. As far as we can see, however, such an extension is not trivial. The above-mentioned probabilistic version of the Discrepancy by Germain et al. (2013) may provide some inspiration to achieve this, but they offer a PAC-Bayes approach that cannot be easily adapted to the probabilistic setting we consider.

Where the experiments in the realizable setting provide clear insights, the results concerning the agnostic setting are not fully understood. A more in-depth experimental study of the agnostic setting is complicated by unexpected effects of \(\eta \). Since probabilistic bounds are the most informative in the realizable setting, it is of interest to consider probabilistic bounds for the agnostic setting as well.

In our experiments we have used greedy optimization to compute the batch \(\hat{Q}_n\). It is theoretically possible to optimize a whole batch of queries in one global optimization step. However, for the MMD this problem is known to be NP-hard (Chattopadhyay et al. 2012). Minimizing the Discrepancy is also non-trivial, as illustrated by the involved optimization procedure required by Cortes and Mohri (2014) for domain adaptation. Note that their optimization problem is easier than the optimization problem posed by active learning, where binary constraints are necessary. Since the objective value of the Nuclear Discrepancy is given by an expectation which can be approximated using sampling, we believe it may be possible to speed up the optimization by using approximations.

In this work we have only considered single-shot batch active learning. In regular batch-mode active learning label information of previously selected samples can be used to improve query selection. This can be accommodated in our active learner by refining p(u) using label information. Our results have implications for adaptation as well. We suspect our suggested choice of \(\sigma _{{\mathcal {L}}}\) may improve the MMD domain adaptation method (Huang et al. 2007). Furthermore, our results suggest that the ND is a promising objective for adaptation.

8 Conclusion

To investigate the relation between generalization bounds and active learning performance, we gave several theoretical results concerning the bound of the MMD active learner and the Discrepancy bound. In particular, we showed that the Discrepancy provides the tightest worst-case bound. We introduced a novel quantity, the Nuclear Discrepancy, motivated by optimistic probabilistic assumptions derived from the principle of maximum entropy. Under these probabilistic assumptions the ND provides the tightest bound on the expected loss, followed by the MMD, and the Discrepancy provides the loosest bound.

Experimentally, we observed that in the realizable setting the Discrepancy performs the worst, illustrating that tighter worst-case bounds do not guarantee improved active learning performance. Our optimistic probabilistic analysis clearly matches the observed behavior in the realizable setting: the proposed ND active learner improves upon the MMD, and the MMD improves upon the Discrepancy active learner. We find that even on the high-dimensional image dataset MNIST our method is competitive. A similar, weaker, trend is observed in the agnostic case. One of our key conclusions is that not only bound tightness is important for active learning performance, but that appropriate assumptions are equally important.