PAC-learning with approximate predictors

Approximate learning machines have become popular in the era of small devices, including quantised, factorised, hashed, or otherwise compressed predictors, and the quest to explain and guarantee good generalisation abilities for such methods has just begun. In this paper, we study the role of approximability in learning, both in the full precision and the approximated settings. We do this through a notion of sensitivity of predictors to the action of the approximation operator at hand. We prove upper bounds on the generalisation of such predictors, yielding the following main findings, for any PAC-learnable class and any given approximation operator: (1) We show that under mild conditions, approximable target concepts are learnable from a smaller labelled sample, provided sufficient unlabelled data; (2) We give algorithms that guarantee a good predictor whose approximation also enjoys the same generalisation guarantees; (3) We highlight natural examples of structure in the class of sensitivities, which reduce, and possibly even eliminate, the otherwise substantial requirement for additional unlabelled data, and hence shed new light on what makes one problem instance easier to learn than another. These results embed the scope of modern model-compression approaches into the general goal of statistical learning theory, which in return suggests appropriate algorithms through minimising uniform bounds.


Introduction
Editors: João Gama, Alípio Jorge, Salvador García.

The last decade has seen a tremendous increase of interest in complex learning problems, such as deep neural networks, and learning in very high dimensional spaces. This results in a large number of parameters which need to be learned from the data. This is typically very resource-intensive in terms of memory, computation, and labelled training data, and consequently infeasible to deploy on devices with limited resources such as mobile phones, wearable devices, and the Internet of Things. Therefore, a plethora of model-compression and approximation techniques have been proposed, such as quantisation, pruning, factorisation, random projection, hashing, and others (Choudhary et al., 2020). Rather intriguingly, many empirical findings on realistic benchmark problems seem to indicate that, despite a drastic compression of the complex model, such techniques often perform impressively well, with predictive accuracy comparable to that of full precision models. Below we mention just a few illustrative landmarks.
Quantisation of the weights of deep neural networks was proposed in BinaryConnect (Courbariaux et al., 2015), where a neural network with weights constrained to a single bit ( ±1 ) was proposed and empirically demonstrated to achieve comparable results to a full precision network of the same size. These results were further refined by the Quantised Neural Networks (QNN) training algorithm (Hubara et al., 2017), and the idea was also extended to convolutional networks in Xnor-net (Rastegari et al., 2016). Another compression scheme introduced in Han et al. (2016), called Deep Compression, employed a combination of pruning, quantisation, and Huffman coding to achieve similar results to the original network, with a significant reduction in memory usage.
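As a rough sketch of the kind of operator these methods implement, sign-based weight binarisation with a single shared scale (in the spirit of BinaryConnect and XNOR-Net; the mean-absolute-value scaling here is illustrative rather than the exact published scheme) can be written as:

```python
import numpy as np

def binarise(weights: np.ndarray) -> np.ndarray:
    """Map each weight to +/- alpha, where alpha is the mean absolute
    weight; each parameter then needs only a single bit, plus one
    shared full-precision scale per tensor."""
    alpha = np.abs(weights).mean()
    return alpha * np.sign(weights)

w = np.array([0.7, -0.2, 0.05, -1.1])
print(binarise(w))  # entries collapse to +/- mean(|w|) = +/- 0.5125
```

In a full training loop, the binarised weights are used in the forward and backward passes while full-precision weights accumulate the gradient updates; the function above only illustrates the approximation operator itself.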
Factorisation of the weights into low-rank matrices has been another common technique to reduce the size of a deep neural network (DNN); see Denil et al. (2013, 2014) for details. Recent survey articles on a variety of model-compression techniques specific to deep neural networks may be found in Choudhary et al. (2020), Cheng et al. (2017) and Menghani (2021).
In a related work (Ravi, 2019), the author proposes to learn the high and low complexity networks simultaneously through a joint objective function that minimises not only their individual sample errors but also their disagreement. They found experimentally that this approach improves accuracy of both models, regardless of the model-compression technique employed. While a theoretical explanation remains elusive, this was among the first attempts to shift focus from the compressed model back to the fuller picture of the original model, and consider these objectives in tandem.
Theoretical studies of model-compression are much scarcer, and the interplay between model approximation and generalisation is not very well understood. Work taking an information theoretic approach (Gao et al., 2019) studied the trade-off between the compression granularity (rate) and the change it induces in the empirical error, using rate distortion theory. Follow-on work (Bu et al., 2021) extended their analysis to show that it is possible (on occasion) for compressed versions of pre-trained models to generalise even better than the original.
Another line of research exploited a notion of compression (Arora et al., 2018; Zhou et al., 2019). In Arora et al. (2018), a new compression framework was introduced for proving generalisation bounds, and their analysis indicated that resilience to noise implies a better generalisation for deep neural networks. A PAC-Bayes bound was then proposed to give a non-vacuous generalisation bound on the compressed network in Zhou et al. (2019). This was further built upon in Baykal et al. (2019), and has inspired a new algorithm along with a generalisation bound for the fully connected network.
In Suzuki et al. (2020b), compression-based bounds for a new pruning method for DNNs were established, and more recently the authors also gave bounds for the full network (Suzuki et al., 2020a). This latter work allows the compression-based bound to be converted into a bound for the full network, using the local Rademacher complexity of the Minkowski difference between the loss class of the full networks and the loss class of the compressed networks. This is therefore another instance, entirely complementary to the work of Ravi (2019), where the performance of the approximate model is linked back in some way to that of the full model, although a joint treatment has not been attempted.
In Ashbrock and Powell (2021), a stochastic Markov gradient descent method was introduced to learn directly in the discrete parameter space in memory-limited settings. The authors provide a convergence analysis for their optimisation algorithm, but generalisation is only demonstrated experimentally.
The general trend and focus on compressing deep neural networks (DNNs) is remarkable. However, we conjecture a more fundamental connection between approximability and generalisation that is not specific to deep networks. Contrary to the increasingly sophisticated and specialised tools being developed for DNNs, our aim here is to study the connection between approximability and generalisation from first principles. To do this, we want to ensure generalisation guarantees for learning with approximate models in general.
We also hypothesise that target concepts that have low sensitivity to approximation may represent a benign trait of learning problems in general, which would imply easier learnability of the full precision model too. To substantiate this, we shall seek learning algorithms whose generalisation ability depends on the approximability of the target concept, irrespective of the form of the learned predictor being used in the full or approximated setting.

Contributions
In the following roadmap we summarise the main contributions and findings of this paper:
• We define a notion of approximability of a predictor, which quantifies the average sensitivity of its predictions to the action of a given approximation operator (Sect. 2.1). This quantity will feature heavily in our generalisation bounds.
• In Sect. 2.2 we show that low-sensitivity target functions may require less labelled training data, provided we have access to an independent unlabelled set of sufficient size (Theorem 1). This sets the stage for approximability to be viewed as a benign trait for learning.
• In Sect. 2.3 we develop a practical theory, showing that a constrained empirical risk minimisation algorithm with a modified loss function, which enforces approximability up to a given threshold, learns a predictor that is guaranteed to generalise well both in its full precision and its approximate forms (Proposition 2.4). Furthermore, we construct an objective function that also implicitly optimises the trade-off managed by the sensitivity threshold (Proposition 2.5). These results then give rise to a learning algorithm that is able to take advantage of additional unlabelled data without the requirement for it to be independent from the labelled set (Theorem 2).
• For learning a good approximate predictor, we also give two variants of our algorithm that allow the user to control the above trade-off directly (Theorem 3, Remark 2.6). This may be useful in certain settings, for example when low memory requirements prevail over prediction accuracy.
• Section 3 is devoted to studying our unlabelled data requirements. We show that, while the worst-case unlabelled sample size requirement is necessarily large (Proposition 3.1), natural examples of structure may arise from the data source interacting with the model, which may reduce, or may even eliminate, the requirement for an additional unlabelled sample (Propositions 3.3, 3.4). This analysis is independent of the hypothesis class employed, and leads to some general conditions under which sensitivity estimation enjoys favourable convergence (Theorem 4). In addition, we also point out that the structural restrictions of the hypothesis class can in themselves bring further insights; in particular, for generalised linear models, the weight sensitivity turns out to be sufficient for dimension-independent learning (Proposition 3.5).
• We discuss implications of our theoretical results for real problems, including binarisation with depth-independent error bounds and on-device deep network classification, in Sect. 4.
Throughout the exposition of the main sections, we only consider deterministic approximation operators, keeping the reasoning and the formalism simple, and rooted in first principles. We discuss extensions in Sect. 4, including the use of stochastic approximation operators.

Related work
We have already highlighted two existing studies that considered both sides of model-compression, namely the approximate predictor as well as the full predictor. Below we further discuss these in the light of our aims, approach, and findings, along with existing works that relate to ours in terms of either high-level ideas or technical aspects. In a similar spirit to Ravi (2019), our inquiry concerns simultaneously both the approximate model and the full precision model. However, contrary to the empirical approach taken in Ravi (2019), where the heuristic nature of the algorithms makes a theoretical understanding somewhat elusive, our approach is analytic. We employ Rademacher complexity analysis of the generalisation error (Bartlett & Mendelson, 2002) to give algorithm-independent uniform bounds on the generalisation for both approximate and approximable function classes. The uniform nature of these bounds justifies algorithms that minimise them. Therefore, our algorithms come with guarantees of good generalisation. Our framework is general, and can be used to analyse approximability and generalisation in tandem for any PAC-learnable machine learning problem.
Our findings are consistent with those found in Suzuki et al. (2020a), with a difference in the approach, resulting in a different and more general angle. In particular, their focus is on translating already known bounds on compressed neural networks to the full uncompressed class. In contrast, we focus on showing that having good approximability (i.e. low sensitivity to approximation) improves generalisation bounds in PAC-learnable classes. In addition, we pursue a joint treatment of learning both the approximate and the full predictor simultaneously.
The works in Arora et al. (2018) and Zhou et al. (2019), based on the idea of compression and resilience to noise, are also somewhat related to our work on a high level. However, in both Arora et al. (2018) and Zhou et al. (2019) the generalisation bounds are for the compressed model only; whereas our treatment provides both sides of the coin: algorithms that learn a predictor that generalises both in its full precision and its approximate form. In Arora et al. (2018), the focus is on bounding the classification error of the compressed predictor with the γ-margin loss (with γ > 0) of the full model for multi-class classification. This corresponds to our general bounded Lipschitz loss function. Moreover, in Zhou et al. (2019) a PAC-Bayes approach is taken, so numerical tightness comes from data-dependent quantities in the bound that do not necessarily identify or shed light on the structural traits of the problem responsible for good generalisation. In contrast, by employing Rademacher analysis we are able to highlight structural properties responsible for low complexity and good generalisation, so our approach and findings are complementary to these works.
Our starting point in Sect. 2.2 is the semi-supervised framework of Bǎlcan and Blum (2010), where our approximability, or sensitivity of functions to approximation, plays the role of an unlabelled error, and we replace VC entropy with Rademacher complexity to facilitate the use of our bounds outside the classification setting. However, from Sect. 2.3 onward we depart from this framework in favour of simpler and more straightforwardly implementable bounds that fit our specific goals at the expense of a negligible additive term. In return, we obtain some advantages: (1) for our purposes, the unlabelled data need not be independent from the labelled set, (2) the sensitivity threshold is optimised implicitly and automatically by our algorithm without appeal to structural risk minimisation, and (3) we are able to study structural regularities that reduce or even eliminate the need for unlabelled data, which was not attempted in the previous work.

Notations and preliminaries
Consider the input domain X ⊆ ℝ^d, where d denotes the dimensionality of the feature representation, and the output domain Y ⊆ ℝ. Let m ∈ ℕ and consider a sample S ∈ (X × Y)^m of size m drawn i.i.d. from an unknown distribution D. Let H be the hypothesis class; this is a set of functions mapping from X to Y. We consider a loss function ℓ : Y × Y → ℝ_+. Then we define the generalisation and empirical error of a function f ∈ H as

\[ \operatorname{err}(f) := \mathbb{E}_{(x,y)\sim D}\,\ell(f(x), y) \qquad \text{and} \qquad \widehat{\operatorname{err}}(f) := \frac{1}{m}\sum_{i=1}^m \ell(f(x_i), y_i). \]

The best function in the class will be denoted as f* := argmin_{f ∈ H} err(f).
We let H_A be the set of approximate functions from X to Y. Note that H_A need not be a subset of H. We define an approximation operator A : H → H_A, which maps a hypothesis to its approximation. Here A is considered to be deterministic; the extension to stochastic approximation operators is discussed later in Sect. 4.

Definition 2.1 (Approximation-sensitivity of a function) Fix p ≥ 1. Given a sample S ∈ X^m of size m drawn i.i.d. from the marginal distribution D_x, we define the true and empirical sensitivity of f ∈ H as

\[ D_A^p(f) := \Big( \mathbb{E}_{x \sim D_x} \big\lvert f(x) - (Af)(x) \big\rvert^p \Big)^{1/p} \qquad \text{and} \qquad \widehat{D}_A^p(f) := \Big( \frac{1}{m} \sum_{i=1}^m \big\lvert f(x_i) - (Af)(x_i) \big\rvert^p \Big)^{1/p}. \]

The choice of p-norm will be left to the user in our forthcoming bounds. Formally, it is sufficient to work with p = 1, as by Jensen's inequality, for all p ≥ 1, we have D_A^1(f) ≤ D_A^p(f) for all f ∈ H. So the forthcoming bounds will be tightest with the choice p = 1. However, sometimes the user might like to specify a constraint on the sensitivity of functions in terms of the more familiar Euclidean norm (p = 2), or some other member of the family of p-norms. Our results apply to any specification of p, so we will state results for general p-norms. An example where p = 2 is advantageous will be encountered later in Theorem 4. When the choice of p is arbitrary, we may omit the upper index in our notation.

The approximating class H_A is typically chosen to be much smaller than the original class H, implying a reduced complexity term in our generalisation bounds, at the expense of a larger empirical error and the appearance of an additional sensitivity term D_A(f). We can think of H_A as a compressed model class whose elements occupy less memory, yet remain expressive enough to represent the essence of H. Examples include quantisation and other model-compression schemes. The granularity of approximation that we can afford is considered to be fixed; in memory-constrained settings it is dictated by the available hardware.
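For concreteness, the empirical sensitivity of Definition 2.1 can be computed directly from an unlabelled sample. The following sketch evaluates it for a linear predictor under an illustrative weight-quantisation operator (the quantiser and all names here are assumptions for the example, not a scheme fixed by the paper):

```python
import numpy as np

def empirical_sensitivity(f, Af, X: np.ndarray, p: float = 1.0) -> float:
    """Empirical p-sensitivity: ((1/m) * sum_i |f(x_i) - Af(x_i)|^p)^(1/p)."""
    diffs = np.abs(f(X) - Af(X))
    return float(np.mean(diffs ** p) ** (1.0 / p))

# Illustrative setup: a linear predictor and its weight-quantised version.
rng = np.random.default_rng(0)
w = rng.normal(size=5)
w_q = np.round(w * 4) / 4                 # quantise weights to a 0.25 grid
f = lambda X: X @ w                       # full-precision predictor
Af = lambda X: X @ w_q                    # its approximation
X_unlab = rng.normal(size=(1000, 5))      # unlabelled sample from D_x
s1 = empirical_sensitivity(f, Af, X_unlab, p=1)
s2 = empirical_sensitivity(f, Af, X_unlab, p=2)
print(s1, s2)
```

Consistent with the Jensen argument above, the p = 1 value never exceeds the p = 2 value on the same sample.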
We now define the sensitivity-restricted hypothesis classes

\[ \mathcal{H}_t := \{ f \in \mathcal{H} : D_A(f) \le t \}, \qquad t \ge 0. \]

We also define the class of sensitivities to be

\[ D_A(\mathcal{H}) := \big\{ x \mapsto \lvert f(x) - (Af)(x) \rvert : f \in \mathcal{H} \big\}. \]

We begin by stating the assumptions that we employ throughout the remainder of the paper. The first assumption is that the loss function is bounded and Lipschitz. This allows us to invoke the theory of Rademacher complexity, as well as to make the connection between the generalisation error and the sensitivity of a function.
Recall that, for a sample S of size m, the empirical Rademacher complexity of the class H is defined as

\[ \widehat{\mathcal{R}}_S(\mathcal{H}) := \mathbb{E}_{\sigma}\left[ \sup_{f \in \mathcal{H}} \frac{1}{m} \sum_{i=1}^m \sigma_i f(x_i) \right], \]

where σ ∈ {−1, 1}^m is a vector of Rademacher variables, i.e. distributed uniformly on {−1, 1}^m. The Rademacher complexity is \( \mathcal{R}(\mathcal{H}) := \mathbb{E}_{S \sim D^m}\,\widehat{\mathcal{R}}_S(\mathcal{H}) \). A classic result (Bartlett & Mendelson, 2002, Theorem 8; see also Mohri et al., 2018, Lemma 3.3) shows that the generalisation gap scales with the Rademacher complexity; that is, we have with probability at least 1 − δ that, for all f ∈ H,

\[ \operatorname{err}(f) \le \widehat{\operatorname{err}}(f) + 2\widehat{\mathcal{R}}_S(\ell \circ \mathcal{H}) + 3\sqrt{\frac{\ln(2/\delta)}{2m}}. \]

We make two assumptions that let us leverage the theory of Rademacher complexities. The first one is standard.
Assumption 1 ℓ : Y × Y → ℝ_+ is a bounded and L-Lipschitz loss function. That is, there exist constants B, L > 0 such that ℓ(f(x), y) ≤ B and |ℓ(f(x), y) − ℓ(f(z), y)| ≤ L|f(x) − f(z)| for all f ∈ H ∪ H_A, x, z ∈ X, y ∈ Y. By re-scaling we may assume without loss of generality that B = 1.
Assumption 1 lets us bound the empirical Rademacher complexity of the loss class ℓ ∘ H by that of H using Talagrand's contraction lemma (Mohri et al., 2018). Classic examples of Lipschitz loss functions include the (clipped) hinge loss and the logistic loss (L = 1 for both). The 0-1 loss for ±1-valued classifiers also satisfies Assumption 1 (L = 1/2), and indeed we have \( \widehat{\mathcal{R}}_S(\ell_{01} \circ \mathcal{H}) = \frac{1}{2}\widehat{\mathcal{R}}_S(\mathcal{H}) \) by Mohri et al. (2018), Lemma 3.4.
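For a finite hypothesis class, the empirical Rademacher complexity above can be approximated by Monte Carlo over Rademacher draws, since the supremum becomes a maximum over finitely many prediction vectors. This is a sketch for intuition only; for rich classes the inner maximisation is itself a hard optimisation problem, as discussed later in the paper:

```python
import numpy as np

def empirical_rademacher(predictions: np.ndarray,
                         n_draws: int = 2000, seed: int = 0) -> float:
    """Monte Carlo estimate of the empirical Rademacher complexity of a
    *finite* class, given a (num_functions, m) matrix of the functions'
    predictions on the sample: E_sigma[ max_f (1/m) sum_i sigma_i f(x_i) ]."""
    rng = np.random.default_rng(seed)
    n_funcs, m = predictions.shape
    total = 0.0
    for _ in range(n_draws):
        sigma = rng.choice([-1.0, 1.0], size=m)   # one Rademacher draw
        total += np.max(predictions @ sigma) / m  # sup over the finite class
    return total / n_draws

# Tiny finite class of three +/-1-valued classifiers on m = 50 points.
rng = np.random.default_rng(1)
preds = rng.choice([-1.0, 1.0], size=(3, 50))
r = empirical_rademacher(preds)
print(r)
```

For ±1-valued predictions the estimate lies in [0, 1], and it shrinks as the sample size m grows relative to the size of the class.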
The second assumption we make is the uniform boundedness of the sensitivities. This will let us extend Rademacher analysis to the class of sensitivities D A (H) , which then allows us to shift the complexity terms from the full models to the approximate models.

Assumption 2
The set of sensitivities, D_A(H), is uniformly bounded. That is, there exists a constant C > 0 such that |f(x) − (Af)(x)| ≤ C for all f ∈ H and all x ∈ X.

Assumption 2 is weaker than assuming that the functions in H and H_A are bounded. The latter is often assumed in analyses, either by constraining the norms of parameters and taking X to be bounded, or by passing linear outputs through a bounded nonlinearity (for instance, a sigmoidal function, or a threshold function) in the case of classification.
We start by giving a lemma that compares the true and empirical sensitivity. This is where our estimates for the size of the unlabelled sample are derived. We explore this topic further in Sect. 3.

Lemma 2.2
With probability at least 1 − δ we have

\[ D_A(f) \le \widehat{D}_A(f) + 2\widehat{\mathcal{R}}_S(D_A(\mathcal{H})) + 3C\sqrt{\frac{\ln(2/\delta)}{2m}} \]

for all f ∈ H.

Learning of low approximation-sensitive predictors
Learning in high dimensional settings or complex model classes requires enormous training sets in general, or some fairly specific prior knowledge about the problem structure. However, many real-world problems possess benign traits that are hard to know in advance. Inspired by the practical success of approximate algorithms created by various model-compression methods, in this section we investigate approximability as a potential benign trait for learning, by quantifying its effect on the generalisation error. More precisely, we elaborate on our intuition that, if a relatively complex target concept admits a simpler approximation that makes little alteration to its predictive behaviour, then it should be learnable from smaller training set sizes. The rationale is easy to see, as follows. Fix some approximation operator A and an associated sensitivity threshold t ≥ 0. Then by the classic Rademacher bound (Bartlett & Mendelson, 2002, Theorem 8), for any δ > 0, with probability at least 1 − δ over the draw of the training sample, we have for all f ∈ H_t that

\[ \operatorname{err}(f) \le \widehat{\operatorname{err}}(f) + 2L\widehat{\mathcal{R}}_S(\mathcal{H}_t) + 3\sqrt{\frac{\ln(2/\delta)}{2m}}. \tag{1} \]

To learn such a function, we consider a hypothetical Empirical Risk Minimiser (ERM) in the restricted class H_t; that is, we define the following minimiser:

\[ \hat f := \operatorname*{argmin}_{f \in \mathcal{H}_t} \widehat{\operatorname{err}}(f). \tag{2} \]

Applying (1) to the function \( \hat f \) from (2) with a failure probability of at most 2δ/3, we note that \( \widehat{\operatorname{err}}(\hat f) \le \widehat{\operatorname{err}}(f^*_t) \) by the definition of \( \hat f \), and further note that

\[ \widehat{\operatorname{err}}(f^*_t) \le \operatorname{err}(f^*_t) + \sqrt{\frac{\ln(3/\delta)}{2m}} \]

with probability at least 1 − δ/3 by Hoeffding's inequality. Combining these with the use of a union bound yields that, with probability at least 1 − δ, \( \hat f \) satisfies

\[ \operatorname{err}(\hat f) \le \operatorname{err}(f^*_t) + 2L\widehat{\mathcal{R}}_S(\mathcal{H}_t) + 4\sqrt{\frac{\ln(3/\delta)}{2m}}. \tag{3} \]

Clearly, since H_t ⊆ H, by a property of Rademacher complexities (Bartlett & Mendelson, 2002, Theorem 12, part 1) we have \( \widehat{\mathcal{R}}_S(\mathcal{H}_t) \le \widehat{\mathcal{R}}_S(\mathcal{H}) \). So, whenever the concept we try to learn is actually in H_t (i.e. a low-sensitivity target function) then, depending on t ≥ 0, we can have a tighter guarantee compared to that of an empirical risk minimiser over the larger class H.
Unfortunately, the minimisation in (2) is not implementable, because the specification of the function class H t depends on the sensitivity function D A , which in turn depends on the true marginal distribution of the input data. It is often much easier to specify a larger function class H independent of the distribution, but this would ignore the sensitivity property and consequently lose out on the tighter guarantee.
The first approach that we consider will be based on observing that the sensitivity function only depends on inputs and is independent of the target values. Hence, we can make use of additional unlabelled data to estimate it, which is typically more widely available in applications. To this end, our first line of attack is similar in flavour to a classic semi-supervised framework proposed in Bǎlcan and Blum (2010). In that work, the authors augmented the standard PAC model with a notion of compatibility to encode a prior belief about the target function in terms of an expectation over the marginal distribution. As a first approach, we will instantiate their compatibility notion with our notion of approximation-sensitivity. Similarly to Bǎlcan and Blum (2010), this approach also allows us to use structural risk minimisation (SRM) to adapt the threshold parameter t. Therefore, balancing between the reduced complexity of the class and the potentially increased error of the best function on this reduced class yields the following result.

Theorem 1 Fix an approximation operator A. Suppose we have an independent unlabelled sample S′_x of size m_u drawn i.i.d. from the marginal distribution D_x, in addition to the labelled sample S of size m. Let (t_k)_{k∈ℕ} be a sequence of sensitivity thresholds with associated weights (w_k)_{k∈ℕ}, let ε_u denote the uniform estimation error of the sensitivities on S′_x, and let \( \widehat{\mathcal{H}}_t := \{ f \in \mathcal{H} : \widehat{D}_A(f) \le t \} \) denote the empirically restricted classes, with the empirical sensitivities computed on S′_x.
Then, for all k ∈ ℕ and all f ∈ \( \widehat{\mathcal{H}}_{t_k+\varepsilon_u} \), with probability at least 1 − δ, we have:

\[ \operatorname{err}(f) \le \widehat{\operatorname{err}}(f) + 2L\widehat{\mathcal{R}}_S(\widehat{\mathcal{H}}_{t_k+\varepsilon_u}) + 3\sqrt{\frac{\ln(4/(\delta w_k))}{2m}}, \tag{4} \]

and consider the following algorithm:

\[ \hat f := \operatorname*{argmin}_{k \in \mathbb{N},\; f \in \widehat{\mathcal{H}}_{t_k+\varepsilon_u}} \left\{ \widehat{\operatorname{err}}(f) + 2L\widehat{\mathcal{R}}_S(\widehat{\mathcal{H}}_{t_k+\varepsilon_u}) + 3\sqrt{\frac{\ln(4/(\delta w_k))}{2m}} \right\}. \tag{5} \]

Then, with probability at least 1 − δ we have

\[ \operatorname{err}(\hat f) \le \min_{k \in \mathbb{N}} \left\{ \operatorname{err}(f^*_{t_k}) + 2L\widehat{\mathcal{R}}_S(\widehat{\mathcal{H}}_{t_k+\varepsilon_u}) + 4\sqrt{\frac{\ln(6/(\delta w_k))}{2m}} \right\}. \tag{6} \]

Before giving the proof, we make a few comments. Firstly, we see that, with a large enough m_u (i.e. sufficient additional unlabelled data), we have by Lemma 2.2, with probability 1 − δ/2, that the magnitude of ε_u can be made arbitrarily small; this is the only role of S′_x. A detailed account of the possible ranges of magnitude of this quantity will be discussed in Sect. 3, along with some natural factors that make it small. For now, let us point out that, by construction, whenever both H and H_A are PAC-learnable, and without further conditions, the complexity of our sensitivity class is determined by the complexities of H and H_A [see discussion around (29)]. By contrast, the general setting of semi-supervised learning in Bǎlcan and Blum (2010) allows arbitrarily complex compatibility classes, which, in a worst-case scenario, can backfire and blow up the required labelled sample size (Bǎlcan & Blum, 2010, Theorem 22). The objective of the minimisation algorithm in (5) follows the idea of minimising the uniform bound (4). It finds a good predictor along with the appropriate subclass of H to which it belongs. The sequence of sensitivity threshold candidates (t_k)_{k∈ℕ}, and the associated weights (w_k)_{k∈ℕ}, with w_k ≥ 0 for all k ∈ ℕ and ∑_{k∈ℕ} w_k ≤ 1, must be chosen before seeing any data (for instance, w_k := 2^{−k}), with w_k representing an a priori belief in a particular t_k.
As a further observation, the function classes \( \widehat{\mathcal{H}}_{t_k+\varepsilon_u} \) that feature in the high probability guarantee (6) are dependent on the unlabelled data. This dependence can be removed if desired, by noting that, with high probability, \( \widehat{\mathcal{H}}_{t_k+\varepsilon_u} \subseteq \mathcal{H}_{t_k+2\varepsilon_u} \). In fact, the failure probability of this step is already accounted for in the proof of (6), so after this replacement (6) holds with the same probability as stated.
Lastly, but most importantly, \( \widehat{\mathcal{R}}_S(\widehat{\mathcal{H}}_{t_k+\varepsilon_u}) \le \widehat{\mathcal{R}}_S(\mathcal{H}) \), so the restricted class never has larger complexity than the full class. The extent of this reduction of complexity depends on several factors, even for specific approximation choices, including the sensitivity of the unknown target function, the magnitude of ε_u and the threshold estimate t_k + ε_u, the original class H, and the data distribution. For the sake of intuition, suppose that availability of unlabelled data is not a barrier, so the potential gain comes down to the interaction between the unknown data distribution and the unknown target function. A low sensitivity asserts that, for the particular approximation A, only a small mass fraction of the input points is affected by subjecting a predictor to A. If the target function satisfies this, and the marginal distribution is such that most functions of H do not, then H_t (i.e. the remaining set of functions that have low sensitivity) will be small. Let us consider some informal examples.
Example 1. We can think of a model approximation as a perturbation of the model. In classification, this induces a perturbation of the decision boundary. If the true classes are well separated by a large margin, then there is leeway for such perturbation. Hence, just as in the framework of Bǎlcan and Blum (2010), dense classes separated by a large margin will rule out all functions that cut across dense regions, leaving only a handful, especially if H was a simple class, such as linear predictors.
Furthermore, in the extreme case of zero sensitivity we can use the simpler class H A instead of H , as in the following.
Example 2. Consider a relatively complex parametric class H , and a coarse quantisation as A. Then H A simply becomes a finite hypothesis class. If the target function is insensitive to this approximation, then it is enough to work with H A and have the guarantees enjoyed by the finite class.
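To see why the finite class H_A in Example 2 is attractive, note that if H is parameterised by d real weights and A quantises each weight to one of k levels, then |H_A| ≤ k^d, and the standard finite-class bound (a textbook fact, stated here under the usual i.i.d. assumptions and a loss bounded in [0, 1]) gives, with probability at least 1 − δ, for every g ∈ H_A:

```latex
\[
  \operatorname{err}(g) \;\le\; \widehat{\operatorname{err}}(g)
    \;+\; \sqrt{\frac{\ln\lvert\mathcal{H}_A\rvert + \ln(2/\delta)}{2m}}
  \;\le\; \widehat{\operatorname{err}}(g)
    \;+\; \sqrt{\frac{d\ln k + \ln(2/\delta)}{2m}}.
\]
```

Notably, this right-hand side involves only the coarse grid size k and the parameter count d, with no dependence on the complexity of the original class H.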
Example 3. Suppose the functions in H have a large number of parameters so H has high complexity, but the data distribution is supported in a simple restricted set that makes much of the representational capacity of H remain dormant. Then the effect of model-compression will spread out among both relevant and irrelevant parameters, making less of a noticeable difference to the function values.
While these examples are both simplistic and informal, ample empirical evidence in the literature demonstrates that many model approximation methods do work surprisingly well in practice. In the next section we aim to develop an approach that helps to untangle and shed more light onto the various contributing factors that influence the error when learning involves approximate predictors. But first we prove Theorem 1.

Proof of Theorem 1
For a fixed t ≥ 0, by the definition of ε_u, with probability 1 − δ/2, we have that f ∈ H_t implies f ∈ \( \widehat{\mathcal{H}}_{t+\varepsilon_u} \). We shall pursue SRM by exploiting the independent unlabelled sample to define a nested sequence of function classes \( \widehat{\mathcal{H}}_{t_1+\varepsilon_u} \subseteq \widehat{\mathcal{H}}_{t_2+\varepsilon_u} \subseteq \cdots \). These classes depend on the unlabelled sample, but not on the labelled sample. For any fixed k ∈ ℕ, the classic Rademacher bound (Bartlett & Mendelson, 2002, Theorem 8) implies, with probability at least 1 − δw_k/2, that for all f ∈ \( \widehat{\mathcal{H}}_{t_k+\varepsilon_u} \),

\[ \operatorname{err}(f) \le \widehat{\operatorname{err}}(f) + 2L\widehat{\mathcal{R}}_S(\widehat{\mathcal{H}}_{t_k+\varepsilon_u}) + 3\sqrt{\frac{\ln(4/(\delta w_k))}{2m}}. \]

Since k ∈ ℕ is arbitrary, and the non-negative weights satisfy ∑_{k∈ℕ} w_k ≤ 1, we take a union bound and it follows with probability at least 1 − δ/2 that the above holds uniformly for all k ∈ ℕ and all f ∈ \( \widehat{\mathcal{H}}_{t_k+\varepsilon_u} \). A final union bound with the event defining ε_u proves (4).
To obtain (6) for \( \hat f \) defined in (5), we apply (4) for all k ∈ ℕ:

\[ \operatorname{err}(\hat f) \le \widehat{\operatorname{err}}(\hat f) + 2L\widehat{\mathcal{R}}_S(\widehat{\mathcal{H}}_{t_{\hat k}+\varepsilon_u}) + 3\sqrt{\frac{\ln(4/(\delta w_{\hat k}))}{2m}} \tag{7} \]

\[ \le \widehat{\operatorname{err}}(f^*_{t_k}) + 2L\widehat{\mathcal{R}}_S(\widehat{\mathcal{H}}_{t_k+\varepsilon_u}) + 3\sqrt{\frac{\ln(4/(\delta w_k))}{2m}} \quad \text{for all } k \in \mathbb{N}. \tag{8} \]

In the last inequality we used the definition of \( \hat f \), noting that the right hand side of (7) is minimised by \( \hat f \). In addition, by Hoeffding's inequality, we also have

\[ \widehat{\operatorname{err}}(f^*_{t_k}) \le \operatorname{err}(f^*_{t_k}) + \sqrt{\frac{\ln(6/(\delta w_k))}{2m}} \]

with probability at least 1 − δw_k/6. Combining with (8) and using the union bound, it follows with probability at least 1 − δ that

\[ \operatorname{err}(\hat f) \le \operatorname{err}(f^*_{t_k}) + 2L\widehat{\mathcal{R}}_S(\widehat{\mathcal{H}}_{t_k+\varepsilon_u}) + 4\sqrt{\frac{\ln(6/(\delta w_k))}{2m}} \]

for all k ∈ ℕ. Finally, choosing k to minimise the bound concludes the proof. ◻

A joint approach to sensitivity and generalisation
The conceptually straightforward approach of the previous subsection implies that a target concept that is robust to the effects of approximation by a low-complexity predictor may require fewer labelled examples to be learned. In particular, the regularised ERM algorithm defined in (5) can accomplish this learning task, the regulariser being the empirical Rademacher complexity of the restricted class containing the returned predictor, along with a penalty for the associated threshold index. In effect, this algorithm adaptively trims the original function class to the relevant subset of low-sensitivity predictors, and consequently returns a low-sensitivity element of an otherwise potentially much larger function class.
The appeal of this finding lies not only in serving as a possible explanation of what makes some instances of a learning problem easier than others. By the low-sensitivity property, such a predictor should also be usable in its approximated form in memory-constrained settings. Indeed, for any t ≥ 0, if f ∈ H_t, then by Lemma 2.3 we have

\[ \operatorname{err}(Af) \le \operatorname{err}(f) + Lt. \]

In other words, for a predictor with low approximation-sensitivity, using Af instead of f will only incur an additive error of up to Lt. This additional term is the price to pay for predicting with the simplified function Af instead of the full-precision function f; it will not improve with more data, but it is small precisely when the target function we try to learn has low sensitivity.
In this section we are interested in a more practical formulation of learning an approximate predictor and a full precision predictor in tandem. The approach presented so far, beyond its conceptual elegance, has some practical drawbacks: (1) it requires an additional independent unlabelled data set; and (2) it requires computing the empirical Rademacher complexity of the restricted class. Computing empirical Rademacher complexities is typically a hard combinatorial optimisation problem for interesting hypothesis classes (Bartlett & Mendelson, 2002), as it amounts to computing an empirical risk minimiser under the 0-1 loss.
To get around these limitations, we shall take a different approach. We start by modifying the loss function to explicitly encode the fact that we are interested in a good low-complexity approximate predictor. More precisely, for a given threshold value t ≥ 0, we start by defining the minimiser of the following constrained objective:

\[ f_t := \operatorname*{argmin}_{f \in \mathcal{H}_t} \widehat{\operatorname{err}}(Af) = \operatorname*{argmin}_{f \in \mathcal{H}:\, D_A(f) \le t} \frac{1}{m}\sum_{i=1}^m \ell\big((Af)(x_i), y_i\big). \tag{10} \]

This is defined for the purpose of theoretical analysis; it is not computable, since checking f ∈ H_t would require knowledge of the data distribution.
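A brute-force sketch of this kind of sensitivity-constrained minimisation, over a finite candidate set and with the population sensitivity replaced by its empirical estimate on unlabelled data (all helper names, the quantiser, and the candidate grid are illustrative assumptions, not the paper's algorithm):

```python
import numpy as np

def sensitivity_constrained_erm(candidates, loss, X, y, X_unlab, quantise, t):
    """Constrained ERM sketch: among candidate weight vectors, minimise
    the empirical error of the *approximate* predictor, subject to the
    empirical sensitivity (p = 1) w.r.t. the quantiser being <= t."""
    best_w, best_err = None, np.inf
    for w in candidates:
        w_q = quantise(w)
        sens = np.mean(np.abs(X_unlab @ w - X_unlab @ w_q))  # empirical D_A
        if sens > t:
            continue                        # outside the restricted class
        err = np.mean(loss(X @ w_q, y))     # modified loss: evaluate Af
        if err < best_err:
            best_w, best_err = w, err
    return best_w, best_err

rng = np.random.default_rng(0)
d, m = 3, 200
w_true = np.array([1.0, -0.5, 0.25])        # already on the 0.25 grid
X = rng.normal(size=(m, d)); y = X @ w_true
X_unlab = rng.normal(size=(1000, d))        # unlabelled sample for sensitivity
quantise = lambda w: np.round(w * 4) / 4    # 0.25-grid weight quantiser
candidates = [w_true + rng.normal(scale=s, size=d) for s in (0.0, 0.1, 0.3)]
w_hat, e = sensitivity_constrained_erm(
    candidates, lambda pred, target: (pred - target) ** 2,
    X, y, X_unlab, quantise, t=0.2)
```

In practice the hard constraint would be replaced by a differentiable penalty so that gradient methods apply; the brute-force version only makes the role of the threshold t explicit.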
The following result shows that the function f t in (10) achieves two different functionalities simultaneously, as it not only produces a good approximate predictor with quantified error guarantee, including the price to pay for the approximation, but f t itself is a good predictor whenever the problem admits an approximable target function.

Proposition 2.4 Fix an approximation operator A and a sensitivity threshold t ≥ 0.
Then, with probability at least 1 − δ, the function f_t from (10) satisfies all of the following simultaneously:

\[ \operatorname{err}(Af_t) \le \min\{\operatorname{err}(Af^*_t),\, \operatorname{err}(g^*_t)\} + 2L\widehat{\mathcal{R}}_S(\mathcal{H}_A) + 4\sqrt{\frac{\ln(9/\delta)}{2m}}, \tag{11} \]

\[ \operatorname{err}(Af_t) \le \operatorname{err}(f^*_t) + Lt + 2L\widehat{\mathcal{R}}_S(\mathcal{H}_A) + 4\sqrt{\frac{\ln(9/\delta)}{2m}}, \tag{12} \]

\[ \operatorname{err}(f_t) \le \operatorname{err}(f^*_t) + 2Lt + 2L\widehat{\mathcal{R}}_S(\mathcal{H}_A) + 4\sqrt{\frac{\ln(9/\delta)}{2m}}. \tag{13} \]

We note that the additive term Lt vanishes as t → 0; however, as t decreases, the choice of predictors in H_t shrinks too, and so err(f*_t) would be expected to increase. That is, the choice of t balances the trade-off between the sensitivity term Lt and the error term err(f*_t). Proposition 2.4 allows us to view learning and model-compression as two sides of the same coin. Eq. (13) suggests that low-sensitivity target functions are easier to learn, and a constrained ERM algorithm is able to learn them up to a constant factor of their sensitivity.
Indeed, suppose f* = f*_t, i.e. the target function has sensitivity below t. Then the error of f_t is guaranteed to be much smaller than the worst-case error of finding f* in the whole class H. At the same time, (12) provides a guarantee for the approximate predictor Af_t, which can potentially be deployed in low-memory settings, at the cost of an additive term proportional to t, i.e. to the extent of approximability. Remarkably, both of these seemingly different goals are accomplished by the same function f_t defined in (10). Moreover, (11) gives guarantees for Af_t relative to both Af*_t (the approximation of the best predictor in H with sensitivity at most t) and g*_t (the best approximate predictor in AH_t).

Proof of Proposition 2.4 By Rademacher bounds (Bartlett & Mendelson, 2002, Theorem 8) and Talagrand's contraction lemma (Mohri et al., 2018, Lemma 5.7), we have with probability at least 1 − 2δ/9 that the corresponding uniform deviation bounds hold. Using this together with Hoeffding's inequality, with probability 1 − δ/9 each, both hold separately. Therefore, by the union bound and the fact that R̂_S(AH_t) ≤ R̂_S(H_A), we have with probability at least 1 − 4δ/9 that (11) holds. Similarly, since êrr(Af_t) ≤ êrr(Af*_t) and by Lemma 2.3, we have with probability at least 1 − δ/9 that (14) holds. Combining the above three inequalities and the fact that R̂_S(AH_t) ≤ R̂_S(H_A), we have with probability at least 1 − 2δ/9 that (12) holds. The second part follows by using Lemma 2.3, Jensen's inequality and f_t ∈ H_t. Taking the union bound for each of the equations completes the proof. ◻
Next, we show that in this formulation we can relax the fixed parameter t that constrains the function class, without the use of SRM. To avoid clutter, here we suppose that the functional form of f ↦ D_A(f) is known; it can be estimated from an independent unlabelled data set as in the previous section.
To this end, consider the minimiser of the following (hypothetical) objective function, used for theoretical analysis.
Here the first term is our modified loss function as before, and the second term acts as a regulariser that implicitly constrains the function class. The following result shows that f from (18) behaves like the previous minimiser from (10), while also automatically adapting the class-constraining sensitivity threshold t.

Proposition 2.5 Fix an approximation operator A and δ ∈ (0, 1).
For the function f defined in (18), with probability at least 1 − δ, we have both of the following simultaneously.
Proof By the definition of f and the Hoeffding bound, we obtain with probability at least 1 − δ/8 that (22) holds. Then, by Lemma 2.3 and the definition of g, and substituting into (23), we obtain the corresponding bound with probability at least 1 − δ/8. By the Talagrand contraction lemma (Mohri et al., 2018), combining with (22) and then applying a union bound, we have the claimed inequality with probability at least 1 − δ/2. Noting that D_A(f*_t) ≤ t completes the proof of (20). Eq. (19) also follows, with probability at least 1 − δ/2, since êrr(Af) is upper bounded by the right-hand side of (21), after adding the non-negative term D_A(f). ◻
From Proposition 2.5 we see again that, for any fixed approximation operator A such that H_A has smaller complexity than H, if the target function has low sensitivity (i.e. D_A(f*) is small), then it is learnable from fewer labels than an arbitrary target from H would require. Of course, there may be learning problems where f* has low error but high sensitivity for the pre-defined A; the minimiser in (18) automatically balances generalisation error against sensitivity.
It is now straightforward to use an estimate of D A (f ) , giving rise to a learning algorithm that is an implementable version of the construct analysed in Proposition 2.5.
Theorem 2 (Joint learning of full and approximate predictors) Fix an approximation operator A, and consider the following algorithm.
Then, with probability at least 1 − δ, the function f satisfies both of the following bounds.
Proof This follows by the same steps as the proof of Proposition 2.5, combined with Lemma 2.2. ◻

Let us compare Theorem 2 with Theorem 1. With sufficient unlabelled data, the sensitivity-estimation error ε_u can be made arbitrarily small in both theorems. However, in Theorem 1 the unlabelled sample for estimating the sensitivity must be independent of the labelled sample; this is because in that construction the function class depends on the unlabelled data through the sensitivity estimate. By contrast, in Theorem 2 we have an implicit adaptation of t, so the function class does not depend on the unlabelled sample. This enables us to reuse the labelled points also for estimating the sensitivity, and any additional unlabelled data just contributes to further shrinking ε_u. Hence, in Theorem 2, whenever ε_u is already small enough using the m training points of S, we do not require any additional unlabelled points at all. In later sections we will see natural conditions where this is easily the case.
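As an illustration of the algorithm's spirit (not the paper's exact construction), the sketch below instantiates a Theorem-2-style objective, the empirical error of the approximate predictor plus an estimated sensitivity, for a toy linear class with a hypothetical sign-quantisation operator; the sensitivity estimate as a disagreement rate on unlabelled points is our assumed instantiation:

```python
import numpy as np

rng = np.random.default_rng(0)

def quantise(w):
    # Hypothetical approximation operator A: BinaryConnect-style
    # binarisation, w -> sign(w) * mean(|w|).
    return np.sign(w) * np.abs(w).mean()

def emp_err(w, X, y):
    # Empirical 0-1 error of the linear classifier x -> sign(<w, x>).
    return float(np.mean(np.sign(X @ w) != y))

def emp_sensitivity(w, X_u):
    # Assumed instantiation of the sensitivity estimate: disagreement
    # rate between f and Af on (possibly unlabelled) points.
    return float(np.mean(np.sign(X_u @ w) != np.sign(X_u @ quantise(w))))

def joint_objective(w, X, y, X_u):
    # Theorem-2-style objective (sketch): empirical error of the
    # APPROXIMATE predictor plus the estimated sensitivity,
    # with no tuning parameter.
    return emp_err(quantise(w), X, y) + emp_sensitivity(w, X_u)

# Toy data labelled by a ground-truth linear rule.
w_star = np.array([2.0, -1.0, 0.5])
X = rng.normal(size=(200, 3))
y = np.sign(X @ w_star)
X_u = rng.normal(size=(500, 3))   # unlabelled sample, reused for the estimate

# Crude random-search "ERM" as a stand-in for a real optimiser.
cands = rng.normal(size=(2000, 3))
best = min(cands, key=lambda w: joint_objective(w, X, y, X_u))
print(joint_objective(best, X, y, X_u))   # a small combined objective
```

The returned weight vector plays both roles at once: `best` is the full-precision predictor and `quantise(best)` is the deployable approximate predictor.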
The advantage of the algorithm analysed in Theorem 1 is its statistical consistency: given enough labelled data, the generalisation error converges to that of the best predictor in the class. However, if the goal is to obtain an approximate predictor, we pay the price of an additive sensitivity term (9), and Theorem 2 shows that allowing such a term enables a much more implementation-friendly algorithm without sacrificing the essence of the theoretical guarantee on generalisation.
Comparing the algorithm from Theorem 2 with that of Theorem 1, observe the difference in the regularisation term. Regularising with the sensitivity estimate was not justified in the formulation of Theorem 1; indeed, Balcan and Blum (2010) pointed out that regularising with their general compatibility estimate was not theoretically justified, despite it being used in practice (Chapelle et al., 2006). By contrast, in the formulation of Theorem 2, we have been able to justify it within our approximability objective.

Managing the trade-off between sample error and sensitivity for the approximate predictor
The analysis of Proposition 2.5 and Theorem 2 has shown that the associated algorithm has an implicit ability to realise the optimal trade-off between the sample error of Af and the sensitivity term t, without any effort or tuning parameter from the user. However, there may be situations when a different trade-off is desired, and in such cases we want to manage this trade-off via a tuning parameter. This is especially relevant for practical applications in memory-constrained settings, where obtaining a good approximate predictor Af is the sole interest. For instance, we may only care about very low-sensitivity functions at the expense of a slightly raised error, or vice versa. Or we might like to explore multiple trade-offs, as in a multi-objective approach. Another instance is when unlabelled data is also scarce but an analytic upper bound can be derived on the sensitivity function, up to an unknown constant. Conceptually, a good way to address this sort of issue would be to take back control of the threshold parameter t using the learning algorithm in (10) (with or without estimating the sensitivity). However, the constrained optimisation formulation can be awkward to carry out in practice. Below we suggest a more user-friendly form of the algorithm, and show that its solution is close to that of (10).
For each λ ≥ 0, consider the following algorithm. Algorithms of this form, including the exploitation of unlabelled data in the regularisation term, have long been used in practice (Chapelle et al., 2006); see also van Engelen and Hoos (2020). The regularisation parameter λ balances the two terms of the objective function, and, in addition to the potential availability of prior knowledge, there is a wide range of well-established model selection methods available to set this parameter in practice.
To this end, we shall compare the error of f from algorithm (25) with that of f_t from the algorithm given in (10). The following result shows that, for any specification of λ, there is a value of t ≥ 0 such that the errors of these two predictors are close, up to additive terms that decay with the sample size.

Theorem 3 (Balancing sample error & sensitivity) Let

Proof Rearranging, and using Lemma 2.2 again, we have with probability at least 1 − δ/2 that the sample errors of the two predictors are close. Now, to prove (26), we again use Rademacher bounds (Bartlett & Mendelson, 2002, Theorem 8), each holding with probability at least 1 − δ/2, applied to H_A twice, combined with (24) and the union bound. We obtain the claim with probability 1 − δ, as required. ◻
The comments we made on Theorem 2 also apply to Theorem 3. In particular, the training points also contribute to estimating the sensitivity, unlike the approach in Theorem 1, which required a separate independent unlabelled sample.
As a further remark, let us also address the case when, instead of estimating the sensitivity from unlabelled data, we have an analytic upper bound on this function for some specific choice of function class and approximation operator, up to an unknown absolute constant. The constant will be subsumed into the tuning parameter λ. Let D̄_A(⋅) denote this bound, which does not depend on the sample. Now, for each λ ≥ 0, define the following algorithm. Furthermore, let f_t be the predictor returned by algorithm (10), and f̄_t the predictor from a version of the same algorithm (10) that replaces the unknown D_A(⋅) with D̄_A(⋅). Then f̄_t will have a guarantee of the same form as before in Proposition 2.4, where t is now a threshold on D̄_A(⋅) rather than D_A(⋅). The following remark shows that the error of f̄ is close to that of f̄_t.

Remark 2.6
For any λ > 0, there exists t > 0 such that, with probability at least 1 − δ, we have the following. Consequently, by the definition of f̄, we have the corresponding inequality; therefore êrr(Af̄) ≤ êrr(Af̄_t). Using this, we obtain the claim with probability at least 1 − δ, by applying the usual Rademacher bounds (Bartlett & Mendelson, 2002, Theorem 8) to the class H_A twice. ◻

We should note that Theorem 3 and Remark 2.6 require that λ is specified before seeing the data. However, we can use SRM to explore a countable number of values of this parameter before making the choice, at the price of a small additional error term. Specifically, take a sequence of candidate values {λ_k}_{k∈ℕ} weighted by {w_k}_{k∈ℕ} with Σ_{k∈ℕ} w_k ≤ 1. Then the same bounds hold for all λ_k, k ∈ ℕ, simultaneously, at the expense of an additional term of 3√(log(1/w_k)/(2m)).
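The SRM penalty in the remark above is easy to compute; here is a minimal sketch with an assumed geometric weighting w_k = 2^(−k), which satisfies Σ_k w_k ≤ 1:

```python
import math

def srm_penalty(k, m):
    """Additional error term for exploring a countable grid of tuning
    parameters via SRM, with weights w_k = 2**(-k) so that sum_k w_k <= 1.
    The term is 3 * sqrt(log(1/w_k) / (2m))."""
    w_k = 2.0 ** (-k)
    return 3.0 * math.sqrt(math.log(1.0 / w_k) / (2 * m))

# A finer grid position (larger k) costs a larger penalty, but only
# logarithmically, while the whole penalty decays as 1/sqrt(m).
for k in (1, 5, 20):
    print(k, srm_penalty(k, m=1000))
```

So exploring even the 20th candidate value costs only a modest additive term on a sample of a thousand points.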

Rademacher complexity of the class of sensitivities
The generalisation bounds of Sect. 2 that include estimated values of the sensitivity rely on the empirical Rademacher complexity of the class of sensitivities D_A H. In Theorem 1 this was estimated on a separate unlabelled set, independent of the labelled sample, while in Theorems 2 and 3 it was estimated on the input points of the labelled training set, possibly augmented with further unlabelled data. To unify notation, in this section we write S for a (generic) sample in both cases, and m for its cardinality, with a view that, if the empirical Rademacher complexity of D_A H converges sufficiently fast with the cardinality of the labelled sample m, then the labelled data S may actually be sufficient. However, arguably, the complexity of the sensitivity class, R̂_S(D_A H), can be at least as large as that of the original function class H in the worst case, so one may wonder whether the bounds are actually useful. In this section we look at this quantity more closely. Indeed, using a subadditivity property of empirical Rademacher complexities (Bartlett & Mendelson, 2002, Theorem 12, part 7) gives (29): R̂_S(D_A H) ≤ R̂_S(H) + R̂_S(H_A). Moreover, this bound is tight, since equality holds when the approximating class H_A is a singleton; however, the use of a singleton H_A is quite contrived, and far from what approximation algorithms are designed for.
For a fixed (possibly unlabelled) sample S, the set of interest in this section is the restriction of D_A H to S. We use R_p := sup_{f∈H} D_A^p(f) for the worst sensitivity in the chosen p-norm on the sample S. Note that from Assumption 2 we have R_p ≤ C for all p > 0. Note also that D_A H|_S ⊆ B_p(0, m^{1/p} R_p) for all p ≥ 1, where B_p(c, r) denotes the p-ball centered at c with radius r.
We start by putting a crude magnitude bound on R̂_S(D_A H), which holds irrespective of the choices of H and H_A, and is tight up to a constant factor. The following proposition shows that, whenever R_p is small, the empirical Rademacher complexity of the sensitivity class must be small in magnitude, and this bound is also tight up to a constant factor, for all choices of p ≥ 1. This magnitude bound does not imply a decay as m increases, as we make no assumptions beyond an i.i.d. sample at this point. However, it will be a useful reference in our later subsections, and it can also be taken in conjunction with other bounds, since one can always take the minimum of all upper bounds.

Proposition 3.1 (Crude magnitude bound) For any p ≥ 1, we have R̂_S(D_A H) ≤ R_p.
Moreover, a lower bound of the same order holds, as follows. Given p as chosen above, suppose that D_A H|_S nearly fills the p-ball of radius R_p m^{1/p}, in the sense that the convex hull of D_A H|_S contains the p-ball of radius m^{1/p} R_p / 2 intersected with the positive orthant. Then there exists a constant C_p > 0, depending only on the choice of the p-norm, such that R̂_S(D_A H) ≥ C_p R_p.

Proof By Hölder's inequality, for all p ∈ [1, ∞), R̂_S(D_A H) ≤ (1/m) m^{1/p} R_p ‖ε‖_{p′} = R_p, since ‖ε‖_{p′} = m^{1/p′} for any Rademacher vector ε. This proves the upper bound. We denote by K_+ the positive orthant, and let B^+_p(0, m^{1/p} R_p / 2) := K_+ ∩ B_p(0, m^{1/p} R_p / 2). To prove the lower bound, we recall Moreau's decomposition theorem (Moreau, 1965) (see also Wei et al., 2019, Sec. 2.1 & Sec. 3.1.5): given a closed convex cone K ⊂ ℝ^m with polar cone K* = {u ∈ ℝ^m : ⟨u, u′⟩ ≤ 0 for all u′ ∈ K}, every vector v ∈ ℝ^m can be decomposed as v = Π_K(v) + Π_{K*}(v), where Π_K(u) := argmin_{u′∈K} ‖u − u′‖_2 is the orthogonal projection of u onto K. Hence we obtain the chain of (in)equalities (34)-(35), where p′ is the Hölder conjugate of p, i.e. 1/p + 1/p′ = 1. In line (34) we applied (30), and (35) follows from the fact that u is in the positive orthant K_+, so ⟨u, Π_{K*_+}(σ)⟩ ≤ 0, and because the supremum is attained when u is a nonnegative scalar multiple of Π_{K_+}(σ), in which case ⟨u, Π_{K*_+}(σ)⟩ = 0. This completes the proof of the lower bound. ◻

The lower bound highlights the fact that one cannot tighten the complexity bound by more than a constant factor without making extra assumptions. In addition, we also see that the non-negativity of the elements of D_A H only affects this constant. Therefore, in the next few sections we set out to find and exploit other structures, in order to gain more transparency and insight into the effective magnitude of this quantity in some natural settings. Specifically, we shall discuss examples of non-restrictive structural models from which one can read off benign conditions that give better bounds on R̂_S(D_A H).
A smaller magnitude of this complexity implies a smaller unlabelled data set requirement for accurate estimation of the sensitivity, and in the case of our bounds in Sects. 2.3 and 2.4 this may even permit solving the learning problem without the need for an additional unlabelled sample.

Exploiting structural models of the sensitivity set
Throughout this section we make no assumption about either the function class H or the approximating class H A . So the results of this section are equally relevant to very rich classes like deep neural networks, all the way to very restricted ones like linear classes. We also make no assumption about the form of the approximating function, and indeed the approximating class is not required to be of the same architectural type as the original class.
We demonstrate the benign effects of some structural traits that the set D_A H may naturally exhibit, regardless of the linear or nonlinear nature of the actual predictors. Such benign structures manifest themselves in a reduced complexity R̂_S(D_A H), which in turn allows the bounds of Sect. 2 to provide a better understanding of what makes some instances of a learning problem easier than others.
Our strategy in the next subsections is to study the complexity of the set D_A H restricted to the sample S (as it appears in the empirical Rademacher bounds presented in Sect. 2) by inscribing it into various parametrised geometric shapes. These include natural structures such as the points of D_A H|_S being near-sparse, exhibiting clusters, or following some structured-sparsity-type model. We do not actually impose any extra conditions; instead, our strategy is to use these constructs to reveal how the Rademacher complexity depends on the parameters of these models. In other words, our bounds always hold for some parameter values, as in the worst case we simply recover the crude magnitude bound of Proposition 3.1, while the effects of the parameters convey more insight.

Near-sparse sensitivity set
A very natural situation is when some points in S have little effect on the sensitivity of the approximation, or in other words the approximation has little effect on the predictions for part of the points of S. For instance in classification, points that are far from the boundary will often have the approximating function Af predict in agreement with the original f.
A simple way to model this situation is by having the vectors in D_A H|_S lie near the axes corresponding to the points in S which are less affected by the approximation, for instance within an axis-aligned ellipsoid in some Minkowski norm, defined as
E_p(σ) := {u ∈ ℝ^m : Σ_{k=1}^m |u_k/σ_k|^p ≤ 1} (38)
for p ≥ 1, where σ := (σ_1, …, σ_m) ∈ (0, ∞)^m are the semi-axes of the ellipsoid. Note that this model is not restrictive, since we have D_A H|_S ⊂ B_p(0, R_p m^{1/p}); therefore taking σ_k = R_p m^{1/p} for all k ∈ [m] always suffices. However, the added flexibility of this model allows us to infer the effect of the magnitudes of the semi-axes, yielding some simple and natural conditions that improve on the worst-case magnitude guarantee in Proposition 3.1.
The following lemma gives the exact expression for the Rademacher complexity of an ellipsoid in any p-norm.
Lemma 3.2 Let σ ∈ (0, ∞)^m and p ≥ 1, and consider E_p(σ) as defined in (38). Then
R̂_S(E_p(σ)) = ‖σ‖_{p′}/m,
where p′ is the Hölder conjugate of p, i.e. 1/p + 1/p′ = 1.

Proof Using Hölder's inequality, ε_k ∈ {−1, 1}, and the definition of E_p(σ), we obtain the claimed expression. The identities (41) and (42) hold because the supremum is attained. This completes the proof. ◻

For more intuition, consider the case p = 2, which corresponds to the usual Euclidean-norm ellipsoid; then we can relate the right-hand side of the bound in Lemma 3.2 to the volume of the ellipsoid. Indeed, using the relation between the arithmetic and geometric means, (‖σ‖_2/√m)^m ≥ C_m Vol(E_2(σ)), where C_m > 0 is a constant depending only on m. Hence, for a fixed sample size m, if the quadratic mean of the σ_k's is small then the ellipsoid has a small volume.
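As a quick numerical sanity check of Lemma 3.2, the following sketch (assuming the ellipsoid definition E_p(σ) = {u : Σ_k |u_k/σ_k|^p ≤ 1}) computes the closed-form complexity ‖σ‖_{p′}/m and verifies it against a direct supremum for p = 2:

```python
import numpy as np

def ellipsoid_rademacher(sigma, p):
    """Closed-form empirical Rademacher complexity of the axis-aligned
    ellipsoid E_p(sigma), per Lemma 3.2: ||sigma||_{p'} / m, where
    1/p + 1/p' = 1."""
    m = len(sigma)
    p_conj = p / (p - 1.0) if p > 1 else np.inf
    return np.linalg.norm(sigma, ord=p_conj) / m

# Sanity check: for ANY Rademacher vector eps, the supremum of
# (1/m) <eps, u> over u in E_2(sigma) is attained at the
# Hoelder-extremal point, and its value does not depend on eps.
rng = np.random.default_rng(1)
sigma = np.array([3.0, 1.0, 0.5, 0.25])
m = len(sigma)
eps = rng.choice([-1.0, 1.0], size=m)
# Extremal point for p = 2: u_k = eps_k * sigma_k**2 / ||sigma||_2.
u = eps * sigma**2 / np.linalg.norm(sigma)
assert np.sum((u / sigma) ** 2) <= 1 + 1e-12   # u lies in E_2(sigma)
print(np.dot(eps, u) / m, ellipsoid_rademacher(sigma, 2.0))  # equal
```

Because the supremum has the same value for every sign pattern, the expectation over the Rademacher vector is exact here, with no Monte Carlo needed.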
If D_A H|_S ⊆ E_p(σ), then in the worst case σ_k = R_p m^{1/p} for all k ∈ [m], and so ‖σ‖_{p′}/m = R_p. Hence the bound in Lemma 3.2 recovers the bound in Proposition 3.1 in the worst case; whenever D_A H|_S ⊆ E_p(σ) holds with smaller semi-axes, Lemma 3.2 is already an improvement on Proposition 3.1. As a model of the sensitivity set, an ellipsoid with high eccentricity posits that most sensitivity vectors reside close to a linear subspace of ℝ^m. It is interesting to note that this has no implication on the form of the predictors. Indeed, even with highly nonlinear predictors (nonlinear classification boundaries, for example), the fraction of points for which the predictions are distorted under the action of the approximation may be expected to be small. However, it might be unrealistic to expect of all good functions in H that the approximation should change the predictions on the same points and leave the same points alone. Hence, instead of assuming that D_A H|_S is contained in a single ellipsoid, for a more realistic model we consider a union of multiple axis-aligned ellipsoids that cover D_A H|_S. This allows the set of points whose predictions are relatively unaffected by the approximation to differ from one f ∈ H to another.
The following proposition shows that in this model the Rademacher complexity of D_A H|_S is bounded by the Rademacher complexity of the largest ellipsoid in the union and, remarkably, does not depend on the number of ellipsoids in the union; we can have countably many in this model, so the diversity of sensitivity profiles of the predictors of H in the span of the sample is accounted for at no expense. The vector of semi-axis lengths of the i-th ellipsoid will be denoted by σ_i. We refer to individual components of this vector by adding a second index, for example σ_{i,k} for the k-th semi-axis of the i-th ellipsoid.

Proposition 3.3 Suppose D_A H|_S ⊆ ⋃_{i=1}^l E_p(σ_i) for some l ∈ ℕ. Then we have the following bound:
R̂_S(⋃_{i=1}^l E_p(σ_i)) = (1/m) max_{i∈[l]} ‖σ_i‖_{p′}.
The proof makes use of similar steps as the proof of Lemma 3.2, but it does not apply the result of Lemma 3.2, as it turns out that a direct approach yields the exact Rademacher complexity of the union of axis-aligned ellipsoids.

Proof of Proposition 3.3 As D_A H|_S ⊆ ⋃_{i=1}^l E_p(σ_i), using the fact that for two bounded sets A and B we have sup(A ∪ B) = max{sup A, sup B}, then taking absolute values, applying Hölder's inequality, ε_k ∈ {−1, 1}, and the definition of E_p(σ_i), we obtain the claimed expression, where p′ is the Hölder conjugate of p. The equality in (46) is due to the symmetry of the set ⋃_{i=1}^l E_p(σ_i) around each axis, and in (48) Hölder's inequality holds with equality due to the supremum. ◻
We remark that the above proposition remains true for a countably infinite number of ellipsoids, by noticing that the sequence of maxima is non-decreasing in l; thus, by the monotone convergence theorem, Proposition 3.3 holds for countably infinite unions. It may be interesting to note that the model of a union of axis-aligned ellipsoids has the intuitive meaning of near-sparsity of the sensitivities. This may also be interpreted as a kind of near-compression bound, since Proposition 3.3 tells us that, when fewer points are affected by the approximation, the guarantee on the sensitivity estimation quality will be tighter, and hence the generalisation bound will be tighter as well.
However, beyond the intuitive meaning above, our structural modelling approach has the potential to reveal additional benign conditions that might be harder to find by intuition alone. To see this, we shall modify Proposition 3.3 to obtain an upper bound for a union of non-axis-aligned ellipsoids. As long as the ellipsoids share the same center (for instance, the origin), the upper bound remains independent of the number of ellipsoids in the union.
To this end, in addition to the semi-axis parameters, for each ellipsoid in the union we take a rotation matrix V_i ∈ ℝ^{m×m}. The columns of V_i are the principal directions of the i-th ellipsoid. We refer to the k-th column of V_i by (V_i)_k, and (V_i)_{k,k′} denotes its (k, k′)-th element. The i-th ellipsoid is then defined as E^{V_i}_p(σ_i) := {V_i v : v ∈ E_p(σ_i)}. By a change of variables, u ∈ E^{V_i}_p(σ_i) is equivalent to V_i^T u ∈ E_p(σ_i). Let Λ_i be the diagonal matrix with elements σ_{i,k} ∈ (0, ∞) for k ∈ [m], so Λ_i^{-1} V_i^T u ∈ B_p(0, 1). We no longer have symmetry around the axes, so (46) becomes an inequality, and we obtain (56):
R̂_S(⋃_{i=1}^l E^{V_i}_p(σ_i)) ≤ (1/m) max_{i∈[l]} ‖V_i Λ_i‖_{p→1}.
Equation (55) used the assumption that Λ_i and V_i are full-rank square matrices. The last line (56) holds by the definition of ‖ ⋅ ‖_{p→1}, the operator norm (or induced matrix norm) with domain ℓ_p and co-domain ℓ_1. Such norms can be computed explicitly only in a few special cases. In particular:
1. With V_i the identity, ‖V_i Λ_i‖_{p→1} = ‖σ_i‖_{p′}, since Λ_i is the diagonal matrix with elements σ_{i,k}. This recovers precisely the axis-aligned setting.
2. With p = 1, the expression of the induced norm is known to be ‖V_i Λ_i‖_{1→1} = max_{k∈[m]} σ_{i,k} ‖(V_i)_k‖_1.
We see that non-axis alignment has led to somewhat less intuitive expressions, but nevertheless the main quantity governing the empirical Rademacher complexity remains a notion of size of the largest ellipsoid. To interpret this in the context of interest here, it is enough for the sensitivities to mainly reside in linear subspaces of ℝ^m for the Rademacher complexity of D_A H to be small. For the estimation of sensitivities, this means that fewer unlabelled points are required while still obtaining accurate sensitivity estimates (not to be confused with small sensitivity values).
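To make the induced norm concrete, here is a small sketch for the explicitly computable case p = 1, where ‖M‖_{1→1} is the maximum column ℓ1-norm; applying it to M = VΛ gives max_k σ_k ‖(V)_k‖_1, our reading of the elided expression, so treat the exact form as an assumption:

```python
import numpy as np

def norm_1_to_1(M):
    """Operator norm ||M||_{1->1}: the maximum over columns of the
    column's l1 norm -- one of the few induced norms with a closed form."""
    return np.abs(M).sum(axis=0).max()

# For a rotated ellipsoid with semi-axes sigma and rotation V, the
# governing quantity is ||V @ Lambda||_{1->1}, i.e. the columns of V
# scaled by the semi-axes: max_k sigma_k * ||V[:, k]||_1.
theta = 0.3
V = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # a 2-D rotation
sigma = np.array([2.0, 0.1])
M = V @ np.diag(sigma)
direct = max(s * np.abs(V[:, k]).sum() for k, s in enumerate(sigma))
print(norm_1_to_1(M), direct)   # the two computations agree
```

The highly eccentric semi-axes (2.0 versus 0.1) illustrate the point of the section: one dominant direction controls the whole quantity.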

Clustered sensitivity set
In this section we consider another natural structure, namely when the elements of D_A H|_S form clusters. A cluster is a subset of H with a similar sensitivity profile on the sample S. We can model each cluster with a p-norm ellipsoid, each having its own center:
E_p(c_i, σ_i, V_i) := {u ∈ ℝ^m : V_i^T (u − c_i) ∈ E_p(σ_i)}.
The components of the vector σ_i are the semi-axes, and the vector c_i is the center of the i-th cluster. This model is again non-restrictive, as there exist worst-case parameter values that recover the ball B_p(0, R_p m^{1/p}) used previously in the crude bound of Proposition 3.1.
The following proposition shows that in this model, R S (D A H) is bounded by the Rademacher complexity of the largest cluster plus an additive term that grows logarithmically with the number of clusters and linearly with the largest displacement of a cluster from the origin.
Proposition 3.4 (Complexity of clustered sensitivity set) Let S ⊂ X be an unlabelled sample of size m drawn i.i.d. from D_x. Let l ∈ ℕ, and suppose that there exist σ_i ∈ (0, ∞)^m, c_i ∈ ℝ^m and rotation matrices V_i ∈ ℝ^{m×m}, with σ_{i,k} ≤ R_p m^{1/p}, such that D_A H|_S ⊆ ⋃_{i=1}^l E_p(c_i, σ_i, V_i). Then
R̂_S(D_A H) ≤ (1/m) max_{i∈[l]} ‖V_i Λ_i‖_{p→1} + max_{i∈[l]} ‖c_i‖_2 √(2 log l)/m,
where Λ_i is the diagonal matrix with elements σ_{i,k} ∈ (0, ∞) for k ∈ [m].
This cluster model highlights a trade-off regarding the effect of large sensitivities: if a cluster only contains functions whose approximation leads to large sensitivity values, the first term of the bound can still be small, but a penalty is incurred in the second term if not all functions fit in the same cluster.
Proof Let c : D_A H → {c_1, …, c_l} be the function that sends u ∈ D_A H|_S to the center of its best-fitting ellipsoid; ties are broken arbitrarily. Now, adding and subtracting c(u_k), and noting that by construction u − c(u) lies in one of the centered ellipsoids, we proceed by bounding the two resulting terms separately.
We bound the first term by applying Proposition 3.3, or its extension, Eq. (56). To bound the second term, we use Massart's lemma, together with the fact that V_i is a rotation matrix, so ‖V_i^T c_i‖_2 = ‖c_i‖_2. Combining the two bounds completes the proof. ◻

This bound is similar in flavour to the complexity bound for a union given in Golowich et al. (2020), Lemma 7.4, in the sense that there is a logarithmic price to pay for the number of clusters. However, by contrast, here we have an explicit constant in the second term, with a clear relation to the positions of the ellipsoids, and our bound reduces to that of Proposition 3.3 if c_i = 0 for all i ∈ [l]. Therefore, the above bound gives more information as to what helps decrease the Rademacher complexity. More specifically, the benign structures identified are: a small number of clusters, cluster centers close to the origin, and highly concentrated (low-volume) clusters. We will summarise these positive findings and discuss their implications in Sect. 4.1.
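A small sketch of how the two terms of the cluster bound combine; the constants follow our reading of Proposition 3.4 (largest-cluster complexity plus a Massart term over the centers), so treat the exact form as illustrative:

```python
import numpy as np

def massart_term(centers, m):
    """Massart's-lemma contribution of l cluster centers in R^m:
    max_i ||c_i||_2 * sqrt(2 * log(l)) / m."""
    l = len(centers)
    return max(np.linalg.norm(c) for c in centers) * np.sqrt(2 * np.log(l)) / m

def clustered_bound(sigmas, centers, m, p=2.0):
    """Proposition-3.4-style bound (sketch), with axis-aligned clusters:
    complexity of the largest cluster ellipsoid plus the center term."""
    p_conj = p / (p - 1.0)
    largest = max(np.linalg.norm(s, ord=p_conj) for s in sigmas) / m
    return largest + massart_term(centers, m)

m = 100
sigmas = [np.full(m, 0.1), np.full(m, 0.05)]     # two tight clusters
centers = [np.full(m, 0.3), np.zeros(m)]         # one displaced center
print(clustered_bound(sigmas, centers, m))
```

With both centers at the origin the Massart term vanishes and the bound collapses to the single largest ellipsoid, recovering Proposition 3.3.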

Effect of the structural form of predictors
Our analysis so far was completely independent of the specification of H and H_A, and applies to any PAC-learnable hypothesis class. From the crude bound in (29) we know that a low-complexity H always implies a low-complexity D_A H. In this section we give a worked example of how this effect plays out for hypothesis classes that are linear in their parameters. Linear models are a long-standing object of study at the foundation of machine prediction (Vapnik, 1998), whose high-dimensional / low-sample-size version has attracted much interest for the puzzle of over-parameterisation; see e.g. Bartlett et al. (2020). These models also accommodate nonlinearity effortlessly, through a feature map or a kernel.
Let ℍ be a reproducing kernel Hilbert space with reproducing kernel k : X × X → ℝ associated with the feature map Φ : X → ℍ, so that for any x_1, x_2 ∈ X we have k(x_1, x_2) = ⟨Φ(x_1), Φ(x_2)⟩_ℍ. Then our hypothesis class is H := {f_w : x ↦ ⟨w, Φ(x)⟩_ℍ, w ∈ ℍ}. The familiar Euclidean-space setting corresponds to Φ being the identity map and ℍ = ℝ^d.
We define our approximation operator A : H → H_A by Af_w(x) = ⟨Q(w), Φ(x)⟩_ℍ, where f_w(x) = ⟨w, Φ(x)⟩_ℍ and Q : ℍ → ℍ is some approximation of the weights w of the predictor f_w.

Proposition 3.5 Let m ∈ ℕ and S = {x_1, …, x_m} ⊂ X. Then we have the following bound:
R̂_S(D_A H) ≤ (1/m) sup_{f_w∈H} ‖w − Q(w)‖_ℍ √(Σ_{i=1}^m k(x_i, x_i)). (62)

This is of course upper bounded by the sum of the familiar bounds for the linear classes H and H_A via the triangle inequality, as indeed already implied by the crude bound (29); however, the important observation from the special-case analysis of Proposition 3.5 is that (62) does not explicitly depend on the norm of the weight vectors; instead, it depends only on how the approximation A (through Q) distorts the weights. In other words, we do not need the norms ‖w‖_ℍ to be bounded for R̂_S(D_A H) to be bounded, as long as the weight sensitivity ‖w − Q(w)‖_ℍ is bounded for the chosen operator Q.
Therefore, the conclusion from Proposition 3.5 is that, in the generalised-linear model class considered, a small weight sensitivity is sufficient for dimension-independent learning when the approximating class H_A has dimension-free complexity. This is in contrast with existing dimension-free bounds, which required a bounded-norm constraint.
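To illustrate the norm-independence, the sketch below instantiates Q as a hypothetical BinaryConnect-style binariser and evaluates the right-hand side of a Proposition-3.5-style bound with a linear kernel; the quantiser and data are illustrative assumptions, not the paper's exact construction:

```python
import numpy as np

def binarise(w):
    # Hypothetical BinaryConnect-style quantiser Q: each weight is
    # replaced by its sign times the mean magnitude.
    return np.sign(w) * np.abs(w).mean()

def weight_sensitivity_bound(ws, X):
    # Proposition-3.5-style bound (sketch), with linear kernel
    # k(x, x) = <x, x>:
    #   (sup_w ||w - Q(w)||) * sqrt(sum_i k(x_i, x_i)) / m.
    # Note it depends on ||w - Q(w)||, not on ||w||.
    m = X.shape[0]
    sup_dist = max(np.linalg.norm(w - binarise(w)) for w in ws)
    trace_term = np.sqrt((X ** 2).sum())   # sqrt(sum_i <x_i, x_i>)
    return sup_dist * trace_term / m

rng = np.random.default_rng(2)
X = rng.normal(size=(50, 4))
ws = [rng.normal(size=4) for _ in range(5)]
# A weight vector that is already (nearly) binary has a huge norm but
# ZERO distortion under Q, hence a vanishing bound:
near_binary = 100.0 * np.sign(rng.normal(size=4))
print(weight_sensitivity_bound(ws, X),
      weight_sensitivity_bound([near_binary], X))
```

The second value is zero despite `near_binary` having a norm of 200, which is exactly the point: boundedness of ‖w − Q(w)‖, not of ‖w‖, drives the complexity.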
We have not found an analogous property for other hypothesis classes, and it remains an open question as to whether analyses of the sensitivity class tailored to specific classes would unearth additional insights.

Proof of Proposition 3.5
Since the ε_k are uniform on {−1, 1}, we can remove the absolute value, and by the linearity of inner products and the Cauchy-Schwarz inequality we obtain the bound in terms of sup_w ‖w − Q(w)‖_ℍ. Finally, it is known (Mohri et al., 2018, Theorem 6.12) that the remaining expectation is at most √(Σ_{i=1}^m k(x_i, x_i)). This completes the proof. ◻

Discussion of implications, and potential extensions
In this section we elaborate on the significance of our theoretical results. The next Sect. 4.1 shows how to use our analysis of the sensitivity set to obtain a natural and very general structural condition that yields favourable convergence rates on the sensitivity estimation error. Hence, under this condition, the generalisation error bounds in our previous sections become dominated by the complexity of the reduced approximate class, irrespective of the form or size of the original class.
In Sect. 4.2 we discuss consequences for real problems by revisiting the original motivation of understanding model compression in deep networks. In particular, we consider a concrete case of approximation by weight binarisation, as in BinaryConnect (Courbariaux et al., 2015), or parameter quantisation in deep network classifiers (Hubara et al., 2017), where applying our results yields a depth-independent bound. We also discuss a potential way to relate our approach to a previously successful but theoretically unjustified on-device deep net approach, Neural Projections (Ravi, 2019), which brings insight into its workings.
Finally, in Sect. 4.3 we describe how our framework can be extended to stochastic approximation schemes.

Favourable rates for sensitivity estimation in approximable hypothesis classes
We have already commented that whenever the target function admits a small sensitivity threshold t, this can usefully restrict the hypothesis class under favourable data distributions. Here we show that a uniformly small t, with the approximation sensitivity specified in the p = 2 norm, can even yield a speed-up of the convergence rate of sensitivity estimation, based on the findings of Sect. 3. First, we extract the fortuitous conditions arising from our analysis in Sect. 3 that enable fast convergence of the Rademacher complexity of the class of sensitivities. More precisely, if the sample sensitivity set D_A H|_S resides in a countable union of near-sparse sets and a finite number l ∈ ℕ of dense clusters, then the Rademacher complexity of the sensitivity set decays at the fast rate 1/m, up to a logarithmic factor.

Condition 4.1 (Structured sensitivity condition) Suppose that

D_A H|_S ⊆ ⋃_{i=1}^{l} E_p(c_i, λ̃_i, V_i) ∪ ⋃_{i≥1} E_p(λ_i),

where E_p(c_i, λ̃_i, V_i) are ellipsoids centered at c_i, having side-lengths concatenated in the vector λ̃_i and orientation V_i; and E_p(λ_i), i ≥ 1, are ellipsoids centered at the origin, having side-lengths concatenated in λ_i. Let Λ_i be a diagonal matrix with elements λ̃_{i,k} ∈ (0, ∞).

Proof First note that from Assumption 2 we have ‖f − Af‖_∞ < C, and we have the following bound on the variance of the function f − Af, where the last line is due to Jensen's inequality. The result then follows from Bartlett et al. (2005), Theorem 2.1, by setting α = 1/2. The second statement is proved in Lemma 4.2. ◻

Theorem 4 bounds the deviation between the true sensitivity and its sample estimate in terms of the global sensitivity threshold t of functions in H. Whenever t is sufficiently small, the last term will dominate the t-dependent term, which in turn decays with m at a faster rate.
The observation that the sensitivity threshold t acts as a variance controlling the rate could be further refined using localisation, replacing the global sensitivity threshold with the sensitivities of individual functions and relaxing the requirement that the entire class H is well approximable. This comes at the expense of the more involved machinery of local Rademacher complexities (Bartlett et al., 2005), which we do not pursue here, and which would likely need a specialised treatment to bound the local complexity for particular choices of H, similarly to the approach taken in Suzuki et al. (2020a).
The key difference from the approach of Suzuki et al. (2020a) is the following. Their bounds depend on the local Rademacher complexity of the Minkowski difference between the loss classes of the full and the approximate predictors, which they are able to bound for some specific hypothesis classes; whereas, our bounds depend on the Rademacher complexity of the set of sensitivities of predictors from the hypothesis class. The Minkowski difference loses the coupling between the full and approximate predictor pairs which, in our approach, is the key to taking advantage of structure in the set of sensitivities. The structures that we identified and exploited are not specific to the form of functions in the chosen hypothesis class; instead they uncover new general insights, and tighten our bounds effortlessly, with elementary tools.
Indeed, we highlighted that even in the simple global analysis of Theorem 4, from the findings of Sect. 3 we were able to readily extract some general favourable rate conditions for sensitivity estimation. Note that Lemma 4.2 is general, and holds for any PAC-learnable class. It says that whenever the target function admits a small t and the interplay of data and model satisfies Condition 4.1, the error from sensitivity estimation becomes negligible very quickly (even without any additional unlabelled data), hence the dominant term of our generalisation bounds (Theorems 1, 2, 3) now becomes the complexity of the approximate class, regardless of how big the original class H was.
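To make the quantity at stake concrete, the sample estimate of the sensitivity, computed from unlabelled inputs only, can be sketched in a few lines. This is a minimal illustration with our own names (empirical_sensitivity, f, Af) and a one-weight toy predictor, not the paper's actual notation:

```python
import math

def empirical_sensitivity(f, Af, unlabelled_xs, p=2):
    """Sample estimate of the sensitivity of predictor f to the approximation
    operator A: a normalised p-norm of the pointwise distortions |f(x) - Af(x)|,
    computed from unlabelled inputs only."""
    m = len(unlabelled_xs)
    return (sum(abs(f(x) - Af(x)) ** p for x in unlabelled_xs) / m) ** (1.0 / p)

# Toy predictor with a single weight; Af keeps only the weight's sign.
w = 0.8
f = lambda x: w * x
Af = lambda x: math.copysign(1.0, w) * x
xs = [0.1 * i for i in range(-10, 11)]  # 21 unlabelled points in [-1, 1]
print(empirical_sensitivity(f, Af, xs))
```

Note that no labels appear anywhere in the computation, which is why unlabelled data suffices for this estimation step.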

Implications related to real problems
In this section we discuss the significance of our theoretical results by revisiting some of our motivating examples related to real problems.

From BinaryConnect to a depth-independent bound
We consider a specific example. Take H to be the class of L-layer feed-forward neural network classifiers with ReLU activations in the hidden layers and binary output. Let |W| be the total number of parameters (including all weights and bias terms). It was shown in Bartlett et al. (2019) that the VC dimension of this class is O(|W| L log(|W|)), and this is near-tight with a lower bound of Ω(|W| L log(|W|/L)). A well-known relation between Rademacher complexity and VC dimension (Bartlett & Mendelson, 2002, Theorem 6) then implies that for this class

R_m(H) = O(√(|W| L log(|W|) / m)).

Let us consider the approximation operator A of parameter binarisation, that is, we retain only the signs of all parameters while keeping the same architecture, as has been done in practice in BinaryConnect (Courbariaux et al., 2015). Hence H_A is a finite class of cardinality |H_A| = 2^|W|. By Massart's finite lemma (Mohri et al., 2018, Theorem 3.7), the Rademacher complexity of this class of approximate classifiers is

R_m(H_A) ≤ √(2 |W| log(2) / m).

Observe, this is independent of the network depth L. The same reasoning holds if quantisation is pursued into q bins, since then |H_A| = q^|W|. By contrast, the complexity of the original class H grows with L. Applying our Theorem 2 combined with Lemma 4.2 under Condition 4.1, we obtain a depth-independent error bound (Corollary 4.3) for both the full-precision and the binarised network (i.e. a guarantee on max{err(Af), err(f)}). Condition 4.1 in this setting is implied whenever the number of points on which the binarised network disagrees with the full-precision network is of constant order with respect to the training set size.
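For a numerical feel of the depth (in)dependence just described, the two Rademacher bounds can be evaluated side by side. The sketch below uses made-up sizes |W| and m, and takes the O(·) constant to be 1 purely for illustration:

```python
import math

def rademacher_vc_bound(num_params, depth, m):
    """R_m(H) = O(sqrt(VC(H)/m)) with VC(H) = O(|W| L log|W|)
    (Bartlett et al., 2019); this grows with the depth L."""
    return math.sqrt(num_params * depth * math.log(num_params) / m)

def rademacher_massart_bound(num_params, m, q=2):
    """Massart's finite-class lemma for the quantised class |H_A| = q^|W|:
    R_m(H_A) <= sqrt(2 log|H_A| / m) -- no dependence on depth."""
    return math.sqrt(2 * num_params * math.log(q) / m)

W, m = 10_000, 100_000
for L in (2, 10, 50):
    print(L, round(rademacher_vc_bound(W, L, m), 3),
          round(rademacher_massart_bound(W, m), 3))
```

The first column of bounds grows with L while the second stays fixed, which is the depth-independence exploited in this subsection.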
Here we would like to discuss a potential interpretation of the result of Corollary 4.3. BinaryConnect and quantised deep nets are known from the previous literature to be empirically successful (Courbariaux et al., 2015; Hubara et al., 2017), e.g. in image classification problems. Our theory suggests that there must be something fortuitous about many natural data sources that, in our context, makes the complex function class of deep nets behave as a low-complexity class. We can only speculate on this, and to this end we identified Condition 4.1. It is intriguing that the same condition also turned out to explain the depth-independence of the error in this example. There have been many attempts at depth-independent error bounds for deep nets in the literature, under various assumptions, for instance norm constraints on the weights (Golowich et al., 2020). Our interpretation above provides a complementary view, simply as a byproduct of our general pursuit to understand approximate predictors.

Towards understanding neural projections
Having developed an analytic approach to the twofold problem of learning a good full-precision predictor and a good approximate predictor, it may now be interesting to relate the training objective function we obtained in (25) to the training objective of Neural Projections, proposed in Ravi (2019). The latter has been a practical approach to on-device deep networks. It has no theoretical backing; however, ample empirical evidence has demonstrated its impressive success in real-world image classification problems (Ravi, 2019). It minimises a weighted sum of three terms (the empirical errors of the full and approximate models, plus their disagreement), with the ultimate goal of deploying the approximate model on-device.
Take any β ∈ [0, 1]. Our training objective can then be written as a weighted sum of the empirical errors of the full and approximate predictors plus their empirical disagreement, with weights λ_1 = 1/β − 1 ≥ 0 and λ_2 = 1/β − 1 + 2λ/β ≥ 0, where λ is the weight of the sensitivity term in (25). Now, if we relax Af in H_A, i.e. replace it with some g ∈ H_A, then we arrive precisely at the training objective of Neural Projections. Indeed, in Ravi (2019) this modified objective is minimised in the parameters of f and g, along with tuning both λ_1 and λ_2 independently. Thus, we may interpret the training objective function of Neural Projections as an approximate version of our objective function (25). While the former has no theoretical justification, our objective function has a similar flavour and follows from a rigorous theory. Hence, while we reckon this is not a complete explanation of why Neural Projections (Ravi, 2019) are so effective in practice, we believe this interpretation still brings some insight into its working.
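Under this reading, the relaxed objective is just a weighted sum of three empirical quantities. A minimal sketch, with hypothetical 0-1 predictions and arbitrary weights lam1, lam2 (our names, not those of Ravi, 2019):

```python
def empirical_error(preds, labels):
    """Fraction of misclassified points (0-1 loss)."""
    return sum(p != y for p, y in zip(preds, labels)) / len(labels)

def disagreement(preds_f, preds_g):
    """Fraction of inputs on which the two models' predictions differ."""
    return sum(a != b for a, b in zip(preds_f, preds_g)) / len(preds_f)

def np_objective(err_f, err_g, dis, lam1, lam2):
    """Weighted sum of the full model's error, the projected model's error,
    and their disagreement -- the three-term objective discussed above."""
    return err_f + lam1 * err_g + lam2 * dis

y       = [0, 1, 1, 0, 1]
preds_f = [0, 1, 0, 0, 1]  # hypothetical full-precision predictions
preds_g = [0, 1, 1, 1, 1]  # hypothetical projected-model predictions
obj = np_objective(empirical_error(preds_f, y), empirical_error(preds_g, y),
                   disagreement(preds_f, preds_g), lam1=1.0, lam2=0.5)
print(obj)
```

In practice f and g would be parameterised networks and all three terms would be minimised jointly in their parameters.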

Potential extensions to stochastic approximate predictors
The approximation schemes assumed so far were deterministic. Many approximation schemes are in fact stochastic in nature, therefore, in this section we discuss how to straightforwardly adapt our framework to stochastic approximation schemes.

Let (Ω, F, ℙ) be a probability space. We define a stochastic approximation scheme as a map A : Ω × H → H_A, where H_A := {A_ω f : ω ∈ Ω and f ∈ H}. For a fixed ω ∈ Ω we then have an approximation operator A_ω : H → H_ω, where H_ω := {A_ω f : f ∈ H}; that is, for each fixed ω we have one approximation operator. Thus, when |Ω| = 1 we recover the deterministic setting. Also, for a fixed f ∈ H, the collection of possible approximations of f is the set {A_ω f : ω ∈ Ω}. Now we define D_ω(f) := D_{A_ω}(f), and then, for a fixed arbitrary ω ∈ Ω, we have with probability at least 1 − δ, uniformly for all f ∈ H, a bound in terms of D_ω(f) and the Rademacher complexity of H_ω. This uniform bound follows directly from Lemma 2.3 combined with a standard Rademacher bound, and for fixed ω the first two terms on its right-hand side correspond to the objective function of the Algorithm (18) in Sect. 2.3.
We can make this independent of a particular random instance ω, e.g. by considering expectations. Although we cannot simply take expectations on both sides, as this would incur a union bound over infinitely many sets, we can take the expectation inside the bound. Applying Jensen's inequality, the argument of the expectation can then be bounded in terms of the Rademacher complexity R_m(H_ω). Thus, with probability at least 1 − δ, we have a uniform bound expressed in terms of the expected sensitivity, the expected Rademacher complexity of the small approximating class, and a new empirical error term that, due to the expectation, may be interpreted as a data-augmentation loss. Minimising the first two terms on its right-hand side could be used to justify a regularised data-augmentation algorithm, in analogy with our previous algorithm in (18).
Likewise, one can introduce estimates of the expected distortion E_ω D_ω(f) from unlabelled data. Alternatively, if the approximation operator A satisfies a variance condition, namely that the variance of A_ω f over ω is bounded by some property of f ∈ H, then, by Jensen's inequality and the variance condition, the expected distortion is controlled directly. So we see that this variance condition on A provides another instance where the need for additional unlabelled data is eliminated, in the case of stochastic approximation operators. A similar condition, formulated at the level of parameters, is frequently encountered in the literature on quantisation for learning and optimisation, such as in stochastic rounding (Alistarh et al., 2017; Wen et al., 2017).
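As a concrete parameter-level example of an operator friendly to such a variance condition, here is a sketch of stochastic rounding, which is unbiased by construction; the function name and the Monte Carlo check are ours, for illustration:

```python
import math
import random

def stochastic_round(w, rng):
    """Round w down or up at random, with probabilities chosen so that the
    rounding is unbiased: the expectation of A_omega(w) over omega equals w."""
    lo = math.floor(w)
    return lo + (1 if rng.random() < w - lo else 0)

# Monte Carlo check of unbiasedness: the average of many independent
# roundings of w should be close to w itself.
rng = random.Random(0)
w = 0.3
mean = sum(stochastic_round(w, rng) for _ in range(100_000)) / 100_000
print(mean)
```

Unbiasedness pins down the mean of the random approximations, so only their variance remains to be controlled, which is exactly what the variance condition asks for.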

Conclusions
We end our study with a high-level summary. Inspired by the recent surge of interest in model-compression and approximate learning algorithms in the context of small-device settings, we studied the role of approximability in generalisation, both in the full precision and in the approximated settings. Our main findings can be summarised as follows: (1) For any given PAC-learnable problem, and any approximation scheme, target concepts that have low sensitivity to the approximation can be learned from a smaller labelled sample, provided sufficient unlabelled data. This is achieved by using approximation to modify the loss function and isolating a sensitivity term in the generalisation error. The modified loss function has a lower complexity in comparison with the original, pushing the complexity of the learning problem onto the class of sensitivity functions, which in turn only requires unlabelled data for estimation whenever the original loss is Lipschitz.
(2) Our analysis yielded algorithms showing that it is possible to learn a good predictor whose approximation has the same generalisation guarantee as the full precision predictor. Owing to the generality of our approach, such provably accurate approximate predictors can be used with a variety of model-compression and approximation schemes, and potentially deployed in memory-constrained settings.
(3) Our algorithms use unlabelled data to estimate the sensitivity of predictors to the given approximation operator, and this need not be independent from the labelled training set. Moreover, while the required unlabelled sample complexity can be large in general, we highlighted several examples of natural structure in the class of sensitivities that significantly reduce, and possibly even eliminate, the need for additional unlabelled data. At the same time, structural properties of the sensitivity class shed new light onto the question of what makes certain instances of learning problems easier than others.

Several open questions remain. As our upper bounds highlighted structural traits that explain good performance in model-compression settings, it will be interesting to develop lower bounds under the same structural traits, to assess the tightness of our bounds. From the practical perspective, it will be interesting to develop efficient implementations, and study their computational complexity. Another line of interesting future work is to explore adversarial settings (Chowdhury et al., 2022; Montasser et al., 2019), where the approximation operator A is in the hands of an adversary, and the learner needs to find a predictor that is robust to it. Furthermore, it would be interesting to study model-compression and approximate algorithms in other learning theory frameworks such as PAC-Bayes, and perhaps even non-uniform frameworks.

Appendix 1: Numerical illustration
We presented a theory for learning with model approximation / model compression. Our algorithm (24) and its refinement (25) were of theoretical interest in that pursuit, and we should note that, for certain choices of approximation, such as weight-binarisation or quantisation, our minimisation objective is not differentiable. Nevertheless, it may still be interesting to illustrate its working in numerical experiments, which we do in this section.

Appendix 1.1: A multi-objective optimisation approach
We employ a general-purpose, assumption-free method known as NSGA-II, a multi-objective evolutionary algorithm based on non-dominated sorting (Deb et al., 2002). This is an iterative, population-based heuristic approach that has had many successful applications in practice. Its computational complexity is of order O(MN²) per iteration, where M is the number of objectives (in our case, M = 2) and N is the population size. The latter is a parameter representing the number of candidate solutions at any given iteration. NSGA-II returns a set of non-dominated solutions (classifiers in our case) that estimate the Pareto front, each solution representing a different tradeoff between the objectives.
NSGA-II was previously demonstrated to work well in regularised machine learning problems (Chen & Yao, 2010), as it alleviates the need for tuning the balance between competing terms. The user can choose from the returned solutions a-posteriori-for instance based on validation errors, or some other criteria depending on the application context.
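The selection rule at the core of NSGA-II, retaining the non-dominated candidates, can be sketched in a few lines. The candidate objective pairs below are hypothetical (empirical error, sensitivity) values, not taken from our experiments:

```python
def pareto_front(points):
    """Return the non-dominated subset of (e1, e2) objective pairs, both to be
    minimised: a point survives unless some other point is at least as good in
    both objectives (and not identical to it)."""
    return [p for p in points
            if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)]

# Hypothetical (empirical error, sensitivity) values for six candidates.
cands = [(0.30, 0.05), (0.10, 0.40), (0.20, 0.20),
         (0.25, 0.25), (0.10, 0.50), (0.35, 0.04)]
print(sorted(pareto_front(cands)))
```

The surviving points are exactly the distinct tradeoffs from which the user selects a-posteriori, e.g. by validation error. (NSGA-II additionally ranks dominated points into successive fronts and uses crowding distance to preserve diversity, which this sketch omits.)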
Our objective function breaks up naturally into two components, which we will minimise simultaneously: E_1 ≡ êrr(Af) and E_2 ≡ D_A(f). We shall demonstrate the working of this approach with the approximation operator A taken to be weight-binarisation, as in BinaryConnect (Courbariaux et al., 2015), which is a non-differentiable problem.
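For concreteness, here is a toy sketch of the binarisation operator and the resulting empirical sensitivity on a stand-in linear predictor (all weights and inputs are made up; in the experiments the candidates are neural networks):

```python
def binarise(weights):
    """BinaryConnect-style approximation operator A: keep only the signs of
    the parameters (zero mapped to +1), same architecture otherwise."""
    return [1.0 if w >= 0 else -1.0 for w in weights]

def predict(weights, x):
    """Stand-in linear classifier playing the role of the network."""
    return 1 if sum(w * xi for w, xi in zip(weights, x)) >= 0 else 0

w  = [0.7, -0.2, 1.5]
bw = binarise(w)
xs = [[1.0, 2.0, 0.5], [0.1, 3.0, -0.2], [-1.0, 0.5, 0.3]]
# Empirical sensitivity: fraction of inputs on which f and Af disagree.
D = sum(predict(w, x) != predict(bw, x) for x in xs) / len(xs)
print(bw, D)
```

Note that the operator, and hence the objective, is piecewise constant in the weights, which is why a derivative-free optimiser such as NSGA-II is appropriate here.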

Appendix 1.2: Implementation and parameter setup
Our implementation is based on the Python package PyMOO (https://pymoo.org/) (Blank & Deb, 2020). The candidate predictors are fully connected 3-layer feed-forward neural network classifiers with ReLU activation on the hidden nodes, a thresholded sigmoid on the output node, and no regularisation (other than the implicit effect of the approximation operator). We employ the 0-1 loss directly, since the optimiser allows non-differentiable objectives.
We have set the population size to N = 100, following Chen and Yao (2010), and we stop when the change in both objectives becomes less than 10⁻³ for 100 consecutive iterations, or when the allotted computing time is exhausted. We use the default settings (simulated binary crossover and polynomial mutation) with default parameters, with two additions (credit to Yangfan Peng) that enhance efficiency for our problem, as follows. Firstly, we added a constraint to ensure the sample error êrr(Af) shrinks throughout the iterations to no larger than 0.3. This speeds up the process in our experience, as new candidates are then able to explore more promising regions of the hypothesis space. Secondly, at each iteration, we eliminate candidates with identical objective values, even if they differ in the parameter space, breaking ties randomly. This encourages diversity of the candidate set, leading to a better coverage of the Pareto front.

Appendix 1.3: Data sets and protocol
We created a 2D synthetic data set (Synth) by sampling two zero-mean axis-aligned Gaussians with variances (1, 0.3) and (0.3, 1) respectively, following the 'Bumpy' data set description from Chen and Yao (2010). The training set has 600 points, of which we used half for validation, and the independent test set has another 300 points. The validation set is only used to select one classifier from the non-dominated set of solutions returned by the multi-objective optimiser.
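A sketch of how such a set can be generated (the seed, helper name, and labelling scheme are our illustrative choices, not part of the original protocol):

```python
import random

def make_synth(n, seed=0):
    """Two zero-mean axis-aligned Gaussian classes in 2D: class 0 has
    variances (1, 0.3), class 1 has variances (0.3, 1); standard deviations
    are the square roots of these variances."""
    rng = random.Random(seed)
    X, y = [], []
    for _ in range(n):
        label = rng.randint(0, 1)
        vx, vy = ((1.0, 0.3), (0.3, 1.0))[label]
        X.append((rng.gauss(0.0, vx ** 0.5), rng.gauss(0.0, vy ** 0.5)))
        y.append(label)
    return X, y

X, y = make_synth(600)
print(len(X), sorted(set(y)))
```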
The second data set we use is MNIST. This consists of 28 × 28 pixel images of handwritten numerals, of which we took the two classes representing '0' versus '1'. We have 4702 + 5430 images in these two classes for training, a further 2533 for validation, and an independent test set of size 2115. No pre-processing was applied to either data set.
We use the same number of hidden nodes in all candidate classifiers which, in the reported experiments, we set to 10 per hidden layer in the case of Synth, and to 100 per hidden layer in the case of MNIST.

Appendix 1.4: Results
In Fig. 1 we show the estimated Pareto fronts (blue curves) obtained on Synth and on MNIST after one full run (300 iterations) of the algorithm. The markers on these curves represent the two objective values of the non-dominated solutions found in the last generation. As we can see from the figure, each of these classifiers exhibits a different tradeoff between its sample error and sensitivity. For each of these classifiers f, we then computed the validation-set accuracy of its weight-binarised version, Af (vertical lines). The specific tradeoff at which the validation accuracy is highest is data-set dependent, and this is one of the reasons that a multi-objective approach capable of capturing multiple tradeoffs is well suited. Indeed, in the case of MNIST, the highest validation accuracy happens to