Quantum learning Boolean linear functions w.r.t. product distributions

The problem of learning Boolean linear functions from quantum examples w.r.t. the uniform distribution can be solved on a quantum computer using the Bernstein–Vazirani algorithm (Bernstein and Vazirani, in: Kosaraju (ed) Proceedings of the twenty-fifth annual ACM symposium on theory of computing, ACM, New York, 1993. https://doi.org/10.1145/167088.167097). A similar strategy can be applied in the case of noisy quantum training data, as was observed in Grilo et al. (Learning with errors is easy with quantum samples, 2017). However, extensions of these learning algorithms beyond the uniform distribution have not yet been studied. We employ the biased quantum Fourier transform introduced in Kanade et al. (Learning DNFs under product distributions via μ-biased quantum Fourier sampling, 2018) to develop efficient quantum algorithms for learning Boolean linear functions on n bits from quantum examples w.r.t. a biased product distribution. Our first procedure is applicable to any (except full) bias and requires O(ln(n)) quantum examples. The number of quantum examples used by our second algorithm is independent of n, but the strategy is applicable only for small bias. Moreover, we show that the second procedure is stable w.r.t. noisy training data and w.r.t. faulty quantum gates. This also enables us to solve a version of the learning problem in which the underlying distribution is not known in advance.
Finally, we prove lower bounds on the classical and quantum sample complexities of the learning problem. Whereas classically, Ω(n) examples are necessary independently of the bias, we are able to establish a quantum sample complexity lower bound of Ω(ln(n)) only under an assumption of large bias. Nevertheless, this allows for a discussion of the performance of our suggested learning algorithms w.r.t. sample complexity. With our analysis, we contribute to a more quantitative understanding of the power and limitations of quantum training data for learning classical functions.


Introduction
The origins of the fields of machine learning as well as quantum information and computation both lie in the 1980s. The arguably most influential learning model, namely the PAC ("probably approximately correct") model, was introduced by Valiant in 1984 [26], giving the problem of learning a rigorous mathematical framework. Around the same time, Benioff [7] and Feynman [12] presented the idea of quantum computers to the public and thus gave the starting signal for important innovations at the intersection of computer science, information theory and quantum theory. Both learning theory and quantum computation promise new realms of computation in which tasks that seem insurmountable from the perspective of classical computation become feasible. The former has already proved its practical worth and is indispensable for modern-world big data applications; the latter is not yet as practically relevant, but much work is invested to make the promises of quantum computation a reality. The interested reader is referred to [20,25] for an introduction to statistical learning and quantum computation and information, respectively.
Considering the increasing importance of machine learning and quantum computation, attempting a merger of the two seems a natural step to take, and the first step in this direction was taken already in [10]. The field of quantum learning has received growing attention over the last few years, and by now some settings are known in which quantum training data and the ability to perform quantum computation can be advantageous for learning problems from an information-theoretic as well as from a computational perspective, in particular for learning problems with fixed underlying distribution (see, e.g., [3] for an overview). It was, however, shown in [4] that no such information-theoretic advantage can be obtained in the (distribution-independent) quantum PAC model (based on [10]) compared to the classical PAC model (introduced in [26]).
One of the early examples of the aptness of quantum computation for learning problems is the task of learning Boolean linear functions w.r.t. the uniform distribution via the Bernstein-Vazirani algorithm presented in [8]. Whereas this task of identifying an unknown n-bit string classically requires a number of examples growing (at least) linearly with n, a bound on the sufficient number of copies of the quantum example state independent of n can be established. This approach was taken up in [13], where it is shown that, essentially, the Bernstein-Vazirani-based learning method is also viable if the training data is noisy. However, this analysis, too, is restricted to quantum training data arising from the uniform distribution. The same limiting assumption was also made in [10] for learning Disjunctive Normal Forms (DNFs), and in this context an extension to product distributions was achieved in [17].
Hence, a natural next step is to build on the reasoning of [17] to extend the applicability of quantum learning procedures for linear functions to more general distributions. The analysis hereby differs from the one for DNFs because no concentration results for the biased Fourier spectrum of a linear function are available. Moreover, whereas many studies of specific quantum learning tasks focus on providing explicit learning procedures yielding a better performance than known classical algorithms, we complement our learning algorithms with lower bounds on the size of the training data, both for a comparison to the best classical procedure and for a discussion of optimality among possible quantum strategies.

Overview of the results
The task of learning linear functions has already served as a toy model for quantum speed-ups in the early days of quantum computing. We describe possible generalizations of known results in different scenarios. First, in Theorem 3 we exhibit a Fourier-sampling-based algorithm which learns Boolean linear functions on n inputs from O(ln(n)) quantum examples arising from a c-bounded product distribution D_μ. (Classically, it is known that Ω(n) examples are required.) Moreover, for a bias vector μ satisfying |μ_i| ≤ O(1/√n) for all i, this can be reduced to O(1) quantum examples (Theorem 4). We also show that this reduction to a constant number of quantum examples is not possible for arbitrary product distributions by giving quantum sample complexity lower bounds in Theorem 6.
In Theorem 8, we exhibit a noise bound for quantum examples arising from a product distribution D_μ with |μ_i| ≤ O(1/√n) for all i but corrupted by noise, which guarantees that O(1) quantum examples still suffice for learning. Under milder assumptions on the noise, an O(ln(n)) upper bound on the sample complexity is given. Similarly, faulty quantum gates can be tolerated in our learning algorithm. Based on this observation, we construct a quantum learning algorithm without prior knowledge of the underlying distribution which requires O(n²) quantum examples, by first estimating the bias vector classically (Corollary 3).

Related work
The (classical) problem of learning linear functions from randomly drawn examples in the presence of noise was studied in [9] (over the field F_2) as well as in [22] (over a field F_q for q prime). The latter of these two works also established the relevance of this learning problem for cryptography by connecting it to certain lattice problems. A different model for learning linear functions is studied in [16], where the training data is not assumed to be noisy but instead only partial information about the function values is revealed.
The quantum PAC model was introduced in [10], where it was employed for learning DNF formulae w.r.t. the uniform distribution using a quantum example oracle. This was extended to product distributions by [17]. On the basis of this notion of quantum examples, the known Bernstein-Vazirani algorithm [8] can be reinterpreted as giving rise to a quantum learning algorithm for linear functions. This interpretation is explicitly given and further elaborated upon for the case of noisy training data in [11] (for q = 2) and in [13] (for general primes q). Cross et al. [11] established that, whereas the learning parity problem without noise is feasible both for classical and quantum computation, the learning parity with noise problem is widely believed to be classically intractable but remains feasible for quantum computers, where the runtime depends only logarithmically on the number of qubits. This quantum advantage for noisy systems was demonstrated experimentally in [23]. Grilo et al. [13] extend this analysis to general fields and a broader class of noise models and obtain that also in that scenario, learning linear functions from noisy data is feasible for quantum computers; however, their runtime bound is polynomial in the number of subsystems. In [5], the class of juntas is found to also allow for efficient quantum learning. The framework of Fourier-based quantum exact learning is shown in [1] to be efficiently applicable more generally, also to Fourier-sparse functions. Limitations of the power of quantum computation for learning have been studied in a series of papers culminating in [4] and more recently also in [2]. The former work shows that without prior restrictions on the underlying probability distribution, quantum examples are not more powerful than classical examples. The latter work demonstrates that, assuming quantum hardness of the learning with errors problem from classical examples, the class of shallow circuits is hard to learn from quantum examples.
Aside from the task of learning from examples, the problem of learning from membership queries, both classical and quantum, is also well studied. For instance, [24] established a polynomial relation between the number of required quantum versus required classical queries, which was recently improved upon in [1]. Also, [19] uses quantum membership queries for learning multilinear polynomials more efficiently than is classically possible.

Structure of the paper
The paper is structured in the following way. In Sect. 2, we introduce the well-known notions from classical learning, quantum computation and Boolean Fourier analysis required for our purposes, as well as the prototypic learning algorithm which motivates our procedures. Section 3 consists of a description of the learning task to be considered. This is followed by a generalization of the Bernstein-Vazirani algorithm to product distributions in Sect. 4. In the next section, this is used to develop two quantum algorithms for solving our problem. ("Appendix A" contains a stability analysis of the second of the two procedures w.r.t. noise in training data and computation.) In Sect. 6, we establish sample complexity lower bounds complementing the upper bounds implied by the algorithms of Sect. 5. Finally, we conclude with some open questions and the references.

Basics of quantum information and computation
We first define some of the fundamental objects of quantum information theory, albeit restricted to those required in our discussion. For the purpose of our presentation, we will consider a pure n-qubit quantum state to be represented by a state vector |ψ⟩ ∈ C^(2^n) (in Dirac notation). Such a state encodes measurement probabilities in the following way: if {|b_i⟩}_{i=1}^{2^n} is an orthonormal basis of C^(2^n), then there corresponds a measurement to this basis, and the probability of observing outcome i for a system in state |ψ⟩ is given by |⟨b_i|ψ⟩|². Finally, when considering multiple subsystems we will denote the composite state by the tensor product, i.e., if the first system is in state |ψ⟩ and the second in state |φ⟩, the composite system is in state |ψ, φ⟩ := |ψ⟩ ⊗ |φ⟩.
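As a concrete illustration of the Born rule just described, the following Python sketch (not part of the original presentation; the state is an arbitrary example) computes the outcome probabilities |⟨b_i|ψ⟩|² and a tensor product state:

```python
import numpy as np

# Hypothetical 2-qubit state; probabilities |<b_i|psi>|^2 for the
# computational basis illustrate the Born rule stated above.
psi = np.array([1, 1j, 0, 1], dtype=complex)
psi = psi / np.linalg.norm(psi)           # normalize the state vector

basis = np.eye(4, dtype=complex)          # orthonormal basis {|b_i>}
probs = np.abs(basis.conj() @ psi) ** 2   # Born-rule outcome probabilities
assert np.isclose(probs.sum(), 1.0)       # probabilities sum to one

# Composite systems: |psi, phi> := |psi> (tensor) |phi>
phi = np.array([1, -1], dtype=complex) / np.sqrt(2)
composite = np.kron(psi, phi)
assert np.isclose(np.linalg.norm(composite), 1.0)
```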
Quantum computation now consists in the evolution of quantum states. Performing a computational step on an n-qubit state corresponds to applying a 2^n × 2^n unitary transformation to the current quantum state. (The most relevant example of such unitary gates in our context will be the (biased) quantum Fourier transform discussed in more detail in Sect. 2.4.) As the outcome of a quantum computation is supposed to be classical, the final step of our computation is a measurement, so that the final output is a sample from the corresponding measurement statistics.
We will also use some standard notions from (quantum) information theory. For example, we denote the Shannon entropy of a random variable X by H(X), the conditional entropy of a random variable X given Y as H(X|Y), and the mutual information between random variables X and Y as I(X : Y). Similarly, the von Neumann entropy of a quantum state ρ will be denoted as S(ρ), and the mutual information for a bipartite quantum state ρ_AB as I(ρ_AB) = I(A : B). Standard results on these quantities which will enter our discussion can, e.g., be found in [20].
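For concreteness, the classical quantities can be computed as follows (a minimal sketch; the probability tables are illustrative, and the quantum analogues would replace probability vectors by density-operator spectra):

```python
import numpy as np

def shannon_entropy(p):
    """H(X) in bits for a probability vector p."""
    p = np.asarray(p, dtype=float)
    nz = p[p > 0]                        # convention: 0 log 0 = 0
    return float(-(nz * np.log2(nz)).sum())

def mutual_information(pxy):
    """I(X:Y) = H(X) + H(Y) - H(X,Y) for a joint distribution matrix pxy."""
    px = pxy.sum(axis=1)
    py = pxy.sum(axis=0)
    return shannon_entropy(px) + shannon_entropy(py) - shannon_entropy(pxy.ravel())

# Perfectly correlated bits carry 1 bit of mutual information
assert np.isclose(mutual_information(np.array([[0.5, 0.0], [0.0, 0.5]])), 1.0)
# Independent uniform bits carry none
assert np.isclose(mutual_information(np.full((2, 2), 0.25)), 0.0)
```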

Basics of learning theory
Next we describe the model of exact learning. In classical exact learning for an input space X, a target space {0, 1}, and a concept class F ⊂ {0, 1}^X, a learning algorithm receives as input labeled training data {(x_i, f(x_i))}_{i=1}^m for some (to the learner) unknown f ∈ F, where the x_i are drawn independently according to some probability distribution D on X which is known to the learner. The goal of the learner is to exactly reproduce the unknown function f from such training examples with high success probability.
We can formalize this as follows: we call a concept class F exactly learnable if there exist a learning algorithm A and a map m_F : (0, 1) → N s.t. for every D ∈ Prob(X) (where Prob(X) is the set of all probability measures on X), every f ∈ F and every δ ∈ (0, 1), running A on training data of size m ≥ m_F(δ) drawn according to D and f yields, with probability ≥ 1 − δ (w.r.t. the choice of training data), a hypothesis h s.t. h(x) = f(x) for all x ∈ X. The smallest such map m_F is called the sample complexity of exactly learning F.
Note that this definition of learning captures the information-theoretic challenge of the learning problem in the sample complexity, but it does not refer to the computational complexity of learning. The focus on sample complexity is typical in statistical learning theory. Hence, also our results will be formulated in terms of sample complexity bounds. As we give explicit algorithms, these results directly imply bounds on the computational complexity; however, we will not discuss these in any detail.
Note also that the exact learning model differs from the well-known PAC ("probably approximately correct") model, introduced by [26], in two ways. First, whereas the PAC model only requires the learner to approximate the unknown function with high probability, we require it to reproduce the function exactly; in other words, we set the accuracy in PAC learning to 0. Second, whereas in the PAC scenario the learner does not know the underlying distribution, we assume it to be fixed and known in advance. A short discussion on how to relax this restriction can be found in Sect. A.3.
The quantum exact learning model differs from the classical model in the form of the training data and the allowed form of computation. Namely, in quantum exact learning, the training data consists of m copies of the quantum example state |ψ_f⟩ = Σ_{x∈X} √(D(x)) |x, f(x)⟩, and this training data is processed by quantum computational steps. With this small change, the above definitions of exact learnability and sample complexity carry over analogously.
We conclude this introduction with a concentration result that has proven to be useful throughout learning theory.
Lemma 1 (Hoeffding's Inequality [15], compare also Theorem 2.2.6 in [27]) Let Z_1, ..., Z_n be real-valued independent random variables taking values in closed and bounded intervals [a_i, b_i], respectively. Then for every ε > 0,
P[Σ_{i=1}^n (Z_i − E[Z_i]) ≥ ε] ≤ exp(−2ε² / Σ_{i=1}^n (b_i − a_i)²).
This directly implies (after replacing Z_i with −Z_i) that
P[Σ_{i=1}^n (Z_i − E[Z_i]) ≤ −ε] ≤ exp(−2ε² / Σ_{i=1}^n (b_i − a_i)²).
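A quick numerical sanity check of the first inequality, with illustrative parameters (Z_i uniform on [0, 1], so a_i = 0 and b_i = 1):

```python
import numpy as np

rng = np.random.default_rng(0)

# Check P[sum(Z_i - E[Z_i]) >= eps] <= exp(-2 eps^2 / sum((b_i - a_i)^2))
# for Z_i uniform on [0, 1]; n, eps and the trial count are illustrative.
n, eps, trials = 50, 5.0, 20000
Z = rng.random((trials, n))
deviations = Z.sum(axis=1) - n * 0.5      # centered sums
empirical = np.mean(deviations >= eps)    # empirical tail probability
bound = np.exp(-2 * eps**2 / n)           # Hoeffding bound, here exp(-1)

assert empirical <= bound
```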

μ-biased Fourier analysis of Boolean functions
We now give the basic ingredients of μ-biased Fourier analysis over the Boolean cube {−1, 1}^n. For more details, the reader is referred to [21]. For a bias vector μ ∈ (−1, 1)^n, the μ-biased product distribution D_μ on {−1, 1}^n is given by D_μ(x) = ∏_{i=1}^n (1 + μ_i x_i)/2. Thus, a positive μ_i tells us that at the ith position the distribution is biased towards +1, a negative μ_i tells us that at the ith position the distribution is biased towards −1. For μ = 0 . . . 0, we simply obtain the uniform distribution on {−1, 1}^n. The absolute value of μ_i quantifies the strength of the bias in the ith component. We call D_μ c-bounded for c ∈ (0, 1] if 1 − |μ_i| ≥ c holds for all 1 ≤ i ≤ n. Assuming the underlying product distribution to be c-bounded thus corresponds to assuming that the bias is not arbitrarily strong. Hence, we will in the following express notions of "small" or "large" bias either in terms of the bias vector μ or in terms of the c-boundedness constant.

μ-biased quantum Fourier sampling
We now turn to the description of the quantum algorithm for μ-biased quantum Fourier sampling, which constitutes the basic ingredient of our learning algorithms and which, to our knowledge, was first presented in [17]. There the authors demonstrate that the μ-biased Fourier transform for a c-bounded D_μ with c ∈ (0, 1] can be implemented on a quantum computer as an n-qubit unitary, the μ-biased quantum Fourier transform. In the same way as the unbiased quantum Fourier transform can be used for quantum Fourier sampling, this μ-biased version yields a procedure to sample from the μ-biased Fourier spectrum of a function using a quantum computer. We describe the corresponding procedure in Algorithm 1.

Algorithm 1 μ-biased Quantum Fourier Sampling
Input: copies of the quantum example state for a function g : {−1, 1}^n → {−1, 1}. Output: j ∈ {0, 1}^n, observed with probability proportional to (ĝ_μ(j))². Success Probability: One can show that this algorithm indeed works as claimed by analyzing the transformation of the quantum state throughout the steps of the algorithm and making use of the orthonormality of the basis. This is the content of the following
Lemma 2 (Lemma 3 in [17]) Let g : {−1, 1}^n → {−1, 1}. Then with probability (ĝ_μ(j))²/2, Algorithm 1 outputs the string j ∈ {0, 1}^n.
Proof The proof can be found in [17]; we reproduce it in "Appendix B."
This result allows us to generalize results based on quantum Fourier sampling w.r.t. the uniform distribution. In particular, we will apply it to obtain a generalization of the Bernstein-Vazirani algorithm.
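To make the transform concrete, the following numerical sketch builds a candidate one-qubit μ-biased Fourier transform with entries √(D_μ(x)) φ_{μ,j}(x), where φ_{μ,1}(x) = (x − μ)/√(1 − μ²) is the standard μ-biased basis function; since the explicit matrix from [17] is not reproduced above, this is our reconstruction under those assumptions, not the paper's implementation:

```python
import numpy as np

def biased_qft(mu):
    """One-qubit mu-biased Fourier transform (our reconstruction):
    entry (j, x) is sqrt(D_mu(x)) * phi_{mu,j}(x), with x in {+1, -1},
    phi_{mu,0} = 1 and phi_{mu,1}(x) = (x - mu) / sqrt(1 - mu^2)."""
    xs = np.array([1.0, -1.0])
    d = (1 + mu * xs) / 2                       # D_mu(x)
    phi = np.array([np.ones(2), (xs - mu) / np.sqrt(1 - mu**2)])
    return np.sqrt(d) * phi                     # rows indexed by j

def biased_qft_n(mus):
    """n-qubit transform as a tensor product of one-qubit transforms."""
    U = np.array([[1.0]])
    for mu in mus:
        U = np.kron(U, biased_qft(mu))
    return U

# Orthonormality of the phi_{mu,j} w.r.t. D_mu makes the transform unitary
U = biased_qft_n([0.3, -0.5])
assert np.allclose(U @ U.T, np.eye(4))
# For mu = 0 the one-qubit transform reduces to the Hadamard gate
assert np.allclose(biased_qft(0.0), np.array([[1, 1], [1, -1]]) / np.sqrt(2))
```

The μ = 0 check is a useful consistency test: the unbiased special case must reproduce the familiar transform used in the standard Bernstein-Vazirani setting.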

The pretty good measurement
A basic problem in quantum information is that of distinguishing quantum states.We now describe a useful tool in this context, namely a measurement that is guaranteed to have a "pretty good" success probability to correctly identify an unknown state from a known ensemble.
Suppose that Alice (A) chooses one among m pure states |ψ_i⟩ ∈ C^d according to probabilities p_i ∈ [0, 1], where p_i ≥ 0 and Σ_{i=1}^m p_i = 1, and then sends the state to Bob (B). B wants to identify the state by performing a POVM measurement A. Let E = {(p_i, |ψ_i⟩)}_{i=1,...,m} be the ensemble describing A's preparation procedure, and denote B's optimal success probability by P_opt := max_POVM A P_A, where P_A := Σ_{i=1}^m p_i ⟨ψ_i|A_i|ψ_i⟩ for a POVM A = {A_i}_{i=1,...,m}. Hausladen and Wootters [14] suggested a canonical form for a measurement for state discrimination, which is now usually referred to as the "pretty good measurement" (PGM) corresponding to the ensemble E. It is defined in the following way: First let |ψ'_i⟩ := √(p_i) |ψ_i⟩ be the states renormalized according to their respective probabilities. The density operator of the ensemble is ρ := Σ_{i=1}^m |ψ'_i⟩⟨ψ'_i|, and we write ρ^{−1/2} for the inverse square root taken only over the nonzero eigenvalues of ρ. Now the PGM is the POVM {|ν_i⟩⟨ν_i|}_{i=1,...,m} with |ν_i⟩ := ρ^{−1/2} |ψ'_i⟩. The "pretty good" performance of the PGM was proved in [6]:
Theorem 1 For the PGM measurement defined above, it holds that P_PGM ≥ (P_opt)².
Another useful property of the PGM is that the corresponding success probability can be computed from the Gram matrix of the ensemble as follows:
Lemma 3 The success probability of the PGM for an ensemble E = {(p_i, |ψ_i⟩)}_{i=1,...,m} can be written as P_PGM = Σ_{i=1}^m ((√G)(i, i))², where G is the Gram matrix with entries G(i, j) = √(p_i p_j) ⟨ψ_i|ψ_j⟩ for 1 ≤ i, j ≤ m, and √G denotes the positive semidefinite square root.
Proof This result can be shown by direct computation using the definition of the PGM and the uniqueness of the positive square root of a positive matrix.
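The identity of Lemma 3 can also be checked numerically; the sketch below computes the PGM success probability once directly via ρ^{−1/2} and once from the Gram matrix, for a small randomly chosen ensemble (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(2)

def psd_power(M, alpha):
    """M^alpha taken over the nonzero eigenvalues of a Hermitian PSD matrix."""
    w, V = np.linalg.eigh(M)
    wp = np.zeros_like(w)
    pos = w > 1e-12
    wp[pos] = w[pos] ** alpha
    return (V * wp) @ V.conj().T

# Random ensemble of m pure states in C^d with uniform prior
d, m = 3, 3
states = rng.normal(size=(m, d)) + 1j * rng.normal(size=(m, d))
states /= np.linalg.norm(states, axis=1, keepdims=True)
prim = states / np.sqrt(m)                 # |psi_i'> = sqrt(p_i) |psi_i>

rho = prim.T @ prim.conj()                 # ensemble density operator
rho_inv_sqrt = psd_power(rho, -0.5)

# Direct PGM success probability: sum_i |<psi_i'| rho^{-1/2} |psi_i'>|^2
direct = sum(abs(prim[i].conj() @ rho_inv_sqrt @ prim[i]) ** 2 for i in range(m))

# Lemma 3: the same value from the Gram matrix G(i, j) = <psi_i'|psi_j'>
G = prim.conj() @ prim.T
via_gram = np.sum(np.real(np.diagonal(psd_power(G, 0.5))) ** 2)

assert np.isclose(direct, via_gram)
```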

The learning problem
We now describe the learning task which we aim to understand. For a ∈ {0, 1}^n, define the linear function f^(a) : {−1, 1}^n → {0, 1}, f^(a)(x) := Σ_{i=1}^n a_i (1 − x_i)/2 mod 2. When we observe that (1 − x_i)/2 is simply the bit-description of x_i, it becomes clear that f^(a) computes the parity of the entries of the bit-description of x at the positions at which a has a 1-entry. To ease readability, we will write x̃_i = (1 − x_i)/2. The classical task which inspires our problem is the following: Given a set of examples {(x_i, f^(a)(x_i))}_{i=1}^m, where the x_i are drawn i.i.d. according to D_μ, determine the string a with high success probability. Here, we assume prior knowledge of the underlying distribution, namely that it is a c-bounded product distribution as introduced in Sect. 2.4. This means that we are considering a problem of exact learning from examples with instances drawn from a distribution that is known to the learner in advance.
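In code, these target functions read as follows (a direct transcription of the parity description above):

```python
import numpy as np

def f_a(a, x):
    """f^(a)(x): parity of the bit-description (1 - x_i)/2 of x at the
    positions where a_i = 1, for x in {-1, 1}^n and a in {0, 1}^n."""
    bits = (1 - np.asarray(x)) // 2        # x_i = +1 -> bit 0, x_i = -1 -> bit 1
    return int(np.dot(a, bits) % 2)

a = np.array([1, 0, 1])
assert f_a(a, [-1, 1, 1]) == 1             # only position 0 contributes a 1
assert f_a(a, [-1, 1, -1]) == 0            # positions 0 and 2 cancel mod 2
```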
Classically, as we show in Sect. 6, successfully solving the task requires a number of examples that grows at least linearly in n. If we consider a version of this problem with noisy training data, then known classical algorithms perform worse both w.r.t. sample complexity and running time. For example, [18] exhibits an algorithm with polynomial (superlinear) sample complexity but barely subexponential runtime (both w.r.t. n).
The step to the quantum version of this problem is now the same as from classical to quantum exact learning. This means that the training data is given as m copies of the quantum example state |ψ_a⟩ = Σ_{x∈{−1,1}^n} √(D_μ(x)) |x, f^(a)(x)⟩, and the learner is allowed to use quantum computation to process the training data. The goal of the quantum learner remains that of outputting the unknown string a with high success probability.

A generalized Bernstein-Vazirani algorithm
To understand how μ-biased quantum Fourier sampling can help us with this learning problem, we first compute the μ-biased Fourier coefficients of g^(a) := (−1)^(f^(a)), with f^(a) for a ∈ {0, 1}^n the linear functions defined in Sect. 3.

Lemma 4
Let a ∈ {0, 1}^n, g^(a) := (−1)^(f^(a)) and μ ∈ (−1, 1)^n. Then the μ-biased Fourier coefficients of g^(a) satisfy: ĝ^(a)_μ(j) = 0 unless j_i ≤ a_i holds for all 1 ≤ i ≤ n, and in that case
ĝ^(a)_μ(j) = ∏_{i: a_i=1, j_i=0} μ_i · ∏_{i: a_i=1, j_i=1} √(1 − μ_i²).
We can reformulate this as ĝ^(a)_μ(j) = ∏_{i=1}^n (1 − j_i(1 − a_i)) · μ_i^{a_i(1−j_i)} · (√(1 − μ_i²))^{a_i j_i}.
Proof We first observe that all the "objects of interest," namely the probability distribution D_μ, the basis functions φ_{μ,j}, and the target function g^(a), factorize. This implies that also the μ-biased Fourier coefficients factorize, i.e., we have ĝ^(a)_μ(j) = ∏_{i=1}^n (ĝ^(a_i))_{μ_i}(j_i). Therefore we only have to study the case n = 1 in detail, and the general result then follows. In this case, we have f^(a)(x) = a(1 − x)/2, i.e., g^(a)(x) = x^a. By plugging in, we now obtain ĝ^(0)_μ(0) = 1, ĝ^(0)_μ(1) = 0, ĝ^(1)_μ(0) = E_{x∼D_μ}[x] = μ, and ĝ^(1)_μ(1) = E_{x∼D_μ}[x (x − μ)/√(1 − μ²)] = √(1 − μ²), which is exactly the claim for n = 1.
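The factorization can be verified by brute force for small n; the sketch below computes ĝ^(a)_μ(j) = E_{x∼D_μ}[g^(a)(x) φ_{μ,j}(x)] directly, with φ_{μ,j}(x) = ∏_i ((x_i − μ_i)/√(1 − μ_i²))^{j_i} (the bias vector is illustrative):

```python
import numpy as np
from itertools import product

def biased_coeff(a, j, mu):
    """Brute-force mu-biased Fourier coefficient of g^(a)(x) = prod_{i: a_i=1} x_i,
    computed as the expectation E_{D_mu}[g(x) phi_{mu,j}(x)]."""
    total = 0.0
    for x in product([1, -1], repeat=len(a)):
        x = np.array(x, dtype=float)
        d = np.prod((1 + mu * x) / 2)                          # D_mu(x)
        g = np.prod(x[np.array(a) == 1]) if any(a) else 1.0    # g^(a)(x)
        phi = np.prod(((x - mu) / np.sqrt(1 - mu**2)) ** j)    # phi_{mu,j}(x)
        total += d * g * phi
    return total

mu = np.array([0.3, -0.4])
# On the support j <= a (componentwise), the coefficient factorizes into
# mu_i where j_i = 0 and sqrt(1 - mu_i^2) where j_i = 1; otherwise it vanishes.
assert np.isclose(biased_coeff((1, 1), np.array([0, 1]), mu),
                  mu[0] * np.sqrt(1 - mu[1] ** 2))
assert np.isclose(biased_coeff((0, 1), np.array([1, 1]), mu), 0.0)
```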
For clarity, we write down explicitly the algorithm which we obtain as a generalization of the Bernstein-Vazirani algorithm to a μ-biased product distribution as Algorithm 2. The generalization compared to the standard Bernstein-Vazirani algorithm consists only in going from the uniform to a more general product distribution, which gives rise to different observation probabilities.

Algorithm 2 Generalized Bernstein-Vazirani algorithm
Success Probability: see Theorem 2 below. If the measured label qubit satisfies j_{n+1} = 1, this corresponds to a success of the algorithm.
7: Output o = j_1 . . . j_n.

8: end if
We now show that the output probabilities of Algorithm 2 are as claimed in its description. This follows directly by combining Lemma 2 on the workings of μ-biased quantum Fourier sampling with Lemma 4 on the μ-biased Fourier coefficients of our target functions and is the content of the following Theorem 2: let a ∈ {0, 1}^n and μ ∈ (−1, 1)^n. Then step 3 of Algorithm 2 provides an outcome |j_1 . . . j_{n+1}⟩ satisfying properties (i)-(v), which relate the observation probabilities to the μ-biased Fourier coefficients of Lemma 4. Note that (v) can be trivial if the bias is too strong. This observation already hints at why we later use different procedures for arbitrary and for small bias.
We also want to point out that in the case of no bias (i.e., μ = 0), Algorithm 2 simply reduces to the well-known Bernstein-Vazirani algorithm [8].

Quantum sample complexity upper bounds
This section contains the description of two procedures for solving the task of learning an unknown Boolean linear function from quantum examples w.r.t. a product distribution. (Here, we assume perfect quantum examples; noisy examples will be taken into consideration in the next section.) It is subdivided into an approach which is applicable for arbitrary (albeit not full) bias in the product distribution and a strategy which produces better results but is only valid for small bias.

Arbitrary bias
As in the case of learning w.r.t. the uniform distribution, we intend to run the generalized Bernstein-Vazirani algorithm multiple times as a subroutine and then use our knowledge of the outcome of the subroutine together with probability-theoretic arguments. The main difficulty compared to the case of an example state arising from the uniform distribution lies in the fact that whereas an observation of j_{n+1} = 1 when performing the standard Bernstein-Vazirani algorithm guarantees that j_1 . . . j_n equals the desired string, this is not true in the μ-biased case. Hence, we have to develop a different procedure of learning from the outcomes of the subroutine. For this purpose, we propose Algorithm 3.

Algorithm 3 Amplified Generalized Bernstein-Vazirani algorithm -Version 1
Input: m copies of the quantum example state |ψ_a⟩, with m chosen as in Theorem 3 for a suitable constant C > 0.
1: for l = 1, . . . , m do
2: Run Algorithm 2 on the lth copy of |ψ_a⟩, store the output as o^(l).
3: end for
Let o be the componentwise maximum over the outputs of the successful runs. If no run of Algorithm 2 succeeds, output o = ⊥.
The amplification procedure in Algorithm 3 differs from the majority vote in the standard Bernstein-Vazirani learning procedure (w.r.t. the uniform distribution) as used in [11,13] in the following two ways: Instead of working on the level of the whole string, we use a componentwise strategy.And instead of taking a majority vote over observed values, we take a maximum to account for the asymmetry in the probability of an observation error (see Theorem 2).
We now show that the number of copies postulated in Algorithm 3 is actually sufficient to achieve the desired success probability.
First, we bound the probability of the algorithm outputting ⊥ (i.e., of each subroutine failing), where the last step of the bound uses Theorem 2 and the fact that the training data consists of independent copies of |ψ_a⟩, i.e., is given as a product state. The choice of m guarantees that this first term is ≤ δ/2 (if we choose the constant C > 0 sufficiently large). Now we bound the second term in Eq. (5.1). We make the following observation: Suppose 1 ≤ i ≤ n is s.t. a_i = 1. As the Fourier coefficients, and with them the output probabilities, factorize, the probability of Algorithm 2 outputting a string j_1 . . . j_n with j_i = 1 = a_i is simply the probability of Algorithm 2, applied to only the subsystem state of |ψ_a⟩ corresponding to the ith and the (n + 1)st subsystem, outputting a 1. By Theorem 2, this probability is (1 − μ_i²)/2 ≥ c(2 − c)/2. Hence, assuming a_i = 1, the probability of not observing a 1 at the ith position in any of the m runs of Algorithm 2 is (1 − (1 − μ_i²)/2)^m. So using the union bound over the events [a_i = 1 and in m runs no 1 is observed at the ith entry], the choice of m guarantees that this second term is ≤ δ/2 as well (if we choose the constant C > 0 sufficiently large).
We now combine this with Eq. (5.1) and obtain an overall failure probability of at most δ, which finishes the proof.
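The componentwise amplification argument can be illustrated by a stylized classical simulation: we simply assume that each run of the subroutine reveals a 1-entry of a with some fixed probability p (a stand-in for the per-run probability lower-bounded via Theorem 2) and never produces a spurious 1, and we decode by taking the componentwise maximum; all parameters are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)

def amplified_decode(a, p, m):
    """Componentwise-maximum decoding over m simulated subroutine runs:
    each run flags position i with probability p if a_i = 1, never if a_i = 0."""
    runs = (rng.random((m, a.size)) < p) & (a == 1)
    return runs.max(axis=0).astype(int)

n, p, delta = 64, 0.3, 0.05
a = rng.integers(0, 2, size=n)
m = int(np.ceil(np.log(n / delta) / p))     # union-bound choice, m = O(ln(n/delta))

trials = 200
successes = sum(np.array_equal(amplified_decode(a, p, m), a) for _ in range(trials))
assert successes >= 0.9 * trials            # recovers a with high probability
```

This mirrors the union-bound structure of the proof: each 1-entry is missed with probability (1 − p)^m ≤ δ/n, so all entries are recovered with probability ≥ 1 − δ.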

Remark 1
We want to comment shortly on the dependence of the sample complexity bound on the c-boundedness constant by considering extreme cases. As c → 0, i.e., as we allow more and more strongly biased distributions, the sample complexity goes to infinity. This reflects the fact that in the case of a fully biased underlying product distribution, only a single bit of information about a can be extracted, so exactly learning the string a is (in general) not possible. For c = 1, i.e., the case of no bias, we simply obtain that O(ln(n) + ln(2/δ)) copies of the quantum example state are sufficient. Note that this does not coincide with the bound obtained for the standard Bernstein-Vazirani procedure, which is independent of n. (This can easily be shown using Lemma 1.) This discrepancy is due to the difference in "amplification procedures." Namely, in Algorithm 3 we do not explicitly make use of the knowledge that, given j_{n+1} = 1, we know the probability of j_1 . . . j_n = a_1 . . . a_n, because, whereas for μ = 0 this probability equals 1, for μ ≠ 0 it can become small. Hence, for μ ≠ 0 our algorithm introduces an additional procedure to deal with the uncertainty about j_1 . . . j_n even knowing j_{n+1}, and we see in the proof that this yields the additional ln(n) term. In the next subsection, we describe a way to get rid of exactly that ln(n) term for "small" bias.

Small bias
In this subsection, we want to study the case in which (v) of Theorem 2 gives a good bound. Namely, throughout this subsection we will assume that the c-boundedness parameter is such that the bias satisfies |μ_i| ≤ O(1/√n) for all i. This assumption will allow us to apply a different procedure to learn from the output of Algorithm 2 and thus obtain a different bound on the sample complexity of the problem. Note, however, that this requirement becomes more restrictive with growing n and can in the limit n → ∞ only be satisfied by c = 1, i.e., for the underlying distribution being uniform. Also, we will from now on refer to c as the c-boundedness parameter because the name "constant" would hide the n-dependence.
Our procedure for the case of small bias is given in Algorithm 4.
Theorem 4 states that O(ln(1/δ)) copies of the quantum example state |ψ_a⟩ are sufficient to guarantee that, with probability ≥ 1 − δ, Algorithm 4 outputs the string a.
Note that due to the required lower bound on c the sample complexity upper bound basically loses its n-dependence.This is different from the result of Theorem 3, where n explicitly entered the upper bound.
Proof By Theorem 2, we have P[j_{n+1} = 1] = 1/2. Hence, the probability of observing j_{n+1} = 1 in at most k − 1 of the m runs of Algorithm 2 is given by P[X ≤ k − 1] for X ∼ Bin(m, 1/2), where Bin denotes a binomial distribution.
Next we assume k ≤ m/2 (this will be justified later in the proof) and use Hoeffding's inequality (Lemma 1) to bound this probability. We will now determine the number of observations of j_{n+1} = 1 which is required to guarantee that the majority string is correct with high probability. Assume that we observe j_{n+1} = 1 in k runs of Algorithm 2, k ∈ 2N. (The latter assumption clearly does not significantly change the number of copies.) Using (v) from Theorem 2, we bound the probability of the majority string being wrong, where the second inequality in that bound uses that the majority string can only be wrong if in at least half of the runs in which we observed j_{n+1} = 1 there was some error in the remaining string.
Next we use Hoeffding's inequality again and obtain, using our assumption on the bias, a bound on this failure probability. We now set this last expression ≤ δ/2 for δ ∈ (0, 1) and rearrange the inequality. By finding the zeros of the resulting quadratic function, we get to the sufficient sample size; this is in particular guaranteed by the choice of m in the statement of the theorem. Note that this lower bound on m in particular implies m ≥ 2k, as required earlier in the proof. This proves the claim of the theorem thanks to the union bound.
Morally speaking, Theorem 4 shows that for product distributions which are close enough to the uniform distribution, the sample complexity upper bound is the same as for the unbiased case. We conjecture that there is an explicit bias threshold above which this sample complexity cannot be reached (see the discussion in Sect. 6), but we have not yet succeeded in identifying such a critical value.
In this section, we have discussed the case of quantum training data that perfectly represents the target function in a superposition state. Similar results can be proved in the case of noisy quantum training data. As the reasoning is analogous to the one presented here, the details are deferred to "Appendix A."

Sample complexity lower bounds
After proving upper bounds on the number of required quantum examples by exhibiting explicit learning procedures in the previous section, we now study the converse question of sample complexity lower bounds. We will prove both classical and quantum sample complexity lower bounds and then relate them to the above results. Our proof follows a state-discrimination-based strategy from [3].

Classical sample complexity lower bounds
We first prove a sample complexity lower bound for the classical version of our learning problem which, upon comparison with our quantum sample complexity upper bounds, shows the advantage of quantum examples over classical training data in this setting. Neither the result nor the proof strategy is new, but we include them for completeness.
Theorem 5 Let A be a classical learning algorithm and let m ∈ N be such that, upon input of m examples of the form (x_i, f^{(a)}(x_i)), with the x_i drawn i.i.d. according to D_μ, with probability ≥ 1 − δ w.r.t. the choice of training data, A outputs the string a. Then m ≥ Ω(n).
Proof Let A be a random variable uniformly distributed on {0, 1}^n. (A describes the underlying string from the initial perspective of the learner.) Let B = (B_1, ..., B_m) be a random variable describing the training data corresponding to the underlying string. Our proof has three main steps: First, we prove a lower bound on I(A : B) from the learning requirement. Second, we observe that I(A : B) ≤ m · I(A : B_1). And third, we prove an upper bound on I(A : B_1). Combining the three steps then leads to a lower bound on m.
We start with the mutual information lower bound. Let h(B) ∈ {0, 1}^n denote the random variable describing the output hypothesis of the algorithm A upon input of training data B. Let Z = 1_{h(B)=A}. By the learning requirement we have P[Z = 1] ≥ 1 − δ and thus H(Z) ≤ H(δ). Therefore we obtain I(A : B) ≥ (1 − δ)n − H(δ). We now show that from m examples we can gather at most m times as much information as from a single example. Here we directly cite from [3], namely I(A : B) ≤ m · I(A : B_1). Here, the second step uses independence of the B_i conditioned on A, the third step uses subadditivity of the Shannon entropy, and the final step uses that the distributions of (A, B_i) are the same for all 1 ≤ i ≤ m. We come to the upper bound on the mutual information. Write B_1 = (X, L) for X ∈ {−1, 1}^n and L ∈ {0, 1}, i.e., with probability D_μ(x) we have (X, L) = (x, f^{(a)}(x)). Note that I(A : X) = 0 because X and A are independent random variables. Also, I(A : L | X = 1...1) = 0 because f^{(a)}(1...1) = 0 for all a ∈ {0, 1}^n, and for x ∈ {−1, 1}^n \ {1...1} we have I(A : L | X = x) ≤ 1. Here, the first step is due to the fact that f^{(a)}(x) does not depend on the entries a_j with x_j = 1, and the third step follows because A_{{i | x_i = −1}} is uniformly distributed on a set of size 2^{|{i | x_i = −1}|} and f^{(a)} assigns the labels 0 and 1 to half of the elements of that set, respectively. This now implies I(A : B_1) ≤ 1. Here, the first step is due to the chain rule for mutual information and the last step simply uses the fact that D_μ defines a probability distribution. Now we combine our upper and lower bounds on the mutual information and obtain m ≥ (1 − δ)n − H(δ) = Ω(n), as claimed.
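The third step of the proof can be illustrated numerically for small n. The following Python sketch is ours (not part of the paper); it assumes the uniform case μ = 0 and n = 3, computes I(A : B_1) by direct enumeration, and recovers the predicted value Σ_{x ≠ 1...1} D_μ(x) = 7/8 ≤ 1.

```python
from itertools import product
from math import log2

n = 3
def f(a, x):
    # f^{(a)}(x): parity of the a_i over positions where x_i = -1
    return sum(ai for ai, xi in zip(a, x) if xi == -1) % 2

strings = list(product((0, 1), repeat=n))   # candidate hidden strings a
inputs = list(product((-1, 1), repeat=n))   # instances x, uniform (mu = 0)

# Joint distribution of (A, B_1) with A uniform and B_1 = (X, f^{(A)}(X)).
joint = {}
for a in strings:
    for x in inputs:
        joint[(a, x, f(a, x))] = (1 / 2 ** n) * (1 / 2 ** n)

def H(dist):
    return -sum(p * log2(p) for p in dist.values() if p > 0)

pa, pb = {}, {}
for (a, x, l), p in joint.items():
    pa[a] = pa.get(a, 0.0) + p
    pb[(x, l)] = pb.get((x, l), 0.0) + p

# I(A : B_1) = H(A) + H(B_1) - H(A, B_1); step 3 predicts
# sum_{x != (1,...,1)} D(x) = 7/8 here, and in particular <= 1.
mi = H(pa) + H(pb) - H(joint)
assert abs(mi - 7 / 8) < 1e-9
```

The all-ones instance is the only uninformative one, which is exactly why a single example carries strictly less than one bit of information about a.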

Remark 2
The result of Theorem 5 is intuitively clear: In order to identify the underlying string, the learning algorithm has to learn n bits of information. However, a condition of the form f^{(a)}(x) = l for x ∈ {−1, 1}^n, l ∈ {0, 1}, takes away at most one degree of freedom from the initial space {0, 1}^n for a, and thus from such an equality the algorithm can extract at most 1 bit of information. So at least about n examples will be required. This observation is thus neither new nor surprising. But we want to emphasize that this analysis works independently of the product structure of the underlying distribution D_μ.
If we compare the classical lower bound from Theorem 5 with our quantum upper bounds from Theorems 3 and 4, we conclude that quantum examples allow us to strictly outperform the best possible classical algorithm w.r.t. the number of required examples.

Quantum sample complexity lower bounds
We can use a similar argument to prove quantum sample complexity lower bounds. Note that steps 1 and 2 carry over with (almost) no changes. Only the analysis of step 3 changes significantly. Even though this proof strategy is possible, as in [3] it can be improved upon by an argument based on state discrimination. We will thus follow this same approach.
An n-independent quantum sample complexity lower bound is given in the following lemma.

Lemma 5 Let A be a quantum learning algorithm and let m ∈ N be such that, upon input of m copies of |ψ_a⟩, with probability ≥ 1 − δ, A outputs the string a. Then m ≥ Ω((1/c) ln(1/δ)).
Remark 3 Note that any quantum sample complexity lower bound also lower bounds the classical sample complexity. Hence, Lemma 5 also holds in the scenario of the previous subsection, which is why we did not discuss the δ-dependence there.
Proof As A is able to distinguish the quantum states |ψ_a⟩^⊗m and |ψ_b⟩^⊗m with success probability ≥ 1 − δ for any a ≠ b, standard bounds on the distinguishability of two quantum states apply. By our assumption on a and b, the overlap of the two states can be lower bounded in terms of c. We now combine this with our upper bound and rearrange to obtain the claimed bound on m, where we use an elementary inequality combined with ln(δ) ≤ 0.
We will compare this lower bound with our upper bound(s) from Sect. 5 later on. Now we turn to the n-dependent part of the sample complexity lower bound.

Theorem 6 Let |ψ_a⟩ be a quantum example state w.r.t. a product distribution D_μ with μ_i ≥ 1 − 1/ln(n) for all 1 ≤ i ≤ n. Let A be a quantum learning algorithm and let m ∈ N be such that, upon input of m copies of |ψ_a⟩, with probability ≥ 1 − δ, A outputs the string a, for 0 < δ ≤ 1/3. Then m ≥ Ω(ln(n)).
Before going into the detailed proof, we give an overview of its underlying idea. The learning assumption implies that A is able to identify a state from the ensemble E = {(2^{−n}, |ψ_a⟩^{⊗m})}_{a∈{0,1}^n} with success probability ≥ 1 − δ. Thus we will obtain a lower bound on m by proving an upper bound on the optimal success probability for this state identification task.
Recall that by Theorem 1, the optimal success probability can be upper bounded by the square root of the PGM success probability.Moreover, by Lemma 3, the latter can be computed via the Gram matrix of the ensemble.Thus, we now first study the Gram matrix and its square root and then use these results to bound the optimal success probability.
We first recall a well-known result on the diagonalization of matrices with a specific structure, namely matrices whose entries can be written as a function of the (bitwise) sum of the indices.

Proof The proof can be found in [3]; we reproduce it in "Appendix B."

We will later apply this result for G being the Gram matrix corresponding to the ensemble in our state identification task. Motivated by Lemma 3, we first use the diagonalization of such a matrix to explicitly compute the diagonal entries of the matrix square root.
Corollary 1 Let G ∈ R^{2^n × 2^n} be a matrix with entries given by G(a, b) = g(a + b) for a, b ∈ {0, 1}^n and a function g : {0, 1}^n → R. Then the diagonal entries of √G can be expressed via the Fourier coefficients of g; in particular, they are the same for every a ∈ {0, 1}^n.

Proof The proof can be found in [3].

In our scenario, the Gram matrix has entries of this form, which can, e.g., be shown by induction on n. In particular, we can write G_m(a, b) = f_m(a + b) for an appropriate function f_m. From now on, we will write |x| := d_H(x, 0). By Corollary 1, we can upper bound the diagonal entries of √G_m (and thus the PGM and the optimal success probability) by upper bounding the (unbiased) Fourier coefficients of f_m. To this end, we consider these coefficients for j ∈ {0, 1}^n and rewrite the corresponding expectations. This allows us to upper bound the Fourier coefficients of f_m. According to Lemma 6, this gives us an upper bound on the diagonal entries of the root of the Gram matrix, which in turn allows us to bound the PGM success probability. We combine this with our learning requirement and Theorem 1, and rearrange. For μ ≥ 1 − 1/ln(n), we now obtain (for n large enough) the claimed bound m ≥ Ω(ln(n)), and this finishes the proof.
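The diagonalization fact underlying the proof (Lemma 6) can be verified numerically for small n. The following Python sketch is illustrative only (not from the paper): it checks that H with H(a, b) = (−1)^{a·b}/√(2^n) diagonalizes any matrix of the form G(a, b) = g(a + b), with eigenvalues 2^n ĝ(a).

```python
from math import sqrt
from random import Random

n = 3
N = 2 ** n
rng = Random(0)
g = [rng.uniform(-1, 1) for _ in range(N)]   # arbitrary g: {0,1}^n -> R

parity = lambda z: bin(z).count("1") % 2

# G(a, b) = g(a + b), with bitwise addition mod 2 realized as XOR on indices.
G = [[g[a ^ b] for b in range(N)] for a in range(N)]
# H(a, b) = (-1)^{a . b} / sqrt(2^n); H is symmetric and equals its inverse.
H = [[(-1) ** parity(a & b) / sqrt(N) for b in range(N)] for a in range(N)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

M = matmul(H, matmul(G, H))   # = H G H^{-1}

# Unbiased Fourier coefficients: g_hat(a) = 2^{-n} sum_x g(x) (-1)^{a . x}.
ghat = [sum(g[x] * (-1) ** parity(a & x) for x in range(N)) / N
        for a in range(N)]

for a in range(N):
    for b in range(N):
        target = N * ghat[a] if a == b else 0.0
        assert abs(M[a][b] - target) < 1e-9
```

Since √G = H √Λ H with Λ = diag(2^n ĝ(a)), the diagonal entries of √G are indeed given by the Fourier coefficients of g, as used above.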
Note that this proof strategy also yields, for a strictly increasing function g : N → R_{>0} with lim_{n→∞} g(n) = ∞ and for a distribution D_μ with μ_i ≥ 1 − 1/g(n) for all 1 ≤ i ≤ n, the sample complexity lower bound Ω(g(n)) (for n large enough). This is consistent with the intuition that solving the learning problem becomes harder when the distribution is more strongly biased towards the uninformative instance with all entries equal to 1.
We now compare this lower bound to our previously obtained upper bounds. First, we consider the n-independent part of the bounds, comparing the upper bound of Theorem 4 with the lower bound of Lemma 5. We study this for δ ≪ 1 (high confidence) and c ≪ 1 (high bias); then a Taylor expansion shows that the two bounds do not match. Therefore we conjecture that the c-dependence of the upper bound arising from Theorem 4 is not optimal. Now we compare the bounds w.r.t. the n-dependence, i.e., we compare Theorem 3 with Theorem 6, and at first glance obtain matching ln(n) scaling. But in Theorem 6, we assumed that μ_i ≥ 1 − 1/ln(n) for all 1 ≤ i ≤ n. When considering values of μ lying on this threshold, we can rephrase this as a condition on the (then n-dependent) c-boundedness parameter, namely c ≤ 1/ln(n). So when honestly including the n-dependence of c, the comparison is not tight.
Finally, we want to point towards a second unsatisfactory aspect of our results. We provide an n-dependent quantum sample complexity lower bound for "large" bias and an n-independent quantum sample complexity upper bound for "small" bias. However, there is a large discrepancy between the obtained characterizations of "small" and "large" bias. That this already becomes relevant for moderate n can be seen in Fig. 1.
Hence, we did not succeed in identifying a bias threshold beyond which the sample complexity qualitatively differs from the unbiased case, but merely provided a region in which such a threshold would have to lie. To improve upon our results, it would be necessary to modify either the proof of Theorem 4 to allow for stronger bias or the proof of Theorem 6 to allow for weaker bias. In particular, it would be interesting to obtain a non-trivial quantum sample complexity lower bound for constant bias, i.e., without introducing n-dependence into the c-boundedness parameter. However, we currently do not see whether our proof strategies admit such an improvement.

Conclusion and outlook
In this paper, we extended a well-known quantum learning strategy for linear functions from the uniform distribution to biased product distributions. This approach naturally led to a distinction between a procedure for arbitrary (not full) bias and a procedure for small bias, the latter with a significantly better performance. Moreover, we showed that the second procedure is (to a certain degree) stable w.r.t. noise in the training data and in the performed quantum gates. Finally, we also provided lower bounds on the size of the training data required for the learning problem, both in the classical and in the quantum setting. The sample complexity upper and lower bounds in the case of no noise are summarized in Fig. 2. We want to conclude by outlining some open questions for future work:

- Can we identify a bias threshold s.t. the optimal sample complexity below the threshold differs qualitatively from the one above it?
- Is our learning procedure for small bias also stable w.r.t. different types of noise in the training data, e.g., malicious noise?
- Our explicit learning algorithms also give upper bounds on the computational complexity of our learning problem. Can we find corresponding lower bounds to facilitate a discussion of optimality w.r.t. runtime?
- Can we find more examples of learning tasks (i.e., function classes) where quantum training data yields an advantage w.r.t. sample and/or time complexity?
Acknowledgements Financial support from the TopMath Graduate Center of the TUM Graduate School at the Technische Universität München, Germany, and from the TopMath Program at the Elite Network of Bavaria is gratefully acknowledged.

Compliance with ethical standards
Conflict of interest The author declares that he has no conflict of interest.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

A Stability w.r.t. noise
Both algorithms presented in Sect. 5 implicitly assume that the quantum example state perfectly represents the underlying function and that all quantum gates performed during the computation are perfectly accurate. In this section, we relax these assumptions. We will do so separately, but our analysis shows that moderate noise in the training data and moderately faulty quantum gates can be tolerated at the same time.

A.1 Noisy training data
One of the most well-studied noise models in classical learning theory is that of random classification noise. Here, the training data are assumed to be such that with probability 1 − η, the learning algorithm obtains a correct example, and with probability η, the example's label is flipped. In [4], this is translated to a corresponding noisy quantum example state. We will only shortly comment on how to battle this type of noise with our learning strategy at the end of this subsection. Instead, our focus will be on a performance analysis of our algorithm in the case of noisy training data similar to [13]. This means that we now assume our quantum example state to be of a form in which the label is perturbed by ξ_x = Σ_{i=1}^n ξ^i_{x_i}, where the ξ^i_{x_i}, for 1 ≤ i ≤ n and x_i ∈ {−1, 1}, are independent random variables distributed according to Bernoulli distributions with parameters η_i (i.e., P[ξ^i_{x_i} = 1] = η_i), and addition is understood modulo 2. Here, we choose a noise model that is rather general, but we make an important restriction. Namely, we do not allow a noise ξ_x that depends in an arbitrary way on x; rather, we require the noise to have the specific sum structure ξ_x = Σ_{i=1}^n ξ^i_{x_i}. This requirement will later imply that also the noisy Fourier coefficients factorize. As this factorization is crucial for our analysis, with our strategy we cannot generalize the results of [13] on that more general noise model.
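Because of the sum structure, the overall label-flip probability P[ξ_x = 1] is the parity of independent Bernoulli bits and is therefore given by the standard "piling-up" formula (1 − Π_i(1 − 2η_i))/2; this is a general probabilistic fact, not a statement from the paper. A minimal Python check (ours, for illustration):

```python
def parity_flip_prob(etas):
    """Exact P[parity = 1] for a sum (mod 2) of independent Bernoulli(eta_i)
    bits, computed by folding in one bit at a time."""
    p = 0.0
    for eta in etas:
        p = p * (1 - eta) + (1 - p) * eta
    return p

def closed_form(etas):
    """Piling-up formula: P[parity = 1] = (1 - prod_i (1 - 2 eta_i)) / 2."""
    prod = 1.0
    for eta in etas:
        prod *= 1 - 2 * eta
    return (1 - prod) / 2

etas = [0.01, 0.05, 0.1, 0.02]
assert abs(parity_flip_prob(etas) - closed_form(etas)) < 1e-12
# As any single eta_i -> 1/2, the overall flip probability tends to 1/2:
assert abs(parity_flip_prob([0.5, 0.01]) - 0.5) < 1e-12
```

The second assertion mirrors the intuition discussed below: once one noise coordinate is completely random, the label carries no information.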
We first examine the result of applying the same procedure as in Algorithm 2 to a copy of a noisy quantum example state |ψ^noisy_a⟩. To simplify referencing, we write this down one more time as Algorithm 5, even though the procedure is exactly the same; only the form of the input changes.

Algorithm 5 Generalized Bernstein-Vazirani algorithm with noisy training data
Output and Success Probability: See Theorem 7.

Similarly to our previous analysis, we will first study the Fourier coefficients that are relevant for the sampling process in Algorithm 5. Let μ ∈ (−1, 1)^n. Then the μ-biased Fourier coefficients of the noisy labeling function g^{(a)} satisfy a factorized expression for y ∈ {0, 1}^n.

Proof The proof is analogous to the one of Lemma 4; see "Appendix B."

We now make a step analogous to the one from Lemma 4 to Theorem 2 in order to understand the output of Algorithm 5.

Theorem 7 Let |ψ^noisy_a⟩ be a noisy quantum example state, a ∈ {0, 1}^n, μ ∈ (−1, 1)^n. Then Algorithm 5 provides an outcome |j_1 ... j_{n+1}⟩ with properties analogous to those in Theorem 2; in particular, (iii) for any 1 ≤ i ≤ n, with probability 2η_i(1 − η_i), the noise affects the i-th coordinate of the outcome.

Note that in the scenario of Theorem 7 the underlying distribution D_μ is known to the algorithm, as μ is provided as part of the input (see Algorithm 5). Building on this subroutine, we will now describe, in Algorithm 6, an amplified procedure for moderate noise (made precise in Theorem 8), analogous to the one described in Sect. 5.2. Again, only the input changes, but we write the procedure down explicitly to simplify referencing. Theorem 8 shows that if the bias is not too strong and if the noise is not too random (i.e., the probability of adding a random 1 is either very low or very high), then learning is possible with essentially the same sample complexity as in the case without noise (compare Theorem 4). Note that the proof of Theorem 8 shows that the exact choices of the bounds (in our formulation c > 1 − 1/(2√n) and 2η_i(1 − η_i) < 1/(5n)) are flexible to some degree, with a trade-off: If we have a better bound on c, we can loosen our requirement on the η_i, and vice versa.

Theorem 8 Let |ψ^noisy_a⟩ be a noisy quantum example state with c-boundedness parameter c > 1 − 1/(2√n) and noise parameters satisfying 2η_i(1 − η_i) < 1/(5n) for all 1 ≤ i ≤ n. Then O(ln(1/δ)) copies of |ψ^noisy_a⟩ suffice to guarantee that, with probability ≥ 1 − δ, Algorithm 6 outputs the string a.
Also observe that the requirement of "not too random noise" is natural: If 2η_i(1 − η_i) → 1/2 or, equivalently, η_i → 1/2, then the label in the noisy quantum example state becomes completely random, and thus no information on the string a can be extracted from it. Our bound gives a quantitative version of this intuition.
Nevertheless, the restriction which we put on the noise can be considered quite strong because of its n-dependence. This can, however, be relaxed at the cost of a looser sample complexity upper bound. Namely, similarly to the difference between the proofs of Theorems 3 and 4, if we, e.g., only assume 2η_i(1 − η_i) < 1/5 for all 1 ≤ i ≤ n, we can first, for each coordinate separately, bound the probability of the noise variables becoming relevant in at least k/5 runs using Hoeffding's inequality, and then use the union bound. This will yield a quantum sample complexity upper bound with an n-dependent term of the form ln(n). Hence, if we assume a c-boundedness parameter strongly restricted as in Theorems 4 or 8, but obtain faulty training data states without an n-dependent noise bound as in Theorem 8, then we can still obtain a sample complexity upper bound with the same n-dependence as in Theorem 3.
Finally, as promised at the beginning of this subsection, we shortly describe how to use the ideas presented here in the case of random classification noise as in [4]. If the quantum learning algorithm has access to copies of the corresponding noisy quantum example state, then we observe that applying the μ-biased Fourier transform to the first n qubits and the standard Fourier transform to the last qubit shows that, compared to the scenario studied in Sect. 5, the probabilities of observing a certain string as measurement outcome are simply rescaled. So our analysis carries over almost directly. We do not give the detailed reasoning here but only mention that incorporating the rescaled probabilities basically changes the sample complexity upper bounds from the non-noisy case by a factor of 1/(η − 1/2)², which is again in accordance with the intuition that the learning task becomes hard, and eventually impossible, for η → 1/2.

A.2 Faulty quantum gates
We now turn to the (more realistic) setting where the quantum gates in our computation (i.e., the μ-biased quantum Fourier transforms) are not implemented exactly but only approximately. In this scenario, we obtain a bound on how much the outcome probabilities of the sampling subroutine can change.

Proof This follows from Theorem 2 because the outcome probabilities are the squares of the amplitudes, and thus the difference in outcome probabilities can be bounded by the 2-norm of the difference of the quantum states after applying the biased quantum Fourier transform and its approximate version, respectively.
Now we can proceed analogously to the proof strategy employed in Theorem 8 to derive Theorem 9: a number of copies of the quantum example state |ψ_a⟩ comparable to that in Theorem 4 (with a correction depending on the gate error ε) suffices to guarantee that, with probability ≥ 1 − δ, the algorithm run with the ε-approximate biased quantum Fourier transform outputs the string a.
In particular, the sample complexity upper bound from Theorem 4 remains basically untouched if quantum gates with small error are used.

A.3 The case of unknown underlying distributions
An interesting consequence of the result of the previous subsection is the possibility to drop the assumption of prior knowledge of the underlying product distribution, as was already observed in [17] for a similar scenario.The important observations towards this end are given in this subsection.

Lemma 9 (Lemma 5 in [17])
Let A = A_n ⋯ A_1 be a product of unitary operators A_j. Assume that for every A_j there exists an approximation Ã_j s.t. ‖A_j − Ã_j‖ ≤ ε_j. Then it holds that ‖A − Ã‖ ≤ Σ_{j=1}^n ε_j =: ε, i.e., the operator Ã := Ã_n ⋯ Ã_1 is an ε-approximation to A w.r.t. the operator norm.
Proof This can be proven by induction, using the triangle inequality and the fact that a unitary operator has operator norm equal to 1. For details, the reader is referred to [17].
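Lemma 9 (errors of a product of unitaries accumulate at most additively) can be illustrated in the simplest nontrivial case, products of 2×2 rotation matrices, where all operator norms have closed forms. The following Python sketch is ours; the angles and error values are arbitrary choices.

```python
from math import sin, cos, sqrt

def rot(theta):
    """2x2 rotation matrix R(theta), a real unitary."""
    return ((cos(theta), -sin(theta)), (sin(theta), cos(theta)))

def op_norm_diff(t1, t2):
    """Operator norm ||R(t1) - R(t2)||: for rotations this equals
    |e^{i t1} - e^{i t2}| = 2 |sin((t1 - t2) / 2)|."""
    return 2 * abs(sin((t1 - t2) / 2))

# Cross-check the closed form against the Frobenius norm, which for this
# conformal difference equals sqrt(2) times the operator norm.
d = [[rot(0.3)[i][j] - rot(1.0)[i][j] for j in range(2)] for i in range(2)]
fro = sqrt(sum(x * x for row in d for x in row))
assert abs(fro - sqrt(2) * op_norm_diff(0.3, 1.0)) < 1e-12

# Exact gates A_j = R(theta_j), faulty gates R(theta_j + e_j).
thetas = [0.3, 1.1, -0.7, 2.0]
errs = [0.01, -0.02, 0.015, 0.005]
eps_j = [op_norm_diff(t, t + e) for t, e in zip(thetas, errs)]

# A product of rotations is the rotation by the summed angles, so the
# total deviation is available exactly; Lemma 9 says it is <= sum eps_j.
actual = op_norm_diff(sum(thetas), sum(thetas) + sum(errs))
assert actual <= sum(eps_j) + 1e-12
```

In this commuting special case the errors can even partially cancel, which is why the additive bound of Lemma 9 is an inequality rather than an equality.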
Lemma 9 can be used to derive (compare again [17]) a corollary: if the bias vectors μ and μ̂ are close, then the corresponding biased quantum Fourier transforms are close in operator norm. The next lemma concerns approximating the bias parameter of an unknown product distribution from examples. (Compare the closing remark in Appendix A of [17].) For a product distribution D_μ with bias vector μ, O((8γ²n²/ε²) ln(n/δ)) examples drawn i.i.d. from D_μ (which can be obtained from copies of the quantum example state by measuring the corresponding subsystem) are sufficient to guarantee that, with probability ≥ 1 − δ, the empirical estimates μ̂_i are sufficiently accurate. As each component of a copy of the quantum example state can be measured separately, we see, using the union bound, that O((8γ²n²/ε²) ln(n/δ)) copies of the (possibly noisy) quantum example state suffice. Now we can apply the previous corollary to finish the proof.
If we now combine this result with Theorem 9, we obtain a sample complexity upper bound for our learning problem without assuming the underlying distribution to be known in advance.
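The estimation step behind Lemma 10 can be sketched with the basic Hoeffding-plus-union-bound sample count m = O((1/ε²) ln(n/δ)); note that this simplified count differs from the γ-dependent bound stated above, and all names in the following Python illustration are ours.

```python
from math import ceil, log
from random import Random

def estimate_bias(mu, eps, delta, rng):
    """Estimate each mu_i = E[x_i] by an empirical mean. Hoeffding plus a
    union bound over the n coordinates suggests
    m = ceil(2 / eps^2 * ln(2 n / delta)) samples for
    |mu_hat_i - mu_i| <= eps simultaneously for all i,
    with probability >= 1 - delta."""
    n = len(mu)
    m = ceil(2 / eps ** 2 * log(2 * n / delta))
    sums = [0.0] * n
    for _ in range(m):
        for i in range(n):
            # x_i = 1 with probability (1 + mu_i) / 2, else x_i = -1
            sums[i] += 1 if rng.random() < (1 + mu[i]) / 2 else -1
    return [s / m for s in sums], m

rng = Random(12345)
mu = [0.9, -0.3, 0.0, 0.95, 0.5]
mu_hat, m = estimate_bias(mu, eps=0.1, delta=0.01, rng=rng)
# Loosened tolerance (1.5 * eps) so this deterministic seeded run sits
# safely inside the high-probability event.
assert all(abs(h - t) <= 0.15 for h, t in zip(mu_hat, mu))
```

Measuring the first n qubits of a quantum example copy plays the role of drawing the classical sample in this sketch.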

Corollary 3 Let |ψ_a⟩ be a quantum example state w.r.t. an unknown c-bounded product distribution D_μ. Then there exists a quantum algorithm which, given access to sufficiently many copies of the quantum example state |ψ_a⟩ (with the number of copies determined by Theorem 9 and Lemma 10), with probability ≥ 1 − δ, outputs the string a, without prior knowledge of the underlying distribution D_μ.
Note, however, that the learning algorithm does need to obtain the c-boundedness parameter c as input in advance, but this (in general) does not fix the underlying distribution. Observe also that, since Lemma 10 remains valid for noisy quantum examples, even though we do not explicitly formulate the result of this subsection for noisy quantum training data, such a generalization is possible by combining the strategies presented in this and the previous subsections.

B Proofs
Proof of Lemma 2 We directly compute the state produced by the algorithm before the measurement is performed. Hence, the computational basis measurement from step 3 of Algorithm 1 on the last qubit returns 1 with probability 1/2, and if that is the case, the computational basis measurement on the first n qubits returns j with probability |ĝ_μ(j)|², as claimed.

Proof of Lemma 6
The proof is by direct computation using the Fourier expansion. Unitarity of H can be checked easily by exploiting the same identity as in the second-to-last line of the previous computation.

Proof of Theorem 8
We want to prove that P[Algorithm 6 does not output a] ≤ δ, where the probability is w.r.t. both the internal randomness of the algorithm and the noise random variables. First observe that, due to (i) in Theorem 7, exactly the same reasoning as in the proof of Theorem 4 shows that the probability of observing j_{n+1} = 1 in at most k − 1 of the m runs of Algorithm 5 (assuming k ≤ m/2) is bounded by the same Hoeffding expression. We will now search for the number of observations of j_{n+1} = 1 which is required to guarantee that the majority string is correct with high probability. Suppose we observe j_{n+1} = 1 in k runs of Algorithm 5, k ∈ 2N. Again we can bound the probability that the majority string is wrong. As "false 1's" can only appear in the case where our noise variables have an influence (compare Theorem 7), we will first find a lower bound on k which guarantees that the probability of the noise variable influence becoming relevant for at least k/5 runs is ≤ δ/4. Then we will find a lower bound on k which guarantees that, if the noise variable influence is relevant in at most k/5 of the runs, then among the remaining 4k/5 runs, with probability ≥ 1 − δ/4, we make at most k/5 "false 0" observations. To this end, we bound the corresponding tail probability, and rearranging gives the sufficient condition. This proves the claim of the theorem thanks to the union bound.

Proof of Corollary 2
According to Lemma 9, the error of the n-qubit biased quantum Fourier transform is bounded by the sum of the single-qubit errors. Thus it suffices to bound the operator norm of the difference of the 1-qubit biased quantum Fourier transforms. So let |φ⟩ = Σ_{x∈{−1,1}} α_x |x⟩ be a qubit state. Then the components of (H_{μ_i} − H_{μ̂_i})|φ⟩ are of the form Σ_x (√(D_{μ_i}(x)) φ_{μ_i,j}(x) − √(D_{μ̂_i}(x)) φ_{μ̂_i,j}(x)) α_x |j⟩.

Now note that each of the two terms in this difference can be bounded separately, which implies a bound on ‖(H_{μ_i} − H_{μ̂_i})|φ⟩‖. Hence, we obtain the claimed estimate, with γ the constant defined by the resulting expression.

Lemma 6
Let G ∈ R^{2^n × 2^n} be a matrix with entries given by G(a, b) = g(a + b) for a, b ∈ {0, 1}^n and a function g : {0, 1}^n → R. Then (H G H^{−1})(a, b) = 2^n ĝ(a) δ_{a,b}, with H ∈ R^{2^n × 2^n} given by H(a, b) = (−1)^{a·b}/√(2^n). In other words, the set of eigenvalues of G is given by {2^n ĝ(a) | a ∈ {0, 1}^n}, and G is unitarily diagonalized by H.

Fig. 1 A plot comparing the maximal bias allowed in Theorem 4 (depicted by the blue crosses) with the minimal bias required in Theorem 6 (depicted by the red line) (Color figure online)

Fig. 2 Overview of the quantum sample complexity upper and lower bounds from Theorems 3, 4 and 6, depending on the c-boundedness parameter (without noise in the training data). Here, g : N → R_{>0} is a strictly increasing function with lim_{n→∞} g(n) = ∞ (Color figure online)

We now set this last expression ≤ δ/4 and rearrange the inequality to k ≥ 25/(2(1 − 4n(1 − c)²)²) · ln(4/δ). Hence, by the union bound, we obtain a sufficient condition for P[∃ 1 ≤ i ≤ n : a_i ≠ o_i] ≤ δ. As discussed above, we consider the problem of state identification with the ensemble E = {(2^{−n}, |ψ_a⟩^{⊗m})}_{a∈{0,1}^n}. By Lemma 3, with the Gram matrix G_m(a, b) := (1/2^n)⟨ψ_a|ψ_b⟩^m, we can write the success probability of the PGM in terms of the diagonal entries of √G_m. The proof of Lemma 6 can be found in [3]; we reproduce it in "Appendix B." With this, we can now prove Theorem 6, performing the μ-biased QFT H_μ on the first n qubits to obtain the state (H_μ ⊗ 1)|ψ_a⟩.

O(ln(1/δ)) copies of the quantum example state |ψ_a⟩ suffice to guarantee that, with probability ≥ 1 − δ, Algorithm 6 outputs the string a. As in Theorem 4, our restrictions on both the c-boundedness parameter and the noise strength lead to a basically n-independent sample complexity upper bound.

Proof The proof is analogous to the one of Theorem 4; see "Appendix B."

Algorithm 6 Amplified generalized Bernstein-Vazirani algorithm with noisy training data.

Let |ψ_a⟩ be a quantum example state, with a ∈ {0, 1}^n, μ ∈ (−1, 1)^n. Then a version of Algorithm 2 with H_μ replaced by H̃_μ for ‖H_μ − H̃_μ‖_2 ≤ ε provides an outcome |j_1 ... j_{n+1}⟩ with properties analogous to those of Theorem 2, up to deviations controlled by ε.