Noisy tensor completion via the sum-of-squares hierarchy

In the noisy tensor completion problem we observe m entries (whose locations are chosen uniformly at random) from an unknown n_1 × n_2 × n_3 tensor T. We assume that T is entry-wise close to being rank r. Our goal is to fill in its missing entries using as few observations as possible. Let n = max(n_1, n_2, n_3). We show that if m ≳ n^{3/2} r then there is a polynomial time algorithm, based on the sixth level of the sum-of-squares hierarchy, for completing it. Our estimate agrees with almost all of T's entries almost exactly and works even when our observations are corrupted by noise.

This is also the first algorithm for tensor completion that works in the overcomplete case when r > n, and in fact it works all the way up to r = n^{3/2−ε}. Our proofs are short and simple and are based on establishing a new connection between noisy tensor completion (through the language of Rademacher complexity) and the task of refuting random constraint satisfaction problems. This connection seems to have gone unnoticed even in the context of matrix completion. Furthermore, we use this connection to show matching lower bounds. Our main technical result is in characterizing the Rademacher complexity of the sequence of norms that arise in the sum-of-squares relaxations to the tensor nuclear norm. These results point to an interesting new direction: Can we explore computational vs. sample complexity tradeoffs through the sum-of-squares hierarchy?


Introduction
Matrix completion is one of the cornerstone problems in machine learning and has a diverse range of applications. One of the original motivations for it comes from the Netflix Problem, where the goal is to predict user-movie ratings based on all the ratings we have observed so far, from across many different users. We can organize this data into a large, partially observed matrix M, where each row represents a user and each column represents a movie. The goal is to fill in the missing entries. The usual assumptions are that the ratings depend on only a few hidden characteristics of each user and movie and that the underlying matrix is approximately low rank. Another standard assumption is that it is incoherent, which we elaborate on later. How many entries of M do we need to observe in order to fill in its missing entries? And are there efficient algorithms for this task?
There have been thousands of papers on this topic and by now we have a relatively complete set of answers. A representative result (building on earlier works by Fazel [21], Recht, Fazel and Parrilo [57], Srebro and Shraibman [61], Candes and Recht [12], and Candes and Tao [13]) due to Keshavan, Montanari and Oh [41] can be phrased as follows: Suppose M is an unknown n_1 × n_2 matrix that has rank r, but each of its entries has been corrupted by independent Gaussian noise with standard deviation δ. Then if we observe roughly m = (n_1 + n_2) r log(n_1 + n_2) of its entries, the locations of which are chosen uniformly at random, there is an algorithm that outputs a matrix X whose average entry-wise error is, with high probability, not much larger than δ. There are extensions to non-uniform sampling models [16,45], as well as various efficiency improvements [33,39]. What is particularly remarkable about these guarantees is that the number of observations needed is within a logarithmic factor of the number of parameters, (n_1 + n_2) r, that define the model.

In fact, there are benefits to working with even higher-order structure, but so far there has been little progress on natural extensions to the tensor setting. To motivate this problem, consider the Groupon Problem (which we introduce here to illustrate this point), where the goal is to predict user-activity ratings. The challenge is that which activities we should recommend (and how much a user liked a given activity) depends on time as well: weekday/weekend, day/night, summer/fall/winter/spring, etc., or even some combination of these. As above, we can cast this problem as a large, partially observed tensor where the first index represents a user, the second index represents an activity and the third index represents the time period. It is again natural to model it as being close to low rank, under the assumption that a much smaller number of (latent) factors about the interests of the user, the type of activity and the time period should contribute to the rating.
How many entries of the tensor do we need to observe in order to fill in its missing entries? This problem is emblematic of a larger issue: Can we always solve linear inverse problems when the number of observations is comparable to the number of parameters in the model, or is computational intractability an obstacle?
In fact, one of the advantages of working with tensors is that their decompositions are unique in important ways that matrix decompositions are not. There has been a groundswell of recent work that uses tensor decompositions for exactly this reason for parameter learning in phylogenetic trees [51], HMMs [51], mixture models [38], topic models [2] and to solve community detection [3]. In these applications, one assumes access to the entire tensor (up to some sampling noise). But given that the underlying tensors are low-rank, can we observe fewer of their entries and still utilize tensor methods?
A wide range of approaches to solving tensor completion have been proposed [11,28,40,43,47,52,60,62,63]. However, in terms of provable guarantees, none of them¹ improve upon the following naive algorithm. If the unknown tensor T is n_1 × n_2 × n_3, we can treat it as a collection of n_1 matrices, each of size n_2 × n_3. It is easy to see that if T has rank at most r then each of these slices also has rank at most r (and they inherit incoherence properties as well). By treating a third-order tensor as nothing more than an unrelated collection of n_1 low-rank matrices, we can complete each slice separately using roughly m = n_1 (n_2 + n_3) r log(n_2 + n_3) observations in total. When the rank is constant, this is a quadratic number of observations even though the number of parameters in the model is linear.
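As a point of reference, the naive slice-by-slice baseline can be sketched as follows. This is our own illustration, not code from the paper: an iterative hard-thresholded SVD stands in for the matrix completion subroutine (any of the cited matrix completion algorithms could be substituted), and the dimensions, rank and sampling rate are chosen only for the demo.

```python
import numpy as np

def complete_slice(M_obs, mask, r, iters=200):
    """Fill the missing entries of one slice by alternating between
    imposing the observed entries and projecting onto rank-r matrices."""
    X = np.where(mask, M_obs, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        low_rank = (U[:, :r] * s[:r]) @ Vt[:r]   # best rank-r approximation
        X = np.where(mask, M_obs, low_rank)      # keep observed entries fixed
    return low_rank

def naive_tensor_completion(T_obs, mask, r):
    """Treat an n1 x n2 x n3 tensor as n1 unrelated matrix completion problems."""
    return np.stack([complete_slice(T_obs[i], mask[i], r)
                     for i in range(T_obs.shape[0])])

rng = np.random.default_rng(0)
n, r = 25, 2
# Rank-r tensor with Gaussian factors (incoherent with high probability).
A, B, C = (rng.standard_normal((n, r)) for _ in range(3))
T = np.einsum('il,jl,kl->ijk', A, B, C)
mask = rng.random((n, n, n)) < 0.6    # observe 60% of the entries per slice
X = naive_tensor_completion(np.where(mask, T, 0.0), mask, r)
rel_err = np.linalg.norm(X - T) / np.linalg.norm(T)
```

The point of the baseline is exactly what the text says: each slice is completed in isolation, so nothing is shared across slices, and the total number of observations needed scales quadratically when the rank is constant.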
Here we show how to solve the (noisy) tensor completion problem with many fewer observations. Let n_1 ≤ n_2 ≤ n_3. We give an algorithm based on the sixth level of the sum-of-squares hierarchy that can accurately fill in the missing entries of an unknown, incoherent n_1 × n_2 × n_3 tensor T that is entry-wise close to being rank r with roughly m = (n_1)^{1/2} (n_2 + n_3) r log^4(n_1 + n_2 + n_3) observations. Moreover, our algorithm works even when the observations are corrupted by noise. When n = n_1 = n_2 = n_3, this amounts to about n^{1/2} r observations per slice, which is much smaller than what we would need to apply matrix completion to each slice separately. Our algorithm needs to leverage the structure between the various slices.

Our results
We give an algorithm for noisy tensor completion that works for third-order tensors. Let T be a third-order n_1 × n_2 × n_3 tensor that is entry-wise close to being low rank.

¹ Most of the existing approaches rely on computing the tensor nuclear norm, which is hard to compute [32,34]. The only other algorithms we are aware of [11,40] require that the factors be orthogonal. This is a rather strong assumption: first, orthogonality requires the rank to be at most n; second, even when r ≤ n, most tensors need to be "whitened" to be put in this form, and then a random sample from the "whitened" tensor would correspond to a (dense) linear combination of the entries of the original tensor, which is quite a different sampling model.

In particular let

T = Σ_{ℓ=1}^{r} σ_ℓ · a_ℓ ⊗ b_ℓ ⊗ c_ℓ + Δ    (1)

where each σ_ℓ is a scalar and a_ℓ, b_ℓ and c_ℓ are vectors of dimension n_1, n_2 and n_3 respectively. Here Δ is a tensor that represents noise. Its entries can be thought of as representing model misspecification (because T is not exactly low rank), or noise in our observations, or both. We will only make assumptions about the average and maximum absolute value of the entries in Δ. The vectors a_ℓ, b_ℓ and c_ℓ are called factors, and we will assume that their norms are roughly √n_i for reasons that will become clear later. Moreover we will assume that the magnitude of each of their entries is bounded by C, in which case we call the vectors C-incoherent². (Note that a random vector of dimension n_i and norm √n_i will be O(√log n_i)-incoherent with high probability.) The advantage of these conventions is that a typical entry in T does not become vanishingly small as we increase the dimensions of the tensor. This will make it easier to state and interpret the error bounds of our algorithm.
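To make the scaling conventions concrete, here is a quick numerical illustration of the model (1); the dimensions, the choice of ±1 factors (so C = 1 and the norm of each factor is exactly √n_i), and the unit coefficients σ_ℓ = 1 are our own simplifications for the demo.

```python
import numpy as np

rng = np.random.default_rng(1)
n1, n2, n3, r = 30, 40, 50, 5

# C-incoherent factors with C = 1: every entry is +/-1, so ||a_l|| = sqrt(n_i) exactly.
a = rng.choice([-1.0, 1.0], size=(r, n1))
b = rng.choice([-1.0, 1.0], size=(r, n2))
c = rng.choice([-1.0, 1.0], size=(r, n3))
sigma = np.ones(r)                       # unit coefficients, for simplicity

# T = sum_l sigma_l * a_l (x) b_l (x) c_l + Delta, as in (1).
low_rank = np.einsum('l,li,lj,lk->ijk', sigma, a, b, c)
delta = 0.01 * rng.standard_normal((n1, n2, n3))   # small noise tensor
T = low_rank + delta

factor_norm = np.linalg.norm(a[0])                 # equals sqrt(n1)
typical = np.sqrt(np.mean(low_rank ** 2))          # root-mean-square entry, about sqrt(r)
```

With these conventions the root-mean-square entry of the low rank part is about √r rather than vanishingly small, which is exactly the scaling used later when discussing the variance of typical entries.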
Let Ω represent the locations of the entries that we observe, which (as is standard) are chosen uniformly at random and without replacement. Set |Ω| = m. Our goal is to output a hypothesis X that has small entry-wise error, defined as:

err(X) = (1 / (n_1 n_2 n_3)) Σ_{i,j,k} |X_{i,j,k} − T_{i,j,k}|

This measures the error on both the observed and unobserved entries of T. Here and throughout let n_1 ≤ n_2 ≤ n_3 and n = n_3. Our goal is to give algorithms that achieve vanishing error as the size n of the problem increases. Moreover we will want algorithms that need as few observations as possible. Our main result is Theorem 1.1. Since the error bound in it is quite involved, let us dissect the terms in it. In fact, having an additive δ in the error bound is unavoidable. We have not assumed anything about Δ in (1) except a bound on the average and maximum magnitude of its entries. If Δ were a random tensor whose entries are +δ and −δ with equal probability, then no matter how many entries of T we observe, we cannot hope to obtain error less than δ on the unobserved entries³. The crucial point is that the remaining term in the error bound becomes o(1) when m = Ω̃((r*)² n^{3/2}), which for polylogarithmic r* improves over the naive algorithm for tensor completion by a polynomial factor in terms of the number of observations. Moreover, our algorithm works without any constraints that the factors a_ℓ, b_ℓ and c_ℓ be orthogonal or even have low inner products.
Furthermore we show that in certain non-degenerate cases we can even remove another factor of r* from the number of observations we need. Suppose that T is a tensor as in (1), but let the coefficients σ_ℓ be Gaussian random variables with mean zero and variance one. The factors a_ℓ, b_ℓ and c_ℓ are still fixed, but because of the randomness in the coefficients σ_ℓ, the entries of T are now random variables.

Corollary 1.2 Suppose we are given m observations whose locations are chosen uniformly at random (and without replacement) from a tensor T of the form (1), where each coefficient σ_ℓ is a Gaussian random variable with mean zero and variance one, and each of the factors a_ℓ, b_ℓ and c_ℓ is C-incoherent. Further, suppose that for a 1 − o(1) fraction of the entries of T we have var(T_{i,j,k}) ≥ r/polylog(n) = V, and that Δ is a tensor where each entry is a Gaussian with mean zero and variance o(V). Then there is a polynomial time algorithm that outputs a hypothesis X with vanishing entry-wise error.

In the setting above, it is enough that the coefficients σ_ℓ are random and that the nonzero entries in the factors are spread out to ensure that the typical entry in T has variance about r. Consequently, the typical entry in T has magnitude about √r. This fact combined with the error bounds in Theorem 1.1 immediately yields the above corollary. Remarkably, the guarantee is interesting even when r = n^{3/2−ε}. The setting where r > n is called the overcomplete case. In this setting, if we observe a subpolynomial fraction of the entries of T we are able to recover almost all of the remaining entries almost entirely. For context, there are no known algorithms for decomposing an overcomplete, third-order tensor even if we are given all of its entries, at least without imposing much stronger conditions, such as that the factors be nearly orthogonal [29].
We believe that this work is a natural first step in designing practically efficient algorithms for tensor completion. Our algorithms manage to leverage the structure across the slices of the tensor, instead of treating each slice as an independent matrix completion problem. Now that we know this is possible, a natural follow-up question is to get more efficient algorithms. Our algorithms are based on the sixth level of the sum-of-squares hierarchy and run in polynomial time, but are quite far from being practically efficient as stated. Recent work of Hopkins et al. [37] shows how to speed up sum-of-squares and obtain nearly linear time algorithms for a number of problems where the only previously known algorithms ran in polynomial time of a prohibitively large degree. Another approach would be to obtain similar guarantees for alternating minimization. Currently, the only known approaches [40] require that the factors be orthonormal and only work in the undercomplete case. Finally, it would be interesting to get algorithms for low rank tensor completion that recover T exactly when there is no noise.

Our approach
All of our algorithms are based on solving the following optimization problem:

min ‖X‖_K  subject to  (1/m) Σ_{(i,j,k)∈Ω} |X_{i,j,k} − T_{i,j,k}| ≤ 2δ    (2)

and outputting the minimizer X, where ‖·‖_K is some norm that can be computed in polynomial time. It will be clear from the way we define the norm that the low rank part of T will itself be a good candidate solution. But this is not necessarily the solution that the convex program finds. How do we know that whatever it finds not only has low entry-wise error on the observed entries of T, but also on the unobserved entries? This is a well-studied topic in statistical learning theory, and as is standard we can use the notion of Rademacher complexity as a tool to bound the error. The Rademacher complexity is a property of the norm we choose, and our main innovation is to use the sum-of-squares hierarchy to suggest a suitable norm. The high-level idea is to establish a parallel between noisy tensor completion and refuting random constraint satisfaction problems. This connection is bidirectional: we show that any polynomial time computable norm with good Rademacher complexity immediately yields a polynomial time algorithm for refuting random constraint satisfaction problems. Moreover, we embed known algorithms for refutation into the sum-of-squares hierarchy to suggest a suitable polynomial time computable norm, which we use to give generalization bounds for our algorithms for tensor completion.
A natural question to ask is: are there other norms that have even better Rademacher complexity than the ones we use here, and that are still computable in polynomial time? It turns out that any such norm would immediately lead to much better algorithms for refuting random constraint satisfaction problems than we currently know. We have not yet introduced Rademacher complexity, so we defer the formal statements of our lower bounds; they follow directly from the works of Grigoriev [31], Schoenebeck [58] and Feige [22]. There are similar connections between our upper bounds and the work of Coja-Oghlan, Goerdt and Lanka [17], who give an algorithm for strongly refuting random 3-SAT. In Sect. 2 we explain some preliminary connections between these fields, at which point we will be in a better position to explain how we can borrow tools from one area to address open questions in another. We state these lower bounds more precisely in Corollary 2.13 and Corollary 5.6, which provide both conditional and unconditional lower bounds that match our upper bounds. An important caveat is that an algorithm for tensor completion, particularly in the exact case when there is no noise, need not be based on a norm with good Rademacher complexity. In fact, an algorithm for tensor completion might not even be based on convex programming in the first place!

Computational versus sample complexity tradeoffs
It is interesting to compare the story of matrix completion and tensor completion. In matrix completion, we have the best of both worlds: there are efficient algorithms which work when the number of observations is close to the information theoretic minimum. In tensor completion, we gave algorithms that improve upon the number of observations needed by a polynomial factor, but that still require a polynomial factor more observations than can be achieved if we ignore computational considerations [63]. We believe that for many other linear inverse problems (e.g. sparse phase retrieval), there may well be gaps between what can be achieved information theoretically and what can be achieved with computationally efficient estimators. Moreover, proving lower bounds against the sum-of-squares hierarchy offers a new type of evidence that problems are hard, one that does not rely on reductions from other average-case hard problems, which seem (in general) to be brittle and difficult to execute while preserving the naturalness of the input distribution. In fact, even when there are such reductions [10], the sum-of-squares hierarchy offers a methodology to make sharper predictions for questions like: Is there a quasi-polynomial time algorithm for sparse PCA, or does it require exponential time? Are there algorithms that run in time n^{o(log n)} that can find n^{1/2−ε}-sized planted cliques for some ε > 0? After our work, there has been substantial progress on these questions. Barak et al. [5] gave a nearly optimal lower bound for the planted clique problem. For sparse PCA, Hopkins et al. [36] gave subexponential lower bounds and showed that, in many natural settings, proving lower bounds against the sum-of-squares hierarchy is equivalent to proving lower bounds against a family of matrix polynomial methods. Ding et al. [20] recently gave a subexponential time algorithm for sparse PCA. There still remain many important problems which appear to exhibit fundamental computational vs. statistical tradeoffs for which we do not yet have strong sum-of-squares lower bounds. Perhaps the most notable example is the problem of community detection in the stochastic block model, where it is conjectured that no polynomial time algorithm can solve the problem beneath the Kesten-Stigum bound [1,19].

Subsequent work on tensor completion
One of the main questions left open by our work is whether there is a polynomial time algorithm that can exactly recover the unknown tensor T from m = n^{3/2} r random observations. Our approach has the advantage that it works when we get noisy observations of T or if T is merely close to being low-rank, but in the case when there is no noise and T is exactly low-rank, we are still only able to achieve low prediction error for the typical missing entry in T. Potechin and Steurer [55] studied the problem of exact tensor completion, but with the restriction that the unknown factors are orthogonal. They constructed a dual certificate showing that, with high probability, the true low rank tensor T is exactly optimal to the primal with just m = n^{3/2} r random observations. However, when the factors are not orthogonal, the dual certificate for showing that no other tensor has lower norm (where the norm is defined by the sum-of-squares relaxation to the tensor nuclear norm) is fundamentally much more complicated. Also in the orthogonal case, Foster and Risteski [25] gave algorithms whose generalization error scales as 1/m rather than 1/√m as in our paper. These are called fast rates in the context of agnostic learning.
Another outstanding open question is to give faster algorithms for tensor completion. In particular, algorithms based on the sum-of-squares hierarchy run in polynomial time but are impractical. Are there fast and practical algorithms for tensor completion that still succeed with m = n^{3/2} r random observations? Montanari and Sun [50] gave a spectral algorithm that works with m = n^{3/2} r, but only when r ≤ n^{3/4}. In contrast, our algorithms work even when r = n^{3/2−ε}. They gave another algorithm that runs in time n^6 that works in the overcomplete setting. Building on their work, Liu and Moitra [46] analyzed a variant of alternating minimization and showed that it succeeds with m = n^{3/2} r^{O(1)}. Their algorithm turns out to be practical and runs quickly even with n on the order of a thousand. Finally, on a technical level, Moitra and Wein [49] gave a general framework for designing spectral algorithms from tensor networks, and applied it to the continuous multireference alignment problem. Tensor networks can also be used to graphically visualize the trace method calculations in Sect. 4 that are at the heart of this paper.

Organization
In Sect. 2 we introduce Rademacher complexity, the tensor nuclear norm and strong refutation. We connect these concepts by showing that any norm that can be computed in polynomial time and has good Rademacher complexity yields an algorithm for strongly refuting random 3-SAT. In order to understand the proof of the upper bound, it is not strictly necessary to read Sects. 2.1 or 2.3. However they provide important context that might be useful for many readers. For example, before our work there was no discussion in the literature about the possibility of there being computational vs. statistical tradeoffs for tensor completion. Section 2.1 gives a simple hypothesis testing problem, where the goal is to distinguish between an approximately rank one tensor and random noise, that is statistically easy but believed to be computationally hard. Moreover many works suggested using the tensor nuclear norm. While it is known that computing the tensor nuclear norm is computationally hard in general, could it be that it is easy for the specific types of problems that arise in tensor completion? In Sect. 2.3 we bound the Rademacher complexity of the tensor nuclear norm, which implies that we cannot compute the tensor nuclear norm (or even approximate it) on random instances under natural complexity assumptions. An important reason for understanding the connections between tensor completion and random CSPs, rather than sweeping them under the rug, is that it helps demystify where the particular relaxation we are working with comes from.
In Sect. 3 we show how a particular algorithm for strong refutation can be embedded into the sum-of-squares hierarchy and directly leads to a norm that can be computed in polynomial time and has good Rademacher complexity. In Sect. 4 we establish certain spectral bounds that we need and prove our main upper bounds. In Sect. 5 we prove lower bounds on the Rademacher complexity of the sequence of norms arising from the sum-of-squares hierarchy by a direct reduction to lower bounds for refuting random 3-XOR. In Appendix A we give an extension from the case where n_1 = n_2 = n_3 to the general case. This is also what allows us to extend our analysis to arbitrary order-d tensors; the proofs are essentially identical to those in the d = 3 case but more notationally involved, so we omit them.

Noisy tensor completion and refutation
Here we make the connection between noisy tensor completion and strong refutation explicit. Our first step is to formulate a problem that is a special case of both, and studying it will help us clarify how notions from one problem translate to the other.

The distinguishing problem
Here we introduce a problem that we call the distinguishing problem. We are given random observations from a tensor and promised that the underlying tensor fits into one of the two following categories. We want an algorithm that can tell which case the samples came from, and that succeeds using as few observations as possible. The two cases are:

1. Each observation is chosen uniformly at random (and without replacement) from a tensor T where independently for each entry we set

T_{i,j,k} = a_i · a_j · a_k with probability 7/8, and T_{i,j,k} uniform on {+1, −1} with probability 1/8,

where a is a vector whose entries are ±1.

2. Alternatively, each observation is chosen uniformly at random (and without replacement) from a tensor T each of whose entries is independently set to either +1 or −1 with equal probability.
In the first case, the entries of the underlying tensor T are predictable: it is possible to guess a 15/16 fraction of them correctly once we have observed enough of its entries to be able to deduce a. In the second case, the entries of T are completely unpredictable: no matter how many entries we have observed, the remaining entries are still random, so we cannot predict any of the unobserved entries better than random guessing. Now we will explain how the distinguishing problem can be equivalently reformulated in the language of refutation. We give a formal definition for strong refutation later (Definition 2.10), but for the time being we can think of it as the task of certifying, given an instance of a constraint satisfaction problem, that there is no assignment that satisfies many of the clauses. We will be interested in 3-XOR formulas, where there are n variables v_1, v_2, ..., v_n that are constrained to take on values +1 or −1.

Each clause takes the form

v_i · v_j · v_k = T_{i,j,k}

where the right hand side is either +1 or −1. The clause represents a parity constraint, but over the domain {+1, −1} instead of the usual domain F_2. We have chosen the notation suggestively so that it hints at the mapping between the two views of the problem. Each observation T_{i,j,k} maps to a clause v_i · v_j · v_k = T_{i,j,k} and vice-versa. Thus an equivalent way to formulate the distinguishing problem is that we are given a 3-XOR formula which was generated in one of the following two ways:

1. Each clause in the formula is generated by choosing an ordered triple of variables (v_i, v_j, v_k) uniformly at random (and without replacement) and we set

v_i · v_j · v_k = a_i · a_j · a_k with probability 7/8, and v_i · v_j · v_k uniform on {+1, −1} with probability 1/8,

where a is a vector whose entries are ±1. Now a represents a planted solution, and by design our sampling procedure guarantees that many of the clauses that are generated are consistent with it.
2. Alternatively, each clause in the formula is generated by choosing an ordered triple of variables (v_i, v_j, v_k) uniformly at random (and without replacement) and we set v_i · v_j · v_k = z_{i,j,k}, where z_{i,j,k} is a random variable that takes on values +1 and −1 with equal probability.
In the first case, the 3-XOR formula has an assignment that satisfies a 15/16 fraction of the clauses in expectation, obtained by setting v_i = a_i. In the second case, any fixed assignment satisfies at most half of the clauses in expectation. Moreover, if we are given Ω(n log n) clauses, it is easy to see by applying the Chernoff bound and taking a union bound over all possible assignments that with high probability there is no assignment that satisfies more than a 1/2 + o(1) fraction of the clauses. This will be the starting point for the connections we establish between noisy tensor completion and refutation. Even in the matrix case these connections seem to have gone unnoticed: it is no accident that the same spectral bounds that are used to analyze the Rademacher complexity of the nuclear norm [61] are also used to refute random 2-SAT formulas [30].
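The two generative models are easy to simulate, and the 15/16 versus 1/2 calculation can be checked empirically. The sketch below is ours (with illustrative parameters, and sampling the triples with replacement for simplicity, a negligible difference at these densities, as discussed later for the observation model).

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 200, 50_000

def sample_clauses(n, m, planted_a=None):
    """Each clause is a triple (i, j, k) plus a rhs, encoding v_i * v_j * v_k = rhs."""
    idx = rng.integers(0, n, size=(m, 3))
    if planted_a is None:
        # Fully random model: the right hand side is a fresh random sign.
        rhs = rng.choice([-1, 1], size=m)
    else:
        # Planted model: consistent with a with probability 7/8, random otherwise.
        rhs = planted_a[idx[:, 0]] * planted_a[idx[:, 1]] * planted_a[idx[:, 2]]
        flip = rng.random(m) < 1 / 8
        rhs = np.where(flip, rng.choice([-1, 1], size=m), rhs)
    return idx, rhs

def frac_satisfied(v, idx, rhs):
    return np.mean(v[idx[:, 0]] * v[idx[:, 1]] * v[idx[:, 2]] == rhs)

a = rng.choice([-1, 1], size=n)
idx_p, rhs_p = sample_clauses(n, m, planted_a=a)
idx_r, rhs_r = sample_clauses(n, m)
planted_frac = frac_satisfied(a, idx_p, rhs_p)   # about 7/8 + 1/8 * 1/2 = 15/16
random_frac = frac_satisfied(a, idx_r, rhs_r)    # about 1/2
```

The same code reads as either view of the problem: `idx` and `rhs` are the observed tensor entries, or equally a 3-XOR formula with the planted assignment `a`.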

Rademacher complexity
Ultimately our goal is to show that the hypothesis X that our convex program finds is entry-wise close to the unknown tensor T. By virtue of the fact that X is a feasible solution to (2) we know that it is entry-wise close to T on the observed entries. This is often called the empirical error:

emp-err(X) = (1/m) Σ_{(i,j,k)∈Ω} |X_{i,j,k} − T_{i,j,k}|

Recall that err(X) is the average entry-wise error between X and T, over all (observed and unobserved) entries. Also recall that among the candidate X's that have low empirical error, the convex program finds the one that minimizes ‖X‖_K for some polynomial time computable norm. The way we will choose the norm ‖·‖_K and our bound on the maximum magnitude of an entry of Δ will guarantee that the low rank part of T will with high probability be a feasible solution. Thus we are guaranteed that there is a good solution to the optimization problem, in the sense that we could always choose X = T, which simultaneously satisfies the constraints in (2) and has bounded ‖·‖_K norm. One way to bound err(X) is to show that no hypothesis in the unit norm ball can have too large a gap between its error and its empirical error (and then dilate the unit norm ball so that it contains X). With this in mind, we define:

Definition 2.2 For a norm ‖·‖_K and a set Ω of observations, the generalization error is

sup_{X : ‖X‖_K ≤ 1} |err(X) − emp-err(X)|

As we will discuss, a way to control the generalization error is through the Rademacher complexity.

Definition 2.3 Let Ω = {(i_1, j_1, k_1), (i_2, j_2, k_2), ..., (i_m, j_m, k_m)} be the locations of the observations and let σ_1, σ_2, ..., σ_m be independent random ±1 variables. The Rademacher complexity of (the unit ball of) the norm ‖·‖_K is defined as

R_m(‖·‖_K) = E_{Ω,σ} [ sup_{‖X‖_K ≤ 1} (1/m) | Σ_{l=1}^{m} σ_l X_{i_l, j_l, k_l} | ]
It follows from a standard symmetrization argument from empirical process theory [9,42] that the Rademacher complexity does indeed bound the generalization error.

Theorem 2.4
Let η ∈ (0, 1), suppose that each X with ‖X‖_K ≤ 1 has bounded loss, i.e. |X_{i,j,k} − T_{i,j,k}| ≤ a for all i, j, k, and suppose that the locations (i, j, k) are chosen uniformly at random and without replacement. Then with probability at least 1 − η, every X with ‖X‖_K ≤ 1 satisfies

err(X) ≤ emp-err(X) + 2 R_m(‖·‖_K) + O(a √(log(1/η) / m))

We repeat the proof here following [9] for the sake of completeness, but readers familiar with Rademacher complexity can feel free to skip ahead to Definition 2.5. The main idea is to let Ω′ be an independent set of m samples from the same distribution, again without replacement. We bound the expected generalization error by rewriting the error in terms of the ghost sample Ω′, passing the supremum inside the expectation using the concavity of sup(·), and introducing Rademacher (random ±1) variables {σ_l}. The remaining inequalities use the triangle inequality, together with the fact that the σ's are random signs and hence can absorb the absolute values around the terms that they multiply. The second term in the resulting expression is exactly the Rademacher complexity that we defined earlier. This argument only shows that the Rademacher complexity bounds the expected generalization error; however, it turns out that we can also use the Rademacher complexity to bound the generalization error with high probability by applying McDiarmid's inequality. We also remark that generalization bounds are often stated in the setting where samples are drawn i.i.d., but here the locations of our observations are sampled without replacement. Nevertheless, for the settings of m we are interested in, the fraction of our observations that are repeats is o(1) (in fact it is subpolynomial) and we can move back and forth between both sampling models at negligible loss in our bounds.
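The chain of inequalities in the symmetrization argument can be rendered schematically as follows. This is our summary of the standard argument, not the paper's exact display; here Ω′ is the ghost sample and ℓ_X(ω) = |X_{i,j,k} − T_{i,j,k}| is the loss at a single location ω = (i, j, k).

```latex
\begin{align*}
\mathbb{E}_{\Omega}\Big[\sup_{\|X\|_K \le 1}\big(\mathrm{err}(X)-\mathrm{emp\text{-}err}(X)\big)\Big]
&= \mathbb{E}_{\Omega}\Big[\sup_{\|X\|_K \le 1}\Big(\mathbb{E}_{\Omega'}\Big[\tfrac{1}{m}\textstyle\sum_{\omega'\in\Omega'}\ell_X(\omega')\Big]-\tfrac{1}{m}\textstyle\sum_{\omega\in\Omega}\ell_X(\omega)\Big)\Big]\\
&\le \mathbb{E}_{\Omega,\Omega'}\Big[\sup_{\|X\|_K \le 1}\tfrac{1}{m}\textstyle\sum_{l=1}^{m}\big(\ell_X(\omega'_l)-\ell_X(\omega_l)\big)\Big]
&&\text{(concavity of $\sup$)}\\
&= \mathbb{E}_{\Omega,\Omega',\sigma}\Big[\sup_{\|X\|_K \le 1}\tfrac{1}{m}\textstyle\sum_{l=1}^{m}\sigma_l\big(\ell_X(\omega'_l)-\ell_X(\omega_l)\big)\Big]
&&\text{($\omega_l,\omega'_l$ exchangeable)}\\
&\le 2\,\mathbb{E}_{\Omega,\sigma}\Big[\sup_{\|X\|_K \le 1}\tfrac{1}{m}\Big|\textstyle\sum_{l=1}^{m}\sigma_l\,\ell_X(\omega_l)\Big|\Big]
&&\text{(triangle inequality)}
\end{align*}
```

The final supremum is then compared against R_m(‖·‖_K) by absorbing the absolute values into the random signs and splitting off the contribution of T, again via the triangle inequality.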
In much of what follows it will be convenient to think of Ω = {(i_1, j_1, k_1), (i_2, j_2, k_2), ..., (i_m, j_m, k_m)} and the signs {σ_l} as being represented by a sparse tensor Z, defined below.

Definition 2.5
Let Z be an n_1 × n_2 × n_3 tensor whose entries are

Z_{i,j,k} = σ_l if (i, j, k) = (i_l, j_l, k_l) for some l, and Z_{i,j,k} = 0 otherwise.

This definition greatly simplifies our notation. In particular we have

R_m(‖·‖_K) = E_{Ω,σ} [ sup_{‖X‖_K ≤ 1} (1/m) | ⟨Z, X⟩ | ]

where we have introduced the notation ⟨·, ·⟩ to denote the natural inner product between tensors. Our main technical goal in this paper will be to analyze the Rademacher complexity of a sequence of successively tighter norms that we get from the sum-of-squares hierarchy, and to derive implications for noisy tensor completion and for refutation from these bounds.
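The bookkeeping in Definition 2.5 is easy to check numerically. The sketch below (our own toy dimensions) builds Z from Ω and the signs σ_l and confirms that (1/m) Σ_l σ_l X_{i_l, j_l, k_l} = (1/m) ⟨Z, X⟩ for an arbitrary tensor X.

```python
import numpy as np

rng = np.random.default_rng(3)
n1, n2, n3, m = 8, 9, 10, 60

# Sample m distinct locations (without replacement) and Rademacher signs sigma_l.
flat = rng.choice(n1 * n2 * n3, size=m, replace=False)
omega = np.stack(np.unravel_index(flat, (n1, n2, n3)), axis=1)
sigma = rng.choice([-1.0, 1.0], size=m)

# Z places sigma_l at location (i_l, j_l, k_l) and is zero elsewhere (Definition 2.5).
Z = np.zeros((n1, n2, n3))
Z[omega[:, 0], omega[:, 1], omega[:, 2]] = sigma

# For any tensor X, the empirical Rademacher average equals <Z, X> / m.
X = rng.standard_normal((n1, n2, n3))
lhs = np.mean(sigma * X[omega[:, 0], omega[:, 1], omega[:, 2]])
rhs = np.tensordot(Z, X, axes=3) / m
```

Sampling without replacement matters here: with repeated locations, Z would have to accumulate the signs of the colliding observations.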

The tensor nuclear norm
Here we introduce the tensor nuclear norm and analyze its Rademacher complexity. Many works have suggested using it to solve tensor completion problems [47,60,63]. This suggestion is quite natural given that it is based on the same guiding principle that led to ℓ1-minimization in compressed sensing and the nuclear norm in matrix completion [21]. More generally, one can define the atomic norm for a wide range of linear inverse problems [15], and the ℓ1-norm, the nuclear norm and the tensor nuclear norm are all special cases of this paradigm. Before we proceed, let us first formally define the notion of incoherence that we gave in the introduction.
Recall that we chose to work with vectors whose typical entry is a constant so that the entries in T do not become vanishingly small as the dimensions of the tensor increase. We can now define the tensor nuclear norm:

Definition 2.7 (tensor nuclear norm) Let A ⊆ R^{n_1 × n_2 × n_3} be defined as the convex hull of the rank-one tensors a ⊗ b ⊗ c where a, b and c are C-incoherent. The tensor nuclear norm of X, denoted ‖X‖_A, is the infimum over α > 0 such that X/α ∈ A.
Recall that T is the low-rank tensor and Δ represents the noise, and that by assumption ‖T − Δ‖_A ≤ r^*. Finally we give an elementary bound on the Rademacher complexity of the tensor nuclear norm. Recall that n = max(n_1, n_2, n_3).

Lemma 2.8  R_m(‖·‖_A) ≤ O(C^3 · sqrt(n/m))
Proof Recall the definition of Z given in Definition 2.5, which lets us write the Rademacher complexity as an expectation of sup_{X ∈ A} (1/m)⟨Z, X⟩. We can now adapt the discretization approach in [27], although our task is considerably simpler because we are constrained to C-incoherent a's. In particular, let

S_ε = { a : a is C-incoherent and a ∈ ε·Z^n }.

By standard bounds on the size of an ε-net [48], we get that |S_ε| ≤ (C/ε)^{O(n)}. Then for an arbitrary, but C-incoherent, a we can expand it as a = Σ_i ε^i a_i where each a_i ∈ S_ε, and similarly for b and c. Moreover, since each entry in a ⊗ b ⊗ c has magnitude at most C^3, we can apply a Chernoff bound to conclude that for any particular a, b, c ∈ S_ε the quantity (1/m)⟨Z, a ⊗ b ⊗ c⟩ is appropriately small with probability at least 1 − γ. Finally, if we set γ = (C/ε)^{-Ω(n)} and ε = 1/2, a union bound over the net completes the proof.
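As an empirical sanity check of the Chernoff step (not a proof), one can observe that a fixed sparse sign tensor Z has correlation O(1/sqrt(m)) with any fixed rank-one sign tensor; all names below are ours.

```python
import numpy as np

# For a fixed sparse sign tensor Z with m nonzeros, the correlation
# (1/m) <Z, a (x) b (x) c> with a fixed rank-one sign tensor concentrates
# around 0 at scale O(1/sqrt(m)), matching the Chernoff step above.
rng = np.random.default_rng(1)
n, m = 10, 400
flat = rng.choice(n**3, size=m, replace=False)
Z = np.zeros((n, n, n))
Z[np.unravel_index(flat, (n, n, n))] = rng.choice([-1.0, 1.0], size=m)

corrs = []
for _ in range(200):
    a, b, c = (rng.choice([-1.0, 1.0], size=n) for _ in range(3))
    X = np.einsum('i,j,k->ijk', a, b, c)   # rank-one sign tensor
    corrs.append(abs(np.sum(Z * X)) / m)
max_corr = max(corrs)
```

Here sqrt(m) = 20, so each correlation is typically around 0.05, far below the trivial bound of 1.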
The important point is that the Rademacher complexity of the tensor nuclear norm is o(1) whenever m = ω(n). In the next subsection we will connect this to refutation in a way that allows us to strengthen known hardness results for computing the tensor nuclear norm [32,34] and show that it is even hard to compute in an average-case sense based on some standard conjectures about the difficulty of refuting random 3-SAT.

From Rademacher complexity to refutation
Here we show the first implication of the connection we have established. Any norm that can be computed in polynomial time and has good Rademacher complexity immediately yields an algorithm for strongly refuting random 3-SAT and 3-XOR formulas.
Recall that a 3-SAT clause takes in three literals (a variable or its negation) and outputs TRUE if at least one of them is TRUE, and FALSE otherwise. A 3-SAT formula is a collection of clauses, and the formula is satisfied if and only if all of its clauses are. Finally, a 3-XOR formula is the same, but composed of 3-XOR clauses, each of which takes the XOR of three literals. Now let us define strong refutation.

Definition 2.9
For a formula φ, let opt(φ) be the largest fraction of clauses that can be satisfied by any assignment.
In what follows, we will use the term random 3-XOR formula to refer to a formula where each clause is generated by choosing an ordered triple of variables (v_i, v_j, v_k) uniformly at random (and without replacement) and setting v_i · v_j · v_k = z, where z is a random variable that takes on the values +1 and −1 with equal probability.

Definition 2.10
An algorithm for strongly refuting random 3-XOR takes as input a 3-XOR formula φ and outputs a quantity alg(φ) that satisfies:

1. For any 3-XOR formula φ, opt(φ) ≤ alg(φ).
2. If φ is a random 3-XOR formula with m clauses, then with high probability alg(φ) = 1/2 + o(1).

Standard concentration bounds imply that when m = ω(n) we have opt(φ) = 1/2 + o(1) with high probability. With fewer clauses, opt(φ) will be bounded away from 1/2 and the above problem is not well-defined. The goal is to design algorithms that use as few clauses as possible, and are able to certify that a random formula is indeed far from satisfiable (without underestimating the fraction of clauses that can be satisfied), and to do so as close as possible to the information-theoretic threshold.

If we are not concerned about running time, we could compute opt(φ) by exhaustive search over the 2^n possible assignments. But what if we are restricted to polynomial time algorithms? In a celebrated work, Håstad [35] showed that for any ε > 0 it is NP-hard to certify that opt(φ) ≤ 1/2 + ε for 3-XOR formulas. However, this is a worst-case hardness result and we are interested in random 3-XOR formulas.

Now we can make a deeper connection to tensor completion: any polynomial time computable norm ‖·‖_K that has good Rademacher complexity immediately yields an algorithm for strongly refuting random 3-XOR. We follow the blueprint in Sect. 2.1: given a formula φ, we map its m clauses to a collection of m observations according to the following rule. If there are n variables, we construct an n × n × n tensor Z where for each clause of the form v_i · v_j · v_k = z_{i,j,k} we put the entry z_{i,j,k} at location (i, j, k). All the remaining entries in Z, i.e. all the triples of indices that do not show up as a clause in the 3-XOR formula, are set to zero. Now, since by assumption the norm ‖·‖_K is polynomial time computable, we can solve the following optimization problem:

max (1/(2m)) ⟨Z, X⟩  subject to  ‖X‖_K ≤ 1.  (3)

Let η^* be the optimum value. We set alg(φ) = 1/2 + η^*.
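The clause-to-tensor mapping can be sketched as follows; `clauses_to_tensor` and `xor_opt_fraction` are hypothetical helper names of our choosing, and we omit the norm computation itself (which requires solving the semidefinite program).

```python
import numpy as np

def clauses_to_tensor(n, clauses):
    """Map 3-XOR clauses (i, j, k, z), z in {-1,+1}, to the n x n x n
    tensor Z used by the refutation blueprint (a sketch)."""
    Z = np.zeros((n, n, n))
    for i, j, k, z in clauses:
        Z[i, j, k] = z
    return Z

def xor_opt_fraction(n, clauses, assignment):
    """Fraction of clauses v_i * v_j * v_k = z satisfied by a +-1 assignment."""
    sat = sum(1 for i, j, k, z in clauses
              if assignment[i] * assignment[j] * assignment[k] == z)
    return sat / len(clauses)

# A planted-satisfiable instance: every clause is consistent with v.
rng = np.random.default_rng(2)
n = 8
v = rng.choice([-1, 1], size=n)
clauses = []
for _ in range(30):
    i, j, k = rng.choice(n, size=3, replace=False)
    clauses.append((int(i), int(j), int(k), int(v[i] * v[j] * v[k])))
Z = clauses_to_tensor(n, clauses)
```

For the planted instance the assignment v satisfies every clause, so opt(φ) = 1; for a truly random right-hand side z, opt(φ) would concentrate near 1/2.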
What remains is to prove that the output of this algorithm solves the strong refutation problem for 3-XOR.

Theorem 2.11
Suppose that ‖·‖_K is computable in polynomial time and satisfies ‖X‖_K ≤ 1 whenever X = a ⊗ a ⊗ a and a is a vector with ±1 entries. Further suppose that for any X with ‖X‖_K ≤ 1 its entries are bounded by C^3 in absolute value. Then (3) can be solved in polynomial time, and if R_m(‖·‖_K) = o(1) then setting alg(φ) = 1/2 + η^* solves strong refutation for 3-XOR with O(C^6 m log n) clauses.

Proof
The key observation is the following inequality, which relates (3) to opt(φ):

2·opt(φ) − 1 ≤ sup_{‖X‖_K ≤ 1} (1/m) ⟨Z, X⟩.
To establish this inequality, let v_1, v_2, ..., v_n be the assignment that maximizes the fraction of clauses satisfied. If we set a_i = v_i and X = a ⊗ a ⊗ a, we have ‖X‖_K ≤ 1 by assumption, so X is a feasible solution. With this choice of X, every term in the sum that corresponds to a satisfied clause contributes +1 and every term that corresponds to an unsatisfied clause contributes −1. We get 2·opt(φ) − 1 for this choice of X, and this completes the proof of the inequality above.
The crucial point is that the expectation of the right-hand side over Ω and σ is exactly the Rademacher complexity. However, we want a bound that holds with high probability instead of just in expectation. It follows from McDiarmid's inequality, and the fact that the entries of Z and of X are bounded by 1 and by C^3 in absolute value respectively, that if we take O(C^6 m log n) observations the right-hand side will be o(1) with high probability. In this case, rearranging the inequality we have

opt(φ) ≤ 1/2 + (1/2)·sup_{‖X‖_K ≤ 1} (1/m) ⟨Z, X⟩ = 1/2 + η^*.

The right-hand side is exactly alg(φ) and is 1/2 + o(1) with high probability, which implies that both conditions in the definition of strong refutation hold, and this completes the proof.
We can now combine Theorem 2.11 with the bound on the Rademacher complexity of the tensor nuclear norm given in Lemma 2.8 to conclude that if we could compute the tensor nuclear norm, we would also obtain an algorithm for strongly refuting random 3-XOR with only m = O(n log n) clauses. It is not obvious, but it turns out that any algorithm for strongly refuting random 3-XOR implies one for 3-SAT. Let us define strong refutation for 3-SAT. We will refer to any variable v_i or its negation v̄_i as a literal. We will use the term random 3-SAT formula to refer to a formula where each clause is generated by choosing an ordered triple of literals (y_i, y_j, y_k) uniformly at random (and without replacement) and setting y_i ∨ y_j ∨ y_k = 1.

Definition 2.12

An algorithm for strongly refuting random 3-SAT is defined as in Definition 2.10, except that condition 2 requires that for a random 3-SAT formula φ with m clauses, with high probability alg(φ) = 7/8 + o(1).
The only change from Definition 2.10 comes from the fact that for 3-SAT a random assignment satisfies a 7/8 fraction of the clauses in expectation. Our goal here is to certify that the largest fraction of clauses that can be satisfied is 7/8 + o(1). The connection between refuting random 3-XOR and 3-SAT is often called "Feige's XOR Trick" [22]. The first version of it was used to show that an algorithm for ε-refuting 3-XOR can be turned into an algorithm for ε-refuting 3-SAT. However, we will not use this notion of refutation, so for further details we refer the reader to [22]. The reduction was later extended by Coja-Oghlan, Goerdt and Lanka [17] to strong refutation, which for us yields the following corollary:

Corollary 2.13 Suppose that ‖·‖_K is computable in polynomial time and satisfies ‖X‖_K ≤ 1 whenever X = a ⊗ a ⊗ a and a is a vector with ±1 entries. Suppose further that for any X with ‖X‖_K ≤ 1 its entries are bounded by C^3 in absolute value and that R_m(‖·‖_K) = o(1). Then there is a polynomial time algorithm for strongly refuting a random 3-SAT formula with O(C^6 m log n) clauses.

Now we can get a better understanding of the obstacles to noisy tensor completion by connecting it to the literature on refuting random 3-SAT. Despite a long line of work on refuting random 3-SAT [17,23,24,26,30], there is no known polynomial time algorithm that works with m = n^{3/2−ε} clauses for any ε > 0. Feige [22] conjectured that for any constant C, there is no polynomial time algorithm for refuting random 3-SAT with m = Cn clauses. Daniely et al. [18] conjectured that there is no polynomial time algorithm for m = n^{3/2−ε} for any ε > 0. What we have shown above is that any norm that is a relaxation to the tensor nuclear norm, can be computed in polynomial time, and has Rademacher complexity R_m(‖·‖_K) = o(1) for m = n^{3/2−ε} would disprove the conjecture of Daniely et al.
[18] and would yield much better algorithms for refuting random 3-SAT than we currently know, despite fifteen years of work on the subject. This leaves open an important question. While there are no known polynomial time algorithms for strongly refuting random 3-SAT with m = n^{3/2−ε} clauses, there are algorithms that work with roughly m = n^{3/2} clauses [17]. Do these algorithms have any implications for noisy tensor completion? We will adapt the algorithm of Coja-Oghlan, Goerdt and Lanka [17] and embed it within the sum-of-squares hierarchy. In turn, this will give us a norm that we can use to solve noisy tensor completion with a polynomial factor fewer observations than known algorithms. After our work, Raghavendra et al. [56] gave subexponential time algorithms for refuting random 3-SAT with m = n^{3/2−ε} clauses. These bounds could likely be used to give subexponential time algorithms for noisy tensor completion, using our framework for embedding refutation algorithms into the sum-of-squares hierarchy and using their analysis to bound the Rademacher complexity. However, such algorithms would require nearly n^{2ε} levels rather than the six levels that we work with here.

Pseudo-expectation
Here we introduce the sum-of-squares hierarchy and use it (at level six) to give a relaxation to the tensor nuclear norm. This is the norm that we will use in proving our main upper bounds. First we introduce the notion of a pseudo-expectation operator from [6][7][8]:

Definition 3.1 (Pseudo-expectation [6]) Let k be even and let P^n_k denote the linear subspace of all polynomials of degree at most k on n variables. A linear operator Ẽ : P^n_k → R is called a degree k pseudo-expectation operator if it satisfies the following conditions:

(1) Ẽ[1] = 1 (normalization)
(2) Ẽ[P^2] ≥ 0 for any polynomial P of degree at most k/2 (nonnegativity)

Moreover, suppose that p ∈ P^n_k with deg(p) = k′. We say that Ẽ satisfies the constraint {p = 0} if Ẽ[pq] = 0 for every q ∈ P^n_{k−k′}, and that Ẽ satisfies the constraint {p ≥ 0} if Ẽ[pq^2] ≥ 0 for every q with pq^2 ∈ P^n_k.

The rationale behind this definition is that if μ is a distribution on vectors in R^n, then the operator Ẽ[p] = E_{x∼μ}[p(x)] is a degree d pseudo-expectation operator for every d, i.e. it meets the conditions of Definition 3.1. However, the converse is in general not true. We are now ready to define the norm that will be used in our upper bounds:

Definition 3.2 (SOS_k norm) We let K_k be the set of all X ∈ R^{n_1 × n_2 × n_3} such that there exists a degree k pseudo-expectation operator on P^{n_1+n_2+n_3}_k (where the variables are the Y^{(a)}_i's for a = 1, 2, 3) satisfying the C-incoherence constraints on each Y^{(a)}, together with the constraints X_{i,j,k} = Ẽ[Y^{(1)}_i Y^{(2)}_j Y^{(3)}_k] for all i, j and k. The SOS_k norm of X ∈ R^{n_1 × n_2 × n_3}, denoted ‖X‖_{K_k}, is the infimum over α such that X/α ∈ K_k.

The conditions in Definition 3.1 can be expressed as an n^{O(k)}-sized semidefinite program. This implies that given any set of polynomial constraints of the form {p = 0}, {p ≥ 0}, one can efficiently find a degree k pseudo-expectation satisfying those constraints if one exists. This is often called the degree k sum-of-squares algorithm [44,53,54,59].
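One direction of the rationale above (every genuine distribution induces a pseudo-expectation) can be checked numerically at degree 2: the moment matrix of any distribution is positive semidefinite, which is exactly the nonnegativity condition Ẽ[p^2] ≥ 0 for linear p. A minimal sketch, with our own variable names:

```python
import numpy as np

# The degree-2 moment matrix M[s,t] = E[x_s x_t] over the monomials
# (1, x_1, ..., x_n) of any genuine distribution is PSD: M is a Gram
# matrix.  This is the E[p^2] >= 0 condition for degree-1 polynomials p.
rng = np.random.default_rng(3)
n, num_samples = 5, 1000
samples = rng.choice([-1.0, 1.0], size=(num_samples, n))
monos = np.hstack([np.ones((num_samples, 1)), samples])  # [1, x_1..x_n]
M = monos.T @ monos / num_samples                        # empirical moments

normalization = M[0, 0]                 # equals E[1] = 1
min_eig = np.linalg.eigvalsh(M).min()   # PSD up to floating-point error
```

The converse fails in general: a PSD moment matrix need not come from any actual distribution, which is what makes pseudo-expectations a strict relaxation.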
Hence we can compute the norm ‖X‖_{K_k} of any tensor X to within arbitrary accuracy in polynomial time. And because it is a relaxation of the tensor nuclear norm, which is defined analogously but over a distribution on C-incoherent vectors instead of a pseudo-expectation over them, we have ‖X‖_{K_k} ≤ ‖X‖_A for every tensor X. Throughout most of this paper we will be interested in the case k = 6. Our main technical result, Theorem 3.3, is an upper bound on the Rademacher complexity of this norm, showing that R_m(‖·‖_{K_6}) = o(1) whenever m = ω(n^{3/2} log^4 n). In Corollary 5.6 we establish a matching lower bound.

Resolution in K_6
Recall that any polynomial time computable norm that has good Rademacher complexity with m observations yields an algorithm for strong refutation with roughly m clauses. Here we will use an algorithm for strongly refuting random 3-SAT to guide our search for an appropriate norm. We will adapt an algorithm due to Coja-Oghlan, Goerdt and Lanka [17] that strongly refutes random 3-SAT, and instead give an algorithm that strongly refutes random 3-XOR. Moreover, each of the steps in the algorithm embeds into the sixth level of the sum-of-squares hierarchy, by mapping resolution operations to applications of Cauchy–Schwarz; this ultimately shows how the inequalities that define the norm (Definition 3.2) can be manipulated to give bounds on its own Rademacher complexity.

Let us return to the task of bounding the Rademacher complexity of ‖·‖_{K_6}. Let X be arbitrary but satisfy ‖X‖_{K_6} ≤ 1. Then there is a degree six pseudo-expectation meeting the conditions of Definition 3.2, and applying Cauchy–Schwarz to ⟨Z, X⟩ yields the bound (4). If d is even, then any degree d pseudo-expectation operator satisfies (Ẽ[p])^2 ≤ Ẽ[p^2] for every polynomial p of degree at most d/2 (e.g., see Lemma A.4 in [4]). Hence the right-hand side of (4) can be bounded as in (5). It turns out that bounding the right-hand side of (5) boils down to bounding the spectral norm of the following matrix.

Definition 3.4

Let A be the n_2 n_3 × n_2 n_3 matrix, whose rows and columns are indexed by ordered pairs (j, k′) and (j′, k) respectively, defined by

A_{(j,k′),(j′,k)} = Σ_i Z_{i,j,k′} · Z_{i,j′,k}.

We can now make the connection to resolution more explicit. We can think of a pair of observations Z_{i,j,k′}, Z_{i,j′,k} as a pair of 3-XOR constraints, as usual. Resolving them (i.e. multiplying them) we obtain a 4-XOR constraint whose right-hand side is the product Z_{i,j,k′} · Z_{i,j′,k}, so A captures the effect of resolving certain pairs of 3-XOR constraints into 4-XOR constraints. The challenge is that the entries in A are not independent, so bounding its maximum singular value will require some care. It is important that the rows of A are indexed by (j, k′) and the columns are indexed by (j′, k), so that j and j′ come from different 3-XOR clauses, as do k and k′; otherwise the spectral bounds that we want to prove about A would simply not be true! This is perhaps the key insight in [17].
It will be more convenient to decompose A and reason about its two types of contributions separately. To that end, we let R be the n_2 n_3 × n_2 n_3 diagonal matrix whose non-zero entries are of the form

R_{(j,k),(j,k)} = Σ_i (Z_{i,j,k})^2,

with all of its other entries set to zero. Then let B be the matrix of the remaining (off-diagonal) entries, so that by construction A = B + R. Now let Y^{(2)} ∈ R^{n_2} be a vector of variables whose ith entry is Y^{(2)}_i, and similarly for Y^{(3)}. Then we can rewrite the right-hand side of (5) as a matrix inner product against Ẽ[(Y^{(2)} ⊗ Y^{(3)})(Y^{(2)} ⊗ Y^{(3)})^T], and we bound the contributions of B and R separately.

Claim: Ẽ[(Y^{(2)} ⊗ Y^{(3)})(Y^{(2)} ⊗ Y^{(3)})^T] is positive semidefinite and has trace at most n_2 n_3.

Proof: It is easy to see that a quadratic form on Ẽ[(Y^{(2)} ⊗ Y^{(3)})(Y^{(2)} ⊗ Y^{(3)})^T] corresponds to Ẽ[p^2] for some p ∈ P^{n_2+n_3}_2, which implies the first part of the claim. The bound on the trace follows because the pseudo-expectation operator satisfies the norm constraints on Y^{(2)} and Y^{(3)}. Hence we can bound the contribution of the first term as

C^2 · ⟨B, Ẽ[(Y^{(2)} ⊗ Y^{(3)})(Y^{(2)} ⊗ Y^{(3)})^T]⟩ ≤ C^2 n_2 n_3 ‖B‖.

Now we proceed to bound the contribution of the second term.

Proof: It is easy to verify by direct computation that the relevant expression decomposes into three terms, and the pseudo-expectation of each of the three terms is nonnegative by construction. This implies the claim.
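Under our reading of Definition 3.4, the decomposition A = B + R can be sketched directly; this is illustrative code with our own names, not the paper's implementation.

```python
import numpy as np

# Build A[(j,k'),(j',k)] = sum_i Z[i,j,k'] * Z[i,j',k] from a random
# sparse sign tensor Z, then split off the diagonal R and the rest B.
rng = np.random.default_rng(4)
n1, n2, n3, m = 6, 5, 4, 40
flat = rng.choice(n1 * n2 * n3, size=m, replace=False)
Z = np.zeros((n1, n2, n3))
Z[np.unravel_index(flat, (n1, n2, n3))] = rng.choice([-1.0, 1.0], size=m)

A = np.zeros((n2 * n3, n2 * n3))
for j in range(n2):
    for kp in range(n3):          # row index encodes (j, k')
        for jp in range(n2):
            for k in range(n3):   # column index encodes (j', k)
                A[j * n3 + kp, jp * n3 + k] = np.dot(Z[:, j, kp], Z[:, jp, k])

R = np.diag(np.diag(A))  # diagonal part: sums of squared entries of Z
B = A - R                # off-diagonal part, whose spectral norm is bounded
```

The diagonal of R sums the squares of all entries of Z, so its trace is exactly m, matching the observation below that the total mass of R is at most m.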
Moreover, each entry in Z is in the set {−1, 0, +1} and there are precisely m non-zeros, so the sum of the absolute values of all entries in R is at most m. Combining the two bounds completes the proof of the lemma.

Spectral bounds
The main remaining step is to bound ‖B‖, where B was defined in the previous section. In fact, for our spectral bounds it will be more convenient to relabel the variables (keeping the definition intact). Working with B directly would still be challenging because of the complex dependencies among its entries. Instead, to make the analysis simpler we will randomly group its entries according to the following rule: for r = 1, 2, ..., O(log n), randomly partition the set of all ordered triples (i, j, k) into two sets S_r and T_r. We use this ensemble of partitions to define an ensemble of matrices {B^r}, r = 1, ..., O(log n): set U^r_{i,j,k′} equal to Z_{i,j,k′} if (i, j, k′) ∈ S_r and zero otherwise, and similarly set V^r_{i,j′,k} equal to Z_{i,j′,k} if (i, j′, k) ∈ T_r and zero otherwise. Also let E_{i,j,j′,k,k′,r} be the event that there is no r′ < r where (i, j, k′) ∈ S_{r′} and (i, j′, k) ∈ T_{r′}, nor an r′ < r where (i, j′, k) ∈ S_{r′} and (i, j, k′) ∈ T_{r′}. Now let

B^r_{(j,k′),(j′,k)} = Σ_i U^r_{i,j,k′} · V^r_{i,j′,k} · 1_E,

where 1_E is shorthand for the indicator function of the event E_{i,j,j′,k,k′,r}. The idea behind this construction is that each pair of triples (i, j, k′) and (i, j′, k) that contributes to B will contribute to some B^r with high probability. (This follows by standard concentration bounds, because the chance that any pair is covered by a random partition is a constant.) Moreover, it will not contribute to any later matrix in the ensemble. Hence, with high probability, ‖B‖ is bounded by the sum of the ‖B^r‖'s.

Throughout the rest of this section we will suppress the superscript r and work with a particular matrix B in the ensemble. Now let ℓ be even and consider E[Tr((BB^T)^{ℓ/2})]. As is standard, we are interested in bounding this quantity in order to bound ‖B‖. Note that B is not symmetric. Also note that the random variables U and V are not independent; however, whether or not they are non-zero is non-positively correlated, and their signs are mutually independent.
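The random-partition trick can be sketched as follows: in each round a fair coin assigns every triple to S_r or T_r, so a fixed pair of triples is covered (lands in S_r × T_r) within O(log n) rounds with high probability. A toy simulation, with names of our choosing:

```python
import numpy as np

# Each round flips a fair coin per triple: 0 -> S_r, 1 -> T_r.  A fixed
# pair of triples lands in (S_r, T_r) with probability 1/4 per round, so
# it is covered within O(log n) rounds with overwhelming probability.
rng = np.random.default_rng(5)
n, rounds = 6, 64
triples = [(i, j, k) for i in range(n) for j in range(n) for k in range(n)]

t1, t2 = triples[0], triples[1]   # a fixed pair of triples to track
covered = False
rounds_used = 0
for r in range(rounds):
    side = {t: int(rng.integers(2)) for t in triples}
    rounds_used += 1
    if side[t1] == 0 and side[t2] == 1:
        covered = True
        break
```

The probability that a fixed pair survives all 64 rounds uncovered is (3/4)^64, which is negligible; this is the concentration step referenced parenthetically above.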
Expanding the trace, each term is a product of entries of B and B^T, where 1_{E_1} is the indicator for the event that the entry B_{(j_1,k_1),(j_2,k_2)} is not covered by an earlier matrix in the ensemble, and similarly for 1_{E_2}, ..., 1_{E_ℓ}.
Notice that there are 2ℓ random variables in the above sum (ignoring the indicator variables). Moreover, if any U or V random variable appears an odd number of times, then the contribution of the term to E[Tr((BB^T)^{ℓ/2})] is zero. We will give an encoding for each term that has a non-zero contribution, and we will prove that the encoding is injective.
Fix a particular term in the above sum where each random variable appears an even number of times. Let s be the number of distinct values of i, and let i_1, i_2, ..., i_s be the order in which these indices first appear. Let r^j_1 denote the number of distinct values of j that appear with i_1 in U terms, i.e. the number of distinct j's that appear as U_{i_1,j,∗}, and let r^k_1 denote the number of distinct values of k that appear with i_1 in U terms, i.e. the number of distinct k's that appear as U_{i_1,∗,k}. Similarly, let q^j_1 denote the number of distinct values of j that appear with i_1 in V terms, i.e. the number of distinct j's that appear as V_{i_1,j,∗}, and let q^k_1 denote the number of distinct values of k that appear with i_1 in V terms, i.e. the number of distinct k's that appear as V_{i_1,∗,k}.
We give our encoding below. It is more convenient to think of the encoding as any way to answer a fixed list of questions about the term, recording at each step whether each index is new or revisited, and if revisited, which earlier index it equals. It is easy to see that the number of possible answers that arise over the sequence of steps is at most 8^ℓ (s(r^j + r^k)(q^j + q^k))^ℓ. We remark that much of the work in bounding the maximum eigenvalue of a random matrix lies in removing terms of this type, for which one needs to encode revisited indices more compactly. However, such terms will only cost us polylogarithmic factors in our bound on ‖B‖.
It is easy to see that this encoding is injective, since given the answers to the above questions one can simulate each step and recover the sequence of random variables. Next we establish some easy facts that allow us to bound E[Tr((BB^T)^{ℓ/2})].

Claim 4.1 For any term that has a non-zero contribution to E[Tr((BB^T)^{ℓ/2})], we must have s ≤ ℓ/2 and r^j + q^j + r^k + q^k ≤ ℓ.

Proof Recall that there are 2ℓ random variables in the product, of which precisely ℓ correspond to U variables and ℓ to V variables. Suppose that s > ℓ/2. Then there must be at least one U variable and at least one V variable that occur exactly once, which implies that the term's expectation is zero because the signs of the non-zero entries are mutually independent. Similarly, suppose r^j + q^j + r^k + q^k > ℓ. Then there must be at least one U or V variable that occurs exactly once, which also implies that its expectation is zero.

Claim 4.2
For any valid encoding, s ≤ r^j + q^j and s ≤ r^k + q^k.
Proof This holds because in each step where the i variable is new and has not been visited before, by definition the j variable is new too (for the current i) and similarly for the k variable.
Finally, if s, r^j, q^j, r^k and q^k are defined as above, then for any contributing term its expectation is at most p^{r^j + r^k} · p^{q^j + q^k}, where p = m/(n_1 n_2 n_3) is the probability that any particular entry of T is observed. This is because there are exactly r^j + r^k distinct U variables and q^j + q^k distinct V variables, whose values are in the set {−1, 0, +1}; whether or not a variable is non-zero is non-positively correlated across variables, and the signs are mutually independent.
This now implies the main lemma. We sum over all valid tuples s, r^j, r^k, q^j, q^k (and hence, using Claim 4.1 and Claim 4.2, over s ≤ ℓ/2, s ≤ r^j + q^j and s ≤ r^k + q^k), upper bounding E[Tr((BB^T)^{ℓ/2})] by the corresponding sum of encoding counts times expectations. Now if p·max(n_2, n_3) ≤ 1, then using Claim 4.2 followed by the first half of Claim 4.1 we obtain the desired bound, where the last inequality follows because p·n_1^{1/2}·max(n_2, n_3) > 1. Alternatively, if p·max(n_2, n_3) > 1 then we can directly invoke the second half of Claim 4.1, and this completes the proof.
As before, let n = n 3 . Then the last piece we need to bound the Rademacher complexity is the following spectral bound: We can now ready to prove Theorem 3.3: Proof Consider any X with X K 6 ≤ 1. Then using Lemma 3.5 and Theorem 4.4 we have Recall that Z was defined in Definition 2.5. The Rademacher complexity can now be bounded as which completes the proof of the theorem.
Recall that bounds on the Rademacher complexity readily imply bounds on the generalization error (see Theorem 2.4). We can now prove Theorem 1.1.

Proof We solve (2) using the norm ‖·‖_{K_6}. Since this norm comes from the sixth level of the sum-of-squares hierarchy, (2) is an n^6-sized semidefinite program and there is an efficient algorithm to solve it to arbitrary accuracy. Moreover, we can always plug in X = T − Δ, and the bounds on the maximum magnitude of an entry in Δ together with the Chernoff bound imply that with high probability X = T − Δ is a feasible solution. Moreover, ‖T − Δ‖_{K_6} ≤ r^*. Hence with high probability the minimizer X satisfies ‖X‖_{K_6} ≤ r^*. Now if we take any such X returned by the convex program, because it is feasible its empirical error is at most 2δ. And since ‖X‖_{K_6} ≤ r^*, the bounds on the Rademacher complexity (Theorem 3.3) together with Theorem 2.4 give the desired bounds on err(X) and complete the proof of our main theorem.
In Appendix A we treat the general case where the n_i's can differ, which also allows us to extend our results to higher order tensors.
Finally we prove Corollary 1.2.

Proof Our goal is to lower bound the absolute value of a typical entry in T. To be concrete, suppose that var(T_{i,j,k}) ≥ f(r, n) for a 1 − o(1) fraction of the entries, where f(r, n) = r^{1/2}/log^D n. Consider T_{i,j,k}, which we view as a degree three polynomial in Gaussian random variables. The anti-concentration bounds of Carbery and Wright [14] then imply that |T_{i,j,k}| ≥ f(r, n)/log n with probability 1 − o(1). With this in mind, we define R to be the set of such entries, and it follows from Markov's inequality that |R| ≥ (1 − o(1)) n_1 n_2 n_3. Now consider just those entries in R that we get substantially wrong, and call this set R′. We can invoke Theorem 1.1, which guarantees that the hypothesis X that results from solving (2) satisfies err(X) = o(1/log n) with probability 1 − o(1) provided that m = Ω(n^{3/2} r). This bound on the error immediately implies that |R′| = o(n_1 n_2 n_3), and so |R \ R′| = (1 − o(1)) n_1 n_2 n_3. This completes the proof of the corollary.

Sum-of-squares lower bounds
Here we show strong lower bounds on the Rademacher complexity of the sequence of relaxations to the tensor nuclear norm that we get from the sum-of-squares hierarchy. Our lower bounds follow as a corollary of known lower bounds for refuting random instances of 3-XOR [31,58]. First we need to introduce the formulation of the sum-of-squares hierarchy used in [58], which is a natural semidefinite programming relaxation for the problem of deciding whether a given formula is satisfiable. We call a Boolean function f a k-junta if there is a set S ⊆ [n] of at most k variables such that f is determined by the values of the variables in S.

Definition 5.1
The k-round Lasserre hierarchy is the following relaxation: we define a vector v_f for each k-junta f, subject to consistency constraints such as v_{f+g} = v_f + v_g for all f, g that are k-juntas and satisfy f · g ≡ 0. Here C is a class of constraints that must be satisfied by any Boolean solution (and the constraints are necessarily k-juntas themselves). See [58] for more background, but it is easy to construct a feasible solution to the above convex program given a distribution on feasible solutions to some constraint satisfaction problem. In the above relaxation we think of functions f as being {0, 1}-valued. Over the {0, 1}-alphabet, we use the notation (⊕_S, Z_S) to represent the XOR clause that takes the variables in the set S, takes their XOR, and asks that the value be equal to Z_S. It will be more convenient to work with an intermediate relaxation where functions are {−1, 1}-valued, and the intuition is that u_S for a set S ⊆ [n] should correspond to the vector for the character χ_S.

Definition 5.2 Alternatively, the k-round Lasserre hierarchy is the following relaxation over vectors u_S, where f_S is the parity function on S and g_S = 1 − f_S is its complement. Moreover, let u_∅ = v_0.
Here f_S is the parity function on S, and similarly for the other functions; an identical argument holds for the other terms, which implies that all the Constraints (b) hold. Similarly, suppose (⊕_S, Z_S) ∈ C. Since f_S · g_S ≡ 0, the corresponding constraint also holds (see [58]), and this completes the proof.

Now, following Barak et al. [4], we can use the constraints in Definition 5.2 to define the operator Ẽ[·]. In particular, given a multilinear p ∈ P^n_k with p ≡ Σ_S c_S Π_{i∈S} Y_i, we set Ẽ[p] = Σ_S c_S ⟨u_∅, u_S⟩. We also need to define Ẽ[p] when p is not multilinear; in that case, if Y_i appears an even number of times we replace it with 1, and if it appears an odd number of times we replace it with Y_i, to get a multilinear polynomial q, and then set Ẽ[p] = Ẽ[q].

Claim 5.4 Ẽ[·] is a feasible solution to the constraints in Definition 3.2.
Proof By construction Ẽ[1] = 1, and the proof that Ẽ[p^2] ≥ 0 is given in [4], but we repeat it here for completeness. Let p = Σ_S c_S Π_{i∈S} Y_i be multilinear, where we follow the above recipe and replace terms of the form Y_i^2 with 1 as needed. Then p^2 = Σ_{S,T} c_S c_T Π_{i∈S} Y_i Π_{i∈T} Y_i, and its pseudo-expectation is a quadratic form in the vectors u_S, hence nonnegative; the same holds for any polynomial q ∈ P^n_{k−2}. Finally, the incoherence constraints follow because C^2 ≥ 1, and this holds for any polynomial q ∈ P^n_{(d−d′)/2}. This completes the proof.
Theorem 5.5 [31,58] Let φ be a random 3-XOR formula on n variables with m = n^{3/2−ε} clauses. Then for any ε > 0 and any c < 2ε, the k = Ω(n^c) round Lasserre hierarchy given in Definition 5.1 permits a feasible solution with probability 1 − o(1).
Note that the constant in the Ω(·) depends on ε and c. Then, using the above reductions, we obtain Corollary 5.6 as an immediate consequence. Thus there is a sharp phase transition (as a function of the number of observations) in the Rademacher complexity of the norms derived from the sum-of-squares hierarchy. At level six, R_m(‖·‖_{K_6}) = o(1) whenever m = ω(n^{3/2} log^4 n). In contrast, R_m(‖·‖_{K_k}) = 1 − o(1) when m = n^{3/2−ε}, even for very strong relaxations derived from n^{2ε} rounds of the sum-of-squares hierarchy. These norms require time roughly 2^{n^{2ε}} to compute, but still achieve essentially no better bounds on their Rademacher complexity.
Here we reduce to the third-order case and show how any algorithm for tensor prediction that works in the case n_1 = n_2 = n_3 can be used to predict the entries when the n_i's are not necessarily the same. We will then be able to flatten higher-order tensors into third-order tensors and get new, immediate consequences for tensor completion. Hardt gave a related reduction for the case of matrices [33], and it is instructive to first understand this reduction before proceeding to the tensor case. Suppose we are given a matrix M that is not necessarily square. Then the approach of [33] is to construct the square matrix

S = [ 0, M ; M^T, 0 ].

We have not precisely defined the notion of incoherence that is used in the matrix completion literature, but it is easy to see that S is low rank and incoherent as well.
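Hardt's symmetrization can be sketched in a few lines; the block matrix S is square, symmetric, and has rank exactly twice that of M (a sketch with our own names):

```python
import numpy as np

# Embed a rectangular low-rank M into the square symmetric matrix
# S = [[0, M], [M^T, 0]].  The rank at most doubles, and samples from M
# can be used to simulate samples from S.
rng = np.random.default_rng(6)
n1, n2, r = 7, 4, 2
M = rng.standard_normal((n1, r)) @ rng.standard_normal((r, n2))  # rank r

S = np.block([[np.zeros((n1, n1)), M],
              [M.T, np.zeros((n2, n2))]])
```

Each nonzero singular value of M appears twice in the spectrum of S (once with each sign), which is why the rank exactly doubles while incoherence is preserved.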
The important point is that given m samples generated uniformly at random from M, we can generate random samples from S too. It will be more convenient to think of these random samples as being generated without replacement, but this reduction works just as well with replacement. Let M ∈ R^{n_1 × n_2}. For each sample from S, with probability p = (n_1^2 + n_2^2)/(n_1 + n_2)^2 we reveal a uniformly random entry in one of the blocks of zeros, and with probability 1 − p we reveal a uniformly random entry from M. Each entry in M appears exactly twice in S, and we choose to reveal this entry of M with probability 1/2 from the top-right block, and otherwise from the bottom-left block. Thus given m samples from M, we can generate m samples from S (in fact we can generate even more, because some of the revealed entries will be zeros). It is easy to see that this approach works for the case of sampling without replacement too, in that m samples without replacement from M can be used to generate at least m samples without replacement from S.

Now let us proceed to the tensor case. For ease of notation, let m(n, r, δ, f, C) be such that there is an algorithm which, given m(n, r, δ, f, C) observations from a rank r, order d, size n × n × ... × n tensor whose factors each have norm at most C, returns an estimate X with err(X) ≤ f with probability at least 1 − δ.

Proof Our goal is to symmetrize the problem in such a way that each entry of the new tensor is either zero or else corresponds to an entry in the original tensor. Our reduction works for any odd order d tensor. In particular, let T be an order d tensor where the dimension of a_j is n_j, and let n = Σ_{j=1}^d n_j. Then we construct a new order d tensor as follows. Let σ_1, σ_2, ..., σ_d be a collection of d random ±1 variables chosen uniformly at random from the 2^{d−1} configurations where Π_{j=1}^d σ_j = 1.
Then we consider the random vector obtained by concatenating σ_1 a^{(1)}, σ_2 a^{(2)}, ..., σ_d a^{(d)}, and let S be the expectation of its d-fold tensor power. It is immediate that the dimensions of S are all equal and that S has rank at most 2^{d−1} r, by expanding the expectation into a sum over the valid sign configurations. Moreover, each rank one term in the decomposition is of the form a^{⊗d} where ‖a‖_2^2 = d, because a is the concatenation of d unit vectors.
Each entry in S is itself a degree d polynomial in the σ_j variables. By our construction of the σ_j variables, and because d is odd so that no term has every variable appearing to an even power, all the terms vanish in expectation except for those that carry a factor of Π_{j=1}^d σ_j; these are exactly the terms that correspond to some permutation π : [d] → [d]. Hence all of the entries in S are either zero or are 2^{d−1} times an entry in T. As before, we can generate m uniformly random samples from S given m uniformly random samples from T, by choosing to sample an entry from one of the blocks of zeros with the appropriate probability, or else revealing an entry of T and choosing where in S to reveal this entry uniformly at random. Applying the assumed algorithm to S and reading off the corresponding entries of T, we obtain the claimed bound on err(Y). This completes the reduction.
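For d = 3 the sign-averaging argument can be verified numerically: blocks of S that take one index from each of the three parts survive with the factor σ_1σ_2σ_3 = 1, while unbalanced blocks average to zero. A sketch under our reading of the construction:

```python
import itertools
import numpy as np

# Average u (x) u (x) u over the sign configurations with s1*s2*s3 = +1,
# where u = (s1*a, s2*b, s3*c).  Cross blocks carry s1*s2*s3 = 1 and
# survive; blocks with a repeated part carry a lone sign and vanish.
n = 3
rng = np.random.default_rng(7)
a, b, c = rng.standard_normal(n), rng.standard_normal(n), rng.standard_normal(n)
configs = [s for s in itertools.product([-1, 1], repeat=3)
           if s[0] * s[1] * s[2] == 1]          # the 2^(d-1) = 4 configurations

S = np.zeros((3 * n,) * 3)
for s in configs:
    u = np.concatenate([s[0] * a, s[1] * b, s[2] * c])
    S += np.einsum('i,j,k->ijk', u, u, u) / len(configs)

cross = S[0:n, n:2*n, 2*n:3*n]       # one index from each of a, b, c
target = np.einsum('i,j,k->ijk', a, b, c)
unbalanced = S[0:n, 0:n, n:2*n]      # two indices from the a-block
```

This matches the argument above: only the permutation blocks survive, each proportional to an entry of the original tensor.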
Note that in the case where n_1 = n_2 = n_3 = ... = n_d, the error and the rank in this reduction increase by at most an e^d and a 2^d factor, respectively.