Abstract
This paper considers the problem of estimation of a low-rank matrix when most of its entries are not observed and some of the observed entries are corrupted. The observations are noisy realizations of a sum of a low-rank matrix, which we wish to estimate, and a second matrix having a complementary sparse structure such as elementwise sparsity or columnwise sparsity. We analyze a class of estimators obtained as solutions of a constrained convex optimization problem combining the nuclear norm penalty and a convex relaxation penalty for the sparse constraint. Our assumptions allow for simultaneous presence of random and deterministic patterns in the sampling scheme. We establish rates of convergence for the low-rank component from partial and corrupted observations in the presence of noise and we show that these rates are minimax optimal up to logarithmic factors.
Notes
This statement actually appears as an intermediate step in the proof of this lemma.
References
Agarwal, A., Negahban, S., Wainwright, M.J.: Noisy matrix decomposition via convex relaxation: optimal rates in high dimensions. Ann. Stat. 40(2), 1171–1197 (2012)
Aubin, J.P., Ekeland, I.: Applied nonlinear analysis. In: Pure and Applied Mathematics. Wiley, New York (1984)
Bauer, F.L., Stoer, J., Witzgall, C.: Absolute and monotonic norms. Numer. Math. 3, 257–264 (1961)
Buehlmann, P., van de Geer, S.: Statistics for High-Dimensional Data: Methods, Theory and Applications. Springer, New York (2011)
Cai, T.T., Zhou, W.: Matrix completion via max-norm constrained optimization
Candès, E.J., Li, X., Ma, Y., Wright, J.: Robust principal component analysis? J. ACM 58(1), 1–37 (2009)
Candès, E.J., Tao, T.: The power of convex relaxation: near-optimal matrix completion. IEEE Trans. Inf. Theory 56(5), 2053–2080 (2010)
Chandrasekaran, V., Sanghavi, S., Parrilo, P.A., Willsky, A.S.: Rank-sparsity incoherence for matrix decomposition. SIAM J. Optim. 21(2), 572–596 (2011)
Chen, Y., Xu, H., Caramanis, C., Sanghavi, S.: Robust matrix completion with corrupted columns. In: ICML, pp. 873–880 (2011)
Chen, Y., Jalali, A., Sanghavi, S., Caramanis, C.: Low-rank matrix recovery from errors and erasures. IEEE Trans. Inf. Theory 59(7), 4324–4337 (2013)
de la Peña, V.H., Giné, E.: Decoupling: From Dependence to Independence. Randomly Stopped Processes, \(U\)-Statistics and Processes, Martingales and Beyond. Probability and Its Applications. Springer, New York (1999)
Foygel, R., Srebro, N.: Concentration-based guarantees for low-rank matrix reconstruction. In: Proceedings of the 24th Annual Conference on Learning Theory (COLT) (2011)
Giné, E., Latała, R., Zinn, J.: Exponential and moment inequalities for \(U\)-statistics. In: High Dimensional Probability II (Seattle, 1999). Progress in Probability, vol. 47, pp. 13–38. Birkhäuser, Boston (2000)
Gross, D.: Recovering low-rank matrices from few coefficients in any basis. IEEE Trans. Inf. Theory 57(3), 1548–1566 (2011)
Hsu, D., Kakade, S.M., Zhang, T.: Robust matrix decomposition with sparse corruptions. IEEE Trans. Inf. Theory 57(11), 7221–7234 (2011)
Xu, H., Caramanis, C., Sanghavi, S.: Robust PCA via outlier pursuit. IEEE Trans. Inf. Theory 58(5), 3047–3064 (2012)
Keshavan, R.H., Montanari, A., Oh, S.: Matrix completion from noisy entries. J. Mach. Learn. Res. 11, 2057–2078 (2010)
Klopp, O.: Noisy low-rank matrix completion with general sampling distribution. Bernoulli 20(1), 282–303 (2014)
Koltchinskii, V.: Oracle Inequalities in Empirical Risk Minimization and Sparse Recovery Problems. Lectures from the 38th Probability Summer School held in Saint-Flour, 2008 (École d’Été de Probabilités de Saint-Flour). Lecture Notes in Mathematics, vol. 2033. Springer, Heidelberg (2011)
Koltchinskii, V., Lounici, K., Tsybakov, A.B.: Nuclear-norm penalization and optimal rates for noisy low-rank matrix completion. Ann. Stat. 39(5), 2302–2329 (2011)
Negahban, S., Wainwright, M.J.: Restricted strong convexity and weighted matrix completion: optimal bounds with noise. J. Mach. Learn. Res. 13, 1665–1697 (2012)
Recht, B.: A simpler approach to matrix completion. J. Mach. Learn. Res. 12, 3413–3430 (2011)
Rohde, A., Tsybakov, A.: Estimation of high-dimensional low-rank matrices. Ann. Stat. 39(2), 887–930 (2011)
Rudelson, M., Vershynin, R.: Hanson–Wright inequality and sub-Gaussian concentration. Electron. Commun. Probab. 18(82), 9 (2013)
Tsybakov, A.B.: Introduction to nonparametric estimation. In: Springer Series in Statistics. Springer, New York (2009) (revised and extended from the 2004 French original, translated by Vladimir Zaiats)
Vershynin, R.: Introduction to the non-asymptotic analysis of random matrices. arXiv:1011.3027 (2010)
Li, X.: Compressed sensing and matrix completion with constant proportion of corruptions. Constr. Approx. 37(1) (2013)
Acknowledgments
The work of O. Klopp was conducted as part of the project Labex MME-DII (ANR11-LBX-0023-01). The work of K. Lounici was supported in part by Simons Grant 315477 and by NSF Career Grant DMS-1454515. The work of A.B. Tsybakov was supported by GENES and by the French National Research Agency (ANR) under the Grants IPANEMA (ANR-13-BSH1-0004-02), Labex ECODEC (ANR-11-LABEX-0047), ANR-11-IDEX-0003-02, and by the “Chaire Economie et Gestion des Nouvelles Données”, under the auspices of Institut Louis Bachelier, Havas-Media and Paris-Dauphine. The authors want to thank the anonymous referee for his extremely valuable remarks.
Appendices
Appendix A: Proofs of Theorem 1 and of Corollary 7
1.1 A.1: Proof of Theorem 1
The proofs of the upper bounds share some elements with the methods developed in [18] for noisy matrix completion, but the presence of corruptions in our setting requires a new approach, in particular for proving the “restricted strong convexity” property (Lemma 15), which is the main difficulty in the proof.
Recall that our estimator is defined as
and our goal is to bound from above the Frobenius norms \(\Vert L_0-{\hat{L}}\Vert _2^{2}\) and \(\Vert S_0-{\hat{S}}\Vert _2^{2}\).
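For intuition, an estimator of this type can be computed by proximal-gradient iterations, alternating singular-value soft-thresholding for the nuclear-norm penalty with the proximal map of \(\mathcal {R}\). The following is only a minimal numerical sketch in the elementwise-sparsity case (\(\mathcal {R}=\Vert \cdot \Vert _1\)), with all entries observed and without the constraint set of the definition; the function names and the step size are illustrative assumptions, not part of the estimator as analyzed in the paper.

```python
import numpy as np

def soft_threshold(x, t):
    """Entrywise soft-thresholding: the proximal map of t * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def svd_threshold(x, t):
    """Singular-value soft-thresholding: the proximal map of t * ||.||_*."""
    u, s, vt = np.linalg.svd(x, full_matrices=False)
    return u @ np.diag(np.maximum(s - t, 0.0)) @ vt

def lowrank_plus_sparse(y, lam1, lam2, n_iter=200, step=0.5):
    """Proximal-gradient sketch for
    min_{L,S} 0.5 * ||Y - L - S||_F^2 + lam1 * ||L||_* + lam2 * ||S||_1.
    The joint quadratic term has Lipschitz gradient with constant 2,
    so step = 0.5 guarantees monotone descent of the objective."""
    L = np.zeros_like(y)
    S = np.zeros_like(y)
    for _ in range(n_iter):
        resid = L + S - y  # gradient of the quadratic term in both blocks
        L = svd_threshold(L - step * resid, step * lam1)
        S = soft_threshold(S - step * resid, step * lam2)
    return L, S
```

Since the step size does not exceed the reciprocal of the Lipschitz constant, the objective value at any iterate is no larger than its value at the zero initialization.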
(1):
Set \({\mathcal {F}}(L,S)=\frac{1}{N}\sum ^{N}_{i=1} \left( Y_i-\left\langle X_i,L+S\right\rangle \right) ^{2}+\lambda _1 \Vert L\Vert _*+\lambda _2 {\mathcal {R}}( S)\), \(\Delta L=L_0-{\hat{L}}\) and \(\Delta S=S_0-{\hat{S}}\). Using the inequality \({\mathcal {F}}({\hat{L}},{\hat{S}})\le {\mathcal {F}}(L_0,S_0)\) and (1) we get
$$\begin{aligned}&\dfrac{1}{N}\sum ^{N}_{i=1} \left( \left\langle X_i,\Delta L+\Delta S\right\rangle +\xi _{i}\right) ^{2}+\lambda _{1} \Vert {\hat{L}}\Vert _*+\lambda _{2}{\mathcal {R}}({\hat{S}})\nonumber \\&\qquad \le \dfrac{1}{N}\sum ^{N}_{i=1} \xi _{i}^{2}+\lambda _{1} \Vert L_0\Vert _* +\lambda _{2}{\mathcal {R}}(S_0). \end{aligned}$$

After some algebra this implies
$$\begin{aligned} \dfrac{1}{N}\sum _{i\in \Omega } \left\langle X_i,\Delta L+\Delta S\right\rangle ^{2}\le & {} \underset{{\mathbf {I}}}{\underbrace{\dfrac{2}{N}\sum _{i\in {\tilde{\Omega }}} \left| \left\langle \xi _iX_i,\Delta L+\Delta S\right\rangle \right| -\dfrac{1}{N}\sum _{i\in {\tilde{\Omega }}} \left\langle X_i, \Delta L+\Delta S\right\rangle ^{2}}}\nonumber \\&+\, \underset{{\mathbf {II}}}{\underbrace{2\left| \left\langle \Sigma ,\Delta L\right\rangle \right| +\lambda _{1} \left( \Vert L_0\Vert _* -\Vert {\hat{L}}\Vert _*\right) }}\nonumber \\&+\,\underset{{\mathbf {III}}}{\underbrace{2\left| \left\langle \Sigma ,\Delta S_{\mathcal {I}}\right\rangle \right| +\lambda _{2}\left( \mathcal {R}(S_0)-\mathcal {R}({\hat{S}})\right) }} \end{aligned}$$

(25)

where \(\Sigma =\frac{1}{N}\sum _{i\in \Omega } \xi _iX_i\) and we have used the equality \(\left\langle \Sigma ,\Delta S\right\rangle =\left\langle \Sigma ,\Delta S_{\mathcal {I}}\right\rangle \). We now estimate each of the three terms on the right hand side of (25) separately. This will be done on the random event
$$\begin{aligned} {\mathcal {U}}=\left\{ \underset{1\le i\le N}{\max }\vert \xi _{i}\vert \le C_*\sigma \sqrt{\log d}\right\} \end{aligned}$$

(26)

where \(C_*>0\) is a suitably chosen constant. Using a standard bound on the maximum of sub-Gaussian variables and the constraint \(N\le m_1m_2\), we get that there exists an absolute constant \(C_*>0\) such that \({\mathbb {P}}({\mathcal {U}})\ge 1-\frac{1}{2d}\). In what follows, we take this constant \(C_*\) in the definition of \({\mathcal {U}}\).
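As a quick numerical sanity check (not part of the proof): for i.i.d. standard Gaussian noise, an event of the form \({\mathcal {U}}\) with the illustrative choice \(C_*=3\) already holds in essentially every draw. The values of N, d and \(C_*\) below are assumptions made for the simulation only.

```python
import numpy as np

# Monte Carlo frequency of the event U = { max_i |xi_i| <= C* sigma sqrt(log d) }
rng = np.random.default_rng(1)
N = d = 1000              # in the paper N <= m1*m2; here we simply take N = d
C_star, sigma = 3.0, 1.0  # illustrative constant, not the one from the proof
thresh = C_star * sigma * np.sqrt(np.log(d))
trials = 200
hits = sum(np.abs(rng.standard_normal(N)).max() <= thresh for _ in range(trials))
frequency = hits / trials
```

With these values the threshold is about 7.9, while the typical maximum of 1000 standard Gaussians is near \(\sqrt{2\log 1000}\approx 3.7\), so the event fails only with negligible probability.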
We start by estimating \({\mathbf {I}}\). On the event \({\mathcal {U}}\), we get
$$\begin{aligned} \mathbf{I}\le & {} \frac{1}{N}\sum _{i\in {\tilde{\Omega }}} \xi ^{2}_i \le \dfrac{C\,\sigma ^{2}\vert {\tilde{\Omega }}\vert \log (d)}{N}. \end{aligned}$$

(27)

Now we estimate \({\mathbf {II}}\). For a linear vector subspace S of a Euclidean space, let \(P_S\) denote the orthogonal projector on S and let \(S^\bot \) denote the orthogonal complement of S. For any \(A\in {\mathbb {R}}^{m_1\times m_2}\), let \(u_j(A)\) and \(v_j(A)\) be the left and right orthonormal singular vectors of A, respectively. Denote by \(S_1(A)\) the linear span of \(\{u_j(A)\}\), and by \(S_2(A)\) the linear span of \(\{v_j(A)\}\). We set
$$\begin{aligned} {\mathbf {P}}_A^{\bot }(B)= & {} P_{S_1^{\bot }(A)}BP_{S_2^{\bot }(A)} \quad \text {and}\quad {\mathbf {P}}_A(B)=B- {\mathbf {P}}_A^{\bot }(B). \end{aligned}$$

By definition of \({\mathbf {P}}_{L_0}^{\bot }\), for any matrix B the singular vectors of \({\mathbf {P}}_{L_0}^{\bot }(B)\) are orthogonal to the space spanned by the singular vectors of \(L_0\). This implies that \(\left\| L_0+{\mathbf {P}}_{L_0}^{\bot }(\Delta L) \right\| _*=\left\| L_0 \right\| _*+\left\| {\mathbf {P}}_{L_0}^{\bot }(\Delta L) \right\| _*\). Thus,
$$\begin{aligned} \Vert {\hat{L}}\Vert _*= & {} \left\| L_0 +\Delta L\right\| _*\\= & {} \left\| L_0 +{\mathbf {P}}_{L_0}^{\bot }(\Delta L) +{\mathbf {P}}_{L_0}(\Delta L)\right\| _*\\\ge & {} \left\| L_0 +{\mathbf {P}}_{L_0}^{\bot }(\Delta L)\right\| _* -\left\| {\mathbf {P}}_{L_0}(\Delta L)\right\| _*\\= & {} \left\| L_0 \right\| _*+\left\| {\mathbf {P}}_{L_0}^{\bot }(\Delta L)\right\| _*-\left\| {\mathbf {P}}_{L_0}(\Delta L)\right\| _*, \end{aligned}$$

which yields
$$\begin{aligned} \Vert L_0 \Vert _*-\Vert {\hat{L}}\Vert _*\le \left\| {\mathbf {P}}_{L_0}(\Delta L)\right\| _*-\left\| {\mathbf {P}}_{L_0}^{\bot }(\Delta L)\right\| _*. \end{aligned}$$

(28)

Using (28) and the duality between the nuclear and the operator norms, we obtain
$$\begin{aligned} {\mathbf {II}}\le 2 \Vert \Sigma \Vert \Vert \Delta L\Vert _{*}+\lambda _{1}\left( \left\| {\mathbf {P}}_{L_0}(\Delta L)\right\| _*-\left\| {\mathbf {P}}_{L_0}^{\bot }(\Delta L)\right\| _*\right) . \end{aligned}$$

The assumption that \(\lambda _1\ge 4\Vert \Sigma \Vert \) and the triangle inequality imply
$$\begin{aligned} {\mathbf {II}} \le \dfrac{3}{2}\lambda _1\left\| {\mathbf {P}}_{L_0}(\Delta L)\right\| _*\le \dfrac{3}{2}\lambda _1\sqrt{2r}\left\| \Delta L\right\| _2 \end{aligned}$$

(29)

where \(r=\mathrm{rank}(L_0)\) and we have used that \(\mathrm{rank}({\mathbf {P}}_{L_0}(\Delta L))\le 2\,\mathrm{rank}(L_0)\).
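The facts just used, \(\mathrm{rank}({\mathbf {P}}_{L_0}(\Delta L))\le 2\,\mathrm{rank}(L_0)\) and \(\Vert {\mathbf {P}}_{L_0}(\Delta L)\Vert _*\le \sqrt{2r}\Vert \Delta L\Vert _2\), together with the nuclear-norm additivity used before (28), can be verified numerically on random instances. The sketch below uses illustrative dimensions; all names are assumptions of the illustration.

```python
import numpy as np

def projectors(a, rank):
    """Orthogonal projectors onto the column and row spaces of a rank-`rank` matrix a."""
    u, _, vt = np.linalg.svd(a, full_matrices=False)
    u, v = u[:, :rank], vt[:rank, :].T
    return u @ u.T, v @ v.T

rng = np.random.default_rng(0)
m1, m2, r = 15, 12, 3
L0 = rng.standard_normal((m1, r)) @ rng.standard_normal((r, m2))  # rank-r matrix
B = rng.standard_normal((m1, m2))                                 # plays the role of Delta L
p1, p2 = projectors(L0, r)
B_perp = (np.eye(m1) - p1) @ B @ (np.eye(m2) - p2)  # P^perp_{L0}(B)
B_par = B - B_perp                                  # P_{L0}(B)

nuclear = lambda a: np.linalg.svd(a, compute_uv=False).sum()
rank_bound_ok = np.linalg.matrix_rank(B_par) <= 2 * r
norm_bound_ok = nuclear(B_par) <= np.sqrt(2 * r) * np.linalg.norm(B) + 1e-9
# singular spaces of B_perp are orthogonal to those of L0, so nuclear norms add
additive_ok = np.isclose(nuclear(L0 + B_perp), nuclear(L0) + nuclear(B_perp))
```

The rank bound holds because \({\mathbf {P}}_{L_0}(B)=P_1B+(I-P_1)BP_2\) is a sum of two matrices of rank at most r each, and the Frobenius bound then follows since \({\mathbf {P}}_{L_0}\) is an orthogonal projection in the Frobenius inner product.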
For the third term in (25), we use the duality between \(\mathcal {R}\) and \(\mathcal {R}^{*}\), and the identity \(\Delta S_{\mathcal {I}}=-{\hat{S}}_{\mathcal {I}}\):
$$\begin{aligned} {\mathbf {III}}\le 2\mathcal {R}^{*}(\Sigma )\mathcal {R}({\hat{S}}_{\mathcal {I}})+\lambda _{2}( \mathcal {R}(S_0)-\mathcal {R}({\hat{S}})). \end{aligned}$$

This and the assumption that \(\lambda _2\ge 4\mathcal {R}^{*}(\Sigma )\) imply
$$\begin{aligned} {\mathbf {III}}\le \lambda _2\mathcal {R}( S_0). \end{aligned}$$

(30)

Plugging (29), (30) and (27) in (25) we get that, on the event \({\mathcal {U}}\),
$$\begin{aligned} \dfrac{1}{n}\sum _{i\in \Omega } \left\langle X_i,\Delta L+\Delta S\right\rangle ^{2}\le & {} \frac{3\,\ae \,\lambda _1}{\sqrt{2}}\sqrt{r}\left\| \Delta L\right\| _2 +\ae \lambda _2\mathcal {R}( S_0) +\dfrac{C\sigma ^{2}\vert {\tilde{\Omega }} \vert \log (d)}{n}\nonumber \\ \end{aligned}$$

(31)

where \(\ae =N/n\).
(2):
Second, we will show that a kind of restricted strong convexity holds for the random sampling operator given by \((X_i)\) on a suitable subset of matrices. In words, we prove that the observation operator captures a substantial component of any pair of matrices L, S belonging to a properly chosen constrained set (cf. Lemma 15(ii) below for the exact statement). This will imply that, with high probability,
$$\begin{aligned} \dfrac{1}{n}\sum _{i\in \Omega } \left\langle X_i,\Delta L+\Delta S\right\rangle ^{2}\ge \left\| \Delta L+\Delta S\right\| _{L_2(\Pi )}^{2}-{\mathcal {E}} \end{aligned}$$

(32)

with an appropriate residual \({\mathcal {E}}\), whenever we prove that \((\Delta L,\Delta S)\) belongs to the constrained set. This will be a substantial element of the remaining part of the proof. The result of the theorem will then be deduced by combining (31) and (32).
We start by defining our constrained set. For positive constants \(\delta _1\) and \(\delta _2\), we first introduce the following set of matrices where \(\Delta S\) should lie:
The constants \(\delta _1\) and \(\delta _2\) define the constraints on the \(L_2(\Pi )\)-norm and on the sparsity of the component S. The error term \({\mathcal {E}}\) in (32) depends on \(\delta _1\) and \(\delta _2\). We will specify the suitable values of \(\delta _1\) and \(\delta _2\) for the matrix \(\Delta S\) later. Next, we define the following set of pairs of matrices:
where \(\kappa \) and \(\tau <m_1\wedge m_2\) are some positive constants. This will be used for \(A=\Delta L\) and \(B=\Delta S\). If the \(L_2(\Pi )\)-norm of the sum of two matrices is too small, the right hand side of (32) is negative. The first inequality in the definition of \({\mathcal {D}}(\tau ,\kappa )\) prevents this. The condition \(\left\| A\right\| _{*}\le \sqrt{\tau } \left\| A_{\mathcal {I}}\right\| _{2}+\kappa \) is a relaxed form of the condition \(\left\| A\right\| _{*}\le \sqrt{\tau } \left\| A\right\| _{2}\) satisfied by matrices of rank \(\tau \). We will show that, with high probability, the matrix \(\Delta L\) satisfies this condition with \(\tau =C\,\mathrm{rank}(L_0)\) and a small \(\kappa \). To prove it, we need the bound \(\mathcal {R}(B)\le \delta _2\) on the corrupted part.
Finally, define our constrained set as the intersection
We now return to the proof of the theorem. To prove (11), we bound separately the norms \(\left\| \Delta L\right\| _2\) and \(\left\| \Delta S\right\| _2\). Note that
and similarly,
In view of these inequalities, it is enough to bound the quantities \(\Vert \Delta S_{\mathcal {I}}\Vert _{L_2(\Pi )}^{2}\) and \(\Vert \Delta L_{\mathcal {I}}\Vert _2^{2}\). A bound on \(\Vert \Delta S_{\mathcal {I}}\Vert _{L_2(\Pi )}^{2}\) with the rate claimed in (11) is given in Lemma 14 below. In order to bound \(\Vert \Delta L_{\mathcal {I}}\Vert _{L_2(\Pi )}^{2}\) (or \(\Vert \Delta L_{\mathcal {I}}\Vert _2^{2}\), depending on the case), we will need the following argument.
Case 1 Suppose that \(\left\| \Delta L+\Delta S\right\| _{L_2(\Pi )}^{2} < 16{\mathbf {a}}^{2}\sqrt{\dfrac{64\,\log (d)}{\log \left( 6/5\right) \,n}}\). Then a straightforward inequality
together with Lemma 14 below implies that, with probability at least \(1-2.5/d\),
where
Note also that \(\Psi _4\le C(\Psi _1+\Psi _2 +\Psi _3)\). In view of (34), (36) and of the fact that \(\vert \mathcal {I}\vert \le m_1m_2\), the bound on \(\Vert \Delta L\Vert _2^{2}\) stated in the theorem holds with probability at least \(1-2.5/d\).
Case 2 Assume now that \(\left\| \Delta L+\Delta S\right\| _{L_2(\Pi )}^{2} \ge 16{\mathbf {a}}^{2}\sqrt{\dfrac{64\,\log (d)}{\log \left( 6/5\right) \,n}}\). We will show that in this case and with an appropriate choice of \(\delta _1,\delta _2,\tau \) and \(\kappa \), the pair \(\frac{1}{4{\mathbf {a}}}(\Delta L, \Delta S)\) belongs to the intersection \({\mathcal {D}}(\tau ,\kappa )\cap \{{\mathbb {R}}^{m_1\times m_2}\times {\mathcal {B}}(\delta _1,\delta _2)\}\).
Lemma 13 below and (27) imply that, on the event \({\mathcal {U}}\),
Lemma 14 yields that, with probability at least \(1-2.5\,d^{-1}\),
This property and (37) imply that \(\frac{1}{4{\mathbf {a}}}\left( \Delta L,\Delta S\right) \in {\mathcal {D}}(\tau ,\kappa )\cap \{{\mathbb {R}}^{m_1\times m_2}\times {\bar{\mathcal {B}}}\}\) with probability at least \(1-2.5\,d^{-1}\), where
Therefore, we can apply Lemma 15(ii). From Lemma 15(ii) and (31) we obtain that, with probability at least \(1-4.5\,d^{-1}\),
where
Using an elementary argument and then (34) we find
This inequality and (38) yield
Using again (35), Lemma 14, (9) and the bound \(\vert \mathcal {I}\vert \le m_1m_2\) we obtain
This and the inequality \(\sqrt{2r\vert {\tilde{\mathcal {I}}}\vert }\,{\mathbb {E}}\left( \Vert \Sigma _R\Vert \right) \le \dfrac{\vert {\tilde{\mathcal {I}}}\vert }{\mu \,m_1m_2}+\mu \,m_1m_2\,r\,\left( {\mathbb {E}}\left( \Vert \Sigma _R\Vert \right) \right) ^{2}\) imply that, with probability at least \(1-4.5\,d^{-1}\),
In view of (40) and (34), \(\Vert \Delta L\Vert _2^{2}\) is bounded by the right hand side of (11) with probability at least \(1-4.5\,d^{-1}\). Finally, inequality (12) follows from Lemma 14, (9) and the identity \(\Delta S_{\mathcal {I}}=-{\hat{S}}_{\mathcal {I}}\).
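For completeness, the elementary inequality invoked above is the bound \(2ab\le a^{2}+b^{2}\) applied with \(a=\sqrt{\vert {\tilde{\mathcal {I}}}\vert /(\mu \,m_1m_2)}\) and \(b=\sqrt{2\mu \,m_1m_2\,r}\;{\mathbb {E}}\left( \Vert \Sigma _R\Vert \right) \):

$$\begin{aligned} \sqrt{2r\vert {\tilde{\mathcal {I}}}\vert }\,{\mathbb {E}}\left( \Vert \Sigma _R\Vert \right) =ab\le \dfrac{a^{2}+b^{2}}{2}=\dfrac{\vert {\tilde{\mathcal {I}}}\vert }{2\mu \,m_1m_2}+\mu \,m_1m_2\,r\,\left( {\mathbb {E}}\left( \Vert \Sigma _R\Vert \right) \right) ^{2}, \end{aligned}$$

which is slightly stronger than the version used above.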
Lemma 12
Assume that \(\lambda _2\ge 4\left( \mathcal {R}^{*}(\Sigma )+2{\mathbf {a}}\mathcal {R}^{*}(W)\right) \). Then, we have
Proof
Let \(\partial \Vert \cdot \Vert _{*}\), and \(\partial {\mathcal {R}}\) denote the subdifferentials of \(\Vert \cdot \Vert _{*}\) and of \({\mathcal {R}}\), respectively. By the standard condition for optimality over a convex set (see [2, Chapter 4, Section 2, Corollary 6]), we have
for all feasible pairs (L, S). In particular, for \(({\hat{L}},S_0)\) we obtain
which implies
Using the elementary inequality \(2ab\le a^2+b^2\) and the bound \(\Vert \Delta L\Vert _{\infty }\le 2{\mathbf {a}}\) we find
Combining the last two displays we get
By Lemma 18,
where \(W=\frac{1}{N}\sum _{i\in \Omega }X_{i}\). On the other hand, the convexity of \(\mathcal {R}(\cdot )\) and the definition of subdifferential imply
Plugging (43) and (44) in (42) we obtain
Next, the decomposability of \(\mathcal {R}(\cdot )\), the identity \((S_0)_{\mathcal {I}}=0\) and the triangle inequality yield
Since \(\lambda _2\ge 4\left( 2{\mathbf {a}}\mathcal {R}^{*}(W)+\mathcal {R}^{*}( \Sigma )\right) \) the last two displays imply
Thus,
Since we assume that all unobserved entries of \(S_0\) are zero, we have \((S_0)_{\tilde{\mathcal {I}}}=(S_0)_{\tilde{\Omega }}\). On the other hand, \({\hat{S}}_{\tilde{\mathcal {I}}}={\hat{S}}_{\tilde{\Omega }}\) since \(\mathcal {R}(\cdot )\) is a monotonic norm. Indeed, adding to S a non-zero element on the non-observed part increases \(\mathcal {R}(S)\) but does not modify \(\frac{1}{N}\sum ^{N}_{i=1} \left( Y_i-\left\langle X_i,L+S\right\rangle \right) ^{2}\). To conclude, we have \(\Delta S_{{\tilde{\mathcal {I}}}}=\Delta S_{{\tilde{\Omega }}}\), which, together with (45), implies the lemma. \(\square \)
Lemma 13
Suppose that \(\lambda _1\ge 4\Vert \Sigma \Vert \) and \(\lambda _2\ge 4\mathcal {R}^{*}(\Sigma )\). Then,
Proof
Using (41) for \((L,S)=(L_0,S_0)\) we obtain
The convexity of \(\Vert \cdot \Vert _{*}\) and of \(\mathcal {R}(\cdot )\) and the definition of the subdifferential imply
Together with (46), this yields
Using the conditions \(\lambda _1\ge 4\Vert \Sigma \Vert \), \(\lambda _2\ge 4\mathcal {R}^{*}(\Sigma )\), the triangle inequality and (28) we get
Since we assume that all unobserved entries of \(S_0\) are zero, we obtain \(\mathcal {R}(S_0)\le {\mathbf {a}}\mathcal {R}({\mathbf {Id}}_{\tilde{\Omega }})\). Using this inequality in the last display proves the lemma. \(\square \)
Lemma 14
Let \(n>m_1\) and \(\lambda _2\ge 4\left( \mathcal {R}^{*}(\Sigma )+2{\mathbf {a}}\mathcal {R}^{*}(W)\right) \). Suppose that the distribution \(\Pi \) on \({\mathcal {X}}'\) satisfies Assumptions 1 and 2. Let \(\left\| S_0\right\| _{\infty }\le {\mathbf {a}}\) for some constant \({\mathbf {a}}\) and let Assumption 3 be satisfied. Then, with probability at least \(1-2.5\,d^{-1}\),
and
Proof
Using the inequality \({\mathcal {F}}({\hat{L}},{\hat{S}})\le {\mathcal {F}}({\hat{L}},S_0)\) and (1) we obtain
which implies
From Lemma 18 and the duality between \(\mathcal {R}\) and \(\mathcal {R}^{*}\) we obtain
Since here \(\Delta S_{\mathcal {I}}=-{\hat{S}}_{\mathcal {I}}\) and \(\lambda _2\ge 4\left( \mathcal {R}^{*}(\Sigma )+2{\mathbf {a}}\mathcal {R}^{*}(W)\right) \) it follows that
Now, Lemma 12 and the bound \(\Vert \Delta S\Vert _{\infty }\le 2{\mathbf {a}}\) imply that, on the event \({\mathcal {U}}\) defined in (26),
Thus, (48) is proved. To prove (47), consider the following two cases.
Case I \(\Vert \Delta S\Vert ^{2}_{L_{2}(\Pi )}< 4{\mathbf {a}}^{2}\sqrt{\frac{64\log (d)}{\log (6/5)\,n}}\). Then (47) holds trivially.
Case II \(\Vert \Delta S\Vert ^{2}_{L_{2}(\Pi )}\ge 4{\mathbf {a}}^{2}\sqrt{\frac{64\log (d)}{\log (6/5)\,n}}\). Then inequality (50) and the bound \(\Vert \Delta S\Vert _{\infty }\le 2{\mathbf {a}}\) imply that, on the event \({\mathcal {U}}\),
where, for any \(\delta >0\), the set \({\mathcal {C}}(\delta )\) is defined as:
Thus, we can apply Lemma 15(i) below. In view of this lemma, the inequalities (49), (27), \(\Vert \Delta L\Vert _{\infty }\le 2{\mathbf {a}}\) and \(\mathcal {R}\left( S_0\right) \le {\mathbf {a}}\mathcal {R}({\mathbf {Id}}_{\tilde{\mathcal {I}}})\) imply that (47) holds with probability at least \(1-2.5\,d^{-1}\). \(\square \)
Lemma 15
Let the distribution \(\Pi \) on \({\mathcal {X}}'\) satisfy Assumptions 1 and 2. Let \(\delta , \delta _1, \delta _2, \tau \), and \(\kappa \) be positive constants. Then, the following properties hold.
(i)
With probability at least \(1-\dfrac{2}{d}\),
$$\begin{aligned} \frac{1}{n}\sum _{i\in \Omega } \left\langle X_i,S\right\rangle ^{2}\ge \dfrac{1}{2}\Vert S\Vert _{L_2(\Pi )}^{2}-8\delta {\mathbb {E}}( \mathcal {R}^{*}(\Sigma _R)) \end{aligned}$$

for any \(S\in {\mathcal {C}}(\delta )\).
(ii)
With probability at least \(1-\dfrac{2}{d}\),
$$\begin{aligned} \frac{1}{n}\sum _{i\in \Omega } \left\langle X_i,L+S\right\rangle ^{2}\ge & {} \dfrac{1}{2}\Vert L+S\Vert _{L_2(\Pi )}^{2}- \{360\mu \,\vert \mathcal {I}\vert \,\tau \left( {\mathbb {E}}\left( \Vert \Sigma _R\Vert \right) \right) ^{2}\\&+4\delta _1^{2}+8\delta _2\,{\mathbb {E}}( \mathcal {R}^{*}(\Sigma _R))+8\kappa {\mathbb {E}}\left( \Vert \Sigma _R\Vert \right) \} \end{aligned}$$

for any pair \((L,S)\in {\mathcal {D}}(\tau ,\kappa )\cap \{{\mathbb {R}}^{m_1\times m_2}\times {\mathcal {B}}(\delta _1,\delta _2)\}\).
Proof
We give a unified proof of (i) and (ii). Let \(A=S\) for (i) and \(A=L+S\) for (ii). Set
and
To prove the lemma it is enough to show that the probability of the random event
is smaller than 2 / d. In order to estimate the probability of \({\mathcal {B}}\), we use a standard peeling argument. Set \(\nu =\sqrt{\dfrac{64\,\log (d)}{\log \left( 6/5\right) \,n}}\) and \(\alpha =\dfrac{6}{5}\). For \(l\in {\mathbb {N}}\), define
If the event \({\mathcal {B}}\) holds, there exist \(l\in {\mathbb {N}}\) and a matrix \(A\in {\mathcal {C}}\cap S_l\) such that
For each \(l\in {\mathbb {N}}\), consider the random event
where
Note that \(A\in S_l\) implies that \(A\in {\mathcal {C}}'(\alpha ^{l}\nu )\). This and (52) grant the inclusion \({\mathcal {B}}\subset \cup _{l=1}^\infty \,{\mathcal {B}}_l\). By Lemma 16, \({\mathbb {P}}\left( {\mathcal {B}}_l\right) \le \exp (-c_5\,n\,\alpha ^{2l}\nu ^{2})\) where \(c_5=1/128\). Using the union bound we find
where we have used the inequality \(e^{x}\ge x\). We finally obtain, for \(\nu =\sqrt{\dfrac{64\,\log (d)}{\log \left( 6/5\right) \,n}}\),
\(\square \)
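To make the last step of this proof explicit (a reconstruction consistent with the constants above): the inequality \(e^{x}\ge x\) applied to \(x=2l\log \alpha \) gives \(\alpha ^{2l}\ge 2l\log \alpha \), so that

$$\begin{aligned} \sum _{l=1}^{\infty }{\mathbb {P}}\left( {\mathcal {B}}_l\right) \le \sum _{l=1}^{\infty }\exp \left( -c_5\,n\,\alpha ^{2l}\nu ^{2}\right) \le \sum _{l=1}^{\infty }\exp \left( -2c_5\,n\,\nu ^{2}\,l\log \alpha \right) =\sum _{l=1}^{\infty }d^{-l}=\dfrac{1}{d-1}\le \dfrac{2}{d}, \end{aligned}$$

since \(2c_5\,n\,\nu ^{2}\log \alpha =2\cdot \frac{1}{128}\cdot \frac{64\log d}{\log (6/5)}\cdot \log \frac{6}{5}=\log d\), and \(1/(d-1)\le 2/d\) for \(d\ge 2\).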
Let
Lemma 16
Let the distribution \(\Pi \) on \({\mathcal {X}}'\) satisfy Assumptions 1 and 2. Then,
where \(c_5=\dfrac{1}{128}\).
Proof
We follow a standard approach: first we show that \(Z_T\) concentrates around its expectation and then we bound the expectation from above. Since \(\left\| A\right\| _{\infty }\le 1\) for all \(A\in {\mathcal {C}}'(T)\), we have \(\left| \left\langle X_i,A\right\rangle \right| \le 1\). We first use a Talagrand-type concentration inequality (cf. [4, Theorem 14.2]), which implies that
where \(c_5=\dfrac{1}{128}\). Next, we bound the expectation \({\mathbb {E}}\left( Z_T\right) \). By a standard symmetrization argument (see e.g. [19, Theorem 2.1]) we obtain
where \(\{\epsilon _i\}_{i=1}^{n}\) is an i.i.d. Rademacher sequence. Then, the contraction inequality (see e.g. [19]) yields
where \(\Sigma _R=\dfrac{1}{n} {\sum \nolimits ^{n}_{i=1}} \epsilon _i X_i\). Now, to obtain a bound on \({\mathbb {E}}({\sup }_{A\in {\mathcal {C}}'(T)}\left| \left\langle \Sigma _R,A\right\rangle \right| )\) we will consider separately the cases \({\mathcal {C}}={\mathcal {C}}(\delta )\) and \({\mathcal {C}}= {\mathcal {D}}(\tau ,\kappa )\cap \{{\mathbb {R}}^{m_1\times m_2}\times {\mathcal {B}}(\delta _1,\delta _2)\}\).
Case I \(A\in {\mathcal {C}}(\delta )\) and \(\left\| A\right\| _{L_2(\Pi )}^{2}\le T\). By the definition of \({\mathcal {C}}(\delta )\) we have \(\mathcal {R}( A) \le \delta .\) Thus, by the duality between \(\mathcal {R}\) and \(\mathcal {R}^{*}\),
This and the concentration inequality (53) imply
with \(c_5=\dfrac{1}{128}\) and \({\mathcal {E}}=8\delta \,{\mathbb {E}}\left( \mathcal {R}^{*}(\Sigma _R)\right) \) as stated.
Case II \(A=L+S\) where \((L,S)\in {\mathcal {D}}(\tau ,\kappa )\), \(S\in {\mathcal {B}}(\delta _1,\delta _2)\), and \(\Vert L+S\Vert ^{2}_{L_2(\Pi )}\le T\). Then, by the definition of \({\mathcal {B}}(\delta _1,\delta _2)\), we have \(\mathcal {R}(S)\le \delta _2.\) On the other hand, the definition of \({\mathcal {D}}(\tau ,\kappa )\) yields
and
The last two inequalities imply
Therefore we can write
Combining this bound with the following elementary inequalities:
and using the concentration bound (53) we obtain
with \(c_5=\dfrac{1}{128}\) and
as stated. \(\square \)
1.2 A.2: Proof of Corollary 7
With \(\lambda _1\) and \(\lambda _2\) given by (16) we obtain
Appendix B: Proof of Theorems 2 and 3
Note that the assumption \(\ae \le 1+s/m_2\) implies that
Assume w.l.o.g. that \(m_1\ge m_2\). For a \(\gamma \le 1\), define
and consider the associated set of block matrices
where O denotes the \(m_1\times (m_2-r\lfloor m_2/(2r) \rfloor )\) zero matrix, and \(\lfloor x \rfloor \) is the integer part of x.
We define similarly the set of matrices
and
where \({\tilde{O}}\) is the \(m_1\times (m_2-s )\) zero matrix. We now set
Remark 2
In the case \(m_1< m_2\), we only need to change the construction of the low rank component of the test set. We first introduce a matrix \({\tilde{L}}= \left( \begin{array}{c|c}{\bar{L}}&{}O\\ \end{array}\right) \in {\mathbb {R}}^{r \times m_2}\) where \({\bar{L}} \in {\mathbb {R}}^{r \times (m_2/2)}\) with entries in \(\{0, \gamma (\sigma \wedge {\mathbf {a}}) (\frac{ rM}{n} )^{1/2}\}\) and then we replicate this matrix to obtain a block matrix L of size \(m_1 \times m_2\)
By construction, any element of \({\mathcal {A}}\) as well as the difference of any two elements of \({\mathcal {A}}\) can be decomposed into a low rank component L of rank at most r and a group sparse component S with at most s nonzero columns. In addition, the entries of any matrix in \({\mathcal {A}}\) take values in \([0,{\mathbf {a}}]\). Thus, \({\mathcal {A}}\subset {\mathcal {A}}_{GS}(r,s,{\mathbf {a}})\).
We first establish a lower bound of the order rM / n. Let \({\tilde{\mathcal {A}}}\subset {\mathcal {A}}\) be such that for any \(A=L+S\in {\tilde{\mathcal {A}}}\) we have \(S={\mathbf {0}}\). The Varshamov–Gilbert bound (cf. Lemma 2.9 in [25]) guarantees the existence of a subset \({\mathcal {A}} ^{0}\subset {\tilde{\mathcal {A}}}\) with cardinality \({\mathrm {Card}}({\mathcal {A}} ^0) \ge 2^{(rM)/8}+1\) containing the zero \(m_{1}\times m_2\) matrix \(\mathbf{0}\) and such that, for any two distinct elements \(A_1\) and \(A_2\) of \({\mathcal {A}} ^{0}\),
Since \(\xi _i\sim {\mathcal {N}}(0,\sigma ^2)\) we get that, for any \(A\in {\mathcal {A}} ^0\), the Kullback–Leibler divergence \(K\big ({{\mathbb {P}}}_{\mathbf{0}},{{\mathbb {P}}}_{A}\big )\) between \({{\mathbb {P}}}_{\mathbf{0}}\) and \({{\mathbb {P}}}_{A}\) satisfies
where we have used Assumption 9. From (57) we deduce that the condition
is satisfied if \(\gamma >0\) is chosen as a sufficiently small numerical constant. In view of (56) and (58), the application of Theorem 2.5 in [25] implies
for some absolute constant \(\beta \in (0,1)\).
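The Varshamov–Gilbert guarantee used above (Lemma 2.9 in [25]) can be illustrated at toy scale: a greedy search over binary words of length \(m=16\) quickly produces \(2^{m/8}+1=5\) words with pairwise Hamming distance at least \(m/8\). The code below only illustrates the combinatorial statement; the function name and sizes are arbitrary choices for the demonstration.

```python
from itertools import product

def greedy_packing(m, min_dist, target):
    """Greedily collect binary words of length m whose pairwise
    Hamming distance is at least min_dist, stopping at `target` words."""
    codes = []
    for word in product((0, 1), repeat=m):
        if all(sum(a != b for a, b in zip(word, c)) >= min_dist for c in codes):
            codes.append(word)
            if len(codes) == target:
                break
    return codes

m = 16
codes = greedy_packing(m, min_dist=m / 8, target=2 ** (m // 8) + 1)
```

The lemma guarantees that such a set always exists; the greedy search merely exhibits one for a small m.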
We now prove the lower bound relative to the corruptions. Let \({\bar{\mathcal {A}}}\subset {\mathcal {A}}\) be such that for any \(A=L+S\in {\bar{\mathcal {A}}}\) we have \(L={\mathbf {0}}\). The Varshamov–Gilbert bound (cf. Lemma 2.9 in [25]) guarantees the existence of a subset \({\mathcal {A}} ^0\subset {\bar{\mathcal {A}}}\) with cardinality \({\mathrm {Card}}({\mathcal {A}} ^0) \ge 2^{(sm_1)/8}+1\) containing the zero \(m_1\times m_2\) matrix \(\mathbf{0}\) and such that, for any two distinct elements \(A_1\) and \(A_2\) of \({\mathcal {A}} ^0\),
For any \(A\in {\mathcal {A}} ^0\), the Kullback–Leibler divergence between \({{\mathbb {P}}}_{\mathbf{0}}\) and \({{\mathbb {P}}}_{A}\) satisfies
which implies that condition (58) is satisfied if \(\gamma >0\) is chosen small enough. Thus, applying Theorem 2.5 in [25] we get
for some absolute constant \(\beta \in (0,1)\). Theorem 2 follows from inequalities (55), (59) and (61).
The proof of Theorem 3 follows the same lines as that of Theorem 2. The only difference is that we replace \({\tilde{\mathcal {S}}}\) by the following set
We omit further details here.
Appendix C: Proof of Lemma 6
Part (i) of Lemma 6 is proved in Lemmas 5 and 6 in [18].
Proof of (ii)
For the sake of brevity, we set \(X_i(j,k) = \langle X_i, e_j(m_1)e_k(m_2)^{\top }\rangle \). By definition of \(\Sigma \) and \(\Vert \cdot \Vert _{2,\infty }\), we have
For any fixed k, we have
where \(\Xi = (\xi _1,\ldots ,\xi _n)^\top \) and \(A_k\in {\mathbb {R}}^{|\Omega |\times |\Omega |}\) with entries
We freeze the \(X_i\) and apply the version of the Hanson–Wright inequality in [24] to get that there exists a numerical constant C such that, with probability at least \(1-e^{-t}\),
Next, we note that
where we have used the Cauchy–Schwarz inequality in the first line and the relation \(X^{2}_{i} (j,k)=X_{i} (j,k)\).
Note that \(Z_i(k):= \sum _{j= 1}^{m_1} X_i(j,k)\) follows a Bernoulli distribution with parameter \(\pi _{\cdot k}\) and consequently \(Z(k) = \sum _{i\in \Omega } Z_{i}(k)\) follows a Binomial distribution \(B(|\Omega |,\pi _{\cdot k})\). We apply Bernstein’s inequality (see, e.g., [4, page 486]) to get that, for any \(t>0\),
Consequently, we get with probability at least \(1-2e^{-t}\) that
and, using \(\Vert A_k\Vert \le \Vert A_k\Vert _{2}\), that
Note also that
Combining the last three displays with (63) we get, up to a rescaling of the constants, with probability at least \(1-e^{-t}\) that
Replacing t by \(t+\log m_2\) in the above display and using the union bound gives that, with probability at least \(1-e^{-t}\),
Assuming that \(\log m_2 \ge 1\) we get with probability at least \(1-e^{-t}\) that
Using (14), we get that there exists a numerical constant \(C>0\) such that, with probability at least \(1 -e^{-t}\),
Finally, we use Lemma 17 to obtain the required bound on \({\mathbb {E}}\Vert \Sigma \Vert _{2,\infty }\).
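The Bernstein step for the Binomial variable Z(k) in the proof above can be sanity-checked by simulation. The parameters n, p and t below are illustrative assumptions, and the bound used is the standard one-sided form \({\mathbb {P}}(Z\ge np+t)\le \exp \left( -\frac{t^{2}}{2(np(1-p)+t/3)}\right) \) for a sum of bounded variables.

```python
import numpy as np

# Empirical upper tail of Z ~ Binomial(n, p) versus the Bernstein bound
rng = np.random.default_rng(2)
n, p, t, trials = 1000, 0.05, 20.0, 20000  # illustrative values only
z = rng.binomial(n, p, size=trials)
empirical_tail = float(np.mean(z >= n * p + t))
bernstein_bound = float(np.exp(-t ** 2 / (2 * (n * p * (1 - p) + t / 3))))
```

As expected, the empirical tail falls well below the Bernstein bound, which is conservative at this deviation level.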
Proof of (iii)
We follow the same lines as in the proof of part (ii) above. The only differences are that \(\xi _i\) is replaced by \(\epsilon _i\), \(\sigma \) by 1, and N by n.
Proof of (iv)
We need to establish the bound on
For any fixed k, we have
The first term on the right hand side of the last display can be written as
Using the concentration bound on Z(k) in the proof of part (ii) above, we get that, with probability at least \(1-e^{-t}\),
Next, the random variable
is a U-statistic of order 2. We use now a Bernstein-type concentration inequality for U-statistics. To this end, we set \(X_i(\cdot ,k) = (X_i(1,k),\ldots , X_i(m_1,k))^\top \) and
Let \(e_0(m_1) = {\mathbf {0}}_{m_1}\) be the zero vector in \({\mathbb {R}}^{m_1}\). Note that \(X_i(\cdot ,k)\) takes values in \(\{ e_j(m_1),\, 0\le j \le m_1\}\). For any function \(g\,:\, \{ e_j(m_1),\, 0\le j \le m_1\}^2 \rightarrow {\mathbb {R}}\), we set \(\Vert g\Vert _{L^{\infty }} = \max _{0\le j_1,j_2\le m_1}|g(e_{j_1}(m_1),e_{j_2}(m_1))|\).
We will need the following quantities to control the tail behavior of \(U_2\)
where \(X_i'(\cdot ,k)\) are independent replications of \(X_i(\cdot ,k)\) and f, \(g\,:\, {\mathbb {R}}^{m_1} \rightarrow {\mathbb {R}}\).
We now evaluate the above quantities in our particular setting. It is not hard to see that \({\mathbf {A}} =\max \{ \pi _{\cdot k}^{(2)} \,,\,1 - \pi _{\cdot k}^{(2)} \}\le 1\) where \(\pi _{\cdot k}^{(2)}= \sum _{j=1}^{m_1}\pi _{jk}^2 \). We also have that
where we have used in the second line that \(\langle X_{i_1}(\cdot ,k),X_{i_2}'(\cdot ,k)\rangle ^2 \!=\! \langle X_{i_1}(\cdot ,k),X_{i_2}'(\cdot ,k)\rangle \) since \(\langle X_{i_1}(\cdot ,k),X_{i_2}'(\cdot ,k)\rangle \) takes values in \(\{0,1\}\).
We now derive a bound on \({\mathbf {D}}\). By Jensen’s inequality, we get
where we used the bound \({\mathbb {E}} [\sum _{i}f_{i}^2(X_i(\cdot ,k))]\le 1\). Thus, the Cauchy–Schwarz inequality implies
where we have used the fact that \({\mathbb {E}} [h^2(X_{i_1},X_{i_2}')] \le \sum _{j=1}^{m_1}\pi _{jk}^2\) following from an argument similar to that used to bound \({\mathbf {C}}\).
Finally, we get a bound on \({\mathbf {B}}\). Set \(\pi _{0,k} = 1 - \pi _{\cdot k}\). Note first that
By symmetry, we obtain the same bound on \(\Vert \sum _{i_2} {\mathbb {E}} h^2(\cdot ,X_{i_2}(\cdot ,k)) \Vert _{L^\infty } \). Thus we have
Set now \(U_2 = \sum _{i_1\ne i_2}h(X_{i_1}(\cdot ,k), X_{i_2}(\cdot ,k))\). We apply a decoupling argument (see, for instance, Theorem 3.4.1, page 125 in [11]) to get that there exists a constant \(C>0\), such that for any \(u>0\)
where \(X_i'(\cdot ,k)\) is independent of \(X_i(\cdot ,k)\) and has the same distribution as \(X_i(\cdot ,k)\). Next, Theorem 3.3 in [13] gives that, for any \(u>0\),
for some absolute constant \(C>0\). Combining the last display with our bounds on \({\mathbf {A}},{\mathbf {B}},{\mathbf {C}},{\mathbf {D}}\), we get that for any \(t>0\), with probability at least \(1- 2e^{-t}\),
where \(C>0\) is a numerical constant. Combining the last display with (64) we get that, for any \(t>0\), with probability at least \(1-3e^{-t}\),
Set \(\pi _{\max } = \max _{1\le k \le m_2}\{\pi _{\cdot k}\}\) and \(\pi ^{(2)}_{\max } = \max _{1\le k \le m_2}\{\pi _{\cdot k}^{(2)}\}\). Using the union bound and up to a rescaling of the constants, we get that, with probability at least \(1 - e^{-t}\),
Recall that \(\vert \Omega \vert =n\) and \(\ae =N/n\). Assumption 5 and the fact that \(n\le \vert \mathcal {I}\vert \) imply that there exists a numerical constant \(C>0\) such that, with probability at least \(1 -e^{-t}\),
where we have used that \(\pi _{jk} \le \pi _{\cdot k} \le \sqrt{2}\gamma /m_2\). Finally, the bound on the expectation \({\mathbb {E}} \Vert W\Vert _{2,\infty }\) follows from this result and Lemma 17.
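The replacement of t by \(t+\log m_2\) (used in the union bound here, and similarly with \(\log m_2\) in part (ii) and \(\log (m_1m_2)\) in Appendix D) is the standard device for turning a per-column bound into a uniform one. Writing \(E_k\) for the event that the single-column bound holds (the notation \(E_k\) is ours, for illustration), if each \(E_k\) fails with probability at most \(e^{-(t+\log m_2)}\), then

```latex
\mathbb{P}\Bigl(\bigcup_{k=1}^{m_2} E_k^c\Bigr)
  \;\le\; \sum_{k=1}^{m_2} \mathbb{P}\bigl(E_k^c\bigr)
  \;\le\; m_2\, e^{-(t+\log m_2)}
  \;=\; e^{-t},
```

so uniformity over the \(m_2\) columns costs only an additive \(\log m_2\) inside the deviation term.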
Appendix D: Proof of Lemma 10
With the notation \(X_i(j,k) = \langle X_i, e_j(m_1)e_k(m_2)^{\top }\rangle \) we have
Under Assumption 3, the Orlicz norm \(\Vert \xi _i\Vert _{\psi _2}= \inf \{x>0: {\mathbb {E}}[\exp (\xi _i^2/x^2)] \le e\}\) satisfies \(\Vert \xi _i\Vert _{\psi _2} \le c\sigma \) for some numerical constant \(c>0\) and all i. This and the relation (see Lemma 5.5 in [26]; the statement actually appears as an intermediate step in the proof of that lemma)
imply that \(N^{-\ell }{\mathbb {E}}[|\xi _i|^\ell X_i^\ell (j,k)] = N^{-\ell } {\mathbb {E}}[X_i(j,k)]\, {\mathbb {E}}[|\xi _i|^\ell ]\le (\ell !/2) c^2v(c\sigma /N)^{\ell -2}\) for all \(\ell \ge 2\) and \(v=\frac{\sigma ^2\mu _1}{N^2 m_1m_2}\), where we have used the independence between \(\xi _i\) and \(X_i\), the relation \(X_i^\ell (j,k)=X_i(j,k)\), and Assumption 9. Thus, for any fixed (j, k), we have
and
Thus, we can apply Bernstein’s inequality (see, e.g., [4, page 486]), which yields
for any fixed (j, k). Replacing here t by \(t+\log (m_1m_2)\) and using the union bound we obtain
The bound on \({\mathbb {E}}[\Vert \Sigma \Vert _{\infty }]\) in the statement of Lemma 10 follows from this inequality and Lemma 17. The same argument proves the bounds on \(\Vert \Sigma _R\Vert _{\infty }\) and \({\mathbb {E}} \Vert \Sigma _R\Vert _{\infty }\) in the statement of Lemma 10. By a similar (and even somewhat simpler) argument, we also get that
while Assumption 9 implies that \(\Vert {\mathbb {E}}[W]\Vert _{\infty } \le \frac{\mu _1}{\ae m_1m_2}\).
Appendix E: Technical Lemmas
Lemma 17
Let Y be a non-negative random variable. Let there exist \(A\ge 0\), and \(a_j>0\), \(\alpha _j>0\) for \(1\le j\le m\), such that
Then
where \(\Gamma (\cdot )\) is the Gamma function.
Proof
Using the change of variable \( u = \sum _{j=1}^m a_j v^{\alpha _j} \) we get
\(\square \)
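Since the displays of Lemma 17 are not reproduced above, the following numerical check assumes the form suggested by the change of variable in the proof: if \({\mathbb {P}}(Y \ge A + \sum _{j=1}^m a_j t^{\alpha _j}) \le e^{-t}\) for all \(t>0\), then \({\mathbb {E}}[Y] \le A + \sum _{j=1}^m a_j \Gamma (\alpha _j + 1)\). Taking \(Y = A + \sum _j a_j E^{\alpha _j}\) with \(E\sim \mathrm {Exp}(1)\) makes the tail hypothesis hold with equality, so the empirical mean should sit at (not above) the bound:

```python
import math
import random

def expectation_bound(A, coeffs):
    """Lemma-17-style bound: A + sum_j a_j * Gamma(alpha_j + 1)."""
    return A + sum(a * math.gamma(alpha + 1) for a, alpha in coeffs)

def monte_carlo_mean(A, coeffs, trials=100000, seed=1):
    """Empirical mean of Y = A + sum_j a_j * E^alpha_j with E ~ Exp(1)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        e = rng.expovariate(1.0)
        total += A + sum(a * e**alpha for a, alpha in coeffs)
    return total / trials

A, coeffs = 0.5, [(1.0, 0.5), (1.0, 1.0)]  # illustrative A and (a_j, alpha_j)
emp, bnd = monte_carlo_mean(A, coeffs), expectation_bound(A, coeffs)
assert abs(emp - bnd) < 0.05  # equality case, up to Monte Carlo error
```

Here the bound evaluates to \(A + \Gamma (3/2) + \Gamma (2) \approx 2.386\), matching the empirical mean up to sampling noise.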
Lemma 18
Assume that \(\mathcal {R}\) is an absolute norm. Then
where \(W=\frac{1}{N}\sum _{i\in \Omega }X_{i}\).
Proof
In view of the definition of \(\mathcal {R}^{*}\),
where we have used the inequalities \(\left\langle X_i,\Delta L\right\rangle \le \Vert \Delta L\Vert _{\infty }\le 2{\mathbf {a}}\), and the fact that \(\mathcal {R}\) is an absolute norm. \(\square \)
Klopp, O., Lounici, K. & Tsybakov, A.B. Robust matrix completion. Probab. Theory Relat. Fields 169, 523–564 (2017). https://doi.org/10.1007/s00440-016-0736-y