Low-rank matrix completion using nuclear norm minimization and facial reduction

Journal of Global Optimization

Abstract

Minimization of the nuclear norm, NNM, is often used as a surrogate (convex relaxation) for finding the minimum rank completion (recovery) of a partial matrix. The minimum nuclear norm problem can be solved as a trace minimization semidefinite programming problem, SDP. Interior point algorithms are the current methods of choice for this class of problems, which makes it difficult to solve large-scale problems, to exploit sparsity, and to obtain high-accuracy solutions. The SDP and its dual are regular in the sense that they both satisfy strict feasibility. In this paper we take advantage of the structure of low-rank solutions in the SDP embedding. We show that even though strict feasibility holds, the facial reduction framework used for problems where strict feasibility fails can be successfully applied to generically obtain a proper face that contains all minimum low-rank solutions for the original completion problem. This can dramatically reduce the size of the final NNM problem, while simultaneously guaranteeing a low-rank solution; it is comparable to identifying part of the active set in general nonlinear programming problems. In the case that the graph of the sampled matrix has sufficient bicliques, we obtain a low-rank solution independent of any nuclear norm minimization. We include numerical tests for both exact and noisy cases, and illustrate that our approach yields lower ranks and higher accuracy than the NNM approach alone.
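To make the trace-minimization form concrete, here is a minimal CVX sketch of the NNM problem using the standard SDP embedding of the nuclear norm (see, e.g., [20]). This is not the authors' code; the data names M (the m-by-n partial matrix) and Omega (a logical mask of the sampled entries) are hypothetical.

    % Minimal CVX sketch of NNM via the SDP embedding (assumed data: M, Omega).
    [m, n] = size(M);
    cvx_begin sdp
        variable Z(m, n)
        variable W1(m, m) symmetric
        variable W2(n, n) symmetric
        % (trace(W1) + trace(W2))/2 equals the nuclear norm of Z at the optimum
        minimize( 0.5 * (trace(W1) + trace(W2)) )
        subject to
            [W1, Z; Z', W2] >= 0;    % PSD block matrix: the SDP embedding
            Z(Omega) == M(Omega);    % agree with the sampled entries exactly
    cvx_end

Equivalently one can write minimize( norm_nuc(Z) ) and let CVX construct the embedding; the explicit form above exposes the \((m+n)\times(m+n)\) semidefinite matrix whose low-rank structure the facial reduction exploits.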


Notes

  1. Note that the linear mapping \({\mathcal {A}}= {\mathcal {P}}_{\hat{E}}: \mathbb {R}^{m \times n}\rightarrow \mathbb {R}^{|\hat{E}|}\) corresponding to sampling is surjective, since \({\mathcal {A}}(M)_{ij} = \mathrm{trace}(E_{ij} M)\) for \(ij \in \hat{E}\), where \(E_{ij}\) is the ij-unit matrix (see the sketch following these notes).

  2. For G we have the additional trivial cliques of size k, \(C=\{i_1,\ldots ,i_k\}\subset \{1,\ldots ,m\}\) and \(C=\{j_1,\ldots ,j_k\}\subset \{m+1,\ldots ,m+n\}\), which are not of interest to our algorithm.

  3. We use a bar | to emphasize the boundary between the row indices and the column indices.

  4. The authors thank Dmitriy Drusvyatskiy for the simplification of our original proof of Lemma 3.6. Further discussions are given in [7].

  5. The MATLAB command null was used to find an orthonormal basis for the nullspace. However, this requires a (dense) singular value decomposition and fails for very large problems; in that case we used the Lanczos approach via eigs (see the sketch following these notes).

  6. The MATLAB economy-size factorization \([\sim ,R,E]=\mathrm{qr}(\Phi ,0)\) finds the list of constraints giving a well-conditioned representation, where \(\Phi \) denotes the matrix of constraints.

  7. The density p in the tables is reported as “mean(p)” because the realized density is usually not the same as the one set when generating the problem; we report the mean of the realized densities over the five instances.

  8. The experiments in Table 4 with rank 6 and in Table 5 with rank 8 were run on a MacBookPro12,1 (Intel Core i5, 2.7 GHz, two cores, 8 GB RAM), with the same MATLAB version, R2016a.

  9. We used CVX version 2.1 with the MOSEK solver; see [1].
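The MATLAB fragment below, a rough sketch rather than the authors' code, illustrates the numerical devices in notes 1, 5 and 6; the matrices Y and Phi, the nullity k, and the tolerance tol are hypothetical stand-ins. For note 9, the solver is selected in CVX with the single command cvx_solver mosek before cvx_begin.

    % Note 1: the sampling map A = P_E as a linear operator (Omega a logical mask);
    % the ij entry of A(M) is the inner product <E_ij, M> = M_ij.
    Amap = @(M) M(Omega);               % surjective onto R^{|E|}

    % Note 5: orthonormal basis for the nullspace of a symmetric matrix Y.
    V = null(full(Y));                  % SVD-based; too expensive for huge Y
    k = 3;                              % assumed (known) nullity
    [V, ~] = eigs(sparse(Y), k, 'sa');  % Lanczos: eigenvectors of the k smallest eigenvalues

    % Note 6: a well-conditioned subset of constraints via pivoted economy-size QR.
    [~, R, E] = qr(Phi, 0);             % E is the column permutation vector
    tol  = 1e-10;                       % assumed tolerance
    r    = nnz(abs(diag(R)) > tol*abs(R(1,1)));
    keep = E(1:r);                      % indices of the constraints retained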

References

  1. Andersen, E.D., Andersen, K.D.: The Mosek interior point optimizer for linear programming: an implementation of the homogeneous algorithm. In: High performance optimization, vol. 33 of Applied Optimization, pp. 197–232. Kluwer Academic Publishers, Dordrecht (2000)

  2. Blanchard, J.D., Tanner, J., Wei, K.: CGIHT: conjugate gradient iterative hard thresholding for compressed sensing and matrix completion. Inf. Inference 4(4), 289–327 (2015)

  3. Candès, E.J., Recht, B.: Exact matrix completion via convex optimization. Found. Comput. Math. 9(6), 717–772 (2009)

  4. Candès, E.J., Tao, T.: The power of convex relaxation: near-optimal matrix completion. IEEE Trans. Inf. Theory 56(5), 2053–2080 (2010)

  5. Drusvyatskiy, D., Krislock, N., Voronin, Y.-L.C., Wolkowicz, H.: Noisy Euclidean distance realization: robust facial reduction and the Pareto frontier. SIAM J. Optim. 27(4), 2301–2331 (2017). arXiv:1410.6852

  6. Drusvyatskiy, D., Pataki, G., Wolkowicz, H.: Coordinate shadows of semidefinite and Euclidean distance matrices. SIAM J. Optim. 25(2), 1160–1178 (2015)

  7. Drusvyatskiy, D., Wolkowicz, H.: The many faces of degeneracy in conic optimization. Found. Trends Optim. 3(2), 77–170 (2016). https://doi.org/10.1561/2400000011

  8. Eckart, C., Young, G.: The approximation of one matrix by another of lower rank. Psychometrika 1(3), 211–218 (1936)

  9. Fang, E.X., Liu, H., Toh, K.-C., Zhou, W.-X.: Max-norm optimization for robust matrix recovery. Technical report, Department of Operations Research and Financial Engineering, Princeton University, Princeton (2015)

  10. Fazel, M.: Matrix rank minimization with applications. Ph.D. thesis, Stanford University, Stanford, CA (2001)

  11. Fazel, M., Hindi, H., Boyd, S.P.: A rank minimization heuristic with application to minimum order system approximation. In: Proceedings of the American Control Conference, pp. 4734–4739 (2001)

  12. Gauvin, J.: A necessary and sufficient regularity condition to have bounded multipliers in nonconvex programming. Math. Program. 12(1), 136–138 (1977)

  13. Grant, M., Boyd, S.: CVX: Matlab software for disciplined convex programming, version 1.21 (2011). http://cvxr.com/cvx

  14. Hardt, M., Meka, R., Raghavendra, P., Weitz, B.: Computational limits for matrix completion. In: JMLR Proceedings, pp. 1–23. JMLR (2014)

  15. Király, F.J., Theran, L., Tomioka, R.: The algebraic combinatorial approach for low-rank matrix completion. J. Mach. Learn. Res. 16, 1391–1436 (2015)

  16. Krislock, N., Wolkowicz, H.: Explicit sensor network localization using semidefinite representations and facial reductions. SIAM J. Optim. 20(5), 2679–2708 (2010)

  17. Meka, R., Jain, P., Dhillon, I.S.: Guaranteed rank minimization via singular value projection. Technical report, Department of Computer Science, University of Texas at Austin, Austin (2009)

  18. Pataki, G.: Geometry of semidefinite programming. In: Wolkowicz, H., Saigal, R., Vandenberghe, L. (eds.) Handbook of Semidefinite Programming: Theory, Algorithms, and Applications. Kluwer Academic Publishers, Boston, MA (2000)

  19. Pilanci, M., Wainwright, M.J.: Randomized sketches of convex programs with sharp guarantees. IEEE Trans. Inf. Theory 61(9), 5096–5115 (2015)

  20. Recht, B., Fazel, M., Parrilo, P.: Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. SIAM Rev. 52(3), 471–501 (2010)

  21. Singer, A., Cucuringu, M.: Uniqueness of low-rank matrix completion by rigidity theory. SIAM J. Matrix Anal. Appl. 31(4), 1621–1641 (2009/10)

  22. Srebro, N.: Learning with matrix factorizations. Ph.D. thesis, Massachusetts Institute of Technology, Cambridge, MA (2004)

  23. Srebro, N., Shraibman, A.: Rank, trace-norm and max-norm. In: Learning theory, vol. 3559 of Lecture Notes in Computer Science, pp. 545–560. Springer, Berlin (2005)

  24. Tanner, J., Wei, K.: Low rank matrix completion by alternating steepest descent methods. Appl. Comput. Harmon. Anal. 40(2), 417–429 (2016)

  25. Wolkowicz, H.: Tutorial: facial reduction in cone optimization with applications to matrix completions. In: DIMACS Workshop on Distance Geometry: Theory and Applications (2016). Based on the survey paper [7], “The many faces of degeneracy in conic optimization” (with D. Drusvyatskiy)

Acknowledgements

The authors thank Nathan Krislock for his help with parts of the MATLAB coding and also thank Fei Wang for useful discussions related to the facial reduction steps. We also thank two anonymous referees for carefully reading the paper and helping us improve the details and the presentation.

Author information

Corresponding author

Correspondence to Henry Wolkowicz.

Additional information

Presented as part of a tutorial at the DIMACS Workshop on Distance Geometry: Theory and Applications, July 26–29, 2016 [25]. Shimeng Huang: research supported by the Undergraduate Student Research Awards Program of the Natural Sciences and Engineering Research Council of Canada. Henry Wolkowicz: research supported by the Natural Sciences and Engineering Research Council of Canada.

About this article

Cite this article

Huang, S., Wolkowicz, H. Low-rank matrix completion using nuclear norm minimization and facial reduction. J Glob Optim 72, 5–26 (2018). https://doi.org/10.1007/s10898-017-0590-1

