Robust Principal Component Analysis

  • René Vidal
  • Yi Ma
  • S. Shankar Sastry
Chapter
Part of the Interdisciplinary Applied Mathematics book series (IAM, volume 40)

Abstract

In the previous chapter, we considered the PCA problem under the assumption that all the sample points are drawn from the same statistical or geometric model: a low-dimensional subspace.

Keywords

Missing Entries · Matrix Completion · Principal Component Pursuit (PCP) · Alternating Direction Method of Multipliers (ADMM) · Robust PCA (RPCA)
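
The keywords name the convex program that the chapter builds on: Principal Component Pursuit (PCP) decomposes a data matrix D into a low-rank component L and a sparse error component S by solving min ||L||_* + λ||S||_1 subject to L + S = D, and ADMM is a standard way to solve it. The sketch below is a minimal illustration of that idea, not the book's reference implementation: the function names are ours, and the defaults λ = 1/√max(m, n) and the step parameter μ are common heuristics from the RPCA literature.

    import numpy as np

    def soft_threshold(X, tau):
        # Elementwise shrinkage: proximal operator of the l1 norm.
        return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

    def singular_value_threshold(X, tau):
        # Shrink the singular values: proximal operator of the nuclear norm.
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        return (U * soft_threshold(s, tau)) @ Vt

    def principal_component_pursuit(D, lam=None, mu=None, max_iter=500, tol=1e-7):
        # Split D into a low-rank part L and a sparse part S via ADMM.
        # lam and mu defaults are common heuristics, not tuned values.
        m, n = D.shape
        lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
        mu = mu if mu is not None else m * n / (4.0 * np.abs(D).sum())
        S = np.zeros_like(D, dtype=float)
        Y = np.zeros_like(D, dtype=float)
        norm_D = np.linalg.norm(D, 'fro')
        for _ in range(max_iter):
            L = singular_value_threshold(D - S + Y / mu, 1.0 / mu)
            S = soft_threshold(D - L + Y / mu, lam / mu)
            R = D - L - S                 # primal residual
            Y = Y + mu * R                # dual (multiplier) update
            if np.linalg.norm(R, 'fro') <= tol * norm_D:
                break
        return L, S

Calling principal_component_pursuit on a matrix formed as a low-rank term plus sparse corruptions recovers the two terms under suitable incoherence and sparsity conditions of the kind studied in this chapter.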

Copyright information

© Springer-Verlag New York 2016

Authors and Affiliations

  • René Vidal (1)
  • Yi Ma (2)
  • S. Shankar Sastry (3)
  1. Center for Imaging Science, Department of Biomedical Engineering, Johns Hopkins University, Baltimore, USA
  2. School of Information Science and Technology, ShanghaiTech University, Shanghai, China
  3. Department of Electrical Engineering and Computer Science, University of California, Berkeley, Berkeley, USA
