International Journal of Computer Vision, Volume 54, Issue 1–3, pp. 117–142

A Framework for Robust Subspace Learning

  • Fernando De la Torre
  • Michael J. Black

Abstract

Many computer vision, signal processing and statistical problems can be posed as problems of learning low-dimensional linear or multi-linear models. These models have been widely used for the representation of shape, appearance, motion, etc., in computer vision applications. Methods for learning linear models can be seen as a special case of subspace fitting. One drawback of previous learning methods is that they are based on least squares estimation techniques and hence fail to account for “outliers” which are common in realistic training sets. We review previous approaches for making linear learning methods robust to outliers and present a new method that uses an intra-sample outlier process to account for pixel outliers. We develop the theory of Robust Subspace Learning (RSL) for linear models within a continuous optimization framework based on robust M-estimation. The framework applies to a variety of linear learning problems in computer vision including eigen-analysis and structure from motion. Several synthetic and natural examples are used to develop and illustrate the theory and applications of robust subspace learning in computer vision.
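To make the idea concrete, the following is a minimal illustrative sketch (Python/NumPy) of this style of robust subspace fitting via iteratively reweighted least squares with a Geman-McClure error function and per-pixel weights. The function name, parameter values, SVD initialisation, and alternation scheme are assumptions for illustration only, not the paper's exact algorithm.

```python
import numpy as np

def robust_subspace(D, k, sigma=0.1, n_iters=30):
    """Fit a k-dimensional subspace to the columns of D (d x n) while
    downweighting outlying pixels with a Geman-McClure error function.
    Returns the basis B, coefficients C, robust mean mu, and weights W."""
    d, n = D.shape
    mu = D.mean(axis=1)
    # Initialise basis B and coefficients C with ordinary least-squares PCA (SVD).
    U, s, Vt = np.linalg.svd(D - mu[:, None], full_matrices=False)
    B = U[:, :k].copy()
    C = s[:k, None] * Vt[:k]
    ridge = 1e-8 * np.eye(k)  # small regulariser to keep the solves well posed
    for _ in range(n_iters):
        # Residuals of the current low-rank reconstruction.
        R = D - mu[:, None] - B @ C
        # Per-pixel weights from the Geman-McClure rho function:
        # w(r) is proportional to 1 / (r^2 + sigma^2)^2, so badly modelled
        # (outlying) pixels contribute little to the next least-squares fit.
        W = 1.0 / (R ** 2 + sigma ** 2) ** 2
        # Weighted least-squares update of the coefficients (one column per sample).
        for j in range(n):
            Bw = B * W[:, j][:, None]
            C[:, j] = np.linalg.solve(Bw.T @ B + ridge, Bw.T @ (D[:, j] - mu))
        # Weighted least-squares update of the basis (one row per pixel).
        for i in range(d):
            Cw = C * W[i, :]
            B[i, :] = np.linalg.solve(Cw @ C.T + ridge, Cw @ (D[i, :] - mu[i]))
        # Weighted (robust) mean of the data.
        mu = (W * (D - B @ C)).sum(axis=1) / W.sum(axis=1)
    return B, C, mu, W
```

Unlike a standard SVD-based fit, where every pixel contributes quadratically however badly it is modelled, the weight matrix W acts as a per-pixel (intra-sample) outlier process: corrupted pixels in a training image are downweighted while the rest of that image still informs the learned subspace.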

Keywords: principal component analysis · singular value decomposition · learning · robust statistics · subspace methods · structure from motion · robust PCA · robust SVD

Copyright information

© Kluwer Academic Publishers 2003

Authors and Affiliations

  • Fernando De la Torre (1)
  • Michael J. Black (2)
  1. Department of Communications and Signal Theory, La Salle School of Engineering, Universitat Ramon Llull, Barcelona, Spain
  2. Department of Computer Science, Brown University, Providence, USA
