Robust Regression

  • Dong Huang
  • Ricardo Silveira Cabral
  • Fernando De la Torre
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7575)

Abstract

Discriminative methods (e.g., kernel regression, SVM) have been extensively used to solve problems such as object recognition, image alignment and pose estimation from images. Regression methods typically map image features (X) to continuous (e.g., pose) or discrete (e.g., object category) values. A major drawback of existing regression methods is that samples are directly projected onto a subspace, and hence they fail to account for outliers, which are common in realistic training sets due to occlusion, specular reflections or noise. It is important to note that existing regression methods, and discriminative methods in general, assume the regressor variables X to be noise free. Because of this assumption, discriminative methods suffer a significant degradation in performance when gross outliers are present.
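
As a toy illustration of this sensitivity (a minimal numpy sketch under our own assumptions, not an experiment from the paper; names such as X_clean and w_true are illustrative only), corrupting a small fraction of the feature entries with gross outliers is enough to throw off an ordinary least-squares fit:

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic regression: y depends linearly on noise-free features X.
    n, d = 200, 5
    X_clean = rng.standard_normal((n, d))
    w_true = rng.standard_normal(d)
    y = X_clean @ w_true + 0.01 * rng.standard_normal(n)

    # Corrupt 5% of the feature entries with gross outliers
    # (mimicking occlusion or specular reflections).
    X_noisy = X_clean.copy()
    mask = rng.random((n, d)) < 0.05
    X_noisy[mask] += 20.0 * rng.standard_normal(mask.sum())

    # Ordinary least squares on clean vs. corrupted regressors.
    w_clean, *_ = np.linalg.lstsq(X_clean, y, rcond=None)
    w_noisy, *_ = np.linalg.lstsq(X_noisy, y, rcond=None)

    print("parameter error, clean X:    ", np.linalg.norm(w_clean - w_true))
    print("parameter error, corrupted X:", np.linalg.norm(w_noisy - w_true))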

Despite its obvious importance, the problem of robust discriminative learning has been relatively unexplored in computer vision. This paper develops the theory of Robust Regression (RR) and presents an effective convex approach that uses recent advances in rank minimization. The framework applies to a variety of problems in computer vision, including robust linear discriminant analysis, multi-label classification and head pose estimation from images. Several synthetic and real-world examples are used to illustrate the benefits of RR.
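
The convex objective itself is developed in the paper; as a rough, self-contained sketch of the rank-minimization idea only (a two-stage approximation of our own, not the paper's joint RR formulation; shrink, svd_shrink and rpca are hypothetical helper names, and the lam/mu settings follow common RPCA heuristics), one can split the corrupted regressors into a low-rank clean part plus sparse outliers and then regress on the clean part:

    import numpy as np

    def shrink(M, tau):
        # Soft-thresholding: proximal operator of the l1 norm.
        return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

    def svd_shrink(M, tau):
        # Singular-value thresholding: proximal operator of the nuclear norm.
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        return (U * shrink(s, tau)) @ Vt

    def rpca(X, n_iter=200):
        # Split X into low-rank L plus sparse S (inexact augmented Lagrangian).
        lam = 1.0 / np.sqrt(max(X.shape))               # common RPCA weight
        mu = 0.25 * X.size / (np.abs(X).sum() + 1e-12)  # common step heuristic
        L = np.zeros_like(X)
        S = np.zeros_like(X)
        Y = np.zeros_like(X)                            # dual variable
        for _ in range(n_iter):
            L = svd_shrink(X - S + Y / mu, 1.0 / mu)
            S = shrink(X - L + Y / mu, lam / mu)
            Y += mu * (X - L - S)
        return L, S

    # Low-rank clean regressors (rank r) hit by 5% gross outliers.
    rng = np.random.default_rng(0)
    n, d, r = 200, 50, 5
    X_clean = rng.standard_normal((n, r)) @ rng.standard_normal((r, d))
    w_true = rng.standard_normal(d)
    y = X_clean @ w_true + 0.01 * rng.standard_normal(n)
    X_noisy = X_clean.copy()
    mask = rng.random((n, d)) < 0.05
    X_noisy[mask] += 20.0 * rng.standard_normal(mask.sum())

    # Two-stage pipeline: clean the regressors, then do ordinary regression.
    L, S = rpca(X_noisy)
    w_naive, *_ = np.linalg.lstsq(X_noisy, y, rcond=None)
    w_robust, *_ = np.linalg.lstsq(L, y, rcond=None)
    print("prediction error, plain fit: ", np.linalg.norm(X_clean @ w_naive - y))
    print("prediction error, robust fit:", np.linalg.norm(X_clean @ w_robust - y))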

Keywords

Robust methods · Errors in variables · Intra-sample outliers

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Dong Huang¹
  • Ricardo Silveira Cabral¹
  • Fernando De la Torre¹

  1. Robotics Institute, Carnegie Mellon University, USA
