
A Robust and Efficient Doubly Regularized Metric Learning Approach

  • Meizhu Liu
  • Baba C. Vemuri
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7575)

Abstract

A proper distance metric is fundamental to many computer vision and pattern recognition applications such as classification, image retrieval, and face recognition. However, it is usually not clear which metric is appropriate for a specific application, so it is more reliable to learn a task-oriented metric. Over the years, many metric learning approaches have been reported in the literature. A typical one is to learn a Mahalanobis distance parameterized by a positive semidefinite (PSD) matrix M. An efficient way to estimate M is to treat it as a linear combination of rank-one matrices that can be learned using a boosting-type approach. However, such approaches have two main drawbacks. First, the weight change across the training samples may be non-smooth. Second, the learned rank-one matrices may be redundant. In this paper, we propose a doubly regularized metric learning algorithm, termed DRMetric, which imposes two regularizations on the conventional metric learning method. First, a regularization is applied to the weights of the training examples, which prevents unstable changes of the weights and keeps outlier examples from being weighted too heavily. Second, a regularization is applied to the rank-one matrices to make them independent, which greatly reduces their redundancy. We present experiments demonstrating the performance of the proposed method on a variety of datasets for various applications.
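To make the parameterization described above concrete, the following minimal NumPy sketch (not the authors' code; the function names rank_one_metric and mahalanobis_sq are illustrative assumptions) assembles a Mahalanobis metric M as a non-negative combination of rank-one PSD matrices, the form that boosting-type metric learners build up one rank-one term at a time.

```python
# Minimal sketch, assuming M = sum_j w_j * u_j u_j^T with w_j >= 0,
# which keeps M positive semidefinite by construction.
import numpy as np

def rank_one_metric(weights, bases):
    """Assemble M = sum_j weights[j] * u_j u_j^T from unit vectors u_j."""
    d = bases.shape[1]
    M = np.zeros((d, d))
    for w, u in zip(weights, bases):
        M += w * np.outer(u, u)  # each term is rank-one and PSD
    return M

def mahalanobis_sq(x, y, M):
    """Squared Mahalanobis distance (x - y)^T M (x - y)."""
    diff = x - y
    return float(diff @ M @ diff)

# Illustrative use: 3 rank-one bases in a 5-dimensional feature space.
rng = np.random.default_rng(0)
U = rng.standard_normal((3, 5))
U /= np.linalg.norm(U, axis=1, keepdims=True)  # normalize each u_j
w = np.array([0.5, 0.3, 0.2])                  # non-negative combination weights
M = rank_one_metric(w, U)
print(mahalanobis_sq(rng.standard_normal(5), rng.standard_normal(5), M))
```

In this parameterization, the two regularizers of DRMetric act on the example weights used by the booster and on the rank-one matrices u_j u_j^T themselves; the sketch only illustrates the metric's structure, not the learning objective.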

Keywords

Regularized metric learning · Boosting · PSD matrix


Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Meizhu Liu (1, 2)
  • Baba C. Vemuri (1, 2)
  1. Siemens Corporate Research & Technology, Princeton, USA
  2. CISE, University of Florida, Gainesville, USA
