Frontiers of Computer Science, Volume 7, Issue 3, pp 359–369

Co-metric: a metric learning algorithm for data with multiple views

Research Article

Abstract

We address the problem of metric learning for multi-view data. Although many metric learning algorithms have been proposed, most of them focus on the single-view setting, and only a few deal with multi-view data. In this paper, motivated by the co-training framework, we propose an algorithm-independent framework, named co-metric, to learn Mahalanobis metrics in multi-view settings. In its implementation, an off-the-shelf single-view metric learning algorithm is used to learn metrics in the individual views from a few labeled examples. The most confidently labeled examples chosen from the unlabeled set are then used to guide the metric learning in the next loop. This procedure is repeated until some stop criterion is met. The framework can accommodate most existing metric learning algorithms, whether they use side information or example labels. In addition, it naturally handles semi-supervised settings with more than two views. Our comparative experiments demonstrate its competitiveness and effectiveness.
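The abstract describes an iterative, co-training-style loop: learn a metric per view from the labeled set, let each view label the unlabeled pool, move the most confident examples into the labeled set, and repeat. The sketch below illustrates that loop under assumptions not stated in the abstract: a within-class-covariance Mahalanobis learner stands in for the off-the-shelf single-view algorithm, and a 1-NN distance margin serves as the labeling confidence. All function names are illustrative.

```python
import numpy as np

def learn_metric(X, y):
    # Illustrative single-view metric learner (an assumption, not the
    # paper's choice): inverse within-class covariance, a simple
    # Mahalanobis metric. Any off-the-shelf learner could be plugged in.
    d = X.shape[1]
    S = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        if len(Xc) >= 2:
            S += np.cov(Xc, rowvar=False) * (len(Xc) - 1)
    S = S / len(X) + 1e-3 * np.eye(d)   # regularize for invertibility
    return np.linalg.inv(S)

def predict_with_conf(M, X_lab, y_lab, x):
    # 1-NN under Mahalanobis metric M; confidence is the margin between
    # the nearest other-class distance and the overall nearest distance.
    diffs = X_lab - x
    dists = np.einsum('ij,jk,ik->i', diffs, M, diffs)  # squared distances
    nearest = np.argmin(dists)
    label = y_lab[nearest]
    mask = y_lab != label
    other = dists[mask].min() if mask.any() else np.inf
    return label, other - dists[nearest]

def co_metric(views_lab, y_lab, views_unl, rounds=3, per_round=2):
    # views_lab / views_unl: one feature matrix per view, rows aligned.
    views_lab = [V.copy() for V in views_lab]
    views_unl = [V.copy() for V in views_unl]
    y = y_lab.copy()
    for _ in range(rounds):
        if len(views_unl[0]) == 0:
            break
        metrics = [learn_metric(V, y) for V in views_lab]
        # Each view labels the pool; keep its most confident examples.
        picks = {}
        for M, V_l, V_u in zip(metrics, views_lab, views_unl):
            scored = [predict_with_conf(M, V_l, y, x) for x in V_u]
            confs = np.array([c for _, c in scored])
            for i in np.argsort(-confs)[:per_round]:
                picks.setdefault(i, scored[i][0])  # first view wins ties
        idx = sorted(picks)
        # Move the chosen examples, with their predicted labels, into every
        # view's labeled set, then repeat with the enlarged training data.
        y = np.concatenate([y, [picks[i] for i in idx]])
        for v in range(len(views_lab)):
            views_lab[v] = np.vstack([views_lab[v], views_unl[v][idx]])
            views_unl[v] = np.delete(views_unl[v], idx, axis=0)
    return [learn_metric(V, y) for V in views_lab], y
```

Because the single-view learner is only called through `learn_metric`, any metric learning algorithm (supervised or side-information-based) can be substituted without changing the outer loop, which is the sense in which the framework is algorithm-independent.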

Keywords

multi-view learning; metric learning; algorithm-independent framework



Copyright information

© Higher Education Press and Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  1. Department of Computer Science and Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
  2. State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China
