
Adjacency Matrix Construction Using Sparse Coding for Label Propagation

  • Haixia Zheng
  • Horace H. S. Ip
  • Liang Tao
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7585)

Abstract

Graph-based semi-supervised learning algorithms have attracted increasing attention recently due to their superior performance in exploiting abundant unlabeled data alongside limited labeled data via label propagation. The principal issue in constructing a graph is how to accurately measure the similarity between two data examples. In this paper, we propose a novel approach that measures the similarities among data points by means of the local linear reconstruction of their corresponding sparse codes. The sparse codes of data examples not only preserve their local manifold semantics but also significantly boost the discriminative power among different classes. Moreover, the sparsity helps to dramatically reduce the computation and storage requirements. Experimental results on the well-known Caltech-101 dataset demonstrate that the proposed similarity measure delivers better label propagation performance.
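The pipeline described in the abstract can be sketched in three steps: compute a sparse code for every example over a dictionary, build the adjacency matrix by locally reconstructing each sparse code from its nearest sparse codes, and run graph-based label propagation. The sketch below is an illustrative reconstruction, not the authors' exact algorithm: the ISTA solver, the LLE-style reconstruction weights, and the closed-form propagation step (in the style of Zhou et al.'s local-and-global-consistency method) are all assumptions made for the example.

```python
import numpy as np

def ista_sparse_codes(X, D, lam=0.1, n_iter=200):
    """Sparse codes A (k x n) for data X (d x n) over dictionary D (d x k),
    solving min_a 0.5*||x - D a||^2 + lam*||a||_1 by ISTA."""
    L = np.linalg.norm(D, 2) ** 2                  # Lipschitz constant of the gradient
    A = np.zeros((D.shape[1], X.shape[1]))
    for _ in range(n_iter):
        A = A - (D.T @ (D @ A - X)) / L            # gradient step on the data term
        A = np.sign(A) * np.maximum(np.abs(A) - lam / L, 0.0)  # soft-thresholding
    return A

def lle_adjacency(S, k=5, reg=1e-3):
    """Each sparse code is linearly reconstructed from its k nearest sparse
    codes; the reconstruction weights fill a sparse row of the adjacency W."""
    n = S.shape[1]
    W = np.zeros((n, n))
    for i in range(n):
        d2 = np.sum((S - S[:, [i]]) ** 2, axis=0)
        d2[i] = np.inf
        nbrs = np.argsort(d2)[:k]
        Z = S[:, nbrs] - S[:, [i]]                 # neighbours centred on point i
        G = Z.T @ Z + reg * np.eye(k)              # regularised local Gram matrix
        w = np.linalg.solve(G, np.ones(k))
        w = np.maximum(w, 0.0)                     # keep reconstruction weights nonnegative
        W[i, nbrs] = w / w.sum() if w.sum() > 0 else 1.0 / k
    return W

def propagate_labels(W, y, alpha=0.9):
    """Closed-form label propagation F = (I - alpha*S)^{-1} Y on the
    symmetrically normalised graph; y uses -1 for unlabeled points."""
    n = len(y)
    classes = sorted(c for c in set(y) if c >= 0)
    Y = np.zeros((n, len(classes)))
    for i, c in enumerate(y):
        if c >= 0:
            Y[i, classes.index(c)] = 1.0
    Wsym = 0.5 * (W + W.T)
    d = np.maximum(Wsym.sum(axis=1), 1e-12)
    Snorm = Wsym / np.sqrt(np.outer(d, d))         # D^{-1/2} W D^{-1/2}
    F = np.linalg.solve(np.eye(n) - alpha * Snorm, Y)
    return np.array([classes[j] for j in F.argmax(axis=1)])
```

In practice the dictionary would be learned from image features rather than fixed, and the resulting W is sparse by construction (at most k nonzeros per row), which is the source of the reduced computation and storage the abstract refers to.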

Keywords

Adjacency Matrix · Training Image · Sparse Code · Neural Information Processing System · Label Propagation



Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Haixia Zheng 1
  • Horace H. S. Ip 1
  • Liang Tao 1

  1. Centre for Innovative Applications of Internet and Multimedia Technologies (AIMtech Centre), Department of Computer Science, City University of Hong Kong, Kowloon, Hong Kong
