International Journal of Computer Vision, Volume 109, Issue 1–2, pp 42–59

Weakly-Supervised Cross-Domain Dictionary Learning for Visual Recognition

DOI: 10.1007/s11263-014-0703-y

Cite this article as:
Zhu, F. & Shao, L. Int J Comput Vis (2014) 109: 42. doi:10.1007/s11263-014-0703-y

Abstract

We address the visual categorization problem and present a method that utilizes weakly labeled data from other visual domains as auxiliary source data to enhance the original learning system. The proposed method aims to expand the intra-class diversity of the original training data through collaboration with the source data. To bring the original target-domain data and the auxiliary source-domain data into the same feature space, we introduce a weakly-supervised cross-domain dictionary learning method, which learns a reconstructive, discriminative and domain-adaptive dictionary pair and the corresponding classifier parameters without using any prior information. Since the method operates at a high level, it can be applied to different cross-domain applications. To build up the auxiliary-domain data, we manually collect images from Web pages and select human actions of specific categories from a different dataset. The proposed method is evaluated on human action recognition, image classification and event recognition tasks using the UCF YouTube dataset, the Caltech101/256 datasets and the Kodak dataset, respectively, achieving outstanding results.
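To make the core idea concrete, the sketch below shows a generic shared-dictionary formulation: features from both domains are sparse-coded against a single learned dictionary, so target and source samples end up represented in the same code space. This is a minimal illustration of cross-domain dictionary learning in general, not the authors' weakly-supervised algorithm; the ISTA sparse coder, the least-squares dictionary update, and all names and dimensions are assumptions for the sketch.

```python
import numpy as np

def sparse_code(D, X, lam=0.1, n_iter=50):
    """ISTA sparse coding: min_A ||X - D A||_F^2 + lam * ||A||_1."""
    L = np.linalg.norm(D, 2) ** 2  # Lipschitz constant of the gradient
    A = np.zeros((D.shape[1], X.shape[1]))
    for _ in range(n_iter):
        grad = D.T @ (D @ A - X)           # gradient step on the fit term
        A = A - grad / L
        A = np.sign(A) * np.maximum(np.abs(A) - lam / L, 0.0)  # soft-threshold
    return A

def learn_shared_dictionary(X_tgt, X_src, n_atoms=32, lam=0.1, n_iter=20, seed=0):
    """Learn one dictionary over both domains so their codes share a space."""
    X = np.hstack([X_tgt, X_src])          # pool target + source samples
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((X.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0, keepdims=True)
    for _ in range(n_iter):
        A = sparse_code(D, X, lam)
        # dictionary update: ridge-regularized least squares, then renormalize
        D = X @ A.T @ np.linalg.pinv(A @ A.T + 1e-8 * np.eye(n_atoms))
        D /= np.linalg.norm(D, axis=0, keepdims=True) + 1e-12
    return D

# Synthetic stand-ins for target and auxiliary source features (dim x samples).
rng = np.random.default_rng(1)
X_tgt = rng.standard_normal((64, 40))
X_src = rng.standard_normal((64, 60))
D = learn_shared_dictionary(X_tgt, X_src)
codes_tgt = sparse_code(D, X_tgt)  # both domains now live in one code space
codes_src = sparse_code(D, X_src)
print(D.shape, codes_tgt.shape, codes_src.shape)  # (64, 32) (32, 40) (32, 60)
```

A discriminative variant, as in the paper, would additionally couple the codes to classifier parameters during the alternating updates rather than learning the dictionary purely for reconstruction.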

Keywords

Visual categorization · Image classification · Human action recognition · Event recognition · Transfer learning · Weakly-supervised dictionary learning

Copyright information

© Springer Science+Business Media New York 2014

Authors and Affiliations

  1. College of Electronic and Information Engineering, Nanjing University of Information Science and Technology, Nanjing, China
  2. Department of Electronic and Electrical Engineering, The University of Sheffield, Sheffield, UK