
International Journal of Computer Vision, Volume 109, Issue 1–2, pp. 28–41

Asymmetric and Category Invariant Feature Transformations for Domain Adaptation

  • Judy Hoffman (corresponding author)
  • Erik Rodner
  • Jeff Donahue
  • Brian Kulis
  • Kate Saenko

Abstract

We address the problem of visual domain adaptation for transferring object models from one dataset or visual domain to another. We introduce a unified, flexible model for both supervised and semi-supervised learning that allows us to learn transformations between domains. Additionally, we present two instantiations of the model, one for general feature adaptation/alignment, and one specifically designed for classification. First, we show how to extend metric learning methods for domain adaptation, allowing for learning metrics independent of the domain shift and the final classifier used. Furthermore, we go beyond classical metric learning by extending the method to asymmetric, category-independent transformations. Our framework can adapt features even when the target domain does not have any labeled examples for some categories, and when the target and source features have different dimensions. Finally, we develop a joint learning framework for adaptive classifiers, which outperforms competing methods in terms of multi-class accuracy and scalability. We demonstrate the ability of our approach to adapt object recognition models under a variety of situations, such as differing imaging conditions, feature types, and codebooks. The experiments show its strong performance compared to previous approaches and its applicability to large-scale scenarios.
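The central idea of the abstract, learning a single asymmetric transformation that maps target features into the source feature space so that source-trained classifiers can be reused, can be sketched compactly. The following is a minimal illustrative sketch, not the authors' implementation: the bilinear-similarity objective with hinge penalties, the plain subgradient solver, and all function and variable names are assumptions made for clarity (the paper's actual formulation is optimized differently and can be jointly learned with the classifier).

    import numpy as np

    def learn_asymmetric_transform(Xs, Xt, pairs, labels,
                                   reg=1.0, lr=1e-3, epochs=200):
        """Learn a (possibly non-square) matrix W so that the bilinear
        similarity x_s^T W x_t is high for same-category cross-domain
        pairs and low for different-category pairs.

        Xs: (n_s, d_s) source features;  Xt: (n_t, d_t) target features.
        pairs: sequence of (i, j) index pairs into Xs and Xt.
        labels: +1 for same-category pairs, -1 otherwise.
        """
        d_s, d_t = Xs.shape[1], Xt.shape[1]
        W = np.zeros((d_s, d_t))           # asymmetric: d_s need not equal d_t
        for _ in range(epochs):
            grad = reg * W                 # gradient of (reg/2) * ||W||_F^2
            for (i, j), y in zip(pairs, labels):
                score = Xs[i] @ W @ Xt[j]
                if 1.0 - y * score > 0.0:  # hinge term is active: constraint violated
                    grad -= y * np.outer(Xs[i], Xt[j])
            W -= lr * grad                 # subgradient descent step
        return W

Because W is d_s-by-d_t, source and target features may differ in dimensionality or codebook, as the abstract notes, and since one W is shared across all categories, it also transfers to categories with no labeled target examples. In this hypothetical usage, a target feature x_t would be mapped to W @ x_t and scored by any classifier trained on the source domain.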

Keywords

Object recognition · Domain adaptation · Transformation learning

Copyright information

© Springer Science+Business Media New York 2014

Authors and Affiliations

  • Judy Hoffman (1) (corresponding author)
  • Erik Rodner (2)
  • Jeff Donahue (1)
  • Brian Kulis (3)
  • Kate Saenko (4)

  1. UC Berkeley, Berkeley, USA
  2. Friedrich Schiller University of Jena, Jena, Germany
  3. Ohio State University, Columbus, USA
  4. University of Massachusetts, Lowell, USA
