World Wide Web, Volume 19, Issue 2, pp 199–215

Active domain adaptation with noisy labels for multimedia analysis

  • Gaowen Liu
  • Yan Yan
  • Ramanathan Subramanian
  • Jingkuan Song
  • Guoyu Lu
  • Nicu Sebe

Abstract

Supervised learning methods require sufficient labeled examples to learn a good model for classification or regression. However, labeled data are scarce in many applications. Active learning (AL) and domain adaptation (DA) are two strategies for minimizing the amount of labeled data needed for model training. AL asks a domain expert to label a small number of highly informative examples to facilitate classification, while DA adapts source domain knowledge for classification on the target domain. In this paper, we demonstrate how AL can efficiently reduce the amount of labeled data required for DA. Since the source and target domains usually have different distributions, the domain expert may not have sufficient knowledge to answer every query correctly. We therefore extend our active DA framework to handle incorrect labels provided by domain experts. Experiments with multimedia data demonstrate the efficiency of the proposed framework for active DA with noisy labels.
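To make the setting concrete, the sketch below illustrates one generic way to combine the ingredients described above: a classifier trained on a labeled source domain, uncertainty-based active queries on the target domain, and a simple repeated-query (majority-vote) strategy to absorb occasional noisy answers from the oracle. This is a minimal illustration under assumed synthetic data and an assumed 15% label-flip rate, not the algorithm proposed in the paper.

```python
# Generic sketch (NOT the authors' algorithm): uncertainty-based active
# learning on a target domain, starting from a source-trained model, with
# majority voting over repeated queries to mitigate a noisy oracle.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_domain(shift, n=400):
    """Two Gaussian blobs; 'shift' simulates the domain gap (assumed data)."""
    X0 = rng.normal(loc=[0 + shift, 0], scale=1.0, size=(n // 2, 2))
    X1 = rng.normal(loc=[2 + shift, 2], scale=1.0, size=(n // 2, 2))
    return np.vstack([X0, X1]), np.array([0] * (n // 2) + [1] * (n // 2))

X_src, y_src = make_domain(shift=0.0)   # labeled source domain
X_tgt, y_tgt = make_domain(shift=1.5)   # target domain (labels hidden)

def noisy_oracle(i, flip_prob=0.15):
    """Simulated domain expert that mislabels with probability flip_prob."""
    true = y_tgt[i]
    return 1 - true if rng.random() < flip_prob else true

# Start from the source-trained model, then actively query target examples.
model = LogisticRegression().fit(X_src, y_src)
labeled_idx, labeled_y = [], []

for _ in range(30):                          # query budget
    proba = model.predict_proba(X_tgt)[:, 1]
    uncertainty = -np.abs(proba - 0.5)       # closest to the decision boundary
    uncertainty[labeled_idx] = -np.inf       # never re-query the same point
    i = int(np.argmax(uncertainty))

    # Ask the noisy expert three times and keep the majority label.
    votes = [noisy_oracle(i) for _ in range(3)]
    labeled_idx.append(i)
    labeled_y.append(int(round(np.mean(votes))))

    # Re-train on source data plus the actively labeled target examples.
    X_train = np.vstack([X_src, X_tgt[labeled_idx]])
    y_train = np.concatenate([y_src, labeled_y])
    model = LogisticRegression().fit(X_train, y_train)

print("Target accuracy after active adaptation:", model.score(X_tgt, y_tgt))
```

The uncertainty criterion, the three-vote oracle, and the choice of retraining on the pooled source-plus-target set are all placeholder design choices meant only to show how active querying and noisy-label handling can interact in a DA loop.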

Keywords

Active learning · Domain adaptation · Noisy labels · Multimedia analysis


Copyright information

© Springer Science+Business Media New York 2015

Authors and Affiliations

  • Gaowen Liu (1)
  • Yan Yan (1)
  • Ramanathan Subramanian (2)
  • Jingkuan Song (1)
  • Guoyu Lu (3)
  • Nicu Sebe (1)

  1. Department of Computer Science and Information Engineering, University of Trento, Trento, Italy
  2. Advanced Digital Sciences Center, UIUC, Singapore, Singapore
  3. Department of Computer Science, University of Delaware, Newark, USA
