Abstract
Standard domain adaptation (DA) assumes that source examples are drawn from a single source distribution, yet in practice they often come from multiple source domains. Compared with single-source DA, Multi-Source DA (MSDA) is more challenging to solve: additional domain shifts exist between the source domains and, moreover, the source domains may disagree on their semantic information. In this chapter, we survey the Deep CockTail Network (DCTN), a prevalent MSDA algorithm that combats the multi-source-derived domain and semantic shifts. The idea behind it is inspired by making cocktails from multiple ingredients (i.e., the sources in our setting). In particular, DCTN alternates between two learning phases: (1) DCTN undergoes a multi-way adversarial DA process that minimizes the domain discrepancy between the target and each source, in order to obtain domain-invariant features. In this process, each target example yields source-specific perplexity scores, denoting how similar its feature appears to a feature from each of the source domains. (2) Integrated with the perplexity scores, the multi-source category classifiers categorize the target samples, and the pseudo-labeled target samples, together with the source samples, jointly update the category classifiers and the feature extractor. In the empirical studies, DCTN is evaluated on three domain adaptation benchmarks under vanilla and source-category-shift MSDA scenarios. The results thoroughly evidence the superiority of the DCTN framework in resisting negative transfer across domains and tasks.
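The second phase above combines the per-source classifier outputs using the perplexity scores as mixing weights. The following is a minimal NumPy sketch of that combination for a single target sample; the function name and the softmax normalization of the raw scores are our own illustrative assumptions, not the exact formulation from the original paper.

```python
import numpy as np

def softmax(z, axis=-1):
    # numerically stable softmax
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def dctn_target_prediction(class_logits, perplexity_scores):
    """Combine per-source classifier outputs into one target prediction.

    class_logits: (num_sources, num_classes) logits from each
        source-specific category classifier for one target sample.
    perplexity_scores: (num_sources,) raw scores measuring how similar
        the target feature appears to each source domain.
    """
    weights = softmax(perplexity_scores)   # distribution over sources
    probs = softmax(class_logits, axis=1)  # per-source class posteriors
    return weights @ probs                 # weighted average, shape (num_classes,)

# Toy example: 2 sources, 3 classes; the sample looks more like source 0,
# so source 0's classifier dominates the final prediction.
logits = np.array([[2.0, 0.1, -1.0],
                   [0.5, 1.5,  0.0]])
scores = np.array([1.0, 0.2])
p = dctn_target_prediction(logits, scores)
```

A target sample whose pseudo-label (the argmax of `p`) is sufficiently confident can then join the source samples in updating the classifiers and the feature extractor.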
Notes
- 1.
The perplexity scores here are unrelated to the perplexity measure used in natural language processing. In our chapter, they are completely determined by the relevant equations.
- 2.
The pre-training process can be found in the original paper and its official code release.
- 3.
Since each sample x corresponds to a unique class y, \(\{\mathcal {P}_{j}\}^M_{j=1}\) and \(\mathcal {P}_t\) can be viewed as equivalent embeddings of \(\{P_{j}(x,y)\}^M_{j=1}\) and \(P_t(x,y)\) that we have discussed.
Acknowledgements
We would like to thank the other authors, i.e., Ruijia Xu, Wangmeng Zuo and Junjie Yan, for their contribution to the original paper.
Copyright information
© 2020 Springer Nature Switzerland AG
Cite this chapter
Chen, Z., Lin, L. (2020). Multi-Source Domain Adaptation by Deep CockTail Networks. In: Venkateswara, H., Panchanathan, S. (eds) Domain Adaptation in Computer Vision with Deep Learning. Springer, Cham. https://doi.org/10.1007/978-3-030-45529-3_12
DOI: https://doi.org/10.1007/978-3-030-45529-3_12
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-45528-6
Online ISBN: 978-3-030-45529-3
eBook Packages: Mathematics and Statistics (R0)