
Every Pixel Matters: Center-Aware Feature Alignment for Domain Adaptive Object Detector

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12354)

Abstract

A domain adaptive object detector aims to adapt itself to unseen domains that may contain variations in object appearance, viewpoints, or backgrounds. Most existing methods adopt feature alignment at either the image level or the instance level. However, image-level alignment on global features may entangle foreground and background pixels, while instance-level alignment on proposals may suffer from background noise. In contrast to existing solutions, we propose a domain adaptation framework that accounts for each pixel by predicting pixel-wise objectness and centerness. Specifically, the proposed method carries out center-aware alignment that pays more attention to foreground pixels, thereby achieving better adaptation across domains. We demonstrate our method in numerous adaptation settings with extensive experiments and show favorable performance against state-of-the-art algorithms. Source code and models are available at https://github.com/chengchunhsu/EveryPixelMatters.
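To make the center-aware alignment concrete, the following is a minimal sketch, not the authors' released implementation. It assumes an FCOS-style detector whose per-pixel classification and centerness logits are available; the names `GradientReversal`, `PixelDiscriminator`, and `center_aware_loss` are illustrative. A pixel-wise domain discriminator is trained through a gradient-reversal layer, and its per-pixel loss is weighted by objectness times centerness so foreground pixels dominate the alignment.

```python
# Minimal sketch of center-aware pixel-wise alignment (illustrative only,
# not the released code). Assumes FCOS-style per-pixel heads.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradientReversal(torch.autograd.Function):
    """Identity in the forward pass; scales and flips gradients in the backward pass."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


class PixelDiscriminator(nn.Module):
    """Predicts a per-pixel domain logit (source vs. target) from a feature map."""

    def __init__(self, in_channels=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 256, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(256, 1, kernel_size=1),
        )

    def forward(self, feat, lambd=1.0):
        # Reversed gradients push the backbone toward domain-invariant features.
        return self.net(GradientReversal.apply(feat, lambd))


def center_aware_loss(disc_logits, cls_logits, centerness_logits, domain_label):
    """Pixel-wise domain loss weighted by objectness * centerness,
    so foreground pixels receive more attention during alignment."""
    objectness = torch.sigmoid(cls_logits).max(dim=1, keepdim=True).values
    centerness = torch.sigmoid(centerness_logits)
    weight = (objectness * centerness).detach()  # weight map carries no gradient
    target = torch.full_like(disc_logits, domain_label)  # 1 = source, 0 = target
    per_pixel = F.binary_cross_entropy_with_logits(disc_logits, target, reduction="none")
    return (weight * per_pixel).mean()
```

In training, the same loss would be computed on source features (domain label 1) and on target features (domain label 0) at each feature level and added to the detection loss; since the weight map is detached, the alignment signal reaches the backbone only through the discriminator's reversed gradients.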

Notes

Acknowledgment

This work was supported in part by the Ministry of Science and Technology (MOST) under grants MOST 107-2628-E-009-007-MY3, MOST 109-2634-F-007-013, and MOST 109-2221-E-009-113-MY3, and by Qualcomm through a Taiwan University Research Collaboration Project. M.-H. Yang is supported in part by NSF CAREER Grant 1149783.

Supplementary material

Supplementary material 1: 504446_1_En_42_MOESM1_ESM.pdf (22.8 MB)


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Academia Sinica, Taipei, Taiwan
  2. NEC Labs America, Texas, USA
  3. National Chiao Tung University, Hsinchu, Taiwan
  4. UC Merced, Merced, USA
  5. Google Research, Cambridge, USA
