
Multi-task Generative Adversarial Network for Detecting Small Objects in the Wild

Published in: International Journal of Computer Vision

Abstract

Object detection performance has improved rapidly with the development of deep convolutional neural networks. Although impressive results have been achieved on large- and medium-sized objects, performance on small objects remains far from satisfactory, and detecting small objects in unconstrained conditions (e.g., the COCO and WIDER FACE benchmarks) is one of the remaining open challenges. The reason is that small objects usually lack the detailed appearance information needed to distinguish them from the background or from similar objects. To address the small object detection problem, in this paper we propose an end-to-end multi-task generative adversarial network (MTGAN), which is a general framework. In the MTGAN, the generator is a super-resolution network that up-samples small blurred images into fine-scale ones, recovering detailed information for more accurate detection. The discriminator is a multi-task network that describes each input image patch with a real/fake score, object category scores, and bounding-box regression offsets. Furthermore, to drive the generator to recover more details for easier detection, the classification and regression losses of the discriminator are back-propagated into the generator during training. Extensive experiments on the challenging COCO and WIDER FACE datasets demonstrate the effectiveness of the proposed method in restoring a clear super-resolved image from a blurred small one, and show that detection performance, especially on small objects, improves over state-of-the-art methods by a large margin.
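
To make the training setup concrete, here is a minimal PyTorch sketch of the idea described above: a super-resolution generator, a discriminator with three heads (real/fake score, category scores, box offsets), and a generator loss that folds the discriminator's classification and regression losses back in. Layer widths, the 4x up-sampling factor, and the loss weights (lambda_adv, lambda_cls, lambda_reg) are illustrative assumptions, not the authors' exact configuration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Super-resolution generator: up-samples a small ROI patch by 4x."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Two sub-pixel (PixelShuffle) stages give the 4x spatial up-sampling.
        self.up = nn.Sequential(
            nn.Conv2d(64, 256, 3, padding=1), nn.PixelShuffle(2), nn.ReLU(inplace=True),
            nn.Conv2d(64, 256, 3, padding=1), nn.PixelShuffle(2), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 3, padding=1),
        )

    def forward(self, x):
        return self.up(self.body(x))

class MultiTaskDiscriminator(nn.Module):
    """Shared trunk with three heads: real/fake score, category scores,
    and bounding-box regression offsets, as described in the abstract."""
    def __init__(self, num_classes):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.adv_head = nn.Linear(128, 1)                # real vs. generated
        self.cls_head = nn.Linear(128, num_classes + 1)  # categories + background
        self.reg_head = nn.Linear(128, 4)                # box offsets (dx, dy, dw, dh)

    def forward(self, x):
        f = self.trunk(x)
        return self.adv_head(f), self.cls_head(f), self.reg_head(f)

def generator_loss(adv_logit, cls_logit, reg_pred, sr, hr, label, target_box,
                   lambda_adv=0.001, lambda_cls=1.0, lambda_reg=1.0):
    """Pixel-wise MSE to the high-resolution patch, plus adversarial,
    classification, and regression terms; the last two are what gets
    back-propagated from the discriminator into the generator."""
    l_mse = F.mse_loss(sr, hr)
    l_adv = F.binary_cross_entropy_with_logits(adv_logit, torch.ones_like(adv_logit))
    l_cls = F.cross_entropy(cls_logit, label)        # label: ground-truth category
    l_reg = F.smooth_l1_loss(reg_pred, target_box)   # target_box: regression targets
    return l_mse + lambda_adv * l_adv + lambda_cls * l_cls + lambda_reg * l_reg

# Usage sketch: sr = G(lr_patch); adv, cls, reg = D(sr)
# g_loss = generator_loss(adv, cls, reg, sr, hr_patch, labels, box_targets)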


Notes

  1. http://cocodataset.org/#detection-eval.

References

  • Bai, Y., & Ghanem, B. (2017). Multi-branch fully convolutional network for face detection. Preprint arXiv:1707.06330.

  • Bai, Y., Zhang, Y., Ding, M., & Ghanem, B. (2018a). Finding tiny faces in the wild with generative adversarial network. In CVPR. New York: IEEE.

  • Bai, Y., Zhang, Y., Ding, M., & Ghanem, B. (2018b). SOD-MTGAN: Small object detection via multi-task generative adversarial network. In Computer vision – ECCV (pp. 8–14).

  • Bell, S., Lawrence Zitnick, C., Bala, K., & Girshick, R. (2016). Inside-outside net: Detecting objects in context with skip pooling and recurrent neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 2874–2883).

  • Cai, Z., Fan, Q., Feris, R. S., & Vasconcelos, N. (2016). A unified multi-scale deep convolutional neural network for fast object detection. In European conference on computer vision (pp. 354–370). Berlin: Springer.

  • Cai, Z., & Vasconcelos, N. (2018). Cascade R-CNN: Delving into high quality object detection. In CVPR.

  • Cheng, B., Wei, Y., Shi, H., Feris, R., Xiong, J., & Huang, T. (2018a). Decoupled classification refinement: Hard false positive suppression for object detection. Preprint arXiv:1810.04002.

  • Cheng, B., Wei, Y., Shi, H., Feris, R., Xiong, J., & Huang, T. (2018b). Revisiting R-CNN: On awakening the classification power of Faster R-CNN. In Proceedings of the European conference on computer vision (ECCV) (pp. 453–468).

  • Chi, C., Zhang, S., Xing, J., Lei, Z., Li, S. Z., & Zou, X. (2018). Selective refinement network for high performance face detection. Preprint arXiv:1809.02693.

  • Dai, J., Li, Y., He, K., & Sun, J. (2016). R-FCN: Object detection via region-based fully convolutional networks. In NIPS (pp. 379–387).

  • Denton, E. L., Chintala, S., Fergus, R., et al. (2015). Deep generative image models using a Laplacian pyramid of adversarial networks. In Advances in neural information processing systems (pp. 1486–1494).

  • Dong, C., Loy, C. C., & Tang, X. (2016). Accelerating the super-resolution convolutional neural network. In European conference on computer vision (pp. 391–407). Berlin: Springer.

  • Fu, C. Y., Liu, W., Ranga, A., Tyagi, A., & Berg, A. C. (2017). DSSD: Deconvolutional single shot detector. Preprint arXiv:1701.06659.

  • Girshick, R. (2015). Fast R-CNN. In ICCV (pp. 1440–1448). New York: IEEE.

  • Girshick, R., Donahue, J., Darrell, T., & Malik, J. (2014). Rich feature hierarchies for accurate object detection and semantic segmentation. In CVPR (pp. 580–587).

  • Girshick, R., Radosavovic, I., Gkioxari, G., Dollár, P., & He, K. (2018). Detectron. https://github.com/facebookresearch/detectron.

  • Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., & Bengio, Y. (2014). Generative adversarial nets. In Advances in neural information processing systems (pp. 2672–2680).

  • He, K., Gkioxari, G., Dollár, P., & Girshick, R. (2017). Mask R-CNN. In ICCV (pp. 2961–2969).

  • He, K., Zhang, X., Ren, S., & Sun, J. (2015). Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In ICCV (pp. 1026–1034).

  • He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 770–778).

  • Hradiš, M., Kotera, J., Zemcík, P., & Šroubek, F. (2015). Convolutional neural networks for direct text deblurring. In Proceedings of BMVC (Vol. 10, p. 2).

  • Hu, P., & Ramanan, D. (2017). Finding tiny faces. In 2017 IEEE conference on computer vision and pattern recognition (CVPR) (pp. 1522–1530). New York: IEEE.

  • Huang, J., Rathod, V., Sun, C., Zhu, M., Korattikara, A., Fathi, A., et al. (2017). Speed/accuracy trade-offs for modern convolutional object detectors. In IEEE CVPR.

  • Ioffe, S., & Szegedy, C. (2015). Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML (pp. 448–456).

  • Isola, P., Zhu, J. Y., Zhou, T., & Efros, A. A. (2017). Image-to-image translation with conditional adversarial networks. In CVPR (pp. 1125–1134).

  • Jain, V., & Learned-Miller, E. (2010). FDDB: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009. Amherst: University of Massachusetts.

  • Jiang, H., & Learned-Miller, E. (2017). Face detection with the Faster R-CNN. In 2017 12th IEEE international conference on automatic face & gesture recognition (FG 2017) (pp. 650–657). New York: IEEE.

  • Kim, J., Kwon Lee, J., & Mu Lee, K. (2016). Accurate image super-resolution using very deep convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 1646–1654).

  • Kingma, D. P., & Ba, J. (2014). Adam: A method for stochastic optimization. Preprint arXiv:1412.6980.

  • Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks. In Advances in neural information processing systems (pp. 1097–1105).

  • Ledig, C., Theis, L., Huszar, F., Caballero, J., Cunningham, A., Acosta, A., et al. (2017). Photo-realistic single image super-resolution using a generative adversarial network. In CVPR (pp. 4681–4690).

  • Li, J., Wang, Y., Wang, C., Tai, Y., Qian, J., Yang, J., et al. (2018). DSFD: Dual shot face detector. Preprint arXiv:1810.10220.

  • Lin, T. Y., Dollár, P., Girshick, R., He, K., Hariharan, B., & Belongie, S. (2017). Feature pyramid networks for object detection. In CVPR (Vol. 1, p. 4).

  • Lin, T. Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., et al. (2014). Microsoft COCO: Common objects in context. In ECCV (pp. 740–755). Berlin: Springer.


  • Liu, W., Anguelov, D., Erhan, D., Szegedy, C., Reed, S., Fu, C. Y., & Berg, A. C. (2016). SSD: Single shot multibox detector. In ECCV (pp. 21–37). Berlin: Springer.

  • Mathieu, M. F., Zhao, J. J., Zhao, J., Ramesh, A., Sprechmann, P., & LeCun, Y. (2016). Disentangling factors of variation in deep representation using adversarial training. In Advances in neural information processing systems (pp. 5040–5048).

  • Najibi, M., Samangouei, P., Chellappa, R., & Davis, L. S. (2017). SSH: Single stage headless face detector. In Proceedings of the IEEE international conference on computer vision (pp. 4875–4884).

  • Radford, A., Metz, L., & Chintala, S. (2015). Unsupervised representation learning with deep convolutional generative adversarial networks. Preprint arXiv:1511.06434.

  • Redmon, J., Divvala, S., Girshick, R., & Farhadi, A. (2016). You only look once: Unified, real-time object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 779–788).

  • Redmon, J., & Farhadi, A. (2017). YOLO9000: Better, faster, stronger. In 2017 IEEE conference on computer vision and pattern recognition (CVPR) (pp. 6517–6525). New York: IEEE.

  • Ren, S., He, K., Girshick, R., & Sun, J. (2015). Faster R-CNN: Towards real-time object detection with region proposal networks. In NIPS (pp. 91–99).

  • Shen, Z., He, Z., & Xue, X. (2019). MEAL: Multi-model ensemble via adversarial learning. In AAAI.

  • Shiri, F., Yu, X., Porikli, F., Hartley, R., & Koniusz, P. (2019). Identity-preserving face recovery from stylized portraits. International Journal of Computer Vision, 127, 1–21.


  • Shrivastava, A., Gupta, A., & Girshick, R. (2016a). Training region-based object detectors with online hard example mining. In CVPR (pp. 761–769).

  • Shrivastava, A., Sukthankar, R., Malik, J., & Gupta, A. (2016b). Beyond skip connections: Top-down modulation for object detection. CoRR arXiv:1612.06851.

  • Simonyan, K., & Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. Preprint arXiv:1409.1556.

  • Song, Y., Zhang, J., Gong, L., He, S., Bao, L., Pan, J., et al. (2019). Joint face hallucination and deblurring via structure generation and detail enhancement. International Journal of Computer Vision, 127(6–7), 785–800.


  • Tang, X., Du, D. K., He, Z., & Liu, J. (2018). PyramidBox: A context-assisted single shot face detector. Preprint arXiv:1803.07737.

  • Viola, P., & Jones, M. J. (2004). Robust real-time face detection. International Journal of Computer Vision, 57(2), 137–154.


  • Wan, S., Chen, Z., Zhang, T., Zhang, B., & Wong, K. K. (2016). Bootstrapping face detection with hard negative examples. Preprint arXiv:1608.02236.

  • Wang, J., Yuan, Y., & Yu, G. (2017a). Face attention network: An effective face detector for the occluded faces. Preprint arXiv:1711.07246.

  • Wang, Y., Ji, X., Zhou, Z., Wang, H., & Li, Z. (2017b). Detecting faces using region-based fully convolutional networks. Preprint arXiv:1709.05256.

  • Wang, Z., Liu, D., Yang, J., Han, W., & Huang, T. (2015). Deep networks for image super-resolution with sparse prior. In Proceedings of the IEEE international conference on computer vision (pp. 370–378).

  • Wen, Y., Zhang, K., Li, Z., & Qiao, Y. (2019). A comprehensive study on center loss for deep face recognition. International Journal of Computer Vision, 127, 1–16.


  • Yan, J., Lei, Z., Wen, L., & Li, S. Z. (2014). The fastest deformable part model for object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 2497–2504).

  • Yan, J., Zhang, X., Lei, Z., & Li, S. Z. (2013). Real-time high performance deformable model for face detection in the wild. In 2013 International Conference on Biometrics (ICB) (pp. 1–6). CiteSeer.

  • Yang, S., Luo, P., Loy, C. C., & Tang, X. (2016). WIDER FACE: A face detection benchmark. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 5525–5533).

  • Zhang, C., Xu, X., & Tu, D. (2018a). Face detection using improved Faster R-CNN. Preprint arXiv:1802.02142.

  • Zhang, H., Riggan, B. S., Hu, S., Short, N. J., & Patel, V. M. (2019a). Synthesis of high-quality visible faces from polarimetric thermal faces using generative adversarial networks. International Journal of Computer Vision, 127(6–7), 845–862.


  • Zhang, J., Wu, X., Zhu, J., & Hoi, S. C. (2017a). Feature agglomeration networks for single stage face detection. Preprint arXiv:1712.00721.

  • Zhang, K., Zhang, Z., Li, Z., & Qiao, Y. (2016). Joint face detection and alignment using multitask cascaded convolutional networks. IEEE Signal Processing Letters, 23(10), 1499–1503.


  • Zhang, S., Wen, L., Shi, H., Lei, Z., Lyu, S., & Li, S. Z. (2019b). Single-shot scale-aware network for real-time face detection. International Journal of Computer Vision, 127, 1–23.


  • Zhang, S., Zhu, X., Lei, Z., Shi, H., Wang, X., & Li, S. Z. (2017b). S3FD: Single shot scale-invariant face detector. In Proceedings of the IEEE international conference on computer vision (pp. 192–201).

  • Zhang, Y., Bai, Y., Ding, M., Li, Y., & Ghanem, B. (2018b). W2F: A weakly-supervised to fully-supervised framework for object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 928–936).

  • Zhang, Y., Bai, Y., Ding, M., Li, Y., & Ghanem, B. (2018c). Weakly-supervised object detection via mining pseudo ground truth bounding-boxes. Pattern Recognition, 84, 68–81.


  • Zhang, Y., Ding, M., Bai, Y., & Ghanem, B. (2019c). Detecting small faces in the wild based on generative adversarial network and contextual information. Pattern Recognition, 94, 74–86.


  • Zhang, Y., Ding, M., Bai, Y., Liu, D., & Ghanem, B. (2019d). Learning a strong detector for action localization in videos. Pattern Recognition Letters, 128, 407–413.


  • Zhang, Y., Ding, M., Bai, Y., Xu, M., & Ghanem, B. (2019e). Beyond weakly-supervised: Pseudo ground truths mining for missing bounding-boxes object detection. IEEE Transactions on Circuits and Systems for Video Technology.

  • Zhang, Y., Ding, M., Fu, W., & Li, Y. (2017c). Reading recognition of pointer meter based on pattern recognition and dynamic three-points on a line. In ICMV 2016, international society for optics and photonics (Vol. 10341, p. 103410K).

  • Zhou, B., Lapedriza, A., Xiao, J., Torralba, A., & Oliva, A. (2014). Learning deep features for scene recognition using places database. In NIPS (pp. 487–495).

  • Zhu, C., Tao, R., Luu, K., & Savvides, M. (2018). Seeing small faces from robust anchor's perspective. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 5127–5136).

  • Zhu, C., Zheng, Y., Luu, K., & Savvides, M. (2017a). CMS-RCNN: Contextual multi-scale region-based CNN for unconstrained face detection. In B. Bhanu & A. Kumar (Eds.), Deep learning for biometrics (pp. 57–79). Cham: Springer.

  • Zhu, J. Y., Krähenbühl, P., Shechtman, E., & Efros, A. A. (2016). Generative visual manipulation on the natural image manifold. In European conference on computer vision (pp. 597–613). Berlin: Springer.


Acknowledgements

The majority of this work was done while Yongqiang Zhang was a visiting Ph.D. student at King Abdullah University of Science and Technology (KAUST); the remainder was completed at Harbin Institute of Technology (HIT). This work was supported by the Natural Science Foundation of China under Grant No. 61603372.

Author information


Correspondence to Mingli Ding.

Additional information

Communicated by Kristen Grauman.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


Cite this article

Zhang, Y., Bai, Y., Ding, M. et al. Multi-task Generative Adversarial Network for Detecting Small Objects in the Wild. Int J Comput Vis 128, 1810–1828 (2020). https://doi.org/10.1007/s11263-020-01301-6

