
Renovating Parsing R-CNN for Accurate Multiple Human Parsing

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12357)

Abstract

Multiple human parsing aims to segment various human parts and, at the same time, associate each part with the corresponding instance. The task is very challenging due to diverse human appearance, semantic ambiguity between body parts, and complex backgrounds. Through analysis of the multiple human parsing task, we observe that human-centric global perception and accurate instance-level parsing scoring are crucial for obtaining high-quality results, yet most state-of-the-art methods have paid little attention to these issues. To address this, we present Renovating Parsing R-CNN (RP R-CNN), which introduces a global semantic enhanced feature pyramid network and a parsing re-scoring network into the existing high-performance pipeline. RP R-CNN uses a global semantic representation to enhance the multi-scale features used for generating human parsing maps, and regresses a confidence score that reflects the quality of each predicted parsing map. Extensive experiments show that RP R-CNN performs favorably against state-of-the-art methods on the CIHP and MHP-v2 datasets. Code and models are available at https://github.com/soeaver/RP-R-CNN.
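The abstract describes two additions to a region-based parsing pipeline: a global semantic enhanced FPN that injects a whole-image semantic representation into the multi-scale features, and a parsing re-scoring head that regresses a quality score for each instance parsing map. The PyTorch sketch below is only a minimal illustration of these two ideas under assumed design choices (module names, channel sizes, the upsample-and-sum fusion, and the small convolutional regressor are hypothetical); it is not taken from the released code at https://github.com/soeaver/RP-R-CNN.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GlobalSemanticEnhancedFPN(nn.Module):
    """Sketch: fuse FPN levels into one global semantic feature and
    broadcast it back to enhance every pyramid level (assumed design)."""

    def __init__(self, channels=256, num_parts=20):
        super().__init__()
        self.reduce = nn.Conv2d(channels, channels, 1)
        # auxiliary head producing whole-image parsing logits (global supervision)
        self.semantic_head = nn.Conv2d(channels, num_parts, 1)

    def forward(self, fpn_feats):
        # fpn_feats: list of [N, C, Hi, Wi] tensors, highest resolution first
        target = fpn_feats[0].shape[-2:]
        fused = sum(
            F.interpolate(f, size=target, mode="bilinear", align_corners=False)
            for f in fpn_feats
        ) / len(fpn_feats)
        fused = self.reduce(fused)
        global_logits = self.semantic_head(fused)
        # enhance each level by adding the (resized) global semantic feature
        enhanced = [
            f + F.interpolate(fused, size=f.shape[-2:], mode="bilinear",
                              align_corners=False)
            for f in fpn_feats
        ]
        return enhanced, global_logits


class ParsingReScoringHead(nn.Module):
    """Sketch: regress a quality score from per-instance part logits."""

    def __init__(self, num_parts=20):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(num_parts, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, 1)

    def forward(self, part_logits):
        # part_logits: [N, num_parts, H, W] per-instance parsing predictions
        x = self.conv(part_logits).flatten(1)
        return torch.sigmoid(self.fc(x))  # predicted parsing quality in [0, 1]
```

At inference, the regressed quality score would typically be used to re-rank the instance parsing results in place of (or combined with) the detection score; the exact scoring scheme used in the paper is not reproduced here.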

Keywords

Multiple human parsing · Region-based approach · Global semantic enhanced FPN · Parsing re-scoring network


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Beijing University of Posts and Telecommunications, Beijing, China
  2. Noah’s Ark Lab, Huawei Technologies, Shenzhen, China
