Joining features by global guidance with bi-relevance trihard loss for person re-identification

  • Original Article
  • Published in Neural Computing and Applications

Abstract

Person re-identification (ReID) aims to associate images of a person with a given identity across different cameras and has wide applications in intelligent video analysis. In this work, an efficient method is proposed to improve ReID performance. First, a global-guided feature joint network is designed, which consists of a multiple-feature extraction network and a global-guided feature fusion network. The former aligns different body regions and extracts a global feature and local features. The latter uses the global feature to guide the adaptive fusion of the local features, dynamically evaluating the importance of each local feature. Second, a Bi-relevance TriHard loss (TriBR) is designed to adjust the loss penalty dynamically. TriBR combines Euclidean distance and angle information, considering both the self-relevance of intra-class samples and the cross-relevance of inter-class samples. In addition, TriBR adaptively adjusts the distance margin and the angle margin to optimize the network, which helps the model learn more discriminative features. The method achieves 89.2% mAP (mean Average Precision) with 95.8% Rank-1 on Market-1501, and 79.7% mAP with 89.3% Rank-1 on DukeMTMC. The proposed method also performs well on occluded person re-identification datasets.
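The abstract describes the two components only at a high level. The snippet below is a minimal, hedged PyTorch sketch of (a) a global-guided fusion module that weights local features with scores predicted from the global feature, and (b) a batch-hard ("TriHard") triplet loss augmented with an angular term and a simple adaptive margin. It is not the authors' exact formulation: the class and parameter names, the linear scoring function, and the margin-adaptation rule are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F


class GlobalGuidedFusion(nn.Module):
    """Weight local (part) features with importance scores predicted from the global feature."""

    def __init__(self, dim: int, num_parts: int):
        super().__init__()
        # One importance score per local feature, predicted from the global feature (assumed form).
        self.scorer = nn.Linear(dim, num_parts)

    def forward(self, global_feat: torch.Tensor, local_feats: torch.Tensor) -> torch.Tensor:
        # global_feat: (B, D); local_feats: (B, P, D)
        weights = torch.softmax(self.scorer(global_feat), dim=1)    # (B, P) dynamic importance
        return (weights.unsqueeze(-1) * local_feats).sum(dim=1)     # (B, D) fused feature


class TriHardWithAngle(nn.Module):
    """Batch-hard triplet loss with an added angular (cosine) term and an illustrative adaptive margin."""

    def __init__(self, dist_margin: float = 0.3, angle_margin: float = 0.1):
        super().__init__()
        self.dist_margin = dist_margin
        self.angle_margin = angle_margin

    def forward(self, feats: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # feats: (N, D) embeddings from a PK-sampled batch; labels: (N,) identity labels.
        dist = torch.cdist(feats, feats, p=2)                               # Euclidean distances
        unit = F.normalize(feats, dim=1)
        cos = unit @ unit.t()                                               # cosine similarities
        same = labels.unsqueeze(0).eq(labels.unsqueeze(1))                  # same-identity mask
        diff = ~same

        # Batch-hard mining: hardest positive (farthest same ID) and hardest negative (closest other ID).
        d_ap = dist.masked_fill(diff, float('-inf')).max(dim=1).values
        d_an = dist.masked_fill(same, float('inf')).min(dim=1).values
        cos_ap = cos.masked_fill(diff, float('inf')).min(dim=1).values      # least-similar positive
        cos_an = cos.masked_fill(same, float('-inf')).max(dim=1).values     # most-similar negative

        # Illustrative adaptive distance margin (an assumption, not the paper's rule):
        # grows when positives are spread out relative to negatives.
        dist_margin = self.dist_margin * (1.0 + (d_ap.mean() / (d_an.mean() + 1e-12)).detach())

        loss_dist = F.relu(d_ap - d_an + dist_margin)           # push negatives beyond positives in distance
        loss_ang = F.relu(cos_an - cos_ap + self.angle_margin)  # and below positives in cosine similarity
        return (loss_dist + loss_ang).mean()

With the usual TriHard sampling of P identities and K images per identity in each batch, both terms act only on the hardest positive and negative per anchor, which is what gives TriHard-style losses their discriminative push.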


Author information

Correspondence to Zhiyong Huang.

Ethics declarations

Conflicts of interest

The authors declare that no conflict of interest exists.

Cite this article

Yu, Z., Qin, W., Huang, Z. et al. Joining features by global guidance with bi-relevance trihard loss for person re-identification. Neural Comput & Applic 34, 8697–8712 (2022). https://doi.org/10.1007/s00521-021-06852-4
