
Negative Samples are at Large: Leveraging Hard-Distance Elastic Loss for Re-identification

  • Conference paper
  • Computer Vision – ECCV 2022 (ECCV 2022)
  • Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13684)

Abstract

We present a Momentum Re-identification (MoReID) framework that can leverage a very large number of negative samples in training for general re-identification tasks. The design of this framework is inspired by Momentum Contrast (MoCo), which uses a dictionary to store current and past batches to build a large set of encoded samples. Since we find it less effective to use past positive samples, which may be highly inconsistent with the encoded feature property formed by the current positive samples, MoReID is designed to use only the large number of negative samples stored in the dictionary. However, when the model is trained with the widely used Triplet loss, which uses only one sample to represent a set of positive/negative samples, it is hard to effectively leverage the enlarged set of negative samples acquired by the MoReID framework. To maximize the advantage of the scaled-up negative sample set, we introduce the Hard-distance Elastic loss (HE loss), which can use more than one hard sample to represent a large number of samples. Our experiments demonstrate that the large number of negative samples provided by the MoReID framework can be utilized at full capacity only with the HE loss, achieving state-of-the-art accuracy on three re-ID benchmarks: VeRi-776, Market-1501, and VeRi-Wild.
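As a rough illustration of the dictionary mechanism the abstract describes, the sketch below shows a MoCo-style momentum encoder update and a FIFO queue that stores only encoded negatives. This is a minimal sketch under assumed names and sizes (`NegativeQueue`, `momentum_update`, the feature dimension); MoReID's actual encoders, queue size, and HE loss are not reproduced here.

```python
import torch
import torch.nn.functional as F


class NegativeQueue:
    """Fixed-size FIFO dictionary of L2-normalized negative features (MoCo-style).

    Illustrative sketch only; all names and sizes are hypothetical.
    """

    def __init__(self, dim: int, size: int):
        self.size = size
        # Initialize with random unit vectors so the buffer is always valid.
        self.buffer = F.normalize(torch.randn(size, dim), dim=1)
        self.ptr = 0

    @torch.no_grad()
    def enqueue(self, keys: torch.Tensor):
        # Overwrite the oldest entries with the newest momentum-encoded keys
        # (assumes the batch is no larger than the queue).
        n = keys.shape[0]
        idx = (self.ptr + torch.arange(n)) % self.size
        self.buffer[idx] = F.normalize(keys, dim=1)
        self.ptr = (self.ptr + n) % self.size


@torch.no_grad()
def momentum_update(encoder_q, encoder_k, m: float = 0.999):
    # Key encoder trails the query encoder: theta_k <- m*theta_k + (1-m)*theta_q.
    for pq, pk in zip(encoder_q.parameters(), encoder_k.parameters()):
        pk.mul_(m).add_(pq, alpha=1.0 - m)
```

In training, each batch would be encoded by the momentum (key) encoder and pushed into the queue, and the loss would contrast the current queries against the whole buffer of stored negatives.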


Notes

  1. Circle loss [31] comes in two forms, pairwise and triplet-wise; we adopt the pairwise form as the ID loss in the MoReID architecture.

  2. In fact, most methods that adapt InfoNCE for supervised learning use one of these two variants: [25] used \(\mathcal {L}_q^{i,in}\), while [10, 21, 45] used \(\mathcal {L}_q^{i,out}\).
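For reference, the pairwise Circle loss mentioned in Note 1 can be written, following the formulation of Sun et al. [31] (this reproduces their notation, not this paper's): for an anchor with \(K\) positive similarities \(s_p^i\) and \(L\) negative similarities \(s_n^j\), scale \(\gamma\), and margin \(m\),

```latex
\mathcal{L}_{circle} = \log \Big[ 1 +
  \sum_{j=1}^{L} \exp\big(\gamma \, \alpha_n^j (s_n^j - \Delta_n)\big)
  \sum_{i=1}^{K} \exp\big(-\gamma \, \alpha_p^i (s_p^i - \Delta_p)\big) \Big],
\quad
\alpha_p^i = [O_p - s_p^i]_+,\;\; \alpha_n^j = [s_n^j - O_n]_+,
```

with optima and margins set as \(O_p = 1+m\), \(O_n = -m\), \(\Delta_p = 1-m\), \(\Delta_n = m\), so each similarity is re-weighted by how far it is from its optimum.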
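The two variants in Note 2 correspond, in the notation of the supervised contrastive loss of Khosla et al. [19] (symbols \(z\), \(\tau\), \(P(i)\), \(A(i)\) follow [19], not this paper), to placing the sum over positives outside or inside the logarithm:

```latex
% "out": average of per-positive log-losses (sum over positives outside the log)
\mathcal{L}^{out} = \sum_{i \in I} \frac{-1}{|P(i)|} \sum_{p \in P(i)}
  \log \frac{\exp(z_i \cdot z_p / \tau)}{\sum_{a \in A(i)} \exp(z_i \cdot z_a / \tau)}

% "in": log of the average positive probability (sum over positives inside the log)
\mathcal{L}^{in} = \sum_{i \in I} -\log \Bigg[ \frac{1}{|P(i)|} \sum_{p \in P(i)}
  \frac{\exp(z_i \cdot z_p / \tau)}{\sum_{a \in A(i)} \exp(z_i \cdot z_a / \tau)} \Bigg]
```

By Jensen's inequality the two are not equal in general, which is why the literature treats them as distinct losses.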

References

  1. Aich, A., Zheng, M., Karanam, S., Chen, T., Roy-Chowdhury, A.K., Wu, Z.: Spatio-temporal representation factorization for video-based person re-identification. In: ICCV (2021)

  2. Chen, H., Lagadec, B., Bremond, F.: ICE: inter-instance contrastive encoding for unsupervised person re-identification. In: ICCV (2021)

  3. Chen, P., Liu, W., Dai, P., Liu, J.: Occlude them all: occlusion-aware attention network for occluded person Re-ID. In: ICCV (2021)

  4. Chen, T., et al.: ABD-Net: attentive but diverse person re-identification. In: ICCV (2019)

  5. Chen, X., Xie, S., He, K.: An empirical study of training self-supervised vision transformers. In: ICCV (2021)

  6. Cubuk, E.D., Zoph, B., Mane, D., Vasudevan, V., Le, Q.V.: AutoAugment: learning augmentation policies from data. In: CVPR (2019)

  7. Deng, J., Guo, J., Xue, N., Zafeiriou, S.: ArcFace: additive angular margin loss for deep face recognition. In: CVPR (2019)

  8. Fu, D., et al.: Unsupervised pre-training for person re-identification. In: CVPR (2021)

  9. Goyal, P., et al.: Accurate, large minibatch SGD: training ImageNet in 1 hour. arXiv:1706.02677 (2018)

  10. Gunel, B., Du, J., Conneau, A., Stoyanov, V.: Supervised contrastive learning for pre-trained language model fine-tuning. In: ICLR (2021)

  11. Hao, X., Zhao, S., Ye, M., Shen, J.: Cross-modality person re-identification via modality confusion and center aggregation. In: ICCV (2021)

  12. He, K., Fan, H., Wu, Y., Xie, S., Girshick, R.: Momentum contrast for unsupervised visual representation learning. In: CVPR (2020)

  13. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: CVPR (2016)

  14. He, L., Liao, X., Liu, W., Liu, X., Cheng, P., Mei, T.: FastReID: a PyTorch toolbox for general instance re-identification. arXiv:2006.02631 (2020)

  15. Hermans, A., Beyer, L., Leibe, B.: In defense of the triplet loss for person re-identification. arXiv:1703.07737 (2017)

  16. Hoffer, E., Ailon, N.: Deep metric learning using triplet network. In: Feragen, A., Pelillo, M., Loog, M. (eds.) SIMBAD 2015. LNCS, vol. 9370, pp. 84–92. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-24261-3_7

  17. Huang, Y., Wu, Q., Xu, J., Zhong, Y., Zhang, Z.: Clothing status awareness for long-term person re-identification. In: ICCV (2021)

  18. Khorramshahi, P., Peri, N., Chen, J., Chellappa, R.: The devil is in the details: self-supervised attention for vehicle re-identification. In: Vedaldi, A., Bischof, H., Brox, T., Frahm, J.-M. (eds.) ECCV 2020. LNCS, vol. 12359, pp. 369–386. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-58568-6_22

  19. Khosla, P., et al.: Supervised contrastive learning. In: NeurIPS (2020)

  20. Kumar, R., Weill, E., Aghdasi, F., Sriram, P.: Vehicle re-identification: an efficient baseline using triplet embedding. In: IJCNN (2019)

  21. Lee, H., Kwon, H.: Self-supervised contrastive learning for cross-domain hyperspectral image representation. In: ICASSP (2022)

  22. Li, M., Huang, X., Zhang, Z.: Self-supervised geometric features discovery via interpretable attention for vehicle re-identification and beyond. In: ICCV (2021)

  23. Li, Y., He, J., Zhang, T., Liu, X., Zhang, Y., Wu, F.: Diverse part discovery: occluded person re-identification with part-aware transformer. In: CVPR (2021)

  24. Meng, D., et al.: Parsing-based view-aware embedding network for vehicle re-identification. In: CVPR (2020)

  25. Miech, A., Alayrac, J.B., Smaira, L., Laptev, I., Sivic, J., Zisserman, A.: End-to-end learning of visual representations from uncurated instructional videos. In: CVPR (2020)

  26. van den Oord, A., Li, Y., Vinyals, O.: Representation learning with contrastive predictive coding. arXiv:1807.03748v2 (2018)

  27. Park, H., Lee, S., Lee, J., Ham, B.: Learning by aligning: visible-infrared person re-identification using cross-modal correspondences. In: ICCV (2021)

  28. Rao, Y., Chen, G., Lu, J., Zhou, J.: Counterfactual attention learning for fine-grained visual categorization and re-identification. In: ICCV (2021)

  29. Schroff, F., Kalenichenko, D., Philbin, J.: FaceNet: a unified embedding for face recognition and clustering. In: CVPR (2015)

  30. Sohn, K.: Improved deep metric learning with multi-class n-pair loss objective. In: NeurIPS (2016)

  31. Sun, Y., et al.: Circle loss: a unified perspective of pair similarity optimization. In: CVPR (2020)

  32. Ulyanov, D., Vedaldi, A., Lempitsky, V.: Improved texture networks: maximizing quality and diversity in feed-forward stylization and texture synthesis. In: CVPR (2017)

  33. Wang, F., Cheng, J., Liu, W., Liu, H.: Additive margin softmax for face verification. IEEE Sign. Process. Lett. 25(7), 926–930 (2018)

  34. Wang, F., Xiang, X., Cheng, J., Yuille, A.L.: NormFace: \(\text{L}_2\) hypersphere embedding for face verification. In: ACM MM (2017)

  35. Wang, H., et al.: CosFace: large margin cosine loss for deep face recognition. In: CVPR (2018)

  36. Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: CVPR (2018)

  37. Wang, X., Hua, Y., Kodirov, E., Robertson, N.M.: Ranked list loss for deep metric learning. In: CVPR (2019)

  38. Wu, Z., Xiong, Y., Yu, S., Lin, D.: Unsupervised feature learning via non-parametric instance discrimination. In: CVPR (2018)

  39. Yan, C., Pang, G., Jiao, J., Bai, X., Feng, X., Shen, C.: Occluded person re-identification with single-scale global representations. In: ICCV (2021)

  40. Yan, C., et al.: BV-person: a large-scale dataset for bird-view person re-identification. In: ICCV (2021)

  41. Zhang, L., Rusinkiewicz, S.: Learning local descriptors with a CDF-based dynamic soft margin. In: ICCV (2019)

  42. Zhang, X., Zhang, R., Cao, J., Gong, D., You, M., Shen, C.: Part-guided attention learning for vehicle instance retrieval. IEEE Trans. Intell. Transp. Syst. (2020)

  43. Zhao, J., Zhao, Y., Li, J., Yan, K., Tian, Y.: Heterogeneous relational complement for vehicle re-identification. In: ICCV (2021)

  44. Zheng, K., Liu, W., He, L., Mei, T., Luo, J., Zha, Z.J.: Group-aware label transfer for domain adaptive person re-identification. In: CVPR (2021)

  45. Zheng, M., et al.: Weakly supervised contrastive learning. In: ICCV (2021)

  46. Zheng, Y., et al.: Online pseudo label generation by hierarchical cluster dynamics for adaptive person re-identification. In: ICCV (2021)

  47. Zhong, Z., Zheng, L., Kang, G., Li, S., Yang, Y.: Random erasing data augmentation. In: AAAI (2020)


Author information

Corresponding author: Hyungtae Lee.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 195 KB)


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Lee, H., Eum, S., Kwon, H. (2022). Negative Samples are at Large: Leveraging Hard-Distance Elastic Loss for Re-identification. In: Avidan, S., Brostow, G., Cissé, M., Farinella, G.M., Hassner, T. (eds) Computer Vision – ECCV 2022. ECCV 2022. Lecture Notes in Computer Science, vol 13684. Springer, Cham. https://doi.org/10.1007/978-3-031-20053-3_35

  • DOI: https://doi.org/10.1007/978-3-031-20053-3_35
  • Publisher Name: Springer, Cham
  • Print ISBN: 978-3-031-20052-6
  • Online ISBN: 978-3-031-20053-3