
Gait recognition based on margin sample mining loss

Published in: Multimedia Tools and Applications

Abstract

Gait recognition, an emerging biometric technique, identifies a target pedestrian from his or her walking posture. It works at long distances, is difficult to disguise, and requires no contact with or cooperation from the target. However, its accuracy is affected by external factors such as the camera viewing angle and the clothes and bags worn by the target. In this paper, we address these problems in two ways. First, we propose a gait recognition method based on MSM Loss, which allows more discriminative spatio-temporal features to be extracted. Second, we introduce a new input scheme that makes the frames within each input sequence more closely related, further improving the recognition rate. The proposed method is evaluated on the CASIA-B and OU-MVLP datasets. On CASIA-B, average recognition rates are reported for normal walking (NM), walking with bags (BG), and walking in different clothes (CL). Under the LT (large-sample training) protocol, the proposed method achieves rank-1 accuracies of 96.4% for NM, 89.1% for BG, and 71.2% for CL, and under normal walking conditions it outperforms the best existing gait recognition methods. On OU-MVLP, it achieves 87.5% accuracy.
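For readers who want a concrete picture of the batch-level hard mining behind MSM Loss, the sketch below gives a minimal PyTorch-style implementation. It is an illustration based on the general definition of margin sample mining loss, not the authors' code; the function name, the margin value of 0.2, and the batch-sampling assumption (each batch containing at least two sequences per identity) are hypothetical.

```python
import torch
import torch.nn.functional as F


def msm_loss(embeddings: torch.Tensor, labels: torch.Tensor, margin: float = 0.2) -> torch.Tensor:
    """Batch-level margin sample mining loss (illustrative sketch only)."""
    # Pairwise Euclidean distances between all embeddings in the batch: (B, B).
    dist = torch.cdist(embeddings, embeddings, p=2)

    # Same-identity mask with the diagonal (self-pairs) excluded.
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    eye = torch.eye(len(labels), dtype=torch.bool, device=labels.device)
    pos_mask = same & ~eye
    neg_mask = ~same

    # Hardest positive: the largest distance among all same-identity pairs.
    hardest_pos = dist[pos_mask].max()
    # Hardest negative: the smallest distance among all cross-identity pairs.
    hardest_neg = dist[neg_mask].min()

    # Hinge on the gap between the two extreme pairs over the whole batch.
    return F.relu(hardest_pos - hardest_neg + margin)


# Toy usage: a batch of 2 identities x 2 sequences, 128-dim embeddings.
emb = torch.randn(4, 128)
ids = torch.tensor([0, 0, 1, 1])
print(msm_loss(emb, ids))
```

Unlike the standard triplet loss, which mines the hardest positive and negative separately for every anchor, this formulation keeps only the single hardest positive pair and the single hardest negative pair in the whole batch, which makes the margin constraint stricter.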


Abbreviations

MSM Loss: Margin Sample Mining Loss
NM: Walking under normal conditions
BG: Walking with a bag
CL: Walking in different clothes
CNN: Convolutional neural network
GAN: Generative adversarial network
DRL: Disentangled representation learning
PEI: Period energy image
FC: Fully connected layer
ST: Small-sample training
MT: Medium-sample training
LT: Large-sample training


Funding

This work was supported by the 2021 Key Research and Development Plan of Shaanxi Province (No. 2021SF-377).

Author information


Contributions

Xuan Nie: Conceptualization, Methodology, Software, Writing - Original Draft, Writing - Review & Editing, Supervision, Resources.

Hongmei Li: Conceptualization, Methodology, Software, Validation, Formal analysis, Writing - Original Draft, Investigation.

Corresponding author

Correspondence to Xuan Nie.

Ethics declarations

Conflict of interest

The authors have no conflict of interest to disclose.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Nie, X., Li, H. Gait recognition based on margin sample mining loss. Multimed Tools Appl 82, 969–987 (2023). https://doi.org/10.1007/s11042-022-13019-3

