
Gait representation and recognition from temporal co-occurrence of flow fields


Abstract

This paper proposes a new gait representation that encodes the dynamics of a gait period through a 2D array of 17-bin histograms. Every histogram models the co-occurrence of optical flow states at a pixel of the normalized template that bounds the silhouette of a target subject. Five flow states (up, down, left, right, null) are considered. The first histogram bin counts the number of frames over the gait period in which the optical flow at the corresponding pixel is null. Each of the remaining 16 bins represents a pair of flow states and counts the number of frames in which the optical flow vector changes from one state to the other during the gait period. Experimental results show that this representation is significantly more discriminant than previous proposals that only consider the magnitude and instantaneous direction of optical flow, especially as the walking direction approaches the viewing direction, which is where state-of-the-art gait recognition methods yield their lowest performance. The dimensionality of the representation is reduced through principal component analysis, and gait recognition is then performed through supervised classification with support vector machines. Experiments on the public CMU MoBo and AVAMVG datasets show that the proposed approach is advantageous over state-of-the-art gait representation methods.
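As a rough illustration of the five-state flow quantization described above, the sketch below (Python; not from the paper, and the magnitude threshold, array layout and state codes are assumptions) maps a dense optical flow field to per-pixel states before the co-occurrence histograms are built:

```python
import numpy as np

# State codes used in this sketch only (the paper's exact encoding is not given here).
NULL, UP, DOWN, LEFT, RIGHT = 0, 1, 2, 3, 4

def quantize_flow(flow, mag_thresh=0.5):
    """Map a dense optical flow field (h x w x 2 array of [dx, dy]) to one of
    five states per pixel: null, up, down, left, right.

    mag_thresh is an assumed magnitude threshold below which the flow is
    treated as null; this excerpt does not specify its value.
    """
    dx, dy = flow[..., 0], flow[..., 1]
    magnitude = np.hypot(dx, dy)

    states = np.full(flow.shape[:2], NULL, dtype=np.uint8)
    moving = magnitude >= mag_thresh
    horizontal = np.abs(dx) >= np.abs(dy)       # dominant component decides the state

    states[moving & horizontal & (dx > 0)] = RIGHT
    states[moving & horizontal & (dx <= 0)] = LEFT
    states[moving & ~horizontal & (dy < 0)] = UP     # image y axis points downwards
    states[moving & ~horizontal & (dy >= 0)] = DOWN
    return states
```

Per the abstract, the per-pixel co-occurrence histograms built from such state maps are then vectorized, reduced with PCA and classified with an SVM.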





Author information

Correspondence to Hatem A. Rashwan.

Appendix

Each of the 17-bin histograms models the co-occurrence of optical flow states between a pair of templates separated by \(m\) frames, \(T_i\) and \(T_j\), where \(j = (i+m)\%n\), \(0 < m < n\) and \(0 \le i,j < n\), with \(n\) being the number of frames in a gait period. These bins are denoted \(HV_i\), \(HR_i\), \(HL_i\), \(LR_i\), \(LL_i\), \(LH_i\), \(RR_i\), \(RL_i\), \(RH_i\), \(VU_i\), \(VD_i\), \(UU_i\), \(UD_i\), \(UV_i\), \(DU_i\), \(DD_i\), \(DV_i\):

$$\begin{aligned}
HV_i(x,y) &= H_i(x,y) \times V_j(x,y), &&(13)\\
HR_i(x,y) &= H_i(x,y) \times R_j(x,y), &&(14)\\
HL_i(x,y) &= H_i(x,y) \times L_j(x,y), &&(15)\\
LR_i(x,y) &= L_i(x,y) \times R_j(x,y), &&(16)\\
LL_i(x,y) &= L_i(x,y) \times L_j(x,y), &&(17)\\
LH_i(x,y) &= L_i(x,y) \times H_j(x,y), &&(18)\\
RR_i(x,y) &= R_i(x,y) \times R_j(x,y), &&(19)\\
RL_i(x,y) &= R_i(x,y) \times L_j(x,y), &&(20)\\
RH_i(x,y) &= R_i(x,y) \times H_j(x,y), &&(21)\\
VU_i(x,y) &= V_i(x,y) \times U_j(x,y), &&(22)\\
VD_i(x,y) &= V_i(x,y) \times D_j(x,y), &&(23)\\
UU_i(x,y) &= U_i(x,y) \times U_j(x,y), &&(24)\\
UD_i(x,y) &= U_i(x,y) \times D_j(x,y), &&(25)\\
UV_i(x,y) &= U_i(x,y) \times V_j(x,y), &&(26)\\
DU_i(x,y) &= D_i(x,y) \times U_j(x,y), &&(27)\\
DD_i(x,y) &= D_i(x,y) \times D_j(x,y), &&(28)\\
DV_i(x,y) &= D_i(x,y) \times V_j(x,y). &&(29)
\end{aligned}$$
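A minimal sketch of how Eqs. (13)–(29) could be evaluated, assuming each state letter (H, V, L, R, U, D) is available as a binary \(h \times w\) indicator map per frame; the dictionary-based layout below is an illustrative assumption, not the authors' implementation:

```python
# The 17 bins of Eqs. (13)-(29): each pairs a state map of frame i with a
# state map of frame j = (i + m) % n.
BIN_PAIRS = [
    ("H", "V"), ("H", "R"), ("H", "L"),
    ("L", "R"), ("L", "L"), ("L", "H"),
    ("R", "R"), ("R", "L"), ("R", "H"),
    ("V", "U"), ("V", "D"),
    ("U", "U"), ("U", "D"), ("U", "V"),
    ("D", "U"), ("D", "D"), ("D", "V"),
]

def cooccurrence_maps(maps_i, maps_j):
    """Per-pixel products of Eqs. (13)-(29).

    maps_i and maps_j are dicts mapping the state letters "H", "V", "L",
    "R", "U", "D" to binary (h x w) arrays for frames i and j = (i + m) % n.
    Returns a dict keyed by bin name ("HV", "HR", ..., "DV").
    """
    return {a + b: maps_i[a] * maps_j[b] for a, b in BIN_PAIRS}
```

For a frame pair (i, j), cooccurrence_maps(maps[i], maps[j]) returns the 17 per-pixel products that Eqs. (30)–(46) subsequently accumulate.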

Finally, the per-frame co-occurrence maps are accumulated over the gait period, yielding an \(h \times w\) array of 17-bin histograms. The 17 accumulated bins are defined as:

$$\begin{aligned}
HV(x,y) &= \sum_{i=0}^{n-1} HV_i(x,y), &&(30)\\
HR(x,y) &= \sum_{i=0}^{n-1} HR_i(x,y), &&(31)\\
HL(x,y) &= \sum_{i=0}^{n-1} HL_i(x,y), &&(32)\\
LR(x,y) &= \sum_{i=0}^{n-1} LR_i(x,y), &&(33)\\
LL(x,y) &= \sum_{i=0}^{n-1} LL_i(x,y), &&(34)\\
LH(x,y) &= \sum_{i=0}^{n-1} LH_i(x,y), &&(35)\\
RR(x,y) &= \sum_{i=0}^{n-1} RR_i(x,y), &&(36)\\
RL(x,y) &= \sum_{i=0}^{n-1} RL_i(x,y), &&(37)\\
RH(x,y) &= \sum_{i=0}^{n-1} RH_i(x,y), &&(38)\\
VU(x,y) &= \sum_{i=0}^{n-1} VU_i(x,y), &&(39)\\
VD(x,y) &= \sum_{i=0}^{n-1} VD_i(x,y), &&(40)\\
UU(x,y) &= \sum_{i=0}^{n-1} UU_i(x,y), &&(41)\\
UD(x,y) &= \sum_{i=0}^{n-1} UD_i(x,y), &&(42)\\
UV(x,y) &= \sum_{i=0}^{n-1} UV_i(x,y), &&(43)\\
DU(x,y) &= \sum_{i=0}^{n-1} DU_i(x,y), &&(44)\\
DD(x,y) &= \sum_{i=0}^{n-1} DD_i(x,y), &&(45)\\
DV(x,y) &= \sum_{i=0}^{n-1} DV_i(x,y). &&(46)
\end{aligned}$$
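Continuing the previous sketch (and reusing its BIN_PAIRS and cooccurrence_maps, both of which are illustrative assumptions), the accumulation of Eqs. (30)–(46) over a gait period could look as follows:

```python
import numpy as np

def accumulate_bins(state_maps, m):
    """Accumulated 17-bin histograms of Eqs. (30)-(46).

    state_maps is a list of n dicts (one per frame of the gait period), each
    mapping "H", "V", "L", "R", "U", "D" to binary (h x w) arrays; m is the
    frame offset, 0 < m < n. Returns one (h x w) count map per bin.
    """
    n = len(state_maps)
    h, w = next(iter(state_maps[0].values())).shape
    accumulated = {a + b: np.zeros((h, w)) for a, b in BIN_PAIRS}
    for i in range(n):
        j = (i + m) % n
        for name, product in cooccurrence_maps(state_maps[i], state_maps[j]).items():
            accumulated[name] += product
    return accumulated
```

Stacking the 17 count maps per pixel yields the \(h \times w\) array of 17-bin histograms described above, which can then be vectorized for PCA and SVM classification as stated in the abstract.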


Cite this article

Rashwan, H.A., García, M.Á., Chambon, S. et al. Gait representation and recognition from temporal co-occurrence of flow fields. Machine Vision and Applications 30, 139–152 (2019). https://doi.org/10.1007/s00138-018-0982-3
