A Context-Aware Method for View-Point Invariant Long-Term Re-identification

  • Conference paper
  • First Online:
Computer Vision, Imaging and Computer Graphics – Theory and Applications (VISIGRAPP 2017)

Part of the book series: Communications in Computer and Information Science ((CCIS,volume 983))

Abstract

In this work, we propose a novel context-aware framework for long-term person re-identification. In contrast to the classical context-unaware architecture, our method exploits contextual features that can be identified reliably and that guide the re-identification process in a faster and more accurate manner. The system is designed for long-term Re-ID in walking scenarios, so persons are characterized by soft-biometric features (i.e., anthropometric and gait) acquired using a Kinect\(^\mathrm {TM}\) v.2 sensor. Context is associated with the posture of the person with respect to the camera, since the quality of the data acquired from this sensor depends significantly on that variable. Within each context, only the most relevant features are selected with the help of feature selection techniques, and custom individual classifiers are trained. Afterwards, a context-aware ensemble fusion strategy, which we term ‘Context specific score-level fusion’, merges the results of the individual classifiers. In typical ‘in-the-wild’ scenarios, the samples of a person may not appear in all contexts of interest. To tackle this problem, we propose a cross-context analysis in which features are mapped between contexts, allowing the identification characteristics of a person to be transferred between different contexts. We experimentally verify the performance of the proposed context-aware system against the classical context-unaware system. The results include an analysis of switching context conditions within a video sequence through a pilot study of circular path movement. All analyses highlight the impact of contexts in simplifying the search process and yield promising results.
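The ‘Context specific score-level fusion’ strategy summarized above can be sketched roughly as follows. This is an illustrative outline only: the context labels, classifier scores, and fusion weights are placeholders, not the paper’s actual configuration.

```python
def fuse_scores(context, per_context_scores, weights):
    """Fuse the scores of the individual classifiers trained for the
    probe's detected context via a weighted sum (one simple score-level
    fusion rule; the paper's exact fusion rule may differ).

    context            -- detected context label, e.g. "frontal"
    per_context_scores -- {context: [ {person_id: score}, ... ]},
                          one score dict per individual classifier
    weights            -- {context: [w1, w2, ...]}, fusion weights
    """
    fused = {}
    for clf_scores, w in zip(per_context_scores[context], weights[context]):
        for person, score in clf_scores.items():
            fused[person] = fused.get(person, 0.0) + w * score
    return fused


def identify(context, per_context_scores, weights):
    """Return the gallery identity with the highest fused score."""
    fused = fuse_scores(context, per_context_scores, weights)
    return max(fused, key=fused.get)
```

For example, with two classifiers in a hypothetical "frontal" context scoring identities A and B, the weighted sum picks the identity whose combined evidence across classifiers is strongest.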


Notes

  1. ‘In-the-wild’ refers to unconstrained settings.

  2. More details on the KS20 VisLab Multi-View Kinect skeleton dataset are available on the laboratory website http://vislab.isr.ist.utl.pt/vislab_multiview_ks20/.

  3. http://users.spa.aalto.fi/jpohjala/featureselection/.

  4. We used ‘SpineShoulder’, i.e., the base of the neck, referring to joint number 20 of Kinect\(^\mathrm {TM}\) v.2 (https://msdn.microsoft.com/en-us/library/microsoft.kinect.jointtype.aspx), as the torso joint for context detection, since it remains more or less stable while walking.
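As an illustration only, context detection from such a torso joint could look like the following sketch, which estimates the walking direction in the ground plane from successive joint positions and quantises it into posture bins relative to the camera. The number of bins and the (x, y, z) coordinate convention are assumptions, not the paper’s exact procedure.

```python
import math

def detect_context(torso_positions, n_contexts=4):
    """Quantise walking direction into one of n_contexts posture bins.

    torso_positions -- sequence of (x, y, z) positions of a stable torso
                       joint (e.g. 'SpineShoulder'), camera-centred axes
                       assumed: x lateral, y vertical, z depth.
    Returns an integer bin index in [0, n_contexts).
    """
    (x0, _, z0) = torso_positions[0]
    (x1, _, z1) = torso_positions[-1]
    # Heading angle in the ground (x-z) plane, ignoring vertical motion.
    heading = math.atan2(x1 - x0, z1 - z0)
    # Shift from [-pi, pi) to [0, 2*pi) and bin uniformly.
    bin_width = 2 * math.pi / n_contexts
    return int(((heading + math.pi) % (2 * math.pi)) // bin_width)
```

A person walking straight away from the camera (increasing depth) and one walking straight towards it thus fall into different posture bins, matching the intuition that data quality depends on the person’s orientation with respect to the sensor.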

  5. KS20 VisLab Multi-View Kinect skeleton dataset: http://vislab.isr.ist.utl.pt/vislab_multiview_ks20/. Access to the dataset is available upon request; contact the corresponding author if you are interested.

  6. For body joint types and enumeration, refer to https://msdn.microsoft.com/en-us/library/microsoft.kinect.jointtype.aspx.

  7. The publicly available dataset likewise provides only the skeleton data. Nevertheless, color and depth information can be made available on demand.

  8. The 1-context and 2-contexts settings apply only to the full cover scenario; hence the sparse cover and single cover entries for these settings are marked with a cross, denoting ‘Not Applicable’.


Author information

Corresponding author

Correspondence to Athira Nambiar.

Copyright information

© 2019 Springer Nature Switzerland AG

About this paper

Cite this paper

Nambiar, A., Bernardino, A. (2019). A Context-Aware Method for View-Point Invariant Long-Term Re-identification. In: Cláudio, A., et al. Computer Vision, Imaging and Computer Graphics – Theory and Applications. VISIGRAPP 2017. Communications in Computer and Information Science, vol 983. Springer, Cham. https://doi.org/10.1007/978-3-030-12209-6_16

  • DOI: https://doi.org/10.1007/978-3-030-12209-6_16

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-12208-9

  • Online ISBN: 978-3-030-12209-6

  • eBook Packages: Computer Science, Computer Science (R0)
