A Context-Aware Method for View-Point Invariant Long-Term Re-identification

  • Athira Nambiar
  • Alexandre Bernardino
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 983)


In this work, we propose a novel context-aware framework for long-term person re-identification. In contrast to the classical context-unaware architecture, this method exploits contextual features that can be identified reliably and that guide the re-identification process in a faster and more accurate manner. The system is designed for long-term Re-ID in walking scenarios, so persons are characterized by soft-biometric features (i.e., anthropometric and gait features) acquired using a Kinect™ v2 sensor. Context is associated with the posture of the person with respect to the camera, since the quality of the data acquired from this sensor depends significantly on that variable. Within each context, only the most relevant features are selected with the help of feature selection techniques, and custom individual classifiers are trained. A context-aware ensemble fusion strategy, which we term ‘context-specific score-level fusion’, then merges the results of the individual classifiers. In typical ‘in-the-wild’ scenarios, the samples of a person may not appear in all contexts of interest. To tackle this problem, we propose a cross-context analysis in which features are mapped between contexts, allowing the identification characteristics of a person to be transferred between different contexts. We experimentally verify the performance of the proposed context-aware system against the classical context-unaware system. The results include an analysis of switching context conditions within a video sequence through a pilot study of circular-path movement. All the analyses underline the impact of context in simplifying the search process and yield promising results.
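To make the fusion step concrete, the following is a minimal illustrative sketch (not the authors' implementation) of a context-specific score-level fusion: each context, here a person's posture with respect to the camera, has its own classifier that produces match scores over the gallery identities, and the per-context scores are combined with weights reflecting how strongly the probe sample belongs to each context. The context names, weights, and function `fuse_scores` are all hypothetical.

```python
import numpy as np

def fuse_scores(context_scores, context_weights):
    """Weighted score-level fusion across contexts.

    context_scores  : dict mapping context name -> sequence of match
                      scores, one score per gallery identity, produced
                      by that context's classifier.
    context_weights : dict mapping context name -> weight in [0, 1],
                      e.g. the estimated probability that the probe
                      sample was observed in that context.
    Returns the fused score vector over gallery identities.
    """
    total = None
    for ctx, scores in context_scores.items():
        w = context_weights.get(ctx, 0.0)
        contribution = w * np.asarray(scores, dtype=float)
        total = contribution if total is None else total + contribution
    return total

# Toy example: two (hypothetical) posture contexts, three gallery identities.
scores = {
    "frontal": [0.9, 0.2, 0.1],
    "lateral": [0.6, 0.7, 0.2],
}
weights = {"frontal": 0.8, "lateral": 0.2}  # probe judged mostly frontal
fused = fuse_scores(scores, weights)
print(fused.argmax())  # index of the best-matching gallery identity
```

Weighting by context membership lets a classifier trained on reliable frontal-view features dominate when the probe is frontal, while still allowing other contexts to contribute, which is the intuition behind fusing per-context classifiers rather than training a single context-unaware one.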



Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Institute for Systems and Robotics, Instituto Superior Técnico, Lisbon, Portugal
