Personal and Ubiquitous Computing, Volume 14, Issue 8, pp 685–694

Multimodal identification and tracking in smart environments

  • Vivek Menon
  • Bharat Jayaraman (corresponding author)
  • Venu Govindaraju
Original Article

Abstract

We present a model for unconstrained and unobtrusive identification and tracking of people in smart environments and answering queries about their whereabouts. Our model supports biometric recognition based upon multiple modalities such as face, gait, and voice in a uniform manner. The key technical idea underlying our approach is to abstract a smart environment by a state transition system in which each state records a set of individuals who are present in various zones of the environment. Since biometric recognition is inexact, state information is inherently probabilistic in nature. An event abstracts a biometric recognition step, and the transition function abstracts the reasoning necessary to effect state transitions. In this manner, we are able to integrate different biometric modalities uniformly and also different criteria for state transitions. Fusion of biometric modalities is also supported by our model. We define performance metrics for a smart environment in terms of the concepts of ‘precision’ and ‘recall’. We have developed a prototype implementation of our proposed concepts and provide experimental results in this paper. Our conclusion is that the state transition model is an effective abstraction of a smart environment and serves as a good basis for developing practical systems.
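
The abstract describes the model in terms of probabilistic zone-level states, recognition events, a transition function, and precision/recall metrics, but no code is reproduced on this page. The Python sketch below only illustrates how such an abstraction might be organized; the class names, the averaging update rule, and the occupancy threshold are assumptions of this example rather than details of the authors' prototype.

# Illustrative sketch only; names and update rule are assumptions of this example,
# not the authors' prototype implementation.

from dataclasses import dataclass, field
from typing import Dict, List, Tuple


@dataclass
class RecognitionEvent:
    """An event abstracting one biometric recognition step (face, gait, or voice)
    in a given zone: candidate identities with scores mapped to probabilities."""
    zone: str
    candidates: Dict[str, float]  # identity -> probability of a correct match


@dataclass
class EnvironmentState:
    """A probabilistic state: for each zone, the belief that each known
    individual is currently present there."""
    beliefs: Dict[str, Dict[str, float]] = field(default_factory=dict)

    def transition(self, event: RecognitionEvent) -> "EnvironmentState":
        """Transition function: fold the recognition evidence into the belief for
        the event's zone (here, a simple average of prior belief and new evidence)."""
        updated = {zone: dict(belief) for zone, belief in self.beliefs.items()}
        zone_belief = updated.setdefault(event.zone, {})
        for person, prob in event.candidates.items():
            prior = zone_belief.get(person, 0.0)
            zone_belief[person] = 0.5 * prior + 0.5 * prob
        return EnvironmentState(updated)

    def occupants(self, zone: str, threshold: float = 0.5) -> List[str]:
        """Answer a whereabouts query: who is believed to be in the given zone."""
        return [p for p, b in self.beliefs.get(zone, {}).items() if b >= threshold]


def precision_recall(predicted: List[str], actual: List[str]) -> Tuple[float, float]:
    """Precision and recall of the environment's answer against ground truth."""
    predicted_set, actual_set = set(predicted), set(actual)
    true_positives = len(predicted_set & actual_set)
    precision = true_positives / len(predicted_set) if predicted_set else 1.0
    recall = true_positives / len(actual_set) if actual_set else 1.0
    return precision, recall


if __name__ == "__main__":
    state = EnvironmentState()
    # A face recognition in the lobby, followed by a gait recognition that
    # reinforces the same identity (fusion of modalities via repeated transitions).
    state = state.transition(RecognitionEvent("lobby", {"alice": 0.8, "bob": 0.2}))
    state = state.transition(RecognitionEvent("lobby", {"alice": 0.9}))
    answer = state.occupants("lobby")
    print(answer, precision_recall(answer, ["alice"]))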

Keywords

Smart environments · Identification · Tracking · Biometrics · Multimodal fusion · State transition system · Probabilistic events · Performance metrics · Precision · Recall

Acknowledgments

This work was done while Vivek Menon was a Visiting Research Scientist at the Center for Unified Biometrics and Sensors (CUBS), University at Buffalo. Thanks to Philip Kilinskas for his help in developing the experimental prototype, to Dr. Jason J. Corso for discussions on Markov models, and to the members of CUBS for their comments and suggestions on an earlier version of this paper [16].

References

  1. Aghajan HK, Augusto JC, Wu C, McCullagh PJ, Walkden J (2007) Distributed vision-based accident management for assisted living. In: Okadome T, Yamazaki T, Makhtari M (eds) ICOST ’07, lecture notes in computer science, vol 4541. Springer, Berlin, pp 196–205
  2. Bar-Shalom Y, Li X (1995) Multitarget-multisensor tracking: principles and techniques. YBS Publishing, Storrs
  3. Bernardin K, Stiefelhagen R, Waibel A (2008) Probabilistic integration of sparse audio-visual cues for identity tracking. In: Proceedings of the 16th ACM international conference on multimedia (MM ’08), ACM, pp 151–158
  4. Bouchaffra D, Govindaraju V, Srihari S (1999) A methodology for mapping scores to probabilities. IEEE Trans Pattern Anal Mach Intell 21(9):923–927
  5. Bui HH, Venkatesh S, West G (2001) Tracking and surveillance in wide-area spatial environments using the abstract hidden Markov model. Int J Pattern Recognit Artif Intell 15(1):177–195
  6. Cao H, Govindaraju V (2007) Vector model based indexing and retrieval of handwritten medical forms. In: Proceedings of the international conference on document analysis and recognition (ICDAR ’07), IEEE Computer Society, pp 88–92
  7. Das SK, Roy N, Roy A (2006) Context-aware resource management in multi-inhabitant smart homes: a framework based on Nash H-learning. Pervasive Mob Comput 2(4):372–404
  8. Ekenel HK, Fischer M, Jin Q, Stiefelhagen R (2007) Multi-modal person identification in a smart environment. In: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR ’07), IEEE Computer Society, pp 1–8
  9. Fox D, Hightower J, Liao L, Schulz D, Borriello G (2003) Bayesian filtering for location estimation. IEEE Pervasive Comput 2(3):24–33
  10. Hewitt R (2007) Seeing with OpenCV: implementing eigenface. SERVO Magazine, pp 44–50
  11. Hewitt R, Belongie S (2006) Active learning in face recognition: using tracking to build a face model. In: Proceedings of the IEEE conference on computer vision and pattern recognition workshop (CVPRW ’06), IEEE Computer Society, p 157
  12. Hightower J, Borriello G (2001) Location systems for ubiquitous computing. IEEE Comput 34(8):57–66
  13. Krumm J, Harris S, Meyers B, Brumitt B, Hale M, Shafer S (2000) Multi-camera multi-person tracking for EasyLiving. In: Proceedings of the third IEEE international workshop on visual surveillance, IEEE Computer Society, p 3
  14. Luque J et al (2007) Audio, video and multimodal person identification in a smart room. In: Stiefelhagen R, Garofolo J (eds) Multimodal technologies for perception of humans. Lecture notes in computer science, vol 4122. Springer, Berlin, pp 258–269
  15. Manesis T, Avouris N (2005) Survey of position location techniques in mobile systems. In: Proceedings of the 7th international conference on human computer interaction with mobile devices and services (MobileHCI ’05), ACM, pp 291–294
  16. Menon V, Jayaraman B, Govindaraju V (2008) Biometrics driven smart environments: abstract framework and evaluation. In: Proceedings of the 5th international conference on ubiquitous intelligence and computing (UIC ’08), Springer, Berlin, pp 75–89
  17. Menon V, Jayaraman B, Govindaraju V (2008) Integrating recognition and reasoning in smart environments. In: Proceedings of the 4th IET international conference on intelligent environments (IE ’08), pp 1–8
  18. Misra A, Das SK (2005) Location estimation (determination and prediction) techniques in smart environments. In: Smart environments: technology, applications, ambient intelligence. Wiley-Interscience, pp 193–228
  19.
  20. Pentland A, Choudhury T (2000) Face recognition for smart environments. IEEE Comput 33(2):50–55
  21. van Rijsbergen CJ (1979) Information retrieval. Butterworths, London
  22. Rumbaugh J, Jacobson I, Booch G (2004) Unified modeling language reference manual, 2nd edn. Addison-Wesley Professional, Reading
  23. Satyanarayanan M (2001) Pervasive computing: vision and challenges. IEEE Pers Commun 8(4):10–17
  24. Schulz D, Fox D, Hightower J (2003) People tracking with anonymous and ID-sensors using Rao-Blackwellised particle filters. In: Proceedings of the 18th international joint conference on artificial intelligence (IJCAI ’03), pp 921–926
  25. Tulyakov S, Wu C, Govindaraju V (2009) On the difference between optimal combination functions for verification and identification systems. Int J Pattern Recognit Artif Intell
  26. Turk M, Pentland A (1991) Eigenfaces for recognition. J Cogn Neurosci 3(1):71–86
  27. Weiser M (1991) The computer for the 21st century. Sci Am 265(3):66–75
  28. Yilmaz A, Javed O, Shah M (2006) Object tracking: a survey. ACM Comput Surv 38(4)
  29. Zhang S, Janakiraman R, Sim T, Kumar S (2005) Continuous verification using multimodal biometrics. In: Zhang D, Jain AK (eds) Advances in biometrics, lecture notes in computer science, vol 3832. Springer, Berlin, pp 562–570

Copyright information

© Springer-Verlag London Limited 2010

Authors and Affiliations

  • Vivek Menon (1)
  • Bharat Jayaraman (2), corresponding author
  • Venu Govindaraju (3)
  1. Amrita University, Coimbatore, India
  2. Department of Computer Science and Engineering, University at Buffalo, Buffalo, USA
  3. Center for Unified Biometrics and Sensors, University at Buffalo, Buffalo, USA
