Activity Recognition Using Imagery for Smart Home Monitoring

Chapter
Part of the Studies in Computational Intelligence book series (SCI, volume 730)

Abstract

In this chapter, we describe a comprehensive literature survey on in-home activity monitoring using computer vision techniques as well as computational intelligence (CI) approaches. Specifically, through our survey of the body of work, we address the following questions:
  I. What are the challenges of using standard RGB cameras for activity analysis, and how are they solved?

  II. Why do most existing algorithms perform so poorly in real-world settings?

  III. Which design choices should be considered when deciding between wearable and stationary cameras for activity analysis?

  IV. What does CI bring to the vision world, as compared to computer vision techniques, in the activity analysis domain?

Through our literature survey, as well as our own research in both the wearable and non-wearable domains, we share our experiences to enable researchers to make their own design choices as they enter the field of vision-based technologies. We present the hierarchy of the literature survey in Fig. 1.

Acknowledgements

This work is sponsored in part by NIH award #1K01LM012439.

Copyright information

© Springer International Publishing AG 2018

Authors and Affiliations

  1. Department of Computer Science and Engineering, Wright State University, Fairborn, USA