
Understanding of Human Behavior with a Robotic Agent Through Daily Activity Analysis

  • Ioannis Kostavelis (email author)
  • Manolis Vasileiadis
  • Evangelos Skartados
  • Andreas Kargakos
  • Dimitrios Giakoumis
  • Christos-Savvas Bouganis
  • Dimitrios Tzovaras

Abstract

Personal assistive robots expected in the near future should be able to coexist seamlessly with humans in unconstrained environments, and a robot’s capability to understand and interpret human behavior during human–robot cohabitation contributes significantly towards this end. Still, understanding human behavior through a robot is challenging, as it requires recovering a comprehensive representation of the high-level structure of the human’s behavior from the robot’s low-level sensory input. The paper at hand tackles this problem by demonstrating a robotic agent capable of apprehending human daily activities through Interaction Unit analysis, a method that decomposes each activity into a sequence of units, each associated with a behavioral factor. The modeling of human behavior is addressed with a Dynamic Bayesian Network that operates on top of the Interaction Units, quantifying the behavioral factors and formulating the human’s behavioral model. In addition, lightweight human-action and object-manipulation monitoring strategies, based on RGB-D and laser sensors, have been developed and tailored for onboard robot operation. As a proof of concept, we used our robot to evaluate the method’s ability to differentiate among the examined human activities, as well as to assess its capability to model the behavior of people with Mild Cognitive Impairment. Moreover, we deployed our robot in 12 real house environments with real users, showcasing the behavior-understanding ability of our method in unconstrained, realistic settings. The evaluation revealed promising performance and demonstrated that human behavior can be modeled automatically through Interaction Unit analysis, directly by robotic agents.
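
The article itself does not include an implementation, but the core mechanism the abstract describes (a Dynamic Bayesian Network whose hidden state is the current behavioral factor and whose observations are the robot-detected action and object-manipulation cues of each Interaction Unit) can be illustrated with a minimal forward-filtering sketch. The two-factor state space, the cue vocabulary, and all probabilities below are illustrative assumptions, not the authors’ model.

```python
import numpy as np

# Hypothetical behavioral factors (hidden DBN states) and per-IU cues
# (observations); both vocabularies are illustrative, not the paper's.
factors = ["goal_directed", "distracted"]
cues = ["grasp_cup", "pour_water", "idle"]

prior = np.array([0.7, 0.3])               # P(factor) at the first IU
transition = np.array([[0.8, 0.2],         # rows: factor_{t-1}, cols: factor_t
                       [0.3, 0.7]])
emission = np.array([[0.5, 0.4, 0.1],      # rows: factor, cols: cue
                     [0.2, 0.1, 0.7]])

def filter_ius(observed_cues):
    """Forward filtering: P(factor_t | cues up to t) per Interaction Unit."""
    belief = prior * emission[:, cues.index(observed_cues[0])]
    belief /= belief.sum()
    beliefs = [belief]
    for cue in observed_cues[1:]:
        predicted = transition.T @ belief          # propagate one IU ahead
        belief = predicted * emission[:, cues.index(cue)]
        belief /= belief.sum()                     # normalize the posterior
        beliefs.append(belief)
    return beliefs

for t, b in enumerate(filter_ius(["grasp_cup", "pour_water", "idle"])):
    print(f"IU {t}: " + ", ".join(f"P({f})={p:.2f}" for f, p in zip(factors, b)))
```

The sketch abstracts the perception pipeline into a discrete cue vocabulary; in the paper, those cues would come from the onboard RGB-D and laser-based action and object-manipulation monitoring strategies.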

Keywords

Human behavior understanding · Daily activities interpretation · Interaction Unit analysis · Bayesian networks · Mobile robots

Notes

Acknowledgements

We would like to thank Fundacio ACE Barcelona Alzheimer Institute & Research Centre, as well as the Medical University of Lublin, Poland, for offering their expertise in the annotation of the behavioral factors in the IU analysis and for providing their premises for the dataset acquisition with the RAMCIP robot.

Funding

This work has been supported by the EU Horizon 2020 project “Robotic Assistant for MCI Patients at home (RAMCIP)” under Grant Agreement No. 643433.

Compliance with Ethical Standards

Conflict of interest

The authors declare that they have no conflict of interest.


Copyright information

© Springer Nature B.V. 2019

Authors and Affiliations

  • Ioannis Kostavelis (1, email author)
  • Manolis Vasileiadis (2)
  • Evangelos Skartados (1)
  • Andreas Kargakos (1)
  • Dimitrios Giakoumis (1)
  • Christos-Savvas Bouganis (2)
  • Dimitrios Tzovaras (1)

  1. Information Technologies Institute, Centre for Research and Technology Hellas, Thermi, Greece
  2. Department of Electrical and Electronic Engineering, Imperial College London, London, UK
