Human Activities Transfer Learning for Assistive Robotics

  • David Ada Adama
  • Ahmad Lotfi (corresponding author)
  • Caroline Langensiepen
  • Kevin Lee
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 650)

Abstract

Assisted living homes aim to deploy tools that promote better living for the elderly population. One such tool is assistive robotics, used to perform tasks a human carer would normally be required to perform. For assistive robots to perform activities without explicit programming, a major requirement is the ability to learn and classify activities while observing a human carrying them out. This work proposes a human activity learning and classification system based on features obtained from 3D RGB-D data. Different classifiers are explored in this approach, and the system is evaluated on a publicly available data set, showing promising results that could improve the performance of assistive robots in living environments.
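
To make the pipeline concrete, the sketch below is a minimal illustration (not the authors' implementation) of skeleton-based activity classification from RGB-D data: each activity segment of joint positions is summarised by simple per-joint statistics and several off-the-shelf classifiers are compared, mirroring the "different classifiers are explored" step. The joint count, window length, feature design and synthetic data are assumptions made purely so the example runs end to end.

# Minimal sketch, assuming skeleton joint positions from an RGB-D sensor.
# Feature design, window length and the synthetic data are illustrative only.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

N_JOINTS = 15   # assumed number of skeleton joints reported by the sensor
WINDOW = 30     # assumed number of frames per activity segment

def segment_features(window):
    """Summarise a (WINDOW, N_JOINTS, 3) block of joint positions.

    Per-joint mean and standard deviation over time: a simple stand-in
    for the spatio-temporal features a full system would compute.
    """
    mean = window.mean(axis=0).ravel()
    std = window.std(axis=0).ravel()
    return np.concatenate([mean, std])

# Placeholder data: random segments for four hypothetical activity classes.
rng = np.random.default_rng(0)
segments = rng.normal(size=(200, WINDOW, N_JOINTS, 3))
labels = rng.integers(0, 4, size=200)

X = np.stack([segment_features(s) for s in segments])
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.3, random_state=0)

# Compare several standard classifiers on the extracted features.
classifiers = {
    "SVM": SVC(kernel="rbf"),
    "k-NN": KNeighborsClassifier(n_neighbors=5),
    "Random forest": RandomForestClassifier(n_estimators=100, random_state=0),
}
for name, clf in classifiers.items():
    clf.fit(X_train, y_train)
    acc = accuracy_score(y_test, clf.predict(X_test))
    print(f"{name}: accuracy = {acc:.2f}")

On random placeholder data the accuracies will hover around chance; the point is the shape of the pipeline (segment, summarise, compare classifiers), not the numbers.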

Keywords

Activity recognition · Activity classification · Assistive robotics

Copyright information

© Springer International Publishing AG 2018

Authors and Affiliations

  • David Ada Adama (1)
  • Ahmad Lotfi (1), corresponding author
  • Caroline Langensiepen (1)
  • Kevin Lee (1)
  1. School of Science and Technology, Nottingham Trent University, Nottingham, UK