
Multi-User Egocentric Online System for Unsupervised Assistance on Object Usage

  • Dima Damen
  • Osian Haines
  • Teesid Leelasawassuk
  • Andrew Calway
  • Walterio Mayol-Cuevas
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8927)

Abstract

We present an online, fully unsupervised approach for automatically extracting video guides of how objects are used from wearable gaze trackers worn by multiple users. Given egocentric video and eye gaze from multiple users performing tasks, the system discovers task-relevant objects and automatically extracts guidance videos showing how these objects have been used. In assistive mode, the system selects a suitable video guide to display to a novice user, indicating how to use an object, triggered purely by the user's gaze. The approach is tested on a variety of daily tasks, ranging from opening a door to preparing coffee and operating a gym machine.
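To make the assistive mode described above concrete, the following is a minimal sketch of a gaze-triggered retrieval loop: once the wearer fixates a known task-relevant object for long enough, the corresponding guidance clip is retrieved for display. All names here (GazeSample, GuideLibrary, detect_fixated_object, the fixation threshold) are illustrative assumptions, not the paper's actual implementation or API.

    # Hypothetical sketch of the assistive-mode loop; names and thresholds are assumptions.
    from dataclasses import dataclass

    @dataclass
    class GazeSample:
        frame_id: int
        x: float  # gaze point in image coordinates
        y: float

    class GuideLibrary:
        """Maps discovered task-relevant object IDs to extracted guidance clips."""
        def __init__(self):
            self.guides = {}  # object_id -> path to the extracted video guide

        def add_guide(self, object_id, clip_path):
            self.guides[object_id] = clip_path

        def guide_for(self, object_id):
            return self.guides.get(object_id)

    def detect_fixated_object(frame, gaze):
        """Placeholder: return the ID of the object under the gaze point, if any.
        In a real system this would be a real-time object detector/recogniser."""
        raise NotImplementedError

    def assistive_loop(frames, gaze_stream, library, min_fixation_frames=15):
        """Yield (object_id, guide_path) once an object is fixated long enough."""
        fixated, count = None, 0
        for frame, gaze in zip(frames, gaze_stream):
            obj = detect_fixated_object(frame, gaze)
            if obj is not None and obj == fixated:
                count += 1
            else:
                fixated, count = obj, 1
            if fixated is not None and count >= min_fixation_frames:
                guide = library.guide_for(fixated)
                if guide is not None:
                    yield fixated, guide  # caller displays the clip to the wearer
                fixated, count = None, 0

The fixation-duration gate is one plausible way to avoid triggering guides on brief glances; the paper's actual selection criterion may differ.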

Keywords

Video guidance · Wearable computing · Real-time computer vision · Assistive computing · Object discovery · Object usage


Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  • Dima Damen (1)
  • Osian Haines (1)
  • Teesid Leelasawassuk (1)
  • Andrew Calway (1)
  • Walterio Mayol-Cuevas (1)

  1. Department of Computer Science, University of Bristol, Bristol, UK
